id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
8,666,685 | https://en.wikipedia.org/wiki/Molecular%20Koch%27s%20postulates | Molecular Koch's postulates are a set of experimental criteria that must be satisfied to show that a gene found in a pathogenic microorganism encodes a product that contributes to the disease caused by the pathogen. Genes that satisfy molecular Koch's postulates are often referred to as virulence factors. The postulates were formulated by the microbiologist Stanley Falkow in 1988 and are based on Koch's postulates.
Postulates
As per Falkow's original descriptions, the three postulates are:
"The phenotype or property under investigation should be associated with pathogenic members of a genus or pathogenic strains of a species.
Specific inactivation of the gene(s) associated with the suspected virulence trait should lead to a measurable loss in pathogenicity or virulence.
Reversion or allelic replacement of the mutated gene should lead to restoration of pathogenicity."
To apply the molecular Koch's postulates to human diseases, researchers must identify which microbial genes are potentially responsible for symptoms of pathogenicity, often by sequencing the full genome and comparing which nucleotide sequences are homologous to the protein-coding genes of other species. Alternatively, scientists can identify which mRNA transcripts are present at elevated levels in the diseased organs of infected hosts. Researchers must also identify and verify methods for inactivating and reactivating the gene being studied.
In 1996, Fredricks and Relman proposed seven molecular guidelines for establishing microbial disease causation:
"A nucleic acid sequence belonging to a putative pathogen should be present in most cases of an infectious disease. Microbial nucleic acids should be found preferentially in those organs or gross anatomic sites known to be diseased (i.e., with anatomic, histologic, chemical, or clinical evidence of pathology) and not in those organs that lack pathology.
Fewer, or no, copy numbers of pathogen-associated nucleic acid sequences should occur in hosts or tissues without disease.
With resolution of disease (for example, with clinically effective treatment), the copy number of pathogen-associated nucleic acid sequences should decrease or become undetectable. With clinical relapse, the opposite should occur.
When sequence detection predates disease, or sequence copy number correlates with severity of disease or pathology, the sequence-disease association is more likely to be a causal relationship.
The nature of the microorganism inferred from the available sequence should be consistent with the known biological characteristics of that group of organisms. When phenotypes (e.g., pathology, microbial morphology, and clinical features) are predicted by sequence-based phylogenetic relationships, the meaningfulness of the sequence is enhanced.
Tissue-sequence correlates should be sought at the cellular level: efforts should be made to demonstrate specific in-situ hybridization of microbial sequence to areas of tissue pathology and to visible microorganisms or to areas where microorganisms are presumed to be located.
These sequence-based forms of evidence for microbial causation should be reproducible."
References
Epidemiology
Microbiology
Diseases and disorders
Cause (medicine) | Molecular Koch's postulates | [
"Chemistry",
"Biology",
"Environmental_science"
] | 658 | [
"Epidemiology",
"Microbiology",
"Environmental social science",
"Microscopy"
] |
8,668,103 | https://en.wikipedia.org/wiki/Animal%20transporter | Animal transporters are used to transport livestock or non-livestock animals over long distances. They could be specially-modified vehicles, trailers, ships or aircraft containers. While some animal transporters like horse trailers only carry a few animals, modern ships engaged in live export can carry tens of thousands.
The Animal Transportation Association campaigns for humane transporting of animals as do many other animal welfare organisations.
See also
Animal-powered transport
Drover's caboose
Horse box
Horse trailer
Livestock carrier (maritime)
Livestock transportation
Road transport
Stock car (rail)
References
Road transport
Livestock transportation vehicles
Intensive farming
Human–animal interaction | Animal transporter | [
"Chemistry",
"Biology"
] | 121 | [
"Animals",
"Eutrophication",
"Intensive farming",
"Human–animal interaction",
"Humans and other species"
] |
7,000,901 | https://en.wikipedia.org/wiki/Hydrogen-powered%20aircraft | A hydrogen-powered aircraft is an aeroplane that uses hydrogen fuel as a power source. Hydrogen can either be burned in a jet engine or another kind of internal combustion engine, or can be used to power a fuel cell to generate electricity to power an electric propulsor. It cannot be stored in a traditional wet wing, and hydrogen tanks have to be housed in the fuselage or be supported by the wing.
Hydrogen, which can be produced from low-carbon power and can produce zero emissions, can reduce the environmental impact of aviation. Boeing acknowledges the technology potential and Airbus plans to launch a first commercial hydrogen-powered aircraft by 2035. McKinsey & Company forecast hydrogen aircraft entering the market in the late 2030s and scaling up through 2050, when they could account for a third of aviation's energy demand.
Hydrogen properties
Hydrogen has a specific energy of 119.9 MJ/kg, compared to roughly 43 MJ/kg for usual liquid fuels, about 2.8 times higher.
However, it has an energy density of 10.05 kJ/L at normal atmospheric pressure and temperature, compared to roughly 35,000 kJ/L for liquid fuels, about 3,500 times lower.
When pressurised to 700 bar, it reaches 4,500 kJ/L, still about 8 times lower than liquid fuels.
Cooled to −253 °C, liquid hydrogen has an energy density of 8,491 kJ/L, about 4 times lower than liquid fuels.
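To make these ratios concrete, the sketch below compares the fuel mass and tank volume needed to carry the same onboard energy as hydrogen or as a conventional liquid fuel. The jet-fuel figures (about 43 MJ/kg and 34,700 kJ/L) and the 500 GJ mission energy are illustrative assumptions, not values taken from this article.

```python
# Rough comparison of fuel mass and tank volume for the same onboard energy.
H2_SPECIFIC_ENERGY_MJ_PER_KG = 119.9
H2_DENSITY_KJ_PER_L = {"ambient pressure": 10.05, "700 bar": 4500.0, "liquid, -253 C": 8491.0}
JET_SPECIFIC_ENERGY_MJ_PER_KG = 43.0   # assumed typical value for kerosene-type jet fuel
JET_DENSITY_KJ_PER_L = 34_700.0        # assumed typical value for kerosene-type jet fuel

energy_gj = 500.0  # arbitrary mission energy, in GJ

jet_mass_t = energy_gj * 1_000 / JET_SPECIFIC_ENERGY_MJ_PER_KG / 1_000
h2_mass_t = energy_gj * 1_000 / H2_SPECIFIC_ENERGY_MJ_PER_KG / 1_000
print(f"fuel mass: jet {jet_mass_t:.1f} t vs H2 {h2_mass_t:.1f} t "
      f"({jet_mass_t / h2_mass_t:.1f}x lighter)")

jet_vol_m3 = energy_gj * 1e6 / JET_DENSITY_KJ_PER_L / 1_000
for storage, kj_per_l in H2_DENSITY_KJ_PER_L.items():
    h2_vol_m3 = energy_gj * 1e6 / kj_per_l / 1_000
    print(f"tank volume, H2 {storage}: {h2_vol_m3:,.0f} m^3 vs jet {jet_vol_m3:.0f} m^3")
```

Running the sketch shows why only compressed or liquid storage is practical on an aircraft: at ambient pressure the tank volume is thousands of times larger than for jet fuel, while liquid hydrogen brings it within roughly a factor of four.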
Aircraft design
The low volumetric energy density of hydrogen poses challenges when designing an aircraft, where weight and exposed surface area are critical. To reduce the size of the tanks liquid hydrogen will be used, requiring cryogenic fuel tanks. Cylindrical tanks minimise surface for minimal thermal insulation weight, leading towards tanks in the fuselage rather than wet wings in conventional aircraft. Airplane volume and drag will be increased somewhat by larger fuel tanks. A larger fuselage adds more skin friction drag due to the extra wetted area. The extra tank weight is offset by dramatically lower liquid hydrogen fuel weight.
Gaseous hydrogen may be used for short-haul aircraft. Liquid hydrogen might be needed for long-haul aircraft.
Hydrogen's high specific energy means it would need less fuel weight for the same range, ignoring the repercussions of added volume and tank weight. As airliners have a fuel fraction of the maximum takeoff weight (MTOW) of between 26% for medium-haul and 45% for long-haul, maximum fuel weight could in principle be reduced by the same factor of roughly 2.8, to on the order of 9% to 16% of the MTOW.
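The reduction in fuel mass fraction follows directly from the specific-energy ratio; a minimal sketch, again assuming about 43 MJ/kg for jet fuel:

```python
# Scale the kerosene fuel fractions quoted above by the H2/jet-fuel specific-energy ratio,
# ignoring the tank weight and volume penalties discussed in this section.
specific_energy_ratio = 119.9 / 43.0          # roughly 2.8
for kerosene_fraction in (0.26, 0.45):        # medium-haul and long-haul fuel fractions of MTOW
    h2_fraction = kerosene_fraction / specific_energy_ratio
    print(f"{kerosene_fraction:.0%} of MTOW as kerosene -> about {h2_fraction:.0%} as hydrogen")
```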
Fuel cells make sense for general aviation and regional aircraft, but their efficiency is lower than that of large gas turbines. They are nevertheless more efficient than the engines of modern 7-to-90-passenger turboprop airliners such as the Dash 8.
The efficiency of a hydrogen-fueled aircraft is a trade-off between the larger wetted area, lower fuel weight, and added tank weight, and it varies with the aircraft size. Hydrogen is suited to short-range airliners, while longer-range aircraft would need new designs.
Liquid hydrogen is one of the best coolants used in engineering, and precooled jet engines have been proposed to use this property for cooling the intake air of hypersonic aircraft, or even for cooling the aircraft's skin itself, particularly for scramjet-powered aircraft.
A study in the UK, NAPKIN (New Aviation, Propulsion Knowledge and Innovation Network), with collaboration from Heathrow Airport, Rolls-Royce, GKN Aerospace, and Cranfield Aerospace solutions, has investigated the potential of new hydrogen-powered aircraft designs to reduce the environmental impact of aviation. The aircraft designers have proposed a range of hydrogen-fuelled aircraft concepts, ranging from 7 to 90 seats, exploring the use of hydrogen with fuel cells and gas turbines to replace conventional aircraft engines powered by fossil fuels. The findings suggest that in the UK hydrogen-powered aircraft could be commercially viable for short-haul and regional flights by the second half of the 2020s with airlines potentially able to replace the entire UK regional fleet with hydrogen aircraft by 2040. However, the report highlighted that national supply, and the price of green liquid hydrogen relative to fossil kerosene are critical factors in determining uptake of hydrogen aircraft by airline operators. Modeling showed that, if hydrogen prices approach $1/kg, hydrogen aircraft uptake could cover almost 100% of the UK domestic market.
Emissions and environmental impact
Hydrogen aircraft using a fuel cell design are zero emission in operation, whereas aircraft using hydrogen as a fuel for a jet engine or an internal combustion engine are zero emission for CO₂ (a greenhouse gas which contributes to global climate change) but not for NOₓ (a local air pollutant). The burning of hydrogen in air produces water via the H₂ + ½ O₂ → H₂O reaction, but because the reaction takes place in a nitrogen-rich environment it also produces NOₓ. However, hydrogen combustion produces up to 90% less nitrogen oxides than kerosene fuel, and it eliminates the formation of particulate matter.
If hydrogen is available in quantity from low-carbon power such as wind or nuclear, its use in aircraft will produce fewer greenhouse gases than current aircraft: water vapor and a small amount of nitrogen oxide. However, as of 2021, less than 5% of all hydrogen produced is emissions free, and the majority comes from fossil fuels.
A 2020 study by the EU Clean Sky 2 and Fuel Cells and Hydrogen 2 Joint Undertakings found that hydrogen could power short-range aircraft by 2035. A short-range aircraft with hybrid fuel cell/turbine propulsion could reduce climate impact by 70–80% for a 20–30% additional cost, a medium-range airliner with H2 turbines could have a 50–60% reduced climate impact for a 30–40% additional cost, and a long-range aircraft, also with H2 turbines, could reduce climate impact by 40–50% for a 40–50% additional cost. Research and development would be required, in aircraft technology and into hydrogen infrastructure, regulations and certification standards.
Water vapor is a greenhouse gas – in fact, most of the total greenhouse effect on earth is due to water vapor. However, in the troposphere the content of water vapor is not dominated by anthropogenic emissions but rather the natural water cycle as water does not long remain static in that layer of the atmosphere. This is different in the stratosphere which – absent human action – would be almost totally dry and still remains relatively devoid of water. If hydrogen is burned and the resulting water vapor is released at stratospheric heights (the cruising altitude of some commercial flights is within the stratosphere – supersonic flight takes place almost entirely at stratospheric altitude), the content of water vapor in the stratosphere is increased. Due to the long residence time of water vapor at those heights, the long term effects over years or even decades cannot be entirely discounted.
History
Demonstrations
In February 1957, a Martin B-57B of the NACA flew for 20 minutes with one of its two Wright J65 engines running on hydrogen rather than jet fuel. On 15 April 1988, the Tu-155, an adapted Tu-154 airliner, flew as the first hydrogen-powered experimental aircraft.
Boeing converted a two-seat Diamond DA20 to run on a fuel cell designed and built by Intelligent Energy. It first flew on April 3, 2008. The Antares DLR-H2 is a hydrogen-powered aeroplane from Lange Aviation and the German Aerospace Center (DLR). In July 2010, Boeing unveiled its hydrogen-powered Phantom Eye UAV, which uses two converted Ford Motor Company piston engines.
In 2010, the Rapid 200FC concluded six flight tests fueled by gaseous hydrogen.
The aircraft and its electric and energy systems were developed within a European Union project coordinated by the Politecnico di Torino.
Hydrogen gas is stored at 350 bar, feeding a fuel cell that powers an electric motor alongside a lithium polymer battery pack.
On January 11, 2011, an AeroVironment Global Observer unmanned aircraft completed its first flight powered by a hydrogen-fueled propulsion system.
Developed by Germany's DLR Institute of Engineering Thermodynamics, the DLR HY4 four-seater was powered by a hydrogen fuel cell; its first flight took place on September 29, 2016. It can store hydrogen on board and is fitted with 4×11 kW fuel cells and 2×10 kWh batteries.
On 19 January 2023, ZeroAvia flew its Dornier 228 testbed with one turboprop replaced by a prototype hydrogen-electric powertrain in the cabin, consisting of two fuel cells and a lithium-ion battery for peak power. The aim is to have a certifiable system by 2025 to power airframes carrying up to 19 passengers.
On 2 March 2023, Universal Hydrogen flew a Dash 8 40-passenger testbed with one engine powered by their hydrogen-electric powertrain. The company has received an order from Connect Airlines to convert 75 ATR 72-600 with their hydrogen powertrains.
On 8 November 2023, Airbus flew a modified Schempp-Hirth Arcus-M glider, dubbed the Blue Condor, equipped with a hydrogen combustion engine for the first time, using hydrogen as its sole source of fuel.
On 24 June 2024, Joby Aviation's S4 eVTOL demonstrator, refitted with a hydrogen-electric powertrain in May, completed a record 523-mile non-stop flight, more than triple the range of the battery-powered version. It landed with 10% of its liquid hydrogen fuel remaining in its cryogenic fuel tank, and the only in-flight emission was water vapor. A hydrogen fuel cell system provided the power for the six electric rotors of the eVTOL during its flight, and a small battery provided added takeoff and landing power.
Aircraft projects
In 1975, Lockheed prepared a study of liquid hydrogen fueled subsonic transport aircraft for NASA Langley, exploring airliners carrying 130 passengers over 2,780 km (1500 nmi); 200 passengers over 5,560 km (3,000 nmi); and 400 passengers over 9,265 km (5,000 nmi).
Between April 2000 and May 2002, the European Commission funded half of the Airbus-led Cryoplane Study, assessing the configurations, systems, engines, infrastructure, safety, environmental compatibility and transition scenarios.
Multiple configurations were envisioned: a 12-passenger business jet, regional airliners for 44 and for 70 passengers, a medium-range narrowbody aircraft for 185 passengers, and a long-range widebody aircraft for 380 to 550 passengers.
In September 2020, Airbus presented three ZEROe hydrogen-fuelled concepts aiming for commercial service by 2035: a 100-passenger turboprop, a 200-passenger turbofan, and a futuristic design based around a blended wing body.
The aircraft are powered by gas turbines rather than fuel cells.
In December 2021, the UK Aerospace Technology Institute (ATI) presented its FlyZero study of cryogenic liquid hydrogen used in gas turbines for a 279-passenger design. ATI is supported by Airbus, Rolls-Royce, GKN, Spirit, General Electric, Reaction Engines, Easyjet, NATS, Belcan, Eaton, Mott MacDonald and the MTC.
In August 2021 the UK Government claimed it was the first to have a Hydrogen Strategy. This report included a suggested strategy for hydrogen powered aircraft along with other transport modes.
In March 2022, FlyZero detailed its three concept aircraft:
the 75-seat FZR-1E regional airliner has six electric propulsors powered by fuel cells and a size comparable to the ATR 72, but with a larger fuselage diameter to accommodate hydrogen storage, for a 325 kn (601 km/h) cruise and an 800 nmi (1,480 km) range;
its FZN-1E narrowbody has rear-mounted hydrogen-burning turbofans, a T-tail and nose-mounted canards, a fuselage longer than the Airbus A320neo's and wider at the rear to accommodate two cryogenic fuel tanks, and a larger wingspan requiring folding wing-tips;
the small widebody FZM-1G is comparable to the Boeing 767-200ER, flying 279 passengers, with a fuselage diameter closer to the A350 or 777X, a wingspan within airport gate limits, underwing engines and tanks in front of the wing.
Propulsion projects
In March 2021, Cranfield Aerospace Solutions announced the Project Fresson switched from batteries to hydrogen for the nine-passenger Britten-Norman Islander retrofit for a September 2022 demonstration. Project Fresson is supported by the Aerospace Technology Institute in partnership with the UK Department for Business, Energy & Industrial Strategy and Innovate UK.
Pratt & Whitney wants to associate its geared turbofan architecture with its Hydrogen Steam Injected, Inter‐Cooled Turbine Engine (HySIITE) project, to avoid carbon dioxide emissions, reduce NOx emissions by 80%, and reduce fuel consumption by 35% compared with the current jet-fuel PW1100G, for a service entry by 2035 with a compatible airframe.
On 21 February 2022, the US Department of Energy through the OPEN21 scheme run by its Advanced Research Projects Agency-Energy (ARPA-E) awarded P&W $3.8 million for a two-year early stage research initiative, to develop the combustor and the heat exchanger used to recover water vapour in the exhaust stream, injected into the combustor to increase its power, and into the compressor as an intercooler, and into the turbine as a coolant.
In February 2022, Airbus announced a demonstration of a liquid hydrogen-fueled turbofan, with CFM International modifying the combustor, fuel system and control system of a GE Passport, mounted on a fuselage pylon on an A380 prototype, for a first flight expected within five years.
Proposed aircraft and prototypes
Historical
Lockheed CL-400 Suntan, 1950s concept, dropped for the SR-71
National Aerospace Plane, 1986–1993 concept with a scramjet, cancelled during development
Tupolev Tu-155, 1988 modified Tupolev Tu-154 testbed, flew over 100 flights
AeroVironment Global Observer, 2010-2011 fuel-cell powered drone demonstrator, performed 9 flights before crashing
Boeing Phantom Eye, 2012-2016 piston engine powered drone demonstrator, flew 9 times with flights lasting up to 9 hours
Projects
AeroDelft, a student team from Delft University of Technology creating a gaseous and liquid hydrogen fuelled drone and Sling 4.
Airbus ZEROe, presented in late 2020, it aims to create four concept aircraft and launch the first commercial zero-emission aircraft, entering service by 2035
Cellsius H2-Sling, a student project at ETH Zürich building a modified Sling HW with a hydrogen fuel cell propulsion system.
DLR Smartfish, two seat experimental lifting body; based on the previous Hyfish model.
DLR HY4, operated by DLR spinoff H2Fly, completed the world's first piloted electric flights powered by liquid hydrogen in 2023
Project Fresson, a Britten-Norman Islander retrofit.
Reaction Engines Skylon, orbital hydrogen fuelled spaceplane.
Reaction Engines A2, antipodal hypersonic jet airliner.
Taifun 17H2, a student project retrofitting a Valentin Taifun 17E and 17EII with a gaseous hydrogen fuel cell electric propulsion system.
Universal Hydrogen (fuel cell powered Dash 8-300) the largest aircraft ever to cruise mainly on hydrogen power
ZeroAvia HyFlyer (fuel-cell powered Piper PA-46 demonstrator)
ZeroAvia (fuel-cell powered Dornier 228x)
See also
Electric aircraft
Aviation fuel#Emerging aviation fuels
References
External links
Aircraft configurations
Aviation and the environment | Hydrogen-powered aircraft | [
"Engineering"
] | 3,228 | [
"Aircraft configurations",
"Aerospace engineering"
] |
7,000,956 | https://en.wikipedia.org/wiki/Nomarski%20prism | A Nomarski prism is a modification of the Wollaston prism that is used in differential interference contrast microscopy. It is named after its inventor, Polish and naturalized-French physicist Georges Nomarski. Like the Wollaston prism, the Nomarski prism consists of two birefringent crystal wedges (e.g. quartz or calcite) cemented together at the hypotenuse (e.g. with Canada balsam). One of the wedges is identical to a conventional Wollaston wedge and has the optical axis oriented parallel to the surface of the prism. The second wedge of the prism is modified by cutting the crystal so that the optical axis is oriented obliquely with respect to the flat surface of the prism. The Nomarski modification causes the light rays to come to a focal point outside the body of the prism, and allows greater flexibility so that when setting up the microscope the prism can be actively focused.
See also
Glan–Foucault prism
Glan–Thompson prism
Nicol prism
Prism (optics)
Rochon prism
Sénarmont prism
References
External links
Nomarski Prism Action in Polarized Light
Wavefront Shear in Wollaston and Nomarski Prisms
Polarization (waves)
Prisms (optics)
Microscopy | Nomarski prism | [
"Physics",
"Chemistry"
] | 261 | [
"Polarization (waves)",
"Astrophysics",
"Microscopy"
] |
7,001,745 | https://en.wikipedia.org/wiki/Impedance%20of%20free%20space | In electromagnetism, the impedance of free space, , is a physical constant relating the magnitudes of the electric and magnetic fields of electromagnetic radiation travelling through free space. That is,
where is the electric field strength, and is the magnetic field strength. Its presently accepted value is
,
where Ω is the ohm, the SI unit of electrical resistance. The impedance of free space (that is, the wave impedance of a plane wave in free space) is equal to the product of the vacuum permeability and the speed of light in vacuum . Before 2019, the values of both these constants were taken to be exact (they were given in the definitions of the ampere and the metre respectively), and the value of the impedance of free space was therefore likewise taken to be exact. However, with the revision of the SI that came into force on 20 May 2019, the impedance of free space as expressed with an SI unit is subject to experimental measurement because only the speed of light in vacuum retains an exactly defined value.
Terminology
The analogous quantity for a plane wave travelling through a dielectric medium is called the intrinsic impedance of the medium and designated η (eta). Hence Z₀ is sometimes referred to as the intrinsic impedance of free space, and given the symbol η₀. It has numerous other synonyms, including:
wave impedance of free space,
the vacuum impedance,
intrinsic impedance of vacuum,
characteristic impedance of vacuum,
wave resistance of free space.
Relation to other constants
From the above definition, and the plane wave solution to Maxwell's equations,
Z₀ = √(μ₀/ε₀) = μ₀c₀ = 1/(ε₀c₀),
where
μ₀ ≈ 1.25664 × 10⁻⁶ H/m is the magnetic constant, also known as the permeability of free space,
ε₀ ≈ 8.85419 × 10⁻¹² F/m is the electric constant, also known as the permittivity of free space,
c₀ = 299,792,458 m/s is the speed of light in free space.
The reciprocal of Z₀ is sometimes referred to as the admittance of free space and represented by the symbol Y₀.
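The relations above are easy to verify numerically; the following sketch uses rounded CODATA-style values for μ₀ and ε₀ (the specific figures are assumptions of this example, not quoted from the article):

```python
import math

c0 = 299_792_458          # speed of light in vacuum, m/s (exact by definition)
mu0 = 1.25663706e-6       # vacuum permeability, H/m (rounded measured value)
eps0 = 8.85418781e-12     # vacuum permittivity, F/m (rounded measured value)

z0_from_ratio = math.sqrt(mu0 / eps0)   # Z0 = sqrt(mu0 / eps0)
z0_from_mu_c = mu0 * c0                 # Z0 = mu0 * c0
z0_from_eps_c = 1.0 / (eps0 * c0)       # Z0 = 1 / (eps0 * c0)

print(f"sqrt(mu0/eps0) = {z0_from_ratio:.4f} ohm")
print(f"mu0 * c0       = {z0_from_mu_c:.4f} ohm")
print(f"1/(eps0 * c0)  = {z0_from_eps_c:.4f} ohm")
print(f"admittance Y0  = {1 / z0_from_mu_c:.6e} S")   # reciprocal of Z0
```

All three expressions evaluate to approximately 376.73 Ω, as expected.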
Historical exact value
Between 1948 and 2019, the SI unit the ampere was defined by choosing the numerical value of μ₀ to be exactly 4π × 10⁻⁷ H/m. Similarly, since 1983 the SI metre has been defined relative to the second by choosing the value of c₀ to be 299,792,458 m/s. Consequently, until the 2019 revision,
Z₀ = μ₀c₀ = 4π × 10⁻⁷ H/m × 299,792,458 m/s exactly,
or
Z₀ = 119.9169832π Ω exactly,
or
Z₀ ≈ 376.730313 Ω.
This chain of dependencies changed when the ampere was redefined on 20 May 2019.
Approximation as 120π ohms
It is very common in textbooks and papers written before about 1990 to substitute the approximate value 120π (roughly 377) ohms for Z₀. This is equivalent to taking the speed of light c₀ to be precisely 3 × 10⁸ m/s in conjunction with the then-current definition of μ₀ as 4π × 10⁻⁷ H/m. For example, Cheng 1989 states that the radiation resistance of a Hertzian dipole is
R_r = 80π² (ℓ/λ)² (result in ohms; not exact).
This practice may be recognized from the resulting discrepancy in the units of the given formula. Consideration of the units, or more formally dimensional analysis, may be used to restore the formula to a more exact form, in this case to
R_r = (2π/3) Z₀ (ℓ/λ)².
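A quick numerical check, sketched below, shows how small the error of the 120π approximation is and that the exact form reduces to the 80π² formula when Z₀ is replaced by 120π; the dipole length-to-wavelength ratio used is an arbitrary illustrative value.

```python
import math

z0_exact = 376.730313       # ohm, presently accepted value (rounded)
z0_approx = 120 * math.pi   # ohm, about 376.99

rel_error = (z0_approx - z0_exact) / z0_exact
print(f"relative error of the 120*pi approximation: {rel_error:.1e}")  # about 7e-4

l_over_lambda = 0.05        # short (Hertzian) dipole, illustrative value
r_exact = (2 * math.pi / 3) * z0_exact * l_over_lambda**2
r_approx = 80 * math.pi**2 * l_over_lambda**2
print(f"radiation resistance: {r_exact:.4f} ohm (exact form) vs {r_approx:.4f} ohm (80*pi^2 form)")
```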
See also
Electromagnetic wave equation
Mathematical descriptions of the electromagnetic field
Near and far field
Sinusoidal plane-wave solutions of the electromagnetic wave equation
Space cloth
Vacuum
Wave impedance
References and notes
Further reading
Electromagnetism
Physical constants | Impedance of free space | [
"Physics",
"Mathematics"
] | 642 | [
"Electromagnetism",
"Physical phenomena",
"Physical quantities",
"Quantity",
"Physical constants",
"Fundamental interactions"
] |
7,005,062 | https://en.wikipedia.org/wiki/Energy%20conversion%20efficiency | Energy conversion efficiency (η) is the ratio between the useful output of an energy conversion machine and the input, in energy terms. The input, as well as the useful output may be chemical, electric power, mechanical work, light (radiation), or heat. The resulting value, η (eta), ranges between 0 and 1.
Overview
Energy conversion efficiency depends on the usefulness of the output. All or part of the heat produced from burning a fuel may become rejected waste heat if, for example, work is the desired output from a thermodynamic cycle. An energy converter is a device that performs such an energy transformation; a light bulb, for example, falls into the category of energy converters.
Even though the definition includes the notion of usefulness, efficiency is considered a technical or physical term. Goal or mission oriented terms include effectiveness and efficacy.
Generally, energy conversion efficiency is a dimensionless number between 0 and 1.0, or 0% to 100%. Efficiencies cannot exceed 100%, which would result in a perpetual motion machine, which is impossible.
However, other effectiveness measures that can exceed 1.0 are used for refrigerators, heat pumps and other devices that move heat rather than convert it. It is not called efficiency, but the coefficient of performance, or COP. It is a ratio of useful heating or cooling provided relative to the work (energy) required. Higher COPs equate to higher efficiency, lower energy (power) consumption and thus lower operating costs. The COP usually exceeds 1, especially in heat pumps, because instead of just converting work to heat (which, if 100% efficient, would be a COP of 1), it pumps additional heat from a heat source to where the heat is required. Most air conditioners have a COP of 2.3 to 3.5.
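A minimal sketch of the distinction between conversion efficiency and COP; the heat-pump numbers are illustrative, not taken from the article:

```python
def cop(useful_heat_kwh: float, work_input_kwh: float) -> float:
    """Coefficient of performance: useful heating (or cooling) per unit of work input."""
    return useful_heat_kwh / work_input_kwh

# A resistive heater converts 1 kWh of electricity into 1 kWh of heat (COP = 1).
print(cop(useful_heat_kwh=1.0, work_input_kwh=1.0))
# A heat pump using the same 1 kWh of work might deliver 3 kWh of heat (COP = 3),
# because 2 kWh is moved from a heat source rather than converted from work.
print(cop(useful_heat_kwh=3.0, work_input_kwh=1.0))
```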
When talking about the efficiency of heat engines and power stations, the convention should be stated, i.e., HHV (a.k.a. gross heating value) or LCV (a.k.a. net heating value), and whether gross output (at the generator terminals) or net output (at the power station fence) is being considered. The two are separate, but both must be stated; failure to do so causes endless confusion.
Related, more specific terms include
Electrical efficiency, useful power output per electrical power consumed;
Mechanical efficiency, where one form of mechanical energy (e.g. potential energy of water) is converted to mechanical energy (work);
Thermal efficiency or Fuel efficiency, useful heat and/or work output per input energy such as the fuel consumed;
'Total efficiency', e.g., for cogeneration, useful electric power and heat output per fuel energy consumed. Same as the thermal efficiency.
Luminous efficiency, the portion of the emitted electromagnetic radiation that is usable for human vision.
Chemical conversion efficiency
The change of Gibbs energy of a defined chemical transformation at a particular temperature is the minimum theoretical quantity of energy required to make that change occur (if the change in Gibbs energy between reactants and products is positive) or the maximum theoretical energy that might be obtained from that change (if the change in Gibbs energy between reactants and products is negative). The energy efficiency of a process involving chemical change may be expressed relative to these theoretical minima or maxima. The difference between the change of enthalpy and the change of Gibbs energy of a chemical transformation at a particular temperature indicates the heat input required or the heat removal (cooling) required to maintain that temperature.
A fuel cell may be considered to be the reverse of electrolysis. For example, an ideal fuel cell operating at a temperature of 25 °C having gaseous hydrogen and gaseous oxygen as inputs and liquid water as the output could produce a theoretical maximum amount of electrical energy of 237.129 kJ (0.06587 kWh) per gram mol (18.0154 gram) of water produced and would require 48.701 kJ (0.01353 kWh) per gram mol of water produced of heat energy to be removed from the cell to maintain that temperature.
An ideal electrolysis unit operating at a temperature of 25 °C having liquid water as the input and gaseous hydrogen and gaseous oxygen as products would require a theoretical minimum input of electrical energy of 237.129 kJ (0.06587 kWh) per gram mol (18.0154 gram) of water consumed and would require 48.701 kJ (0.01353 kWh) per gram mol of water consumed of heat energy to be added to the unit to maintain that temperature. It would operate at a cell voltage of 1.24 V.
For a water electrolysis unit operating at a constant temperature of 25 °C without the input of any additional heat energy, electrical energy would have to be supplied at a rate equivalent of the enthalpy (heat) of reaction or 285.830 kJ (0.07940 kWh) per gram mol of water consumed. It would operate at a cell voltage of 1.48 V. The electrical energy input of this cell is 1.20 times greater than the theoretical minimum so the energy efficiency is 0.83 compared to the ideal cell.
A water electrolysis unit operating with a voltage higher than 1.48 V and at a temperature of 25 °C would have to have heat energy removed in order to maintain a constant temperature, and the energy efficiency would then be less than 0.83.
The large entropy difference between liquid water and gaseous hydrogen plus gaseous oxygen accounts for the significant difference between the Gibbs energy of reaction and the enthalpy (heat) of reaction.
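The electrolysis figures quoted above can be reproduced from the thermodynamic data given in this section; the sketch below assumes only Faraday's constant and the two electrons transferred per water molecule (the reversible voltage computes to about 1.23 V, which the text rounds to 1.24 V).

```python
F = 96_485.332   # C/mol, Faraday constant
n = 2            # electrons transferred per molecule of water

dG = 237_129.0   # J/mol, Gibbs energy of reaction at 25 C (from the text)
dH = 285_830.0   # J/mol, enthalpy of reaction at 25 C (from the text)

reversible_voltage = dG / (n * F)      # minimum cell voltage with ideal heat exchange
thermoneutral_voltage = dH / (n * F)   # voltage at which no external heat flow is needed
efficiency_vs_ideal = dG / dH          # electrical efficiency of the thermoneutral cell

print(f"reversible voltage:    {reversible_voltage:.2f} V")    # ~1.23 V
print(f"thermoneutral voltage: {thermoneutral_voltage:.2f} V") # ~1.48 V
print(f"efficiency vs ideal:   {efficiency_vs_ideal:.2f}")     # ~0.83
```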
Fuel heating values and efficiency
In Europe the usable energy content of a fuel is typically calculated using the lower heating value (LHV) of that fuel, the definition of which assumes that the water vapor produced during fuel combustion (oxidation) remains gaseous, and is not condensed to liquid water so the latent heat of vaporization of that water is not usable. Using the LHV, a condensing boiler can achieve a "heating efficiency" in excess of 100% (this does not violate the first law of thermodynamics as long as the LHV convention is understood, but does cause confusion). This is because the apparatus recovers part of the heat of vaporization, which is not included in the definition of the lower heating value of a fuel. In the U.S. and elsewhere, the higher heating value (HHV) is used, which includes the latent heat for condensing the water vapor, and thus the thermodynamic maximum of 100% efficiency cannot be exceeded.
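The sketch below shows how the same condensing boiler can be quoted above or below 100% depending on the heating-value convention; the methane heating values (HHV about 55.5 MJ/kg, LHV about 50.0 MJ/kg) and the recovered-heat figure are assumptions for illustration.

```python
HHV = 55.5   # MJ/kg, assumed higher heating value of natural gas (methane)
LHV = 50.0   # MJ/kg, assumed lower heating value of natural gas (methane)

useful_heat = 53.0  # MJ of useful heat recovered per kg of fuel by a condensing boiler (illustrative)

print(f"efficiency on an LHV basis: {useful_heat / LHV:.0%}")  # exceeds 100%
print(f"efficiency on an HHV basis: {useful_heat / HHV:.0%}")  # stays below 100%
```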
Wall-plug efficiency, luminous efficiency, and efficacy
In optical systems such as lighting and lasers, the energy conversion efficiency is often referred to as wall-plug efficiency. The wall-plug efficiency is the measure of output radiative-energy, in watts (joules per second), per total input electrical energy in watts. The output energy is usually measured in terms of absolute irradiance and the wall-plug efficiency is given as a percentage of the total input energy, with the inverse percentage representing the losses.
The wall-plug efficiency differs from the luminous efficiency in that wall-plug efficiency describes the direct output/input conversion of energy (the amount of work that can be performed) whereas luminous efficiency takes into account the human eye's varying sensitivity to different wavelengths (how well it can illuminate a space). Instead of using watts, the power of a light source to produce wavelengths proportional to human perception is measured in lumens. The human eye is most sensitive to wavelengths of 555 nanometers (greenish-yellow) but the sensitivity decreases dramatically to either side of this wavelength, following a Gaussian power-curve and dropping to zero sensitivity at the red and violet ends of the spectrum. Due to this the eye does not usually see all of the wavelengths emitted by a particular light-source, nor does it see all of the wavelengths within the visual spectrum equally. Yellow and green, for example, make up more than 50% of what the eye perceives as being white, even though in terms of radiant energy white-light is made from equal portions of all colors (i.e.: a 5 mW green laser appears brighter than a 5 mW red laser, yet the red laser stands-out better against a white background). Therefore, the radiant intensity of a light source may be much greater than its luminous intensity, meaning that the source emits more energy than the eye can use. Likewise, the lamp's wall-plug efficiency is usually greater than its luminous efficiency. The effectiveness of a light source to convert electrical energy into wavelengths of visible light, in proportion to the sensitivity of the human eye, is referred to as luminous efficacy, which is measured in units of lumens per watt (lm/w) of electrical input-energy.
Unlike efficacy (effectiveness), which is a unit of measurement, efficiency is a unitless number expressed as a percentage, requiring only that the input and output units be of the same type. The luminous efficiency of a light source is thus the percentage of luminous efficacy per theoretical maximum efficacy at a specific wavelength. The amount of energy carried by a photon of light is determined by its wavelength. In lumens, this energy is offset by the eye's sensitivity to the selected wavelengths. For example, a green laser pointer can have greater than 30 times the apparent brightness of a red pointer of the same power output. At 555 nm in wavelength, 1 watt of radiant energy is equivalent to 683 lumens, thus a monochromatic light source at this wavelength, with a luminous efficacy of 683 lm/w, would have a luminous efficiency of 100%. The theoretical-maximum efficacy lowers for wavelengths at either side of 555 nm. For example, low-pressure sodium lamps produce monochromatic light at 589 nm with a luminous efficacy of 200 lm/w, which is the highest of any lamp. The theoretical-maximum efficacy at that wavelength is 525 lm/w, so the lamp has a luminous efficiency of 38.1%. Because the lamp is monochromatic, the luminous efficiency nearly matches the wall-plug efficiency of < 40%.
Calculations for luminous efficiency become more complex for lamps that produce white light or a mixture of spectral lines. Fluorescent lamps have higher wall-plug efficiencies than low-pressure sodium lamps, but only have half the luminous efficacy of ~ 100 lm/w, thus the luminous efficiency of fluorescents is lower than sodium lamps. A xenon flashtube has a typical wall-plug efficiency of 50–70%, exceeding that of most other forms of lighting. Because the flashtube emits large amounts of infrared and ultraviolet radiation, only a portion of the output energy is used by the eye. The luminous efficacy is therefore typically around 50 lm/w. However, not all applications for lighting involve the human eye nor are restricted to visible wavelengths. For laser pumping, the efficacy is not related to the human eye so it is not called "luminous" efficacy, but rather simply "efficacy" as it relates to the absorption lines of the laser medium. Krypton flashtubes are often chosen for pumping Nd:YAG lasers, even though their wall-plug efficiency is typically only ~ 40%. Krypton's spectral lines better match the absorption lines of the neodymium-doped crystal, thus the efficacy of krypton for this purpose is much higher than xenon; able to produce up to twice the laser output for the same electrical input. All of these terms refer to the amount of energy and lumens as they exit the light source, disregarding any losses that might occur within the lighting fixture or subsequent output optics. Luminaire efficiency refers to the total lumen-output from the fixture per the lamp output.
With the exception of a few light sources, such as incandescent light bulbs, most light sources have multiple stages of energy conversion between the "wall plug" (electrical input point, which may include batteries, direct wiring, or other sources) and the final light-output, with each stage producing a loss. Low-pressure sodium lamps initially convert the electrical energy using an electrical ballast, to maintain the proper current and voltage, but some energy is lost in the ballast. Similarly, fluorescent lamps also convert the electricity using a ballast (electronic efficiency). The electricity is then converted into light energy by the electrical arc (electrode efficiency and discharge efficiency). The light is then transferred to a fluorescent coating that only absorbs suitable wavelengths, with some losses of those wavelengths due to reflection off and transmission through the coating (transfer efficiency). The number of photons absorbed by the coating will not match the number then reemitted as fluorescence (quantum efficiency). Finally, due to the phenomenon of the Stokes shift, the re-emitted photons will have a longer wavelength (thus lower energy) than the absorbed photons (fluorescence efficiency). In very similar fashion, lasers also experience many stages of conversion between the wall plug and the output aperture. The terms "wall-plug efficiency" or "energy conversion efficiency" are therefore used to denote the overall efficiency of the energy-conversion device, deducting the losses from each stage, although this may exclude external components needed to operate some devices, such as coolant pumps.
Example of energy conversion efficiency
See also
Cost of electricity by source
Energy efficiency (disambiguation)
EROEI
Exergy efficiency
Figure of merit
Heat of combustion
International Electrotechnical Commission
Perpetual motion
Sensitivity (electronics)
Solar cell efficiency
Coefficient of performance
References
External links
Does it make sense to switch to LED?
Building engineering
Dimensionless numbers of thermodynamics
Energy conservation
Energy conversion
Energy efficiency | Energy conversion efficiency | [
"Physics",
"Chemistry",
"Engineering"
] | 2,821 | [
"Thermodynamic properties",
"Physical quantities",
"Dimensionless numbers of thermodynamics",
"Building engineering",
"Civil engineering",
"Architecture"
] |
7,005,786 | https://en.wikipedia.org/wiki/Hengzhi%20chip | The Hengzhi chip (, 共產主義監控晶片) is a microcontroller that can store secured information, designed by the People's Republic of China government and manufactured in China. Its functionalities should be similar to those offered by a Trusted Platform Module but, unlike the TPM, it does not follow Trusted Computing Group specifications. Lenovo is selling PCs installed with Hengzhi security chips. The chip could be a development of the IBM ESS (Embedded security subsystem) chip, which was a public key smart card placed directly on the motherboard's system management bus. As of September 2006, no public specifications about the chip are available.
The Hengzhi chip has caused issues with the installation of Windows 11 as it doesn't follow the TPM standards and foreign TPMs are banned in China.
See also
Trusted Computing
Trusted Platform Module
References
External links
Lenovo releases China's first security chip
Does China Own Your Box?
Cryptographic hardware
Trusted computing
Science and technology in the People's Republic of China | Hengzhi chip | [
"Engineering"
] | 217 | [
"Cybersecurity engineering",
"Trusted computing"
] |
7,007,246 | https://en.wikipedia.org/wiki/Roadheader | A roadheader, also called a boom-type roadheader, road header machine, road header or just header machine, is a piece of excavating equipment consisting of a boom-mounted cutting head, a loading device usually involving a conveyor, and a crawler travelling track to move the entire machine forward into the rock face.
The cutting head can be a general purpose rotating drum mounted in line or perpendicular to the boom, or can be special function heads such as jackhammer-like spikes, compression fracture micro-wheel heads like those on larger tunnel boring machines, a slicer head like a gigantic chain saw for dicing up rock, or simple jaw-like buckets of traditional excavators.
History
The first roadheader patent was applied for by Dr. Z. Ajtay in Hungary, in 1949. It was invented as a remote operated miner for exploitation of small seam, close walled deposits, typically in wet conditions.
Types
Cutting Heads:
Transverse – the cutting drum rotates about an axis perpendicular to the cutter boom axis
Longitudinal (axial) – the cutting drum rotates about an axis parallel to the boom axis
Uses
Roadheaders were initially used in coal mines. The first use in a civil engineering project was the construction of the City Loop (then called the Melbourne Underground Rail Loop) in the 1970s, where the machines enabled around 80% of the excavation to be performed mechanically.
They are now widely used in applications such as tunneling, both for mining and for municipal government projects, as well as for building wine caves and cave homes such as those in Coober Pedy, Australia.
On February 21, 2014, Waller Street, just south of Laurier Avenue collapsed into an 8m-wide and 12m-deep sink-hole where a roadheader was excavating the eastern entrance to Ottawa's LRT O-Train tunnel. A similar incident occurred in June 2016, when a sink-hole opened up in Rideau Street during further construction of the tunnel, and filled with water up to a depth of three metres. The CBC reported that one of Rideau Transit Group’s 135-tonne roadheaders was in a part of the tunnel where the flooding was the deepest. Three roadheaders were used in the construction of the O-Train.
Projects utilizing roadheaders
Boston's Big Dig
Ground Zero Cleanup
Addison Airport Toll Tunnel
Fourth bore of Caldecott Tunnel
Malmö City Tunnel
Confederation Line, Ottawa
References
External links
An article on underground home design and construction, with a section on use of roadheader machines.
Ripping head roadheader Video
Engineering vehicles
Mining equipment
Excavating equipment | Roadheader | [
"Engineering"
] | 515 | [
"Engineering vehicles",
"Excavating equipment",
"Mining equipment"
] |
7,008,837 | https://en.wikipedia.org/wiki/Jig%20borer | The jig borer is a type of machine tool invented at the end of World War I to enable the quick and precise location of hole centers. It was invented independently in Switzerland and the United States. It resembles a specialized kind of milling machine that provides tool and die makers with a higher degree of positioning precision (repeatability) and accuracy than those provided by general machines. Although capable of light milling, a jig borer is more suited to highly accurate drilling, boring, and reaming, where the quill or headstock does not see the significant side loading that it would with mill work. The result is a machine designed more for location accuracy than heavy material removal.
A typical jig borer has a work table which can be moved using large handwheels (with micrometer-style readouts and verniers) on particularly carefully made shafts with a strong degree of gearing; this allows positions to be set on the two axes to an accuracy on the order of 0.0001 in (a few micrometres). It is generally used to enlarge to a precise size smaller holes drilled with less accurate machinery in approximately the correct place (that is, with the small hole strictly within the area to be bored out for the large hole).
Jig borers are limited to working materials that are still soft enough to be bored. Often a jig is hardened; for a jig borer this requires the material to be bored first and then hardened, which may introduce distortion. Consequently, the jig grinder was developed as a machine with the precision of the jig borer, but capable of working materials in their hardened state.
History
Before the jig borer was developed, hole center location had been accomplished either with layout (either quickly-but-imprecisely or painstakingly-and-precisely) or with drill jigs (themselves made with painstaking-and-precise layout). The jig borer was invented to expedite the making of drill jigs, but it helped to eliminate the need for drill jigs entirely by making quick precision directly available for the parts that the jigs would have been created for. The revolutionary underlying principle was that advances in machine tool control that expedited the making of jigs were fundamentally a way to expedite the cutting process itself, for which the jig was just a means to an end. Thus, the jig borer's development helped advance machine tool technology toward later NC and CNC development. The jig borer was a logical extension of manual machine tool technology that began to incorporate some then-novel concepts that would become routine with NC and CNC control, such as:
coordinate dimensioning (dimensioning of all locations on the part from a single reference point);
working routinely in "tenths" (ten-thousandths of an inch, 0.0001 inch) as a fast, everyday machine capability (whereas it had been the exclusive domain of special, time-consuming, craftsman-dependent manual skills); and
circumventing jigs altogether.
Franklin D. Jones, in his textbook Machine Shop Training Course (5th ed), noted:
"In many cases, a jig borer is a 'jig eliminator.' In other words, such a machine may be used instead of a jig either when the quantity of work is not large enough to warrant making a jig or when there is insufficient time for jig making."
Several innovations in the development of the jig borer were the work of the Moore Special Tool Company, such as the adoption of hardened and accurate leadscrews, formed by grinding, rather than a soft leadscrew with a compensating nut.
The technological advances that led to the jig borer and NC were about to usher in the age of CNC and CAD/CAM, radically changing the way people manufacture many of their goods.
References
Hole making
Machine tools | Jig borer | [
"Engineering"
] | 795 | [
"Machine tools",
"Industrial machinery"
] |
10,820,517 | https://en.wikipedia.org/wiki/1963%20United%20States%20Tri-Service%20rocket%20and%20guided%20missile%20designation%20system | In 1963, the U.S. Department of Defense established a designation system for rockets and guided missiles jointly used by all the United States armed services. It superseded the separate designation systems the Air Force and Navy had for designating US guided missiles and drones, but also a short-lived interim USAF system for guided missiles and rockets.
History
On 11 December 1962, the U.S. Department of Defense issued Directive 4000.20 “Designating, Redesignating, and Naming Military Rockets and Guided Missiles” which called for a joint designation system for rockets and missiles which was to be used by all armed forces services. The directive was implemented via Air Force Regulation (AFR) 66-20, Army Regulation (AR) 705-36, Bureau of Weapons Instruction (BUWEPSINST) 8800.2 on 27 June 1963. A subsequent directive, DoD Directive 4120.15 "Designating and Naming Military Aircraft, Rockets, and Guided Missiles", was issued on 24 November 1971 and implemented via Air Force Regulation (AFR) 82-1/Army Regulation (AR) 70-50/Naval Material Command Instruction (NAVMATINST) 8800.4A on 27 March 1974. Within AFR 82-1/AR 70-50/NAVMATINST 8800.4A, the 1963 rocket and guided missile designation system was presented alongside the 1962 United States Tri-Service aircraft designation system and the two systems have been concurrently presented and maintained in joint publications since.
The current version of the rocket and missile designation system was mandated by Joint Regulation 4120.15E Designating and Naming Military Aerospace Vehicles and was implemented via Air Force Instruction (AFI) 16-401, Army Regulation (AR) 70-50, Naval Air Systems Command Instruction (NAVAIRINST) 13100.16 on 3 November 2020. The list of military rockets and guided missiles was maintained via 4120.15-L Model Designation of Military Aerospace Vehicles until its transition to data.af.mil on 31 August 2018.
Explanation
The basic designation of every rocket and guided missile is based in a set of letters called the Mission Design Sequence. The sequence indicates the following:
An optional status prefix
The environment from which the weapon is launched
The primary mission of the weapon
The type of weapon
Examples of guided missile designators are as follows:
AGM – (A) Air-launched (G) Surface-attack (M) Guided missile
AIM – (A) Air-launched (I) Intercept-aerial (M) Guided missile
ATM – (A) Air-launched (T) Training (M) Guided missile
RIM – (R) Ship-launched (I) Intercept-aerial (M) Guided missile
LGM – (L) Silo-launched (G) Surface-attack (M) Guided missile
The design or project number follows the basic designator. In turn, the number may be followed by consecutive letters, representing modifications.
Example:
RGM-84D means:
R – The weapon is ship-launched;
G – The weapon is designed to surface-attack;
M – The weapon is a guided missile;
84 – eighty-fourth missile design;
D – fourth modification;
In addition, most guided missiles have names, such as Harpoon, Tomahawk, Sea Sparrow, etc. These names are retained regardless of subsequent modifications to the missile.
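A minimal sketch of how a designation such as RGM-84D can be decoded programmatically; the letter tables below are limited to the codes mentioned in this article (plus the X/Y/Z status prefixes), so they are an illustrative subset rather than the full designation system.

```python
import re

PREFIX = {"X": "Experimental", "Y": "Prototype", "Z": "Planning"}
LAUNCH = {"A": "Air-launched", "R": "Ship-launched", "L": "Silo-launched"}
MISSION = {"G": "Surface-attack", "I": "Intercept-aerial", "T": "Training"}
VEHICLE = {"M": "Guided missile"}

def decode(designation: str) -> dict:
    """Split a Tri-Service designation into its status prefix, mission design sequence,
    design number and modification letter."""
    m = re.fullmatch(r"([XYZ]?)([A-Z])([A-Z])([A-Z])-(\d+)([A-Z]?)", designation)
    if not m:
        raise ValueError(f"not a recognised designation: {designation}")
    prefix, env, mission, vehicle, design, mod = m.groups()
    return {
        "status prefix": PREFIX.get(prefix, "(standard)"),
        "launch environment": LAUNCH.get(env, env),
        "mission": MISSION.get(mission, mission),
        "vehicle type": VEHICLE.get(vehicle, vehicle),
        "design number": int(design),
        "modification": mod or "(first version)",
    }

print(decode("RGM-84D"))   # ship-launched surface-attack guided missile, design 84, modification D
print(decode("YAIM-54A"))  # prototype air-launched aerial-intercept guided missile
```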
Code
Prefixes
Additionally, a prefix may be added to the designation indicating a non-standard configuration.
For example:
YAIM-54A
XAIM-174B
See also
List of missiles
1962 United States Tri-Service aircraft designation system
United States military aircraft designation systems
Notes
References
External links
Guided missiles
Weapons of the United States
Naming conventions
Rocketry | 1963 United States Tri-Service rocket and guided missile designation system | [
"Engineering"
] | 747 | [
"Rocketry",
"Aerospace engineering"
] |
10,821,496 | https://en.wikipedia.org/wiki/Dullard%20protein | In cell biology, Dullard protein is a protein coding gene involved in neural development. It is a member of DXDX(T/V) phosphatase family and is a potential regulator of neural tube development in Xenopus. The gene promotes neural development by inhibiting Bone Morphogenetic Proteins (BMPs). Dullard is also known as CTDnep1, which stands for CTD nuclear envelope phosphatase 1. This gene is relatively small and only contains 244 amino acids.
Description
Dullard is also known as CTDNEP1, which stands for CTD nuclear envelope phosphatase 1. It is a protein-coding gene whose product exhibits phosphatase activity, including protein serine/threonine phosphatase activity. The protein is relatively small, containing only 244 amino acids. The Dullard protein (CTDNEP1) is a protein serine/threonine phosphatase that dephosphorylates LPIN1 and LPIN2. LPIN1 and LPIN2 catalyze the conversion of phosphatidic acid to diacylglycerol, a reaction that can affect the lipid concentration of the endoplasmic reticulum and the nucleus.
Dullard and BMP signaling
Neural development takes place in the dorsal ectoderm. In Xenopus, overexpression of Dullard leads to apoptosis in early development. Dullard promotes ubiquitin-mediated proteasomal degradation. Dullard mRNA is maternally derived and is localized within the animal hemisphere. Functioning as a negative regulator of Bone Morphogenetic Protein (BMP) signaling, Dullard conserves the C-terminal region of NLI-IF, which is fairly dominant in cellular functions. Dullard is essential for inhibiting BMP receptor activation during Xenopus neuralization.
Human Dullard
Human Dullard has been shown to contain two membrane-spanning regions. The N-terminal end helps localize the protein to the nuclear envelope. Dullard dephosphorylates the mammalian phosphatidic acid phosphatase, lipin. Dullard participates in a unique phosphatase cascade regulating nuclear membrane biogenesis, and this cascade is conserved from yeast to mammals. Dullard is also believed to have other targets that are not associated with the nuclear envelope. Recent studies have shown that Dullard interacts with the BMP type 1 receptor to inhibit BMP-dependent phosphorylation, suggesting that it is a potential regulator of the level of BMP signaling and can affect germ cell specification.
References
Proteins | Dullard protein | [
"Chemistry"
] | 552 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
10,831,865 | https://en.wikipedia.org/wiki/Swimming%20pool%20sanitation | Swimming pool sanitation is the process of ensuring healthy conditions in swimming pools. Proper sanitation is needed to maintain the visual clarity of water and to prevent the transmission of infectious waterborne diseases.
Methods
Two distinct and separate methods are employed in the sanitation of a swimming pool. The filtration system removes organic waste on a daily basis by using the sieve baskets inside the skimmer and circulation pump and the sand unit with a backwash facility for easy removal of organic waste from the water circulation. Disinfection - normally in the form of hypochlorous acid (HClO) - kills infectious microorganisms. Alongside these two distinct measures within the pool owner's jurisdiction, swimmer hygiene and cleanliness helps reduce organic waste build-up.
Guidelines
The World Health Organization has published international guidelines for the safety of swimming pools and similar recreational-water environments, including standards for minimizing microbial and chemical hazards. The United States Centers for Disease Control and Prevention also provides information on pool sanitation and water related illnesses for health professionals and the public. The main organizations providing certifications for pool and spa operators and technicians are the National Swimming Pool Foundation and Association of Pool & Spa Professionals. The certifications are accepted by many state and local health departments.
Contaminants and disease
Swimming pool contaminants are introduced from environmental sources and swimmers. Affecting primarily outdoor swimming pools, environmental contaminants include windblown dirt and debris, incoming water from unsanitary sources, rain containing microscopic algae spores and droppings from birds possibly harboring disease-causing pathogens. Indoor pools are less susceptible to environmental contaminants.
Contaminants introduced by swimmers can dramatically influence the operation of indoor and outdoor swimming pools. Contaminants include micro-organisms from infected swimmers and body oils including sweat, cosmetics, suntan lotion, urine, saliva and fecal matter; for example, it was estimated by researchers that swimming pools contain, on average, 30 to 80 mL of urine for each person that uses the pool. In addition, the interaction between disinfectants and pool water contaminants can produce a mixture of chloramines and other disinfection by-products. The journal Environmental Science & Technology reported that sweat and urine react with chlorine and produce trichloramine and cyanogen chloride, two chemicals dangerous to human health. Nitrosamines are another type of disinfection by-product that is of concern as a potential health hazard.
Acesulfame potassium is widely used in the human diet and excreted by the kidneys. It has been used by researchers as a marker to estimate the degree to which swimming pools are contaminated by urine. It was estimated that a commercial-size swimming pool of 220,000 gallons would contain about 20 gallons of urine, equivalent to about 2 gallons of urine in a typical residential pool.
Pathogenic contaminants are of greatest concern in swimming pools as they have been associated with numerous recreational water illnesses (RWIs). Public health pathogens can be present in swimming pools as viruses, bacteria, protozoa and fungi. Diarrhea is the most commonly reported illness associated with pathogenic contaminants, while other diseases associated with untreated pools are Cryptosporidiosis and Giardiasis. Other illnesses commonly occurring in poorly maintained swimming pools include otitis externa, commonly called swimmers ear, skin rashes and respiratory infections.
Maintenance and hygiene
Contamination can be minimized by good swimmer hygiene practices such as showering before and after swimming, and not letting children with intestinal disorders swim. Effective treatments are needed to address contaminants in pool water because preventing the introduction of pool contaminants, pathogenic and non-pathogenic, into swimming pools is, in practice, impossible.
A well-maintained, properly operating pool filtration and re-circulation system is the first barrier, combating the contaminants large enough to be filtered. Rapid removal of these filterable contaminants reduces the impact on the disinfection system thereby limiting the formation of chloramines, restricting the formation of disinfection by-products and optimizing sanitation effectiveness. To kill pathogens and help prevent recreational water illnesses, pool operators must maintain proper levels of chlorine or another sanitizer.
Over time, calcium from municipal water tends to accumulate, forming scale deposits on the swimming pool walls and in equipment (filters, pumps), reducing their effectiveness. Therefore, it is advised either to completely drain the pool and refill it with fresh water, or to recycle the existing pool water using reverse osmosis. The advantage of the latter method is that 90% of the water can be reused.
Pool operators must also store and handle cleaning and sanitation chemicals safely.
Prevention of diseases in swimming pools and spas
Disease prevention should be the top priority for every water quality management program for pool and spa operators. Disinfection is critical to protect against pathogens, and is best managed through routine monitoring and maintenance of chemical feed equipment to ensure optimum chemical levels in accordance with state and local regulations.
Chemical parameters include disinfectant levels according to regulated pesticide label directions. pH should be kept between 7.2 and 7.8. Human tears have a pH of about 7.4, making this an ideal target for pool water. More often than not, it is improper pH and not the sanitiser that is responsible for irritating swimmers' skin and eyes.
Total alkalinity should be 80–120 ppm and calcium hardness between 200 and 400 ppm.
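A minimal sketch of how such readings might be checked against the ranges quoted above; the parameter names and limits simply restate the figures in this section and are not an official standard:

    # Check pool water test readings against the ranges quoted above.
    RANGES = {
        "pH": (7.2, 7.8),
        "total alkalinity (ppm)": (80, 120),
        "calcium hardness (ppm)": (200, 400),
    }

    def check_reading(name, value):
        low, high = RANGES[name]
        status = "OK" if low <= value <= high else "out of range"
        return f"{name} = {value} ({status}; target {low}-{high})"

    readings = {"pH": 7.9, "total alkalinity (ppm)": 95, "calcium hardness (ppm)": 180}
    for name, value in readings.items():
        print(check_reading(name, value))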
Good hygienic behavior at swimming pools is also important for reducing health risk factors at swimming pools and spas. Showering before swimming can reduce introduction of contaminants to the pool, and showering again after swimming will help to remove any that may have been picked up by the swimmer.
Those with diarrhea or other gastroenteritis illnesses, especially children, should not swim within 2 weeks of an episode, as pathogens such as Cryptosporidium are resistant to chlorine.
In order to minimize exposure to pathogens, swimmers should avoid getting water into their mouths, and should never swallow pool or spa water.
Standards
Maintaining an effective concentration of disinfectant is critically important in assuring the safety and health of swimming pool and spa users. When any of these pool chemicals are used, it is very important to keep the pH of the pool in the range 7.2 to 7.8 – according to the Langelier Saturation Index, or 7.8 to 8.2 – according to the Hamilton Index; higher pH drastically reduces the sanitizing power of the chlorine due to reduced oxidation-reduction potential (ORP), while lower pH produces more rapid loss of chlorine and causes bather discomfort, especially to the eyes. However, according to the Hamilton Index, a higher pH can reduce unnecessary chlorine consumption while still remaining effective at preventing algae and bacteria growth.
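The effect of pH on sanitizing power can be illustrated with the acid–base equilibrium of hypochlorous acid. The sketch below assumes a pKa of roughly 7.5 for HOCl near 25 °C (a standard textbook value, not a figure from this article) and estimates the fraction of free chlorine present in the more strongly disinfecting HOCl form:

    # Fraction of free chlorine present as hypochlorous acid (HOCl) versus pH,
    # from the acid-base equilibrium HOCl <-> H+ + OCl-.  The pKa of about 7.5
    # is an assumed textbook value near 25 degrees C.
    PKA_HOCL = 7.5

    def hocl_fraction(ph):
        return 1.0 / (1.0 + 10 ** (ph - PKA_HOCL))

    for ph in (7.2, 7.5, 7.8, 8.2):
        print(f"pH {ph}: about {hocl_fraction(ph):.0%} of free chlorine is HOCl")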
To help ensure the health of bathers and protect pool equipment, it is essential to perform routine monitoring of water quality factors (or "parameters"). This process becomes the essence of an optimum water quality management program.
Systems and disinfection methods
Chlorine and bromine methods
Conventional halogen-based oxidizers such as chlorine and bromine are convenient and economical primary sanitizers for swimming pools and provide a residual level of sanitizer that remains in the water. Chlorine-releasing compounds are the most popular and frequently used in swimming pools whereas bromine-releasing compounds have found heightened popularity in spas and hot tubs. Both are members of the halogen group with demonstrated ability to destroy and deactivate a wide range of potentially dangerous bacteria and viruses in swimming pools and spas. Both exhibit three essential elements as ideal first-line-of-defense sanitizers for swimming pools and spas: they are fast-acting and enduring, they are effective algaecides, and they oxidize undesired contaminants.
Swimming pools can be disinfected with a variety of chlorine-releasing compounds. The most basic of these compounds is molecular chlorine (Cl2); however, its application is primarily in large commercial public swimming pools. Inorganic forms of chlorine-releasing compounds frequently used in residential and public swimming pools include sodium hypochlorite commonly known as liquid bleach or simply bleach, calcium hypochlorite and lithium hypochlorite. Chlorine residuals from Cl2 and inorganic chlorine-releasing compounds break down rapidly in sunlight. To extend their disinfectant usefulness and persistence in outdoor settings, swimming pools treated with one or more of the inorganic forms of chlorine-releasing compounds can be supplemented with cyanuric acid – a granular stabilizing agent capable of extending the active chlorine residual half-life (t½) by four to sixfold.
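A small sketch of what a four- to six-fold extension of the active chlorine half-life means in practice; the two-hour unstabilized half-life used below is an illustrative assumption, not a figure from this article:

    # Exponential decay of a chlorine residual with and without stabilizer.
    # The unstabilized 2-hour half-life in direct sunlight is an illustrative
    # assumption; the 4-6x extension is the range quoted above.
    def residual_ppm(initial_ppm, hours, half_life_h):
        return initial_ppm * 0.5 ** (hours / half_life_h)

    base_half_life_h = 2.0
    for factor in (1, 4, 6):
        level = residual_ppm(3.0, 8, base_half_life_h * factor)  # 3 ppm start, 8 h of sun
        print(f"half-life x{factor}: {level:.2f} ppm remaining after 8 hours")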
Chlorinated isocyanurates, a family of organic chlorine-releasing compounds, are stabilized to prevent UV degradation due to the presence of cyanurate as part of their chemical backbone. These are commonly sold for general use in small summer pools, where the water is expected to be used for only a few months and is expected to be regularly topped up with fresh water, due to evaporation and splash loss. It is important to change the water frequently; otherwise, levels of cyanuric acid will build up beyond the point at which the mechanism functions. Excess cyanurates will actually work in reverse and will inhibit the chlorine. A steadily lowering pH value of the water may at first be noticed. Algal growth may become visible, even though chlorine tests show sufficient levels.
Chlorine reacting with urea in urine and other nitrogen-containing wastes from bathers can produce chloramines. Chloramines typically occur when an insufficient amount of chlorine is used to disinfect a contaminated pool. Chloramines are generally responsible for the noxious, irritating smell prominently occurring in indoor pool settings. A common way to remove chloramines is to "superchlorinate" (commonly called "shocking") the pool with a high dose of inorganic chlorine sufficient to deliver 10 ppm chlorine. Regular superchlorination (every two weeks in summer) helps to eliminate these unpleasant odors in the pool. Levels of chloramines and other volatile compounds in water can be minimized by reducing contaminants that lead to their formation (e.g., urea, creatinine, amino acids and personal care products) as well as by use of non-chlorine "shock oxidizers" such as potassium peroxymonosulfate.
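Shock dosing is essentially a mass-balance calculation, since one ppm corresponds to one milligram of available chlorine per litre of water. The pool volume and the available-chlorine fraction below are illustrative assumptions; product labels take precedence:

    # Rough superchlorination ("shock") dose estimate: 1 ppm of chlorine is
    # 1 mg of available chlorine per litre of pool water.  The pool volume and
    # the 65% available-chlorine figure for calcium hypochlorite are assumptions.
    pool_volume_l = 50_000
    target_ppm = 10                        # dose quoted above for chloramine removal
    available_chlorine_fraction = 0.65

    chlorine_needed_g = target_ppm * pool_volume_l / 1000   # mg -> g
    product_needed_g = chlorine_needed_g / available_chlorine_fraction
    print(f"about {chlorine_needed_g:.0f} g of available chlorine,")
    print(f"roughly {product_needed_g:.0f} g of product for this pool")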
Medium pressure UV technology is used to control the level of chloramines in indoor pools. It is also used as a secondary form of disinfection to address chlorine-tolerant pathogens. A properly sized and maintained UV system should remove the need to shock for chloramines, although shocking would still be used to address a fecal accident in the pool. UV will not replace chlorine but is used to control the level of chloramines, which are responsible for the odor, irritation, and enhanced corrosion at an indoor pool.
Copper ion system
Copper ion systems use an electric current across copper bars (solid copper, or a copper–silver mixture) to free copper ions into the flow of pool water to kill organisms such as algae in the water and provide a "residual" in the water. Alternative systems also use titanium plates to produce oxygen in the water to help degrade organic compounds.
Private pool filtration
Water pumps
An electrically operated water pump is the prime motivator in recirculating the water from the pool. Water is forced through a filter and then returned to the pool. Using a water pump by itself is often not sufficient to completely sanitize a pool. Commercial and public pool pumps usually run 24 hours a day for the entire operating season of the pool. Residential pool pumps are typically run for 4 hours per day in winter (when the pool is not in use) and up to 24 hours in summer. To save electricity costs, most pools run water pumps for between 6 hours and 12 hours in summer with the pump being controlled by an electronic timer.
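The trade-off between pump run time and electricity cost is simple arithmetic; the pump power rating and tariff below are illustrative assumptions:

    # Rough electricity use and cost of running a residential pool pump.
    # The pump power draw and tariff are illustrative assumptions.
    pump_kw = 0.75            # assumed pump power draw (kW)
    price_per_kwh = 0.30      # assumed electricity price

    for hours_per_day in (4, 6, 12, 24):
        kwh = pump_kw * hours_per_day
        print(f"{hours_per_day:>2} h/day: {kwh:4.1f} kWh, about {kwh * price_per_kwh:.2f} per day")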
Most pool pumps available today incorporate a small filter basket as the last effort to avoid leaf or hair contamination reaching the close-tolerance impeller section of the pump.
Filtration units
Sand
A pressure-fed sand filter is typically placed in line immediately after the water pump. The filter typically contains a medium such as graded sand (called '14/24 Filter Media' in the UK system of grading the size of sand by sifting through a fine brass-wire mesh of 14 to the inch (5.5 per centimeter) to 24 to the inch (9.5 per cm)). A pressure-fed sand filter is termed a 'high rate' sand filter, and will generally filter turbid water of particulates no smaller than 10 micrometers in size. These filters are periodically 'backwashed' as contaminants reduce water flow and increase back pressure. When a pressure gauge on the pressure side of the filter reaches into the 'red line' area, the pool owner is alerted to the need to backwash the unit. The sand in the filter will typically last five to seven years before all the "rough edges" are worn off and the more tightly packed sand no longer works as intended. Recommended filtration for public/commercial pools is 1 ton of sand per 100,000 liters of water (10 ounces avoirdupois per cubic foot of water; one cubic foot is 7.48 US or 6.23 UK gallons).
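The two dosage figures quoted above are mutually consistent, as the quick unit conversion below shows (the conversion factors are standard values):

    # Cross-check of the two sand dosages quoted above:
    # 10 oz of sand per cubic foot of water vs roughly 1 ton per 100,000 litres.
    OZ_TO_G = 28.35          # grams per avoirdupois ounce
    CUBIC_FOOT_L = 28.32     # litres per cubic foot

    grams_per_litre = 10 * OZ_TO_G / CUBIC_FOOT_L
    kg_per_100000_l = grams_per_litre * 100_000 / 1000
    print(f"{grams_per_litre:.1f} g of sand per litre of water")
    print(f"{kg_per_100000_l:.0f} kg per 100,000 L - about one tonne")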
Introduced in the early 1900s was another type of sand filter – the 'Rapid Sand' filter, whereby water was pumped into the top of a large volume tank (3' 0" or more cube) (1 cubic yard/200US gal/170UK gal/770 liters) containing filter grade sand and returning to the pool through a pipe at the bottom of the tank. As there is no pressure inside this tank, they were also known as "gravity filters". These types of filters are not greatly effective, and are no longer common in home swimming pools, being replaced by the pressure-fed type filter.
Diatomaceous earth
Some filters use diatomaceous earth to help filter out contaminants. Commonly referred to as 'D.E.' filters, they exhibit superior filtration capabilities. Often a D.E. filter will trap waterborne contaminants as small as 1 micrometer in size. D.E. filters are banned in some states, as they must be emptied out periodically and the contaminated media flushed down the sewer, causing a problem in some districts' sewage systems.
As of 2020, several companies now produce regenerative media filters, sometimes called precoat media filters, which use perlite as the filtration media rather than diatomaceous earth. As of 2021, perlite can safely be flushed down the sewer and is approved and NSF listed for use in the United States.
Cartridge filters
Other filter media that have been introduced to the residential swimming pool market since 1970 include sand particles and paper-type cartridge filters, with the filter medium arranged in a tightly packed 12" diameter x 24" long (300 mm x 600 mm) accordion-like circular cartridge. These units can be 'daisy-chained' together to collectively filter almost any size of home pool. The cartridges are typically cleaned by removing them from the filter body and hosing them off down a sewer connection. They are popular where backwashed water from a sand filter is not allowed to be discharged or to go into the aquifer.
Fabric Filters
Traditional pool filters vary in the micron particle sizes that they can capture. Fabric filters can capture particles smaller than those of standard swimming pool filtration systems. This type of filter connects where the water returns to the pool after passing through a standard filter, and is usually in the form of a bag. With filtration levels as small as 1 micrometer, users can attain much cleaner water than when using a sand or cartridge filter alone. These levels are equal to or better than those of a diatomaceous earth filter.
Automated pool cleaners
Automated pool cleaners, more commonly known as "automatic pool cleaners", and in particular electric robotic pool cleaners, provide an extra measure of filtration; like handheld vacuums, they can microfilter a pool, which a sand filter without flocculation or coagulants is unable to accomplish.
These cleaners are independent from the pool's main filter and pump system and are powered by a separate electricity source, usually in the form of a step-down transformer that is kept away from the water in the pool, often on the pool deck. They have two internal motors: one to suck in water through a self-contained filter bag and then return the filtered water at a high speed back into the pool water, and one that is a drive motor connected to tractor-like rubber or synthetic tracks and "brushes" connected by rubber or plastic bands via a metal shaft. The brushes, resembling paint rollers, are located on the front and back of the machine, and help to remove contaminating particles from the pool's floor, walls, and, in some designs, even the pool steps (depending on size and configuration). They also direct the particles into the internal filter bag.
Other systems
Saline chlorination units, electronic oxidation systems, ionization systems, microbe disinfection with ultra-violet lamp systems, and "Tri-Chlor Feeders" are other independent or auxiliary systems for swimming pool sanitation.
Consecutive dilution
A consecutive dilution system is arranged to remove organic waste in stages after it passes through the skimmer. Waste matter is trapped inside one or more sequential skimmer basket sieves, each having a finer mesh to further dilute contaminant size. Dilution here is defined as the action of making something weaker in force, content, or value.
The first basket is placed closely after the skimmer mouth. The second is attached to the circulation pump. Here the 25% of water drawn from the main drain at the bottom of the swimming pool meets the 75% drawn from the surface. The circulation pump sieve basket is easily accessible for service and is to be emptied daily. The third sieve is the sand unit. Here smaller organic waste that has slipped through the previous sieves is trapped by sand.
If not removed regularly, organic waste will continue to rot down and affect water quality. The dilution process allows organic waste to be easily removed. Ultimately the sand sieve can be backwashed to remove smaller trapped organic waste which otherwise leaches ammonia and other compounds into the recirculated water. These additional solutes eventually lead to the formation of disinfection by-products (DBPs). The sieve baskets are easily removed daily for cleaning, as is the sand unit, which should be back-washed at least once a week. A perfectly maintained consecutive dilution system drastically reduces the build-up of chloramines and other DBPs. The water returned to the pool should have been cleared of all organic waste above 10 microns in size.
Mineral sanitizers
Mineral sanitizers for the swimming pool and spa use minerals, metals, or elements derived from the natural environment to produce water quality benefits that would otherwise be produced by harsh or synthetic chemicals.
Companies are not allowed to sell a mineral sanitizer in the United States unless it has been registered with the United States Environmental Protection Agency (EPA). Currently, two mineral sanitizers are registered with the EPA: one is a silver salt with a controlled release mechanism which is applied to calcium carbonate granules that help neutralize pH; the other uses a colloidal form of silver released into water from ceramic beads.
Mineral technology takes advantage of the cleansing and filtering qualities of commonly occurring substances. Silver and copper are well-known oligodynamic substances that are effective in destroying pathogens. Silver has been shown to be effective against harmful bacteria, viruses, protozoa and fungi. Copper is widely used as an algicide. Alumina, derived from aluminates, filters detrimental materials at the molecular level and can be used to control the delivery rate of desirable metals such as copper. Working through the pool or spa filtration system, mineral sanitizers use combinations of these minerals to inhibit algae growth and eliminate contaminants.
Unlike chlorine or bromine, metals and minerals do not evaporate and do not degrade. Minerals can make the water noticeably softer, and by replacing harsh chemicals in the water they lower the potential for red-eye, dry skin and foul odors.
Skimmers
Coping apertures
Water is typically drawn from the pool via a rectangular aperture in the wall, connected through to a device fitted into one (or more) walls of the pool. The internals of the skimmer are accessed from the pool deck through a circular or rectangular lid, about one foot in diameter. If the pool's water pump is operational, water is drawn from the pool over a floating hinged weir (operating from a vertical position to a 90-degree angle away from the pool, in order to stop leaves and debris being back-flooded into the pool by wave action), and down into a removable "skimmer basket", the purpose of which is to entrap leaves, dead insects and other larger floating debris.
The aperture visible from the pool side is typically 1' 0" (300 mm) wide by 6" (150 mm) high, which intersects the water midway through the center of the aperture. Skimmers with apertures wider than this are termed "wide angle" skimmers and may be as much as 2' 0" (600 mm) wide. Floating skimmers have the advantage of not being affected by the level of the water, as they adjust to the rate of pump suction and retain optimum skimming regardless of water level, leading to a markedly reduced amount of bio-material in the water. Skimmers should always have a leaf basket or filter between them and the pump to avoid blockages in the pipes leading to the pump and filter.
Prior to the mid-1970s, most skimmers were made of metal such as copper or stainless steel, in either a large round or a square shape. Skimmers built into the concrete pour were also common on concrete pools before the introduction of PVC skimmers in the late 1960s.
Pool re-circulation
Water returning from the consecutive dilution system is passed through return jets below the surface. These are designed to impart a turbulent flow as the water enters the pool. The force of this flow is far less than the mass of water in the pool, so it takes the least-pressure route upward, where eventually surface tension reforms it into a laminar flow on the surface.
As the returned water disturbs the surface, it creates a capillary wave. If the return jets are positioned correctly, this wave creates a circular motion within the surface tension of the water, allowing the water on the surface to slowly circulate around the pool walls. Through this circulation, organic waste floating on the surface is slowly drawn past the mouth of the skimmer, where it is pulled in over the skimmer weir by the laminar flow and surface tension. In a well-designed pool, circulation caused by the disturbed returned water aids in removing organic waste from the pool surface, directing it to be trapped inside the consecutive dilution system for easy disposal.
Many return jets are equipped with a swivel nozzle. Used correctly, it induces deeper circulation, further cleaning the water. Turning the jet nozzles at an angle imparts rotation within the entire depth of pool water. Orientation to the left or right would generate clockwise or anti-clockwise rotation respectively. This has the benefit of cleaning the bottom of the pool and slowly moving sunken inorganic debris to the main drain where it is removed by the circulation pump basket sieve.
In a correctly constructed pool, rotation of the water caused by the manner in which it is returned from the consecutive dilution system will reduce or even remove the need to vacuum the bottom. To gain the maximum rotation force on the main body of water, the consecutive dilution system needs to be as clean and unblocked as possible to allow maximum flow pressure from the pump. As the water rotates, it also disturbs organic waste at lower water layers, forcing it to the top. The rotational force the pool return jets create is the most important part of cleaning the pool water and pushing organic waste across the mouth of the skimmer.
With a correctly designed and operated swimming pool, this circulation is visible and after a period of time, reaches even the deep end, inducing a low-velocity vortex above the main drain due to suction. Correct use of the return jets is the most effective way of removing disinfection by-products caused by deeper decomposing organic waste and drawing it into the consecutive dilution system for immediate disposal.
Heaters
Another piece of equipment that may be optioned in the recirculation system is a pool water heater. They can be heat pumps, natural gas or propane gas heaters, electric heaters, wood-burning heaters, or Solar hot water panel heaters – increasingly used in the sustainable design of pools.
Other equipment
Diversions to electronic oxidation systems, ionization systems, microbe disinfection with ultra-violet lamp systems, and "Tri-Chlor Feeders" are other auxiliary systems for swimming pool sanitation - as well as solar panels - and are in most cases required to be placed after the filtration equipment, often the last items being placed before the water is returned to the pool.
Other features
Recreation amenities
Features that are part of the water circulation system can extend treatment capacity needs for sizing calculations and can include: artificial streams and waterfalls, in-pool fountains, integrated hot tubs and spas, water slides and sluices, artificial "pebble beaches", submerged seating as bench-ledges or as "stools" at in-pool bars, plunge pools, and shallow children's wading pools.
See also
Automated pool cleaners
Copper ion swimming pool system
Fountain
Reflecting pool
Respiratory risks of indoor swimming pools
Water purification
References
Swimming pools
Swimming pool equipment
Water filters
Water treatment
Water technology | Swimming pool sanitation | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 5,483 | [
"Water filters",
"Water treatment",
"Filters",
"Water pollution",
"Environmental engineering",
"Water technology"
] |
324,132 | https://en.wikipedia.org/wiki/Registered%20jack | A registered jack (RJ) is a standardized telecommunication network interface for connecting voice and data equipment to a computer service provided by a local exchange carrier or long distance carrier. Registered interfaces were first defined in the Universal Service Ordering Code (USOC) of the Bell System in the United States for complying with the registration program for customer-supplied telephone equipment mandated by the Federal Communications Commission (FCC) in the 1970s. Subsequently, in 1980 they were codified in title 47 of the Code of Federal Regulations Part 68. Registered jack connections began to see use after their invention in 1973 by Bell Labs.
The specification includes physical construction, wiring, and signal semantics. Accordingly, registered jacks are primarily named by the letters RJ, followed by two digits that express the type. Additional letter suffixes indicate minor variations. For example, RJ11, RJ14, and RJ25 are the most commonly used interfaces for telephone connections for one-, two-, and three-line service, respectively. Although these standards are legal definitions in the United States, some interfaces are used worldwide.
The connectors used for registered jack installations are primarily the modular connector and the 50-pin miniature ribbon connector. For example, RJ11 and RJ14 use female six-position modular connectors, and RJ21 uses a 25-pair (50-pin) miniature ribbon connector. RJ11 uses two conductors in a six-position female modular connector, so can be made with any female six-position modular connector, while RJ14 uses four, so can be made with either a 6P4C or a 6P6C connector.
Naming standard
The registered jack designations originated in the standardization process of telephone connections in the Bell System in the United States, and describe application circuits and not just the physical geometry of the connectors. The same modular connector type may be used for different registered jack applications. Modular connectors were developed to replace older telephone installation methods that used hardwired cords or bulkier varieties of telephone plugs.
Strictly, Registered Jack refers to both the female physical connector (modular connector) and specific wiring patterns, but the term is often used loosely to refer to modular connectors regardless of wiring, gender, or use, commonly for telephone line connections, but also for Ethernet over twisted pair, resulting in confusion over the various connection standards and applications. For example, the six-position physical connector, plug and jack, is identically dimensioned and inter-connectable, whether it is wired for one, two, or three lines. These are the RJ11, RJ14, and RJ25 interfaces. The RJ standards designations only pertain to the wiring of the (female) jacks, hence the name Registered Jack. It is commonplace, but not strictly correct, to refer to the unwired connectors or the (male) plugs by these names.
The nomenclature for modular connectors is based on the number of contact positions and the number of contacts present. 6P indicates a six-position modular plug or jack. A six-position modular plug with conductors in only the middle two positions is designated 6P2C; 6P4C has four conductors in the middle positions, and 6P6C has all six. An RJ11 without power, if made with a 6P6C connector, has four unused contacts.
History and authority
Registration interfaces were created by the Bell System under a Federal Communications Commission order for the standard interconnection between telephone company equipment and customer premises equipment. These interfaces used newly standardized jacks and plugs, primarily based on miniature modular connectors.
The wired communications provider (telephone company) is responsible for delivery of services to a minimum (or main) point of entry (MPOE). The MPOE is a utility box, usually containing surge protective circuitry, which connects the wiring on the customer's property to the communication provider's network. Customers are responsible for all jacks, wiring, and equipment on their side of the MPOE. The intent was to establish a universal standard for wiring and interfaces, and to separate ownership of in-home (or in-office) telephone wiring from the wiring owned by the service provider.
In the Bell System, following the Communications Act of 1934, the telephone companies owned all telecommunications equipment and they did not allow interconnection of third-party equipment. Telephones were generally hardwired, but may have been installed with Bell System connectors to permit portability. The legal case Hush-A-Phone v. United States (1956) and the Federal Communications Commission's (FCC) Carterfone (1968) decision brought changes to this policy, and required the Bell System to allow some interconnection, culminating in the development of registered interfaces using new types of miniature connectors.
Registered jacks replaced the use of protective couplers provided exclusively by the telephone company. The new modular connectors were much smaller and cheaper to produce than the earlier, bulkier connectors that were used in the Bell System since the 1930s. The Bell System issued specifications for the modular connectors and their wiring as Universal Service Order Codes (USOC), which were the only standards at the time. Large customers of telephone services commonly use the USOC to specify the interconnection type and, when necessary, pin assignments, when placing service orders with a network provider.
When the U.S. telephone industry was reformed to foster competition in the 1980s, the connection specifications became federal law, ordered by the FCC and codified in the Code of Federal Regulations (CFR), Title 47 CFR Part 68, Subpart F, superseded by T1.TR5-1999.
In January 2001, the FCC delegated responsibility for standardizing connections to the telephone network to a new private industry organization, the Administrative Council for Terminal Attachments (ACTA). For this delegation, the FCC removed Subpart F from the CFR and added Subpart G. The ACTA derives its recommendations for terminal attachments from the standards published by the engineering committees of the Telecommunications Industry Association (TIA). ACTA and TIA jointly published the standard TIA/EIA-IS-968, replacing the CFR information.
TIA-968-A, the current version of that standard, details the physical aspects of modular connectors, but not the wiring. Instead, TIA-968-A incorporates the standard T1.TR5-1999, "Network and Customer Installation Interface Connector Wiring Configuration Catalog", by reference. With the publication of TIA-968-B, the connector descriptions have been moved to TIA-1096-A. A registered jack name, such as RJ11, still identifies both the physical connectors and the wiring (pinout) for each application.
Types
The most widely implemented registered jack in telecommunications is the RJ11. This is a modular connector wired for one telephone line, using the center two contacts of six available positions. This configuration is also used for single-line telephones in many countries other than the United States. It may also use a 6P4C connector, to use an additional wire pair for powering lamps on the telephone set. RJ14 is similar to RJ11, but is wired for two lines and RJ25 has three lines. RJ61 is a similar registered jack for four lines, but uses an 8P8C connector.
The RJ45S jack is rarely used in telephone applications, and the keyed 8P8C modular plug used for RJ45S mechanically cannot be inserted into an Ethernet port, but a similar plug, the non-keyed 8P8C modular plug (never used for RJ45S), is used in Ethernet networks, and the connector is often, however improperly, referred to as RJ45 in this context.
Many of the basic names have suffixes that indicate subtypes:
C: flush-mount or surface mount
F: flex-mount
W: wall-mount
L: lamp-mount
S: single-line
M: multi-line
X: complex jack
For example, RJ11 comes in two forms: RJ11W is a jack from which a wall telephone can be hung, while RJ11C is a jack designed to have a cord plugged into it. A cord can be plugged into an RJ11W as well.
RJ11, RJ14, RJ25 wiring
All of these registered jacks are described as containing a number of potential contact positions and the actual number of contacts installed within these positions. RJ11, RJ14, and RJ25 all use the same six-position modular connector, thus are physically identical except for the different number of contacts (two, four and six respectively) allowing connections for one, two, or three telephone lines respectively.
Cords connecting to an RJ11 interface require a 6P2C connector. Nevertheless, cords sold as RJ11 often use 6P4C connectors (six position, four conductor) with four wires. Two of the six possible contact positions connect tip and ring, and the other two conductors are unused. RJ11 is commonly used to connect DSL modems to the customer line.
The conductors other than the two central tip and ring conductors are in practice variously used for a second or third telephone line, a ground for selective ringers, low-voltage power for a dial light, or for anti-tinkle circuitry to prevent pulse dialing phones from sounding the bell on other extensions.
Pinout
The pins of the 6P6C connector are numbered 1 to 6, counting left to right when holding the connector tab side down with the opening for the cable facing the viewer.
Provisioning of power
Some telephones such as the Western Electric Princess and Trimline telephone models require additional power (~6 V AC) for operation of the incandescent dial light. This power is delivered to the telephone set from a transformer by the second wire pair (pins 2 and 5) of the 6P4C connector.
RJ21
RJ21 is a registered jack standard using a micro ribbon connector with contacts for up to fifty conductors. It is used to implement connections for up to 25 lines, or circuits that require many wire pairs, such as used in the 1A2 key telephone system. The miniature ribbon connector of this interface is also known as a 50-pin telco connector, CHAMP(AMP), or Amphenol connector, the last being a genericized trademark, as Amphenol was a prominent manufacturer of these at one time.
A cable color scheme, known as even-count color code, is determined for 25 pairs of conductors as follows: For each ring, the primary, more prominent color is chosen from the set blue, orange, green, brown, and slate, in that order, and the secondary, thinner stripe color from the set of white, red, black, yellow, and violet colors, in that order. The tip conductor color scheme uses the same colors as the matching ring but switches the thickness of the primary and secondary colored stripes. Since the sets are ordered, an orange (color 2 in its set) with a yellow (color 4) is the color scheme for the 4·5 + 2 − 5 = 17th pair of wires. If the yellow is the more prominent, thicker stripe, then the wire is a tip conductor connecting to the pin numbered 25 + the pair #, which is pin 42 in this case. Ring conductors connect to the same pin number as the pair number.
A conventional enumeration of wire color pairs then begins blue (and white), orange (and white), green (and white) and brown (and white), which subsumes a color-coding convention used in cables of 4 or fewer pairs (8 wires or less) with 8P and 6P connectors.
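The pair-numbering arithmetic described above can be written out explicitly. A minimal sketch, assuming the pin convention stated earlier (ring on the pin equal to the pair number, tip on the pair number plus 25):

    # 25-pair even-count color code: derive the pair number and the RJ21 pin
    # numbers from the two stripe colors, following the rule described above.
    PRIMARY = ["blue", "orange", "green", "brown", "slate"]       # ring colors
    SECONDARY = ["white", "red", "black", "yellow", "violet"]     # tip-group colors

    def pair_number(primary, secondary):
        return SECONDARY.index(secondary) * 5 + PRIMARY.index(primary) + 1

    pair = pair_number("orange", "yellow")      # the worked example above
    print(f"pair {pair}: ring on pin {pair}, tip on pin {pair + 25}")
    # -> pair 17: ring on pin 17, tip on pin 42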
Dual 50-pin ribbon connectors are often used on punch blocks to create a breakout box for private branch exchange (PBX) and other key telephone systems.
RJ45S
The RJ45S, an obsolete standard jack once specified for modem or data interfaces, has a slot on one side to allow mating with a special variation of the 8P plug: a mechanically-keyed plug with an extra tab on one side that prevents it from mating with regular (non-keyed) 8P jacks. The visual difference from the more-common 8P female is subtle. The RJ45S keyed 8P modular connector has only pins 5 and 4 wired for tip and ring (respectively) of a single telephone line, and a "programming" resistor connected to pins 7 and 8.
RJ48
RJ48 is used for T1 and ISDN termination, local-area data channels, and subrate digital services. It uses the eight-position modular connector (8P8C).
RJ48C is commonly used for T1 circuits and uses pin numbers 1, 2, 4 and 5.
RJ48X is a variation that contains shorting blocks in the jack for troubleshooting: With no plug inserted, pins 2 and 5 (the two tip wires) are connected to each other, and likewise 1 and 4 (ring), creating a loopback so that a signal received on one pair is returned on the other. Sometimes this is referred to as a self-looping jack.
RJ48S is typically used for local-area data channels and subrate digital services and carries one line. It accepts a keyed variety of the 8P modular connector.
RJ48 connectors are fastened to shielded twisted pair (STP) cables, not the unshielded twisted-pair (UTP) commonly used in other installations.
RJ61
RJ61 is a physical interface that was often used for terminating twisted pair cables. It uses an eight-position, eight-conductor (8P8C) modular connector.
This wiring pattern is for multi-line analog telephone use only; RJ61 is unsuitable for use with high-speed data because the pins for pairs 3 and 4 are too widely spaced for high signaling frequencies. T1 lines use another wiring for the same connector, designated RJ48. Ethernet over twisted pair (10BASE-T, 100BASE-TX and 1000BASE-T) also uses different wiring for the same connector, either T568A or T568B. RJ48, T568A, and T568B are all designed to keep both wires of each pair close together.
The flat eight-conductor silver-satin cable conventionally used with four-line analog telephones and RJ61 jacks is also unsuitable for use with high-speed data. Twisted pair cabling is required for data applications. Twisted-pair patch cable typically used with common Ethernet and other data network standards is not compatible with RJ61, because RJ61 pairs 3 and 4 would each be split across two different twisted pairs in the patch cable, causing excessive cross-talk between voice lines 3 and 4, with conversations on each line literally being audible on the other.
With the advent of structured wiring systems and TIA/EIA-568 (now ANSI/TIA-568) conventions, the RJ61 wiring pattern is falling into disuse. The T568A and T568B standards are used in place of RJ61 so that a single wiring standard in a facility can be used for both voice and data.
Similar jacks and unofficial names
The following RJ-style names do not refer to official ACTA types.
The labels RJ9, RJ10, RJ22 are variously used for 4P4C and 4P2C modular connectors, most typically installed on telephone handsets and their cordage. Telephone handsets do not connect directly to the public network, and therefore have no registered jack designation.
RJ45 is often incorrectly used when referring to an 8P8C connector used for ANSI/TIA-568 T568A and T568B and Ethernet; however, the plug used for RJ45 is both mechanically and electrically incompatible with any Ethernet port: it cannot fit into an Ethernet port, and it is wired in a way that is incompatible with Ethernet. The connector commonly used for twisted-pair Ethernet is a non-keyed 8P8C connector, quite distinct from that used for RJ45S. The new ARJ45 interface, however, is a plug and jack allowing higher transmission rates, and the jack can, optionally, be backward-compatible with the common 8P8C plugs of Gigabit Ethernet and earlier standards.
RJ50 usually refers to a 10P10C interface, often used for data applications.
The micro ribbon connector, first made by Amphenol, that is used in the RJ21 interface, has also been used to connect Ethernet ports in bulk from a switch with 50-pin ports to a Cat-5 rated patch panel, or between two patch panels. A cable with a 50-pin connector on one end can support six fully wired 8P8C connectors or Ethernet ports on a patch panel with one spare pair. Alternatively, only the necessary pairs for 10/100 Ethernet can be wired allowing twelve Ethernet ports with a single spare pair.
This connector is also used with spring bail locks for SCSI-1 connections. Some computer printers use a shorter 36-pin version known as a Centronics connector.
The 8P8C modular jack was chosen as a candidate for ISDN systems. In order to be considered, the connector system had to be defined by an international standard, leading to the creation of the ISO 8877 standard. Under the rules of the IEEE 802 standards project, international standards are to be preferred over national standards, so when the original 10BASE-T twisted-pair wiring version of Ethernet was developed, this modular connector was chosen as the basis for IEEE 802.3i-1990.
See also
Audio and video interfaces and connectors – generic article
BS 6312 – British equivalent to RJ25
EtherCON – ruggedized 8P8C Ethernet connector
Key telephone system
Modified Modular Jack – a variation used by Digital Equipment Corporation for serial computer connections, and also for CEA-909 antennas.
Protea (telephone) – South African telephone jack standard
Telecommunications Industry Association – Standards Developing Organization for ACTA
References
External links
RJ glossary
ANSI/TIA-968-B documents of FCC specifications from the Administrative Council for Terminal Attachments, section 6.2 in particular
ANSI/TIA-1096-A
Administrative Council for Terminal Attachments
Doing your own telephone wiring
Connecting a second phone line
Telephone connectors
Computer connectors
Networking hardware | Registered jack | [
"Engineering"
] | 3,825 | [
"Computer networks engineering",
"Networking hardware"
] |
324,749 | https://en.wikipedia.org/wiki/Sine%20wave | A sine wave, sinusoidal wave, or sinusoid (symbol: ∿) is a periodic wave whose waveform (shape) is the trigonometric sine function. In mechanics, as a linear motion over time, this is simple harmonic motion; as rotation, it corresponds to uniform circular motion. Sine waves occur often in physics, including wind waves, sound waves, and light waves, such as monochromatic radiation. In engineering, signal processing, and mathematics, Fourier analysis decomposes general functions into a sum of sine waves of various frequencies, relative phases, and magnitudes.
When any two sine waves of the same frequency (but arbitrary phase) are linearly combined, the result is another sine wave of the same frequency; this property is unique among periodic waves. Conversely, if some phase is chosen as a zero reference, a sine wave of arbitrary phase can be written as the linear combination of two sine waves with phases of zero and a quarter cycle, the sine and cosine components, respectively.
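This closure property can be checked numerically: adding two sinusoids of the same frequency is equivalent to adding their complex phasors, and the result is a single sinusoid of that frequency. A minimal sketch with arbitrarily chosen amplitudes and phases:

    import cmath, math

    # Two sinusoids of the same frequency, with arbitrary amplitudes and phases
    # chosen only for illustration.
    f = 3.0
    a1, phi1 = 1.0, 0.4
    a2, phi2 = 0.7, 2.1

    # Adding the complex phasors gives the amplitude and phase of the sum.
    phasor = a1 * cmath.exp(1j * phi1) + a2 * cmath.exp(1j * phi2)
    a, phi = abs(phasor), cmath.phase(phasor)

    # Numerical check: at any instant, the sum equals a single sinusoid.
    for t in (0.0, 0.123, 0.5):
        s = a1 * math.sin(2*math.pi*f*t + phi1) + a2 * math.sin(2*math.pi*f*t + phi2)
        print(abs(s - a * math.sin(2*math.pi*f*t + phi)) < 1e-12)   # True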
Audio example
A sine wave represents a single frequency with no harmonics and is considered an acoustically pure tone. Adding sine waves of different frequencies results in a different waveform. Presence of higher harmonics in addition to the fundamental causes variation in the timbre, which is the reason why the same musical pitch played on different instruments sounds different.
Sinusoid form
Sine waves of arbitrary phase and amplitude are called sinusoids and have the general form:
y(t) = A sin(ωt + φ) = A sin(2πft + φ)
where:
A, amplitude, the peak deviation of the function from zero.
t, the real independent variable, usually representing time in seconds.
ω = 2πf, angular frequency, the rate of change of the function argument in units of radians per second.
f, ordinary frequency, the number of oscillations (cycles) that occur each second of time.
φ, phase, specifies (in radians) where in its cycle the oscillation is at t = 0.
When φ is non-zero, the entire waveform appears to be shifted backwards in time by the amount φ/ω seconds. A negative value represents a delay, and a positive value represents an advance.
Adding or subtracting 2π radians (one cycle) to the phase results in an equivalent wave.
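A minimal sketch of evaluating the general form above on a sampled time axis, with arbitrary illustrative values for amplitude, frequency and phase:

    import math

    # Evaluate y(t) = A*sin(2*pi*f*t + phi) on a sampled time axis.
    # Amplitude, frequency and phase are arbitrary illustrative values.
    A, f, phi = 2.0, 5.0, math.pi / 4
    sample_rate = 100                      # samples per second

    samples = [A * math.sin(2 * math.pi * f * n / sample_rate + phi)
               for n in range(20)]
    print([round(s, 3) for s in samples[:5]])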
As a function of both position and time
Sinusoids that exist in both position and time also have:
a spatial variable x that represents the position on the dimension on which the wave propagates.
a wave number (or angular wave number) k, which represents the proportionality between the angular frequency ω and the linear speed (speed of propagation) v: k = ω/v.
The wavenumber is related to the angular frequency by k = ω/v = 2πf/v = 2π/λ, where λ (lambda) is the wavelength.
Depending on their direction of travel, they can take the form:
y(x, t) = A sin(kx − ωt + φ), if the wave is moving to the right, or
y(x, t) = A sin(kx + ωt + φ), if the wave is moving to the left.
Since sine waves propagate without changing form in distributed linear systems, they are often used to analyze wave propagation.
Standing waves
When two waves with the same amplitude and frequency traveling in opposite directions superpose each other, then a standing wave pattern is created.
On a plucked string, the superimposing waves are the waves reflected from the fixed endpoints of the string. The string's resonant frequencies are the string's only possible standing waves, which only occur for wavelengths that are twice the string's length (corresponding to the fundamental frequency) and integer divisions of that (corresponding to higher harmonics).
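For a string of length L fixed at both ends, the allowed wavelengths are 2L/n, so the resonant frequencies are f_n = n·v/(2L), where v is the wave speed on the string. A quick sketch with an assumed string length and wave speed:

    # Resonant (standing-wave) frequencies of a string fixed at both ends:
    # wavelength_n = 2*L/n, so f_n = n*v/(2*L).  The string length and wave
    # speed below are illustrative assumptions.
    length_m = 0.65
    wave_speed_ms = 143.0

    for n in range(1, 5):
        wavelength = 2 * length_m / n
        frequency = wave_speed_ms / wavelength
        print(f"mode {n}: wavelength {wavelength:.3f} m, frequency {frequency:.1f} Hz")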
Multiple spatial dimensions
The earlier equation gives the displacement y(x, t) of the wave at a position x at time t along a single line. This could, for example, be considered the value of a wave along a wire.
In two or three spatial dimensions, the same equation describes a travelling plane wave if position and wavenumber are interpreted as vectors, and their product as a dot product. For more complex waves such as the height of a water wave in a pond after a stone has been dropped in, more complex equations are needed.
Sinusoidal plane wave
Fourier analysis
French mathematician Joseph Fourier discovered that sinusoidal waves can be summed as simple building blocks to approximate any periodic waveform, including square waves. These Fourier series are frequently used in signal processing and the statistical analysis of time series. The Fourier transform then extended Fourier series to handle general functions, and birthed the field of Fourier analysis.
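A small numerical illustration of the idea: summing the odd sine harmonics with amplitudes proportional to 1/n approximates a square wave (the standard Fourier series of a square wave; the number of terms is arbitrary):

    import math

    # Partial Fourier series of a unit square wave:
    # sq(t) ~ (4/pi) * sum over odd n of sin(n*w*t)/n
    def square_wave_partial(t, terms=25, w=2 * math.pi):
        return (4 / math.pi) * sum(math.sin(n * w * t) / n
                                   for n in range(1, 2 * terms, 2))

    for t in (0.1, 0.25, 0.6):
        print(f"t = {t}: {square_wave_partial(t):+.3f}")   # close to +1 or -1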
Differentiation and integration
Differentiation
Differentiating any sinusoid with respect to time can be viewed as multiplying its amplitude by its angular frequency and advancing it by a quarter cycle:
d/dt [A sin(ωt + φ)] = Aω cos(ωt + φ) = Aω sin(ωt + φ + π/2).
A differentiator has a zero at the origin of the complex frequency plane. The gain of its frequency response increases at a rate of +20 dB per decade of frequency (for root-power quantities), the same positive slope as a 1st-order high-pass filter's stopband, although a differentiator doesn't have a cutoff frequency or a flat passband. An nth-order high-pass filter approximately applies the nth time derivative of signals whose frequency band is significantly lower than the filter's cutoff frequency.
Integration
Integrating any sinusoid with respect to time can be viewed as dividing its amplitude by its angular frequency and delaying it a quarter cycle:
∫ A sin(ωt + φ) dt = −(A/ω) cos(ωt + φ) + C = (A/ω) sin(ωt + φ − π/2) + C.
The constant of integration C will be zero if the interval of integration is an integer multiple of the sinusoid's period.
An integrator has a pole at the origin of the complex frequency plane. The gain of its frequency response falls off at a rate of -20 dB per decade of frequency (for root-power quantities), the same negative slope as a 1st-order low-pass filter's stopband, although an integrator doesn't have a cutoff frequency or a flat passband. An nth-order low-pass filter approximately performs the nth time integral of signals whose frequency band is significantly higher than the filter's cutoff frequency.
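The amplitude scaling and quarter-cycle shift described above can be verified numerically with a central finite difference; the parameters below are arbitrary illustrative values:

    import math

    # Numerical check of the differentiation rule above, using a central
    # finite difference.  Parameters are arbitrary illustrative values.
    A, w, phi = 1.5, 2.0, 0.3
    f = lambda t: A * math.sin(w * t + phi)

    t, h = 0.7, 1e-6
    derivative = (f(t + h) - f(t - h)) / (2 * h)
    expected = A * w * math.sin(w * t + phi + math.pi / 2)   # amplitude * w, advanced 1/4 cycle
    print(abs(derivative - expected) < 1e-6)                 # True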
See also
Crest (physics)
Complex exponential
Damped sine wave
Euler's formula
Fourier transform
Harmonic analysis
Harmonic series (mathematics)
Harmonic series (music)
Helmholtz equation
Instantaneous phase
In-phase and quadrature components
Least-squares spectral analysis
Oscilloscope
Phasor
Pure tone
Simple harmonic motion
Sinusoidal model
Wave (physics)
Wave equation
∿ the sine wave symbol (U+223F)
References
External links
Trigonometry
Wave mechanics
Waves
Waveforms
Sound
Acoustics | Sine wave | [
"Physics"
] | 1,288 | [
"Physical phenomena",
"Classical mechanics",
"Acoustics",
"Waves",
"Wave mechanics",
"Motion (physics)",
"Waveforms"
] |
325,020 | https://en.wikipedia.org/wiki/Antarctic%20Circumpolar%20Wave | The Antarctic Circumpolar Wave (ACW) is a coupled ocean/atmosphere wave that circles the Southern Ocean in approximately eight years. Since it is a wave-2 phenomenon (there are two ridges and two troughs in a latitude circle), at each fixed point in space a signal with a period of four years is seen. The wave moves eastward with the prevailing currents.
History of the concept
Although the "wave" is seen in temperature, atmospheric pressure, sea ice and ocean height, the variations are hard to see in the raw data and need to be filtered to become apparent. Because the reliable record for the Southern Ocean is short (since the early 1980s) and signal processing is needed to reveal its existence, some climatologists doubt the existence of the wave. Others accept its existence but say that it varies in strength over decades.
The wave was discovered simultaneously by two independent research groups. Since then, ideas about the wave structure and maintenance mechanisms have changed and grown: by some accounts it is now to be considered as part of a global ENSO wave.
See also
Antarctic Circle
Antarctic Convergence
References
Notes
Sources
External links
Antarctic Circumpolar Wave Description
The Antarctic Circumpolar Wave: A Beta Effect in Ocean–Atmosphere Coupling over the Southern Ocean
Environment of Antarctica
Physical oceanography
Geography of the Southern Ocean | Antarctic Circumpolar Wave | [
"Physics"
] | 264 | [
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
325,060 | https://en.wikipedia.org/wiki/Tidal%20power | Tidal power or tidal energy is harnessed by converting energy from tides into useful forms of power, mainly electricity using various methods.
Although not yet widely used, tidal energy has the potential for future electricity generation. Tides are more predictable than the wind and the sun. Among sources of renewable energy, tidal energy has traditionally suffered from relatively high cost and limited availability of sites with sufficiently high tidal ranges or flow velocities, thus constricting its total availability. However many recent technological developments and improvements, both in design (e.g. dynamic tidal power, tidal lagoons) and turbine technology (e.g. new axial turbines, cross flow turbines), indicate that the total availability of tidal power may be much higher than previously assumed and that economic and environmental costs may be brought down to competitive levels.
Historically, tide mills have been used both in Europe and on the Atlantic coast of North America. Incoming water was contained in large storage ponds, and as the tide goes out, it turns waterwheels that use the mechanical power to mill grain. The earliest occurrences date from the Middle Ages, or even from Roman times. The process of using falling water and spinning turbines to create electricity was introduced in the U.S. and Europe in the 19th century.
Electricity generation from marine technologies increased an estimated 16% in 2018, and an estimated 13% in 2019. Policies promoting R&D are needed to achieve further cost reductions and large-scale development. The world's first large-scale tidal power plant was France's Rance Tidal Power Station, which became operational in 1966. It was the largest tidal power station in terms of output until Sihwa Lake Tidal Power Station opened in South Korea in August 2011. The Sihwa station uses sea wall defense barriers complete with 10 turbines generating 254 MW.
Principle
Tidal energy is taken from the Earth's oceanic tides. Tidal forces result from periodic variations in gravitational attraction exerted by celestial bodies. These forces create corresponding motions or currents in the world's oceans. This results in periodic changes in sea levels, varying as the Earth rotates. These changes are highly regular and predictable, due to the consistent pattern of the Earth's rotation and the Moon's orbit around the Earth. The magnitude and variations of this motion reflect the changing positions of the Moon and Sun relative to the Earth, the effects of Earth's rotation, and local geography of the seafloor and coastlines.
Tidal power is the only technology that draws on energy inherent in the orbital characteristics of the Earth–Moon system, and to a lesser extent in the Earth–Sun system. Other natural energies exploited by human technology originate directly or indirectly from the Sun, including fossil fuel, conventional hydroelectric, wind, biofuel, wave and solar energy. Nuclear energy makes use of Earth's mineral deposits of fissionable elements, while geothermal power utilizes the Earth's internal heat, which comes from a combination of residual heat from planetary accretion (about 20%) and heat produced through radioactive decay (80%).
A tidal generator converts the energy of tidal flows into electricity. Greater tidal variation and higher tidal current velocities can dramatically increase the potential of a site for tidal electricity generation. On the other hand, tidal energy has high reliability, excellent energy density, and high durability.
Because the Earth's tides are ultimately due to gravitational interaction with the Moon and Sun and the Earth's rotation, tidal power is practically inexhaustible, and is thus classified as a renewable energy resource. Movement of tides causes a loss of mechanical energy in the Earth-Moon system: this results from pumping of water through natural restrictions around coastlines and consequent viscous dissipation at the seabed and in turbulence. This loss of energy has caused the rotation of the Earth to slow in the 4.5 billion years since its formation. During the last 620 million years the period of rotation of the Earth (length of a day) has increased from 21.9 hours to 24 hours; in this period the Earth-Moon system has lost 17% of its rotational energy. While tidal power will take additional energy from the system, the effect is negligible and would not be noticeable in the foreseeable future.
Methods
Tidal power can be classified into four generating methods:
Tidal stream generator
Tidal stream generators make use of the kinetic energy of moving water to power turbines, in a similar way to wind turbines that use the wind to power turbines. Some tidal generators can be built into the structures of existing bridges or are entirely submersed, thus avoiding concerns over aesthetics or visual impact. Land constrictions such as straits or inlets can create high velocities at specific sites, which can be captured using turbines. These turbines can be horizontal, vertical, open, or ducted.
Tidal barrage
Tidal barrages use potential energy in the difference in height (or hydraulic head) between high and low tides. When using tidal barrages to generate power, the potential energy from a tide is seized through the strategic placement of specialized dams. When the sea level rises and the tide begins to come in, the temporary increase in tidal power is channeled into a large basin behind the dam, holding a large amount of potential energy. With the receding tide, this energy is then converted into mechanical energy as the water is released through large turbines that create electrical power through the use of generators. Barrages are essentially dams across the full width of a tidal estuary.
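The potential energy available per tidal cycle from a barrage basin is often approximated as E = ½·ρ·g·A·h², where A is the basin area and h the tidal range. The sketch below uses this standard approximation with assumed basin figures, not data from any specific plant:

    # Rough potential energy per tide from a barrage basin, using the common
    # approximation E = 0.5 * rho * g * A * h^2.  The basin area and tidal
    # range below are illustrative assumptions.
    rho = 1025            # seawater density, kg/m^3
    g = 9.81              # m/s^2
    basin_area_m2 = 20e6  # 20 km^2
    tidal_range_m = 8.0

    energy_j = 0.5 * rho * g * basin_area_m2 * tidal_range_m ** 2
    print(f"about {energy_j / 3.6e9:.0f} MWh of potential energy per tide")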
Tidal lagoon
A new tidal energy design option is to construct circular retaining walls embedded with turbines that can capture the potential energy of tides. The created reservoirs are similar to those of tidal barrages, except that the location is artificial and does not contain a pre-existing ecosystem.
The lagoons can also be in double (or triple) format without pumping or with pumping that will flatten out the power output. The pumping power could be provided by excess to grid demand renewable energy from for example wind turbines or solar photovoltaic arrays. Excess renewable energy rather than being curtailed could be used and stored for a later period of time. Geographically dispersed tidal lagoons with a time delay between peak production would also flatten out peak production providing near baseload production at a higher cost than other alternatives such as district heating renewable energy storage. The cancelled Tidal Lagoon Swansea Bay in Wales, United Kingdom would have been the first tidal power station of this type once built.
Dynamic tidal power
Dynamic tidal power (or DTP) is a theoretical technology that would exploit an interaction between potential and kinetic energies in tidal flows. It proposes that very long dams (for example: 30–50 km length) be built from coasts straight out into the sea or ocean, without enclosing an area. Tidal phase differences are introduced across the dam, leading to a significant water-level differential in shallow coastal seas – featuring strong coast-parallel oscillating tidal currents such as found in the UK, China, and Korea.
US and Canadian studies in the 20th century
The first study of large scale tidal power plants was by the US Federal Power Commission in 1924. If built, power plants would have been located in the northern border area of the US state of Maine and the southeastern border area of the Canadian province of New Brunswick, with various dams, powerhouses, and ship locks enclosing the Bay of Fundy and Passamaquoddy Bay (note: see map in reference). Nothing came of the study, and it is unknown whether Canada had been approached about the study by the US Federal Power Commission.
In 1956, utility Nova Scotia Light and Power of Halifax commissioned a pair of studies into commercial tidal power development feasibility on the Nova Scotia side of the Bay of Fundy. The two studies, by Stone & Webster of Boston and by Montreal Engineering Company of Montreal, independently concluded that millions of horsepower (i.e. gigawatts) could be harnessed from Fundy but that development costs would be commercially prohibitive.
There was also a report by an international commission in April 1961, entitled "Investigation of the International Passamaquoddy Tidal Power Project", produced by both the US and Canadian federal governments. According to benefit-to-cost ratios, the project was beneficial to the US but not to Canada.
A study was commissioned by the Canadian, Nova Scotian, and New Brunswick governments (Reassessment of Fundy Tidal Power) to determine the potential for tidal barrages at Chignecto Bay and Minas Basin – at the end of the Fundy Bay estuary. Three sites were determined to be financially feasible: Shepody Bay (1550 MW), Cumberland Basin (1085 MW), and Cobequid Bay (3800 MW). These were never built despite their apparent feasibility in 1977.
US studies in the 21st century
The Snohomish PUD, a public utility district located primarily in Snohomish County, Washington State, began a tidal energy project in 2007. In April 2009 the PUD selected OpenHydro, a company based in Ireland, to develop turbines and equipment for eventual installation. The project as initially designed was to place generation equipment in areas of high tidal flow and operate that equipment for four to five years, after which the equipment would be removed. The project was initially budgeted at a total cost of $10 million, with half of that funding provided by the PUD out of utility reserve funds and half from grants, primarily from the US federal government: the PUD received a $900,000 grant in 2009 and a $3.5 million grant in 2010, in addition to paying an estimated $4 million of costs from reserves. In 2010 the budget estimate was increased to $20 million, half to be paid by the utility and half by the federal government. The utility was unable to control costs on this project, and by October 2014 the costs had ballooned to an estimated $38 million and were projected to continue to increase. The PUD proposed that the federal government provide an additional $10 million towards this increased cost, citing a gentlemen's agreement. When the federal government refused to pay this, the PUD cancelled the project after spending nearly $10 million from reserves and grants. The PUD abandoned all tidal energy exploration after this project was cancelled and does not own or operate any tidal energy sources.
Rance tidal power plant in France
In 1966, Électricité de France opened the Rance Tidal Power Station, located on the estuary of the Rance River in Brittany. It was the world's first tidal power station. The plant was for 45 years the largest tidal power station in the world by installed capacity: Its 24 turbines reach peak output at 240 megawatts (MW) and average 57 MW, a capacity factor of approximately 24%.
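The quoted capacity factor can be checked directly from these two figures, since it is just the average output divided by the installed (peak) capacity:

# Capacity factor = average output / installed capacity,
# using the Rance figures quoted above (240 MW peak, 57 MW average).
installed_mw = 240.0
average_mw = 57.0
print(f"Capacity factor: {average_mw / installed_mw:.1%}")   # about 23.8%, i.e. roughly 24%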
Tidal power development in the UK
The world's first marine energy test facility was established in 2003 to start the development of the wave and tidal energy industry in the UK. Based in Orkney, Scotland, the European Marine Energy Centre (EMEC) has supported the deployment of more wave and tidal energy devices than any other single site in the world. EMEC provides a variety of test sites in real sea conditions. Its grid-connected tidal test site is located at the Fall of Warness, off the island of Eday, in a narrow channel which concentrates the tide as it flows between the Atlantic Ocean and the North Sea. This area has a very strong tidal current, which is at its fastest during spring tides. Tidal energy developers that have tested at the site include Alstom (formerly Tidal Generation Ltd), ANDRITZ HYDRO Hammerfest, Atlantis Resources Corporation, Nautricity, OpenHydro, Scotrenewables Tidal Power, and Voith. The resource could be 4 TJ per year. Elsewhere in the UK, annual energy of 50 TWh could be extracted if 25 GW of capacity were installed with pivotable blades.
Current and future tidal power schemes
The Rance tidal power plant built over a period of six years from 1960 to 1966 at La Rance, France. It has 240 MW installed capacity.
254 MW Sihwa Lake Tidal Power Plant in South Korea is the largest tidal power installation in the world. Construction was completed in 2011.
The Jiangxia Tidal Power Station, south of Hangzhou in China has been operational since 1985, with current installed capacity of 3.2 MW. More tidal power is planned near the mouth of the Yalu River.
The first in-stream tidal current generator in North America (Race Rocks Tidal Power Demonstration Project) was installed at Race Rocks on southern Vancouver Island in September 2006. The Race Rocks project was shut down after operating for five years (2006–2011) because high operating costs produced electricity at a rate that was not economically feasible. The next phase in the development of this tidal current generator will be in Nova Scotia (Bay of Fundy).
A small project was built by the Soviet Union at Kislaya Guba on the Barents Sea. It has 0.4 MW installed capacity. In 2006 it was upgraded with a 1.2 MW experimental advanced orthogonal turbine.
Jindo Uldolmok Tidal Power Plant in South Korea is a tidal stream generation scheme planned to be expanded progressively to 90 MW of capacity by 2013. The first 1 MW was installed in May 2009.
A 1.2 MW SeaGen system became operational in late 2008 on Strangford Lough in Northern Ireland. It was decommissioned and removed in 2016.
The contract for an 812 MW tidal barrage near Ganghwa Island (South Korea) north-west of Incheon has been signed by Daewoo. Completion was planned for 2015 but project was retracted in 2013.
A 1,320 MW barrage was proposed by the South Korean government in 2009, to be built around islands west of Incheon. The project halted since 2012 due to environmental concerns.
The Scottish Government has approved plans for a 10 MW Òran na Mara array of tidal stream generators near Islay, Scotland, costing 40 million pounds and consisting of 10 turbines – enough to power over 5,000 homes. The first turbine was expected to be in operation by 2013, and the project was announced again in 2021, but as of 2023 no turbines existed.
The Indian state of Gujarat was planning to host South Asia's first commercial-scale tidal power station. The company Atlantis Resources planned to install a 50 MW tidal farm in the Gulf of Kutch on India's west coast, with construction planned to start in 2012; the project was later withdrawn due to high costs.
Ocean Renewable Power Corporation was the first company to deliver tidal power to the US grid in September 2012 when its pilot TidGen system was successfully deployed in Cobscook Bay, near Eastport.
In New York City, Verdant Power successfully deployed and operated three tidal turbines in the East River near Roosevelt Island, on a single triangular base system called a TriFrame. The Roosevelt Island Tidal Energy (RITE) Project generated over 300 MWh of electricity to the local grid, an American marine energy record. The system's performance was independently confirmed by Scotland's European Marine Energy Centre (EMEC) under the new International Electrotechnical Commission (IEC) international standards. This is the first instance of a third-party verification of a tidal energy converter to an international standard.
The largest tidal energy project, MeyGen (398 MW), is currently under construction in the Pentland Firth in northern Scotland, with 6 MW operational since 2018.
Construction of a 320 MW tidal lagoon power plant outside the city of Swansea in the UK was granted planning permission in June 2015, however it was later rejected by the UK government in 2018. If built it would have been the world's first tidal power plant based on a constructed lagoon.
Mersey Tidal Power, a proposed tidal range barrage within the channel of the Mersey Estuary with a capacity of up to 1 GW, is undergoing local consultation by the Liverpool City Region Combined Authority.
Up to 240 MW of tidal stream generation is proposed at Morlais, Anglesey, from multiple developers, with the first turbines expected to be installed in 2026. A total of 38 MW of capacity has been awarded Contracts for Difference to supply power to the GB grid.
Issues and challenges
Environmental concerns
Tidal power can affect marine life. The turbines' rotating blades can accidentally kill swimming sea life. Projects such as the one in Strangford include a safety mechanism that turns off the turbine when marine animals approach. However, this feature causes a major loss in energy because of the amount of marine life that passes through the turbines. Some fish may avoid the area if threatened by a constantly rotating or noisy object. Marine life is a huge factor when siting tidal power generators, and precautions are taken to ensure that as few marine animals as possible are affected. In terms of global warming potential (i.e. carbon footprint), the impact of tidal power generation technologies ranges between 15 and 37 gCO2-eq/kWhe, with a median value of 23.8 gCO2-eq/kWhe. This is in line with the impact of other renewables such as wind and solar power, and significantly better than fossil-based technologies. The Tethys database provides access to scientific literature and general information on the potential environmental effects of tidal energy.
Tidal turbines
The main environmental concern with tidal energy is associated with blade strike and entanglement of marine organisms as high-speed water increases the risk of organisms being pushed near or through these devices. As with all offshore renewable energies, there is also a concern about how the creation of electromagnetic fields and acoustic outputs may affect marine organisms. Because these devices are in the water, the acoustic output can be greater than those created with offshore wind energy. Depending on the frequency and amplitude of sound generated by the tidal energy devices, this acoustic output can have varying effects on marine mammals (particularly those who echolocate to communicate and navigate in the marine environment, such as dolphins and whales). Tidal energy removal can also cause environmental concerns such as degrading far-field water quality and disrupting sediment processes. Depending on the size of the project, these effects can range from small traces of sediment building up near the tidal device to severely affecting nearshore ecosystems and processes.
Tidal barrage
Installing a barrage may change the shoreline within the bay or estuary, affecting a large ecosystem that depends on tidal flats. Inhibiting the flow of water in and out of the bay, there may also be less flushing of the bay or estuary, causing additional turbidity (suspended solids) and less saltwater, which may result in the death of fish that act as a vital food source to birds and mammals. Migrating fish may also be unable to access breeding streams, and may attempt to pass through the turbines. The same acoustic concerns apply to tidal barrages. Decreasing shipping accessibility can become a socio-economic issue, though locks can be added to allow slow passage. However, the barrage may improve the local economy by increasing land access as a bridge. Calmer waters may also allow better recreation in the bay or estuary. In August 2004, a humpback whale swam through the open sluice gate of the Annapolis Royal Generating Station at slack tide, ending up trapped for several days before eventually finding its way out to the Annapolis Basin.
Tidal lagoon
Environmentally, the main concerns are blade strike on fish attempting to enter the lagoon, the acoustic output from turbines, and changes in sedimentation processes. However, all these effects are localized and do not affect the entire estuary or bay.
Corrosion
Saltwater causes corrosion in metal parts. It can be difficult to maintain tidal stream generators due to their size and depth in the water. The use of corrosion-resistant materials such as stainless steels, high-nickel alloys, copper-nickel alloys, nickel-copper alloys and titanium can greatly reduce, or eliminate, corrosion damage. Composite materials, which do not corrode and could provide lightweight, durable structures, are also being evaluated for tidal power.
Mechanical fluids, such as lubricants, can leak out, which may be harmful to the marine life nearby. Proper maintenance can minimize the number of harmful chemicals that may enter the environment.
Fouling
Placing any structure in an area of high tidal currents and high biological productivity in the ocean ensures that the structure becomes an ideal substrate for the growth of marine organisms.
Cost
Tidal energy has a high initial cost, which may be one of the reasons why it is not a popular source of renewable energy, although research has shown that the public is willing to pay for and support research and development of tidal energy devices. The methods of generating electricity from tidal energy are relatively new technologies; tidal energy is still very early in the research process, and it may be possible to reduce costs in the future. The cost-effectiveness varies according to the site of the tidal generators. One indication of cost-effectiveness is the Gibrat ratio, which is the length of the barrage in metres divided by the annual energy production in kilowatt hours.
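As a minimal sketch of how the Gibrat ratio can be used to compare sites, the snippet below evaluates two hypothetical cases; the lengths and annual outputs are invented purely for illustration, and a lower ratio indicates a more cost-effective site.

def gibrat_ratio(barrage_length_m, annual_energy_kwh):
    # Gibrat ratio: barrage length (m) divided by annual energy production (kWh).
    # Lower values mean more energy per metre of barrage, i.e. better cost-effectiveness.
    return barrage_length_m / annual_energy_kwh

# Hypothetical comparison of two candidate sites.
site_a = gibrat_ratio(barrage_length_m=750, annual_energy_kwh=500e6)
site_b = gibrat_ratio(barrage_length_m=8000, annual_energy_kwh=1200e6)
print(f"Site A: {site_a:.2e}  Site B: {site_b:.2e}")
print("More cost-effective site:", "A" if site_a < site_b else "B")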
As tidal energy is reliable, it can reasonably be predicted how long it will take to pay off the high up-front cost of these generators. Due to the success of a greatly simplified design, the orthogonal turbine offers considerable cost savings. As a result, the production period of each generating unit is reduced, lower metal consumption is needed and technical efficiency is greater.
A possible risk is rising sea levels due to climate change, which may alter the characteristics of the local tides reducing future power generation.
Structural health monitoring
The high load factors resulting from the fact that water is around 800 times denser than air, and the predictable and reliable nature of tides compared with the wind, make tidal energy particularly attractive for electric power generation. Condition monitoring is the key for exploiting it cost-efficiently.
See also
Hydroelectricity
Hydropower
List of tidal power stations
Run-of-the-river hydroelectricity
Structural health monitoring
Tidal barrage
Tidal farm
Tidal power in Canada
Tidal power in New Zealand
Tidal power in Scotland
Tidal stream generator
Marine energy
Marine current power
Wave power
Ocean thermal energy conversion
Osmotic power
World energy consumption
References
Further reading
Baker, A. C. 1991, Tidal power, Peter Peregrinus Ltd., London.
Baker, G. C., Wilson E. M., Miller, H., Gibson, R. A. & Ball, M., 1980. "The Annapolis tidal power pilot project", in Waterpower '79 Proceedings, ed. Anon, U.S. Government Printing Office, Washington, pp 550–559.
Hammons, T. J. 1993, "Tidal power", Proceedings of the IEEE, [Online], v81, n3, pp 419–433. Available from: IEEE/IEEE Xplore. [July 26, 2004].
Lecomber, R. 1979, "The evaluation of tidal power projects", in Tidal Power and Estuary Management, eds. Severn, R. T., Dineley, D. L. & Hawker, L. E., Henry Ling Ltd., Dorchester, pp 31–39.
Jubilo, A., 2019, "Renewable Tidal Energy Potential: Basis for Technology Development in Eastern Mindanao", 80th PIChE National Convention; Crowne Plaza Galleria, Ortigas Center, Quezon City, Philippines.
Could the UK's tides help wean us off fossil fuels?. BBC News. Published 22 October 2023.
Enhancing Electrical Supply by Pumped Storage in Tidal Lagoons. David J.C. MacKay, Cavendish Laboratory, University of Cambridge, UK. Published 3 May 2007.
Turning the Tide: Tidal Power in the UK – Report by Sustainable Development Commission. Published October 2007.
2007 – Report by Global Energy Survey. Published 2007.
External links
Portal and Repository for Information on Marine Renewable Energy A network of databases providing broad access to marine energy information.
Marine Energy Basics: Current Energy Basic information about current energy.
Marine Energy Projects Database A database that provides up-to-date information on marine energy deployments in the U.S. and around the world.
Tethys Database A database of information on potential environmental effects of marine energy and offshore wind energy development.
Tethys Engineering Database A database of information on technical design and engineering of marine energy devices.
Marine and Hydrokinetic Data Repository A database for all data collected by marine energy research and development projects funded by the U.S. Department of Energy.
Severn Estuary Partnership: Tidal Power Resource Page
University of Strathclyde ESRU—Detailed analysis of marine energy resource, current energy capture technology appraisal and environmental impact outline
Coastal Research – Foreland Point Tidal Turbine and warnings on proposed Severn Barrage
European Marine Energy Centre – Listing of Tidal Energy Developers -retrieved 1 July 2011 (link updated 31 January 2014)
Resources on Tidal Energy
Structural Health Monitoring of composite tidal energy converters
Tidal Power: A New Source of Energy (1959)
Tidal projects funded by the Australian Renewable Energy Agency
Bright green environmentalism
Coastal construction
Tides
Renewable energy | Tidal power | [
"Engineering"
] | 5,084 | [
"Construction",
"Coastal construction"
] |
325,077 | https://en.wikipedia.org/wiki/Domain%20theory | Domain theory is a branch of mathematics that studies special kinds of partially ordered sets (posets) commonly called domains. Consequently, domain theory can be considered as a branch of order theory. The field has major applications in computer science, where it is used to specify denotational semantics, especially for functional programming languages. Domain theory formalizes the intuitive ideas of approximation and convergence in a very general way and is closely related to topology.
Motivation and intuition
The primary motivation for the study of domains, which was initiated by Dana Scott in the late 1960s, was the search for a denotational semantics of the lambda calculus. In this formalism, one considers "functions" specified by certain terms in the language. In a purely syntactic way, one can go from simple functions to functions that take other functions as their input arguments. Using again just the syntactic transformations available in this formalism, one can obtain so-called fixed-point combinators (the best-known of which is the Y combinator); these, by definition, have the property that f(Y(f)) = Y(f) for all functions f.
To formulate such a denotational semantics, one might first try to construct a model for the lambda calculus, in which a genuine (total) function is associated with each lambda term. Such a model would formalize a link between the lambda calculus as a purely syntactic system and the lambda calculus as a notational system for manipulating concrete mathematical functions. The combinator calculus is such a model. However, the elements of the combinator calculus are functions from functions to functions; in order for the elements of a model of the lambda calculus to be of arbitrary domain and range, they could not be true functions, only partial functions.
Scott got around this difficulty by formalizing a notion of "partial" or "incomplete" information to represent computations that have not yet returned a result. This was modeled by considering, for each domain of computation (e.g. the natural numbers), an additional element that represents an undefined output, i.e. the "result" of a computation that never ends. In addition, the domain of computation is equipped with an ordering relation, in which the "undefined result" is the least element.
The important step to finding a model for the lambda calculus is to consider only those functions (on such a partially ordered set) that are guaranteed to have least fixed points. The set of these functions, together with an appropriate ordering, is again a "domain" in the sense of the theory. But the restriction to a subset of all available functions has another great benefit: it is possible to obtain domains that contain their own function spaces, i.e. one gets functions that can be applied to themselves.
Beside these desirable properties, domain theory also allows for an appealing intuitive interpretation. As mentioned above, the domains of computation are always partially ordered. This ordering represents a hierarchy of information or knowledge. The higher an element is within the order, the more specific it is and the more information it contains. Lower elements represent incomplete knowledge or intermediate results.
Computation then is modeled by applying monotone functions repeatedly on elements of the domain in order to refine a result. Reaching a fixed point is equivalent to finishing a calculation. Domains provide a superior setting for these ideas since fixed points of monotone functions can be guaranteed to exist and, under additional restrictions, can be approximated from below.
A guide to the formal definitions
In this section, the central concepts and definitions of domain theory will be introduced. The above intuition of domains being information orderings will be emphasized to motivate the mathematical formalization of the theory. The precise formal definitions are to be found in the dedicated articles for each concept. A list of general order-theoretic definitions, which include domain theoretic notions as well can be found in the order theory glossary. The most important concepts of domain theory will nonetheless be introduced below.
Directed sets as converging specifications
As mentioned before, domain theory deals with partially ordered sets to model a domain of computation. The goal is to interpret the elements of such an order as pieces of information or (partial) results of a computation, where elements that are higher in the order extend the information of the elements below them in a consistent way. From this simple intuition it is already clear that domains often do not have a greatest element, since this would mean that there is an element that contains the information of all other elements—a rather uninteresting situation.
A concept that plays an important role in the theory is that of a directed subset of a domain; a directed subset is a non-empty subset of the order in which any two elements have an upper bound that is an element of this subset. In view of our intuition about domains, this means that any two pieces of information within the directed subset are consistently extended by some other element in the subset. Hence we can view directed subsets as consistent specifications, i.e. as sets of partial results in which no two elements are contradictory. This interpretation can be compared with the notion of a convergent sequence in analysis, where each element is more specific than the preceding one. Indeed, in the theory of metric spaces, sequences play a role that is in many aspects analogous to the role of directed sets in domain theory.
Now, as in the case of sequences, we are interested in the limit of a directed set. According to what was said above, this would be an element that is the most general piece of information that extends the information of all elements of the directed set, i.e. the unique element that contains exactly the information that was present in the directed set, and nothing more. In the formalization of order theory, this is just the least upper bound of the directed set. As in the case of the limit of a sequence, the least upper bound of a directed set does not always exist.
Naturally, one has a special interest in those domains of computations in which all consistent specifications converge, i.e. in orders in which all directed sets have a least upper bound. This property defines the class of directed-complete partial orders, or dcpo for short. Indeed, most considerations of domain theory do only consider orders that are at least directed complete.
From the underlying idea of partially specified results as representing incomplete knowledge, one derives another desirable property: the existence of a least element. Such an element models that state of no information—the place where most computations start. It also can be regarded as the output of a computation that does not return any result at all.
Computations and domains
Now that we have some basic formal descriptions of what a domain of computation should be, we can turn to the computations themselves. Clearly, these have to be functions, taking inputs from some computational domain and returning outputs in some (possibly different) domain. However, one would also expect that the output of a function will contain more information when the information content of the input is increased. Formally, this means that we want a function to be monotonic.
When dealing with dcpos, one might also want computations to be compatible with the formation of limits of a directed set. Formally, this means that, for some function f, the image f(D) of a directed set D (i.e. the set of the images of each element of D) is again directed and has as a least upper bound the image of the least upper bound of D. One could also say that f preserves directed suprema. Also note that, by considering directed sets of two elements, such a function also has to be monotonic. These properties give rise to the notion of a Scott-continuous function. Since this often is not ambiguous one also may speak of continuous functions.
Approximation and finiteness
Domain theory is a purely qualitative approach to modeling the structure of information states. One can say that something contains more information, but the amount of additional information is not specified. Yet, there are some situations in which one wants to speak about elements that are in a sense much simpler (or much more incomplete) than a given state of information. For example, in the natural subset-inclusion ordering on some powerset, any infinite element (i.e. set) is much more "informative" than any of its finite subsets.
If one wants to model such a relationship, one may first want to consider the induced strict order < of a domain with order ≤. However, while this is a useful notion in the case of total orders, it does not tell us much in the case of partially ordered sets. Considering again inclusion-orders of sets, a set is already strictly smaller than another, possibly infinite, set if it contains just one less element. One would, however, hardly agree that this captures the notion of being "much simpler".
Way-below relation
A more elaborate approach leads to the definition of the so-called order of approximation, which is more suggestively also called the way-below relation. An element x is way below an element y, if, for every directed set D with supremum sup D such that
y ≤ sup D,
there is some element d in D such that
x ≤ d.
Then one also says that x approximates y and writes
x ≪ y.
This does imply that
x ≤ y,
since the singleton set {y} is directed. For an example, in an ordering of sets, an infinite set is way above any of its finite subsets. On the other hand, consider the directed set (in fact, the chain) of finite sets
∅ ⊆ {0} ⊆ {0, 1} ⊆ {0, 1, 2} ⊆ ...
Since the supremum of this chain is the set of all natural numbers N, this shows that no infinite set is way below N.
However, being way below some element is a relative notion and does not reveal much about an element alone. For example, one would like to characterize finite sets in an order-theoretic way, but even infinite sets can be way below some other set. The special property of these finite elements x is that they are way below themselves, i.e.
x ≪ x.
An element with this property is also called compact. Yet, such elements do not have to be "finite" nor "compact" in any other mathematical usage of the terms. The notation is nonetheless motivated by certain parallels to the respective notions in set theory and topology. The compact elements of a domain have the important special property that they cannot be obtained as a limit of a directed set in which they did not already occur.
Many other important results about the way-below relation support the claim that this definition is appropriate to capture many important aspects of a domain.
Bases of domains
The previous thoughts raise another question: is it possible to guarantee that all elements of a domain can be obtained as a limit of much simpler elements? This is quite relevant in practice, since we cannot compute infinite objects but we may still hope to approximate them arbitrarily closely.
More generally, we would like to restrict to a certain subset of elements as being sufficient for getting all other elements as least upper bounds. Hence, one defines a base of a poset P as being a subset B of P, such that, for each x in P, the set of elements in B that are way below x contains a directed set with supremum x. The poset P is a continuous poset if it has some base. Especially, P itself is a base in this situation. In many applications, one restricts to continuous (d)cpos as a main object of study.
Finally, an even stronger restriction on a partially ordered set is given by requiring the existence of a base of finite elements. Such a poset is called algebraic. From the viewpoint of denotational semantics, algebraic posets are particularly well-behaved, since they allow for the approximation of all elements even when restricting to finite ones. As remarked before, not every finite element is "finite" in a classical sense and it may well be that the finite elements constitute an uncountable set.
In some cases, however, the base for a poset is countable. In this case, one speaks of an ω-continuous poset. Accordingly, if the countable base consists entirely of finite elements, we obtain an order that is ω-algebraic.
Special types of domains
A simple special case of a domain is known as an elementary or flat domain. This consists of a set of incomparable elements, such as the integers, along with a single "bottom" element considered smaller than all other elements.
One can obtain a number of other interesting special classes of ordered structures that could be suitable as "domains". We already mentioned continuous posets and algebraic posets. More special versions of both are continuous and algebraic cpos. Adding even further completeness properties one obtains continuous lattices and algebraic lattices, which are just complete lattices with the respective properties. For the algebraic case, one finds broader classes of posets that are still worth studying: historically, the Scott domains were the first structures to be studied in domain theory. Still wider classes of domains are constituted by SFP-domains, L-domains, and bifinite domains.
All of these classes of orders can be cast into various categories of dcpos, using functions that are monotone, Scott-continuous, or even more specialized as morphisms. Finally, note that the term domain itself is not exact and thus is only used as an abbreviation when a formal definition has been given before or when the details are irrelevant.
Important results
A poset D is a dcpo if and only if each chain in D has a supremum. (The 'if' direction relies on the axiom of choice.)
If f is a continuous function on a domain D then it has a least fixed point, given as the least upper bound of all finite iterations of f on the least element ⊥:
fix(f) = ⊔n∈N f^n(⊥).
This is the Kleene fixed-point theorem. The symbol ⊔ is the directed join.
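In the finite case the theorem gives a direct algorithm: start from the least element and apply f until the value stops changing. The sketch below (in Python) does this for monotone functions on the powerset of a finite set ordered by inclusion, a dcpo with least element ∅; the graph and reachability function are assumptions chosen only for illustration.

# Kleene iteration: compute the least fixed point of a monotone function
# on the powerset of a finite set (ordered by inclusion, bottom = empty set).
def least_fixed_point(f, bottom=frozenset()):
    current = bottom
    while True:
        nxt = f(current)
        if nxt == current:      # f(current) = current: least fixed point reached
            return current
        current = nxt

# Illustrative example: the set of states reachable from state 1 in a small graph.
edges = {1: {2}, 2: {3}, 3: {3}, 4: {5}, 5: set()}

def step(reached):
    # Monotone: adds the start state and all successors of already-reached states.
    out = set(reached) | {1}
    for state in reached:
        out |= edges.get(state, set())
    return frozenset(out)

print(sorted(least_fixed_point(step)))   # [1, 2, 3]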
Generalizations
A continuity space is a generalization of metric spaces and posets that can be used to unify the notions of metric spaces and domains.
See also
Category theory
Denotational semantics
Scott domain
Scott information system
Type theory
Further reading
External links
Introduction to Domain Theory by Graham Hutton, University of Nottingham
Fixed points (mathematics) | Domain theory | [
"Mathematics"
] | 2,902 | [
"Mathematical analysis",
"Fixed points (mathematics)",
"Topology",
"Domain theory",
"Order theory",
"Dynamical systems"
] |
325,496 | https://en.wikipedia.org/wiki/Servomechanism | In mechanical and control engineering, a servomechanism (also called servo system, or simply servo) is a control system for the position and its time derivatives, such as velocity, of a mechanical system. It often includes a servomotor, and uses closed-loop control to reduce steady-state error and improve dynamic response. In closed-loop control, error-sensing negative feedback is used to correct the action of the mechanism. In displacement-controlled applications, it usually includes a built-in encoder or other position feedback mechanism to ensure the output is achieving the desired effect. Following a specified motion trajectory is called servoing, where "servo" is used as a verb. The servo prefix originates from the Latin word servus meaning slave.
The term correctly applies only to systems where the feedback or error-correction signals help control mechanical position, speed, attitude or any other measurable variables. For example, an automotive power window control is not a servomechanism, as there is no automatic feedback that controls position—the operator does this by observation. By contrast a car's cruise control uses closed-loop feedback, which classifies it as a servomechanism.
Applications
Position control
A common type of servo provides position control. Commonly, servos are electric, hydraulic, or pneumatic. They operate on the principle of negative feedback, where the control input is compared to the actual position of the mechanical system as measured by some type of transducer at the output. Any difference between the actual and wanted values (an "error signal") is amplified (and converted) and used to drive the system in the direction necessary to reduce or eliminate the error. This procedure is one widely used application of control theory. Typical servos can give a rotary (angular) or linear output.
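A minimal simulation of this error-driven loop is sketched below in Python, assuming a simple proportional controller and an idealized actuator; the gain, time step, and plant model are illustrative assumptions rather than a model of any specific servo.

# Minimal proportional position servo: the error between the commanded and
# measured position is amplified and used to drive the output toward the setpoint.
def simulate_servo(setpoint, gain=4.0, dt=0.01, steps=300):
    position = 0.0            # measured output, e.g. shaft angle in degrees
    history = []
    for _ in range(steps):
        error = setpoint - position     # error signal from the position feedback
        drive = gain * error            # amplified error drives the actuator
        position += drive * dt          # very simple actuator/plant model
        history.append(position)
    return history

trace = simulate_servo(setpoint=90.0)
print(f"After 1 s: {trace[99]:.1f} deg; after 3 s: {trace[-1]:.2f} deg (approaches the 90 deg setpoint)")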
Speed control
Speed control via a governor is another type of servomechanism. The steam engine uses mechanical governors; another early application was to govern the speed of water wheels. Prior to World War II the constant speed propeller was developed to control engine speed for maneuvering aircraft. Fuel controls for gas turbine engines employ either hydromechanical or electronic governing.
Others
Positioning servomechanisms were first used in military fire-control and marine navigation equipment. Today servomechanisms are used in automatic machine tools, satellite-tracking antennas, remote control airplanes, automatic navigation systems on boats and planes, and antiaircraft-gun control systems. Other examples are fly-by-wire systems in aircraft which use servos to actuate the aircraft's control surfaces, and radio-controlled models which use RC servos for the same purpose. Many autofocus cameras also use a servomechanism to accurately move the lens. A hard disk drive has a magnetic servo system with sub-micrometer positioning accuracy. In industrial machines, servos are used to perform complex motion, in many applications.
Servomotor
A servomotor is a specific type of motor that is combined with a rotary encoder or a potentiometer to form a servomechanism. This assembly may in turn form part of another servomechanism. A potentiometer provides a simple analog signal to indicate position, while an encoder provides position and usually speed feedback, which by the use of a PID controller allow more precise control of position and thus faster achievement of a stable position (for a given motor power). Potentiometers are subject to drift when the temperature changes whereas encoders are more stable and accurate.
Servomotors are used for both high-end and low-end applications. On the high end are precision industrial components that use a rotary encoder. On the low end are inexpensive radio control servos (RC servos) used in radio-controlled models which use a free-running motor and a simple potentiometer position sensor with an embedded controller. The term servomotor generally refers to a high-end industrial component while the term servo is most often used to describe the inexpensive devices that employ a potentiometer. Stepper motors are not considered to be servomotors, although they too are used to construct larger servomechanisms. Stepper motors have inherent angular positioning, owing to their construction, and this is generally used in an open-loop manner without feedback. They are generally used for medium-precision applications.
RC servos are used to provide actuation for various mechanical systems such as the steering of a car, the control surfaces on a plane, or the rudder of a boat. Due to their affordability, reliability, and simplicity of control by microprocessors, they are often used in small-scale robotics applications. A standard RC receiver (or a microcontroller) sends pulse-width modulation (PWM) signals to the servo. The electronics inside the servo translate the width of the pulse into a position. When the servo is commanded to rotate, the motor is powered until the potentiometer reaches the value corresponding to the commanded position.
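The pulse-width-to-position translation can be illustrated with a simple linear mapping; the 1000–2000 microsecond range and 0–180 degree travel used below are a common hobby-servo convention assumed for the sketch, not a universal specification.

# Map an RC servo command pulse width to a target angle.
MIN_PULSE_US = 1000.0   # assumed full deflection in one direction
MAX_PULSE_US = 2000.0   # assumed full deflection in the other direction
TRAVEL_DEG = 180.0      # assumed total travel

def pulse_to_angle(pulse_us):
    # Clamp out-of-range pulses, then interpolate linearly across the travel.
    pulse_us = max(MIN_PULSE_US, min(MAX_PULSE_US, pulse_us))
    fraction = (pulse_us - MIN_PULSE_US) / (MAX_PULSE_US - MIN_PULSE_US)
    return fraction * TRAVEL_DEG

for pulse in (1000, 1500, 2000):
    print(f"{pulse} us -> {pulse_to_angle(pulse):.0f} deg")
# 1000 us -> 0 deg, 1500 us -> 90 deg (centre), 2000 us -> 180 deg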
History
James Watt's steam engine governor is generally considered the first powered feedback system. The windmill fantail is an earlier example of automatic control, but since it does not have an amplifier or gain, it is not usually considered a servomechanism.
The first feedback position control device was the ship steering engine, used to position the rudder of large ships based on the position of the ship's wheel.
John McFarlane Gray was a pioneer. His patented design was used on the SS Great Eastern in 1866.
Joseph Farcot may deserve equal credit for the feedback concept, with several patents between 1862 and 1868.
The telemotor was invented around 1872 by Andrew Betts Brown, allowing elaborate mechanisms between the control room and the engine to be greatly simplified. Steam steering engines had the characteristics of a modern servomechanism: an input, an output, an error signal, and a means for amplifying the error signal used for negative feedback to drive the error towards zero. The Ragonnet power reverse mechanism was a general purpose air or steam-powered servo amplifier for linear motion patented in 1909.
Electrical servomechanisms were used as early as 1888 in Elisha Gray's Telautograph.
Electrical servomechanisms require a power amplifier. World War II saw the development of electrical fire-control servomechanisms, using an amplidyne as the power amplifier. Vacuum tube amplifiers were used in the UNISERVO tape drive for the UNIVAC I computer. The Royal Navy began experimenting with Remote Power Control (RPC) on HMS Champion in 1928 and began using RPC to control searchlights in the early 1930s. During WW2 RPC was used to control gun mounts and gun directors.
Modern servomechanisms use solid state power amplifiers, usually built from MOSFET or thyristor devices. Small servos may use power transistors.
The origin of the word is believed to come from the French "Le Servomoteur" or the slavemotor, first used by J. J. L. Farcot in 1868 to describe hydraulic and steam engines for use in ship steering.
The simplest kind of servos use bang–bang control. More complex control systems use proportional control, PID control, and state space control, which are studied in modern control theory.
Types of performances
Servos can be classified by means of their feedback control systems:
type 0 servos: under steady-state conditions they produce a constant value of the output with a constant error signal;
type 1 servos: under steady-state conditions they produce a constant value of the output with null error signal, but a constant rate of change of the reference implies a constant error in tracking the reference;
type 2 servos: under steady-state conditions they produce a constant value of the output with null error signal. A constant rate of change of the reference implies a null error in tracking the reference. A constant rate of acceleration of the reference implies a constant error in tracking the reference.
The servo bandwidth indicates the capability of the servo to follow rapid changes in the commanded input.
See also
Further reading
Hsue-Shen Tsien (1954) Engineering Cybernetics, McGraw Hill, link from HathiTrust
References
External links
Ontario News "pioneer in servo technology"
Rane Pro Audio Reference definition of "servo-loop"
Seattle Robotics Society's "What is a Servo?"
different types of servo motors"
Control theory
Control devices
Mechanical amplifiers | Servomechanism | [
"Mathematics",
"Technology",
"Engineering"
] | 1,755 | [
"Mechanical amplifiers",
"Control devices",
"Applied mathematics",
"Control theory",
"Control engineering",
"Amplifiers",
"Dynamical systems"
] |
325,714 | https://en.wikipedia.org/wiki/Hopf%20algebra | In mathematics, a Hopf algebra, named after Heinz Hopf, is a structure that is simultaneously an (unital associative) algebra and a (counital coassociative) coalgebra, with these structures' compatibility making it a bialgebra, and that moreover is equipped with an antihomomorphism satisfying a certain property. The representation theory of a Hopf algebra is particularly nice, since the existence of compatible comultiplication, counit, and antipode allows for the construction of tensor products of representations, trivial representations, and dual representations.
Hopf algebras occur naturally in algebraic topology, where they originated and are related to the H-space concept, in group scheme theory, in group theory (via the concept of a group ring), and in numerous other places, making them probably the most familiar type of bialgebra. Hopf algebras are also studied in their own right, with much work on specific classes of examples on the one hand and classification problems on the other. They have diverse applications ranging from condensed matter physics and quantum field theory to string theory and LHC phenomenology.
Formal definition
Formally, a Hopf algebra is an (associative and coassociative) bialgebra H over a field K together with a K-linear map S: H → H (called the antipode) such that the following diagram commutes:
Here Δ is the comultiplication of the bialgebra, ∇ its multiplication, η its unit and ε its counit. In the sumless Sweedler notation, this property can also be expressed as
S(c(1))c(2) = c(1)S(c(2)) = ε(c)1 for all c in H.
As for algebras, one can replace the underlying field K with a commutative ring R in the above definition.
The definition of Hopf algebra is self-dual (as reflected in the symmetry of the above diagram), so if one can define a dual of H (which is always possible if H is finite-dimensional), then it is automatically a Hopf algebra.
Structure constants
Fixing a basis {ei} for the underlying vector space, one may define the algebra in terms of structure constants for multiplication:
ei ej = ∑k μij^k ek
for co-multiplication:
Δ(ei) = ∑j,k νi^jk ej ⊗ ek
and the antipode:
S(ei) = ∑j τi^j ej
Associativity then requires that
∑p μij^p μpk^q = ∑p μjk^p μip^q
while co-associativity requires that
∑p νi^pl νp^jk = ∑p νi^jp νp^kl
The connecting axiom requires that
∑p μij^p νp^kl = ∑a,b,c,d νi^ab νj^cd μac^k μbd^l
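These constraints can be checked numerically on a small example. The sketch below, written in Python with NumPy, encodes the group algebra of the two-element group (basis e0 and e1 with e1e1 = e0, Δ(ei) = ei ⊗ ei, ε(ei) = 1, S(ei) = ei) as structure-constant tensors and verifies associativity, coassociativity, the connecting axiom, and the antipode axiom; it is an illustrative check, not part of any standard library.

import numpy as np

# Group algebra of Z/2 with basis e0 (identity) and e1 (generator).
# mu[i, j, k] : coefficient of e_k in e_i e_j            (multiplication)
# nu[i, j, k] : coefficient of e_j (x) e_k in Delta(e_i) (comultiplication)
# tau[i, j]   : coefficient of e_j in S(e_i)             (antipode)
n = 2
mu = np.zeros((n, n, n))
nu = np.zeros((n, n, n))
tau = np.zeros((n, n))
eps = np.ones(n)                # counit: eps(e_i) = 1
unit = np.array([1.0, 0.0])     # unit element 1 = e_0

for i in range(n):
    for j in range(n):
        mu[i, j, (i + j) % n] = 1.0   # e_i e_j = e_{i+j mod 2}
    nu[i, i, i] = 1.0                 # Delta(e_i) = e_i (x) e_i
    tau[i, (-i) % n] = 1.0            # S(e_i) = e_{-i}

# Associativity.
assert np.allclose(np.einsum('ijp,pkq->ijkq', mu, mu),
                   np.einsum('jkp,ipq->ijkq', mu, mu))
# Coassociativity.
assert np.allclose(np.einsum('ipl,pjk->ijkl', nu, nu),
                   np.einsum('ijp,pkl->ijkl', nu, nu))
# Connecting axiom: Delta(e_i e_j) = Delta(e_i) Delta(e_j).
assert np.allclose(np.einsum('ijp,pkl->ijkl', mu, nu),
                   np.einsum('iab,jcd,ack,bdl->ijkl', nu, nu, mu, mu))
# Antipode axiom: mult (S (x) id) Delta = unit . counit = mult (id (x) S) Delta.
assert np.allclose(np.einsum('iab,ac,cbk->ik', nu, tau, mu), np.outer(eps, unit))
assert np.allclose(np.einsum('iab,bc,ack->ik', nu, tau, mu), np.outer(eps, unit))
print("All Hopf algebra axioms hold for the group algebra of Z/2.")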
Properties of the antipode
The antipode S is sometimes required to have a K-linear inverse, which is automatic in the finite-dimensional case, or if H is commutative or cocommutative (or more generally quasitriangular).
In general, S is an antihomomorphism, so S2 is a homomorphism, which is therefore an automorphism if S was invertible (as may be required).
If S2 = idH, then the Hopf algebra is said to be involutive (and the underlying algebra with involution is a *-algebra). If H is finite-dimensional semisimple over a field of characteristic zero, commutative, or cocommutative, then it is involutive.
If a bialgebra B admits an antipode S, then S is unique ("a bialgebra admits at most 1 Hopf algebra structure"). Thus, the antipode does not pose any extra structure which we can choose: Being a Hopf algebra is a property of a bialgebra.
The antipode is an analog to the inversion map on a group that sends g to g−1.
Hopf subalgebras
A subalgebra A of a Hopf algebra H is a Hopf subalgebra if it is a subcoalgebra of H and the antipode S maps A into A. In other words, a Hopf subalgebra A is a Hopf algebra in its own right when the multiplication, comultiplication, counit and antipode of H are restricted to A (and additionally the identity 1 of H is required to be in A). The Nichols–Zoeller freeness theorem of Warren Nichols and Bettina Zoeller (1989) established that the natural A-module H is free of finite rank if H is finite-dimensional: a generalization of Lagrange's theorem for subgroups. As a corollary of this and integral theory, a Hopf subalgebra of a semisimple finite-dimensional Hopf algebra is automatically semisimple.
A Hopf subalgebra A is said to be right normal in a Hopf algebra H if it satisfies the condition of stability, adr(h)(A) ⊆ A for all h in H, where the right adjoint mapping adr is defined by adr(h)(a) = S(h(1))ah(2) for all a in A, h in H. Similarly, a Hopf subalgebra A is left normal in H if it is stable under the left adjoint mapping defined by adl(h)(a) = h(1)aS(h(2)). The two conditions of normality are equivalent if the antipode S is bijective, in which case A is said to be a normal Hopf subalgebra.
A normal Hopf subalgebra A in H satisfies the condition (of equality of subsets of H): HA+ = A+H where A+ denotes the kernel of the counit on A. This normality condition implies that HA+ is a Hopf ideal of H (i.e. an algebra ideal in the kernel of the counit, a coalgebra coideal and stable under the antipode). As a consequence one has a quotient Hopf algebra H/HA+ and epimorphism H → H/A+H, a theory analogous to that of normal subgroups and quotient groups in group theory.
Hopf orders
A Hopf order O over an integral domain R with field of fractions K is an order in a Hopf algebra H over K which is closed under the algebra and coalgebra operations: in particular, the comultiplication Δ maps O to O⊗O.
Group-like elements
A group-like element is a nonzero element x such that Δ(x) = x⊗x. The group-like elements form a group with inverse given by the antipode. A primitive element x satisfies Δ(x) = x⊗1 + 1⊗x.
Examples
Note that functions on a finite group can be identified with the group ring, though these are more naturally thought of as dual – the group ring consists of finite sums of elements, and thus pairs with functions on the group by evaluating the function on the summed elements.
Cohomology of Lie groups
The cohomology algebra H*(G) (over a field k) of a Lie group G is a Hopf algebra: the multiplication is provided by the cup product, and the comultiplication
H*(G) → H*(G × G) ≅ H*(G) ⊗ H*(G)
by the group multiplication G × G → G. This observation was actually a source of the notion of Hopf algebra. Using this structure, Hopf proved a structure theorem for the cohomology algebra of Lie groups.
Theorem (Hopf) Let A be a finite-dimensional, graded commutative, graded cocommutative Hopf algebra over a field of characteristic 0. Then A (as an algebra) is a free exterior algebra with generators of odd degree.
Quantum groups and non-commutative geometry
Most examples above are either commutative (i.e. the multiplication is commutative) or co-commutative (i.e. Δ = T ∘ Δ where the twist map T: H ⊗ H → H ⊗ H is defined by T(x ⊗ y) = y ⊗ x). Other interesting Hopf algebras are certain "deformations" or "quantizations" of those from example 3 which are neither commutative nor co-commutative. These Hopf algebras are often called quantum groups, a term that is so far only loosely defined. They are important in noncommutative geometry, the idea being the following: a standard algebraic group is well described by its standard Hopf algebra of regular functions; we can then think of the deformed version of this Hopf algebra as describing a certain "non-standard" or "quantized" algebraic group (which is not an algebraic group at all). While there does not seem to be a direct way to define or manipulate these non-standard objects, one can still work with their Hopf algebras, and indeed one identifies them with their Hopf algebras. Hence the name "quantum group".
Representation theory
Let A be a Hopf algebra, and let M and N be A-modules. Then, M ⊗ N is also an A-module, with
a · (m ⊗ n) = (a1 · m) ⊗ (a2 · n)
for m ∈ M, n ∈ N and Δ(a) = (a1, a2). Furthermore, we can define the trivial representation as the base field K with
a · m = ε(a)m
for m ∈ K. Finally, the dual representation of A can be defined: if M is an A-module and M* is its dual space, then
(a · f)(m) = f(S(a) · m)
where f ∈ M* and m ∈ M.
The relationship between Δ, ε, and S ensure that certain natural homomorphisms of vector spaces are indeed homomorphisms of A-modules. For instance, the natural isomorphisms of vector spaces M → M ⊗ K and M → K ⊗ M are also isomorphisms of A-modules. Also, the map of vector spaces M* ⊗ M → K with f ⊗ m → f(m) is also a homomorphism of A-modules. However, the map M ⊗ M* → K is not necessarily a homomorphism of A-modules.
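As a small concrete illustration, take A to be the group algebra of the two-element group {1, g} and start from the one-dimensional sign representation in which g acts as -1. Since Δ(g) = g ⊗ g, ε(g) = 1 and S(g) = g, the constructions above reduce to familiar operations on group representations; the Python sketch below is an illustration only, with the sign representation chosen as an example.

import numpy as np

# Modules over the group algebra of G = {1, g} with g*g = 1.
def tensor_action(rho_m, rho_n):
    # Action on M (x) N: a.(m (x) n) = (a1.m) (x) (a2.n); since Delta(g) = g (x) g,
    # g acts by the Kronecker product of the two actions.
    return {g: np.kron(rho_m[g], rho_n[g]) for g in rho_m}

def trivial_action():
    # Trivial representation: a.m = eps(a) m, so every group element acts as 1.
    return {"1": np.eye(1), "g": np.eye(1)}

def dual_action(rho_m):
    # Dual representation: (a.f)(m) = f(S(a).m); here S(g) = g, so the matrix
    # of the action on the dual space is the transpose of rho(g).
    return {g: rho_m[g].T for g in rho_m}

sign = {"1": np.eye(1), "g": -np.eye(1)}   # sign representation: g acts as -1

print("g on sign (x) sign:", tensor_action(sign, sign)["g"])   # [[1.]] -> trivial
print("g on trivial:      ", trivial_action()["g"])            # [[1.]]
print("g on dual of sign: ", dual_action(sign)["g"])           # [[-1.]]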
Related concepts
Graded Hopf algebras are often used in algebraic topology: they are the natural algebraic structure on the direct sum of all homology or cohomology groups of an H-space.
Locally compact quantum groups generalize Hopf algebras and carry a topology. The algebra of all continuous functions on a Lie group is a locally compact quantum group.
Quasi-Hopf algebras are generalizations of Hopf algebras, where coassociativity only holds up to a twist. They have been used in the study of the Knizhnik–Zamolodchikov equations.
Multiplier Hopf algebras introduced by Alfons Van Daele in 1994 are generalizations of Hopf algebras where comultiplication from an algebra (with or without unit) to the multiplier algebra of tensor product algebra of the algebra with itself.
Hopf group-(co)algebras introduced by V. G. Turaev in 2000 are also generalizations of Hopf algebras.
Weak Hopf algebras
Weak Hopf algebras, or quantum groupoids, are generalizations of Hopf algebras. Like Hopf algebras, weak Hopf algebras form a self-dual class of algebras; i.e., if H is a (weak) Hopf algebra, so is H*, the dual space of linear forms on H (with respect to the algebra-coalgebra structure obtained from the natural pairing with H and its coalgebra-algebra structure). A weak Hopf algebra H is usually taken to be a
finite-dimensional algebra and coalgebra with coproduct Δ: H → H ⊗ H and counit ε: H → k satisfying all the axioms of Hopf algebra except possibly Δ(1) ≠ 1 ⊗ 1 or ε(ab) ≠ ε(a)ε(b) for some a,b in H. Instead one requires the following:
ε(abc) = ε(ab(1))ε(b(2)c) = ε(ab(2))ε(b(1)c)  and  (Δ ⊗ id)Δ(1) = (Δ(1) ⊗ 1)(1 ⊗ Δ(1)) = (1 ⊗ Δ(1))(Δ(1) ⊗ 1)
for all a, b, and c in H.
H has a weakened antipode S: H → H satisfying the axioms:
S(a(1))a(2) = 1(1)ε(a1(2))
for all a in H (the right-hand side is the interesting projection usually denoted by ΠR(a) or εs(a) with image a separable subalgebra denoted by HR or Hs);
a(1)S(a(2)) = ε(1(1)a)1(2)
for all a in H (another interesting projection usually denoted by ΠL(a) or εt(a) with image a separable algebra HL or Ht, anti-isomorphic to HR via S);
S(a(1))a(2)S(a(3)) = S(a)
for all a in H.
Note that if Δ(1) = 1 ⊗ 1, these conditions reduce to the two usual conditions on the antipode of a Hopf algebra.
The axioms are partly chosen so that the category of H-modules is a rigid monoidal category. The unit H-module is the separable algebra HL mentioned above.
For example, a finite groupoid algebra is a weak Hopf algebra. In particular, the groupoid algebra on [n] with one pair of invertible arrows eij and eji between i and j in [n] is isomorphic to the algebra H of n x n matrices. The weak Hopf algebra structure on this particular H is given by coproduct Δ(eij) = eij ⊗ eij, counit ε(eij) = 1 and antipode S(eij) = eji. The separable subalgebras HL and HR coincide and are non-central commutative algebras in this particular case (the subalgebra of diagonal matrices).
Early theoretical contributions to weak Hopf algebras are to be found in as well as
Hopf algebroids
See Hopf algebroid
Analogy with groups
Groups can be axiomatized by the same diagrams (equivalently, operations) as a Hopf algebra, where G is taken to be a set instead of a module. In this case:
the field K is replaced by the 1-point set
there is a natural counit (map to 1 point)
there is a natural comultiplication (the diagonal map)
the unit is the identity element of the group
the multiplication is the multiplication in the group
the antipode is the inverse
In this philosophy, a group can be thought of as a Hopf algebra over the "field with one element".
Hopf algebras in braided monoidal categories
The definition of Hopf algebra is naturally extended to arbitrary braided monoidal categories. A Hopf algebra in such a category C (with tensor product ⊗ and unit object I) is a sextuple (H, μ, η, Δ, ε, S) where H is an object in C, and
μ: H ⊗ H → H (multiplication),
η: I → H (unit),
Δ: H → H ⊗ H (comultiplication),
ε: H → I (counit),
S: H → H (antipode)
— are morphisms in C such that
1) the triple (H, μ, η) is a monoid in the monoidal category C, i.e. the following diagrams are commutative:
2) the triple (H, Δ, ε) is a comonoid in the monoidal category C, i.e. the following diagrams are commutative:
3) the structures of monoid and comonoid on H are compatible: the multiplication μ and the unit η are morphisms of comonoids, and (this is equivalent in this situation) at the same time the comultiplication Δ and the counit ε are morphisms of monoids; this means that the following diagrams must be commutative:
where λ is the left unit morphism in C, and θ the natural transformation of functors (H ⊗ H) ⊗ (H ⊗ H) → (H ⊗ H) ⊗ (H ⊗ H) which is unique in the class of natural transformations of functors composed from the structural transformations (associativity, left and right units, transposition, and their inverses) in the category C.
The quintuple (H, μ, η, Δ, ε) with the properties 1), 2), 3) is called a bialgebra in the category C;
4) the diagram of antipode is commutative:
The typical examples are the following.
Groups. In the monoidal category Set of sets (with the cartesian product as the tensor product, and an arbitrary singleton, say, {•}, as the unit object) a triple (H, μ, η) is a monoid in the categorical sense if and only if it is a monoid in the usual algebraic sense, i.e. if the operations μ and η behave like usual multiplication and unit in H (but possibly without the invertibility of elements x ∈ H). At the same time, a triple (H, Δ, ε) is a comonoid in the categorical sense iff Δ is the diagonal operation Δ(x) = (x, x) (and the operation ε is then defined uniquely as well: ε(x) = •). And any such structure of comonoid is compatible with any structure of monoid in the sense that the diagrams in section 3 of the definition always commute. As a corollary, each monoid in Set can naturally be considered as a bialgebra in Set, and vice versa. The existence of the antipode S for such a bialgebra means exactly that every element x ∈ H has an inverse element S(x) with respect to the multiplication μ. Thus, in the category of sets Hopf algebras are exactly groups in the usual algebraic sense.
Classical Hopf algebras. In the special case when C is the category of vector spaces over a given field K, the Hopf algebras in C are exactly the classical Hopf algebras described above.
Functional algebras on groups. The standard functional algebras , , , (of continuous, smooth, holomorphic, regular functions) on groups are Hopf algebras in the category (Ste,) of stereotype spaces,
Group algebras. The stereotype group algebras , , , (of measures, distributions, analytic functionals and currents) on groups are Hopf algebras in the category (Ste,) of stereotype spaces. These Hopf algebras are used in the duality theories for non-commutative groups.
See also
Quasitriangular Hopf algebra
Algebra/set analogy
Representation theory of Hopf algebras
Ribbon Hopf algebra
Superalgebra
Supergroup
Anyonic Lie algebra
Sweedler's Hopf algebra
Hopf algebra of permutations
Milnor–Moore theorem
Notes and references
Notes
Citations
References
Heinz Hopf, Über die Topologie der Gruppen-Mannigfaltigkeiten und ihrer Verallgemeinerungen, Annals of Mathematics 42 (1941), 22–52. Reprinted in Selecta Heinz Hopf, pp. 119–151, Springer, Berlin (1964).
Monoidal categories
Representation theory | Hopf algebra | [
"Mathematics"
] | 3,758 | [
"Mathematical structures",
"Monoidal categories",
"Fields of abstract algebra",
"Category theory",
"Representation theory"
] |
325,772 | https://en.wikipedia.org/wiki/Nova%20%28American%20TV%20program%29 | Nova (stylized as NOVΛ) is an American popular science television program produced by WGBH in Boston, Massachusetts, since 1974. It is broadcast on PBS in the United States, and in more than 100 other countries. The program has won many major television awards.
Nova often includes interviews with scientists doing research in the subject areas covered and occasionally includes footage of a particular discovery. Some episodes have focused on the history of science. Examples of topics covered include the following:
Colditz Castle,
the Drake equation,
elementary particles,
the 1980 eruption of Mount St. Helens,
Fermat's Last Theorem,
the AIDS epidemic,
global warming,
moissanite,
Project Jennifer,
storm chasing,
Unterseeboot 869,
Vinland,
Tarim mummies,
and the COVID-19 pandemic.
The Nova programs have been praised for their pacing, writing, and editing. Websites that accompany the segments have also won awards.
Episodes
History
Nova was first aired on March 3, 1974. The show was created by Michael Ambrosino, inspired by the BBC 2 television series Horizon, which Ambrosino had seen while working in the UK. In the early years, many Nova episodes were either co-productions with the BBC Horizon team, or other documentaries originating outside of the United States, with the narration re-voiced in American English. Of the first 50 programs, only 19 were original WGBH productions, and the first Nova episode, "The Making of a Natural History Film", was originally an episode of Horizon that premiered in 1972. The practice continues to this day. All the producers and associate producers for the original Nova teams came from either England (with experience on the Horizon series), Los Angeles or New York. Ambrosino was succeeded as executive producer by John Angier, John Mansfield, and Paula S. Apsell, acting as senior executive producer.
Reception
Rob Owen of Pittsburgh Post-Gazette wrote, "Fascinating and gripping." Alex Strachan of Calgary Herald wrote, "TV for people who don't normally watch TV." Lynn Elber of the Associated Press wrote of the episode "The Fabric of the Cosmos", "Mind-blowing TV." The Futon Critic wrote of the episode "Looking for Life on Mars", "Astounding [and] exhilarating."
Awards
Nova has been recognized with multiple Peabody Awards and Emmy Awards. The program won a Peabody in 1974, citing it as "an imaginative series of science adventures," with a "versatility rarely found in television." Subsequent Peabodys went to specific episodes:
"The Miracle of Life" (1983) was cited as a "fascinating and informative documentary of the human reproductive process," which used "revolutionary microphotographic techniques." This episode also won an Emmy.
"Spy Machines" (1987) was cited for "neatly recount[ing] the key events of the Cold War and look[ing] into the future of American/Soviet SDI competition."
"The Elegant Universe" (2003) was lauded for exploring "science's most elaborate and ambitious theory, the string theory" while making "the abstract concrete, the complicated clear, and the improbable understandable" by "blending factual story telling with animation, special effects, and trick photography." The episode also won an Emmy for editing.
The National Academy of Television Arts and Sciences (responsible for documentary Emmys) recognized the program with awards in 1978, 1981, 1983, and 1989. Julia Cort won an Emmy in 2001 for writing "Life's Greatest Miracle." Emmys were also awarded for the following episodes:
1982 "Here's Looking at You, Kid"
1983 "The Miracle of Life" (also won a Peabody)
1985 "AIDS: Chapter One", "Acid Rain: New Bad News"
1992 "Suicide Mission to Chernobyl", "The Russian Right Stuff"
1994 "Secret of the Wild Child"
1995 "Siamese Twins", "Secret of the Wild Child"
1999 "Decoding Nazi Secrets"
2001 "Bioterror"
2002 "Galileo's Battle for the Heavens", "Mountain of Ice", "Shackleton's Voyage of Endurance", "Why the Towers Fell"
2003 "Battle of the X-planes", "The Elegant Universe" (also won a Peabody)
2005 "Rx for Survival: A Global Health Challenge"
In 1998, the National Science Board of the National Science Foundation awarded Nova its first-ever Public Service Award.
References
External links
1974 American television series debuts
1970s American documentary television series
1980s American documentary television series
1990s American documentary television series
2000s American documentary television series
2010s American documentary television series
2020s American documentary television series
American educational television series
Emmy Award–winning programs
American English-language television shows
PBS original programming
Peabody Award–winning television programs
Science education television series
Physics education
Television series by WGBH
Documentary television shows about evolution | Nova (American TV program) | [
"Physics"
] | 1,003 | [
"Applied and interdisciplinary physics",
"Physics education"
] |
325,831 | https://en.wikipedia.org/wiki/Injection%20moulding | Injection moulding (U.S. spelling: injection molding) is a manufacturing process for producing parts by injecting molten material into a mould, or mold. Injection moulding can be performed with a host of materials mainly including metals (for which the process is called die-casting), glasses, elastomers, confections, and most commonly thermoplastic and thermosetting polymers. Material for the part is fed into a heated barrel, mixed (using a helical screw), and injected into a mould cavity, where it cools and hardens to the configuration of the cavity. After a product is designed, usually by an industrial designer or an engineer, moulds are made by a mould-maker (or toolmaker) from metal, usually either steel or aluminium, and precision-machined to form the features of the desired part. Injection moulding is widely used for manufacturing a variety of parts, from the smallest components to entire body panels of cars. Advances in 3D printing technology, using photopolymers that do not melt during the injection moulding of some lower-temperature thermoplastics, can be used for some simple injection moulds.
Injection moulding uses a special-purpose machine that has three parts: the injection unit, the mould and the clamp. Parts to be injection-moulded must be very carefully designed to facilitate the moulding process; the material used for the part, the desired shape and features of the part, the material of the mould, and the properties of the moulding machine must all be taken into account. The versatility of injection moulding is facilitated by this breadth of design considerations and possibilities.
Applications
Injection moulding is used to create many things such as wire spools, packaging, bottle caps, automotive parts and components, toys, pocket combs, some musical instruments (and parts of them), one-piece chairs and small tables, storage containers, mechanical parts (including gears), and most other plastic products available today. Injection moulding is the most common modern method of manufacturing plastic parts; it is ideal for producing high volumes of the same object.
Process characteristics
Injection moulding uses a ram or screw-type plunger to force molten plastic or rubber material into a mould cavity; this solidifies into a shape that has conformed to the contour of the mould. It is most commonly used to process both thermoplastic and thermosetting polymers, with the volume used of the former being considerably higher. Thermoplastics are prevalent due to characteristics that make them highly suitable for injection moulding, such as ease of recycling, versatility for a wide variety of applications, and ability to soften and flow on heating. Thermoplastics also have an element of safety over thermosets; if a thermosetting polymer is not ejected from the injection barrel in a timely manner, chemical crosslinking may occur causing the screw and check valves to seize and potentially damaging the injection moulding machine.
Injection moulding consists of the high pressure injection of the raw material into a mould, which shapes the polymer into the desired form. Moulds can be of a single cavity or multiple cavities. In multiple cavity moulds, each cavity can be identical and form the same parts or can be unique and form multiple different geometries during a single cycle. Moulds are generally made from tool steels, but stainless steels and aluminium moulds are suitable for certain applications. Aluminium moulds are typically ill-suited for high volume production or parts with narrow dimensional tolerances, as they have inferior mechanical properties and are more prone to wear, damage, and deformation during the injection and clamping cycles; however, aluminium moulds are cost-effective in low-volume applications, as mould fabrication costs and time are considerably reduced. Many steel moulds are designed to process well over a million parts during their lifetime and can cost hundreds of thousands of dollars to fabricate.
When thermoplastics are moulded, typically pelletised raw material is fed through a hopper into a heated barrel with a reciprocating screw. Upon entrance to the barrel, the temperature increases and the Van der Waals forces that resist relative flow of individual chains are weakened as a result of increased space between molecules at higher thermal energy states. This process reduces its viscosity, which enables the polymer to flow with the driving force of the injection unit. The screw delivers the raw material forward, mixes and homogenises the thermal and viscous distributions of the polymer, and reduces the required heating time by mechanically shearing the material and adding a significant amount of frictional heating to the polymer. The material feeds forward through a check valve and collects at the front of the screw into a volume known as a shot. A shot is the volume of material that is used to fill the mould cavity, compensate for shrinkage, and provide a cushion (approximately 10% of the total shot volume, which remains in the barrel and prevents the screw from bottoming out) to transfer pressure from the screw to the mould cavity. When enough material has gathered, the material is forced at high pressure and velocity into the part forming cavity. The exact amount of shrinkage is a function of the resin being used, and can be relatively predictable. To prevent spikes in pressure, the process normally uses a transfer position corresponding to a 95–98% full cavity where the screw shifts from a constant velocity to a constant pressure control. Often injection times are well under 1 second. Once the screw reaches the transfer position the packing pressure is applied, which completes mould filling and compensates for thermal shrinkage, which is quite high for thermoplastics relative to many other materials. The packing pressure is applied until the gate (cavity entrance) solidifies. Due to its small size, the gate is normally the first place to solidify through its entire thickness. Once the gate solidifies, no more material can enter the cavity; accordingly, the screw reciprocates and acquires material for the next cycle while the material within the mould cools so that it can be ejected and be dimensionally stable. This cooling duration is dramatically reduced by the use of cooling lines circulating water or oil from an external temperature controller. Once the required temperature has been achieved, the mould opens and an array of pins, sleeves, strippers, etc. are driven forward to demould the article. Then, the mould closes and the process is repeated.
For a two-shot mould, two separate materials are incorporated into one part. This type of injection moulding is used to add a soft touch to knobs, to give a product multiple colours, or to produce a part with multiple performance characteristics.
For thermosets, typically two different chemical components are injected into the barrel. These components immediately begin irreversible chemical reactions that eventually crosslinks the material into a single connected network of molecules. As the chemical reaction occurs, the two fluid components permanently transform into a viscoelastic solid. Solidification in the injection barrel and screw can be problematic and have financial repercussions; therefore, minimising the thermoset curing within the barrel is vital. This typically means that the residence time and temperature of the chemical precursors are minimised in the injection unit. The residence time can be reduced by minimising the barrel's volume capacity and by maximising the cycle times. These factors have led to the use of a thermally isolated, cold injection unit that injects the reacting chemicals into a thermally isolated hot mould, which increases the rate of chemical reactions and results in shorter time required to achieve a solidified thermoset component. After the part has solidified, valves close to isolate the injection system and chemical precursors, and the mould opens to eject the moulded parts. Then, the mould closes and the process repeats.
Pre-moulded or machined components can be inserted into the cavity while the mould is open, allowing the material injected in the next cycle to form and solidify around them. This process is known as insert moulding and allows single parts to contain multiple materials. This process is often used to create plastic parts with protruding metal screws so they can be fastened and unfastened repeatedly. This technique can also be used for In-mould labelling and film lids may also be attached to moulded plastic containers.
A parting line, sprue, gate marks, and ejector pin marks are usually present on the final part. None of these features are typically desired, but are unavoidable due to the nature of the process. Gate marks occur at the gate that joins the melt-delivery channels (sprue and runner) to the part forming cavity. Parting line and ejector pin marks result from minute misalignments, wear, gaseous vents, clearances for adjacent parts in relative motion, and/or dimensional differences of the melting surfaces contacting the injected polymer. Dimensional differences can be attributed to non-uniform, pressure-induced deformation during injection, machining tolerances, and non-uniform thermal expansion and contraction of mould components, which experience rapid cycling during the injection, packing, cooling, and ejection phases of the process. Mould components are often designed with materials of various coefficients of thermal expansion. These factors cannot be simultaneously accounted for without astronomical increases in the cost of design, fabrication, processing, and quality monitoring. The skillful mould and part designer positions these aesthetic detriments in hidden areas if feasible.
History
In 1846 the British inventor Charles Hancock, a relative of Thomas Hancock, patented an injection molding machine.
American inventor John Wesley Hyatt, together with his brother Isaiah, patented one of the first injection moulding machines in 1872. This machine was relatively simple compared to machines in use today: it worked like a large hypodermic needle, using a plunger to inject plastic through a heated cylinder into a mould. The industry progressed slowly over the years, producing products such as collar stays, buttons, and hair combs (generally, though, plastics in the modern sense of the word are a more recent development).
The German chemists Arthur Eichengrün and Theodore Becker invented the first soluble forms of cellulose acetate in 1903, which was much less flammable than cellulose nitrate. It was eventually made available in a powder form from which it was readily injection moulded. Arthur Eichengrün developed the first injection moulding press in 1919. In 1939, Arthur Eichengrün patented the injection moulding of plasticised cellulose acetate.
The industry expanded rapidly in the 1940s because World War II created a huge demand for inexpensive, mass-produced products. In 1946, American inventor James Watson Hendry built the first screw injection machine, which allowed much more precise control over the speed of injection and the quality of articles produced. This machine also allowed material to be mixed before injection, so that coloured or recycled plastic could be added to virgin material and mixed thoroughly before being injected. In the 1970s, Hendry went on to develop the first gas-assisted injection moulding process, which permitted the production of complex, hollow articles that cooled quickly. This greatly improved design flexibility as well as the strength and finish of manufactured parts while reducing production time, cost, weight and waste. By 1979, plastic production overtook steel production, and by 1990, aluminium moulds were widely used in injection moulding. Today, screw injection machines account for the vast majority of all injection machines.
The plastic injection moulding industry has evolved over the years from producing combs and buttons to producing a vast array of products for many industries including automotive, medical, aerospace, consumer products, toys, plumbing, packaging, and construction.
Examples of polymers best suited for the process
Most polymers, sometimes referred to as resins, may be used, including all thermoplastics, some thermosets, and some elastomers. Since 1995, the total number of available materials for injection moulding has increased at a rate of 750 per year; there were approximately 18,000 materials available when that trend began. Available materials include alloys or blends of previously developed materials, so product designers can choose the material with the best set of properties from a vast selection. Major criteria for selection of a material are the strength and function required for the final part, as well as the cost, but also each material has different parameters for moulding that must be taken into account. Other considerations when choosing an injection moulding material include flexural modulus of elasticity, or the degree to which a material can be bent without damage, as well as heat deflection and water absorption. Common polymers like epoxy and phenolic are examples of thermosetting plastics while nylon, polyethylene, and polystyrene are thermoplastic. Until comparatively recently, plastic springs were not possible, but advances in polymer properties make them now quite practical. Applications include buckles for anchoring and disconnecting outdoor-equipment webbing.
Equipment
Injection moulding machines consist of a material hopper, an injection ram or screw-type plunger, and a heating unit. Also known as presses, they hold the moulds in which the components are shaped. Presses are rated by tonnage, which expresses the amount of clamping force that the machine can exert. This force keeps the mould closed during the injection process. Tonnage can vary from less than 5 tons to over 9,000 tons, with the higher figures used in comparatively few manufacturing operations. The total clamp force needed is determined by the projected area of the part being moulded. This projected area is multiplied by a clamp force of 1.8 to 7.2 tons for each square centimetre of the projected areas. As a rule of thumb, 4 or 5 tons/in² can be used for most products. If the plastic material is very stiff, it requires more injection pressure to fill the mould, and thus more clamp tonnage to hold the mould closed. The required force can also be determined by the material used and the size of the part. Larger parts require higher clamping force.
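As a rough numerical illustration of the tonnage rule of thumb above (the part dimensions and the 4–5 tons/in² factors below are example values, not data for any particular machine), a short Python sketch:

```python
def required_clamp_tons(projected_area_in2, tons_per_in2=4.0):
    """Estimate the clamp force, in tons, from the part's projected area.

    Uses the rule of thumb of roughly 4-5 tons per square inch quoted above;
    stiffer plastics call for values at the top of (or above) that range.
    """
    return projected_area_in2 * tons_per_in2

# Example: a part with a 3 in x 5 in projected area (15 in^2).
area = 3.0 * 5.0
print(required_clamp_tons(area))        # 60.0 tons at 4 tons/in^2
print(required_clamp_tons(area, 5.0))   # 75.0 tons at 5 tons/in^2
```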
Mould
Mould or die are the common terms used to describe the tool used to produce plastic parts in moulding.
Since moulds have been expensive to manufacture, they were usually only used in mass production where thousands of parts were being produced. Typical moulds are constructed from hardened steel, pre-hardened steel, aluminium, and/or beryllium-copper alloy. The choice of material for the mold is not only based on cost considerations, but also has a lot to do with the product life cycle. In general, steel moulds cost more to construct, but their longer lifespan offsets the higher initial cost over a higher number of parts made before wearing out. Pre-hardened steel moulds are less wear-resistant and are used for lower volume requirements or larger components; their typical steel hardness is 38–45 on the Rockwell-C scale. Hardened steel moulds are heat treated after machining; these are by far superior in terms of wear resistance and lifespan. Typical hardness ranges between 50 and 60 Rockwell-C (HRC). Aluminium moulds can cost substantially less, and when designed and machined with modern computerised equipment can be economical for moulding tens or even hundreds of thousands of parts. Beryllium copper is used in areas of the mould that require fast heat removal or areas that see the most shear heat generated. The moulds can be manufactured either by CNC machining or by using electrical discharge machining processes.
Mould design
The mould consists of two primary components, the injection mould (A plate) and the ejector mould (B plate). These components are also referred to as moulder and mouldmaker. Plastic resin enters the mould through a sprue or gate in the injection mould; the sprue bushing is to seal tightly against the nozzle of the injection barrel of the moulding machine and to allow molten plastic to flow from the barrel into the mould, also known as the cavity. The sprue bushing directs the molten plastic to the cavity images through channels that are machined into the faces of the A and B plates. These channels allow plastic to run along them, so they are referred to as runners. The molten plastic flows through the runner and enters one or more specialised gates and into the cavity geometry to form the desired part.
The amount of resin required to fill the sprue, runner and cavities of a mould comprises a "shot". Trapped air in the mould can escape through air vents that are ground into the parting line of the mould, or around ejector pins and slides that are slightly smaller than the holes retaining them. If the trapped air is not allowed to escape, it is compressed by the pressure of the incoming material and squeezed into the corners of the cavity, where it prevents filling and can also cause other defects. The air can even become so compressed that it ignites and burns the surrounding plastic material.
To allow for removal of the moulded part from the mould, the mould features must not overhang one another in the direction that the mould opens, unless parts of the mould are designed to move from between such overhangs when the mould opens using components called Lifters.
Sides of the part that appear parallel with the direction of draw (the axis of the cored position (hole) or insert is parallel to the up and down movement of the mould as it opens and closes) are typically angled slightly, called draft, to ease release of the part from the mould. Insufficient draft can cause deformation or damage. The draft required for mould release is primarily dependent on the depth of the cavity; the deeper the cavity, the more draft necessary. Shrinkage must also be taken into account when determining the draft required. If the skin is too thin, then the moulded part tends to shrink onto the cores that form while cooling and cling to those cores, or the part may warp, twist, blister or crack when the cavity is pulled away.
A mould is usually designed so that the moulded part reliably remains on the ejector (B) side of the mould when it opens, and draws the runner and the sprue out of the (A) side along with the parts. The part then falls freely when ejected from the (B) side. Tunnel gates, also known as submarine or mould gates, are located below the parting line or mould surface. An opening is machined into the surface of the mould on the parting line. The moulded part is cut (by the mould) from the runner system on ejection from the mould. Ejector pins, also known as knockout pins, are circular pins placed in either half of the mould (usually the ejector half), which push the finished moulded product, or runner system, out of a mould. The ejection of the article using pins, sleeves, strippers, etc., may cause undesirable impressions or distortion, so care must be taken when designing the mould.
The standard method of cooling is passing a coolant (usually water) through a series of holes drilled through the mould plates and connected by hoses to form a continuous pathway. The coolant absorbs heat from the mould (which has absorbed heat from the hot plastic) and keeps the mould at a proper temperature to solidify the plastic at the most efficient rate.
To ease maintenance and venting, cavities and cores are divided into pieces, called inserts, and sub-assemblies, also called inserts, blocks, or chase blocks. By substituting interchangeable inserts, one mould may make several variations of the same part.
More complex parts are formed using more complex moulds. These may have sections called slides, that move into a cavity perpendicular to the draw direction, to form overhanging part features. When the mould is opened, the slides are pulled away from the plastic part by using stationary “angle pins” on the stationary mould half. These pins enter a slot in the slides and cause the slides to move backward when the moving half of the mould opens. The part is then ejected and the mould closes. The closing action of the mould causes the slides to move forward along the angle pins.
A mould can produce several copies of the same parts in a single "shot". The number of "impressions" in the mould of that part is often incorrectly referred to as cavitation. A tool with one impression is often called a single impression (cavity) mould. A mould with two or more cavities of the same parts is usually called a multiple impression (cavity) mould. (Not to be confused with "Multi-shot moulding" {which is dealt with in the next section.}) Some extremely high production volume moulds (like those for bottle caps) can have over 128 cavities.
In some cases, multiple cavity tooling moulds a series of different parts in the same tool. Some toolmakers call these moulds family moulds, as all the parts are related—e.g., plastic model kits.
Some moulds allow previously moulded parts to be reinserted to allow a new plastic layer to form around the first part. This is often referred to as overmoulding. This system can allow for production of one-piece tires and wheels.
Moulds for the highly precise and extremely small parts produced by micro injection moulding require extra care in the design stage, as material resins react differently compared to their full-sized counterparts: they must quickly fill incredibly small spaces, which puts them under intense shear strains.
Multi-shot moulding
Two-shot, double-shot or multi-shot moulds are designed to "overmould" within a single moulding cycle and must be processed on specialised injection moulding machines with two or more injection units. This process is actually an injection moulding process performed twice and therefore can allow only for a much smaller margin of error. In the first step, the base colour material is moulded into a basic shape, which contains spaces for the second shot. Then the second material, a different colour, is injection-moulded into those spaces. Pushbuttons and keys, for instance, made by this process have markings that cannot wear off, and remain legible with heavy use.
Mould storage
Manufacturers go to great lengths to protect custom moulds due to their high average costs. The perfect temperature and humidity levels are maintained to ensure the longest possible lifespan for each custom mould. Custom moulds, such as those used for rubber injection moulding, are stored in temperature and humidity controlled environments to prevent warping.
Tool materials
Tool steel is often used. Mild steel, aluminium, nickel or epoxy are suitable only for prototype or very short production runs. Modern hard aluminium (7075 and 2024 alloys) with proper mould design, can easily make moulds capable of 100,000 or more part life with proper mould maintenance.
Machining
Moulds are built through two main methods: standard machining and EDM. Standard machining, in its conventional form, has historically been the method of building injection moulds. With technological developments, CNC machining became the predominant means of making more complex moulds with more accurate mould details in less time than traditional methods.
The electrical discharge machining (EDM) or spark erosion process has become widely used in mould making. As well as allowing the formation of shapes that are difficult to machine, the process allows pre-hardened moulds to be shaped so that no heat treatment is required. Changes to a hardened mould by conventional drilling and milling normally require annealing to soften the mould, followed by heat treatment to harden it again. EDM is a simple process in which a shaped electrode, usually made of copper or graphite, is very slowly lowered, over a period of many hours, onto the mould surface, which is immersed in paraffin oil (kerosene). A voltage applied between tool and mould causes spark erosion of the mould surface in the inverse shape of the electrode.
Cost
The number of cavities incorporated into a mould directly correlates with moulding costs. Fewer cavities require far less tooling work, so limiting the number of cavities lowers initial manufacturing costs to build an injection mould.
As the number of cavities plays a vital role in moulding costs, so does the complexity of the part's design. Complexity encompasses many factors, such as surface finishing, tolerance requirements, internal or external threads, fine detailing, or the number of undercuts that may be incorporated.
Further details, such as undercuts, or any feature that needs additional tooling, increases mould cost. Surface finish of the core and cavity of moulds further influences cost.
Rubber injection moulding process produces a high yield of durable products, making it the most efficient and cost-effective method of moulding. Consistent vulcanisation processes involving precise temperature control significantly reduces all waste material.
Injection process
Usually, the plastic materials are formed in the shape of pellets or granules and sent from the raw material manufacturers in paper bags. With injection moulding, pre-dried granular plastic is fed by a forced ram from a hopper into a heated barrel. As the granules are slowly moved forward by a screw-type plunger, the plastic is forced into a heated chamber, where it is melted. As the plunger advances, the melted plastic is forced through a nozzle that rests against the mould, allowing it to enter the mould cavity through a gate and runner system. The mould remains cold so the plastic solidifies almost as soon as the mould is filled.
Injection moulding cycle
The sequence of events during the injection moulding of a plastic part is called the injection moulding cycle. The cycle begins when the mould closes, followed by the injection of the polymer into the mould cavity. Once the cavity is filled, a holding pressure is maintained to compensate for material shrinkage. In the next step, the screw turns, feeding the next shot to the front of the screw. This causes the screw to retract as the next shot is prepared. Once the part is sufficiently cool, the mould opens and the part is ejected.
Scientific versus traditional moulding
Traditionally, the injection portion of the moulding process was done at one constant pressure to fill and pack the cavity. This method, however, allowed for a large variation in dimensions from cycle-to-cycle. More commonly used now is scientific or decoupled moulding, a method pioneered by RJG Inc. In this the injection of the plastic is "decoupled" into stages to allow better control of part dimensions and more cycle-to-cycle (commonly called shot-to-shot in the industry) consistency. First the cavity is filled to approximately 98% full using velocity (speed) control. Although the pressure should be sufficient to allow for the desired speed, pressure limitations during this stage are undesirable. Once the cavity is 98% full, the machine switches from velocity control to pressure control, where the cavity is "packed out" at a constant pressure, where sufficient velocity to reach desired pressures is required. This lets workers control part dimensions to within thousandths of an inch or better.
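The switchover described above can be pictured as a two-stage control loop. The following Python-style sketch uses invented placeholder names (set_velocity, set_pressure, cavity_fill_fraction, gate_frozen) purely to illustrate the decoupled fill/pack logic; it is not the API of any real machine controller.

```python
def decoupled_injection(machine, transfer_fraction=0.98):
    """Illustrative fill/pack sequence for decoupled ("scientific") moulding.

    `machine` is assumed to expose hypothetical methods set_velocity(),
    set_pressure(), cavity_fill_fraction() and gate_frozen(); these are
    placeholders for whatever a real controller actually provides.
    """
    # Stage 1: fill roughly 98% of the cavity under velocity (speed) control.
    machine.set_velocity(machine.fill_velocity)
    while machine.cavity_fill_fraction() < transfer_fraction:
        pass  # keep injecting at constant velocity

    # Stage 2: switch to pressure control and pack out the remaining volume.
    machine.set_pressure(machine.pack_pressure)
    while not machine.gate_frozen():
        pass  # hold packing pressure until the gate solidifies
```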
Different types of injection moulding processes
Although most injection moulding processes are covered by the conventional process description above, there are several important moulding variations including, but not limited to:
Die casting
Metal injection moulding
Thin-wall injection moulding
Injection moulding of liquid silicone rubber
Reaction injection moulding
Micro injection moulding
Gas-assisted injection moulding
Cube mold technology
Multi-material injection molding
Process troubleshooting
Like all industrial processes, injection molding can produce flawed parts, even in toys. In the field of injection moulding, troubleshooting is often performed by examining defective parts for specific defects and addressing these defects with the design of the mould or the characteristics of the process itself. Trials are often performed before full production runs in an effort to predict defects and determine the appropriate specifications to use in the injection process.
When filling a new or unfamiliar mould for the first time, where the shot size for that mould is unknown, a technician/tool setter may perform a trial run before a full production run. They start with a small shot weight and fill gradually until the mould is 95 to 99% full. Once they achieve this, they apply a small amount of holding pressure and increase holding time until gate freeze off (solidification time) has occurred. Gate freeze off time can be determined by increasing the hold time, and then weighing the part. When the weight of the part does not change, the gate has frozen and no more material is injected into the part. Gate solidification time is important, as this determines cycle time and the quality and consistency of the product, which itself is an important issue in the economics of the production process. Holding pressure is increased until the parts are free of sinks and part weight has been achieved.
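The gate-seal (gate freeze) study described above amounts to a simple loop: increase the hold time until the part weight stops changing. A hypothetical Python sketch (the run_cycle method returning the part weight is an invented placeholder, not a real machine interface):

```python
def find_gate_freeze_time(mould, start_hold=1.0, step=0.5, tolerance=0.001):
    """Increase hold time until the part weight stabilises (gate frozen off).

    `mould.run_cycle(hold_time)` stands in for running one moulding cycle
    with the given hold time and returning the weight of the moulded part.
    """
    hold_time = start_hold
    last_weight = mould.run_cycle(hold_time)
    while True:
        hold_time += step
        weight = mould.run_cycle(hold_time)
        if abs(weight - last_weight) < tolerance:
            return hold_time  # weight no longer changes: the gate has frozen
        last_weight = weight
```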
Moulding defects
Injection moulding is a complex technology with possible production problems. They can be caused either by defects in the moulds, or more often by the moulding process itself.
Methods such as industrial CT scanning can help with finding these defects externally as well as internally.
Tolerances
Tolerance depends on the dimensions of the part. An example of a standard tolerance for a 1-inch dimension of an LDPE part with 0.125 inch wall thickness is +/- 0.008 inch (0.2 mm).
Power requirements
The power required for this process of injection moulding depends on many things and varies between materials used. Manufacturing Processes Reference Guide states that the power requirements depend on "a material's specific gravity, melting point, thermal conductivity, part size, and molding rate." Below is a table from page 243 of the same reference as previously mentioned that best illustrates the characteristics relevant to the power required for the most commonly used materials.
Robotic moulding
Automation means that the smaller size of parts permits a mobile inspection system to examine multiple parts more quickly. In addition to mounting inspection systems on automatic devices, multiple-axis robots can remove parts from the mould and position them for further processes.
Specific instances include removing of parts from the mould immediately after the parts are created, as well as applying machine vision systems. A robot grips the part after the ejector pins have been extended to free the part from the mould. It then moves them into either a holding location or directly onto an inspection system. The choice depends upon the type of product, as well as the general layout of the manufacturing equipment. Vision systems mounted on robots have greatly enhanced quality control for insert moulded parts. A mobile robot can more precisely determine the placement accuracy of the metal component, and inspect faster than a human can.
Gallery
See also
Craft
Design of plastic components
Direct injection expanded foam molding
Extrusion moulding
Fusible core injection moulding
Gravimetric blender
Hobby injection moulding
Injection mould construction
Matrix moulding
Multi-material injection moulding
Rapid Heat Cycle Molding
Reaction injection moulding
Rotational moulding
Urethane casting
References
Further reading
External links
Page information
Shrinkage and Warpage – Santa Clara University Engineering Design Center
Industrial design | Injection moulding | [
"Engineering"
] | 6,601 | [
"Industrial design",
"Design engineering",
"Design"
] |
325,996 | https://en.wikipedia.org/wiki/National%20Superconducting%20Cyclotron%20Laboratory | The National Superconducting Cyclotron Laboratory (NSCL), located on the campus of Michigan State University was a rare isotope research facility in the United States. Established in 1963, the cyclotron laboratory has been succeeded by the Facility for Rare Isotope Beams, a linear accelerator providing beam to the same detector halls.
NSCL was the nation's largest nuclear science facility on a university campus.
Funded primarily by the National Science Foundation and MSU, the NSCL operated two superconducting cyclotrons. The lab's scientists investigated the properties of rare isotopes and nuclear reactions. In nature, these reactions would take place in stars and exploding stellar environments such as novae and supernovae. The K1200 cyclotron was the highest-energy continuous beam accelerator in the world (as compared to synchrotrons such as the Large Hadron Collider which provide beam in "cycles").
The laboratory's primary goal was to understand the properties of atomic nuclei. Atomic nuclei are ten thousand times smaller than the atoms they reside in, but they contain nearly all the atom's mass (more than 99.9 percent).
Most of the atomic nuclei found on earth are stable, but there are many unstable and rare isotopes that exist in the universe, sometimes only for a fleeting moment in conditions of high pressure or temperature. The NSCL made and studied atomic nuclei that could not be found on earth. Rare isotope research is essential for understanding how the elements—and ultimately the universe—were formed.
The nuclear physics graduate program at MSU was ranked best in America by the 2018 Best Grad Schools index published by U.S. News & World Report.
Laboratory upgrades
The upgrade plans are in close alignment with a report issued December 2006 by the National Academies, "Scientific Opportunities with a Rare-Isotope Facility in the United States", which defines a scientific agenda for a U.S.-based rare-isotope facility and addresses the need for such a facility in context of international efforts in this area. NSCL is working towards a significant capability upgrade that will keep the laboratory – and nuclear science – at the cutting edge well into the 21st century.
The upgrade of NSCL – the $750 million Facility for Rare Isotope Beams (FRIB), under construction as of 2020 – will boost intensities and varieties of rare isotope beams produced at MSU by replacing the K500 and K1200 cyclotrons with a powerful linear accelerator to be built beneath the ground. Such beams will allow researchers and students to continue to address a host of questions at the intellectual frontier of nuclear science: How does the behavior of novel and short-lived nuclei differ from more stable nuclei? What is the nature of nuclear processes in explosive stellar environments? What is the structure of hot nuclear matter at abnormal densities?
Beyond basic research, FRIB may lead to cross-disciplinary benefits. Experiments there will help astronomers better interpret data from ground- and space-based observatories. Scientists at the Isotope Science Facility will contribute to research on self-organization and complexity arising from elementary interactions, a topic relevant to the life sciences and quantum computing. Additionally, the facility's capabilities may lead to advances in fields as diverse as biomedicine, materials science, national and international security, and nuclear energy.
Joint Institute for Nuclear Astrophysics
The Joint Institute for Nuclear Astrophysics (JINA) is a collaboration between Michigan State University, the University of Notre Dame, and the University of Chicago to address a broad range of experimental, theoretical, and observational questions in nuclear astrophysics. A portion of the Michigan State collaboration is housed at the National Superconducting Cyclotron Laboratory, directly involving roughly 30 nuclear physicists and astrophysicists.
See also
CERN
Cyclotron
Elementary particle
FRIB
Gesellschaft für Schwerionenforschung
Particle physics
Particle accelerator
References
External links
Isotope Science Facility at Michigan State University
"Scientific Opportunities with a Rare-Isotope Facility in the United States" A report by the National Academies
"Nuclear science hits new frontiers" A commentary by NSCL Director C. Konrad Gelbke in the December 2006 CERN Courier
“NSCL nets $100 million in NSF funds” MSU News Bulletin, Oct. 26, 2006
The Spartan Podcast – Arden L. Bement, Jr. An audio interview with NSF Director Arden L. Bement Jr. who visited MSU Oct. 26, 2006 to award NSCL more than $100 million to fund operations through 2011, highlighting the lab's status as a world-leading nuclear science facility
Online Tour of Isotope Science Facility at Michigan State University
Michigan State University
Michigan State University campus
Nuclear research institutes
Research institutes in Michigan | National Superconducting Cyclotron Laboratory | [
"Engineering"
] | 959 | [
"Nuclear research institutes",
"Nuclear organizations"
] |
326,182 | https://en.wikipedia.org/wiki/Isoperimetric%20inequality | In mathematics, the isoperimetric inequality is a geometric inequality involving the square of the circumference of a closed curve in the plane and the area of a plane region it encloses, as well as its various generalizations. Isoperimetric literally means "having the same perimeter". Specifically, the isoperimetric inequality states, for the length L of a closed curve and the area A of the planar region that it encloses, that
$L^2 \geq 4\pi A,$
and that equality holds if and only if the curve is a circle.
The isoperimetric problem is to determine a plane figure of the largest possible area whose boundary has a specified length. The closely related Dido's problem asks for a region of the maximal area bounded by a straight line and a curvilinear arc whose endpoints belong to that line. It is named after Dido, the legendary founder and first queen of Carthage. The solution to the isoperimetric problem is given by a circle and was known already in Ancient Greece. However, the first mathematically rigorous proof of this fact was obtained only in the 19th century. Since then, many other proofs have been found.
The isoperimetric problem has been extended in multiple ways, for example, to curves on surfaces and to regions in higher-dimensional spaces. Perhaps the most familiar physical manifestation of the 3-dimensional isoperimetric inequality is the shape of a drop of water. Namely, a drop will typically assume a symmetric round shape. Since the amount of water in a drop is fixed, surface tension forces the drop into a shape which minimizes the surface area of the drop, namely a round sphere.
The isoperimetric problem in the plane
The classical isoperimetric problem dates back to antiquity. The problem can be stated as follows: Among all closed curves in the plane of fixed perimeter, which curve (if any) maximizes the area of its enclosed region? This question can be shown to be equivalent to the following problem: Among all closed curves in the plane enclosing a fixed area, which curve (if any) minimizes the perimeter?
This problem is conceptually related to the principle of least action in physics, in that it can be restated: what is the principle of action which encloses the greatest area, with the greatest economy of effort? The 15th-century philosopher and scientist, Cardinal Nicholas of Cusa, considered rotational action, the process by which a circle is generated, to be the most direct reflection, in the realm of sensory impressions, of the process by which the universe is created. German astronomer and astrologer Johannes Kepler invoked the isoperimetric principle in discussing the morphology of the solar system, in Mysterium Cosmographicum (The Sacred Mystery of the Cosmos, 1596).
Although the circle appears to be an obvious solution to the problem, proving this fact is rather difficult. The first progress toward the solution was made by Swiss geometer Jakob Steiner in 1838, using a geometric method later named Steiner symmetrisation. Steiner showed that if a solution existed, then it must be the circle. Steiner's proof was completed later by several other mathematicians.
Steiner begins with some geometric constructions which are easily understood; for example, it can be shown that any closed curve enclosing a region that is not fully convex can be modified to enclose more area, by "flipping" the concave areas so that they become convex. It can further be shown that any closed curve which is not fully symmetrical can be "tilted" so that it encloses more area. The one shape that is perfectly convex and symmetrical is the circle, although this, in itself, does not represent a rigorous proof of the isoperimetric theorem (see external links).
On a plane
The solution to the isoperimetric problem is usually expressed in the form of an inequality that relates the length L of a closed curve and the area A of the planar region that it encloses. The isoperimetric inequality states that
$L^2 \geq 4\pi A,$
and that the equality holds if and only if the curve is a circle. The area of a disk of radius R is $\pi R^2$ and the circumference of the circle is $2\pi R$, so both sides of the inequality are equal to $4\pi^2 R^2$ in this case.
Dozens of proofs of the isoperimetric inequality have been found. In 1902, Hurwitz published a short proof using the Fourier series that applies to arbitrary rectifiable curves (not assumed to be smooth). An elegant direct proof based on comparison of a smooth simple closed curve with an appropriate circle was given by E. Schmidt in 1938. It uses only the arc length formula, expression for the area of a plane region from Green's theorem, and the Cauchy–Schwarz inequality.
For a given closed curve, the isoperimetric quotient is defined as the ratio of its area and that of the circle having the same perimeter. This is equal to
$Q = \frac{4\pi A}{L^2}$
and the isoperimetric inequality says that Q ≤ 1. Equivalently, the isoperimetric ratio $L^2/A$ is at least $4\pi$ for every curve.
The isoperimetric quotient of a regular n-gon is
$Q_n = \frac{\pi}{n \tan(\pi/n)}.$
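A quick numerical check of this quotient (a short Python sketch written for this text, not part of the article) computes Q for regular n-gons directly from their perimeter and area and compares it with π/(n·tan(π/n)):

```python
import math

def isoperimetric_quotient(n, side=1.0):
    """Q = 4*pi*A / L^2 for a regular n-gon with the given side length."""
    perimeter = n * side
    area = n * side ** 2 / (4 * math.tan(math.pi / n))  # standard n-gon area
    return 4 * math.pi * area / perimeter ** 2

for n in (3, 4, 6, 12, 100):
    q_direct = isoperimetric_quotient(n)
    q_closed_form = math.pi / (n * math.tan(math.pi / n))
    print(n, round(q_direct, 6), round(q_closed_form, 6))

# Q increases toward 1 as n grows, consistent with Q <= 1 and with equality
# only in the limiting case of the circle.
```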
Let $\gamma$ be a smooth regular convex closed curve. Then the improved isoperimetric inequality states the following:
$L^2 \geq 4\pi A + 8\pi |\widetilde{A}|,$
where $L$, $A$ and $\widetilde{A}$ denote the length of $\gamma$, the area of the region bounded by $\gamma$ and the oriented area of the Wigner caustic of $\gamma$, respectively, and the equality holds if and only if $\gamma$ is a curve of constant width.
On a sphere
Let C be a simple closed curve on a sphere of radius 1. Denote by L the length of C and by A the area enclosed by C. The spherical isoperimetric inequality states that
$L^2 \geq A(4\pi - A),$
and that the equality holds if and only if the curve is a circle. There are, in fact, two ways to measure the spherical area enclosed by a simple closed curve, but the inequality is symmetric with respect to taking the complement.
This inequality was discovered by Paul Lévy (1919) who also extended it to higher dimensions and general surfaces.
In the more general case of arbitrary radius R, it is known that
$L^2 \geq 4\pi A - \frac{A^2}{R^2}.$
In Euclidean space
The isoperimetric inequality states that a sphere has the smallest surface area per given volume. Given a bounded open set $S \subset \mathbb{R}^{n}$ with boundary $\partial S$, having surface area $\operatorname{per}(S)$ and volume $\operatorname{vol}(S)$, the isoperimetric inequality states
$\operatorname{per}(S) \geq n \operatorname{vol}(S)^{\frac{n-1}{n}} \operatorname{vol}(B_{1})^{\frac{1}{n}},$
where $B_{1} \subset \mathbb{R}^{n}$ is a unit ball. The equality holds when $S$ is a ball in $\mathbb{R}^{n}$. Under additional restrictions on the set (such as convexity, regularity, smooth boundary), the equality holds for a ball only. But in full generality the situation is more complicated. The relevant result (for which a simpler proof is also available) is clarified as follows. An extremal set consists of a ball and a "corona" that contributes neither to the volume nor to the surface area. That is, the equality holds for a compact set $S$ if and only if $S$ contains a closed ball $B$ such that $\operatorname{vol}(B) = \operatorname{vol}(S)$ and $\operatorname{per}(B) = \operatorname{per}(S)$. For example, the "corona" may be a curve.
The proof of the inequality follows directly from the Brunn–Minkowski inequality between a set $S$ and a ball with radius $\varepsilon$, i.e. $\varepsilon B_{1}$: raising the Brunn–Minkowski inequality to the power $n$, subtracting $\operatorname{vol}(S)$ from both sides, dividing by $\varepsilon$, and taking the limit as $\varepsilon \to 0^{+}$ yields the inequality above.
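One standard way to write out this limiting argument (our own rendering, with $S$ a bounded set, $B$ the unit ball, $|\cdot|$ the volume, and the surface area taken as the Minkowski content of the boundary) is:

```latex
% Sketch of the Brunn--Minkowski argument described above (our own rendering).
\[
|S + \varepsilon B|^{1/n} \;\ge\; |S|^{1/n} + \varepsilon\,|B|^{1/n}
\qquad \text{(Brunn--Minkowski)}
\]
\[
|S + \varepsilon B| \;\ge\; |S| + n\,\varepsilon\,|S|^{\frac{n-1}{n}}\,|B|^{\frac{1}{n}} + O(\varepsilon^{2}),
\]
\[
\operatorname{per}(S) \;=\; \liminf_{\varepsilon \to 0^{+}}
\frac{|S + \varepsilon B| - |S|}{\varepsilon}
\;\ge\; n\,|B|^{\frac{1}{n}}\,|S|^{\frac{n-1}{n}} .
\]
```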
In full generality, the isoperimetric inequality states that for any set $S \subset \mathbb{R}^{n}$ whose closure has finite Lebesgue measure
$n \, \omega_{n}^{\frac{1}{n}} \, L^{n}(\bar{S})^{\frac{n-1}{n}} \leq M^{n-1}(\partial S),$
where $M^{n-1}(\partial S)$ is the (n−1)-dimensional Minkowski content, $L^{n}$ is the n-dimensional Lebesgue measure, and $\omega_{n}$ is the volume of the unit ball in $\mathbb{R}^{n}$.
The n-dimensional isoperimetric inequality is equivalent (for sufficiently smooth domains) to the Sobolev inequality on $\mathbb{R}^{n}$ with optimal constant:
$\left( \int_{\mathbb{R}^{n}} |u|^{\frac{n}{n-1}} \right)^{\frac{n-1}{n}} \leq n^{-1} \omega_{n}^{-\frac{1}{n}} \int_{\mathbb{R}^{n}} |\nabla u|$
for all $u \in W^{1,1}(\mathbb{R}^{n})$.
In Hadamard manifolds
Hadamard manifolds are complete simply connected manifolds with nonpositive curvature. Thus they generalize the Euclidean space $\mathbb{R}^{n}$, which is a Hadamard manifold with curvature zero. In the 1970s and early 1980s, Thierry Aubin, Misha Gromov, Yuri Burago, and Viktor Zalgaller conjectured that the Euclidean isoperimetric inequality
holds for bounded sets in Hadamard manifolds, which has become known as the Cartan–Hadamard conjecture.
In dimension 2 this had already been established in 1926 by André Weil, who was a student of Hadamard at the time.
In dimensions 3 and 4 the conjecture was proved by Bruce Kleiner in 1992, and Chris Croke in 1984 respectively.
In a metric measure space
Most of the work on the isoperimetric problem has been done in the context of smooth regions in Euclidean spaces, or more generally, in Riemannian manifolds. However, the isoperimetric problem can be formulated in much greater generality, using the notion of Minkowski content. Let $(X, d, \mu)$ be a metric measure space: X is a metric space with metric d, and μ is a Borel measure on X. The boundary measure, or Minkowski content, of a measurable subset A of X is defined as the lim inf
$\mu^{+}(\partial A) = \liminf_{\varepsilon \to 0^{+}} \frac{\mu(A_{\varepsilon}) - \mu(A)}{\varepsilon},$
where
$A_{\varepsilon} = \{ x \in X : d(x, A) \leq \varepsilon \}$
is the ε-extension of A.
The isoperimetric problem in X asks how small $\mu^{+}(\partial A)$ can be for a given μ(A). If X is the Euclidean plane with the usual distance and the Lebesgue measure, then this question generalizes the classical isoperimetric problem to planar regions whose boundary is not necessarily smooth, although the answer turns out to be the same.
The function
$I(a) = \inf \{ \mu^{+}(\partial A) : \mu(A) = a \}$
is called the isoperimetric profile of the metric measure space $(X, d, \mu)$. Isoperimetric profiles have been studied for Cayley graphs of discrete groups and for special classes of Riemannian manifolds (where usually only regions A with regular boundary are considered).
For graphs
In graph theory, isoperimetric inequalities are at the heart of the study of expander graphs, which are sparse graphs that have strong connectivity properties. Expander constructions have spawned research in pure and applied mathematics, with several applications to complexity theory, design of robust computer networks, and the theory of error-correcting codes.
Isoperimetric inequalities for graphs relate the size of vertex subsets to the size of their boundary, which is usually measured by the number of edges leaving the subset (edge expansion) or by the number of neighbouring vertices (vertex expansion). For a graph $G = (V, E)$ and a number $k$, the following are two standard isoperimetric parameters for graphs.
The edge isoperimetric parameter: $\Phi_{E}(G, k) = \min_{S \subseteq V,\, |S| = k} |\partial_{E}(S)|$
The vertex isoperimetric parameter: $\Phi_{V}(G, k) = \min_{S \subseteq V,\, |S| = k} |\partial_{V}(S)|$
Here $\partial_{E}(S)$ denotes the set of edges leaving $S$ and $\partial_{V}(S)$ denotes the set of vertices that have a neighbour in $S$. The isoperimetric problem consists of understanding how the parameters $\Phi_{E}$ and $\Phi_{V}$ behave for natural families of graphs.
Example: Isoperimetric inequalities for hypercubes
The $d$-dimensional hypercube $Q_{d}$ is the graph whose vertices are all Boolean vectors of length $d$, that is, the set $\{0, 1\}^{d}$. Two such vectors are connected by an edge in $Q_{d}$ if they are equal up to a single bit flip, that is, their Hamming distance is exactly one.
The following are the isoperimetric inequalities for the Boolean hypercube.
Edge isoperimetric inequality
The edge isoperimetric inequality of the hypercube is $\Phi_{E}(Q_{d}, k) \geq k (d - \log_{2} k)$. This bound is tight, as is witnessed by each set $S$ that is the set of vertices of any subcube of $Q_{d}$.
Vertex isoperimetric inequality
Harper's theorem says that Hamming balls have the smallest vertex boundary among all sets of a given size. Hamming balls are sets that contain all points of Hamming weight at most $r$ and no points of Hamming weight larger than $r + 1$ for some integer $r$. This theorem implies that any set $S \subseteq V$ with
$|S| \geq \sum_{i=0}^{r} \binom{d}{i}$
satisfies
$|S \cup \partial_{V}(S)| \geq \sum_{i=0}^{r+1} \binom{d}{i}.$
As a special case, consider set sizes of the form
$k = \sum_{i=0}^{r} \binom{d}{i}$
for some integer $r$. Then the above implies that the exact vertex isoperimetric parameter is
$\Phi_{V}(Q_{d}, k) = \binom{d}{r+1}.$
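For small dimensions these parameters can be checked by brute force. The following Python sketch (written for this note; the search is exponential, so only tiny d are feasible) enumerates all vertex subsets of a given size and measures their edge and vertex boundaries:

```python
from itertools import combinations

def neighbours(v, d):
    """Neighbours of vertex v in the d-dimensional hypercube (one bit flip)."""
    return [v ^ (1 << i) for i in range(d)]

def edge_boundary(S, d):
    S = set(S)
    return sum(1 for v in S for w in neighbours(v, d) if w not in S)

def vertex_boundary(S, d):
    S = set(S)
    return len({w for v in S for w in neighbours(v, d) if w not in S})

def iso_parameters(d, k):
    """Brute-force edge and vertex isoperimetric parameters of Q_d at size k."""
    vertices = range(2 ** d)  # each integer encodes a Boolean vector
    best_edge = min(edge_boundary(S, d) for S in combinations(vertices, k))
    best_vertex = min(vertex_boundary(S, d) for S in combinations(vertices, k))
    return best_edge, best_vertex

# Example: in Q_4 a 2-dimensional subcube has k = 4 vertices and edge boundary
# k * (d - log2 k) = 4 * (4 - 2) = 8, which the brute-force minimum matches.
print(iso_parameters(4, 4))
```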
Isoperimetric inequality for triangles
The isoperimetric inequality for triangles in terms of perimeter p and area T states that
$p^{2} \geq 12\sqrt{3} \, T,$
with equality for the equilateral triangle. This is implied, via the AM–GM inequality, by a stronger inequality which has also been called the isoperimetric inequality for triangles:
$T \leq \frac{\sqrt{3}}{4} (abc)^{2/3},$
where a, b and c are the side lengths of the triangle.
See also
Blaschke–Lebesgue theorem
Chaplygin problem: isoperimetric problem is a zero wind speed case of Chaplygin problem
Curve-shortening flow
Expander graph
Gaussian isoperimetric inequality
Isoperimetric dimension
Isoperimetric point
List of triangle inequalities
Planar separator theorem
Mixed volume
Notes
References
Blaschke and Leichtweiß, Elementare Differentialgeometrie (in German), 5th edition, completely revised by K. Leichtweiß. Die Grundlehren der mathematischen Wissenschaften, Band 1. Springer-Verlag, New York Heidelberg Berlin, 1973
Gromov, M.: "Paul Levy's isoperimetric inequality". Appendix C in Metric structures for Riemannian and non-Riemannian spaces. Based on the 1981 French original. With appendices by M. Katz, P. Pansu and S. Semmes. Translated from the French by Sean Michael Bates. Progress in Mathematics, 152. Birkhäuser Boston, Inc., Boston, Massachusetts, 1999.
External links
History of the Isoperimetric Problem at Convergence
Treiberg: Several proofs of the isoperimetric inequality
Isoperimetric Theorem at cut-the-knot
Analytic geometry
Calculus of variations
Geometric inequalities
Multivariable calculus
Theorems in measure theory | Isoperimetric inequality | [
"Mathematics"
] | 2,804 | [
"Theorems in mathematical analysis",
"Theorems in measure theory",
"Calculus",
"Geometric inequalities",
"Inequalities (mathematics)",
"Theorems in geometry",
"Multivariable calculus"
] |
326,298 | https://en.wikipedia.org/wiki/Power%20center%20%28geometry%29 | In geometry, the power center of three circles, also called the radical center, is the intersection point of the three radical axes of the pairs of circles. If the radical center lies outside of all three circles, then it is the center of the unique circle (the radical circle) that intersects the three given circles orthogonally; the construction of this orthogonal circle corresponds to Monge's problem. This is a special case of the three conics theorem.
The three radical axes meet in a single point, the radical center, for the following reason. The radical axis of a pair of circles is defined as the set of points that have equal power with respect to both circles. For example, for every point P on the radical axis of circles 1 and 2, the powers to each circle are equal: $h_{1}(P) = h_{2}(P)$. Similarly, for every point on the radical axis of circles 2 and 3, the powers must be equal, $h_{2}(P) = h_{3}(P)$. Therefore, at the intersection point of these two lines, all three powers must be equal, $h_{1}(P) = h_{2}(P) = h_{3}(P)$. Since this implies that $h_{1}(P) = h_{3}(P)$, this point must also lie on the radical axis of circles 1 and 3. Hence, all three radical axes pass through the same point, the radical center.
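Numerically, the radical center can be found by intersecting two of the radical axes, each of which is a linear equation obtained by equating powers. The Python sketch below (the three circles are arbitrary example values) does this and then checks that all three powers agree at the resulting point:

```python
def power(p, circle):
    """Power of the point p = (x, y) with respect to circle = (cx, cy, r)."""
    (x, y), (cx, cy, r) = p, circle
    return (x - cx) ** 2 + (y - cy) ** 2 - r ** 2

def radical_center(c1, c2, c3):
    """Intersection of the radical axes of (c1, c2) and (c1, c3)."""
    def axis(a, b):
        # Equating the powers with respect to a and b gives A*x + B*y = C.
        (ax, ay, ar), (bx, by, br) = a, b
        return (2 * (bx - ax), 2 * (by - ay),
                (bx ** 2 + by ** 2 - br ** 2) - (ax ** 2 + ay ** 2 - ar ** 2))
    A1, B1, C1 = axis(c1, c2)
    A2, B2, C2 = axis(c1, c3)
    det = A1 * B2 - A2 * B1  # zero exactly when the three centers are collinear
    return ((C1 * B2 - C2 * B1) / det, (A1 * C2 - A2 * C1) / det)

c1, c2, c3 = (0, 0, 1), (4, 0, 2), (1, 3, 1.5)  # example circles
rc = radical_center(c1, c2, c3)
print(rc, [round(power(rc, c), 9) for c in (c1, c2, c3)])  # equal powers
```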
The radical center has several applications in geometry. It has an important role in a solution to Apollonius' problem published by Joseph Diaz Gergonne in 1814. In the power diagram of a system of circles, all of the vertices of the diagram are located at radical centers of triples of circles. The Spieker center of a triangle is the radical center of its excircles. Several types of radical circles have been defined as well, such as the radical circle of the Lucas circles.
Notes
Further reading
External links
Radical Center at Cut-the-Knot
Radical Axis and Center at Cut-the-Knot
Elementary geometry
Geometric centers | Power center (geometry) | [
"Physics",
"Mathematics"
] | 366 | [
"Point (geometry)",
"Geometric centers",
"Elementary mathematics",
"Elementary geometry",
"Symmetry"
] |
326,365 | https://en.wikipedia.org/wiki/Reverse%20mathematics | Reverse mathematics is a program in mathematical logic that seeks to determine which axioms are required to prove theorems of mathematics. Its defining method can briefly be described as "going backwards from the theorems to the axioms", in contrast to the ordinary mathematical practice of deriving theorems from axioms. It can be conceptualized as sculpting out necessary conditions from sufficient ones.
The reverse mathematics program was foreshadowed by results in set theory such as the classical theorem that the axiom of choice and Zorn's lemma are equivalent over ZF set theory. The goal of reverse mathematics, however, is to study possible axioms of ordinary theorems of mathematics rather than possible axioms for set theory.
Reverse mathematics is usually carried out using subsystems of second-order arithmetic, where many of its definitions and methods are inspired by previous work in constructive analysis and proof theory. The use of second-order arithmetic also allows many techniques from recursion theory to be employed; many results in reverse mathematics have corresponding results in computable analysis. In higher-order reverse mathematics, the focus is on subsystems of higher-order arithmetic, and the associated richer language.
The program was founded by Harvey Friedman and brought forward by Steve Simpson. A standard reference for the subject is , while an introduction for non-specialists is . An introduction to higher-order reverse mathematics, and also the founding paper, is . A comprehensive introduction, covering major results and methods, is
General principles
In reverse mathematics, one starts with a framework language and a base theory—a core axiom system—that is too weak to prove most of the theorems one might be interested in, but still powerful enough to develop the definitions necessary to state these theorems. For example, to study the theorem “Every bounded sequence of real numbers has a supremum” it is necessary to use a base system that can speak of real numbers and sequences of real numbers.
For each theorem that can be stated in the base system but is not provable in the base system, the goal is to determine the particular axiom system (stronger than the base system) that is necessary to prove that theorem. To show that a system S is required to prove a theorem T, two proofs are required. The first proof shows T is provable from S; this is an ordinary mathematical proof along with a justification that it can be carried out in the system S. The second proof, known as a reversal, shows that T itself implies S; this proof is carried out in the base system. The reversal establishes that no axiom system S′ that extends the base system can be weaker than S while still proving T.
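Schematically, writing B for the base system and treating the stronger axiom system S as a single sentence over B (a simplifying assumption made here for illustration), the two halves of such a result combine into an equivalence over the base theory:

```latex
\bigl(B \vdash S \rightarrow T\bigr)
\ \text{ and }\
\bigl(B \vdash T \rightarrow S\bigr)
\quad\Longrightarrow\quad
B \vdash S \leftrightarrow T .
```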
Use of second-order arithmetic
Most reverse mathematics research focuses on subsystems of second-order arithmetic. The body of research in reverse mathematics has established that weak subsystems of second-order arithmetic suffice to formalize almost all undergraduate-level mathematics. In second-order arithmetic, all objects can be represented as either natural numbers or sets of natural numbers. For example, in order to prove theorems about real numbers, the real numbers can be represented as Cauchy sequences of rational numbers, each of which can in turn be represented as a set of natural numbers.
The axiom systems most often considered in reverse mathematics are defined using axiom schemes called comprehension schemes. Such a scheme states that any set of natural numbers definable by a formula of a given complexity exists. In this context, the complexity of formulas is measured using the arithmetical hierarchy and analytical hierarchy.
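For illustration (this is the standard shape of such a scheme rather than a formula quoted from a particular source), a comprehension scheme asserts, for every formula φ of the permitted complexity in which the set variable X does not occur freely:

```latex
\exists X \,\forall n \,\bigl(\, n \in X \leftrightarrow \varphi(n) \,\bigr).
```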
The reason that reverse mathematics is not carried out using set theory as a base system is that the language of set theory is too expressive. Extremely complex sets of natural numbers can be defined by simple formulas in the language of set theory (which can quantify over arbitrary sets). In the context of second-order arithmetic, results such as Post's theorem establish a close link between the complexity of a formula and the (non)computability of the set it defines.
Another effect of using second-order arithmetic is the need to restrict general mathematical theorems to forms that can be expressed within arithmetic. For example, second-order arithmetic can express the principle "Every countable vector space has a basis" but it cannot express the principle "Every vector space has a basis". In practical terms, this means that theorems of algebra and combinatorics are restricted to countable structures, while theorems of analysis and topology are restricted to separable spaces. Many principles that imply the axiom of choice in their general form (such as "Every vector space has a basis") become provable in weak subsystems of second-order arithmetic when they are restricted. For example, "every field has an algebraic closure" is not provable in ZF set theory, but the restricted form "every countable field has an algebraic closure" is provable in RCA0, the weakest system typically employed in reverse mathematics.
Use of higher-order arithmetic
A recent strand of higher-order reverse mathematics research, initiated by Ulrich Kohlenbach in 2005, focuses on subsystems of higher-order arithmetic.
Due to the richer language of higher-order arithmetic, the use of representations (aka 'codes') common in second-order arithmetic, is greatly reduced.
For example, a continuous function on the Cantor space is just a function that maps binary sequences to binary sequences, and that also satisfies the usual 'epsilon-delta'-definition of continuity.
Higher-order reverse mathematics includes higher-order versions of (second-order) comprehension schemes. Such a higher-order axiom states the existence of a functional that decides the truth or falsity of formulas of a given complexity. In this context, the complexity of formulas is also measured using the arithmetical hierarchy and analytical hierarchy. The higher-order counterparts of the major subsystems of second-order arithmetic generally prove the same second-order sentences (or a large subset) as the original second-order systems. For instance, the base theory of higher-order reverse mathematics, usually denoted RCAω0, proves the same sentences as RCA0, up to language.
As noted in the previous paragraph, second-order comprehension axioms easily generalize to the higher-order framework. However, theorems expressing the compactness of basic spaces behave quite differently in second- and higher-order arithmetic: on one hand, when restricted to countable covers/the language of second-order arithmetic, the compactness of the unit interval is provable in WKL0 from the next section. On the other hand, given uncountable covers/the language of higher-order arithmetic, the compactness of the unit interval is only provable from (full) second-order arithmetic. Other covering lemmas (e.g. due to Lindelöf, Vitali, Besicovitch, etc.) exhibit the same behavior, and many basic properties of the gauge integral are equivalent to the compactness of the underlying space.
The big five subsystems of second-order arithmetic
Second-order arithmetic is a formal theory of the natural numbers and sets of natural numbers. Many mathematical objects, such as countable rings, groups, and fields, as well as points in effective Polish spaces, can be represented as sets of natural numbers, and modulo this representation can be studied in second-order arithmetic.
Reverse mathematics makes use of several subsystems of second-order arithmetic. A typical reverse mathematics theorem shows that a particular mathematical theorem T is equivalent to a particular subsystem S of second-order arithmetic over a weaker subsystem B. This weaker system B is known as the base system for the result; in order for the reverse mathematics result to have meaning, this system must not itself be able to prove the mathematical theorem T.
Steve Simpson describes five particular subsystems of second-order arithmetic, which he calls the Big Five, that occur frequently in reverse mathematics. In order of increasing strength, these systems are named by the initialisms RCA0, WKL0, ACA0, ATR0, and Π-CA0.
The following table summarizes the "big five" systems and lists the counterpart systems in higher-order arithmetic.
The latter generally prove the same second-order sentences (or a large subset) as the original second-order systems.
The subscript 0 in these names means that the induction scheme has been restricted from the full second-order induction scheme. For example, ACA0 includes the induction axiom (0 ∈ X ∧ ∀n (n ∈ X → n + 1 ∈ X)) → ∀n (n ∈ X). This together with the full comprehension axiom of second-order arithmetic implies the full second-order induction scheme given by the universal closure of (φ(0) ∧ ∀n (φ(n) → φ(n + 1))) → ∀n φ(n) for any second-order formula φ. However ACA0 does not have the full comprehension axiom, and the subscript 0 is a reminder that it does not have the full second-order induction scheme either. This restriction is important: systems with restricted induction have significantly lower proof-theoretical ordinals than systems with the full second-order induction scheme.
The base system RCA0
RCA0 is the fragment of second-order arithmetic whose axioms are the axioms of Robinson arithmetic, induction for Σ formulas, and comprehension for Δ formulas.
The subsystem RCA0 is the one most commonly used as a base system for reverse mathematics. The initials "RCA" stand for "recursive comprehension axiom", where "recursive" means "computable", as in recursive function. This name is used because RCA0 corresponds informally to "computable mathematics". In particular, any set of natural numbers that can be proven to exist in RCA0 is computable, and thus any theorem that implies that noncomputable sets exist is not provable in RCA0. To this extent, RCA0 is a constructive system, although it does not meet the requirements of the program of constructivism because it is a theory in classical logic including the law of excluded middle.
Despite its seeming weakness (of not proving any non-computable sets exist), RCA0 is sufficient to prove a number of classical theorems which, therefore, require only minimal logical strength. These theorems are, in a sense, below the reach of the reverse mathematics enterprise because they are already provable in the base system. The classical theorems provable in RCA0 include:
Basic properties of the natural numbers, integers, and rational numbers (for example, that the latter form an ordered field).
Basic properties of the real numbers (the real numbers are an Archimedean ordered field; any nested sequence of closed intervals whose lengths tend to zero has a single point in its intersection; the real numbers are not countable).Section II.4
The Baire category theorem for a complete separable metric space (the separability condition is necessary to even state the theorem in the language of second-order arithmetic).theorem II.5.8
The intermediate value theorem on continuous real functions.theorem II.6.6
The Banach–Steinhaus theorem for a sequence of continuous linear operators on separable Banach spaces.theorem II.10.8
A weak version of Gödel's completeness theorem (for a set of sentences, in a countable language, that is already closed under consequence).
The existence of an algebraic closure for a countable field (but not its uniqueness).II.9.4--II.9.8
The existence and uniqueness of the real closure of a countable ordered field.II.9.5, II.9.7
The first-order part of RCA0 (the theorems of the system that do not involve any set variables) is the set of theorems of first-order Peano arithmetic with induction limited to Σ formulas. It is provably consistent, as is RCA0, in full first-order Peano arithmetic.
Weak Kőnig's lemma WKL0
The subsystem WKL0 consists of RCA0 plus a weak form of Kőnig's lemma, namely the statement that every infinite subtree of the full binary tree (the tree of all finite sequences of 0's and 1's) has an infinite path. This proposition, which is known as weak Kőnig's lemma, is easy to state in the language of second-order arithmetic. WKL0 can also be defined as the principle of Σ separation (given two Σ formulas of a free variable n that are exclusive, there is a set containing all n satisfying the one and no n satisfying the other). When this axiom is added to RCA0, the resulting subsystem is called WKL0. A similar distinction between particular axioms on the one hand, and subsystems including the basic axioms and induction on the other hand, is made for the stronger subsystems described below.
In a sense, weak Kőnig's lemma is a form of the axiom of choice (although, as stated, it can be proven in classical Zermelo–Fraenkel set theory without the axiom of choice). It is not constructively valid in some senses of the word "constructive".
To show that WKL0 is actually stronger than (not provable in) RCA0, it is sufficient to exhibit a theorem of WKL0 that implies that noncomputable sets exist. This is not difficult; WKL0 implies the existence of separating sets for effectively inseparable recursively enumerable sets.
It turns out that RCA0 and WKL0 have the same first-order part, meaning that they prove the same first-order sentences. WKL0 can prove a good number of classical mathematical results that do not follow from RCA0, however. These results are not expressible as first-order statements but can be expressed as second-order statements.
The following results are equivalent to weak Kőnig's lemma and thus to WKL0 over RCA0:
The Heine–Borel theorem for the closed unit real interval, in the following sense: every covering by a sequence of open intervals has a finite subcovering.
The Heine–Borel theorem for complete totally bounded separable metric spaces (where covering is by a sequence of open balls).
A continuous real function on the closed unit interval (or on any compact separable metric space, as above) is bounded (or: bounded and reaches its bounds).
A continuous real function on the closed unit interval can be uniformly approximated by polynomials (with rational coefficients).
A continuous real function on the closed unit interval is uniformly continuous.
A continuous real function on the closed unit interval is Riemann integrable.
The Brouwer fixed point theorem (for continuous functions on an n-simplex).Theorem IV.7.7
The separable Hahn–Banach theorem in the form: a bounded linear form on a subspace of a separable Banach space extends to a bounded linear form on the whole space.
The Jordan curve theorem
Gödel's completeness theorem (for a countable language).
Determinacy for open (or even clopen) games on {0,1} of length ω.
Every countable commutative ring has a prime ideal.
Every countable formally real field is orderable.
Uniqueness of algebraic closure (for a countable field).
Arithmetical comprehension ACA0
ACA0 is RCA0 plus the comprehension scheme for arithmetical formulas (which is sometimes called the "arithmetical comprehension axiom"). That is, ACA0 allows us to form the set of natural numbers satisfying an arbitrary arithmetical formula (one with no bound set variables, although possibly containing set parameters).pp. 6--7 Actually, it suffices to add to RCA0 the comprehension scheme for Σ1 formulas (also including second-order free variables) in order to obtain full arithmetical comprehension.Lemma III.1.3
The first-order part of ACA0 is exactly first-order Peano arithmetic; ACA0 is a conservative extension of first-order Peano arithmetic.Corollary IX.1.6 The two systems are provably (in a weak system) equiconsistent. ACA0 can be thought of as a framework of predicative mathematics, although there are predicatively provable theorems that are not provable in ACA0. Most of the fundamental results about the natural numbers, and many other mathematical theorems, can be proven in this system.
One way of seeing that ACA0 is stronger than WKL0 is to exhibit a model of WKL0 that doesn't contain all arithmetical sets. In fact, it is possible to build a model of WKL0 consisting entirely of low sets using the low basis theorem, since low sets relative to low sets are low.
The following assertions are equivalent to ACA0 over RCA0:
The sequential completeness of the real numbers (every bounded increasing sequence of real numbers has a limit).theorem III.2.2
The Bolzano–Weierstrass theorem.theorem III.2.2
Ascoli's theorem: every bounded equicontinuous sequence of real functions on the unit interval has a uniformly convergent subsequence.
Every countable field embeds isomorphically into its algebraic closure.theorem III.3.2
Every countable commutative ring has a maximal ideal.theorem III.5.5
Every countable vector space over the rationals (or over any countable field) has a basis.theorem III.4.3
For any countable fields K ⊆ L, there is a transcendence basis for L over K.theorem III.4.6
Kőnig's lemma (for arbitrary finitely branching trees, as opposed to the weak version described above).theorem III.7.2
For any countable group G and any subgroups H, I of G, the subgroup generated by H and I exists.p.40
Any partial function can be extended to a total function.
Various theorems in combinatorics, such as certain forms of Ramsey's theorem.Theorem III.7.2
Arithmetical transfinite recursion ATR0
The system ATR0 adds to ACA0 an axiom that states, informally, that any arithmetical functional (meaning any arithmetical formula with a free number variable n and a free set variable X, seen as the operator taking X to the set of n satisfying the formula) can be iterated transfinitely along any countable well ordering starting with any set. ATR0 is equivalent over ACA0 to the principle of Σ separation. ATR0 is impredicative, and has the proof-theoretic ordinal Γ0, the supremum of that of predicative systems.
ATR0 proves the consistency of ACA0, and thus by Gödel's theorem it is strictly stronger.
The following assertions are equivalent to ATR0 over RCA0:
Any two countable well orderings are comparable. That is, they are isomorphic or one is isomorphic to a proper initial segment of the other.theorem V.6.8
Ulm's theorem for countable reduced Abelian groups.
The perfect set theorem, which states that every uncountable closed subset of a complete separable metric space contains a perfect closed set.
Lusin's separation theorem (essentially Σ separation).Theorem V.5.1
Determinacy for open sets in the Baire space.
Π comprehension Π-CA0
Π-CA0 is stronger than arithmetical transfinite recursion and is fully impredicative. It consists of RCA0 plus the comprehension scheme for Π formulas.
In a sense, Π-CA0 comprehension is to arithmetical transfinite recursion (Σ separation) as ACA0 is to weak Kőnig's lemma (Σ separation). It is equivalent to several statements of descriptive set theory whose proofs make use of strongly impredicative arguments; this equivalence shows that these impredicative arguments cannot be removed.
The following theorems are equivalent to Π-CA0 over RCA0:
The Cantor–Bendixson theorem (every closed set of reals is the union of a perfect set and a countable set).Exercise VI.1.7
Silver's dichotomy (every coanalytic equivalence relation has either countably many equivalence classes or a perfect set of incomparables)Theorem VI.3.6
Every countable abelian group is the direct sum of a divisible group and a reduced group.Theorem VI.4.1
Determinacy for games.Theorem VI.5.4
Additional systems
Weaker systems than recursive comprehension can be defined. The weak system RCA consists of elementary function arithmetic EFA (the basic axioms plus Δ induction in the enriched language with an exponential operation) plus Δ comprehension. Over RCA, recursive comprehension as defined earlier (that is, with Σ induction) is equivalent to the statement that a polynomial (over a countable field) has only finitely many roots and to the classification theorem for finitely generated Abelian groups. The system RCA has the same proof theoretic ordinal ω3 as EFA and is conservative over EFA for Π sentences.
Weak Weak Kőnig's Lemma is the statement that a subtree of the infinite binary tree having no infinite paths has an asymptotically vanishing proportion of the leaves at length n (with a uniform estimate as to how many leaves of length n exist). An equivalent formulation is that any subset of Cantor space that has positive measure is nonempty (this is not provable in RCA0). WWKL0 is obtained by adjoining this axiom to RCA0. It is equivalent to the statement that if the unit real interval is covered by a sequence of intervals then the sum of their lengths is at least one. The model theory of WWKL0 is closely connected to the theory of algorithmically random sequences. In particular, an ω-model of RCA0 satisfies weak weak Kőnig's lemma if and only if for every set X there is a set Y that is 1-random relative to X.
DNR (short for "diagonally non-recursive") adds to RCA0 an axiom asserting the existence of a diagonally non-recursive function relative to every set. That is, DNR states that, for any set A, there exists a total function f such that for all e the eth partial recursive function with oracle A is not equal to f. DNR is strictly weaker than WWKL (Lempp et al., 2004).
Δ-comprehension is in certain ways analogous to arithmetical transfinite recursion as recursive comprehension is to weak Kőnig's lemma. It has the hyperarithmetical sets as minimal ω-model. Arithmetical transfinite recursion proves Δ-comprehension but not the other way around.
Σ-choice is the statement that if η(n,X) is a Σ formula such that for each n there exists an X satisfying η then there is a sequence of sets Xn such that η(n,Xn) holds for each n. Σ-choice also has the hyperarithmetical sets as minimal ω-model. Arithmetical transfinite recursion proves Σ-choice but not the other way around.
HBU (short for "uncountable Heine-Borel") expresses the (open-cover) compactness of the unit interval, involving uncountable covers. The latter aspect of HBU makes it only expressible in the language of third-order arithmetic. Cousin's theorem (1895) implies HBU, and these theorems use the same notion of cover due to Cousin and Lindelöf. HBU is hard to prove: in terms of the usual hierarchy of comprehension axioms, a proof of HBU requires full second-order arithmetic.
Ramsey's theorem for infinite graphs does not fall into one of the big five subsystems, and there are many other weaker variants with varying proof strengths.
Stronger systems
Over RCA0, Π transfinite recursion, ∆ determinacy, and the ∆ Ramsey theorem are all equivalent to each other.
Over RCA0, Σ monotonic induction, Σ determinacy, and the Σ Ramsey theorem are all equivalent to each other.
The following are equivalent:
(schema) Π consequences of Π-CA0
RCA0 + (schema over finite n) determinacy in the nth level of the difference hierarchy of Σ sets
RCA0 + {τ: τ is a true S2S sentence}
The set of Π consequences of second-order arithmetic Z2 has the same theory as RCA0 + (schema over finite n) determinacy in the nth level of the difference hierarchy of Σ sets.
For a poset P, let MF(P) denote the topological space consisting of the filters on P whose open sets are the sets of the form {F : p ∈ F} for some p in P. The following statement is equivalent to Π12-CA0 over Π11-CA0: for any countable poset P, the topological space MF(P) is completely metrizable iff it is regular.
ω-models and β-models
The ω in ω-model stands for the set of non-negative integers (or finite ordinals). An ω-model is a model for a fragment of second-order arithmetic whose first-order part is the standard model of Peano arithmetic, but whose second-order part may be non-standard. More precisely, an ω-model is given by a choice S of subsets of ω. The first-order variables are interpreted in the usual way as elements of ω, and +, × have their usual meanings, while second-order variables are interpreted as elements of S. There is a standard ω-model where one just takes S to consist of all subsets of the integers. However, there are also other ω-models; for example, RCA0 has a minimal ω-model where S consists of the recursive subsets of ω.
A β-model is an ω-model that agrees with the standard ω-model on the truth of Σ11 and Π11 sentences (with parameters).
Non-ω models are also useful, especially in the proofs of conservation theorems.
See also
Closed-form expression § Conversion from numerical forms
Induction, bounding and least number principles
Ordinal analysis
Notes
References
External links
Stephen G. Simpson's home page
Reverse Mathematics Zoo
Computability theory
Mathematical logic
Proof theory | Reverse mathematics | [
"Mathematics"
] | 5,432 | [
"Computability theory",
"Mathematical logic",
"Proof theory"
] |
326,386 | https://en.wikipedia.org/wiki/Ion%20source | An ion source is a device that creates atomic and molecular ions. Ion sources are used to form ions for mass spectrometers, optical emission spectrometers, particle accelerators, ion implanters and ion engines.
Electron ionization
Electron ionization is widely used in mass spectrometry, particularly for organic molecules. The gas phase reaction producing electron ionization is
M{} + e^- -> M^{+\bullet}{} + 2e^-
where M is the atom or molecule being ionized, e^- is the electron, and M^{+\bullet} is the resulting ion.
The electrons may be created by an arc discharge between a cathode and an anode.
An electron beam ion source (EBIS) is used in atomic physics to produce highly charged ions by bombarding atoms with a powerful electron beam. Its principle of operation is shared by the electron beam ion trap.
Electron capture ionization
Electron capture ionization (ECI) is the ionization of a gas phase atom or molecule by attachment of an electron to create an ion of the form A−•. The reaction is
A + e^- ->[M] A^-
where the M over the arrow denotes that to conserve energy and momentum a third body is required (the molecularity of the reaction is three).
Electron capture can be used in conjunction with chemical ionization.
An electron capture detector is used in some gas chromatography systems.
Chemical ionization
Chemical ionization (CI) is a lower energy process than electron ionization because it involves ion/molecule reactions rather than electron removal. The lower energy yields less fragmentation, and usually a simpler spectrum. A typical CI spectrum has an easily identifiable molecular ion.
In a CI experiment, ions are produced through the collision of the analyte with ions of a reagent gas in the ion source. Some common reagent gases include: methane, ammonia, and isobutane. Inside the ion source, the reagent gas is present in large excess compared to the analyte. Electrons entering the source will preferentially ionize the reagent gas. The resultant collisions with other reagent gas molecules will create an ionization plasma. Positive and negative ions of the analyte are formed by reactions with this plasma. For example, protonation occurs by
CH4 + e^- -> CH4+ + 2e^- (primary ion formation),
CH4 + CH4+ -> CH5+ + CH3 (reagent ion formation),
M + CH5+ -> CH4 + [M + H]+ (product ion formation, e.g. protonation).
Charge exchange ionization
Charge-exchange ionization (also known as charge-transfer ionization) is a gas phase reaction between an ion and an atom or molecule in which the charge of the ion is transferred to the neutral species.
A+ + B -> A + B+
Chemi-ionization
Chemi-ionization is the formation of an ion through the reaction of a gas phase atom or molecule with an atom or molecule in an excited state. Chemi-ionization can be represented by
G^\ast{} + M -> G{} + M^{+\bullet}{} + e^-
where G is the excited state species (indicated by the superscripted asterisk), and M is the species that is ionized by the loss of an electron to form the radical cation (indicated by the superscripted "plus-dot").
Associative ionization
Associative ionization is a gas phase reaction in which two atoms or molecules interact to form a single product ion. One or both of the interacting species may have excess internal energy.
For example,
A^\ast{} + B -> AB^{+\bullet}{} + e^-
where species A with excess internal energy (indicated by the asterisk) interacts with B to form the ion AB+.
Penning ionization
Penning ionization is a form of chemi-ionization involving reactions between neutral atoms or molecules. The process is named after the Dutch physicist Frans Michel Penning who first reported it in 1927. Penning ionization involves a reaction between a gas-phase excited-state atom or molecule G* and a target molecule M resulting in the formation of a radical molecular cation M+., an electron e−, and a neutral gas molecule G:
G^\ast{} + M -> G{} + M^{+\bullet}{} + e^-
Penning ionization occurs when the target molecule has an ionization potential lower than the internal energy of the excited-state atom or molecule.
Associative Penning ionization can proceed via
G^\ast{} + M -> MG^{+\bullet}{} + e^-
Surface Penning ionization (also known as Auger deexcitation) refers to the interaction of the excited-state gas with a bulk surface S, resulting in the release of an electron according to
G^\ast{} + S -> G{} + S{} + e^-.
Ion attachment
Ion-attachment ionization is similar to chemical ionization in which a cation is attached to the analyte molecule in a reactive collision:
M + X+ + A -> MX+ + A
Where M is the analyte molecule, X+ is the cation and A is a non-reacting collision partner.
In a radioactive ion source, a small piece of radioactive material, for instance 63Ni or 241Am, is used to ionize a gas. This is used in ionization smoke detectors and ion mobility spectrometers.
Gas-discharge ion sources
These ion sources use a plasma source or electric discharge to create ions.
Inductively-coupled plasma
Ions can be created in an inductively coupled plasma, which is a plasma source in which the energy is supplied by electrical currents which are produced by electromagnetic induction, that is, by time-varying magnetic fields.
Microwave-induced plasma
Microwave induced plasma ion sources are capable of exciting electrodeless gas discharges to create ions for trace element mass spectrometry. A microwave plasma is sustained by high-frequency electromagnetic radiation in the GHz range, which can excite an electrodeless gas discharge. If applied in surface-wave-sustained mode, such sources are especially well suited to generate large-area plasmas of high plasma density. If operated in both surface-wave and resonator modes, they can exhibit a high degree of spatial localization. This makes it possible to spatially separate the location of plasma generation from the location of surface processing. Such a separation (together with an appropriate gas-flow scheme) may help reduce the negative effect that particles released from a processed substrate may have on the plasma chemistry of the gas phase.
ECR ion source
The ECR ion source makes use of the electron cyclotron resonance to ionize a plasma. Microwaves are injected into a volume at the frequency corresponding to the electron cyclotron resonance, defined by the magnetic field applied to a region inside the volume. The volume contains a low pressure gas.
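A minimal numerical sketch of the resonance condition (the magnetic field value below is an illustrative assumption, not a figure from the article): the injected microwave frequency must match the electron cyclotron frequency set by the applied field.

```python
# Electron cyclotron resonance frequency f = e*B / (2*pi*m_e).
import math

E_CHARGE = 1.602176634e-19      # elementary charge, C
M_ELECTRON = 9.1093837015e-31   # electron mass, kg

def ecr_frequency(b_field_tesla):
    """Cyclotron frequency (Hz) of electrons in a magnetic field B (tesla)."""
    return E_CHARGE * b_field_tesla / (2 * math.pi * M_ELECTRON)

# A field of roughly 0.0875 T resonates near the common 2.45 GHz microwave band.
print(f"{ecr_frequency(0.0875) / 1e9:.2f} GHz")
```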
Glow discharge
Ions can be created in an electric glow discharge. A glow discharge is a plasma formed by the passage of electric current through a low-pressure gas. It is created by applying a voltage between two metal electrodes in an evacuated chamber containing gas. When the voltage exceeds a certain value, called the striking voltage, the gas forms a plasma.
A duoplasmatron is a type of glow discharge ion source in which a hot or cold cathode produces a plasma that is used to ionize a gas. Duoplasmatrons can produce positive or negative ions. They are used for secondary ion mass spectrometry, ion beam etching, and high-energy physics.
Flowing afterglow
In a flowing plasma afterglow, ions are formed in a flow of inert gas, typically helium or argon. Reagents are added downstream to create ion products and study reaction rates. Flowing-afterglow mass spectrometry is used for trace gas analysis for organic compounds.
Spark ionization
Electric spark ionization is used to produce gas phase ions from a solid sample. When incorporated with a mass spectrometer the complete instrument is referred to as a spark ionization mass spectrometer or as a spark source mass spectrometer (SSMS).
A closed drift ion source uses a radial magnetic field in an annular cavity in order to confine electrons for ionizing a gas. They are used for ion implantation and for space propulsion (Hall-effect thrusters).
Photoionization
Photoionization is the ionization process in which an ion is formed from the interaction of a photon with an atom or molecule.
Multi-photon ionization
In multi-photon ionization (MPI), several photons of energy below the ionization threshold may actually combine their energies to ionize an atom.
Resonance-enhanced multiphoton ionization (REMPI) is a form of MPI in which one or more of the photons accesses a bound-bound transition that is resonant in the atom or molecule being ionized.
Atmospheric pressure photoionization
Atmospheric pressure photoionization (APPI) uses a source of photons, usually a vacuum UV (VUV) lamp, to ionize the analyte with single photon ionization process. Analogous to other atmospheric pressure ion sources, a spray of solvent is heated to relatively high temperatures (above 400 degrees Celsius) and sprayed with high flow rates of nitrogen for desolvation. The resulting aerosol is subjected to UV radiation to create ions. Atmospheric-pressure laser ionization uses UV laser light sources to ionize the analyte via MPI.
Desorption ionization
Field desorption
Field desorption refers to an ion source in which a high-potential electric field is applied to an emitter with a sharp surface, such as a razor blade, or more commonly, a filament from which tiny "whiskers" have formed. This results in a very high electric field which can result in ionization of gaseous molecules of the analyte. Mass spectra produced by FI have little or no fragmentation; they are dominated by molecular radical cations and, less often, protonated molecules.
Particle bombardment
Fast atom bombardment
Particle bombardment with atoms is called fast atom bombardment (FAB) and bombardment with atomic or molecular ions is called secondary ion mass spectrometry (SIMS). Fission fragment ionization uses ionic or neutral atoms formed as a result of the nuclear fission of a suitable nuclide, for example the Californium isotope 252Cf.
In FAB the analyte is mixed with a non-volatile chemical protection environment called a matrix and is bombarded under vacuum with a high energy (4000 to 10,000 electron volts) beam of atoms. The atoms are typically from an inert gas such as argon or xenon. Common matrices include glycerol, thioglycerol, 3-nitrobenzyl alcohol (3-NBA), 18-crown-6 ether, 2-nitrophenyloctyl ether, sulfolane, diethanolamine, and triethanolamine. This technique is similar to secondary ion mass spectrometry and plasma desorption mass spectrometry.
Secondary ionization
Secondary ion mass spectrometry (SIMS) is used to analyze the composition of solid surfaces and thin films by sputtering the surface of the specimen with a focused primary ion beam and collecting and analyzing ejected secondary ions. The mass/charge ratios of these secondary ions are measured with a mass spectrometer to determine the elemental, isotopic, or molecular composition of the surface to a depth of 1 to 2 nm.
In a liquid metal ion source (LMIS), a metal (typically gallium) is heated to the liquid state and provided at the end of a capillary or a needle. Then a Taylor cone is formed under the application of a strong electric field. As the cone's tip gets sharper, the electric field becomes stronger, until ions are produced by field evaporation. These ion sources are particularly used in ion implantation or in focused ion beam instruments.
Plasma desorption ionization
Plasma desorption ionization mass spectrometry (PDMS), also called fission fragment ionization, is a mass spectrometry technique in which ionization of material in a solid sample is accomplished by bombarding it with ionic or neutral atoms formed as a result of the nuclear fission of a suitable nuclide, typically the californium isotope 252Cf.
Laser desorption ionization
Matrix-assisted laser desorption/ionization (MALDI) is a soft ionization technique. The sample is mixed with a matrix material. Upon receiving a laser pulse, the matrix absorbs the laser energy and it is thought that primarily the matrix is desorbed and ionized (by addition of a proton) by this event. The analyte molecules are also desorbed. The matrix is then thought to transfer a proton to the analyte molecules (e.g., protein molecules), thus charging the analyte.
Surface-assisted laser desorption/ionization
Surface-assisted laser desorption/ionization (SALDI) is a soft laser desorption technique used for analyzing biomolecules by mass spectrometry. In its first embodiment, it used a graphite matrix. At present, laser desorption/ionization methods using other inorganic matrices, such as nanomaterials, are often regarded as SALDI variants. A related method named "ambient SALDI" - which is a combination of conventional SALDI with ambient mass spectrometry incorporating the DART ion source - has also been demonstrated.
Surface-enhanced laser desorption/ionization
Surface-enhanced laser desorption/ionization (SELDI) is a variant of MALDI that is used for the analysis of protein mixtures that uses a target modified to achieve biochemical affinity with the analyte compound.
Desorption ionization on silicon
Desorption ionization on silicon (DIOS) refers to laser desorption/ionization of a sample deposited on a porous silicon surface.
Smalley source
A laser vaporization cluster source produces ions using a combination of laser desorption ionization and supersonic expansion. The Smalley source (or Smalley cluster source) was developed by Richard Smalley at Rice University in the 1980s and was central to the discovery of fullerenes in 1985.
Aerosol ionization
In aerosol mass spectrometry with time-of-flight analysis, micrometer sized solid aerosol particles extracted from the atmosphere are simultaneously desorbed and ionized by a precisely timed laser pulse as they pass through the center of a time-of-flight ion extractor.
Spray ionization
Spray ionization methods involve the formation of aerosol particles from a liquid solution and the formation of bare ions after solvent evaporation.
Solvent-assisted ionization (SAI) is a method in which charged droplets are produced by introducing a solution containing analyte into a heated inlet tube of an atmospheric pressure ionization mass spectrometer. Just as in Electrospray Ionization (ESI), desolvation of the charged droplets produces multiply charged analyte ions. Volatile and nonvolatile compounds are analyzed by SAI, and high voltage is not required to achieve sensitivity comparable to ESI. Application of a voltage to the solution entering the hot inlet through a zero dead volume fitting connected to fused silica tubing produces ESI-like mass spectra, but with higher sensitivity. The inlet tube to the mass spectrometer becomes the ion source.
Matrix-Assisted Ionization
Matrix-Assisted Ionization (MAI) is similar to MALDI in sample preparation, but a laser is not required to convert analyte molecules included in a matrix compound into gas-phase ions. In MAI, analyte ions have charge states similar to electrospray ionization but obtained from a solid matrix rather than a solvent. No voltage or laser is required, but a laser can be used to obtain spatial resolution for imaging. Matrix-analyte samples are ionized in the vacuum of a mass spectrometer and can be inserted into the vacuum through an atmospheric pressure inlet. Less volatile matrices such as 2,5-dihydroxybenzoic acid require a hot inlet tube to produce analyte ions by MAI, but more volatile matrices such as 3-nitrobenzonitrile require no heat, voltage, or laser. Simply introducing the matrix-analyte sample to the inlet aperture of an atmospheric pressure ionization mass spectrometer produces abundant ions. Compounds at least as large as bovine serum albumin [66 kDa] can be ionized with this method. In this method, the inlet to the mass spectrometer can be considered the ion source.
Atmospheric-pressure chemical ionization
Atmospheric-pressure chemical ionization uses a solvent spray at atmospheric pressure. A spray of solvent is heated to relatively high temperatures (above 400 degrees Celsius), sprayed with high flow rates of nitrogen and the entire aerosol cloud is subjected to a corona discharge that creates ions with the evaporated solvent acting as the chemical ionization reagent gas. APCI is not as "soft" (low fragmentation) an ionization technique as ESI. Note that atmospheric pressure ionization (API) should not be used as a synonym for APCI.
Thermospray ionization
Thermospray ionization is a form of atmospheric pressure ionization in mass spectrometry. It transfers ions from the liquid phase to the gas phase for analysis. It is particularly useful in liquid chromatography-mass spectrometry.
Electrospray ionization
In electrospray ionization, a liquid is pushed through a very small, charged and usually metal, capillary. This liquid contains the substance to be studied, the analyte, dissolved in a large amount of solvent, which is usually much more volatile than the analyte. Volatile acids, bases or buffers are often added to this solution as well. The analyte exists as an ion in solution either in its anion or cation form. Because like charges repel, the liquid pushes itself out of the capillary and forms an aerosol, a mist of small droplets about 10 μm across. The aerosol is at least partially produced by a process involving the formation of a Taylor cone and a jet from the tip of this cone. An uncharged carrier gas such as nitrogen is sometimes used to help nebulize the liquid and to help evaporate the neutral solvent in the droplets. As the solvent evaporates, the analyte molecules are forced closer together, repel each other and break up the droplets. This process is called Coulombic fission because it is driven by repulsive Coulombic forces between charged molecules. The process repeats until the analyte is free of solvent and is a bare ion. The ions observed are created by the addition of a proton (a hydrogen ion), denoted [M + H]+, or of another cation such as the sodium ion, [M + Na]+, or by the removal of a proton, [M − H]−. Multiply charged ions such as [M + 2H]2+ are often observed. For macromolecules, there can be many charge states, occurring with different frequencies; the charge can be as great as , for example.
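As a rough illustration of such multiply charged ions (the protein mass and the charge states below are hypothetical, chosen only to show the arithmetic):

```python
# m/z values of the [M + nH]^(n+) ions in an electrospray charge-state envelope.
PROTON_MASS = 1.007276  # Da

def esi_mz(neutral_mass_da, n_protons):
    """m/z of the [M + nH]^(n+) ion of a molecule with the given neutral mass."""
    return (neutral_mass_da + n_protons * PROTON_MASS) / n_protons

for n in (10, 12, 15, 20):                   # a hypothetical 16,950 Da protein
    print(n, round(esi_mz(16950.0, n), 2))
```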
Probe electrospray ionization
Probe electrospray ionization (PESI) is a modified version of electrospray, where the capillary for sample solution transferring is replaced by a sharp-tipped solid needle with periodic motion.
Contactless atmospheric pressure ionization
Contactless atmospheric pressure ionization is a technique used for analysis of liquid and solid samples by mass spectrometry. Contactless API can be operated without an additional electric power supply (supplying voltage to the source emitter), gas supply, or syringe pump. Thus, the technique provides a facile means for analyzing chemical compounds by mass spectrometry at atmospheric pressure.
Sonic spray ionization
Sonic spray ionization is a method for creating ions from a liquid solution, for example, a mixture of methanol and water. A pneumatic nebulizer is used to turn the solution into a supersonic spray of small droplets. Ions are formed when the solvent evaporates and the statistically unbalanced charge distribution on the droplets leads to a net charge; complete desolvation results in the formation of ions. Sonic spray ionization is used to analyze small organic molecules and drugs and can analyze large molecules when an electric field is applied to the capillary to help increase the charge density and generate multiply charged ions of proteins.
Sonic spray ionization has been coupled with high performance liquid chromatography for the analysis of drugs. Oligonucleotides have been studied with this method. SSI has been used in a manner similar to desorption electrospray ionization for ambient ionization and has been coupled with thin-layer chromatography in this manner.
Ultrasonication-assisted spray ionization
Ultrasonication-assisted spray ionization (UASI) is similar to the above techniques but uses an ultrasonic transducer to achieve atomization of the material and generate ions.
Thermal ionization
Thermal ionization (also known as surface ionization, or contact ionization) involves spraying vaporized, neutral atoms onto a hot surface, from which the atoms re-evaporate in ionic form. To generate positive ions, the atomic species should have a low ionization energy, and the surface should have a high work function. This technique is most suitable for alkali atoms (Li, Na, K, Rb, Cs) which have low ionization energies and are easily evaporated.
To generate negative ions, the atomic species should have a high electron affinity, and the surface should have a low work function. This second approach is most suited for halogen atoms Cl, Br, I, At.
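The degree of surface ionization is commonly discussed in terms of the Saha–Langmuir relation; the sketch below is illustrative only, and the work function, ionization energies, degeneracy ratio, and temperature are assumed values rather than figures from the article.

```python
# Saha-Langmuir estimate of positive surface ionization:
# n+/n0 = (g+/g0) * exp((W - IE) / kT).
import math

K_BOLTZMANN_EV = 8.617333262e-5  # Boltzmann constant, eV/K

def ion_to_neutral_ratio(work_function_ev, ionization_energy_ev,
                         temperature_k, degeneracy_ratio=0.5):
    """Ratio of ions to neutrals leaving a hot surface."""
    kt = K_BOLTZMANN_EV * temperature_k
    return degeneracy_ratio * math.exp(
        (work_function_ev - ionization_energy_ev) / kt)

# Caesium (IE ~ 3.89 eV) on a hot surface with a ~4.5 eV work function at 1200 K
# ionizes almost completely; potassium (IE ~ 4.34 eV) ionizes less completely.
for name, ie in (("Cs", 3.89), ("K", 4.34)):
    ratio = ion_to_neutral_ratio(4.5, ie, 1200.0)
    print(name, f"ionized fraction ~ {ratio / (1 + ratio):.0%}")
```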
Ambient ionization
In ambient ionization, ions are formed outside the mass spectrometer without sample preparation or separation. Ions can be formed by extraction into charged electrospray droplets, thermally desorbed and ionized by chemical ionization, or laser desorbed or ablated and post-ionized before they enter the mass spectrometer.
Solid-liquid extraction based ambient ionization uses a charged spray to create a liquid film on the sample surface. Molecules on the surface are extracted into the solvent. The action of the primary droplets hitting the surface produces secondary droplets that are the source of ions for the mass spectrometer. Desorption electrospray ionization (DESI) creates charged droplets that are directed at a solid sample a few millimeters to a few centimeters away. The charged droplets pick up the sample through interaction with the surface and then form highly charged ions that can be sampled into a mass spectrometer.
Plasma-based ambient ionization is based on an electrical discharge in a flowing gas that produces metastable atoms and molecules and reactive ions. Heat is often used to assist in the desorption of volatile species from the sample. Ions are formed by chemical ionization in the gas phase. A direct analysis in real time (DART) source operates by exposing the sample to a dry gas stream (typically helium or nitrogen) that contains long-lived electronically or vibronically excited neutral atoms or molecules (or "metastables"). Excited states are typically formed in the DART source by creating a glow discharge in a chamber through which the gas flows. A similar method called atmospheric solids analysis probe (ASAP) uses the heated gas from ESI or APCI probes to vaporize sample placed on a melting point tube inserted into an ESI/APCI source. Ionization is by APCI.
Laser-based ambient ionization is a two-step process in which a pulsed laser is used to desorb or ablate material from a sample and the plume of material interacts with an electrospray or plasma to create ions. Electrospray-assisted laser desorption/ionization (ELDI) uses a 337 nm UV laser or 3 μm infrared laser to desorb material into an electrospray source. Matrix-assisted laser desorption electrospray ionization (MALDESI) is an atmospheric pressure ionization source for generation of multiply charged ions. An ultraviolet or infrared laser is directed onto a solid or liquid sample containing the analyte of interest and matrix desorbing neutral analyte molecules that are ionized by interaction with electrosprayed solvent droplets generating multiply charged ions. Laser ablation electrospray ionization (LAESI) is an ambient ionization method for mass spectrometry that combines laser ablation from a mid-infrared (mid-IR) laser with a secondary electrospray ionization (ESI) process.
Applications
Mass spectrometry
In a mass spectrometer a sample is ionized in an ion source and the resulting ions are separated by their mass-to-charge ratio. The ions are detected and the results are displayed as spectra of the relative abundance of detected ions as a function of the mass-to-charge ratio. The atoms or molecules in the sample can be identified by correlating known masses to the identified masses or through a characteristic fragmentation pattern.
Particle accelerators
In particle accelerators an ion source creates a particle beam at the beginning of the machine, the source. The technology to create ion sources for particle accelerators depends strongly on the type of particle that needs to be generated: electrons, protons, H− ions or heavy ions.
Electrons are generated with an electron gun, of which there are many varieties.
Protons are generated with a plasma-based device, like a duoplasmatron or a magnetron.
H− ions are generated with a magnetron or a Penning source. A magnetron consists of a central cylindrical cathode surrounded by an anode. The discharge voltage is typically greater than 150 V and the current drain is around 40 A. A magnetic field of about 0.2 tesla is parallel to the cathode axis. Hydrogen gas is introduced by a pulsed gas valve. Caesium is often used to lower the work function of the cathode, increasing the number of ions that are produced. Large caesiated sources are also used for plasma heating in nuclear fusion devices.
For a Penning source, a strong magnetic field parallel to the electric field of the sheath guides electrons and ions on cyclotron spirals from cathode to cathode. Fast H-minus ions are generated at the cathodes as in the magnetron. They are slowed down due to the charge exchange reaction as they migrate to the plasma aperture. This makes for a beam of ions that is colder than the ions obtained from a magnetron.
Heavy ions can be generated with an electron cyclotron resonance ion source. The use of electron cyclotron resonance (ECR) ion sources for the production of intense beams of highly charged ions has grown immensely over the last decade. ECR ion sources are used as injectors into linear accelerators, Van de Graaff generators or cyclotrons in nuclear and elementary particle physics. In atomic and surface physics ECR ion sources deliver intense beams of highly charged ions for collision experiments or for the investigation of surfaces. For the highest charge states, however, electron beam ion sources (EBIS) are needed. They can generate even bare ions of mid-heavy elements. The electron beam ion trap (EBIT), based on the same principle, can produce up to bare uranium ions and can be used as an ion source as well.
Heavy ions can also be generated with an ion gun which typically uses the thermionic emission of electrons to ionize a substance in its gaseous state. Such instruments are typically used for surface analysis.
Gas flows through the ion source between the anode and the cathode. A positive voltage is applied to the anode. This voltage, combined with the high magnetic field between the tips of the internal and external cathodes, allows a plasma to start. Ions from the plasma are repelled by the anode's electric field. This creates an ion beam.
Surface modification
Surface cleaning and pretreatment for large area deposition
Thin film deposition
Deposition of thick diamond-like carbon (DLC) films
Surface roughening of polymers for improved adhesion and/or biocompatibility
See also
Ion beam
RF antenna ion source
On-Line Isotope Mass Separator
References
Ions
Accelerator physics | Ion source | [
"Physics",
"Chemistry"
] | 5,902 | [
"Matter",
"Applied and interdisciplinary physics",
"Spectrum (physical sciences)",
"Ion source",
"Experimental physics",
"Mass spectrometry",
"Accelerator physics",
"Ions"
] |
326,454 | https://en.wikipedia.org/wiki/Free%20module | In mathematics, a free module is a module that has a basis, that is, a generating set that is linearly independent. Every vector space is a free module, but, if the ring of the coefficients is not a division ring (not a field in the commutative case), then there exist non-free modules.
Given any set E and ring R, there is a free R-module with basis E, which is called the free module on E or the module of formal R-linear combinations of the elements of E.
A free abelian group is precisely a free module over the ring of integers.
Definition
For a ring R and an R-module M, the set B ⊆ M is a basis for M if:
B is a generating set for M; that is to say, every element of M is a finite sum of elements of B multiplied by coefficients in R; and
B is linearly independent if for every subset {b1, ..., bn} of distinct elements, r1b1 + r2b2 + ... + rnbn = 0M implies that r1 = r2 = ... = rn = 0R (where 0M is the zero element of M and 0R is the zero element of R).
A free module is a module with a basis.
An immediate consequence of the second half of the definition is that the coefficients in the first half are unique for each element of M.
If R has invariant basis number, then by definition any two bases have the same cardinality. For example, nonzero commutative rings have invariant basis number. The cardinality of any (and therefore every) basis is called the rank of the free module M. If this cardinality is finite, the free module is said to be free of finite rank, or free of rank n if the rank is known to be n.
Examples
Let R be a ring.
R is a free module of rank one over itself (either as a left or right module); any unit element is a basis.
More generally, if R is commutative, a nonzero ideal I of R is free if and only if it is a principal ideal generated by a nonzerodivisor, with a generator being a basis.
Over a principal ideal domain (e.g., Z), a submodule of a free module is free.
If R is commutative, the polynomial ring R[X] in the indeterminate X is a free module with a possible basis 1, X, X2, ....
Let A[t] be a polynomial ring over a commutative ring A, f a monic polynomial of degree d there, B = A[t]/(f), and ξ the image of t in B. Then B contains A as a subring and is free as an A-module with a basis 1, ξ, ..., ξ^(d−1).
For any non-negative integer n, Rn, the Cartesian product of n copies of R as a left R-module, is free. If R has invariant basis number, then its rank is n.
A direct sum of free modules is free, while an infinite cartesian product of free modules is generally not free (cf. the Baer–Specker group).
A finitely generated module over a commutative local ring is free if and only if it is faithfully flat. Also, Kaplansky's theorem states a projective module over a (possibly non-commutative) local ring is free.
Sometimes, whether a module is free or not is undecidable in the set-theoretic sense. A famous example is the Whitehead problem, which asks whether a Whitehead group is free or not. As it turns out, the problem is independent of ZFC.
Formal linear combinations
Given a set E and ring R, there is a free R-module that has E as a basis: namely, the direct sum of copies of R indexed by E,
R^(E) = ⊕_{e∈E} R.
Explicitly, it is the submodule of the Cartesian product R^E (R is viewed as say a left module) that consists of the elements that have only finitely many nonzero components. One can embed E into R^(E) as a subset by identifying an element e with the element of R^(E) whose e-th component is 1 (the unity of R) and all the other components are zero. Then each element of R^(E) can be written uniquely as
∑_{e∈E} ce e,
where only finitely many ce are nonzero. It is called a formal linear combination of elements of E.
A similar argument shows that every free left (resp. right) R-module is isomorphic to a direct sum of copies of R as left (resp. right) module.
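A small programming sketch of formal linear combinations (not part of the article; representing an element as a Python dictionary from basis symbols to integer coefficients is an illustrative choice of R = Z):

```python
# Formal Z-linear combinations of basis symbols, stored as finite-support dicts.
from collections import defaultdict

def add(f, g):
    """Pointwise sum of two formal linear combinations."""
    h = defaultdict(int, f)
    for e, c in g.items():
        h[e] += c
    return {e: c for e, c in h.items() if c != 0}

def scale(r, f):
    """Scalar multiple r*f."""
    return {e: r * c for e, c in f.items() if r * c != 0}

def embed(e):
    """A basis element e, viewed inside the free module."""
    return {e: 1}

x = add(scale(3, embed("a")), scale(-2, embed("b")))  # 3a - 2b
y = add(embed("b"), embed("c"))                       # b + c
print(add(x, scale(2, y)))                            # {'a': 3, 'c': 2}
```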
Another construction
The free module may also be constructed in the following equivalent way.
Given a ring R and a set E, first as a set we let
R^(E) = { f : E → R : f(x) = 0 for all but finitely many x in E }.
We equip it with a structure of a left module such that the addition is defined by: for x in E,
(f + g)(x) = f(x) + g(x),
and the scalar multiplication by: for r in R and x in E,
(rf)(x) = r(f(x)).
Now, as an R-valued function on E, each f in R^(E) can be written uniquely as
f = ∑_{e∈E} ce δe,
where the ce are in R and only finitely many of them are nonzero, and δe is given as
δe(x) = 1 if x = e and δe(x) = 0 otherwise
(this is a variant of the Kronecker delta). The above means that the subset { δe : e ∈ E } of R^(E) is a basis of R^(E). The mapping e ↦ δe is a bijection between E and this basis. Through this bijection, R^(E) is a free module with the basis E.
Universal property
The inclusion mapping ι : E → R^(E) defined above is universal in the following sense. Given an arbitrary function f from a set E to a left R-module N, there exists a unique module homomorphism φ : R^(E) → N such that φ ∘ ι = f; namely, φ is defined by the formula:
φ(∑_{e∈E} ce e) = ∑_{e∈E} ce f(e),
and φ is said to be obtained by extending f by linearity. The uniqueness means that each R-linear map R^(E) → N is uniquely determined by its restriction to E.
As usual for universal properties, this defines R^(E) up to a canonical isomorphism. Also the formation of R^(E) for each set E determines a functor
E ↦ R^(E),
from the category of sets to the category of left R-modules. It is called the free functor and satisfies a natural relation: for each set E and a left module N,
Hom_{Set}(E, U(N)) ≅ Hom_{R-Mod}(R^(E), N),
where U is the forgetful functor, meaning the free functor is a left adjoint of the forgetful functor.
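Continuing the illustrative dictionary representation above, extension by linearity can be sketched as follows; the target module, its operations, and the function f are hypothetical stand-ins.

```python
# Extend a function on basis elements to a homomorphism on formal linear combinations.
def extend_by_linearity(f, zero, add, scale):
    """Given f: E -> N and the module operations of N, return the induced map."""
    def phi(combination):
        result = zero
        for e, c in combination.items():
            result = add(result, scale(c, f(e)))
        return result
    return phi

# Example: N = Z as a module over itself; f sends each symbol to its length.
phi = extend_by_linearity(lambda e: len(e), 0,
                          lambda a, b: a + b, lambda r, n: r * n)
print(phi({"aa": 3, "b": -2}))   # 3*2 + (-2)*1 = 4
```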
Generalizations
Many statements true for free modules extend to certain larger classes of modules. Projective modules are direct summands of free modules. Flat modules are defined by the property that tensoring with them preserves exact sequences. Torsion-free modules form an even broader class. For a finitely generated module over a PID (such as Z), the properties free, projective, flat, and torsion-free are equivalent.
See local ring, perfect ring and Dedekind ring.
See also
Free object
Projective object
free presentation
free resolution
Quillen–Suslin theorem
stably free module
generic freeness
Notes
References
.
Module theory
Free algebraic structures | Free module | [
"Mathematics"
] | 1,315 | [
"Mathematical structures",
"Fields of abstract algebra",
"Category theory",
"Module theory",
"Algebraic structures",
"Free algebraic structures"
] |
984,070 | https://en.wikipedia.org/wiki/Relief%20valve | A relief valve or pressure relief valve (PRV) is a type of safety valve used to control or limit the pressure in a system; excessive pressure might otherwise build up and create a process upset, instrument or equipment failure, explosion, or fire.
Pressure relief
Excess pressure is relieved by allowing the pressurized fluid to flow from an auxiliary passage out of the system. The relief valve is designed or set to open at a predetermined set pressure to protect pressure vessels and other equipment from being subjected to pressures that exceed their design limits. When the set pressure is exceeded, the relief valve becomes the "path of least resistance" as the valve is forced open and a portion of the fluid is diverted through the auxiliary route.
In systems containing flammable fluids, the diverted fluid (liquid, gas or liquid-gas mixture) is either recaptured by a low pressure, high-flow vapor recovery system or is routed through a piping system known as a flare header or relief header to a central, elevated gas flare where it is burned, releasing naked combustion gases into the atmosphere. In non-hazardous systems, the fluid is often discharged to the atmosphere by a suitable discharge pipework designed to prevent rainwater ingress which can affect the set lift pressure, and positioned not to cause a hazard to personnel.
As the fluid is diverted, the pressure inside the vessel will stop rising. Once it reaches the valve's reseating pressure, the valve will close. The blowdown is usually stated as a percentage of set pressure and refers to how much the pressure needs to drop before the valve reseats. The blowdown can vary roughly 2–20%, and some valves have adjustable blowdowns.
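As a rough numerical illustration (assuming the simple percentage definition of blowdown given above; the figures are hypothetical), the reseating pressure follows directly from the set pressure and the blowdown:

```python
def reseat_pressure(set_pressure: float, blowdown_pct: float) -> float:
    """Pressure at which the valve recloses, with blowdown given as a
    percentage of set pressure (typically roughly 2-20%)."""
    return set_pressure * (1.0 - blowdown_pct / 100.0)

# A valve set at 10 bar with 7% blowdown reseats at about 9.3 bar.
print(reseat_pressure(10.0, 7.0))  # 9.3
```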
In high-pressure gas systems, it is recommended that the outlet of the relief valve discharge to the open air. In systems where the outlet is connected to piping, the opening of a relief valve causes a pressure build-up in the piping downstream of the valve, which often prevents the valve from re-seating once the system pressure falls back to the set pressure. For these systems, so-called "differential" relief valves are often used, in which the inlet pressure acts on an area much smaller than the area of the valve opening. Once such a valve has opened, the pressure must drop well below the set pressure before it closes, and the back-pressure at the outlet can also hold the valve open. Another consideration is that if other relief valves are connected to the same outlet piping, they may open as the pressure in the exhaust piping increases, causing undesired operation.
In some cases, a so-called bypass valve acts as a relief valve by being used to return all or part of the fluid discharged by a pump or gas compressor back to either a storage reservoir or the inlet of the pump or gas compressor. This is done to protect the pump or gas compressor and any associated equipment from excessive pressure. The bypass valve and bypass path can be internal (an integral part of the pump or compressor) or external (installed as a component in the fluid path). Many fire engines have such relief valves to prevent the overpressurization of fire hoses.
In other cases, equipment must be protected against being subjected to an internal vacuum (i.e., low pressure) that is lower than the equipment can withstand. In such cases, vacuum relief valves are used to open at a predetermined low-pressure limit and to admit air or an inert gas into the equipment to control the amount of vacuum.
Technical terms
In the petroleum refining, petrochemical and chemical manufacturing, natural gas processing and power generation industries, the term relief valve is associated with the terms pressure relief valve (PRV), pressure safety valve (PSV) and safety valve:
Pressure relief valve (PRV), pressure release valve (PRV) or pressure safety valve (PSV): The difference is that PSVs have a manual lever to activate the valve in case of emergency. Most PRVs are spring-operated; at lower pressures some use a diaphragm in place of a spring. The oldest PRV designs use a weight to seal the valve.
Set pressure: When the system pressure increases to this value, the PRV opens. The accuracy of the set pressure may follow guidelines set by the American Society of Mechanical Engineers (ASME).
Relief valve (RV): A valve used on liquid service, which opens proportionally as the increasing pressure overcomes the spring force.
Safety valve (SV): Used in gas service. Most SVs are full lift or snap-acting, in that they pop completely open.
Safety relief valve (SRV): A relief valve that can be used for gas or liquid service. However, the set pressure will usually only be accurate for one type of fluid at a time.
Pilot-operated relief valve (POSRV, PORV, POPRV): A device that relieves by remote command from a pilot valve which is connected to the upstream system pressure.
Low-pressure safety valve (LPSV): An automatic system that relieves by the static pressure of a gas. The relieving pressure is small and near the atmospheric pressure.
Vacuum pressure safety valve (VPSV): An automatic system that relieves by the static pressure of a gas. The relieving pressure is small, negative, and near the atmospheric pressure.
Low and vacuum pressure safety valve (LVPSV): An automatic system that relieves by the static pressure of a gas. The relieving pressure is small, negative, or positive, and near the atmospheric pressure.
Pressure vacuum release valve (PVRV): A combination of vacuum pressure and a relief valve in one housing. Used on storage tanks for liquids to prevent implosion or overpressure.
Snap acting: The opposite of modulating, refers to a valve that "pops" open. It snaps into a full lift in milliseconds. Usually accomplished with a skirt on the disc so that the fluid passing the seat suddenly affects a larger area and creates more lifting force.
Modulating: Opens in proportion to the overpressure.
Legal and code requirements in industry
In most countries, industries are legally required to protect pressure vessels and other equipment by using relief valves. Also in most countries, equipment design codes such as those provided by the American Society of Mechanical Engineers (ASME), American Petroleum Institute (API) and other organizations like ISO (ISO 4126) must be complied with and those codes include design standards for relief valves.
The main standards, laws, or directives are:
AD Merkblatt (German)
American Petroleum Institute (API); Standards 520, 521, 526, and 2000
American Society of Mechanical Engineers (ASME); Boiler & Pressure Vessel Code, Section VIII Division 1 and Section I
American Water Works Association (AWWA), storage tanks
EN 764-7; European Standard based on pressure Equipment Directive 97/23/EC
Eurocode EN 1993-4-2, storage tanks.
International Organization for Standardization; ISO 4126
Pressure Systems Safety Regulations 2000 (PSSR); UK
Design Institute for Emergency Relief Systems (DIERS)
Formed in 1977, the Design Institute for Emergency Relief Systems was a consortium of 29 companies under the auspices of the American Institute of Chemical Engineers (AIChE) that developed methods for the design of emergency relief systems to handle runaway reactions. Its purpose was to develop the technology and methods needed for sizing pressure relief systems for chemical reactors, particularly those in which exothermic reactions are carried out. Such reactions include many classes of industrially important processes including polymerizations, nitrations, diazotizations, sulphonations, epoxidations, aminations, esterifications, neutralizations, and many others. Pressure relief systems can be difficult to design, not least because what is expelled can be gas/vapor, liquid, or a mixture of the two – just as with a can of carbonated drink when it is suddenly opened. For chemical reactions, it requires extensive knowledge of both chemical reaction hazards and fluid flow.
DIERS has investigated the two-phase vapor-liquid onset/disengagement dynamics and the hydrodynamics of emergency relief systems with extensive experimental and analysis work. Of particular interest to DIERS were the prediction of two-phase flow venting and the applicability of various sizing methods for two-phase vapor-liquid flashing flow. DIERS became a user's group in 1985.
European DIERS Users' Group (EDUG) is a group of mainly European industrialists, consultants and academics who use the DIERS technology. The EDUG started in the late 1980s and has an annual meeting. A summary of many of key aspects of the DIERS technology has been published in the UK by the HSE.
See also
Blowoff valve
Rupture disc
Safety valve
Surge control
References
External links
PED 97/23/EC; Pressure Equipment Directive – European Union.
Pressure vessels
Safety valves
| Relief valve | [
"Physics",
"Chemistry",
"Engineering"
] | 1,862 | [
"Structural engineering",
"Chemical equipment",
"Physical systems",
"Hydraulics",
"Industrial safety devices",
"Pressure vessels",
"Safety valves"
] |
986,051 | https://en.wikipedia.org/wiki/Fine-tuning%20%28physics%29 | In theoretical physics, fine-tuning is the process in which parameters of a model must be adjusted very precisely in order to fit with certain observations.
Theories requiring fine-tuning are regarded as problematic in the absence of a known mechanism to explain why the parameters happen to take precisely the observed values. The heuristic rule that parameters in a fundamental physical theory should not be too fine-tuned is called naturalness.
Background
The idea that naturalness will explain fine-tuning was brought into question by the theoretical physicist Nima Arkani-Hamed in his talk "Why is there a Macroscopic Universe?", a lecture from the mini-series "Multiverse & Fine Tuning" of the "Philosophy of Cosmology" project, a 2013 collaboration between the Universities of Oxford and Cambridge. In it he describes how naturalness has usually provided solutions to problems in physics, and has usually done so earlier than expected. In the case of the cosmological constant, however, naturalness has failed to provide an explanation even though one would have been expected long ago.
The necessity of fine-tuning leads to various problems that do not show that the theories are incorrect, in the sense of being falsified by observations, but nevertheless suggest that a piece of the story is missing. Examples include the cosmological constant problem (why is the cosmological constant so small?), the hierarchy problem, and the strong CP problem, among others.
Example
An example of a fine-tuning problem considered by the scientific community to have a plausible "natural" solution is the cosmological flatness problem, which is solved if inflationary theory is correct: inflation forces the universe to become very flat, answering the question of why the universe is today observed to be flat to such a high degree.
Measurement
Although fine-tuning was traditionally measured by ad hoc fine-tuning measures, such as the Barbieri–Giudice–Ellis measure, over the past decade many scientists have come to recognize that fine-tuning arguments are a specific application of Bayesian statistics.
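For illustration, one commonly quoted form of such a measure (shown here as a sketch; the normalization may differ from the original papers) quantifies how sensitively an observable such as the Z-boson mass depends on the fundamental parameters p_i:

```latex
\Delta_{\mathrm{BG}} \;=\; \max_i \left| \frac{\partial \ln m_Z^{2}}{\partial \ln p_i} \right|
```

A large value of Δ_BG means that a small fractional change in some parameter produces a large fractional change in the observable, i.e. the theory is strongly fine-tuned; roughly speaking, a Bayesian treatment expresses the same idea in terms of how little of the prior parameter space is compatible with the observation.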
See also
Anthropic principle
Fine-tuned universe
Hierarchy problem
Strong CP problem
References
External links
Chaos theory
Theoretical physics | Fine-tuning (physics) | [
"Physics"
] | 444 | [
"Theoretical physics"
] |
986,096 | https://en.wikipedia.org/wiki/Fermi%20surface | In condensed matter physics, the Fermi surface is the surface in reciprocal space which separates occupied electron states from unoccupied electron states at zero temperature. The shape of the Fermi surface is derived from the periodicity and symmetry of the crystalline lattice and from the occupation of electronic energy bands. The existence of a Fermi surface is a direct consequence of the Pauli exclusion principle, which allows a maximum of one electron per quantum state. The study of the Fermi surfaces of materials is called fermiology.
Theory
Consider a spin-less ideal Fermi gas of N particles. According to Fermi–Dirac statistics, the mean occupation number of a state with energy ε_i is given by
⟨n_i⟩ = 1 / (exp((ε_i − μ)/(k_B T)) + 1),
where
⟨n_i⟩ is the mean occupation number of the ith state
ε_i is the kinetic energy of the ith state
μ is the chemical potential (at zero temperature, this is the maximum kinetic energy the particle can have, i.e. the Fermi energy E_F)
T is the absolute temperature
k_B is the Boltzmann constant
Suppose we consider the limit T → 0. Then we have
⟨n_i⟩ → 1 if ε_i < μ, and ⟨n_i⟩ → 0 if ε_i > μ.
By the Pauli exclusion principle, no two fermions can be in the same state. Additionally, at zero temperature the enthalpy of the electrons must be minimal, meaning that they cannot change state. If, for a particle in some state, there existed an unoccupied lower state that it could occupy, then the energy difference between those states would give the electron an additional enthalpy. Hence, the enthalpy of the electron would not be minimal. Therefore, at zero temperature all the lowest energy states must be saturated. For a large ensemble the Fermi level will be approximately equal to the chemical potential of the system, and hence every state below this energy must be occupied. Thus, particles fill up all energy levels below the Fermi level at absolute zero, which is equivalent to saying that the Fermi energy E_F is the energy level below which there are exactly N states.
In momentum space, these particles fill up a ball of radius p_F (the Fermi momentum), the surface of which is called the Fermi surface.
The linear response of a metal to an electric, magnetic, or thermal gradient is determined by the shape of the Fermi surface, because currents are due to changes in the occupancy of states near the Fermi energy. In reciprocal space, the Fermi surface of an ideal Fermi gas is a sphere of radius
k_F = p_F / ħ,
determined by the valence electron concentration, where ħ is the reduced Planck constant. A material whose Fermi level falls in a gap between bands is an insulator or semiconductor depending on the size of the bandgap. When a material's Fermi level falls in a bandgap, there is no Fermi surface.
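As a hedged numerical illustration (not from the article): for spin-1/2 electrons the free-electron relation k_F = (3π²n)^{1/3} links the Fermi wavevector to the electron density n; the density used below is an approximate, assumed value for copper.

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg
eV = 1.602176634e-19    # J per electronvolt

n = 8.5e28  # assumed conduction-electron density of copper, m^-3

k_F = (3 * math.pi**2 * n) ** (1 / 3)   # Fermi wavevector, m^-1
E_F = hbar**2 * k_F**2 / (2 * m_e)      # free-electron Fermi energy, J

print(f"k_F ~ {k_F:.2e} m^-1")          # ~1.4e10 m^-1
print(f"E_F ~ {E_F / eV:.1f} eV")       # ~7 eV
```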
Materials with complex crystal structures can have quite intricate Fermi surfaces. Figure 2 illustrates the anisotropic Fermi surface of graphite, which has both electron and hole pockets in its Fermi surface due to multiple bands crossing the Fermi energy along the k_z direction. Often in a metal, the Fermi surface radius k_F is larger than the size of the first Brillouin zone, which results in a portion of the Fermi surface lying in the second (or higher) zones. As with the band structure itself, the Fermi surface can be displayed in an extended-zone scheme where k is allowed to have arbitrarily large values or a reduced-zone scheme where wavevectors are shown modulo 2π/a (in the 1-dimensional case), where a is the lattice constant. In the three-dimensional case the reduced zone scheme means that from any wavevector k an appropriate number of reciprocal lattice vectors K is subtracted so that the new k is closer to the origin in k-space than to any K. Solids with a large density of states at the Fermi level become unstable at low temperatures and tend to form ground states where the condensation energy comes from opening a gap at the Fermi surface. Examples of such ground states are superconductors, ferromagnets, Jahn–Teller distortions and spin density waves.
The state occupancy of fermions like electrons is governed by Fermi–Dirac statistics so at finite temperatures the Fermi surface is accordingly broadened. In principle all fermion energy level populations are bound by a Fermi surface although the term is not generally used outside of condensed-matter physics.
Experimental determination
Electronic Fermi surfaces have been measured through observation of the oscillation of transport properties in magnetic fields H, for example the de Haas–van Alphen effect (dHvA) and the Shubnikov–de Haas effect (SdH). The former is an oscillation in magnetic susceptibility and the latter in resistivity. The oscillations are periodic versus 1/H and occur because of the quantization of energy levels in the plane perpendicular to a magnetic field, a phenomenon first predicted by Lev Landau. The new states are called Landau levels and are separated by an energy ħω_c, where ω_c = eH/(m*c) is called the cyclotron frequency, e is the electronic charge, m* is the electron effective mass and c is the speed of light. In a famous result, Lars Onsager proved that the period of oscillation in 1/H is related to the cross-section of the Fermi surface (typically given in Å−2) perpendicular to the magnetic field direction, A⊥, by the equation
Δ(1/H) = 2πe / (ħ c A⊥).
Thus the determination of the periods of oscillation for various applied field directions allows mapping of the Fermi surface. Observation of the dHvA and SdH oscillations requires magnetic fields large enough that the circumference of the cyclotron orbit is smaller than a mean free path. Therefore, dHvA and SdH experiments are usually performed at high-field facilities like the High Field Magnet Laboratory in Netherlands, Grenoble High Magnetic Field Laboratory in France, the Tsukuba Magnet Laboratory in Japan or the National High Magnetic Field Laboratory in the United States.
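As an illustration (a sketch, not the article's own worked example), the Onsager relation can be rearranged in SI form as A_⊥ = (2πe/ħ)·F, where F = 1/Δ(1/B) is the oscillation frequency in tesla; the factor of c appears only in the Gaussian-unit form quoted above, and the frequency value below is hypothetical.

```python
import math

e = 1.602176634e-19     # elementary charge, C
hbar = 1.054571817e-34  # reduced Planck constant, J*s

def fermi_cross_section(frequency_tesla: float) -> float:
    """Extremal Fermi-surface cross-section (Angstrom^-2) from a dHvA/SdH
    oscillation frequency F = 1/Delta(1/B), using A = (2*pi*e/hbar) * F (SI)."""
    area_m2 = 2 * math.pi * e * frequency_tesla / hbar  # k-space area in m^-2
    return area_m2 * 1e-20                              # convert to Angstrom^-2

# Hypothetical oscillation frequency of 1.0e4 T:
print(f"{fermi_cross_section(1.0e4):.2f} A^-2")  # ~0.95 A^-2
```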
The most direct experimental technique to resolve the electronic structure of crystals in the momentum-energy space (see reciprocal lattice), and, consequently, the Fermi surface, is the angle-resolved photoemission spectroscopy (ARPES). An example of the Fermi surface of superconducting cuprates measured by ARPES is shown in Figure 3.
With positron annihilation it is also possible to determine the Fermi surface as the annihilation process conserves the momentum of the initial particle. Since a positron in a solid will thermalize prior to annihilation, the annihilation radiation carries the information about the electron momentum. The corresponding experimental technique is called angular correlation of electron positron annihilation radiation (ACAR) as it measures the angular deviation from 180° (collinearity) of the two annihilation quanta. In this way it is possible to probe the electron momentum density of a solid and determine the Fermi surface. Furthermore, using spin polarized positrons, the momentum distribution for the two spin states in magnetized materials can be obtained. ACAR has many advantages and disadvantages compared to other experimental techniques: It does not rely on UHV conditions, cryogenic temperatures, high magnetic fields or fully ordered alloys. However, ACAR needs samples with a low vacancy concentration as they act as effective traps for positrons. In this way, the first determination of a smeared Fermi surface in a 30% alloy was obtained in 1978.
See also
Fermi energy
Brillouin zone
Fermi surface of superconducting cuprates
Kelvin probe force microscope
Luttinger's theorem
References
External links
Experimental Fermi surfaces of some superconducting cuprates and strontium ruthenates in "Angle-resolved photoemission spectroscopy of the cuprate superconductors (Review Article)" (2002)
Experimental Fermi surfaces of some cuprates, transition metal dichalcogenides, ruthenates, and iron-based superconductors in "ARPES experiment in fermiology of quasi-2D metals (Review Article)" (2014)
Condensed matter physics
Electric and magnetic fields in matter
Fermi–Dirac statistics | Fermi surface | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,617 | [
"Phases of matter",
"Electric and magnetic fields in matter",
"Materials science",
"Condensed matter physics",
"Matter"
] |
986,135 | https://en.wikipedia.org/wiki/Peccei%E2%80%93Quinn%20theory | In particle physics, the Peccei–Quinn theory is a well-known, long-standing proposal for the resolution of the strong CP problem formulated by Roberto Peccei and Helen Quinn in 1977. The theory introduces a new anomalous symmetry to the Standard Model along with a new scalar field which spontaneously breaks the symmetry at low energies, giving rise to an axion that suppresses the problematic CP violation. This model has long since been ruled out by experiments and has instead been replaced by similar invisible axion models which utilize the same mechanism to solve the strong CP problem.
Overview
Quantum chromodynamics (QCD) has a complicated vacuum structure which gives rise to a CP violating θ-term in the Lagrangian. Such a term can have a number of non-perturbative effects, one of which is to give the neutron an electric dipole moment. The absence of this dipole moment in experiments requires the fine-tuning of the θ-term to be very small, something known as the strong CP problem. Motivated as a solution to this problem, Peccei–Quinn (PQ) theory introduces a new complex scalar field φ in addition to the standard Higgs doublet. This scalar field couples to d-type quarks through Yukawa terms, while the Higgs now only couples to the up-type quarks. Additionally, a new global chiral anomalous U(1) symmetry is introduced, the Peccei–Quinn symmetry, under which φ is charged, requiring that some of the fermions also have a PQ charge. The scalar field also has a potential
V(φ) = λ(|φ|² − f_a²/2)²,
where λ is a dimensionless parameter and f_a is known as the decay constant. The potential results in φ acquiring a vacuum expectation value of ⟨φ⟩ = f_a/√2 at the electroweak phase transition.
Spontaneous symmetry breaking of the Peccei–Quinn symmetry below the electroweak scale gives rise to a pseudo-Goldstone boson known as the axion a, with the resulting Lagrangian taking the form
L = L_{SM+a} − θ (g_s²/32π²) G^b_{μν} G̃^{b μν} + ξ (a/f_a) (g_s²/32π²) G^b_{μν} G̃^{b μν},
where the first term is the Standard Model (SM) and axion Lagrangian which includes axion–fermion interactions arising from the Yukawa terms. The second term is the CP violating θ-term, with g_s the strong coupling constant, G^b_{μν} the gluon field strength tensor, and G̃^{b μν} the dual field strength tensor. The third term is known as the color anomaly, a consequence of the Peccei–Quinn symmetry being anomalous, with ξ determined by the choice of PQ charges for the quarks. If the symmetry is also anomalous in the electromagnetic sector, there will additionally be an anomaly term coupling the axion to photons. Due to the presence of the color anomaly, the effective angle is modified to θ_eff = θ + ξ a/f_a, giving rise to an effective potential through instanton effects, which can be approximated in the dilute gas approximation as
V_eff(a) ≈ Λ_QCD⁴ (1 − cos θ_eff).
To minimize the ground state energy, the axion field picks the vacuum expectation value ⟨a⟩ = −f_a θ/ξ, with axions now being excitations around this vacuum. This prompts the field redefinition a → a − ⟨a⟩, which leads to the cancellation of the θ angle, dynamically solving the strong CP problem. It is important to point out that the axion is massive since the Peccei–Quinn symmetry is explicitly broken by the chiral anomaly, with the axion mass roughly given in terms of the pion mass m_π and pion decay constant f_π as m_a ≈ m_π f_π / f_a.
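As an illustrative sketch (a back-of-the-envelope estimate, not a result quoted from the original papers), expanding the dilute-gas potential above about its minimum gives the parametric size of the axion mass:

```latex
m_a^{2} \;=\; \left.\frac{\partial^{2} V_{\mathrm{eff}}}{\partial a^{2}}\right|_{a=\langle a\rangle}
\;\simeq\; \frac{\xi^{2}\,\Lambda_{\mathrm{QCD}}^{4}}{f_a^{2}},
\qquad\text{so}\qquad
m_a \;\sim\; \frac{\Lambda_{\mathrm{QCD}}^{2}}{f_a} \;\sim\; \frac{m_\pi f_\pi}{f_a},
```

using Λ_QCD² ~ m_π f_π; a proper chiral-perturbation-theory calculation multiplies this by an order-one factor involving the light quark masses.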
Invisible axion models
For the Peccei–Quinn model to work, the decay constant f_a must be set at the electroweak scale, leading to a heavy axion. Such an axion has long been ruled out by experiments, for example through bounds on rare kaon decays such as K⁺ → π⁺ + a. Instead, there are a variety of modified models called invisible axion models which introduce the new scalar field independently of the electroweak scale, enabling much larger vacuum expectation values, hence very light axions.
The most popular such models are the Kim–Shifman–Vainshtein–Zakharov (KSVZ) and the Dine–Fischler–Srednicki–Zhitnitsky (DFSZ) models. The KSVZ model introduces a new heavy quark doublet with PQ charge, acquiring its mass through a Yukawa term involving the new scalar φ. Since in this model the only fermions that carry a PQ charge are the heavy quarks, there are no tree-level couplings between the SM fermions and the axion. Meanwhile, the DFSZ model replaces the usual Higgs with two PQ charged Higgs doublets, H_u and H_d, that give mass to the SM fermions through the usual Yukawa terms, while the new scalar φ only interacts with the Standard Model through a quartic coupling with the Higgs doublets. Since the two Higgs doublets carry PQ charge, the resulting axion couples to SM fermions at tree-level.
See also
Axion
QCD vacuum
Strong CP problem
References
Further reading
Physics beyond the Standard Model
Quantum chromodynamics
Anomalies (physics) | Peccei–Quinn theory | [
"Physics"
] | 1,023 | [
"Unsolved problems in physics",
"Particle physics",
"Physics beyond the Standard Model"
] |
987,423 | https://en.wikipedia.org/wiki/Reducing%20sugar | A reducing sugar is any sugar that is capable of acting as a reducing agent. In an alkaline solution, a reducing sugar forms some aldehyde or ketone, which allows it to act as a reducing agent, for example in Benedict's reagent. In such a reaction, the sugar becomes a carboxylic acid.
All monosaccharides are reducing sugars, along with some disaccharides, some oligosaccharides, and some polysaccharides. The monosaccharides can be divided into two groups: the aldoses, which have an aldehyde group, and the ketoses, which have a ketone group. Ketoses must first tautomerize to aldoses before they can act as reducing sugars. The common dietary monosaccharides galactose, glucose and fructose are all reducing sugars.
Disaccharides are formed from two monosaccharides and can be classified as either reducing or nonreducing. Nonreducing disaccharides like sucrose and trehalose have glycosidic bonds between their anomeric carbons and thus cannot convert to an open-chain form with an aldehyde group; they are stuck in the cyclic form. Reducing disaccharides like lactose and maltose have only one of their two anomeric carbons involved in the glycosidic bond, while the other is free and can convert to an open-chain form with an aldehyde group.
The aldehyde functional group allows the sugar to act as a reducing agent, for example, in the Tollens' test or Benedict's test. The cyclic hemiacetal forms of aldoses can open to reveal an aldehyde, and certain ketoses can undergo tautomerization to become aldoses. However, acetals, including those found in polysaccharide linkages, cannot easily become free aldehydes.
Reducing sugars react with amino acids in the Maillard reaction, a series of reactions that occurs while cooking food at high temperatures and that is important in determining the flavor of food. Also, the levels of reducing sugars in wine, juice, and sugarcane are indicative of the quality of these food products.
Terminology
Oxidation-reduction
A reducing sugar is one that reduces another compound and is itself oxidized; that is, the carbonyl carbon of the sugar is oxidized to a carboxyl group.
A sugar is classified as a reducing sugar only if it has an open-chain form with an aldehyde group or a free hemiacetal group.
Aldoses and ketoses
Monosaccharides which contain an aldehyde group are known as aldoses, and those with a ketone group are known as ketoses. The aldehyde can be oxidized via a redox reaction in which another compound is reduced. Thus, aldoses are reducing sugars. Sugars with ketone groups in their open chain form are capable of isomerizing via a series of tautomeric shifts to produce an aldehyde group in solution. Therefore, ketones like fructose are considered reducing sugars but it is the isomer containing an aldehyde group which is reducing since ketones cannot be oxidized without decomposition of the sugar. This type of isomerization is catalyzed by the base present in solutions which test for the presence of reducing sugars.
Reducing end
Disaccharides consist of two monosaccharides and may be either reducing or nonreducing. Even a reducing disaccharide will only have one reducing end, as disaccharides are held together by glycosidic bonds, which consist of at least one anomeric carbon. With one anomeric carbon unable to convert to the open-chain form, only the free anomeric carbon is available to reduce another compound, and it is called the reducing end of the disaccharide.
A nonreducing disaccharide is that which has both anomeric carbons tied up in the glycosidic bond.
Similarly, most polysaccharides have only one reducing end.
Examples
All monosaccharides are reducing sugars because they either have an aldehyde group (if they are aldoses) or can tautomerize in solution to form an aldehyde group (if they are ketoses). This includes common monosaccharides like galactose, glucose, glyceraldehyde, fructose, ribose, and xylose.
Many disaccharides, like cellobiose, lactose, and maltose, also have a reducing form, as one of the two units may have an open-chain form with an aldehyde group. However, sucrose and trehalose, in which the anomeric carbon atoms of the two units are linked together, are nonreducing disaccharides since neither of the rings is capable of opening.
In glucose polymers such as starch and starch-derivatives like glucose syrup, maltodextrin and dextrin the macromolecule begins with a reducing sugar, a free aldehyde. When starch has been partially hydrolyzed the chains have been split and hence it contains more reducing sugars per gram. The percentage of reducing sugars present in these starch derivatives is called dextrose equivalent (DE).
Glycogen is a highly branched polymer of glucose that serves as the main form of carbohydrate storage in animals. It is a reducing sugar with only one reducing end, no matter how large the glycogen molecule is or how many branches it has (note, however, that the unique reducing end is usually covalently linked to glycogenin and will therefore not be reducing). Each branch ends in a nonreducing sugar residue. When glycogen is broken down to be used as an energy source, glucose units are removed one at a time from the nonreducing ends by enzymes.
Characterization
Several qualitative tests are used to detect the presence of reducing sugars. Two of them use solutions of copper(II) ions: Benedict's reagent (Cu2+ in aqueous sodium citrate) and Fehling's solution (Cu2+ in aqueous sodium tartrate). The reducing sugar reduces the copper(II) ions in these test solutions to copper(I), which then forms a brick red copper(I) oxide precipitate. Reducing sugars can also be detected with the addition of Tollens' reagent, which consists of silver ions (Ag+) in aqueous ammonia. When Tollens' reagent is added to an aldehyde, it precipitates silver metal, often forming a silver mirror on clean glassware.
3,5-dinitrosalicylic acid is another test reagent, one that allows quantitative detection. It reacts with a reducing sugar to form 3-amino-5-nitrosalicylic acid, which can be measured by spectrophotometry to determine the amount of reducing sugar that was present.
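As an illustration of how such a quantitative assay is commonly evaluated (the calibration data and names below are hypothetical, not measured values), a standard curve of absorbance versus known glucose concentrations is fitted by least squares and then inverted for unknown samples:

```python
# Hypothetical standards: glucose concentration (mg/mL) vs. absorbance at 540 nm.
standards = [(0.0, 0.02), (0.2, 0.18), (0.4, 0.35),
             (0.6, 0.53), (0.8, 0.70), (1.0, 0.88)]

def fit_line(points):
    """Ordinary least-squares fit of y = m*x + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

slope, intercept = fit_line(standards)

def reducing_sugar_conc(absorbance: float) -> float:
    """Estimate reducing-sugar concentration (mg/mL) from a sample absorbance."""
    return (absorbance - intercept) / slope

print(f"{reducing_sugar_conc(0.45):.2f} mg/mL")  # ~0.51 mg/mL
```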
Some sugars, such as sucrose, do not react with any of the reducing-sugar test solutions. However, a non-reducing sugar can be hydrolyzed using dilute hydrochloric acid. After hydrolysis and neutralization of the acid, the product may be a reducing sugar that gives normal reactions with the test solutions.
All carbohydrates are converted to aldehydes and respond positively in Molisch's test. But the test has a faster rate when it comes to monosaccharides.
Importance in medicine
Fehling's solution was used for many years as a diagnostic test for diabetes, a disease in which blood glucose levels are dangerously elevated by a failure to produce enough insulin (type 1 diabetes) or by an inability to respond to insulin (type 2 diabetes). Measuring the amount of oxidizing agent (in this case, Fehling's solution) reduced by glucose makes it possible to determine the concentration of glucose in the blood or urine. This then enables the right amount of insulin to be injected to bring blood glucose levels back into the normal range.
Importance in food chemistry
Maillard reaction
The carbonyl groups of reducing sugars react with the amino groups of amino acids in the Maillard reaction, a complex series of reactions that occurs when cooking food. Maillard reaction products (MRPs) are diverse; some are beneficial to human health, while others are toxic. However, the overall effect of the Maillard reaction is to decrease the nutritional value of food. One example of a toxic product of the Maillard reaction is acrylamide, a neurotoxin and possible carcinogen that is formed from free asparagine and reducing sugars when cooking starchy foods at high temperatures (above 120 °C). However, evidence from epidemiological studies suggest that dietary acrylamide is unlikely to raise the risk of people developing cancer.
Food quality
The level of reducing sugars in wine, juice, and sugarcane are indicative of the quality of these food products, and monitoring the levels of reducing sugars during food production has improved market quality. The conventional method for doing so is the Lane-Eynon method, which involves titrating the reducing sugar with copper(II) in Fehling's solution in the presence of methylene blue, a common redox indicator. However, it is inaccurate, expensive, and sensitive to impurities.
References
Carbohydrate chemistry
Biomolecules
Carbohydrates | Reducing sugar | [
"Chemistry",
"Biology"
] | 2,029 | [
"Biomolecules by chemical classification",
"Carbohydrates",
"Natural products",
"Biochemistry",
"Organic compounds",
"Carbohydrate chemistry",
"Chemical synthesis",
"Biomolecules",
"Structural biology",
"Glycobiology",
"nan",
"Molecular biology"
] |
987,507 | https://en.wikipedia.org/wiki/Fire%20safety | Fire safety is the set of practices intended to reduce destruction caused by fire. Fire safety measures include those that are intended to prevent the ignition of an uncontrolled fire and those that are used to limit the spread and impact of a fire.
Fire safety measures include those that are planned during the construction of a building or implemented in structures that are already standing and those that are taught or provided to occupants of the building.
Threats to fire safety are commonly referred to as fire hazards. A fire hazard may include a situation that increases the likelihood of a fire or may impede escape in the event a fire occurs.
Fire safety is often a component of building safety. Those who inspect buildings for violations of the Fire Code and go into schools to educate children on fire safety topics are Fire Department members known as Fire Prevention Officers. The Chief Fire Prevention Officer or Chief of Fire Prevention will normally train newcomers to the Fire Prevention Division and may also conduct inspections or make presentations.
Elements of a fire safety policy
Fire safety policies apply at the construction of a building and throughout its operating life. Building codes are enacted by local, sub-national, or national governments to ensure such features as adequate fire exits, signage, and construction details such as fire stops and fire rated doors, windows, and walls. Fire safety is also an objective of electrical codes to prevent overheating of wiring or equipment, and to protect from ignition by electrical faults.
Fire codes regulate such requirements as the maximum occupancy for buildings such as theatres or restaurants, for example. Fire codes may require portable fire extinguishers within a building, or may require permanently installed fire detection and suppression equipment such as a fire sprinkler system and a fire alarm system.
Local authorities charged with fire safety may conduct regular inspections for such items as usable fire exit and proper exit signage, functional fire extinguishers of the correct type in accessible places, and proper storage and handling of flammable materials. Depending on local regulations, a fire inspection may result in a notice of required action, or closing of a building until it can be put into compliance with fire code requirements.
Owners and managers of a building may implement additional fire policies. For example, an industrial site may designate and train particular employees as a fire fighting force. Managers must ensure buildings comply with fire evacuation regulations, and that building features such as spray fireproofing remains undamaged. Fire policies may be in place to dictate training and awareness of occupants and users of the building to avoid obvious mistakes, such as the propping open of fire doors. Buildings, especially institutions such as schools, may conduct fire drills at regular intervals throughout the year.
Beyond individual buildings, other elements of fire safety policies may include technologies such as wood coatings, education and prevention, preparedness measures, wildfire detection and suppression, and ensuring geographic coverage of local and sufficient fire extinguishing capacities.
Common fire hazards
Some common fire hazards are:
Kitchen fires from unattended cooking, grease fires/chip pan fires
Electrical systems that are overloaded, poorly maintained or defective
Combustible storage areas with insufficient protection
Combustibles near equipment that generates heat, flame, or sparks
Candles and other open flames
Smoking (Cigarettes, cigars, pipes, lighters, etc.)
Equipment that generates heat and utilizes combustible materials
Flammable liquids and aerosols
Flammable solvents (and rags soaked with solvent) placed in enclosed trash cans
Fireplace chimneys not properly or regularly cleaned
Cooking appliances - stoves, ovens
Heating appliances - fireplaces, wood-burning stoves, furnaces, boilers, portable heaters, solid fuels
Household appliances - clothes dryers, curling irons, hair dryers, refrigerators, freezers, boilers
Chimneys that concentrate creosote
Electrical wiring in poor condition
Leaking/defective batteries
Personal ignition sources - matches, lighters
Electronic and electrical equipment
Exterior cooking equipment - barbecue
Fire code
In the United States, the fire code (also fire prevention code or fire safety code) is a model code adopted by the state or local jurisdiction and enforced by fire prevention officers within municipal fire departments. It is a set of rules prescribing minimum requirements to prevent fire and explosion hazards arising from storage, handling, or use of dangerous materials, or from other specific hazardous conditions. It complements the building code. The fire code is aimed primarily at preventing fires, ensuring that necessary training and equipment will be on hand, and that the original design basis of the building, including the basic plan set out by the architect, is not compromised. The fire code also addresses inspection and maintenance requirements of various fire protection equipment in order to maintain optimal active fire protection and passive fire protection measures.
A typical fire safety code includes administrative sections about the rule-making and enforcement process, and substantive sections dealing with fire suppression equipment, particular hazards such as containers and transportation for combustible materials, and specific rules for hazardous occupancies, industrial processes, and exhibitions.
Sections may establish the requirements for obtaining permits and specific precautions required to remain in compliance with a permit. For example, a fireworks exhibition may require an application to be filed by a licensed pyrotechnician, providing the information necessary for the issuing authority to determine whether safety requirements can be met. Once a permit is issued, the same authority (or another delegated authority) may inspect the site and monitor safety during the exhibition, with the power to halt operations, when unapproved practices are seen or when unforeseen hazards arise.
List of some typical fire and explosion issues in a fire code
Fireworks, explosives, mortars and cannons, model rockets (licenses for manufacture, storage, transportation, sale, use)
Certification for servicing, placement, and inspecting fire extinguishing equipment
General storage and handling of flammable liquids, solids, gases (tanks, personnel training, markings, equipment)
Limitations on locations and quantities of flammables (e.g., 10 liters of gasoline inside a residential dwelling)
Specific uses and specific flammables (e.g., dry cleaning, gasoline distribution, explosive dusts, pesticides, space heaters, plastics manufacturing)
Permits and limitations in various building occupancies (assembly hall, hospital, school, theater, elderly care, child care centers) that require a smoke detector, sprinkler system, fire extinguisher, or other specific equipment or procedures
Removal of interior and exterior obstructions to emergency exits or firefighters and removal of hazardous materials
Permits and limitations in special outdoor applications (tents, asphalt kettles, bonfires, etc.)
Other hazards (flammable decorations, welding, smoking, bulk matches, tire yards)
Electrical safety codes such as the National Electrical Code (by the National Fire Protection Association) for the U.S. and some other places in the Americas
Fuel gas code
Car fire
Public fire safety education
Most U.S. fire departments have fire safety education programs.
Fire prevention programs may include distribution of smoke detectors, visiting schools to review key topics with the students and implementing nationally recognized programs such as NFPA's "Risk Watch" and "Learn Not to Burn".
Other programs or props can be purchased by fire departments or community organizations. These are usually entertaining and designed to capture children's attention and relay important messages. Props include those that are mostly auditory, such as puppets and robots. The prop is visually stimulating but the safety message is only transmitted orally. Other props are more elaborate, access more senses and increase the learning factor. They mix audio messages and visual cues with hands-on interaction. Examples of these include mobile trailer safety houses and tabletop hazard house simulators. Some fire prevention software is also being developed to identify hazards in a home.
All programs tend to mix messages of general injury prevention, safety, fire prevention, and escape in case of fire. In most cases the fire department representative is regarded as the expert and is expected to present information in a manner that is appropriate for each age group.
Fire educator qualifications
The US industry standard that outlines the recommended qualifications for fire safety educators is NFPA 1035: Standard for Professional Qualifications for Public Fire and Life Safety Educator, which includes the requirements for Fire and Life Safety Educator Levels I, II, and III; Public Information Officer; and Juvenile Firesetter Intervention Specialist Levels I and II.
Target audiences
According to the United States Fire Administration, the very young and the elderly are considered to be "at risk" populations. These groups represent approximately 33% of the population.
Global perspectives
Fire safety has been highlighted in relation to global supply chain management. Sedex, the Supplier Ethical Data Exchange, a collaborative platform for sharing ethical supply chain data, and Verité, Inc., a Massachusetts-based supply chain investigatory NGO, issued a briefing in August 2013 which highlighted the significance of this issue. The briefing referred to several major factory fires, including the 2012 Dhaka garment factory fire in the Tazreen Fashion factory and other examples of fires in Bangladesh, Pakistan and elsewhere, compared the incidence of fire safety issues in a manufacturing context, and highlighted the need for buyers, suppliers and local fire safety enforcement agencies all to take action to improve fire safety within the supply chains for ready-made garments and other products. The briefing recommended that buyers seek greater visibility of fire safety and other risks across the supply chain and identify opportunities to improve standards: "buyers can encourage change through more responsible and consistent practices".
Fire safety plan
A fire safety plan is required by all North American national, state and provincial fire codes based on building use or occupancy types. Generally, the owner of the building is responsible for the preparation of a fire safety plan. Buildings with elaborate emergency systems may require the assistance of a fire protection consultant. After the plan has been prepared, it must be submitted to the Chief Fire Official or authority having jurisdiction for approval. Once approved, the owner is responsible for implementing the fire safety plan and training all staff in their duties. It is also the owner's responsibility to ensure that all visitors and staff are informed of what to do in case of fire. During a fire emergency, a copy of the approved fire safety plan must be available for the responding fire department's use.
In the United Kingdom, a fire safety plan is called a fire risk assessment.
Fire safety plan structure
Key contact information
Utility services (Including shut-off valves for water, gas and electric)
Access issues
Dangerous stored materials
Location of people with special needs
Connections to sprinkler system
Layout, drawing, and site plan of building
Maintenance schedules for life safety systems
Personnel training and fire drill procedure
Create assemble point/safe zone
Use of fire safety plans
Fire safety plans are a useful tool for fire fighters to have because they allow them to know critical information about a building that they may have to go into. Using this, fire fighters can locate and avoid potential dangers such as hazardous material (hazmat) storage areas and flammable chemicals. In addition to this, fire safety plans can also provide specialized information that, in the case of a hospital fire, can provide information about the location of things like the nuclear medicine ward. In addition to this, fire safety plans also greatly improve the safety of fire fighters. According to FEMA, 16 percent of all fire fighter deaths in 2002 occurred due to a structural collapse or because the fire fighter got lost. Fire safety plans can outline any possible structural hazards, as well as give the fire fighter knowledge of where he is in the building.
Fire safety plans in the fire code
In North America alone, there are around 8 million buildings that legally require a fire safety plan, be it due to provincial or state law. Not having a fire safety plan for buildings which fit the fire code occupancy type can result in a fine, and they are required for all buildings, such as commercial, industrial, assembly, etc.
Advances in fire safety planning
As previously stated, a copy of the approved fire safety plan shall be available for the responding fire department. This, however, is not always the case. Up until now, all fire plans were stored in paper form in the fire department. The problem with this is that sorting and storing these plans is a challenge, and it is difficult for people to update their fire plans. As a result, only half of the required buildings have fire plans, and of those, only around 10 percent are up-to-date. This problem has been solved through the introduction of digital fire plans. These fire plans are stored in a database and can be accessed wirelessly on site by firefighters and are much simpler for building owners to update.
Insurance companies
Fire is one of the biggest threats to property with losses adding up to billions of dollars in damages every year. In 2019 alone, the total amount of property damage resulting from fire was $14.8 billion in the United States. Insurance companies in the United States are not only responsible for financially covering fire loss but are also responsible for managing risk associated with it. Most commercial insurance companies hire a risk control specialist whose primary job is to survey property to ensure compliance with NFPA standards, assess the current risk level of the property, and make recommendations to reduce the probability of fire loss. Careers in property risk management continue to grow and have been projected to grow 4 to 8% from 2018 to 2028 in the United States.
See also
References
External links
Sample Fire Code Table of Contents from International Code Council
Fire prevention
Fire protection
Legal codes
Safety practices | Fire safety | [
"Engineering"
] | 2,720 | [
"Building engineering",
"Fire protection"
] |
988,191 | https://en.wikipedia.org/wiki/Polygon%20mesh | In 3D computer graphics and solid modeling, a polygon mesh is a collection of , s and s that defines the shape of a polyhedral object's surface. It simplifies rendering, as in a wire-frame model. The faces usually consist of triangles (triangle mesh), quadrilaterals (quads), or other simple convex polygons (n-gons). A polygonal mesh may also be more generally composed of concave polygons, or even polygons with holes.
The study of polygon meshes is a large sub-field of computer graphics (specifically 3D computer graphics) and geometric modeling. Different representations of polygon meshes are used for different applications and goals. The variety of operations performed on meshes may include: Boolean logic (Constructive solid geometry), smoothing, simplification, and many others. Algorithms also exist for ray tracing, collision detection, and rigid-body dynamics with polygon meshes. If the mesh's edges are rendered instead of the faces, then the model becomes a wireframe model.
Several methods exist for mesh generation, including the marching cubes algorithm.
Volumetric meshes are distinct from polygon meshes in that they explicitly represent both the surface and interior region of a structure, while polygon meshes only explicitly represent the surface (the volume is implicit).
Elements
Objects created with polygon meshes must store different types of elements. These include vertices, edges, faces, polygons and surfaces. In many applications, only vertices, edges and either faces or polygons are stored. A renderer may support only 3-sided faces, so polygons must be constructed of many of these, as shown above. However, many renderers either support quads and higher-sided polygons, or are able to convert polygons to triangles on the fly, making it unnecessary to store a mesh in a triangulated form.
Representations
Polygon meshes may be represented in a variety of ways, using different methods to store the vertex, edge and face data. These include vertex-vertex meshes, face-vertex meshes, winged-edge meshes and render dynamic meshes, described below.
Each of these representations has particular advantages and drawbacks, further discussed in Smith (2006).
The choice of the data structure is governed by the application, the performance required, size of the data, and the operations to be performed. For example, it is easier to deal with triangles than general polygons, especially in computational geometry. For certain operations it is necessary to have a fast access to topological information such as edges or neighboring faces; this requires more complex structures such as the winged-edge representation. For hardware rendering, compact, simple structures are needed; thus the corner-table (triangle fan) is commonly incorporated into low-level rendering APIs such as DirectX and OpenGL.
Vertex-vertex meshes
Vertex-vertex meshes represent an object as a set of vertices connected to other vertices. This is the simplest representation, but not widely used since the face and edge information is implicit. Thus, it is necessary to traverse the data in order to generate a list of faces for rendering. In addition, operations on edges and faces are not easily accomplished.
However, VV meshes benefit from small storage space and efficient morphing of shape. The above figure shows a four-sided box as represented by a VV mesh. Each vertex indexes its neighboring vertices. The last two vertices, 8 and 9 at the top and bottom center of the "box-cylinder", have four connected vertices rather than five. A general system must be able to handle an arbitrary number of vertices connected to any given vertex.
For a complete description of VV meshes see Smith (2006).
Face-vertex meshes
Face-vertex meshes represent an object as a set of faces and a set of vertices. This is the most widely used mesh representation, being the input typically accepted by modern graphics hardware.
Face-vertex meshes improve on VV-mesh for modeling in that they allow explicit lookup of the vertices of a face, and the faces surrounding a vertex. The above figure shows the "box-cylinder" example as an FV mesh. Vertex v5 is highlighted to show the faces that surround it. Notice that, in this example, every face is required to have exactly 3 vertices. However, this does not mean every vertex has the same number of surrounding faces.
For rendering, the face list is usually transmitted to the GPU as a set of indices to vertices, and the vertices are sent as position/color/normal structures (in the figure, only position is given). This has the benefit that changes in shape, but not geometry, can be dynamically updated by simply resending the vertex data without updating the face connectivity.
Modeling requires easy traversal of all structures. With face-vertex meshes it is easy to find the vertices of a face. Also, the vertex list contains a list of faces connected to each vertex. Unlike VV meshes, both faces and vertices are explicit, so locating neighboring faces and vertices is constant time. However, the edges are implicit, so a search is still needed to find all the faces surrounding a given face. Other dynamic operations, such as splitting or merging a face, are also difficult with face-vertex meshes.
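A minimal Python sketch (hypothetical data and helper names, not a standard API) of a face-vertex triangle mesh, showing the constant-time vertex-to-face lookup and the two-step traversal needed to find neighboring vertices:

```python
# Two triangles forming a unit square (a minimal face-vertex mesh).
vertices = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]  # positions
faces = [(0, 1, 2), (0, 2, 3)]                            # vertex indices per face

# The vertex list's "list of faces connected to each vertex".
vertex_faces = {v: [] for v in range(len(vertices))}
for fi, face in enumerate(faces):
    for v in face:
        vertex_faces[v].append(fi)

def faces_around_vertex(v):
    """Constant-time lookup thanks to the stored vertex -> face lists."""
    return vertex_faces[v]

def vertices_around_vertex(v):
    """Traversal V -> f1, f2, ... -> v1, v2, ...: edges are implicit,
    so neighboring vertices are gathered from the incident faces."""
    return {u for fi in vertex_faces[v] for u in faces[fi] if u != v}

print(faces_around_vertex(0))      # [0, 1]
print(vertices_around_vertex(0))   # {1, 2, 3}
```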
Winged-edge meshes
Introduced by Baumgart in 1975, winged-edge meshes explicitly represent the vertices, faces, and edges of a mesh. This representation is widely used in modeling programs to provide the greatest flexibility in dynamically changing the mesh geometry, because split and merge operations can be done quickly. Their primary drawback is large storage requirements and increased complexity due to maintaining many indices. A good discussion of implementation issues of Winged-edge meshes may be found in the book Graphics Gems II.
Winged-edge meshes address the issue of traversing from edge to edge, and providing an ordered set of faces around an edge. For any given edge, the number of outgoing edges may be arbitrary. To simplify this, winged-edge meshes provide only four, the nearest clockwise and counter-clockwise edges at each end. The other edges may be traversed incrementally. The information for each edge therefore resembles a butterfly, hence "winged-edge" meshes. The above figure shows the "box-cylinder" as a winged-edge mesh. The total data for an edge consists of 2 vertices (endpoints), 2 faces (on each side), and 4 edges (winged-edge).
Rendering of winged-edge meshes for graphics hardware requires generating a Face index list. This is usually done only when the geometry changes. Winged-edge meshes are ideally suited for dynamic geometry, such as subdivision surfaces and interactive modeling, since changes to the mesh can occur locally. Traversal across the mesh, as might be needed for collision detection, can be accomplished efficiently.
See Baumgart (1975) for more details.
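A minimal sketch (field names are illustrative, not taken from Baumgart's paper) of the per-edge record described above: two endpoint vertices, the two adjacent faces, and the four "wing" edges, all stored as indices into separate vertex, face and edge tables.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WingedEdge:
    vert_start: int                          # endpoint vertices
    vert_end: int
    face_left: Optional[int]                 # faces on either side of the edge
    face_right: Optional[int]
    edge_left_prev: Optional[int] = None     # the four "wings": nearest clockwise and
    edge_left_next: Optional[int] = None     # counter-clockwise edges at each end
    edge_right_prev: Optional[int] = None
    edge_right_next: Optional[int] = None

# One edge of a mesh, pointing at neighboring edges for incremental traversal.
e0 = WingedEdge(vert_start=0, vert_end=1, face_left=0, face_right=1,
                edge_left_prev=3, edge_left_next=1,
                edge_right_prev=5, edge_right_next=4)
print(e0)
```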
Render dynamic meshes
Winged-edge meshes are not the only representation which allows for dynamic changes to geometry. A new representation which combines winged-edge meshes and face-vertex meshes is the render dynamic mesh, which explicitly stores both the vertices of a face and the faces of a vertex (like FV meshes), and the faces and vertices of an edge (like winged-edge meshes).
Render dynamic meshes require slightly less storage space than standard winged-edge meshes, and can be directly rendered by graphics hardware since the face list contains an index of vertices. In addition, traversal from vertex to face is explicit (constant time), as is from face to vertex. RD meshes do not require the four outgoing edges since these can be found by traversing from edge to face, then face to neighboring edge.
RD meshes benefit from the features of winged-edge meshes by allowing for geometry to be dynamically updated.
See Tobler & Maierhofer (WSCG 2006) for more details.
Summary of mesh representation
In the above table, explicit indicates that the operation can be performed in constant time, as the data is directly stored; list compare indicates that a list comparison between two lists must be performed to accomplish the operation; and pair search indicates a search must be done on two indices. The notation avg(V,V) means the average number of vertices connected to a given vertex; avg(E,V) means the average number of edges connected to a given vertex, and avg(F,V) is the average number of faces connected to a given vertex.
The notation "V → f1, f2, f3, ... → v1, v2, v3, ..." describes that a traversal across multiple elements is required to perform the operation. For example, to get "all vertices around a given vertex V" using the face-vertex mesh, it is necessary to first find the faces around the given vertex V using the vertex list. Then, from those faces, use the face list to find the vertices around them. Winged-edge meshes explicitly store nearly all information, and other operations always traverse to the edge first to get additional info. Vertex-vertex meshes are the only representation that explicitly stores the neighboring vertices of a given vertex.
As the mesh representations become more complex (from left to right in the summary), the amount of information explicitly stored increases. This gives more direct, constant time, access to traversal and topology of various elements but at the cost of increased overhead and space in maintaining indices properly.
Figure 7 shows the connectivity information for each of the four techniques described in this article. Other representations also exist, such as half-edge and corner tables. These are all variants of how vertices, faces and edges index one another.
As a general rule, face-vertex meshes are used whenever an object must be rendered on graphics hardware that does not change geometry (connectivity), but may deform or morph shape (vertex positions) such as real-time rendering of static or morphing objects. Winged-edge or render dynamic meshes are used when the geometry changes, such as in interactive modeling packages or for computing subdivision surfaces. Vertex-vertex meshes are ideal for efficient, complex changes in geometry or topology so long as hardware rendering is not of concern.
Other representations
File formats
There exist many different file formats for storing polygon mesh data. Each format is most effective when used for the purpose intended by its creator.
Popular formats include .fbx, .dae, .obj, and .stl, among many others.
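As an illustration of how lightweight some of these formats are, the sketch below writes a minimal Wavefront .obj file (vertex positions as v lines, faces as 1-indexed f lines); real exporters also handle normals, texture coordinates and materials.

```python
def write_obj(path, vertices, faces):
    """Write a minimal Wavefront .obj file.

    vertices: list of (x, y, z) tuples
    faces: list of tuples of 0-based vertex indices
    """
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for face in faces:
            # .obj face indices are 1-based
            f.write("f " + " ".join(str(i + 1) for i in face) + "\n")

# A single triangle:
write_obj("triangle.obj", [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```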
See also
Boundary representation
Euler operator
Hypergraph
Manifold (a mesh can be manifold or non-manifold)
Mesh subdivision (a technique for adding detail to a polygon mesh)
Polygon modeling
Polygonizer
Simplex
T-spline
Triangulation (geometry)
Wire-frame model
References
External links
OpenMesh open source half-edge mesh representation.
Polygon Mesh Processing Library
3D computer graphics
Virtual reality
Computer graphics data structures
Mesh generation
Geometry processing | Polygon mesh | [
"Physics"
] | 2,251 | [
"Tessellation",
"Mesh generation",
"Symmetry"
] |
159,023 | https://en.wikipedia.org/wiki/Tree%20decomposition | In graph theory, a tree decomposition is a mapping of a graph into a tree that can be used to define the treewidth of the graph and speed up solving certain computational problems on the graph.
Tree decompositions are also called junction trees, clique trees, or join trees. They play an important role in problems like probabilistic inference, constraint satisfaction, query optimization, and matrix decomposition.
The concept of tree decomposition was originally introduced by Rudolf Halin in 1976. It was later rediscovered by Neil Robertson and Paul Seymour and has since been studied by many other authors.
Definition
Intuitively, a tree decomposition represents the vertices of a given graph G as subtrees of a tree, in such a way that vertices in G are adjacent only when the corresponding subtrees intersect. Thus, G forms a subgraph of the intersection graph of the subtrees. The full intersection graph is a chordal graph.
Each subtree associates a graph vertex with a set of tree nodes. To define this formally, we represent each tree node as the set of vertices associated with it.
Thus, given a graph G = (V, E), a tree decomposition is a pair (X, T), where X = {X_1, …, X_n} is a family of subsets (sometimes called bags) of V, and T is a tree whose nodes are the subsets X_i, satisfying the following properties:
The union of all sets X_i equals V. That is, each graph vertex is associated with at least one tree node.
For every edge (v, w) in the graph, there is a subset X_i that contains both v and w. That is, vertices are adjacent in the graph only when the corresponding subtrees have a node in common.
If X_i and X_j both contain a vertex v, then all nodes X_k of the tree in the (unique) path between X_i and X_j contain v as well. That is, the nodes associated with vertex v form a connected subset of T. This is also known as coherence, or the running intersection property. It can be stated equivalently that if X_i, X_j and X_k are nodes, and X_k is on the path from X_i to X_j, then X_i ∩ X_j ⊆ X_k. (A minimal check of these three properties is sketched below.)
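As a concrete aid (not part of the original text), a candidate decomposition can be checked against the three properties above; the bags are assumed to be given as sets of graph vertices and the decomposition tree as an adjacency list over bag indices.

```python
from collections import deque

def is_tree_decomposition(graph_vertices, graph_edges, bags, tree_adj):
    """Check the three tree-decomposition properties.

    graph_vertices: iterable of graph vertices
    graph_edges: iterable of (u, v) edges of the graph
    bags: dict tree-node -> set of graph vertices (the bag X_i)
    tree_adj: dict tree-node -> list of neighbouring tree nodes (must form a tree)
    """
    # 1. every graph vertex appears in at least one bag
    covered = set().union(*bags.values())
    if not set(graph_vertices) <= covered:
        return False
    # 2. every graph edge is contained in some bag
    for u, v in graph_edges:
        if not any(u in bag and v in bag for bag in bags.values()):
            return False
    # 3. for each vertex, the bags containing it induce a connected subtree
    for v in graph_vertices:
        nodes = [i for i, bag in bags.items() if v in bag]
        seen, queue = {nodes[0]}, deque([nodes[0]])
        while queue:
            i = queue.popleft()
            for j in tree_adj[i]:
                if j in bags and v in bags[j] and j not in seen:
                    seen.add(j)
                    queue.append(j)
        if seen != set(nodes):
            return False
    return True
```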
The tree decomposition of a graph is far from unique; for example, a trivial tree decomposition contains all vertices of the graph in its single root node.
A tree decomposition in which the underlying tree is a path graph is called a path decomposition, and the width parameter derived from these special types of tree decompositions is known as pathwidth.
A tree decomposition of width k is smooth if |X_i| = k + 1 for all i, and |X_i ∩ X_j| = k for all adjacent nodes i and j.
Treewidth
The width of a tree decomposition is the size of its largest set X_i minus one. The treewidth tw(G) of a graph G is the minimum width among all possible tree decompositions of G. In this definition, the size of the largest set is diminished by one in order to make the treewidth of a tree equal to one. Treewidth may also be defined from other structures than tree decompositions, including chordal graphs, brambles, and havens.
It is NP-complete to determine whether a given graph G has treewidth at most a given variable k.
However, when k is any fixed constant, the graphs with treewidth k can be recognized, and a width k tree decomposition constructed for them, in linear time. The time dependence of this algorithm on k is exponential.
Dynamic programming
At the beginning of the 1970s, it was observed that a large class of combinatorial optimization problems defined on graphs could be efficiently solved by non-serial dynamic programming as long as the graph had a bounded dimension, a parameter related to treewidth. Later, several authors independently observed, at the end of the 1980s, that many algorithmic problems that are NP-complete for arbitrary graphs may be solved efficiently by dynamic programming for graphs of bounded treewidth, using the tree-decompositions of these graphs.
As an example, consider the problem of finding the maximum independent set in a graph of treewidth k. To solve this problem, first choose one of the nodes of the tree decomposition to be the root, arbitrarily. For a node X_i of the tree decomposition, let D_i be the union of the sets X_j descending from X_i. For an independent set S ⊆ X_i, let A(S, i) denote the size of the largest independent subset I of D_i such that I ∩ X_i = S. Similarly, for an adjacent pair of nodes X_i and X_j, with X_i farther from the root of the tree than X_j, and an independent set S ⊆ X_i ∩ X_j, let B(S, i, j) denote the size of the largest independent subset I of D_i such that I ∩ X_i ∩ X_j = S. We may calculate these A and B values by a bottom-up traversal of the tree:

$$B(S, i, j) = \max_{\substack{S' \subseteq X_i,\ S' \text{ independent} \\ S' \cap X_j = S}} A(S', i)$$

$$A(S, i) = |S| + \sum_{j} \left( B(S \cap X_j, j, i) - |S \cap X_j| \right)$$

where the sum in the calculation of A(S, i) is over the children X_j of node X_i.
At each node or edge, there are at most 2^(k+1) sets S for which we need to calculate these values, so if k is a constant then the whole calculation takes constant time per edge or node. The size of the maximum independent set is the largest value stored at the root node, and the maximum independent set itself can be found (as is standard in dynamic programming algorithms) by backtracking through these stored values starting from this largest value. Thus, in graphs of bounded treewidth, the maximum independent set problem may be solved in linear time. Similar algorithms apply to many other graph problems.
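A minimal sketch of this dynamic program is given below (not from the original text; names and data layout are illustrative). Each table is indexed by the exact intersection of the chosen independent set with the bag; all subsets of every bag are enumerated, so the work per node is exponential in the bag size but the total work is linear in the number of decomposition nodes for a fixed bag size.

```python
from itertools import combinations

def is_independent(vertices, edges):
    """True if no graph edge joins two of the given vertices."""
    return not any((u, v) in edges or (v, u) in edges
                   for u, v in combinations(vertices, 2))

def max_independent_set_size(bags, tree_children, root, edges):
    """Dynamic programming over a rooted tree decomposition.

    bags: dict node -> frozenset of graph vertices (the bag X_i)
    tree_children: dict node -> list of child nodes (rooted decomposition tree)
    root: root node of the decomposition
    edges: set of graph edges given as (u, v) tuples
    Returns the size of a maximum independent set of the graph.
    """
    def solve(i):
        # table[S] = size of the largest independent subset of D_i
        #            whose intersection with the bag X_i is exactly S
        child_tables = [(bags[j], solve(j)) for j in tree_children.get(i, [])]
        table = {}
        bag = list(bags[i])
        for r in range(len(bag) + 1):
            for subset in combinations(bag, r):
                S = frozenset(subset)
                if not is_independent(S, edges):
                    continue
                total = len(S)
                for bag_j, table_j in child_tables:
                    # the child's choice must agree with S on the shared vertices
                    best = max(val - len(Sj & bags[i])
                               for Sj, val in table_j.items()
                               if Sj & bags[i] == S & bag_j)
                    total += best
                table[S] = total
        return table

    return max(solve(root).values())
```

For example, on the path graph a–b–c with bags {a, b} and {b, c} joined by a tree edge, the function returns 2, corresponding to the independent set {a, c}.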
This dynamic programming approach is used in machine learning via the junction tree algorithm for belief propagation in graphs of bounded treewidth. It also plays a key role in algorithms for computing the treewidth and constructing tree decompositions: typically, such algorithms have a first step that approximates the treewidth, constructing a tree decomposition with this approximate width, and then a second step that performs dynamic programming in the approximate tree decomposition to compute the exact value of the treewidth.
See also
Brambles and havens – Two kinds of structures that can be used as an alternative to tree decomposition in defining the treewidth of a graph.
Branch-decomposition – A closely related structure whose width is within a constant factor of treewidth.
Decomposition method – Tree decomposition is used in the decomposition method for solving constraint satisfaction problems.
Notes
References
Trees (graph theory)
Graph minor theory
Graph theory objects | Tree decomposition | [
"Mathematics"
] | 1,202 | [
"Graph minor theory",
"Mathematical relations",
"Graph theory",
"Graph theory objects"
] |
159,046 | https://en.wikipedia.org/wiki/Geothermal%20desalination | Geothermal desalination refers to the process of using geothermal energy to power the process of converting salt water to fresh water. The process is considered economically efficient, and while overall environmental impact is uncertain, it has potential to be more environmentally friendly compared to conventional desalination options. Geothermal desalination plants have already been successful in various regions, and there is potential for further development to allow the process to be used in an increased number of water scarce regions.
Process explanation
Desalination is the process of removing minerals from seawater to convert it into fresh water. Desalination is divided into two categories in terms of processes: processes driven by thermal energy and processes driven by mechanical energy. Geothermal desalination uses geothermal energy as the thermal energy source to drive the desalination process.
There are two types of geothermal desalination: direct and indirect. Direct geothermal desalination heats seawater to boiling in an evaporator, then transferring to a condenser. In contrast, indirect geothermal desalination converts geothermal energy into electricity which is then used for membrane desalination. If the geothermal energy is used indirectly, it can be used to generate power for the water desalination process, as well as excess electricity that can be used for consumers. Similarly, if the geothermal energy is used directly, the excess geothermal energy can be used to drive heating and cooling processes.
Applications
Current
One use of geothermal desalination is in producing fresh water for agriculture. One example of agricultural applications of geothermal energy is the Balcova-Naridere Geothermal Field (BNGF) in Turkey. However, arsenic and boron, two potentially toxic elements, have been found in the geothermal water used to generate electricity. Since the construction of the geothermal desalination plant in this region, these toxic elements have contaminated freshwater wells, rendering this water unusable for agriculture. Due to the increase in contamination in the surrounding environment, this project is not considered a success.
Another use of geothermal desalination is the production of drinking water, as shown by the Milos Island Project in Greece, which relied entirely on geothermal energy to produce desalinated water. This plant was constructed because geothermal energy is readily available in this region, as Milos Island is located in a volcanic region, which makes using geothermal energy a viable way to power the desalination of salt water. The Milos Island plant utilizes a combination of direct and indirect desalination. Unlike the BNGF project, this is considered a success as it produced drinkable water without polluting the environment at a low cost using only geothermal energy.
Future potential
Research indicates geothermal desalination can be implemented in some regions with water scarcity, as it is a relatively low cost solution to increasing available fresh water. In particular, two regions that have ample geothermal resources and are experiencing water scarcity are California and Saudi Arabia. Because these regions already have existing desalination plants, implementation of geothermal desalination plants would be relatively easy.
Furthermore, as the technology for producing geothermal energy improves, geothermal desalination will become possible in more regions. Technologies that are currently being developed will allow the geothermal water used to produce energy to be the water that becomes desalinated. This will allow regions that are not close to an ocean to perform geothermal desalination, which will widely expand the potential for regions to perform geothermal desalination.
Environmental impacts
Much of the environmental impact in the geothermal desalination process stems from the use of geothermal energy, not from the desalination process itself. Geothermal desalination has both environmental benefits and drawbacks. One benefit is that geothermal energy is a renewable resource and emits fewer greenhouse gases than non-renewable energy sources. Another benefit to the environment is that geothermal energy has a smaller land footprint compared to wind or solar energy. More specifically, the land use of a geothermal desalination site has been estimated at 1.2 to 2.7 square terameters for each megawatt of energy produced.
One environmental drawback is that geothermal desalination is an energy-intensive process; the energy consumption ranges from about 4 to 27 kWh per cubic meter of desalinated water. Moreover, some researchers are concerned that, due to a lack of regulation on carbon dioxide (CO2) emissions from geothermal plants, particularly in the United States, there are significant detrimental emissions from these plants that are not being measured. Geothermal power has been found to leak toxic elements such as mercury, boron, and arsenic into the environment, meaning geothermal desalination plants are a potential health hazard for their surroundings. Ultimately, though, the long-term environmental consequences of geothermal desalination plants are still not clear.
Economic factors
Geothermal energy is not dependent on day or night cycles and weather conditions, meaning it has a high-capacity factor, which is a measure of how often a plant is running at maximum power. This provides a stable and reliable energy supply. This also means that geothermal desalination plants can operate in any weather condition at any time of day. In terms of capacity, the United States, Indonesia, Philippines, Turkey, New Zealand, and Mexico accounted for 75% of the global geothermal energy capacity. It would be the most economically feasible to perform geothermal desalination in these countries due to their geothermal energy capacity.
For membrane desalination specifically, using geothermal energy reduces cost compared to using other energy sources. This is because geothermal power is traditionally produced at a competitive cost compared to other energy sources including fossil fuels; a 2011 study estimates the cost to be $0.10/kWh. Specifically, the US Department of Energy has estimated that geothermal desalination can produce desalinated water at a cost of $1.50 per cubic meter of desalinated water.
History
The exact origins of geothermal desalination are unclear; however some early work is credited to Leon Awerbuch, a scientist working in Research & Development at the Bechtel Group at the time, who proposed the process of using geothermal energy for water desalination in 1972. In 1994, a prototype that used geothermal energy to power desalination was built by Caldor-Marseille. This prototype was able to produce a few cubic meters of desalinated water per day. In 1995, a geothermal desalination prototype plant was built in Tunisia, which is one of the earliest documented cases of a geothermal desalination plant. Its capacity was three cubic meters of water per day, which could meet the needs of the surrounding communities. The cost of water was estimated to be $1.20 per cubic meter.
See also
Geothermal power
Desalination
References
Geothermal energy
Water desalination | Geothermal desalination | [
"Chemistry"
] | 1,397 | [
"Water treatment",
"Water technology",
"Water desalination"
] |
159,225 | https://en.wikipedia.org/wiki/Fermi%E2%80%93Dirac%20statistics | Fermi–Dirac statistics is a type of quantum statistics that applies to the physics of a system consisting of many non-interacting, identical particles that obey the Pauli exclusion principle. A result is the Fermi–Dirac distribution of particles over energy states. It is named after Enrico Fermi and Paul Dirac, each of whom derived the distribution independently in 1926. Fermi–Dirac statistics is a part of the field of statistical mechanics and uses the principles of quantum mechanics.
Fermi–Dirac statistics applies to identical and indistinguishable particles with half-integer spin (1/2, 3/2, etc.), called fermions, in thermodynamic equilibrium. For the case of negligible interaction between particles, the system can be described in terms of single-particle energy states. A result is the Fermi–Dirac distribution of particles over these states where no two particles can occupy the same state, which has a considerable effect on the properties of the system. Fermi–Dirac statistics is most commonly applied to electrons, a type of fermion with spin 1/2.
A counterpart to Fermi–Dirac statistics is Bose–Einstein statistics, which applies to identical and indistinguishable particles with integer spin (0, 1, 2, etc.) called bosons. In classical physics, Maxwell–Boltzmann statistics is used to describe particles that are identical and treated as distinguishable. For both Bose–Einstein and Maxwell–Boltzmann statistics, more than one particle can occupy the same state, unlike Fermi–Dirac statistics.
History
Before the introduction of Fermi–Dirac statistics in 1926, understanding some aspects of electron behavior was difficult due to seemingly contradictory phenomena. For example, the electronic heat capacity of a metal at room temperature seemed to come from 100 times fewer electrons than were in the electric current. It was also difficult to understand why the emission currents generated by applying high electric fields to metals at room temperature were almost independent of temperature.
The difficulty encountered by the Drude model, the electronic theory of metals at that time, was due to considering that electrons were (according to classical statistics theory) all equivalent. In other words, it was believed that each electron contributed to the specific heat an amount on the order of the Boltzmann constant kB.
This problem remained unsolved until the development of Fermi–Dirac statistics.
Fermi–Dirac statistics was first published in 1926 by Enrico Fermi and Paul Dirac. According to Max Born, Pascual Jordan developed in 1925 the same statistics, which he called Pauli statistics, but it was not published in a timely manner. According to Dirac, it was first studied by Fermi, and Dirac called it "Fermi statistics" and the corresponding particles "fermions".
Fermi–Dirac statistics was applied in 1926 by Ralph Fowler to describe the collapse of a star to a white dwarf. In 1927 Arnold Sommerfeld applied it to electrons in metals and developed the free electron model, and in 1928 Fowler and Lothar Nordheim applied it to field electron emission from metals. Fermi–Dirac statistics continues to be an important part of physics.
Fermi–Dirac distribution
For a system of identical fermions in thermodynamic equilibrium, the average number of fermions in a single-particle state i is given by the Fermi–Dirac (F–D) distribution:

$$\bar{n}_i = \frac{1}{e^{(\varepsilon_i - \mu)/k_B T} + 1},$$

where k_B is the Boltzmann constant, T is the absolute temperature, ε_i is the energy of the single-particle state i, and μ is the total chemical potential. The distribution is normalized by the condition

$$\sum_i \bar{n}_i = N,$$

which can be used to express the chemical potential as a function μ = μ(T, N), where μ can assume either a positive or negative value.
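As a quick numerical illustration (a sketch, not part of the article), the distribution can be evaluated directly; the occupancy is exactly 1/2 at ε = μ and falls off rapidly a few k_B T above it.

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def fermi_dirac(energy_ev, mu_ev, temperature_k):
    """Average occupancy of a single-particle state of the given energy (in eV)."""
    x = (energy_ev - mu_ev) / (K_B * temperature_k)
    return 1.0 / (math.exp(x) + 1.0)

# At room temperature, states a few k_B*T above the chemical potential are almost
# empty, while those a few k_B*T below are almost full:
print(fermi_dirac(0.0, 0.0, 300))    # 0.5 exactly at the chemical potential
print(fermi_dirac(0.1, 0.0, 300))    # ~0.02 (0.1 eV above mu)
print(fermi_dirac(-0.1, 0.0, 300))   # ~0.98 (0.1 eV below mu)
```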
At zero absolute temperature, μ is equal to the Fermi energy plus the potential energy per fermion, provided it is in a neighbourhood of positive spectral density. In the case of a spectral gap, such as for electrons in a semiconductor, the point of symmetry is typically called the Fermi level or—for electrons—the electrochemical potential, and will be located in the middle of the gap.
The Fermi–Dirac distribution is only valid if the number of fermions in the system is large enough so that adding one more fermion to the system has negligible effect on μ. Since the Fermi–Dirac distribution was derived using the Pauli exclusion principle, which allows at most one fermion to occupy each possible state, a result is that $0 < \bar{n}_i < 1$.
The variance of the number of particles in state i can be calculated from the above expression for $\bar{n}_i$:

$$V(n_i) = k_B T \, \frac{\partial \bar{n}_i}{\partial \mu} = \bar{n}_i \left(1 - \bar{n}_i\right).$$
Distribution of particles over energy
From the Fermi–Dirac distribution of particles over states, one can find the distribution of particles over energy. The average number of fermions with energy ε_i can be found by multiplying the Fermi–Dirac distribution $\bar{n}_i$ by the degeneracy g_i (i.e. the number of states with energy ε_i):

$$\bar{n}(\varepsilon_i) = g_i \, \bar{n}_i = \frac{g_i}{e^{(\varepsilon_i - \mu)/k_B T} + 1}.$$

When $g_i \ge 2$, it is possible that $\bar{n}(\varepsilon_i) > 1$, since there is more than one state that can be occupied by fermions with the same energy ε_i.
When a quasi-continuum of energies ε has an associated density of states g(ε) (i.e. the number of states per unit energy range per unit volume), the average number of fermions per unit energy range per unit volume is

$$\bar{\mathcal{N}}(\varepsilon) = g(\varepsilon) \, F(\varepsilon),$$

where F(ε) is called the Fermi function and is the same function that is used for the Fermi–Dirac distribution $\bar{n}_i$:

$$F(\varepsilon) = \frac{1}{e^{(\varepsilon - \mu)/k_B T} + 1},$$

so that

$$\bar{\mathcal{N}}(\varepsilon) = \frac{g(\varepsilon)}{e^{(\varepsilon - \mu)/k_B T} + 1}.$$
Quantum and classical regimes
The Fermi–Dirac distribution approaches the Maxwell–Boltzmann distribution in the limit of high temperature and low particle density, without the need for any ad hoc assumptions:
In the limit of low particle density, $\bar{n}_i \ll 1$, therefore $e^{(\varepsilon_i - \mu)/k_B T} + 1 \gg 1$, or equivalently $e^{(\varepsilon_i - \mu)/k_B T} \gg 1$. In that case, $\bar{n}_i \approx e^{-(\varepsilon_i - \mu)/k_B T}$, which is the result from Maxwell–Boltzmann statistics.
In the limit of high temperature, the particles are distributed over a large range of energy values, therefore the occupancy of each state (especially the high-energy ones with $\varepsilon_i - \mu \gg k_B T$) is again very small, $\bar{n}_i \ll 1$. This again reduces to Maxwell–Boltzmann statistics.
The classical regime, where Maxwell–Boltzmann statistics can be used as an approximation to Fermi–Dirac statistics, is found by considering the situation that is far from the limit imposed by the Heisenberg uncertainty principle for a particle's position and momentum. For example, in semiconductor physics, when the density of states of the conduction band is much higher than the doping concentration, the energy gap between the conduction band and the Fermi level could be calculated using Maxwell–Boltzmann statistics. Otherwise, if the doping concentration is not negligible compared to the density of states of the conduction band, the Fermi–Dirac distribution should be used instead for accurate calculation. It can then be shown that the classical situation prevails when the concentration of particles corresponds to an average interparticle separation $\bar{R}$ that is much greater than the average de Broglie wavelength $\bar{\lambda}$ of the particles:

$$\bar{R} \gg \bar{\lambda} \approx \frac{h}{\sqrt{3 m k_B T}},$$

where h is the Planck constant, and m is the mass of a particle.
For the case of conduction electrons in a typical metal at T = 300 K (i.e. approximately room temperature), the system is far from the classical regime because $\bar{R} \approx \bar{\lambda}/25$. This is due to the small mass of the electron and the high concentration (i.e. small $\bar{R}$) of conduction electrons in the metal. Thus Fermi–Dirac statistics is needed for conduction electrons in a typical metal.
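A back-of-the-envelope check of this estimate (a sketch using assumed, copper-like values that are not taken from the article):

```python
import math

h = 6.62607015e-34        # Planck constant, J*s
k_B = 1.380649e-23        # Boltzmann constant, J/K
m_e = 9.1093837015e-31    # electron mass, kg

T = 300.0                 # temperature, K
n = 8.5e28                # assumed conduction-electron density (copper-like), m^-3

spacing = n ** (-1.0 / 3.0)                      # average interparticle separation
de_broglie = h / math.sqrt(3 * m_e * k_B * T)    # average de Broglie wavelength

print(f"separation ~ {spacing:.2e} m")           # ~2.3e-10 m
print(f"wavelength ~ {de_broglie:.2e} m")        # ~6.2e-9 m
print(f"ratio lambda/R ~ {de_broglie / spacing:.0f}")  # ~27, far from classical
```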
Another example of a system that is not in the classical regime is the system that consists of the electrons of a star that has collapsed to a white dwarf. Although the temperature of a white dwarf is high (typically T = 10,000 K on its surface), its high electron concentration and the small mass of each electron preclude using a classical approximation, and again Fermi–Dirac statistics is required.
Derivations
Grand canonical ensemble
The Fermi–Dirac distribution, which applies only to a quantum system of non-interacting fermions, is easily derived from the grand canonical ensemble. In this ensemble, the system is able to exchange energy and exchange particles with a reservoir (temperature T and chemical potential μ fixed by the reservoir).
Due to the non-interacting quality, each available single-particle level (with energy level ϵ) forms a separate thermodynamic system in contact with the reservoir.
In other words, each single-particle level is a separate, tiny grand canonical ensemble.
By the Pauli exclusion principle, there are only two possible microstates for the single-particle level: no particle (energy E = 0), or one particle (energy E = ε). The resulting partition function for that single-particle level therefore has just two terms:

$$\mathcal{Z} = 1 + e^{-(\varepsilon - \mu)/k_B T},$$

and the average particle number for that single-particle level substate is given by

$$\langle N \rangle = k_B T \, \frac{1}{\mathcal{Z}} \frac{\partial \mathcal{Z}}{\partial \mu} = \frac{1}{e^{(\varepsilon - \mu)/k_B T} + 1}.$$
This result applies for each single-particle level, and thus gives the Fermi–Dirac distribution for the entire state of the system.
The variance in particle number (due to thermal fluctuations) may also be derived (the particle number has a simple Bernoulli distribution):

$$\langle (\Delta N)^2 \rangle = k_B T \, \frac{\partial \langle N \rangle}{\partial \mu} = \langle N \rangle \left(1 - \langle N \rangle\right).$$
This quantity is important in transport phenomena such as the Mott relations for electrical conductivity and thermoelectric coefficient for an electron gas, where the ability of an energy level to contribute to transport phenomena is proportional to $\langle N \rangle \left(1 - \langle N \rangle\right)$.
Canonical ensemble
It is also possible to derive Fermi–Dirac statistics in the canonical ensemble. Consider a many-particle system composed of N identical fermions that have negligible mutual interaction and are in thermal equilibrium. Since there is negligible interaction between the fermions, the energy $E_R$ of a state $R$ of the many-particle system can be expressed as a sum of single-particle energies:

$$E_R = \sum_r n_r \, \varepsilon_r,$$

where $n_r$ is called the occupancy number and is the number of particles in the single-particle state $r$ with energy $\varepsilon_r$. The summation is over all possible single-particle states $r$.

The probability that the many-particle system is in the state $R$ is given by the normalized canonical distribution:

$$P_R = \frac{e^{-\beta E_R}}{\sum_{R'} e^{-\beta E_{R'}}},$$

where $\beta = 1/(k_B T)$, $e^{-\beta E_R}$ is called the Boltzmann factor, and the summation is over all possible states $R'$ of the many-particle system. The average value for an occupancy number $n_i$ is

$$\bar{n}_i = \sum_R n_i \, P_R.$$
Note that the state of the many-particle system can be specified by the particle occupancy of the single-particle states, i.e. by specifying so that
and the equation for becomes
where the summation is over all combinations of values of which obey the Pauli exclusion principle, and = 0 or for each . Furthermore, each combination of values of satisfies the constraint that the total number of particles is :
Rearranging the summations,
where the upper index on the summation sign indicates that the sum is not over and is subject to the constraint that the total number of particles associated with the summation is . Note that still depends on through the constraint, since in one case and is evaluated with while in the other case and is evaluated with To simplify the notation and to clearly indicate that still depends on through define
so that the previous expression for can be rewritten and evaluated in terms of the :
The following approximation will be used to find an expression to substitute for :
where
If the number of particles is large enough so that the change in the chemical potential is very small when a particle is added to the system, then Applying the exponential function to both sides, substituting for and rearranging,
Substituting the above into the equation for and using a previous definition of to substitute for , results in the Fermi–Dirac distribution:
Like the Maxwell–Boltzmann distribution and the Bose–Einstein distribution, the Fermi–Dirac distribution can also be derived by the Darwin–Fowler method of mean values.
Microcanonical ensemble
A result can be achieved by directly analyzing the multiplicities of the system and using Lagrange multipliers.
Suppose we have a number of energy levels, labeled by index i, each level having energy εi and containing a total of ni particles. Suppose each level contains gi distinct sublevels, all of which have the same energy, and which are distinguishable. For example, two particles may have different momenta (i.e. their momenta may be along different directions), in which case they are distinguishable from each other, yet they can still have the same energy. The value of gi associated with level i is called the "degeneracy" of that energy level. The Pauli exclusion principle states that only one fermion can occupy any such sublevel.
The number of ways of distributing ni indistinguishable particles among the gi sublevels of an energy level, with a maximum of one particle per sublevel, is given by the binomial coefficient, using its combinatorial interpretation:

$$w(n_i, g_i) = \binom{g_i}{n_i} = \frac{g_i!}{n_i! \, (g_i - n_i)!}.$$
For example, distributing two particles in three sublevels will give population numbers of 110, 101, or 011 for a total of three ways which equals 3!/(2!1!).
The number of ways that a set of occupation numbers ni can be realized is the product of the ways that each individual energy level can be populated:

$$W = \prod_i w(n_i, g_i) = \prod_i \frac{g_i!}{n_i! \, (g_i - n_i)!}.$$
Following the same procedure used in deriving the Maxwell–Boltzmann statistics, we wish to find the set of ni for which W is maximized, subject to the constraint that there be a fixed number of particles and a fixed energy. We constrain our solution using Lagrange multipliers forming the function:

$$f(n_i) = \ln W + \alpha \left( N - \sum_i n_i \right) + \beta \left( E - \sum_i n_i \varepsilon_i \right).$$
Using Stirling's approximation for the factorials, taking the derivative with respect to ni, setting the result to zero, and solving for ni yields the Fermi–Dirac population numbers:

$$n_i = \frac{g_i}{e^{\alpha + \beta \varepsilon_i} + 1}.$$
By a process similar to that outlined in the Maxwell–Boltzmann statistics article, it can be shown thermodynamically that $\beta = \tfrac{1}{k_B T}$ and $\alpha = -\tfrac{\mu}{k_B T}$, so that finally, the probability that a state will be occupied is

$$\bar{n}_i = \frac{n_i}{g_i} = \frac{1}{e^{(\varepsilon_i - \mu)/k_B T} + 1}.$$
See also
Grand canonical ensemble
Pauli exclusion principle
Complete Fermi-Dirac integral
Fermi level
Fermi gas
Maxwell–Boltzmann statistics
Bose–Einstein statistics
Parastatistics
Logistic function
Sigmoid function
Notes
References
Further reading
Statistical mechanics | Fermi–Dirac statistics | [
"Physics"
] | 2,853 | [
"Statistical mechanics"
] |
159,266 | https://en.wikipedia.org/wiki/Gene%20expression | Gene expression is the process by which information from a gene is used in the synthesis of a functional gene product that enables it to produce end products, proteins or non-coding RNA, and ultimately affect a phenotype. These products are often proteins, but in non-protein-coding genes such as transfer RNA (tRNA) and small nuclear RNA (snRNA), the product is a functional non-coding RNA.
The process of gene expression is used by all known life—eukaryotes (including multicellular organisms), prokaryotes (bacteria and archaea), and utilized by viruses—to generate the macromolecular machinery for life.
In genetics, gene expression is the most fundamental level at which the genotype gives rise to the phenotype, i.e. observable trait. The genetic information stored in DNA represents the genotype, whereas the phenotype results from the "interpretation" of that information. Such phenotypes are often displayed by the synthesis of proteins that control the organism's structure and development, or that act as enzymes catalyzing specific metabolic pathways.
All steps in the gene expression process may be modulated (regulated), including the transcription, RNA splicing, translation, and post-translational modification of a protein. Regulation of gene expression gives control over the timing, location, and amount of a given gene product (protein or ncRNA) present in a cell and can have a profound effect on the cellular structure and function. Regulation of gene expression is the basis for cellular differentiation, development, morphogenesis and the versatility and adaptability of any organism. Gene regulation may therefore serve as a substrate for evolutionary change.
Mechanism
Transcription
The production of an RNA copy from a DNA strand is called transcription, and is performed by RNA polymerases, which add one ribonucleotide at a time to a growing RNA strand, following the complementarity of the nucleotide bases. This RNA is complementary to the template 3′ → 5′ DNA strand, with the exception that thymines (T) are replaced with uracils (U) in the RNA (barring occasional copying errors).
In bacteria, transcription is carried out by a single type of RNA polymerase, which needs to bind a DNA sequence called a Pribnow box with the help of the sigma factor protein (σ factor) to start transcription. In eukaryotes, transcription is performed in the nucleus by three types of RNA polymerases, each of which needs a special DNA sequence called the promoter and a set of DNA-binding proteins—transcription factors—to initiate the process (see regulation of transcription below). RNA polymerase I is responsible for transcription of ribosomal RNA (rRNA) genes. RNA polymerase II (Pol II) transcribes all protein-coding genes but also some non-coding RNAs (e.g., snRNAs, snoRNAs or long non-coding RNAs). RNA polymerase III transcribes 5S rRNA, transfer RNA (tRNA) genes, and some small non-coding RNAs (e.g., 7SK). Transcription ends when the polymerase encounters a sequence called the terminator.
mRNA processing
While transcription of prokaryotic protein-coding genes creates messenger RNA (mRNA) that is ready for translation into protein, transcription of eukaryotic genes leaves a primary transcript of RNA (pre-RNA), which first has to undergo a series of modifications to become a mature RNA. Types and steps involved in the maturation processes vary between coding and non-coding preRNAs; i.e. even though preRNA molecules for both mRNA and tRNA undergo splicing, the steps and machinery involved are different. The processing of non-coding RNA is described below (non-coding RNA maturation).
The processing of pre-mRNA includes 5′ capping, which is a set of enzymatic reactions that add 7-methylguanosine (m7G) to the 5′ end of pre-mRNA and thus protect the RNA from degradation by exonucleases. The m7G cap is then bound by the cap binding complex heterodimer (CBP20/CBP80), which aids in mRNA export to the cytoplasm and also protects the RNA from decapping.
Another modification is 3′ cleavage and polyadenylation. These occur if a polyadenylation signal sequence (5′-AAUAAA-3′) is present in the pre-mRNA, usually located between the protein-coding sequence and the terminator. The pre-mRNA is first cleaved and then a series of ~200 adenines (A) is added to form the poly(A) tail, which protects the RNA from degradation. The poly(A) tail is bound by multiple poly(A)-binding proteins (PABPs) necessary for mRNA export and translation re-initiation. In the inverse process of deadenylation, poly(A) tails are shortened by the CCR4-Not 3′-5′ exonuclease, which often leads to full transcript decay.
A very important modification of eukaryotic pre-mRNA is RNA splicing. The majority of eukaryotic pre-mRNAs consist of alternating segments called exons and introns. During the process of splicing, an RNA-protein catalytic complex known as the spliceosome catalyzes two transesterification reactions, which remove an intron and release it in the form of a lariat structure, and then splice neighbouring exons together. In certain cases, some introns or exons can be either removed or retained in the mature mRNA. This so-called alternative splicing creates a series of different transcripts originating from a single gene. Because these transcripts can potentially be translated into different proteins, splicing extends the complexity of eukaryotic gene expression and the size of a species' proteome.
Extensive RNA processing may be an evolutionary advantage made possible by the nucleus of eukaryotes. In prokaryotes, transcription and translation happen together, whilst in eukaryotes, the nuclear membrane separates the two processes, giving time for RNA processing to occur.
Non-coding RNA maturation
In most organisms non-coding genes (ncRNA) are transcribed as precursors that undergo further processing. In the case of ribosomal RNAs (rRNA), they are often transcribed as a pre-rRNA that contains one or more rRNAs. The pre-rRNA is cleaved and modified (2′-O-methylation and pseudouridine formation) at specific sites by approximately 150 different small nucleolus-restricted RNA species, called snoRNAs. SnoRNAs associate with proteins, forming snoRNPs. While the snoRNA part base-pairs with the target RNA and thus positions the modification at a precise site, the protein part performs the catalytic reaction. In eukaryotes, in particular, a snoRNP called RNase MRP cleaves the 45S pre-rRNA into the 28S, 5.8S, and 18S rRNAs. The rRNA and RNA processing factors form large aggregates called the nucleolus.
In the case of transfer RNA (tRNA), for example, the 5′ sequence is removed by RNase P, whereas the 3′ end is removed by the tRNase Z enzyme and the non-templated 3′ CCA tail is added by a nucleotidyl transferase. In the case of micro RNA (miRNA), miRNAs are first transcribed as primary transcripts or pri-miRNA with a cap and poly-A tail and processed to short, 70-nucleotide stem-loop structures known as pre-miRNA in the cell nucleus by the enzymes Drosha and Pasha. After being exported, it is then processed to mature miRNAs in the cytoplasm by interaction with the endonuclease Dicer, which also initiates the formation of the RNA-induced silencing complex (RISC), composed of the Argonaute protein.
Even snRNAs and snoRNAs themselves undergo a series of modifications before they become part of a functional RNP complex. This is done either in the nucleoplasm or in specialized compartments called Cajal bodies. Their bases are methylated or pseudouridylated by a group of small Cajal body-specific RNAs (scaRNAs), which are structurally similar to snoRNAs.
RNA export
In eukaryotes most mature RNA must be exported to the cytoplasm from the nucleus. While some RNAs function in the nucleus, many RNAs are transported through the nuclear pores and into the cytosol. Export of RNAs requires association with specific proteins known as exportins. Specific exportin molecules are responsible for the export of a given RNA type. mRNA transport also requires the correct association with Exon Junction Complex (EJC), which ensures that correct processing of the mRNA is completed before export. In some cases RNAs are additionally transported to a specific part of the cytoplasm, such as a synapse; they are then towed by motor proteins that bind through linker proteins to specific sequences (called "zipcodes") on the RNA.
Translation
For some non-coding RNA, the mature RNA is the final gene product. In the case of messenger RNA (mRNA) the RNA is an information carrier coding for the synthesis of one or more proteins. mRNA carrying a single protein sequence (common in eukaryotes) is monocistronic whilst mRNA carrying multiple protein sequences (common in prokaryotes) is known as polycistronic.
Every mRNA consists of three parts: a 5′ untranslated region (5′UTR), a protein-coding region or open reading frame (ORF), and a 3′ untranslated region (3′UTR). The coding region carries the information for protein synthesis encoded by the genetic code as a series of triplets. Each triplet of nucleotides of the coding region is called a codon and corresponds to a binding site complementary to an anticodon triplet in transfer RNA. Transfer RNAs with the same anticodon sequence always carry an identical type of amino acid. The ribosome facilitates the binding of transfer RNA to messenger RNA, takes the amino acid from each transfer RNA, and chains the amino acids together, according to the order of triplets in the coding region, into an initially unstructured polypeptide. Each mRNA molecule is translated into many protein molecules, on average ~2800 in mammals.
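To make the codon-to-amino-acid mapping concrete, the sketch below (not from the article) translates a short open reading frame using a deliberately tiny excerpt of the standard genetic code; a real implementation would use the full 64-codon table.

```python
# A small excerpt of the standard genetic code (mRNA codons -> amino acids).
CODON_TABLE = {
    "AUG": "Met",  # start codon
    "UUU": "Phe", "UUC": "Phe",
    "GGU": "Gly", "GGC": "Gly",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Translate an mRNA coding region, reading triplets until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return "-".join(protein)

print(translate("AUGUUUGGUUAA"))   # Met-Phe-Gly
```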
In prokaryotes translation generally occurs at the point of transcription (co-transcriptionally), often using a messenger RNA that is still in the process of being created. In eukaryotes translation can occur in a variety of regions of the cell depending on where the protein being synthesized is destined to function. Major locations are the cytoplasm for soluble cytoplasmic proteins and the membrane of the endoplasmic reticulum for proteins that are destined for export from the cell or insertion into a cell membrane. Proteins intended to be produced at the endoplasmic reticulum are recognised part-way through the translation process. This is governed by the signal recognition particle—a protein that binds to the ribosome and directs it to the endoplasmic reticulum when it finds a signal peptide on the growing (nascent) amino acid chain.
Folding
Each protein exists as an unfolded polypeptide or random coil when translated from a sequence of mRNA into a linear chain of amino acids. This polypeptide lacks any developed three-dimensional structure (the left hand side of the neighboring figure). The polypeptide then folds into its characteristic and functional three-dimensional structure from a random coil. Amino acids interact with each other to produce a well-defined three-dimensional structure, the folded protein (the right hand side of the figure) known as the native state. The resulting three-dimensional structure is determined by the amino acid sequence (Anfinsen's dogma).
The correct three-dimensional structure is essential to function, although some parts of functional proteins may remain unfolded. Failure to fold into the intended shape usually produces inactive proteins with different properties including toxic prions. Several neurodegenerative and other diseases are believed to result from the accumulation of misfolded proteins. Many allergies are caused by the folding of the proteins, for the immune system does not produce antibodies for certain protein structures.
Enzymes called chaperones assist the newly formed protein to attain (fold into) the 3-dimensional structure it needs to function. Similarly, RNA chaperones help RNAs attain their functional shapes. Assisting protein folding is one of the main roles of the endoplasmic reticulum in eukaryotes.
Translocation
Secretory proteins of eukaryotes or prokaryotes must be translocated to enter the secretory pathway. Newly synthesized proteins are directed to the eukaryotic Sec61 or prokaryotic SecYEG translocation channel by signal peptides. The efficiency of protein secretion in eukaryotes is very dependent on the signal peptide which has been used.
Protein transport
Many proteins are destined for other parts of the cell than the cytosol and a wide range of signalling sequences or (signal peptides) are used to direct proteins to where they are supposed to be. In prokaryotes this is normally a simple process due to limited compartmentalisation of the cell. However, in eukaryotes there is a great variety of different targeting processes to ensure the protein arrives at the correct organelle.
Not all proteins remain within the cell and many are exported, for example, digestive enzymes, hormones and extracellular matrix proteins. In eukaryotes the export pathway is well developed and the main mechanism for the export of these proteins is translocation to the endoplasmic reticulum, followed by transport via the Golgi apparatus.
Regulation of gene expression
Regulation of gene expression is the control of the amount and timing of appearance of the functional product of a gene. Control of expression is vital to allow a cell to produce the gene products it needs when it needs them; in turn, this gives cells the flexibility to adapt to a variable environment, external signals, damage to the cell, and other stimuli. More generally, gene regulation gives the cell control over all structure and function, and is the basis for cellular differentiation, morphogenesis and the versatility and adaptability of any organism.
Numerous terms are used to describe types of genes depending on how they are regulated; these include:
A constitutive gene is a gene that is transcribed continually as opposed to a facultative gene, which is only transcribed when needed.
A housekeeping gene is a gene that is required to maintain basic cellular function and so is typically expressed in all cell types of an organism. Examples include actin, GAPDH and ubiquitin. Some housekeeping genes are transcribed at a relatively constant rate and these genes can be used as a reference point in experiments to measure the expression rates of other genes.
A facultative gene is a gene only transcribed when needed as opposed to a constitutive gene.
An inducible gene is a gene whose expression is either responsive to environmental change or dependent on the position in the cell cycle.
Any step of gene expression may be modulated, from the DNA-RNA transcription step to post-translational modification of a protein. The stability of the final gene product, whether it is RNA or protein, also contributes to the expression level of the gene—an unstable product results in a low expression level. In general gene expression is regulated through changes in the number and type of interactions between molecules that collectively influence transcription of DNA and translation of RNA.
Some simple examples of where gene expression is important are:
Control of insulin expression so it gives a signal for blood glucose regulation.
X chromosome inactivation in female mammals to prevent an "overdose" of the genes it contains.
Cyclin expression levels control progression through the eukaryotic cell cycle.
Transcriptional regulation
Regulation of transcription can be broken down into three main routes of influence: genetic (direct interaction of a control factor with the gene), modulation (interaction of a control factor with the transcription machinery), and epigenetic (non-sequence changes in DNA structure that influence transcription).
Direct interaction with DNA is the simplest and the most direct method by which a protein changes transcription levels. Genes often have several protein binding sites around the coding region with the specific function of regulating transcription. There are many classes of regulatory DNA binding sites known as enhancers, insulators and silencers. The mechanisms for regulating transcription are varied, from blocking key binding sites on the DNA for RNA polymerase to acting as an activator and promoting transcription by assisting RNA polymerase binding.
The activity of transcription factors is further modulated by intracellular signals causing protein post-translational modification including phosphorylation, acetylation, or glycosylation. These changes influence a transcription factor's ability to bind, directly or indirectly, to promoter DNA, to recruit RNA polymerase, or to favor elongation of a newly synthesized RNA molecule.
The nuclear membrane in eukaryotes allows further regulation of transcription factors by the duration of their presence in the nucleus, which is regulated by reversible changes in their structure and by binding of other proteins. Environmental stimuli or endocrine signals may cause modification of regulatory proteins eliciting cascades of intracellular signals, which result in regulation of gene expression.
It has become apparent that there is a significant influence of non-DNA-sequence specific effects on transcription. These effects are referred to as epigenetic and involve the higher order structure of DNA, non-sequence specific DNA binding proteins and chemical modification of DNA. In general epigenetic effects alter the accessibility of DNA to proteins and so modulate transcription.
In eukaryotes the structure of chromatin, controlled by the histone code, regulates access to DNA with significant impacts on the expression of genes in euchromatin and heterochromatin areas.
Enhancers, transcription factors, mediator complex and DNA loops in mammalian transcription
Gene expression in mammals is regulated by many cis-regulatory elements, including core promoters and promoter-proximal elements that are located near the transcription start sites of genes, upstream on the DNA (towards the 5' region of the sense strand). Other important cis-regulatory modules are localized in DNA regions that are distant from the transcription start sites. These include enhancers, silencers, insulators and tethering elements. Enhancers and their associated transcription factors have a leading role in the regulation of gene expression.
Enhancers are genome regions that regulate genes. Enhancers control cell-type-specific gene expression programs, most often by looping through long distances to come in physical proximity with the promoters of their target genes. Multiple enhancers, each often tens or hundreds of thousands of nucleotides distant from their target genes, loop to their target gene promoters and coordinate with each other to control gene expression.
The illustration shows an enhancer looping around to come into proximity with the promoter of a target gene. The loop is stabilized by a dimer of a connector protein (e.g. dimer of CTCF or YY1). One member of the dimer is anchored to its binding motif on the enhancer and the other member is anchored to its binding motif on the promoter (represented by the red zigzags in the illustration). Several cell function-specific transcription factors (among the about 1,600 transcription factors in a human cell) generally bind to specific motifs on an enhancer. A small combination of these enhancer-bound transcription factors, when brought close to a promoter by a DNA loop, govern transcription level of the target gene. Mediator (a complex usually consisting of about 26 proteins in an interacting structure) communicates regulatory signals from enhancer DNA-bound transcription factors directly to the RNA polymerase II (pol II) enzyme bound to the promoter.
Enhancers, when active, are generally transcribed from both strands of DNA with RNA polymerases acting in two different directions, producing two eRNAs as illustrated in the figure. An inactive enhancer may be bound by an inactive transcription factor. Phosphorylation of the transcription factor may activate it and that activated transcription factor may then activate the enhancer to which it is bound (see small red star representing phosphorylation of transcription factor bound to enhancer in the illustration). An activated enhancer begins transcription of its RNA before activating transcription of messenger RNA from its target gene.
DNA methylation and demethylation in transcriptional regulation
DNA methylation is a widespread mechanism for epigenetic influence on gene expression and is seen in bacteria and eukaryotes and has roles in heritable transcription silencing and transcription regulation. Methylation most often occurs on a cytosine (see Figure). Methylation of cytosine primarily occurs in dinucleotide sequences where a cytosine is followed by a guanine, a CpG site. The number of CpG sites in the human genome is about 28 million. Depending on the type of cell, about 70% of the CpG sites have a methylated cytosine.
Methylation of cytosine in DNA has a major role in regulating gene expression. Methylation of CpGs in a promoter region of a gene usually represses gene transcription while methylation of CpGs in the body of a gene increases expression. TET enzymes play a central role in demethylation of methylated cytosines. Demethylation of CpGs in a gene promoter by TET enzyme activity increases transcription of the gene.
Transcriptional regulation in learning and memory
In a rat, contextual fear conditioning (CFC) is a painful learning experience. Just one episode of CFC can result in a life-long fearful memory. After an episode of CFC, cytosine methylation is altered in the promoter regions of about 9.17% of all genes in the hippocampus neuron DNA of a rat. The hippocampus is where new memories are initially stored. After CFC about 500 genes have increased transcription (often due to demethylation of CpG sites in a promoter region) and about 1,000 genes have decreased transcription (often due to newly formed 5-methylcytosine at CpG sites in a promoter region). The pattern of induced and repressed genes within neurons appears to provide a molecular basis for forming the first transient memory of this training event in the hippocampus of the rat brain.
Some specific mechanisms guiding new DNA methylations and new DNA demethylations in the hippocampus during memory establishment have been established (see for summary). One mechanism includes guiding the short isoform of the TET1 DNA demethylation enzyme, TET1s, to about 600 locations on the genome. The guidance is performed by association of TET1s with EGR1 protein, a transcription factor important in memory formation. Bringing TET1s to these locations initiates DNA demethylation at those sites, up-regulating associated genes. A second mechanism involves DNMT3A2, a splice-isoform of DNA methyltransferase DNMT3A, which adds methyl groups to cytosines in DNA. This isoform is induced by synaptic activity, and its location of action appears to be determined by histone post-translational modifications (a histone code). The resulting new messenger RNAs are then transported by messenger RNP particles (neuronal granules) to synapses of the neurons, where they can be translated into proteins affecting the activities of synapses.
In particular, the brain-derived neurotrophic factor gene (BDNF) is known as a "learning gene". After CFC there was upregulation of BDNF gene expression, related to decreased CpG methylation of certain internal promoters of the gene, and this was correlated with learning.
Transcriptional regulation in cancer
The majority of gene promoters contain a CpG island with numerous CpG sites. When many of a gene's promoter CpG sites are methylated the gene becomes silenced. Colorectal cancers typically have 3 to 6 driver mutations and 33 to 66 hitchhiker or passenger mutations. However, transcriptional silencing may be of more importance than mutation in causing progression to cancer. For example, in colorectal cancers about 600 to 800 genes are transcriptionally silenced by CpG island methylation (see regulation of transcription in cancer). Transcriptional repression in cancer can also occur by other epigenetic mechanisms, such as altered expression of microRNAs. In breast cancer, transcriptional repression of BRCA1 may occur more frequently by over-transcribed microRNA-182 than by hypermethylation of the BRCA1 promoter (see Low expression of BRCA1 in breast and ovarian cancers).
Post-transcriptional regulation
In eukaryotes, where export of RNA is required before translation is possible, nuclear export is thought to provide additional control over gene expression. All transport in and out of the nucleus is via the nuclear pore and transport is controlled by a wide range of importin and exportin proteins.
Expression of a gene coding for a protein is only possible if the messenger RNA carrying the code survives long enough to be translated. In a typical cell, an RNA molecule is only stable if specifically protected from degradation. RNA degradation has particular importance in regulation of expression in eukaryotic cells where mRNA has to travel significant distances before being translated. In eukaryotes, RNA is stabilised by certain post-transcriptional modifications, particularly the 5′ cap and poly-adenylated tail.
Intentional degradation of mRNA is used not just as a defence mechanism from foreign RNA (normally from viruses) but also as a route of mRNA destabilisation. If an mRNA molecule has a complementary sequence to a small interfering RNA then it is targeted for destruction via the RNA interference pathway.
Three prime untranslated regions and microRNAs
Three prime untranslated regions (3′UTRs) of messenger RNAs (mRNAs) often contain regulatory sequences that post-transcriptionally influence gene expression. Such 3′-UTRs often contain both binding sites for microRNAs (miRNAs) as well as for regulatory proteins. By binding to specific sites within the 3′-UTR, miRNAs can decrease gene expression of various mRNAs by either inhibiting translation or directly causing degradation of the transcript. The 3′-UTR also may have silencer regions that bind repressor proteins that inhibit the expression of a mRNA.
The 3′-UTR often contains microRNA response elements (MREs). MREs are sequences to which miRNAs bind. These are prevalent motifs within 3′-UTRs. Among all regulatory motifs within the 3′-UTRs (e.g. including silencer regions), MREs make up about half of the motifs.
As of 2014, the miRBase web site, an archive of miRNA sequences and annotations, listed 28,645 entries in 233 biologic species. Of these, 1,881 miRNAs were in annotated human miRNA loci. miRNAs were predicted to have an average of about four hundred target mRNAs (affecting expression of several hundred genes). Friedman et al. estimate that >45,000 miRNA target sites within human mRNA 3′UTRs are conserved above background levels, and >60% of human protein-coding genes have been under selective pressure to maintain pairing to miRNAs.
Direct experiments show that a single miRNA can reduce the stability of hundreds of unique mRNAs. Other experiments show that a single miRNA may repress the production of hundreds of proteins, but that this repression often is relatively mild (less than 2-fold).
The effects of miRNA dysregulation of gene expression seem to be important in cancer. For instance, in gastrointestinal cancers, nine miRNAs have been identified as epigenetically altered and effective in down regulating DNA repair enzymes.
The effects of miRNA dysregulation of gene expression also seem to be important in neuropsychiatric disorders, such as schizophrenia, bipolar disorder, major depression, Parkinson's disease, Alzheimer's disease and autism spectrum disorders.
Translational regulation
Direct regulation of translation is less prevalent than control of transcription or mRNA stability but is occasionally used. Inhibition of protein translation is a major target for toxins and antibiotics, so they can kill a cell by overriding its normal gene expression control. Protein synthesis inhibitors include the antibiotic neomycin and the toxin ricin.
Post-translational modifications
Post-translational modifications (PTMs) are covalent modifications to proteins. Like RNA splicing, they help to significantly diversify the proteome. These modifications are usually catalyzed by enzymes. Additionally, processes like covalent additions to amino acid side chain residues can often be reversed by other enzymes. However, some, like the proteolytic cleavage of the protein backbone, are irreversible.
PTMs play many important roles in the cell. For example, phosphorylation is primarily involved in activating and deactivating proteins and in signaling pathways. PTMs are involved in transcriptional regulation: an important function of acetylation and methylation is histone tail modification, which alters how accessible DNA is for transcription. They can also be seen in the immune system, where glycosylation plays a key role. One type of PTM can initiate another type of PTM, as can be seen in how ubiquitination tags proteins for degradation through proteolysis. Proteolysis, other than being involved in breaking down proteins, is also important in activating and deactivating them, and in regulating biological processes such as DNA transcription and cell death.
Measurement
Measuring gene expression is an important part of many life sciences, as the ability to quantify the level at which a particular gene is expressed within a cell, tissue or organism can provide a lot of valuable information. For example, measuring gene expression can:
Identify viral infection of a cell (viral protein expression).
Determine an individual's susceptibility to cancer (oncogene expression).
Find if a bacterium is resistant to penicillin (beta-lactamase expression).
Gene expression profiling evaluates a panel of genes to help understand the fundamental mechanism of a cell. This is increasingly used in cancer therapy to target specific chemotherapy. (See RNA-Seq and DNA microarray for details.)
Similarly, the analysis of the location of protein expression is a powerful tool, and this can be done on an organismal or cellular scale. Investigation of localization is particularly important for the study of development in multicellular organisms and as an indicator of protein function in single cells. Ideally, measurement of expression is done by detecting the final gene product (for many genes, this is the protein); however, it is often easier to detect one of the precursors, typically mRNA and to infer gene-expression levels from these measurements.
mRNA quantification
Levels of mRNA can be quantitatively measured by northern blotting, which provides size and sequence information about the mRNA molecules. A sample of RNA is separated on an agarose gel and hybridized to a radioactively labeled RNA probe that is complementary to the target sequence. The radiolabeled RNA is then detected by an autoradiograph. Because the use of radioactive reagents makes the procedure time-consuming and potentially dangerous, alternative labeling and detection methods, such as digoxigenin and biotin chemistries, have been developed. Perceived disadvantages of northern blotting are that large quantities of RNA are required and that quantification may not be completely accurate, as it involves measuring band strength in an image of a gel. On the other hand, the additional mRNA size information from the northern blot allows the discrimination of alternatively spliced transcripts.
Another approach for measuring mRNA abundance is RT-qPCR. In this technique, reverse transcription is followed by quantitative PCR. Reverse transcription first generates a DNA template from the mRNA; this single-stranded template is called cDNA. The cDNA template is then amplified in the quantitative step, during which the fluorescence emitted by labeled hybridization probes or intercalating dyes changes as the DNA amplification process progresses. With a carefully constructed standard curve, qPCR can produce an absolute measurement of the number of copies of original mRNA, typically in units of copies per nanolitre of homogenized tissue or copies per cell. qPCR is very sensitive (detection of a single mRNA molecule is theoretically possible), but can be expensive depending on the type of reporter used; fluorescently labeled oligonucleotide probes are more expensive than non-specific intercalating fluorescent dyes.
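To make the standard-curve step concrete, the following minimal sketch in Python fits the usual linear relation between Ct and the logarithm of input copy number and then back-calculates the copy number of an unknown sample. The dilution series, Ct readings and sample value are invented illustration numbers, not measurements from any particular assay or instrument.

import numpy as np

# Hypothetical standard curve: known input copy numbers and their measured Ct values.
standard_copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3])
standard_ct = np.array([12.1, 15.5, 18.8, 22.2, 25.6])

# qPCR standard curves are close to linear in Ct versus log10(copies): Ct = slope * log10(N) + intercept.
slope, intercept = np.polyfit(np.log10(standard_copies), standard_ct, 1)

# Amplification efficiency follows from the slope (a slope of about -3.32 corresponds to 100%).
efficiency = 10 ** (-1.0 / slope) - 1.0

def copies_from_ct(ct):
    # Back-calculate the starting copy number from a measured Ct value.
    return 10 ** ((ct - intercept) / slope)

sample_ct = 20.3  # Ct measured for the unknown sample
print(f"efficiency about {efficiency:.0%}")
print(f"estimated starting copies about {copies_from_ct(sample_ct):.3g}")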
For expression profiling, or high-throughput analysis of many genes within a sample, quantitative PCR may be performed for hundreds of genes simultaneously in the case of low-density arrays. A second approach is the hybridization microarray. A single array or "chip" may contain probes to determine transcript levels for every known gene in the genome of one or more organisms. Alternatively, "tag based" technologies like Serial analysis of gene expression (SAGE) and RNA-Seq, which can provide a relative measure of the cellular concentration of different mRNAs, can be used. An advantage of tag-based methods is the "open architecture", allowing for the exact measurement of any transcript, with a known or unknown sequence. Next-generation sequencing (NGS) such as RNA-Seq is another approach, producing vast quantities of sequence data that can be matched to a reference genome. Although NGS is comparatively time-consuming, expensive, and resource-intensive, it can identify single-nucleotide polymorphisms, splice-variants, and novel genes, and can also be used to profile expression in organisms for which little or no sequence information is available.
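As an illustration of how tag-counting methods such as RNA-Seq yield a relative measure of mRNA concentration, the short Python sketch below converts raw read counts into transcripts-per-million (TPM). The counts and transcript lengths are made-up example values rather than data from a real experiment, and TPM is only one of several normalisations in common use.

# Hypothetical read counts and transcript lengths (in kilobases) for four genes.
counts = {"geneA": 1500, "geneB": 300, "geneC": 9000, "geneD": 120}
length_kb = {"geneA": 2.0, "geneB": 0.5, "geneC": 4.5, "geneD": 1.2}

# TPM: divide each count by the transcript length, then rescale so the values sum to one million.
rate = {gene: counts[gene] / length_kb[gene] for gene in counts}
scale = sum(rate.values()) / 1_000_000
tpm = {gene: value / scale for gene, value in rate.items()}

for gene, value in tpm.items():
    print(f"{gene}: {value:,.0f} TPM")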
RNA profiles in Wikipedia
Profiles like these are found for almost all proteins listed in Wikipedia. They are generated by organizations such as the Genomics Institute of the Novartis Research Foundation and the European Bioinformatics Institute. Additional information can be found by searching their databases (for an example of the GLUT4 transporter, see the citation). These profiles indicate the level of gene expression (and hence the amount of RNA produced) for a given protein in a given tissue, and are color-coded accordingly in the images located in the Protein Box on the right side of each Wikipedia page.
Protein quantification
For genes encoding proteins, the expression level can be directly assessed by a number of methods with some clear analogies to the techniques for mRNA quantification.
One of the most commonly used methods is to perform a Western blot against the protein of interest. This gives information on the size of the protein in addition to its identity. A sample (often cellular lysate) is separated on a polyacrylamide gel, transferred to a membrane and then probed with an antibody to the protein of interest. The antibody can either be conjugated to a fluorophore or to horseradish peroxidase for imaging and/or quantification. The gel-based nature of this assay makes quantification less accurate, but it has the advantage of being able to identify later modifications to the protein, for example proteolysis or ubiquitination, from changes in size.
mRNA-protein correlation
While transcription directly reflects gene expression, the copy number of mRNA molecules does not directly correlate with the number of protein molecules translated from mRNA. Quantification of both protein and mRNA permits a correlation of the two levels. Regulation on each step of gene expression can impact the correlation, as shown for regulation of translation or protein stability. Post-translational factors, such as protein transport in highly polar cells, can influence the measured mRNA-protein correlation as well.
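A minimal sketch of such a correlation analysis is shown below in Python; the mRNA and protein measurements are invented toy values, and a rank (Spearman) correlation is used because the two quantities are rarely on comparable scales.

import numpy as np

# Hypothetical paired measurements for six genes: mRNA level and protein level.
mrna = np.array([5.2, 7.9, 3.1, 9.4, 6.0, 4.4])
protein = np.array([1.1, 2.3, 0.7, 2.0, 1.9, 0.9])

# Spearman correlation: the Pearson correlation of the ranks, robust to non-linear scaling.
def ranks(x):
    return np.argsort(np.argsort(x))

rho = np.corrcoef(ranks(mrna), ranks(protein))[0, 1]
print(f"mRNA-protein rank correlation: {rho:.2f}")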
Localization
Analysis of expression is not limited to quantification; localization can also be determined. mRNA can be detected with a suitably labelled complementary mRNA strand and protein can be detected via labelled antibodies. The probed sample is then observed by microscopy to identify where the mRNA or protein is.
By replacing the gene with a new version fused to a green fluorescent protein marker or similar, expression may be directly quantified in live cells. This is done by imaging using a fluorescence microscope. It is very difficult to clone a GFP-fused protein into its native location in the genome without affecting expression levels, so this method often cannot be used to measure endogenous gene expression. It is, however, widely used to measure the expression of a gene artificially introduced into the cell, for example via an expression vector. By fusing a target protein to a fluorescent reporter, the protein's behavior, including its cellular localization and expression level, can be significantly changed.
The enzyme-linked immunosorbent assay works by using antibodies immobilised on a microtiter plate to capture proteins of interest from samples added to the well. Using a detection antibody conjugated to an enzyme or fluorophore the quantity of bound protein can be accurately measured by fluorometric or colourimetric detection. The detection process is very similar to that of a Western blot, but by avoiding the gel steps more accurate quantification can be achieved.
Expression system
An expression system is a system specifically designed for the production of a gene product of choice. This is normally a protein, although it may also be RNA, such as tRNA or a ribozyme. An expression system consists of a gene, normally encoded by DNA, and the molecular machinery required to transcribe the DNA into mRNA and translate the mRNA into protein using the reagents provided. In the broadest sense this includes every living cell, but the term is more normally used to refer to expression as a laboratory tool. An expression system is therefore often artificial in some manner. Expression itself, however, is a fundamentally natural process; viruses are an excellent example, replicating by using the host cell as an expression system for the viral proteins and genome.
Inducible expression
For example, doxycycline is used in "Tet-on" and "Tet-off" tetracycline-controlled transcriptional activation to regulate transgene expression in organisms and cell cultures.
In nature
In addition to these biological tools, certain naturally observed configurations of DNA (genes, promoters, enhancers, repressors) and the associated machinery itself are referred to as an expression system. This term is normally used in the case where a gene or set of genes is switched on under well defined conditions, for example, the simple repressor switch expression system in Lambda phage and the lac operator system in bacteria. Several natural expression systems are directly used or modified and used for artificial expression systems such as the Tet-on and Tet-off expression system.
Gene networks
Genes have sometimes been regarded as nodes in a network, with inputs being proteins such as transcription factors, and outputs being the level of gene expression. The node itself performs a function, and the operation of these functions has been interpreted as a kind of information processing within cells that determines cellular behavior.
Gene networks can also be constructed without formulating an explicit causal model. This is often the case when assembling networks from large expression data sets. Covariation and correlation of expression is computed across a large sample of cases and measurements (often transcriptome or proteome data). The source of variation can be either experimental or natural (observational). There are several ways to construct gene expression networks, but one common approach is to compute a matrix of all pair-wise correlations of expression across conditions, time points, or individuals and convert the matrix (after thresholding at some cut-off value) into a graphical representation in which nodes represent genes, transcripts, or proteins and edges connecting these nodes represent the strength of association (see GeneNetwork and GeneNetwork 2).
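A minimal sketch of this correlate-and-threshold approach is given below in Python and NumPy; the expression matrix is random toy data with one deliberately co-expressed pair of genes, and the cut-off of 0.8 is an arbitrary illustration rather than a recommended value.

import numpy as np

rng = np.random.default_rng(0)
n_genes, n_samples = 6, 40
expression = rng.normal(size=(n_genes, n_samples))                  # rows are genes, columns are samples
expression[1] = expression[0] + 0.3 * rng.normal(size=n_samples)    # make genes 0 and 1 co-expressed

# All pair-wise Pearson correlations of expression across the samples.
corr = np.corrcoef(expression)

# Threshold the matrix: an edge connects two genes whose absolute correlation exceeds the cut-off.
cutoff = 0.8
edges = [(i, j, corr[i, j])
         for i in range(n_genes)
         for j in range(i + 1, n_genes)
         if abs(corr[i, j]) >= cutoff]

for i, j, r in edges:
    print(f"gene{i} -- gene{j} (r = {r:+.2f})")

Real analyses use many more genes, correct for multiple testing, and often use mutual information or partial correlations in place of simple pair-wise correlation.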
Techniques and tools
The following experimental techniques are used to measure gene expression and are listed in roughly chronological order, starting with the older, more established technologies. They are divided into two groups based on their degree of multiplexity.
Low-to-mid-plex techniques:
Reporter gene
Northern blot
Western blot
Fluorescent in situ hybridization
Reverse transcription PCR
Higher-plex techniques:
SAGE
DNA microarray
Tiling array
RNA-Seq
Gene expression databases
Gene expression omnibus (GEO) at NCBI
Expression Atlas at the EBI
Bgee at the SIB Swiss Institute of Bioinformatics
Mouse Gene Expression Database at the Jackson Laboratory
CollecTF: a database of experimentally validated transcription factor-binding sites in Bacteria.
COLOMBOS: collection of bacterial expression compendia.
Many Microbe Microarrays Database: microbial Affymetrix data
See also
References
External links
Plant Transcription Factor Database and Plant Transcriptional Regulation Data and Analysis Platform
Molecular biology | Gene expression | [
"Chemistry",
"Biology"
] | 8,503 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
159,441 | https://en.wikipedia.org/wiki/Type%20metal | In printing, type metal refers to the metal alloys used in traditional typefounding and hot metal typesetting. Historically, type metal was an alloy of lead, tin and antimony in different proportions depending on the application, be it individual character mechanical casting for hand setting, mechanical line casting or individual character mechanical typesetting and stereo plate casting. The proportions used are in the range: lead 50‒86%, antimony 11‒30% and tin 3‒20%. Antimony and tin are added to lead for durability while reducing the difference between the coefficients of expansion of the matrix and the alloy. Apart from durability, the general requirements for type metal are that it should produce a true and sharp cast, and retain correct dimensions and form after cooling down. It should also be easy to cast at a reasonably low melting temperature; iron should not dissolve in the molten metal; and mould and nozzles should stay clean and easy to maintain. Today, Monotype machines can utilize a wide range of different alloys. Mechanical linecasting equipment uses alloys that are close to eutectic.
History
Although the knowledge of casting soft metals in moulds was well established before Johannes Gutenberg's time, his discovery of an alloy that was hard, durable, and would take a clear impression from the mould represents a fundamental aspect of his solution to the problem of printing with movable type. This alloy did not shrink as much as lead alone when cooled. Gutenberg's other contributions were the creation of inks that would adhere to metal type and a method of softening handmade printing paper so that it would take the impression well.
Required characteristics
Cheap, plentifully available as galena and easily workable, lead has many of the ideal characteristics, but on its own it lacks the necessary hardness and does not make castings with sharp details because molten lead shrinks and sags when it cools to a solid.
After much experimentation it was found that adding pewterer's tin, obtained from cassiterite, improved the ability of the cast type to withstand the wear and tear of the printing process, making it tougher but not more brittle.
Despite patient experimentation with different proportions of both metals, solving the second part of the type metal problem proved very difficult without the addition of yet a third metal, antimony.
Alchemists had shown that when stibnite, an antimony sulfide ore, was heated with scrap iron, metallic antimony was produced. The typefounder would typically introduce powdered stibnite and horseshoe nails into his crucible to melt lead, tin and antimony into type metal. Both the iron and the sulfides would be rejected in the process.
The addition of antimony conferred the much needed improvements in the properties of hardness, wear resistance and especially, the sharpness of reproduction of the type design, given that it has the curious property of diminishing the shrinkage of the alloy upon solidification.
Composition of type metal
Type metal is an alloy of lead, tin and antimony in different proportions depending on the application, be it individual character mechanical casting for hand setting, mechanical line casting or individual character mechanical typesetting and stereo plate casting.
The proportions used are in the range: lead 50‒86%, antimony 11‒30% and tin 3‒20%. The basic characteristics of these metals are as follows:
Lead
Type metal is an alloy of lead (Pb). Pure lead is a relatively cheap metal, is soft and thus easy to work, and is easy to cast since it melts at about 327 °C. However, it shrinks when it solidifies, making letters that are not sharp enough for printing. In addition, pure lead letters will quickly deform during use; a direct result of the easy workability of lead.
Lead is exceptionally soft, malleable, and ductile but with little tensile strength.
Lead oxide is a poison that primarily damages brain function. Metallic lead is more stable and less toxic than its oxidized form. Metallic lead cannot be absorbed through contact with skin, so it may be handled, with care, at far less risk than lead oxide.
Tin
Tin (Sn) promotes the fluidity of the molten alloy and makes the type tough, giving the alloy resistance to wear. It is harder, stiffer and tougher than lead.
Antimony
Antimony (Sb) is a metalloid element, which melts at about 630 °C. Antimony has a crystalline appearance while being both brittle and fusible.
When alloyed with lead to produce type metal, antimony gives it the hardness it needs to resist deformation during printing, and gives it sharper castings from the mould to produce clear, easily read printed text on the page.
Typical type metal proportions
The actual compositions differed over time, and different machines were adjusted to different alloys depending on the intended uses of the type. Printers sometimes had their own preferences about the quality of particular alloys. The Lanston Monotype Corporation in the United Kingdom listed a whole range of alloys in its manuals.
Alloys for mechanical composition
Most mechanical typesetting is divided basically into two different competing technologies: line casting (Linotype and Intertype) and single character casting (Monotype).
The manuals for the Monotype composition caster (1952 and later editions) mention at least five different alloys to be used for casting, depending on the purpose of the type and the work to be done with it.
Although in general Monotype cast type characters can be visually identified as having a square nick (as opposed to the round nicks used on foundry type), there is no easy way to identify the alloy aside from an expensive chemical assay in a laboratory.
Apart from this the two Monotype companies in the United States and the UK also made moulds with 'round' nicks. Typefounders and printers could and did order specially designed moulds to their own specifications: height, size, kind of nick, even the number of nicks could be changed.
Type produced with these special moulds can only be identified if the foundry or printer is known.
In Switzerland the company "Metallum Pratteln AG", in Basel had yet another list of type-metal alloys. If needed, any alloy according to customer specifications could be produced.
Dross
Regeneration metal was melted into the crucible to replace the tin and antimony lost through the dross.
Every time type metal is remelted, some of the tin and antimony oxidises. These oxides collect on the surface of the metal in the crucible and must be removed. After the molten metal is stirred, a grey powder, the dross, forms on the surface and needs to be skimmed off. Dross contains recoverable amounts of tin and antimony.
Dross must be processed at specialized companies, in order to extract the pure metals in conditions that would prevent environmental pollution and remain economically feasible.
Behaviour of binary alloys
A pure metal melts and solidifies at a single, well-defined temperature. This is not the case with alloys, which melt and solidify over a range of temperatures in which different phases form. For these lead, antimony and tin mixtures, the melting range lies considerably below the melting points of the pure components.
Antimony/Lead mixture examples
The addition of a small amount of antimony (5% to 6%) to lead significantly alters the alloy's behavior compared with pure lead: although the melting point of pure antimony is 630 °C, this mixture is completely molten and a homogeneous fluid even at temperatures as low as 371 °C. As the mixture cools, the alloy remains liquid even as it passes 327 °C, the melting point of pure lead. Once the temperature reaches 291 °C, lead crystals start to form, increasing the cohesion of the liquid alloy. At 252 °C, the remaining mixture starts to solidify fully, during which the temperature remains constant. Only when the mixture has fully solidified does the temperature start to decrease again.
Using a 10% antimony, 90% lead mixture delays lead crystal formation until approximately 260 °C.
Using a 12% antimony, 88% lead mixture prevents crystal formation entirely, becoming a eutectic. This alloy has a clear melting point, at 252 °C.
Increasing the antimony content beyond 12% will lead to predominantly antimony crystallization.
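The figures above suggest that, to a first approximation, the temperature at which lead crystals first appear (the liquidus) falls roughly linearly from pure lead at 327 °C to the eutectic at about 12% antimony and 252 °C. The short Python sketch below encodes only that rough assumption for illustration; real lead-antimony phase diagrams are measured, not interpolated, so the printed values are approximate.

# Rough liquidus estimate for lead-antimony mixtures up to the eutectic composition,
# assuming a straight line between pure lead and the eutectic point.
PURE_LEAD_MELT_C = 327.0   # melting point of pure lead
EUTECTIC_SB_PCT = 12.0     # eutectic composition, percent antimony
EUTECTIC_TEMP_C = 252.0    # eutectic temperature

def liquidus_estimate(antimony_pct):
    # Estimated temperature at which lead crystals start to form on cooling.
    if not 0.0 <= antimony_pct <= EUTECTIC_SB_PCT:
        raise ValueError("linear estimate is only meant for compositions up to the eutectic")
    fraction = antimony_pct / EUTECTIC_SB_PCT
    return PURE_LEAD_MELT_C - fraction * (PURE_LEAD_MELT_C - EUTECTIC_TEMP_C)

for pct in (5, 6, 10, 12):
    print(f"{pct}% Sb: crystals from about {liquidus_estimate(pct):.0f} °C")

For 10% antimony this simple estimate gives roughly 265 °C, close to the 260 °C quoted above.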
Ternary mixtures
Adding tin to this binary system complicates the behaviour even further. Some tin enters into the eutectic. A mixture of 4% tin, 12% antimony, and 84% lead solidifies at 240 °C.
Depending on which metals are present in excess of the eutectic composition, crystals of those metals form first, depleting the liquid until the eutectic 4/12 mixture is reached once more.
The 12/20 alloy contains many mixed crystals of tin and antimony; these crystals give the alloy its hardness and its resistance to wear.
The antimony content cannot be raised without also adding some tin, because otherwise the fluidity of the mixture drops sharply wherever the temperature falls in the channels of the machine, and nozzles can then be blocked by antimony crystals.
Metals used on typecasting machines
Eutectic alloys are used on Linotype-machines and Ludlow-casters to prevent blockage of the mould and to ensure continuous trouble-free casting.
Alloys used on Monotype machines tend to contain a higher proportion of tin, to obtain tougher characters. All characters should be able to resist the pressure during printing. This meant an extra investment, but Monotype was an expensive system all round.
Present usage of type metal
The fierce competition between the different mechanical typecasting systems such as Linotype and Monotype gave rise to some lasting myths about type metal. Linotype users looked down on Monotype and vice versa.
Monotype machines, however, can utilize a wide range of different alloys; maintaining constant, high production meant strict standardization of the type metal within a company, so as to keep interruptions of production to a minimum. Assays were repeated at regular intervals to monitor the alloy used, since every time the metal is recycled roughly half a per cent of the tin content is lost through oxidation. These oxides are removed with the dross when the surface of the molten metal is cleaned.
Nowadays this "battle" has lost its importance, at least for Monotype. The quality of the produced type is far more important. Alloys with a high-content of antimony, and subsequently a high content of tin, can be cast at a higher temperature, and at a lower speed and with more cooling at a Monotype composition or supercaster.
Although care was taken to avoid mixing different types of type metal in shops with different typecasting systems, in actual practice this often occurred. Since a Monotype composition caster can cope with a variety of different metal alloys, the occasional mixing of Linotype alloy with discarded typefounders' alloy has even proven useful.
Mechanical linecasting equipment uses alloys that are close to eutectic.
Contamination of type metals
Copper
Copper has been used for hardening type metal; this metal easily forms mixed crystals with tin when the alloy cools down. These crystals will grow just below the exit opening of the nozzle in Monotype machines, resulting in a total blockage after some time. These nozzles are very difficult to clean, because the hard crystals will resist drilling.
Zinc
Brass spaces contain zinc, which is extremely counterproductive in type metal. Even a tiny amount, less than 1%, will form a dusty film on the surface of the molten metal that is difficult to remove. Characters cast from type metal contaminated in this way are of inferior quality; the only solution is to discard the metal and replace it with fresh alloy.
Brass and zinc should therefore be removed before remelting. The same applies to aluminium, although this metal will float on top of the melt, and will be easily discovered and removed, before it is dissolved into the lead.
Magnesium
Magnesium plates are very dangerous in molten lead, because magnesium ignites easily and will burn fiercely in the molten metal.
Iron
Iron hardly dissolves in type metal at all, even though the molten metal is always in contact with the cast-iron surface of the melting pot.
Historic references to type metals
Joseph Moxon, in his Mechanick Exercises, mentions a mix of equal amounts of "antimony" and iron nails.
The "antimony" here was in fact stibnite, antimony-sulfide (Sb2S3). The iron was burned away in this process, reducing the antimony and at the same time removing the unwanted sulfur. In this way ferro-sulfide was formed, that would evaporate with all the fumes.
The mixture of stibnite and nails was heated red-hot in an open-air furnace until everything was molten and the reaction was complete. The resulting metal can contain up to 9% iron. Further purification can be done by mixing the hot melt with kitchen salt (NaCl). After this, red-hot lead from another melting pot is added and stirred in thoroughly.
Some tin was added to the alloy for casting small characters and narrow spaces, to fill the narrow parts of the mould better. The good properties of tin were well known, but its use was sometimes minimized to save expense.
Much of this toxic work was done by child labour.
As a supposed antidote to the inhaled toxic metal fumes, the workers were given a mixture of red wine and salad oil.
References
Alloys
Printing | Type metal | [
"Chemistry"
] | 2,741 | [
"Alloys",
"Chemical mixtures"
] |
159,472 | https://en.wikipedia.org/wiki/Flight | Flight or flying is the motion of an object through an atmosphere, or through the vacuum of outer space, without contacting any planetary surface. This can be achieved by generating aerodynamic lift associated with gliding or propulsive thrust, aerostatically using buoyancy, or by ballistic movement.
Many things can fly, from animal aviators such as birds, bats and insects, to natural gliders/parachuters such as patagial animals, anemochorous seeds and ballistospores, to human inventions like aircraft (airplanes, helicopters, airships, balloons, etc.) and rockets which may propel spacecraft and spaceplanes.
The engineering aspects of flight are the purview of aerospace engineering, which is subdivided into aeronautics, the study of vehicles that travel through the atmosphere; astronautics, the study of vehicles that travel through space; and ballistics, the study of the flight of projectiles.
Types of flight
Buoyant flight
Humans have managed to construct lighter-than-air vehicles that rise off the ground and fly, due to their buoyancy in the air.
An aerostat is a system that remains aloft primarily through the use of buoyancy to give an aircraft the same overall density as air. Aerostats include free balloons, airships, and moored balloons. An aerostat's main structural component is its envelope, a lightweight skin that encloses a volume of lifting gas to provide buoyancy, to which other components are attached.
Aerostats are so named because they use "aerostatic" lift, a buoyant force that does not require lateral movement through the surrounding air mass to effect a lifting force. By contrast, aerodynes primarily use aerodynamic lift, which requires the lateral movement of at least some part of the aircraft through the surrounding air mass.
Aerodynamic flight
Unpowered flight versus powered flight
Some things that fly do not generate propulsive thrust through the air, for example, the flying squirrel. This is termed gliding. Some other things can exploit rising air to climb such as raptors (when gliding) and man-made sailplane gliders. This is termed soaring. However most other birds and all powered aircraft need a source of propulsion to climb. This is termed powered flight.
Animal flight
The only groups of living things that use powered flight are birds, insects, and bats, while many groups have evolved gliding. The extinct pterosaurs, an order of reptiles contemporaneous with the dinosaurs, were also very successful flying animals, and there were apparently some flying dinosaurs (see Flying and gliding animals#Non-avian dinosaurs). Each of these groups' wings evolved independently, with insects the first animal group to evolve flight. The wings of the flying vertebrate groups are all based on the forelimbs, but differ significantly in structure; insect wings are hypothesized to be highly modified versions of structures that form gills in most other groups of arthropods.
Bats are the only mammals capable of sustaining level flight (see bat flight). However, there are several gliding mammals which are able to glide from tree to tree using fleshy membranes between their limbs; some can travel hundreds of meters in this way with very little loss in height. Flying frogs use greatly enlarged webbed feet for a similar purpose, and there are flying lizards which fold out their mobile ribs into a pair of flat gliding surfaces. "Flying" snakes also use mobile ribs to flatten their body into an aerodynamic shape, with a back and forth motion much the same as they use on the ground.
Flying fish can glide using enlarged wing-like fins, and have been observed soaring for hundreds of meters. It is thought that this ability was favoured by natural selection because it is an effective means of escape from underwater predators. The longest recorded flight of a flying fish was 45 seconds.
Most birds fly (see bird flight), with some exceptions. The largest birds, the ostrich and the emu, are earthbound flightless birds, as were the now-extinct dodos and the Phorusrhacids, which were the dominant predators of South America in the Cenozoic era. The non-flying penguins have wings adapted for use under water and use the same wing movements for swimming that most other birds use for flight. Most small flightless birds are native to small islands, and lead a lifestyle where flight would offer little advantage.
Among living animals that fly, the wandering albatross has the greatest wingspan, up to about 3.5 m; the great bustard has the greatest weight, topping at about 21 kg.
Most species of insects can fly as adults. Insect flight makes use of either of two basic aerodynamic models: creating a leading edge vortex, found in most insects, and using clap and fling, found in very small insects such as thrips.
Many species of spiders, spider mites and lepidoptera use a technique called ballooning to ride air currents such as thermals, by exposing their gossamer threads which gets lifted by wind and atmospheric electric fields.
Mechanical
Mechanical flight is the use of a machine to fly. These machines include aircraft such as airplanes, gliders, helicopters, autogyros, airships, balloons and ornithopters, as well as spacecraft. Gliders are capable of unpowered flight. Another form of mechanical flight is para-sailing, where a parachute-like object is pulled by a boat. In an airplane, lift is created by the wings; the wings of the airplane are shaped specially for the type of flight desired. There are different types of wings: tapered, semi-tapered, sweptback, rectangular and elliptical. An aircraft wing is sometimes called an airfoil, which is a device that creates lift when air flows across it.
Supersonic
Supersonic flight is flight faster than the speed of sound. Supersonic flight is associated with the formation of shock waves that produce a sonic boom that can be heard from the ground and is frequently startling. Creating this shockwave requires a significant amount of energy; because of this, supersonic flight is generally less efficient than subsonic flight at about 85% of the speed of sound (around Mach 0.85).
Hypersonic
Hypersonic flight is very high speed flight where the heat generated by the compression of the air due to the motion through the air causes chemical changes to the air. Hypersonic flight is achieved primarily by reentering spacecraft such as the Space Shuttle and Soyuz.
Ballistic
Atmospheric
Some things generate little or no lift and move only or mostly under the action of momentum, gravity, air drag and in some cases thrust. This is termed ballistic flight. Examples include balls, arrows, bullets, fireworks etc.
Spaceflight
Essentially an extreme form of ballistic flight, spaceflight is the use of space technology to achieve the flight of spacecraft into and through outer space. Examples include ballistic missiles, orbital spaceflight, etc.
Spaceflight is used in space exploration, and also in commercial activities like space tourism and satellite telecommunications. Additional non-commercial uses of spaceflight include space observatories, reconnaissance satellites and other Earth observation satellites.
A spaceflight typically begins with a rocket launch, which provides the initial thrust to overcome the force of gravity and propels the spacecraft from the surface of the Earth. Once in space, the motion of a spacecraft—both when unpropelled and when under propulsion—is covered by the area of study called astrodynamics. Some spacecraft remain in space indefinitely, some disintegrate during atmospheric reentry, and others reach a planetary or lunar surface for landing or impact.
Solid-state propulsion
In 2018, researchers at Massachusetts Institute of Technology (MIT) managed to fly an aeroplane with no moving parts, powered by an "ionic wind" also known as electroaerodynamic thrust.
History
Many human cultures have built devices that fly, from the earliest projectiles such as stones and spears, to the boomerang in Australia, the hot air Kongming lantern, and kites.
Aviation
George Cayley studied flight scientifically in the first half of the 19th century, and in the second half of the 19th century Otto Lilienthal made over 200 gliding flights and was also one of the first to understand flight scientifically. His work was replicated and extended by the Wright brothers who made gliding flights and finally the first controlled and extended, manned powered flights.
Spaceflight
Spaceflight, particularly human spaceflight became a reality in the 20th century following theoretical and practical breakthroughs by Konstantin Tsiolkovsky and Robert H. Goddard. The first orbital spaceflight was in 1957, and Yuri Gagarin was carried aboard the first crewed orbital spaceflight in 1961.
Physics
There are different approaches to flight. If an object has a lower density than air, then it is buoyant and is able to float in the air without expending energy. A heavier than air craft, known as an aerodyne, includes flighted animals and insects, fixed-wing aircraft and rotorcraft. Because the craft is heavier than air, it must generate lift to overcome its weight. The wind resistance caused by the craft moving through the air is called drag and is overcome by propulsive thrust except in the case of gliding.
Some vehicles also use thrust in the place of lift; for example rockets and Harrier jump jets.
Forces
Forces relevant to flight are
Propulsive thrust (except in gliders)
Lift, created by the reaction to an airflow
Drag, created by aerodynamic friction
Weight, created by gravity
Buoyancy, for lighter than air flight
These forces must be balanced for stable flight to occur.
Thrust
A fixed-wing aircraft generates forward thrust when air is pushed in the direction opposite to flight. This can be done in several ways, including by the spinning blades of a propeller, by a rotating fan pushing air out from the back of a jet engine, or by ejecting hot gases from a rocket engine. The forward thrust is proportional to the mass of the airstream multiplied by the difference in velocity of the airstream. Reverse thrust can be generated to aid braking after landing by reversing the pitch of variable-pitch propeller blades, or using a thrust reverser on a jet engine. Rotary-wing aircraft and thrust-vectoring V/STOL aircraft use engine thrust to support the weight of the aircraft, and vector this thrust fore and aft to control forward speed.
Lift
In the context of an air flow relative to a flying body, the lift force is the component of the aerodynamic force that is perpendicular to the flow direction. Aerodynamic lift results when the wing causes the surrounding air to be deflected - the air then causes a force on the wing in the opposite direction, in accordance with Newton's third law of motion.
Lift is commonly associated with the wing of an aircraft, although lift is also generated by rotors on rotorcraft (which are effectively rotating wings, performing the same function without requiring that the aircraft move forward through the air). While common meanings of the word "lift" suggest that lift opposes gravity, aerodynamic lift can be in any direction. When an aircraft is cruising for example, lift does oppose gravity, but lift occurs at an angle when climbing, descending or banking. On high-speed cars, the lift force is directed downwards (called "down-force") to keep the car stable on the road.
Drag
For a solid object moving through a fluid, the drag is the component of the net aerodynamic or hydrodynamic force acting opposite to the direction of the movement. Therefore, drag opposes the motion of the object, and in a powered vehicle it must be overcome by thrust. The process which creates lift also causes some drag.
Lift-to-drag ratio
Aerodynamic lift is created by the motion of an aerodynamic object (wing) through the air, which due to its shape and angle deflects the air. For sustained straight and level flight, lift must be equal and opposite to weight. In general, long narrow wings are able to deflect a large amount of air at a slow speed, whereas smaller wings need a higher forward speed to deflect an equivalent amount of air and thus generate an equivalent amount of lift. Large cargo aircraft tend to use longer wings with higher angles of attack, whereas supersonic aircraft tend to have short wings and rely heavily on high forward speed to generate lift.
However, this lift (deflection) process inevitably causes a retarding force called drag. Because lift and drag are both aerodynamic forces, the ratio of lift to drag is an indication of the aerodynamic efficiency of the airplane. The lift to drag ratio is the L/D ratio, pronounced "L over D ratio." An airplane has a high L/D ratio if it produces a large amount of lift or a small amount of drag. The lift/drag ratio is determined by dividing the lift coefficient by the drag coefficient, CL/CD.
The lift coefficient CL is the lift L divided by the dynamic pressure times the wing area: CL = L / (0.5 ρ V² A), where ρ is the air density, V the airspeed and A the wing area. The lift coefficient is also affected by the compressibility of the air, which becomes much more significant at higher speeds, so the coefficient does not remain constant as the speed changes. Compressibility is also affected by the shape of the aircraft surfaces.
The drag coefficient CD is likewise the drag D divided by the dynamic pressure times the reference area: CD = D / (0.5 ρ V² A).
Lift-to-drag ratios for practical aircraft vary from about 4:1 for vehicles and birds with relatively short wings, up to 60:1 or more for vehicles with very long wings, such as gliders. A greater angle of attack relative to the forward movement also increases the extent of deflection, and thus generates extra lift. However a greater angle of attack also generates extra drag.
Lift/drag ratio also determines the glide ratio and gliding range. Since the glide ratio is based only on the relationship of the aerodynamics forces acting on the aircraft, aircraft weight will not affect it. The only effect weight has is to vary the time that the aircraft will glide for – a heavier aircraft gliding at a higher airspeed will arrive at the same touchdown point in a shorter time.
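As a numerical illustration of the relations above, the Python sketch below computes CL, CD and the lift-to-drag ratio for one flight condition and then uses the glide ratio to estimate still-air gliding range. All the input figures (forces, speed, wing area, altitude) are invented example values, not data for any particular aircraft.

RHO = 1.225  # sea-level air density, kg per cubic metre

def lift_coefficient(lift_n, speed_ms, area_m2, rho=RHO):
    # CL = L / (0.5 * rho * V^2 * A)
    return lift_n / (0.5 * rho * speed_ms ** 2 * area_m2)

def drag_coefficient(drag_n, speed_ms, area_m2, rho=RHO):
    # CD = D / (0.5 * rho * V^2 * A)
    return drag_n / (0.5 * rho * speed_ms ** 2 * area_m2)

# Hypothetical glider in steady flight, with lift balancing weight.
lift, drag = 3000.0, 75.0        # newtons
speed, wing_area = 25.0, 12.0    # metres per second, square metres

cl = lift_coefficient(lift, speed, wing_area)
cd = drag_coefficient(drag, speed, wing_area)
l_over_d = cl / cd               # identical to lift / drag

altitude = 1000.0                                # metres above the landing field
glide_range_km = altitude * l_over_d / 1000.0    # glide ratio roughly equals L/D in still air

print(f"CL = {cl:.2f}, CD = {cd:.4f}, L/D = {l_over_d:.0f}")
print(f"From {altitude:.0f} m the still-air glide range is about {glide_range_km:.0f} km")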
Buoyancy
The air pressure acting upward against the underside of an object is slightly greater than the pressure pushing down on its top. The buoyancy, in both cases, is equal to the weight of the fluid displaced; Archimedes' principle holds for air just as it does for water.
A cubic meter of air at ordinary atmospheric pressure and room temperature has a mass of about 1.2 kilograms, so its weight is about 12 newtons. Therefore, any 1-cubic-meter object in air is buoyed up with a force of 12 newtons. If the mass of the 1-cubic-meter object is greater than 1.2 kilograms (so that its weight is greater than 12 newtons), it falls to the ground when released. If an object of this size has a mass less than 1.2 kilograms, it rises in the air. Any object that has a mass that is less than the mass of an equal volume of air will rise in air - in other words, any object less dense than air will rise.
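The same arithmetic can be written out explicitly. The Python sketch below reproduces the cubic-metre example from the text and then applies it to an object of assumed volume and mass; the second set of figures is purely illustrative.

AIR_DENSITY = 1.2   # kilograms per cubic metre at ordinary room conditions
G = 9.81            # metres per second squared

def net_upward_force(volume_m3, mass_kg):
    # Buoyant force (weight of the displaced air) minus the object's own weight.
    buoyancy = AIR_DENSITY * volume_m3 * G
    weight = mass_kg * G
    return buoyancy - weight

# The one-cubic-metre example from the text: about 12 newtons of buoyancy.
print(f"Buoyant force on 1 cubic metre: {AIR_DENSITY * 1.0 * G:.1f} N")

# A hypothetical object displacing 500 cubic metres of air with a total mass of 400 kg.
force = net_upward_force(500.0, 400.0)
print(f"Net force: {force:+.0f} N ({'rises' if force > 0 else 'sinks'})")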
Thrust to weight ratio
Thrust-to-weight ratio is, as its name suggests, the ratio of instantaneous thrust to weight (where weight means weight at the Earth's standard gravitational acceleration, 9.80665 m/s²). It is a dimensionless parameter characteristic of rockets and other jet engines and of vehicles propelled by such engines (typically space launch vehicles and jet aircraft).
If the thrust-to-weight ratio is greater than the local gravity strength (expressed in gs), then flight can occur without any forward motion or any aerodynamic lift being required.
If the thrust-to-weight ratio times the lift-to-drag ratio is greater than local gravity then takeoff using aerodynamic lift is possible.
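The two conditions above can be checked directly, as in the Python sketch below; the thrust, mass and lift-to-drag figures are made-up examples chosen only to show the comparison, not specifications of any real aircraft or rocket.

G0 = 9.80665  # standard gravity, metres per second squared

def thrust_to_weight(thrust_n, mass_kg):
    return thrust_n / (mass_kg * G0)

def can_hover(thrust_n, mass_kg):
    # Vertical flight with no aerodynamic lift requires a thrust-to-weight ratio above one.
    return thrust_to_weight(thrust_n, mass_kg) > 1.0

def can_take_off(thrust_n, mass_kg, lift_to_drag):
    # A takeoff using aerodynamic lift requires thrust-to-weight times lift-to-drag above one.
    return thrust_to_weight(thrust_n, mass_kg) * lift_to_drag > 1.0

# Airliner-like example: plenty of wing, modest thrust.
print(can_hover(300_000, 100_000))         # False, thrust-to-weight about 0.31
print(can_take_off(300_000, 100_000, 17))  # True, 0.31 * 17 is about 5.2

# Rocket-like example: thrust-to-weight above one, no wings needed.
print(can_hover(1_500_000, 100_000))       # True, thrust-to-weight about 1.5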
Flight dynamics
Flight dynamics is the science of air and space vehicle orientation and control in three dimensions. The three critical flight dynamics parameters are the angles of rotation in three dimensions about the vehicle's center of mass, known as pitch, roll and yaw (See Tait-Bryan rotations for an explanation).
The control of these dimensions can involve a horizontal stabilizer (i.e. "a tail"), ailerons and other movable aerodynamic devices which control angular stability, i.e. flight attitude (which in turn affects altitude and heading). Wings are often angled slightly upwards; this "positive dihedral angle" gives inherent roll stabilization.
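To make the three angles concrete, the Python sketch below builds a conventional yaw-pitch-roll (Tait-Bryan, Z-Y-X) rotation matrix and applies it to a body-frame direction. This is a generic textbook construction rather than the convention of any particular flight-control system, and the example angles are arbitrary.

import numpy as np

def attitude_matrix(yaw, pitch, roll):
    # Rotation matrix for intrinsic Z-Y-X rotations (yaw, then pitch, then roll), angles in radians.
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])   # yaw about the vertical axis
    ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])   # pitch about the lateral axis
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])   # roll about the longitudinal axis
    return rz @ ry @ rx

# Example attitude: 30 degrees of yaw, 10 degrees of nose-up pitch, 5 degrees of roll.
R = attitude_matrix(np.radians(30), np.radians(10), np.radians(5))
nose_in_body_frame = np.array([1.0, 0.0, 0.0])    # unit vector along the fuselage
print(R @ nose_in_body_frame)                     # direction of the nose in the reference frame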
Energy efficiency
Creating thrust to gain height, and pushing through the air to overcome the drag associated with lift, both take energy. Different objects and creatures capable of flight vary in the efficiency of their muscles or motors and in how well this translates into forward thrust.
Propulsive efficiency determines how much useful propulsive work a vehicle obtains from a unit of fuel.
Range
The range that powered flight articles can achieve is ultimately limited by their drag, as well as how much energy they can store on board and how efficiently they can turn that energy into propulsion.
For powered aircraft the useful energy is determined by their fuel fraction (what percentage of the takeoff weight is fuel) as well as the specific energy of the fuel used.
Power-to-weight ratio
All animals and devices capable of sustained flight need relatively high power-to-weight ratios to be able to generate enough lift and/or thrust to achieve take off.
Takeoff and landing
Vehicles that can fly can have different ways to takeoff and land. Conventional aircraft accelerate along the ground until sufficient lift is generated for takeoff, and reverse the process for landing. Some aircraft can take off at low speed; this is called a short takeoff. Some aircraft such as helicopters and Harrier jump jets can take off and land vertically. Rockets also usually take off and land vertically, but some designs can land horizontally.
Guidance, navigation and control
Navigation
Navigation comprises the systems necessary to calculate current position (e.g. compass, GPS, LORAN, star tracker, inertial measurement unit, and altimeter).
In aircraft, successful air navigation involves piloting an aircraft from place to place without getting lost, breaking the laws applying to aircraft, or endangering the safety of those on board or on the ground.
The techniques used for navigation in the air will depend on whether the aircraft is flying under the visual flight rules (VFR) or the instrument flight rules (IFR). In the latter case, the pilot will navigate exclusively using instruments and radio navigation aids such as beacons, or as directed under radar control by air traffic control. In the VFR case, a pilot will largely navigate using dead reckoning combined with visual observations (known as pilotage), with reference to appropriate maps. This may be supplemented using radio navigation aids.
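Dead reckoning amounts to repeatedly advancing the last known position by speed, heading and elapsed time. The flat-map Python sketch below shows the idea for two short legs; the headings, speed and times are invented, and a real flight plan would also correct for wind and use proper geodetic calculations.

import math

def dead_reckon(x_km, y_km, heading_deg, ground_speed_kts, minutes):
    # Advance a position on a flat map; heading is measured clockwise from north.
    distance_km = ground_speed_kts * 1.852 * (minutes / 60.0)   # knots to kilometres
    heading_rad = math.radians(heading_deg)
    return (x_km + distance_km * math.sin(heading_rad),
            y_km + distance_km * math.cos(heading_rad))

# Two hypothetical legs flown from the starting point.
position = (0.0, 0.0)
for heading, speed, minutes in [(90, 110, 30), (45, 110, 15)]:
    position = dead_reckon(*position, heading, speed, minutes)
    print(f"after leg: {position[0]:.1f} km east, {position[1]:.1f} km north")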
Guidance
A guidance system is a device or group of devices used in the navigation of a ship, aircraft, missile, rocket, satellite, or other moving object. Typically, guidance is responsible for the calculation of the vector (i.e., direction, velocity) toward an objective.
Control
A conventional fixed-wing aircraft flight control system consists of flight control surfaces, the respective cockpit controls, connecting linkages, and the necessary operating mechanisms to control an aircraft's direction in flight. Aircraft engine controls are also considered as flight controls as they change speed.
Traffic
In the case of aircraft, air traffic is controlled by air traffic control systems.
Collision avoidance is the process of controlling spacecraft to try to prevent collisions.
Flight safety
Air safety is a term encompassing the theory, investigation and categorization of flight failures, and the prevention of such failures through regulation, education and training. It can also be applied in the context of campaigns that inform the public as to the safety of air travel.
See also
Aerodynamics
Levitation
Transvection (flying)
Backward flying
References
Notes
Bibliography
External links
History and photographs of early aeroplanes etc.
'Birds in Flight and Aeroplanes' by Evolutionary Biologist and trained Engineer John Maynard-Smith Freeview video provided by the Vega Science Trust.
Aerodynamics
Sky | Flight | [
"Physics",
"Chemistry",
"Engineering"
] | 4,272 | [
"Physical phenomena",
"Aerodynamics",
"Flight",
"Motion (physics)",
"Aerospace engineering",
"Fluid dynamics"
] |
159,851 | https://en.wikipedia.org/wiki/Sclerophyll | Sclerophyll is a type of vegetation that is adapted to long periods of dryness and heat. The plants feature hard leaves, short internodes (the distance between leaves along the stem) and leaf orientation which is parallel or oblique to direct sunlight. The word comes from the Greek sklēros (hard) and phyllon (leaf). The term was coined by A.F.W. Schimper in 1898 (translated in 1903), originally as a synonym of xeromorph, but the two words were later differentiated.
Sclerophyllous plants occur in many parts of the world, but are most typical of areas with low rainfall or seasonal droughts, such as Australia, Africa, and western North and South America. They are prominent throughout Australia, parts of Argentina, the Cerrado biogeographic region of Bolivia, Paraguay and Brazil, and in the Mediterranean biomes that cover the Mediterranean Basin, California, Chile, and the Cape Province of South Africa.
In the Mediterranean basin, holm oak, cork oak and olives are typical hardwood trees. In addition, there are several species of pine under the trees in the vegetation zone. The shrub layer contains numerous herbs such as rosemary, thyme and lavender. In relation to the potential natural vegetation, around 2% of the Earth's land surface is covered by sclerophyll woodlands, and a total of 10% of all plant species on Earth live there.
Description
Sclerophyll woody plants are characterized by their relatively small, stiff, leathery and long-lasting leaves. The sclerophyll vegetation is the result of an adaptation of the flora to the summer dry period of a Mediterranean-type climate.
Plant species with this type of adaptation tend to be evergreen with great longevity, slow growth and with no loss of leaves during the unfavorable season. As a result, the thickets that make up these ecosystems are of the persistent evergreen type, in addition to the predominance of plants, even herbaceous ones, with "hard" leaves, which are covered by a thick leathery layer called the cuticle, that prevents water loss during the dry season. The aerial and underground structures of these plants are modified to make up for water shortages that may affect their survival.
The name sclerophyll derives from the highly developed sclerenchyma from the plant, which is responsible for the hardness or stiffness of the leaves. This structure of the leaves inhibits transpiration and thus prevents major water losses during the dry season. Most of the plant species in the sclerophyll zone are not only insensitive to summer drought, they have also used various strategies to adapt to frequent wildfires, heavy rainfall and nutrient deficiencies.
Ecology
Typical sclerophyllous trees of the Palearctic floral region include the holm oak (Quercus ilex), myrtle (Myrtus communis), strawberry tree (Arbutus unedo), wild olive (Olea europaea), laurel (Laurus nobilis), mock privet (Phillyrea latifolia) and the Italian buckthorn (Rhamnus alaternus), among others.
In central and southern California, the coastal hills are covered in sclerophyll vegetation known as chaparral. The flora of this ecoregion also includes the tree species scrub oak (Quercus dumosa), California buckeye (Aesculus californica), San Gabriel Mountains liveforever (Dudleya densiflora), Catalina mahogany (Cercocarpus traskiae), and the threatened jewelflower (Streptanthus albidus ssp. peramoenus).
In South Africa, in the Cape region, there are Mediterranean open forests known as fynbos. The abundance of endemics is so extraordinary (68% of the 8600 vascular plant species in the area) that the South African sclerophyll area, the cape flora, forms the smallest of the six flora kingdoms on earth. Plants include Elegia, Thamnochortus, and Willdenowia and proteas such as king protea (Protea cynaroides) and blushing bride (Serruria florida).
In most of Australia, sclerophyll vegetation such as eucalyptus trees, melaleucas, banksias, callistemons and grevilleas dominate the mallee and woodland areas of its cities, including those lacking a Mediterranean climate, such as Sydney, Melbourne, Hobart and Brisbane.
In Chile, south of the desert areas, there is evergreen bushland called matorral. Typical species include Litre (Lithraea venenosa), Quillay or Soapbark Tree (Quillaja saponaria), and bromeliads of genus Puya.
Climate
The sclerophyll regions are located in the outer subtropics bordering the temperate zone (also known as the warm-temperate zone). Accordingly, the annual average temperatures are relatively high at ; An average of over is reached for at least four months, eight to twelve months it is over and no month is below on average. Frost and snow occur only occasionally and the growing season lasts longer than 150 days and is in the winter half-year. The lower limit of the moderate annual precipitation is (semi-arid climate) and the upper limit .
Generally, the summers are dry and hot, with a dry season of at least two to three months and at most about seven months, while the winters are rainy and cool. However, not all regions with sclerophyll vegetation feature the classic Mediterranean climate; parts of eastern Italy, eastern Australia and eastern South Africa that carry sclerophyll woodlands tend to have uniform or even summer-dominant rainfall, thereby falling under the humid subtropical climate zone (Cfa/Cwa). Furthermore, other areas with sclerophyll flora grade into the oceanic climate (Cfb), particularly the eastern parts of the Eastern Cape province in South Africa, and Tasmania, Victoria and southern New South Wales in Australia.
Soils
Sclerophyll plants are also found in areas with nutrient-poor and acidic soils, and soils with heavy concentrations of aluminum and other metals. Sclerophyll leaves transpire less and have a lower uptake than malacophyllous or laurophyllous leaves. These lower transpiration rates may reduce the uptake of toxic ions and better provide for C-carboxylation under nutrient-poor conditions, particularly low availability of mineral nitrogen and phosphate. Sclerophyllous plants are found in tropical heath forests, which grow on nutrient-poor sandy soils in humid regions: on quartz sand in the Rio Orinoco and the Rio Negro basins of northern South America, in the kerangas forests of Borneo and on the Malay Peninsula, in coastal sandy areas along the Gulf of Guinea in Gabon, Cameroon, and Côte d'Ivoire, and in eastern Australia.
Sclerophylly's advantages in nutrient-poor conditions may be another factor in the prevalence of sclerophyllous plants in nutrient-poor areas in drier-climate regions, like much of Australia and the Cerrado of Brazil.
Distribution
The zone of sclerophyll vegetation lies in the border area between the subtropics and the temperate zone, approximately between the 30th and 40th degrees of latitude (in the northern hemisphere up to about the 45th degree). It is concentrated on the coastal western sides of the continents, but can nonetheless be typical of any region of a continent with scarce annual precipitation or frequent seasonal droughts and poor, heavily leached soils.
The sclerophyll zone often merges into temperate deciduous forests towards the poles, on the coasts also into temperate rainforests, and towards the equator into hot semi-deserts or deserts. The mediterranean-climate areas, which have a very high biodiversity, are under great pressure from human populations; this has been especially true of the Mediterranean Basin since ancient times. Through overexploitation (logging, grazing, agricultural use) and frequent fires caused by people, the original forest vegetation has been transformed. In extreme cases, the hard-leaf vegetation disappears completely and is replaced by open rock heaths.
Some sclerophyll areas are closer to the equator than the Mediterranean zone—for example, the interior of Madagascar, the dry half of New Caledonia, the lower edge areas of the Madrean pine-oak woodlands of the Mexican highlands between 800 and 1800/2000 m or around 2000 m high plateaus of the Asir Mountains on the western edge of the Arabian Peninsula.
Land use
While the winter-rain areas of the Americas, South Africa and Australia, with their unusually large variety of food plants, remained ideal gathering grounds for hunter-gatherers until European colonization, agriculture and cattle breeding have spread in the Mediterranean area since the Neolithic and have permanently changed the face of the landscape. In the sclerophyll regions near the coast, permanent crops such as olive and wine cultivation established themselves; however, the landscape forms that characterize the degraded scrub and shrub heaths, the maquis and garrigue, are predominantly a result of grazing (especially by goats).
In the course of the last millennia, the original vegetation in almost all areas of this vegetation zone has been greatly changed by human influence. Where the plants have not been replaced by vineyards and olive groves, the maquis was the predominant form of vegetation around the Mediterranean. In many places the maquis has been degraded to a low shrub heath, the garrigue. Many plant species rich in aromatic oils belong to both vegetation communities. The diversity of the original sclerophyll vegetation in the world is high to extremely high (3000–5000 species per ha).
Australian bush
Most areas of the Australian continent able to support woody plants are occupied by sclerophyll communities as forests, savannas, or heathlands. Common plants include the Proteaceae (grevilleas, banksias and hakeas), tea-trees, acacias, boronias, and eucalypts.
The most common sclerophyll communities in Australia are savannas dominated by grasses with an overstorey of eucalypts and acacias. Acacia (particularly mulga) shrublands also cover extensive areas. All the dominant overstorey acacia species and a majority of the understorey acacias have a scleromorphic adaptation in which the leaves have been reduced to phyllodes consisting entirely of the petiole.
Many plants of the sclerophyllous woodlands and shrublands also produce leaves unpalatable to herbivores by the inclusion of toxic and indigestible compounds which assure survival of these long-lived leaves. This trait is particularly noticeable in the eucalypt and Melaleuca species which possess oil glands within their leaves that produce a pungent volatile oil that makes them unpalatable to most browsers. These traits make the majority of woody plants in these woodlands largely unpalatable to domestic livestock. It is therefore important from a grazing perspective that these woodlands support a more or less continuous layer of herbaceous ground cover dominated by grasses.
Sclerophyll forests cover a much smaller area of the continent, being restricted to relatively high rainfall locations. They have a eucalyptus overstorey (10 to 30 metres) with the understorey also being hard-leaved. Dry sclerophyll forests are the most common forest type on the continent, and although it may seem barren, dry sclerophyll forest is highly diverse. For example, a study of sclerophyll vegetation in Seal Creek, Victoria, found 138 species.
Even less extensive are wet sclerophyll forests. They have a taller eucalyptus overstorey than dry sclerophyll forests (typically mountain ash, alpine ash, rose gum, karri, messmate stringybark, or manna gum) and a soft-leaved, fairly dense understorey (tree ferns are common). They require ample rainfall—at least 1000 mm (40 inches).
Evolution
Sclerophyllous plants are long-established components of their environments rather than newcomers. By the time of European settlement, sclerophyll forest accounted for the vast bulk of Australia's forested areas.
Most of the wooded parts of present-day Australia have become sclerophyll dominated as a result of the extreme age of the continent combined with Aboriginal fire use. Deep weathering of the crust over many millions of years leached chemicals out of the rock, leaving Australian soils deficient in nutrients, particularly phosphorus. Such nutrient deficient soils support non-sclerophyllous plant communities elsewhere in the world and did so over most of Australia prior to European arrival. However such deficient soils cannot support the nutrient losses associated with frequent fires and are rapidly replaced with sclerophyllous species under traditional Aboriginal burning regimens. With the cessation of traditional burning non-sclerophyllous species have re-colonized sclerophyll habitat in many parts of Australia.
The presence of toxic compounds combined with a high carbon : nitrogen ratio make the leaves and branches of scleromorphic species long-lived in the litter, and can lead to a large build-up of litter in woodlands. The toxic compounds of many species, notably Eucalyptus species, are volatile and flammable and the presence of large amounts of flammable litter, coupled with an herbaceous understorey, encourages fire.
All the Australian sclerophyllous communities are liable to be burnt with varying frequencies and many of the woody plants of these woodlands have developed adaptations to survive and minimise the effects of fire.
Sclerophyllous plants generally resist dry conditions well, making them successful in areas of seasonally variable rainfall. In Australia, however, they evolved in response to the low level of phosphorus in the soil—indeed, many native Australian plants cannot tolerate higher levels of phosphorus and will die if fertilised incorrectly. The leaves are hard due to lignin, which prevents wilting and allows plants to grow, even when there is not enough phosphorus for substantial new cell growth.
Regions
These are the biomes or ecoregions in the world that feature an abundance of, or are known for having, sclerophyll vegetation:
Cumberland Plain Woodland
Sydney Sandstone Ridgetop Woodland
Eastern Suburbs Banksia Scrub
Tasmanian dry sclerophyll forests
Aegean and Western Turkey sclerophyllous and mixed forests
California chaparral and woodlands
California coastal sage and chaparral
Chilean Matorral
Mallee Woodlands and Shrublands
Italian sclerophyllous and semi-deciduous forests
Eastern Mediterranean conifer–sclerophyllous–broadleaf forests
Southwest Iberian Mediterranean sclerophyllous and mixed forests
Tyrrhenian–Adriatic sclerophyllous and mixed forests
Canary Islands dry woodlands and forests
Mediterranean acacia–argania dry woodlands
Mediterranean dry woodlands and steppe
Southeastern Iberian shrubs and woodlands
Cyprus Mediterranean forests
Crete Mediterranean forests
Cape Floristic Region
Southern Anatolian montane conifer and deciduous forests
Albany thickets
Northwest Iberian montane forests
See also
Mediterranean forests, woodlands, and scrub
Chaparral
Fynbos
Maquis shrubland
Garrigue
Kwongan
Matorral
Barren vegetation
References
Mediterranean forests, woodlands, and scrub
California chaparral and woodlands
Flora of the Chilean Matorral
Mallee Woodlands and Shrublands
Ecology
Sclerophyll forests | Sclerophyll | [
"Biology"
] | 3,191 | [
"Ecology"
] |
160,125 | https://en.wikipedia.org/wiki/Mitochondrial%20disease | Mitochondrial disease is a group of disorders caused by mitochondrial dysfunction. Mitochondria are the organelles that generate energy for the cell and are found in every cell of the human body except red blood cells. They convert the energy of food molecules into the ATP that powers most cell functions.
Mitochondrial diseases take on unique characteristics both because of the way the diseases are often inherited and because mitochondria are so critical to cell function. A subclass of these diseases that have neuromuscular symptoms are known as mitochondrial myopathies.
Types
Mitochondrial disease can manifest in many different ways whether in children or adults. Examples of mitochondrial diseases include:
Mitochondrial myopathy
Maternally inherited diabetes mellitus and deafness (MIDD)
While diabetes mellitus and deafness can be found together for other reasons, at an early age this combination can be due to mitochondrial disease, as may occur in Kearns–Sayre syndrome and Pearson syndrome
Leber's hereditary optic neuropathy (LHON)
LHON is an eye disorder characterized by progressive loss of central vision due to degeneration of the optic nerves and retina (apparently affecting between 1 in 30,000 and 1 in 50,000 people); visual loss typically begins in young adulthood
Leigh syndrome, subacute necrotizing encephalomyelopathy
after normal development the disease usually begins late in the first year of life, although onset may occur in adulthood
a rapid decline in function occurs and is marked by seizures, altered states of consciousness, dementia, ventilatory failure
Neuropathy, ataxia, retinitis pigmentosa, and ptosis (NARP)
progressive symptoms as described in the acronym
dementia
Myoneurogenic gastrointestinal encephalopathy (MNGIE)
gastrointestinal pseudo-obstruction
neuropathy
MERRF syndrome
progressive myoclonic epilepsy
"Ragged Red Fibers" are clumps of diseased mitochondria that accumulate in the subsarcolemmal region of the muscle fiber and appear when muscle is stained with modified Gömöri trichrome stain
short stature
hearing loss
lactic acidosis
exercise intolerance
MELAS syndrome, mitochondrial encephalopathy, lactic acidosis, and stroke-like episodes
Mitochondrial DNA depletion syndrome
Conditions such as Friedreich's ataxia can affect the mitochondria but are not associated with mitochondrial proteins.
Presentation
Associated conditions
Acquired conditions in which mitochondrial dysfunction has been involved include:
ALS
Alzheimer's disease,
Bipolar disorder, schizophrenia, aging and senescence, anxiety disorders
Cancer
Cardiovascular disease
Diabetes
Huntington's disease
Long Covid
ME/CFS
Parkinson's disease
Sarcopenia
The effect of each mutation is modulated by other genome variants; a mutation that causes liver disease in one individual may cause a brain disorder in another. The severity of the specific defect may also be great or small. Some defects cause exercise intolerance. Defects often severely affect the operation of the mitochondria in multiple tissues, leading to multi-system diseases.
It has also been reported that drug-tolerant cancer cells have an increased number and size of mitochondria, suggesting an increase in mitochondrial biogenesis. A recent study in Nature Nanotechnology reported that cancer cells can hijack mitochondria from immune cells via physical tunneling nanotubes.
As a rule, mitochondrial diseases are worse when the defective mitochondria are present in the muscles, cerebrum, or nerves, because these cells use more energy than most other cells in the body.
Although mitochondrial diseases vary greatly in presentation from person to person, several major clinical categories of these conditions have been defined, based on the most common phenotypic features, symptoms, and signs associated with the particular mutations that tend to cause them.
An outstanding question and area of research is whether ATP depletion or reactive oxygen species are in fact responsible for the observed phenotypic consequences.
Cerebellar atrophy or hypoplasia has sometimes been reported to be associated.
Causes
Mitochondrial disorders may be caused by mutations (acquired or inherited), in mitochondrial DNA (mtDNA), or in nuclear genes that code for mitochondrial components. They may also be the result of acquired mitochondrial dysfunction due to adverse effects of drugs, infections, or other environmental causes.
Nuclear DNA has two copies per cell (except for sperm and egg cells), one copy being inherited from the father and the other from the mother. Mitochondrial DNA, however, is inherited from the mother only (with some exceptions) and each mitochondrion typically contains between 2 and 10 mtDNA copies. During cell division the mitochondria segregate randomly between the two new cells. Those mitochondria make more copies, normally reaching 500 mitochondria per cell. As mtDNA is copied when mitochondria proliferate, it can accumulate random mutations, so that mutated and wild-type mtDNA copies coexist within the same cell, a condition called heteroplasmy. If only a few of the mtDNA copies inherited from the mother are defective, mitochondrial division may cause most of the defective copies to end up in just one of the new mitochondria (for more detailed inheritance patterns, see human mitochondrial genetics). Mitochondrial disease may become clinically apparent once the number of affected mitochondria reaches a certain level; this phenomenon is called "threshold expression".
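The random segregation and threshold behaviour described above can be illustrated with a toy Wright–Fisher-style simulation. This is only an illustrative sketch, not a model from the literature: the pool size, threshold, starting mutant fraction and number of divisions below are all hypothetical choices.

```python
# Toy illustration of heteroplasmy drift (all parameters hypothetical).
# Each "division" resamples a small segregating pool of mtDNA copies, so the
# mutant fraction drifts; some lineages eventually cross a disease threshold.
import random

POOL = 20            # hypothetical segregating units per division (bottleneck)
START_MUTANT = 0.10  # initial mutant fraction
THRESHOLD = 0.60     # hypothetical "threshold expression" level
DIVISIONS = 30
LINEAGES = 10_000

above = 0
for _ in range(LINEAGES):
    f = START_MUTANT
    for _ in range(DIVISIONS):
        mutant = sum(random.random() < f for _ in range(POOL))  # binomial sampling
        f = mutant / POOL
    if f >= THRESHOLD:
        above += 1

print(f"{above / LINEAGES:.1%} of simulated lineages exceed the threshold")
```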
Mitochondria possess many of the same DNA repair pathways as nuclei do—but not all of them; therefore, mutations occur more frequently in mitochondrial DNA than in nuclear DNA (see Mutation rate). This means that mitochondrial DNA disorders may occur spontaneously and relatively often. Defects in enzymes that control mitochondrial DNA replication (all of which are encoded for by genes in the nuclear DNA) may also cause mitochondrial DNA mutations.
Most mitochondrial function and biogenesis is controlled by nuclear DNA. Human mitochondrial DNA encodes 13 proteins of the respiratory chain, while most of the estimated 1,500 proteins and components targeted to mitochondria are nuclear-encoded. Defects in nuclear-encoded mitochondrial genes are associated with hundreds of clinical disease phenotypes including anemia, dementia, hypertension, lymphoma, retinopathy, seizures, and neurodevelopmental disorders.
A study by Yale University researchers (published in the February 12, 2004, issue of the New England Journal of Medicine) explored the role of mitochondria in insulin resistance among the offspring of patients with type 2 diabetes.
Other studies have shown that the mechanism may involve the interruption of the mitochondrial signaling process in body cells (intramyocellular lipids). A study conducted at the Pennington Biomedical Research Center in Baton Rouge, Louisiana showed that this, in turn, partially disables the genes that produce mitochondria.
Mechanisms
The effective overall energy unit for the available body energy is referred to as the daily glycogen generation capacity, and is used to compare the mitochondrial output of affected or chronically glycogen-depleted individuals to healthy individuals.
The glycogen generation capacity is entirely dependent on, and determined by, the operating levels of the mitochondria in all of the cells of the body; however, the relation between the energy generated by the mitochondria and the glycogen capacity is very loose and is mediated by many biochemical pathways. The energy output of fully healthy mitochondrial function can in principle be estimated theoretically, but such an estimate is not straightforward, since most energy is consumed by the brain and is not easily measurable.
Diagnosis
Mitochondrial diseases are usually detected by analysing muscle samples, where the presence of these organelles is higher. The most common tests for the detection of these diseases are:
Southern blot to detect large deletions or duplications
Polymerase chain reaction and specific mutation testing
Sequencing
Treatments
Although research is ongoing, treatment options are currently limited; vitamins are frequently prescribed, though the evidence for their effectiveness is limited.
Pyruvate was proposed in 2007 as a treatment option. N-acetyl cysteine reverses many models of mitochondrial dysfunction.
Mood disorders
In the case of mood disorders, specifically bipolar disorder, it is hypothesized that N-acetyl-cysteine (NAC), acetyl-L-carnitine (ALCAR), S-adenosylmethionine (SAMe), coenzyme Q10 (CoQ10), alpha-lipoic acid (ALA), creatine monohydrate (CM), and melatonin could be potential treatment options.
Gene therapy prior to conception
Mitochondrial replacement therapy (MRT), where the nuclear DNA is transferred to another healthy egg cell leaving the defective mitochondrial DNA behind, is an IVF treatment procedure. Using a similar pronuclear transfer technique, researchers at Newcastle University led by Douglass Turnbull successfully transplanted healthy DNA in human eggs from women with mitochondrial disease into the eggs of women donors who were unaffected. In such cases, ethical questions have been raised regarding biological motherhood, since the child receives genes and gene regulatory molecules from two different women. Using genetic engineering in attempts to produce babies free of mitochondrial disease is controversial in some circles and raises important ethical issues. In 2016, a male baby was born in Mexico via MRT to a mother with Leigh syndrome.
In September 2012 a public consultation was launched in the UK to explore the ethical issues involved. Human genetic engineering was used on a small scale to allow infertile women with genetic defects in their mitochondria to have children.
In June 2013, the United Kingdom government agreed to develop legislation that would legalize the 'three-person IVF' procedure as a treatment to fix or eliminate mitochondrial diseases that are passed on from mother to child. The procedure could be offered from 29 October 2015 once regulations had been established.
Embryonic mitochondrial transplant and protofection have been proposed as a possible treatment for inherited mitochondrial disease, and allotopic expression of mitochondrial proteins as a radical treatment for mtDNA mutation load.
In June 2018, the Australian Senate's Community Affairs References Committee recommended a move towards legalising mitochondrial replacement therapy (MRT). Research and clinical applications of MRT were overseen by laws made by federal and state governments. State laws were, for the most part, consistent with federal law. In all states, legislation prohibited the use of MRT techniques in the clinic, and except for Western Australia, research on a limited range of MRT was permissible up to day 14 of embryo development, subject to a license being granted. In 2010, the Hon. Mark Butler MP, then Federal Minister for Mental Health and Ageing, had appointed an independent committee to review the two relevant acts: the Prohibition of Human Cloning for Reproduction Act 2002 and the Research Involving Human Embryos Act 2002. The committee's report, released in July 2011, recommended the existing legislation remain unchanged.
Currently, human clinical trials are underway at GenSight Biologics (ClinicalTrials.gov # NCT02064569) and the University of Miami (ClinicalTrials.gov # NCT02161380) to examine the safety and efficacy of mitochondrial gene therapy in Leber's hereditary optic neuropathy.
Epidemiology
About 1 in 4,000 children in the United States will develop mitochondrial disease by the age of 10 years. Up to 4,000 children per year in the US are born with a type of mitochondrial disease. Because mitochondrial disorders contain many variations and subsets, some particular mitochondrial disorders are very rare.
The average number of births per year among women at risk for transmitting mtDNA disease is estimated at approximately 150 in the United Kingdom and 800 in the United States.
History
The first pathogenic mutation in mitochondrial DNA was identified in 1988; from that time to 2016, around 275 other disease-causing mutations were identified.
Notable cases
Notable people with mitochondrial disease include:
Mattie Stepanek, a poet, peace advocate, and motivational speaker who had dysautonomic mitochondrial myopathy, and who died at age 13.
Rocco Baldelli, a coach and former center fielder in Major League Baseball who had to retire from active play at age 29 due to mitochondrial channelopathy.
Charlie Gard, a British boy who had mitochondrial DNA depletion syndrome; decisions about his care were taken to various law courts.
Charles Darwin, a nineteenth century naturalist who suffered from a disabling illness, is speculated to have had MELAS syndrome.
References
External links
International Mito Patients (IMP)
Molecular biology
Mitochondrial genetics | Mitochondrial disease | [
"Chemistry",
"Biology"
] | 2,548 | [
"Biochemistry",
"Molecular biology"
] |
160,815 | https://en.wikipedia.org/wiki/CAS%20Registry%20Number | A CAS Registry Number (also referred to as CAS RN or informally CAS Number) is a unique identification number, assigned by the Chemical Abstracts Service (CAS) in the US to every chemical substance described in the open scientific literature, in order to index the substance in the CAS Registry. This registry includes all substances described since 1957, plus some substances from as far back as the early 1800s; it is a chemical database that includes organic and inorganic compounds, minerals, isotopes, alloys, mixtures, and nonstructurable materials (UVCBs, substances of unknown or variable composition, complex reaction products, or biological origin). CAS RNs are generally serial numbers (with a check digit), so they do not contain any information about the structures themselves the way SMILES and InChI strings do.
The CAS Registry is an authoritative collection of disclosed chemical substance information. It identifies more than 204 million unique organic and inorganic substances and 69 million protein and DNA sequences, plus additional information about each substance. It is updated with around 15,000 additional new substances daily. A collection of almost 500,000 CAS Registry Numbers is made available under a CC BY-NC license through CAS Common Chemistry.
History and use
Historically, chemicals have been identified by a wide variety of synonyms. One of the biggest challenges in the early development of substance indexing, a task undertaken by the Chemical Abstracts Service, was in identifying if a substance in literature was new or if it had been previously discovered. Well-known chemicals may additionally be known via multiple generic, historical, commercial, and/or (black)-market names, and even systematic nomenclature based on structure alone was not universally useful. An algorithm was developed to translate the structural formula of a chemical into a computer-searchable table, which provided a basis for the service that listed each chemical with its CAS Registry Number, the CAS Chemical Registry System, which became operational in 1965.
CAS Registry Numbers (CAS RN) are simple and regular, convenient for database searches. They offer a reliable, common and international link to every specific substance across the various nomenclatures and disciplines used by branches of science, industry, and regulatory bodies. Almost all molecule databases today allow searching by CAS Registry Number, and it is used as a global standard.
Format
A CAS Registry Number has no inherent meaning, but is assigned in sequential, increasing order when the substance is identified by CAS scientists for inclusion in the CAS Registry database.
A CAS RN is separated by hyphens into three parts: the first consisting of two to seven digits, the second of two digits, and the third of a single digit serving as a check digit. This format gives CAS a maximum capacity of 1,000,000,000 unique numbers.
The check digit is found by taking the last digit times 1, the preceding digit times 2, the preceding digit times 3 etc., adding all these up and computing the sum modulo 10. For example, the CAS number of water is 7732-18-5: the checksum 5 is calculated as (8×1 + 1×2 + 2×3 + 3×4 + 7×5 + 7×6) = 105; 105 mod 10 = 5.
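The weighting rule just described is easy to implement. The following sketch is illustrative only; the function name and interface are not part of any CAS tooling.

```python
# A minimal sketch: computing the check digit of a CAS Registry Number from its
# first two segments, following the weighting rule described above.
def cas_check_digit(registry: str) -> int:
    """Return the check digit for a CAS RN given as 'NNNNNNN-NN' (no check digit)."""
    digits = registry.replace("-", "")
    # Rightmost digit gets weight 1, the next one weight 2, and so on.
    total = sum(int(d) * w for w, d in enumerate(reversed(digits), start=1))
    return total % 10

# Water is 7732-18-5: the computed check digit should be 5.
assert cas_check_digit("7732-18") == 5
print(cas_check_digit("7732-18"))  # -> 5
```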
Granularity
Stereoisomers and racemic mixtures are assigned discrete CAS Registry Numbers: L-epinephrine has 51-43-4, D-epinephrine has 150-05-0, and racemic DL-epinephrine has 329-65-7.
Different phases do not receive different CAS RNs (liquid water and ice both have 7732-18-5), but different crystal structures do (carbon in general is 7440-44-0, graphite is 7782-42-5 and diamond is 7782-40-3)
Commonly encountered mixtures of known or unknown composition may receive a CAS RN; examples are Leishman stain (12627-53-1) and mustard oil (8007-40-7).
Some chemical elements are discerned by their oxidation state, e.g. the element chromium has 7440-47-3, the trivalent Cr(III) has 16065-83-1 and the hexavalent Cr(VI) species have 18540-29-9.
Occasionally whole classes of molecules receive a single CAS RN: the class of enzymes known as alcohol dehydrogenases has 9031-72-5.
Search engines
CHEMINDEX Search via Canadian Centre for Occupational Health and Safety
ChemIDplus Advanced via United States National Library of Medicine
Common Chemistry via Australian Inventory of Chemical Substances
European chemical Substances Information System via the website of Royal Society of Chemistry
HSNO Chemical Classification Information Database via Environmental Risk Management Authority
Search Tool of Australian Inventory of Chemical Substances
USEPA CompTox Chemicals Dashboard
See also
Academic publishing
Beilstein Registry Number
Chemical file format
Dictionary of chemical formulas
EC# (EINECS and ELINCS, European Community)
EC number (Enzyme Commission)
International Union of Pure and Applied Chemistry
List of CAS numbers by chemical compound
MDL number
PubChem
Registration authority
UN number
References
External links
CAS registry description, by Chemical Abstracts Service
To find the CAS number of a compound given its name, formula or structure, the following free resources can be used:
CAS Common Chemistry
NIST Chemistry WebBook
NCI/CADD Chemical Identifier Resolver
ChemSub Online (Multilingual chemical names)
NIOSH Pocket Guide to Chemical Hazards, index of CAS numbers
Chemical numbering schemes
American Chemical Society
Unique identifiers | CAS Registry Number | [
"Chemistry",
"Mathematics"
] | 1,119 | [
"Mathematical objects",
"American Chemical Society",
"Chemical numbering schemes",
"Numbers"
] |
161,244 | https://en.wikipedia.org/wiki/Orthocenter | The orthocenter of a triangle, usually denoted by H, is the point where the three (possibly extended) altitudes intersect. The orthocenter lies inside the triangle if and only if the triangle is acute. For a right triangle, the orthocenter coincides with the vertex at the right angle.
Formulation
Let A, B, C denote the vertices and also the angles of the triangle, and let a, b, c be the side lengths opposite them. The orthocenter has trilinear coordinates
sec A : sec B : sec C,
and barycentric coordinates
tan A : tan B : tan C.
Since barycentric coordinates are all positive for a point in a triangle's interior but at least one is negative for a point in the exterior, and two of the barycentric coordinates are zero for a vertex point, the barycentric coordinates given for the orthocenter show that the orthocenter is in an acute triangle's interior, on the right-angled vertex of a right triangle, and exterior to an obtuse triangle.
In the complex plane, let the points represent the numbers and assume that the circumcenter of triangle is located at the origin of the plane. Then, the complex number
is represented by the point , namely the altitude of triangle . From this, the following characterizations of the orthocenter by means of free vectors can be established straightforwardly:
The first of the previous vector identities is also known as the problem of Sylvester, proposed by James Joseph Sylvester.
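Numerically, the vector identity behind Sylvester's problem gives a direct way to compute the orthocenter: with circumcenter O, OH = OA + OB + OC, i.e. H = A + B + C − 2·O for an arbitrary origin. The sketch below uses illustrative coordinates; the helper function is not from the article.

```python
# Computing the orthocenter from the relation H = A + B + C - 2*O
# (equivalently OH = OA + OB + OC when the circumcenter O is at the origin).
# Coordinates and the helper function are illustrative, not from the article.
import numpy as np

def circumcenter(A, B, C):
    """Circumcenter of triangle ABC in Cartesian coordinates."""
    ax, ay = A; bx, by = B; cx, cy = C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay) + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx) + (cx**2 + cy**2) * (bx - ax)) / d
    return np.array([ux, uy])

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
O = circumcenter(A, B, C)
H = A + B + C - 2 * O

# Sanity check: H lies on the altitude from A, so (H - A) is perpendicular to (C - B).
assert abs(np.dot(H - A, C - B)) < 1e-9
print(H)   # -> [1. 1.] for these vertices
```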
Properties
Let denote the feet of the altitudes from respectively. Then:
The product of the lengths of the segments that the orthocenter divides an altitude into is the same for all three altitudes:
The circle centered at having radius the square root of this constant is the triangle's polar circle.
The sum of the ratios on the three altitudes of the distance of the orthocenter from the base to the length of the altitude is 1: (This property and the next one are applications of a more general property of any interior point and the three cevians through it.)
The sum of the ratios on the three altitudes of the distance of the orthocenter from the vertex to the length of the altitude is 2:
The isogonal conjugate of the orthocenter is the circumcenter of the triangle.
The isotomic conjugate of the orthocenter is the symmedian point of the anticomplementary triangle.
Four points in the plane, such that one of them is the orthocenter of the triangle formed by the other three, is called an orthocentric system or orthocentric quadrangle.
Orthocentric system
Relation with circles and conics
Denote the circumradius of the triangle by R. Then
In addition, denoting r as the radius of the triangle's incircle, rA, rB, rC as the radii of its excircles, and R again as the radius of its circumcircle, the following relations hold regarding the distances of the orthocenter from the vertices:
If any altitude, for example, , is extended to intersect the circumcircle at , so that is a chord of the circumcircle, then the foot bisects segment :
The directrices of all parabolas that are externally tangent to one side of a triangle and tangent to the extensions of the other sides pass through the orthocenter.
A circumconic passing through the orthocenter of a triangle is a rectangular hyperbola.
Relation to other centers, the nine-point circle
The orthocenter H, the centroid G, the circumcenter O, and the center N of the nine-point circle all lie on a single line, known as the Euler line. The center of the nine-point circle lies at the midpoint of the Euler line, between the orthocenter and the circumcenter, and the distance between the centroid and the circumcenter is half of that between the centroid and the orthocenter: GO = ½·GH.
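A quick numerical check of these Euler-line relations, using the same construction as in the earlier sketch with illustrative coordinates (the circumcenter helper is repeated so the snippet is self-contained):

```python
# Numerical check of the Euler line relations (illustrative coordinates).
import numpy as np

def circumcenter(A, B, C):
    ax, ay = A; bx, by = B; cx, cy = C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay) + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx) + (cx**2 + cy**2) * (bx - ax)) / d
    return np.array([ux, uy])

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
O = circumcenter(A, B, C)
G = (A + B + C) / 3        # centroid
H = A + B + C - 2 * O      # orthocenter (from OH = OA + OB + OC)
N = (O + H) / 2            # nine-point centre, midpoint of OH

# O, G, H are collinear (zero 2-D "cross product") and GO = GH / 2.
u, v = H - O, G - O
assert abs(u[0] * v[1] - u[1] * v[0]) < 1e-9
assert np.isclose(np.linalg.norm(G - O), np.linalg.norm(H - G) / 2)
print(O, G, H, N)
```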
The orthocenter is closer to the incenter than it is to the centroid, and the orthocenter is farther than the incenter is from the centroid:
In terms of the sides , , , inradius and circumradius ,
Orthic triangle
If the triangle is oblique (does not contain a right-angle), the pedal triangle of the orthocenter of the original triangle is called the orthic triangle or altitude triangle. That is, the feet of the altitudes of an oblique triangle form the orthic triangle, . Also, the incenter (the center of the inscribed circle) of the orthic triangle is the orthocenter of the original triangle .
Trilinear coordinates for the vertices of the orthic triangle are given by
The extended sides of the orthic triangle meet the opposite extended sides of its reference triangle at three collinear points.
In any acute triangle, the inscribed triangle with the smallest perimeter is the orthic triangle. This is the solution to Fagnano's problem, posed in 1775. The sides of the orthic triangle are parallel to the tangents to the circumcircle at the original triangle's vertices.
The orthic triangle of an acute triangle gives a triangular light route.
The tangent lines of the nine-point circle at the midpoints of the sides of are parallel to the sides of the orthic triangle, forming a triangle similar to the orthic triangle.
The orthic triangle is closely related to the tangential triangle, constructed as follows: let be the line tangent to the circumcircle of triangle at vertex , and define analogously. Let The tangential triangle is , whose sides are the tangents to triangle 's circumcircle at its vertices; it is homothetic to the orthic triangle. The circumcenter of the tangential triangle, and the center of similitude of the orthic and tangential triangles, are on the Euler line.
Trilinear coordinates for the vertices of the tangential triangle are given by
The reference triangle and its orthic triangle are orthologic triangles.
For more information on the orthic triangle, see here.
History
The theorem that the three altitudes of a triangle concur (at the orthocenter) is not directly stated in surviving Greek mathematical texts, but is used in the Book of Lemmas (proposition 5), attributed to Archimedes (3rd century BC), citing the "commentary to the treatise about right-angled triangles", a work which does not survive. It was also mentioned by Pappus (Mathematical Collection, VII, 62; 340). The theorem was stated and proved explicitly by al-Nasawi in his (11th century) commentary on the Book of Lemmas, and attributed to al-Quhi ().
This proof in Arabic was translated as part of the (early 17th century) Latin editions of the Book of Lemmas, but was not widely known in Europe, and the theorem was therefore proven several more times in the 17th–19th century. Samuel Marolois proved it in his Geometrie (1619), and Isaac Newton proved it in an unfinished treatise Geometry of Curved Lines Later William Chapple proved it in 1749.
A particularly elegant proof is due to François-Joseph Servois (1804) and independently Carl Friedrich Gauss (1810): Draw a line parallel to each side of the triangle through the opposite point, and form a new triangle from the intersections of these three lines. Then the original triangle is the medial triangle of the new triangle, and the altitudes of the original triangle are the perpendicular bisectors of the new triangle, and therefore concur (at the circumcenter of the new triangle).
See also
Triangle center
References
External links
Orthocenter of a triangle With interactive animation
Animated demonstration of orthocenter construction Compass and straightedge.
Fagnano's Problem by Jay Warendorff, Wolfram Demonstrations Project.
Triangle centers | Orthocenter | [
"Physics",
"Mathematics"
] | 1,654 | [
"Point (geometry)",
"Triangle centers",
"Points defined for a triangle",
"Geometric centers",
"Symmetry"
] |
161,253 | https://en.wikipedia.org/wiki/Quantum%20fluctuation | In quantum physics, a quantum fluctuation (also known as a vacuum state fluctuation or vacuum fluctuation) is the temporary random change in the amount of energy in a point in space, as prescribed by Werner Heisenberg's uncertainty principle. They are minute random fluctuations in the values of the fields which represent elementary particles, such as electric and magnetic fields which represent the electromagnetic force carried by photons, W and Z fields which carry the weak force, and gluon fields which carry the strong force.
The uncertainty principle states that the uncertainty in energy and time can be related by ΔE·Δt ≥ ħ/2, where ħ is the reduced Planck constant. This means that pairs of virtual particles with energy ΔE and lifetime shorter than Δt are continually created and annihilated in empty space. Although the particles are not directly detectable, the cumulative effects of these particles are measurable. For example, without quantum fluctuations, the "bare" mass and charge of elementary particles would be infinite; from renormalization theory the shielding effect of the cloud of virtual particles is responsible for the finite mass and charge of elementary particles.
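As a back-of-the-envelope illustration of the energy–time relation (not a derivation from the article): the longest lifetime allowed for a virtual electron–positron pair, whose creation "borrows" an energy of twice the electron rest energy, follows directly from ΔE·Δt ≥ ħ/2.

```python
# Estimate (illustrative only): maximum lifetime dt that dE*dt >= hbar/2 allows
# for a virtual electron-positron pair, with dE = 2 * m_e * c**2.
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
C = 2.99792458e8         # speed of light, m/s

delta_e = 2 * M_E * C**2           # energy of the pair, ~1.6e-13 J (~1.0 MeV)
delta_t = HBAR / (2 * delta_e)     # maximum lifetime, ~3e-22 s

print(f"dE ~ {delta_e:.2e} J, dt ~ {delta_t:.2e} s")
```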
Another consequence is the Casimir effect. One of the first observations which was evidence for vacuum fluctuations was the Lamb shift in hydrogen. In July 2020, scientists reported that quantum vacuum fluctuations can influence the motion of macroscopic, human-scale objects by measuring correlations below the standard quantum limit between the position/momentum uncertainty of the mirrors of LIGO and the photon number/phase uncertainty of light that they reflect.
Field fluctuations
In quantum field theory, fields undergo quantum fluctuations. A reasonably clear distinction can be made between quantum fluctuations and thermal fluctuations of a quantum field (at least for a free field; for interacting fields, renormalization substantially complicates matters). An illustration of this distinction can be seen by considering quantum and classical Klein–Gordon fields: For the quantized Klein–Gordon field in the vacuum state, we can calculate the probability density that we would observe a configuration at a time in terms of its Fourier transform to be
In contrast, for the classical Klein–Gordon field at non-zero temperature, the Gibbs probability density that we would observe a configuration at a time is
These probability distributions illustrate that every possible configuration of the field is possible, with the amplitude of quantum fluctuations controlled by the Planck constant, just as the amplitude of thermal fluctuations is controlled by kBT, where kB is the Boltzmann constant. Note that the following three points are closely related:
the Planck constant has units of action (joule-seconds) instead of units of energy (joules),
the quantum kernel is instead of (the quantum kernel is nonlocal from a classical heat kernel viewpoint, but it is local in the sense that it does not allow signals to be transmitted),
the quantum vacuum state is Lorentz-invariant (although not manifestly in the above), whereas the classical thermal state is not (the classical dynamics is Lorentz-invariant, but the Gibbs probability density is not a Lorentz-invariant initial condition).
A classical continuous random field can be constructed that has the same probability density as the quantum vacuum state, so that the principal difference from quantum field theory is the measurement theory (measurement in quantum theory is different from measurement for a classical continuous random field, in that classical measurements are always mutually compatible – in quantum-mechanical terms they always commute).
See also
Cosmic microwave background
False vacuum
Hawking radiation
Quantum annealing
Quantum foam
Stochastic interpretation
Vacuum energy
Vacuum polarization
Virtual black hole
Zitterbewegung
References
Quantum mechanics
Inflation (cosmology)
Articles containing video clips
Energy (physics) | Quantum fluctuation | [
"Physics",
"Mathematics"
] | 722 | [
"Physical quantities",
"Quantity",
"Theoretical physics",
"Quantum mechanics",
"Energy (physics)",
"Wikipedia categories named after physical quantities"
] |
161,291 | https://en.wikipedia.org/wiki/Noble%20metal | A noble metal is ordinarily regarded as a metallic element that is generally resistant to corrosion and is usually found in nature in its raw form. Gold, platinum, and the other platinum group metals (ruthenium, rhodium, palladium, osmium, iridium) are most often so classified. Silver, copper, and mercury are sometimes included as noble metals, but each of these usually occurs in nature combined with sulfur.
In more specialized fields of study and applications the number of elements counted as noble metals can be smaller or larger. It is sometimes used for the three metals copper, silver, and gold which have filled d-bands, while it is often used mainly for silver and gold when discussing surface-enhanced Raman spectroscopy involving metal nanoparticles. It is sometimes applied more broadly to any metallic or semimetallic element that does not react with a weak acid and give off hydrogen gas in the process. This broader set includes copper, mercury, technetium, rhenium, arsenic, antimony, bismuth, polonium, gold, the six platinum group metals, and silver.
Many of the noble metals are used in alloys for jewelry or coinage. In dentistry, silver is not always considered a noble metal because it is subject to corrosion when present in the mouth. All the metals are important heterogeneous catalysts.
Meaning and history
While lists of noble metals can differ, they tend to cluster around gold and the six platinum group metals: ruthenium, rhodium, palladium, osmium, iridium, and platinum.
In addition to this term's function as a compound noun, there are circumstances where noble is used as an adjective for the noun metal. A galvanic series is a hierarchy of metals (or other electrically conductive materials, including composites and semimetals) that runs from noble to active, and allows one to predict how materials will interact in the environment used to generate the series. In this sense of the word, graphite is more noble than silver and the relative nobility of many materials is highly dependent upon context, as for aluminium and stainless steel in conditions of varying pH.
The term noble metal can be traced back to at least the late 14th century and has slightly different meanings in different fields of study and application.
Prior to Mendeleev's publication in 1869 of the first (eventually) widely accepted periodic table, Odling published a table in 1864, in which the "noble metals" rhodium, ruthenium, palladium; and platinum, iridium, and osmium were grouped together, and adjacent to silver and gold.
Properties
Geochemical
The noble metals are siderophiles (iron-lovers). They tend to sink into the Earth's core because they dissolve readily in iron either as solid solutions or in the molten state. Most siderophile elements have practically no affinity whatsoever for oxygen: indeed, oxides of gold are thermodynamically unstable with respect to the elements.
Copper, silver, gold, and the six platinum group metals are the only native metals that occur naturally in relatively large amounts.
Corrosion resistance
Noble metals tend to be resistant to oxidation and other forms of corrosion, and this corrosion resistance is often considered to be a defining characteristic. Some exceptions are described below.
Copper is dissolved by nitric acid and aqueous potassium cyanide.
Ruthenium can be dissolved in aqua regia, a highly concentrated mixture of hydrochloric acid and nitric acid, only when in the presence of oxygen, while rhodium must be in a fine pulverized form. Palladium and silver are soluble in nitric acid, while silver's solubility in aqua regia is limited by the formation of silver chloride precipitate.
Rhenium reacts with oxidizing acids, and hydrogen peroxide, and is said to be tarnished by moist air. Osmium and iridium are chemically inert in ambient conditions. Platinum and gold can be dissolved in aqua regia. Mercury reacts with oxidising acids.
In 2010, US researchers discovered that an organic "aqua regia" in the form of a mixture of thionyl chloride SOCl2 and the organic solvent pyridine C5H5N achieved "high dissolution rates of noble metals under mild conditions, with the added benefit of being tunable to a specific metal" for example, gold but not palladium or platinum.
Gold can, however, be dissolved in selenic acid (H2SeO4).
Anion (-ide) formation
The noble metals gold and platinum also have comparatively high electronegativities for metallic elements, allowing them to exist as monatomic metal anions.
For example:
Cs + Au -> CsAu
(caesium auride, a yellow crystalline salt containing the Au− ion). Platinum exhibits similar behaviour in
BaPt, BaPt2, and Cs2Pt (barium and caesium platinides, which are reddish salts).
Electronic
The expression noble metal is sometimes confined to copper, silver, and gold since their full d-subshells can contribute to their noble character. There are also known to be significant contributions from how readily there is overlap of the d-electron states with the orbitals of other elements, particularly for gold. Relativistic contributions are also important, playing a role in the catalytic properties of gold.
The elements to the left of gold and silver have incompletely filled d-bands, which is believed to play a role in their catalytic properties. A common explanation is the d-band filling model of Hammer and Jens Nørskov, where the total d-bands are considered, not just the unoccupied states.
The low-energy plasmon properties are also of some importance, particularly those of silver and gold nanoparticles for surface-enhanced Raman spectroscopy, localized surface plasmons and other plasmonic properties.
Electrochemical
Standard reduction potentials in aqueous solution are also a useful way of predicting the non-aqueous chemistry of the metals involved. Thus, metals with high negative potentials, such as sodium or potassium, will ignite in air, forming the respective oxides. These fires cannot be extinguished with water, which also reacts with the metals involved to give hydrogen, which is itself explosive. Noble metals, in contrast, are disinclined to react with oxygen and, for that reason (as well as their scarcity), have been valued for millennia and used in jewellery and coins.
The adjacent table lists standard reduction potential in volts; electronegativity (revised Pauling); and electron affinity values (kJ/mol), for some metals and metalloids.
The simplified entries in the reaction column can be read in detail from the Pourbaix diagrams of the considered element in water. Noble metals have large positive potentials; elements not in this table have a negative standard potential or are not metals.
Electronegativity is included since it is reckoned to be, "a major driver of metal nobleness and reactivity".
The black tarnish commonly seen on silver arises from its sensitivity to sulfur-containing gases such as hydrogen sulfide:
4 Ag + 2 H2S + O2 → 2 Ag2S + 2 H2O.
Rayner-Canham contends that, "silver is so much more chemically-reactive and has such a different chemistry, that it should not be considered as a 'noble metal'." In dentistry, silver is not regarded as a noble metal due to its tendency to corrode in the oral environment.
The relevance of the entry for water is addressed by Li et al. in the context of galvanic corrosion. Such a process will only occur when:
"(1) two metals which have different electrochemical potentials are...connected, (2) an aqueous phase with electrolyte exists, and (3) one of the two metals has...potential lower than the potential of the reaction ( + 4e + = 4 OH•) which is 0.4 V...The...metal with...a potential less than 0.4 V acts as an anode...loses electrons...and dissolves in the aqueous medium. The noble metal (with higher electrochemical potential) acts as a cathode and, under many conditions, the reaction on this electrode is generally − 4 e• − = 4 OH•)."
The superheavy elements from hassium (element 108) to livermorium (116) inclusive are expected to be "partially very noble metals"; chemical investigations of hassium have established that it behaves like its lighter congener osmium, and preliminary investigations of nihonium and flerovium have suggested but not definitively established noble behavior. Copernicium's behaviour seems to partly resemble both its lighter congener mercury and the noble gas radon.
Oxides
As long ago as 1890, Hiorns observed as follows:
"Noble Metals. Gold, Platinum, Silver, and a few rare metals. The members of this class have little or no tendency to unite with oxygen in the free state, and when placed in water at a red heat do not alter its composition. The oxides are readily decomposed by heat in consequence of the feeble affinity between the metal and oxygen."
Smith, writing in 1946, continued the theme:
"There is no sharp dividing line [between 'noble metals' and 'base metals'] but perhaps the best definition of a noble metal is a metal whose oxide is easily decomposed at a temperature below a red heat."
"It follows from this that noble metals...have little attraction for oxygen and are consequently not oxidised or discoloured at moderate temperatures."
Such nobility is mainly associated with the relatively high electronegativity values of the noble metals, resulting in only weakly polar covalent bonding with oxygen. The table lists the melting points of the oxides of the noble metals, and for some of those of the non-noble metals, for the elements in their most stable oxidation states.
Catalytic properties
All the noble metals can act as catalysts. For example, platinum is used in catalytic converters, devices which convert toxic gases produced in car engines, such as the oxides of nitrogen, into non-polluting substances.
Gold has many industrial applications; it is used as a catalyst in hydrogenation and the water gas shift reaction.
See also
Galvanic series
Minor metals
Hallmark
Precious metal
Notes
References
Further reading
Balshaw L 2020, "Noble metals dissolved without aqua regia", Chemistry World, 1 September
Beamish FE 2012, The analytical chemistry of the noble metals, Elsevier Science, Burlington
Brasser R, Mojzsis SJ 2017, "A colossal impact enriched Mars' mantle with noble metals", Geophys. Res. Lett., vol. 44, pp. 5978–5985,
Brooks RR (ed.) 1992, Noble metals and biological systems: Their role in medicine, mineral exploration, and the environment, CRC Press, Boca Raton
Brubaker PE, Moran JP, Bridbord K, Hueter FG 1975, "Noble metals: a toxicological appraisal of potential new environmental contaminants", Environmental Health Perspectives, vol. 10, pp. 39–56,
Du R et al. 2019, "Emerging noble metal aerogels: State of the art and a look forward", Matter, vol. 1, pp. 39–56
Hämäläinen J, Ritala M, Leskelä M 2013, "Atomic layer deposition of noble metals and their oxides", Chemistry of Materials, vol. 26, no. 1, pp. 786–801,
Kepp K 2020, "Chemical causes of metal nobleness", ChemPhysChem, vol. 21 no. 5. pp. 360−369,
Lal H, Bhagat SN 1985, "Gradation of the metallic character of noble metals on the basis of thermoelectric properties", Indian Journal of Pure and Applied Physics, vol. 23, no. 11, pp. 551–554
Lyon SB 2010, "3.21 - Corrosion of noble metals", in B Cottis et al. (eds.), Shreir's Corrosion, Elsevier, pp. 2205–2223,
Medici S, Peana MF, Zoroddu MA 2018, "Noble metals in pharmaceuticals: Applications and limitations", in M Rai M, Ingle, S Medici (eds.), Biomedical applications of metals, Springer,
Pan S et al. 2019, "Noble-noble strong union: Gold at its best to make a bond with a noble gas atom", ChemistryOpen, vol. 8, p. 173,
Russel A 1931, "Simple deposition of reactive metals on noble metals", Nature, vol. 127, pp. 273–274,
St. John J et al. 1984, Noble metals, Time-Life Books, Alexandria, VA
Wang H 2017, "Chapter 9 - Noble Metals", in LY Jiang, N Li (eds.), Membrane-based separations in metallurgy, Elsevier, pp. 249–272,
External links
Noble metal – chemistry Encyclopædia Britannica, online edition
Chemical nomenclature
Metallurgy | Noble metal | [
"Chemistry",
"Materials_science",
"Engineering"
] | 2,777 | [
"Metallurgy",
"Materials science",
"nan"
] |
161,306 | https://en.wikipedia.org/wiki/Sellmeier%20equation | The Sellmeier equation is an empirical relationship between refractive index and wavelength for a particular transparent medium. The equation is used to determine the dispersion of light in the medium.
It was first proposed in 1872 by Wolfgang Sellmeier and was a development of the work of Augustin Cauchy on Cauchy's equation for modelling dispersion.
The equation
In its original and the most general form, the Sellmeier equation is given as
n²(λ) = 1 + Σᵢ Bᵢλ² / (λ² − Cᵢ),
where n is the refractive index, λ is the wavelength, and Bi and Ci are experimentally determined Sellmeier coefficients. These coefficients are usually quoted for λ in micrometres. Note that this λ is the vacuum wavelength, not that in the material itself, which is λ/n. A different form of the equation is sometimes used for certain types of materials, e.g. crystals.
Each term of the sum represents an absorption resonance of strength Bi at a wavelength √Ci. For example, the coefficients for BK7 below correspond to two absorption resonances in the ultraviolet, and one in the mid-infrared region. Analytically, this process is based on approximating the underlying optical resonances as Dirac delta functions, followed by the application of the Kramers–Kronig relations. This results in real and imaginary parts of the refractive index which are physically sensible. However, close to each absorption peak, the equation gives non-physical values of n² = ±∞, and in these wavelength regions a more precise model of dispersion such as Helmholtz's must be used.
If all terms are specified for a material, at long wavelengths far from the absorption peaks the value of n tends to √(1 + Σᵢ Bᵢ) ≈ √εr,
where εr is the relative permittivity of the medium.
For the characterization of glasses the equation consisting of three terms is commonly used:
n²(λ) = 1 + B1λ²/(λ² − C1) + B2λ²/(λ² − C2) + B3λ²/(λ² − C3).
As an example, the coefficients for a common borosilicate crown glass known as BK7 are shown below:
For common optical glasses, the refractive index calculated with the three-term Sellmeier equation deviates from the actual refractive index by less than 5×10−6 over the wavelengths' range of 365 nm to 2.3 μm, which is of the order of the homogeneity of a glass sample. Additional terms are sometimes added to make the calculation even more precise.
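Evaluating the three-term form is straightforward. In the sketch below the coefficient values are placeholders of roughly the right magnitude for a crown-type glass, not authoritative BK7 data; real coefficients should be taken from the glass manufacturer's catalogue.

```python
# Evaluating the three-term Sellmeier form n^2 = 1 + sum(B_i*l^2 / (l^2 - C_i)).
# Coefficients below are illustrative placeholders, NOT authoritative BK7 data.
from math import sqrt

def sellmeier_n(wavelength_um, B, C):
    """Refractive index; wavelength in micrometres, C coefficients in um^2."""
    l2 = wavelength_um ** 2
    return sqrt(1.0 + sum(b * l2 / (l2 - c) for b, c in zip(B, C)))

B = (1.04, 0.23, 1.01)            # illustrative resonance strengths
C = (0.0060, 0.0200, 103.6)       # illustrative resonance terms, um^2

print(sellmeier_n(0.5876, B, C))  # ~1.52 at the helium d-line, typical of crown glass
```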
Sometimes the Sellmeier equation is used in two-term form:
n²(λ) = A + B1λ²/(λ² − C1) + B2λ²/(λ² − C2).
Here the coefficient A is an approximation of the short-wavelength (e.g., ultraviolet) absorption contributions to the refractive index at longer wavelengths. Other variants of the Sellmeier equation exist that can account for a material's refractive index change due to temperature, pressure, and other parameters.
Derivation
Analytically, the Sellmeier equation models the refractive index as due to a series of optical resonances within the bulk material. Its derivation from the Kramers-Kronig relations requires a few assumptions about the material, from which any deviations will affect the model's accuracy:
There exists a number of resonances, and the final refractive index can be calculated from the sum over the contributions from all resonances.
All optical resonances are at wavelengths far away from the wavelengths of interest, where the model is applied.
At these resonant frequencies, the imaginary component of the susceptibility () can be modeled as a delta function.
From the last point, the complex refractive index (and the electric susceptibility) becomes:
The real part of the refractive index comes from applying the Kramers-Kronig relations to the imaginary part:
Plugging in the first equation above for the imaginary component:
The order of summation and integration can be swapped. When evaluated, this gives the following, where is the Heaviside function:
Since the domain is assumed to be far from any resonances (assumption 2 above), evaluates to 1 and a familiar form of the Sellmeier equation is obtained:
By rearranging terms, the constants and can be substituted into the equation above to give the Sellmeier equation.
Coefficients
See also
Cauchy's equation
References
External links
RefractiveIndex.INFO Refractive index database featuring Sellmeier coefficients for many hundreds of materials.
A browser-based calculator giving refractive index from Sellmeier coefficients.
Annalen der Physik - free Access, digitized by the French national library
Sellmeier coefficients for 356 glasses from Ohara, Hoya, and Schott
Eponymous equations of physics
Optics | Sellmeier equation | [
"Physics",
"Chemistry"
] | 916 | [
"Applied and interdisciplinary physics",
"Equations of physics",
"Optics",
"Eponymous equations of physics",
" molecular",
"Atomic",
" and optical physics"
] |
15,988,913 | https://en.wikipedia.org/wiki/Stripping%20%28chemistry%29 | Stripping is a physical separation process where one or more components are removed from a liquid stream by a vapor stream. In industrial applications the liquid and vapor streams can have co-current or countercurrent flows. Stripping is usually carried out in either a packed or trayed column.
Theory
Stripping works on the basis of mass transfer. The idea is to make the conditions favorable for the component, A, in the liquid phase to transfer to the vapor phase. This involves a gas–liquid interface that A must cross. The rate at which A crosses this boundary per unit interfacial area is defined as the flux of A, NA.
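One common way to express this flux is the overall-coefficient, film-theory form NA = KL(cL − c*), where c* is the liquid concentration in equilibrium with the stripping gas. This is a hedged sketch: the article itself does not specify a particular rate expression, and every symbol and number below is an illustrative assumption.

```python
# Film-theory sketch of the stripping flux: N_A = K_L * (c_L - c_star), with
# c_star = H_cp * p_A from Henry's law. All values are illustrative assumptions.
K_L = 2.0e-5   # overall liquid-phase mass-transfer coefficient, m/s
H_cp = 0.25    # Henry's law solubility constant, mol/(m^3*Pa)
p_A = 2.0      # partial pressure of A in the stripping gas, Pa
c_L = 1.2      # bulk liquid concentration of A, mol/m^3

c_star = H_cp * p_A            # equilibrium liquid concentration, 0.5 mol/m^3
N_A = K_L * (c_L - c_star)     # mol/(m^2*s), positive: A leaves the liquid

print(f"N_A = {N_A:.2e} mol m^-2 s^-1")
```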
Equipment
Stripping is mainly conducted in trayed towers (plate columns) and packed columns, and less often in spray towers, bubble columns, and centrifugal contactors.
Trayed towers consist of a vertical column with liquid flowing in the top and out the bottom. The vapor phase enters in the bottom of the column and exits out of the top. Inside of the column are trays or plates. These trays force the liquid to flow back and forth horizontally while the vapor bubbles up through holes in the trays. The purpose of these trays is to increase the amount of contact area between the liquid and vapor phases.
Packed columns are similar to trayed columns in that the liquid and vapor flows enter and exit in the same manner. The difference is that in packed towers there are no trays. Instead, packing is used to increase the contact area between the liquid and vapor phases. There are many different types of packing used and each one has advantages and disadvantages.
Variables
The variables and design considerations for strippers are many. Among them are the entering conditions, the degree of recovery of the solute needed, the choice of the stripping agent and its flow, the operating conditions, the number of stages, the heat effects, and the type and size of the equipment.
The degree of recovery is often determined by environmental regulations, such as for volatile organic compounds like chloroform.
Frequently, steam, air, inert gases, and hydrocarbon gases are used as stripping agents. This is based on solubility, stability, degree of corrosiveness, cost, and availability. As stripping agents are gases, operation at nearly the highest temperature and lowest pressure that will maintain the components and not vaporize the liquid feed stream is desired. This allows for the minimization of flow. As with all other variables, minimizing cost while achieving efficient separation is the ultimate goal.
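The trade-off between stripping agent flow and the number of stages can be estimated with the Kremser relation for a countercurrent cascade. The sketch below assumes a dilute solute, constant molar flows, and solute-free stripping gas; the K, V and L values are illustrative, not data from the article.

```python
# Kremser relation for a countercurrent stripper with solute-free stripping gas:
# with stripping factor S = K*V/L and N equilibrium stages, the fraction of
# solute remaining in the liquid is (S - 1) / (S**(N + 1) - 1).
def fraction_remaining(S: float, N: int) -> float:
    if abs(S - 1.0) < 1e-12:
        return 1.0 / (N + 1)       # limiting form when S = 1
    return (S - 1.0) / (S ** (N + 1) - 1.0)

K = 40.0   # vapor-liquid equilibrium ratio y/x for the solute (hypothetical)
V = 0.05   # stripping gas molar flow (arbitrary units)
L = 1.0    # liquid molar flow (same units)
S = K * V / L   # stripping factor = 2.0

for N in (2, 4, 8):
    removed = 1.0 - fraction_remaining(S, N)
    print(f"N = {N}: {removed:.2%} of the solute stripped")
```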
The size of the equipment, and particularly the height and diameter, is important in determining the possibility of flow channeling that would reduce the contact area between the liquid and vapor streams. If flow channeling is suspected to be occurring, a redistribution plate is often necessary to, as the name indicates, redistribute the liquid flow evenly to reestablish a higher contact area.
As mentioned previously, strippers can be trayed or packed. Packed columns, and particularly when random packing is used, are usually favored for smaller columns with a diameter less than 2 feet and a packed height of not more than 20 feet. Packed columns can also be advantageous for corrosive fluids, high foaming fluids, when fluid velocity is high, and when particularly low pressure drop is desired. Trayed strippers are advantageous because of ease of design and scale up. Structured packing can be used similar to trays despite possibly being the same material as dumped (random) packing. Using structured packing is a common method to increase the capacity for separation or to replace damaged trays.
Trayed strippers can have sieve, valve, or bubble cap trays, while packed strippers can have either structured packing or random packing. Trays and packing are used to increase the contact area over which mass transfer can occur, as mass transfer theory dictates. Packing can vary in material, surface area, flow area, and associated pressure drop. Older-generation packings include ceramic Raschig rings and Berl saddles. More common packing materials are metal and plastic Pall rings, metal and plastic Białecki rings, and ceramic Intalox saddles. Each packing material of this newer generation improves the surface area, the flow area, and/or the associated pressure drop across the packing. Also important is the ability of the packing material not to stack on top of itself; if such stacking occurs, it drastically reduces the surface area of the material. Recent work on lattice-type packing designs aims to further improve these characteristics.
During operation, monitoring the pressure drop across the column can help to determine the performance of the stripper. A changed pressure drop over a significant range of time can be an indication that the packing may need to be replaced or cleaned.
Typical applications
Stripping is commonly used in industrial applications to remove harmful contaminants from waste streams. One example is the removal of tributyltin (TBT) and polycyclic aromatic hydrocarbon (PAH) contaminants from harbor soils. The soils are dredged from the bottom of contaminated harbors, mixed with water to make a slurry, and then stripped with steam. The cleaned soil and the contaminant-rich steam are then separated. This process is able to decontaminate soils almost completely.
Steam is also frequently used as a stripping agent for water treatment. Volatile organic compounds are partially soluble in water and because of environmental considerations and regulations, must be removed from groundwater, surface water, and wastewater. These compounds can be present because of industrial, agricultural, and commercial activity.
See also
Steam stripping (similar concept, but more specialized to refinery operations)
Continuous distillation
Distillation
Distillation Design
Fractionating column
Packed bed
Steam distillation
Theoretical plate
Stripping Enhanced Distillation
References
Separation processes | Stripping (chemistry) | [
"Chemistry"
] | 1,166 | [
"nan",
"Separation processes"
] |
15,990,144 | https://en.wikipedia.org/wiki/Libin%20Cardiovascular%20Institute | The Libin Cardiovascular Institute is an entity of Alberta Health Services and the University of Calgary. It connects all cardiovascular research, education and patient care in Southern Alberta, serving a population of about two million. Its more than 1,500 members include physicians, clinicians and other health professionals, researchers and trainees.
The Libin Cardiovascular Institute was made possible through the donation of founding donors Mona and Alvin Libin.
On March 6, 2003, the Alvin and Mona Libin Foundation presented $15 million to Alberta Health Services and the University of Calgary to form the Libin Cardiovascular Institute. It was then the largest one-time donation to the organizations. The institute was formally created on January 27, 2004.
The Foundation renewed their commitment to the Institute in May 2022 with a $7.5 million donation.
Research
Research within the Libin Cardiovascular Institute extends from basic biomedical and clinical research to health outcomes and care delivery research. Notable successes include:
A global change in treatment of arrhythmia as a result of trials led by D. George Wyse.
APPROACH database and Heart Alert
Innovative STEMI protocol resulting in a mean time to percutaneous coronary intervention of 62 minutes
Stephenson Cardiovascular MR Centre, ranking first internationally among CMR centres as measured by research impact factor points, and ranking first in North America among CMR centres as measured by volume of patient studies. A recent publication documented, for the first time, the imaging of salvaged heart muscle as a result of a post-MI intervention. To date, a white paper on myocarditis with lead author Dr. Matthias Friedrich of the Stephenson CMR Centre is the only white paper ever published by the Journal of the American College of Cardiology.
Highest 30-day myocardial infarction survival rate in Canada according to the Canadian Institute for Health Information
Education
Programs under the jurisdiction of the Libin Cardiovascular Institute include Cardiology and Cardiovascular Surgery, in addition to contributions to other medical programs as well as graduate studies in the sciences.
The LCI also offers fellowships and/or advanced training in interventional cardiology, electrophysiology, amyloidosis, heart function and cardiac MRI.
Sites
The Libin Cardiovascular Institute is a wide-ranging program of cardiovascular integration which houses a growing list of scientists, clinicians, and researchers from various sites working together to advance the cardiovascular health of Albertans.
Health Research Innovation Centre (HRIC) contains a new hub for the Libin Cardiovascular Institute's basic scientists. The space, co-located on the same campus as the Foothills Medical Centre, opened in June 2009. Elements of different University of Calgary institutes occupy the various floors and areas of this building, so as to encourage research integration.
HRIC Teaching Research & Wellness Building (TRW), opened in Q3 of 2009, houses scientists focused on translational research. Directly connected to the primary area of HRIC, the spaces have been constructed to encourage interaction between basic and clinical researchers.
South Health Campus, a $1.5B project completed in 2013, offers a full suite of services relating to cardiovascular health.
Rockyview General Hospital
Peter Lougheed Centre
Alberta Children's Hospital
Notable people
Eldon Smith OC, FRCPC - Officer of the Order of Canada, penultimate Editor-in-Chief of the Canadian Journal of Cardiology, chair of the steering committee responsible for developing a new Heart Health Strategy to fight heart disease in Canada.
D. George Wyse MD, FRCPC, PHD - Professor Emeritus, University of Calgary
Alvin Libin LLD - Officer of the Order of Canada, Member of the Alberta Order of Excellence, Chair of the Libin Foundation
Dr. Todd Anderson, MD, FRCPC, former director of the Libin Cardiovascular Institute and dean of the Cumming School of Medicine at the University of Calgary.
Dr. Paul Fedak, MD, PHD, director of the Libin Cardiovascular Institute, cardiac surgeon, translational scientist, and senior medical leader at the University of Calgary.
Libin/AHFMR Prize in Cardiovascular Research
The Alberta Heritage Foundation for Medical Research (AHFMR) Prize for Excellence in Cardiovascular Research was established in honour of Mr. Alvin Libin for his many contributions to the AHFMR (now Alberta Innovates).
This $25,000 prize is awarded to an outstanding international researcher whose work has had a major impact on the understanding, prevention, recognition, or treatment of cardiovascular disease.
Past winners include:
2019 Robert Califf
2018 Christine Seidman
2016 Eric N. Olson
2012 Eric Topol
2010 A. John Camm
2008 Valentín Fuster
2006 The Texas Heart Institute James T. Willerson
2004 Eugene Braunwald
References
Sources and external links
Libin Cardiovascular Institute - official web-site
Magnetic resonance imaging
Medical and health organizations based in Alberta
Cardiac electrophysiology
Heart disease organizations
University of Calgary | Libin Cardiovascular Institute | [
"Chemistry"
] | 974 | [
"Nuclear magnetic resonance",
"Magnetic resonance imaging"
] |
15,990,232 | https://en.wikipedia.org/wiki/Thermal%20management%20of%20high-power%20LEDs | High power light-emitting diodes (LEDs) can use 350 milliwatts or more in a single LED. Most of the electricity in an LED becomes heat rather than light – about 70% heat and 30% light. If this heat is not removed, the LEDs run at high temperatures, which not only lowers their efficiency but also makes the LED less reliable and shortens its lifespan. Thermal management of high-power LEDs is therefore a crucial area of research and development. Limiting the temperatures of both the junction and the phosphor particles to low values is required to guarantee the desired LED lifetime.
Thermal management is a universal problem of power density, which occurs both at higher powers and in smaller devices. Many lighting applications seek to combine a high light flux with an extremely small light-emitting substrate, making concerns about LED thermal management particularly acute.
Heat transfer procedure
In order to maintain a low junction temperature to keep good performance of an LED, every method of removing heat from LEDs should be considered. Conduction, convection, and radiation are the three means of heat transfer. Typically, LEDs are encapsulated in a transparent polyurethane-based resin, which is a poor thermal conductor. Nearly all heat produced is conducted through the back side of the chip. Heat is generated from the p–n junction by electrical energy that was not converted to useful light, and conducted to outside ambience through a long path, from junction to solder point, solder point to board, and board to the heat sink and then to the atmosphere. A typical LED side view and its thermal model are shown in the figures.
The junction temperature will be lower if the thermal impedance is smaller and likewise, with a lower ambient temperature. To maximize the useful ambient temperature range for a given power dissipation, the total thermal resistance from junction to ambient must be minimized.
The values for the thermal resistance vary widely depending on the material or component supplier. For example, RJC will range from 2.6 °C/W to 18 °C/W, depending on the LED manufacturer. The thermal interface material’s (TIM) thermal resistance will also vary depending on the type of material selected. Common TIMs are epoxy, thermal grease, pressure-sensitive adhesive and solder. Power LEDs are often mounted on metal-core printed circuit boards (MCPCB), which will be attached to a heat sink. Heat conducted through the MCPCB and heat sink is dissipated by convection and radiation. In the package design, the surface flatness and quality of each component, applied mounting pressure, contact area, the type of interface material and its thickness are all important parameters to thermal resistance design.
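As a rough illustration of the series resistance path described above, the junction temperature can be estimated by summing the individual resistances and multiplying by the dissipated heat. The Python sketch below uses assumed, mid-range values for each resistance; they are not vendor data for any specific LED, interface material, or heat sink.
# Junction temperature from a simple series thermal-resistance chain.
# All values below are illustrative assumptions.
P_electrical = 3.0       # W supplied to the LED
heat_fraction = 0.70     # ~70 % of the input power becomes heat (see above)
P_heat = P_electrical * heat_fraction
R_jc    = 8.0    # C/W, junction to case (vendor-dependent, 2.6-18 C/W above)
R_tim   = 1.5    # C/W, thermal interface material
R_board = 3.0    # C/W, metal-core PCB
R_sink  = 6.0    # C/W, heat sink to ambient
T_ambient = 25.0
T_junction = T_ambient + P_heat * (R_jc + R_tim + R_board + R_sink)
print(T_junction)        # about 64 C for these assumed values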
Passive thermal designs
Some considerations for passive thermal designs to ensure good thermal management for high power LED operation include:
Adhesive
Adhesive is a thermally conductive interface layer, commonly used to bond the LED to the board and the board to the heat sink, further optimizing the thermal performance. Current commercial adhesives are limited by a relatively low thermal conductivity of about 1 W/(m·K).
Heat sink
Heat sinks provide a path for the heat from the LED source to outside medium. Heat sinks can dissipate power in three ways:
conduction - heat transfer from one solid to another
convection - heat transfer from a solid to a moving fluid, which for most LED applications will be air
radiation - heat transfer between two bodies of different surface temperatures through thermal radiation.
Other heat sink design considerations include:
Material – The thermal conductivity of the material that the heat sink is made from directly affects the dissipation efficiency through conduction. Normally this is aluminum, although copper may be used to advantage for flat-sheet heat sinks. New materials include thermoplastics, which are used when heat dissipation requirements are lower than normal or when a complex shape favors injection molding, and natural graphite solutions, which offer better thermal transfer than copper with a lower weight than aluminum plus the ability to be formed into complex two-dimensional shapes. Graphite is considered an exotic cooling solution and comes at a higher production cost. Heat pipes may also be added to aluminum or copper heat sinks to reduce spreading resistance.
Shape – Thermal transfer takes place at the surface of the heat sink. Therefore, heat sinks should be designed to have a large surface area. This goal can be reached by using a large number of fine fins or by increasing the size of the heat sink itself.
Although a bigger surface area leads to better cooling performance, there must be sufficient space between the fins to generate a considerable temperature difference between the fin and the surrounding air.
When the fins stand too close together, the air in between can become almost the same temperature as the fins, so that thermal transmission will not occur. Therefore, more fins do not necessarily lead to better cooling performance.
Surface Finish – Thermal radiation of heat sinks is a function of surface finish, especially at higher temperatures. A painted surface will have a greater emissivity than a bright, unpainted one. The effect is most remarkable with flat-plate heat sinks, where about one-third of the heat is dissipated by radiation. Moreover, a perfectly flat contact area allows the use of a thinner layer of thermal compound, which will reduce the thermal resistance between the heat sink and LED source. On the other hand, anodizing or etching will also decrease the thermal resistance.
Mounting method – Heat-sink mountings with screws or springs are often better than regular clips, thermal conductive glue or sticky tape.
For heat transfer between LED sources over 15 Watt and LED coolers, it is recommended to use a high thermal conductive interface material (TIM) which will create a thermal resistance over the interface lower than 0.2 K/W. Currently, the most common solution is to use a phase-change material, which is applied in the form of a solid pad at room temperature, but then changes to a thick, gelatinous fluid once it rises above 45 °C.
Heat pipes and vapor chambers
Heat pipes and vapor chambers are passive, and have effective thermal conductivities ranging from 10,000 to 100,000 W/(m·K). They can provide the following benefits in LED thermal management:
Transport heat to a remote heat sink with minimum temperature drop
Isothermalize a natural convection heat sink, increasing its efficiency and reducing its size. In one case, adding five heat pipes reduced the heat sink mass by 34%, from 4.4 kg to 2.9 kg.
Efficiently transform the high heat flux directly under an LED to a lower heat flux that can be removed more easily.
PCB - printed circuit board
MCPCB – Metal-core PCBs are boards that incorporate a metal base as a heat spreader, as an integral part of the circuit board. The metal core usually consists of an aluminum or copper alloy. Furthermore, an MCPCB can incorporate a dielectric polymer layer with high thermal conductivity to reduce thermal resistance.
Separation – Separating the LED drive circuitry from the LED board prevents the heat generated by the driver from raising the LED junction temperature.
Thick-film materials system
Additive Process – Thick film is a selective additive deposition process which uses material only where it is needed. A more direct connection to the Al heat sink is provided; therefore thermal interface material is not needed for circuit building. Reduces the heat spreading layers and thermal footprint. Processing steps are reduced, along with the number of materials and amount of materials consumed.
Insulated Aluminum Materials System – Increases thermal connectivity and provides high dielectric breakdown strength. Materials can be fired at less than 600 °C. Circuits are built directly onto aluminum substrates, eliminating the need for thermal interface materials. Through improved thermal connectivity, the junction temperature of the LED can be decreased by up to 10 °C. This allows the designer to either decrease the number of LEDs needed on a board, by increasing the power to each LED; or decrease the size of the substrate, to manage dimensional restrictions. It is also proven that decreasing the junction temperature of the LED dramatically improves the LED’s lifetime.
Package type
Flip chip – concept is similar to flip-chip in package configuration widely used in the silicon integrated circuit industry. Briefly speaking, the LED die is assembled face down on the sub-mount, which is usually silicon or ceramic, acting as the heat spreader and supporting substrate. The flip-chip joint can be eutectic, high-lead, lead-free solder or gold stub. The primary source of light comes from the back side of the LED chip, and there is usually a built-in reflective layer between the light emitter and the solder joints to reflect up the light which is emitted downward. Several companies have adopted flip-chip packages for their high-power LED, achieving about 60% reduction in the thermal resistance of the LED while keeping its thermal reliability.
LED filament
The LED filament style of lamp combines many relatively low-power LEDs on a transparent glass substrate, coated with phosphor, and then encapsulated in the silicone. The lamp bulb is filled with inert gas, which convects heat away from the extended array of LEDs to the envelope of the bulb. This design avoids the requirement for a large heat sink.
Active thermal designs
Some works about using active thermal designs to realize good thermal management for high power LED operation include:
Thermoelectric (TE) device
Thermoelectric devices are a promising candidate for thermal management of high power LEDs owing to their small size and fast response. A TE device made with two ceramic plates can be integrated into a high power LED and adjust the temperature of the LED, the plates conducting heat while providing electrical insulation. Since ceramic TE devices tend to have a coefficient of thermal expansion mismatch with the silicon substrate of the LED, silicon-based TE devices have been invented to substitute for traditional ceramic TE devices. Because silicon has a higher thermal conductivity (149 W/(m·K)) than aluminum oxide (30 W/(m·K)), the cooling performance of silicon-based TE devices is also better than that of traditional ceramic TE devices.
The cooling effect of thermoelectric materials depends on the Peltier effect. When an external current is applied to a circuit composed of n-type and p-type thermoelectric units, the current will drive carriers in the thermoelectric units to move from one side to the other. When carriers move, heat also flows along with the carriers from one side to the other. Since the direction of heat transfer relies on the applied current, thermoelectric materials can function as a cooler with currents that drive carriers from the heated side to the other side.
A typical silicon-based TE device has a sandwich structure. Thermoelectric materials are sandwiched between two substrates made by high thermal conductivity materials. N-type and p-type thermoelectric units are connected sequentially in series as the middle layer. When a high power LED generates heat, the heat will first transfer through the top substrate to the thermoelectric units. With an applied external current, the heat will then be forced to flow to the bottom substrate through the thermoelectric units so that the temperature of the high power LED can be stable.
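The behavior described above can be approximated with the standard idealized heat balance for a single-stage Peltier cooler, in which the pumped heat equals the Peltier term minus half the Joule heating and the back-conduction through the module. The parameter values in this Python sketch are assumptions chosen only to illustrate the calculation, not data for any real module.
# Idealized heat balance of a single-stage thermoelectric (Peltier) cooler.
# Parameter values are illustrative assumptions, not data for a real device.
alpha  = 0.05    # V/K, effective Seebeck coefficient of the couples in series
R      = 2.0     # ohm, electrical resistance of the module
Kcond  = 0.5     # W/K, thermal conductance between hot and cold plates
I      = 2.0     # A, drive current
T_cold = 300.0   # K, cold-plate (LED-side) temperature
T_hot  = 320.0   # K, hot-plate (heat-sink side) temperature
# Peltier pumping minus half the Joule heating minus back-conduction:
Q_pumped = alpha * I * T_cold - 0.5 * I**2 * R - Kcond * (T_hot - T_cold)
P_input  = alpha * I * (T_hot - T_cold) + I**2 * R
print(Q_pumped, P_input)   # about 16 W pumped for about 10 W of electrical input here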
Liquid cooling system
Cooling systems using liquids such as liquid metals, water, and stream also actively manage high power LED's temperature. Liquid cooling systems are made up of a driving pump, a cold plate, and a fan-cooled radiator. The heat generated by a high power LED will first transfer to liquids through a cold plate. Then liquids driven by a pump will circulate in the system to absorb the heat. Lastly, a fan-cooled radiator will cool the heated fluids for the next circulation. The circulation of liquids manages the temperature of the high power LED.
See also
LED lamp – solid state lighting (SSL)
Thermal resistance in electronics
Thermal management (electronics)
Active cooling
Synthetic jet
References
External links
Thermal Management of Cree® XLamp® LEDs
LED Thermal Management
Thermal management of Osram Soleriq COB LED modules
Light-emitting diodes
Optical diodes
Semiconductor technology | Thermal management of high-power LEDs | [
"Materials_science"
] | 2,490 | [
"Semiconductor technology",
"Microtechnology"
] |
15,993,881 | https://en.wikipedia.org/wiki/1000%20Genomes%20Project | The 1000 Genomes Project (1KGP), which took place from January 2008 to 2015, was an international research effort to establish the most detailed catalogue of human genetic variation at the time. Scientists planned to sequence the genomes of at least one thousand anonymous healthy participants from a number of different ethnic groups within the following three years, using advancements in newly developed technologies. In 2010, the project finished its pilot phase, which was described in detail in a publication in the journal Nature. In 2012, the sequencing of 1092 genomes was announced in a Nature publication. In 2015, two papers in Nature reported results and the completion of the project and opportunities for future research.
Many rare variations, restricted to closely related groups, were identified, and eight structural-variation classes were analyzed.
The project united multidisciplinary research teams from institutes around the world, including China, Italy, Japan, Kenya, Nigeria, Peru, the United Kingdom, and the United States contributing to the sequence dataset and to a refined human genome map freely accessible through public databases to the scientific community and the general public alike.
The International Genome Sample Resource was created to host and expand on the data set after the project's end.
Background
Since the completion of the Human Genome Project, advances in human population genetics and comparative genomics have enabled further insight into genetic diversity. The understanding of structural variations (insertions/deletions (indels), copy number variations (CNV), retroelements), single-nucleotide polymorphisms (SNPs), and natural selection was being solidified.
The diversity of human genetic variation, such as indels, was still being uncovered, motivating further investigation of human genomic variation.
Natural selection
It also aimed to provide evidence that can be used to explore the impact of natural selection on population differences. Patterns of DNA polymorphisms can be used to reliably detect signatures of selection and may help to identify genes that might underlie variation in disease resistance or drug metabolism. Such insights could improve understanding of phenotypic variations, genetic disorders and Mendelian inheritance and their effects on survival and/or reproduction of different human populations.
Project description
Goals
The 1000 Genomes Project was designed to bridge the gap of knowledge between rare genetic variants that have a severe effect predominantly on simple traits (e.g. cystic fibrosis, Huntington disease) and common genetic variants that have a mild effect and are implicated in complex traits (e.g. cognition, diabetes, heart disease).
The primary goal of this project was to create a complete and detailed catalogue of human genetic variations, which can be used for association studies relating genetic variation to disease. The consortium aimed to discover >95% of the variants (e.g. SNPs, CNVs, indels) with minor allele frequencies as low as 1% across the genome and 0.1-0.5% in gene regions, as well as to estimate the population frequencies, haplotype backgrounds and linkage disequilibrium patterns of variant alleles.
Secondary goals included the support of better SNP and probe selection for genotyping platforms in future studies and the improvement of the human reference sequence. The completed database was expected to be a useful tool for studying regions under selection, variation in multiple populations and understanding the underlying processes of mutation and recombination.
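A back-of-the-envelope check (not a project calculation; the numbers below are assumptions) suggests why a cohort of roughly a thousand individuals is adequate for variants at 1% frequency: such an allele is almost certain to be present in the sample, so detection depends mainly on sequencing depth and variant-calling accuracy.
# Illustrative arithmetic only; values are assumptions, not project figures.
p = 0.01                     # minor allele frequency of interest
chromosomes = 2 * 1000       # 1,000 diploid individuals
expected_copies = p * chromosomes          # about 20 allele copies expected in the cohort
prob_absent = (1.0 - p) ** chromosomes     # roughly 2e-9: essentially never absent
print(expected_copies, prob_absent)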
Outline
The human genome consists of approximately 3 billion DNA base pairs and is estimated to carry around 20,000 protein coding genes. In designing the study the consortium needed to address several critical issues regarding the project metrics such as technology challenges, data quality standards and sequence coverage.
Over the course of the next three years, scientists at the Sanger Institute, BGI Shenzhen and the National Human Genome Research Institute’s Large-Scale Sequencing Network planned to sequence a minimum of 1,000 human genomes. Due to the large amount of sequence data that was required, recruitment of additional participants was maintained.
Almost 10 billion bases were to be sequenced per day over the two-year production phase, equating to more than two human genomes every 24 hours. The intended sequence dataset was to comprise 6 trillion DNA bases, 60-fold more sequence data than had been published in DNA databases at the time.
To determine the final design of the full project three pilot studies were to be carried out within the first year of the project. The first pilot intends to genotype 180 people of 3 major geographic groups at low coverage (2×). For the second pilot study, the genomes of two nuclear families (both parents and an adult child) are going to be sequenced with deep coverage (20× per genome). The third pilot study involves sequencing the coding regions (exons) of 1,000 genes in 1,000 people with deep coverage (20×).
It was estimated that the project would likely cost more than $500 million if standard DNA sequencing technologies were used. Several newer technologies (e.g. Solexa, 454, SOLiD) were to be applied, lowering the expected costs to between $30 million and $50 million. The major support was provided by the Wellcome Trust Sanger Institute in Hinxton, England; the Beijing Genomics Institute, Shenzhen (BGI Shenzhen), China; and the NHGRI, part of the National Institutes of Health (NIH).
In keeping with Fort Lauderdale principles all genome sequence data (including variant calls) is freely available as the project progresses and can be downloaded via ftp from the 1000 genomes project webpage.
Human genome samples
Based on the overall goals for the project, the samples will be chosen to provide power in populations where association studies for common diseases are being carried out. Furthermore, the samples do not need to have medical or phenotype information since the proposed catalogue will be a basic resource on human variation.
For the pilot studies human genome samples from the HapMap collection will be sequenced. It will be useful to focus on samples that have additional data available (such as ENCODE sequence, genome-wide genotypes, fosmid-end sequence, structural variation assays, and gene expression) to be able to compare the results with those from other projects.
Complying with extensive ethical procedures, the 1000 Genomes Project will then use samples from volunteer donors. The following populations will be included in the study: Yoruba in Ibadan (YRI), Nigeria; Japanese in Tokyo (JPT); Chinese in Beijing (CHB); Utah residents with ancestry from northern and western Europe (CEU); Luhya in Webuye, Kenya (LWK); Maasai in Kinyawa, Kenya (MKK); Toscani in Italy (TSI); Peruvians in Lima, Peru (PEL); Gujarati Indians in Houston (GIH); Chinese in metropolitan Denver (CHD); people of Mexican ancestry in Los Angeles (MXL); and people of African ancestry in the southwestern United States (ASW).
Community meeting
Data generated by the 1000 Genomes Project is widely used by the genetics community, making the first 1000 Genomes Project one of the most cited papers in biology. To support this user community, the project held a community analysis meeting in July 2012 that included talks highlighting key project discoveries, their impact on population genetics and human disease studies, and summaries of other large-scale sequencing studies.
Project findings
Pilot phase
The pilot phase consisted of three projects:
low-coverage whole-genome sequencing of 179 individuals from 4 populations
high-coverage sequencing of 2 trios (mother-father-child)
exon-targeted sequencing of 697 individuals from 7 populations
It was found that on average, each person carries around 250–300 loss-of-function variants in annotated genes and 50-100 variants previously implicated in inherited disorders. Based on the two trios, it is estimated that the rate of de novo germline mutation is approximately 10−8 per base per generation.
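As a rough order-of-magnitude illustration (using round numbers, not the project's exact estimates), that mutation rate implies a few tens of new mutations per child per generation:
# Rough illustration; round numbers, not the published estimate itself.
mutation_rate = 1e-8          # per base per generation (order quoted above)
diploid_bases = 2 * 3e9       # roughly 3 billion base pairs per haploid genome
expected_de_novo = mutation_rate * diploid_bases
print(expected_de_novo)       # about 60 new mutations per generation, order of magnitude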
See also
Human Genome Project
HapMap Project
Personal genomics
Population groups in biomedicine
1000 Plant Genomes Project
List of biological databases
References
External links
1000 Genomes - A Deep Catalog of Human Genetic Variation - official web page
International HapMap Project - official web page
Human Genome Project Information
Human genome projects
Population genetics organizations
Single-nucleotide polymorphisms
Genome projects
Genomics
Bioinformatics | 1000 Genomes Project | [
"Chemistry",
"Engineering",
"Biology"
] | 1,712 | [
"Biological engineering",
"Single-nucleotide polymorphisms",
"Bioinformatics",
"Biodiversity",
"Molecular biology",
"Genome projects",
"Human genome projects"
] |
15,994,159 | https://en.wikipedia.org/wiki/Underwater%20acoustic%20communication | Underwater acoustic communication is a technique of sending and receiving messages in water. There are several ways of employing such communication but the most common is by using hydrophones. Underwater communication is difficult due to factors such as multi-path propagation, time variations of the channel, small available bandwidth and strong signal attenuation, especially over long ranges. Compared to terrestrial communication, underwater communication has low data rates because it uses acoustic waves instead of electromagnetic waves.
At the beginning of the 20th century some ships communicated by underwater bells as well as using the system for navigation. Submarine signals were at the time competitive with the primitive maritime radionavigation. The later Fessenden oscillator allowed communication with submarines.
Types of modulation used for underwater acoustic communications
In general the modulation methods developed for radio communications can be adapted for underwater acoustic communications (UAC). However some of the modulation schemes are more suited to the unique underwater acoustic communication channel than others. Some of the modulation methods used for UAC are as follows:
Frequency-shift keying (FSK)
Phase-shift keying (PSK)
Frequency-hopping spread spectrum (FHSS)
Direct-sequence spread spectrum (DSSS)
Frequency and pulse-position modulation (FPPM and PPM)
Multiple frequency-shift keying (MFSK)
Orthogonal frequency-division multiplexing (OFDM)
Continuous Phase Modulation (CPM)
The following is a discussion on the different types of modulation and their utility to UAC.
Frequency-shift keying
FSK is the earliest form of modulation used for acoustic modems. FSK usually employs two distinct frequencies to modulate data; for example, frequency F1 to indicate bit 0 and frequency F2 to indicate bit 1. Hence a binary string can be transmitted by alternating these two frequencies depending on whether it is a 0 or 1. The receiver can be as simple as having analogue matched filters to the two frequencies and a level detector to decide if a 1 or 0 was received. This is a relatively easy form of modulation and therefore used in the earliest acoustic modems. However more sophisticated demodulator using digital signal processors (DSP) can be used in the present day.
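A minimal sketch of the scheme just described is shown below in Python: two tones carry the bits, and each symbol is decided by correlating against the two reference tones, the software equivalent of the analogue matched filters mentioned above. The sample rate, tone frequencies, bit rate and noise level are arbitrary illustrative choices, not parameters of any real acoustic modem.
import numpy as np
# Minimal binary FSK sketch: two tones, correlation ("matched filter") detection.
fs   = 48000            # samples per second
baud = 100              # bits per second
f0, f1 = 9000, 11000    # tone for bit 0 and tone for bit 1 (Hz)
n = fs // baud
t = np.arange(n) / fs
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
signal = np.concatenate([np.sin(2 * np.pi * (f1 if b else f0) * t) for b in bits])
signal += 0.5 * np.random.randn(signal.size)    # additive noise
ref0, ref1 = np.sin(2 * np.pi * f0 * t), np.sin(2 * np.pi * f1 * t)
decoded = []
for k in range(bits.size):
    chunk = signal[k * n:(k + 1) * n]
    # decide for whichever tone correlates more strongly with this symbol
    decoded.append(1 if abs(chunk @ ref1) > abs(chunk @ ref0) else 0)
print(decoded)   # reproduces the transmitted bits for modest noise levels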
The biggest challenge FSK faces in the UAC is multi-path reflections. With multi-path (particularly in UAC) several strong reflections can be present at the receiving hydrophone and the threshold detectors become confused, thus severely limiting the use of this type of UAC to vertical channels. Adaptive equalization methods have been tried with limited success. Adaptive equalization tries to model the highly reflective UAC channel and subtract the effects from the received signal. The success has been limited due to the rapidly varying conditions and the difficulty to adapt in time.
Phase-shift keying
Phase-shift keying (PSK) is a digital modulation scheme that conveys data by changing (modulating) the phase of a reference signal (the carrier wave). The data is impressed onto the carrier by varying the sine and cosine (in-phase and quadrature) inputs at precise times. It is widely used for wireless LANs, RFID and Bluetooth communication.
Orthogonal frequency-division multiplexing
Orthogonal frequency-division multiplexing (OFDM) is a digital multi-carrier modulation scheme. OFDM conveys data on several parallel data channels by incorporating closely spaced orthogonal sub-carrier signals.
OFDM is a favorable communication scheme in underwater acoustic communications thanks to its resilience against frequency selective channels with long delay spreads.
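The core of an OFDM transmitter and receiver is an IFFT/FFT pair with a cyclic prefix, which turns a delay-spread channel into independent, one-tap-equalized subcarriers. The Python sketch below uses arbitrary sizes and a toy two-path channel purely for illustration; it omits synchronization, Doppler compensation and coding, which dominate real underwater OFDM designs.
import numpy as np
# Minimal OFDM sketch: QPSK symbols on parallel subcarriers via an IFFT,
# with a cyclic prefix to absorb the channel delay spread. Sizes are arbitrary.
n_sub, cp = 64, 16
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(n_sub, 2))
symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)   # QPSK
tx = np.fft.ifft(symbols)                  # one OFDM symbol in the time domain
tx = np.concatenate([tx[-cp:], tx])        # prepend the cyclic prefix
# Toy multipath channel: direct path plus one delayed, attenuated echo.
h = np.zeros(8, dtype=complex); h[0], h[5] = 1.0, 0.4
rx = np.convolve(tx, h)[:tx.size]
rx = rx[cp:cp + n_sub]                     # strip the cyclic prefix
H = np.fft.fft(h, n_sub)                   # per-subcarrier channel response
eq = np.fft.fft(rx) / H                    # one-tap equalization per subcarrier
print(np.allclose(eq, symbols, atol=1e-10))   # True: symbols recovered exactly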
Continuous phase modulation
Continuous phase modulation (CPM) is a modulation technique, which is a continuous phase shift, where the phase of the carrier signal varies over time and avoids abrupt changes between successive symbols. This smooth phase trajectory reduces spectral side lobes.
Reducing spectral side lobes increases the spectral efficiency of CPM and enables it to transmit data within a narrower bandwidth. Notable variants of CPM include minimum shift keying (MSK) and Gaussian minimum shift keying (GMSK), which uses a Gaussian filter to smooth out phase shifts.
Since the underwater environment is highly scattering, it can cause multipath propagation and signal degradation. CPM's continuous-phase feature mitigates these effects and maintains signal integrity. Besides, its high spectral efficiency helps make optimal use of the limited bandwidth available underwater.
Use of vector sensors
Compared to a scalar pressure sensor, such as a hydrophone, which measures the scalar acoustic field component, a vector sensor measures the vector field components such as acoustic particle velocities. Vector sensors can be categorized into inertial and gradient sensors.
Vector sensors have been widely researched over the past few decades. Many vector sensor signal processing algorithms have been designed.
Underwater vector sensor applications have been focused on sonar and target detection. They have also been proposed to be used as underwater multi‐channel communication receivers and equalizers. Other researchers have used arrays of scalar sensors as multi‐channel equalizers and receivers.
Applications
Underwater telephone
The underwater telephone, also known as UQC, AN/WQC-2, or Gertrude, was used by the U.S. Navy in 1945, after different realizations had been demonstrated at sea in Kiel, Germany, in 1935. The terms UQC and AN/WQC-2 follow the nomenclature of the Joint Electronics Type Designation System. The type designation "UQC" stands for General Utility (multi use), Sonar and Underwater Sound and Communications (Receiving/Transmitting, two way). The "W" in WQC stands for Water Surface and Underwater combined. The underwater telephone is used on all crewed submersibles and many Naval surface ships in operation. Voice or an audio tone (Morse code) communicated through the UQC is heterodyned to a high pitch for acoustic transmission through water.
JANUS
In April 2017, NATO's Centre for Maritime Research and Experimentation announced the approval of JANUS, a standardized protocol to transmit digital information underwater using acoustic sound (like modems and fax machines do over telephone lines). Documented in STANAG 4748, it uses 900 Hz to 60 kHz frequencies at distances of up to . It is available for use with military and civilian, NATO and non-NATO devices; it was named after the Roman god of gateways, openings, etc.
The JANUS specification (ANEP-87) provides for a flexible plug-in-based payload scheme. A baseline JANUS packet consists of 64 bits to which further arbitrary data (Cargo) can be appended. This enables multiple different applications such as Emergency location, Underwater AIS (Automatic Identification System), and Chat. An example of an Emergency Position and Status message is the following JSON representation:
{
"ClassUserID": 0,
"ApplicationType": 3,
"Nationality": "PT",
"Latitude": "38.386547",
"Longitude": "-9.055858",
"Depth": "16",
"Speed": "1.400000",
"Heading": "0.000000",
"O2": "17.799999",
"CO2": "5.000000",
"CO": "76.000000",
"H2": "3.500000",
"Pressure": "45.000000",
"Temperature": "21.000000",
"Survivors": "43",
"MobilityFlag": "1",
"ForwardingCapability": "1",
"TxRxFlag": "0",
"ScheduleFlag": "0"
}
This Emergency Position and Status (Class ID 0, Application 3 plug-in) message shows a Portuguese submarine at latitude 38.386547, longitude -9.055858, at a depth of 16 meters. It is moving north at 1.4 meters per second, has 43 survivors on board, and reports the environmental conditions.
Underwater messaging
Commercial hardware products have been designed to enable two-way underwater messaging between scuba divers. These support sending from a list of pre-defined messages from a dive computer using acoustic communication.
Research efforts have also explored the use of smartphones in water-proof cases for underwater communication, using acoustic modem hardware as phone attachments as well as using a software app without any additional hardware. The Android software app, AquaApp, from University of Washington uses the microphones and speakers on existing smartphones and smart watches to enable underwater acoustic communication. It had been tested to send digital messages using smartphones between divers at distances of up to 100 m.
See also
Acoustic communication in aquatic animals
Acoustic communication in fish
Telecommunications
References
External links
DSPComm – underwater acoustic modem manufacturer
uWAVE - the smallest underwater acoustic modem
NetSim UWAN - underwater acoustic network simulation
Telecommunications techniques
Acoustics | Underwater acoustic communication | [
"Physics"
] | 1,809 | [
"Classical mechanics",
"Acoustics"
] |
15,995,078 | https://en.wikipedia.org/wiki/FMRI%20adaptation | Functional magnetic resonance imaging adaptation (FMRIa) is a method of functional magnetic resonance imaging that reads the brain changes occurring in response to long exposure to evocative stimulus. If Stimulus 1 (S1) excites a certain neuronal population, repeated exposure to S1 will result in subsequently attenuated responses. This adaptation may be due to neural fatigue or coupled hemodynamic processes. However, when S1 is followed by a unique stimulus, S2, the response amplitudes should not be attenuated as a fresh sub-population of neurons is excited. Using this technique can allow researchers to determine if the same or unique neuronal groups are involved in processing two stimuli.
Usage
This technique has been used successfully in examination of the visual system, particularly orientation, motion, and face recognition.
See also
Adaptive system
Functional magnetic resonance imaging
Neural adaptation
References
Magnetic resonance imaging | FMRI adaptation | [
"Chemistry"
] | 180 | [
"Nuclear magnetic resonance stubs",
"Nuclear chemistry stubs",
"Nuclear magnetic resonance",
"Magnetic resonance imaging"
] |
15,997,468 | https://en.wikipedia.org/wiki/Polyacrylic%20acid | Poly(acrylic acid) (PAA; trade name Carbomer) is a polymer with the formula (CH2−CHCO2H)n. It is a derivative of acrylic acid (CH2=CHCO2H). In addition to the homopolymers, a variety of copolymers and crosslinked polymers, and partially deprotonated derivatives thereof, are known and of commercial value. In a water solution at neutral pH, PAA is an anionic polymer, i.e., many of the side chains of PAA lose their protons and acquire a negative charge. Partially or wholly deprotonated PAAs are polyelectrolytes, with the ability to absorb and retain water and swell to many times their original volume. These acid–base and water-attracting properties are the basis of many applications.
Synthesis
PAA, like any acrylate polymer, is usually synthesized through a process known as free radical polymerization, though graft polymerization may also be used. Free radical polymerization involves the conversion of monomers, in this case, acrylic acid (CH2=CHCO2H), into a polymer chain through the action of free radicals. The process typically follows these steps:
Initiation: Free radicals are generated by initiators such as potassium persulfate (K2S2O8) or Azobisisobutyronitrile (AIBN). These radicals are highly reactive and can start the polymerization process by reacting with the monomer units.
Propagation: Once the radical reacts with a monomer, it creates a new radical at the end of the growing chain. This new radical can react with additional monomer units, allowing the chain to grow.
Termination: The reaction continues until two radicals recombine, or a radical is transferred to another molecule, terminating the growth of the polymer chain.
Chain transfer and inhibition: Other reactions can also occur, such as chain transfer (where the radical is transferred to a different molecule, creating a new radical) or inhibition (where impurities stop the growth of the chain).
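Under the usual textbook steady-state assumption, these steps lead to simple expressions for the rate of polymerization, Rp = kp[M](f·kd[I]/kt)^(1/2), and the kinetic chain length, v = kp[M]/(2(f·kd·kt·[I])^(1/2)). The rate constants in the Python sketch below are generic illustrative values, not measured constants for acrylic acid or any particular initiator.
import math
# Textbook steady-state free-radical polymerization kinetics.
# Generic illustrative constants, not data for acrylic acid or AIBN/persulfate.
kd = 1e-5     # 1/s, initiator decomposition rate constant
f  = 0.6      # initiator efficiency
kp = 2e3      # L/(mol*s), propagation rate constant
kt = 1e7      # L/(mol*s), termination rate constant
M  = 2.0      # mol/L, monomer concentration
I  = 5e-3     # mol/L, initiator concentration
Rp = kp * M * math.sqrt(f * kd * I / kt)         # rate of polymerization
nu = kp * M / (2 * math.sqrt(f * kd * kt * I))   # kinetic chain length
print(Rp, nu)   # about 2e-4 mol/(L*s) and a chain length of a few thousand units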
Production
The global market was estimated to be worth $3.4 billion in 2022.
Structure and derivatives
Polyacrylic acid is a weak anionic polyelectrolyte, whose degree of ionisation is dependent on solution pH. In its non-ionised form at low pHs, PAA may associate with various non-ionic polymers (such as polyethylene oxide, poly-N-vinyl pyrrolidone, polyacrylamide, and some cellulose ethers) and form hydrogen-bonded interpolymer complexes. In aqueous solutions PAA can also form polycomplexes with oppositely charged polymers such as chitosan, surfactants, and drug molecules (for example, streptomycin).
Physical properties
Dry PAAs are sold as white, fluffy powders.
Derivatives
In the dry powder form of sodium polyacrylate, the positively charged sodium ions are bound to the polyacrylate, however, in aqueous solutions the sodium ions can dissociate. The presence of sodium cations allows the polymer to absorb a high amount of water.
Applications
Absorbent
PAA is widely used in dispersants. Its molecular weight has a significant impact on the rheological properties and dispersion capacity, and hence applications. The dominant application for PAA is as a superabsorbent. About 25% of PAA is used for detergents and dispersants.
Polyacrylic acid and its derivatives (particularly sodium polyacrylate) are used in disposable diapers. Acrylic acid is also the main component of superabsorbent polymers (SAPs), which are cross-linked polyacrylates that can absorb and retain more than 100 times their own weight in liquid. The US Food and Drug Administration authorised the use of SAPs in packaging with indirect food contact.
Cleaning
Detergents often contain copolymers of acrylic acid that assist in sequestering dirt. Cross-linked polyacrylic acid has also been used in the production of household products, including floor cleaners.
PAA may inactivate the antiseptic chlorhexidine gluconate.
Biocompatible materials
The neutralized polyacrylic acid gels are suitable biocompatible matrices for medical applications such as gels for skin care products. PAA films can be deposited on orthopaedic implants to protect them from corrosion. Crosslinked hydrogels of PAA and gelatin have also been used as medical glue.
Paints and cosmetics
Other applications involve paints and cosmetics. They stabilize suspended solid in liquids, prevent emulsions from separating, and control the consistency in flow of cosmetics. Carbomer codes (910, 934, 940, 941, and 934P) are an indication of molecular weight and the specific components of the polymer. For many applications PAAs are used in form of alkali metal or ammonium salts, e.g. sodium polyacrylate.
Emerging applications
Hydrogels derived from PAA have attracted much study for use as bandages and aids for wound healing.
Drilling fluid and metal quenching
A few reports describe the use of PAA as a deflocculant (so-called alkaline polyacrylates) in the oil drilling industry.
It was also reported to be used for metal quenching in metalworking (see Sodium polyacrylate).
References
Acrylate polymers
Cosmetics chemicals
Polyelectrolytes
Polymers | Polyacrylic acid | [
"Chemistry",
"Materials_science"
] | 1,170 | [
"Polymers",
"Polymer chemistry"
] |
7,162,263 | https://en.wikipedia.org/wiki/System-specific%20impulse | System-specific Impulse, Issp is a measure that describes performance of jet propulsion systems. A reference number is introduced, which defines the total impulse, Itot, delivered by the system, divided by the system mass, mPS:
Issp = Itot / mPS
Because of the resulting dimension – delivered impulse per kilogram of system mass mPS – this number is called ‘System-specific Impulse’. In SI units, impulse is measured in newton-seconds (N·s) and Issp in N·s/kg.
The Issp allows a more accurate determination of the propulsive performance of jet propulsion systems than the commonly used Specific Impulse, Isp, which only takes into account the propellant and the thrust engine performance characteristics. Therefore, the Issp permits an objective and comparative performance evaluation of systems of different designs and with different propellants.
The Issp can be derived directly from actual jet propulsion systems by determining the total impulse delivered by the mass of contained propellant, divided by the known total (wet) mass of the propulsion system. This allows a quantitative comparison of, for example, built systems.
In addition, the Issp can be derived analytically, for example for spacecraft propulsion systems, in order to facilitate a preliminary selection of systems (chemical, electrical) for spacecraft missions of given impulse and velocity-increment requirements. A more detailed presentation of derived mathematical formulas for Issp and their applications for spacecraft propulsion is given in the cited references. In 2019, Koppel and others used Issp as a criterion in the selection of electric thrusters.
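A small worked comparison (with assumed, purely illustrative masses and Isp, not data for real hardware) shows how two systems with identical propellant-level performance can differ in Issp once the dry system mass is included:
g0 = 9.80665   # m/s^2, standard gravity
def issp(isp_seconds, m_propellant, m_dry):
    # System-specific impulse: total impulse per kilogram of (wet) system mass.
    total_impulse = m_propellant * isp_seconds * g0     # N*s
    return total_impulse / (m_propellant + m_dry)       # N*s/kg
# Two hypothetical systems with the same propellant Isp of 300 s:
light_tanks = issp(isp_seconds=300, m_propellant=100, m_dry=20)
heavy_tanks = issp(isp_seconds=300, m_propellant=100, m_dry=60)
print(light_tanks, heavy_tanks)   # about 2452 vs 1839 N*s/kg despite equal Isp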
See also
Specific Impulse
References
Spacecraft propulsion
Physical quantities
Classical mechanics | System-specific impulse | [
"Physics",
"Mathematics"
] | 328 | [
"Physical phenomena",
"Physical quantities",
"Quantity",
"Classical mechanics",
"Mechanics",
"Physical properties"
] |
7,162,656 | https://en.wikipedia.org/wiki/Stellar%20triangulation | Stellar triangulation is a method of geodesy and of its subdiscipline space geodesy used to measure Earth's geometric shape. Stars were first used for this purpose by the Finnish astronomer Yrjö Väisälä in 1959, who made astrometric photographs of the night sky at two stations together with a lighted balloon probe between them.
Even this first step showed the potential of the method, as Väisälä got the azimuth between Helsinki and Turku (a distance of 150 km) with an accuracy of 1″. Soon the method was successfully tested by ballistic rockets and for some special satellites.
Adequate computer programs were written for
the astrometric reduction of the photographic plates,
the intersection of the "observation planes" containing the stations and the targets,
and the least-squares adjustment of stellar-terrestrial networks with redundancy.
The advantages of stellar triangulation were the possibility to cross far distances (terrestrial observations are restricted to approx. 30 km, and even in high mountains to 60 km), and the independency of the Earth's gravity field. The results are azimuths between the stations in the stellar-inertial navigation system, despite no direct line of sight.
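The geometric core of the method can be sketched numerically: each pair of simultaneous, star-referenced directions to the target defines a plane containing both stations and the target, and intersecting two such planes (from two target positions) recovers the inter-station direction. The coordinates below are made-up illustrative values in kilometres, not data from any campaign.
import numpy as np
# Sketch of the plane-intersection step with invented geometry.
A = np.array([0.0, 0.0, 0.0])               # station A
B = np.array([150.0, 40.0, 0.3])            # station B (unknown in practice)
targets = [np.array([60.0, 10.0, 30.0]),    # balloon/satellite at epoch 1
           np.array([90.0, 45.0, 25.0])]    # balloon/satellite at epoch 2
def unit(v):
    return v / np.linalg.norm(v)
normals = []
for T in targets:
    d_A, d_B = unit(T - A), unit(T - B)     # directions measured against the stars
    normals.append(np.cross(d_A, d_B))      # normal of the plane through A, B, T
direction_AB = unit(np.cross(normals[0], normals[1]))
print(direction_AB, unit(B - A))            # equal up to sign: the A-B direction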
In 1960, the first appropriate space probe was launched: Project Echo, a 30 m diameter balloon satellite. By then the whole of Western Europe could be linked together geodetically with accuracies 2–10 times better than by classical triangulation.
During the late 1960s, a global project was begun by H.H. Schmid (Switzerland) to connect 45 stations all over the continents, with distances of 3000–5000 km. It was finished in 1974 by precise reduction of some 3000 stellar plates and network adjustment of 46 stations (2 additional ones in Germany and the Pacific, but without the areas of Russia and China). The mean accuracy was between ±5 m (Europe, USA) and 7–10 m (Africa, Antarctica), depending on weather and infrastructure conditions. Combined with Doppler measurements (such as from Transit) the global accuracy was even 3 m. This is more than 20 times better than previously, because the gravity field up to 1974 couldn't be calculated better than 100 meters between distant continents.
The use of stars as a reference system was expanded in the 70s and early 80s for continental networks, but then the laser and electronic distance measurements became better than 2 m and could be carried out automatically. Nowadays some similar techniques are carried out by interferometry with very distant radio quasars (VLBI) instead of optical satellite & star observations. The geodetic connection of radio telescopes is now possible up to mm–cm precision as published periodically by the community. This global project group was founded in 2000 by Harald Schuh (Munich/TU Vienna) and some dozen research projects worldwide, and is now a permanent service of International Union of Geodesy and Geophysics (IUGG) and International Earth Rotation and Reference Systems Service (IERS).
The photographic observations as done in 1959–1985 are considered irrelevant now because of their expense, but they have led to a revival of electro-optical techniques like CCD.
See also
Figure of the Earth
Fundamental station
Triangulation
Trilateration
Satellite geodesy
Satellite geodesy#Optical triangulation
PAGEOS satellite
Satellite laser ranging (SLR)
Stellar parallax for distances to stars
References
A.Berroth, W.Hofmann: Kosmische Geodäsie(Cosmic Geodesy) (356 p.), G.Braun, Karlsruhe 1960
Karl Ledersteger: "Astronomische und Physikalische Geodäsie (Erdmessung)", Handbuch der Vermessungskunde, Wilhelm Jordan, Otto Eggert and Max Kneissl ed., Volume V, (870 S., espec. §§ 2, 5, 13), J.B.Metzler, Stuttgart 1968.
Hellmut Schmid: Das Weltnetz der Satelitentriangulation. Wiss. Mitteilungen ETH Zurich and Journal of Geophysical Research, 1974.
Klaus Schnädelbach et al.: Western European Satellite Triangulation Programme (WEST), 2nd Experimental Computation. Mitteilungen Geodät.Inst. Graz 11/1, Graz 1972
Nothnagel, Schlüter, Seeger: Die Geschichte der geodätischen VLBI in Deutschland, Bonn 2000.
External links
NOAA's geodesy photo library
Geodesy
Astrometry | Stellar triangulation | [
"Astronomy",
"Mathematics"
] | 941 | [
"Applied mathematics",
"Geodesy",
"Astrometry",
"Astronomical sub-disciplines"
] |
7,163,587 | https://en.wikipedia.org/wiki/Anchor%20bolt | Anchor bolts are used to connect structural and non-structural elements to concrete. The connection can be made by a variety of different components: anchor bolts (also named fasteners), steel plates, or stiffeners. Anchor bolts transfer different types of load: tension forces and shear forces.
A connection between structural elements can be represented by steel columns attached to a reinforced concrete foundation. A common case of a non-structural element attached to a structural one is the connection between a facade system and a reinforced concrete wall.
Types
Cast-in-place
The simplest – and strongest – form of anchor bolt is cast-in-place, with its embedded end consisting of a standard hexagonal head bolt and washer, a 90° bend, or some sort of forged or welded flange (see also stud welding). The last are used in concrete-steel composite structures as shear connectors.
Other uses include anchoring machines to poured concrete floors and buildings to their concrete foundations.
Various typically disposable aids, mainly of plastic, are produced to secure and align cast-in-place anchors prior to concrete placement. Moreover, their position must also be coordinated with the reinforcement layout. Different types of cast-in-place anchors might be distinguished:
Lifting inserts: used for lifting operations of plain or prestressed RC beams. The insert can be a threaded rod. See also bolt (climbing).
Anchor channels: used in precast concrete connections. The channel can be a hot-rolled or a cold-formed steel shape in which a T-shape screw is placed in order to transfer the load to the base material.
Headed stud: consist of a steel plate with headed studs welded on (see also threaded rod).
Threaded sleeves: consist of a tube with an internal thread which is anchored back into the concrete.
For all types of cast-in-place anchors, the load-transfer mechanism is mechanical interlock, i.e. the embedded part of the anchor transfers the applied load (axial or shear) to the concrete via bearing pressure at the contact zone. At failure, the bearing pressure can be higher than 10 times the concrete compressive strength if a pure tension force is transferred.
Cast-in-place type anchors are also utilized in masonry applications, placed in wet mortar joints during the laying of brick and cast blocks (CMUs).
Post-installed
Post-installed anchors can be installed in any position of hardened concrete after a drilling operation. A distinction is made according to their principle of operation.
Mechanical expansion anchors
The force-transfer mechanism is based on friction and mechanical interlock guaranteed by expansion forces. They can be further divided into two categories:
torque controlled: the anchor is inserted into the hole and secured by applying a specified torque to the bolt head or nut with a torque wrench. A particular sub-category of this anchor is called wedge type. As shown in the figure, tightening the bolt results in a wedge being driven up against a sleeve, which expands it and causes it to compress against the material it is being fastened to.
displacement controlled: usually consist of an expansion sleeve and a conical expansion plug, whereby the sleeve is internally threaded to accept a threaded element.
Undercut anchors
The force-transfer mechanism is based on mechanical interlock. A special drilling operation creates a contact surface between the anchor head and the hole's wall where bearing stresses are exchanged.
Bonded anchors
Bonded anchors are also referred as adhesive anchors or chemical anchors. The anchoring material is an adhesive (also called mortar) usually consisting of epoxy, polyester, or vinylester resins.
In bonded anchors, the force-transfer mechanism is based on bond stresses provided by binding organic materials. Both ribbed bars and threaded rods can be used and a change of the local bond mechanism can be appreciated experimentally. In ribbed bars the resistance is prevalently due to shear behavior of concrete between the ribs whereas for threaded rods friction prevails (see also anchorage in reinforced concrete).
The performance of this anchor type in terms of load-bearing capacity, especially under tension loads, is strongly related to the cleaning condition of the hole. Experimental results showed that the reduction in capacity can be up to 60%. The same also applies to the moisture condition of the concrete: for wet concrete the reduction is about 20% when polyester resin is used. Other issues are high-temperature behavior and creep response.
Screw anchors
The force-transfer mechanism of the screw anchor is based on concentrated pressure exchange between the screw and concrete through the pitches.
Plastic anchors
Their force-transfer mechanism is similar to that of mechanical expansion anchors. A torque is applied to a screw which is inserted in a plastic sleeve. As the torque is applied, the sleeve expands against the sides of the hole, providing the expansion force.
Tapcon screws
Tapcon screws are a popular anchor whose name stands for self-tapping (self-threading) concrete screw. Larger-diameter screws are referred to as LDTs. This type of fastener requires a hole pre-drilled with a Tapcon drill bit, and is then screwed into the hole using a standard hex or Phillips bit. These screws are often blue, white, or stainless. They are also available in versions for marine or high-stress applications.
Powder-actuated anchors
They act by transferring the forces via mechanical interlock. This fastening technology is used in steel-to-steel connections, for instance to connect cold-formed profiles. A fastener is driven into the base material with a powder-actuated tool; the driving energy is usually provided by firing a combustible propellant in powder form. The fastener's insertion causes plastic deformation of the base material, which accommodates the fastener's head, where the force transfer takes place.
Mechanical behavior
Modes of failure in tension
Anchors can fail in different ways when loaded in tension:
Steel failure: the weak part of the connection is represented by the rod. The failure corresponds to the tensile break-out of steel as in case of tensile testing. In this case, concrete base material might be undamaged.
Pull-out: the anchor is pulled out from the drilled hole partially damaging the surrounding concrete. When the concrete is damaged the failure is also indicated as pull-through.
Concrete cone: after reaching the load-bearing capacity a cone shape is formed. The failure is governed by crack growth in concrete. This kind of failure is typical in pull-out test.
Splitting failure: failure is characterized by a splitting crack which divides the base material into two parts. This kind of failure occurs when the dimensions of the concrete component are limited or the anchor is installed close to an edge.
Blow-out failure: failure is characterized by the lateral spalling of concrete in the proximity of the anchor's head. This kind of failure occurs for anchors (prevalently cast-in-place) installed near the edge of the concrete element.
In design verification under ultimate limit state, codes prescribe to verify all the possible failure mechanisms.
Modes of failure in shear
Anchors can fail in different ways when loaded in shear:
Steel failure: the rod reaches its yield capacity, and rupture occurs after large deformations have developed.
Concrete edge: a semi-conical fracture surface develops, originating at the bearing point and extending to the free surface. This type of failure occurs for anchors in the proximity of the edge of the concrete member.
Pry-out: the failure is characterized by a semi-conical fracture surface. The pry-out mechanism for cast-in anchors usually occurs with very short, stocky studs; the studs are typically so short and stiff that under a direct shear load they bend, simultaneously crushing the concrete in front of the stud and breaking out a crater of concrete behind it.
In design verification under ultimate limit state, codes prescribe to verify all the possible failure mechanisms.
Combined tension/shear
When tension and shear loads are applied to an anchor simultaneously, failure occurs earlier (at a lower load-bearing capacity) than in the uncoupled cases. Current design codes commonly assume a linear interaction domain, as illustrated in the sketch below.
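As an illustration of such a linear interaction domain, the following sketch checks a combined tension/shear demand against uncoupled design resistances. The resistance and load values, and the specific linear form N_Ed/N_Rd + V_Ed/V_Rd ≤ 1, are illustrative assumptions rather than values or clauses from any particular design code.

```python
def linear_interaction_utilization(n_ed, v_ed, n_rd, v_rd):
    """Utilization of an anchor under combined tension and shear,
    assuming a linear interaction domain: N_Ed/N_Rd + V_Ed/V_Rd <= 1."""
    return n_ed / n_rd + v_ed / v_rd

# Hypothetical uncoupled design resistances and applied loads (kN).
n_rd, v_rd = 20.0, 15.0   # tension / shear resistance
n_ed, v_ed = 8.0, 6.0     # simultaneously applied tension / shear

u = linear_interaction_utilization(n_ed, v_ed, n_rd, v_rd)
print(f"utilization = {u:.2f} -> {'verified' if u <= 1.0 else 'not verified'}")
```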
Group of anchors
In order to increase the load-carrying capacity, anchors are assembled in groups; this also allows a bending-moment-resisting connection to be arranged. For tension and shear loads, the mechanical behavior is markedly influenced by (i) the spacing between the anchors and (ii) possible differences in the applied forces.
Service load behavior
Under service loads (tension and shear), the anchor's displacement must be limited. The anchor performance (load-carrying capacity and characteristic displacements) under different loading conditions is assessed experimentally, and an official document is then produced by a technical assessment body. In the design phase, the displacement occurring under the characteristic actions should not be larger than the admissible displacement reported in the technical document.
Seismic load behavior
Under seismic loads, an anchor may simultaneously be (i) installed in a crack and (ii) subjected to inertia loads proportional to both the mass and the acceleration of the element (secondary structure) attached to the base material (primary structure). The load conditions in this case can be summarized as follows:
Pulsating axial load: force aligned with the anchor's axis, positive in the pull-out condition and zero when pushing in.
Reverse shear load (also named "alternate shear"): force perpendicular to the anchor's axis, positive and negative depending on an arbitrary sign convention.
Cyclic crack (also named "crack movement"): the reinforced-concrete primary structure undergoes severe damage (i.e. cracking); the most unfavorable case for anchor performance is when the crack plane contains the anchor's axis and the anchor is loaded by a positive axial force (constant during the crack cycles).
Exceptional loads behavior
Exceptional loads differ from ordinary static loads in their rise time, and impact loading involves high displacement rates. For steel-to-concrete connections, examples include the collision of a vehicle with barriers connected to a concrete base, and explosions. Apart from these extraordinary loads, structural connections are subjected to seismic actions, which rigorously have to be treated with a dynamic approach. For instance, a seismic pull-out action on an anchor can have a rise time of about 0.03 seconds, whereas in a quasi-static test a time interval of 100 seconds may be assumed to reach the peak load. Regarding the concrete base failure mode, concrete cone failure loads increase at elevated loading rates with respect to the static case.
Designs
See also
Well nut
References
Structural connectors
Threaded fasteners
Wall anchors | Anchor bolt | [
"Engineering"
] | 2,148 | [
"Structural engineering",
"Structural connectors"
] |
7,167,651 | https://en.wikipedia.org/wiki/Pneumocystis%20jirovecii | Pneumocystis jirovecii (previously P. carinii) is a yeast-like fungus of the genus Pneumocystis. The causative organism of Pneumocystis pneumonia, it is an important human pathogen, particularly among immunocompromised hosts. Prior to its discovery as a human-specific pathogen, P. jirovecii was known as P. carinii.
Lifecycle
The complete lifecycles of any of the species of Pneumocystis are not known, but presumably all resemble the others in the genus. The terminology follows zoological terms, rather than mycological terms, reflecting the initial misdetermination as a protozoan parasite. It is an extracellular fungus. All stages are found in lungs and because they cannot be cultured ex vivo, direct observation of living Pneumocystis is difficult. The trophozoite stage is thought to be equivalent to the so-called vegetative state of other species (such as Schizosaccharomyces pombe), which like Pneumocystis, belong to the Taphrinomycotina branch of the fungal kingdom. The trophozoite stage is single-celled and appears amoeboid (multilobed) and closely associated with host cells. Globular cysts eventually form that have a thicker wall. Within these ascus-like cysts, eight spores form, which are released through rupture of the cyst wall. The cysts often collapse, forming crescent-shaped bodies visible in stained tissue. Whether meiosis takes place within the cysts, or what the genetic status is of the various cell types, is not known for certain.
Homothallism
The lifecycle of P. jirovecii is thought to include both asexual and sexual phases. Asexual multiplication of haploid cells likely occurs by binary fission. The mode of sexual reproduction appears to be primary homothallism, a form of self-fertilization. The sexual phase takes place in the host's lungs. This phase is presumed to involve formation of a diploid zygote, followed by meiosis, and then production of an ascus containing the products of meiosis, eight haploid ascospores. The ascospores may be disseminated by airborne transmission to new hosts.
Medical relevance
Pneumocystis pneumonia is an important disease of immunocompromised humans, particularly patients with HIV, but also patients with an immune system that is severely suppressed for other reasons, for example, following a bone marrow transplant. In humans with a normal immune system, it is an extremely common silent infection.
The organism is identified by methenamine silver staining of lung tissue. It proliferates in association with type I and type II pneumocytes and damages the alveolar epithelium, which can cause death by asphyxiation. Fluid leaks into the alveoli, producing an exudate with a honeycomb/cotton-candy appearance on hematoxylin and eosin-stained slides. The drugs of choice are trimethoprim/sulfamethoxazole, pentamidine, or dapsone. In HIV patients, most cases occur when the CD4 count is below 200 cells per microliter.
Nomenclature
At first, the name Pneumocystis carinii was applied to the organisms found in both rats and humans, as the parasite was not yet known to be host-specific. In 1976, the name "Pneumocystis jiroveci" was proposed for the first time, to distinguish the organism found in humans from variants of Pneumocystis in other animals. The organism was named thus in honor of Czech parasitologist Otto Jirovec, who described Pneumocystis pneumonia in humans in 1952. After DNA analysis showed significant differences in the human variant, the proposal was made again in 1999 and has come into common use.
The name was spelled according to the International Code of Zoological Nomenclature, since the organism was believed to be a protozoan. After it became clear that it was a fungus, the name was changed to Pneumocystis jirovecii, according to the International Code of Nomenclature for algae, fungi, and plants (ICNafp), which requires such names be spelled with double i (ii).
Both spellings are commonly used, but according to the ICNafp, P. jirovecii is correct. A change in the ICNafp now recognizes the validity of the 1976 publication, making the 1999 proposal redundant, and cites Pneumocystis and P. jiroveci as examples of the change in ICN Article 45, Ex 7. The name P. jiroveci is typified (both lectotypified and epitypified) by samples from human autopsies dating from the 1960s.
The term PCP, which was widely used by practitioners and patients, has been retained for convenience, with the rationale that it now stands for the more general Pneumocystis pneumonia rather than Pneumocystis carinii pneumonia.
The name P. carinii is incorrect for the human variant, but still describes the species found in rats, and that name is typified by an isolate from rats.
Pneumocystis genome
Pneumocystis species cannot be grown in culture, so the availability of the human disease-causing agent, P. jirovecii, is limited. Hence, investigation of the whole genome of a Pneumocystis is largely based upon true P. carinii available from experimental rats, which can be maintained with infections. Genetic material of other species, such as P. jirovecii, can be compared to the genome of P. carinii.
The genome of P. jirovecii has been sequenced from a bronchoalveolar lavage sample. The genome is small, low in G+C content, and lacks most amino-acid biosynthesis enzymes.
History
The earliest report of this genus appears to have been that of Carlos Chagas in 1909, who discovered it in experimental animals, but confused it with part of the lifecycle of Trypanosoma cruzi (causal agent of Chagas disease) and later called both organisms Schizotrypanum cruzi, a form of trypanosome infecting humans. The rediscovery of Pneumocystis cysts was reported by Antonio Carini in 1910, also in Brazil. The genus was again discovered in 1912 by Delanoë and Delanoë, this time at the Pasteur Institute in Paris, who found it in rats and proposed the genus and species name Pneumocystis carinii after Carini.
Pneumocystis was redescribed as a human pathogen in 1942 by two Dutch investigators, van der Meer and Brug, who found it in three new cases: a 3-month-old infant with congenital heart disease and in two of 104 autopsy cases – a 4-month-old infant and a 21-year-old adult. There being only one described species in the genus, they considered the human parasite to be P. carinii. Nine years later (1951), Dr. Josef Vanek at Charles University in Prague, Czechoslovakia, showed in a study of lung sections from 16 children that the organism labelled "P. carinii" was the causative agent of pneumonia in these children. The following year, Czech scientist Otto Jírovec reported "P. carinii" as the cause of interstitial pneumonia in neonates. Following the realization that Pneumocystis from humans could not infect experimental animals such as rats, and that the rat form of Pneumocystis differed physiologically and had different antigenic properties, Frenkel was the first to recognize the human pathogen as a distinct species. He named it "Pneumocystis jiroveci" (corrected to P. jirovecii - see nomenclature above). Controversy existed over the relabeling of P. carinii in humans as P. jirovecii, which is why both names still appear in publications. However, only the name P. jirovecii is used exclusively for the human pathogen, whereas the name P. carinii has had a broader application to many species. Frenkel and those before him believed that all Pneumocystis were protozoans, but soon afterwards evidence began accumulating that Pneumocystis was a fungal genus. Recent studies show it to be an unusual, in some ways a primitive genus of Ascomycota, related to a group of yeasts. Every tested primate, including humans, appears to have its own type of Pneumocystis that is incapable of cross-infecting other host species and has co-evolved with each species. Currently, only five species have been formally named: P. jirovecii from humans, P. carinii as originally named from rats, P. murina from mice, P. wakefieldiae also from rats, and P. oryctolagi from rabbits.
Historical and even recent reports of P. carinii from humans are based upon older classifications (still used by many, or those still debating the recognition of distinct species in the genus Pneumocystis) which does not mean that the true P. carinii from rats actually infects humans. In an intermediate classification system, the various taxa in different mammals have been called formae speciales or forms. For example, the human "form" was called Pneumocystis carinii f. [or f. sp.] hominis, while the original rat infecting form was called Pneumocystis carinii f. [or f. sp.] carinii. This terminology is still used by some researchers. The species of Pneumocystis originally seen by Chagas have not yet been named as distinct species. Many other undescribed species presumably exist and those that have been detected in many mammals are only known from molecular sample detection from lung tissue or fluids, rather than by direct physical observation. Currently, they are cryptic taxa.
References
External links
Ascomycota
Parasitic fungi
Fungal pathogens of humans
Fungus species | Pneumocystis jirovecii | [
"Biology"
] | 2,176 | [
"Fungi",
"Fungus species"
] |
7,168,569 | https://en.wikipedia.org/wiki/Scanning%20acoustic%20microscope | A scanning acoustic microscope (SAM) is a device which uses focused sound to investigate, measure, or image an object (a process called scanning acoustic tomography). It is commonly used in failure analysis and non-destructive evaluation. It also has applications in biological and medical research. The semiconductor industry has found the SAM useful in detecting voids, cracks, and delaminations within microelectronic packages.
History
The first scanning acoustic microscope (SAM), with a 50 MHz ultrasonic lens, was developed in 1974 by R. A. Lemons and C. F. Quate at the Microwave Laboratory of Stanford University. A few years later, in 1980, the first high-resolution (with a frequency up to 500 MHz) through-transmission SAM was built by R.Gr. Maev and his students at his Laboratory of Biophysical Introscopy of the Russian Academy of Sciences. The first commercial SAM, the ELSAM, with a broad frequency range from 100 MHz up to 1.8 GHz, was built at Ernst Leitz GmbH by the group led by Martin Hoppe and his consultants Abdullah Atalar (Stanford University), Roman Maev (Russian Academy of Sciences) and Andrew Briggs (Oxford University).
Since then, many improvements to such systems have been made to enhance resolution and accuracy. Most of them are described in detail in the monograph Advances in Acoustic Microscopy (ed. Andrew Briggs, 1992, Oxford University Press) and in the monograph by Roman Maev, Acoustic Microscopy: Fundamentals and Applications (Wiley-VCH, 291 pages, August 2008).
C-SAM versus other Techniques
There are many methods for the failure analysis of damage in microelectronic packages, including laser decapsulation, wet-etch decapsulation, optical microscopy, SEM microscopy, and X-ray imaging. The problem with most of these methods is that they are destructive, which means the damage may be introduced during sample preparation itself. Most of these destructive methods also require time-consuming and complicated sample preparation. In most cases it is therefore important to study damage with a non-destructive technique. Unlike other non-destructive techniques such as X-ray imaging, C-SAM is highly sensitive to the elastic properties of the materials the sound travels through. For example, C-SAM is highly sensitive to the presence of delaminations and air gaps at sub-micron thicknesses, so it is particularly useful for the inspection of small, complex devices.
Physics Principle
The technique makes use of the high penetration depth of acoustic waves to image the internal structure of the specimen. In scanning acoustic microscopy, either reflected or transmitted acoustic waves are processed to analyze the internal features. When the acoustic wave propagates through the sample, it may be scattered, absorbed, or reflected at media interfaces. The technique thus registers the echo generated by the acoustic impedance (Z) contrast between two materials.
Scanning acoustic microscopy works by directing focused sound from a transducer at a small point on a target object. Sound hitting the object is either scattered, absorbed, reflected (scattered at 180°) or transmitted (scattered at 0°). It is possible to detect the scattered pulses travelling in a particular direction. A detected pulse informs of the presence of a boundary or object. The "time of flight" of the pulse is defined as the time taken for it to be emitted by an acoustic source, scattered by an object and received by the detector, which is usually coincident with the source. The time of flight can be used to determine the distance of the inhomogeneity from the source given knowledge of the speed through the medium.
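As a minimal numerical sketch of the time-of-flight relation: in pulse-echo operation the pulse travels to the reflector and back, so the one-way depth is half the product of the sound speed and the time of flight. The sound speed and timing below are illustrative assumptions, not values from the text.

```python
def depth_from_time_of_flight(tof_s, speed_m_per_s):
    """Depth of a reflector in pulse-echo mode: the pulse travels down
    and back, so the one-way distance is v * t / 2."""
    return speed_m_per_s * tof_s / 2.0

# Assumed values: 6000 m/s longitudinal sound speed in the solid,
# 40 ns measured round-trip time of flight.
speed = 6000.0   # m/s
tof = 40e-9      # s
print(f"reflector depth ~ {depth_from_time_of_flight(tof, speed) * 1e6:.0f} um")
```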
Based on the measurement, a value is assigned to the location investigated. The transducer (or object) is moved slightly and then insonified again. This process is repeated in a systematic pattern until the entire region of interest has been investigated. Often the values for each point are assembled into an image of the object. The contrast seen in the image is based either on the object's geometry or material composition. The resolution of the image is limited either by the physical scanning resolution or the width of the sound beam (which in turn is determined by the frequency of the sound).
Methodology
Different types of analysis modes are available in high-definition SAM. The main three modes are A-scans, B-scans, and C-scans. Each one provides different information about the integrity of the sample’s structure.
The A-scan is the amplitude of the echo signal over ToF. The transducer is mounted on the z-axis of the SAM. It can be focused to a specific target layer located in a hard-to-access area by changing the z-position with respect to the sample under testing that is mechanically fixed.
The B-scan provides a vertical cross section of the sample with visualization of the depth information. It is a very good feature when it comes to damage detection in the cross section.
The C-scan is a commonly used scanning mode, which gives 2D images (slices) of a target layer at a specific depth in the samples; multiple equidistant layers are feasible through the X-scan mode.
Pulse-reflection method
2D or 3D images of the internal structure become available by means of the pulse-reflection method, in which the impedance mismatch between two materials leads to a reflection of the ultrasonic beam. Phase inversion of the reflected signal can allow delamination (acoustic impedance almost zero) to be discriminated from inclusions and particles, but not from air bubbles, which show the same impedance behavior as delaminations.
The higher the impedance mismatch at the interface, the higher the intensity of the reflected signal (more brightness in the 2D image), which is measured by the echo amplitude. In the case of an interface with air (Z = 0), total reflection of the ultrasonic wave occurs; therefore, SAM is highly sensitive to any entrapped air in the sample under testing.
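The dependence of echo intensity on the impedance contrast can be made quantitative with the standard normal-incidence reflection coefficient r = (Z2 − Z1)/(Z2 + Z1), whose square is the reflected intensity fraction; this relation is textbook acoustics rather than something stated above, and the impedance values below are rough, assumed figures.

```python
def intensity_reflection(z1, z2):
    """Fraction of incident intensity reflected at a flat interface between
    media of acoustic impedances z1 and z2 (normal incidence)."""
    r = (z2 - z1) / (z2 + z1)   # amplitude reflection coefficient
    return r * r

# Rough, assumed impedances in MRayl.
water, silicon, air = 1.5, 19.0, 0.0004
print(f"water/silicon interface: {intensity_reflection(water, silicon):.2f}")
print(f"water/air interface:     {intensity_reflection(water, air):.4f}")  # ~1: total reflection
```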
In order to enhance the insertion of the acoustic wave into the specimen both the acoustic transducer and the sample are immersed in a coupling media, typically water, to avoid the high reflection at air interfaces.
In the pulse-wave mode, a lens having good focusing properties on an axis is used to focus the ultrasonic waves onto a spot on the specimen and to receive the reflected waves back from the spot, typically in less than 100 ns. The acoustic beam can be focused to a sufficiently small spot at a depth up to 2–3 mm to resolve typical interlaminar cracks and other critical crack geometries. The received echoes are analysed and stored for each point to build up an image of the entire scanned area. The reflected signal is monitored and sent to a synchronous display to develop a complete image, as in a scanning electron microscope.
Applications
- Fast production control
- Standards: IPC-A-610, MIL-STD-883, J-STD-035, ESA, etc.
- Parts sorting
- Inspection of solder pads, flip-chip, underfill, die-attach
- Sealing joints
- Brazed and welded joints
- Qualification and fast selection of glues and adhesives, comparative analyses of aging, etc.
- Inclusions, heterogeneities, porosities, cracks in material
Medicine and biology
SAM can provide data on the elasticity of cells and tissues, which can give useful information on the physical forces holding structures in a particular shape and the mechanics of structures such as the cytoskeleton. These studies are particularly valuable in investigating processes such as cell motility.
Some work has also been performed to assess the penetration depth of particles injected into skin using needle-free injection.
Another promising direction, initiated by several groups, is the design and construction of portable hand-held SAMs for subsurface diagnostics of soft and hard tissues; this direction is currently being commercialized for clinical and cosmetology practice.
See also
Acoustic microscopy
References
Acoustics
Microscopes
American inventions | Scanning acoustic microscope | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 1,639 | [
"Classical mechanics",
"Measuring instruments",
"Acoustics",
"Microscopes",
"Microscopy"
] |
9,315,395 | https://en.wikipedia.org/wiki/Parametricity | In programming language theory, parametricity is an abstract uniformity property enjoyed by parametrically polymorphic functions, which captures the intuition that all instances of a polymorphic function act the same way.
Idea
Consider this example, based on a set X and the type T(X) = [X → X] of functions from X to itself. The higher-order function twiceX : T(X) → T(X) given by twiceX(f) = f ∘ f is intuitively independent of the set X. The family of all such functions twiceX, parametrized by sets X, is called a "parametrically polymorphic function". We simply write twice for the entire family of these functions and write its type as ∀X. T(X) → T(X). The individual functions twiceX are called the components or instances of the polymorphic function. Notice that all the component functions twiceX act "the same way" because they are given by the same rule. Other families of functions obtained by picking one arbitrary function from each T(X) → T(X) would not have such uniformity. They are called "ad hoc polymorphic functions". Parametricity is the abstract property enjoyed by the uniformly acting families such as twice, which distinguishes them from ad hoc families. With an adequate formalization of parametricity, it is possible to prove that the parametrically polymorphic functions of type ∀X. T(X) → T(X) are in one-to-one correspondence with the natural numbers. The function corresponding to the natural number n is given by the rule f ↦ fⁿ, i.e., the polymorphic Church numeral for n. In contrast, the collection of all ad hoc families would be too large to be a set.
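The following sketch illustrates the idea in Python. Python's type hints can describe, but do not enforce, the polymorphic type, so this only illustrates the uniform behavior of twice and of the Church numerals, not parametricity itself; all names are chosen for this example.

```python
from typing import Callable, TypeVar

X = TypeVar("X")

def twice(f: Callable[[X], X]) -> Callable[[X], X]:
    """The polymorphic function twice: f -> f . f, given by the same rule at every type X."""
    return lambda x: f(f(x))

def church(n: int) -> Callable[[Callable[[X], X]], Callable[[X], X]]:
    """The polymorphic Church numeral for n: the rule f -> f^n."""
    def apply_n(f: Callable[[X], X]) -> Callable[[X], X]:
        def go(x: X) -> X:
            for _ in range(n):
                x = f(x)
            return x
        return go
    return apply_n

# twice acts "the same way" at every type: here at int and at str.
print(twice(lambda k: k + 1)(0))        # 2
print(twice(lambda s: s + "!")("hi"))   # hi!!
print(church(2)(lambda k: k + 1)(0))    # 2 -- twice coincides with the Church numeral for 2
```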
History
The parametricity theorem was originally stated by John C. Reynolds, who called it the abstraction theorem. In his paper "Theorems for free!", Philip Wadler described an application of parametricity to derive theorems about parametrically polymorphic functions based on their types.
Programming language implementation
Parametricity is the basis for many program transformations implemented in compilers for the Haskell programming language. These transformations were traditionally thought to be correct in Haskell because of Haskell's non-strict semantics. Despite being a lazy programming language, Haskell does support certain primitive operations—such as the operator seq—that enable so-called "selective strictness", allowing the programmer to force the evaluation of certain expressions. In their paper "Free theorems in the presence of seq", Patricia Johann and Janis Voigtlaender showed that because of the presence of these operations, the general parametricity theorem does not hold for Haskell programs; thus, these transformations are unsound in general.
Dependent types
See also
Parametric polymorphism
Non-strict programming language
References
External links
Wadler: Parametricity
Programming language topics
Type theory
Polymorphism (computer science) | Parametricity | [
"Mathematics",
"Engineering"
] | 608 | [
"Polymorphism (computer science)",
"Mathematical structures",
"Mathematical logic",
"Mathematical objects",
"Type theory",
"Software engineering",
"Programming language topics"
] |
4,124,553 | https://en.wikipedia.org/wiki/Bruno%20Grollo | Bruno Gordano Grollo (born 1942, Melbourne, Victoria) is an Australian businessman, property developer, former Director of Grocon and is noted for his controversy surrounding the Swanston Street Wall incident on 29 March 2013.
Bruno is the son of Luigi Grollo, who founded Grocon, one of Australia’s largest construction companies, in 1948 after immigrating to Australia from Italy. Bruno’s role in his company remains despite handing the title of chief executive and chairman to his son, Daniel Grollo in 1999.
Following public disputes with Infrastructure NSW in 2020, Grocon announced that it and 86 of its subsidiaries have entered Voluntary Administration.
Early life
Bruno Grollo was born in Melbourne in 1942 and is the son of accountant Emma Girardi (1913-1986) and builder Luigi Arturo Grollo (1909-1994). His grandfather, Giovanni Grollo, was a farmer.
Luigi Grollo emigrated to Australia at 18 years old, due to his adolescent life being rife with war, drought, storms and the death of his mother at 52 years old. He said of the experience growing up in Italy, ‘The following year, 1928, I saw that things were still going bad there. There was another storm that carried off everything. It left only the soles of our feet! Here were some new debts to pay off.’
Luigi Grollo and his family left their hometown of Arcade, Treviso, Italy after it became a World War I battleground and was no longer habitable. At 18 years old, with his older brother sponsoring him, he boarded the passenger ship named the Principe d’Udine and arrived in Melbourne on 24 July 1928 to start a new life in Australia. His cousin Carlo Zanatta was awaiting his arrival but did not recognise Luigi as they had not been together since he was a young boy. Luigi said of Carlo, ‘He was a good man to me. Zanatta took me to a boarding house in Russell Street, Melbourne. There we stayed all one day and one night. The next morning we left for Healesville to go to work.’
In 1938, Luigi settled in Carlton and began concreting work, whilst building his construction business, formerly known as L Grollo & Sons, on the weekends, while wife, Emma, helped with bookkeeping and accounts. Luigi’s one-man company began with residential paths, gutters, fireplace foundations and swimming pools before rapidly expanding in the 1950s to become the Grollo Group, transitioning to constructing multiple high-rises in Melbourne.
Bruno had a substantial role in his father’s company whilst growing up; he and his brother would help out, gaining trade experience whilst still at school. He had minimal formal education growing up, recalling his attendance as a ‘series of Catholic schools’ before beginning his career as a labourer. In 1958, at 15 years old, Bruno left school and began his career in construction when he joined his father’s company, of, at the time almost 130 employees. His brother, Rino Grollo, soon after joined the company in 1965. In 1968, after suffering a heart-attack, the patriarch and Director of Grocon, Luigi Grollo, retired and left sons, Bruno and Rino as co-Directors of his company Grocon. Following the stressful period after their mother’s death in 2001, Bruno and Rino divided the company and its assets into two. Bruno headed Grocon Constructions and multiple building assets and in 2003 made his two sons, Adam and Daniel, joint managing directors.
Controversy
Bruno Grollo has been involved in several media controversies concerning himself and his company, Grocon.
On Trial
In 1997, Bruno Grollo and his co-accused, John William Flanagan and Robert Charles Howard, were acquitted of conspiracy charges. They were accused of bribing a Federal Police officer, Superintendent Lloyd Farrell, and of conspiring to pervert the course of justice. The charges arose from a taxation investigation in which it was alleged that Grollo had failed to declare $59 million in the process of building the Rialto Towers. Recorded as one of the longest trials in Victorian history, running for 13 months, the investigation into the taxation affairs of the Grollo Group ended on 26 June 1997 with a not-guilty verdict on all charges for all three men: Grollo, Flanagan and Howard.
Swanston Street collapse
On 28 March 2013, during wind gusts of up to 102 kilometres per hour (63 mph), a wall at a Grocon construction site on Swanston Street, Melbourne collapsed, killing three passing pedestrians: Bridget Jones, Alexander Jones and Marie-Faith Fiawoo. The incident, in which promotional hoarding had been incorrectly fastened to a Grocon brick wall, resulted in a court case in which Grocon Victoria Street Pty Ltd pleaded guilty to a charge of failing to ensure a safe workplace. Grollo stated of the incident, 'I personally, along with all of the directors and employees of Grocon, reiterate our deep regret at the tragic and untimely loss'. The prosecution brought by WorkSafe Victoria, which concluded in 2014, resulted in a conviction and a $250,000 fine for Grollo's company, Grocon.
Grocon Constructions
As the new co-director of Grocon, Bruno and his company were involved in many of the projects that created Melbourne's skyline. His projects included the Rialto Towers, the Hyatt Hotel and, in 2006, the Eureka Tower, which was one of the world's tallest residential towers at the time. Continuing the expansion into Sydney with the Governor Phillip Tower, the Macquarie Towers and 1 Bligh Street, the two brothers led Australia's construction industry to new heights.
Grollo Tower
The Grollo Tower proposal was a $1.7 billion, 500m skyscraper for the Melbourne Docklands, proposed by Bruno as a gift to Melburnians in 1995, but also partially funded by the Victorian public. Bruno stated of the tower, 'It would be a golden building for a golden city for the golden times to come ... it has to put the city on the world map’ . His ambitious ideals underlined many aspects of the company, Grollo stated he wanted, ‘To do something for Melbourne that did what the pyramids did for Egypt, or the Colosseum did for Rome, or the Opera House and Harbour Bridge did for Sydney'. The Grollo Tower, although never coming to fruition, would have been the tallest in the world at that time. The proposal was reviewed again in 2003 for construction to begin in Dubai, commissioned by The Grollo Corporation and Emaar Properties, the largest development company in the Arab Emirates. The $3 billion deal was proposed as an exact replica of the original Grollo Tower, however ultimately the project was cancelled and Bruno’s ambitious skyscraper was never built.
Cyclone Tracy restoration
On Christmas Day in 1974, Cyclone Tracy destroyed more than 70 percent of Darwin’s buildings, including 80 percent of its houses; this led to the Northern Territory Government signing a contract with the Grollo Group to help with restorations. Both Rino and Bruno were involved in the restoration of the cyclone-torn city, building 400 cyclone-proof houses with various designs for the government. This contract substantially grew their business and by the 1980s, Grocon had a total workforce of over 1,000 employees.
Voluntary administration
In 2020, following public disputes with Infrastructure NSW, Grocon announced that it and 86 of its subsidiaries had been placed in voluntary administration. The Grocon predicament began in November 2020 when Daniel Grollo experienced troubles with the latest Grocon projects, in Barangaroo, Sydney and inner-city Melbourne.
In January 2018, Grocon was awarded construction rights for a project in Central Barangaroo, Sydney as a deal with Aqualand and Scentre Group. In 2019, during a court battle with Dexus over a $28 million lease claim they put two subsidiaries into voluntary administration. In 2020, during the COVID-19 pandemic, Grocon's only Melbourne based project consisting of a $111 million office development stopped construction, with subcontractors, employees and creditors said to be owed more than $100 million.
Grocon is suing the Government of New South Wales, claiming they lost $270 million during the sale of the Barangaroo Central project to Aqualand for $73 million in 2020, and it is due to be seen in the Supreme Court of New South Wales in 2022.
Personal life
Bruno Grollo married Dina Bettiol in 1965 and they had three children together: Daniel, Leanna and Adam. They were married for 26 years before Dina suffered a stroke which left her severely paralysed until her death, aged 58, in December 2001; Bruno keeps a room in her honour at his house.
On 14 February 2004, Grollo was remarried to Pierina Biondo at St Patrick's Cathedral, Melbourne.
In 2014, he revealed in an interview with Melbourne journalist Ruth Ostrow his ongoing struggles with leukemia, melanoma and prostate cancer. Grollo stated, 'My biggest goal now is staying alive. I'm trying to live long enough to see the success of gene, nano and stem-cell therapies which will keep us alive.' He now employs a professional team at his Melbourne home in Thornbury, 'Casa Del Matto' (which translates to 'House of the Madman'), to research products on the market and new science on anti-ageing and longevity. Grollo stated, 'This is cutting-edge biology and those young and healthy enough to be around will be able to live indefinitely.' Grollo takes up to 100 tablets per day, exercises regularly and every day hangs upside down on a machine with a backwards tilt to increase longevity.
Since retiring from the construction industry ('buildings are hard work, they're stressful, they are draining. They're hard to put up. I'd had enough. I got out.'), Bruno has found a passion for meditation and Maharishi yoga and has since invested $3 million into a transcendental meditation college in Watsonia. He has stated: 'The Maharishi said consciousness is everything. It's the closest thing to what God might be, your consciousness, mine, the dog, the cat, the flowers, the trees… transcendental meditation was the closest thing to euphoria and youth I've ever discovered.'
In 1991, Grollo was appointed an Officer of the Order of Australia for service to building and construction and to the community.
Net worth
In 2006, Grollo was listed in Forbes top 40 richest people in Australia and New Zealand. Bruno Grollo and family were listed on the Financial Review Rich List 2018 with an assessed net worth of 702 million. Bruno Grollo and family did not appear on the 2019 Rich List, although Rino Grollo and his family were independently assessed with a net worth of 583 million.
Bruno, Rino, and/or their father, Luigi (whilst living), are one of thirteen living Australians who have appeared on every Financial Review Rich List, since it was first published in 1984.
{| class="wikitable"
! rowspan=2 | Year
! colspan=2 width=40% | Financial Review Rich List
! colspan=2 width=40% | Forbes
|-
! Rank
! Net worth bn
! Rank
! Net worth bn
|-
| 2006
| align="center" |
| align="right" |
| align="center" |
| align="right" |
|-
! colspan=5 style="background:#cccccc;" |
|-
| 2014
| align="center" |
| align="right" |
| align="center" |
| align="right" |
|-
| 2015
| align="center" |
| align="right" |
| align="center" | n/a
| align="right" | not listed
|-
| 2016
| align="center" |
| align="right" |
| align="center" | n/a
| align="right" | not listed
|-
| 2017
| align="center" |
| align="right" | 0.720
| align="center" | n/a
| align="right" | not listed
|-
| 2018
| align="center" | 113
| align="right" | 0.702
| align="center" |
| align="right" |
|-
| 2019
| align="center" | n/a
| align="right" | not listed
| align="center" | n/a
| align="right" | not listed
|}
Philanthropy
Bruno and his brother, Rino, along with their wives, Dina Bettiol and Diana Ruzzene, became well known in the Melbourne community for being generous philanthropists. They would all often donate to community groups, charities, educational organisations and sporting institutions.
After their mother’s death in December 2001, they established The Emma Grollo Memorial Scholarship in her memory funded by Bruno, Rino and the Grollo Group. The scholarship seeks to provide financial support to students studying Italian language or literature at the University of Melbourne.
Bruno remembers his mother with these words, ‘My mother had a unique ability to keep us united. She managed to keep us united right up until the very end ... and sometimes this was not easy ... Of all her merits, this for me was the greatest.’
References
External links
Official website
Grocon website
Eureka Tower website
1942 births
Australian businesspeople
Australian people of Italian descent
Living people
Construction and civil engineering companies
Cyclone Tracy
Italian-Australian culture
Transcendental Meditation
Officers of the Order of Australia | Bruno Grollo | [
"Engineering"
] | 2,896 | [
"Construction and civil engineering companies",
"Civil engineering organizations"
] |
4,125,012 | https://en.wikipedia.org/wiki/Demjanov%20rearrangement | The Demjanov rearrangement is the chemical reaction of primary amines with nitrous acid to give rearranged alcohols. It involves substitution by a hydroxyl group with a possible ring expansion. It is named after the Russian chemist Nikolai Jakovlevich Demjanov (Dem'anov, Demianov), who first reported it in 1903.
Reaction mechanism
The reaction process begins with diazotization of the amine by nitrous acid. The diazonium group is a good leaving group, forming nitrogen gas when displaced from the organic structure. This displacement can occur via a rearrangement (path A), in which one of the sigma bonds adjacent to the diazo group migrates. This migration results in an expansion of the ring. The resulting carbocation is then attacked by a molecule of water. Alternately, the diazo group can be displaced directly by a molecule of water in an SN2 reaction (path B). Both routes lead to formation of an alcohol.
Uses
The Demjanov rearrangement is a method to produce a one-carbon ring enlargement in four-, five- or six-membered rings. The resulting five-, six- and seven-membered rings can then be used in further synthetic reactions.
It has been shown that the Demjanov reaction is susceptible to regioselectivity. One example of this is a study conducted by D. Fattori looking at the regioselectivity of the Demjanov rearrangement in one-carbon enlargements of naked sugars. It showed that when an exo methylamine underwent Demjanov nitrous acid deamination, ring enlargement was not produced.
However, when the endo methylamine was subjected to the same conditions, a mixture of rearranged alcohols was produced.
Problems
This rearrangement also leads to a substituted, but not expanded, byproduct. Thus it can be difficult to isolate the two products and acquire the desired yield. Also, stereoisomers are produced depending on the direction of addition of the water molecule and other molecules may be produced depending on rearrangements.
Variations
Tiffeneau-Demjanov rearrangement
The Tiffeneau-Demjanov rearrangement (after Marc Tiffeneau and Nikolai Demjanov) is a variation of the Demjanov rearrangement, which involves both a ring expansion and the production of a ketone by using sodium nitrite and hydrogen cation. Using the Tiffeneau-Demjanov reaction is often advantageous as, while there are rearrangements possible in the products, the reactant always undergoes ring enlargement. As in the Demjanov rearrangement, products illustrate regioselectivity in the reaction. Migratory aptitudes of functional groups dictate rearrangement products.
Use of diazomethane
Diazomethane also produces ring enlargement, and its reaction is mechanistically similar to the Tiffeneau-Demjanov rearrangement.
References
(Review)
Rearrangement reactions
Name reactions | Demjanov rearrangement | [
"Chemistry"
] | 632 | [
"Name reactions",
"Rearrangement reactions",
"Organic reactions"
] |
4,125,123 | https://en.wikipedia.org/wiki/Rydberg%E2%80%93Ritz%20combination%20principle | The Rydberg–Ritz combination principle is an empirical rule proposed by Walther Ritz in 1908 to describe the relationship of the spectral lines for all atoms, as a generalization of an earlier rule by Johannes Rydberg for the hydrogen atom and the alkali metals. The principle states that the spectral lines of any element include frequencies that are either the sum or the difference of the frequencies of two other lines. Lines of the spectra of elements could be predicted from existing lines. Since the frequency of light is proportional to the wavenumber or reciprocal wavelength, the principle can also be expressed in terms of wavenumbers which are the sum or difference of wavenumbers of two other lines.
Another related version is that the wavenumber or reciprocal wavelength of each spectral line can be written as the difference of two terms. The simplest example is the hydrogen atom, described by the Rydberg formula
$$\frac{1}{\lambda} = R\left(\frac{1}{n_1^{2}} - \frac{1}{n_2^{2}}\right)$$
where $\lambda$ is the wavelength, $R$ is the Rydberg constant, and $n_1$ and $n_2$ are positive integers such that $n_1 < n_2$. This is the difference of two terms of the form $R/n^{2}$.
The exact Ritz combination formula was mathematically derived from this as
$$\tilde{\nu} = \tilde{\nu}_\infty - \frac{N_0}{(m+\mu)^{2}}$$
where:
$\tilde{\nu}$ is the wavenumber,
$\tilde{\nu}_\infty$ is the limit of the series,
$N_0$ is a universal constant (now known as R),
$m$ is the numeral (now known as n),
and $\mu$ is a constant characteristic of the series.
Relation to quantum theory
The combination principle is explained using quantum theory. Light consists of photons whose energy E is proportional to the frequency and wavenumber of the light: $E = h\nu = hc/\lambda$ (where h is the Planck constant, c is the speed of light, $\nu$ is the frequency, and $\lambda$ is the wavelength). A combination of frequencies or wavenumbers is then equivalent to a combination of energies.
According to the quantum theory of the hydrogen atom proposed by Niels Bohr in 1913, an atom can have only certain energy levels. Absorption or emission of a particle of light or photon corresponds to a transition between two possible energy levels, and the photon energy equals the difference between their two energies. On dividing by hc, the photon wavenumber equals the difference between two terms, each equal to an energy divided by hc or an energy in wavenumber units (cm−1). Energy levels of atoms and molecules are today described by term symbols which indicate their quantum numbers.
Also, a transition from an initial to a final energy level involves the same energy change whether it occurs in a single step or in two steps via an intermediate state. The energy of the transition in a single step is the sum of the energies of the transitions in the two steps: $E_{13} = E_{12} + E_{23}$.
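A small numerical check of the combination principle, using the Rydberg formula above for hydrogen: the wavenumber of the direct 1→3 transition equals the sum of the wavenumbers of the 1→2 and 2→3 lines. Only the standard value of the Rydberg constant is assumed.

```python
R = 1.0973731568e7  # Rydberg constant, m^-1

def wavenumber(n1, n2):
    """Wavenumber 1/lambda of the hydrogen line between levels n1 < n2."""
    return R * (1.0 / n1**2 - 1.0 / n2**2)

direct   = wavenumber(1, 3)                     # single-step transition 1 -> 3
combined = wavenumber(1, 2) + wavenumber(2, 3)  # two steps via the n = 2 level

print(f"direct   : {direct:.6e} m^-1")
print(f"combined : {combined:.6e} m^-1")
print("equal (to rounding)?", abs(direct - combined) < 1e-3)
```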
The NIST database tables of lines of spectra contains observed lines and the lines calculated by use of the Ritz combination principle.
History
The spectral lines of hydrogen had been analyzed and found to follow a mathematical relationship in the Balmer series. This was later extended to a general formula called the Rydberg formula, which could only be applied to hydrogen-like atoms. In 1908 Ritz derived a relationship that could be applied to all atoms; he obtained it before the first quantum model of the atom in 1913, and his ideas were based on classical mechanics. This principle, the Rydberg–Ritz combination principle, is used today in identifying the transition lines of atoms.
References
External links
Emission spectroscopy
Old quantum theory | Rydberg–Ritz combination principle | [
"Physics",
"Chemistry"
] | 641 | [
"Spectrum (physical sciences)",
"Emission spectroscopy",
"Quantum mechanics",
"Old quantum theory",
"Spectroscopy"
] |
4,129,530 | https://en.wikipedia.org/wiki/Dasymeter | A dasymeter was meant initially as a device to demonstrate the buoyant effect of gases like air (as shown in the adjacent pictures). A dasymeter which allows weighing acts as a densimeter used to measure the density of gases.
Principle
The principle of Archimedes permits the derivation of a formula which does not rely on any information about the volume: a sample (the big sphere in the adjacent images) of known mass density $\rho_{\text{sample}}$ is weighed in vacuum and then immersed into the gas and weighed again. The difference between the two weights is the buoyant force exerted by the gas,
$$W_{\text{vacuum}} - W_{\text{gas}} = \rho_{\text{gas}}\, V g,$$
where the sample volume is $V = m_{\text{sample}}/\rho_{\text{sample}} = W_{\text{vacuum}}/(\rho_{\text{sample}}\, g)$.
(This buoyancy relation still has to be solved for the density of the gas.)
From the known mass density of the sample (sphere) and its two weight values, the mass density of the gas can then be calculated as
$$\rho_{\text{gas}} = \rho_{\text{sample}}\,\frac{W_{\text{vacuum}} - W_{\text{gas}}}{W_{\text{vacuum}}}.$$
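A short numerical sketch of this relation with illustrative, assumed numbers: a glass sphere of density 2500 kg/m³ loses a small fraction of its weight when weighed in air instead of vacuum, and the gas density follows directly.

```python
def gas_density(rho_sample, w_vacuum, w_gas):
    """Density of the gas from the sample density and the two weighings:
    rho_gas = rho_sample * (W_vacuum - W_gas) / W_vacuum."""
    return rho_sample * (w_vacuum - w_gas) / w_vacuum

# Assumed illustrative values.
rho_glass = 2500.0    # kg/m^3, known density of the glass sphere
w_vacuum  = 0.100000  # N, weight measured in vacuum
w_air     = 0.099952  # N, weight measured while immersed in the gas (air)

print(f"gas density ~ {gas_density(rho_glass, w_vacuum, w_air):.2f} kg/m^3")  # ~1.2 kg/m^3
```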
Construction and use
It consists of a thin sphere made of glass, ideally with an average density close to that of the gas to be investigated. This sphere is immersed in the gas and weighed.
History of the dasymeter
The dasymeter was invented in 1650 by Otto von Guericke. Archimedes used a pair of scales which he immersed into water to demonstrate the buoyant effect of water. A dasymeter can be seen as a variant of that pair of scales, only immersed into gas.
References
External links
Volume Conversion
Measuring instruments
Laboratory equipment
Laboratory glassware | Dasymeter | [
"Technology",
"Engineering"
] | 276 | [
"Measuring instruments"
] |
4,129,998 | https://en.wikipedia.org/wiki/Whittington%20chimes | Whittington chimes, also called St. Mary's, are a family of clock chime melodies associated with St Mary-le-Bow church in London, which is related to the historical figure of Whittington by legend.
Whittington is usually the secondary chime selection for most chiming clocks, the first being the Westminster. It is also one of two clock chime melodies with multiple variations, the other being the Ave Maria chimes.
Before the name Whittington became common, the melody used to be referred to as “chimes on eight bells”. However, evidence suggests it was originally a chime on six bells – a melody that has not been in use at St Mary-le-Bow since 1666. In 1905, based on what was known about the six-bell version, Sir Charles Villiers Stanford composed a new melody (still called Whittington chimes) that uses 11 out of the 12 bells in the tower of St Mary-le-Bow; this 11-bell version is the one now used at that church.
Dick Whittington story
The customary English theatre story, adapted from the life of the real Richard Whittington, is that the young boy Dick Whittington was an unhappy apprentice running away from his master, and heard the tune ringing from the bell tower of the church of St Mary-le-Bow in London in 1392. The penniless boy heard the bells seemingly saying to him "Turn again Dick Whittington". Dick returned to London upon hearing the bells, where he went on to find his fortune and became the Lord Mayor of London four times.
According to tradition, Whittington used the tune as a campaign song for his three returns to the office of mayor. A short version of the campaign song goes:
Turn again Dick Whittington,
Right Lord Mayor of London Town.
Chimes of St Mary-le-Bow
The twelve bells in the tower of St Mary-le-Bow, cast in 1956, all have inscriptions on them; the first letters of each inscription spell out:
D W H I T T I N G T O N
Chimes on domestic clocks
The Whittington chimes are less well known than the Westminster (Cambridge) chimes, despite being much older. The chimes are found in many early English bracket and longcase clocks. The melody was not given the name "Whittington Chimes" on domestic clocks until the late Victorian period onwards.
Whittington chimes found on domestic clocks are variations on the eight-bell melody, and there are at least four variations of this chime sequence. Currently the Whittington chime is often available on grandfather clock movements that have selectable chimes and some quartz clocks.
Bawo & Dotter Chimes
One of the Whittington chime variations is also known as the Bawo & Dotter chimes, and is usually found on many older German movements such as early models of Junghans grandfather clocks. This version of the chimes is remarkably different and unique from the other three variations; only the first-quarter melody remains the same with the other variations.
References
Clocks
Anonymous musical compositions
Compositions by Charles Villiers Stanford | Whittington chimes | [
"Physics",
"Technology",
"Engineering"
] | 663 | [
"Physical systems",
"Machines",
"Clocks",
"Measuring instruments"
] |
11,801,199 | https://en.wikipedia.org/wiki/Zero%20field%20NMR | Zero- to ultralow-field (ZULF) NMR is the acquisition of nuclear magnetic resonance (NMR) spectra of chemicals with magnetically active nuclei (spins 1/2 and greater) in an environment carefully screened from magnetic fields (including from the Earth's field). ZULF NMR experiments typically involve the use of passive or active shielding to attenuate Earth’s magnetic field. This is in contrast to the majority of NMR experiments which are performed in high magnetic fields provided by superconducting magnets. In ZULF experiments the sample is moved through a low field magnet into the "zero field" region where the dominant interactions are nuclear spin-spin couplings, and the coupling between spins and the external magnetic field is a perturbation to this. There are a number of advantages to operating in this regime: magnetic-susceptibility-induced line broadening is attenuated which reduces inhomogeneous broadening of the spectral lines for samples in heterogeneous environments. Another advantage is that the low frequency signals readily pass through conductive materials such as metals due to the increased skin depth; this is not the case for high-field NMR for which the sample containers are usually made of glass, quartz or ceramic.
High-field NMR employs inductive detectors to pick up the radiofrequency signals, but this would be inefficient in ZULF NMR experiments since the signal frequencies are typically much lower (on the order of hertz to kilohertz). The development of highly sensitive magnetic sensors in the early 2000s including SQUIDs, magnetoresistive sensors, and SERF atomic magnetometers made it possible to detect NMR signals directly in the ZULF regime. Previous ZULF NMR experiments relied on indirect detection where the sample had to be shuttled from the shielded ZULF environment into a high magnetic field for detection with a conventional inductive pick-up coil. One successful implementation was using atomic magnetometers at zero magnetic field working with rubidium vapor cells to detect zero-field NMR.
Without a large magnetic field to induce nuclear spin polarization, the nuclear spins must be polarized externally using hyperpolarization techniques. This can be as simple as polarizing the spins in a magnetic field followed by shuttling to the ZULF region for signal acquisition, and alternative chemistry-based hyperpolarization techniques can also be used.
It is sometimes but inaccurately referred to as nuclear quadrupole resonance (NQR).
Zero-field NMR experiments
Spin Hamiltonians
Free evolution of nuclear spins is governed by a Hamiltonian ($\hat{H}$), which in the case of liquid-state nuclear magnetic resonance may be split into two major terms. The first term ($\hat{H}_Z$) corresponds to the Zeeman interaction between spins and the external magnetic field, which includes chemical shift ($\delta$). The second term ($\hat{H}_J$) corresponds to the indirect spin-spin, or J-coupling, interaction.
$\hat{H} = \hat{H}_Z + \hat{H}_J$, where:
$\hat{H}_Z = -\hbar \sum_a \gamma_a (1+\delta_a)\, \mathbf{B}\cdot\hat{\mathbf{I}}_a$, and
$\hat{H}_J = 2\pi\hbar \sum_{a<b} J_{ab}\, \hat{\mathbf{I}}_a\cdot\hat{\mathbf{I}}_b$.
Here the summation is taken over the whole system of coupled spins; $\hbar$ denotes the reduced Planck constant; $\gamma_a$ denotes the gyromagnetic ratio of spin a; $\delta_a$ denotes the isotropic part of the chemical shift for the a-th spin; $\hat{\mathbf{I}}_a$ denotes the spin operator of the a-th spin; $\mathbf{B}$ is the external magnetic field experienced by all considered spins; and $J_{ab}$ is the J-coupling constant between spins a and b.
Importantly, the relative strength of $\hat{H}_Z$ and $\hat{H}_J$ (and therefore the spin dynamics behavior of such a system) depends on the magnetic field. For example, in conventional NMR, $B$ is typically larger than 1 T, so the Larmor frequency of 1H exceeds tens of MHz. This is much larger than J-coupling values, which are typically Hz to hundreds of Hz. In this limit, $\hat{H}_J$ is a perturbation to $\hat{H}_Z$. In contrast, at nanotesla fields, Larmor frequencies can be much smaller than J-couplings, and $\hat{H}_J$ dominates.
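To make the zero-field limit concrete, the sketch below constructs $\hat{H}_J$ for a single heteronuclear spin-1/2 pair (for example ¹H–¹³C) at exactly zero field and diagonalizes it: the four states split into a singlet and a triplet separated by $hJ$, so the observable zero-field transition occurs at the frequency J. The J value used is an assumed, illustrative one-bond coupling.

```python
import numpy as np

# Spin-1/2 operators in units of hbar.
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

J = 140.0  # Hz, assumed 1H-13C one-bond J-coupling

# Zero-field Hamiltonian of the pair in frequency units (H / h):
# H/h = J * (Ix Sx + Iy Sy + Iz Sz)
H = J * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))

energies = np.linalg.eigvalsh(H)  # -3J/4 (singlet) and +J/4 (triplet, threefold)
print("eigenvalues / Hz:", np.round(energies, 2))
print("singlet-triplet splitting / Hz:", round(energies.max() - energies.min(), 2))  # = J
```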
Polarization
Before signals can be detected in a ZULF NMR experiment, it is first necessary to polarize the nuclear spin ensemble, since the signal is proportional to the nuclear spin magnetization. There are a number of methods to generate nuclear spin polarization. The most common is to allow the spins to thermally equilibrate in a magnetic field; the nuclear spin alignment with the magnetic field due to the Zeeman interaction leads to weak spin polarization. The polarization generated in this way is on the order of 10⁻⁶ for tesla field strengths.
An alternative approach is to use hyperpolarization techniques, which are chemical and physical methods to generate nuclear spin polarization. Examples include parahydrogen-induced polarization, spin-exchange optical pumping of noble gas atoms, dissolution dynamic nuclear polarization, and chemically-induced dynamic nuclear polarization.
Excitation and spin manipulation
NMR experiments require creating a transient non-stationary state of the spin system. In conventional high-field experiments, radio-frequency pulses tilt the magnetization from along the main magnetic field direction into the transverse plane. Once in the transverse plane, the magnetization is no longer in a stationary state (or eigenstate), and so it begins to precess about the main magnetic field, creating a detectable oscillating magnetic field.
In ZULF experiments, constant magnetic field pulses are used to induce non-stationary states of the spin system. The two main strategies consist of (1) switching of the magnetic field from pseudo-high field to zero (or ultra-low) field, or (2) of ramping down the magnetic field experienced by the spins to zero field in order to convert the Zeeman populations into zero-field eigenstates adiabatically and subsequently in applying a constant magnetic field pulse to induce a coherence between the zero-field eigenstates. In the simple case of a heteronuclear pair of J-coupled spins, both these excitation schemes induce a transition between the singlet and triplet-0 states, which generates a detectable oscillatory magnetic field.
More sophisticated pulse sequences have been reported including selective pulses, two-dimensional experiments and decoupling schemes.
Signal detection
NMR signals are usually detected inductively, but the low frequencies of the electromagnetic radiation emitted by samples in a ZULF experiment make inductive detection impractical at low fields. Hence, the earliest approach for measuring zero-field NMR in solid samples was via field-cycling techniques. The field cycling involves three steps: preparation, evolution and detection. In the preparation stage, a field is applied in order to magnetize the nuclear spins. Then the field is suddenly switched to zero to initiate the evolution interval, and the magnetization evolves under the zero-field Hamiltonian. After a time period, the field is again switched on and the signal is detected inductively at high field. In a single field cycle, the magnetization observed corresponds only to a single value of the zero-field evolution time. The time-varying magnetization can be detected by repeating the field cycle with incremented lengths of the zero-field interval, and hence the evolution and decay of the magnetization are measured point by point. The Fourier transform of this magnetization yields the zero-field absorption spectrum.
The emergence of highly sensitive magnetometry techniques has allowed for the detection of zero-field NMR signals in situ. Examples include superconducting quantum interference devices (SQUIDs), magnetoresistive sensors, and SERF atomic magnetometers. SQUIDs have high sensitivity, but require cryogenic conditions to operate, which makes them practically somewhat difficult to employ for the detection of chemical or biological samples. Magnetoresistive sensors are less sensitive, but are much easier to handle and to bring close to the NMR sample which is advantageous since proximity improves sensitivity. The most common sensors employed in ZULF NMR experiments are optically-pumped magnetometers, which have high sensitivity and can be placed in close proximity to an NMR sample.
Definition of the ZULF regime
The boundaries between zero-, ultralow-, low- and high-field NMR are not rigorously defined, although approximate working definitions are in routine use for experiments involving small molecules in solution. The boundary between zero and ultralow field is usually defined as the field at which the nuclear spin precession frequency matches the spin relaxation rate, i.e., at zero field the nuclear spins relax faster than they precess about the external field. The boundary between ultralow and low field is usually defined as the field at which Larmor frequency differences between different nuclear spin species match the spin-spin (J or dipolar) couplings, i.e., at ultralow field spin-spin couplings dominate and the Zeeman interaction is a perturbation. The boundary between low and high field is more ambiguous and these terms are used differently depending on the application or research topic. In the context of ZULF NMR, the boundary is defined as the field at which chemical shift differences between nuclei of the same isotopic species in a sample match the spin-spin couplings.
Note that these definitions strongly depend on the sample being studied, and the field regime boundaries can vary by orders of magnitude depending on sample parameters such as the nuclear spin species, spin-spin coupling strengths, and spin relaxation times.
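A rough way to apply these working definitions is sketched below in Python. The gyromagnetic ratios, J coupling, relaxation time and chemical-shift range are illustrative assumptions for a 1H-13C pair, not values asserted above.

```python
def field_regime(B, gamma1_hz_per_t, gamma2_hz_per_t, J_hz, T2_s, shift_range_ppm):
    """Classify a magnetic field B (tesla) using the working definitions above."""
    nu1 = gamma1_hz_per_t * B          # precession frequencies in Hz
    nu2 = gamma2_hz_per_t * B
    if max(nu1, nu2) < 1.0 / T2_s:     # spins relax faster than they precess
        return "zero field"
    if abs(nu1 - nu2) < J_hz:          # spin-spin couplings dominate Zeeman differences
        return "ultralow field"
    if nu1 * shift_range_ppm * 1e-6 < J_hz:  # chemical-shift differences still below J
        return "low field"
    return "high field"

# Illustrative 1H-13C pair: gamma/2pi = 42.58 and 10.71 MHz/T, J = 140 Hz, T2 = 1 s, 1 ppm shifts.
for B in (1e-10, 1e-7, 1e-3, 10.0):
    print(B, field_regime(B, 42.58e6, 10.71e6, 140.0, 1.0, 1.0))
```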
See also
Earth's field NMR
Low field NMR
References
Further reading
M. P. Ledbetter, C. Crawford, A. Pines, D. Wemmer, S. Knappe, J. Kitching, D. Budker, "Optical detection of NMR J-spectra at zero magnetic field", J. Magn. Reson. (2009), 199, 25-29.
T. Theis, P. Ganssle, G. Kervern, S. Knappe, J. Kitching, M. P. Ledbetter, D. Budker, A. Pines, "Parahydrogen-enhanced zero-field nuclear magnetic resonance", Nature Physics (2011), 7, 571–575.
External links
https://pines.berkeley.edu/research/ultra-low-field-zero-field-nmr
https://pines.berkeley.edu/publications/chemical-analysis-using-j-coupling-multiplets-zero-field-nmr-0
Nuclear magnetic resonance | Zero field NMR | [
"Physics",
"Chemistry"
] | 2,133 | [
"Nuclear magnetic resonance",
"Nuclear physics"
] |
11,801,987 | https://en.wikipedia.org/wiki/Omnibus%20Autism%20Proceeding | The Omnibus Autism Proceeding was a set of six test cases heard by Special Masters of the United States Court of Federal Claims to examine claims of a causal link between vaccines and autism.
Because there were so many National Vaccine Injury Compensation Program (NVICP) cases involving a claim that vaccines caused autism, over 5,000 in all, the attorneys for the petitioners and the Special Masters agreed to examine a set of test cases to determine whether there was sufficient evidence to support a link between vaccines and autism. These cases directly confronted the question of whether there is evidence of causality between vaccines and autism.
In 2002, the NVICP, in consultation with a Petitioners Steering Committee, set up the Omnibus Autism Proceeding to aggregate these cases. They decided to examine six test cases that made one or more of the following claims about the vaccines-autism link:
• Claims that MMR vaccines and other thimerosal-containing vaccines can combine to cause autism.
• Claims that center on vaccines containing thimerosal causing autism.
• Claims that MMR vaccines alone (with no mention of thimerosal) can cause autism.
Three Special Masters examined the evidence for each of those claims. In 2009, they handed down their decisions. For each claim, the three Special Masters concluded that there were no links between vaccines and autism.
References
Vaccine controversies
MMR vaccine and autism | Omnibus Autism Proceeding | [
"Chemistry",
"Biology"
] | 273 | [
"Vaccination",
"Drug safety",
"Vaccine controversies"
] |
11,810,505 | https://en.wikipedia.org/wiki/Polymerase%20cycling%20assembly | Polymerase cycling assembly (or PCA, also known as Assembly PCR) is a method for the assembly of large DNA oligonucleotides from shorter fragments. The process uses the same technology as PCR, but takes advantage of DNA hybridization and annealing as well as DNA polymerase to amplify a complete sequence of DNA in a precise order based on the single stranded oligonucleotides used in the process. It thus allows for the production of synthetic genes and even entire synthetic genomes.
PCA principles
Just as PCR primers are designed as a forward and a reverse primer that allow DNA polymerase to copy the entire template sequence, PCA uses the same technology but with multiple oligonucleotides. While PCR primers are customarily about 18 nucleotides long, PCA uses oligonucleotides of up to 50 nucleotides to ensure uniqueness and correct hybridization.
Each oligonucleotide is designed to be part of either the top or the bottom strand of the target sequence. Besides the basic requirement of being able to tile the entire target sequence, these oligonucleotides must also have the usual properties of similar melting temperatures, freedom from hairpin structures, and moderate GC content, to avoid the same complications as in PCR.
During the polymerase cycles, the oligonucleotides anneal to complementary fragments and then are filled in by polymerase. Each cycle thus increases the length of various fragments randomly depending on which oligonucleotides find each other. It is critical that there is complementarity between all the fragments in some way or a final complete sequence will not be produced as polymerase requires a template to follow.
After this initial construction phase, additional primers encompassing both ends are added to perform a regular PCR reaction, amplifying the target sequence away from all the shorter incomplete fragments. A gel purification can then be used to identify and isolate the complete sequence.
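The tiling logic described above can be sketched in a few lines of Python. The oligo length, overlap, GC window and example sequence are hypothetical choices for illustration, not parameters from a published protocol.

```python
def revcomp(seq):
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def gc_fraction(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def tile_oligos(target, oligo_len=50, overlap=20):
    """Alternate top- and bottom-strand oligos covering `target` with the given overlap."""
    step = oligo_len - overlap
    oligos = []
    for i, start in enumerate(range(0, len(target) - overlap, step)):
        chunk = target[start:start + oligo_len]           # the last oligo may be shorter
        if i % 2 == 0:
            oligos.append(("top", chunk))
        else:
            oligos.append(("bottom", revcomp(chunk)))     # bottom-strand oligos are reverse complements
    return oligos

target = ("ATGGCTAGCTAGGCTTACGATCGATCGGCTAGCTAGGCTAGCTACGATCG"
          "ATCGGATCGATCGTAGCTAGCATCGATCGATCGGCTAGCTAAGCTTGCAT")
for strand, oligo in tile_oligos(target):
    # Flag oligos whose GC content falls outside a typical 40-60% window.
    ok = 0.40 <= gc_fraction(oligo) <= 0.60
    print(strand, len(oligo), round(gc_fraction(oligo), 2), "ok" if ok else "check GC")
```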
Typical reaction
A typical reaction consists of oligonucleotides ~50 base pairs long each overlapping by about 20 base pairs. The reaction with all the oligonucleotides is then carried out for ~30 cycles followed by an additional 23 cycles with the end primers.
Gibson assembly
A modification of this method, Gibson assembly, described by Gibson et al., allows for single-step isothermal assembly of DNA fragments of up to several hundred kilobases. By using T5 exonuclease to 'chew back' complementary ends, an overlap of about 40 bp can be created. The reaction takes place at 50 °C, a temperature at which the T5 exonuclease is unstable. After a short time it is degraded, and the overlaps can anneal and be ligated. The Cambridge University iGEM team made a video describing the process. Ligation-independent cloning (LIC) is a related variant of the method for joining several DNA pieces that requires only an exonuclease enzyme.
References
Amplifiers
Genetic engineering
Genetics techniques
Laboratory techniques | Polymerase cycling assembly | [
"Chemistry",
"Technology",
"Engineering",
"Biology"
] | 630 | [
"Genetics techniques",
"Biological engineering",
"Genetic engineering",
"nan",
"Molecular biology",
"Amplifiers"
] |
3,003,448 | https://en.wikipedia.org/wiki/Angular%20diameter%20distance | In astronomy, angular diameter distance is a distance (in units of length) defined in terms of an object's physical size (also in units of length), , and its angular size (necessarily in radians), , as viewed from Earth:
Cosmology dependence
The angular diameter distance depends on the assumed cosmology of the universe. The angular diameter distance to an object at redshift, z, is expressed in terms of the comoving distance, χ, as:

d_A = r(χ) / (1 + z)

where r(χ) is the FLRW coordinate, defined as:

r(χ) = (c / (H₀ √Ω_k)) sinh(√Ω_k H₀ χ / c)    for Ω_k > 0
r(χ) = χ                                       for Ω_k = 0
r(χ) = (c / (H₀ √|Ω_k|)) sin(√|Ω_k| H₀ χ / c)  for Ω_k < 0

where Ω_k is the curvature density and H₀ is the value of the Hubble parameter today.
In the currently favoured geometric model of our Universe, the "angular diameter distance" of an object is a good approximation to the "real distance", i.e. the proper distance when the light left the object.
Angular size redshift relation
The angular size redshift relation describes the relation between the angular size observed on the sky of an object of given physical size, and the object's redshift from Earth (which is related to its distance, d, from Earth). In a Euclidean geometry the relation between size on the sky and distance from Earth would simply be given by the equation:

tan(θ) = x / d

where θ is the angular size of the object on the sky, x is the size of the object and d is the distance to the object. Where θ is small this approximates to:

θ ≈ x / d
However, in the ΛCDM model, the relation is more complicated. In this model, objects at redshifts greater than about 1.5 appear larger on the sky with increasing redshift.
This is related to the angular diameter distance, d_A, which is the distance an object is calculated to be at from its measured x and θ, assuming the Universe is Euclidean.
The Mattig relation yields the angular-diameter distance, d_A, as a function of redshift z for a universe with ΩΛ = 0. q₀ is the present-day value of the deceleration parameter, which measures the deceleration of the expansion rate of the Universe; in the simplest models, q₀ < 1/2 corresponds to the case where the Universe will expand forever, q₀ > 1/2 to closed models which will ultimately stop expanding and contract, and q₀ = 1/2 corresponds to the critical case – Universes which will just be able to expand to infinity without re-contracting.
This formula, however, is not applicable to our Universe, since for it Λ > 0 and the measured q₀ < 0.
Angular diameter turnover point
The angular diameter distance d_A reaches a maximum at a redshift z_t (in the ΛCDM model, this occurs at about z_t ≈ 1.5), such that the slope of d_A(z) changes sign at z_t, i.e. d(d_A)/dz > 0 for z < z_t and d(d_A)/dz < 0 for z > z_t. In reference to its appearance when plotted, z_t is sometimes referred to as the turnover point. At this point also d(d_A)/dz = 0. Practically, this means that if we look at objects at increasing redshift (and thus objects that are increasingly far away) those at greater redshift will span a smaller angle on the sky only until z_t, above which the objects will begin to span greater angles on the sky at greater redshift. The turnover point seems paradoxical because it contradicts our intuition that the farther something is, the smaller it will appear.
The turnover point occurs because of the expansion of the universe and because we observe distant galaxies as they were in the past. Because the universe is expanding, a pair of distant objects that are now distant from each other were closer to each other at earlier times. Because the speed of light is finite, the light reaching us from this pair of objects must have left them long ago when they were nearer to one another and spanned a larger angle on the sky. The turnover point can therefore tell us about the rate of expansion of the universe (or the relationship between the expansion rate and the speed of light if we do not assume the latter to be constant).
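The turnover can be reproduced numerically. The sketch below assumes a flat ΛCDM cosmology with illustrative parameters (H₀ = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7); these are assumptions chosen for the example, not values asserted in the text above.

```python
import numpy as np

c = 299792.458          # speed of light, km/s
H0 = 70.0               # Hubble constant, km/s/Mpc (assumed)
Om, OL = 0.3, 0.7       # matter and dark-energy densities (assumed, flat universe)

z = np.linspace(0.0, 5.0, 2001)
E = np.sqrt(Om * (1.0 + z) ** 3 + OL)          # dimensionless Hubble rate

# Comoving distance d_C(z) = (c/H0) * integral_0^z dz'/E(z'), by the trapezoidal rule.
integrand = 1.0 / E
dC = (c / H0) * np.concatenate(
    ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z)))
)

# In a flat universe the FLRW coordinate equals d_C, so d_A = d_C / (1 + z).
dA = dC / (1.0 + z)

z_turn = z[np.argmax(dA)]
print(round(z_turn, 2), round(dA.max(), 1))  # turnover near z ~ 1.6, d_A near 1750 Mpc here
```

For these assumed parameters the maximum falls slightly above z = 1.5; the exact turnover redshift shifts with the chosen density parameters.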
See also
Distance measure
Standard ruler
References
External links
iCosmos: Cosmology Calculator (With Graph Generation )
Physical quantities
Distance
Equations of astronomy | Angular diameter distance | [
"Physics",
"Astronomy",
"Mathematics"
] | 782 | [
"Physical phenomena",
"Distance",
"Physical quantities",
"Concepts in astronomy",
"Quantity",
"Size",
"Space",
"Equations of astronomy",
"Spacetime",
"Wikipedia categories named after physical quantities",
"Physical properties"
] |
3,003,614 | https://en.wikipedia.org/wiki/Bis%282-ethylhexyl%29%20phthalate | Bis(2-ethylhexyl) phthalate (di-2-ethylhexyl phthalate, diethylhexyl phthalate, diisooctyl phthalate, DEHP; incorrectly — dioctyl phthalate, DIOP) is an organic compound with the formula C6H4(CO2C8H17)2. DEHP is the most common member of the class of phthalates, which are used as plasticizers. It is the diester of phthalic acid and the branched-chain 2-ethylhexanol. This colorless viscous liquid is soluble in oil, but not in water.
Production
Di(2-ethylhexyl) phthalate is produced commercially by the reaction of excess 2-ethylhexanol with phthalic anhydride in the presence of an acid catalyst such as sulfuric acid or para-toluenesulfonic acid. It was first produced in commercial quantities in Japan circa 1933 and in the United States in 1939.
DEHP has two stereocenters, located at the carbon atoms carrying the ethyl groups. As a result, it has three distinct stereoisomers, consisting of an (R,R) form, an (S,S) form (diastereomers), and a meso (R, S) form. As most 2-ethylhexanol is produced as a racemic mixture, commercially-produced DEHP is therefore racemic as well, and consists of a 1:1:2 statistical mixture of stereoisomers.
Use
Due to its suitable properties and the low cost, DEHP is widely used as a plasticizer in manufacturing of articles made of PVC. Plastics may contain 1% to 40% of DEHP. It is also used as a hydraulic fluid and as a dielectric fluid in capacitors. DEHP also finds use as a solvent in glowsticks.
Approximately three million tonnes are produced and used annually worldwide.
Manufacturers of flexible PVC articles can choose among several alternative plasticizers offering similar technical properties as DEHP. These alternatives include other phthalates such as diisononyl phthalate (DINP), di-2-propyl heptyl phthalate (DPHP), diisodecyl phthalate (DIDP), and non-phthalates such as 1,2-cyclohexane dicarboxylic acid diisononyl ester (DINCH), dioctyl terephthalate (DOTP), and citrate esters.
Environmental exposure
DEHP is a component of many household items, including tablecloths, floor tiles, shower curtains, garden hoses, rainwear, dolls, toys, shoes, medical tubing, furniture upholstery, and swimming pool liners. DEHP is an indoor air pollutant in homes and schools. Common exposures come from the use of DEHP as a fragrance carrier in cosmetics, personal care products, laundry detergents, colognes, scented candles, and air fresheners.
The most common exposure to DEHP comes through food with an average consumption of 0.25 milligrams per day. It can also leach into a liquid that comes in contact with the plastic; it extracts faster into nonpolar solvents (e.g. oils and fats in foods packed in PVC). Fatty foods that are packaged in plastics that contain DEHP are more likely to have higher concentrations such as milk products, fish or seafood, and oils. The US FDA therefore permits use of DEHP-containing packaging only for foods that primarily contain water.
DEHP can leach into drinking water from discharges from rubber and chemical factories; the US EPA limit for DEHP in drinking water is 6 ppb. It is also commonly found in bottled water, but unlike tap water, the EPA does not regulate levels in bottled water. DEHP levels in some European samples of milk were found to be 2,000 times higher than the EPA safe drinking water limit (12,000 ppb). Levels of DEHP in some European cheeses and creams were even higher, up to 200,000 ppb, in 1994. Additionally, workers in factories that utilize DEHP in production experience greater exposure. The U.S. agency OSHA's limit for occupational exposure is 5 mg/m3 of air.
Use in medical devices
DEHP is the most common phthalate plasticizer in medical devices such as intravenous tubing and bags, IV catheters, nasogastric tubes, dialysis bags and tubing, blood bags and transfusion tubing, and air tubes. DEHP makes these plastics softer and more flexible and was first introduced in the 1940s in blood bags. For this reason, concern has been expressed about leachates of DEHP transported into the patient, especially for those requiring extensive infusions or those who are at the highest risk of developmental abnormalities, e.g. newborns in intensive care nursery settings, hemophiliacs, kidney dialysis patients, neonates, premature babies, and lactating and pregnant women. According to the European Commission Scientific Committee on Health and Environmental Risks (SCHER), exposure to DEHP may exceed the tolerable daily intake in some specific population groups, namely people exposed through medical procedures such as kidney dialysis. The American Academy of Pediatrics has advocated not using medical devices that can leach DEHP into patients and, instead, resorting to DEHP-free alternatives. In July 2002, the U.S. FDA issued a Public Health Notification on DEHP, stating in part, "We recommend considering such alternatives when these high-risk procedures are to be performed on male neonates, pregnant women who are carrying male fetuses, and peripubertal males", noting that the alternatives were non-DEHP exposure solutions; the notification mentions a database of alternatives. The CBC documentary The Disappearing Male raised concerns about DEHP's effects on male fetal development, miscarriage, and dramatically lower sperm counts in men. A review article in 2010 in the Journal of Transfusion Medicine showed a consensus that the benefits of lifesaving treatments with these devices far outweigh the risks of DEHP leaching out of them. Although more research is needed to develop alternatives to DEHP that give the same benefits of softness and flexibility required for most medical procedures, if a procedure requires one of these devices and the patient is at high risk of harm from DEHP, then a DEHP-free alternative should be considered if medically safe.
Metabolism
DEHP hydrolyzes to mono-ethylhexyl phthalate (MEHP) and subsequently to phthalate salts. The released alcohol is susceptible to oxidation to the aldehyde and carboxylic acid.
Effects on living organisms
Toxicity
The acute toxicity of DEHP is low in animal models: 30 g/kg in rats (oral) and 24 g/kg in rabbits (dermal). Concerns instead focus on its potential as an endocrine disruptor.
Endocrine disruption
DEHP, along with other phthalates, is believed to cause endocrine disruption in males, through its action as an androgen antagonist, and may have lasting effects on reproductive function, for both childhood and adult exposures. Prenatal phthalate exposure has been shown to be associated with lower levels of reproductive function in adolescent males. In another study, airborne concentrations of DEHP at a PVC pellet plant were significantly associated with a reduction in sperm motility and chromatin DNA integrity. Additionally, the authors noted the daily intake estimates for DEHP were comparable to the general population, indicating a "high percentage of men are exposed to levels of DEHP that may affect sperm motility and chromatin DNA integrity". The claims have received support by a study using dogs as a "sentinel species to approximate human exposure to a selection of chemical mixtures present in the environment". The authors analyzed the concentration of DEHP and other common chemicals such as PCBs in testes from dogs from five different world regions. The results showed that regional differences in concentration of the chemicals are reflected in dog testes and that pathologies such as tubule atrophy and germ cells were more prevalent in testes of dogs from regions with higher concentrations.
Development
Numerous studies of DEHP have shown changes in sexual function and development in mice and rats. DEHP exposure during pregnancy has been shown to disrupt placental growth and development in mice, resulting in higher rates of low birthweight, premature birth, and fetal loss. In a separate study, exposure of neonatal mice to DEHP through lactation caused hypertrophy of the adrenal glands and higher levels of anxiety during puberty. In another study, pubertal administration of higher-dose DEHP delayed puberty in rats, reduced testosterone production, and inhibited androgen-dependent development; low doses showed no effect.
Obesity
When DEHP is ingested intestinal lipases convert it to MEHP, which then is absorbed. MEHP is suspected to have an obesogenic effect. Rodent studies and human studies have shown DEHP to be a possible disruptor of thyroid function, which plays a key role in energy balance and metabolism. Exposure to DEHP has been associated with lower plasma thyroxine levels and decreased uptake of iodine in thyroid follicular cells. Previous studies have shown that slight changes in thyroxine levels can have dramatic effects on resting energy expenditure, similar to that of patients with hypothyroidism, which has been shown to cause increased weight gain in those study populations.
Cardiotoxicity
Even at relatively low doses of DEHP, cardiovascular reactivity was significantly affected in mice. A clinically relevant dose and duration of exposure to DEHP has been shown to have a significant impact on the behavior of cardiac cells in culture. This includes an uncoupling effect that leads to irregular rhythms in vitro. Untreated cells had fast conduction velocity, along with homogenous activation wave fronts and synchronized beating. Cells treated with DEHP exhibited fractured wave fronts with slow propagation speeds. This is observed in conjunction with a significant decrease in the amount of expression and instability of gap junctional connexin proteins, specifically connexin-43, in cardiomyocytes treated with DEHP.
The decrease in expression and instability of connexin-43 may be due to the down regulation of tubulin and kinesin genes, and the alteration of microtubule structure, caused by DEHP; all of which are responsible for the transport of protein products. Also, DEHP caused down regulation of several growth factors, such as angiotensinogen, transforming growth factor-beta, vascular endothelial growth factor C and A, and endothelial-1. The DEHP-induced down regulation of these growth factors may also contribute to the reduced expression and instability of connexin-43.
DEHP has also been shown, in vitro using cardiac muscle cells, to cause activation of PPAR-alpha gene, which is a key regulator in lipid metabolism and peroxisome proliferation; both of which can be involved in atherosclerosis and hyperlipidemia, which are precursors of cardiovascular disease.
Once metabolized into MEHP, the molecule has been shown to lengthen action potential duration and slow epicardial conduction velocity in Langendorff perfused rodent hearts.
Other health effects
Studies in mice have shown other adverse health effects due to DEHP exposure. Ingestion of 0.01% DEHP caused damage to the blood-testis barrier as well as induction of experimental autoimmune orchitis. There is also a correlation between DEHP plasma levels in women and endometriosis.
DEHP is also a possible cancer causing agent in humans, although human studies remain inconclusive, due to the exposure of multiple elements and limited research. In vitro and rodent studies indicate that DEHP is involved in many molecular events, including increased cell proliferation, decreased apoptosis, oxidative damage, and selective clonal expansion of the initiated cells; all of which take place in multiple sites of the human body.
Government and industry response
Taiwan
In October 2009, Consumers' Foundation, Taiwan (CFCT) published test results that found 5 out of the sampled 12 shoes contained over 0.1% of phthalate plasticizer content, including DEHP, which exceeds the government's Toy Safety Standard (CNS 4797). CFCT recommend that users should first wear socks to avoid direct skin contact.
In May 2011, the illegal use of the plasticizer DEHP in clouding agents for use in food and beverages has been reported in Taiwan. An inspection of products initially discovered the presence of plasticizers. As more products were tested, inspectors found more manufacturers using DEHP and DINP. The Department of Health confirmed that contaminated food and beverages had been exported to other countries and regions, which reveals the widespread prevalence of toxic plasticizers.
European Union
Concerns about chemicals ingested by children when chewing plastic toys prompted the European Commission to order a temporary ban on phthalates in 1999, the decision of which is based on an opinion by the Commission's Scientific Committee on Toxicity, Ecotoxicity and the Environment (CSTEE). A proposal to make the ban permanent was tabled. Until 2004, EU banned the use of DEHP along with several other phthalates (DBP, BBP, DINP, DIDP and DNOP) in toys for young children. In 2005, the Council and the Parliament compromised to propose a ban on three types of phthalates (DINP, DIDP, and DNOP) "in toys and childcare articles which can be placed in the mouth by children". Therefore, more products than initially planned will thus be affected by the directive. In 2008, six substances were considered to be of very high concern (SVHCs) and added to the Candidate List including musk xylene, MDA, HBCDD, DEHP, BBP, and DBP. In 2011, those six substances have been listed for Authorization in Annex XIV of REACH by Regulation (EU) No 143/2011. According to the regulation, phthalates including DEHP, BBP and DBP will be banned from February 2015.
In 2012, Danish Environment Minister Ida Auken announced the ban of DEHP, DBP, DIBP and BBP, pushing Denmark ahead of the European Union which has already started a process of phasing out phthalates. However, it was postponed by two years and would take effect in 2015 and not in December 2013, which was the initial plan. The reason is that the four phthalates are far more common than expected and that producers cannot phase out phthalates as fast as the Ministry of Environment requested.
In 2012, France became the first country in the EU to ban the use of DEHP in pediatrics, neonatal, and maternity wards in hospitals.
DEHP has now been classified as a Category 1B reprotoxin, and is now on the Annex XIV of the European Union's REACH legislation. DEHP has been phased out in Europe under REACH and can only be used in specific cases if an authorization has been granted. Authorizations are granted by the European Commission, after obtaining the opinion of the Committee for Risk Assessment (RAC) and the Committee for Socio-economic Analysis (SEAC) of the European Chemicals Agency (ECHA).
California
DEHP is classified as a "chemical known to the State of California to cause cancer and birth defects or other reproductive harm" (in this case, both) under the terms of Proposition 65.
References
Further reading
External links
FDA Public Health Notification: PVC devices containing the plasticizer DEHP (archived page)
ATSDR ToxFAQs
CDC - NIOSH Pocket Guide to Chemical Hazards
National Pollutant Inventory - DEHP fact sheet
Healthcare without Harm - PVC and DEHP accessed 25 March 2014
Healthcare without Harm: "Weight of the Evidence on DEHP: Exposures are a Cause for Concern, Especially During Medical Care"; 6p-fact sheet, 16 March 2009 accessed 25 March 2014
Spectrum Laboratories Fact Sheet (archived page)
ChemSub Online : Bis(2-ethylhexyl) phthalate -DEHP
Safety Assessment of Di(2-ethylhexyl)phthalate (DEHP) Released from PVC Medical Devices - Center for Devices and Radiological Health U.S. Food and Drug Administration (archived page)
Ester solvents
IARC Group 2B carcinogens
Phthalate esters
Endocrine disruptors
Plasticizers
2-Ethylhexyl esters | Bis(2-ethylhexyl) phthalate | [
"Chemistry"
] | 3,469 | [
"Endocrine disruptors"
] |
3,005,139 | https://en.wikipedia.org/wiki/Cauchy%27s%20equation | In optics, Cauchy's transmission equation is an empirical relationship between the refractive index and wavelength of light for a particular transparent material. It is named for the mathematician Augustin-Louis Cauchy, who originally defined it in 1830 in his article "The refraction and reflection of light".
The equation
The most general form of Cauchy's equation is

n(λ) = A + B/λ² + C/λ⁴ + ⋯

where n is the refractive index, λ is the wavelength, A, B, C, etc., are coefficients that can be determined for a material by fitting the equation to measured refractive indices at known wavelengths. The coefficients are usually quoted for λ as the vacuum wavelength in micrometres.
Usually, it is sufficient to use a two-term form of the equation:

n(λ) = A + B/λ²

where the coefficients A and B are determined specifically for this form of the equation.
A table of coefficients for common optical materials is shown below:
The theory of light-matter interaction on which Cauchy based this equation was later found to be incorrect. In particular, the equation is only valid for regions of normal dispersion in the visible wavelength region. In the infrared, the equation becomes inaccurate, and it cannot represent regions of anomalous dispersion. Despite this, its mathematical simplicity makes it useful in some applications.
The Sellmeier equation is a later development of Cauchy's work that handles anomalously dispersive regions, and more accurately models a material's refractive index across the ultraviolet, visible, and infrared spectrum.
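A minimal numerical illustration of the two-term form is given below. The coefficients are of the order quoted for a typical borosilicate crown glass and are assumed here for illustration, not authoritative material data.

```python
A = 1.5046       # dimensionless, assumed coefficient of a typical crown glass
B = 0.00420      # micrometre^2, assumed

def cauchy_index(wavelength_um, A=A, B=B):
    """Two-term Cauchy equation n = A + B / lambda^2, wavelength in micrometres."""
    return A + B / wavelength_um ** 2

# Fraunhofer F, D and C lines (approx. 486.1 nm, 589.3 nm, 656.3 nm)
for wl in (0.4861, 0.5893, 0.6563):
    print(wl, round(cauchy_index(wl), 4))
# Shorter wavelengths give a larger index: normal dispersion, as described above.
```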
Humidity dependence for air
Cauchy's two-term equation for air, expanded by Lorentz to account for humidity, is as follows:
where p is the air pressure in millibar, T is the temperature in kelvin, and v is the vapor pressure of water in millibar.
See also
Sellmeier equation
References
F.A. Jenkins and H.E. White, Fundamentals of Optics, 4th ed., McGraw-Hill, Inc. (1981).
Augustin-Louis Cauchy
Optics
Electric and magnetic fields in matter
Eponymous equations of physics | Cauchy's equation | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 422 | [
"Applied and interdisciplinary physics",
"Equations of physics",
"Optics",
"Eponymous equations of physics",
"Electric and magnetic fields in matter",
"Materials science",
"Condensed matter physics",
" molecular",
"Atomic",
" and optical physics"
] |
3,005,202 | https://en.wikipedia.org/wiki/Atomic%20model%20%28mathematical%20logic%29 | In model theory, a subfield of mathematical logic, an atomic model is a model such that the complete type of every tuple is axiomatized by a single formula. Such types are called principal types, and the formulas that axiomatize them are called complete formulas.
Definitions
Let T be a theory. A complete type p(x1, ..., xn) is called principal or atomic (relative to T) if it is axiomatized relative to T by a single formula φ(x1, ..., xn) ∈ p(x1, ..., xn).
A formula φ is called complete in T if for every formula ψ(x1, ..., xn), the theory T ∪ {φ} entails exactly one of ψ and ¬ψ.
It follows that a complete type is principal if and only if it contains a complete formula.
A model M is called atomic if every n-tuple of elements of M satisfies a formula that is complete in Th(M)—the theory of M.
Examples
The ordered field of real algebraic numbers is the unique atomic model of the theory of real closed fields.
Any finite model is atomic.
A dense linear ordering without endpoints is atomic (a worked illustration follows this list).
Any prime model of a countable theory is atomic by the omitting types theorem.
Any countable atomic model is prime, but there are plenty of atomic models that are not prime, such as an uncountable dense linear order without endpoints.
The theory of a countable number of independent unary relations is complete but has no completable formulas and no atomic models.
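As a worked illustration of the dense-linear-order example (the notation x1, x2 follows the article; the argument via quantifier elimination is standard but spelled out here rather than taken from a particular source): in the theory of dense linear orders without endpoints, quantifier elimination implies that every formula in the variables x1, x2 is equivalent to a Boolean combination of x1 < x2, x2 < x1 and x1 = x2. The single formula x1 < x2 therefore decides every formula in these variables, so it is complete, and it axiomatizes the type of any pair (a, b) with a < b. The same argument shows that the type of any n-tuple is axiomatized by the conjunction of the order relations and equalities holding among its coordinates, which is why a dense linear ordering without endpoints is atomic.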
Properties
The back-and-forth method can be used to show that any two countable atomic models of a theory that are elementarily equivalent are isomorphic.
Notes
References
Model theory | Atomic model (mathematical logic) | [
"Mathematics"
] | 378 | [
"Mathematical logic",
"Model theory"
] |
3,005,304 | https://en.wikipedia.org/wiki/Insulin%20analog | An insulin analog (also called an insulin analogue) is any of several types of medical insulin that are altered forms of the hormone insulin, different from any occurring in nature, but still available to the human body for performing the same action as human insulin in terms of controlling blood glucose levels in diabetes. Through genetic engineering of the underlying DNA, the amino acid sequence of insulin can be changed to alter its ADME (absorption, distribution, metabolism, and excretion) characteristics. Officially, the U.S. Food and Drug Administration (FDA) refers to these agents as insulin receptor ligands (because, like insulin itself, they are ligands of the insulin receptor), although they are usually just referred to as insulin analogs or even (loosely but commonly) just insulin (without further specification).
These modifications have been used to create two types of insulin analogs: those that are more readily absorbed from the injection site and therefore act faster than natural insulin injected subcutaneously, intended to supply the bolus level of insulin needed at mealtime (prandial insulin); and those that are released slowly over a period of between 8 and 24 hours, intended to supply the basal level of insulin during the day and particularly at nighttime (basal insulin). The first insulin analog (insulin Lispro rDNA) was approved for human therapy in 1996 and was manufactured by Eli Lilly and Company.
Fast acting
Lispro
Eli Lilly and Company developed and marketed the first rapid-acting insulin analogue (insulin lispro rDNA) Humalog. It was engineered through recombinant DNA technology so that the penultimate lysine and proline residues on the C-terminal end of the B-chain were reversed. This modification did not alter the insulin receptor binding, but blocked the formation of insulin dimers and hexamers. This allowed larger amounts of active monomeric insulin to be available for postprandial (after meal) injections.
Aspart
Novo Nordisk created "aspart" and marketed it as NovoLog/NovoRapid (UK-CAN) as a rapid-acting insulin analogue. It was created through recombinant DNA technology so that the amino acid, B28, which is normally proline, is substituted with an aspartic acid residue. The sequence was inserted into the yeast genome, and the yeast expressed the insulin analogue, which was then harvested from a bioreactor. This analogue prevents the formation of hexamers, to create a faster acting insulin. It is approved for use in CSII pumps and Flexpen, Novopen delivery devices for subcutaneous injection.
Glulisine
Glulisine is rapid acting insulin analog from Sanofi-Aventis, approved for use with a regular syringe, in an insulin pump. Standard syringe delivery is also an option. It is sold under the name Apidra. The FDA-approved label states that it differs from regular human insulin by its rapid onset and shorter duration of action. The differences are: replacement of B3 asparagine with lysine, replacement of B29 lysine with glutamic acid.
Long acting
Detemir insulin
Novo Nordisk created insulin detemir and markets it under the trade name Levemir as a long-lasting insulin analogue for maintaining the basal level of insulin. The basal level of insulin may be maintained for up to 20 hours, but the time is affected by the size of the injected dose.
The changes are: removal of B30 threonine, attachment of a myristic acid tail to the B29 lysine's "tail" nitrogen. As a result, this insulin binds to serum albumin with high affinity, increasing its duration of action.
Degludec insulin
This is an ultralong-acting insulin analogue developed by Novo Nordisk, which markets it under the brand name Tresiba. It is administered once daily and has a duration of action that lasts up to 40 hours (compared to 18 to 26 hours provided by other marketed long-acting insulins such as insulin glargine and insulin detemir). The change involves attachment of a hexadecanedioic acid tail to the B29 lysine's "tail" nitrogen. This allows the molecule to form multi-hexamers that last longer.
Glargine insulin
Sanofi-Aventis developed glargine as a longer-lasting insulin analogue, and sells it under the brand name Lantus. It was created by modifying three amino acids. Two positively charged arginine molecules were added to the C-terminus of the B-chain, and they shift the isoelectric point from 5.4 to 6.7, making glargine more soluble at a slightly acidic pH and less soluble at a physiological pH. Replacing the acid-sensitive asparagine at position 21 in the A-chain (A21) by glycine is needed to avoid deamidation and dimerization of the arginine residue. These three structural changes and formulation with zinc result in a prolonged action when compared with biosynthetic human insulin. When the pH 4.0 solution is injected, most of the material precipitates and is not bioavailable. A small amount is immediately available for use, and the remainder is sequestered in subcutaneous tissue. As the glargine is used, small amounts of the precipitated material will move into solution in the bloodstream, and the basal level of insulin will be maintained up to 24 hours. The onset of action of subcutaneous insulin glargine is somewhat slower than that of NPH human insulin. It is supplied as a clear solution. The biosimilar insulin glargine-yfgn (Semglee) was approved for medical use in the United States in July 2021, and in the European Union in March 2018.
Comparison with other insulins
NPH
NPH (Neutral Protamine Hagedorn) insulin is an intermediate-acting insulin with delayed absorption after subcutaneous injection, used for basal insulin support in diabetes type 1 and type 2. NPH insulins are suspensions that require shaking for reconstitution prior to injection. Many people reported problems when being switched to intermediate acting insulins in the 1980s, using NPH formulations of porcine/bovine insulins. Basal (long-acting) insulin analogs were subsequently developed and introduced into clinical practice to achieve more predictable absorption profiles and clinical efficacy.
As its name suggests, it contains both insulin and protamine and has a neutral pH. It also contains zinc and phenol.
Animal insulin
The amino acid sequence of animal insulins in different mammals may be similar to that of human insulin (insulin human INN); there is, however, considerable variability among vertebrate species. Porcine insulin has only a single amino acid variation from the human variety, and bovine insulin varies by three amino acids. Both are active on the human receptor with approximately the same strength. Bovine insulin and porcine insulin may be considered the first clinically used insulin analogs (naturally occurring, produced by extraction from animal pancreas), at a time when biosynthetic human insulin (insulin human rDNA) was not available. There are extensive reviews on the structure–function relationships of naturally occurring insulins (phylogenetic relationships in animals) and structural modifications. Prior to the introduction of biosynthetic human insulin, insulin derived from sharks was widely used in Japan. Insulin from some species of fish may also be effective in humans. Non-human insulins have caused allergic reactions in some patients, related to the extent of purification; the formation of non-neutralising antibodies is rarely observed with recombinant human insulin (insulin human rDNA), but allergy may occur in some patients. This may be enhanced by the preservatives used in insulin preparations, or occur as a reaction to the preservative. Biosynthetic insulin (insulin human rDNA) has largely replaced animal insulin.
Modifications
Before biosynthetic human recombinant analogues were available, porcine insulin was chemically converted into human insulin. Chemical modifications of the amino acid side chains at the N-terminus and/or the C-terminus were made in order to alter the ADME characteristics of the analogue. Semisynthetic insulins were clinically used for some time based on chemical modification of animal insulins, for example Novo Nordisk enzymatically converted porcine insulin into semisynthetic 'human' insulin by removing the single amino acid that varies from the human variety, and chemically adding the human amino acid.
Normal unmodified insulin is soluble at physiological pH. Analogues have been created that have a shifted isoelectric point so that they exist in a solubility equilibrium in which most precipitates out but slowly dissolves in the bloodstream and is eventually excreted by the kidneys. These insulin analogues are used to replace the basal level of insulin, and may be effective over a period of up to 24 hours. However, some insulin analogues, such as insulin detemir, bind to albumin rather than fat like earlier insulin varieties, and results from long-term usage (e.g. more than 10 years) are currently not available but required for assessment of clinical benefit.
Unmodified human and porcine insulins tend to complex with zinc in the blood, forming hexamers. Insulin in the form of a hexamer will not bind to its receptors, so the hexamer has to slowly equilibrate back into its monomers to be biologically useful. Hexameric insulin delivered subcutaneously is not readily available for the body when insulin is needed in larger doses, such as after a meal (although this is more a function of subcutaneously administered insulin, as intravenously dosed insulin is distributed rapidly to the cell receptors, and therefore, avoids this problem). Zinc combinations of insulin are used for slow release of basal insulin. Basal insulin support is required throughout the day representing about 50% of daily insulin requirement, the insulin amount needed at mealtime makes up for the remaining 50%. Non hexameric insulins (monomeric insulins) were developed to be faster acting and to replace the injection of normal unmodified insulin before a meal. There are phylogenetic examples for such monomeric insulins in animals.
Carcinogenicity
All insulin analogs must be tested for carcinogenicity, as insulin engages in cross-talk with IGF pathways, which can cause abnormal cell growth and tumorigenesis. Modifications to insulin always carry the risk of unintentionally enhancing IGF signalling in addition to the desired pharmacological properties. There has been concern with the mitogenic activity and the potential for carcinogenicity of glargine. Several epidemiological studies have been performed to address these issues. Results of the 6.5-year ORIGIN study with glargine have been published.
Research on safety, efficacy, and comparative effectiveness
A meta-analysis completed in 2007 and updated in 2020 of numerous randomized controlled trials by the international Cochrane Collaboration found that the effects on blood glucose and glycated haemoglobin A1c (HbA1c) were comparable, and that treatment with glargine and detemir resulted in fewer cases of hypoglycemia when compared to NPH insulin. Treatment with detemir also reduced the frequency of serious hypoglycemia. This review did note limitations, such as low glucose and HbA1c targets, that could limit the applicability of these findings to daily clinical practice.
In 2007, Germany's Institute for Quality and Cost Effectiveness in the Health Care Sector (IQWiG) report, concluded that there is currently "no evidence" available of the superiority of rapid-acting insulin analogs over synthetic human insulins in the treatment of adult patients with type 1 diabetes. Many of the studies reviewed by IQWiG were either too small to be considered statistically reliable and, perhaps most significantly, none of the studies included in their widespread review were blinded, the gold-standard methodology for conducting clinical research. However, IQWiG's terms of reference explicitly disregard any issues which cannot be tested in double-blind studies, for example a comparison of radically different treatment regimes. IQWiG is regarded with skepticism by some doctors in Germany, being seen merely as a mechanism to reduce costs. But the lack of study blinding does increase the risk of bias in these studies. The reason this is important is because patients, if they know they are using a different type of insulin, might behave differently (such as testing blood glucose levels more frequently, for example), which leads to bias in the study results, rendering the results inapplicable to the diabetes population at large. Numerous studies have concluded that any increase in testing of blood glucose levels is likely to yield improvements in glycemic control, which raises questions as to whether any improvements observed in the clinical trials for insulin analogues were the result of more frequent testing or due to the drug undergoing trials.
In 2008, the Canadian Agency for Drugs and Technologies in Health (CADTH) found, in its comparison of the effects of insulin analogues and biosynthetic human insulin, that insulin analogues failed to show any clinically relevant differences, both in terms of glycemic control and adverse reaction profile.
Timeline
1922 Banting and Best use bovine insulin extract on human
1923 Eli Lilly and Company (Lilly) produces commercial quantities of bovine insulin
1923 Hagedorn founds the Nordisk Insulinlaboratorium in Denmark forerunner of Novo Nordisk
1926 Nordisk receives Danish charter to produce insulin as a non-profit
1936 Canadians D.M. Scott and A.M. Fisher formulate zinc insulin mixture and license to Novo
1936 Hagedorn discovers that adding protamine to insulin prolongs the effect of insulin
1946 Nordisk formulates Isophane porcine insulin a.k.a. Neutral Protamine Hagedorn or NPH insulin
1946 Nordisk crystallizes a protamine and insulin mixture
1950 Nordisk markets NPH insulin
1953 Novo formulates Lente porcine and bovine insulins by adding zinc for longer-lasting insulin
1978 Genentech develop biosynthesis of recombinant human insulin in Escherichia coli bacteria using recombinant DNA technology
1981 Novo Nordisk chemically and enzymatically converts porcine insulin to 'human' insulin (Actrapid HM)
1982 Genentech synthetic 'human' insulin approved, in partnership with Eli Lilly and Company, who shepherded the product through the U.S. Food and Drug Administration (FDA) approval process
1983 Lilly produces biosynthetic recombinant "rDNA insulin human INN" (Humulin)
1985 Axel Ullrich sequences the human insulin receptor
1988 Novo Nordisk produces synthetic, recombinant insulin ("insulin human INN")
1996 Lilly Humalog "insulin lispro INN" approved by the U.S. Food and Drug Administration
2003 Aventis Lantus "glargine" insulin analogue approved in USA
2004 Sanofi Aventis Apidra insulin "glulisine" analogue approved in the USA.
2006 Novo Nordisk's Levemir "insulin detemir INN" analogue approved in the USA
2013 Novo Nordisk's Tresiba "insulin degludec INN" analogue approved in Europe (EMA, with additional monitoring)
References
External links
Analog Insulin
Insulin receptor agonists
Drugs developed by Eli Lilly and Company
Human proteins
Recombinant proteins
Peptide hormones
Peptide therapeutics | Insulin analog | [
"Biology"
] | 3,221 | [
"Recombinant proteins",
"Biotechnology products"
] |
3,007,126 | https://en.wikipedia.org/wiki/Benzyl%20butyl%20phthalate | Benzyl butyl phthalate (BBP) is an organic compound historically used a plasticizer, but which has now been largely phased out due to health concerns. It is a phthalate ester of containing benzyl alcohol, and n-butanol tail groups. Like most phthalates, BBP is non-volatile and remains liquid over a wide range of temperatures. It was mostly used as a plasticizer for PVC, but was also a common plasticizer for PVCA and PVB.
BBP was commonly used as a plasticizer for vinyl foams, which are often used as sheet vinyl flooring and tiles. Compared to other phthalates it was less volatile than dibutyl phthalate and imparted better low temperature flexibility than di(2-ethylhexyl) phthalate.
BBP is classified as toxic by the European Chemical Bureau (ECB) and hence its use in Europe has declined rapidly.
Structure and reactivity
BBP is a diester. Since BBP contains two ester bonds it can react via a variety of chemical pathways. Both carbonyl C-atoms are weakly electrophilic and therefore targets for attack by strong nucleophiles. Besides the carbonyl C-atom target, it contains a C-H bond in which the H-atom is weakly acidic, which makes it susceptible to deprotonation by a strong base. BBP is hydrolyzed under either acidic or basic conditions. The hydrolysis under acidic conditions is the reverse of the Fischer-Speier esterification, whereas the hydrolysis under basic conditions proceeds by saponification. Since BBP contains two ester bonds it is difficult to perform a chemoselective reaction.
Under basic conditions BBP can undergo saponification. The saponification number of BBP is 360 mg KOH/g. The amount of carboxylic functional groups per molecule is relatively high (2 carboxylic functional groups with a molecular weight of 312.36). This makes the compound relatively unsaponifiable.
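As a rough consistency check (a back-of-the-envelope calculation using the standard molar mass of KOH, about 56.1 g/mol, which is assumed here rather than taken from the article's sources): complete hydrolysis of both ester groups of one mole of BBP consumes two moles of KOH, so the expected saponification value is about 2 × 56.1 g/mol × 1000 mg/g ÷ 312.36 g/mol ≈ 359 mg KOH per gram of BBP, in agreement with the quoted 360 mg KOH/g.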
Synthesis
Concentrated sulfuric acid dehydrates n-butyl alcohol to yield 1-butene, which reacts with phthalic anhydride to produce n-butyl phthalate. Phthalic anhydride does react directly with 1-butanol to form this same intermediate, but further reaction to form dibutyl phthalate does occur to a significant extent. Carrying out the procedure using 1-butene avoids this side reaction. Monobutyl phthalate is isolated and then added to a mixture of benzyl bromide in acetone in the presence of potassium carbonate (to keep the pH high to facilitate the substitution reaction required to form the second ester linkage), from which BBP can then be isolated.
Metabolism
BBP can be absorbed by the human body in a variety of ways. First of all, it can be taken up dermally, meaning that the compound is absorbed by the skin. Studies in rats show that 27% of the uptake of BBP occurs via this route. During this process, the structure of the phthalate diester determines the degree of dermal absorption.
BBP can also be taken up orally. The amount of the compound that is being absorbed by the body depends on the dose that has been administered. Absorption seems to be limited at high doses, meaning that small amounts are taken up more easily than great amounts. Finally, BBP can be inhaled. In this case, BBP is absorbed via the lungs.
BBP is biotransformed in the human body in numerous of ways. Gut esterases metabolize BBP to monoester metabolites. Those are mainly monobutyl and mono-benzyl phthalate (MBzP) plus small amounts of mono-n-butyl phthalate. The ratio of monobutyl to monobenzyl phthalate has been determined to be 5:3. These metabolites can be absorbed and excreted directly or undergo a phase II reaction. In the latter, they are conjugated with glucuronic acid and then excreted as glucuronate. Studies in rats have shown that 70% of BBP is not conjugated while 30% is conjugated. At high concentrations of BBP, relatively less metabolite is conjugated. This indicates that the conjugation pathway (glucuronidation) is saturated at high amounts of administered BBP. The metabolites of BBP are excreted rapidly, 90% of them has left the body within 24 hours. As a consequence, the half-life of BBP in the blood is quite low and counts up to only 10 minutes. However, monoester metabolites of BBP (such as monophthalate) have a longer half-life of 6 hours.
BBP is metabolized quite efficiently within the body. While a major part of the BBP is excreted as a mono-benzyl phthalate metabolite, a minor fragment of the BBP is excreted in the form of mono-butyl phthalate. BBP is rarely found in the bile in its original form. Nevertheless, metabolites like monobutyl glucuronide and monobenzyl phthalate glucuronide as well as trace amounts of free monoesters can be found there.
Mode of action
Relatively little is known about the modes of action of BBP. Experimental research does hint at a number of mechanisms, though. One phenomenon is that BBP binds to the estrogen receptor of rats. In vitro-experiments do show a weak potential of BBP to have an influence on estrogen-mediated gene expression. This is because phthalates like BBP are mimicking estrogens. Metabolites of BBP, on the other hand, are only weakly reactive with the estrogen receptor. Not much is known about if and how this mechanism plays out in vivo.
Furthermore, BBP binds to intracellular steroid receptors and causes genomic effects by doing so. BBP also interferes with ion-channel receptors which cause non-genomic effects. The underlying mechanism is that BBP blocks the calcium signaling that is coupled with P2X receptors. Calcium signaling, mediated via P2X, eventually has an influence in cell proliferation and bone remodeling. During developmental phases of bone remodelling, high environmental exposure of BBP might therefore pose a problem.
Exposure
The exposure of the general population to BBP has been estimated by several authorities. One of the authorities, the International Program on Chemical Safety (IPCS), came to the conclusion that exposure to BBP is mainly caused by food intake. BBP, as many other phthalates, is used to increase the flexibility of plastics. However, phthalates are not bound to the plastics which means that they can easily be released into the environment. From there it can be taken up into food during crop cultivation. Alternatively, BBP can enter food via food packaging materials. Moreover, children may be exposed to BBP by mouthing of toys. Various studies by authorities, between the 1980s and 2000s, have been done to estimate the general population exposure to BBP in different countries with varying results. The adult exposure was estimated to be 2 μg/kg body weight/day in the U.S. BBP exposure to children is likely to be higher due to differences in food intake.
Nonetheless, these estimates should be interpreted with caution as they are based on different food types, different assumptions were used in calculations, levels of BBP in food vary in different countries and levels of BBP in food changes over time.
In addition to general exposure, there is also occupation-related exposure to BBP. This can occur via inhalation of vapors or via skin contact, and has been estimated at 286 μg/kg body weight/day. However, in general the occupational exposure is thought to be lower than this. The NOAEL of BBP was experimentally found to be 50 mg/kg body weight/day and the associated margin of safety is ca. 4,800 or more. Thus, BBP does not seem to pose a very high risk under conditions of general or occupational exposure, based on current estimates.
Toxicity and adverse effects
No primary irritation or sensitization reactions were found in a patch test involving 200 volunteers. However, if BBP is taken up by the body it can exert toxic effects. It has an LD50 in rats ranging from 2 to 20 g/kg body weight.
Occupational hazards
Workers in the PVC processing industry are exposed to higher levels of BBP than the general public and are thus more at risk of experiencing negative health effects. No effects on the respiratory or peripheral nervous system have been observed in workers, although slightly higher levels of BBP metabolites were found in their urine. Long-term occupational exposure to BBP does, however, significantly increase the risk of multiple myeloma.
Children
Children are possibly exposed to higher levels of BBP than adults. Since children form a vulnerable group for chemical exposure, studies have been conducted to evaluate the effects of BBP exposure. PVC flooring has been linked to a significant increase in the risk of bronchial obstruction in the first two years of life and in the development of language delay in pre-school aged children. BBP has also been positively associated with airway inflammation in children living in urban areas. Moreover, there is evidence suggesting that prenatal exposure to BBP coming from in house dust affects the risk of childhood eczema. The exact mechanism of how phthalates and their metabolites reach the fetus remain unclear. However, since these chemicals seem to be able to reach the fetus they are thought to affect fetal health and development. Further research is needed to establish the effect of prenatal exposure on fetal development.
Teratogenicity and reproductive effects
Only a few studies have been done on reproductive effects of BBP on humans, but the results are inconclusive. According to the NTP-CERHR the adverse reproductive effects are negligible for exposed men. Yet, one study found a link between altered semen quality and exposure to monobutyl phthalate, a major metabolite of BBP. No research has been done on the teratogenic effects of BBP on humans. However, numerous studies have been conducted with animals. Prenatal exposure to high levels of BBP in rats can lead to lower fetal body weight, increased incidence of fetal malformations, post-implantation loss and even embryonic death. The precise teratogenic effects observed in rat fetuses seem to be related to the period of exposure in development. Exposure to BBP in the first half of pregnancy led to embryolethality, while exposure in the second half led to teratogenicity.
In a two-generational study male offspring were found to have macroscopic and microscopic changes in the testes, decreased serum testosterone concentrations in addition to reduced sperm production. Additionally, reduced seminal vesicle weight has been observed. These results indicate a clear negative effect on the fertility.
Other toxicity studies in animals
Numerous studies have been carried out in animals to elucidate the adverse effects of BBP exposure. Long-term BBP exposure in rats leads to reduced body weight, increased weight of the liver and kidneys and carcinogenicity. In male rats the incidence of pancreatic tumors increased while in female rats the incidence for both pancreatic and bladder tumors increased.
Although BBP has been linked to carcinogenicity, studies indicate that BBP is not genotoxic.
Environmental toxicology
BBP, like other low-molecular-weight phthalate esters, is toxic to aquatic organisms. This includes unicellular freshwater green algae such as Selenastrum capricornutum. BBP has also been shown to be toxic to freshwater invertebrates such as D. magna. For these organisms, the toxic effect correlates with the water solubility of the phthalate, which is relatively high for BBP compared to high-molecular-weight phthalates. BBP also significantly affects saltwater invertebrates; experiments with mysid shrimp show that BBP is acutely toxic to these organisms. Among fish, the freshwater bluegill was shown to be toxically affected by BBP. Furthermore, a rapid lethal effect has been observed for the saltwater fish Parophrys vetulus.
Degradation
When considering the degradation of BBP, it is important to note that it contains two ester functional groups, which give organisms a handle for biotransformations. The ester groups give BBP hydrophilic properties, and it will therefore hydrolyze fairly easily. A study performed in 1997 found that biotransformations play a very important role in the degradation of BBP. Furthermore, solubility in water plays a significant role in how effectively BBP is biotransformed in a given environment. Although the butyl group gives BBP a slightly more hydrophobic character, it is relatively soluble compared to other plasticizers. In general, the longer the alkyl chain, the lower the solubility and the less readily the compound is degraded.
Legislation measures
BBP was listed as a developmental toxicant under California's Proposition 65 on December 2, 2005. California's Office of Environmental Health Hazard Assessment (OEHHA), on July 1, 2013, approved a Maximum Allowable Dose Level of 1,200 micrograms per day for BBP. Canadian Authorities have restricted the usage of phthalates, including BBP, in soft vinyl children's toys and child care articles.
According to EU Council Directive 67/548/EEC, BBP is classified as a reproductive toxicant and its use is therefore restricted. The restriction covers the placing on the market and use in any type of toys and childcare articles, and has been in place since 16 January 2017. Due to the classification and labelling of BBP, companies have moved to the use of alternatives. Restrictions are not limited to toys: since 22 November 2006, cosmetic products containing BBP may not be supplied to consumers in the EU.
References
External links
C-307 An Act respecting bis(2-ethylhexyl)phthalate, benzyl butyl phthalate and dibutyl phthalate
Datasheet
Plasticizers
Phthalate esters
Endocrine disruptors
Benzyl esters
Butyl esters | Benzyl butyl phthalate | [
"Chemistry"
] | 3,003 | [
"Endocrine disruptors"
] |
3,007,616 | https://en.wikipedia.org/wiki/HER2 | Receptor tyrosine-protein kinase erbB-2 is a protein that normally resides in the membranes of cells and is encoded by the ERBB2 gene. ERBB is abbreviated from erythroblastic oncogene B, a gene originally isolated from the avian genome. The human protein is also frequently referred to as HER2 (human epidermal growth factor receptor 2) or CD340 (cluster of differentiation 340).
HER2 is a member of the human epidermal growth factor receptor (HER/EGFR/ERBB) family. However, in contrast to other members of the ERBB family, HER2 does not directly bind ligand. HER2 activation results from heterodimerization with another ERBB member or from homodimerization when HER2 concentrations are high, for instance in cancer. Amplification or over-expression of this oncogene has been shown to play an important role in the development and progression of certain aggressive types of breast cancer. In recent years the protein has become an important biomarker and target of therapy for approximately 30% of breast cancer patients.
Name
HER2 is so named because it has a similar structure to human epidermal growth factor receptor, or HER1. Neu is so named because it was derived from a rodent glioblastoma cell line, a type of neural tumor. ErbB-2 was named for its similarity to ErbB (avian erythroblastosis oncogene B), the oncogene later found to code for EGFR. Molecular cloning of the gene showed that HER2, Neu, and ErbB-2 are orthologs of the same gene.
Gene
ERBB2, a known proto-oncogene, is located at the long arm of human chromosome 17 (17q12).
Function
The ErbB family consists of four individual plasma membrane-bound receptor tyrosine kinases: erbB-1, erbB-2, erbB-3 (neuregulin-binding; lacks kinase domain), and erbB-4. All four contain an extracellular ligand-binding domain, a transmembrane domain, and an intracellular domain that can interact with a multitude of signaling molecules and exhibit both ligand-dependent and ligand-independent activity. Notably, no ligands for HER2 have yet been identified. HER2 can heterodimerise with any of the other three receptors and is considered to be the preferred dimerisation partner of the other ErbB receptors.
Dimerisation results in the autophosphorylation of tyrosine residues within the cytoplasmic domain of the receptors and initiates a variety of signaling pathways.
Signal transduction
Signaling pathways activated by HER2 include:
mitogen-activated protein kinase (MAPK)
phosphoinositide 3-kinase (PI3K/Akt)
phospholipase C γ
protein kinase C (PKC)
Signal transducer and activator of transcription (STAT)
In summary, signaling through the ErbB family of receptors promotes cell proliferation and opposes apoptosis, and therefore must be tightly regulated to prevent uncontrolled cell growth from occurring.
Clinical significance
Cancer
Amplification or over-expression of the ERBB2 gene occurs in approximately 15-30% of breast cancers. HER2-positive breast cancers are well established as being associated with increased disease recurrence and a poorer prognosis than genetically distinct breast cancers lacking this marker; however, drugs targeting HER2 have significantly improved the otherwise poor prognosis associated with HER2-positive breast cancer. Over-expression is also known to occur in ovarian cancer, stomach cancer, adenocarcinoma of the lung and aggressive forms of uterine cancer, such as uterine serous endometrial carcinoma; for example, HER2 is over-expressed in approximately 7-34% of patients with gastric cancer and in 30% of salivary duct carcinomas.
HER2 is colocalised with, and most of the time coamplified with, the gene GRB7, which is a proto-oncogene associated with breast, testicular germ cell, gastric, and esophageal tumours.
HER2 proteins have been shown to form clusters in cell membranes that may play a role in tumorigenesis.
Evidence has also implicated HER2 signaling in resistance to the EGFR-targeted cancer drug cetuximab.
The high expression of HER2 correlates with better survival in esophageal adenocarcinoma.
The high amplification of HER2 copy number positively contributes to the survival time of gastric cardia adenocarcinoma patients.
Mutations
Furthermore, diverse structural alterations have been identified that cause ligand-independent firing of this receptor, doing so in the absence of receptor over-expression. HER2 is found in a variety of tumours and some of these tumours carry point mutations in the sequence specifying the transmembrane domain of HER2. Substitution of a valine for a glutamic acid or a glutamine in the transmembrane domain can result in the constitutive dimerisation of this protein in the absence of a ligand.
HER2 mutations have been found in non-small-cell lung cancers (NSCLC) and can direct treatment.
As a drug target
HER2 is the target of the monoclonal antibody trastuzumab (marketed as Herceptin). Trastuzumab is effective only in cancers where HER2 is over-expressed. One year of trastuzumab therapy is recommended for all patients with HER2-positive breast cancer who are also receiving chemotherapy. Twelve months of trastuzumab therapy is optimal. Randomized trials have demonstrated no additional benefit beyond 12 months, whereas 6 months has been shown to be inferior to 12. Trastuzumab is administered intravenously weekly or every 3 weeks.
An important downstream effect of trastuzumab binding to HER2 is an increase in p27, a protein that halts cell proliferation. Another monoclonal antibody, Pertuzumab, which inhibits dimerisation of HER2 and HER3 receptors, was approved by the FDA for use in combination with trastuzumab in June 2012.
As of November 2015, there are a number of ongoing and recently completed clinical trials of novel targeted agents for HER2+ metastatic breast cancer, e.g. margetuximab.
Additionally, NeuVax (Galena Biopharma) is a peptide-based immunotherapy that directs "killer" T cells to target and destroy cancer cells that express HER2. It has entered phase 3 clinical trials.
It has been found that patients with ER+ (Estrogen receptor positive)/HER2+ compared with ER-/HER2+ breast cancers may actually benefit more from drugs that inhibit the PI3K/AKT molecular pathway.
Over-expression of HER2 can also be suppressed by the amplification of other genes. Research is currently being conducted to discover which genes may have this desired effect.
The expression of HER2 is regulated by signaling through estrogen receptors. Normally, estradiol and tamoxifen acting through the estrogen receptor down-regulate the expression of HER2. However, when the ratio of the coactivator AIB-1 exceeds that of the corepressor PAX2, the expression of HER2 is upregulated in the presence of tamoxifen, leading to tamoxifen-resistant breast cancer.
Among approved anti-HER2 therapeutics are also tyrosine kinase inhibitors (Lapatinib, Neratinib, and Tucatinib) and antibody-drug conjugates (ado-trastuzumab emtansine and trastuzumab deruxtecan).
Diagnostics
HER2 testing is performed on breast biopsy of breast cancer patients to assess prognosis and to determine suitability for trastuzumab therapy. It is important that trastuzumab is restricted to HER2-positive individuals as it is expensive and has been associated with cardiac toxicity. For HER2-positive tumors, the benefits of trastuzumab clearly outweigh the risks.
Tests are usually performed on breast biopsy samples obtained by either fine-needle aspiration, core needle biopsy, vacuum-assisted breast biopsy, or surgical excision.
Immunohistochemistry (IHC) is generally used to measure the amount of HER2 protein present in the sample, with fluorescence in situ hybridisation (FISH) being used on samples that are equivocal in IHC. However, in several locations, FISH is used initially, followed by IHC in equivocal cases.
Immunohistochemistry
By immunohistochemistry, the sample is given a score based on the cell membrane staining pattern.
Micrographs showing each score:
Fluorescence in situ hybridisation
FISH can be used to measure the number of copies of the gene which are present and is thought to be more reliable than immunohistochemistry. It usually uses the chromosome enumeration probe 17 (CEP17) to count the number of copies of chromosome 17, so the HER2/CEP17 ratio reflects any amplification of HER2 relative to the chromosome 17 copy number. Signals are usually counted in 20 cells.
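As a purely illustrative sketch of this ratio calculation (the per-cell signal counts below are invented and do not come from any real assay, and clinical cut-offs are not addressed here), the HER2/CEP17 ratio could be computed from per-cell FISH counts as follows:

```python
# Hypothetical FISH signal counts for 20 tumour cells (illustrative values only).
her2_signals  = [6, 8, 5, 7, 9, 6, 8, 7, 5, 6, 7, 8, 6, 9, 7, 6, 8, 5, 7, 6]
cep17_signals = [2, 2, 1, 2, 3, 2, 2, 2, 1, 2, 2, 3, 2, 2, 2, 1, 2, 2, 2, 2]

mean_her2  = sum(her2_signals) / len(her2_signals)    # average HER2 copies per cell
mean_cep17 = sum(cep17_signals) / len(cep17_signals)  # average chromosome 17 copies per cell

# The HER2/CEP17 ratio expresses HER2 copy number relative to chromosome 17 copy number.
ratio = mean_her2 / mean_cep17

print(f"Mean HER2 signals per cell:  {mean_her2:.2f}")
print(f"Mean CEP17 signals per cell: {mean_cep17:.2f}")
print(f"HER2/CEP17 ratio:            {ratio:.2f}")
```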
If the initial HER2 result is negative for a needle biopsy of a primary breast cancer, a new HER2 test may be performed on the subsequent breast excision.
Serum
The extracellular domain of HER2 can be shed from the surface of tumour cells and enter the circulation. Measurement of serum HER2 by enzyme-linked immunosorbent assay (ELISA) offers a far less invasive method of determining HER2 status than a biopsy and consequently has been extensively investigated. Results so far have suggested that changes in serum HER2 concentrations may be useful in predicting response to trastuzumab therapy. However, its ability to determine eligibility for trastuzumab therapy is less clear.
Interactions
HER2/neu has been shown to interact with:
CTNNB1,
DLG4,
Erbin,
GRB2,
HSP90AA1,
IL6ST,
MUC1,
PICK1 and
PIK3R2,
PLCG1, and
SHC1.
See also
SkBr3 Cell Line, over-expresses HER2
References
Further reading
External links
AACR Cancer Concepts Factsheet on HER2
Breast Friends for Life Network - A South African Breast Cancer Support Forum for HER2 Positive Women
HerceptinR : Herceptin Resistance Database for Understanding Mechanism of Resistance in Breast Cancer Patients. Sci. Rep. 4:4483
PDBe-KB provides an overview of all the structure information available in the PDB for Human Receptor tyrosine-protein kinase erbB-2
Clusters of differentiation
Tyrosine kinase receptors
Cancer treatments
Oncogenes
Breast cancer | HER2 | [
"Chemistry"
] | 2,341 | [
"Tyrosine kinase receptors",
"Signal transduction"
] |
3,007,899 | https://en.wikipedia.org/wiki/Antimycobacterial | An antimycobacterial is a type of medication used to treat Mycobacteria infections.
Types include:
Tuberculosis treatments
Leprostatic agents
Notes
Antibiotics | Antimycobacterial | [
"Biology"
] | 35 | [
"Antibiotics",
"Biocides",
"Biotechnology products"
] |
5,501,493 | https://en.wikipedia.org/wiki/Quark-nova | A quark-nova is the hypothetical violent explosion resulting from the conversion of a neutron star to a quark star. Analogous to a supernova heralding the birth of a neutron star, a quark nova signals the creation of a quark star. The term quark-novae was coined in 2002 by Dr. Rachid Ouyed (currently at the University of Calgary, Canada) and Drs. J. Dey and M. Dey (Calcutta University, India).
The nova process
When a neutron star spins down, it may convert to a quark star through a process known as quark deconfinement. The resultant star would have quark matter in its interior. The process would release immense amounts of energy, perhaps explaining the most energetic explosions in the universe; calculations have estimated that as much as 10⁴⁶ J could be released from the phase transition inside a neutron star. Quark-novae may be one cause of gamma ray bursts. According to Jaikumar and collaborators, they may also be involved in producing heavy elements such as platinum through r-process nucleosynthesis.
Candidates
Rapidly spinning neutron stars with masses between 1.5 and 1.8 solar masses are hypothetically the best candidates for conversion due to spin down of the star within a Hubble time. This amounts to a small fraction of the projected neutron star population. A conservative estimate based on this indicates that up to two quark-novae may occur in the observable universe each day.
Hypothetically, quark stars would be radio-quiet, so radio-quiet neutron stars may be quark stars.
Observations
Direct evidence for quark-novae is scant; however, recent observations of supernovae SN 2006gy, SN 2005gj and SN 2005ap may point to their existence.
See also
References
External links
Quark-novae produce neutrino bursts, which can be detected by neutrino observatories
Quark Stars Could Produce Biggest Bang (SpaceDaily) June 7, 2006
Quark Nova Project animations (University of Calgary)
Quark stars
Supernovae
Hypothetical astronomical objects | Quark-nova | [
"Chemistry",
"Astronomy"
] | 437 | [
"Supernovae",
"Astronomical hypotheses",
"Astronomical events",
"Astronomical myths",
"Hypothetical astronomical objects",
"Explosions",
"Astronomical objects"
] |
5,501,781 | https://en.wikipedia.org/wiki/Cohobation | In pre-modern chemistry and alchemy, cohobation was the process of repeated distillation of the same matter, with the liquid drawn from it (successive redistillation); that liquid being poured again and again upon the matter left at the bottom of the vessel. Cohobation is a kind of circulation, only differing from it in this, that the liquid is drawn off in cohobation, as in common distillation, and thrown back again; whereas in circulation, it rises and falls in the same vessel, without ever being drawn out.
Cohobation is not recognized as a useful process in modern chemistry. Indeed, it is equivalent to performing the same distillation a number of times and does not increase the purity of the distillate or alter the residue any more than would be done by maintaining it at elevated temperature for the same period of time. The Dean-Stark trap does involve returning some distillate to the reaction flask: a solution is distilled and the condensed liquid is collected in a tube wherein water settles to the bottom and is drained out, while an organic solvent returns to the boiling solution. However, the process is not manual, most of the solvent does not leave the reaction flask, and the apparatus achieves a useful purpose (removing water from the reaction mixture). Circulation, on the other hand, is approximately the same as reflux, where a solution is maintained at its boiling point by condensing the distilling vapors and returning them directly to the reaction mixture.
References
Alchemical processes
Distillation
Chemical processes
"Chemistry"
] | 508 | [
"Separation processes",
"Chemical processes",
"Distillation",
"Alchemical processes",
"nan",
"Chemical process engineering"
] |
5,501,977 | https://en.wikipedia.org/wiki/Zero-order%20hold | The zero-order hold (ZOH) is a mathematical model of the practical signal reconstruction done by a conventional digital-to-analog converter (DAC). That is, it describes the effect of converting a discrete-time signal to a continuous-time signal by holding each sample value for one sample interval. It has several applications in electrical communication.
Time-domain model
A zero-order hold reconstructs the following continuous-time waveform from a sample sequence x[n], assuming one sample per time interval T:

$x_{\mathrm{ZOH}}(t) = \sum_{n=-\infty}^{\infty} x[n] \cdot \mathrm{rect}\left(\frac{t - T/2 - nT}{T}\right)$

where $\mathrm{rect}(\cdot)$ is the rectangular function.

The function $\mathrm{rect}\left(\frac{t - T/2}{T}\right)$ is depicted in Figure 1, and $x_{\mathrm{ZOH}}(t)$ is the piecewise-constant signal depicted in Figure 2.
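As a numerical illustration of this time-domain model (a minimal sketch only: NumPy is assumed to be available, and the sample interval and sample values are arbitrary invented numbers), the piecewise-constant output can be generated by holding each sample for one interval T:

```python
import numpy as np

T = 0.1                                   # sample interval (arbitrary, for illustration)
x = np.array([0.0, 1.0, 0.5, -0.3, 0.8])  # sample sequence x[n] (made-up values)

# Evaluate x_ZOH(t) on a fine time grid: the output holds x[n] for nT <= t < (n+1)T.
t = np.linspace(0.0, len(x) * T, num=500, endpoint=False)
n = np.floor(t / T).astype(int)           # index of the sample currently being held
x_zoh = x[n]                              # staircase (piecewise-constant) waveform

print(x_zoh[:5])                          # the first few values all equal x[0]
```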
Frequency-domain model
The equation above for the output of the ZOH can also be modeled as the output of a linear time-invariant filter with impulse response equal to a rect function, and with input being a sequence of dirac impulses scaled to the sample values. The filter can then be analyzed in the frequency domain, for comparison with other reconstruction methods such as the Whittaker–Shannon interpolation formula suggested by the Nyquist–Shannon sampling theorem, or such as the first-order hold or linear interpolation between sample values.
In this method, a sequence of Dirac impulses, xs(t), representing the discrete samples, x[n], is low-pass filtered to recover a continuous-time signal, x(t).
Even though this is not what a DAC does in reality, the DAC output can be modeled by applying the hypothetical sequence of dirac impulses, xs(t), to a linear, time-invariant filter with such characteristics (which, for an LTI system, are fully described by the impulse response) so that each input impulse results in the correct constant pulse in the output.
Begin by defining a continuous-time signal from the sample values, as above but using delta functions instead of rect functions:

$x_s(t) = T \sum_{n=-\infty}^{\infty} x[n] \cdot \delta(t - nT)$

The scaling by $T$, which arises naturally by time-scaling the delta function, has the result that the mean value of $x_s(t)$ is equal to the mean value of the samples, so that the lowpass filter needed will have a DC gain of 1. Some authors use this scaling, while many others omit the time-scaling and the $T$, resulting in a low-pass filter model with a DC gain of $T$, and hence dependent on the units of measurement of time.
The zero-order hold is the hypothetical filter or LTI system that converts the sequence of modulated Dirac impulses $x_s(t)$ to the piecewise-constant signal (shown in Figure 2):

$x_{\mathrm{ZOH}}(t) = \sum_{n=-\infty}^{\infty} x[n] \cdot \mathrm{rect}\left(\frac{t - T/2 - nT}{T}\right)$

resulting in an effective impulse response (shown in Figure 4) of:

$h_{\mathrm{ZOH}}(t) = \frac{1}{T} \mathrm{rect}\left(\frac{t - T/2}{T}\right)$
The effective frequency response is the continuous Fourier transform of the impulse response:

$H_{\mathrm{ZOH}}(f) = \mathcal{F}\{h_{\mathrm{ZOH}}(t)\} = e^{-i \pi f T} \, \mathrm{sinc}(f T)$

where $\mathrm{sinc}(x) = \frac{\sin(\pi x)}{\pi x}$ is the (normalized) sinc function commonly used in digital signal processing.

The Laplace transform transfer function of the ZOH is found by substituting s = i 2 π f:

$H_{\mathrm{ZOH}}(s) = \mathcal{L}\{h_{\mathrm{ZOH}}(t)\} = \frac{1 - e^{-sT}}{sT}$
The fact that practical digital-to-analog converters (DAC) do not output a sequence of dirac impulses, xs(t) (that, if ideally low-pass filtered, would result in the unique underlying bandlimited signal before sampling), but instead output a sequence of rectangular pulses, xZOH(t) (a piecewise constant function), means that there is an inherent effect of the ZOH on the effective frequency response of the DAC, resulting in a mild roll-off of gain at the higher frequencies (a 3.9224 dB loss at the Nyquist frequency, corresponding to a gain of sinc(1/2) = 2/π). This drop is a consequence of the hold property of a conventional DAC, and is not due to the sample and hold that might precede a conventional analog-to-digital converter (ADC).
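The 3.9224 dB figure quoted above can be checked numerically. The following is a small sketch (NumPy assumed; the sample interval is normalised to 1) that evaluates the sinc roll-off at the Nyquist frequency:

```python
import numpy as np

# numpy's sinc is the normalised sinc, sin(pi x)/(pi x), matching the convention above.
gain_at_nyquist = np.sinc(0.5)            # |H(f)| at f = 1/(2T) is sinc(1/2) = 2/pi

print(f"Gain at the Nyquist frequency: {gain_at_nyquist:.4f} (2/pi = {2 / np.pi:.4f})")
print(f"In decibels: {20 * np.log10(gain_at_nyquist):.4f} dB")   # about -3.9224 dB
```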
See also
Nyquist–Shannon sampling theorem
First-order hold
Discretization of linear state space models (assuming zero-order hold)
References
Digital signal processing
Electrical engineering
Control theory
Signal processing | Zero-order hold | [
"Mathematics",
"Technology",
"Engineering"
] | 832 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Applied mathematics",
"Control theory",
"Electrical engineering",
"Dynamical systems"
] |
5,503,004 | https://en.wikipedia.org/wiki/Insulin-like%20growth%20factor%201%20receptor | The insulin-like growth factor 1 (IGF-1) receptor is a protein found on the surface of human cells. It is a transmembrane receptor that is activated by a hormone called insulin-like growth factor 1 (IGF-1) and by a related hormone called IGF-2. It belongs to the large class of tyrosine kinase receptors. This receptor mediates the effects of IGF-1, which is a polypeptide protein hormone similar in molecular structure to insulin. IGF-1 plays an important role in growth and continues to have anabolic effects in adults – meaning that it can induce hypertrophy of skeletal muscle and other target tissues. Mice lacking the IGF-1 receptor die late in development, and show a dramatic reduction in body mass. This testifies to the strong growth-promoting effect of this receptor.
Structure
Two alpha subunits and two beta subunits make up the IGF-1 receptor. Both the α and β subunits are synthesized from a single mRNA precursor. The precursor is then glycosylated, proteolytically cleaved, and crosslinked by cysteine bonds to form a functional transmembrane αβ chain. The α chains are located extracellularly, while the β subunit spans the membrane and is responsible for intracellular signal transduction upon ligand stimulation. The mature IGF-1R has a molecular weight of approximately 320 kDa. The receptor is a member of a family which consists of the insulin receptor and the IGF-2R (and their respective ligands IGF-1 and IGF-2), along with several IGF-binding proteins.
IGF-1R and the insulin receptor both have a binding site for ATP, which is used to provide the phosphates for autophosphorylation. There is a 60% homology between IGF-1R and the insulin receptor. The structures of the autophosphorylation complexes of tyrosine residues 1165 and 1166 have been identified within crystals of the IGF1R kinase domain.
In response to ligand binding, the α chains induce the tyrosine autophosphorylation of the β chains. This event triggers a cascade of intracellular signaling that, while cell type-specific, often promotes cell survival and cell proliferation.
Family members
Tyrosine kinase receptors, including the IGF-1 receptor, mediate their activity by causing the addition of a phosphate group to particular tyrosines on certain proteins within a cell. This addition of phosphate induces what are called "cell signaling" cascades - and the usual result of activation of the IGF-1 receptor is survival and proliferation in mitosis-competent cells, and growth (hypertrophy) in tissues such as skeletal muscle and cardiac muscle.
Function
Embryonic development
During embryonic development, the IGF-1R pathway is involved with the developing limb buds.
Lactation
The IGFR signalling pathway is of critical importance during normal development of mammary gland tissue during pregnancy and lactation. During pregnancy, there is intense proliferation of epithelial cells which form the duct and gland tissue. Following weaning, the cells undergo apoptosis and all the tissue is destroyed. Several growth factors and hormones are involved in this overall process, and IGF-1R is believed to have roles in the differentiation of the cells and a key role in inhibiting apoptosis until weaning is complete.
Insulin signaling
IGF-1 binds to at least two cell surface receptors: the IGF1 Receptor (IGFR), and the insulin receptor. The IGF-1 receptor seems to be the "physiologic" receptor—it binds IGF-1 at significantly higher affinity than it binds insulin. Like the insulin receptor, the IGF-1 receptor is a receptor tyrosine kinase—meaning it signals by causing the addition of a phosphate molecule on particular tyrosines. IGF-1 activates the insulin receptor at approximately 10% the potency of insulin. Part of this signaling may be via IGF1R/insulin receptor heterodimers (the reason for the confusion is that binding studies show that IGF-1 binds the insulin receptor 100-fold less well than insulin, yet that does not correlate with the actual potency of IGF-1 in vivo at inducing phosphorylation of the insulin receptor, and hypoglycemia).
Aging
Studies in female mice have shown that both supraoptic nucleus (SON) and paraventricular nucleus (PVN) lose approximately one-third of IGF-1R immunoreactive cells with normal aging. Also, old calorically restricted (CR) mice lost higher numbers of IGF-1R non-immunoreactive cells while maintaining similar counts of IGF-1R immunoreactive cells in comparison to old ad libitum-fed (AL) mice. Consequently, old-CR mice show a higher percentage of IGF-1R immunoreactive cells, reflecting increased hypothalamic sensitivity to IGF-1 in comparison to normally aging mice.
Craniosynostosis
Mutations in IGF1R have been associated with craniosynostosis.
Body size
IGF-1R has been shown to have a significant effect on body size in small dog breeds. A "nonsynonymous SNP at chr3:44,706,389 that changes a highly conserved arginine at amino acid 204 to histidine" is associated with particularly tiny body size. "This mutation is predicted to prevent formation of several hydrogen bonds within the cysteine-rich domain of the receptor’s ligand-binding extracellular subunit. Nine of 13 tiny dog breeds carry the mutation and many dogs are homozygous for it." Smaller individuals within several small and medium-sized breeds were shown to carry this mutation as well.
Mice carrying only one functional copy of IGF-1R are normal, but exhibit a ~15% decrease in body mass. IGF-1R has also been shown to regulate body size in dogs. A mutated version of this gene is found in a number of small dog breeds.
Gene inactivation/deletion
Deletion of the IGF-1 receptor gene in mice results in lethality during early embryonic development, and for this reason, IGF-1 insensitivity, unlike the case of growth hormone (GH) insensitivity (Laron syndrome), is not observed in the human population.
Clinical significance
Cancer
The IGF-1R is implicated in several cancers, including breast, prostate, and lung cancers. In some instances its anti-apoptotic properties allow cancerous cells to resist the cytotoxic properties of chemotherapeutic drugs or radiotherapy. In breast cancer, where EGFR inhibitors such as erlotinib are being used to inhibit the EGFR signaling pathway, IGF-1R confers resistance by forming one half of a heterodimer (see the description of EGFR signal transduction in the erlotinib page), allowing EGFR signaling to resume in the presence of a suitable inhibitor. This process is referred to as crosstalk between EGFR and IGF-1R. It is further implicated in breast cancer by increasing the metastatic potential of the original tumour by conferring the ability to promote vascularisation.
Increased levels of the IGF-IR are expressed in the majority of primary and metastatic prostate cancer patient tumors. Evidence suggests that IGF-IR signaling is required for survival and growth when prostate cancer cells progress to androgen independence. In addition, when immortalized prostate cancer cells mimicking advanced disease are treated with the IGF-1R ligand, IGF-1, the cells become more motile.
Members of the IGF receptor family and their ligands also seem to be involved in the carcinogenesis of mammary tumors of dogs. IGF1R is amplified in several cancer types based on analysis of TCGA data, and gene amplification could be one mechanism for overexpression of IGF1R in cancer.
Lung cancer cells stimulated using glucocorticoids were induced into a reversible dormancy state which was dependent on the IGF-1R and its accompanying survival signaling pathways.
Inhibitors
Due to the similarity of the structures of IGF-1R and the insulin receptor (IR), especially in the regions of the ATP binding site and tyrosine kinase regions, synthesising selective inhibitors of IGF-1R is difficult. Prominent in current research are three main classes of inhibitor:
Tyrphostins such as AG538 and AG1024. These are in early pre-clinical testing. They are not thought to be ATP-competitive, although they are when used in EGFR as described in QSAR studies. These show some selectivity towards IGF-1R over IR.
Pyrrolo(2,3-d)-pyrimidine derivatives such as NVP-AEW541, invented by Novartis, which show far greater (100 fold) selectivity towards IGF-1R over IR.
Monoclonal antibodies are probably the most specific and promising therapeutic compounds. Teprotumumab is a novel therapy showing significant benefit for Thyroid Eye Disease.
Interactions
Insulin-like growth factor 1 receptor has been shown to interact with:
ARHGEF12,
C-src tyrosine kinase,
Cbl gene,
EHD1,
GRB10,
IRS1,
Mdm2,
NEDD4,
PIK3R3,
PTPN11,
RAS p21 protein activator 1,
SHC1
SOCS2,
SOCS3, and
YWHAE.
Regulation
There is evidence to suggest that IGF1R is negatively regulated by the microRNA miR-7.
See also
Hypothalamic–pituitary–somatic axis
Insulin receptor
Linsitinib, an inhibitor of IGF-1R in clinical trials for cancer treatment
References
Further reading
External links
Clusters of differentiation
Tyrosine kinase receptors
Integral membrane proteins | Insulin-like growth factor 1 receptor | [
"Chemistry"
] | 2,106 | [
"Tyrosine kinase receptors",
"Signal transduction"
] |
5,504,630 | https://en.wikipedia.org/wiki/DNA%20repair-deficiency%20disorder | A DNA repair-deficiency disorder is a medical condition due to reduced functionality of DNA repair.
DNA repair defects can cause an accelerated aging disease or an increased risk of cancer, or sometimes both.
DNA repair defects and accelerated aging
DNA repair defects are seen in nearly all of the diseases described as accelerated aging disease, in which various tissues, organs or systems of the human body age prematurely. Because the accelerated aging diseases display different aspects of aging, but never every aspect, they are often called segmental progerias by biogerontologists.
Human disorders with accelerated aging
Ataxia-telangiectasia
Bloom syndrome
Cockayne syndrome
Fanconi anemia
Progeria (Hutchinson–Gilford progeria syndrome)
Rothmund–Thomson syndrome
Trichothiodystrophy
Werner syndrome
Xeroderma pigmentosum
Examples
Some examples of DNA repair defects causing progeroid syndromes in humans or mice are shown in Table 1.
DNA repair defects distinguished from "accelerated aging"
Most of the DNA repair deficiency diseases show varying degrees of "accelerated aging" or cancer (often some of both). But elimination of any gene essential for base excision repair kills the embryo—it is too lethal to display symptoms (much less symptoms of cancer or "accelerated aging").
Rothmund-Thomson syndrome and xeroderma pigmentosum display symptoms dominated by vulnerability to cancer, whereas progeria and Werner syndrome show the most features of "accelerated aging". Hereditary nonpolyposis colorectal cancer (HNPCC) is very often caused by a defective MSH2 gene leading to defective mismatch repair, but displays no symptoms of "accelerated aging". On the other hand, Cockayne syndrome and trichothiodystrophy show mainly features of accelerated aging, but apparently without an increased risk of cancer. Some DNA repair defects manifest as neurodegeneration rather than as cancer or "accelerated aging". (Also see the "DNA damage theory of aging" for a discussion of the evidence that DNA damage is the primary underlying cause of aging.)
Debate concerning "accelerated aging"
Some biogerontologists question that such a thing as "accelerated aging" actually exists, at least partly on the grounds that all of the so-called accelerated aging diseases are segmental progerias. Many disease conditions such as diabetes, high blood pressure, etc., are associated with increased mortality. Without reliable biomarkers of aging it is hard to support the claim that a disease condition represents more than accelerated mortality.
Against this position other biogerontologists argue that premature aging phenotypes are identifiable symptoms associated with mechanisms of molecular damage. The fact that these phenotypes are widely recognized justifies classification of the relevant diseases as "accelerated aging". Such conditions, it is argued, are readily distinguishable from genetic diseases associated with increased mortality, but not associated with an aging phenotype, such as cystic fibrosis and sickle cell anemia. It is further argued that segmental aging phenotype is a natural part of aging insofar as genetic variation leads to some people being more disposed than others to aging-associated diseases such as cancer and Alzheimer's disease.
DNA repair defects and increased cancer risk
Individuals with an inherited impairment in DNA repair capability are often at increased risk of cancer. When a mutation is present in a DNA repair gene, the repair gene will either not be expressed or be expressed in an altered form. Then the repair function will likely be deficient, and, as a consequence, damages will tend to accumulate. Such DNA damages can cause errors during DNA synthesis leading to mutations, some of which may give rise to cancer. Germ-line DNA repair mutations that increase the risk of cancer are listed in the Table.
See also
Biogerontology
Degenerative disease
DNA damage theory of aging
Genetic disorder
Senescence
References
External links
BRCA - Companion Reviews and Search Terms
BRCA1 - Companion Reviews and Search Terms
BRCA2 - Companion Reviews and Search Terms
ATM - Companion Reviews and Search Terms
NBS1 - Companion Reviews and Search Terms
Bloom's syndrome - Companion Reviews and Search Terms
Fanconi's anemia - Companion Reviews and Search Terms
WRN - Companion Reviews and Search Terms
RECQ- Companion Reviews and Search Terms
RECQL4 - Companion Reviews and Search Terms
FANCJ - Companion Reviews and Search Terms
FANCM - Companion Reviews and Search Terms
FANCN - Companion Reviews and Search Terms
XPB - Companion Reviews and Search Terms
XPD - Companion Reviews and Search Terms
XPG - Companion Reviews and Search Terms
MSH6 - Companion Reviews and Search Terms
MUTYH - Companion Reviews and Search Terms
DNA repair and toxicology - Companion Reviews and Search Terms
Neoplasia inherited - Companion Reviews and Search Terms
Neoplasia carcinogenesis - Companion Reviews and Search Terms
Segmental Progeria
Cancer
DNA repair
Mutation
DNA replication and repair-deficiency disorders
Causes of conditions
Senescence | DNA repair-deficiency disorder | [
"Chemistry",
"Biology"
] | 998 | [
"DNA repair",
"Senescence",
"DNA replication and repair-deficiency disorders",
"Molecular genetics",
"Cellular processes",
"Metabolism"
] |
5,504,687 | https://en.wikipedia.org/wiki/Ionotropic%20glutamate%20receptor | Ionotropic glutamate receptors (iGluRs) are ligand-gated ion channels that are activated by the neurotransmitter glutamate. They mediate the majority of excitatory synaptic transmission throughout the central nervous system and are key players in synaptic plasticity, which is important for learning and memory. iGluRs have been divided into four subtypes on the basis of their ligand binding properties (pharmacology) and sequence similarity: AMPA receptors, kainate receptors, NMDA receptors and delta receptors (see below).
AMPA receptors are the main charge carriers during basal transmission, permitting influx of sodium ions to depolarise the postsynaptic membrane. NMDA receptors are blocked by magnesium ions and therefore only permit ion flux following prior depolarisation. This enables them to act as coincidence detectors for synaptic plasticity. Calcium influx through NMDA receptors leads to persistent modifications in the strength of synaptic transmission.
iGluRs are tetramers (they are formed of four subunits). All subunits have a shared architecture with four domain layers: two extracellular clamshell domains called the N-terminal domain (NTD) and ligand-binding domain (LBD; which binds glutamate), the transmembrane domain (TMD) that forms the ion channel, and an intracellular C-terminal domain (CTD).
Human proteins/genes encoding iGluR subunits
AMPA receptors: GluA1/GRIA1; GluA2/GRIA2; GluA3/GRIA3; GluA4/GRIA4;
delta receptors: GluD1/GRID1; GluD2/GRID2;
kainate receptors: GluK1/GRIK1; GluK2/GRIK2; GluK3/GRIK3; GluK4/GRIK4; GluK5/GRIK5;
NMDA receptors: GluN1/GRIN1; GluN2A/GRIN2A; GluN2B/GRIN2B; GluN2C/GRIN2C; GluN2D/GRIN2D; GluN3A/GRIN3A; GluN3B/GRIN3B;
References
Protein domains
Protein families
Membrane proteins
Ionotropic glutamate receptors | Ionotropic glutamate receptor | [
"Biology"
] | 497 | [
"Protein families",
"Protein domains",
"Protein classification",
"Membrane proteins"
] |
5,504,842 | https://en.wikipedia.org/wiki/Sense%20%28molecular%20biology%29 | In molecular biology and genetics, the sense of a nucleic acid molecule, particularly of a strand of DNA or RNA, refers to the nature of the roles of the strand and its complement in specifying a sequence of amino acids. Depending on the context, sense may have slightly different meanings. For example, the negative-sense strand of DNA is equivalent to the template strand, whereas the positive-sense strand is the non-template strand whose nucleotide sequence is equivalent to the sequence of the mRNA transcript.
DNA sense
Because of the complementary nature of base-pairing between nucleic acid polymers, a double-stranded DNA molecule will be composed of two strands with sequences that are reverse complements of each other. To help molecular biologists specifically identify each strand individually, the two strands are usually differentiated as the "sense" strand and the "antisense" strand. An individual strand of DNA is referred to as positive-sense (also positive (+) or simply sense) if its nucleotide sequence corresponds directly to the sequence of an RNA transcript which is translated or translatable into a sequence of amino acids (provided that any thymine bases in the DNA sequence are replaced with uracil bases in the RNA sequence). The other strand of the double-stranded DNA molecule is referred to as negative-sense (also negative (−) or antisense), and is reverse complementary to both the positive-sense strand and the RNA transcript. It is actually the antisense strand that is used as the template from which RNA polymerases construct the RNA transcript, but the complementary base-pairing by which nucleic acid polymerization occurs means that the sequence of the RNA transcript will look identical to the positive-sense strand, apart from the RNA transcript's use of uracil instead of thymine.
Sometimes the phrases coding strand and template strand are encountered in place of sense and antisense, respectively, and in the context of a double-stranded DNA molecule the usage of these terms is essentially equivalent. However, the coding/sense strand need not always contain a code that is used to make a protein; both protein-coding and non-coding RNAs may be transcribed.
The terms "sense" and "antisense" are relative only to the particular RNA transcript in question, and not to the DNA strand as a whole. In other words, either DNA strand can serve as the sense or antisense strand. Most organisms with sufficiently large genomes make use of both strands, with each strand functioning as the template strand for different RNA transcripts in different places along the same DNA molecule. In some cases, RNA transcripts can be transcribed in both directions (i.e. on either strand) from a common promoter region, or be transcribed from within introns on either strand (see "ambisense" below).
Sense DNA
The DNA sense strand looks like the messenger RNA (mRNA) transcript, and can therefore be used to read the expected codon sequence that will ultimately be used during translation (protein synthesis) to build an amino acid sequence and then a protein. For example, the sequence "ATG" within a DNA sense strand corresponds to an "AUG" codon in the mRNA, which codes for the amino acid methionine. However, the DNA sense strand itself is not used as the template for the mRNA; it is the DNA antisense strand that serves as the source for the protein code, because, with bases complementary to the DNA sense strand, it is used as a template for the mRNA. Since transcription results in an RNA product complementary to the DNA template strand, the mRNA is complementary to the DNA antisense strand.
Hence, a base triplet 3′-TAC-5′ in the DNA antisense strand (complementary to the 5′-ATG-3′ of the DNA sense strand) is used as the template which results in a 5′-AUG-3′ base triplet in the mRNA. The DNA sense strand will have the triplet ATG, which looks similar to the mRNA triplet AUG but will not be used to make methionine because it will not be directly used to make mRNA. The DNA sense strand is called a "sense" strand not because it will be used to make protein (it won't be), but because it has a sequence that corresponds directly to the RNA codon sequence. By this logic, the RNA transcript itself is sometimes described as "sense".
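To make these strand relationships concrete, here is a minimal sketch in Python (the nine-base sequence is invented for illustration; only standard Watson-Crick base pairing is assumed) showing that transcribing the antisense/template strand yields an mRNA identical to the sense strand, with U in place of T:

```python
# Complement rules for DNA -> DNA pairing and for transcription of a DNA template into RNA.
DNA_COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}
RNA_FROM_TEMPLATE = {"A": "U", "T": "A", "G": "C", "C": "G"}

def antisense_strand(sense_5to3: str) -> str:
    """Return the antisense (template) strand, written 5'->3'."""
    # Complement each base, then reverse to restore 5'->3' orientation.
    return "".join(DNA_COMPLEMENT[b] for b in reversed(sense_5to3))

def transcribe(template_5to3: str) -> str:
    """Transcribe a template strand (5'->3') into mRNA (5'->3')."""
    # RNA polymerase reads the template 3'->5', so reverse it before complementing.
    return "".join(RNA_FROM_TEMPLATE[b] for b in reversed(template_5to3))

sense = "ATGGCTTAA"                 # hypothetical sense strand, 5'->3'
template = antisense_strand(sense)  # antisense/template strand, 5'->3'
mrna = transcribe(template)         # mRNA transcript, 5'->3'

print("Sense strand (5'->3'):   ", sense)
print("Template strand (5'->3'):", template)
print("mRNA (5'->3'):           ", mrna)
# The transcript matches the sense strand with U substituted for T.
assert mrna == sense.replace("T", "U")
```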
Example with double-stranded DNA
DNA strand 1: antisense strand (transcribed to) → RNA strand (sense)
DNA strand 2: sense strand
Some regions within a double-stranded DNA molecule code for genes, which are usually instructions specifying the order in which amino acids are assembled to make proteins, as well as regulatory sequences, splicing sites, non-coding introns, and other gene products. For a cell to use this information, one strand of the DNA serves as a template for the synthesis of a complementary strand of RNA. The transcribed DNA strand is called the template strand, with antisense sequence, and the mRNA transcript produced from it is said to be sense sequence (the complement of antisense). The untranscribed DNA strand, complementary to the transcribed strand, is also said to have sense sequence; it has the same sense sequence as the mRNA transcript (though T bases in DNA are substituted with U bases in RNA).
The names assigned to each strand actually depend on which direction you are writing the sequence that contains the information for proteins (the "sense" information), not on which strand is depicted as "on the top" or "on the bottom" (which is arbitrary). The only biological information that is important for labeling strands is the relative locations of the terminal 5′ phosphate group and the terminal 3′ hydroxyl group (at the ends of the strand or sequence in question), because these ends determine the direction of transcription and translation. A sequence written 5′-CGCTAT-3′ is equivalent to a sequence written 3′-TATCGC-5′ as long as the 5′ and 3′ ends are noted. If the ends are not labeled, convention is to assume that both sequences are written in the 5′-to-3′ direction. The "Watson strand" refers to 5′-to-3′ top strand (5′→3′), whereas the "Crick strand" refers to the 5′-to-3′ bottom strand (3′←5′). Both Watson and Crick strands can be either sense or antisense strands depending on the specific gene product made from them.
For example, the notation "YEL021W", an alias of the URA3 gene used in the National Center for Biotechnology Information (NCBI) database, denotes that this gene is in the 21st open reading frame (ORF) from the centromere of the left arm (L) of Yeast (Y) chromosome number V (E), and that the expression coding strand is the Watson strand (W). "YKL074C" denotes the 74th ORF to the left of the centromere of chromosome XI and that the coding strand is the Crick strand (C). The terms "plus" strand and "minus" strand are also widely used and can be a further source of confusion: whether the strand is sense (positive) or antisense (negative), the default query sequence in an NCBI BLAST alignment is labelled the "Plus" strand.
Ambisense
A single-stranded genome that is used in both positive-sense and negative-sense capacities is said to be ambisense. Some viruses have ambisense genomes. Bunyaviruses have three single-stranded RNA (ssRNA) fragments, some of them containing both positive-sense and negative-sense sections; arenaviruses are also ssRNA viruses with an ambisense genome, as they have three fragments that are mainly negative-sense except for part of the 5′ ends of the large and small segments of their genome.
Antisense RNA
An RNA sequence that is complementary to an endogenous mRNA transcript is sometimes called "antisense RNA". In other words, it is a non-coding strand complementary to the coding sequence of RNA; this is similar to negative-sense viral RNA. When mRNA forms a duplex with a complementary antisense RNA sequence, translation is blocked. This process is related to RNA interference. Cells can produce antisense RNA molecules naturally, called microRNAs, which interact with complementary mRNA molecules and inhibit their expression. The concept has also been exploited as a molecular biology technique, by artificially introducing a transgene coding for antisense RNA in order to block the expression of a gene of interest. Radioactively or fluorescently labelled antisense RNA can be used to show the level of transcription of genes in various cell types.
Some alternative antisense structural types have been experimentally applied as antisense therapy. In the United States, the Food and Drug Administration (FDA) has approved the phosphorothioate antisense oligonucleotides fomivirsen (Vitravene) and mipomersen (Kynamro) for human therapeutic use.
RNA sense in viruses
In virology, the term "sense" has a slightly different meaning. The genome of an RNA virus can be said to be either positive-sense, also known as a "plus-strand", or negative-sense, also known as a "minus-strand". In most cases, the terms "sense" and "strand" are used interchangeably, making terms such as "positive-strand" equivalent to "positive-sense", and "plus-strand" equivalent to "plus-sense". Whether a viral genome is positive-sense or negative-sense can be used as a basis for classifying viruses.
Positive-sense
Positive-sense (5′-to-3′) viral RNA signifies that a particular viral RNA sequence may be directly translated into viral proteins (e.g., those needed for viral replication). Therefore, in positive-sense RNA viruses, the viral RNA genome can be considered viral mRNA, and can be immediately translated by the host cell. Unlike negative-sense RNA, positive-sense RNA is of the same sense as mRNA. Some viruses (e.g. Coronaviridae) have positive-sense genomes that can act as mRNA and be used directly to synthesize proteins without the help of a complementary RNA intermediate. Because of this, these viruses do not need to have an RNA polymerase packaged into the virion—the RNA polymerase will be one of the first proteins produced by the host cell, since it is needed in order for the virus's genome to be replicated.
Negative-sense
Negative-sense (3′-to-5′) viral RNA is complementary to the viral mRNA, thus a positive-sense RNA must be produced by an RNA-dependent RNA polymerase from it prior to translation. Like DNA, negative-sense RNA has a nucleotide sequence complementary to the mRNA that it encodes; also like DNA, this RNA cannot be translated into protein directly. Instead, it must first be transcribed into a positive-sense RNA that acts as an mRNA. Some viruses (e.g. influenza viruses) have negative-sense genomes and so must carry an RNA polymerase inside the virion.
Antisense oligonucleotides
Gene silencing can be achieved by introducing into cells a short "antisense oligonucleotide" that is complementary to an RNA target. This experiment was first done by Zamecnik and Stephenson in 1978 and continues to be a useful approach, both for laboratory experiments and potentially for clinical applications (antisense therapy). Several viruses, such as influenza viruses, respiratory syncytial virus (RSV) and SARS coronavirus (SARS-CoV), have been targeted using antisense oligonucleotides to inhibit their replication in host cells.
If the antisense oligonucleotide contains a stretch of DNA or a DNA mimic (phosphorothioate DNA, 2′F-ANA, or others) it can recruit RNase H to degrade the target RNA. This makes the mechanism of gene silencing catalytic. Double-stranded RNA can also act as a catalytic, enzyme-dependent antisense agent through the RNAi/siRNA pathway, involving target mRNA recognition through sense-antisense strand pairing followed by target mRNA degradation by the RNA-induced silencing complex (RISC). The R1 plasmid hok/sok system provides yet another example of an enzyme-dependent antisense regulation process through enzymatic degradation of the resulting RNA duplex.
Other antisense mechanisms are not enzyme-dependent, but involve steric blocking of their target RNA (e.g. to prevent translation or to induce alternative splicing). Steric blocking antisense mechanisms often use oligonucleotides that are heavily modified. Since there is no need for RNase H recognition, this can include chemistries such as 2′-O-alkyl, peptide nucleic acid (PNA), locked nucleic acid (LNA), and Morpholino oligomers.
See also
Antisense therapy
Directionality (molecular biology)
DNA codon table
RNA virus
Transcription (genetics)
Translation (genetics)
Viral replication
References
DNA
Molecular biology
RNA
Virology | Sense (molecular biology) | [
"Chemistry",
"Biology"
] | 2,778 | [
"Biochemistry",
"Molecular biology"
] |
5,504,984 | https://en.wikipedia.org/wiki/Legal%20year | The legal year, in English law as well as in some other common law jurisdictions, is the calendar during which the judges sit in court. It is traditionally divided into periods called "terms".
Asia
Hong Kong
Hong Kong's legal year is marked as Ceremonial Opening of the Legal Year with an address by the Chief Justice of Hong Kong and begins in January.
Taiwan
The start of the legal year for courts in Taiwan is referred to as Judicial Day and marked in early January.
Europe
England
In England, the year is divided into four terms:
Michaelmas term - from October to December
Hilary term - from January to April
Easter term - from April to May
Trinity term - from June to July.
Between terms, the courts are in vacation, and no trials or appeals are heard in the High Court, Court of Appeal and Supreme Court. The legal terms apply to the High Court, Court of Appeal and Supreme Court only, and so have no application to the Crown Court, County Court, or magistrates' courts. The longest vacation period is between July and October. The dates of the terms are determined in law by a practice direction in the Civil Procedure Rules. The Hilary term was formerly from the 11th to the 31st of January, during which superior courts of England were open.
The legal year commences at the beginning of October, with a ceremony dating back to the Middle Ages in which the judges arrive in a procession from the Temple Bar to Westminster Abbey for a religious service, followed by a reception known as the Lord Chancellor's breakfast, which is held in Westminster Hall. Although in former times the judges walked the distance from Temple to Westminster, they now mostly arrive by car. The service is held by the Dean of Westminster with the reading performed by the Lord Chancellor.
The ceremony dates back to 1897 and has been held continuously since with the exception of the years 1940 to 1946 because of the Second World War and 2020 because of the COVID-19 pandemic. In 1953 it was held in St Margaret's Church because Westminster Abbey was still decorated for the Coronation of Queen Elizabeth II.
Ireland
In Ireland, the year is divided as per the English system, with identical Michaelmas, Hilary, Easter and Trinity terms. These have a Christmas, Easter, Whit and Long Vacation between them respectively. The Michaelmas term, and legal year, is opened with a service in St. Michan's Church, Dublin attended by members of the Bar and Law Society who then adjourn to a breakfast given in the King's Inns.
France
In France, a rentrée solennelle, a ceremonial sitting of the court, is held in most courts in September to swear in new judges and in January or February, to mark the start of the legal year. New judges may also be sworn in at that event. Bar associations (barreaux), especially larger ones, may also hold a rentrée solennelle, but often at a completely different time of the year to the court-organised official ceremonies, such as in November.
French courts do not sit in a formal term structure, although the practice of vacances judiciaires (legal vacations) between July and the end of August, in late December around Christmas and New Year's and, to a lesser extent, Easter, mean that courts often do not sit to hear non-urgent business during those times, creating, de facto, three legal terms each year.
North America
Canada
Courts in Canada do not have formal terms. They are open year-round but tend to be less busy over the summer months. There is a formal opening of the courts in Ontario in September.
United States
The United States Supreme Court follows part of the legal year tradition, albeit without the elaborate ceremony. The court's year-long term commences on the first Monday in October (and is simply called "October Term"), with a Red Mass the day before. The court then alternates between "sittings" and "recesses" and goes into final recess at the end of June.
Several Midwest and East Coast states and some federal courts still use the legal year and terms of court. Like the Supreme Court, the U.S. Court of Appeals for the Second Circuit has a single year-long term with designated sittings within that term, although the Second Circuit begins its term in August instead of October (hence the name "August Term"). The U.S. Tax Court divides the year into four season-based terms starting in January.
Connecticut appellate courts divide the legal year into eight terms starting in September. New York courts divide the year into 13 terms starting in January. The Georgia Court of Appeals uses a three-term year starting in January. The Illinois Supreme Court divides the year into six terms starting in January.
Several states, like Ohio and Mississippi, do not have a uniform statewide rule for terms of court, so the number of terms varies greatly from one court to the next because every single court sets forth its own terms of court in its local rules.
However, the majority of U.S. states and most federal courts have abandoned the legal year and the related concept of terms of court. Instead, they reverse the presumption. They merely mandate that the courts are to be open year-round during business hours on every day that is not Saturday, Sunday, or a legal holiday. A typical example is Rule 77(c)(1) of the Federal Rules of Civil Procedure, which states that "The clerk's office ... must be open during business hours every day except Saturdays, Sundays, and legal holidays." Furthermore, federal law states: "All courts of the United States shall be deemed always open for the purpose of filing proper papers, issuing and returning process, and making motions and orders."
References
See also
Law Terms Act 1830
Fiscal year
Further reading
External links
The legal year, term dates and sitting days 2024 and 2025 | Courts and Tribunals Judiciary
Practice Direction setting out term dates
English law
Calendars | Legal year | [
"Physics"
] | 1,214 | [
"Spacetime",
"Calendars",
"Physical quantities",
"Time"
] |
13,313,106 | https://en.wikipedia.org/wiki/Astrophysics%20and%20Space%20Science | Astrophysics and Space Science is a bimonthly peer-reviewed scientific journal covering astronomy, astrophysics, and space science and astrophysical aspects of astrobiology. It was established in 1968 and is published by Springer Science+Business Media. From 2016 to 2020, the editors-in-chief were both Prof. Elias Brinks and Prof. Jeremy Mould. Since 2020 the sole editor-in-chief is Prof. Elias Brinks. Other editors-in-chief in the past have been Zdeněk Kopal (Univ. of Manchester) (1968–1993) and Michael A. Dopita (Australian National University) (1994–2015).
Abstracting and indexing
The journal is abstracted and indexed in a number of bibliographic databases.
According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.830.
References
External links
Space science journals
Academic journals established in 1968
Springer Science+Business Media academic journals
Bimonthly journals
Astrophysics journals
English-language journals
Plasma science journals | Astrophysics and Space Science | [
"Physics",
"Astronomy"
] | 205 | [
"Plasma science journals",
"Plasma physics",
"Astronomy stubs",
"Astrophysics journals",
"Astrophysics",
"Astronomy journal stubs"
] |
13,313,910 | https://en.wikipedia.org/wiki/Sticking%20coefficient | Sticking coefficient is the term used in surface physics to describe the ratio of the number of adsorbate atoms (or molecules) that adsorb, or "stick", to a surface to the total number of atoms that impinge upon that surface during the same period of time. Sometimes the symbol Sc is used to denote this coefficient, and its value is between 1 (all impinging atoms stick) and 0 (no atoms stick). The coefficient is a function of surface temperature, surface coverage (θ) and structural details as well as the kinetic energy of the impinging particles. The original formulation was for molecules adsorbing from the gas phase and the equation was later extended to adsorption from the liquid phase by comparison with molecular dynamics simulations. For use in adsorption from liquids the equation is expressed based on solute density (molecules per volume) rather than the pressure.
Derivation
When arriving at a site of a surface, an adatom has three options. There is a probability that it will adsorb to the surface (Pa), a probability that it will migrate to another site on the surface (Pm), and a probability that it will desorb from the surface and return to the bulk gas (Pd). For an empty site (θ = 0) the sum of these three options is unity:
Pa + Pm + Pd = 1
For a site already occupied by an adatom (θ > 0), there is no probability of adsorbing, and so the probabilities sum as:
P′m + P′d = 1
where the primes denote the probabilities at an occupied site.
For the first site visited, the P of migrating overall is the P of migrating if the site is filled plus the P of migrating if the site is empty. The same is true for the P of desorption. The P of adsorption, however, does not exist for an already filled site.
The P of migrating from the second site is the P of migrating from the first site and then migrating from the second site, and so we multiply the two values.
Thus the sticking probability () is the P of sticking of the first site, plus the P of migrating from the first site and then sticking to the second site, plus the P of migrating from the second site and then sticking at the third site etc.
There is an identity we can make use of: the geometric series, Σn≥0 xⁿ = 1/(1 − x), valid for |x| < 1.
The sticking coefficient when the coverage is zero can be obtained by simply setting θ = 0. We also remember that Pa + Pm + Pd = 1 for an empty site.
If we just look at the P of migration at the first site, we see that it is certainty minus all other possibilities.
Using this result, and rearranging, we find:
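As an illustrative sketch (not part of the original derivation; the probabilities below are invented, and the primed values are simply assumed to describe occupied sites), the adsorb/migrate/desorb process described above can be simulated directly, giving a Monte Carlo estimate of the sticking probability at a given coverage θ:

```python
import random

def sticking_probability(theta, Pa, Pm, Pd, Pm_occ, Pd_occ, trials=100_000):
    """Monte Carlo estimate of the sticking probability at coverage theta.

    Pa, Pm, Pd     -- adsorb / migrate / desorb probabilities at an empty site (sum to 1)
    Pm_occ, Pd_occ -- migrate / desorb probabilities at an occupied site (sum to 1)
    """
    stuck = 0
    for _ in range(trials):
        while True:
            if random.random() < theta:        # adatom lands on an occupied site
                if random.random() < Pd_occ:
                    break                      # desorbs back into the gas phase
                # otherwise it migrates and tries another site
            else:                              # adatom lands on an empty site
                r = random.random()
                if r < Pa:
                    stuck += 1                 # adsorbs: this trial sticks
                    break
                if r < Pa + Pd:
                    break                      # desorbs without sticking
                # otherwise it migrates and tries another site
    return stuck / trials

# Illustrative numbers only: the sticking probability falls as the coverage theta rises.
for theta in (0.0, 0.3, 0.6, 0.9):
    print(theta, sticking_probability(theta, Pa=0.5, Pm=0.3, Pd=0.2, Pm_occ=0.6, Pd_occ=0.4))
```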
References
King-Ning Tu, James W. Mayer, and Leonard C. Feldman, in Electronic Thin Film Science for Electrical Engineers and Materials Scientists, Macmillan, New York, 1992, pp. 101–102.
Surface science
Materials science
Dimensionless numbers of physics | Sticking coefficient | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 566 | [
"Applied and interdisciplinary physics",
"Materials science",
"Surface science",
"Condensed matter physics",
"nan"
] |
13,314,606 | https://en.wikipedia.org/wiki/Untranslated%20region | In molecular genetics, an untranslated region (or UTR) refers to either of two sections, one on each side of a coding sequence on a strand of mRNA. If it is found on the 5' side, it is called the 5' UTR (or leader sequence), or if it is found on the 3' side, it is called the 3' UTR (or trailer sequence). mRNA is RNA that carries information from DNA to the ribosome, the site of protein synthesis (translation) within a cell. The mRNA is initially transcribed from the corresponding DNA sequence and then translated into protein. However, several regions of the mRNA are usually not translated into protein, including the 5' and 3' UTRs.
Although they are called untranslated regions, and do not form the protein-coding region of the gene, uORFs located within the 5' UTR can be translated into peptides.
The 5' UTR is upstream from the coding sequence. Within the 5' UTR is a sequence that is recognized by the ribosome which allows the ribosome to bind and initiate translation. The mechanism of translation initiation differs in prokaryotes and eukaryotes. The 3' UTR is found immediately following the translation stop codon. The 3' UTR plays a critical role in translation termination as well as post-transcriptional modification.
These often long sequences were once thought to be useless or junk mRNA that has simply accumulated over evolutionary time. However, it is now known that the untranslated region of mRNA is involved in many regulatory aspects of gene expression in eukaryotic organisms. The importance of these non-coding regions is supported by evolutionary reasoning, as natural selection would have otherwise eliminated this unusable RNA.
It is important to distinguish the 5' and 3' UTRs from other non-protein-coding RNA. Within the coding sequence of pre-mRNA, there can be found sections of RNA that will not be included in the protein product. These sections of RNA are called introns. The RNA that results from RNA splicing is a sequence of exons. The reason why introns are not considered untranslated regions is that the introns are spliced out in the process of RNA splicing. The introns are not included in the mature mRNA molecule that will undergo translation and are thus considered non-protein-coding RNA.
History
The untranslated regions of mRNA became a subject of study as early as the late 1970s, after the first mRNA molecule was fully sequenced. In 1978, the 5' UTR of the human gamma-globin mRNA was fully sequenced. In 1980, a study was conducted on the 3' UTR of the duplicated human alpha-globin genes.
Evolution
The untranslated region is seen in prokaryotes and eukaryotes, although the length and composition may vary. In prokaryotes, the 5' UTR is typically between 3 and 10 nucleotides long. In eukaryotes, the 5' UTR can be hundreds to thousands of nucleotides long. This is consistent with the higher complexity of the genomes of eukaryotes compared to prokaryotes. The 3' UTR varies in length as well. The poly-A tail is essential for keeping the mRNA from being degraded. Although there is variation in lengths of both the 5' and 3' UTR, it has been seen that the 5' UTR length is more highly conserved in evolution than the 3' UTR length.
Prokaryotes
The 5' UTR of prokaryotes consists of the Shine–Dalgarno sequence (5'-AGGAGGU-3'). This sequence is found 3-10 base pairs upstream from the initiation codon. The initiation codon is the start site of translation into protein.
Eukaryotes
The 5' UTR of eukaryotes is more complex than that of prokaryotes. It contains a Kozak consensus sequence (ACCAUGG). This sequence contains the initiation codon. The initiation codon is the start site of translation into protein.
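As a toy illustration (the sequence below is invented and the search uses only the simplified consensus quoted above), locating an AUG start codon in its Kozak-like context separates the 5' UTR from the coding sequence:

```python
import re

# Toy mRNA sequence (invented). The pattern is the simplified Kozak consensus quoted
# above, ACCAUGG, with the AUG start codon embedded in it.
mrna = "GGGACUACCAUGGCUUACGGAUCC"

match = re.search(r"ACCAUGG", mrna)
if match:
    start = match.start() + 3            # position of the AUG within the consensus
    print(f"5' UTR: {mrna[:start]}")     # everything upstream of the start codon
    print(f"start codon at index {start}: {mrna[start:start + 3]}")
```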
Links to disease
The importance of these untranslated regions of mRNA is just beginning to be understood. Various medical studies are being conducted that have found connections between mutations in untranslated regions and increased risk for developing a particular disease, such as cancer. For example, associations between polymorphisms in the HLA-G 3′UTR region and development of colorectal cancer have been discovered. Single Nucleotide Polymorphisms in the 3' UTR of another gene have also been associated with susceptibility to preterm birth. Mutations in the 3' UTR of the APP gene are related to development of cerebral amyloid angiopathy.
Further study
Through the recent study of untranslated regions, general information has been gathered about the nature and function of these elements. However, there is still much that is unknown about these regions of mRNA. Since the regulation of gene expression is critical in the proper function of cells, this is an area of study that needs to be investigated further. It is important to consider that mutations in 3' untranslated regions have the potential to alter the expression of several genes that may appear unrelated. We are only beginning to understand the links between proper untranslated region function, and disease states of cells.
See also
Atlas of UTR Regulatory Activity
Coding region
Five prime untranslated region
History of RNA biology
MiRNA
Three prime untranslated region
Upstream open reading frames (uORFs)
References
External links
UTResource
RNA
Gene expression | Untranslated region | [
"Chemistry",
"Biology"
] | 1,170 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
13,320,316 | https://en.wikipedia.org/wiki/Ozsv%C3%A1th%E2%80%93Sch%C3%BCcking%20metric | The Ozsváth–Schücking metric, or the Ozsváth–Schücking solution, is a vacuum solution of the Einstein field equations. The metric was published by István Ozsváth and Engelbert Schücking in 1962. It is noteworthy among vacuum solutions for being the first known solution that is stationary, globally defined, and singularity-free but nevertheless not isometric to the Minkowski metric. This stands in contradiction to a claimed strong Mach principle, which would forbid a vacuum solution from being anything but Minkowski without singularities, where the singularities are to be construed as mass as in the Schwarzschild metric.
With coordinates , define the following tetrad:
It is straightforward to verify that e(0) is timelike, e(1), e(2), e(3) are spacelike, that they are all orthogonal, and that there are no singularities. The corresponding proper time is
The Riemann tensor has only one algebraically independent, nonzero component
which shows that the spacetime is Ricci flat but not conformally flat. That is sufficient to conclude that it is a vacuum solution distinct from Minkowski spacetime. Under a suitable coordinate transformation, the metric can be rewritten as
and is therefore an example of a pp-wave spacetime.
References
Exact solutions in general relativity
General relativity | Ozsváth–Schücking metric | [
"Physics",
"Mathematics"
] | 283 | [
"Exact solutions in general relativity",
"Mathematical objects",
"Equations",
"General relativity",
"Theory of relativity"
] |
10,832,464 | https://en.wikipedia.org/wiki/Fineness%20ratio | In naval architecture and aerospace engineering, the fineness ratio is the ratio of the length of a body to its maximum width. Shapes that are short and wide have a low fineness ratio, those that are long and narrow have high fineness ratios. Aircraft that spend time at supersonic speeds, e.g. the Concorde, generally have high fineness ratios.
At speeds below the critical Mach number, one of the primary forms of drag is skin friction. As the name implies, this is drag caused by the interaction of the airflow with the aircraft's skin. To minimize this drag, the aircraft should be designed to minimize the exposed skin area, or "wetted surface". One solution to this problem is constructing an "egg-shaped" fuselage, for example as used on the home-built Questair Venture.
Theoretical ideal fineness ratios in subsonic aircraft fuselages are typically found at about 6:1; however, this may be compromised by other design considerations such as seating or freight size requirements. Because a higher-fineness fuselage can have reduced tail surfaces, this ideal ratio can practically be increased to about 8:1.
Most aircraft have fineness ratios significantly greater than this, however. This is often due to the competing need to place the tail control surfaces at the end of a longer moment arm to increase their effectiveness. Reducing the length of the fuselage would require larger controls, which would offset the drag savings from using the ideal fineness ratio. An example of a high-performance design with an imperfect fineness ratio is the Lancair. In other cases, the designer is forced to use a non-ideal design due to outside factors such as seating arrangements or cargo pallet sizes. Modern airliners often have fineness ratios much higher than ideal, a side effect of their cylindrical cross-section which is selected for strength, as well as providing a single width to simplify seating layout and air cargo handling.
As an aircraft approaches the speed of sound, shock waves form on areas of greater curvature. These shock waves radiate away energy that the engines must supply, energy that does not go into making the aircraft go faster. This appears as a new form of drag—referred to as wave drag—which peaks at about three times the drag seen at speeds even slightly below the critical Mach number. In order to minimize the wave drag, the curvature of the aircraft should be kept to a minimum, which implies much higher fineness ratios. This is why high-speed aircraft have long pointed noses and tails, and cockpit canopies that are flush to the fuselage line.
More technically, the best possible performance for a supersonic design is typified by two "perfect shapes", the Sears-Haack body which is pointed at both ends, or the von Kármán ogive, which has a blunt tail. Examples of the latter design include the Concorde, F-104 Starfighter and XB-70 Valkyrie, although to some degree practically every post-World War II interceptor aircraft featured such a design. The latter is mostly seen on rockets and missiles, with the blunt end being the rocket nozzle. Missile designers are even less interested in low-speed performance, and missiles generally have higher fineness ratios than most aircraft.
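For a concrete sketch of one of these "perfect shapes" (the dimensions below are invented, and the profile formula is the commonly quoted Sears–Haack radius distribution rather than anything taken from this article), the body's radius at each station follows directly from its length and fineness ratio:

```python
import numpy as np

def sears_haack_radius(x, length, fineness_ratio):
    """Radius of a Sears-Haack body at normalized station x in [0, 1].

    fineness_ratio = length / maximum diameter, so R_max = length / (2 * fineness_ratio).
    """
    r_max = length / (2.0 * fineness_ratio)
    return r_max * (4.0 * x * (1.0 - x)) ** 0.75

# Illustrative only: profile of a 60 m body with a fineness ratio of 10.
x = np.linspace(0.0, 1.0, 11)
for xi, ri in zip(x, sears_haack_radius(x, length=60.0, fineness_ratio=10.0)):
    print(f"x = {xi:4.1f}  radius = {ri:5.2f} m")
```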
The introduction of aircraft with higher fineness ratios also introduced a new form of instability, inertial coupling. As the engines and cockpit moved away from the aircraft's center of mass with the longer fuselages demanded by the high fineness ratio, the roll inertia of these masses grew to be able to overwhelm the power of the aerodynamic surfaces. A variety of methods are used to combat this effect, including oversized controls and stability augmentation systems.
References
Inline citations
General references
Form Factor
Basic Fluid Dynamics
Aerospace engineering
Aerodynamics
Engineering ratios | Fineness ratio | [
"Chemistry",
"Mathematics",
"Engineering"
] | 763 | [
"Metrics",
"Engineering ratios",
"Quantity",
"Aerodynamics",
"Aerospace engineering",
"Fluid dynamics"
] |
10,843,099 | https://en.wikipedia.org/wiki/Egor%20Popov | Egor Pavlovich Popov (; February 6, 1913 – April 19, 2001) was a structural and seismic engineer who helped transform the design of buildings, structures, and civil engineering around earthquake-prone regions.
A relative of inventor Alexander Stepanovich Popov, Egor Popov was born in Kiev, Russian Empire and after moving to the United States of America in 1927, he eventually earned a B.S. from UC Berkeley, his master's degree from MIT and his doctorate degree from Stanford in 1946.
During his career, Popov was primarily known for his research at the University of California, Berkeley. His accomplishments include working on buckling problems for NASA in Houston, Texas, involvement with the San Francisco–Oakland Bay Bridge, assisting with pipe testing for the Trans-Alaska Pipeline, and developing the steel moment-resisting frame (for resistance to earthquake forces) and eccentrically braced frames (EBFs).
Textbooks
Introduction to Mechanics of Solids, Prentice Hall, 1968.
Mechanics of Materials, 2nd ed., Prentice Hall, 1976.
Engineering Mechanics of Solids, 2nd ed., Prentice Hall, 1998.
References
Further reading
Egor Popov Connections: The EERI Oral History Series. Oakland, CA: Earthquake Engineering Research Institute. 1994. ISBN 0-943198-12-7.
1913 births
2001 deaths
American civil engineers
Earthquake engineering
Soviet emigrants to the United States
University of California, Berkeley faculty
American people of Russian descent
University of California, Berkeley alumni
Massachusetts Institute of Technology alumni
Stanford University alumni
20th-century American engineers | Egor Popov | [
"Engineering"
] | 320 | [
"Structural engineering",
"Earthquake engineering",
"Civil engineering"
] |
10,844,157 | https://en.wikipedia.org/wiki/Parasite%20Rex | Parasite Rex: Inside the Bizarre World of Nature's Most Dangerous Creatures is a nonfiction book by Carl Zimmer that was published by Free Press in 2000. The book discusses the history of parasites on Earth and how the field and study of parasitology formed, along with a look at the most dangerous parasites ever found in nature. A special paperback edition was released in March 2011 for the tenth anniversary of the book's publishing, including a new epilogue written by Zimmer. Signed bookplates were also given to fans that sent in a photo of themselves with a copy of the special edition.
The cover of Parasite Rex includes a scanning electron microscope image of a tick as the focus, along with illustrations in the centerfold of parasites and topics discussed in the book.
Content
The book begins by discussing the history of parasites in human knowledge, from the earliest writings about them in ancient cultures, up through modern times. The focus comes to rest extensively on the views and experiments conducted by scientists in the 17th, 18th, and 19th centuries, such as those done by Antonie van Leeuwenhoek, Japetus Steenstrup, Friedrich Küchenmeister, and Ray Lankester. Among them, Leeuwenhoek was the first to ever physically view cells through a microscope, Steenstrup was the first to explain and confirm the multiple stages and life cycles of parasites that are different from most other living organisms, and Küchenmeister, through his religious beliefs and his views on every creature having a place in the natural order, denied the ideas of his time and proved that all parasites are a part of active evolutionary niches and not biological dead ends by conducting morally ambiguous experiments on prisoners. Lankester is given a specific focus and repeated discussion throughout the book due to his belief that parasites are examples of degenerative evolution, especially in regards to Sacculina, and Zimmer's repeated refutation of this idea.
Several chapters are taken to discuss various types of parasites and how they infect and control their hosts, along with the biochemistry involved in their take-over or evasion of their host's immune system, eventually leading to their dispersal into their next form and life cycle. An extended time is also given on the workings of immunology and how the immune systems of living beings respond to parasite infection, along with the methods that bodily functions use to counteract and potentially kill invading microorganisms. Woven into this discussion are several specific sites that Zimmer visited during his writing of Parasite Rex and the scientists he worked with to understand different biosystems and all the parasites that live within them, including human sleeping sickness infections in Sudan from the tsetse fly, the parasites of frogs in Costa Rica, primarily showcased by filarial worms that infect humans and a variety of species, and the USDA National Parasite Collection based out of Maryland.
The final chapters focus on an overall effect parasites have had on the evolution of life and the theory that it is due to parasitic infection that sexual reproduction evolved to become dominant, in contrast to previous asexual reproduction methods, due to the increased genetic variety and thus potential parasitic resistance that this would confer. This research was showcased by W. D. Hamilton and his theories on the evolution of sex, along with the Red Queen hypothesis and the idea of an evolutionary arms race between parasites and their hosts. Zimmer then discusses a final time the wide variety of parasites that evolved to have humans as their primary hosts and our attempts through scientific advancement to eradicate them. The closing chapter considers the positive benefits of parasites and how humans have used them to improve agriculture and medical technology, but also how ill-considered usage of parasites could also destroy various habitats by having them act as invasive species. In the end, Zimmer ponders whether humanity counts as a parasite on the planet and what the effects of this relationship could be.
Style and tone
In a review for Science, Albert O. Bush noted that Zimmer writes with "clarity, conviction, and seemingly without prejudice" and that while the "purist will find the odd mistakes, oversights, and minor errors of fact", these are "insignificant" and do not detract from Parasite Rex's "overall quality or, more importantly, its focus and take-home message."
Reception
The New York Times' Kevin Padian praised the book and Zimmer's writing, saying that it showcases him as "fine a science essayist as we have" and that the importance of this book rests "not only in its accessible presentation of the new science of evolutionary parasitology but in its thoughtful treatment of the global strategies and policies that scientists, health workers and governments will have to consider in order to manage parasites in the future". Publishers Weekly called the book an "exemplary work of popular science" and one of the "most fascinating works" of its kind, while also being "its most disgusting". Margaret Henderson, writing for the Library Journal, recommended the book for placement in all libraries, saying that the book "makes parasitology interesting and accessible to anyone". Writing in the Quarterly Review of Biology, May Berenbaum describes Parasite Rex as a "remarkable book" that is "unique in its focus and is extremely readable" and earns the reviewer's "respect and recommendation" for being able to discuss the life cycles of lancet flukes and the Red Queen hypothesis properly in a single book. Joe Eaton in the Whole Earth Review categorized Parasite Rex as "one of those books that change the way you see the world" due to how it shows that ecosystems are largely made up of the parasites that the individual organisms carry. A review in The American Biology Teacher by Donald A. Lawrence labeled the book as a "splendid overview of current knowledge about parasites" and praised the extensive Notes, Literature Cited, and Index sections. The newsletter editor for the American Society of Parasitologists, Scott Lyell Gardner, congratulated the book for bringing the field of parasitology into the public view, saying that the way Zimmer "presents parasites in the “ugh” and “oooh” mode, in addition to trying to show how parasitologists actually ply our trade" helps to generate interest in the subject. BlueSci writer Harriet Allison summed up the book as one where Zimmer "manages to weave just enough easily understandable science into each chapter in order to create an engrossing and squirm-inducing story that will have you hooked until the end". Kirkus Reviews stated its acclaim for the "vivid detail" given to the lifestyles of parasites, calling the book an "eye-opening perspective on biology, ecology, and medicine" and "well worth reading".
See also
Microcosm: E. coli and the New Science of Life
Veterinary parasitology
Conservation biology of parasites
References
External links
Parasite Rex on Carl Zimmer's website
Parasite Rex on the Simon & Schuster, Publisher website
2000 non-fiction books
Biology books
Ecology books
Parasitology literature
Biochemistry literature | Parasite Rex | [
"Chemistry",
"Biology"
] | 1,442 | [
"Biochemistry",
"Biochemistry literature"
] |
10,844,989 | https://en.wikipedia.org/wiki/3-Methylfentanyl | 3-Methylfentanyl (3-MF, mefentanyl) is an opioid analgesic that is an analog of fentanyl. 3-Methylfentanyl is one of the most potent opioids, estimated to be between 400 and 6000 times stronger than morphine, depending on which isomer is used (with the cis isomers being the more potent ones).
Overview and history
3-Methylfentanyl was first discovered in 1974 and subsequently appeared on the street as an alternative to the clandestinely produced fentanyl analog α-methylfentanyl. However, it quickly became apparent that 3-methylfentanyl was much more potent than α-methylfentanyl, and correspondingly more dangerous.
While 3-methylfentanyl was initially sold on the black market for only a short time between 1984 and 1985, its high potency made it an attractive target to clandestine drug producers, as racemic 3-methylfentanyl is 10–15 times more potent than fentanyl, and so correspondingly larger amounts of cut product for street sales can be produced for an equivalent amount of effort as for producing fentanyl itself; one gram of 3-methylfentanyl might be sufficient to produce several thousand dosage units once diluted for sale. 3-Methylfentanyl has thus reappeared several times, at various places around the world.
The only country in the world with significant (200+ deaths a year, more than 10,000 addicts) abuse of this chemical is Estonia, where a dose of 3-methylfentanyl costs 10 €, and other opiates have not been generally available since the end of the 2000s. Approximately 1100 deaths from fentanyl and 3-methylfentanyl abuse were recorded in Estonia between 2005 and 2013, compared to approximately 450 deaths in Sweden, Germany, the UK, Finland and Greece combined during the same period.
Other opioid analogs even more potent than 3-methylfentanyl are known, such as carfentanil and ohmefentanyl, but these are significantly more difficult to manufacture than 3-methylfentanyl. Since 2016, fentanyl seizures in Estonia have mostly contained carfentanil or cyclopropylfentanyl.
3-Methylfentanyl has similar effects to fentanyl, but is far more potent due to increased binding affinity to its target site. Since fentanyl itself is already highly potent, 3-methylfentanyl is extremely dangerous when used recreationally, and has resulted in many deaths among recreational opioid users ingesting the drug. Side effects of fentanyl analogs are similar to those of fentanyl itself, which include itching, nausea and potentially serious respiratory depression, which can be life-threatening. Fentanyl analogs have killed hundreds of people throughout Europe and the former Soviet republics since the most recent resurgence in use began in Estonia in the early 2000s, and novel derivatives continue to appear.
Use as chemical weapon
3-Methylfentanyl was also reported by media as the identity of the anaesthetic "gas" Kolokol-1 delivered as an aerosol during the Moscow theater hostage crisis in 2002, in which many hostages died from accidental overdoses; 3-methylfentanyl was later ruled out as the primary agent used. The opiate antidote naloxone was on hand to treat the victims of the crisis, but, whether due to their incarceration, lack of food, water, or sleep, or due to the novel nature of the still-unconfirmed compound used, acute symptoms continued to develop, resulting in many fatalities despite the administration of naloxone.
Synthesis
A number of methods for synthesis have been published. The most recent is probably the method published by the Serbian Chemical Society (2004).
There is another method for constructing the N-benzyl-3-methyl-4-piperidone in a two-stage Michael reaction, followed by the usual Dieckmann cyclization.
See also
3-Methylbutyrfentanyl
4-Fluorofentanyl
α-Methylfentanyl
Acetylfentanyl
Butyrfentanyl
List of fentanyl analogues
References
Synthetic opioids
Piperidines
Propionamides
Anilides
Mu-opioid receptor agonists
Designer drugs
Chemical weapons | 3-Methylfentanyl | [
"Chemistry",
"Biology"
] | 828 | [
"Biochemistry",
"Chemical accident",
"Chemical weapons"
] |
14,476,022 | https://en.wikipedia.org/wiki/Pfeiffer%20effect | The Pfeiffer effect is an optical phenomenon whereby the presence of an optically active compound influences the optical rotation of a racemic mixture of a second compound.
Racemic mixtures do not rotate plane polarized light, but the equilibrium concentration of the two enantiomers can shift from unity in the presence of a strongly interacting chiral species. Paul Pfeiffer, a student of Alfred Werner and inventor of the salen ligand, reported this phenomenon. The first example of the effect is credited to Eligio Perucca, who observed optical rotations in the visible part of the spectrum when crystals of sodium chlorate, which are chiral and colourless, were stained with a racemic dye. The effect is attributed to the interaction of the optically pure substance with the second coordination sphere of the racemate.
References
Polarization (waves)
Stereochemistry
Transition metals
Coordination chemistry | Pfeiffer effect | [
"Physics",
"Chemistry"
] | 182 | [
"Stereochemistry",
"Coordination chemistry",
"Astrophysics",
"Space",
"Stereochemistry stubs",
"nan",
"Spacetime",
"Polarization (waves)"
] |
14,476,384 | https://en.wikipedia.org/wiki/Mass%20versus%20weight | In common usage, the mass of an object is often referred to as its weight, though these are in fact different concepts and quantities. Nevertheless, one object will always weigh more than another with less mass if both are subject to the same gravity (i.e. the same gravitational field strength).
In scientific contexts, mass is the amount of "matter" in an object (though "matter" may be difficult to define), but weight is the force exerted on an object's matter by gravity. At the Earth's surface, an object whose mass is exactly one kilogram weighs approximately 9.81 newtons, the product of its mass and the gravitational field strength there. The object's weight is less on Mars, where gravity is weaker; more on Saturn, where gravity is stronger; and very small in space, far from significant sources of gravity, but it always has the same mass.
Material objects at the surface of the Earth have weight, even though that weight is sometimes difficult to measure. An object floating freely on water, for example, does not appear to have weight since it is buoyed by the water. But its weight can be measured if it is added to water in a container which is entirely supported by and weighed on a scale. Thus, the "weightless object" floating in water actually transfers its weight to the bottom of the container (where the pressure increases). Similarly, a balloon has mass but may appear to have no weight or even negative weight, due to buoyancy in air. However, the weight of the balloon and the gas inside it has merely been transferred to a large area of the Earth's surface, making the weight difficult to measure. The weight of a flying airplane is similarly distributed to the ground, but does not disappear. If the airplane is in level flight, the same weight-force is distributed to the surface of the Earth as when the plane was on the runway, but spread over a larger area.
A better scientific definition of mass is its description as being a measure of inertia, which is the tendency of an object to not change its current state of motion (to remain at constant velocity) unless acted on by an external unbalanced force. Gravitational "weight" is the force created when a mass is acted upon by a gravitational field and the object is not allowed to free-fall, but is supported or retarded by a mechanical force, such as the surface of a planet. Such a force constitutes weight. This force can be added to by any other kind of force.
While the weight of an object varies in proportion to the strength of the gravitational field, its mass is constant, as long as no energy or matter is added to the object. For example, although a satellite in orbit (essentially a free-fall) is "weightless", it still retains its mass and inertia. Accordingly, even in orbit, an astronaut trying to accelerate the satellite in any direction is still required to exert force, and needs to exert ten times as much force to accelerate a 10-ton satellite at the same rate as one with a mass of only 1 ton.
Overview
Mass is (among other properties) an inertial property; that is, the tendency of an object to remain at constant velocity unless acted upon by an outside force. Under Sir Isaac Newton's centuries-old laws of motion and an important formula that sprang from his work, F = ma, an object with a mass, m, of one kilogram accelerates, a, at one meter per second per second (about one-tenth the acceleration due to Earth's gravity) when acted upon by a force, F, of one newton.
Inertia is seen when a bowling ball is pushed horizontally on a level, smooth surface, and continues in horizontal motion. This is quite distinct from its weight, which is the downwards gravitational force of the bowling ball one must counter when holding it off the floor. The weight of the bowling ball on the Moon would be one-sixth of that on the Earth, although its mass remains unchanged. Consequently, whenever the physics of recoil kinetics (mass, velocity, inertia, inelastic and elastic collisions) dominate and the influence of gravity is a negligible factor, the behavior of objects remains consistent even where gravity is relatively weak. For instance, billiard balls on a billiard table would scatter and recoil with the same speeds and energies after a break shot on the Moon as on Earth; they would, however, drop into the pockets much more slowly.
In the physical sciences, the terms "mass" and "weight" are rigidly defined as separate measures, as they are different physical properties. In everyday use, as all everyday objects have both mass and weight and one is almost exactly proportional to the other, "weight" often serves to describe both properties, its meaning being dependent upon context. For example, in retail commerce, the "net weight" of products actually refers to mass, and is expressed in mass units such as grams or ounces (see also Pound: Use in commerce). Conversely, the load index rating on automobile tires, which specifies the maximum structural load for a tire in kilograms, refers to weight; that is, the force due to gravity. Before the late 20th century, the distinction between the two was not strictly applied in technical writing, so that expressions such as "molecular weight" (for molecular mass) are still seen.
Because mass and weight are separate quantities, they have different units of measure. In the International System of Units (SI), the kilogram is the basic unit of mass, and the newton is the basic unit of force. The non-SI kilogram-force is also a unit of force typically used in the measure of weight. Similarly, the avoirdupois pound, used in both the Imperial system and U.S. customary units, is a unit of mass, and its related unit of force is the pound-force.
Converting units of mass to equivalent forces on Earth
When an object's weight (its gravitational force) is expressed in "kilograms", this actually refers to the kilogram-force (kgf or kg-f), also known as the kilopond (kp), which is a non-SI unit of force. All objects on the Earth's surface are subject to a gravitational acceleration of approximately 9.8 m/s2. The General Conference on Weights and Measures fixed the value of standard gravity at precisely 9.80665 m/s2 so that disciplines such as metrology would have a standard value for converting units of defined mass into defined forces and pressures. Thus the kilogram-force is defined as precisely 9.80665 newtons. In reality, gravitational acceleration (symbol: g) varies slightly with latitude, elevation and subsurface density; these variations are typically only a few tenths of a percent. See also Gravimetry.
Engineers and scientists understand the distinctions between mass, force, and weight. Engineers in disciplines involving weight loading (force on a structure due to gravity), such as structural engineering, convert the mass of objects like concrete and automobiles (expressed in kilograms) to a force in newtons (by multiplying by some factor around 9.8; 2 significant figures is usually sufficient for such calculations) to derive the load of the object. Material properties like elastic modulus are measured and published in terms of the newton and pascal (a unit of pressure related to the newton).
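As a minimal sketch of the conversion described above (function and variable names are arbitrary; 9.80665 m/s² is the standard-gravity value fixed by the General Conference on Weights and Measures, and the Mars value is approximate):

```python
STANDARD_GRAVITY = 9.80665  # m/s^2, fixed by the General Conference on Weights and Measures

def weight_newtons(mass_kg, g=STANDARD_GRAVITY):
    """Gravitational force (weight) in newtons on a mass given in kilograms."""
    return mass_kg * g

# A 1 kg object weighs about 9.81 N on Earth; the same mass weighs less on Mars (g ~ 3.71 m/s^2),
# illustrating that weight changes with the local field while the mass does not.
print(weight_newtons(1.0))          # 9.80665
print(weight_newtons(1.0, g=3.71))  # ~3.71
```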
Buoyancy and weight
Usually, the relationship between mass and weight on Earth is highly proportional; objects that are a hundred times more massive than a one-liter bottle of soda almost always weigh a hundred times more—approximately 1,000 newtons, which is the weight one would expect on Earth from an object with a mass slightly greater than 100 kilograms. Yet, this is not always the case and there are familiar objects that violate this proportionality.
A common helium-filled toy balloon is something familiar to many. When such a balloon is fully filled with helium, it has buoyancy—a force that opposes gravity. When a toy balloon becomes partially deflated, it often becomes neutrally buoyant and can float about the house a meter or two off the floor. In such a state, there are moments when the balloon is neither rising nor falling and—in the sense that a scale placed under it has no force applied to it—is, in a sense perfectly weightless (actually as noted below, weight has merely been redistributed along the Earth's surface so it cannot be measured). Though the rubber comprising the balloon has a mass of only a few grams, which might be almost unnoticeable, the rubber still retains all its mass when inflated.
Again, unlike the effect that low-gravity environments have on weight, buoyancy does not make a portion of an object's weight vanish; the missing weight is instead being borne by the ground, which leaves less force (weight) being applied to any scale theoretically placed underneath the object in question (though one may perhaps have some trouble with the practical aspects of accurately weighing something individually in that condition). If one were however to weigh a small wading pool that someone then entered and began floating in, they would find that the full weight of the person was being borne by the pool and, ultimately, the scale underneath the pool. Whereas a buoyant object (on a properly working scale for weighing buoyant objects) would weigh less, the object/fluid system becomes heavier by the value of object's full mass once the object is added. Since air is a fluid, this principle applies to object/air systems as well; large volumes of air—and ultimately the ground—supports the weight a body loses through mid-air buoyancy.
The effects of buoyancy do not just affect balloons; both liquids and gases are fluids in the physical sciences, and when all macrosize objects larger than dust particles are immersed in fluids on Earth, they have some degree of buoyancy. In the case of either a swimmer floating in a pool or a balloon floating in air, buoyancy can fully counter the gravitational weight of the object being weighed, for a weighing device in the pool. However, as noted, an object supported by a fluid is fundamentally no different from an object supported by a sling or cable—the weight has merely been transferred to another location, not made to disappear.
The mass of "weightless" (neutrally buoyant) balloons can be better appreciated with much larger hot air balloons. Although no effort is required to counter their weight when they are hovering over the ground (when they can often be within one hundred newtons of zero weight), the inertia associated with their appreciable mass of several hundred kilograms or more can knock fully grown men off their feet when the balloon's basket is moving horizontally over the ground.
Buoyancy and the resultant reduction in the downward force of objects being weighed underlies Archimedes' principle, which states that the buoyancy force is equal to the weight of the fluid that the object displaces. If this fluid is air, the force may be small.
Buoyancy effects of air on measurement
Normally, the effect of air buoyancy on objects of normal density is too small to be of any consequence in day-to-day activities. For instance, buoyancy's diminishing effect upon one's body weight (a relatively low-density object) is that of gravity (for pure water it is about that of gravity). Furthermore, variations in barometric pressure rarely affect a person's weight more than ±1 part in 30,000. However, in metrology (the science of measurement), the precision mass standards for calibrating laboratory scales and balances are manufactured with such accuracy that air density is accounted for to compensate for buoyancy effects. Given the extremely high cost of platinum-iridium mass standards like the international prototype of the kilogram (the mass standard in France that defined the magnitude of the kilogram), high-quality "working" standards are made of special stainless steel alloys with densities of about 8,000 kg/m3, which occupy greater volume than those made of platinum-iridium, which have a density of about 21,550 kg/m3. For convenience, a standard value of buoyancy relative to stainless steel was developed for metrology work and this results in the term "conventional mass". Conventional mass is defined as follows: "For a mass at 20 °C, ‘conventional mass’ is the mass of a reference standard of density 8,000 kg/m3 which it balances in air with a density of 1.2 kg/m3." The effect is a small one, 150 ppm for stainless steel mass standards, but the appropriate corrections are made during the manufacture of all precision mass standards so they have the true labeled mass.
Whenever a high-precision scale (or balance) in routine laboratory use is calibrated using stainless steel standards, the scale is actually being calibrated to conventional mass; that is, true mass minus 150 ppm of buoyancy. Since objects with precisely the same mass but with different densities displace different volumes and therefore have different buoyancies and weights, any object measured on this scale (compared to a stainless steel mass standard) has its conventional mass measured; that is, its true mass minus an unknown degree of buoyancy. In high-accuracy work, the volume of the article can be measured to mathematically null the effect of buoyancy.
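A minimal sketch of that correction (assuming the reference values quoted above — an 8,000 kg/m³ reference standard and 1.2 kg/m³ air — and the standard force-balance formula, which is not spelled out in the text):

```python
def true_mass(reference_mass_kg, sample_density, reference_density=8000.0, air_density=1.2):
    """Mass of a sample that balances a reference standard in air (buoyancy-corrected).

    Balancing in air equates the net downward forces, m * g * (1 - rho_air / rho), so
    m_sample = m_ref * (1 - rho_air / rho_ref) / (1 - rho_air / rho_sample).
    """
    return reference_mass_kg * (1 - air_density / reference_density) / (1 - air_density / sample_density)

# A 1 kg stainless-steel reference balanced against water (1000 kg/m^3): the water's true mass
# differs from 1 kg by roughly 0.1%, which is why precision metrology corrects for buoyancy.
print(true_mass(1.0, sample_density=1000.0))
```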
Types of scales and what they measure
When one stands on a balance-beam-type scale at a doctor’s office, they are having their mass measured directly. This is because balances ("dual-pan" mass comparators) compare the gravitational force exerted on the person on the platform with that on the sliding counterweights on the beams; gravity is the force-generating mechanism that allows the needle to diverge from the "balanced" (null) point. These balances could be moved from Earth's equator to the poles and give exactly the same measurement, i.e. they would not spuriously indicate that the doctor's patient became 0.3% heavier; they are immune to the gravity-countering centrifugal force due to Earth's rotation about its axis. But if one steps onto spring-based or digital load cell-based scales (single-pan devices), one is having one's weight (gravitational force) measured; and variations in the strength of the gravitational field affect the reading. In practice, when such scales are used in commerce or hospitals, they are often adjusted on-site and certified on that basis, so that the mass they measure, expressed in pounds or kilograms, is at the desired level of accuracy.
Use in United States commerce
In the United States of America the United States Department of Commerce, the Technology Administration, and the National Institute of Standards and Technology (NIST) have defined the use of mass and weight in the exchange of goods under the Uniform Laws and Regulations in the areas of legal metrology and engine fuel quality in NIST Handbook 130:
K. "Mass" and "Weight" [See Section K. NOTE]
The mass of an object is a measure of the object’s inertial property, or the amount of matter it contains. The weight of an object is a measure of the force exerted on the object by gravity, or the force needed to support it. The pull of gravity on the earth gives an object a downward acceleration of about 9.8 m/s2. In trade and commerce and everyday use, the term "weight" is often used as a synonym for "mass". The "net mass" or "net weight" declared on a label indicates that the package contains a specific amount of commodity exclusive of wrapping materials. The use of the term "mass" is predominant throughout the world, and is becoming increasingly common in the United States.
(Added 1993)
Section K. NOTE: When used in this law (or regulation), the term "weight" means "mass". (see paragraphs K. "Mass" and "Weight" and L. Use of the Terms "Mass" and "Weight" in Section I. Introduction of NIST Handbook 130 for an explanation of these terms.)
(Note Added 1993)
L. Use of the Terms "Mass" and "Weight" [See Section K. NOTE]
When used in this handbook, the term "weight" means "mass". The term "weight" appears when inch-pound units are cited, or when both inch-pound and SI units are included in a requirement. The terms "mass" or "masses" are used when only SI units are cited in a requirement. The following note appears where the term "weight" is first used in a law or regulation.
U.S. federal law, which supersedes this handbook, also defines weight, particularly Net Weight, in terms of the avoirdupois pound or mass pound. From 21 CFR § 101.105 Declaration of net quantity of contents when exempt:
(a) The principal display panel of a food in package form shall bear a declaration of the net quantity of contents. This shall be expressed in the terms of weight, measure, numerical count, or a combination of numerical count and weight or measure. The statement shall be in terms of fluid measure if the food is liquid, or in terms of weight if the food is solid, semisolid, or viscous, or a mixture of solid and liquid; except that such statement may be in terms of dry measure if the food is a fresh fruit, fresh vegetable, or other dry commodity that is customarily sold by dry measure. If there is a firmly established general consumer usage and trade custom of declaring the contents of a liquid by weight, or a solid, semisolid, or viscous product by fluid measure, it may be used. Whenever the Commissioner determines that an existing practice of declaring net quantity of contents by weight, measure, numerical count, or a combination in the case of a specific packaged food does not facilitate value comparisons by consumers and offers opportunity for consumer confusion, he will by regulation designate the appropriate term or terms to be used for such commodity.
(b)(1) Statements of weight shall be in terms of avoirdupois pound and ounce.
See also 21 CFR § 201.51 – Declaration of net quantity of contents for general labeling and prescription labeling requirements.
See also
Apparent weight
Gravimeter
Pound (force)
References
Concepts in physics
Mass
Force
Conceptual distinctions | Mass versus weight | [
"Physics",
"Mathematics"
] | 3,818 | [
"Scalar physical quantities",
"Force",
"Physical quantities",
"Quantity",
"Mass",
"Classical mechanics",
"Size",
"nan",
"Wikipedia categories named after physical quantities",
"Matter"
] |
14,477,915 | https://en.wikipedia.org/wiki/CHRNA7 | Neuronal acetylcholine receptor subunit alpha-7, also known as nAChRα7, is a protein that in humans is encoded by the CHRNA7 gene. The protein encoded by this gene is a subunit of certain nicotinic acetylcholine receptors (nAchR).
Function
The nicotinic acetylcholine receptors (nAChRs) are members of a superfamily of ligand-gated ion channels that mediate fast signal transmission at synapses. The nAChRs are thought to be hetero-pentamers composed of homologous subunits. The proposed structure for each subunit is a conserved N-terminal extracellular domain followed by three conserved transmembrane domains, a variable cytoplasmic loop, a fourth conserved transmembrane domain, and a short C-terminal extracellular region. The protein encoded by this gene forms a homo-oligomeric channel, displays marked permeability to calcium ions and is a major component of brain nicotinic receptors that are blocked by, and highly sensitive to, alpha-bungarotoxin. Once this receptor binds acetylcholine, it undergoes an extensive change in conformation that affects all subunits and leads to opening of an ion-conducting channel across the plasma membrane. This gene is located in a region identified as a major susceptibility locus for juvenile myoclonic epilepsy and a chromosomal location involved in the genetic transmission of schizophrenia. An evolutionarily recent partial duplication event in this region results in a hybrid containing sequence from this gene and a novel FAM7A gene.
Disruption of alpha-7 nicotinic receptors in schizophrenia is believed to contribute at least in part to the abnormally high prevalence of extremely heavy smoking in those affected by the disease. This observed particularly high nicotine intake compared to the average smoker is hypothesized to be a subconscious effort to activate the low-affinity alpha-7 receptors.
Interactions
CHRNA7 has been shown to interact with FYN.
Gene expression
The CHRNA7 gene is primarily expressed in the posterior amygdalar nucleus and the field CA3 of Ammon's horn in the mouse, and in the mammillary body in humans. Gene expression patterns are mapped in the Allen Brain Atlases.
See also
Alpha-7 nicotinic receptor
Nicotinic acetylcholine receptor
Acetylcholine receptor
References
Further reading
External links
Ion channels
Nicotinic acetylcholine receptors | CHRNA7 | [
"Chemistry"
] | 519 | [
"Neurochemistry",
"Ion channels"
] |
14,479,902 | https://en.wikipedia.org/wiki/Monogenic%20system | In classical mechanics, a physical system is termed a monogenic system if the force acting on the system can be modelled in a particular, especially convenient mathematical form. The systems that are typically studied in physics are monogenic. The term was introduced by Cornelius Lanczos in his book The Variational Principles of Mechanics (1970).
In Lagrangian mechanics, the property of being monogenic is a necessary condition for certain different formulations to be mathematically equivalent. If a physical system is both a holonomic system and a monogenic system, then it is possible to derive Lagrange's equations from d'Alembert's principle; it is also possible to derive Lagrange's equations from Hamilton's principle.
Mathematical definition
In a physical system, if all forces, with the exception of the constraint forces, are derivable from the generalized scalar potential, and this generalized scalar potential is a function of generalized coordinates, generalized velocities, or time, then, this system is a monogenic system.
Expressed using equations, the exact relationship between the generalized force Q_j and the generalized potential U is as follows:
Q_j = −∂U/∂q_j + d/dt (∂U/∂q̇_j)
where q_j is the generalized coordinate, q̇_j is the generalized velocity, and t is time.
If the generalized potential in a monogenic system depends only on generalized coordinates, and not on generalized velocities and time, then this system is a conservative system. The relationship between generalized force and generalized potential then reduces to
Q_j = −∂U/∂q_j
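As a standard textbook illustration (this example is not taken from the article above; φ and A denote the scalar and vector potentials), a charged particle in an electromagnetic field is monogenic but not conservative, because its generalized potential depends on velocity:

```latex
% Velocity-dependent (monogenic) generalized potential for a charge q:
U(\mathbf{r}, \dot{\mathbf{r}}, t) = q\,\phi(\mathbf{r}, t) - q\,\dot{\mathbf{r}} \cdot \mathbf{A}(\mathbf{r}, t)

% Substituting U into
%   Q_j = -\frac{\partial U}{\partial q_j} + \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial U}{\partial \dot{q}_j}
% recovers the Lorentz force  \mathbf{F} = q\,(\mathbf{E} + \dot{\mathbf{r}} \times \mathbf{B}),
% so the system is monogenic but, since U depends on velocity, not conservative.
```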
See also
Scleronomous
References
Mechanics
Classical mechanics
Lagrangian mechanics
Hamiltonian mechanics
Dynamical systems | Monogenic system | [
"Physics",
"Mathematics",
"Engineering"
] | 313 | [
"Theoretical physics",
"Lagrangian mechanics",
"Classical mechanics",
"Hamiltonian mechanics",
"Mechanics",
"Mechanical engineering",
"Dynamical systems"
] |
14,481,648 | https://en.wikipedia.org/wiki/Binder%20parameter | The Binder parameter or Binder cumulant in statistical physics, also known as the fourth-order cumulant U_4 = 1 − ⟨s⁴⟩/(3⟨s²⟩²), is defined from the kurtosis of the order parameter, s. It was introduced by the Austrian theoretical physicist Kurt Binder and is frequently used to determine phase transition points accurately in numerical simulations of various models.
The phase transition point is usually identified by comparing the behavior of U_4 as a function of the temperature T for different values of the system size L. The transition temperature is the unique point where the different curves cross in the thermodynamic limit. This behavior is based on the fact that in the critical region, T ≈ T_c, the Binder parameter follows the finite-size-scaling form U_4 ≈ F((T − T_c) L^(1/ν)), where ν is the critical exponent of the correlation length, so that at T = T_c the cumulant becomes independent of L.
Accordingly, the cumulant may also be used to identify the universality class of the transition by determining the value of the critical exponent of the correlation length.
In the thermodynamic limit, at the critical point, the value of the Binder parameter depends on boundary conditions, the shape of the system, and anisotropy of correlations.
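As a minimal computational sketch (not from the article; the sampling below is synthetic and only illustrates the definition), the cumulant can be estimated directly from Monte Carlo samples of the order parameter; the two prints show the limiting values U_4 → 0 for a Gaussian-distributed (disordered) order parameter and U_4 → 2/3 for a sharply two-valued (ordered) one:

```python
import numpy as np

def binder_cumulant(samples):
    """U_4 = 1 - <s^4> / (3 <s^2>^2) computed from samples of the order parameter s."""
    s = np.asarray(samples)
    s2 = np.mean(s ** 2)
    s4 = np.mean(s ** 4)
    return 1.0 - s4 / (3.0 * s2 ** 2)

rng = np.random.default_rng(0)

# Gaussian-distributed order parameter (disordered phase): U_4 -> 0.
print(binder_cumulant(rng.normal(0.0, 1.0, 100_000)))
# Sharply two-valued order parameter +/- m (ordered phase): U_4 -> 2/3.
print(binder_cumulant(rng.choice([-1.0, 1.0], 100_000)))
```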
References
Statistical mechanics | Binder parameter | [
"Physics"
] | 212 | [
"Statistical mechanics stubs",
"Statistical mechanics"
] |
14,484,306 | https://en.wikipedia.org/wiki/Proof%20mining | In proof theory, a branch of mathematical logic, proof mining (or proof unwinding) is a research program that studies or analyzes formalized proofs, especially in analysis, to obtain explicit bounds, ranges or rates of convergence from proofs that, when expressed in natural language, appear to be nonconstructive.
This research has led to improved results in analysis obtained from the analysis of classical proofs.
References
Further reading
Ulrich Kohlenbach and Paulo Oliva, "Proof Mining: A systematic way of analysing proofs in mathematics", Proc. Steklov Inst. Math, 242:136–164, 2003
Paulo Oliva, "Proof Mining in Subsystems of Analysis", BRICS PhD thesis citeseer
Proof theory | Proof mining | [
"Mathematics"
] | 156 | [
"Mathematical logic stubs",
"Mathematical logic",
"Proof theory"
] |
14,485,857 | https://en.wikipedia.org/wiki/Taft%20equation | The Taft equation is a linear free energy relationship (LFER) used in physical organic chemistry in the study of reaction mechanisms and in the development of quantitative structure–activity relationships for organic compounds. It was developed by Robert W. Taft in 1952 as a modification to the Hammett equation. While the Hammett equation accounts for how field, inductive, and resonance effects influence reaction rates, the Taft equation also describes the steric effects of a substituent. The Taft equation is written as:
log(ks/kCH3) = ρ*σ* + δEs
where log(ks/kCH3) is the ratio of the rate of the substituted reaction compared to the reference reaction, ρ* is the sensitivity factor for the reaction to polar effects, σ* is the polar substituent constant that describes the field and inductive effects of the substituent, δ is the sensitivity factor for the reaction to steric effects, and Es is the steric substituent constant.
Polar substituent constants, σ*
Polar substituent constants describe the way a substituent will influence a reaction through polar (inductive, field, and resonance) effects. To determine σ*, Taft studied the hydrolysis of methyl esters (RCOOMe). The use of ester hydrolysis rates to study polar effects was first suggested by Ingold in 1930. The hydrolysis of esters can occur through either acid- or base-catalyzed mechanisms, both of which proceed through a tetrahedral intermediate. In the base-catalyzed mechanism the reactant goes from a neutral species to a negatively charged intermediate in the rate-determining (slow) step, while in the acid-catalyzed mechanism a positively charged reactant goes to a positively charged intermediate.
Due to the similar tetrahedral intermediates, Taft proposed that under identical conditions any steric factors should be nearly the same for the two mechanisms and therefore would not influence the ratio of the rates. However, because of the difference in charge buildup in the rate determining steps it was proposed that polar effects would only influence the reaction rate of the base catalyzed reaction since a new charge was formed. He defined the polar substituent constant σ* as:
σ* = (1/(2.48ρ*)) [log(ks/kCH3)B − log(ks/kCH3)A]
where log(ks/kCH3)B is the ratio of the rate of the base catalyzed reaction compared to the reference reaction, log(ks/kCH3)A is the ratio of the rate of the acid catalyzed reaction compared to the reference reaction, and ρ* is a reaction constant that describes the sensitivity of the reaction series. For the definition reaction series, ρ* was set to 1 and R = methyl was defined as the reference reaction (σ* = zero). The factor of 1/2.48 is included to make σ* similar in magnitude to the Hammett σ values.
Steric substituent constants, Es
Although the acid catalyzed and base catalyzed hydrolysis of esters gives transition states for the rate determining steps that have differing charge densities, their structures differ only by two hydrogen atoms. Taft thus assumed that steric effects would influence both reaction mechanisms equally. Due to this, the steric substituent constant Es was determined from solely the acid catalyzed reaction, as this would not include polar effects. Es was defined as:
Es = log(ks/kCH3)A
where ks is the rate of the studied reaction and kCH3 is the rate of the reference reaction (R = methyl). δ is a reaction constant that describes the susceptibility of a reaction series to steric effects. For the definition reaction series, δ was set to 1 and Es for the reference reaction was set to zero. This equation is combined with the equation for σ* to give the full Taft equation.
From comparing the Es values for methyl, ethyl, isopropyl, and tert-butyl, it is seen that the magnitude of Es (the value becomes more negative) increases with increasing steric bulk. However, because context will have an effect on steric interactions some Es values can be larger or smaller than expected. For example, the value for phenyl is much larger in magnitude than that for tert-butyl. When comparing these groups using another measure of steric bulk, axial strain (A) values, the tert-butyl group is larger.
Other steric parameters for LFERs
In addition to Taft's steric parameter Es, other steric parameters that are independent of kinetic data have been defined. Charton has defined values v that are derived from van der Waals radii. Using molecular mechanics, Meyers has defined Va values that are derived from the volume of the portion of the substituent that is within 0.3 nm of the reaction center.
Sensitivity factors
Polar sensitivity factor, ρ*
Similar to ρ values for Hammett plots, the polar sensitivity factor ρ* for Taft plots will describe the susceptibility of a reaction series to polar effects. When the steric effects of substituents do not significantly influence the reaction rate the Taft equation simplifies to a form of the Hammett equation:

log(ks/kCH3) = ρ*σ*

The polar sensitivity factor ρ* can be obtained by plotting the logarithm of the ratio of the measured reaction rates (ks) to the reference rate (kCH3), log(ks/kCH3), versus the σ* values for the substituents. This plot will give a straight line with a slope equal to ρ* (a least-squares fitting sketch is given after the list below). Similar to the Hammett ρ value:
If ρ* > 1, the reaction accumulates negative charge in the transition state and is accelerated by electron withdrawing groups.
If 1 > ρ* > 0, negative charge is built up and the reaction is mildly sensitive to polar effects.
If ρ* = 0, the reaction is not influenced by polar effects.
If 0 > ρ* > −1, positive charge is built up and the reaction is mildly sensitive to polar effects.
If −1 > ρ*, the reaction accumulates positive charge and is accelerated by electron donating groups.
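The fit referred to above amounts to a simple linear regression of log(ks/kCH3) against σ*. A minimal sketch with made-up data follows; the slope recovered by the fit is taken as ρ*.

```python
import numpy as np

# Illustrative sigma* values and measured log(ks/kCH3) for a hypothetical reaction series
sigma_star  = np.array([0.00, -0.10, 0.49, 0.60, 1.05])
log_k_ratio = np.array([0.02, -0.18, 0.95, 1.25, 2.10])

# Straight-line fit: the slope is rho*, the intercept should be close to zero
rho_star, intercept = np.polyfit(sigma_star, log_k_ratio, 1)
print(f"rho* = {rho_star:.2f}, intercept = {intercept:.2f}")
```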
Steric sensitivity factor, δ
Similar to the polar sensitivity factor, the steric sensitivity factor δ for a new reaction series will describe to what magnitude the reaction rate is influenced by steric effects. When a reaction series is not significantly influenced by polar effects, the Taft equation reduces to:

log(ks/kCH3) = δEs

A plot of the logarithm of the rate ratio, log(ks/kCH3), versus the Es value for the substituent will give a straight line with a slope equal to δ. Similarly to the Hammett ρ value, the magnitude of δ will reflect to what extent a reaction is influenced by steric effects:
A very steep slope will correspond to high steric sensitivity, while a shallow slope will correspond to little to no sensitivity.
Since Es values are large and negative for bulkier substituents, it follows that:
If δ is positive, increasing steric bulk decreases the reaction rate and steric effects are greater in the transition state.
If δ is negative, increasing steric bulk increases the reaction rate and steric effects are lessened in the transition state.
Reactions influenced by polar and steric effects
When both steric and polar effects influence the reaction rate the Taft equation can be solved for both ρ* and δ through the use of standard least squares methods for determining a bivariate regression plane. Taft outlined the application of this method to solving the Taft equation in a 1957 paper.
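A sketch of that two-parameter fit: log(ks/kCH3) is regressed simultaneously on σ* and Es, and ordinary least squares returns ρ* and δ as the two plane coefficients. The substituent data below are illustrative, not values from Taft's paper.

```python
import numpy as np

# Illustrative substituent data (one row per substituent)
sigma_star  = np.array([0.00, -0.10, 0.49, 0.60, 1.05])
e_s         = np.array([0.00, -0.07, -0.47, -0.36, -1.54])
log_k_ratio = np.array([0.00, -0.20, 0.65, 0.75, 0.60])

# Design matrix [sigma*, Es]; the least-squares solution gives rho* and delta
design = np.column_stack([sigma_star, e_s])
(rho_star, delta), *_ = np.linalg.lstsq(design, log_k_ratio, rcond=None)
print(f"rho* = {rho_star:.2f}, delta = {delta:.2f}")
```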
Taft plots in QSAR
The Taft equation is often employed in biological chemistry and medicinal chemistry for the development of quantitative structure–activity relationships (QSARs). In a recent example, Sandri and co-workers have used Taft plots in studies of polar effects in the aminolysis of β-lactams. They have looked at the binding of β-lactams to a poly(ethyleneimine) polymer, which functions as a simple mimic for human serum albumin (HSA). The formation of a covalent bond between penicillins and HSA as a result of aminolysis with lysine residues is believed to be involved in penicillin allergies. As a part of their mechanistic studies Sandri and co-workers plotted the rate of aminolysis versus calculated σ* values for 6 penicillins and found no correlation, suggesting that the rate is influenced by other effects in addition to polar and steric effects.
See also
Free-energy relationship
Hammett equation
Quantitative structure–activity relationship
References
Physical organic chemistry
Equations | Taft equation | [
"Chemistry",
"Mathematics"
] | 1,694 | [
"Equations",
"Mathematical objects",
"Physical organic chemistry"
] |
1,527,574 | https://en.wikipedia.org/wiki/Rotamer | In chemistry, rotamers are chemical species that differ from one another primarily due to rotations about one or more single bonds. Various arrangements of atoms in a molecule that differ by rotation about single bonds can also be referred to as different conformations. Conformers/rotamers differ little in their energies, so they are almost never separable in a practical sense. Rotations about single bonds are subject to small energy barriers. When the time scale for interconversion is long enough for isolation of individual rotamers (usually arbitrarily defined as a half-life of interconversion of 1000 seconds or longer), the species are termed atropisomers (see: atropisomerism). The ring-flip of substituted cyclohexanes is a common example of the interconversion of conformers.
The study of the energetics of bond rotation is referred to as conformational analysis. In some cases, conformational analysis can be used to predict and explain product selectivity, mechanisms, and rates of reactions. Conformational analysis also plays an important role in rational, structure-based drug design.
Types
With respect to rotation about their carbon–carbon bonds, the molecules ethane and propane each have three local energy minima. They are structurally and energetically equivalent, and are called the staggered conformers. For each molecule, the three substituents emanating from each carbon–carbon bond are staggered, with each H–C–C–H dihedral angle (and H–C–C–CH3 dihedral angle in the case of propane) equal to 60° (or approximately equal to 60° in the case of propane). The three eclipsed conformations, in which the dihedral angles are zero, are transition states (energy maxima) connecting two equivalent energy minima, the staggered conformers.
The butane molecule is the simplest molecule for which single bond rotations result in two types of nonequivalent structures, known as the anti- and gauche-conformers (see figure).
For example, butane has three conformers relating to its two methyl (CH3) groups: two gauche conformers, which have the methyls ±60° apart and are enantiomeric, and an anti conformer, where the four carbon centres are coplanar and the substituents are 180° apart (refer to free energy diagram of butane). The energy difference between gauche and anti is 0.9 kcal/mol associated with the strain energy of the gauche conformer. The anti conformer is, therefore, the most stable (≈ 0 kcal/mol). The three eclipsed conformations with dihedral angles of 0°, 120°, and 240° are transition states between conformers. Note that the two eclipsed conformations have different energies: at 0° the two methyl groups are eclipsed, resulting in higher energy (≈ 5 kcal/mol) than at 120°, where the methyl groups are eclipsed with hydrogens (≈ 3.5 kcal/mol).
While simple molecules can be described by these types of conformations, more complex molecules require the use of the Klyne–Prelog system to describe the different conformers.
More specific examples of conformations are detailed elsewhere:
Ring conformation
Cyclohexane conformations, including with chair and boat conformations among others.
Cycloalkane conformations, including medium rings and macrocycles
Carbohydrate conformation, which includes cyclohexane conformations as well as other details.
Allylic strain – energetics related to rotation about the single bond between an sp2 carbon and an sp3 carbon.
Atropisomerism – due to restricted rotation about a bond.
Folding, including the secondary and tertiary structure of biopolymers (nucleic acids and proteins).
Akamptisomerism – due to restricted inversion of a bond angle.
Equilibrium of conformers
Conformers generally exist in a dynamic equilibrium with one another.
Three isotherms are given in the diagram depicting the equilibrium distribution of two conformers at different temperatures. At a free energy difference of 0 kcal/mol, this gives an equilibrium constant of 1, meaning that two conformers exist in a 1:1 ratio. The two have equal free energy; neither is more stable, so neither predominates compared to the other. A negative difference in free energy means that a conformer interconverts to a thermodynamically more stable conformation, thus the equilibrium constant will always be greater than 1. For example, the ΔG° for the transformation of butane from the gauche conformer to the anti conformer is −0.47 kcal/mol at 298 K. This gives an equilibrium constant of about 2.2 in favor of the anti conformer, or a 31:69 mixture of gauche:anti conformers at equilibrium. Conversely, a positive difference in free energy means the conformer already is the more stable one, so the interconversion is an unfavorable equilibrium (K < 1). Even for highly unfavorable changes (large positive ΔG°), the equilibrium constant between two conformers can be increased by increasing the temperature, so that the amount of the less stable conformer present at equilibrium increases (although it always remains the minor conformer).
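The butane figures quoted above follow directly from ΔG° = −RT ln K. A minimal sketch of that arithmetic:

```python
import math

R = 1.987e-3    # gas constant, kcal/(mol*K)
T = 298.0       # temperature, K
dG = -0.47      # kcal/mol for the gauche -> anti transformation of butane

K = math.exp(-dG / (R * T))   # equilibrium constant, here favoring anti
frac_anti = K / (1 + K)
print(f"K = {K:.2f}, anti = {frac_anti:.0%}, gauche = {1 - frac_anti:.0%}")  # ~2.2, ~69%, ~31%
```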
Population distribution of conformers
The fractional population distribution of different conformers follows a Boltzmann distribution:

pi = exp(−Ei/RT) / Σk exp(−Ek/RT),  with the sum running over k = 1, 2, ..., M
The left hand side is the proportion of conformer i in an equilibrating mixture of M conformers in thermodynamic equilibrium. On the right side, Ek (k = 1, 2, ..., M) is the energy of conformer k, R is the molar ideal gas constant (approximately equal to 8.314 J/(mol·K) or 1.987 cal/(mol·K)), and T is the absolute temperature. The denominator of the right side is the partition function.
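A minimal sketch of this population expression, applied to the three staggered conformers of butane using the anti/gauche energy difference of 0.9 kcal/mol quoted earlier; the twofold degeneracy of the gauche form is handled by listing it twice. Summing the two gauche entries (about 30%) is consistent with the roughly 31:69 gauche:anti ratio given above.

```python
import math

R = 1.987e-3                 # gas constant, kcal/(mol*K)
T = 298.0                    # temperature, K
energies = [0.0, 0.9, 0.9]   # kcal/mol: anti, gauche(+), gauche(-)

weights = [math.exp(-e / (R * T)) for e in energies]
partition_function = sum(weights)            # the denominator of the expression above
populations = [w / partition_function for w in weights]
print([round(p, 2) for p in populations])    # roughly [0.70, 0.15, 0.15]
```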
Factors contributing to the free energy of conformers
The effects of electrostatic and steric interactions of the substituents as well as orbital interactions such as hyperconjugation are responsible for the relative stability of conformers and their transition states. The contributions of these factors vary depending on the nature of the substituents and may either contribute positively or negatively to the energy barrier. Computational studies of small molecules such as ethane suggest that electrostatic effects make the greatest contribution to the energy barrier; however, the barrier is traditionally attributed primarily to steric interactions.
In the case of cyclic systems, the steric effect and contribution to the free energy can be approximated by A values, which measure the energy difference when a substituent on cyclohexane is in the axial as compared to the equatorial position. In large (>14 atom) rings, there are many accessible low-energy conformations which correspond to the strain-free diamond lattice.
Observation of conformers
The short timescale of interconversion precludes the separation of conformers in most cases. Atropisomers are conformational isomers which can be separated due to restricted rotation. The equilibrium between conformational isomers can be observed using a variety of spectroscopic techniques.
Protein folding also generates conformers which can be observed. The Karplus equation relates the dihedral angle of vicinal protons to their J-coupling constants as measured by NMR. The equation aids in the elucidation of protein folding as well as the conformations of other rigid aliphatic molecules. Protein side chains exhibit rotamers, whose distribution is determined by their steric interaction with different conformations of the backbone. This is evident from statistical analysis of the conformations of protein side chains in the Backbone-dependent rotamer library.
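The Karplus equation has the general form 3J(φ) = A cos²φ + B cos φ + C, where A, B, and C are empirically fitted coefficients. The sketch below uses round illustrative numbers rather than any specific published parameter set; it reproduces the qualitative behaviour of large couplings near 0° and 180° and small couplings near 90°.

```python
import math

def karplus_coupling(phi_deg, a=9.0, b=-1.0, c=1.0):
    """Vicinal 3J coupling (Hz) versus the H-C-C-H dihedral angle; a, b, c are illustrative."""
    phi = math.radians(phi_deg)
    return a * math.cos(phi) ** 2 + b * math.cos(phi) + c

for phi in (0, 60, 90, 180):
    print(f"{phi:>3} deg: {karplus_coupling(phi):.1f} Hz")
```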
Spectroscopy
Conformational dynamics can be monitored by variable temperature NMR spectroscopy. The technique applies to barriers of 8–14 kcal/mol, and species exhibiting such dynamics are often called "fluxional". For example, in cyclohexane derivatives, the two chair conformers interconvert rapidly at room temperature. The ring-flip proceeds at a rate of approximately 10^5 ring-flips per second, with an overall energy barrier of 10 kcal/mol (42 kJ/mol). This barrier precludes separation at ambient temperatures. However, at low temperatures below the coalescence point, the equilibrium can be monitored directly by NMR spectroscopy, and the barrier to interconversion can be determined by dynamic, temperature-dependent NMR spectroscopy.
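As a rough consistency check on the figures just quoted, transition state theory (the Eyring equation, k = (kB·T/h)·exp(−ΔG‡/RT)) with a 10 kcal/mol barrier at 298 K gives a rate on the order of 10^5 per second; this is an order-of-magnitude estimate, not a fit to NMR data.

```python
import math

kB = 1.380649e-23     # Boltzmann constant, J/K
h  = 6.62607015e-34   # Planck constant, J*s
R  = 1.987e-3         # gas constant, kcal/(mol*K)
T  = 298.0            # temperature, K
dG_activation = 10.0  # ring-flip barrier, kcal/mol

rate = (kB * T / h) * math.exp(-dG_activation / (R * T))
print(f"estimated ring-flip rate ~ {rate:.1e} per second")   # on the order of 1e5
```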
Besides NMR spectroscopy, IR spectroscopy is used to measure conformer ratios. For the axial and equatorial conformer of bromocyclohexane, νCBr differs by almost 50 cm−1.
Conformation-dependent reactions
Reaction rates are highly dependent on the conformation of the reactants. In many cases the dominant product arises from the reaction of the less prevalent conformer, by virtue of the Curtin-Hammett principle. This is typical for situations where the conformational equilibration is much faster than reaction to form the product. The dependence of a reaction on the stereochemical orientation is therefore usually only visible in configurational analysis, in which a particular conformation is locked by substituents. Prediction of rates of many reactions involving the transition between sp2 and sp3 states, such as ketone reduction, alcohol oxidation or nucleophilic substitution, is possible if all conformers and their relative stabilities, governed by their strain, are taken into account.
One example where the rotamers become significant is elimination reactions, which involve the simultaneous removal of a proton and a leaving group from vicinal or antiperiplanar positions under the influence of a base.
The mechanism requires that the departing atoms or groups follow antiparallel trajectories. For open chain substrates this geometric prerequisite is met by at least one of the three staggered conformers. For some cyclic substrates such as cyclohexane, however, an antiparallel arrangement may not be attainable depending on the substituents which might set a conformational lock. Adjacent substituents on a cyclohexane ring can achieve antiperiplanarity only when they occupy trans diaxial positions (that is, both are in axial position, one going up and one going down).
One consequence of this analysis is that trans-4-tert-butylcyclohexyl chloride cannot easily eliminate but instead undergoes substitution (see diagram below) because the most stable conformation has the bulky t-Bu group in the equatorial position, therefore the chloride group is not antiperiplanar with any vicinal hydrogen (it is gauche to all four). The thermodynamically unfavored conformation has the t-Bu group in the axial position, which is higher in energy by more than 5 kcal/mol (see A value). As a result, the t-Bu group "locks" the ring in the conformation where it is in the equatorial position and substitution reaction is observed. On the other hand, cis-4-tert-butylcyclohexyl chloride undergoes elimination because antiperiplanarity of Cl and H can be achieved when the t-Bu group is in the favorable equatorial position.
The repulsion between an axial t-butyl group and hydrogen atoms in the 1,3-diaxial position is so strong that the cyclohexane ring will revert to a twisted boat conformation. The strain in cyclic structures is usually characterized by deviations from ideal bond angles (Baeyer strain), ideal torsional angles (Pitzer strain) or transannular (Prelog) interactions.
Alkane stereochemistry
Alkane conformers arise from rotation around sp3 hybridised carbon–carbon sigma bonds. The smallest alkane with such a chemical bond, ethane, exists as an infinite number of conformations with respect to rotation around the C–C bond. Two of these are recognised as energy minimum (staggered conformation) and energy maximum (eclipsed conformation) forms. The existence of specific conformations is due to hindered rotation around sigma bonds, although a role for hyperconjugation is proposed by a competing theory.
The importance of energy minima and energy maxima is seen by extension of these concepts to more complex molecules for which stable conformations may be predicted as minimum-energy forms. The determination of stable conformations has also played a large role in the establishment of the concept of asymmetric induction and the ability to predict the stereochemistry of reactions controlled by steric effects.
In the example of staggered ethane in Newman projection, a hydrogen atom on one carbon atom has a 60° torsional angle (torsion angle) with respect to the nearest hydrogen atom on the other carbon so that steric hindrance is minimised. The staggered conformation is more stable by 12.5 kJ/mol than the eclipsed conformation, which is the energy maximum for ethane. In the eclipsed conformation the torsional angle is minimised.
In butane, the two staggered conformations are no longer equivalent and represent two distinct conformers: the anti-conformation (left-most, below) and the gauche conformation (right-most, below).
Both conformations are free of torsional strain, but, in the gauche conformation, the two methyl groups are in closer proximity than the sum of their van der Waals radii. The interaction between the two methyl groups is repulsive (van der Waals strain), and an energy barrier results.
A measure of the potential energy stored in butane conformers with greater steric hindrance than the 'anti'-conformer ground state is given by these values:
Gauche, conformer – 3.8 kJ/mol
Eclipsed H and CH3 – 16 kJ/mol
Eclipsed CH3 and CH3 – 19 kJ/mol.
The eclipsed methyl groups exert a greater steric strain because of their greater electron density compared to lone hydrogen atoms.
The textbook explanation for the existence of the energy maximum for an eclipsed conformation in ethane is steric hindrance, but, with a C-C bond length of 154 pm and a Van der Waals radius for hydrogen of 120 pm, the hydrogen atoms in ethane are never in each other's way. The question of whether steric hindrance is responsible for the eclipsed energy maximum is a topic of debate to this day. One alternative to the steric hindrance explanation is based on hyperconjugation as analyzed within the Natural Bond Orbital framework. In the staggered conformation, one C-H sigma bonding orbital donates electron density to the antibonding orbital of the other C-H bond. The energetic stabilization of this effect is maximized when the two orbitals have maximal overlap, occurring in the staggered conformation. There is no overlap in the eclipsed conformation, leading to a disfavored energy maximum. On the other hand, an analysis within quantitative molecular orbital theory shows that 2-orbital-4-electron (steric) repulsions are dominant over hyperconjugation. A valence bond theory study also emphasizes the importance of steric effects.
Nomenclature
Naming alkanes per standards listed in the IUPAC Gold Book is done according to the Klyne–Prelog system for specifying angles (called either torsional or dihedral angles) between substituents around a single bond (a short classification sketch follows the list below):
a torsion angle between 0° and ±90° is called syn (s)
a torsion angle between ±90° and 180° is called anti (a)
a torsion angle between 30° and 150° or between −30° and −150° is called clinal (c)
a torsion angle between 0° and ±30° or ±150° and 180° is called periplanar (p)
a torsion angle between 0° and ±30° is called synperiplanar (sp), also called syn- or cis- conformation
a torsion angle between 30° and 90° or between −30° and −90° is called synclinal (sc), also called gauche or skew
a torsion angle between 90° and 150° or −90° and −150° is called anticlinal (ac)
a torsion angle between ±150° and 180° is called antiperiplanar (ap), also called anti- or trans- conformation
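A small sketch of the torsion-angle ranges listed above, mapping an angle to the combined sp/sc/ac/ap descriptors; angles are normalised to the (−180°, 180°] interval, and boundary angles are assigned by a convention chosen here (an assumption, since the nomenclature leaves exact boundaries ambiguous).

```python
def klyne_prelog(angle_deg):
    """Classify a torsion angle (degrees) using the Klyne-Prelog descriptors."""
    # normalise to the (-180, 180] interval
    a = ((angle_deg + 180.0) % 360.0) - 180.0
    if a == -180.0:
        a = 180.0
    magnitude = abs(a)
    if magnitude <= 30.0:
        return "synperiplanar (sp)"
    if magnitude < 90.0:
        return "synclinal (sc, gauche)"
    if magnitude <= 150.0:
        return "anticlinal (ac)"
    return "antiperiplanar (ap)"

for angle in (0, 60, -60, 120, 180):
    print(angle, klyne_prelog(angle))
```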
Torsional strain or "Pitzer strain" refers to resistance to twisting about a bond.
Special cases
In n-pentane, the terminal methyl groups experience additional pentane interference.
Replacing hydrogen by fluorine in polytetrafluoroethylene changes the stereochemistry from the zigzag geometry to that of a helix due to electrostatic repulsion of the fluorine atoms in the 1,3 positions. Evidence for the helix structure in the crystalline state is derived from X-ray crystallography and from NMR spectroscopy and circular dichroism in solution.
See also
Anomeric effect
Backbone-dependent rotamer library
Cycloalkane
Cyclohexane
Cyclohexane conformations.
Gauche effect
Klyne–Prelog system
Macrocyclic stereocontrol
Molecular configuration
Molecular modelling
Steric effects
Strain (chemistry)
References
Physical organic chemistry
Stereochemistry | Rotamer | [
"Physics",
"Chemistry"
] | 3,619 | [
"Stereochemistry",
"Space",
"nan",
"Physical organic chemistry",
"Spacetime"
] |