| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
24,567,173 | https://en.wikipedia.org/wiki/Institute%20for%20Condensed%20Matter%20Theory | The Institute for Condensed Matter Theory (ICMT) is an institute for the research of condensed matter theory hosted by and located at the University of Illinois at Urbana-Champaign.
ICMT was founded in 2007. The first director of the institute was Paul Goldbart, who was succeeded by Eduardo Fradkin. The chief scientist is Nobel laureate Anthony Leggett.
References
External links
The Institute for Condensed Matter Theory
2007 establishments in Illinois
Physics research institutes
Research institutes established in 2007
University of Illinois Urbana-Champaign centers and institutes
Theoretical physics institutes | Institute for Condensed Matter Theory | [
"Physics"
] | 107 | [
"Theoretical physics",
"Theoretical physics institutes"
] |
24,569,206 | https://en.wikipedia.org/wiki/W%E2%80%B2%20and%20Z%E2%80%B2%20bosons | In particle physics, W′ and Z′ bosons (or W-prime and Z-prime bosons) refer to hypothetical gauge bosons that arise from extensions of the electroweak symmetry of the Standard Model. They are named in analogy with the Standard Model W and Z bosons.
Types
Types of W′ bosons
W′ bosons often arise in models with an extra SU(2) gauge factor beyond the full Standard Model gauge group SU(3) × SU(2) × U(1). The extended symmetry spontaneously breaks to the diagonal subgroup SU(2)W, which corresponds to the conventional SU(2) of electroweak theory.
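Schematically (the subscript labels here are assumed notation, not taken from the article), the breaking pattern is

$$SU(2)_1 \times SU(2)_2 \;\longrightarrow\; SU(2)_W \;(\text{the diagonal subgroup}),$$

with the three broken generators corresponding to the massive W′± and Z′ bosons.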
More generally, there could be several copies of SU(2), which are then broken down to a single diagonal SU(2)W. This gives rise to several distinct sets of W′+, W′−, and Z′ bosons.
Such models might arise from a quiver diagram, for example.
In order for the W′ bosons to couple to weak isospin, the extra SU(2) and the Standard Model SU(2) must mix; one copy of SU(2) must break around the TeV scale (to get W′ bosons with a TeV mass) leaving a second SU(2) for the Standard Model. This happens in Little Higgs models that contain more than one copy of SU(2). Because the W′ comes from the breaking of an SU(2), it is generically accompanied by a Z′ boson of (almost) the same mass and with couplings related to the W′ couplings.
Another model with W′ bosons but without an additional SU(2) factor is the so-called 331 model, based on the gauge group SU(3)C × SU(3)L × U(1)X. Its symmetry-breaking chain leads to a pair of W′± bosons and three Z′ bosons.
W′ bosons also arise in Kaluza–Klein theories with SU(2) in the bulk.
Types of Z′ bosons
Various models of physics beyond the Standard Model predict different kinds of Z′ bosons.
Models with a new U(1) gauge symmetry: The Z′ is the gauge boson of the (broken) U(1) symmetry.
E6 models: This type of model contains two Z′ bosons, which can mix in general.
Pati–Salam: In addition to a fourth leptonic "color", Pati–Salam includes a right-handed weak interaction with W′ and Z′ bosons.
Topcolor and top seesaw models of dynamical electroweak symmetry breaking: Both of these models have Z′ bosons that select the formation of particular condensates.
Little Higgs models: These models typically include an enlarged gauge sector, which is broken down to the Standard Model gauge symmetry around the TeV scale. In addition to one or more Z′ bosons, these models often contain W′ bosons.
Kaluza–Klein models: The Z′ bosons are the excited modes of a neutral bulk gauge field.
Stueckelberg extensions: The Z′ boson is sourced from couplings found in string theories with intersecting D-branes (see Stueckelberg action).
Searches
Direct searches for "wide resonance-width" models
The following statements pertain only to "wide resonance width" models.
A W′-boson could be detected at hadron colliders through its decay to lepton plus neutrino or top quark plus bottom quark, after being produced in quark–antiquark annihilation. The LHC reach for W′ discovery is expected to be a few TeV.
Direct searches for Z′-bosons are carried out at hadron colliders, since these give access to the highest energies available. The search looks for high-mass dilepton resonances: the Z′-boson would be produced by quark–antiquark annihilation and would decay to an electron–positron pair or a pair of oppositely charged muons. The most stringent current limits come from the Fermilab Tevatron and depend on the couplings of the Z′-boson (which control the production cross section); as of 2006, the Tevatron excludes Z′-bosons up to masses of about 800 GeV for "typical" cross sections predicted in various models.
Direct searches for "narrow resonance-width" models
Recent classes of models naturally provide cross-section signatures that fall at or slightly below the 95% confidence-level limits set by the Tevatron, and hence can produce detectable signals for a Z′ boson in a mass range much closer to the Z pole mass than the "wide width" models discussed above.
These "narrow width" models which fall into this category are those that predict a Stückelberg Z′ as well as a Z′ from a universal extra dimension (see for links to these papers).
On 7 April 2011, the CDF collaboration at the Tevatron reported an excess in proton–antiproton collision events that produce a W boson accompanied by two hadronic jets. This could possibly be interpreted in terms of a Z′ boson.
On 2 June 2015, the ATLAS experiment at the LHC reported evidence for W′-bosons at a significance of 3.4σ, still too low to claim a formal discovery. Researchers at the CMS experiment also independently reported signals corroborating ATLAS's findings.
In March 2021, reports emerged of a possible hint of Z′ bosons: an unexpected difference in how beauty quarks decay into electrons versus muons. The measurement has a statistical significance of 3.1σ, well below the 5σ level that is conventionally considered sufficient proof of a discovery.
Z′–Y mixings
There may be gauge kinetic mixing between the U(1)′ of the Z′ boson and the U(1)Y of hypercharge. Such mixing leads to a tree-level modification of the Peskin–Takeuchi parameters.
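Schematically, writing $B_{\mu\nu}$ and $Z'_{\mu\nu}$ for the two field-strength tensors and $\varepsilon$ for a small dimensionless mixing parameter (this notation is assumed here, not taken from the article), the kinetic mixing term takes the form

$$\mathcal{L} \;\supset\; -\frac{\varepsilon}{2}\, B_{\mu\nu}\, Z'^{\mu\nu}.$$

Diagonalizing the kinetic terms shifts the couplings of the physical Z and Z′ bosons, which is what feeds into the Peskin–Takeuchi parameters at tree level.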
See also
List of hypothetical particles
References
External links
The Z′ Hunter's Guide, a collection of papers and talks regarding Z′ physics
Z′ physics on arxiv.org
Gauge bosons
Hypothetical elementary particles
Force carriers
Subatomic particles with spin 1 | W′ and Z′ bosons | [
"Physics"
] | 1,302 | [
"Physical phenomena",
"Force carriers",
"Unsolved problems in physics",
"Fundamental interactions",
"Hypothetical elementary particles",
"Physics beyond the Standard Model"
] |
1,451,702 | https://en.wikipedia.org/wiki/Seth%20Lloyd | Seth Lloyd (born August 2, 1960) is a professor of mechanical engineering and physics at the Massachusetts Institute of Technology.
His research area is the interplay of information with complex systems, especially quantum systems. He has performed seminal work in the fields of quantum computation, quantum communication and quantum biology, including proposing the first technologically feasible design for a quantum computer, demonstrating the viability of quantum analog computation, proving quantum analogs of Shannon's noisy channel theorem, and designing novel methods for quantum error correction and noise reduction.
Biography
Lloyd was born on August 2, 1960. He graduated from Phillips Academy in 1978 and received a bachelor of arts degree from Harvard College in 1982. He earned a certificate of advanced study in mathematics and a master of philosophy degree from Cambridge University in 1983 and 1984, while on a Marshall Scholarship. Lloyd was awarded a doctorate by Rockefeller University in 1988 (advisor Heinz Pagels) after submitting a thesis on Black Holes, Demons, and the Loss of Coherence: How Complex Systems Get Information, and What They Do With It.
From 1988 to 1991, Lloyd was a postdoctoral fellow in the High Energy Physics Department at the California Institute of Technology, where he worked with Murray Gell-Mann on applications of information to quantum-mechanical systems. From 1991 to 1994, he was a postdoctoral fellow at Los Alamos National Laboratory, where he worked at the Center for Nonlinear Systems on quantum computation. In 1994, he joined the faculty of the Department of Mechanical Engineering at MIT. Starting in 1988, Lloyd was an external faculty member at the Santa Fe Institute for more than 30 years.
In his 2006 book, Programming the Universe, Lloyd contends that the universe itself is one big quantum computer producing what we see around us, and ourselves, as it runs a cosmic program. According to Lloyd, once we understand the laws of physics completely, we will be able to use small-scale quantum computing to understand the universe completely as well.
Lloyd states that we could have the whole universe simulated in a computer in 600 years provided that computational power increases according to Moore's Law. However, Lloyd shows that there are limits to rapid exponential growth in a finite universe, and that it is very unlikely that Moore's Law will be maintained indefinitely.
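The 600-year figure is simple doubling arithmetic. A minimal sketch, assuming roughly 10^120 elementary operations performed by the universe (Lloyd's separately published estimate) and a Moore's-law doubling time of 1.5 years; both numbers are assumptions for illustration, not figures given in this article:

```python
import math

universe_ops = 1e120  # assumed: Lloyd's estimate of all operations the universe has performed
doubling_years = 1.5  # assumed: Moore's-law doubling time

# Doublings needed to grow capacity by a factor of 10^120,
# then convert doublings into calendar years.
doublings = math.log2(universe_ops)
years = doublings * doubling_years
print(f"{doublings:.0f} doublings, about {years:.0f} years")  # ~399 doublings, ~598 years
```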
Lloyd directs the Center for Extreme Quantum Information Theory (xQIT) at MIT. He has made influential contributions to a broad range of topics, mostly in the wider field of quantum information science. Among his most cited works are the first proposal for a digital quantum simulator, a general framework for quantum metrology, the first treatment of quantum computation with continuous variables, dynamical decoupling as a method of quantum error avoidance, and quantum algorithms for equation solving and machine learning, as well as research on the possible relevance of quantum effects in biological phenomena, especially photosynthesis, an effect he has also worked with collaborators to exploit technologically.
According to Clarivate, as of July 2023 he had 199 peer-reviewed publications, cited more than 22,600 times in total, giving an h-index of 61.
Epstein affair
During July 2019, reports surfaced that MIT and other institutions had accepted funding from convicted sex offender Jeffrey Epstein. In the ensuing scandal, the director of the MIT Media Lab, Joi Ito, resigned from MIT as a result of his association with Epstein. Lloyd's connections to Epstein also drew criticism: Lloyd had acknowledged receiving funding from Epstein in 19 of his papers. On August 22, 2019, Lloyd published a letter apologizing for accepting grants (totaling $225,000) from Epstein. Despite this, the controversy continued. In January 2020, at the request of the MIT Corporation, the law firm Goodwin Procter issued a report on all of MIT's interactions with Epstein. As a result of the report, on January 10, 2020, Lloyd was placed on paid administrative leave. Lloyd has vigorously denied that he misled MIT about the source of the funds he received from Epstein. This denial was validated by a subsequent MIT investigation that concluded that Lloyd did not attempt to circumvent the MIT vetting process, nor try to conceal the name of the donor, and Lloyd was allowed to continue his tenured faculty position at MIT. However, most but not all members of MIT's fact-finding committee concluded that Lloyd had violated MIT's conflict of interest policy by not revealing crucial publicly known information about Epstein's background to MIT, as a result of which Lloyd will be subject to a series of administrative actions for 5 years.
Honors
2007 Fellow of the American Physical Society
2012 International Quantum Communication Award
Works
Lloyd, S., Programming the Universe: A Quantum Computer Scientist Takes On the Cosmos, Knopf, March 14, 2006, 240 p.
Interview: Quantum Hanky Panky: A Conversation with Seth Lloyd (video), Edge Foundation, 2016
Interview: The Computational Universe: Seth Lloyd (video), Edge Foundation, 2002
Lecture: The Black Hole of Finance (video), Santa Fe Institute
Movie: In 2022 Lloyd starred in the short film Steeplechase directed by Andrey Kezzyn, which deals with closed timelike curves, a topic Lloyd has also addressed in his scientific work.
See also
Digital physics
Nuclear magnetic resonance quantum computer
Quantum Aspects of Life
Simulated reality
Notes
External links
Google Scholar page
Personal web page
"Crazy Molecules: Pulse Berlin Interview"
Programming the Universe
Radio Interview from This Week in Science September 26, 2006 Broadcast
American mechanical engineers
Complex systems scientists
Harvard College alumni
Rockefeller University alumni
MIT School of Engineering faculty
Living people
1960 births
American people of Welsh descent
Santa Fe Institute people
New England Complex Systems Institute
Quantum information scientists
Quantum biology
Marshall Scholars
Fellows of the American Physical Society | Seth Lloyd | [
"Physics",
"Biology"
] | 1,136 | [
"Quantum mechanics",
"nan",
"Quantum biology"
] |
1,452,308 | https://en.wikipedia.org/wiki/Borosilicate%20glass | Borosilicate glass is a type of glass with silica and boron trioxide as the main glass-forming constituents. Borosilicate glasses are known for having very low coefficients of thermal expansion (≈3 × 10−6 K−1 at 20 °C), making them more resistant to thermal shock than any other common glass. Such glass is subjected to less thermal stress and can withstand temperature differentials of about 165 °C without fracturing. It is commonly used for the construction of reagent bottles and flasks, as well as lighting, electronics, and cookware. For many other applications, soda-lime glass is more common.
Borosilicate glass is sold under various trade names, including Borosil, Duran, Pyrex, Glassco, Supertek, Suprax, Simax, Bellco, Marinex (Brazil), BSA 60, BSC 51 (by NIPRO), Heatex, Endural, Schott, Refmex, Kimax, Gemstone Well, United Scientific, and MG (India).
Single-ended self-starting lamps are insulated with a mica disc and contained in a borosilicate glass gas discharge tube (arc tube) and a metal cap. They include the sodium-vapor lamp that is commonly used in street lighting.
Borosilicate glass usually melts at about 1,650 °C (3,000 °F).
History
Borosilicate glass was first developed by German glassmaker Otto Schott in the late 19th century in Jena. This early borosilicate glass thus came to be known as Jena glass. After Corning Glass Works introduced Pyrex in 1915, the name became synonymous with borosilicate glass in the English-speaking world (since the 1940s, a sizable portion of glass produced under the Pyrex brand has also been made of soda–lime glass). Borosilicate glass is the name of a glass family with various members tailored to completely different purposes. Most common today is borosilicate 3.3 or 5.0x glass such as Duran, Corning33, Corning51-V (clear), Corning51-L (amber), International Cookware's NIPRO BSA 60, and BSC 51.
Manufacturing process
Borosilicate glass is created by combining and melting boric oxide, silica sand, soda ash, and alumina. Since borosilicate glass melts at a higher temperature than ordinary silicate glass, some new techniques were required for industrial production.
In addition to quartz, sodium carbonate, and aluminium oxide traditionally used in glassmaking, boron is used in the manufacture of borosilicate glass. The composition of low-expansion borosilicate glass, such as those laboratory glasses mentioned above, is approximately 80% silica, 13% boric oxide, 4% sodium oxide or potassium oxide and 2–3% aluminium oxide. Though more difficult to make than traditional glass due to its high melting temperature, it is economical to produce. Its superior durability, chemical and heat resistance finds use in chemical laboratory equipment, cookware, lighting, and in certain kinds of windows.
The manufacturing process depends on the product geometry and can be differentiated between different methods like floating, tube drawing, or molding.
Physical characteristics
The common type of borosilicate glass used for laboratory glassware has a very low thermal expansion coefficient (3.3 × 10−6 K−1), about one-third that of ordinary soda–lime glass. This reduces material stresses caused by temperature gradients, which makes borosilicate a more suitable type of glass for certain applications (see below). Fused quartzware is even better in this respect (having one-fifteenth the thermal expansion of soda–lime glass); however, the difficulty of working with fused quartz makes quartzware much more expensive, and borosilicate glass is a low-cost compromise. While more resistant to thermal shock than other types of glass, borosilicate glass can still crack or shatter when subjected to rapid or uneven temperature variations.
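The practical effect of the coefficient difference is easy to quantify. A minimal sketch using the borosilicate coefficient given above and a soda–lime coefficient inferred from the stated one-third ratio; the length and temperature swing are arbitrary illustration values:

```python
# Linear thermal expansion: dL = alpha * L * dT
coefficients = {
    "borosilicate": 3.3e-6,  # 1/K, from the text
    "soda-lime": 9.9e-6,     # 1/K, assumed: about three times borosilicate
}
L = 1.0     # metres of glass
dT = 100.0  # kelvin temperature swing

for name, alpha in coefficients.items():
    dL_um = alpha * L * dT * 1e6  # expansion in micrometres
    print(f"{name}: {dL_um:.0f} um over {dT:.0f} K")
```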
Among the characteristic properties of this glass family are:
Different borosilicate glasses cover a wide range of thermal expansions, enabling direct seals to metals and alloys with a matching CTE (coefficient of thermal expansion), such as molybdenum-sealing glass with a CTE of about 4.6, tungsten-sealing glass with a CTE around 4.0, and Kovar-sealing glass with a CTE around 5.0 (all in units of 10−6 K−1)
Allowing high maximum operating temperatures (typically around 500 °C)
Showing extremely high chemical resistance in corrosive environments; standardized tests, for example for acid resistance, create extreme conditions yet reveal very little attack on the glass
The softening point (the temperature at which viscosity is approximately 10^7.6 poise) of type 7740 Pyrex is about 820 °C.
Borosilicate glass is less dense (about 2.23 g/cm3) than typical soda–lime glass due to the low atomic mass of boron. Its mean specific heat capacity at constant pressure (20–100 °C) is 0.83 J/(g⋅K), roughly one fifth of water's.
The temperature differential that borosilicate glass can withstand before fracturing is about 165 °C, whereas soda–lime glass can withstand only about a 37 °C change in temperature. This is why typical kitchenware made from traditional soda–lime glass will shatter if a vessel containing boiling water is placed on ice, but Pyrex or other borosilicate laboratory glass will not.
Optically, borosilicate glasses are crown glasses with low dispersion (Abbe numbers around 65) and relatively low refractive indices (1.51–1.54 across the visible range).
Families
For the purposes of classification, borosilicate glass can be roughly arranged in the following groups, according to their oxide composition (in mass fractions). Characteristic of borosilicate glasses is the presence of substantial amounts of silica (SiO2) and boric oxide (B2O3, >8%) as glass network formers. The amount of boric oxide affects the glass properties in a particular way. Apart from the highly resistant varieties (B2O3 up to a maximum of 13%), there are others that – due to the different way in which the boric oxide is incorporated into the structural network – have only low chemical resistance (B2O3 content over 15%). Hence we differentiate between the following subtypes.
Non-alkaline-earth
The B2O3 content for borosilicate glass is typically 12–13% and the SiO2 content over 80%. High chemical durability and low thermal expansion (3.3 × 10−6 K−1) – the lowest of all commercial glasses for large-scale technical applications – make this a versatile glass material. High-grade borosilicate flat glasses are used in a wide variety of industries, mainly for technical applications that require either good thermal resistance, excellent chemical durability, or high light transmission in combination with a pristine surface quality. Other typical applications for different forms of borosilicate glass include glass tubing, glass piping, glass containers, etc. especially for the chemical industry.
Alkaline-earth
In addition to about 75% SiO2 and 8–12% B2O3, these glasses contain up to 5% oxides of alkaline earth metal and alumina (Al2O3). This is a subtype of slightly softer glasses, which have thermal expansions in the range (4.0–5.0) × 10−6 K−1.
This is not to be confused with simple borosilicate glass-alumina composites.
High-borate
Glasses containing 15–25% B2O3, 65–70% SiO2, and smaller amounts of alkalis and Al2O3 as additional components have low softening points and low thermal expansion. Sealability to metals in the expansion range of tungsten and molybdenum and high electrical insulation are their most important features. The increased B2O3 content reduces the chemical resistance; in this respect, high-borate borosilicate glasses differ widely from non-alkaline-earth and alkaline-earth borosilicate glasses. Among these are also borosilicate glasses that transmit UV light down to 180 nm, which combine the best of the borosilicate glass and the quartz world.
Uses
Borosilicate glass has a wide variety of uses ranging from cookware to lab equipment, as well as a component of high-quality products such as implantable medical devices and devices used in space exploration.
Health and science
Virtually all modern laboratory glassware is made of borosilicate glass. It is widely used in this application due to its chemical and thermal resistance and good optical clarity, but the glass can react with sodium hydride upon heating to produce sodium borohydride, a common laboratory reducing agent. Fused quartz is also found in some laboratory equipment when its higher melting point and transmission of UV are required (e.g. for tube furnace liners and UV cuvettes), but the cost and manufacturing difficulties associated with fused quartz make it an impractical investment for the majority of laboratory equipment.
Additionally, borosilicate tubing is used as the feedstock for the production of parenteral drug packaging, such as vials and pre-filled syringes, as well as ampoules and dental cartridges. The chemical resistance of borosilicate glass minimizes the migration of sodium ions from the glass matrix, thus making it well suited for injectable-drug applications. This type of glass is typically referred to as USP / EP JP Type I.
Borosilicate is widely used in implantable medical devices such as prosthetic eyes, artificial hip joints, bone cements, dental composite materials (white fillings) and even in breast implants.
Many implantable devices benefit from the unique advantages of borosilicate glass encapsulation. Applications include veterinary tracking devices, neurostimulators for the treatment of epilepsy, implantable drug pumps, cochlear implants, and physiological sensors.
Electronics
During the mid-20th century, borosilicate glass tubing was used to pipe coolants (often distilled water) through high-power vacuum-tube–based electronic equipment, such as commercial broadcast transmitters. It was also used for the envelope material for glass transmitting tubes which operated at high temperatures.
Borosilicate glasses also have an application in the semiconductor industry in the development of microelectromechanical systems (MEMS), as part of stacks of etched silicon wafers bonded to the etched borosilicate glass.
Cookware
Cookware is another common usage for borosilicate glass, including bakeware. It is used for some measuring cups, featuring screen printed markings providing graduated measurements. Borosilicate glass is sometimes used for high-quality beverage glassware, particularly in pieces designed for hot drinks. Items made of borosilicate glass can be thin yet durable, or thicker for extra strength, and are microwave- and dishwasher-safe.
Lighting
Many high-quality flashlights use borosilicate glass for the lens. This increases light transmittance through the lens compared to plastics and lower-quality glass.
Several types of high-intensity discharge (HID) lamps, such as mercury-vapor and metal-halide lamps, use borosilicate glass as the outer envelope material.
New lampworking techniques led to artistic applications such as contemporary glass marbles, and the modern studio glass movement has responded to the availability of colored borosilicate. Borosilicate is commonly used in the glassblowing form of lampworking, and artists create a range of products such as jewelry, kitchenware, and sculpture, as well as artistic glass smoking pipes.
Lighting manufacturers use borosilicate glass in some of their lenses.
Organic light-emitting diodes (OLED) (for display and lighting purposes) also use borosilicate glass (BK7). The thicknesses of the BK7 glass substrates are usually less than 1 millimeter for OLED fabrication. Due to its optical and mechanical characteristics in relation with cost, BK7 is a common substrate in OLEDs. However, depending on the application, soda–lime glass substrates of similar thicknesses are also used in OLED fabrication.
Optics
Many astronomical reflecting telescopes use glass mirror components made of borosilicate glass because of its low coefficient of thermal expansion. This makes very precise optical surfaces possible that change very little with temperature, and matched glass mirror components that "track" across temperature changes and retain the optical system's characteristics.
The Hale Telescope's 200 inch mirror is made of borosilicate glass.
The optical glass most often used for making instrument lenses is Schott BK-7 (or the equivalent from other makers, such as the Chinese crown glass K9), a very finely made borosilicate crown glass.
It is also designated as 517642 glass after its 1.517 refractive index and 64.2 Abbe number. Other less costly borosilicate glasses, such as Schott B270 or the equivalent, are used to make "crown-glass" eyeglass lenses. Ordinary lower-cost borosilicate glass, like that used to make kitchenware and even reflecting telescope mirrors, cannot be used for high-quality lenses because of the striations and inclusions common to lower grades of this type of glass. The maximum working temperature is 515 °C (959 °F). While it transitions to a liquid starting at 288 °C (550 °F), just before it turns red-hot, it is not workable until it reaches over 538 °C (1,000 °F). That means that in order to industrially produce this glass, oxygen/fuel torches must be used; glassblowers borrowed technology and techniques from welders.
Rapid prototyping
Borosilicate glass has become the material of choice for fused deposition modeling (FDM), or fused filament fabrication (FFF), build plates. Its low coefficient of expansion makes borosilicate glass, when used in combination with resistance-heating plates and pads, an ideal material for the heated build platform onto which plastic materials are extruded one layer at a time. The initial layer of build must be placed onto a substantially flat, heated surface to minimize shrinkage of some build materials (ABS, polycarbonate, polyamide, etc.) due to cooling after deposition. Depending on the material used, the build plate will cycle from room temperature to between 50 °C and 130 °C for each prototype that is built. The temperature, along with various coatings (Kapton tape, painter's tape, hair spray, glue stick, ABS+acetone slurry, etc.), ensure that the first layer may be adhered to and remain adhered to the plate, without warping, as the first and subsequent layers cool following extrusion. Subsequently, following the build, the heating elements and plate are allowed to cool. The resulting residual stress formed when the plastic contracts as it cools, while the glass remains relatively dimensionally unchanged due to the low coefficient of thermal expansion, provides a convenient aid in removing the otherwise mechanically bonded plastic from the build plate. In some cases the parts self-separate as the developed stresses overcome the adhesive bond of the build material to the coating material and underlying plate.
Other
Aquarium heaters are sometimes made of borosilicate glass. Due to its high heat resistance, it can tolerate the significant temperature difference between the water and the nichrome heating element.
Specialty glass smoking pipes for cannabis and tobacco can be made from borosilicate glass. The high heat resistance makes the pipes more durable. Some harm reduction organizations also give out borosilicate pipes intended for smoking crack cocaine, as the heat resistance prevents the glass from cracking, causing cuts and burns that can spread hepatitis C.
Most premanufactured glass guitar slides are made of borosilicate glass.
Borosilicate is also a material of choice for evacuated-tube solar thermal technology because of its high strength and heat resistance.
The thermal insulation tiles on the Space Shuttle were coated with a borosilicate glass.
Borosilicate glasses are used for immobilisation and disposal of radioactive wastes. In most countries high-level radioactive waste has been incorporated into alkali borosilicate or phosphate vitreous waste forms for many years; vitrification is an established technology. Vitrification is a particularly attractive immobilization route because of the high chemical durability of the vitrified glass product. The chemical resistance of glass can allow it to remain in a corrosive environment for many thousands or even millions of years.
Borosilicate glass tubing is used in specialty TIG welding torch nozzles in place of standard alumina nozzles. This allows a clear view of the arc in situations where visibility is limited.
Trade names
Borosilicate glass is offered in slightly different compositions under different trade names:
Borofloat of Schott AG, a borosilicate glass, which is produced to flat glass in a float process.
Borosil, manufactured by the company of the same name, used in laboratory glassware and microwaveable kitchenware in India
BK7 of Schott, a borosilicate glass with a high level of purity. Main use in lens and mirrors for laser, cameras and telescopes.
Duran of DURAN Group, similar to Pyrex, Simax or Jenaer Glas.
Pyrex borosilicate glass of Corning
Fiolax of Schott, mainly used for containers for pharmaceutical applications.
Ilmabor of Technische Glaswerke Ilmenau (2014 insolvency), mainly used for containers and equipment in laboratories and medicine.
Jenaer Glas of Zwiesel Kristallglas, formerly Schott AG. Mainly used for kitchenware.
Kimax is the trademark for borosilicate glassware from Kimble
United Scientific, manufacturers and distributors of laboratory glassware
Rasotherm of VEB Jenaer Glaswerk Schott & Genossen, for technical glass
Simax of Kavalierglass a.s., Czechia, produced for both laboratory and consumer markets.
Supertek, manufacturer of scientific lab equipment and glassware.
Willow Glass is an alkali free, thin and flexible borosilicate glass of Corning
Boroux is a brand of borosilicate glass drinking bottles.
Endural is a brand name of Holophane
Borosilicate nanoparticles
It was initially thought that borosilicate glass could not be formed into nanoparticles, since an unstable boron oxide precursor would prevent successful forming of these shapes. However, in 2008 a team of researchers from the Swiss Federal Institute of Technology at Lausanne were successful in forming borosilicate nanoparticles of 100 to 500 nanometers in diameter. The researchers formed a gel of tetraethylorthosilicate and trimethoxyboroxine. When this gel is exposed to water under proper conditions, a dynamic reaction ensues, which results in the nanoparticles.
In lampworking
Borosilicate (or "boro", as it is often called) is used extensively in the glassblowing process lampworking; the glassworker uses a burner torch to melt and form glass, using a variety of metal and graphite tools to shape it. Borosilicate is referred to as "hard glass" and has a higher melting point (approximately 3,000 °F / 1648 °C) than "soft glass", which is preferred for glassblowing by beadmakers. Raw glass used in lampworking comes in glass rods for solid work and glass tubes for hollow work tubes and vessels/containers. Lampworking is used to make complex and custom scientific apparatus; most major universities have a lampworking shop to manufacture and repair their glassware. For this kind of "scientific glassblowing", the specifications must be exact and the glassblower must be highly skilled and able to work with precision. Lampworking is also done as art, and common items made include goblets, paper weights, pipes, pendants, compositions and figurines.
In 1968, English metallurgist John Burton brought his hobby of hand-mixing metallic oxides into borosilicate glass to Los Angeles. Burton began a glass workshop at Pepperdine College, with instructor Margaret Youd. A few of the students in the classes, including Suellen Fowler, discovered that a specific combination of oxides made a glass that would shift from amber to purples and blues, depending on the heat and flame atmosphere. Fowler shared this combination with Paul Trautman, who formulated the first small-batch colored borosilicate recipes. He then founded Northstar Glassworks in the mid-1980s, the first factory devoted solely to producing colored borosilicate glass rods and tubes for use by artists in the flame. Trautman also developed the techniques and technology to make the small-batch colored boro that is used by a number of similar companies.
Beadmaking
In recent years, with the resurgence of lampworking as a technique to make handmade glass beads, borosilicate has become a popular material in many glass artists' studios. Borosilicate for beadmaking comes in thin, pencil-like rods. Glass Alchemy, Trautman Art Glass, and Northstar are popular manufacturers, although there are other brands available. The metals used to color borosilicate glass, particularly silver, often create strikingly beautiful and unpredictable results when melted in an oxygen-gas torch flame. Because it is more shock-resistant and stronger than soft glass, borosilicate is particularly suited for pipe making, as well as sculpting figures and creating large beads. The tools used for making glass beads from borosilicate glass are the same as those used for making glass beads from soft glass.
References
Glass compositions
Boron compounds
Transparent materials
Fused filament fabrication
Low thermal expansion materials | Borosilicate glass | [
"Physics",
"Chemistry"
] | 4,543 | [
"Physical phenomena",
"Glass chemistry",
"Glass compositions",
"Low thermal expansion materials",
"Optical phenomena",
"Materials",
"Transparent materials",
"Matter"
] |
1,453,778 | https://en.wikipedia.org/wiki/Cenosphere | A cenosphere or kenosphere is a lightweight, inert, hollow sphere made largely of silica and alumina and filled with air or inert gas, typically produced as a coal combustion byproduct at thermal power plants. The color of cenospheres varies from gray to almost white, and their density is about 0.4–0.8 g/cm3, which gives them great buoyancy.
Cenospheres are hard and rigid, light, waterproof and insulative. This makes them highly useful in a variety of products, notably fillers.
Etymology
The word cenosphere or kenosphere is derived from two Greek words, κενός (kenos: hollow, empty) and σφαίρα (sphaira: sphere), literally meaning "hollow sphere."
Production
The process of burning coal in thermal power plants produces fly ash containing ceramic particles made largely of alumina and silica. They are produced at temperatures of about 1,500 to 1,750 °C through complicated chemical and physical transformations. Their chemical composition and structure vary considerably depending on the composition of the coal that generated them.
The ceramic particles in fly ash have three types of structures. The first type of particle is solid and is called a precipitator. The second type is hollow and is called a cenosphere. The third type is called a plerosphere: a hollow particle of large diameter filled with smaller precipitators and cenospheres.
Fuel or oil cenospheres
The definition of cenosphere has changed over the last 30 years. Up until the 1990s, it was limited to a largely carbonaceous sphere caused by the oxygen-deficient combustion of a liquid fuel droplet that cooled before it was fully consumed. These fuel cenospheres indicated a combustion source using injected droplets of fuel, or the open burning of heavy liquid fuels such as asphalt or a thermoplastic material that were bubbling as they burned; the bursting of the bubbles created airborne droplets of fuel. This is still a common definition used in environmental microscopy to differentiate between the inefficient combustion of liquid fuels and the high-temperature fly ash resulting from the efficient combustion of fuels with inorganic contaminants. Fuel cenospheres are always black.
The refractory cenosphere as defined above is synonymous with microballoons or glass microspheres and excludes the traditional fuel cenospheres definition. The use of the term cenosphere in place of microballoons is widespread, and it has become an additional definition.
Applications
Cenospheres are now used as fillers in cement to produce low-density concrete. A 2016 article reports that some manufacturers have begun filling metals and polymers with cenospheres to make lightweight composite materials with higher strength than other types of foam materials. Such composite materials are called syntactic foam. Aluminum-based syntactic foams are finding applications in the automotive sector.
Silver-coated cenospheres are used in conductive coatings, tiles and fabrics. Another use is in conductive paints for antistatic coatings and electromagnetic shielding.
Syntactic foam
Cores of sandwich structure
Removal of nitrogen oxides
Dry reforming of methane
See also
References
Ceramic materials
Particulates
Pollutants
Refractory materials | Cenosphere | [
"Physics",
"Chemistry",
"Engineering"
] | 662 | [
"Ceramic engineering",
"Refractory materials",
"Particulates",
"Materials",
"Ceramic materials",
"Particle technology",
"Matter"
] |
1,453,781 | https://en.wikipedia.org/wiki/Free%20field | In physics, a free field is a field without interactions, described entirely by its kinetic (motion) and mass terms.
Description
In classical physics, a free field is a field whose equations of motion are given by linear partial differential equations. Such linear PDE's have a unique solution for a given initial condition.
In quantum field theory, an operator-valued distribution is a free field if it satisfies linear partial differential equations such that the corresponding case of the same linear PDEs for a classical field (i.e. not an operator) would be the Euler–Lagrange equation for some quadratic Lagrangian. We can differentiate distributions by defining their derivatives via differentiated test functions; see Schwartz distribution for more details. Since we are dealing not with ordinary distributions but with operator-valued distributions, it is understood that these PDEs are not constraints on states but instead a description of the relations among the smeared fields. Besides the PDEs, the operators also satisfy another relation: the commutation/anticommutation relations.
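Concretely, "smearing" means pairing the distribution with a smooth test function $f$; in the standard notation,

$$\varphi(f) \;=\; \int \varphi(x)\, f(x)\, \mathrm{d}^4 x,$$

so that $\varphi(f)$ is an honest operator for each test function $f$, even though $\varphi(x)$ itself is only a distribution.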
Canonical commutation relation
Basically, the commutator (for bosons) or anticommutator (for fermions) of two smeared fields is i times the Peierls bracket of the field with itself (which is really a distribution, not a function), smeared over both test functions. This has the form of a CCR/CAR algebra.
CCR/CAR algebras with infinitely many degrees of freedom have many inequivalent irreducible unitary representations. If the theory is defined over Minkowski space, we may choose the unitary irrep containing a vacuum state although that isn't always necessary.
Example
Let φ be an operator-valued distribution, and let the (Klein–Gordon) PDE be

$$\partial^\mu \partial_\mu \varphi + m^2 \varphi = 0.$$

This is a bosonic field. Let's call the distribution given by the Peierls bracket Δ.

Then,

$$\{\varphi(x), \varphi(y)\} = \Delta(x; y),$$

where here φ is a classical field and {,} is the Peierls bracket.

Then, the canonical commutation relation is

$$[\varphi(f), \varphi(g)] = i\Delta(f; g).$$

Note that Δ is a distribution over two arguments, and so can be smeared as well.

Equivalently, we could have insisted that

$$\mathcal{T}\{\varphi(f)\varphi(g)\} = \mathcal{T}\{\varphi(g)\varphi(f)\},$$

where $\mathcal{T}$ is the time-ordering operator, and that if the supports of f and g are spacelike separated,

$$[\varphi(f), \varphi(g)] = 0.$$
See also
Normal order
Wick's theorem
References
Michael E. Peskin and Daniel V. Schroeder, An Introduction to Quantum Field Theory, Addison-Wesley, Reading, 1995, pp. 19–29.
Quantum field theory | Free field | [
"Physics"
] | 506 | [
"Quantum field theory",
"Quantum mechanics"
] |
1,454,099 | https://en.wikipedia.org/wiki/Metallizing | Metallizing is the general name for the technique of coating metal on the surface of objects. Metallic coatings may be decorative, protective or functional.
Techniques for metallization started as early as mirror making. In 1835, Justus von Liebig discovered the process of coating a glass surface with metallic silver, making the glass mirror one of the earliest items to be metallized. Plating of other non-metallic objects grew rapidly with the introduction of ABS plastic. Because a non-metallic object tends to be a poor electrical conductor, the object's surface must be made conductive before plating can be performed. The plastic part is first etched chemically by a suitable process, such as dipping in a hot chromic acid-sulfuric acid mixture. The etched surface is sensitised and activated by dipping first in tin(II) chloride solution and then in palladium chloride solution. The processed surface is then coated with electroless copper or nickel before further plating. This process gives useful adhesion (about 1 to 6 kgf/cm, 10 to 60 N/cm, or 5 to 35 lbf/in), but is much weaker than actual metal-to-metal adhesion strength.
Vacuum metallizing involves heating the coating metal to its boiling point in a vacuum chamber, then letting condensation deposit the metal on the substrate's surface. Resistance heating, electron beam, or plasma heating is used to vaporize the coating metal. Vacuum metallizing was used to deposit aluminum on the large glass mirrors of reflecting telescopes, such as with the Hale Telescope.
Thermal spray processes are often referred to as metallizing. Metals applied in such a manner provide corrosion protection to steel for decades longer than paint alone. Zinc and aluminum are the most commonly used materials for metallizing steel structures.
Cold sprayable metal technology is a metallizing process that applies a cold-sprayable or putty-like composite metal to almost any surface. The composite metal consists of either two ingredients (metal powder and a water-based binder) or three (metal powder, binder, and hardener).
The mixture of the ingredients is cast or sprayed onto the substrate at room temperature. The desired effect and the necessary final treatment define the thickness of the layer, which normally varies between 80 and 150 μm.
See also
List of telescope parts and construction
Thin film deposition
Electroplating
Sputtering
Chemical vapor deposition
Electroless deposition
References
Industrial processes
Coatings | Metallizing | [
"Chemistry"
] | 490 | [
"Coatings"
] |
1,454,542 | https://en.wikipedia.org/wiki/Boxcar%20function | In mathematics, a boxcar function is any function which is zero over the entire real line except for a single interval where it is equal to a constant, A. The function is named after its graph's resemblance to a boxcar, a type of railroad car. The boxcar function can be expressed in terms of the uniform distribution as

$$\operatorname{boxcar}(x) = (b - a)\, A\, f(a,b;x) = A\bigl(H(x - a) - H(x - b)\bigr),$$

where $f(a,b;x)$ is the uniform distribution of x for the interval $[a, b]$ and $H(x)$ is the Heaviside step function. As with most such discontinuous functions, there is a question of the value at the transition points. These values are probably best chosen for each individual application.
When a boxcar function is selected as the impulse response of a filter, the result is a simple moving average filter, whose frequency response is a sinc-in-frequency, a type of low-pass filter.
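A minimal Python sketch of this moving-average filter; the window length and test signal are arbitrary illustration choices:

```python
import numpy as np

M = 5
kernel = np.ones(M) / M  # boxcar impulse response, normalized so it averages

# A noisy test signal, then its simple moving average.
x = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.3 * np.random.randn(200)
y = np.convolve(x, kernel, mode="same")

# The magnitude of the kernel's DFT traces out the sinc-shaped low-pass response.
freq_response = np.abs(np.fft.rfft(kernel, n=512))
```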
See also
Boxcar averager
Rectangular function
Step function
Top-hat filter
References
Special functions | Boxcar function | [
"Mathematics"
] | 183 | [
"Mathematical analysis",
"Special functions",
"Mathematical analysis stubs",
"Combinatorics"
] |
1,454,584 | https://en.wikipedia.org/wiki/Waste-to-energy%20plant | A waste-to-energy plant is a waste management facility that combusts wastes to produce electricity. This type of power plant is sometimes called a trash-to-energy, municipal waste incineration, energy recovery, or resource recovery plant.
Modern waste-to-energy plants are very different from the trash incinerators that were commonly used until a few decades ago. Unlike modern ones, those plants usually did not remove hazardous or recyclable materials before burning. These incinerators endangered the health of the plant workers and the nearby residents, and most of them did not generate electricity.
Waste-to-energy generation is increasingly being looked at as a potential energy diversification strategy, especially by Sweden, which has been a leader in waste-to-energy production over the past 20 years. The typical range of net electrical energy that can be produced is about 500 to 600 kWh of electricity per ton of waste incinerated. Thus, the incineration of about 2,200 tons of waste per day will produce about 1,200 MWh of electrical energy per day.
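The quoted figures are consistent with each other, as a quick check shows (550 kWh/ton is simply the midpoint of the stated range):

```python
tons_per_day = 2200
kwh_per_ton = 550  # midpoint of the 500-600 kWh/ton range
mwh_per_day = tons_per_day * kwh_per_ton / 1000
print(f"{mwh_per_day:.0f} MWh per day")  # ~1210 MWh, matching the ~1200 MWh figure
```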
Operation
Most waste-to-energy plants burn municipal solid waste, but some burn industrial waste or hazardous waste. A modern, properly run waste-to-energy plant sorts material before burning it and can co-exist with recycling. The only items that are burned are not recyclable, by design or economically, and are not hazardous.
Waste-to-energy plants are similar in their design and equipment with other steam-electric power plants, particularly biomass plants. First, the waste is brought to the facility. Then, the waste is sorted to remove recyclable and hazardous materials. The waste is then stored until it is time for burning. A few plants use gasification, but most combust the waste directly because it is a mature, efficient technology. The waste can be added to the boiler continuously or in batches, depending on the design of the plant.
In terms of volume, waste-to-energy plants incinerate 80 to 90 percent of waste. Sometimes the residue ash is clean enough to be used for purposes such as raw material for cinder blocks or for road construction. In addition, metals that survive combustion are collected from the bottom of the furnace and sold to foundries. Some waste-to-energy plants convert salt water to potable fresh water as a by-product of cooling processes.
Cost
A typical plant with an annual production capacity of 400 GWh costs about 440 million dollars to build.
Waste-to-energy plants may have a significant cost advantage over traditional power options: rather than paying for fuel, the operator receives revenue for accepting waste that would otherwise go to a landfill, typically as a "tipping fee" per ton. By contrast, fuel can account for as much as 45 percent of the cost of producing electricity in a coal-fired plant and 75 percent or more in a natural-gas-fired plant. The National Solid Waste Management Association estimates that the average United States tipping fee in 2002 was $33.70 per ton.
Pollution
Waste-to-energy plants cause less air pollution than coal plants, but more than natural gas plants. At the same time, it is carbon-negative: processing waste into fuel releases considerably less carbon and methane into the air than having waste decay away in landfills or bodies of water.
Waste-to-energy plants are designed to reduce the emission of air pollutants in the flue gases exhausted to the atmosphere, such as nitrogen oxides, sulfur oxides and particulates, and to destroy pollutants already present in the waste, using pollution control measures such as baghouses, scrubbers, and electrostatic precipitators. High temperature, efficient combustion, and effective scrubbing and controls can significantly reduce air pollution outputs.
Burning municipal waste does produce significant amounts of dioxin and furan emissions to the atmosphere as compared to the smaller amounts produced by burning coal or natural gas. Dioxins and furans are considered by many to be serious health hazards. However, advances in emission control designs and very stringent new governmental regulations, as well as public opposition to municipal waste incinerators, have caused large reductions in the amount of dioxins and furans produced by waste-to-energy plants.
Waste-to-energy plants produce fly ash and bottom ash just as is the case when coal is combusted. The total amount of ash produced by waste-to-energy plants ranges from 15% to 25% by weight of the original quantity of waste, and the fly ash amounts to about 10% to 20% of the total ash. The fly ash, by far, constitutes more of a potential health hazard than does the bottom ash because the fly ash contains toxic metals such as lead, cadmium, copper, and zinc as well as small amounts of dioxins and furans. The bottom ash may or may not contain significant levels of health hazardous materials. In the United States, and perhaps in other countries as well, the law requires that the ash be tested for toxicity before disposal in landfills. If the ash is found to be hazardous, it can only be disposed of in landfills which are carefully designed to prevent pollutants in the ash from leaching into underground aquifers.
Odor pollution can be a problem when the plant location is not isolated. Some plants store the waste in an enclosed area with a negative pressure, which prevents unpleasant odors from escaping, and the air drawn from the storage area is sent through the boiler or a filter. However, not all plants take steps to reduce the odor, resulting in complaints.
An issue that affects community relationships is the increased road traffic of garbage trucks to transport municipal waste to the waste-to-energy facility. Due to this reason, most waste-to-energy plants are located in industrial areas.
Landfill gas, which contains about 50% methane, and 50% carbon dioxide, is contaminated with a small amount of pollutants. Unlike at waste-to-energy plants, there are little or no pollution controls on the burning of landfill gas. The gas is usually flared or used to run a reciprocating engine or microturbine, especially in digester gas power plants. Cleaning up the landfill gas is usually not cost effective because natural gas, which it substitutes for, is relatively cheap.
See also
Incineration
Waste management#Incineration
References
External links
Waste-to-energy plants
European Union Directive on waste incineration
ISWA Working Group on thermal treatment of solid waste
Waste management
Waste treatment technology | Waste-to-energy plant | [
"Chemistry",
"Engineering"
] | 1,362 | [
"Water treatment",
"Waste treatment technology",
"Environmental engineering"
] |
1,454,791 | https://en.wikipedia.org/wiki/Gene%20Ontology | The Gene Ontology (GO) is a major bioinformatics initiative to unify the representation of gene and gene product attributes across all species. More specifically, the project aims to: 1) maintain and develop its controlled vocabulary of gene and gene product attributes; 2) annotate genes and gene products, and assimilate and disseminate annotation data; and 3) provide tools for easy access to all aspects of the data provided by the project, and to enable functional interpretation of experimental data using the GO, for example via enrichment analysis. GO is part of a larger classification effort, the Open Biomedical Ontologies, being one of the Initial Candidate Members of the OBO Foundry.
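The enrichment analysis mentioned above is most often an over-representation test against the hypergeometric distribution. A minimal Python sketch; all gene counts here are hypothetical illustration values, not taken from the GO:

```python
from scipy.stats import hypergeom

M = 20000  # annotated genes in the genome (population size)
n = 400    # genes annotated with some GO term (successes in population)
N = 150    # genes in the study set (number of draws)
k = 12     # study-set genes carrying the term (observed successes)

# P(X >= k): chance of drawing at least k term-bearing genes at random.
p_value = hypergeom.sf(k - 1, M, n, N)
print(f"enrichment p-value: {p_value:.3e}")
```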
Whereas gene nomenclature focuses on gene and gene products, the Gene Ontology focuses on the function of the genes and gene products. The GO also extends the effort by using a markup language to make the data (not only of the genes and their products but also of curated attributes) machine readable, and to do so in a way that is unified across all species (whereas gene nomenclature conventions vary by biological taxon).
History
The Gene Ontology was originally constructed in 1998 by a consortium of researchers studying the genomes of three model organisms: Drosophila melanogaster (fruit fly), Mus musculus (mouse), and Saccharomyces cerevisiae (brewer's or baker's yeast). Many other model organism databases have joined the Gene Ontology Consortium, contributing not only annotation data but also the development of ontologies and tools to view and apply the data. Many major plant, animal, and microorganism databases contribute to this project. As of July 2019, the GO contains 44,945 terms, and there are 6,408,283 annotations to 4,467 different biological organisms. There is a significant body of literature on the development and use of the GO, and it has become a standard tool in the bioinformatics arsenal. The project's objectives have three aspects: building the gene ontology, assigning ontology terms to genes and gene products, and developing software and databases to support the first two.
Several analyses of the Gene Ontology using formal, domain-independent properties of classes (the metaproperties) are also starting to appear. For instance, there is now an ontological analysis of biological ontologies.
Terms and ontology
From a practical view, an ontology is a representation of something we know about. "Ontologies" consist of representations of things that are detectable or directly observable and the relationships between those things.
There is no universal standard terminology in biology and related domains, and term usage may be specific to a species, research area, or even a particular research group. This makes communication and sharing of data more difficult. The Gene Ontology project provides an ontology of defined terms representing gene product properties. The ontology covers three domains:
cellular component, the parts of a cell or its extracellular environment;
molecular function, the elemental activities of a gene product at the molecular level, such as binding or catalysis;
biological process, operations or sets of molecular events with a defined beginning and end, pertinent to the functioning of integrated living units: cells, tissues, organs, and organisms.
Each GO term within the ontology has a term name, which may be a word or string of words; a unique alphanumeric identifier; a definition with cited sources; and an ontology indicating the domain to which it belongs. Terms may also have synonyms, which are classed as being exactly equivalent to the term name, broader, narrower, or related; references to equivalent concepts in other databases; and comments on term meaning or usage. The GO ontology is structured as a directed acyclic graph, and each term has defined relationships to one or more other terms in the same domain, and sometimes to other domains. The GO vocabulary is designed to be species-neutral and includes terms applicable to prokaryotes and eukaryotes, single and multicellular organisms.
GO is not static; additions, corrections, and alterations are suggested by and solicited from members of the research and annotation communities, as well as by those directly involved in the GO project. For example, an annotator may request a specific term to represent a metabolic pathway, or a section of the ontology may be revised with the help of community experts. Suggested edits are reviewed by the ontology editors and implemented where appropriate.
The GO ontology and annotation files are freely available from the GO website in a number of formats or can be accessed online using the GO browser AmiGO. The Gene Ontology project also provides downloadable mappings of its terms to other classification systems.
Example term
id: GO:0000016
name: lactase activity
ontology: molecular_function
def: "Catalysis of the reaction: lactose + H2O=D-glucose + D-galactose." [EC:3.2.1.108]
synonym: "lactase-phlorizin hydrolase activity" BROAD [EC:3.2.1.108]
synonym: "lactose galactohydrolase activity" EXACT [EC:3.2.1.108]
xref: EC:3.2.1.108
xref: MetaCyc:LACTASE-RXN
xref: Reactome:20536
is_a: GO:0004553 ! hydrolase activity, hydrolyzing O-glycosyl compounds
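A stanza like the one above is plain "key: value" text, so a few lines suffice to read it. A minimal Python sketch (real OBO files add [Term] headers and richer syntax, so a dedicated parser such as those shipped with the go-dev distribution is preferable in practice):

```python
stanza = """\
id: GO:0000016
name: lactase activity
ontology: molecular_function
xref: EC:3.2.1.108
is_a: GO:0004553 ! hydrolase activity, hydrolyzing O-glycosyl compounds
"""

term = {}
for line in stanza.splitlines():
    key, _, value = line.partition(": ")
    term.setdefault(key, []).append(value)  # keys like xref can repeat

print(term["id"], term["name"])  # ['GO:0000016'] ['lactase activity']
```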
Annotation
Genome annotation encompasses the practice of capturing data about a gene product, and GO annotations use terms from the GO to do so. Annotations from GO curators are integrated and disseminated on the GO website, where they can be downloaded directly or viewed online using AmiGO. In addition to the gene product identifier and the relevant GO term, GO annotations have at least the following data:
The reference used to make the annotation (e.g. a journal article);
An evidence code denoting the type of evidence upon which the annotation is based;
The date and the creator of the annotation
Supporting information, depending on the GO term and evidence used, and supplementary information, such as the conditions the function is observed under, may also be included in a GO annotation.
The evidence code comes from a controlled vocabulary of codes, the Evidence Code Ontology, covering both manual and automated annotation methods. For example, Traceable Author Statement (TAS) means a curator has read a published scientific paper and the metadata for that annotation bears a citation to that paper; Inferred from Sequence Similarity (ISS) means a human curator has reviewed the output from a sequence similarity search and verified that it is biologically meaningful. Annotations from automated processes (for example, remapping annotations created using another annotation vocabulary) are given the code Inferred from Electronic Annotation (IEA). In 2010, over 98% of all GO annotations were inferred computationally, not by curators, but as of July 2, 2019, only about 30% of all GO annotations were inferred computationally.
As these annotations are not checked by a human, the GO Consortium considers them to be marginally less reliable, and they are commonly assigned to higher-level, less detailed terms. Full annotation data sets can be downloaded from the GO website. To support the development of annotation, the GO Consortium provides workshops and mentors new groups of curators and developers.
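As an illustration of working with such files (assuming the tab-separated GAF format, with the GO ID in column 5 and the evidence code in column 7; the file name is a placeholder), the sketch below separates manually curated annotations from electronically inferred ones:

```python
# Minimal sketch: split GAF annotations into manual vs. IEA (electronic).
# "goa_human.gaf" is a placeholder file name, not a bundled resource.
import csv

manual, electronic = [], []
with open("goa_human.gaf") as fh:
    for row in csv.reader(fh, delimiter="\t"):
        if not row or row[0].startswith("!"):   # skip GAF comment/header lines
            continue
        gene_id, go_id, evidence = row[1], row[4], row[6]
        (electronic if evidence == "IEA" else manual).append((gene_id, go_id, evidence))

print(f"{len(manual)} manual vs {len(electronic)} IEA annotations")
```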
Many machine learning algorithms have been designed and implemented to predict Gene Ontology annotations.
Example annotation
Gene product: Actin, alpha cardiac muscle 1, UniProtKB:P68032
GO term: heart contraction; GO:0060047 (biological process)
Evidence code: Inferred from Mutant Phenotype (IMP)
Reference:
Assigned by: UniProtKB, June 6, 2008
Data source:
Tools
There are a large number of tools available, both online and for download, that use the data provided by the GO project. The vast majority of these come from third parties; the GO Consortium develops and supports two tools, AmiGO and OBO-Edit.
AmiGO is a web-based application that allows users to query, browse, and visualize ontologies and gene product annotation data. It also has a BLAST tool, tools allowing analysis of larger data sets, and an interface to query the GO database directly. AmiGO can be used online at the GO website to access the data provided by the GO Consortium, or downloaded and installed for local use on any database employing the GO database schema. It is free open source software and is available as part of the go-dev software distribution.
OBO-Edit is an open source, platform-independent ontology editor developed and maintained by the Gene Ontology Consortium. It is implemented in Java and uses a graph-oriented approach to display and edit ontologies. OBO-Edit includes a comprehensive search and filter interface, with the option to render subsets of terms to make them visually distinct; the user interface can also be customized according to user preferences. OBO-Edit also has a reasoner that can infer links that have not been explicitly stated based on existing relationships and their properties. Although it was developed for biomedical ontologies, OBO-Edit can be used to view, search, and edit any ontology. It is freely available to download.
Consortium
The Gene Ontology Consortium is the set of biological databases and research groups actively involved in the gene ontology project. This includes a number of model organism databases and multi-species protein databases, software development groups, and a dedicated editorial office.
See also
Blast2GO
Comparative Toxicogenomics Database
DAVID bioinformatics
Interferome
National Center for Biomedical Ontology
Critical Assessment of Function Annotation
References
External links
AmiGO - the current official web-based set of tools for searching and browsing the Gene Ontology database
Gene Ontology Consortium - official site
PlantRegMap - GO annotation for 165 plant species and GO enrichment Analysis
Biological databases
Ontology (information science) | Gene Ontology | [
"Biology"
] | 2,133 | [
"Bioinformatics",
"Biological databases"
] |
1,455,062 | https://en.wikipedia.org/wiki/Empirical%20risk%20minimization | In statistical learning theory, the principle of empirical risk minimization defines a family of learning algorithms based on evaluating performance over a known and fixed dataset. The core idea is based on an application of the law of large numbers; more specifically, we cannot know exactly how well a predictive algorithm will work in practice (i.e. the "true risk") because we do not know the true distribution of the data, but we can instead estimate and optimize the performance of the algorithm on a known set of training data. The performance over the known set of training data is referred to as the "empirical risk".
Background
The following situation is a general setting of many supervised learning problems. There are two spaces of objects $X$ and $Y$, and we would like to learn a function $h: X \to Y$ (often called hypothesis) which outputs an object $y \in Y$, given $x \in X$. To do so, there is a training set of $n$ examples $(x_1, y_1), \ldots, (x_n, y_n)$, where $x_i \in X$ is an input and $y_i \in Y$ is the corresponding response that is desired from $h(x_i)$.
To put it more formally, assume that there is a joint probability distribution $P(x, y)$ over $X$ and $Y$, and that the training set consists of $n$ instances $(x_1, y_1), \ldots, (x_n, y_n)$ drawn i.i.d. from $P(x, y)$. The assumption of a joint probability distribution allows for the modelling of uncertainty in predictions (e.g. from noise in data), because $y$ is not a deterministic function of $x$ but rather a random variable with conditional distribution $P(y \mid x)$ for a fixed $x$.
It is also assumed that there is a non-negative real-valued loss function $L(\hat{y}, y)$ which measures how different the prediction $\hat{y}$ of a hypothesis is from the true outcome $y$. For classification tasks, these loss functions can be scoring rules.
The risk associated with hypothesis $h(x)$ is then defined as the expectation of the loss function:

$$R(h) = \mathbf{E}[L(h(x), y)] = \int L(h(x), y) \, dP(x, y).$$
A loss function commonly used in theory is the 0-1 loss function: $L(\hat{y}, y) = \mathbf{I}(\hat{y} \ne y)$, where $\mathbf{I}(\cdot)$ is the indicator function.
The ultimate goal of a learning algorithm is to find a hypothesis $h^*$ among a fixed class of functions $\mathcal{H}$ for which the risk $R(h)$ is minimal:

$$h^* = \underset{h \in \mathcal{H}}{\arg\min} \, R(h).$$
For classification problems, the Bayes classifier is defined to be the classifier minimizing the risk defined with the 0–1 loss function.
Formal definition
In general, the risk $R(h)$ cannot be computed because the distribution $P(x, y)$ is unknown to the learning algorithm. However, given a sample of i.i.d. training data points, we can compute an estimate, called the empirical risk, by computing the average of the loss function over the training set; more formally, computing the expectation with respect to the empirical measure:

$$R_{\text{emp}}(h) = \frac{1}{n} \sum_{i=1}^{n} L(h(x_i), y_i).$$
The empirical risk minimization principle states that the learning algorithm should choose a hypothesis $\hat{h}$ which minimizes the empirical risk over the hypothesis class $\mathcal{H}$:

$$\hat{h} = \underset{h \in \mathcal{H}}{\arg\min} \, R_{\text{emp}}(h).$$
Thus, the learning algorithm defined by the empirical risk minimization principle consists in solving the above optimization problem.
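As an illustration of the principle (not from the article), the sketch below performs empirical risk minimization with the 0-1 loss over a toy hypothesis class of one-dimensional threshold classifiers, by exhaustive search:

```python
# Minimal ERM sketch: hypothesis class H = {h_t(x) = sign(x - t)} for a
# grid of thresholds t; minimize the empirical 0-1 risk on noisy data.
# All names and constants are illustrative choices for this example.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.sign(x - 0.3)             # labels generated by an unknown threshold
y[rng.random(200) < 0.1] *= -1   # 10% label noise

def empirical_risk(t):
    """Average 0-1 loss of the threshold classifier h_t on the sample."""
    return np.mean(np.sign(x - t) != y)

thresholds = np.linspace(-2, 2, 401)        # the hypothesis class H
h_hat = thresholds[np.argmin([empirical_risk(t) for t in thresholds])]
print(f"ERM threshold: {h_hat:.2f}, empirical risk: {empirical_risk(h_hat):.3f}")
```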
Properties
Guarantees for the performance of empirical risk minimization depend strongly on the function class selected as well as the distributional assumptions made. In general, distribution-free methods are too coarse, and do not lead to practical bounds. However, they are still useful in deriving asymptotic properties of learning algorithms, such as consistency. In particular, distribution-free bounds on the performance of empirical risk minimization given a fixed function class can be derived using bounds on the VC complexity of the function class.
For simplicity, considering the case of binary classification tasks, it is possible to bound the probability of the selected classifier $\hat{h}$ being much worse than the best possible classifier $h^*$. Consider the risk $R$ defined over the hypothesis class $\mathcal{H}$ with growth function $\mathcal{S}(\mathcal{H}, n)$, given a dataset of size $n$. Then, for every $\epsilon > 0$:

$$\mathbb{P}\left( R(\hat{h}) - R(h^*) > \epsilon \right) \le 8 \, \mathcal{S}(\mathcal{H}, n) \, e^{-n \epsilon^2 / 32}.$$
Similar results hold for regression tasks. These results are often based on uniform laws of large numbers, which control the deviation of the empirical risk from the true risk, uniformly over the hypothesis class.
Impossibility results
It is also possible to show lower bounds on algorithm performance if no distributional assumptions are made. This is sometimes referred to as the No free lunch theorem. Even though a specific learning algorithm may provide the asymptotically optimal performance for any distribution, the finite sample performance is always poor for at least one data distribution. This means that no classifier can guarantee an error below a given level for a given sample size across all distributions.
Specifically, let $\epsilon > 0$, and consider a sample size $n$ and a classification rule $g_n$; then there exists a distribution of $(X, Y)$ with risk $L^* = 0$ (meaning that perfect prediction is possible) such that:

$$\mathbb{E} L_n \ge \frac{1}{2} - \epsilon.$$
It is further possible to show that the convergence rate of a learning algorithm is poor for some distributions. Specifically, given a sequence of decreasing positive numbers $a_n$ converging to zero, it is possible to find a distribution such that:

$$\mathbb{E} L_n \ge a_n$$

for all $n$. This result shows that universally good classification rules do not exist, in the sense that the rule must be low quality for at least one distribution.
Computational complexity
Empirical risk minimization for a classification problem with a 0-1 loss function is known to be an NP-hard problem even for a relatively simple class of functions such as linear classifiers. Nevertheless, it can be solved efficiently when the minimal empirical risk is zero, i.e., data is linearly separable.
In practice, machine learning algorithms cope with this issue either by employing a convex approximation to the 0–1 loss function (like hinge loss for SVM), which is easier to optimize, or by imposing assumptions on the distribution (and thus stop being agnostic learning algorithms to which the above result applies).
In the case of convexification, Zhang's lemma majorizes the excess risk of the original problem by the excess risk of the convexified problem. Minimizing the latter using convex optimization also allows one to control the former.
Tilted empirical risk minimization
Tilted empirical risk minimization is a machine learning technique used to modify standard loss functions like squared error, by introducing a tilt parameter. This parameter dynamically adjusts the weight of data points during training, allowing the algorithm to focus on specific regions or characteristics of the data distribution. Tilted empirical risk minimization is particularly useful in scenarios with imbalanced data or when there is a need to emphasize errors in certain parts of the prediction space.
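As a sketch of the idea, the objective $\tilde{R}_t(h) = \tfrac{1}{t} \log\big( \tfrac{1}{n} \sum_i e^{t L_i} \big)$ below follows the tilted-ERM literature and is an assumption of this illustration, not a formula stated above; $t > 0$ emphasizes the largest per-sample losses, $t < 0$ de-emphasizes them, and $t \to 0$ recovers the ordinary average loss:

```python
# Minimal tilted-risk sketch using a numerically stable log-sum-exp.
import numpy as np
from scipy.special import logsumexp

def tilted_risk(losses, t):
    """(1/t) * log( mean_i exp(t * L_i) ); t = 0 falls back to the mean."""
    losses = np.asarray(losses, dtype=float)
    if t == 0.0:
        return losses.mean()
    return (logsumexp(t * losses) - np.log(len(losses))) / t

losses = [0.1, 0.2, 0.15, 3.0]          # one outlier loss
for t in (-5.0, 0.0, 5.0):
    print(t, round(tilted_risk(losses, t), 3))
```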
See also
M-estimator
Maximum likelihood estimation
References
Further reading
Machine learning | Empirical risk minimization | [
"Engineering"
] | 1,204 | [
"Artificial intelligence engineering",
"Machine learning"
] |
1,455,348 | https://en.wikipedia.org/wiki/Sigma%20approximation | In mathematics, σ-approximation adjusts a Fourier summation to greatly reduce the Gibbs phenomenon, which would otherwise occur at discontinuities.
An $(m-1)$-term, σ-approximated summation for a series of period $T$ can be written as follows:

$$s(\theta) = \frac{1}{2} a_0 + \sum_{k=1}^{m-1} \operatorname{sinc}^p\!\left(\frac{k}{m}\right) \left[ a_k \cos\left(\frac{2\pi k}{T}\theta\right) + b_k \sin\left(\frac{2\pi k}{T}\theta\right) \right],$$
in terms of the normalized sinc function:

$$\operatorname{sinc} x = \frac{\sin \pi x}{\pi x}.$$
Here $a_k$ and $b_k$ are the typical Fourier series coefficients, and $p$, a non-negative parameter, determines the amount of smoothing applied, where higher values of $p$ further reduce the Gibbs phenomenon but can overly smooth the representation of the function.
The term

$$\operatorname{sinc}\left(\frac{k}{m}\right)$$

is the Lanczos σ factor, which is responsible for eliminating most of the Gibbs phenomenon. This amounts to sampling the right side of the main lobe of the $\operatorname{sinc}$ function to roll off the higher-frequency Fourier series coefficients.
As is known from the uncertainty principle, a sharp cutoff in the frequency domain (cutting off the Fourier series abruptly without adjusting its coefficients) causes a wide spread of information in the time domain (lots of ringing).
This can also be understood as applying a Window function to the Fourier series coefficients to balance maintaining a fast rise time (analogous to a narrow transition band) and small amounts of ringing (analogous to stopband attenuation).
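The effect is easy to demonstrate numerically; the sketch below (an illustration with $p = 1$, using the square wave whose nonzero Fourier coefficients are $b_k = 4/(\pi k)$ for odd $k$) compares the raw truncated sum with its σ-approximated version:

```python
# Minimal sketch: Lanczos sigma factors applied to a square-wave Fourier
# series. numpy's sinc is the normalized one, sin(pi x)/(pi x).
import numpy as np

m = 32                                    # number of retained harmonics
theta = np.linspace(0, 2 * np.pi, 2000)

raw = np.zeros_like(theta)
smoothed = np.zeros_like(theta)
for k in range(1, m, 2):                  # square wave: odd harmonics only
    bk = 4 / (np.pi * k)
    sigma = np.sinc(k / m)                # Lanczos sigma factor (p = 1)
    raw += bk * np.sin(k * theta)
    smoothed += sigma * bk * np.sin(k * theta)

# Gibbs overshoot of ~18% in the raw sum, greatly reduced after smoothing
print("max raw:", raw.max(), " max smoothed:", smoothed.max())
```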
See also
Lanczos resampling
References
Fourier series
Numerical analysis | Sigma approximation | [
"Mathematics"
] | 257 | [
"Mathematical analysis",
"Mathematical analysis stubs",
"Computational mathematics",
"Mathematical relations",
"Numerical analysis",
"Approximations"
] |
1,455,471 | https://en.wikipedia.org/wiki/Pseudotensor | In physics and mathematics, a pseudotensor is usually a quantity that transforms like a tensor under an orientation-preserving coordinate transformation (e.g. a proper rotation) but additionally changes sign under an orientation-reversing coordinate transformation (e.g., an improper rotation), which is a transformation that can be expressed as a proper rotation followed by reflection. This is a generalization of a pseudovector. To evaluate a tensor or pseudotensor sign, it has to be contracted with some vectors, as many as its rank is, belonging to the space where the rotation is made while keeping the tensor coordinates unaffected (differently from what one does in the case of a base change). Under improper rotation a pseudotensor and a proper tensor of the same rank will have different sign which depends on the rank being even or odd. Sometimes inversion of the axes is used as an example of an improper rotation to see the behaviour of a pseudotensor, but it works only if vector space dimensions is odd otherwise inversion is a proper rotation without an additional reflection.
There is a second meaning for pseudotensor (and likewise for pseudovector), restricted to general relativity. Tensors obey strict transformation laws, but pseudotensors in this sense are not so constrained. Consequently, the form of a pseudotensor will, in general, change as the frame of reference is altered. An equation containing pseudotensors which holds in one frame will not necessarily hold in a different frame. This makes pseudotensors of limited relevance because equations in which they appear are not invariant in form.
Mathematical developments in the 1980s have allowed pseudotensors to be understood as sections of jet bundles.
Definition
Two quite different mathematical objects are called a pseudotensor in different contexts.
The first context is essentially a tensor multiplied by an extra sign factor, such that the pseudotensor changes sign under reflections when a normal tensor does not. According to one definition, a pseudotensor P of the type $(p, q)$ is a geometric object whose components in an arbitrary basis are enumerated by $(p + q)$ indices and obey the transformation rule

$$\hat{P}^{i_1 \ldots i_q}_{j_1 \ldots j_p} = (-1)^A \, A^{i_1}{}_{k_1} \cdots A^{i_q}{}_{k_q} \, B^{l_1}{}_{j_1} \cdots B^{l_p}{}_{j_p} \, P^{k_1 \ldots k_q}_{l_1 \ldots l_p}$$

under a change of basis.

Here $\hat{P}^{i_1 \ldots i_q}_{j_1 \ldots j_p}$ and $P^{k_1 \ldots k_q}_{l_1 \ldots l_p}$ are the components of the pseudotensor in the new and old bases, respectively, $A^{i}{}_{k}$ is the transition matrix for the contravariant indices, $B^{l}{}_{j}$ is the transition matrix for the covariant indices, and

$$(-1)^A = \operatorname{sgn}\left(\det\left(A^{i}{}_{k}\right)\right) = \pm 1.$$

This transformation rule differs from the rule for an ordinary tensor only by the presence of the factor $(-1)^A$.
The second context where the word "pseudotensor" is used is general relativity. In that theory, one cannot describe the energy and momentum of the gravitational field by an energy–momentum tensor. Instead, one introduces objects that behave as tensors only with respect to restricted coordinate transformations. Strictly speaking, such objects are not tensors at all. A famous example of such a pseudotensor is the Landau–Lifshitz pseudotensor.
Examples
On non-orientable manifolds, one cannot define a volume form globally due to the non-orientability, but one can define a volume element, which is formally a density, and may also be called a pseudo-volume form, due to the additional sign twist (tensoring with the sign bundle). The volume element is a pseudotensor density according to the first definition.
A change of variables in multi-dimensional integration may be achieved through the incorporation of a factor of the absolute value of the determinant of the Jacobian matrix. The use of the absolute value introduces a sign change for improper coordinate transformations to compensate for the convention of keeping integration (volume) element positive; as such, an integrand is an example of a pseudotensor density according to the first definition.
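The sign behaviour in these examples can be illustrated numerically with the scalar triple product (an oriented volume, the archetypal pseudoscalar quantity); the following numpy sketch is an illustration added here, not something from the article:

```python
# Minimal sketch: the oriented volume det([a, b, c]) is unchanged by a
# proper rotation (det = +1) but flips sign under a reflection (det = -1),
# while an ordinary tensor quantity such as a vector's length is not.
import numpy as np

a, b, c = np.array([1., 0, 0]), np.array([0, 1., 0]), np.array([0, 0, 1.])
volume = lambda M: np.linalg.det(np.column_stack([M @ a, M @ b, M @ c]))

rotation = np.array([[0., -1, 0], [1, 0, 0], [0, 0, 1]])   # det = +1
reflection = np.diag([1., 1, -1])                          # det = -1

print(volume(np.eye(3)))   #  1.0
print(volume(rotation))    #  1.0  (proper rotation: volume unchanged)
print(volume(reflection))  # -1.0  (improper: pseudotensor sign flip)
```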
The Christoffel symbols of an affine connection on a manifold can be thought of as the correction terms to the partial derivatives of a coordinate expression of a vector field with respect to the coordinates to render it the vector field's covariant derivative. While the affine connection itself doesn't depend on the choice of coordinates, its Christoffel symbols do, making them a pseudotensor quantity according to the second definition.
See also
References
External links
Mathworld description for pseudotensor.
Differential geometry
Tensors
Tensors in general relativity | Pseudotensor | [
"Physics",
"Engineering"
] | 851 | [
"Tensors in general relativity",
"Tensors",
"Tensor physical quantities",
"Physical quantities"
] |
33,194,835 | https://en.wikipedia.org/wiki/Logarithmic%20Schr%C3%B6dinger%20equation | In theoretical physics, the logarithmic Schrödinger equation (sometimes abbreviated as LNSE or LogSE) is one of the nonlinear modifications of Schrödinger's equation, first proposed by Gerald H. Rosen in its relativistic version (with D'Alembertian instead of Laplacian and first-order time derivative) in 1969. It is a classical wave equation with applications to extensions of quantum mechanics, quantum optics, nuclear physics, transport and diffusion phenomena, open quantum systems and information theory,
effective quantum gravity and physical vacuum models and theory of superfluidity and Bose–Einstein condensation. It is an example of an integrable model.
The equation
The logarithmic Schrödinger equation is a partial differential equation. In mathematics and mathematical physics one often uses its dimensionless form:

$$i \frac{\partial \psi}{\partial t} + \Delta \psi + \psi \ln |\psi|^2 = 0,$$

for the complex-valued function $\psi = \psi(\mathbf{x}, t)$ of the particle's position vector $\mathbf{x}$ at time $t$, where

$$\Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}$$

is the Laplacian of $\psi$ in Cartesian coordinates. The logarithmic term has been shown to be indispensable in ensuring that the speed of sound scales as the cubic root of pressure for Helium-4 at very low temperatures. This logarithmic term is also needed for cold sodium atoms. In spite of the logarithmic term, it has been shown in the case of central potentials that, even for non-zero angular momentum, the LogSE retains certain symmetries similar to those found in its linear counterpart, making it potentially applicable to atomic and nuclear systems.
The relativistic version of this equation can be obtained by replacing the derivative operator with the D'Alembertian, similarly to the Klein–Gordon equation. Soliton-like solutions known as Gaussons figure prominently as analytical solutions to this equation for a number of cases.
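For illustration only (the article does not prescribe a numerical scheme), here is a split-step Fourier sketch of the one-dimensional dimensionless equation; it exploits the fact that the logarithmic term contributes a pure phase, so that substep can be applied exactly:

```python
# Minimal split-step sketch for i dpsi/dt + psi_xx + psi ln|psi|^2 = 0 in
# 1-D, starting from a Gaussian. Grid, step sizes, and the small floor
# inside the log are illustrative numerical choices.
import numpy as np

N, L_box, dt, steps = 512, 40.0, 1e-3, 2000
x = np.linspace(-L_box / 2, L_box / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L_box / N)

psi = np.exp(-x**2 / 2).astype(complex)        # Gaussian initial condition

for _ in range(steps):
    # half nonlinear step: psi -> psi * exp(i*dt/2 * ln|psi|^2), exact
    psi *= np.exp(0.5j * dt * np.log(np.abs(psi)**2 + 1e-300))
    # full linear step in Fourier space: psi_k -> psi_k * exp(-i*k^2*dt)
    psi = np.fft.ifft(np.exp(-1j * k**2 * dt) * np.fft.fft(psi))
    # second half nonlinear step
    psi *= np.exp(0.5j * dt * np.log(np.abs(psi)**2 + 1e-300))

print("norm:", np.sum(np.abs(psi)**2) * (L_box / N))   # conserved up to error
```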
See also
Galaxy rotation curve
Nonlinear Schrödinger equation
Superfluid Helium-4
Superfluid vacuum theory
References
External links
Theoretical physics
Schrödinger equation | Logarithmic Schrödinger equation | [
"Physics"
] | 396 | [
"Equations of physics",
"Theoretical physics",
"Eponymous equations of physics",
"Quantum mechanics",
"Schrödinger equation"
] |
33,194,970 | https://en.wikipedia.org/wiki/Hidden%20attractor | In the bifurcation theory, a bounded oscillation that is born without loss of stability of stationary set is called a hidden oscillation. In nonlinear control theory, the birth of a hidden oscillation in a time-invariant control system with bounded states means crossing a boundary, in the domain of the parameters, where local stability of the stationary states implies global stability (see, e.g. Kalman's conjecture). If a hidden oscillation (or a set of such hidden oscillations filling a compact subset of the phase space of the dynamical system) attracts all nearby oscillations, then it is called a hidden attractor. For a dynamical system with a unique equilibrium point that is globally attractive, the birth of a hidden attractor corresponds to a qualitative change in behaviour from monostability to bi-stability. In the general case, a dynamical system may turn out to be multistable and have coexisting local attractors in the phase space. While trivial attractors, i.e. stable equilibrium points, can be easily found analytically or numerically, the search of periodic and chaotic attractors can turn out to be a challenging problem (see, e.g. the second part of Hilbert's 16th problem).
Classification of attractors as being hidden or self-excited
To identify a local attractor in a physical or numerical experiment, one needs to choose an initial system’s state in attractor’s basin of attraction and observe how the system’s state, starting from this initial state, after a transient process, visualizes the attractor. The classification of attractors as being hidden or self-excited reflects the difficulties of revealing basins of attraction and searching for the local attractors in the phase space.
Definition.
An attractor is called a hidden attractor if its basin of attraction does not intersect with a certain open neighbourhood of equilibrium points; otherwise it is called a self-excited attractor.
The classification of attractors as being hidden or self-excited was introduced by G. Leonov and N. Kuznetsov in connection with the discovery of the hidden Chua attractor for the first time in 2009. Similarly, an arbitrary bounded oscillation, not necessarily having an open neighborhood as the basin of attraction in the phase space, is classified as a self-excited or hidden oscillation.
Self-excited attractors
For a self-excited attractor, its basin of attraction is connected with an unstable equilibrium and, therefore, the self-excited attractors can be found numerically by a standard computational procedure in which, after a transient process, a trajectory starting in a neighbourhood of an unstable equilibrium is attracted to the state of oscillation and then traces it (see, e.g. self-oscillation process). Thus, self-excited attractors, even coexisting in the case of multistability, can be easily revealed and visualized numerically. In the Lorenz system, for classical parameters, the attractor is self-excited with respect to all existing equilibria, and can be visualized by any trajectory from their vicinities; however, for some other parameter values there are two trivial attractors coexisting with a chaotic attractor, which is a self-excited one with respect to the zero equilibrium only. Classical attractors in the Van der Pol, Belousov–Zhabotinsky, Rössler, Chua, and Hénon dynamical systems are self-excited.
A conjecture is that the Lyapunov dimension of a self-excited attractor does not exceed the Lyapunov dimension of one of the unstable equilibria, the unstable manifold of which intersects with the basin of attraction and visualizes the attractor.
Hidden attractors
Hidden attractors have basins of attraction which are not connected with equilibria and are “hidden” somewhere in the phase space. For example, the hidden attractors are attractors in the systems without equilibria: e.g. rotating electromechanical dynamical systems with Sommerfeld effect (1902), in the systems with only one equilibrium, which is stable: e.g. counterexamples to the Aizerman's conjecture (1949) and Kalman's conjecture (1957) on the monostability of nonlinear control systems. One of the first related theoretical problems is the second part of Hilbert's 16th problem on the number and mutual disposition of limit cycles in two-dimensional polynomial systems where the nested stable limit cycles are hidden periodic attractors. The notion of a hidden attractor has become a catalyst for the discovery of hidden attractors in many applied dynamical models.
In general, the problem with hidden attractors is that there are no general straightforward methods to trace or predict such states for the system's dynamics. While for two-dimensional systems hidden oscillations can be investigated using analytical methods (see, e.g., the results on the second part of Hilbert's 16th problem), for the study of stability and oscillations in complex nonlinear multidimensional systems numerical methods are often used.
In the multi-dimensional case the integration of trajectories with random initial data is unlikely to provide a localization of a hidden attractor, since a basin of attraction may be very small, and the attractor dimension itself may be much less than the dimension of the considered system.
Therefore, for the numerical localization of hidden attractors in multi-dimensional space, it is necessary to develop special analytical-numerical computational procedures, which allow one to choose initial data in the attraction domain of the hidden oscillation (which does not contain neighborhoods of equilibria), and then to perform trajectory computation.
There are corresponding effective methods based on homotopy and numerical continuation: a sequence of similar systems is constructed, such that for the first (starting) system the initial data for numerical computation of an oscillating solution (starting oscillation) can be obtained analytically, and then the transformation of this starting oscillation in the transition from one system to another is followed numerically.
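A toy sketch of this continuation loop may help (it uses the periodically forced Duffing oscillator purely for illustration; neither the system nor the parameter values come from the hidden-attractor literature): the attractor computed at each parameter value seeds the trajectory computation at the next one.

```python
# Minimal numerical-continuation sketch: sweep a parameter and reuse the
# end state of each run as the initial condition for the next system.
import numpy as np
from scipy.integrate import solve_ivp

def duffing(t, s, gamma):
    x, v = s
    return [v, -0.3 * v + x - x**3 + gamma * np.cos(1.2 * t)]

state = [0.1, 0.0]                        # easy-to-analyze starting regime
for gamma in np.linspace(0.05, 0.5, 10):  # continuation in the parameter
    sol = solve_ivp(duffing, (0, 200), state, args=(gamma,),
                    rtol=1e-8, atol=1e-8)
    state = sol.y[:, -1]                  # end state seeds the next system
    print(f"gamma={gamma:.2f}  final state: {np.round(state, 3)}")
```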
Theory of hidden oscillations
The classification of attractors as self-excited or hidden ones was a fundamental premise for the emergence of the theory of hidden oscillations, which represents the modern development of Andronov's theory of oscillations. It is key to determining the exact boundaries of global stability, parts of which are classified by N. Kuznetsov as trivial (i.e., determined by local bifurcations) or as hidden (i.e., determined by non-local bifurcations and by the birth of hidden oscillations).
References
Books
Chaotic Systems with Multistability and Hidden Attractors (Eds.: Wang, Kuznetsov, Chen), Springer, 2021 (doi:10.1007/978-3-030-75821-9)
Nonlinear Dynamical Systems with Self-Excited and Hidden Attractors (Eds.: Pham, Vaidyanathan, Volos et al.), Springer, 2018 (doi:10.1007/978-3-319-71243-7)
Selected lectures
N.Kuznetsov, Invited lecture The theory of hidden oscillations and stability of dynamical systems, Int. Workshop on Applied Mathematics, Czech Republic, 2021
Afraimovich Award's plenary lecture: N. Kuznetsov The theory of hidden oscillations and stability of dynamical systems. Int. Conference on Nonlinear Dynamics and Complexity, 2021
Dynamical systems
Oscillation
Chaos theory
Nonlinear systems
Hidden oscillation | Hidden attractor | [
"Physics",
"Mathematics"
] | 1,596 | [
"Nonlinear systems",
"Mechanics",
"Oscillation",
"Hidden oscillation",
"Dynamical systems"
] |
33,199,198 | https://en.wikipedia.org/wiki/Lamb%20Dicke%20regime | In ion trapping and atomic physics experiments, the Lamb Dicke regime (or Lamb Dicke limit) is a quantum regime in which the coupling (induced by an external light field) between an ion or atom's internal qubit states and its motional states is sufficiently small so that transitions that change the motional quantum number by more than one are strongly suppressed.
This condition is quantitatively expressed by the inequality

$$\eta^2 (2n + 1) \ll 1,$$

where $\eta$ is the Lamb–Dicke parameter and $n$ is the motional quantum number of the ion or atom's harmonic oscillator state.
Lamb Dicke parameter
Considering the ion's motion along the direction of the static trapping potential of an ion trap (the axial motion in the $z$-direction), the trap potential can be validly approximated as quadratic around the equilibrium position, and the ion's motion can locally be considered as that of a quantum harmonic oscillator with eigenstates $|n\rangle$. In this case the position operator $\hat{z}$ is given by

$$\hat{z} = z_0 (\hat{a} + \hat{a}^\dagger),$$

where

$$z_0 = \sqrt{\frac{\hbar}{2 m \omega_z}}$$

is the spread of the zero-point wavefunction, $\omega_z$ is the frequency of the static harmonic trapping potential in the $z$-direction, and $\hat{a}, \hat{a}^\dagger$ are the ladder operators of the harmonic oscillator.
The Lamb Dicke regime corresponds to the condition

$$\langle \Psi_{\text{motion}} | k_z^2 \hat{z}^2 | \Psi_{\text{motion}} \rangle \ll 1,$$

where $\Psi_{\text{motion}}$ is the motional part of the ion's wavefunction and $k_z = \vec{k} \cdot \hat{e}_z$ (here: $\hat{e}_z$ is the unit vector in the $z$-direction) is the projection of the wavevector of the light field acting on the ion onto the $z$-direction.
The Lamb–Dicke parameter actually is defined as

$$\eta = k_z z_0.$$
Upon absorption or emission of a photon with momentum $\hbar k_z$, the kinetic energy of the ion is changed by the amount of the recoil energy

$$E_R = \hbar \omega_R,$$

where the recoil frequency is defined as

$$\omega_R = \frac{\hbar k_z^2}{2m}.$$

The square of the Lamb Dicke parameter is then given by

$$\eta^2 = \frac{E_R}{\hbar \omega_z} = \frac{\omega_R}{\omega_z}.$$
Hence the Lamb Dicke parameter quantifies the coupling strength between internal states and motional states of an ion. If the Lamb Dicke parameter is much smaller than one, the quantized energy spacing of the harmonic oscillator is larger than the recoil energy and transitions changing the motional state of the ion are negligible. The Lamb Dicke parameter being small is necessary, but not a sufficient condition for the Lamb Dicke regime.
Mathematical background
In ion trapping experiments, laser fields are used to couple the internal state of an ion with its motional state. The mechanical recoil of the ion upon absorption or emission of a photon is described by the operators $e^{\pm i k_z \hat{z}}$. These operators induce a displacement of the atomic momentum by the quantity $\hbar k_z$ for the absorption (+) or emission (−) of a laser photon. In the basis of harmonic oscillator eigenstates $\{|n\rangle\}$, the probability for the transition $|n\rangle \to |n'\rangle$ is given by the Franck-Condon coefficients

$$F_{n \to n'} = \langle n' | e^{i k_z \hat{z}} | n \rangle = \langle n' | e^{i \eta (\hat{a} + \hat{a}^\dagger)} | n \rangle.$$
If the condition for the Lamb-Dicke regime is met, a Taylor expansion is possible:

$$e^{i \eta (\hat{a} + \hat{a}^\dagger)} = 1 + i \eta (\hat{a} + \hat{a}^\dagger) + O(\eta^2).$$
The ladder operators act on the state $|n\rangle$ according to the rules

$$\hat{a} |n\rangle = \sqrt{n} \, |n - 1\rangle \quad \text{and} \quad \hat{a}^\dagger |n\rangle = \sqrt{n + 1} \, |n + 1\rangle.$$
If $\eta \sqrt{n + 1}$ is small, the terms of order $\eta^2$ and higher can be neglected, and the transition amplitude $\langle n' | e^{i \eta (\hat{a} + \hat{a}^\dagger)} | n \rangle$ can therefore be approximated as

$$\langle n' | \left( 1 + i \eta (\hat{a} + \hat{a}^\dagger) \right) | n \rangle = \langle n' | n \rangle + i \eta \sqrt{n} \, \langle n' | n - 1 \rangle + i \eta \sqrt{n + 1} \, \langle n' | n + 1 \rangle.$$
Since $\langle n' | m \rangle = 0$ unless $n' = m$, this expression vanishes unless $n' \in \{n - 1,\, n,\, n + 1\}$, and it is readily seen that transitions between motional states which change the motional quantum number by more than one are strongly suppressed.
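The suppression can be checked numerically; the sketch below (illustrative, with a truncated Fock space of dimension 20 as an assumption of the example) builds the operator $e^{i\eta(\hat{a} + \hat{a}^\dagger)}$ and inspects the amplitudes out of the motional ground state:

```python
# Minimal sketch: truncated ladder operators, matrix exponential, and
# Franck-Condon amplitudes |<n'|U|0>| for a small Lamb-Dicke parameter.
import numpy as np
from scipy.linalg import expm

dim, eta = 20, 0.1                              # truncation and eta
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)    # annihilation operator
U = expm(1j * eta * (a + a.conj().T))           # e^{i eta (a + a^dag)}

# carrier ~ 1, first sidebands ~ eta, |Delta n| > 1 suppressed as
# higher powers of eta
print(np.abs(U[:5, 0]).round(5))
```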
Applicability
In the Lamb Dicke regime spontaneous decay occurs predominantly at the frequency of the qubit's internal transition (carrier frequency) and therefore does not affect the ion's motional state most of the time. This is a necessary requirement for resolved sideband cooling to work efficiently.
Reaching the Lamb Dicke regime is a requirement for many of the schemes used to perform coherent operations on ions. It therefore establishes the upper limit on the temperature of ions in order for these methods to create entanglement. During manipulations on ions with laser pulses, the ions cannot be laser cooled. They must therefore be initially cooled down to a temperature such that they stay in the Lamb Dicke regime during the entire manipulation process that creates entanglement.
See also
Laser cooling
Resolved sideband cooling
References and notes
Atomic physics | Lamb Dicke regime | [
"Physics",
"Chemistry"
] | 790 | [
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
" and optical physics"
] |
33,200,356 | https://en.wikipedia.org/wiki/Quaternionic%20discrete%20series%20representation | In mathematics, a quaternionic discrete series representation is a discrete series representation of a semisimple Lie group G associated with a quaternionic structure on the symmetric space of G. They were introduced by .
Quaternionic discrete series representations exist when the maximal compact subgroup of the group G has a normal subgroup isomorphic to SU(2). Every complex simple Lie group has a real form with quaternionic discrete series representations. In particular the classical groups SU(2,n), SO(4,n), and Sp(1,n) have quaternionic discrete series representations.
Quaternionic representations are analogous to holomorphic discrete series representations, which exist when the symmetric space of the group has a complex structure. The groups SU(2,n) have both holomorphic and quaternionic discrete series representations.
See also
Quaternionic symmetric space
References
External links
Representation theory | Quaternionic discrete series representation | [
"Mathematics"
] | 194 | [
"Representation theory",
"Fields of abstract algebra"
] |
2,100,884 | https://en.wikipedia.org/wiki/Linear%20actuator | A linear actuator is an actuator that creates linear motion (i.e., in a straight line), in contrast to the circular motion of a conventional electric motor. Linear actuators are used in machine tools and industrial machinery, in computer peripherals such as disk drives and printers, in valves and dampers, and in many other places where linear motion is required. Hydraulic or pneumatic cylinders inherently produce linear motion. Many other mechanisms are used to generate linear motion from a rotating motor.
Types
Mechanical actuators
Mechanical linear actuators typically operate by conversion of rotary motion into linear motion. Conversion is commonly made via a few simple types of mechanism:
Screw: leadscrew, screw jack, ball screw and roller screw actuators all operate on the principle of the simple machine known as the screw. By rotating the actuator's nut, the screw shaft moves in a line.
Wheel and axle: Hoist, winch, rack and pinion, chain drive, belt drive, rigid chain and rigid belt actuators operate on the principle of the wheel and axle. A rotating wheel moves a cable, rack, chain or belt to produce linear motion.
Cam: Cam actuators function on a principle similar to that of the wedge, but provide relatively limited travel. As a wheel-like cam rotates, its eccentric shape provides thrust at the base of a shaft.
Some mechanical linear actuators only pull, such as hoists, chain drive and belt drives. Others only push (such as a cam actuator). Pneumatic and hydraulic cylinders, or lead screws can be designed to generate force in both directions.
Mechanical actuators typically convert rotary motion of a control knob or handle into linear displacement via screws and/or gears to which the knob or handle is attached. A jackscrew or car jack is a familiar mechanical actuator. Another family of actuators are based on the segmented spindle. Rotation of the jack handle is converted mechanically into the linear motion of the jack head. Mechanical actuators are also frequently used in the field of lasers and optics to manipulate the position of linear stages, rotary stages, mirror mounts, goniometers and other positioning instruments. For accurate and repeatable positioning, index marks may be used on control knobs. Some actuators include an encoder and digital position readout. These are similar to the adjustment knobs used on micrometers except their purpose is position adjustment rather than position measurement.
Fluid actuators
Hydraulic
Hydraulic actuators or hydraulic cylinders typically involve a hollow cylinder having a piston inserted in it. An unbalanced pressure applied to the piston generates a force that can move an external object. Since liquids are nearly incompressible, a hydraulic cylinder can provide controlled precise linear displacement of the piston. The displacement is only along the axis of the piston. A familiar example of a manually operated hydraulic actuator is a hydraulic car jack. Typically though, the term "hydraulic actuator" refers to a device controlled by a hydraulic pump.
Pneumatic
Pneumatic actuators, or pneumatic cylinders, are similar to hydraulic actuators except they use compressed air to generate force instead of a liquid. They work similarly to a piston in which air is pumped inside a chamber and pushed out of the other side of the chamber. Air actuators are not necessarily used for heavy-duty machinery or instances where large amounts of weight are present. One of the reasons pneumatic linear actuators are preferred to other types is that the power source is simply an air compressor. Because air is the input source, pneumatic actuators can be used in many settings of mechanical activity. The downside is that most air compressors are large, bulky, and loud, and they are hard to transport to other areas once installed. Pneumatic linear actuators are also prone to leaks, which makes them less efficient than mechanical linear actuators.
Piezoelectric actuators
The piezoelectric effect is a property of certain materials in which application of a voltage to the material causes it to expand. Very high voltages correspond to only tiny expansions. As a result, piezoelectric actuators can achieve extremely fine positioning resolution, but also have a very short range of motion. In addition, piezoelectric materials exhibit hysteresis which makes it difficult to control their expansion in a repeatable manner.
Electro-mechanical actuators
Electro-mechanical actuators are similar to mechanical actuators except that the control knob or handle is replaced with an electric motor. Rotary motion of the motor is converted to linear displacement. Electromechanical actuators may also be used to power a motor that converts electrical energy into mechanical torque. There are many designs of modern linear actuators and every company that manufactures them tends to have a proprietary method. The following is a generalized description of a very simple electro-mechanical linear actuator.
Simplified design
Typically, an electric motor is mechanically connected to rotate a lead screw. A lead screw has a continuous helical thread machined on its circumference running along the length (similar to the thread on a bolt). Threaded onto the lead screw is a lead nut or ball nut with corresponding helical threads. The nut is prevented from rotating with the lead screw (typically the nut interlocks with a non-rotating part of the actuator body). When the lead screw is rotated, the nut will be driven along the threads. The direction of motion of the nut depends on the direction of rotation of the lead screw. By connecting linkages to the nut, the motion can be converted to usable linear displacement. Most current actuators are built for high speed, high force, or a compromise between the two. When considering an actuator for a particular application, the most important specifications are typically travel, speed, force, accuracy, and lifetime. Most varieties are mounted on dampers or butterfly valves.
There are many types of motors that can be used in a linear actuator system. These include dc brush, dc brushless, stepper, or in some cases, even induction motors. It all depends on the application requirements and the loads the actuator is designed to move. For example, a linear actuator using an integral horsepower AC induction motor driving a lead screw can be used to operate a large valve in a refinery. In this case, accuracy and high movement resolution aren't needed, but high force and speed are. For electromechanical linear actuators used in laboratory instrumentation robotics, optical and laser equipment, or X-Y tables, fine resolution in the micron range and high accuracy may require the use of a fractional horsepower stepper motor linear actuator with a fine pitch lead screw. There are many variations in the electromechanical linear actuator system. It is critical to understand the design requirements and application constraints to know which one would be best.
Standard vs compact construction
A linear actuator using standard motors will commonly have the motor as a separate cylinder attached to the side of the actuator, either parallel with the actuator or perpendicular to the actuator. The motor may be attached to the end of the actuator. The drive motor is of typical construction with a solid drive shaft that is geared to the drive nut or drive screw of the actuator.
Compact linear actuators use specially designed motors that try to fit the motor and actuator into the smallest possible shape.
The inner diameter of the motor shaft can be enlarged, so that the drive shaft can be hollow. The drive screw and nut can therefore occupy the center of the motor, with no need for additional gearing between the motor and the drive screw.
Similarly the motor can be made to have a very small outside diameter, but instead the pole faces are stretched lengthwise so the motor can still have very high torque while fitting in a small diameter space.
Principles
In the majority of linear actuator designs, the basic principle of operation is that of an inclined plane. The threads of a lead screw act as a continuous ramp that allows a small rotational force to be used over a long distance to accomplish the movement of a large load over a short distance.
Power is supplied by a DC or AC motor. The typical motor is 12 V DC, but other voltages are available. Actuators have a switch to reverse the polarity of the motor, which reverses the direction of the actuator's motion.
The speed and force of an actuator depend on its gearbox, and the two trade off against each other: for a given motor power, lower output speeds supply greater force.
One of the basic differences between actuators is their stroke, which is defined by the length of the screw and shaft. Speed depends on the gears that connect the motor to the screw.
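For a rough feel for these speed/force relations, the sketch below applies the standard power-screw formulas; all names and numbers are illustrative assumptions, and the efficiency is a made-up figure:

```python
# Minimal sketch: linear speed from output rpm and screw lead, and thrust
# from the power balance F * v = eff * T * omega, i.e.
# F = 2*pi * eff * T_motor * gear_ratio / lead.
import math

motor_rpm  = 3000.0   # motor shaft speed
gear_ratio = 20.0     # gearbox reduction
lead_m     = 0.004    # screw lead: 4 mm of travel per screw revolution
torque_nm  = 0.5      # motor torque
efficiency = 0.4      # assumed overall screw + gear efficiency

output_rpm = motor_rpm / gear_ratio
speed_mps  = output_rpm * lead_m / 60.0
thrust_n   = 2 * math.pi * torque_nm * gear_ratio * efficiency / lead_m

print(f"linear speed: {speed_mps * 1000:.1f} mm/s, thrust: {thrust_n:.0f} N")
```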
The mechanism to stop the stroke of an actuator is a limit or micro switch. Microswitches are located at the top and bottom of the shaft and are triggered by the up and down movement of the screw.
Variations
Many variations on the basic design have been created. Most focus on providing general improvements such as a higher mechanical efficiency, speed, or load capacity. There is also a large engineering movement towards actuator miniaturization.
Most electro-mechanical designs incorporate a lead screw and lead nut. Some use a ball screw and ball nut. In either case the screw may be connected to a motor or manual control knob either directly or through a series of gears. Gears are typically used to allow a smaller (and weaker) motor spinning at a higher rpm to be geared down to provide the torque necessary to spin the screw under a heavier load than the motor would otherwise be capable of driving directly. Effectively this sacrifices actuator speed in favor of increased actuator thrust. In some applications the use of a worm gear is common, as this allows a smaller built-in dimension while still allowing great travel length.
A traveling-nut linear actuator has a motor that stays attached to one end of the lead screw (perhaps indirectly through a gear box), the motor spins the lead screw, and the lead nut is restrained from spinning so it travels up and down the lead screw.
A traveling-screw linear actuator has a lead screw that passes entirely through the motor.
In a traveling-screw linear actuator, the motor "crawls" up and down a lead screw that is restrained from spinning. The only spinning parts are inside the motor, and may not be visible from the outside.
Some lead screws have multiple "starts". This means they have multiple threads alternating on the same shaft. One way of visualizing this is in comparison to the multiple color stripes on a candy cane. This allows for more adjustment between thread pitch and nut/screw thread contact area, which determines the extension speed and load carrying capacity (of the threads), respectively.
Static load capacity
Linear screw actuators can have a static loading capacity, meaning that when the motor stops the actuator essentially locks in place and can support a load that is either pulling or pushing on the actuator. This static load capacity increases mobility and speed.
The braking force of the actuator varies with the angular pitch of the screw threads and the specific design of the threads. Acme threads have a very high static load capacity, while ball screws have an extremely low load capacity and can be nearly free-floating.
Generally it is not possible to vary the static load capacity of screw actuators without additional technology. The screw thread pitch and drive nut design defines a specific load capacity that cannot be dynamically adjusted.
In some cases, high viscosity grease can be added to linear screw actuators to increase the static load. Some manufacturers use this to alter the load for specific needs.
Static load capacity can be added to a linear screw actuator using an electromagnetic brake system, which applies friction to the spinning drive nut. For example, a spring may be used to apply brake pads to the drive nut, holding it in position when power is turned off. When the actuator needs to be moved, an electromagnet counteracts the spring and releases the braking force on the drive nut.
Similarly an electromagnetic ratchet mechanism can be used with a linear screw actuator so that the drive system lifting a load will lock in position when power to the actuator is turned off. To lower the actuator, an electromagnet is used to counteract the spring force and unlock the ratchet.
Dynamic load capacity
Dynamic load capacity is typically referred to as the amount of force the linear actuator is capable of providing during operation. This force will vary with screw type (amount of friction restricting movement) and the motor driving the movement. Dynamic load is the figure which most actuators are classified by, and is a good indication of what applications it would suit best.
Speed control
In most cases when using an electro-mechanical actuator, it is preferred to have some type of speed control. Such controllers vary the voltage supplied to the motor, which in turn changes the speed at which the lead screw turns. Adjusting the gear ratio is another way to adjust speed. Some actuators are available with several different gearing options.
Duty cycle
The duty cycle of a motor refers to the amount of time the actuator can be run before it needs to cool down. Staying within this guideline when operating an actuator is key to its longevity and performance. If the duty cycle rating is exceeded, then overheating, loss of power, and eventual burning of the motor is risked.
Linear motors
A linear motor is functionally the same as a rotary electric motor with the rotor and stator circular magnetic field components laid out in a straight line. Where a rotary motor would spin around and re-use the same magnetic pole faces again, the magnetic field structures of a linear motor are physically repeated across the length of the actuator.
Since the motor moves in a linear fashion, no lead screw is needed to convert rotary motion to linear. While high capacity is possible, the material and/or motor limitations on most designs are surpassed relatively quickly due to a reliance solely on magnetic attraction and repulsion forces. Most linear motors have a low load capacity compared to other types of linear actuators.
Linear motors have an advantage in outdoor or dirty environments in that the two halves do not need to contact each other, and so the electromagnetic drive coils can be waterproofed and sealed against moisture and corrosion, allowing for a very long service life. Linear motors are being used extensively in high performance positioning systems for applications which require various combinations of high velocity, high precision and high force.
Telescoping linear actuator
Telescoping linear actuators are specialized linear actuators used where space restrictions exist. Their range of motion is many times greater than the unextended length of the actuating member.
A common form is made of concentric tubes of approximately equal length that extend and retract like sleeves, one inside the other, such as the telescopic cylinder.
Other more specialized telescoping actuators use actuating members that act as rigid linear shafts when extended, but break that line by folding, separating into pieces and/or uncoiling when retracted. Examples of telescoping linear actuators include:
Helical band actuator
Rigid belt actuator
Rigid chain actuator
Segmented spindle
Advantages and disadvantages
See also
References
External links
Leo Dorst's Lego linear actuator
Actuators
Positioning instruments
Articles containing video clips
Linear motion | Linear actuator | [
"Physics"
] | 3,222 | [
"Physical phenomena",
"Motion (physics)",
"Linear motion"
] |
2,101,244 | https://en.wikipedia.org/wiki/Nylon%206 | Nylon 6 or polycaprolactam is a polymer, in particular semicrystalline polyamide. Unlike most other nylons, nylon 6 is not a condensation polymer, but instead is formed by ring-opening polymerization; this makes it a special case in the comparison between condensation and addition polymers. Its competition with nylon 6,6 and the example it set have also shaped the economics of the synthetic fibre industry. It is sold under numerous trade names including Perlon (Germany), Dederon (former East Germany), Nylatron, Capron, Ultramid, Akulon, Kapron (former Soviet Union and satellite states), Rugopa (Turkey) and Durethan.
History
Polycaprolactam was developed by Paul Schlack at IG Farben in the late 1930s (first synthesized in 1938) to reproduce the properties of Nylon 66 without violating the patent on its production. (Around the same time, Kohei Hoshino at Toray also succeeded in synthesizing nylon 6.) It was marketed as Perlon, and industrial production with a capacity of 3,500 tons per year was established in Nazi Germany in 1943, using phenol as a feedstock. At first, the polymer was used to produce coarse fiber for artificial bristles; then the fiber quality was improved, and the Germans started making parachutes, cord for aircraft tires, and towing cables for gliders.
The Soviet Union began its development of an analog in the 1940s: while negotiations with Germany on building an IG Farben plant in Ukraine were underway, basic scientific work was ongoing in 1942. Production only started in 1948 in Klin, Moscow Oblast, after the USSR obtained the 2,000 volumes of IG Farben documentation and 10,000 volumes of AEG technical documentation as a result of victory in World War II.
Synthesis
Nylon 6 can be modified using comonomers or stabilizers during polymerization to introduce new chain end or functional groups, which changes the reactivity and chemical properties. It is often done to change its dyeability or flame retardance. Nylon 6 is synthesized by ring-opening polymerization of caprolactam. Caprolactam has 6 carbons, hence Nylon 6. When caprolactam is heated at about 533 K in an inert atmosphere of nitrogen for about 4–5 hours, the ring breaks and undergoes polymerization. Then the molten mass is passed through spinnerets to form fibres of nylon 6.
During polymerization, the amide bond within each caprolactam molecule is broken, with the active groups on each side re-forming two new bonds as the monomer becomes part of the polymer backbone. Unlike nylon 6,6, in which the direction of the amide bond reverses at each bond, all nylon 6 amide bonds lie in the same direction (see figure: note the N to C orientation of each amide bond).
Properties
Nylon 6 fibres are tough, possessing high tensile strength, elasticity and lustre. They are wrinkleproof and highly resistant to abrasion and chemicals such as acids and alkalis. The fibres can absorb up to 2.4% of water, although this lowers tensile strength. The glass transition temperature of Nylon 6 is 47 °C.
As a synthetic fibre, Nylon 6 is generally white but can be dyed in a solution bath prior to production for different color results. Its tenacity is 6–8.5 gf/denier and its density is 1.14 g/cm³. Its melting point is 215 °C, and it withstands temperatures up to about 150 °C on average.
Biodegradation
Flavobacterium sp. and Pseudomonas sp. (NK87) degrade oligomers of Nylon 6, but not polymers. Certain white rot fungal strains can also degrade Nylon 6 through oxidation. Compared to aliphatic polyesters, Nylon 6 has been said to have poor biodegradability. Strong interchain interactions arising from hydrogen bonds between nylon chains are said by some sources to be the cause.
However, in 2023 a team of Northwestern University chemists led by Linda Broadbelt and Tobin J. Marks developed rare-earth metallocene catalysts that rapidly break Nylon 6 back down to caprolactam at 220 °C, conditions considered mild.
Production in Europe
At present, polyamide 6 is a significant construction material used in many industries, for instance in the automotive industry, aircraft industry, electronic and electrotechnical industry, clothing industry and medicine. Annual demand for polyamides in Europe amounts to a million tonnes. They are produced by all leading chemical companies.
The largest producers of polyamide 6 in Europe:
Fibrant, 260,000 tonnes per year
BASF, 240,000 tonnes per year
Lanxess, 170,000 tonnes per year
Radici, 125,000 tonnes per year
DOMO, 100,000 tonnes per year
Grupa Azoty, 100,000 tonnes per year
References
External links
The Promise of Nylon 6: A Case Study in Intelligent Product Design by William McDonough & Michael Braungart
Polyamides
Plastics
Synthetic fibers
German inventions | Nylon 6 | [
"Physics",
"Chemistry"
] | 1,063 | [
"Synthetic fibers",
"Synthetic materials",
"Unsolved problems in physics",
"Amorphous solids",
"Plastics"
] |
2,102,274 | https://en.wikipedia.org/wiki/Sodium%20iodide | Sodium iodide (chemical formula NaI) is an ionic compound formed from the chemical reaction of sodium metal and iodine. Under standard conditions, it is a white, water-soluble solid comprising a 1:1 mix of sodium cations (Na+) and iodide anions (I−) in a crystal lattice. It is used mainly as a nutritional supplement and in organic chemistry. It is produced industrially as the salt formed when acidic iodides react with sodium hydroxide. It is a chaotropic salt.
Uses
Food supplement
Sodium iodide, as well as potassium iodide, is commonly used to treat and prevent iodine deficiency. Iodized table salt contains 10 ppm iodide.
Organic synthesis
Sodium iodide is used for conversion of alkyl chlorides into alkyl iodides. This method, the Finkelstein reaction, relies on the insolubility of sodium chloride in acetone to drive the reaction:
R–Cl + NaI → R–I + NaCl
Nuclear medicine
Some radioactive iodide salts of sodium, including Na125I and Na131I, have radiopharmaceutical uses for thyroid cancer and hyperthyroidism or as radioactive tracer in imaging (see Isotopes of iodine > Radioiodines I-123, I-124, I-125, and I-131 in medicine and biology).
Thallium-doped NaI(Tl) scintillators
Sodium iodide activated with thallium, NaI(Tl), when subjected to ionizing radiation, emits photons (i.e., scintillates) and is used in scintillation detectors, traditionally in nuclear medicine, geophysics, nuclear physics, and environmental measurements. NaI(Tl) is the most widely used scintillation material. The crystals are usually coupled with a photomultiplier tube, in a hermetically sealed assembly, as sodium iodide is hygroscopic. Fine-tuning of some parameters (i.e., radiation hardness, afterglow, transparency) can be achieved by varying the conditions of the crystal growth. Crystals with a higher level of doping are used in X-ray detectors with high spectrometric quality. Sodium iodide can be used both as single crystals and as polycrystals for this purpose. The wavelength of maximum emission is 415 nm.
Radiocontrast
António Egas Moniz searched for a radiocontrast agent for cerebral angiography. After experiments on rabbits and dogs he settled upon sodium iodide as the best medium.
Solubility data
Sodium iodide exhibits high solubility in some organic solvents, unlike sodium chloride or even sodium bromide.
Stability
Iodides (including sodium iodide) are detectably oxidized by atmospheric oxygen (O2) to molecular iodine (I2). I2 and I− complex to form the triiodide complex, which has a yellow color, unlike the white color of sodium iodide. Water accelerates the oxidation process, and iodide can also produce I2 by photooxidation, therefore for maximum stability sodium iodide should be stored under dark, low temperature, low humidity conditions.
See also
Gamma spectroscopy
Scintillation counter
Teratology
References
Cited sources
External links
Alkali metal iodides
Inorganic compounds
Iodides
Metal halides
Phosphors and scintillators
Sodium compounds
Chaotropic agents
Rock salt crystal structure | Sodium iodide | [
"Chemistry",
"Technology",
"Engineering"
] | 731 | [
"Luminescence",
"Inorganic compounds",
"Chaotropic agents",
"Radioactive contamination",
"Measuring instruments",
"Salts",
"Ionising radiation detectors",
"Phosphors and scintillators",
"Biomolecules",
"Metal halides"
] |
2,103,914 | https://en.wikipedia.org/wiki/Electron%20transfer | Electron transfer (ET) occurs when an electron relocates from an atom, ion, or molecule, to another such chemical entity. ET describes the mechanism by which electrons are transferred in redox reactions.
Electrochemical processes are ET reactions. ET reactions are relevant to photosynthesis and respiration and commonly involve transition metal complexes. In organic chemistry ET is a step in some industrial polymerization reactions. It is foundational to photoredox catalysis.
Classes of electron transfer
Inner-sphere electron transfer
In inner-sphere ET, two redox centers are covalently linked during the ET. This bridge can be permanent, in which case the electron transfer event is termed intramolecular electron transfer. More commonly, however, the covalent linkage is transitory, forming just prior to the ET and then disconnecting following the ET event. In such cases, the electron transfer is termed intermolecular electron transfer. A famous example of an inner sphere ET process that proceeds via a transitory bridged intermediate is the reduction of [CoCl(NH3)5]2+ by [Cr(H2O)6]2+. In this case, the chloride ligand is the bridging ligand that covalently connects the redox partners.
Outer-sphere electron transfer
In outer-sphere ET reactions, the participating redox centers are not linked via any bridge during the ET event. Instead, the electron "hops" through space from the reducing center to the acceptor. Outer sphere electron transfer can occur between different chemical species or between identical chemical species that differ only in their oxidation state. The latter process is termed self-exchange. As an example, self-exchange describes the degenerate reaction between permanganate and its one-electron reduced relative manganate:
[MnO4]− + [Mn*O4]2− → [MnO4]2− + [Mn*O4]−
In general, if electron transfer is faster than ligand substitution, the reaction will follow the outer-sphere electron transfer route.
Outer-sphere ET reactions often occur when one/both reactants are inert or if there is no suitable bridging ligand.
A key concept of Marcus theory is that the rates of such self-exchange reactions are mathematically related to the rates of "cross reactions". Cross reactions entail partners that differ by more than their oxidation states. One example (of many thousands) is the reduction of permanganate by iodide to form iodine and manganate.
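The quantitative statement is the Marcus cross relation, $k_{12} \approx (k_{11} k_{22} K_{12} f_{12})^{1/2}$ with $\ln f_{12} = (\ln K_{12})^2 / [4 \ln(k_{11} k_{22}/Z^2)]$. A minimal Python sketch follows; the rate constants, equilibrium constant, and collision frequency $Z$ are illustrative assumptions, not measured values.

```python
import math

def marcus_cross_relation(k11, k22, K12, Z=1e11):
    """Estimate the cross-reaction rate k12 (M^-1 s^-1) from the two
    self-exchange rates k11, k22 and the equilibrium constant K12,
    using the Marcus cross relation with collision frequency Z."""
    ln_f12 = math.log(K12) ** 2 / (4 * math.log(k11 * k22 / Z**2))
    return math.sqrt(k11 * k22 * K12 * math.exp(ln_f12))

# Illustrative inputs only:
k12 = marcus_cross_relation(k11=1e3, k22=1e5, K12=1e6)
print(f"Predicted cross-reaction rate: {k12:.2e} M^-1 s^-1")
```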
Five steps of an outer sphere reaction
Reactants diffuse together out of their solvent shells, forming an "encounter complex" => precursor complex (requires work wr)
Changing bond lengths, reorganize solvent => activated complex
Electron transfer
Relaxation of bond lengths, solvent molecules => successor complex
Diffusion of products (requires work wp)
Heterogeneous electron transfer
In heterogeneous electron transfer, an electron moves between a chemical species present in solution and the surface of a solid such as a semi-conducting material or an electrode. Theories addressing heterogeneous electron transfer have applications in electrochemistry and the design of solar cells.
Vectorial electron transfer
Especially in proteins, electron transfer often involves hopping of an electron from one redox-active center to another. The hopping pathway, which can be viewed as a vector, guides and facilitates ET within an insulating matrix. Typical redox centers are iron-sulfur clusters, e.g. the 4Fe-4S ferredoxins. These sites are often separated by 7–10 Å, a distance compatible with fast outer-sphere ET. It has been found that the matrix of the ET protein plastocyanin (devoid of the redox copper ion) is sufficient to support charge transport with its redox partner photosystem I.
Theory
The first generally accepted theory of ET was developed by Rudolph A. Marcus (Nobel Prize in Chemistry in 1992) to address outer-sphere electron transfer and was based on a transition-state theory approach. The Marcus theory of electron transfer was then extended to include inner-sphere electron transfer by Noel Hush and Marcus. The resultant theory, Marcus-Hush theory, has guided most discussions of electron transfer ever since. Both theories are, however, semiclassical in nature, although they have been extended to fully quantum mechanical treatments by Joshua Jortner, Alexander M. Kuznetsov, and others proceeding from Fermi's golden rule and following earlier work in non-radiative transitions. Furthermore, theories have been put forward to take into account the effects of vibronic coupling on electron transfer, in particular, the PKS theory of electron transfer. In proteins, ET rates are governed by the bond structures: the electrons, in effect, tunnel through the bonds comprising the chain structure of the proteins.
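In its semiclassical, nonadiabatic form, the Marcus rate expression reads $k_{ET} = \frac{2\pi}{\hbar}|H_{AB}|^2 \,(4\pi\lambda k_B T)^{-1/2} \exp[-(\Delta G^\circ + \lambda)^2 / 4\lambda k_B T]$. The sketch below evaluates it for order-of-magnitude parameters typical of protein ET; the coupling, reorganization energy, and driving force are assumed values chosen for illustration.

```python
import math

HBAR = 1.0546e-34   # J*s
KB = 1.3807e-23     # J/K
EV = 1.602e-19      # joules per eV

def marcus_rate(H_ab_eV, lam_eV, dG_eV, T=298.0):
    """Nonadiabatic Marcus ET rate (s^-1): electronic coupling H_ab,
    reorganization energy lambda, driving force dG, all in eV."""
    H, lam, dG = H_ab_eV * EV, lam_eV * EV, dG_eV * EV
    pref = (2 * math.pi / HBAR) * H**2 / math.sqrt(4 * math.pi * lam * KB * T)
    return pref * math.exp(-(dG + lam) ** 2 / (4 * lam * KB * T))

# Assumed, protein-like parameters (illustrative only):
print(f"k_ET ~ {marcus_rate(H_ab_eV=1e-3, lam_eV=0.8, dG_eV=-0.3):.1e} s^-1")
```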
See also
Electron equivalent
Electrochemical reaction mechanism
Solvated electron
References
Physical chemistry
Reaction mechanisms | Electron transfer | [
"Physics",
"Chemistry"
] | 1,004 | [
"Reaction mechanisms",
"Applied and interdisciplinary physics",
"nan",
"Physical organic chemistry",
"Chemical kinetics",
"Physical chemistry"
] |
2,104,510 | https://en.wikipedia.org/wiki/Nanolithography | Nanolithography (NL) is a growing field of techniques within nanotechnology dealing with the engineering (patterning e.g. etching, depositing, writing, printing etc) of nanometer-scale structures on various materials.
The term covers the design of structures built in the range of 10⁻⁹ to 10⁻⁶ meters, i.e., on the nanometer scale. Essentially, the field is a derivative of lithography, only covering very small structures. All NL methods can be categorized into four groups: photolithography, scanning lithography, soft lithography, and other miscellaneous techniques.
History
NL has evolved from the need to increase the number of sub-micrometer features (e.g., transistors, capacitors) in an integrated circuit in order to keep up with Moore's law. While lithographic techniques have been around since the late 18th century, none were applied to nanoscale structures until the mid-1950s. With the evolution of the semiconductor industry, demand for techniques capable of producing micro- and nano-scale structures skyrocketed. Photolithography was applied to these structures for the first time in 1958, beginning the age of nanolithography.
Since then, photolithography has become the most commercially successful technique, capable of producing sub-100 nm patterns. There are several techniques associated with the field, each designed to serve its many uses in the medical and semiconductor industries. Breakthroughs in this field contribute significantly to the advancement of nanotechnology, and are increasingly important today as demand for smaller and smaller computer chips increases. Further areas of research deal with physical limitations of the field, energy harvesting, and photonics.
Etymology
From Greek, the word nanolithography can be broken up into three parts: "nano" meaning dwarf, "lith" meaning stone, and "graphy" meaning to write, or "tiny writing onto stone."
Photolithography
As of 2021 photolithography is the most heavily used technique in mass production of microelectronics and semiconductor devices. It is characterized by both high production throughput and small-sized features of the patterns.
Optical lithography
Optical lithography (or photolithography) is one of the most important and prevalent sets of techniques in the nanolithography field. Optical lithography comprises several important derivative techniques, all of which use very short wavelengths of light to change the solubility of certain molecules, causing them to wash away in solution and leave behind a desired structure. Several optical lithography techniques require the use of liquid immersion and a host of resolution enhancement technologies such as phase-shift masks (PSM) and optical proximity correction (OPC). Techniques in this set include multiphoton lithography, X-ray lithography, light coupling nanolithography (LCM), and extreme ultraviolet lithography (EUVL). The last technique is considered the most important next-generation lithography (NGL) technique, due to its ability to produce structures accurately below 30 nanometers at high throughput rates, which makes it a viable option for commercial purposes.
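The reach of these techniques is commonly estimated from the Rayleigh criterion, $CD = k_1 \lambda / NA$, where $k_1$ is a process-dependent factor. The sketch below uses representative, assumed tool parameters; they are not taken from this article.

```python
def rayleigh_cd(wavelength_nm, numerical_aperture, k1=0.30):
    """Minimum printable half-pitch (nm) from the Rayleigh criterion.
    k1 is process-dependent; ~0.25 is the theoretical lower limit."""
    return k1 * wavelength_nm / numerical_aperture

# Assumed, representative scanner parameters:
print(f"ArF immersion (193 nm, NA 1.35): {rayleigh_cd(193, 1.35):.0f} nm")
print(f"EUV (13.5 nm, NA 0.33): {rayleigh_cd(13.5, 0.33):.0f} nm")
```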
Quantum optical lithography
Quantum optical lithography (QOL) is a diffraction-unlimited method able to write at 1 nm resolution by optical means, using a red laser diode (λ = 650 nm). Complex patterns like geometrical figures and letters were obtained at 3 nm resolution on resist substrate. The method was applied to nanopattern graphene at 20 nm resolution.
Scanning lithography
Electron-beam lithography
Electron beam lithography (EBL) or electron-beam direct-write lithography (EBDW) scans a focused beam of electrons on a surface covered with an electron-sensitive film or resist (e.g. PMMA or HSQ) to draw custom shapes. By changing the solubility of the resist and subsequent selective removal of material by immersion in a solvent, sub-10 nm resolutions have been achieved. This form of direct-write, maskless lithography has high resolution and low throughput, limiting single-column e-beams to photomask fabrication, low-volume production of semiconductor devices, and research and development. Multiple-electron beam approaches have as a goal an increase of throughput for semiconductor mass-production.
EBL can be utilized for selective protein nanopatterning on a solid substrate, aimed for ultrasensitive sensing. Resists for EBL can be hardened using sequential infiltration synthesis (SIS).
Scanning probe lithography
Scanning probe lithography (SPL) is another set of techniques for patterning at the nanometer-scale down to individual atoms using scanning probes, either by etching away unwanted material, or by directly-writing new material onto a substrate. Some of the important techniques in this category include dip-pen nanolithography, thermochemical nanolithography, thermal scanning probe lithography, and local oxidation nanolithography. Dip-pen nanolithography is the most widely used of these techniques.
Proton beam writing
This technique uses a focused beam of high energy (MeV) protons to pattern resist material at nanodimensions and has been shown to be capable of producing high-resolution patterning well below the 100 nm mark.
Charged-particle lithography
This set of techniques includes ion- and electron-projection lithographies. Ion beam lithography uses a focused or broad beam of energetic lightweight ions (like He+) for transferring a pattern to a surface. Using ion beam proximity lithography (IBL), nanoscale features can be transferred onto non-planar surfaces.
Soft lithography
Soft lithography uses elastomeric materials made from chemical compounds such as polydimethylsiloxane. Elastomers are used to make a stamp, mold, or mask (akin to a photomask), which in turn is used to generate micropatterns and microstructures. The techniques described below are limited to a single patterning stage: subsequent patterning of the same surface is difficult due to misalignment problems. Soft lithography is not suitable for the production of semiconductor-based devices, as it is not compatible with metal deposition and etching. The methods are commonly used for chemical patterning.
PDMS lithography
Microcontact printing
Multilayer soft lithography
Miscellaneous techniques
Nanoimprint lithography
Nanoimprint lithography (NIL), and its variants, such as step-and-flash imprint lithography and laser-assisted directed imprint (LADI), are promising nanopattern replication technologies in which patterns are created by mechanical deformation of imprint resists, typically monomer or polymer formulations that are cured by heat or UV light during imprinting. This technique can be combined with contact printing and cold welding. Nanoimprint lithography is capable of producing patterns at sub-10 nm levels.
Magnetolithography
Magnetolithography (ML) is based on applying a magnetic field to the substrate using paramagnetic metal masks called "magnetic masks". The magnetic mask, which is analogous to a photomask, defines the spatial distribution and shape of the applied magnetic field. The second component is ferromagnetic nanoparticles (analogous to the photoresist) that are assembled onto the substrate according to the field induced by the magnetic mask.
Nanofountain drawing
A nanofountain probe is a microfluidic device, similar in concept to a fountain pen, which deposits a narrow track of chemical from a reservoir onto the substrate according to a programmed movement pattern.
Nanosphere lithography
Nanosphere lithography uses self-assembled monolayers of spheres (typically made of polystyrene) as evaporation masks. This method has been used to fabricate arrays of gold nanodots with precisely controlled spacings.
Neutral particle lithography
Neutral particle lithography (NPL) uses a broad beam of energetic neutral particles for pattern transfer onto a surface.
Plasmonic lithography
Plasmonic lithography uses surface plasmon excitations to generate beyond-diffraction limit patterns, benefiting from subwavelength field confinement properties of surface plasmon polaritons.
Stencil lithography
Stencil lithography is a resist-less and parallel method of fabricating nanometer scale patterns using nanometer-size apertures as shadow-masks.
References
Lithography (microfabrication) | Nanolithography | [
"Materials_science"
] | 1,763 | [
"Nanotechnology",
"Microtechnology",
"Lithography (microfabrication)"
] |
2,105,059 | https://en.wikipedia.org/wiki/Two-photon%20excitation%20microscopy | Two-photon excitation microscopy (TPEF or 2PEF) is a fluorescence imaging technique that is particularly well-suited to image scattering living tissue of up to about one millimeter in thickness. Unlike traditional fluorescence microscopy, where the excitation wavelength is shorter than the emission wavelength, two-photon excitation requires simultaneous excitation by two photons with longer wavelength than the emitted light. The laser is focused onto a specific location in the tissue and scanned across the sample to sequentially produce the image. Due to the non-linearity of two-photon excitation, mainly fluorophores in the micrometer-sized focus of the laser beam are excited, which results in the spatial resolution of the image. This contrasts with confocal microscopy, where the spatial resolution is produced by the interaction of excitation focus and the confined detection with a pinhole.
Two-photon excitation microscopy typically uses near-infrared (NIR) excitation light which can also excite fluorescent dyes. Using infrared light minimizes scattering in the tissue because infrared light is scattered less in typical biological tissues. Due to the multiphoton absorption, the background signal is strongly suppressed. Both effects lead to an increased penetration depth for this technique. Two-photon excitation can be a superior alternative to confocal microscopy due to its deeper tissue penetration, efficient light detection, and reduced photobleaching.
Concept
Two-photon excitation employs two-photon absorption, a concept first described by Maria Goeppert Mayer (1906–1972) in her doctoral dissertation in 1931, and first observed in 1961 in a CaF2:Eu2+ crystal using laser excitation by Wolfgang Kaiser. Isaac Abella showed in 1962 in caesium vapor that two-photon excitation of single atoms is possible.
Two-photon excitation fluorescence microscopy has similarities to other confocal laser microscopy techniques such as laser scanning confocal microscopy and Raman microscopy. These techniques use focused laser beams scanned in a raster pattern to generate images, and both have an optical sectioning effect. Unlike confocal microscopes, multiphoton microscopes do not contain pinhole apertures that give confocal microscopes their optical sectioning quality. The optical sectioning produced by multiphoton microscopes is a result of the point spread function of the excitation. The concept of two-photon excitation is based on the idea that two photons, of comparably lower photon energy than needed for one-photon excitation, can also excite a fluorophore in one quantum event. Each photon carries approximately half the energy necessary to excite the molecule. The emitted photon is at a higher energy (shorter wavelength) than either of the two exciting photons. The probability of the near-simultaneous absorption of two photons is extremely low. Therefore, a high peak flux of excitation photons is typically required, usually generated by femtosecond pulsed laser. For example, the same average laser power but without pulsing results in no detectable fluorescence compared to fluorescence generated by the pulsed laser via the two-photon effect. The longer wavelength, lower energy (typically infrared) excitation lasers of multiphoton microscopes are well-suited to use in imaging live cells as they cause less damage than the short-wavelength lasers typically used for single-photon excitation, so living tissues may be observed for longer periods with fewer toxic effects.
The most commonly used fluorophores have excitation spectra in the 400–500 nm range, whereas the laser used to excite the two-photon fluorescence lies in the ~700–1100 nm (infrared) range produced by Ti-sapphire lasers. If the fluorophore absorbs two infrared photons simultaneously, it will absorb enough energy to be raised into the excited state. The fluorophore will then emit a single photon with a wavelength that depends on the type of fluorophore used (typically in the visible spectrum). Because two photons are absorbed during the excitation of the fluorophore, the probability of fluorescent emission from the fluorophores increases quadratically with the excitation intensity. Therefore, much more two-photon fluorescence is generated where the laser beam is tightly focused than where it is more diffuse. Effectively, excitation is restricted to the tiny focal volume (~1 femtoliter), resulting in a high degree of rejection of out-of-focus objects. This localization of excitation is the key advantage compared to single-photon excitation microscopes, which need to employ elements such as pinholes to reject out-of-focus fluorescence. The fluorescence from the sample is then collected by a high-sensitivity detector, such as a photomultiplier tube. This observed light intensity becomes one pixel in the eventual image; the focal point is scanned throughout a desired region of the sample to form all the pixels of the image.
Development
Two-photon microscopy was pioneered and patented by Winfried Denk and James Strickler in the lab of Watt W. Webb at Cornell University in 1990. They combined the idea of two-photon absorption with the use of a laser scanner. In two-photon excitation microscopy an infrared laser beam is focused through an objective lens. The Ti-sapphire laser normally used has a pulse width of approximately 100 femtoseconds (fs) and a repetition rate of about 80 MHz, allowing the high photon density and flux required for two-photon absorption, and is tunable across a wide range of wavelengths.
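Because the two-photon signal scales with the square of the instantaneous intensity, pulsing boosts the time-averaged signal at fixed average power by roughly the inverse duty cycle, $g \approx 1/(f\tau)$ (the exact prefactor depends on the pulse shape, which this estimate ignores). Plugging in the pulse parameters quoted above:

```python
def two_photon_gain(rep_rate_hz, pulse_width_s):
    """Approximate gain in time-averaged two-photon signal of a pulsed
    laser over a CW laser of equal average power: g ~ 1/(f * tau)."""
    return 1.0 / (rep_rate_hz * pulse_width_s)

# Ti-sapphire values from the text: ~100 fs pulses at ~80 MHz.
print(f"Pulsed vs CW two-photon gain: ~{two_photon_gain(80e6, 100e-15):,.0f}x")
```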
The use of infrared light to excite fluorophores in light-scattering tissue has added benefits. Longer wavelengths are scattered to a lesser degree than shorter ones, which is a benefit to high-resolution imaging. In addition, these lower-energy photons are less likely to cause damage outside the focal volume. Compared to a confocal microscope, photon detection is much more effective since even scattered photons contribute to the usable signal. These benefits for imaging in scattering tissues were only recognized several years after the invention of two-photon excitation microscopy.
There are several caveats to using two-photon microscopy: The pulsed lasers needed for two-photon excitation are much more expensive than the continuous wave (CW) lasers used in confocal microscopy. The two-photon absorption spectrum of a molecule may vary significantly from its one-photon counterpart. Higher-order photodamage becomes a problem and bleaching scales with the square of the laser power, whereas it is linear for single-photon (confocal). For very thin objects such as isolated cells, single-photon (confocal) microscopes can produce images with higher optical resolution due to their shorter excitation wavelengths. In scattering tissue, on the other hand, the superior optical sectioning and light detection capabilities of the two-photon microscope result in better performance.
Applications
Main
Two-photon microscopy has been used in numerous fields, including physiology, neurobiology, embryology and tissue engineering. Even thin, nearly transparent tissues (such as skin cells) have been visualized in clear detail thanks to this technique. Two-photon microscopy's high-speed imaging capabilities may also be utilized in noninvasive optical biopsy. Two-photon microscopy has also been used for producing localized chemical reactions, an effect that has also been exploited in two-photon-based lithography. Using two-photon fluorescence and second-harmonic generation–based microscopy, it was shown that organic porphyrin-type molecules can have different transition dipole moments for two-photon fluorescence and second-harmonic generation, which are otherwise thought to occur from the same transition dipole moment. Non-degenerate two-photon excitation, i.e., using two photons of unequal wavelengths, was shown to increase the fluorescence of all tested small molecules and fluorescent proteins.
Cancer research
2PEF has also proven very valuable for characterizing skin cancer, in addition to monitoring breast cancer in vitro.
It has also been shown to reveal tumor cell arrest, tumor cell–platelet interaction, tumor cell–leukocyte interaction and metastatic colonization processes.
Embryonic research
2PEF has been shown to be advantageous over other techniques, such as confocal microscopy, when it comes to long-term live-cell imaging of mammalian embryos.
Kidney research
2PEF has also been used in the visualization of difficult-to-access cell types, especially kidney cells. It has been used to better understand fluid dynamics and filtration.
Viral infection level determination
2PEF has also proven to be a valuable tool for monitoring correlates of viral (SARS-CoV-2) infection in cell culture using a 2P-active Ca2+-sensitive dye.
Neuroscience
2PEF as well as the extension of this method to 3PEF are used to characterize intact neural tissues in the brain of living and even behaving animals. In particular, the method is advantageous for calcium imaging of a neuron or populations of neurons, for photopharmacology including localized uncaging of components such as glutamate or isomerization of photoswitchable drugs, and for the imaging of other genetically encoded sensors that report the concentration of neurotransmitters.
Currently, two-photon microscopy is widely used to image the live firing of neurons in model organisms including fruit flies (Drosophila melanogaster), rats, songbirds, primates, ferrets, mice (Mus musculus), and zebrafish. The animals are typically head-fixed due to the size of the microscope and scanning devices, but miniaturized microscopes are also being developed that enable imaging of neurons in moving, freely behaving animals.
Higher-order excitation
Simultaneous absorption of three or more photons is also possible, allowing for higher-order multiphoton excitation microscopy. So-called "three-photon excitation fluorescence microscopy" (3PEF) is the most used technique after 2PEF, to which it is complementary. Localized isomerization of photoswitchable drugs in vivo using three-photon excitation has also been reported.
Dyes and fluorescent proteins for two-photon excitation microscopy
In general, all commonly used fluorescent proteins (CFP, GFP, YFP, RFP) and dyes can be excited in two-photon mode. Two-photon excitation spectra are often considerably broader, making it more difficult to excite fluorophores selectively by switching excitation wavelengths.
Several green, red and NIR emitting dyes (probes and reactive labels) with extremely high two-photon absorption cross-sections have been reported. Due to their donor-acceptor-donor type structure, squaraine dyes such as Seta-670, Seta-700 and Seta-660 exhibit very high two-photon absorption (2PA) efficiencies in comparison to other dyes. SeTau-647 and SeTau-665, a new type of squaraine-rotaxane, exhibit extremely high two-photon action cross-sections of up to 10,000 GM in the near-IR region, unsurpassed by any other class of organic dyes.
See also
3D optical data storage
Nonlinear optics
Second-harmonic imaging microscopy
Three-photon microscopy
Two-photon absorption
Two-photon photoelectron spectroscopy
Wide-field multiphoton microscopy
Sources
References
External links
Simplifying two-photon microscopy
Webinar: Setting Up a Simple and Cost-Efficient 2Photon Microscope
Two-photon suitable dyes
introduction to multiphoton microscopy
Acquisition of Multiple Real-Time Images for Laser Scanning Microscopy (Sanderson microscopy article)
Build Your Own Video-Rate 2-photon Microscope
Two-photon Fluorescence Light Microscopy, ENCYCLOPEDIA OF LIFE SCIENCES
Multiple-photon excitation fluorescence microscopy. University of Wisconsin.
Fundamentals and Applications in Multiphoton Excitation Microscopy. Nikon MicroscopyU .
Microscopy
Cell imaging
Fluorescence techniques
Laboratory equipment
Optical microscopy | Two-photon excitation microscopy | [
"Chemistry",
"Biology"
] | 2,494 | [
"Optical microscopy",
"Cell imaging",
"Fluorescence techniques",
"Microscopy"
] |
2,105,763 | https://en.wikipedia.org/wiki/Boxed%20warning | In the United States, a boxed warning (sometimes "black box warning", colloquially) is a type of warning that appears near the beginning of the package insert for certain prescription drugs, so called because the U.S. Food and Drug Administration specifies that it is formatted with a 'box' or border around the text to emphasize it is of utmost importance. The FDA can require a pharmaceutical company to place a boxed warning. It is the strongest warning that the FDA requires, and signifies that medical studies indicate that the drug carries a significant risk of preventable, serious or even life-threatening adverse effects.
Economists and physicians have thoroughly studied the effects of FDA boxed warnings on prescription patterns. It is not necessarily true that a physician and patient will have a conversation about a drug's boxed warning after it is issued. For instance, an FDA-mandated boxed warning decreased rosiglitazone use by 70%, but that still meant 3.8 million people were given the drug. Later research indicated that after receiving an FDA advisory, there was a decrease in rosiglitazone use, due to a combined effect of media exposure, advisory, and scientific publications, whereas pioglitazone (with a similar advisory but less media exposure) did not similarly decrease in use.
Examples
Boxed warnings on drugs have received increased media attention in the United States since 2004. Among some of the more widely covered stories:
In October 2004, the FDA began requiring that boxed warnings be placed on all antidepressant medications, warning they may result in an increased risk of suicidal tendencies in children and adolescents. In May 2006, the boxed warning was expanded to young adults aged 18–24 years old.
The FDA has required a boxed warning on the Depo-Provera contraceptive injection, due to the risk of significant loss of bone density with long-term use.
In April 2005, FDA advisors requested that Pfizer place a boxed warning on their non-steroidal anti-inflammatory drug Celebrex (celecoxib) for cardiovascular and gastrointestinal risks.
In 2005, the FDA issued a boxed warning regarding the risk of atypical antipsychotics being prescribed among elderly patients with dementia. This advisory was associated with a decrease in use of antipsychotics, especially in elderly patients with dementia.
Natalizumab (marketed as Tysabri) received a boxed warning on its packaging due to increased risk of developing progressive multifocal leukoencephalopathy (PML). Tysabri was pulled from the market in 2004, shortly after its introduction, after three cases of the rare disease were linked to its use. As of 2012, PML had affected approximately 212 natalizumab recipients (or 2.1 in every 1,000 patients). Tysabri is now distributed under a controlled prescription program called TOUCH (Tysabri Outreach: Unified Commitment to Health).
The FDA added a boxed warning to the anticoagulant warfarin due to the risk of bleeding to death.
In February 2006, the FDA's Drug Safety and Risk Management Advisory Committee voted to include boxed warnings on methylphenidate formulations used to treat attention deficit hyperactivity disorder, such as Ritalin (methylphenidate), due to possible cardiovascular side-effects. A month later, the agency's Pediatric Advisory Committee effectively rejected recommending boxed warnings for both cardiovascular and psychiatric adverse effects.
On November 14, 2007, the FDA added a boxed warning to the diabetes medication Avandia (rosiglitazone), citing the risk of heart failure or heart attack to patients with underlying heart disease, or are at a high heart attack risk.
On July 8, 2008, the FDA ordered a boxed warning on certain antibiotic medications containing fluoroquinolone, which has been linked to tendon ruptures and tendinitis. Included were the popular drugs Cipro (ciprofloxacin), Levaquin (levofloxacin), Avelox (moxifloxacin), Noroxin (norfloxacin) and Floxin (ofloxacin).
On July 1, 2009, the FDA required Chantix (varenicline) to carry a boxed warning due to public reports of side effects including depression, suicidal thoughts, and suicidal actions. The warning has since been removed on the basis of updated evidence.
On October 27, 2010, the FDA issued a boxed warning regarding the use of Metacam (meloxicam) oral suspension in cats in the United States. Meloxicam is a non-steroidal anti-inflammatory drug that is approved in the U.S. for a single post-operative injection in cats.
The FDA has issued a boxed warning regarding the use of thyroid hormone stimulating agents in the treatment of obesity. Data do not indicate any benefit to using these agents for weight loss. Data do indicate an increased risk of life-threatening cardiovascular events when high levels of these agents are used in hypothyroid populations; euthyroid populations demonstrate increased cardiovascular risk at clinical doses. Thyroid agents should not be used in combination with sympathomimetic agents, including stimulants and diet pills, due to increased cardiovascular risks.
In July 2013, the FDA issued a boxed warning for the antimalarial drug mefloquine, noting the drug's adverse neuropsychiatric side effects, and emphasizing neurological effects from the drug could "occur at any time during drug use, and can last for months to years after the drug is stopped or can be permanent".
In other jurisdictions
In China, a warning text may be added to a package insert, either voluntarily by the manufacturer or at the request of the NMPA (formerly CFDA, the Chinese counterpart of the FDA). Although no formatting requirement is found in law, the typical formatting is similar to the American counterpart, with boxed text at the top of the insert. The CFDA/NMPA has used its power to mandate a warning on fluoroquinolones, ceftriaxone, aciclovir, and pioglitazone.
Health Canada terms its version of boxed warnings "serious warnings and precautions box". The formatting is similar to the US counterpart; an example for Paxlovid can be seen on Pfizer's website.
References
External links
Black Box Warnings by FormWeb.
drug marketing and sales
Food and Drug Administration
Warning systems | Boxed warning | [
"Technology",
"Engineering"
] | 1,327 | [
"Warning systems",
"Safety engineering",
"Measuring instruments"
] |
2,105,941 | https://en.wikipedia.org/wiki/Hydronics | Hydronics () is the use of liquid water or gaseous water (steam) or a water solution (usually glycol with water) as a heat-transfer medium in heating and cooling systems. The name differentiates such systems from oil and refrigerant systems.
Historically, in large-scale commercial buildings such as high-rise and campus facilities, a hydronic system may include both a chilled and a heated water loop, to provide for both heating and air conditioning. Chillers and cooling towers are used either separately or together as means to provide water cooling, while boilers heat water. A recent innovation is the chiller boiler system, which provides an efficient form of HVAC for homes and smaller commercial spaces.
District heating
Many larger cities have a district heating system that provides, through underground piping, publicly available high temperature hot water and chilled water. A building in the service district may be connected to these on payment of a service fee.
Types of hydronic system
Basic types
Hydronic systems can include the following kinds of distributions:
Chilled water systems
Hot water systems
Steam systems
Steam condensate systems
Ground source heat pump systems
Classification
Hydronic systems are further classified in five ways:
Flow generation (forced flow or gravity flow)
Temperature (low, medium, and high)
Pressurization (low, medium, and high)
Piping arrangement
Pumping arrangement
Piping arrangements
Hydronic systems may be divided into several general piping arrangement categories:
Single or one-pipe
Two pipe steam (direct return or reverse return)
Three pipe
Four pipe
Series loop
Single-pipe steam
In the oldest modern hydronic heating technology, a single-pipe steam system delivers steam to the radiators where the steam gives up its heat and is condensed back to water. The radiators and steam supply pipes are pitched so that gravity eventually takes this condensate back down through the steam supply piping to the boiler where it can once again be turned into steam and returned to the radiators.
Despite its name, a radiator does not primarily heat a room by radiation. If positioned correctly, a radiator will create an air convection current in the room, which will provide the main heat transfer mechanism. It is generally agreed that for the best results a steam radiator should be no more than one to two inches (2.5 to 5 cm) from a wall.
Single-pipe systems are limited in both their ability to deliver high volumes of steam (that is, heat) and the ability to control the flow of steam to individual radiators (because closing off the steam supply traps condensate in the radiators). Because of these limitations, single-pipe systems are no longer preferred.
These systems depend on the proper operation of thermostatic air-venting valves located on radiators throughout the heated area. When the system is not in use, these valves are open to the atmosphere, and radiators and pipes contain air. When a heating cycle begins, the boiler produces steam, which expands and displaces the air in the system. The air exits the system through the air-venting valves on the radiators and on the steam pipes themselves. The thermostatic valves close when they become hot; in the most common kind, the vapor pressure of a small amount of alcohol in the valve exerts the force to actuate the valve and prevent steam from leaving the radiator. When the valve cools, air enters the system to replace the condensing steam.
Some more modern valves can be adjusted to allow for more rapid or slower venting. In general, valves nearest to the boiler should vent the slowest, and valves furthest from the boiler should vent the fastest. Ideally, steam should reach each valve and close each and every valve at the same time, so that the system can work at maximal efficiency; this condition is known as a "balanced" system.
Two-pipe steam systems
In two-pipe steam systems, there is a return path for the condensate and it may involve pumps as well as gravity-induced flow. The flow of steam to individual radiators can be modulated using manual or automatic valves.
Two-pipe direct return system
The return piping, as the name suggests, takes the most direct path back to the boiler.
Advantages
Lower cost of return piping in most (but not all) applications, and the supply and return piping are separated.
Disadvantages
This system can be difficult to balance due to the supply line being a different length than the return; the further the heat transfer device is from the boiler, the more pronounced the pressure difference. Because of this, it is always recommended to: minimize the distribution piping pressure drops; use a pump with a flat head characteristic; include balancing and flow-measuring devices at each terminal or branch circuit; and use control valves with a high head loss at the terminals.
Two-pipe reverse return system
The two-pipe reverse return configuration, which is sometimes called "the three-pipe system", differs from the direct return system in the way that water returns to the boiler. In a direct return system, once the water has left the first radiator, it returns to the boiler to be reheated, and so with the second and third radiators, etc. With the two-pipe reverse return, the return pipe travels on to the last radiator in the system before returning to the boiler to be reheated.
Advantages
The advantage of the two-pipe reverse return system is that the pipe run to each radiator is about the same length; this ensures that the frictional resistance to the flow of water is the same for each radiator, which allows easy balancing of the system.
Disadvantages
The installer or repair person cannot trust that every system is self-balancing without properly testing it.
Water loops
Modern systems almost always use heated water rather than steam. This opens the system to the possibility of also using chilled water to provide air conditioning.
In homes, the water loop may be as simple as a single pipe that "loops" the flow through every radiator in a zone. In such a system, flow to the individual radiators cannot be modulated as all of the water is flowing through every radiator in the zone. Slightly more complicated systems use a "main" pipe that flows uninterrupted around the zone; the individual radiators tap off a small portion of the flow in the main pipe. In these systems, individual radiators can be modulated. Alternatively, a number of loops with several radiators can be installed, the flow in each loop or zone controlled by a zone valve connected to a thermostat.
In most water systems, the water is circulated by means of one or more circulator pumps. This is in marked contrast to steam systems where the inherent pressure of the steam is sufficient to distribute the steam to remote points in the system. A system may be broken up into individual heating zones using either multiple circulator pumps or a single pump and electrically operated zone valves.
Improved efficiency and operating costs
There have been considerable improvements in the efficiency and therefore the operating costs of a hydronic heating system with the introduction of insulating products.
Radiator panel system pipes are covered with a fire-rated, flexible and lightweight elastomeric rubber material designed for thermal insulation. Slab heating efficiency is improved with the installation of a thermal barrier made of foam. There are now many product offerings on the market with different energy ratings and installation methods.
Balancing
Most hydronic systems require balancing. This involves measuring and setting the flow to achieve an optimal distribution of energy in the system.
In a balanced system every radiator gets just enough hot water to allow it to heat up fully.
Boiler water treatment
Residential systems may use ordinary tap water, but sophisticated commercial systems often add various chemicals to the system water. For example, these added chemicals may:
Inhibit corrosion
Prevent freezing of the water in the system
Increase the boiling point of the water in the system
Inhibit the growth of mold and bacteria
Allow improved leak detection (for example, dyes that fluoresce under ultraviolet light)
Air elimination
All hydronic systems must have a means to eliminate air from the system. A properly designed, air-free system should continue to function normally for many years.
Air causes irritating system noises, and interrupts proper heat transfer to and from the circulating fluids. In addition, unless reduced below an acceptable level, the oxygen dissolved in water causes corrosion. This corrosion can cause rust and scale to build up on the piping. Over time these particles can become loose and travel around the pipes, reducing or even blocking the flow as well as damaging pump seals and other components.
Water-loop system
Water-loop systems can also experience air problems. Air found within hydronic water-loop systems may be classified into three forms:
Free air
Various devices such as manual and automatic air vents are used to address free air which floats up to the high points throughout the system. Automatic air vents contain a valve that is operated by a float. When air is present, the float drops, allowing the valve to open and bleed air out. When water reaches (fills) the valve, the float lifts, blocking the water from escaping. Small (domestic) versions of these valves in older systems are sometimes fitted with a Schrader-type air valve fitting, and any trapped, now-compressed air can be bled from the valve by manually depressing the valve stem until water rather than air begins to emerge.
Entrained air
Entrained air is air bubbles that travel around in the piping at the same velocity as the water. Air "scoops" are one example of products which attempt to remove this type of air.
Dissolved air
Dissolved air is also present in the system water, and the amount is determined principally by the temperature and pressure (see Henry's law) of the incoming water. On average, tap water contains between 8 and 10% dissolved air by volume.
Removal of dissolved, free and entrained air can only be achieved with a high-efficiency air elimination device that includes a coalescing medium that continually scrubs the air out of the system. Tangential or centrifugal style air separator devices are limited to removal of free and entrained air only.
Accommodating thermal expansion
Water expands as it heats and contracts as it cools. A water-loop hydronic system must have one or more expansion tanks in the system to accommodate this varying volume of the working fluid. These tanks often use a rubber diaphragm pressurised with compressed air. The expansion tank accommodates the expanded water by further air compression and helps maintain a roughly constant pressure in the system across the expected change in fluid volume. Simple cisterns open to atmospheric pressure are also used.
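As a rough illustration of the volume involved, the expansion can be estimated from water densities at the cold-fill and operating temperatures, $\Delta V = V(\rho_{cold}/\rho_{hot} - 1)$. The sketch below uses textbook steam-table densities and ignores pipe expansion and tank acceptance factors, so it is a scale estimate rather than a tank-sizing calculation.

```python
# Approximate water densities (kg/m^3) from steam tables:
RHO = {10: 999.7, 80: 971.8}

def expansion_volume(system_volume_l, t_cold=10, t_hot=80):
    """Extra water volume (litres) produced when the system water
    heats from t_cold to t_hot (simple density-ratio model)."""
    return system_volume_l * (RHO[t_cold] / RHO[t_hot] - 1.0)

print(f"200 L system, 10->80 C: {expansion_volume(200):.1f} L")  # ~5.7 L
```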
Water also expands drastically as it vaporizes, or flashes, into steam. Sparge pipes can help accommodate flashing that may occur as high pressure condensate enters a lower pressure region.
Automatic fill mechanisms
Hydronic systems are usually connected to a water supply (such as the public water supply). An automatic valve regulates the amount of water in the system and also prevents backflow of system water (and any water treatment chemicals) into the water supply.
Safety mechanisms
Excessive heat or pressure may cause the system to fail. At least one combination over-temperature and over-pressure relief valve is always fitted to the system to allow the steam or water to vent to the atmosphere in case of the failure of some mechanism (such as the boiler temperature control) rather than allowing the catastrophic bursting of the piping, radiators, or boiler. The relief valve usually has a manual operating handle to allow testing and the flushing of contaminants (such as grit) that may cause the valve to leak under otherwise-normal operating conditions.
Rapid condensation of steam can also lead to water hammer, which during rapid volume change from gas to liquid leads to a powerful vacuum force. This can damage and destroy fittings, valves and equipment. Proper design and the addition of vacuum breakers reduce or eliminate the risk of these problems.
Typical schematic with control devices shown
See also
Aquastat
Central heating
Hydronic balancing
Radiant cooling
Radiant heating
Uniform Mechanical Code
References
Heating, ventilation, and air conditioning
Plumbing | Hydronics | [
"Engineering"
] | 2,490 | [
"Construction",
"Plumbing"
] |
30,677,625 | https://en.wikipedia.org/wiki/Quadrupole%20splitting | Quadrupole splitting is an example of a hyperfine interaction found in gamma-ray spectroscopy, in the circumstance where nuclei with a non-radially-symmetric shape (that is, with a spin quantum number greater than 1/2) are found immersed in an external electric field gradient. It splits a state into two, producing a doublet in the Mössbauer spectrum, and the separation between the states can be used to measure the sign and strength of this electric field gradient, which is affected by the chemical environment of the nuclei.
References
Atomic physics | Quadrupole splitting | [
"Physics",
"Chemistry"
] | 113 | [
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
" and optical physics"
] |
30,678,493 | https://en.wikipedia.org/wiki/On%20shell%20renormalization%20scheme | In quantum field theory, and especially in quantum electrodynamics, the interacting theory leads to infinite quantities that have to be absorbed in a renormalization procedure, in order to be able to predict measurable quantities. The renormalization scheme can depend on the type of particles that are being considered. For particles that can travel asymptotically large distances, or for low energy processes, the on-shell scheme, also known as the physical scheme, is appropriate. If these conditions are not fulfilled, one can turn to other schemes, like the minimal subtraction scheme (MS scheme).
Fermion propagator in the interacting theory
Knowing the different propagators is the basis for being able to calculate Feynman diagrams, which are useful tools to predict, for example, the result of scattering experiments. In a theory where the only field is the Dirac field, the Feynman propagator reads

$$\langle 0|T\,\psi(x)\bar{\psi}(y)|0\rangle = \int \frac{d^4 p}{(2\pi)^4}\,\frac{i(\not{p} + m_0)}{p^2 - m_0^2 + i\epsilon}\,e^{-ip\cdot(x-y)},$$

where $T$ is the time-ordering operator, $|0\rangle$ the vacuum in the non-interacting theory, $\psi(x)$ and $\bar{\psi}(y)$ the Dirac field and its Dirac adjoint, and where the left-hand side of the equation is the two-point correlation function of the Dirac field.
In a new theory, the Dirac field can interact with another field, for example with the electromagnetic field in quantum electrodynamics, and the strength of the interaction is measured by a parameter; in the case of QED, it is the bare electron charge $e_0$. The general form of the propagator should remain unchanged, meaning that if $|\Omega\rangle$ now represents the vacuum in the interacting theory, the two-point correlation function would now read

$$\langle \Omega|T\,\psi(x)\bar{\psi}(y)|\Omega\rangle = \int \frac{d^4 p}{(2\pi)^4}\,\frac{i Z_2 (\not{p} + m)}{p^2 - m^2 + i\epsilon}\,e^{-ip\cdot(x-y)} + (\text{terms regular at } p^2 = m^2).$$

Two new quantities have been introduced. First, the renormalized mass $m$ has been defined as the pole in the Fourier transform of the Feynman propagator. This is the main prescription of the on-shell renormalization scheme (there is then no need to introduce other mass scales like in the minimal subtraction scheme). The quantity $Z_2$ represents the new strength of the Dirac field. As the interaction is turned down to zero by letting $e_0 \to 0$, these new parameters should tend to values that recover the propagator of the free fermion, namely $m \to m_0$ and $Z_2 \to 1$.
This means that $m$ and $Z_2$ can be defined as a series in $e_0$ if this parameter is small enough (in the unit system where $\hbar = c = 1$, $e_0^2 = 4\pi\alpha$, where $\alpha$ is the fine-structure constant). Thus these parameters can be expressed as

$$m = m_0 + \mathcal{O}(e_0^2), \qquad Z_2 = 1 + \mathcal{O}(e_0^2).$$
On the other hand, the modification to the propagator can be calculated up to a certain order in $e_0$ using Feynman diagrams. These modifications are summed up in the fermion self-energy $\Sigma(\not{p})$, in terms of which the full propagator takes the form

$$\int \frac{d^4 p}{(2\pi)^4}\,\frac{i}{\not{p} - m_0 - \Sigma(\not{p}) + i\epsilon}\,e^{-ip\cdot(x-y)}.$$
These corrections are often divergent because they contain loops.
By identifying the two expressions of the correlation function up to a certain order in $e_0$, the counterterms can be defined, and they are going to absorb the divergent contributions of the corrections to the fermion propagator. Thus, the renormalized quantities, such as the renormalized mass $m$, will remain finite, and these are the quantities measured in experiments.
Photon propagator
Just like what has been done with the fermion propagator, the form of the photon propagator inspired by the free photon field will be compared to the photon propagator calculated up to a certain order in $e_0$ in the interacting theory. The photon self-energy is noted $\Pi^{\mu\nu}(q)$ and, with the metric tensor $\eta^{\mu\nu}$ (here taking the +--- convention), it can be written in the transverse form

$$\Pi^{\mu\nu}(q) = \left(q^2\,\eta^{\mu\nu} - q^\mu q^\nu\right)\Pi(q^2).$$

The behaviour of the counterterm $\delta_3$ is independent of the momentum of the incoming photon $q$. To fix it, the behaviour of QED at large distances (which should help recover classical electrodynamics), i.e. when $q^2 \to 0$, is used:

$$\Pi(q^2 = 0) - \delta_3 = 0.$$

Thus the counterterm $\delta_3$ is fixed with the value of $\Pi(0)$.
Vertex function
A similar reasoning using the vertex function $\Gamma^\mu$ leads to the renormalization of the electric charge $e_0$. This renormalization, and the fixing of the renormalization counterterms, is done using what is known from classical electrodynamics at large space scales. This leads to the value of the counterterm $\delta_1$, which is, in fact, equal to $\delta_2$ because of the Ward–Takahashi identity. It is this calculation that accounts for the anomalous magnetic dipole moment of fermions.
Rescaling of the QED Lagrangian
We have considered some proportionality factors (like the $Z_2$) that have been defined from the form of the propagator. However, they can also be defined from the QED Lagrangian, which will be done in this section, and these definitions are equivalent. The Lagrangian that describes the physics of quantum electrodynamics is

$$\mathcal{L} = -\frac{1}{4}\left(F_{\mu\nu}\right)^2 + \bar{\psi}\left(i\not{\partial} - m_0\right)\psi - e_0\,\bar{\psi}\gamma^\mu\psi\,A_\mu,$$

where $F_{\mu\nu}$ is the field strength tensor, $\psi$ is the Dirac spinor (the relativistic equivalent of the wavefunction), and $A^\mu$ the electromagnetic four-potential. The parameters of the theory are $\psi$, $A^\mu$, $m_0$ and $e_0$. These quantities happen to be infinite due to loop corrections (see below). One can define the renormalized quantities (which will be finite and observable):

$$\psi = Z_2^{1/2}\,\psi_r, \qquad A^\mu = Z_3^{1/2}\,A_r^\mu, \qquad \delta m = Z_2\,m_0 - m, \qquad e_0\,Z_2\,Z_3^{1/2} = e\,Z_1 = e\,(1 + \delta_1).$$

The quantities $\delta_1$, $\delta_2 = Z_2 - 1$, $\delta_3 = Z_3 - 1$ and $\delta m$ are called counterterms (some other definitions of them are possible). They are supposed to be small in the parameter $e$. The Lagrangian now reads in terms of renormalized quantities (to first order in the counterterms):

$$\mathcal{L} = -\frac{1}{4}\left(F_{r\,\mu\nu}\right)^2 + \bar{\psi}_r\left(i\not{\partial} - m\right)\psi_r - e\,\bar{\psi}_r\gamma^\mu\psi_r\,A_{r\,\mu} - \frac{\delta_3}{4}\left(F_{r\,\mu\nu}\right)^2 + \bar{\psi}_r\left(i\,\delta_2\not{\partial} - \delta m\right)\psi_r - e\,\delta_1\,\bar{\psi}_r\gamma^\mu\psi_r\,A_{r\,\mu}.$$
A renormalization prescription is a set of rules that describes what part of the divergences should be in the renormalized quantities and what parts should be in the counterterms. The prescription is often based on the theory of free fields, that is, on the behaviour of $\psi$ and $A^\mu$ when they do not interact (which corresponds to removing the interaction term $-e\,\bar{\psi}\gamma^\mu\psi\,A_\mu$ from the Lagrangian).
References
Quantum field theory
Renormalization group | On shell renormalization scheme | [
"Physics"
] | 1,133 | [
"Quantum field theory",
"Physical phenomena",
"Critical phenomena",
"Quantum mechanics",
"Renormalization group",
"Statistical mechanics"
] |
30,682,536 | https://en.wikipedia.org/wiki/Zeeman%20slower | In atomic physics, a Zeeman slower is a scientific instrument that is commonly used in atomic physics to slow and cool a beam of hot atoms to speeds of several meters per second and temperatures below a kelvin. The gas-phase atoms used in atomic physics are often generated in an oven by heating a solid or liquid atomic sample to temperatures where the vapor pressure is high enough that a substantial number of atoms are in the gas phase. These atoms effuse out of a hole in the oven with average speeds on the order of hundreds of m/s and large velocity distributions (due to their high temperature). The Zeeman slower is attached close to where the hot atoms exit the oven and are used to slow them to less than 10 m/s (slowing) with a very small velocity spread (cooling).
A Zeeman slower consists of a cylinder, through which an atomic beam travels, a pump laser that counterpropagates with respect to the beam's direction, and a magnetic field (commonly produced by a solenoid-like coil) that points along the cylinder's axis with a spatially varying magnitude. The pump laser, which is required to be near-resonant with atomic transition, Doppler-slows a certain velocity class within the velocity distribution of the beam. The spatially varying magnetic field is designed to Zeeman-shift the resonant frequency to match the decreasing Doppler shift as the atoms are slowed to lower velocities while they propagate through the Zeeman slower, allowing the pump laser to be continuously resonant and provide a slowing force.
History
The Zeeman slower was first developed by Harold J. Metcalf and William D. Phillips (who was awarded 1/3 of the 1997 Nobel Prize in Physics in part for his work on the Zeeman slower). The achievement of these low temperatures led the way for the experimental realization of Bose–Einstein condensation, and a Zeeman slower can be part of such an apparatus.
Principle
According to the principles of Doppler cooling, an atom modelled as a two-level atom can be cooled using a laser. If it moves in a specific direction and encounters a counter-propagating laser beam resonant with its transition, it is very likely to absorb a photon. The absorption of this photon gives the atom a "kick" in the direction that is consistent with momentum conservation and brings the atom to its excited state. However, this state is unstable, and some time later the atom decays back to its ground state via spontaneous emission (after a time on the order of nanoseconds; for example, in rubidium-87, the excited state of the D2 transition has a lifetime of 26.2 ns). The photon will be reemitted (and the atom will again increase its speed), but its direction will be random. When averaging over a large number of these processes applied to one atom, one sees that the absorption process decreases the speed always in the same direction (as the absorbed photon comes from a monodirectional source), whereas the emission process does not lead to any change in the speed of the atom because the emission direction is random. Thus the atom is being effectively slowed down by the laser beam.
There is nevertheless a problem in this basic scheme because of the Doppler effect. The resonance of the atom is rather narrow (on the order of a few megahertz), and after having decreased its momentum by a few recoil momenta, it is no longer in resonance with the pump beam because in its frame, the frequency of the laser has shifted. The Zeeman slower uses the fact that a magnetic field can change the resonance frequency of an atom using the Zeeman effect to tackle this problem.
The average acceleration (due to many photon absorption events over time) of an atom with mass $M$, a cycling transition with frequency $\omega_0$ and linewidth $\Gamma$, that is in the presence of a laser beam that has wavenumber $k$ and intensity $I = s_0 I_{sat}$ (where $I_{sat}$ is the saturation intensity of the laser) is

$$a = \frac{\hbar k \Gamma}{2M}\,\frac{s_0}{1 + s_0 + \left(2\delta/\Gamma\right)^2}.$$

In the rest frame of the atoms with velocity $v$ in the atomic beam, the frequency of the laser beam is shifted by $kv$. In the presence of a magnetic field $B$, the atomic transition is Zeeman-shifted by an amount $\mu' B/\hbar$ (where $\mu'$ is the magnetic moment of the transition). Thus, the effective detuning of the laser from the zero-field resonant frequency of the atoms is

$$\delta = \delta_0 + k v - \frac{\mu' B}{\hbar}.$$

The atoms for which $\delta = 0$ will experience the largest acceleration, namely

$$a_{max} = \eta\,\frac{\hbar k \Gamma}{2M},$$

where $\eta = s_0/(1 + s_0)$, and $\delta_0 = \omega_{laser} - \omega_0$ is the detuning of the laser from the atomic resonance at rest.
The most common approach is to require that we have a magnetic field profile that varies in the $z$ direction such that the atoms experience a constant deceleration as they fly along the axis of the slower. It has been recently shown, however, that a different approach yields better results.

In the constant-deceleration approach we get

$$v(z) = v_{max}\sqrt{1 - \frac{z}{z_0}}, \qquad B(z) = B_{bias} + B_0\sqrt{1 - \frac{z}{z_0}},$$

where $v_{max}$ is the maximal velocity class that will be slowed; all the atoms in the velocity distribution that have velocities $v < v_{max}$ will be slowed, and those with velocities $v > v_{max}$ will not be slowed at all. The parameter $\eta$ (which determines the required laser intensity) is normally chosen to be around 0.5. If a Zeeman slower were to be operated with $\eta = 1$, then after absorbing a photon and moving to the excited state, the atom would preferentially re-emit a photon in the direction of the laser beam (due to stimulated emission), which would counteract the slowing process.
Realization
The required form of the spatially inhomogeneous magnetic field, as we showed above, has the form

$$B(z) = B_{bias} + B_0\sqrt{1 - \frac{z}{z_0}}, \qquad B_0 = \frac{\hbar k v_{max}}{\mu'}, \qquad B_{bias} = \frac{\hbar \delta_0}{\mu'}, \qquad z_0 = \frac{M v_{max}^2}{\eta\,\hbar k \Gamma}.$$

This field can be realized in a few different ways. The most popular design requires wrapping a current-carrying wire with many layered windings where the field is strongest (around 20–50 windings) and few windings where the field is weak. Alternative designs include a single-layer coil that varies in the pitch of the winding and an array of permanent magnets in various configurations.
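As a sanity check on the scale of such a coil, the sketch below evaluates the ideal profile for a rubidium-87 slower. The atomic constants are standard; the maximum captured velocity, $\eta$, the zero bias field, and $\mu' = \mu_B$ are assumed design choices for illustration.

```python
import math

HBAR = 1.0546e-34             # J*s
MU_B = 9.274e-24              # Bohr magneton, J/T (taking mu' ~ mu_B)
AMU = 1.6605e-27              # kg

M = 87 * AMU                  # 87Rb mass
k = 2 * math.pi / 780e-9      # D2-line wavenumber, m^-1
Gamma = 2 * math.pi * 6.07e6  # D2 natural linewidth, s^-1
v_max = 350.0                 # m/s, largest captured velocity (assumed)
eta = 0.5                     # design deceleration fraction (assumed)

z0 = M * v_max**2 / (eta * HBAR * k * Gamma)   # slower length
B0 = HBAR * k * v_max / MU_B                   # peak field for B_bias = 0

def B(z):
    """Ideal field profile (tesla), taking B_bias = 0."""
    return B0 * math.sqrt(max(0.0, 1.0 - z / z0))

print(f"slower length z0 ~ {z0:.2f} m")          # ~1.1 m
print(f"peak field B0 ~ {B0 * 1e4:.0f} G")       # ~320 G
print(f"field at z = 0.5 m: {B(0.5) * 1e4:.0f} G")
```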
Outgoing atoms
The Zeeman slower is usually used as a preliminary step to cool the atoms in order to trap them in a magneto-optical trap. Thus it aims at a final velocity of about 10 m/s (depending on the atom used), starting with a beam of atoms with a velocity of a few hundred meters per second. The final speed to be reached is a compromise between the technical difficulty of having a long Zeeman slower and the maximal speed allowed for an efficient loading into the trap.
A limitation of the setup can be the transverse heating of the beam. It is linked to fluctuations of the speed along the three axes around their mean values, since the final speed was said to be an average over a large number of processes. These fluctuations are linked to the atom undergoing Brownian-like motion due to the random re-emission of absorbed photons. They may cause difficulties when loading the atoms into the next trap.
References
Atomic physics
Cooling technology
Scientific instruments | Zeeman slower | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 1,394 | [
"Quantum mechanics",
"Measuring instruments",
"Scientific instruments",
"Atomic physics",
" molecular",
"Atomic",
" and optical physics"
] |
40,024,054 | https://en.wikipedia.org/wiki/Submarine%20pipeline | A submarine pipeline (also known as marine, subsea or offshore pipeline) is a pipeline that is laid on the seabed or below it inside a trench. In some cases, the pipeline is mostly on-land but in places it crosses water expanses, such as small seas, straits and rivers. Submarine pipelines are used primarily to carry oil or gas, but transportation of water is also important. A distinction is sometimes made between a flowline and a pipeline. The former is an intrafield pipeline, in the sense that it is used to connect subsea wellheads, manifolds and the platform within a particular development field. The latter, sometimes referred to as an export pipeline, is used to bring the resource to shore. Sizeable pipeline construction projects need to take into account many factors, such as the offshore ecology, geohazards and environmental loading – they are often undertaken by multidisciplinary, international teams.
Route selection
One of the earliest and most critical tasks in a submarine pipeline planning exercise is the route selection. This selection has to consider a variety of issues, some of a political nature, but most others dealing with geohazards, physical factors along the prospective route, and other uses of the seabed in the area considered. This task begins with a fact-finding exercise, which is a standard desk study that includes a survey of geological maps, bathymetry, fishing charts, aerial and satellite photography, as well as information from navigation authorities.
Physical factors
The primary physical factor to be considered in submarine pipeline construction is the state of the seabed – whether it is smooth (i.e., relatively flat) or uneven (corrugated, with high points and low points). If it is uneven, the pipeline will include free spans when it connects two high points, leaving the section in between unsupported. If an unsupported section is too long, the bending stress exerted onto it (due to its weight) may be excessive. Vibration from current-induced vortexes may also become an issue. Corrective measures for unsupported pipeline spans include seabed leveling and post-installation support, such as berm or sand infilling below the pipeline. The strength of the seabed is another significant parameter. If the soil is not strong enough, the pipeline may sink into it to an extent where inspection, maintenance procedures and prospective tie-ins become difficult to carry out. At the other extreme, a rocky seabed is expensive to trench and, at high points, abrasion and damage of the pipeline's external coating may occur. Ideally, the soil should be such as to allow the pipe to settle into it to some extent, thereby providing it with some lateral stability.
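The free-span limit just mentioned can be illustrated with a simple static bending check for a simply supported span. All numbers below (pipe size, steel grade, allowable-stress fraction) are illustrative assumptions; in practice, vortex-induced vibration often limits spans to less than the static value.

import math

# Static bending check of an unsupported span under its own submerged weight.
Do = 0.508                      # m, outer diameter (hypothetical 20-inch line)
t = 0.0159                      # m, wall thickness
Di = Do - 2 * t
g = 9.81
rho_steel, rho_sea = 7850.0, 1025.0   # kg/m^3

A_steel = math.pi / 4 * (Do**2 - Di**2)
A_bore = math.pi / 4 * Di**2
# Submerged weight per metre of a flooded pipe: steel + contents - buoyancy
w = (A_steel * rho_steel + A_bore * rho_sea) * g - math.pi / 4 * Do**2 * rho_sea * g

Z = math.pi * (Do**4 - Di**4) / (32 * Do)     # elastic section modulus, m^3
sigma_allow = 0.72 * 450e6                    # Pa, assumed usage factor x yield

# Simply supported beam: M_max = w L^2 / 8 and sigma = M / Z  ->  L_allow
L_allow = math.sqrt(8 * sigma_allow * Z / w)
print(f"allowable static free span ~ {L_allow:.0f} m")   # ~ 68 m here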
Other physical factors to be taken into account prior to building a pipeline include the following:
Seabed mobility: Sand waves and megaripples are features that move with time, such that a pipeline that was supported by the crest of one such feature during construction may find itself in a trough later during the pipeline's operational lifespan. The evolution of these features is difficult to predict so it is preferable to avoid the areas where they are known to exist.
Submarine landslides: They result from high sedimentation rates and occur on steeper slopes. They can be triggered by earthquakes. When the soil around the pipe is subjected to a slide, especially if the resulting displacement is at high angle to the line, the pipe within it can incur severe bending and consequent tensile failure.
Currents: High currents are objectionable in that they hinder pipe laying operations. For instance, in shallow seas tidal currents may be quite strong in a strait between two islands. Under these circumstances, it may be preferable to bring the pipe elsewhere, even if this alternative route ends up being longer.
Waves: In shallow waters, waves can also be problematic for pipeline laying operations (in severe wave regimes) and, subsequently, to its stability, because of the water's scouring action. This is one of a number of reasons why landfalls (where the pipeline reaches the shoreline) are particularly delicate areas to plan.
Ice-related issues: In freezing waters, floating ice features often drift into shallower waters, and their keel comes into contact with the seabed. As they continue to drift, they gouge the seabed and can hit the pipeline. Stamukhi can also damage this structure, either by exerting high local stresses on it or by causing the soil around it to fail, thereby inducing excessive bending. Strudel are another pipeline hazard in cold waters – water gushing through them can remove the soil from below the structure, making it vulnerable to overstress (due to self-weight) or vortex-induced oscillations. Pipeline route planning for areas where these risks are known to exist has to consider laying the pipeline in a back-filled trench.
Other uses of the seabed
Proper planning of a pipeline route has to factor in a wide range of human activities that make use of the seabed along the proposed route, or that are likely to do so in the future. They include the following:
Other pipelines: If and where the proposed pipeline intersects an existing one, which is not uncommon, a bridging structure may be required at that juncture in order to cross it. This has to be done at a right angle. The juncture should be carefully designed so as to avoid interferences between the two structures either by direct physical contact or due to hydrodynamic effects.
Fishing vessels: Commercial fishing makes use of heavy fishing nets dragged on the seabed and extending several kilometers behind the trawler. This net could snag the pipeline, with potential damage to both pipeline and vessel.
Ship anchors: Ship anchors are a potential threat to pipelines, especially near harbors.
Military activities: Some areas still contain mines originating from former conflicts that remain live. Other areas, used for bombing or gunnery practice, may also conceal live ammunition. Moreover, at some locations, various types of instrumentation are laid on the seafloor for submarine detection. These areas have to be avoided.
Submarine pipeline characteristics
Submarine pipelines generally vary in diameter, with smaller sizes used for gas lines and larger ones for high-capacity lines; wall thickness likewise varies with diameter and service. The pipe can be designed for fluids at high temperature and pressure. The walls are made from high-yield-strength steel, 350–500 MPa (50,000–70,000 psi), weldability being one of the main selection criteria. The structure is often shielded against external corrosion by coatings such as bitumastic or epoxy, supplemented by cathodic protection with sacrificial anodes. Concrete or fiberglass wrapping provides further protection against abrasion. The addition of a concrete coating is also useful to compensate for the pipeline's positive buoyancy when it carries lower-density substances.
The pipeline's inside wall is not coated for petroleum service. But when it carries seawater or corrosive substances, it can be coated with epoxy, polyurethane or polyethylene; it can also be cement-lined. In the petroleum industry, where leaks are unacceptable and the pipelines are subject to internal pressures typically in the order of 10 MPa (1500 psi), the segments are joined by full penetration welds. Mechanical joints are also used. A pig is a standard device in pipeline transport, be it on-land or offshore. It is used to test for hydrostatic pressure, to check for dents and crimps on the sidewalls inside the pipe, and to conduct periodic cleaning and minor repairs.
Pipeline construction
Pipeline construction involves two procedures: assembling many pipe segments into a full line, and installing that line along the desired route. Several systems can be used – for a submarine pipeline, the choice in favor of any one of them is based on the following factors: physical and environmental conditions (e.g. currents, wave regime), availability of equipment and costs, water depth, pipeline length and diameter, constraints tied to the presence of other lines and structures along the route. These systems are generally divided into four broad categories: pull/tow, S-lay, J-lay and reel-lay.
The pull/tow system
In the pull/tow system, the submarine pipeline is assembled onshore and then towed to location. Assembly is done either parallel or perpendicular to the shoreline – in the former case, the full line can be built prior to tow-out and installation. A significant advantage of the pull/tow system is that pre-testing and inspection of the line are done onshore, not at sea, and it can handle lines of any size and complexity. As for the towing procedures, a number of configurations can be used, which may be categorized as follows: surface tow, near-surface tow, mid-depth tow, off-bottom tow and bottom tow.
Surface tow: In this configuration, the pipeline remains at the surface of the water during tow, and is then sunk into position at lay site. The line has to be buoyant – this can be done with individual buoyancy units attached to it. Surface tows are not appropriate for rough seas and are vulnerable to lateral currents.
Near-surface tow: The pipeline remains below the water surface but close to it – this mitigates wave action. But the spar buoys used to maintain the line at that level are affected by rough seas, which in itself may represent a challenge for the towing operation.
Mid-depth tow: The pipeline is not buoyant – either because it is heavy or it is weighted down by hanging chains. In this configuration, the line is suspended in a catenary between two towing vessels. The shape of that catenary (the sag) is a balance between the line's weight, the tension applied to it by the vessels and hydrodynamic lift on the chains. The amount of allowable sag is limited by how far down the seabed is.
Off-bottom tow: This configuration is similar to the mid-depth tow, but here the line is maintained within 1 to 2 m (several feet) of the bottom, using chains dragging on the seabed.
Bottom tow: In this case, the pipeline is dragged along the bottom – the line is not affected by waves and currents, and if the sea gets too rough for the tow vessel, the line can simply be abandoned and recovered later. Challenges with this type of system include the requirement for an abrasion-resistant coating, interaction with other submarine pipelines, and potential obstructions (reefs, boulders, etc.). Bottom tow is commonly used for river crossings and crossings between shores.
The S-lay system
In the S-lay system, the pipeline assembly is done at the installation site, on board a vessel that has all the equipment required for joining the pipe segments: pipe handling conveyors, welding stations, X-ray equipment, joint-coating module, etc. The S notation refers to the shape of the pipeline as it is laid onto the seabed. The pipeline leaves the vessel at the stern or bow from a supporting structure called a stinger that guides the pipe's downward motion and controls the convex-upward curve (the overbend). As it continues toward the seabed, the pipe has a convex-downward curve (the sagbend) before coming into contact with the seabed (touch down point). The sagbend is controlled by a tension applied from the vessel (via tensioners) in response to the pipeline's submerged weight. The pipeline configuration is monitored so that it will not get damaged by excessive bending. This on-site pipeline assembly approach, referred to as lay-barge construction, is known for its versatility and self-contained nature – despite the high costs associated with this vessel's deployment, it is efficient and requires relatively little external support. But it may have to contend with severe sea states – these adversely affect operations such as pipe transfer from supply boats, anchor-handling and pipe welding. Recent developments in lay-barge design include dynamic positioning and the J-lay system.
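To first approximation (neglecting the pipe's bending stiffness), the suspended sagbend – whether in S-lay or in a mid-depth tow – hangs as a catenary whose shape is set by the submerged weight and the tension held at the surface. The following is a minimal sketch with made-up numbers; the function and its inputs are illustrative, not taken from any design code.

import math

def sagbend_catenary(depth, w_sub, H):
    """Catenary approximation of the suspended line (bending stiffness
    ignored). depth: water depth (m); w_sub: submerged weight (N/m);
    H: horizontal tension component held at the surface (N)."""
    a = H / w_sub                           # catenary parameter
    s = math.sqrt(depth * (depth + 2 * a))  # suspended line length
    x = a * math.acosh(1 + depth / a)       # horizontal span to touchdown
    T_top = H + w_sub * depth               # tension at the vessel
    return s, x, T_top, a                   # a = radius of curvature at touchdown

# Illustrative numbers only: 150 m of water, 1.5 kN/m submerged weight,
# 200 kN of horizontal tension.
s, x, T_top, R = sagbend_catenary(150.0, 1500.0, 200e3)
print(f"suspended length {s:.0f} m, touchdown offset {x:.0f} m, "
      f"top tension {T_top / 1e3:.0f} kN, touchdown radius {R:.0f} m")

Raising the tension increases the catenary parameter and therefore the radius of curvature at touchdown, which is precisely why the sagbend is controlled through the tension applied from the vessel.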
The J-lay system
In areas where the water is very deep, the S-lay system may not be appropriate because the pipeline leaves the stinger to go almost straight down. To avoid sharp bending at the end of it and to mitigate excessive sag bending, the tension in the pipeline would have to be high. Doing so would interfere with the vessel's positioning, and the tensioner could damage the pipeline. A particularly long stinger could be used, but this is also objectionable since that structure would be adversely affected by winds and currents. The J-lay system, one of the latest generations of lay-barge, is better suited for deep water environments. In this system, the pipeline leaves the vessel on a nearly vertical ramp (or tower). There is no overbend – only a sagbend of catenary nature (hence the J notation), such that the tension can be reduced. The pipeline is also less exposed to wave action as it enters the water. However, unlike for the S-lay system, where pipe welding can be done simultaneously at several locations along the vessel deck's length, the J-lay system can only accommodate one welding station. Advanced methods of automatic welding are used to compensate for this drawback.
The Reel-lay system
In the reel-lay system, the pipeline is assembled onshore and is spooled onto a large drum mounted on board a purpose-built vessel. The vessel then goes out to location to lay the pipeline. Onshore facilities to assemble the pipeline have inherent advantages: they are not affected by the weather or the sea state and are less expensive than seaborne operations. Pipeline supply can be coordinated: while one line is being laid at sea, another one can be spooled onshore. A single reel can have enough capacity for a full-length flowline. The reel-lay system, however, can only handle lower-diameter pipelines – up to about 400 mm (16 in). Also, the kind of steel making up the pipes must be able to undergo the required amount of plastic deformation as it is bent to proper curvature (by a spiral J-tube) when reeled around the drum, and straightened back (by a straightener) during the layout operations at the installation site.
Stabilisation
Several methods are used to stabilise and protect submarine pipelines and their components. These may be used alone or in combinations.
Trenching and burial
A submarine pipeline may be laid inside a trench as a means of safeguarding it against fishing gear (e.g. anchors) and trawling activity. This may also be required in shore approaches to protect the pipeline against currents and wave action (as it crosses the surf zone). Trenching can be done prior to pipeline lay (pre-lay trenching), or afterward by seabed removal from below the pipeline (post-lay trenching). In the latter case, the trenching device rides on top of, or straddles, the pipeline.
Several systems are used to dig trenches in the seabed for submarine pipelines:
Jetting: This is a post-lay trenching procedure whereby the soil is removed from beneath the pipeline by using powerful pumps to blow water on each side of it.
Mechanical cutting: This system uses chains or cutter disks to dig through and remove harder soils, including boulders, from below the pipeline.
Plowing: The plowing principle, which was initially used for pre-lay trenching, has evolved into sophisticated systems that are lighter in size for faster and safer operation.
Dredging/excavation: In shallower water, the soil can be removed with a dredger or an excavator prior to laying the pipeline. This can be done in a number of ways, notably with a "cutter-suction" system, with the use of buckets or with a backhoe.
A buried pipe is far better protected than a pipe in an open trench. Burial is commonly done by covering the structure with rock quarried from a nearby shoreline; alternatively, the soil excavated from the seabed during trenching can be used as backfill. A significant drawback to burial is the difficulty in locating a leak should one arise, and in carrying out the ensuing repair operations.
Mattresses
Mattresses may be laid over the pipeline, or both under and over it depending on the substrate.
Frond mattresses have an effect similar to seaweed and tend to cause sand to accumulate. They must be anchored to the bottom to prevent being washed away.
Concrete mattresses are used to help hold part of the pipeline in place by their weight and reduce scour. They are usually heavy enough to be held in place by their own weight, as they are made from concrete blocks linked together by rope.
Combination mattresses of concrete mattress with overlaid frond mattress are also used.
Ground anchors
Clamps holding the pipeline to piles may be used to prevent lateral movement.
Saddle blocks
Precast concrete saddle blocks may be used to provide lateral support and hold the pipeline down more firmly.
Sandbags and groutbags
These may be packed at the sides or under a pipeline to provide vertical and/or lateral support.
Gravel dumps
Gravel may be dumped over parts of a pipeline to reduce scour and help stabilise against lateral movement.
Environmental and legal issues
The Espoo Convention created certain requirements for notification and consultation where a project is likely to have transboundary environmental effects. Scholars are divided on how effective Espoo is at mitigating environmental harm. Law of the Sea concepts involved in the construction of transboundary pipelines concern territorial waters, continental shelves, exclusive economic zones, freedom of the high seas and protection of the environment. Under international law the high seas are open to all states to lay underwater pipelines and for various other types of construction.
Underwater pipelines pose environmental risk because the pipelines themselves may become damaged by ships' anchors, corrosion, tectonic activity, or as a result of defective construction and materials. Stanislav Patin has said that research on the effects of natural gas on underwater ecosystems, fish and other marine organisms has been limited. Researchers found a cause–effect relationship between mass fish mortality and natural gas leaks after drilling accidents in the Sea of Azov in 1982 and 1985.
Concerns about the environmental risks of underwater pipelines have been raised on numerous occasions. There have been at least two serious incidents involving oil pipelines on the UK's continental shelf, as well as several minor spills and gas leaks involving other North Sea pipelines. In 1980 a pipeline was damaged by a ship's anchor, and in 1986 a pipeline valve failed due to pressure changes; both incidents resulted in oil spills. Several Baltic countries expressed concerns about the Nord Stream pipeline: the route of the 1,200 km underwater pipeline would travel through fishing areas of the Baltic Sea, as well as areas where chemical weapons from World War II had been discarded.
Protection of submarine pipelines
Significance of submarine pipelines
Submarine pipelines are crucial to the global energy infrastructure and supply. This infrastructure contributes to global energy security, as its function is to transport oil, natural gas and other resources over vast distances. In December 2020, operational oil pipelines worldwide had a daily capacity of almost 100 million barrels of oil equivalent. The regions with the longest operational oil pipeline network in 2024 in kilometres are North America with 111,353 km, Asia with 86,717 km and Europe in third place with 74,077 km.
Pipelines have been of geopolitical and national-security priority since the beginning of their construction. Disruptions to pipeline operations can cause significant economic losses, environmental hazards, and energy shortages for the affected parties, making their adequate protection a high priority. Key challenges related to critical maritime infrastructure protection are human activity, geographical accessibility, natural hazards, sabotage, maintenance and monitoring, and a lack of international cooperation.
Threats: intentional human activity
Earlier debates about critical maritime infrastructure have shifted from an emphasis on terrorism and cyber threats toward the increasing frequency and efficacy of hybrid tactics. The Nord Stream sabotage exemplifies hybrid attacks, which aim to inflict significant damage on an adversary while operating in a way that makes detection, attribution and response difficult, blurring the conceptual line between conflict and peace.
Increased attention
Despite underwater pipelines being categorized as critical infrastructure, they only recently became a prominent issue in debates about maritime security, having previously been overlooked and neglected in geopolitical security debates. One reason for the increased attention is the growing threat to critical infrastructure on the seabed and the resulting risk to national security. The shift in attention came especially after the sabotage of the Nord Stream pipelines in the Baltic Sea in September 2022. The explosions highlighted the need to prioritize critical maritime infrastructure protection, exposing both the vulnerabilities of marine infrastructure and the inadequacy of current protection and response mechanisms. The perpetrators of the Nord Stream sabotage have still not been caught, even though observers note that the incident's level of sophistication implies either state sponsorship or another form of governmental backing.
Protection of submarine pipelines
NATO is working to enhance the security of underwater infrastructure to prevent and respond to threats such as organized crime and hybrid attacks like the explosion of the Nord Stream pipeline. In 2023, at the Vilnius summit, NATO launched a new centre for the protection of underwater pipelines, following the still-unsolved case of sabotage on the Nord Stream pipelines and growing concern that Russia is mapping infrastructure in waters around Europe. The centre is dedicated to the security of the vast network of underwater energy pipelines, which has been shown to be vulnerable to attacks intended to disrupt energy supply and economic activity.
The European Union has also updated its maritime security strategy to cover the protection of underwater pipelines. The strategy promotes rules-based governance at sea and boosts international cooperation; its objective is to increase the resilience and protection of critical maritime infrastructure such as gas pipelines and underwater cables. The strategy comprises six strategic objectives: stepping up activities at sea, such as security exercises; cooperating with partners such as regional and international organisations; leading in maritime domain awareness, including information collection and exchange among authorities; managing risks and threats so as to improve the collective resilience of the EU and its member states; enhancing civilian and military capabilities; and providing education and training to ensure a high level of specialised skills, largely focused on those needed to tackle hybrid threats.
Responses to the Nord Stream sabotage
After the Nord Stream sabotage in 2022, NATO and the European Union intensified their attention towards the protection of maritime infrastructure. This resulted in increased policy activity and new protection strategies and plans, including the procurement of additional naval vessels to be used for seabed infrastructure protection. The sabotage of the Nord Stream pipeline in 2022 triggered a substantial response, especially among North Sea coastal states. In 2023, the UK added a new surveillance ship, the RFA Proteus, to its fleet. The vessel’s purpose is to monitor the seabed and ensure infrastructure safety. Norway increased its security measures around the country’s oil and gas infrastructure, involving an increased presence of the Navy, Coast Guard and Air Force covering all domains including subsea and cyber. Highlighting the need for cooperation in response to the threat against critical maritime infrastructure, Norway also accepted assistance from the UK, France and Germany offering their naval resources to restore and increase the security surrounding the oil and gas infrastructure in the North Sea.
See also
References
Bibliography
Bai Y. & Bai Q. (2010) Subsea Engineering Handbook. Gulf Professional Publishing, New York, 919 p.
Bisht, I. S. (2022, October 3). UK to Procure Seabed Surveillance Vessels for Infrastructure Safety. The Defense Post. https://thedefensepost.com/2022/10/03/uk-seabed-surveillance-vessels/
Brooke-Holland, L. (2023). Seabed warfare: Protecting the UK’s undersea infrastructure. https://commonslibrary.parliament.uk/seabed-warfare-protecting-the-uks-undersea-infrastructure/
Brown R.J. (2006) Past, present, and future towing of pipelines and risers. In: Proceedings of the 38th Offshore Technology Conference (OTC). Houston, U.S.A.
Bueger, C., & Edmunds, T. (2024). Understanding Maritime Security. Oxford University Press.
Bueger, C., & Liebetrau, T. (2023). Critical maritime infrastructure protection: What’s the trouble? Marine Policy, 155, 105772.
Cook, L. (2023, June 16). NATO moves to protect undersea pipelines, cables as concern mounts over Russian sabotage threat. AP News. https://apnews.com/article/nato-russia-sabotage-pipelines-cables-infrastructure-507929033b05b5651475c8738179ba5c
Council of the EU. (2023, October 24). Maritime security: Council approves revised EU strategy and action plan. Consilium. https://www.consilium.europa.eu/en/press/press-releases/2023/10/24/maritime-security-council-approves-revised-eu-strategy-and-action-plan/
Croasdale K., Been K., Crocker G., Peek R. & Verlaan P. (2013) Stamukha loading cases for pipelines in the Caspian Sea. Proceedings of the 22nd International Conference on Port and Ocean Engineering under Arctic Conditions (POAC), Espoo, Finland.
Dean E.T.R. (2010) Offshore Geotechnical Engineering - Principles and Practice, Thomas Telford, Reston, VA, U.S.A., 520 p.
Gerwick B.C. (2007) Construction of marine and offshore structures. CRC Press, New York, 795 p.
Häggblom, R. (2022, October 4). Nordic Countries’ Response to Nord Stream Sabotage. Naval News. https://www.navalnews.com/naval-news/2022/10/nordic-countries-response-to-nord-stream-sabotage/
MARCOM, P. A. O. (2024). NATO officially launches new Maritime Centre for Security of Critical Undersea Infrastructure. Mc.Nato.Int. https://mc.nato.int/media-centre/news/2024/nato-officially-launches-new-nmcscui.aspx
Monaghan, S., Svendsen, O., Darrah, M., & Arnold, E. (2023). NATO’s Role in Protecting Critical Undersea Infrastructure. Center for Strategic and International Studies (CSIS); JSTOR. http://www.jstor.org.ep.fjernadgang.kb.dk/stable/resrep55403
Nasiri, H. (2024, May 4). Managing Undersea Pipeline Damage: Strategies for Prevention, Response, and Recovery. 9th International Conference on Civil, Structural and Seismic Engineering. https://www.researchgate.net/publication/380343395_Managing_Undersea_Pipeline_Damage_Strategies_for_Prevention_Response_and_Recovery
Palmer, A.C. & Been K. (2011) Pipeline geohazards for Arctic conditions. In: W.O. McCarron (Editor), Deepwater Foundations and Pipeline Geomechanics, J. Ross Publishing, Fort Lauderdale, Florida, pp. 171–188.
Palmer, A. C. & King R. A. (2008). Subsea Pipeline Engineering (2nd ed.). Tulsa, USA: Pennwell, 624 p.
Ramakrishnan T.V. (2008) Offshore engineering. Gene-Tech Books, New Delhi, 347 p.
Shapiro, J., & McNeish, J.-A. (Eds.). (2021). Our Extractive Age: Expressions of Violence and Resistance. Routledge. https://doi.org/10.4324/9781003127611
Statista. (2024a). Daily capacity of oil pipelines worldwide as of December 2020, by status. Statista Research Department.
Statista. (2024b). Global oil pipeline length 2024, by region. Statista Research Department. https://www.statista.com/statistics/1491014/length-of-oil-pipelines-by-region/
Wilson J.F. (2003) Structures in the offshore environment. In: J.F. Wilson (Editor), Dynamics of Offshore Structures. John Wiley & Sons, Hoboken, New Jersey, U.S.A., pp. 1–16.
Geotechnical structures
Oceanography | Submarine pipeline | [
"Physics",
"Environmental_science"
] | 5,981 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
40,028,796 | https://en.wikipedia.org/wiki/Air%20Cargo%20Challenge | The Air Cargo Challenge is an aeronautical engineering student competition that is held in Europe every two years.
This competition was held for the first time in 2003 and it was founded by a group of Aerospace students in Lisbon. The competition is primarily directed to aeronautical and aerospace engineering students, similarly to the north-American Design/Build/Fly.
The main objective is to design and build a radio-controlled aircraft that is able to fly with the highest possible payload according with the rules established in the competition regulations, which vary in each edition.
The team's score is not only given by the performance demonstrated in the flight competition part, but also by the technical quality of the project, through the evaluation of the design report and drawings.
The event's first edition (ACC'03) was organized by the APAE: Associação Portuguesa de Aeronáutica e Espaço (Portuguese Association of Aeronautics and Space), an aerospace group from Instituto Superior Técnico. From the ACC'07 onwards, the competition grew to an international level under the umbrella of EUROAVIA, the European Association of Aerospace Students, and the winning team earned the right to organize the next edition. The ACC of 2011 was held at the University of Stuttgart in August 2011, organized by the AKAModell Stuttgart together with EUROAVIA Stuttgart. The Universidade da Beira Interior won that edition, so the 2013 competition took place in Portugal. The team from Stuttgart won the 2013 edition and took on the organization of the 2015 one. In 2017, the 2015 winner HUSZ organized the event in Zagreb. The competition took place in Stuttgart again on August 12–17, 2019. After a one-year delay due to the Covid pandemic, the competition was held in Munich in the summer of 2022. The 2022 winner ADDI organized the 2024 edition on 10–12 July in Aachen, Germany. The next ACC, in 2026, will be held in Stuttgart, Germany.
Competition results
2003 - Organization: APAE, Lisbon
2005 - Organization: APAE, Lisbon
2007 - Organization: Instituto Superior Técnico, Lisbon
2009 - Organization: University of Beira Interior, Covilhã
2011 - Organization: University of Stuttgart, Stuttgart
2013 - Organization: University of Beira Interior, Covilhã
2015 - Organization: University of Stuttgart, Stuttgart
2017 - Organization: University of Zagreb, Zagreb
2019 - Organization: University of Stuttgart, Stuttgart
2022 - Organization: AkaModell München e.V., Technical University of Munich Munich
The 2021 edition of the ACC was delayed to 2022 due to the Covid pandemic. It was organized by AkaModell München from the Technical University of Munich and took place from 5 to 9 July 2022 in Munich; the regulations had been released on August 6, 2020. All information on ACC2022 can be found at www.acc2022.de.
2024 - Organization: Aachen Drone Development Initiative (ADDI), RWTH Aachen University
Statistics
Number of participated Teams per Country (2007-2024)
Medal table (2007-2024)
References
External links
https://web.archive.org/web/20110723004649/http://aircargochallenge.net/portal/
https://web.archive.org/web/20101227010310/http://www.acc2011.com/
https://web.archive.org/web/20130522142129/http://apae.org.pt/
https://web.archive.org/web/20081204102043/http://aircargochallenge.net/portal/finalResults.pdf
www.upcventuri.com
https://web.archive.org/web/20190805114405/http://www.acc2017.com/
https://aachen-drone.com/zb_acc2024/
Aeronautics
Engineering competitions
Engineering education
European student organizations
International aviation organizations
Student competitions | Air Cargo Challenge | [
"Technology"
] | 844 | [
"Science and technology awards",
"Engineering competitions"
] |
40,030,501 | https://en.wikipedia.org/wiki/M2-brane | In theoretical physics, an M2-brane is a spatially extended mathematical object (brane) that appears in string theory and in related theories (e.g. M-theory, F-theory). In particular, it is a solution of eleven-dimensional supergravity which possesses a three-dimensional world volume.
Description
The M2-brane solution can be found by requiring (Poincaré)₃ × SO(8) symmetry of the solution and solving the supergravity equations of motion with the p-brane ansatz. The solution is given by a metric and three-form gauge field which, in isotropic coordinates, can be written as

ds² = H(y)^(−2/3) η_μν dx^μ dx^ν + H(y)^(1/3) δ_mn dy^m dy^n,  A_(3) = H(y)^(−1) dx⁰ ∧ dx¹ ∧ dx²,  H(y) = 1 + q/|y|⁶,

where η_μν is the flat-space metric on the three-dimensional world volume, and the distinction has been made between world-volume coordinates x^μ (μ, ν = 0, 1, 2) and transverse coordinates y^m (m, n = 1, …, 8). The constant q is proportional to the charge of the brane, which is given by the integral of the dual field strength ⋆F_(4) over the boundary of the transverse space of the brane.
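As a quick consistency check (a sketch, not part of the original derivation), one can verify symbolically that H = 1 + q/|y|⁶ is harmonic in the eight transverse dimensions, as the equations of motion require away from the brane at y = 0:

import sympy as sp

# Symbolic check that H = 1 + q/r^6 is harmonic in 8 transverse dimensions
# (q is an arbitrary positive constant).
y = sp.symbols('y1:9', real=True)   # eight transverse coordinates y1..y8
q = sp.symbols('q', positive=True)
r2 = sum(yi**2 for yi in y)

H = 1 + q / r2**3                   # = 1 + q / r**6
laplacian = sum(sp.diff(H, yi, 2) for yi in y)

print(sp.simplify(laplacian))       # prints 0 (away from the origin)

More generally, r^(−a) is harmonic in d dimensions exactly when a = d − 2, which for the eight transverse dimensions of the M2-brane gives the power 6 appearing in the solution.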
See also
String theory
Membrane (M-theory)
M-theory
References
String theory
Physical cosmology | M2-brane | [
"Physics",
"Astronomy"
] | 199 | [
"Astronomical hypotheses",
"Astronomical sub-disciplines",
"Theoretical physics",
"Astrophysics",
"String theory",
"Physical cosmology"
] |
3,919,387 | https://en.wikipedia.org/wiki/Remineralisation | In biogeochemistry, remineralisation (or remineralization) refers to the breakdown or transformation of organic matter (those molecules derived from a biological source) into its simplest inorganic forms. These transformations form a crucial link within ecosystems as they are responsible for liberating the energy stored in organic molecules and recycling matter within the system to be reused as nutrients by other organisms.
Remineralisation is normally viewed as it relates to the cycling of the major biologically important elements such as carbon, nitrogen and phosphorus. While crucial to all ecosystems, the process receives special consideration in aquatic settings, where it forms a significant link in the biogeochemical dynamics and cycling of aquatic ecosystems.
Role in biogeochemistry
The term "remineralization" is used in several contexts across different disciplines. The term is most commonly used in the medicinal and physiological fields, where it describes the development or redevelopment of mineralized structures in organisms such as teeth or bone. In the field of biogeochemistry, however, remineralization is used to describe a link in the chain of elemental cycling within a specific ecosystem. In particular, remineralization represents the point where organic material constructed by living organisms is broken down into basal inorganic components that are not obviously identifiable as having come from an organic source. This differs from the process of decomposition which is a more general descriptor of larger structures degrading to smaller structures.
Biogeochemists study this process across all ecosystems for a variety of reasons. This is done primarily to investigate the flow of material and energy in a given system, which is key to understanding the productivity of that ecosystem along with how it recycles material versus how much is entering the system. Understanding the rates and dynamics of organic matter remineralization in a given system can help in determining how or why some ecosystems might be more productive than others.
Remineralization reactions
While it is important to note that the process of remineralization is a series of complex biochemical pathways within microbes, it can often be simplified as a series of one-step processes for ecosystem-level models and calculations. A generic form of these reactions is shown by:
organic matter + oxidant → liberated simple nutrients + CO2 (carbon dioxide) + H2O (water)
The above generic equation starts with two reactants: some piece of organic matter (composed of organic carbon) and an oxidant. Most organic carbon exists in a reduced form which is then oxidized by the oxidant (such as O2) into CO2 and energy that can be harnessed by the organism. This process generally produces CO2, water and a collection of simple nutrients like nitrate or phosphate that can then be taken up by other organisms. The above general form, when considering O2 as the oxidant, is the equation for respiration. In this context specifically, the above equation represents bacterial respiration, though the reactants and products are essentially analogous to the shorthand equations used for multicellular respiration.
Electron acceptor cascade
The degradation of organic matter through respiration in the modern ocean is facilitated by different electron acceptors, whose favorability is determined by the Gibbs free energy of reaction and the laws of thermodynamics. This redox chemistry is the basis for life in deep-sea sediments and determines the obtainability of energy for the organisms that live there. From the water interface moving toward deeper sediments, the order of these acceptors is oxygen, nitrate, manganese, iron, and sulfate; the zonation of these favored acceptors can be seen in Figure 1. Moving downward from the surface through these deep-ocean sediments, acceptors are used up and depleted; once one is depleted, the next acceptor of lower favorability takes its place. Thermodynamically, oxygen represents the most favorable electron acceptor, but it is quickly used up at the water–sediment interface, and in most locations of the deep sea its concentration extends only millimeters to centimeters down into the sediment. This favorability indicates an organism's ability to obtain higher energy from the reaction, which helps it compete with other organisms. In the absence of these acceptors, organic matter can also be degraded through methanogenesis, but the net oxidation of the organic matter is not fully represented by this process. Each pathway and the stoichiometry of its reaction are listed in Table 1.
Due to this quick depletion of O2 in the surface sediments, a majority of microbes use anaerobic pathways to metabolize other oxidants such as manganese, iron, and sulfate. It is also important to account for bioturbation and the constant mixing of this material, which can change the relative importance of each respiration pathway. For the microbial perspective, please reference the electron transport chain.
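The cascade logic described above can be made concrete with a small sketch: at a given depth, degradation proceeds via the most favorable acceptor still available, falling back to methanogenesis when all are exhausted. The pore-water concentrations below are invented for illustration.

# Sketch of the electron-acceptor cascade; ordering taken from the text.
CASCADE = ["O2", "NO3-", "Mn(IV)", "Fe(III)", "SO4 2-"]

def active_pathway(concentrations, threshold=1e-6):
    # Use the first (most favorable) acceptor above a trace threshold;
    # with everything exhausted, degradation proceeds via methanogenesis.
    for acceptor in CASCADE:
        if concentrations.get(acceptor, 0.0) > threshold:
            return acceptor
    return "methanogenesis"

# Example: oxygen already exhausted a few centimetres down, nitrate remaining.
porewater = {"O2": 0.0, "NO3-": 2e-5, "Mn(IV)": 1e-4,
             "Fe(III)": 3e-4, "SO4 2-": 0.028}
print(active_pathway(porewater))    # -> NO3-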
Remineralisation in sediments
Reactions
A quarter of all organic material that exits the photic zone reaches the seafloor without being remineralised, and 90% of that remaining material is remineralised in the sediments themselves. Once in the sediment, organic remineralisation may occur through a variety of reactions. The following reactions are the primary ways in which organic matter is remineralised; in them, general organic matter (OM) is often represented by the shorthand CH2O.
Aerobic respiration
Aerobic respiration is the most preferred remineralisation reaction due to its high energy yield. Oxygen, however, is quickly depleted in the sediments and is generally exhausted within centimeters of the sediment–water interface.
Anaerobic respiration
In instances in which the environment is suboxic or anoxic, organisms prefer to use denitrification to remineralise organic matter, as it provides the second-largest amount of energy. At depths below those where denitrification is favored, manganese reduction, iron reduction, sulfate reduction and methanogenesis become favored, in that order. This favorability is governed by the Gibbs free energy (ΔG). In a water body, sediment seabed, or soil, the sorting of these chemical reactions with depth, in order of the energy they provide, is called a redox gradient.
Redox zonation
Redox zonation refers to how the processes that transfer terminal electrons during organic matter degradation vary in time and space. Certain reactions are favored over others because of their energy yield, as detailed in the electron-acceptor cascade above. In oxic conditions, in which oxygen is readily available, aerobic respiration is favored due to its high energy yield. Once the consumption of oxygen through respiration exceeds the input of oxygen from bioturbation and diffusion, the environment becomes anoxic and organic matter is broken down by other means, such as denitrification and manganese reduction.
Remineralisation in the open ocean
In most open ocean ecosystems only a small fraction of organic matter reaches the seafloor. Biological activity in the photic zone of most water bodies tends to recycle material so well that only a small fraction of organic matter ever sinks out of that top photosynthetic layer. Remineralisation within this top layer occurs rapidly and due to the higher concentrations of organisms and the availability of light, those remineralised nutrients are often taken up by autotrophs just as rapidly as they are released.
What fraction escapes varies depending on the location of interest. For example, in the North Sea, carbon deposition is ~1% of primary production, while in the open ocean that value is on average <0.5%. Most nutrients therefore remain in the water column, recycled by the biota. Heterotrophic organisms utilize the materials produced by the autotrophic (and chemotrophic) organisms and, via respiration, remineralise the compounds from the organic form back to inorganic, making them available for primary producers again.
For most areas of the ocean, the highest rates of carbon remineralisation occur at intermediate depths in the water column, decreasing down to about 1,200 m, below which remineralisation rates remain roughly constant at 0.1 μmol kg−1 yr−1. As a result of this, the pool of remineralised carbon (which generally takes the form of carbon dioxide) tends to increase in the photic zone.
Most remineralisation involves dissolved organic carbon (DOC). Studies have shown that it is the larger sinking particles that transport matter down to the sea floor, while suspended particles and dissolved organics are mostly consumed by remineralisation. This happens in part because organisms must typically ingest nutrients smaller than they are, often by orders of magnitude. With the microbial community making up 90% of marine biomass, it is particles smaller than the microbes that will be taken up for remineralisation.
See also
Biological pump
Decomposition
f-ratio
John D. Hamaker (soil remineralisation)
Mineralization (biology)
Mineralization (soil science)
Immobilization (soil science)
References
Biogeochemistry
Oceanography
Limnology | Remineralisation | [
"Physics",
"Chemistry",
"Environmental_science"
] | 1,873 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Oceanography",
"Environmental chemistry",
"Chemical oceanography",
"Biogeochemistry"
] |
3,919,647 | https://en.wikipedia.org/wiki/Polyamorphism | Polyamorphism is the ability of a substance to exist in several different amorphous modifications. It is analogous to the polymorphism of crystalline materials. Many amorphous substances can exist with different amorphous characteristics (e.g. polymers). However, polyamorphism requires two distinct amorphous states with a clear, discontinuous (first-order) phase transition between them. When such a transition occurs between two stable liquid states, a polyamorphic transition may also be referred to as a liquid–liquid phase transition.
Overview
Even though amorphous materials exhibit no long-range periodic atomic ordering, there is still significant and varied local structure at inter-atomic length scales (see structure of liquids and glasses). Different local structures can produce amorphous phases of the same chemical composition with different physical properties such as density. In several cases sharp transitions have been observed between two different density amorphous states of the same material. Amorphous ice is one important example (see also examples below). Several of these transitions (including water) are expected to end in a second critical point.
Liquid–liquid transitions
Polyamorphism may apply to all amorphous states, i.e. glasses, other amorphous solids, supercooled liquids, ordinary liquids or fluids. A liquid–liquid transition however, is one that occurs only in the liquid state (red line in the phase diagram, top right). In this article liquid–liquid transitions are defined as transitions between two liquids of the same chemical substance. Elsewhere the term liquid–liquid transition may also refer to the more common transitions between liquid mixtures of different chemical composition.
The stable liquid state unlike most glasses and amorphous solids, is a thermodynamically stable equilibrium state. Thus new liquid–liquid or fluid-fluid transitions in the stable liquid (or fluid) states are more easily analysed than transitions in amorphous solids where arguments are complicated by the non-equilibrium, non-ergodic nature of the amorphous state.
Rapoport's theory
Liquid–liquid transitions were originally considered by Rapoport in 1967 in order to explain high pressure melting curve maxima of some liquid metals. Rapoport's theory requires the existence of a melting curve maximum in polyamorphic systems.
Double well potentials
One physical explanation for polyamorphism is the existence of a double well inter-atomic pair potential (see lower right diagram). It is well known that the ordinary liquid–gas critical point appears when the inter-atomic pair potential contains a minimum. At lower energies (temperatures) particles trapped in this minimum condense into the liquid state. At higher temperatures however, these particles can escape the well and the sharp definition between liquid and gas is lost. Molecular modelling has shown that addition of a second well produces an additional transition between two different liquids (or fluids) with a second critical point.
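A concrete (if artificial) example of such a potential: the sketch below adds a Gaussian repulsive shoulder to a Lennard-Jones well, producing two minima at different pair separations – i.e., two competing local densities. All parameter values are arbitrary illustrations, not fitted to any real material.

import numpy as np

# Generic double-well pair potential: Lennard-Jones plus a Gaussian shoulder.
def double_well(r, eps=1.0, sigma=1.0, h=2.0, r0=1.5, w=0.15):
    lj = 4 * eps * ((sigma / r)**12 - (sigma / r)**6)
    shoulder = h * np.exp(-0.5 * ((r - r0) / w)**2)
    return lj + shoulder

r = np.linspace(0.95, 2.5, 500)
u = double_well(r)

# The two local minima correspond to two preferred neighbour distances.
is_min = (u[1:-1] < u[:-2]) & (u[1:-1] < u[2:])
print("pair-distance minima near r =", np.round(r[1:-1][is_min], 2))

With these parameters the minima fall near r ≈ 1.1 and r ≈ 2.0, so particles condensing into the inner or the outer well form phases of markedly different density.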
Examples of polyamorphism
Polyamorphism has been experimentally observed or theoretically suggested in silicon, liquid phosphorus, triphenyl phosphate, mannitol, and in some other molecular network-forming substances.
Water and structural analogues
The most famous case of polyamorphism is amorphous ice. Pressurizing conventional hexagonal ice crystals to about 1.6 GPa at liquid nitrogen temperature (77 K) converts them to the high-density amorphous ice. Upon releasing the pressure, this phase is stable and has density of 1.17 g/cm3 at 77 K and 1 bar. Consequent warming to 127 K at ambient pressure transforms this phase to a low-density amorphous ice (0.94 g/cm3 at 1 bar). Yet, if the high-density amorphous ice is warmed up to 165 K not at low pressures but keeping the 1.6 GPa compression, and then cooled back to 77 K, then another amorphous ice is produced, which has even higher density of 1.25 g/cm3 at 1 bar. All those amorphous forms have very different vibrational lattice spectra and intermolecular distances.
A similar abrupt liquid-amorphous phase transition is predicted in liquid silicon when cooled under high pressures. This observation is based on first principles molecular dynamics computer simulations, and might be expected intuitively since tetrahedral amorphous carbon, silicon, and germanium are known to be structurally analogous to water.
Oxide liquids and glasses
Yttria–alumina melts are another system reported to exhibit polyamorphism. Observation of a liquid–liquid phase transition in the supercooled liquid has been reported, though this is disputed in the literature. Polyamorphism has also been reported in yttria–alumina glasses. Yttria–alumina melts quenched from about 1900 °C at a rate of ~400 °C/s can form glasses containing a second co-existing phase. This happens for certain Y/Al ratios (about 20–40 mol% Y2O3). The two phases have the same average composition but different density, molecular structure and hardness. However, whether the second phase is glassy or crystalline is also debated.
Continuous changes in density were observed upon cooling silicon dioxide or germanium dioxide. Although continuous density changes do not constitute a first order transition, they may be indicative of an underlying abrupt transition.
Organic materials
Polyamorphism has also been observed in organic compounds, such as liquid triphenyl phosphite at temperatures between 210 K and 226 K and n-butanol at temperatures between 120 K and 140 K.
Polyamorphism is also an important area in pharmaceutical science. The amorphous form of a drug typically has much better aqueous solubility (compared to the analogous crystalline form) but the actual local structure in an amorphous pharmaceutical can be different, depending on the method used to form the amorphous phase.
Mannitol is the first pharmaceutical substance featuring polyamorphism. In addition to the regular amorphous phase, a second amorphous phase can be prepared at room temperature and pressure. This new phase has substantially lower energy, lower density and higher glass transition temperature. Since mannitol is widely used in pharmaceutical tablet formulations, mannitol polyamorphism offers a powerful tool to engineer the property and behavior of tablets.
See also
Glass
Liquid
Amorphous solid
structure of liquids and glasses
polymorphism (materials science)
Pair distribution function
References
Phase transitions
Phases of matter
Amorphous solids | Polyamorphism | [
"Physics",
"Chemistry"
] | 1,336 | [
"Physical phenomena",
"Phase transitions",
"Phases of matter",
"Critical phenomena",
"Unsolved problems in physics",
"Statistical mechanics",
"Amorphous solids",
"Matter"
] |
3,921,716 | https://en.wikipedia.org/wiki/Categorical%20set%20theory | Categorical set theory is any one of several versions of set theory developed from or treated in the context of mathematical category theory.
See also
Categorical logic
References
External links
Category theory
Set theory
Formal methods
Categorical logic | Categorical set theory | [
"Mathematics",
"Engineering"
] | 44 | [
"Functions and mappings",
"Mathematical structures",
"Category theory stubs",
"Set theory",
"Categorical logic",
"Mathematical logic",
"Mathematical objects",
"Software engineering",
"Fields of abstract algebra",
"Category theory",
"Mathematical relations",
"Formal methods"
] |
3,922,021 | https://en.wikipedia.org/wiki/Robot%20calibration | Robot calibration is a process used to improve the accuracy of robots, particularly industrial robots which are highly repeatable but not accurate. Robot calibration is the process of identifying certain parameters in the kinematic structure of an industrial robot, such as the relative position of robot links. Depending on the type of errors modeled, the calibration can be classified in three different ways. Level-1 calibration only models differences between actual and reported joint displacement values, (also known as mastering). Level-2 calibration, also known as kinematic calibration, concerns the entire geometric robot calibration which includes angle offsets and joint lengths. Level-3 calibration, also called a non-kinematic calibration, models errors other than geometric defaults such as stiffness, joint compliance, and friction. Often Level-1 and Level-2 calibration are sufficient for most practical needs.
Parametric robot calibration is the process of determining the actual values of kinematic and dynamic parameters of an industrial robot (IR). Kinematic parameters describe the relative position and orientation of links and joints in the robot while the dynamic parameters describe arm and joint masses and internal friction.
Non-parametric robot calibration circumvents the parameter identification. Used with serial robots, it is based on the direct compensation of mapped errors in the workspace. Used with parallel robots, non-parametric calibration can be performed by the transformation of the configuration space.
Robot calibration can remarkably improve the accuracy of robots programmed offline. A calibrated robot has a higher absolute as well as relative positioning accuracy compared to an uncalibrated one; i.e., the real position of the robot end effector corresponds better to the position calculated from the mathematical model of the robot. Absolute positioning accuracy is particularly relevant in connection with robot exchangeability and off-line programming of precision applications. Besides the calibration of the robot, the calibration of its tools and the workpieces it works with (the so-called cell calibration) can minimize occurring inaccuracies and improve process security.
Accuracy criteria and error sources
The international standard ISO 9283 sets different performance criteria for industrial robots and suggests test procedures in order to obtain appropriate parameter values. The most important criteria, and also the most commonly used, are pose accuracy (AP) and pose repeatability (RP). Repeatability is particularly important when the robot is moved towards the command positions manually ("Teach-In"). If the robot program is generated by a 3D simulation (off-line programming), absolute accuracy is vital, too. Both are generally influenced negatively by kinematic factors. Here especially the joint offsets and deviations in lengths and angles between the individual robot links take effect.
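For the position components, these two criteria are commonly computed from a cloud of attained positions as sketched below. This is a simplified reading of ISO 9283 (AP as the distance from the commanded pose to the barycenter of attained poses, RP as a mean-plus-three-standard-deviations spread); all numbers are made up, and the full standard also covers orientation components and prescribed test cycles.

import numpy as np

# Position-only sketch of ISO 9283-style pose accuracy (AP) and
# repeatability (RP); the measurement data below are synthetic.
def iso9283_ap_rp(commanded, attained):
    attained = np.asarray(attained, dtype=float)
    barycenter = attained.mean(axis=0)
    ap = np.linalg.norm(barycenter - commanded)   # pose accuracy (AP)
    l = np.linalg.norm(attained - barycenter, axis=1)
    rp = l.mean() + 3 * l.std(ddof=1)             # pose repeatability (RP)
    return ap, rp

rng = np.random.default_rng(1)
commanded = np.array([400.0, 0.0, 300.0])         # mm, commanded position
# Systematic offset (poor accuracy) plus small scatter (good repeatability):
attained = commanded + np.array([0.8, -0.3, 0.5]) + rng.normal(0, 0.03, (30, 3))
ap, rp = iso9283_ap_rp(commanded, attained)
print(f"AP = {ap:.3f} mm, RP = {rp:.3f} mm")

The synthetic data illustrate the typical industrial-robot situation described above: a repeatability of a few hundredths of a millimetre alongside an absolute accuracy an order of magnitude worse.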
Measurement systems
There exist different possibilities for pose measurement with industrial robots, e.g. touching reference parts, using supersonic distance sensors, laser interferometry, theodolites, calipers or laser triangulation. Furthermore, there are camera systems which can be attached in the robot's cell or at the IR mounting plate and acquire the pose of a reference object. Measurement and calibration systems are made by such companies as Bluewrist, Dynalog, RoboDK, FARO Technologies, Creaform, Leica, Cognibotics, Metris, Metronor, Wiest, Teconsult and Automated Precision.
Mathematical principles
The robot errors gathered by pose measurements can be minimized by numerical optimization. For kinematic calibration, a complete kinematic model of the geometric structure must be developed, whose parameters can then be calculated by mathematical optimization. The common system behaviour can be described by a vector model function y = f(x, p), together with the input vector x, the parameter vector p and the output vector y (see figure).
The variables k, l, m, n and their derivatives describe the dimensions of the single vector spaces. Minimization of the residual error r for identification of the optimal parameter vector p follows from the difference between the measured and the modelled output vectors, using the Euclidean norm: r(p) = ‖y_measured − f(x, p)‖ → min.
For solving the kinematical optimization problems least-squares descent methods are convenient, e.g. a modified quasi-Newton method. This procedure supplies corrected kinematical parameters for the measured machine, which then, for example, can be used to update the system variables in the controller to adapt the used robot model to the real kinematics.
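A minimal numerical sketch of this identification step, using a toy two-link planar arm instead of a full six-axis model: the "true" link lengths, joint offsets and noise level are invented to generate synthetic measurements (on a real robot they would come from a laser tracker or similar system), and scipy's trust-region least-squares solver stands in for the quasi-Newton methods mentioned above.

import numpy as np
from scipy.optimize import least_squares

def forward(params, joints):
    # Kinematic model of a planar 2-link arm with joint-angle offsets.
    l1, l2, off1, off2 = params
    q1 = joints[:, 0] + off1
    q2 = joints[:, 1] + off2
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return np.column_stack([x, y])

rng = np.random.default_rng(0)
joints = rng.uniform(-np.pi, np.pi, size=(30, 2))        # measurement poses
true_params = np.array([0.502, 0.398, 0.010, -0.008])    # m, m, rad, rad
measured = forward(true_params, joints) + rng.normal(0, 1e-4, (30, 2))

nominal = np.array([0.5, 0.4, 0.0, 0.0])                 # controller's model

def residual(p):
    # Difference between modelled and measured output vectors.
    return (forward(p, joints) - measured).ravel()

fit = least_squares(residual, nominal)   # trust-region least-squares descent
print("identified parameters:", np.round(fit.x, 4))

The identified parameters recover the "true" lengths and offsets to within the measurement noise, and would then be written back into the controller to update the robot model.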
Results
The positioning accuracy of industrial robots varies by manufacturer, age, and robot type. Using kinematic calibration, these errors can be reduced to less than a millimeter in most cases. An example of this is shown in the figure to the right.
Accuracy of 6-axis industrial robots can be improved by a factor of 10.
Accuracy of parallel robots after calibration can be as low as a tenth of a millimeter.
Sample applications
In the industry, there is a general trend towards substitution of machine tools and special machines by industrial robots for certain manufacturing tasks whose accuracy demands can be fulfilled by calibrated robots. Through simulation and off-line programming, it is possible to easily accomplish complex programming tasks, such as robot machining. However, contrary to the teach programming method, good accuracy as well as repeatability is required.
In the figure, a current example is shown: in-line measurement in automotive manufacturing, where the common "measurement tunnel" used for 100% inspection with many expensive sensors is partly replaced by industrial robots that carry only one sensor each. This way the total cost of a measurement cell can be reduced significantly. The station can also be re-used after a model change by simple re-programming, without mechanical adaptations.
Further examples for precision applications are robot-guided hemming in car body manufacturing, assembly of mobile phones, drilling, riveting and milling in the aerospace industry, and increasingly in medical applications.
See also
Hand eye calibration problem
Literature
Tagiyev, N.; Alizade, R.: A Forward and Reverse Displacement Analysis for a 6-DOF In-Parallel Manipulator. In: Mech. Mach. Theory, Vol. 29, No. 1, London 1994, pp. 115–124.
Trevelyan, J. P.: Robot Calibration with a Kalman Filter. Presentation at International Conference on Advanced Robotics and Computer Vision (ICARCV96), Singapore 1996.
N.N.: ISO 9283 - Manipulating industrial robots. Performance criteria and related test methods. ISO, Geneva 1998.
Beyer, L.; Wulfsberg, J.: Practical Robot Calibration with ROSY. In: Robotica, Vol. 22, Cambridge 2004, pp. 505–512.
Y. Zhang and F. Gao, “A calibration test of Stewart platform,” 2007 IEEE International Conference on Networking, Sensing and Control, IEEE, 2007, pp. 297–301.
A. Nubiola and I.A. Bonev, "Absolute calibration of an ABB IRB 1600 robot using a laser tracker," Robotics and Computer-Integrated Manufacturing, Vol. 29 No. 1, 2013, pp. 236–245.
Gottlieb, J.: Non-parametric Calibration of a Stewart Platform. In: Proceedings of 2014 Workshop on Fundamental Issues and Future Research Directions for Parallel Mechanisms and Manipulators July 7–8, 2014, Tianjin, China.
Nof, Shimon Y. Handbook of industrial robotics (Chapter 5, Section 9). Vol. 1. John Wiley & Sons, 1999.
References
Robot control | Robot calibration | [
"Engineering"
] | 1,557 | [
"Robotics engineering",
"Robot control"
] |
3,922,869 | https://en.wikipedia.org/wiki/Potassium%20hydride | Potassium hydride, KH, is the inorganic compound of potassium and hydrogen. It is an alkali metal hydride. It is a white solid, although commercial samples appear gray. It is a powerful superbase that is useful in organic synthesis. It is sold commercially as a slurry (~35%) in mineral oil or sometimes paraffin wax to facilitate dispensing.
Preparation
Potassium hydride is produced by direct combination of the metal and hydrogen at temperatures between 200 and 350 °C:

2 K + H2 → 2 KH
This reaction was discovered by Humphry Davy soon after his 1807 discovery of potassium, when he noted that the metal would vaporize in a current of hydrogen when heated just below its boiling point.
Potassium hydride is soluble in fused hydroxides (such as molten sodium hydroxide) and salt mixtures, but not in organic solvents.
Reactions
KH reacts with water according to the reaction:

KH + H2O → KOH + H2
As a superbase, potassium hydride is more basic than sodium hydride. It is used to deprotonate certain carbonyl compounds to give enolates. It also deprotonates amines to give the corresponding amides of the type KNHR and KNR2.
Safety
KH can be pyrophoric in air, react violently with acids, and can ignite upon contact with oxidants. As a suspension in mineral oil, KH is less dangerous.
See also
Sodium hydride
References
Metal hydrides
Potassium compounds
Superbases
Rock salt crystal structure | Potassium hydride | [
"Chemistry"
] | 307 | [
"Superbases",
"Inorganic compounds",
"Reducing agents",
"Metal hydrides",
"Bases (chemistry)"
] |
3,923,616 | https://en.wikipedia.org/wiki/Quiesce | To quiesce is to pause or alter a device or application to achieve a consistent state, usually in preparation for a backup or other maintenance.
Description
In software applications that modify information stored on disk, this generally involves flushing any outstanding writes; see buffering. With telecom applications, this generally involves allowing existing callers to finish their call but preventing new calls from initiating.
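The general pattern can be sketched in a few lines. The following is a hypothetical illustration (not any vendor's actual API): new work is rejected, in-flight work is drained, and buffered writes are flushed before the system is declared quiesced:

```python
# Hypothetical sketch of a quiesce/resume pattern for a write-buffering service.
import threading

class Service:
    def __init__(self):
        self._lock = threading.Lock()
        self._idle = threading.Condition(self._lock)
        self._quiesced = False
        self._inflight = 0
        self._buffer = []

    def submit(self, record):
        with self._lock:
            if self._quiesced:
                raise RuntimeError("service is quiesced; not accepting new work")
            self._inflight += 1
        try:
            self._buffer.append(record)      # buffered write
        finally:
            with self._idle:
                self._inflight -= 1
                self._idle.notify_all()

    def quiesce(self):
        with self._idle:
            self._quiesced = True            # reject new callers
            while self._inflight > 0:        # let existing work finish
                self._idle.wait()
            self._flush()                    # reach a consistent state

    def resume(self):
        with self._lock:
            self._quiesced = False

    def _flush(self):
        # A real system would fsync outstanding writes here; the sketch just clears.
        self._buffer.clear()
```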
Example
Perhaps the best-known support for this was incorporated into Microsoft shadow copies (the Volume Shadow Copy Service), introduced in Microsoft Windows Server 2003. For an application to be quiesced during the shadow-copy process, it must register itself as a writer, and it is responsible for putting itself into a quiescent mode upon notification.
Vendor schemes
Various database and application vendors implement schemes to provide support for this feature including:
Symantec's Livestate – now includes a quiesce process, as does VMware's VI3 snapshot and VCB features. Symantec supports Exchange and SQL.
VMware – supports quiescing of the guest I/O system, e.g. during snapshots
IBM DB2 LUW supports a Quiesce command that is used to indicate a state for which all users have been locked out of a database or an instance (of databases) so that administrative tasks can be performed.
IBM DB2 for z/OS, OS/390 and IBM i Operating System has a utility command called QUIESCE, used in order to make it write all data belonging to a certain database (a logical entity in a DB2 subsystem) from the buffers, helping utility programs get DRAIN access on the datasets quickly.
IBM DB2 for z/OS and OS/390 also supports a command SET LOG SUSPEND that, technically speaking, stops it from writing to the log, in effect freezing any update activity (most read-only queries can continue). This mode is sometimes used for snapshot-type backup schemes, lasting only a fraction of a second and ensuring the backed-up data is in a consistent state. This command is reversed with a SET LOG RESUME command.
A graceful shutdown of WebSphere MQ is called quiescing.
ORACLE also supports a Quiesce command since version 9i which allows existing users to continue to use resources but disallows new resources being made available.
SYBASE ASE 12.0 and above support a QUIESCE DATABASE command that prevents any process from running commands that write to the transaction log. The primary purpose is to stop all update activity so the database files can be copied or backed up using OS level utilities. While the database is quiesced, it is still available to users for read-only queries.
Microsoft Windows SharePoint Services 3.0 and Microsoft Office SharePoint Server 2007/2010 support a QUIESCE or QUIESCE FROM TEMPLATE (2010) option within the Central Administration operations window. This allows an administrator to stop the server farm from accepting new user connections and gradually brings any long-running applications offline without causing data loss.
JADE Object Oriented Database system can perform a quiesced backup with the parameter 'quiesced=true'. The database is placed in a quiescent state by allowing current active transactions to complete and then flushing modified buffers from cache to the stable database. During a quiesced backup, updating transactions are not permitted and attempts to execute database transactions raise a database exception.
Microsoft Visual Studio Team Foundation Server supports quiesce functionality by using the TFS Service Control command-line utility. More information exists about this command-line utility in the MSDN Library. A Microsoft Knowledge Base article describes it by indicating that it disables access to Team Foundation Server services for the duration of servicing operations.
See also
References
Computer hardware tuning
Computing terminology | Quiesce | [
"Technology"
] | 741 | [
"Computing terminology"
] |
34,709,019 | https://en.wikipedia.org/wiki/Denjoy%E2%80%93Luzin%20theorem | In mathematics, the Denjoy–Luzin theorem, introduced independently by Arnaud Denjoy and Nikolai Luzin in 1912,
states that if a trigonometric series converges absolutely on a set of positive measure, then the sum of its coefficients converges absolutely, and in particular the trigonometric series converges absolutely everywhere.
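In symbols (a standard formulation, added here for clarity; the notation is the usual one rather than from a specific source):

```latex
% Denjoy–Luzin theorem for a trigonometric series:
\text{If } \sum_{n=1}^{\infty} \lvert a_n \cos nx + b_n \sin nx \rvert < \infty
\text{ on a set } E \text{ with } |E| > 0,
\text{ then } \sum_{n=1}^{\infty} \bigl( \lvert a_n \rvert + \lvert b_n \rvert \bigr) < \infty ,
% so the series converges absolutely (indeed uniformly) for every x.
```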
References
Fourier series
Theorems in analysis | Denjoy–Luzin theorem | [
"Mathematics"
] | 65 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical analysis stubs",
"Mathematical problems",
"Mathematical theorems"
] |
34,709,142 | https://en.wikipedia.org/wiki/Intze%20principle | The Intze Principle () is a name given to two engineering principles, both named after the hydraulic engineer, Otto Intze, (1843–1904). In the one case, the Intze Principle relates to a type of water tower; in the other, a type of dam.
Intze Principle for water towers
A water tower built in accordance with the Intze Principle has a brick shaft on which the water tank sits. The base of the tank is fixed with a ring anchor (Ringanker) made of iron or steel, so that only vertical, not horizontal, forces are transmitted to the tower. Due to the lack of horizontal forces the tower shaft does not need to be quite as solidly built.
This type of design was used in Germany between 1885 and 1905.
Intze Principle for dams
The method of dam construction invented by Otto Intze was used in Germany at the end of the 19th and beginning of the 20th centuries. A dam built on the Intze Principle has the following features:
it is a gravity dam with an almost triangular cross-section
the wall is made of rubble stone with a high proportion of mortar
it has a curved ground plan
it has facing brickwork (Vorsatzmauerwerk or Verblendung) on the upper part of the upstream side
it has an earth embankment against the lower part of the upstream side, the so-called Intze Wedge (Intze-Keil)
it has a cement-sealed upstream face, coated with a layer of bitumen or tar
it has internal vertical drainage using clay pipes behind the upstream face
The purpose of the Intze Wedge is to provide an additional seal in the area of the highest water pressure. During the 1920s, this type of construction was gradually superseded by concrete dams or arched dams which were cheaper to build.
See also
List of dams in Germany
References
External links
Otto Intze and water tower construction
Otto Intze and dam construction
Intze tanks at wassertuerme.gmxhome.de
Hydraulic engineering
Water towers
Dams | Intze principle | [
"Physics",
"Engineering",
"Environmental_science"
] | 408 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering"
] |
34,710,289 | https://en.wikipedia.org/wiki/Equation%20clock | An equation clock is a mechanical clock which includes a mechanism that simulates the equation of time, so that the user can read or calculate solar time, as would be shown by a sundial. The first accurate clocks, controlled by pendulums, were patented by Christiaan Huygens in 1657. For the next few decades, people were still accustomed to using sundials, and wanted to be able to use clocks to find solar time. Equation clocks were invented to fill this need.
Early equation clocks have a pointer that moves to show the equation of time on a dial or scale. The clock itself runs at constant speed. The user calculates solar time by adding the equation of time to the clock reading. Later equation clocks, made in the 18th century, perform the compensation automatically, so the clock directly shows solar time. Some of them also show mean time, which is often called "clock time".
Simulation mechanisms
All equation clocks include a mechanism that simulates the equation of time, so a lever moves, or a shaft rotates, in a way that represents the variations of the equation of time as the year progresses. There are two frequently-used types of mechanism:
Cam and lever mechanism
In this type of mechanism, a shaft is driven by the clock so it rotates once a year, at constant speed. The shaft carries a cam, which is approximately "kidney shaped" such that its radius is essentially a graph of the annual variation of the equation of time. A follower and lever rest against the cam, so that as the cam rotates the lever moves in a way that represents the changing equation of time. This lever drives other components in the clock.
Double shaft mechanism
To a close approximation, the equation of time can be represented as the sum of two sine waves, one with a period of one year and the other with a period of six months, with the relative phase varying very slowly (marginally noticeable over the course of a century). See the explanation in Equation of time for more detail.
The double shaft mechanism has two shafts rotating at constant speeds: one turns once a year, and the other twice a year. Cranks or pins attached to the two shafts move the two ends of a combining lever (sometimes referred to as a whippletree) sinusoidally; if the dimensions are chosen correctly, the midpoint of the rod moves in a way that simulates the equation of time.
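Numerically, the two-sinusoid picture corresponds to a common textbook approximation of the equation of time (the rounded coefficients below are the usual ones, not taken from any particular clock):

```python
# Common two-term approximation of the equation of time, in minutes.
# B sweeps once per year; the sin(2B) term is the twice-a-year component.
import math

def equation_of_time_minutes(day_of_year: int) -> float:
    b = 2.0 * math.pi * (day_of_year - 81) / 365.0
    return 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)

# With this sign convention, solar (sundial) time = mean (clock) time + EoT.
for day in (45, 135, 315):  # mid-February, mid-May, mid-November
    print(day, round(equation_of_time_minutes(day), 1))  # e.g. -14.6 min in February
```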
Types of equation clock
During the period when equation clocks were made and used, all clocks were made by hand. No two are exactly alike. Many equation clocks also have other features, such as displays of the phase of the moon or the times of sunrise and sunset. Leaving aside such additions, there are four different ways in which the clocks function. The following paragraphs are intended, not as detailed descriptions of individual clocks, but as illustrations of the general principles of these four different types of equation clock. The basic workings of particular clocks resemble these, but details vary. Pictures and descriptions of various equation clocks, which still exist in museums, can be accessed through the External links listed below.
Clocks without solar time displays
Many equation clocks, especially early ones, have a normal clock mechanism, showing mean time, and also a display that shows the equation of time. An equation of time simulation mechanism drives the pointer on this display. The user has to add the equation of time to clock time to calculate solar time.
Clocks that directly display solar time
Most later equation clocks, made in the 18th century, directly display solar time. Many of them also display mean time and the equation of time, but the user does not have to perform addition. Three types exist:
Clocks with movable minute markings
Clocks have been constructed in which the minute markings are on a circular plate that can be turned around the same axis as the hands. The axis passes through a hole in the centre of the plate, and the hands are in front of the plate. The minutes part of the time shown by the clock is given by the position of the minute hand relative to the markings on the plate. The hand is driven clockwise at constant speed by the clock mechanism, and the plate is turned by the mechanism that simulates the equation of time, rotating anticlockwise as the equation of time increases, and clockwise when it decreases. If the gear ratios are correct, the clock shows solar time. Mean time can also be shown by a separate, stationary set of minute markings on the dial, outside the edge of the plate. The hour display is not adjusted for the equation of time, so the hour reading is slightly approximate. This has no practical effect, since it is always easy to see which hour is correct. These clocks are mechanically simpler than the other types described below, but they have disadvantages: Solar time is difficult to read without looking closely at the minute markings, and the clock cannot be made to strike the hour in solar time.
Clocks with variable pendula
These clocks include a device at the top of the pendulum that slightly changes its effective length, so the speed of the clock varies. This device is driven by a simulation mechanism which moves to simulate the rate of change of the equation of time, rather than its actual value. For example, during the months of December and January, when the equation of time is decreasing so a sundial runs slower than usual, the mechanism makes the pendulum effectively longer than usual, so the clock runs slower and keeps pace with sundial time. At other times of the year, the pendulum is shortened, so the clock runs faster, again keeping pace with sundial time. This type of mechanism shows only solar (sundial) time. Clocks using it cannot easily be made to show mean time unless a separate clock mechanism, with its own pendulum, is included. There are some equation clocks in which this is done, but it requires the clock case to be very sturdy, to avoid coupling between the pendula. Another disadvantage of variable pendulum clocks is that the equation of time cannot be easily displayed.
Clocks that do mechanical addition
In some later equation clocks, a pendulum swings at a constant frequency, controlling a normal clock mechanism. Often, this mechanism drives a display showing mean (clock) time. However, there are additional components: an equation of time simulation mechanism as described above, and a device that automatically adds the equation of time to clock time, and drives a display that shows solar time. The addition is done by an analogue method, using a differential gear. This type of equation clock mechanism is the most versatile. Both solar and mean times can be easily and clearly displayed, as can the equation of time. Striking the hours in both kinds of time is also easy to arrange. After its invention in 1720, this mechanism became the standard one, and was used for much of the 18th century, until the demand for equation clocks ceased.
Slow changes in the equation of time
Slow changes in the motions of the Earth cause gradual changes in the annual variation of the equation of time. The graph at the top of this article shows the annual variation as it is at present, around the year 2000. Many centuries in the past or future, the shape of the graph would be very different. Most equation clocks were constructed some three centuries ago, since when the change in the annual variation of the equation of time has been small, but appreciable. The clocks embody the annual variation as it was when they were made. They do not compensate for the slow changes, which were then unknown, so they are slightly less accurate now than they were when new. The greatest error from this cause is currently about one minute. Centuries in the future, if these clocks survive, the errors will be larger.
Similar modern devices
Equation clocks, as such, are no longer widely used. However, components functionally the same as those in equation clocks are still used in, for example, solar trackers, which move so as to follow the movements of the Sun in the sky. Many of them do not sense the position of the Sun. Instead, they have a mechanism which rotates about a polar axis at a constant speed of 15 degrees per hour, keeping pace with the average speed of the Earth's rotation relative to the Sun. Sometimes, a digital representation of this rotation is generated, rather than physical rotation of a component. The equation of time is then added to this constant rotation, producing a rotation of the tracker that keeps pace with the apparent motion of the Sun. Generally, these machines use modern technology, involving electronics and computers, instead of the mechanical devices that were used in historic equation clocks, but the function is the same.
See also
Equation of time
Sundial
Clock
Differential (mechanical device)
References and footnotes
External links
Note: In some of these historical materials, clock time is called "equal time", and sundial time is called "apparent time" or "true solar time".
Variable pendulum clock in the British Museum.
British Museum equation clocks, with descriptions.
Pocket watch that works as an equation clock. The description suggests that it does mechanical addition.
Clock with movable minute markings on a ring.
Letter from Joseph Williamson written c.1715, claiming the invention of clocks showing solar time, working by movable minute marks or variable pendula.
The Equation of Time gives illustrated examples of the development of equation clocks
Clocks | Equation clock | [
"Physics",
"Technology",
"Engineering"
] | 1,886 | [
"Physical systems",
"Machines",
"Clocks",
"Measuring instruments"
] |
34,712,011 | https://en.wikipedia.org/wiki/Ballistics%20by%20the%20Inch | Ballistics by the Inch (often called BBTI) was a project to test the performance characteristics of a variety of common handgun calibers/cartridges. The initial testing was done in 2008 and tested the velocity of 13 common handgun cartridges as it related to firearm barrel length. In 2009 an additional three calibers were tested and in 2010 and 2011 more calibers were added. Also in 2011 testing was carried out to study the "cylinder gap effect" on the velocity of ammunition shot from revolvers. All testing has been carried out as carefully as possible with no bias toward any particular manufacturer, caliber, or firearm. In November 2008 the BBTI website went online, describing the tests and making the data freely available so that anyone can dig out any particular bit of information they may be interested in. The website underwent a complete redesign in late 2011 with the addition of the 2011 tests, and even more tests have been done in the years since.
As of 1 September 2020, the project has been declared to be in 'archive status', as mentioned on the official website, which has remained accessible to the public.
Barrel length tests
The initial round of tests, and most later ones, tested handgun bullet velocity as it relates to the length of the barrel. The BBTI team wanted to test barrel lengths from 2 to 18 inches in one-inch increments. To have a consistent platform to test the various sizes and shapes of ammunition, a gunsmith was commissioned to create a custom barrel to fit each caliber tested. These barrels fit into a single housing and can be swapped out easily, so the team can remove them for cutting. Each brand of ammunition in each caliber/cartridge was tested at 18". Then, the barrel was removed, 1" was cut off the end of the barrel, and the cut end was dressed. Once all the cartridges had been tested at 18" and the barrels were cut and ready, each brand was tested again at 17" and so on, right down to a 2" barrel. Three rounds of each brand of ammunition were fired at each barrel length and the velocity of the bullet was recorded as it passed over two commercial chronographs 15 feet away. The resulting six numbers were averaged and the average plotted on a graph for each brand and caliber of ammunition so that the trend in the velocity could be readily seen. As of 2012, tests have been run on 21 different calibers/cartridges.
Cylinder gap tests
The two most common forms of handgun are the revolver and the semi-automatic. The semi-auto contains a number of cartridges in a magazine, usually in the grip of the gun, which are fed one at a time into the chamber for firing. The revolver is an older type, familiar to most as the movie-cowboy's "six-shooter." Because the cylinder of a revolver (where the cartridges are held) must rotate freely, there is a slight gap between the front of the cylinder and the barrel. When a round is fired, the velocity of the bullet is determined by the amount of push it is given by the gunpowder in the cartridge as the bullet leaves the gun. For a long time people have wondered how much energy is lost because it escapes through the gap rather than pushing the bullet through the barrel. So in 2011 the BBTI team devised a way to test how that gap affects the velocity of the bullet. In this case they tested a wide variety of ammunition available in .38 and .357 magnum using a single revolver modified to have a long barrel (so that they could cut it down an inch at a time, as they did with the barrel length tests) and also modified to allow them to change the gap using a set of shims. In this way they were able to test with a fairly standard cylinder gap of six one-thousandths of an inch (0.006"), with a gap of one one-thousandth of an inch (0.001") and with no gap at all. Each of these gaps was tested over the range of ammunition at barrel lengths from 18" down to 2". Because they thought the difference would be subtle, they shot ten rounds for each data point to get as accurate a result as possible. As with the barrel length tests, the results at each point were averaged and a graph created that compared the trend for each of the three gaps over the different barrel lengths for each ammunition.
Real world comparisons
Because they were using an idealized platform, the BBTI team decided to test a variety of actual weapons using the same ammunition used for the other tests. This would allow a comparison of how a "real world" weapon would perform related to the results of the other tests. A handgun in a given caliber with a 6" barrel could then be compared to the same caliber in the other tests to see how closely it might match. To date (2012), the team has tested 100 different real-world firearms with barrels from 24" to 1" in length.
BBTI team
Ballistics by the Inch started as discussions between two friends about ballistic characteristics and where they could find hard data to answer their questions. When it was clear that the data was not easily available, they decided to do the tests themselves and enlisted another friend for the first round. These three men — James Kasper, James Downey, and Steve Meyer — spent most of a week out at the testing site each day shooting the rounds, noting the data, and cutting and dressing the barrels to build up the data for the tests. Each brought their own area of expertise and experience to the project. For the second set of tests in 2009, an additional team member was added — Keith Kimball — who brought another area of expertise and experience into the group. These four have done all the testing available on the BBTI site.
Popular impact
Almost as soon as the BBTI site was launched, it was being cited in gun forums and publications. The website had more than 300,000 hits in the first month and in the first year, more than 1.5 million hits. As of February 2012 the site has had more than eight million hits. Because the team has made all the data freely available to anyone who wants to look at it, their data has become the standard cited when discussing the merits of a specific ammunition or barrel length. The April 2009 edition of Concealed Carry Magazine, the members' publication of the US Concealed Carry Association, ran a feature article on the project. Concealed Carry Magazine carried a follow-up feature in November 2010, discussing the tests of the .380 ACP done that year. Numerous firearms forums and podcasts have done stories or entire programs about BBTI, and American Handgunner ran a piece on BBTI in January 2011. The December 2011 re-launch of the expanded BBTI website/data was profiled by Guns.com in a feature article.
References
External links
Ballistics | Ballistics by the Inch | [
"Physics"
] | 1,386 | [
"Applied and interdisciplinary physics",
"Ballistics"
] |
34,713,249 | https://en.wikipedia.org/wiki/Forensic%20geophysics | Forensic geophysics is a branch of forensic science comprising the study, search, localization and mapping of buried objects or materials beneath the soil or water, using geophysical tools for legal purposes. There are various geophysical techniques for forensic investigations in which the targets are buried and have different dimensions (from weapons or metallic barrels to human burials and bunkers). Geophysical methods have the potential to aid the search and recovery of these targets because they can non-destructively and rapidly investigate large areas where a suspect, illegal burial or, in general, a forensic target is hidden in the subsoil. When there is a contrast of physical properties in the subsurface between a target and the material in which it is buried, it is possible to locate and precisely delineate the place where the target is concealed. It is also possible to recognize evidence of human soil occupation or excavation, both recent and old. Forensic geophysics is an evolving technique that is gaining popularity and prestige in law enforcement.
Searched for objects obviously include clandestine graves of murder victims, but also include unmarked burials in graveyards and cemeteries, weapons used in criminal activities and environmental crime illegally dumping material.
There are various near-surface geophysical techniques that can be utilised to detect a near-surface buried object, which should be site and case-specific. A thorough desk study (including historical maps), utility survey, site reconnaissance and control studies should be undertaken before trial geophysical surveys and then full geophysical surveys are undertaken in phased investigations. Note also other search techniques should be used to first to prioritise suspect areas, for example cadaver dogs or forensic geomorphologists.
Techniques
For large-scale buried objects, seismic surveys may be appropriate, but these have, at best, 2 m vertical resolution and so may not be ideal for certain targets; more typically they are used to detect bedrock below the surface.
For relatively quick site surveys, bulk ground electrical conductivity surveys can be collected, which identify areas of disturbed ground, but these can suffer from a lack of resolution. A recent Black Death investigation in central London provides an example, as does a successful search for a cold-case burial in woodland in New Zealand.
Ground-penetrating radar (or GPR) has a typical maximum depth below ground level (bgl) of 10 m, depending upon the antenna frequencies used, typically 50 MHz to 1.2 GHz. The higher the frequency, the smaller the object that can be resolved, but penetration depth also decreases, so operators need to think carefully when choosing antenna frequencies and, ideally, undertake trial surveys using different antennae over a target at a known depth onsite. GPR is the most popularly used technique in forensic search, but is not suitable in certain soil types and environments, e.g. coastal (i.e. salt-rich) and clay-rich soils (lack of penetration). 2D profiles can be relatively quickly collected and, if time permits, successive profiles can be used to generate 3D datasets which may resolve more subtle targets. Recent studies have used GPR to locate mass graves from the Spanish Civil War in mountainous and urban environments.
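As a rough aid to choosing an antenna, the usual quarter-wavelength rule of thumb for vertical resolution can be computed directly (a first-order estimate only, not from the cited studies):

```python
# Rough GPR wavelength / resolution estimate (quarter-wavelength rule of thumb).
C = 3.0e8  # speed of light in vacuum, m/s

def gpr_resolution_m(freq_hz: float, rel_permittivity: float) -> float:
    velocity = C / rel_permittivity ** 0.5   # wave speed in the ground
    wavelength = velocity / freq_hz
    return wavelength / 4.0                  # approximate vertical resolution

# e.g. a 250 MHz antenna in moist soil (relative permittivity ~ 16):
print(gpr_resolution_m(250e6, 16.0))  # ~0.075 m
```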
Electrical resistivity methods can also detect objects, especially in clay-rich soil, which would preclude the use of GPR. There are different equipment configurations; the dipole-dipole (fixed-offset) method is the most common, traversing across an area and measuring resistivity variations at a set depth (typically 1-2x the probe separation), and it has been used in forensic searches. Slower methods deploy many probes and collect data both horizontally and vertically, called electrical resistivity imaging (ERI); combining multiple 2D profiles is termed electrical resistivity tomography (ERT).
Magnetometry can detect buried metal (or indeed fired objects such as bricks or even where surface fires were) using simple total field magnetometers, through to fluxgate gradiometers and high-end alkali vapour gradiometers, depending upon accuracy (and cost) required. Surface magnetic susceptibility has also shown recent promise for forensic search.
Water-based searches are also becoming more common, with specialist marine magnetometers, side-scan sonar and other acoustic methods and even water-penetrating radar methods used to rapidly scan bottoms of ponds, lakes, rivers and near-shore depositional environments.
Controlled research
There have been recent efforts to undertake research over known buried and below-water simulated forensic targets in order to gain insight into the optimum search technique(s) and/or equipment configuration(s). Most commonly, this has involved the burial of porcine cadavers and long-term monitoring of soilwater, seasonal effects on electrical resistivity surveys, burials in walls and beneath concrete, and long-term monitoring in the UK, the US and Latin America. Finally, there have been surveys in graveyards over graves of known ages to determine the responses of multiple geophysical techniques with increasing burial age.
See also
Forensic geology
References
Further reading
Dupras, D., Schultz, J., Wheeler, S. & Williams, L. 2006. Forensic Recovery of Human Remains: Archaeological Approaches. Taylor & Francis Group Publishers, Boca Raton, Florida, USA, 232pp.
Milsom, J. & Eriksen, A. 2011. Field geophysics. Geological Field Guide Series, 4th Edition, Wiley, Chichester, UK, 244pp. .
Geophysics
Geophysics | Forensic geophysics | [
"Physics"
] | 1,110 | [
"Applied and interdisciplinary physics",
"Geophysics"
] |
34,715,801 | https://en.wikipedia.org/wiki/Medal%20%22For%20Merit%20in%20Space%20Exploration%22 | The Medal "For Merit in Space Exploration" () is a state decoration of the Russian Federation aimed at recognising achievements in the space program. It was established by presidential decree №1099 of September 7, 2010 which revamped the entire Russian awards system.
Award statute
The Medal "For Merit in Space Exploration" is awarded to citizens of the Russian Federation for achievements in research, development and utilization of outer space, for a substantial contribution to the development of rocket and space technology and industry, training, research and design activities, for the implementation of international programs, as well as for other achievements in the field of space activities aimed at the comprehensive socio-economic development of the Russian Federation, at strengthening its defense and ensuring national interests, for encouraging and increasing international cooperation.
The Medal "For Merit in Space Exploration" may be awarded to foreign citizens for outstanding achievements in the development of space technology in the Russian Federation.
The Russian Federation Order of Precedence dictates the Medal "For Merit in Space Exploration" is to be worn on the left breast with other medals immediately after the Medal "For Merit in the Development of Nuclear Energy".
Award description
The Medal "For Merit in Space Exploration" is a 32mm diameter circular silver medal with a raised rim on both the obverse and reverse. The obverse of the medal bears an R-7 rocket launching from its pad, two supporting towers leaning at an angle away from the rocket, to the left of the rocket, a large four pointed star, to the right of the rocket, two smaller four pointed stars. The reverse of the medal bears the inscription "FOR MERIT IN SPACE EXPLORATION" (). Below the inscription, a relief letter "N" with an horizontal line reserved for the award serial number.
The medal is secured to a standard Russian pentagonal mount by a ring through the medal suspension loop. The mount is covered by an overlapping 24mm wide silk moiré light blue ribbon with a 5mm dark blue central stripe, two white 2mm edge stripes and two 2mm white stripes separating the dark from the light blue.
Award recipients
The following individuals are recipients of the Medal "For Merit in Space Exploration":
Additional recipients
Vladimir Aksyonov
Aleksandr Pavlovich Aleksandrov
Anatoly Artsebarsky
Oleg Atkov
Toktar Aubakirov
Sergey Avdeev
Aleksandr Nikolayevich Balandin
Boris Chertok
Vladimir Dzhanibekov
Muhammed Faris
Anatoly Filipchenko
Maidarjavyn Ganzorig
Yuri Gidzenko
Georgy Grechko
Jügderdemidiin Gürragchaa
Claudie Haigneré
Georgi Ivanov (cosmonaut)
Pyotr Klimuk
Vladimir Kovalyonok
Sergei Krikalev
Valery Kubasov
Aleksandr Laveykin
Valentin Lebedev
Vladimir Lyakhov
Gennady Manakov
Musa Manarov
Phạm Tuân
Dumitru Prunariu
Yuri Romanenko
Valery Rozhdestvensky
Valery Ryumin
Viktor Savinykh
Svetlana Savitskaya
Aleksandr Serebrov
Rakesh Sharma
Anatoly Solovyev
Vladimir Solovyov (cosmonaut)
Arnaldo Tamayo Méndez
Daniel M. Tani
Vladimir Titov (cosmonaut)
Aleksandr Viktorenko
Igor Volk
Aleksandr Aleksandrovich Volkov
Peggy Whitson
Vitaly Zholobov
Vyacheslav Zudov
See also
Awards and decorations of the Russian Federation
Russian Federal Space Agency
References
External links
The Commission on State Awards to the President of the Russian Federation
Military awards and decorations of Russia
Civil awards and decorations of Russia
Russian awards
Awards established in 2010
Space-related awards
Space program of Russia | Medal "For Merit in Space Exploration" | [
"Technology"
] | 745 | [
"Science and technology awards",
"Space-related awards"
] |
34,719,054 | https://en.wikipedia.org/wiki/Immunoradiometric%20assay | Immunoradiometric assay (IRMA) is an assay that uses radiolabeled antibodies. It differs from conventional radioimmunoassay (RIA) in that the compound to be measured combines immediately with the radiolabeled antibodies, rather than displacing another antigen by degrees over some period.
Principle: a noncompetitive assay in which the analyte to be measured is sandwiched between two antibodies.
Introduction
Fluorescent and radioactive antibodies have been used to locate or measure solid-phase antigens for many years. However, only recently has the labeled antibody been applied to the measurement of antigen in a sample. The method converts the unknown antigen into a traceable radioactive product. Immunoradiometric assay (IRMA) was first introduced by Miles and Hales in 1968, who proposed certain theoretical advantages of the method with regard to improving the sensitivity and precision of immunoassays.
Principle
In IRMA, the antibodies are labeled with radioisotopes which are used to bind antigens present in the specimen. When a positive sample is added to the tubes, radioactively labeled antibodies (labeled with 125I or 131I radioisotopes) bind to the free epitopes of the antigens and form an antigen-antibody complex. Unbound labeled antibodies are removed by a second reaction with a solid-phase antigen. The amount of radioactivity remaining in the solution is a direct function of the antigen concentration.
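Since the retained activity grows with antigen concentration, unknown samples are read off a standard curve. A minimal sketch with made-up numbers (hypothetical counts and concentrations, linear interpolation only):

```python
# Hypothetical IRMA standard curve: counts per minute (CPM) vs. antigen
# concentration, with linear interpolation to read off an unknown sample.
import numpy as np

standards_ng_ml = np.array([0.0, 1.0, 5.0, 10.0, 50.0])      # known concentrations
standards_cpm = np.array([120., 900., 4100., 8000., 38000.]) # measured activity

def read_concentration(sample_cpm: float) -> float:
    # In IRMA the signal increases with concentration, so interpolation
    # can run directly on the (cpm -> concentration) mapping.
    return float(np.interp(sample_cpm, standards_cpm, standards_ng_ml))

print(read_concentration(6000.0))  # ~7.4 ng/mL with these made-up numbers
```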
References
Immunologic tests | Immunoradiometric assay | [
"Biology"
] | 301 | [
"Immunologic tests"
] |
35,874,308 | https://en.wikipedia.org/wiki/Omiloxetine | Omiloxetine (omiloxetinum, omiloxetino INN) was a selective serotonin reuptake inhibitor drug candidate that underwent preclinical development by the Spanish pharmaceutical company Ferrer Internacional until 2005, when it was abandoned.
Rafael Foguet also patented Abaperidone.
References
4-Phenylpiperidines
Benzodioxoles
Ketones
4-Fluorophenyl compounds
Phenethylamines
Selective serotonin reuptake inhibitors
Abandoned drugs | Omiloxetine | [
"Chemistry"
] | 106 | [
"Ketones",
"Functional groups",
"Drug safety",
"Abandoned drugs"
] |
35,874,626 | https://en.wikipedia.org/wiki/Tofenacin | Tofenacin is an antidepressant drug with a tricyclic-like structure which was developed and marketed in the United Kingdom and Italy in 1971 and 1981, respectively, by Brocades-Stheeman & Pharmacia (now part of Astellas Pharma). It acts as a serotonin-norepinephrine reuptake inhibitor, and based on its close relation to orphenadrine, may also possess anticholinergic and antihistamine properties. Tofenacin is also the major active metabolite of orphenadrine and likely plays a role in its beneficial effects against depressive symptoms seen in Parkinson's disease patients.
See also
Clemastine
Orphenadrine
Tiazesim
References
Abandoned drugs
Amines
Astellas Pharma
Antidepressants
Ethers
Muscarinic antagonists
Serotonin–norepinephrine reuptake inhibitors
2-Tolyl compounds | Tofenacin | [
"Chemistry"
] | 205 | [
"Functional groups",
"Drug safety",
"Organic compounds",
"Ethers",
"Amines",
"Bases (chemistry)",
"Abandoned drugs"
] |
35,875,786 | https://en.wikipedia.org/wiki/Dewar%20reactivity%20number | In Hückel theory, a Dewar reactivity number, also known as Dewar number, is a measure of the reactivity in aromatic systems. It is used to quantify the difference in energy between the π-system of the original molecule and the intermediate having the incoming electrophile or nucleophile attached. It can be used to study important transformations such as the nitration of conjugated systems from a theoretical perspective.
Computation
The change in energy during the reaction can be derived by allowing the orbitals near the site i of attack to interact with the incoming molecule. A secular determinant can be formulated, resulting in the equation:

δE_i = 2(a_r + a_s)β

where β is the Hückel interaction parameter and a_r and a_s are the coefficients of the highest-energy molecular orbital at the nearby sites r and s, respectively. Dewar's reactivity number is then defined as N_i = 2(a_r + a_s), so that δE_i = N_i β.
Clearly, the smaller the value of N_i, the smaller the destabilization energy on going towards the transition state, and the more reactive the site. Thus, by computing the molecular orbital coefficients it is possible to evaluate Dewar's number for all the sites and establish which one will be the most reactive. This has been shown to correlate well with experimental results.
The method is particularly efficient for alternant hydrocarbons in which the coefficients of the non-bonding orbitals involved are very easy to calculate.
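As a standard worked example (added for illustration, using the definition above): electrophilic attack at any carbon of benzene leaves a pentadienyl fragment whose non-bonding MO has coefficients 1/√3 at the two positions adjacent to the attacked atom, so

```latex
% Dewar number for attack at any carbon of benzene:
% the residual pentadienyl NBMO has a_r = a_s = 1/\sqrt{3}, hence
N_i = 2\,(a_r + a_s)
    = 2\left(\frac{1}{\sqrt{3}} + \frac{1}{\sqrt{3}}\right)
    = \frac{4}{\sqrt{3}} \approx 2.31
```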
References
Molecular physics
Theoretical chemistry | Dewar reactivity number | [
"Physics",
"Chemistry"
] | 286 | [
"Molecular physics",
"Theoretical chemistry stubs",
"Theoretical chemistry",
" molecular",
"nan",
"Atomic",
"Molecular physics stubs",
" and optical physics"
] |
29,154,002 | https://en.wikipedia.org/wiki/Lead%28II%29%20phosphate | Lead(II) phosphate is an ionic compound with chemical formula Pb3(PO4)2. Lead(II) phosphate is a long-lived electronically neutral reagent chemical. Despite limited tests on humans, it has been identified as a carcinogen based on tests on animals conducted by the EPA. Lead(II) phosphate appears as hexagonal, colorless crystals or as a white powder. Lead(II) phosphate is insoluble in water and alcohol but soluble in nitric acid (HNO3) and fused alkali metal hydroxides. When heated to decomposition, lead(II) phosphate emits very toxic fumes containing lead (Pb) and phosphorus oxides (POx).
Preparation
It is prepared by reacting lead(II) hydroxide with orthophosphoric acid.
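The balanced equation for this preparation (simple stoichiometry of the stated reaction) is:

3 Pb(OH)2 + 2 H3PO4 → Pb3(PO4)2 + 6 H2O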
References
Lead(II) compounds
Phosphates | Lead(II) phosphate | [
"Chemistry"
] | 175 | [
"Salts",
"Phosphates",
"Inorganic compounds",
"Inorganic compound stubs"
] |
29,161,906 | https://en.wikipedia.org/wiki/International%20Society%20for%20Structural%20and%20Multidisciplinary%20Optimization | The International Society for Structural and Multidisciplinary Optimization is a learned society in the field of multidisciplinary design optimization that was founded in October 1991. It has more than 1000 members in 45 countries. The current president is Wei Chen (Northwestern University). The society is an affiliated organization of the International Union of Theoretical and Applied Mechanics (IUTAM).
Objectives
The objectives are:
to stimulate and promote research into all aspects of the optimal design of structures and related topics, including engineering systems consisting partially of structures and/or fluids;
to encourage practical applications of optimization methods and the corresponding software development in all branches of technology;
to foster the interchange of ideas amongst various fields contributing to structural and multidisciplinary optimization;
to support the role of optimization in multidisciplinary design;
to provide a framework for the organization of meetings and other means for the dissemination of knowledge on structural optimization; and
to promote teaching of structural optimization in tertiary institutions.
The society works towards these objectives by organizing a biennial "World Congress of Structural and Multidisciplinary Optimization" and publishing an official journal, Structural and Multidisciplinary Optimization, in collaboration with Springer Science+Business Media.
Past Presidents
1991-1995: George Rozvany (Founder President)
1995-1999: Raphael Haftka
1999-2003: Niels Olhoff
2003-2007: Martin P. Bendsøe
2007-2011: KK Choi
2011-2015: Ole Sigmund
2015-2019: Cheng Gengdong
References
External links
Organizations established in 1991
International learned societies
Mathematical optimization | International Society for Structural and Multidisciplinary Optimization | [
"Mathematics"
] | 320 | [
"Mathematical optimization",
"Mathematical analysis"
] |
23,083,731 | https://en.wikipedia.org/wiki/Subatomic%20scale | The subatomic scale is the domain of physical size that encompasses objects smaller than an atom. It is the scale at which the atomic constituents, such as the nucleus containing protons and neutrons, and the electrons in their orbitals, become apparent.
The subatomic scale includes the many thousands of times smaller subnuclear scale, which is the scale of physical size at which constituents of the protons and neutrons—particularly quarks—become apparent.
See also
Astronomical scale – the opposite end of the spectrum
Subatomic particles | Subatomic scale | [
"Physics"
] | 111 | [
"Subatomic particles",
"Particle physics",
"Nuclear physics",
"Particle physics stubs",
"Atoms",
"Matter"
] |
23,092,863 | https://en.wikipedia.org/wiki/Fractional%20anisotropy | Fractional anisotropy (FA) is a scalar value between zero and one that describes the degree of anisotropy of a diffusion process. A value of zero means that diffusion is isotropic, i.e. it is unrestricted (or equally restricted) in all directions. A value of one means that diffusion occurs only along one axis and is fully restricted along all other directions. FA is a measure often used in diffusion imaging where it is thought to reflect fiber density, axonal diameter, and myelination in white matter. The FA is an extension of the concept of eccentricity of conic sections in 3 dimensions, normalized to the unit range.
Definition
A Diffusion Ellipsoid is completely represented by the Diffusion Tensor, D. FA is calculated from the eigenvalues of the diffusion tensor. The eigenvectors give the directions in which the ellipsoid has major axes, and the corresponding eigenvalues give the magnitude of the peak in each eigenvector direction.
FA = sqrt(3/2) · sqrt( ((λ1 − λ̂)² + (λ2 − λ̂)² + (λ3 − λ̂)²) / (λ1² + λ2² + λ3²) ),

with λ̂ = (λ1 + λ2 + λ3)/3 being the mean value of the eigenvalues.
An equivalent formula for FA is

FA = sqrt(1/2) · sqrt( ((λ1 − λ2)² + (λ2 − λ3)² + (λ3 − λ1)²) / (λ1² + λ2² + λ3²) ),

which is further equivalent to

FA = sqrt(3/2) · ||R − (1/3)I|| / ||R||,

where R is the "normalized" diffusion tensor R = D / trace(D) and ||·|| denotes the Frobenius norm.
Note that if all the eigenvalues are equal, which happens for isotropic (spherical) diffusion, as in free water, the FA is 0. The FA can reach a maximum value of 1 (this rarely happens in real data), in which case D has only one nonzero eigenvalue and the ellipsoid reduces to a line in the direction of that eigenvector. This means that the diffusion is confined to that direction alone.
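The definition is easy to check numerically; a minimal NumPy sketch (not from any imaging package):

```python
# Compute fractional anisotropy from the eigenvalues of a diffusion tensor.
import numpy as np

def fractional_anisotropy(eigenvalues) -> float:
    lam = np.asarray(eigenvalues, dtype=float)
    mean = lam.mean()
    num = np.sum((lam - mean) ** 2)
    den = np.sum(lam ** 2)
    return float(np.sqrt(1.5 * num / den))

print(fractional_anisotropy([1.0, 1.0, 1.0]))  # 0.0  (isotropic, free water)
print(fractional_anisotropy([1.0, 0.0, 0.0]))  # 1.0  (diffusion along one axis)
print(fractional_anisotropy([1.7, 0.3, 0.2]))  # ~0.84, strongly anisotropic
```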
Details
This can be visualized with an ellipsoid, which is defined by the eigenvectors and eigenvalues of D. The FA of a sphere is 0 since the diffusion is isotropic, and there is equal probability of diffusion in all directions. The eigenvectors and eigenvalues of the Diffusion Tensor give a complete representation of the diffusion process. FA quantifies the pointedness of the ellipsoid, but does not give information about which direction it is pointing to.
Note that the FA of most liquids, including water, is 0 unless the diffusion process is being constrained by structures such as a network of fibers. The measured FA may depend on the effective length scale of the diffusion measurement. If the diffusion process is not constrained on the scale being measured (the constraints are too far apart) or the constraints switch direction on a smaller scale than the measured one, then the measured FA will be attenuated. For example, the brain can be thought of as a fluid permeated by many fibers (nerve axons). However, in most parts the fibers go in all directions, and thus although they constrain the diffusion the FA is close to 0. In some regions, such as the corpus callosum, the fibers are aligned over a large enough scale (on the order of a mm) for their directions to mostly agree within the resolution element of a magnetic resonance image, and it is these regions that stand out in an FA image. Liquid crystals can also exhibit anisotropic diffusion because the needle- or plate-like shapes of their molecules affect how they slide over one another. When the FA is 0 the tensor nature of D is often ignored, and it is called the diffusion constant.
One drawback of the Diffusion Tensor model is that it can account only for Gaussian diffusion processes, which has been found to be inadequate in accurately representing the true diffusion process in the human brain. Due to this, higher order models using spherical harmonics and Orientation Distribution Functions (ODF) have been used to define newer and richer estimates of the anisotropy, called Generalized Fractional Anisotropy. GFA computations use samples of the ODF to evaluate the anisotropy in diffusion. They can also be easily calculated by using the Spherical Harmonic coefficients of the ODF model.
References
Transport phenomena
Diffusion
Imaging
Tensors
Neuroimaging
Medical imaging
Magnetic resonance imaging | Fractional anisotropy | [
"Physics",
"Chemistry",
"Engineering"
] | 844 | [
"Transport phenomena",
"Physical phenomena",
"Tensors",
"Diffusion",
"Nuclear magnetic resonance",
"Magnetic resonance imaging",
"Chemical engineering"
] |
27,861,521 | https://en.wikipedia.org/wiki/Self-sustainability | Self-sustainability and self-sufficiency are overlapping states of being in which a person, being, or system needs little or no help from, or interaction with, others. Self-sufficiency entails the self being enough (to fulfill needs), and a self-sustaining entity can maintain self-sufficiency indefinitely. These states represent types of personal or collective autonomy. A self-sufficient economy is one that requires little or no trade with the outside world and is called an autarky.
Description
Self-sustainability is a type of sustainable living in which nothing is consumed other than what is produced by the self-sufficient individuals. Examples of attempts at self-sufficiency in North America include simple living, food storage, homesteading, off-the-grid, survivalism, DIY ethic, and the back-to-the-land movement.
Practices that enable or aid self-sustainability include autonomous building, permaculture, sustainable agriculture, and renewable energy. The term is also applied to limited forms of self-sustainability, for example growing one's own food or becoming economically independent of state subsidies. The self-sustainability of an electrical installation measures its degree of grid independence and is defined as the ratio between the amount of locally produced energy that is locally consumed, either directly or after storage, and the total consumption.
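For the electrical case, that ratio is straightforward to compute. A minimal sketch with hypothetical hourly figures (names and numbers are illustrative only; storage losses are ignored):

```python
# Self-sustainability of an electrical installation: locally produced energy
# that is consumed locally (directly or via storage) over total consumption.
produced = [0.0, 2.5, 4.0, 3.0, 0.5]      # hourly local production, kWh
consumed = [1.0, 2.0, 3.0, 4.0, 2.0]      # hourly total consumption, kWh

battery = 0.0
self_consumed = 0.0
for p, c in zip(produced, consumed):
    direct = min(p, c)                     # production used immediately
    battery += p - direct                  # surplus goes to storage
    from_storage = min(battery, c - direct)
    battery -= from_storage
    self_consumed += direct + from_storage

print(self_consumed / sum(consumed))       # fraction of demand covered locally
```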
A system is self-sustaining (or self-sufficient) if it can maintain itself by independent effort. The system self-sustainability is:
the degree at which the system can sustain itself without external support
the fraction of time in which the system is self-sustaining
Self-sustainability is considered one of the "ilities" and is closely related to sustainability and availability. In the economics literature, a system that has the quality of being self-sustaining is also referred to as an autarky.
Examples
Political states
Autarky exists whenever an entity can survive or continue its activities without external assistance. Autarky is not necessarily economic. For example, a military autarky would be a state that could defend itself without help from another country.
Labor
According to the Idaho Department of Labor, an employed adult shall be considered self-sufficient if the family income exceeds 200% of the Office of Management and Budget poverty income level guidelines.
Peer-to-peer swarming
In peer-to-peer swarming systems, a swarm is self-sustaining if all the blocks of its files are available among peers (excluding seeds and publishers).
Discussion
Self-sustainability and survivability
Whereas self-sustainability is a quality of one's independence, survivability applies to the future maintainability of one's self-sustainability and indeed one's existence. Many believe that more self-sustainability guarantees a higher degree of survivability. However, just as many oppose this, arguing that it is not self-sustainability that is essential for survivability, but on the contrary specialization and thus dependence.
Consider the first two examples presented above. Among countries, commercial treaties are as important as self-sustainability. An autarky is usually inefficient. Among people, social ties have been shown to be correlated to happiness and success as much as self-sustainability.
See also
Autarchism
Cottagecore
Eating your own dog food
Five Acres and Independence
Food sovereignty
Homesteading
Individualism
Juche
List of system quality attributes
Localism
Rugged individualism
Self-help
Tiny house movement
Vegetable farming
Notes and references
External links
Foundation for Self-Sufficiency in Central America
"Self-sustainability strategies for Development Initiatives: What is self-sustainability and why is it so important?"
Applied probability | Self-sustainability | [
"Mathematics"
] | 735 | [
"Applied mathematics",
"Applied probability"
] |
27,861,937 | https://en.wikipedia.org/wiki/Pivalonitrile | Pivalonitrile is a nitrile with the semi-structural formula (CH3)3CCN, abbreviated t-BuCN. This aliphatic organic compound is a clear, colourless liquid that is used as a solvent and as a labile ligand in coordination chemistry. Pivalonitrile is isomeric with tert-butyl isocyanide but the two compounds do not exist in chemical equilibrium, unlike its silicon analog trimethylsilyl cyanide.
References
Tert-butyl compounds | Pivalonitrile | [
"Chemistry"
] | 112 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
27,862,538 | https://en.wikipedia.org/wiki/Freyd%20cover | In the mathematical discipline of category theory, the Freyd cover or scone category is a construction that yields a set-like construction out of a given category. The only requirement is that the original category has a terminal object. The scone category inherits almost any categorical construct the original category has. Scones can be used to generally describe proofs that use logical relations.
The Freyd cover is named after Peter Freyd. The other name, "scone", is intended to suggest that it is like a cone, but with the Sierpiński space in place of the unit interval.
Definition
Formally, the scone of a category C with a terminal object 1 is the comma category (Set ↓ Γ), where Γ = Hom_C(1, −) : C → Set is the global sections functor.
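Unpacking the comma category (a routine reading of the definition, added for illustration): an object of the scone pairs a set with an object of C, tracked by a function into its global elements:

```latex
% Objects and morphisms of the scone of C (with \Gamma = \mathrm{Hom}_C(1,-)):
\text{objects:}\quad (X, A, f),\qquad X \in \mathbf{Set},\quad A \in C,\quad
f \colon X \longrightarrow \mathrm{Hom}_C(1, A);
\qquad
\text{morphisms:}\quad (u, v) \colon (X, A, f) \to (Y, B, g)
\quad\text{with}\quad \mathrm{Hom}_C(1, v) \circ f = g \circ u .
```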
See also
Artin gluing
Notes
References
Further reading
Category theory | Freyd cover | [
"Mathematics"
] | 158 | [
"Functions and mappings",
"Mathematical structures",
"Category theory stubs",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations",
"Category theory"
] |
27,863,637 | https://en.wikipedia.org/wiki/Shelf%20angle | In masonry veneer building construction, a shelf angle or masonry support is a steel angle which supports the weight of brick or stone veneer and transfers that weight onto the main structure of the building so that a gap or space can be created beneath to allow building movements to occur.
Background
Traditional masonry buildings had thick Load-bearing walls that supported the weight of the building. Openings in these load bearing walls such as doors and windows were typically small and spanned by steel lintels or masonry arches.
Modern buildings
The invention of skeleton frame buildings made it possible to reduce the thickness of the walls and have wide openings such as ribbon windows extending across most or all of the building facade. In these buildings, brick, stone, or other masonry cladding is often just a single wythe of material called a veneer since it is non-loadbearing. The only way to support the weight of this veneer across a wide opening is by providing a shelf angle on which the masonry bears. The shelf angle, in turn, is attached to major elements of the building structure such as floor beams or structural columns. Shelf angles are in reality horizontal expansion joints, allowing growth of the brick below the shelf angle and movement or shrinkage of the frame without putting stress on the brick veneer. In the United States, common sizes for steel shelf angles include L 3" x 3" x 1/4" and L 4" x 4" x 1/4". In the UK and Europe, shelf angles / masonry supports are predominantly manufactured in stainless steel to prevent corrosion and failure; these are bespoke to the building's frame and engineered to take the required loads.
References
See also
Masonry
Building materials
Building engineering | Shelf angle | [
"Physics",
"Engineering"
] | 345 | [
"Masonry",
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Civil engineering",
"Matter",
"Architecture"
] |
27,864,034 | https://en.wikipedia.org/wiki/Fuzzy-trace%20theory | Fuzzy-trace theory (FTT) is a theory of cognition originally proposed by Valerie F. Reyna and Charles Brainerd to explain cognitive phenomena, particularly in memory and reasoning.
FTT posits two types of memory processes (verbatim and gist) and, therefore, it is often referred to as a dual process theory of memory. According to FTT, retrieval of verbatim traces (recollective retrieval) is characterized by mental reinstatement of the contextual features of a past event, whereas retrieval of gist traces (nonrecollective retrieval) is not. In fact, gist processes form representations of an event's semantic features rather than its surface details, the latter being a property of verbatim processes.
The theory has been used in areas such as cognitive psychology, human development, and social psychology to explain, for instance, false memory and its development, probability judgments, medical decision making, risk perception and estimation, and biases and fallacies in decision making.
FTT can explain phenomena involving both true memories (i.e., memories about events that actually happened) as well as false memories (i.e., memories about events that never happened).
History
FTT was initially proposed in the 1990s as an attempt to unify findings from the memory and reasoning domains that could not be predicted or explained by earlier approaches to cognition and its development (e.g., constructivism and information processing). One of such challenges was the statistical independence between memory and reasoning, that is, memory for background facts of problem situations is often unrelated to accuracy in reasoning tasks. Such findings called for a rethinking of the memory-reasoning relation, which in FTT took the form of a dual-process theory linking basic concepts from psycholinguistic and Gestalt theory to memory and reasoning. More specifically, FTT posits that people form two types of mental representations about a past event, called verbatim and gist traces. Gist traces are fuzzy representations of a past event (e.g., its bottom-line meaning), hence the name fuzzy-trace theory, whereas verbatim traces are detailed representations of a past event. Although people are capable of processing both verbatim and gist information, they prefer to reason with gist traces rather than verbatim. This implies, for example, that even if people are capable of understanding ratio concepts like probabilities and prevalence rates, which are the standard for the presentation of health- and risk-related data, their choice in decision situations will usually be governed by the bottom-line meaning of it (e.g., "the risk is high" or "the risk is low"; "the outcome is bad" or "the outcome is good") rather than the actual numbers. More importantly, in FTT, memory-reasoning independence can be explained in terms of preferred modes of processing when one performs a memory task (e.g., retrieval of verbatim traces) relative to when one performs a reasoning task (e.g., preference for reasoning with gist traces).
In 1999, a similar approach was applied to human vision. It suggested that human vision has two types of processing: one that aggregates local spatial receptive fields, and one that parses the local receptive field. People used prior experience, gists, to decide which process dominates a perceptual decision. The work attempted to link Gestalt theory and psychophysics (i.e., independent linear filters). This theory was further developed into fuzzy image processing and used in information processing technology and edge detection.
Memory
In the memory domain, FTT's notion of verbatim and gist representations has been influential in explaining true memories (i.e., memories about events that actually happened) as well as false memories (i.e., memories about events that never happened). The following five principles have been used to predict and explain true and false memory phenomena:
Principles
Process independence
Parallel storage
The principle of parallel storage asserts that the encoding and storage of verbatim and gist information operate in parallel rather than in a serial fashion. For instance, suppose that a person is presented with the word "apple" in red color. On the one hand, according to the principle of parallel storage of verbatim and gist traces, verbatim features of the target item (e.g., the word was apple, it was presented in red, printed in boldface and italic, and all but the first letter were presented in lowercase) and gist features (e.g., the word was a type of fruit) would be encoded and stored simultaneously via distinct pathways. Conversely, if verbatim and gist traces are stored in a serial fashion, then gist features of the target item (the word was a type of fruit) would be derived from its verbatim features and, therefore, the formation of gist traces would depend on the encoding and storage of verbatim traces. The latter idea was often assumed by early memory models. However, despite the intuitive appeal of the serial processing approach, research suggests that the encoding and storage of gist traces do not depend on verbatim ones. Several studies have converged on the finding that the meaning of target items is encoded independently of, and even prior to, the encoding of the surface form of the same items. Ankrum and Palmer, for example, found that when participants are presented with a familiar word (e.g., apple) for a very brief period (100 milliseconds), they are able to identify the word itself ("was it apple?") better than its letters ("did it contain the letter L?").
Dissociated retrieval
Similar to the principle of parallel storage, retrieval of verbatim and gist traces also occurs via dissociated pathways. According to the principle of dissociated retrieval, recollective and nonrecollective retrieval processes are independent of each other. Consequently, this principle allows verbatim and gist processes to be differentially influenced by factors such as the type of retrieval cues and the availability of each form of representation. In connection with Tulving's encoding specificity principle, items that were actually presented in the past are better cues for verbatim traces than items that were not. Similarly, items that were not presented in the past but preserve the meaning of presented items are usually better cues for gist traces. Suppose, for example, that subjects of an experiment are presented with a word list containing several dog breeds, such as poodle, bulldog, greyhound, doberman, beagle, collie, boxer, mastiff, husky, and terrier. During a recognition test, the words poodle, spaniel, and chair are presented. According to the principle of dissociated retrieval, retrieval of verbatim and gist traces does not depend on each other and, therefore, different types of test probes might serve as better cues to one type of trace than another. In this example, test probes such as poodle (targets, or studied items) will be better retrieval cues for verbatim traces than gist, whereas test probes such as spaniel (related distractors, non-studied items but related to targets) will be better retrieval cues for gist traces than verbatim. Chair, on the other hand, would neither be a better cue for verbatim traces nor for gist traces because it was not presented and is not related to dogs. If verbatim and gist processes were dependent, then factors that affect one process would also affect the other in the same direction. However, several experiments showing, for example, differential forgetting rates between memory for the surface details and memory for the bottom-line meaning of past events favor the notion of dissociated retrieval of verbatim and gist traces. In the case of forgetting rates, those experiments have shown that, over time, verbatim traces become inaccessible at a faster rate than gist traces. Brainerd, Reyna, and Kneer, for instance, found that delay drives true recognition rates (supported by both verbatim and gist traces) and false recognition rates (supported by gist and suppressed by verbatim traces) in opposite directions, namely true memory decays over time while false memory increases.
Opponent processes in false memory
The principle of opponent processes describes the interaction between verbatim and gist processes in creating true and false memories. Whereas true memory is supported by both verbatim and gist processes, false memory is supported by gist processes and suppressed by verbatim processes. In other words, verbatim and gist processes work in opposition to one another when it comes to false memories. Suppose, for example, that one is presented with a word list such as lemon, apple, pear, and citrus. During a recognition test, the items lemon (target), orange (related distractor), and fan (unrelated distractor) are shown. In this case, retrieval of a gist trace (fruits) supports acceptance of both test probes lemon (true memory) and orange (false memory), whereas retrieval of a verbatim trace (lemon) only supports acceptance of the test probe lemon. In addition, retrieval of an exclusory verbatim trace ("I saw only the words lemon, apple, pear, and citrus") suppresses acceptance of false but related items such as orange through an operation known as recollection rejection. If neither verbatim nor gist traces are retrieved, then one might accept any test probe on the basis of response bias.
This principle plays a key role in FTT's explanation of experimental dissociations between true and false memories (e.g., when a variable affects one type of memory without affecting the other, or when it produces opposite effects on them). The time of exposure of each word during study and the number of repetitions have been shown to produce such dissociations. More specifically, while true memory follows a monotonically increasing function when plotted against presentation duration, false memory rates exhibit an inverted-U pattern when plotted as a function of presentation duration. Similarly, repetition is monotonically related to true memory (true memory increases as a function of the number of repetitions) and is non-monotonically related to false memory (repetition produces an inverted-U relation with false memory).
Retrieval phenomenology
Retrieval phenomenologies are spontaneous mental experiences associated with the act of remembering. They were first systematically characterized by E. K. Strong in the early 1900s. Strong identified two distinct types of introspective phenomena associated with memory retrieval that have since been termed recollection (or remembrance) and familiarity. Whereas the former is characterized as retrieval associated with recollection of past experiences, the latter lacks such association. The two forms of experiences can be illustrated by everyday expressions such as "I remember that!" (recollection) and "That seems familiar..." (familiarity). In FTT, retrieval of verbatim traces often produces recollective phenomenology and thus is frequently referred to as recollective retrieval. However, one feature of FTT is that recollective phenomenology is not particular to one type of memory process as posited by other dual-process theories of memory. Instead, FTT posits that retrieval of gist traces can also produce recollective phenomenology under some circumstances. When gist resemblance between a false item and memory is high and compelling, this gives rise to a phenomenon called phantom recollection, which is a vivid, but false, memory deemed to be true.
Developmental variability in dual processes
The principle of developmental variability in dual processes posits that verbatim and gist processes show variability across the lifespan. More specifically, verbatim and gist processes have been shown to improve between early childhood and young adulthood. Regarding verbatim processes, older children are better at retrieval of verbatim traces than younger children, although even very young children (4-year-olds) are able to retrieve verbatim information at above chance level. For instance, source memory accuracy greatly increases between 4-year-olds and 6-year-olds, and memory for nonsense words (i.e., words without a meaning, such as neppez) has been shown to increase between 7- and 10-year-olds. Gist processes also improve with age. For example, semantic clustering in free recall increases from 8-year-olds to 14-year-olds, and meaning connection across words and sentences has been shown to improve between 6- and 9-year-olds. In particular, the notion that gist memory improves with age plays a central role in FTT's prediction of age increases in false memory, a counterintuitive pattern that has been called developmental reversal.
Regarding old age, several studies suggest that verbatim memory declines between early and late adulthood, while gist memory remains fairly stable. Experiments indicate that older adults perform worse on tasks that require retrieval of surface features from studied items relative to younger adults. In addition, results with measurement models that quantify verbatim and gist processes indicate that older adults are less able to use verbatim traces during recall than younger adults.
False memories
When people try to remember past events (e.g., a birthday party or the last dinner), they often commit two types of errors: errors of omission and errors of commission. The former is known as forgetting, while the latter is better known as false memories. False memories can be separated into spontaneous and implanted false memories. Spontaneous false memories result from endogenous (internal) processes, such as meaning processing, while implanted false memories are the result of exogenous (external) processes, such as the suggestion of false information by an outside source (e.g., an interviewer asking misleading questions). Research had first suggested that younger children are more susceptible to suggestion of false information than adults. However, research has since indicated that younger children are much less likely to form false memories than older children and adults. Moreover, in opposition to common sense, true memories are not more stable than false ones. Studies have shown that false memories are actually more persistent than true memories. According to FTT, such a pattern arises because false memories are supported by memory traces that are less susceptible to interference and forgetting (gist traces) than traces that suppress them and also support true memories (verbatim traces).
FTT is not a model for false memories but rather a model that explains how memory interacts with higher reasoning processes. Essentially, the gist and verbatim traces of whatever the subject is experiencing have a major effect on the information that the subject falsely remembers. Verbatim and gist traces assist with memory performance because performance can draw on either kind of trace, depending on the retrieval cues available, on the accessibility of each kind of memory, and on forgetting. Although not a model for false memories, FTT is able to predict true and false memories associated with narratives and sentences. This is especially apparent in eyewitness testimonies.
There are 5 explanatory principles behind FTT's description of false memory, which lay out the differences between experiences dealing with gist and verbatim traces.
The storage of verbatim and gist traces is parallel. The surface form and the meaning content of an experience are encoded in parallel: verbatim traces represent the surface forms of directly experienced events, while gist traces are stored at many levels of meaning.
The retrieval of gist and verbatim traces: Retrieval cues work best with verbatim traces when they were part of the events actually experienced, whereas items that were not explicitly experienced but preserve their meaning work best as retrieval cues for gist traces. Surface memories in verbatim traces typically decline more rapidly than memories that deal with meaning.
False memory and the dual-opponent processes: Retrieval of verbatim and gist traces typically has opposite effects on false memory. Gist traces support false memory because the meaning an item has to the subject makes it seem familiar, whereas verbatim processes suppress false memory by replacing that familiarity with memory of what was actually experienced. However, when a false memory is suggested to the subject, this rule takes exception: in this case, retrieval of gist traces and of verbatim traces of the suggestion both support the false memory, while retrieval of verbatim traces of the originally experienced events suppresses it.
Variability in development: There is variability in the development of retrieval of both gist and verbatim memory; both improve during development into adulthood. This is especially true of gist traces, where the ability to connect meaning across different items and events improves as someone gets older.
Gist and verbatim processes assist with remembering an event vividly. Retrieval of both gist and verbatim traces can support vivid remembering, although the recollected content will be more generic in the case of gist traces and will involve conscious reinstatement of the experience in the case of verbatim traces.
Differences between true and false memories are also laid out by FTT. The theory predicts both associations and dissociations between true and false memories, with particular associations and dissociations observed under different kinds of conditions. Dissociation emerges in situations that involve reliance on verbatim traces. Memories, whether true or false, are then based on different kinds of representations.
FTT may also help explain the effects of false memories, misinformation, and false recognition in children, as well as how these effects vary across developmental changes.
While many false memories may be perceived as being "dumb," recent studies on FTT have shown that the theory might have an influence on creating "smart" false memories, which are created from being aware of the meaning of certain experiences.
While false memory research is still in early development, FTT has been applied effectively to real-world settings and explains multiple phenomena of false memory. In explaining false memories, FTT rejects the idea that false memories are offhand responses deemed to be true, describing instead how gist and verbatim traces jointly give rise to them.
Reasoning and decision-making
FTT, as it applies to reasoning, is adapted from dual process models of human cognition. It differs from the traditional dual process model in that it makes a distinction between impulsivity and intuition—which are combined in System 1 according to traditional dual process theories—and then makes the claim that expertise and advanced cognition relies on intuition. The distinction between intuition and analysis depends on what kind of representation is used to process information. The mental representations described by FTT are categorized as either gist or verbatim representations:
Gist representations are bottom-line understandings of the meaning of information or experience, and are used in intuitive gist processing.
Verbatim representations are the precise and detailed representations of the exact information or experience, and are used in analytic verbatim processing.
Generally, most adults display what is called a "fuzzy processing preference," meaning that they rely on the least precise gist representations necessary to make a decision, despite parallel processing of both gist and verbatim representations. Both processes increase with age, though the verbatim process develops sooner than the gist, and is thus more heavily relied on in adolescence.
In this regard, the theory expands on research that has illustrated the role of memory representations in reasoning processes, the intersection of which has been previously underexplored. However, in certain circumstances, FTT predicts independence between memory and reasoning, specifically between reasoning tasks that rely on gist representations and memory tests that rely on verbatim representations. An example of this is research between the risky choice framing task and working memory, in which better working memory is not associated with a reduction in bias.
FTT thus explains inconsistencies or biases in reasoning to be dependent on retrieval cues that access stored values and principles that are gist representations, which can be filtered through experience and cultural, affective, and developmental factors. This dependence on gist results in a vulnerability of reasoning to processing interference from overlapping classes of events, but can also explain expert reasoning in that a person can treat superficially different reasoning problems in the same way if the problems share an underlying gist.
Risk perception and probability judgments
FTT posits that when people are presented with statistical information, they extract representations of the gist of the information (qualitatively) as well as the exact verbatim information (quantitatively). The gist that is encoded is often a basic categorical distinction between no risk and some risk. However, in situations when both choices in the decision have a level of uncertainty or risk, then another level of precision would be required, e.g., low risk or high risk. An illustration of this principle can be found in FTT's explanation of the common framing effect.
Framing effects
Framing effects occur when linguistically different descriptions of equivalent options lead to inconsistent choices. A famous example of a risky choice framing task is the Asian Disease Problem. This task requires the participants to imagine that their country is about to face a disease which is expected to kill 600 people. They have to choose among two programs to combat this disease. Subjects are presented with options that are framed as either gains (lives saved) or losses (lives lost). The possible options, as well as the categorical gists that are posited to be encoded by FTT, are displayed below.

Gain frame:
Program A: 200 people will be saved (gist: some people are saved).
Program B: there is a 1/3 probability that 600 people will be saved and a 2/3 probability that no one will be saved (gist: some people are saved or no one is saved).

Loss frame:
Program C: 400 people will die (gist: some people die).
Program D: there is a 1/3 probability that nobody will die and a 2/3 probability that 600 people will die (gist: no one dies or some people die).
It is commonly found that people prefer the sure option when the options are framed as gains (program A) and the risky option when they are framed as losses (program D), despite the fact that the expected values for all the programs are equivalent. This is in contrast to a normative point of view that would indicate that if respondents prefer the sure option in the positive frame, they should also prefer the sure option in the negative frame.
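The equivalence of the expected values can be checked directly. A minimal sketch in Python, using the 600-person scenario and program payoffs from the problem statement above:

```python
# Expected number of lives saved under each program of the Asian Disease
# Problem (600 people at risk); all four programs have the same expected value.
total = 600

ev_A = 200                            # 200 saved for sure
ev_B = (1/3) * 600 + (2/3) * 0        # 1/3 chance all 600 saved, else none
ev_C = total - 400                    # 400 die for sure -> 200 saved
ev_D = (1/3) * 600 + (2/3) * 0        # 1/3 chance nobody dies, else all die

print(ev_A, ev_B, ev_C, ev_D)         # 200 200.0 200 200.0
```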
The explanation for this effect according to FTT is that people will tend to operate on the simplest gist that is permitted to make a decision. In the case of this framing question, the gain frame presents a situation in which people prefer the gist of some people being saved to the possibility that some are saved or no one could be saved, and conversely, that the possibility of some people dying or no one dying is preferable to the option that some people will surely die.
Critical tests have been conducted to provide evidence in support of this explanation in favor of other theoretical explanations (i.e., Prospect theory) by presenting a modified version of this task that eliminates some mathematically redundant wording, e.g., program B would instead indicate that "If program B is adopted, there is 1/3 probability that 600 people will be saved." FTT predicts, in this case, that the elimination of the additional gist (the explicit possible death in program B) would result in indifference and eliminate the framing effect, which is indeed what was found.
Probability judgments and risk
The dual-process assumption of FTT has also been used to explain common biases of probability judgment, including the conjunction and disjunction fallacies. The conjunction fallacy occurs when people mistakenly judge a specific set of circumstances to be more probable than a more general set that includes the specific set. This fallacy is famously demonstrated by the Linda problem: that given a description of a woman named Linda who is an outspoken philosophy major who is concerned about discrimination and social justice, people will judge "Linda is a bank teller and is active in the feminist movement" to be more probable than "Linda is a bank teller", despite the fact that the latter statement is entirely inclusive of the former. FTT explains this phenomenon to not be a matter of encoding, given that priming participants to understand the inclusive nature of the categories tends not to reduce the bias. Instead, this is the result of the salience of relational gist, which contributes to a tendency to judge relative numerosity instead of merely applying the principle of class inclusion.
Errors of probability perception are also associated with the theory's predictions of contradictory relationships between risk perception and risky behavior. Specifically, that endorsement of accurate principles of objective risk is actually associated with greater risk-taking, whereas measures that assess global, gist-based judgments of risk had a protective effect (consistent with other predictions from FTT in the field of medical decision making). Since gist processing develops after verbatim processing as people age, this finding lends explanation to the increase in risk-taking that occurs during adolescence.
Management and economics
FTT has also been applied in the domains of consumer behavior and economics. For example, since the theory posits that people rely primarily on gist representations in making decisions, and that culture and experience can affect consumers' gist representations, factors such as cultural similarity and personal relevance have been used to explain consumers' perceptions of the risk of food-borne contamination and their intentions to reduce consumption of certain foods. In other words, one's evaluation of how "at-risk" he or she is can be influenced both by specific information learned and by fuzzy representations of cultural experience and perceived proximity. In practice this resulted in greater consumer concern when the threat of a food-borne illness was described in a culturally similar location, regardless of geographical proximity or other verbatim details.
Evidence was also found in consumer research in support of FTT's "editing" hypothesis, namely that extremely low-probability risks can be simplified by gist processing to be represented as "essentially nil." For example, one study found that people were willing to pay more for a safer product if safety was expressed relatively (i.e., product A is safer than product B) than they were if safety was expressed with statistics of actual incidence of safety hazards.
This result is in contrast to most prescriptive decision rules that predict that formally equivalent methods of communicating risk information should have identical effects on risk-taking behavior, even if the pertinent displays are different. These findings are predicted by FTT (and related models), which suggest that people reason on the basis of simplified representations rather than on the literal information available.
Medical decision-making
Like other people, clinicians apply cognitive heuristics and fall into systematic errors which affect decisions in everyday life. Research has shown that patients and their physicians have difficulty understanding a host of numerical concepts, especially risks and probabilities, and this often implies some problems with numeracy, or mathematical proficiency. For example, physicians and patients both demonstrate great difficulty understanding the probabilities of certain genetic risks and were prone to the same errors, despite vast differences in medical knowledge.
Though traditional dual process theory generally predicts that decisions made by computation are superior to those made by intuition, FTT assumes the opposite: that intuitive processing is more sophisticated and is capable of making better decisions, and that increases in expertise are accompanied by reliance on intuitive, gist-based reasoning rather than on literal, verbatim reasoning.
FTT predicts that simply educating people with statistics regarding risk factors can hinder prevention efforts. Due to low prevalence of HIV or cancer, for example, people tend to overestimate their risks, and consequently interventions stressing the actual numbers may move people toward complacency as opposed to risk reduction. When women learn that their actual risks for breast cancer are lower than they thought, they return for screening at a lower rate. Also, some interventions to discourage adolescent drug use by presenting the risks have been shown to be ineffective or can even backfire.
The conclusion drawn from this evidence is that health-care professionals and health policymakers need to package, present, and explain information in more meaningful ways that facilitate forming an appropriate gist. Such strategies would include explaining quantities qualitatively, displaying information visually, and tailoring the format to trigger the appropriate gist and to cue the retrieval of health-related knowledge and values. Web-based interventions have been designed using these principles, which have been found to increase the patient's willingness to escalate care, as well as gain knowledge and make an informed choice.
Implications
Theory-driven research using principles from FTT provides empirically supported recommendations that can be applied in many fields. For example, it provides specific recommendations regarding interventions aiming at reducing adolescent risk taking. Moreover, according to FTT, precise information does not necessarily work to communicate health-related information, which has obvious implications to public policy and procedures for improving treatment adherence in particular. Specifically, FTT principles suggest examples of how to display risk proportions in order to be comprehensible for both patients and health care professionals:
Explain quantities qualitatively. Do not rely solely on numbers when presenting information.
Explain quantities, percentages, and probabilities verbally, stressing conceptual understanding (the bottom-line meaning of information) over precise memorization of verbatim facts or numbers (e.g., a 20% chance of breast cancer is actually a "high" risk).
Provide verbal guidance in disentangling classes and class-inclusion relationships.
Display information visually. When it is necessary to present information numerically, arrange numbers so that meaningful patterns or relationships among them are obvious.
Make use of graphical displays which help people extract the relevant gist. Useful formats for conveying relative risks and other comparative information include simple bar graphs and risk ladders. Pie charts are good for representing relative proportions. Line graphs are optimal for conveying the gist of a linear trend, such as survival and mortality curves or the effectiveness of a drug over time. Stacked bar graphs are useful for showing absolute risks; and Venn diagrams, two-by-two grids, and 100-square grids are useful for disentangling numerators and denominators and for eliminating errors from probability judgments.
Avoid distracting gists. The class-inclusion confusion is especially likely to produce errors when visually or emotionally salient details, a story, or a stereotype draws attention away from the relevant data in the direction of extraneous information. For example, given a display of seven cows and three horses, children are asked whether there are more cows or more animals. Until the age of ten, children often respond that there are more cows than animals, even after counting the number in each class aloud correctly. However, young children in the previous example are more likely to answer the problem correctly when they are not shown a picture with the visually hard-to-ignore detail, that is, several figures of cows.
Facilitate reexamination of problems. Encourage people to reexamine problems and edit their initial judgments. Although gist for quantities tends to be more available than the verbatim numbers, people can and do attend to the numbers to correct their first gist-based impressions when cued to do so and when they are given the time and opportunity, which can help reduce errors.
In addition, memory principles in FTT provide recommendations to eyewitness testimony. Children are often called upon to testify in courts, most commonly in cases of maltreatment, divorce, and child custody. Contrary to common sense, FTT posits that children can be reliable witnesses as long as they are encouraged to report verbatim memories and their reports are protected from suggestion of false information. More specifically:
Children should be interviewed as soon as possible after the target event to reduce exposure to false suggestions and to facilitate retrieval of verbatim memories before their rapid decay.
When reminding a witness of a target event, interviewers should present pictures or photos rather than words to describe it. Pictures of the actual target event help to increase retrieval of true memories as they are better cues to verbatim memories than words.
Avoid repeated questioning. FTT predicts, for example, that the repetition of questions that restate the gist of a false information can increase the probability of false memories during subsequent interviews.
Do not give children negative feedback about their performance during an interview. This procedure prompts children to provide additional information that is often false rather than true.
See also
Behavioral economics
Cognitive development
Decision-making
Developmental psychology
Framing
Reason
Risk
References
Cognitive psychology
Applied probability
Decision theory | Fuzzy-trace theory | [
"Mathematics",
"Biology"
] | 6,762 | [
"Behavior",
"Applied probability",
"Applied mathematics",
"Behavioural sciences",
"Cognitive psychology"
] |
27,866,367 | https://en.wikipedia.org/wiki/Medical%20bag | A medical bag (also called a doctor's bag or physician's bag) is a portable bag used by a physician or other medical professional to transport medical supplies and medicine.
Traditionally, the medical bag was made of leather, opened on the top with a split-handle design. During the American Civil War, physician's medical saddle bags were used. Modern medical bags are made of various materials and come in various designs that can include many pockets, pouches, and zippered or hook-and-loop openings.
Indigenous North American medicine men and shamans use a medicine bag. A battle bag is used in the military.
See also
First aid kit
Gladstone bag
References
External links
Doctor Bag Project - What's in the bag?, a glimpse into a doctor's bag from a typical late 1880s rural doctor, Indiana Medical History Museum
Series 13 - Medical bags, cases and other containers (with or without contents), Medical History Museum, University of Melbourne
Bags
Medical equipment | Medical bag | [
"Biology"
] | 198 | [
"Medical equipment",
"Medical technology"
] |
27,866,979 | https://en.wikipedia.org/wiki/ATOMKI | ATOMKI is the Institute for Nuclear Research, Hungarian Academy of Sciences. The institute is located in Debrecen and was established in 1954 by Sándor Szalay, the founding director.
ATOMKI became independent from the Institute of Experimental Physics of the Kossuth Lajos University (presently called University of Debrecen), where Sándor Szalay started and directed nuclear physics research for decades. At present, the main research fields of Atomki are atomic, nuclear, and particle physics, ion beam analytics, detection and signal-processing techniques, environmental analytics, radioactive dating, radiochemistry, and solid state physics. The director is Zsolt Dombrádi, D.Sc.
Some of its buildings were originally the National Orphanage for Teachers' Children, built in 1917.
See also
X17 particle
References
Further reading
Booklet including brief history of the institute (page 7)
External links
Discovery of neutrino particle
1954 establishments in Hungary
Hungarian Academy of Sciences
Nuclear research institutes
Research institutes in Hungary
Organisations based in Debrecen | ATOMKI | [
"Engineering"
] | 210 | [
"Nuclear research institutes",
"Nuclear organizations"
] |
2,921,396 | https://en.wikipedia.org/wiki/Cabri%20Geometry | Cabri Geometry is a commercial interactive geometry software produced by the French company Cabrilog for teaching and learning geometry and trigonometry. It was designed with ease-of-use in mind. The program allows the user to animate geometric figures, proving a significant advantage over those drawn on a blackboard. Relationships between points on a geometric object may easily be demonstrated, which can be useful in the learning process. There are also graphing and display functions which allow exploration of the connections between geometry and algebra. The program can be run under Windows or the Mac OS.
See also
Interactive geometry software – alternatives to Cabri Geometry
References
External links
Cabri Geometry
Cabri belongs to the Inter2Geo European project aiming at interoperability between interactive geometry software.
Interactive geometry software | Cabri Geometry | [
"Mathematics"
] | 158 | [
"Interactive geometry software",
"Mathematical software",
"Geometry",
"Geometry stubs"
] |
2,921,620 | https://en.wikipedia.org/wiki/Sipser%E2%80%93Lautemann%20theorem | In computational complexity theory, the Sipser–Lautemann theorem or Sipser–Gács–Lautemann theorem states that bounded-error probabilistic polynomial (BPP) time is contained in the polynomial time hierarchy, and more specifically Σ2 ∩ Π2.
In 1983, Michael Sipser showed that BPP is contained in the polynomial time hierarchy. Péter Gács showed that BPP is actually contained in Σ2 ∩ Π2. Clemens Lautemann contributed by giving a simple proof of BPP’s membership in Σ2 ∩ Π2, also in 1983. It is conjectured that in fact BPP=P, which is a much stronger statement than the Sipser–Lautemann theorem.
Proof
Here we present the proof by Lautemann. Without loss of generality, a machine M ∈ BPP with error ≤ 2^(−|x|) can be chosen. (All BPP problems can be amplified to reduce the error probability exponentially.) The basic idea of the proof is to define a Σ2 sentence that is equivalent to stating that x is in the language, L, defined by M by using a set of transforms of the random variable inputs.
Since the output of M depends on random input, as well as the input x, it is useful to define which random strings produce the correct output as A(x) = {r | M(x,r) accepts}. The key to the proof is to note that when x ∈ L, A(x) is very large and when x ∉ L, A(x) is very small. By using bitwise parity, ⊕, a set of transforms can be defined as A(x) ⊕ t={r ⊕ t | r ∈ A(x)}. The first main lemma of the proof shows that the union of a small finite number of these transforms will contain the entire space of random input strings. Using this fact, a Σ2 sentence and a Π2 sentence can be generated that is true if and only if x ∈ L (see conclusion).
Lemma 1
The general idea of lemma one is to prove that if A(x) covers a large part of the random space then there exists a small set of translations that will cover the entire random space. In more mathematical language:
If |A(x)| / 2^|r| ≥ 1 − 2^(−|x|), then there exist t₁, t₂, ..., t_|r|, where each tᵢ ∈ {0,1}^|r|, such that ⋃ᵢ (A(x) ⊕ tᵢ) = {0,1}^|r|.
Proof. Randomly pick t₁, t₂, ..., t_|r|. Let S = ⋃ᵢ (A(x) ⊕ tᵢ) (the union of all transforms of A(x)), and write R = {0,1}^|r| for the space of random strings.

So, for all r in R,

Pr[r ∉ S] = ∏ᵢ Pr[r ⊕ tᵢ ∉ A(x)] ≤ 2^(−|x| · |r|)

The probability that there will exist at least one element in R not in S is

Pr[∃r : r ∉ S] ≤ ∑_r 2^(−|x| · |r|) = 2^|r| · 2^(−|x| · |r|) < 1

Therefore

Pr[S = R] > 0

Thus there is a selection for each t₁, t₂, ..., t_|r| such that ⋃ᵢ (A(x) ⊕ tᵢ) = R.
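The covering argument of Lemma 1 can be illustrated numerically. Below is a minimal Python sketch; the space size n and the density of A(x) are illustrative assumptions rather than the proof's asymptotic parameters, and bitwise XOR on integers plays the role of ⊕:

```python
import random

# Monte Carlo illustration of Lemma 1: if A(x) misses only a small fraction
# of {0,1}^n, then n random XOR-translations of A(x) usually cover the space.
n = 10
space = set(range(2 ** n))            # integers encode bit strings of length n

missing = set(random.sample(sorted(space), 2 ** n // 16))
A = space - missing                   # A(x): all but a 1/16 fraction of strings

ts = [random.randrange(2 ** n) for _ in range(n)]   # translations t_1..t_n
covered = set()
for t in ts:
    covered |= {r ^ t for r in A}     # A(x) XOR t, with ^ as bitwise parity

print("covered everything:", covered == space)      # almost always True
```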
Lemma 2
The previous lemma shows that A(x) can cover every possible point in the space using a small set of translations. Complementary to this, for x ∉ L only a small fraction of the space is covered by S = ⋃ᵢ (A(x) ⊕ tᵢ). We have:

Pr[r ∈ S] ≤ ∑ᵢ Pr[r ∈ A(x) ⊕ tᵢ] = |r| · 2^(−|x|) < 1

because |r| is polynomial in |x|.
Conclusion
The lemmas show that language membership of a language in BPP can be expressed as a Σ2 expression, as follows.

x ∈ L ⇔ ∃ t₁, t₂, ..., t_|r| ∀r ⋁_{1≤i≤|r|} (M(x, r ⊕ tᵢ) accepts)
That is, x is in language L if and only if there exist |r| binary vectors t₁, ..., t_|r| such that, for all random bit vectors r, the TM M accepts at least one of the vectors r ⊕ tᵢ.
The above expression is in Σ2 in that it is first existentially then universally quantified. Therefore BPP ⊆ Σ2. Because BPP is closed under complement, this proves BPP ⊆ Σ2 ∩ Π2.
Stronger version
The theorem can be strengthened to BPP ⊆ MA ⊆ S₂^P ⊆ Σ₂ ∩ Π₂ (see MA, S₂P).
References
Structural complexity theory
Randomized algorithms
Theorems in computational complexity theory
Articles containing proofs | Sipser–Lautemann theorem | [
"Mathematics"
] | 742 | [
"Theorems in computational complexity theory",
"Articles containing proofs",
"Theorems in discrete mathematics"
] |
2,922,081 | https://en.wikipedia.org/wiki/Separatrix%20%28mathematics%29 | In mathematics, a separatrix is the boundary separating two modes of behaviour in a differential equation.
Examples
Simple pendulum
Consider the differential equation describing the motion of a simple pendulum:

d²θ/dt² + (g/l) sin θ = 0

where l denotes the length of the pendulum, g the gravitational acceleration and θ the angle between the pendulum and vertically downwards. In this system there is a conserved quantity H (the Hamiltonian), which is given by

H = (1/2) (dθ/dt)² − (g/l) cos θ
With this defined, one can plot a curve of constant H in the phase space of the system. The phase space is a graph with θ along the horizontal axis and dθ/dt on the vertical axis. The type of resulting curve depends upon the value of H.
If H < −g/l then no curve exists (because dθ/dt would be imaginary).
If −g/l < H < g/l then the curve will be a simple closed curve which is nearly circular for small H and becomes "eye" shaped when H approaches the upper bound. These curves correspond to the pendulum swinging periodically from side to side.
If H > g/l then the curve is open, and this corresponds to the pendulum forever swinging through complete circles.
In this system the separatrix is the curve that corresponds to H = g/l. It separates — hence the name — the phase space into two distinct areas, each with a distinct type of motion. The region inside the separatrix has all those phase space curves which correspond to the pendulum oscillating back and forth, whereas the region outside the separatrix has all the phase space curves which correspond to the pendulum continuously turning through vertical planar circles.
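A short numerical sketch of this classification in Python; the values g = 9.81 m/s² and l = 1 m and the function name are illustrative assumptions:

```python
import numpy as np

# Classify pendulum phase-space points by the conserved quantity
# H = (1/2) * omega**2 - (g/l) * cos(theta).
g, l = 9.81, 1.0

def classify(theta, omega):
    H = 0.5 * omega**2 - (g / l) * np.cos(theta)
    if np.isclose(H, g / l):
        return "separatrix"
    return "oscillation" if H < g / l else "full rotation"

print(classify(0.5, 0.0))                  # small swing -> oscillation
print(classify(0.0, 2 * np.sqrt(g / l)))   # H = g/l exactly -> separatrix
print(classify(0.0, 7.0))                  # fast -> full rotation
```

The separatrix is crossed at θ = 0 when ω = 2√(g/l), since (1/2)ω² − g/l = g/l there.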
FitzHugh–Nagumo model
In the FitzHugh–Nagumo model, when the linear nullcline pierces the cubic nullcline at the left, middle, and right branch once each, the system has a separatrix. Trajectories to the left of the separatrix converge to the left stable equilibrium, and similarly for the right. The separatrix itself is the stable manifold for the saddle point in the middle. Details are found in the FitzHugh–Nagumo model article.
The separatrix is clearly visible by numerically solving for trajectories backwards in time. Since when solving for the trajectories forwards in time, trajectories diverge from the separatrix, when solving backwards in time, trajectories converge to the separatrix.
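A sketch of this backward-time technique in Python, on a FitzHugh–Nagumo-type system v′ = v − v³/3 − w + I, w′ = ε(v + a − bw); the parameter values are assumptions chosen so that the linear nullcline crosses all three branches of the cubic, giving a saddle at the origin:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Trace the separatrix by integrating backwards in time from near the saddle.
a, b, eps, I = 0.0, 2.0, 0.08, 0.0   # illustrative parameters (three equilibria)

def fhn(t, y):
    v, w = y
    return [v - v**3 / 3 - w + I, eps * (v + a - b * w)]

def fhn_backward(t, y):
    return [-dy for dy in fhn(t, y)]  # reversed vector field

def escaped(t, y):                    # stop once far from the saddle
    return abs(y[0]) - 3.0
escaped.terminal = True

# Start just off the saddle at (0, 0); backwards in time the trajectory is
# pushed onto the saddle's stable manifold, i.e. the separatrix.
sol = solve_ivp(fhn_backward, (0, 200), [0.01, 0.0],
                events=escaped, max_step=0.1)
print(sol.y[:, -1])                   # a point far out along the separatrix
```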
References
Logan, J. David, Applied Mathematics, 3rd Ed., 2006, John Wiley and Sons, Hoboken, NJ, pg. 65.
External links
Separatrix from MathWorld.
Dynamical systems | Separatrix (mathematics) | [
"Physics",
"Mathematics"
] | 503 | [
"Mechanics",
"Dynamical systems"
] |
2,922,089 | https://en.wikipedia.org/wiki/Dihedral%20symmetry%20in%20three%20dimensions | In geometry, dihedral symmetry in three dimensions is one of three infinite sequences of point groups in three dimensions which have a symmetry group that as an abstract group is a dihedral group Dihn (for n ≥ 2).
Types
There are 3 types of dihedral symmetry in three dimensions, each shown below in 3 notations: Schönflies notation, Coxeter notation, and orbifold notation.
Chiral
Dn, [n,2]+, (22n) of order 2n – dihedral symmetry or para-n-gonal group (abstract group: Dihn).
Achiral
Dnh, [n,2], (*22n) of order 4n – prismatic symmetry or full ortho-n-gonal group (abstract group: Dihn × Z2).
Dnd (or Dnv), [2n,2+], (2*n) of order 4n – antiprismatic symmetry or full gyro-n-gonal group (abstract group: Dih2n).
For a given n, all three have n-fold rotational symmetry about one axis (rotation by an angle of 360°/n does not change the object), and 2-fold rotational symmetry about a perpendicular axis, hence about n of those. For n = ∞, they correspond to three Frieze groups. Schönflies notation is used, with Coxeter notation in brackets, and orbifold notation in parentheses. The term horizontal (h) is used with respect to a vertical axis of rotation.
In 2D, the symmetry group Dn includes reflections in lines. When the 2D plane is embedded horizontally in a 3D space, such a reflection can either be viewed as the restriction to that plane of a reflection through a vertical plane, or as the restriction to the plane of a rotation about the reflection line, by 180°. In 3D, the two operations are distinguished: the group Dn contains rotations only, not reflections. The other group is pyramidal symmetry Cnv of the same order, 2n.
With reflection symmetry in a plane perpendicular to the n-fold rotation axis, we have Dnh, [n], (*22n).
Dnd (or Dnv), [2n,2+], (2*n) has vertical mirror planes between the horizontal rotation axes, not through them. As a result, the vertical axis is a 2n-fold rotoreflection axis.
Dnh is the symmetry group for a regular n-sided prism and also for a regular n-sided bipyramid. Dnd is the symmetry group for a regular n-sided antiprism, and also for a regular n-sided trapezohedron. Dn is the symmetry group of a partially rotated prism.
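The group orders above can be verified computationally. A minimal Python sketch building the chiral group Dn from rotation matrices (n = 5 is an arbitrary example):

```python
import numpy as np

# Build the chiral dihedral group D_n as 3x3 rotation matrices: n rotations
# about the z-axis plus n 2-fold rotations about horizontal axes, order 2n.
n = 5

def rot_z(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# 180-degree rotation about the x-axis (a horizontal 2-fold axis)
flip = np.diag([1.0, -1.0, -1.0])

elements = []
for k in range(n):
    r = rot_z(2 * np.pi * k / n)
    elements.append(r)            # rotations about the main (vertical) axis
    elements.append(r @ flip)     # 2-fold rotations about horizontal axes

unique = {tuple(np.round(m, 6).ravel()) for m in elements}
print(len(unique))                # 2n = 10
```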
n = 1 is not included because the three symmetries are equal to other ones:
D1 and C2: group of order 2 with a single 180° rotation.
D1h and C2v: group of order 4 with a reflection in a plane and a 180° rotation about a line in that plane.
D1d and C2h: group of order 4 with a reflection in a plane and a 180° rotation about a line perpendicular to that plane.
For n = 2 there is not one main axis and two additional axes, but there are three equivalent ones.
D2, [2,2]+, (222) of order 4 is one of the three symmetry group types with the Klein four-group as abstract group. It has three perpendicular 2-fold rotation axes. It is the symmetry group of a cuboid with an S written on two opposite faces, in the same orientation.
D2h, [2,2], (*222) of order 8 is the symmetry group of a cuboid.
D2d, [4,2+], (2*2) of order 8 is the symmetry group of e.g.:
A square cuboid with a diagonal drawn on one square face, and a perpendicular diagonal on the other one.
A regular tetrahedron scaled in the direction of a line connecting the midpoints of two opposite edges (D2d is a subgroup of Td; by scaling, we reduce the symmetry).
Subgroups
For Dnh, [n,2], (*22n), order 4n
Cnh, [n+,2], (n*), order 2n
Cnv, [n,1], (*nn), order 2n
Dn, [n,2]+, (22n), order 2n
For Dnd, [2n,2+], (2*n), order 4n
S2n, [2n+,2+], (n×), order 2n
Cnv, [n+,2], (n*), order 2n
Dn, [n,2]+, (22n), order 2n
Dnd is also subgroup of D2nh.
Examples
Dnh, [n], (*22n):
D5h, [5], (*225):
D4d, [8,2+], (2*4):
D5d, [10,2+], (2*5):
D17d, [34,2+], (2*17):
See also
List of spherical symmetry groups
Point groups in three dimensions
Cyclic symmetry in three dimensions
References
N.W. Johnson: Geometries and Transformations, (2018) Chapter 11: Finite symmetry groups, 11.5 Spherical Coxeter groups
External links
Graphic overview of the 32 crystallographic point groups – form the first parts (apart from skipping n=5) of the 7 infinite series and 5 of the 7 separate 3D point groups
Symmetry
Euclidean symmetries
Group theory | Dihedral symmetry in three dimensions | [
"Physics",
"Mathematics"
] | 1,214 | [
"Functions and mappings",
"Euclidean symmetries",
"Mathematical objects",
"Group theory",
"Fields of abstract algebra",
"Mathematical relations",
"Geometry",
"Symmetry"
] |
2,923,964 | https://en.wikipedia.org/wiki/Inductive%20effect | In Organic chemistry, the inductive effect in a molecule is a local change in the electron density due to electron-withdrawing or electron-donating groups elsewhere in the molecule, resulting in a permanent dipole in a bond.
It is present in a σ (sigma) bond, unlike the electromeric effect which is present in a π (pi) bond.
The halogen atoms in an alkyl halide are electron withdrawing while the alkyl groups have electron donating tendencies. If an electronegative atom or group is joined to a chain of atoms, typically carbon, it draws electron density toward itself, so that a partial positive charge is relayed along the other atoms in the chain. This is the electron-withdrawing inductive effect, also known as the -I effect. In short, alkyl groups tend to donate electrons, leading to the +I effect. Its experimental basis is the ionization constant. It is distinct from and often opposite to the mesomeric effect.
Bond polarization
Covalent bonds can be polarized depending on the relative electronegativity of the two atoms forming the bond. The electron cloud in a σ-bond between two unlike atoms is not uniform and is slightly displaced towards the more electronegative of the two atoms. This causes a permanent state of bond polarization, where the more electronegative atoms has a fractional negative charge (δ–) and the less electronegative atom has a fractional positive charge (δ+).
For example, the water molecule has an electronegative oxygen atom that attracts a negative charge. This is indicated by δ− in the water molecule in the vicinity of the O atom, as well as by a δ+ next to each of the two H atoms. The vector addition of the individual bond dipole moments results in a net dipole moment for the molecule. A polar bond is a covalent bond in which there is a separation of charge between one end and the other - in other words in which one end is slightly positive and the other slightly negative. Examples include most covalent bonds. The hydrogen-chlorine bond in HCl or the hydrogen-oxygen bonds in water are typical.
Inductive effect
The effect of the sigma electron displacement towards the more electronegative atom by which one end becomes positively charged and the other end negatively charged is known as the inductive effect. The -I effect is a permanent effect & generally represented by an arrow on the bond.
However, some groups, such as the alkyl group, are less electron-withdrawing than hydrogen and are therefore considered electron-releasing/electron-donating groups. This electron-releasing character is indicated by the +I effect. In short, alkyl groups tend to give electrons, leading to the induction effect. However, such an effect has been questioned.
As the induced change in polarity is less than the original polarity, the inductive effect rapidly dies out and is significant only over a short distance. Moreover, the inductive effect is permanent but feeble since it involves the shift of strongly held σ-bond electrons and other stronger factors may overshadow this effect.
Relative inductive effects
Relative inductive effects have been experimentally measured through the resulting acid dissociation constants of a nearby carboxylic acid group. In increasing order of -I effect or decreasing order of +I effect, common functional groups are:
Hydrogen substituents also exhibit an isotope effect: relative to the same order, the +I effect increases as H < D < T,
where H is hydrogen, D deuterium, and T tritium.
The strength of inductive effect is also dependent on the distance between the substituent group and the main group that react; the longer the distance, the weaker the effect.
Inductive effects can be expressed quantitatively through the Hammett equation, which describes the relationship between reaction rates and equilibrium constants with respect to substituent.
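A small worked sketch of the Hammett equation, log₁₀(K_X/K_H) = ρσ, in Python; the σ values and the benzoic acid pKa quoted in the comments are common literature values, used here as assumptions for illustration:

```python
# Hammett equation: log10(K_X / K_H) = rho * sigma, so pKa_X = pKa_H - rho*sigma.
pKa_H = 4.20   # benzoic acid, the unsubstituted reference (literature value)
rho = 1.0      # defining reaction: ionization of benzoic acids in water

# Common para-substituent sigma constants (literature values)
sigma_para = {"NO2": 0.78, "Cl": 0.23, "OCH3": -0.27}

for group, sigma in sigma_para.items():
    pKa_X = pKa_H - rho * sigma
    print(f"para-{group} benzoic acid: predicted pKa ~ {pKa_X:.2f}")
```

Electron-withdrawing groups (positive σ) lower the predicted pKa, i.e. strengthen the acid, consistent with the -I behaviour described above.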
Fragmentation
The inductive effect can be used to determine the stability of a molecule depending on the charge present on the atom and the groups bonded to the atom. For example, if an atom has a positive charge and is attached to a -I group its charge becomes 'amplified' and the molecule becomes more unstable. Similarly, if an atom has a negative charge and is attached to a +I group its charge becomes 'amplified' and the molecule becomes more unstable. In contrast, if an atom has a negative charge and is attached to a -I group its charge becomes 'de-amplified' and the molecule becomes more stable than if the I-effect was not taken into consideration. Similarly, if an atom has a positive charge and is attached to a +I group its charge becomes 'de-amplified' and the molecule becomes more stable than if the I-effect was not taken into consideration. The explanation for the above is given by the fact that more charge on an atom decreases stability and less charge on an atom increases stability.
Acidity and basicity
The inductive effect also plays a vital role in deciding the acidity and basicity of a molecule. Groups having +I effect (Inductive effect) attached to a molecule increases the overall electron density on the molecule and the molecule is able to donate electrons, making it basic. Similarly, groups having -I effect attached to a molecule decreases the overall electron density on the molecule making it electron deficient which results in its acidity. As the number of -I groups attached to a molecule increases, its acidity increases; as the number of +I groups on a molecule increases, its basicity will increase.
Applications
Carboxylic acids
The strength of a carboxylic acid depends on the extent of its ionization: the more ionized it is, the stronger it is. As an acid becomes stronger, the numerical value of its pKa drops.
In acids, the electron-releasing inductive effect of the alkyl group increases the electron density on oxygen and thus hinders the breaking of the O-H bond, which consequently reduces the ionization. Due to its greater ionization, formic acid (HCOOH) is stronger than acetic acid (CH3COOH). Monochloroacetic acid (ClCH2COOH), though, is stronger than formic acid, due to the electron-withdrawing effect of chlorine promoting ionization.
In benzoic acid, the carbon atoms which are present in the ring are sp2 hybridised. As a result, benzoic acid (C6H5COOH) is a stronger acid than cyclohexanecarboxylic acid (C6H11COOH). Also, in aromatic carboxylic acids, electron-withdrawing groups substituted at the ortho and para positions can enhance the acid strength.
Since the carboxyl group is itself an electron-withdrawing group, dicarboxylic acids are, in general, stronger acids than their monocarboxyl analogues.
The inductive effect also helps in the polarization of a bond, making certain carbon (or other) atom positions more susceptible to attack.
Comparison between inductive effect and electromeric effect
See also
Mesomeric effect
Pi backbonding
Baker–Nathan effect: the observed order in electron-releasing basic substituents is apparently reversed.
References
External links
globalbritannica.com
auburn.edu (PDF)
pubs.acs.org
Physical organic chemistry
Chemical bonding | Inductive effect | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,493 | [
"nan",
"Chemical bonding",
"Condensed matter physics",
"Physical organic chemistry"
] |
2,924,436 | https://en.wikipedia.org/wiki/Electromagnetic%20wave%20equation | The electromagnetic wave equation is a second-order partial differential equation that describes the propagation of electromagnetic waves through a medium or in a vacuum. It is a three-dimensional form of the wave equation. The homogeneous form of the equation, written in terms of either the electric field or the magnetic field , takes the form:
where
is the speed of light (i.e. phase velocity) in a medium with permeability , and permittivity , and is the Laplace operator. In a vacuum, , a fundamental physical constant. The electromagnetic wave equation derives from Maxwell's equations. In most older literature, is called the magnetic flux density or magnetic induction. The following equationspredicate that any electromagnetic wave must be a transverse wave, where the electric field and the magnetic field are both perpendicular to the direction of wave propagation.
The origin of the electromagnetic wave equation
In his 1865 paper titled A Dynamical Theory of the Electromagnetic Field, James Clerk Maxwell utilized the correction to Ampère's circuital law that he had made in part III of his 1861 paper On Physical Lines of Force. In Part VI of his 1864 paper titled Electromagnetic Theory of Light, Maxwell combined displacement current with some of the other equations of electromagnetism and he obtained a wave equation with a speed equal to the speed of light. He commented:
The agreement of the results seems to show that light and magnetism are affections of the same substance, and that light is an electromagnetic disturbance propagated through the field according to electromagnetic laws.
Maxwell's derivation of the electromagnetic wave equation has been replaced in modern physics education by a much less cumbersome method involving combining the corrected version of Ampère's circuital law with Faraday's law of induction.
To obtain the electromagnetic wave equation in a vacuum using the modern method, we begin with the modern 'Heaviside' form of Maxwell's equations. In a vacuum- and charge-free space, these equations are:

∇ · E = 0

∇ × E = −∂B/∂t

∇ · B = 0

∇ × B = μ₀ε₀ ∂E/∂t

These are the general Maxwell's equations specialized to the case with charge and current both set to zero.
Taking the curl of the curl equations gives:

∇ × (∇ × E) = −∂(∇ × B)/∂t = −μ₀ε₀ ∂²E/∂t²

∇ × (∇ × B) = μ₀ε₀ ∂(∇ × E)/∂t = −μ₀ε₀ ∂²B/∂t²

We can use the vector identity

∇ × (∇ × V) = ∇(∇ · V) − ∇²V

where V is any vector function of space. And

∇²V = ∇ · (∇V)

where ∇V is a dyadic which when operated on by the divergence operator ∇ · yields a vector. Since

∇ · E = 0

∇ · B = 0

then the first term on the right in the identity vanishes and we obtain the wave equations:

(1/c₀²) ∂²E/∂t² − ∇²E = 0

(1/c₀²) ∂²B/∂t² − ∇²B = 0

where

c₀ = 1/√(μ₀ε₀) = 2.99792458 × 10⁸ m/s

is the speed of light in free space.
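The final numerical relation can be checked directly. A minimal Python sketch using scipy's built-in physical constants:

```python
import math
from scipy.constants import mu_0, epsilon_0, c

# Check that 1/sqrt(mu_0 * epsilon_0) reproduces the speed of light in vacuum.
c_derived = 1.0 / math.sqrt(mu_0 * epsilon_0)
print(f"{c_derived:.6e} m/s")                        # ~2.997925e+08
print(f"relative deviation from c: {abs(c_derived - c) / c:.2e}")
```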
Covariant form of the homogeneous wave equation
These relativistic equations can be written in contravariant form as

□ Aᵘ = 0

where the electromagnetic four-potential is

Aᵘ = (φ/c, A)

with the Lorenz gauge condition:

∂ᵤ Aᵘ = 0

and where

□ = (1/c²) ∂²/∂t² − ∇²

is the d'Alembert operator.
Homogeneous wave equation in curved spacetime
The electromagnetic wave equation is modified in two ways: the derivative is replaced with the covariant derivative, and a new term that depends on the curvature appears.

− Aᵅ;ᵝ_;ᵝ + Rᵅ_ᵝ Aᵝ = 0

where Rᵅ_ᵝ is the Ricci curvature tensor and the semicolon indicates covariant differentiation.
The generalization of the Lorenz gauge condition in curved spacetime is assumed:

Aᵘ_;ᵤ = 0
Inhomogeneous electromagnetic wave equation
Localized time-varying charge and current densities can act as sources of electromagnetic waves in a vacuum. Maxwell's equations can be written in the form of a wave equation with sources. The addition of sources to the wave equations makes the partial differential equations inhomogeneous.
Solutions to the homogeneous electromagnetic wave equation
The general solution to the electromagnetic wave equation is a linear superposition of waves of the form

E(r, t) = g(φ(r, t)) = g(k · r − ωt)

B(r, t) = g(φ(r, t)) = g(k · r − ωt)

for virtually any well-behaved function g of dimensionless argument φ, where ω is the angular frequency (in radians per second), and k = (k_x, k_y, k_z) is the wave vector (in radians per meter).
Although the function g can be and often is a monochromatic sine wave, it does not have to be sinusoidal, or even periodic. In practice, g cannot have infinite periodicity because any real electromagnetic wave must always have a finite extent in time and space. As a result, and based on the theory of Fourier decomposition, a real wave must consist of the superposition of an infinite set of sinusoidal frequencies.
In addition, for a valid solution, the wave vector and the angular frequency are not independent; they must adhere to the dispersion relation:

$$k = |\mathbf{k}| = \frac{\omega}{c} = \frac{2\pi}{\lambda}$$

where $k$ is the wavenumber and $\lambda$ is the wavelength. The variable $c$ can only be used in this equation when the electromagnetic wave is in a vacuum.
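For concreteness, a small sketch applying the dispersion relation in vacuum; the helper function and the 100 MHz example are illustrative choices, not drawn from the article:

```python
import math

c = 299_792_458.0                   # speed of light in vacuum, m/s

def vacuum_wave_parameters(f_hz: float):
    """Angular frequency, wavenumber, and wavelength of a vacuum EM wave."""
    omega = 2 * math.pi * f_hz      # rad/s
    k = omega / c                   # rad/m, from k = omega / c
    lam = 2 * math.pi / k           # m, equivalently c / f
    return omega, k, lam

omega, k, lam = vacuum_wave_parameters(100e6)   # a 100 MHz radio wave
print(f"omega = {omega:.3e} rad/s, k = {k:.4f} rad/m, lambda = {lam:.2f} m")
# lambda ≈ 3.00 m
```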
Monochromatic, sinusoidal steady-state
The simplest set of solutions to the wave equation result from assuming sinusoidal waveforms of a single frequency in separable form:

$$\mathbf{E}(\mathbf{r}, t) = \operatorname{Re}\left\{ \mathbf{E}(\mathbf{r})\, e^{i \omega t} \right\}$$

where

$i$ is the imaginary unit,
$\omega = 2 \pi f$ is the angular frequency in radians per second,
$f$ is the frequency in hertz, and
$e^{i \omega t} = \cos(\omega t) + i \sin(\omega t)$ is Euler's formula.
Plane wave solutions
Consider a plane defined by a unit normal vector

$$\mathbf{n} = \frac{\mathbf{k}}{k}.$$

Then planar traveling wave solutions of the wave equations are

$$\mathbf{E}(\mathbf{r}) = \mathbf{E}_0\, e^{-i \mathbf{k} \cdot \mathbf{r}}, \qquad \mathbf{B}(\mathbf{r}) = \mathbf{B}_0\, e^{-i \mathbf{k} \cdot \mathbf{r}}$$

where $\mathbf{r}$ is the position vector (in meters).

These solutions represent planar waves traveling in the direction of the normal vector $\mathbf{n}$. If we define the $z$ direction as the direction of $\mathbf{n}$, and the $x$ direction as the direction of $\mathbf{E}$, then by Faraday's law the magnetic field lies in the $y$ direction and is related to the electric field by the relation

$$\mathbf{B} = \frac{1}{c}\, \mathbf{n} \times \mathbf{E},$$

so that $|\mathbf{B}| = |\mathbf{E}|/c$. Because the divergence of the electric and magnetic fields are zero, there are no fields in the direction of propagation.
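The absence of fields along the propagation direction follows in one line from the plane-wave form; as a sketch of the standard argument (using the conventions above):

```latex
% Substituting E(r, t) = E_0 e^{i(\omega t - k \cdot r)} into Gauss's law
% in vacuum:
\nabla \cdot \mathbf{E}
  = -i\, \mathbf{k} \cdot \mathbf{E}_0 \, e^{i(\omega t - \mathbf{k} \cdot \mathbf{r})}
  = 0
  \quad \Longrightarrow \quad
  \mathbf{k} \cdot \mathbf{E}_0 = 0,
% and identically k . B_0 = 0, so both fields are perpendicular to the
% propagation direction k.
```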
This solution is the linearly polarized solution of the wave equations. There are also circularly polarized solutions in which the fields rotate about the normal vector.
Spectral decomposition
Because of the linearity of Maxwell's equations in a vacuum, solutions can be decomposed into a superposition of sinusoids. This is the basis for the Fourier transform method for the solution of differential equations. The sinusoidal solution to the electromagnetic wave equation takes the form

$$\mathbf{E}(\mathbf{r}, t) = \mathbf{E}_0 \cos(\omega t - \mathbf{k} \cdot \mathbf{r} + \phi_0)$$

$$\mathbf{B}(\mathbf{r}, t) = \mathbf{B}_0 \cos(\omega t - \mathbf{k} \cdot \mathbf{r} + \phi_0)$$

where

$t$ is time (in seconds),
$\omega$ is the angular frequency (in radians per second),
$\mathbf{k}$ is the wave vector (in radians per meter), and
$\phi_0$ is the phase angle (in radians).

The wave vector is related to the angular frequency by

$$k = |\mathbf{k}| = \frac{\omega}{c} = \frac{2\pi}{\lambda}$$

where $k$ is the wavenumber and $\lambda$ is the wavelength.
The electromagnetic spectrum is a plot of the field magnitudes (or energies) as a function of wavelength.
Multipole expansion
Assuming monochromatic fields varying in time as $e^{-i \omega t}$, if one uses Maxwell's equations to eliminate $\mathbf{E}$, the electromagnetic wave equation reduces to the Helmholtz equation for $\mathbf{B}$:

$$\left(\nabla^2 + k^2\right) \mathbf{B} = 0$$

with $k = \omega / c$ as given above. Alternatively, one can eliminate $\mathbf{B}$ in favor of $\mathbf{E}$ to obtain:

$$\left(\nabla^2 + k^2\right) \mathbf{E} = 0$$

A generic electromagnetic field with frequency $\omega$ can be written as a sum of solutions to these two equations. The three-dimensional solutions of the Helmholtz equation can be expressed as expansions in spherical harmonics with coefficients proportional to the spherical Bessel functions. However, applying this expansion to each vector component of $\mathbf{E}$ or $\mathbf{B}$ will give solutions that are not generically divergence-free ($\nabla \cdot \mathbf{E} = \nabla \cdot \mathbf{B} = 0$), and therefore require additional restrictions on the coefficients.
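To make the statement about spherical Bessel functions concrete, the sketch below numerically checks that the radial factor $j_l(kr)$ satisfies the spherical Bessel equation, which is what the Helmholtz equation reduces to for each spherical-harmonic component. It uses SciPy's spherical_jn; the grid and the values of $l$ and $k$ are arbitrary illustrative choices:

```python
import numpy as np
from scipy.special import spherical_jn

l, k = 2, 1.5
r = np.linspace(0.1, 20.0, 4000)
f = spherical_jn(l, k * r)          # radial factor j_l(k r)

df = np.gradient(f, r)              # numerical first derivative
d2f = np.gradient(df, r)            # numerical second derivative

# Spherical Bessel equation: r^2 f'' + 2 r f' + (k^2 r^2 - l(l+1)) f = 0
residual = r**2 * d2f + 2.0 * r * df + (k**2 * r**2 - l * (l + 1)) * f
print(f"max |residual| (interior) = {np.max(np.abs(residual[5:-5])):.1e}")
# small, limited only by finite-difference error
```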
The multipole expansion circumvents this difficulty by expanding not $\mathbf{E}$ or $\mathbf{B}$, but $\mathbf{r} \cdot \mathbf{E}$ or $\mathbf{r} \cdot \mathbf{B}$ into spherical harmonics. These expansions still solve the original Helmholtz equations for $\mathbf{E}$ and $\mathbf{B}$ because for a divergence-free field $\mathbf{F}$, $\nabla^2 (\mathbf{r} \cdot \mathbf{F}) = \mathbf{r} \cdot (\nabla^2 \mathbf{F})$. The resulting expressions for a generic electromagnetic field are:
$$\mathbf{E} = \sum_{l,m} \left[ a_E(l,m)\, \mathbf{E}^{(E)}_{l,m} + a_M(l,m)\, \mathbf{E}^{(M)}_{l,m} \right], \qquad \mathbf{B} = \sum_{l,m} \left[ a_E(l,m)\, \mathbf{B}^{(E)}_{l,m} + a_M(l,m)\, \mathbf{B}^{(M)}_{l,m} \right]$$

where $\mathbf{E}^{(E)}_{l,m}$ and $\mathbf{B}^{(E)}_{l,m}$ are the electric multipole fields of order $(l, m)$, $\mathbf{E}^{(M)}_{l,m}$ and $\mathbf{B}^{(M)}_{l,m}$ are the corresponding magnetic multipole fields, and $a_E(l,m)$ and $a_M(l,m)$ are the coefficients of the expansion. The multipole fields are given by

$$\mathbf{B}^{(E)}_{l,m} = \left( B^{(1)}_l\, h^{(1)}_l(kr) + B^{(2)}_l\, h^{(2)}_l(kr) \right) \mathbf{X}_{l,m}, \qquad \mathbf{E}^{(E)}_{l,m} = \frac{i}{k}\, \nabla \times \mathbf{B}^{(E)}_{l,m}$$

(with the magnetic multipole fields obtained by exchanging the roles of $\mathbf{E}$ and $\mathbf{B}$), where $h^{(1,2)}_l(kr)$ are the spherical Hankel functions, $B^{(1)}_l$ and $B^{(2)}_l$ are determined by boundary conditions, and

$$\mathbf{X}_{l,m} = \frac{1}{\sqrt{l(l+1)}}\, \mathbf{L}\, Y_{l,m}, \qquad \mathbf{L} = -i\, \mathbf{r} \times \nabla,$$

are vector spherical harmonics normalized so that

$$\int \mathbf{X}^{*}_{l',m'} \cdot \mathbf{X}_{l,m}\, d\Omega = \delta_{l l'}\, \delta_{m m'}.$$
The multipole expansion of the electromagnetic field finds application in a number of problems involving spherical symmetry, for example antenna radiation patterns or nuclear gamma decay. In these applications, one is often interested in the power radiated in the far field. In this region, the $\mathbf{E}$ and $\mathbf{B}$ fields asymptotically approach
The angular distribution of the time-averaged radiated power is then given by
See also
Theory and experiment
Maxwell's equations
Wave equation
Partial differential equation
Computational electromagnetics
Electromagnetic radiation
Charge conservation
Light
Electromagnetic spectrum
Optics
Special relativity
General relativity
Inhomogeneous electromagnetic wave equation
Photon polarization
Larmor power formula
Phenomena and applications
Rainbow
Cosmic microwave background
Laser
Laser fusion
Photography
X-ray
X-ray crystallography
Radar
Radio wave
Optical computing
Microwave
Holography
Microscope
Telescope
Gravitational lens
Black-body radiation
Biographies
André-Marie Ampère
Albert Einstein
Michael Faraday
Heinrich Hertz
Oliver Heaviside
James Clerk Maxwell
Hendrik Lorentz
Notes
Further reading
Electromagnetism
Journal articles
Maxwell, James Clerk, "A Dynamical Theory of the Electromagnetic Field", Philosophical Transactions of the Royal Society of London 155, 459-512 (1865). (This article accompanied a December 8, 1864 presentation by Maxwell to the Royal Society.)
Undergraduate-level textbooks
Edward M. Purcell, Electricity and Magnetism (McGraw-Hill, New York, 1985).
Hermann A. Haus and James R. Melcher, Electromagnetic Fields and Energy (Prentice-Hall, 1989).
Banesh Hoffmann, Relativity and Its Roots (Freeman, New York, 1983).
David H. Staelin, Ann W. Morgenthaler, and Jin Au Kong, Electromagnetic Waves (Prentice-Hall, 1994).
Charles F. Stevens, The Six Core Theories of Modern Physics (MIT Press, 1995).
Markus Zahn, Electromagnetic Field Theory: A Problem Solving Approach (John Wiley & Sons, 1979).
Graduate-level textbooks
Landau, L. D., The Classical Theory of Fields (Course of Theoretical Physics: Volume 2) (Butterworth-Heinemann, Oxford, 1987).
Charles W. Misner, Kip S. Thorne, and John Archibald Wheeler, Gravitation (W. H. Freeman, New York, 1970). (Provides a treatment of Maxwell's equations in terms of differential forms.)
Vector calculus
P. C. Matthews, Vector Calculus (Springer, 1998).
H. M. Schey, Div, Grad, Curl, and All That: An Informal Text on Vector Calculus, 4th edition (W. W. Norton & Company, 2005).
Electrodynamics
Electromagnetic radiation
Electromagnetism
Hyperbolic partial differential equations
Mathematical physics
Equations of physics | Electromagnetic wave equation | [
"Physics",
"Mathematics"
] | 2,046 | [
"Electromagnetism",
"Physical phenomena",
"Equations of physics",
"Electromagnetic radiation",
"Applied mathematics",
"Theoretical physics",
"Mathematical objects",
"Equations",
"Radiation",
"Fundamental interactions",
"Electrodynamics",
"Mathematical physics",
"Dynamical systems"
] |
2,925,061 | https://en.wikipedia.org/wiki/SOLEIL | SOLEIL ("Sun" in French) is a synchrotron facility near Paris, France. It performed its first acceleration of electrons on May 14, 2006. The name SOLEIL is a backronym for Source Optimisée de Lumière d’Énergie Intermédiaire du LURE (LURE optimised intermediary energy light source), LURE meaning Laboratoire pour l'Utilisation du Rayonnement Électromagnétique.
The facility is run by a civil corporation held by the French National Centre for Scientific Research (CNRS) and the French Alternative Energies and Atomic Energy Commission (CEA), two French national research agencies. It is located in Saint-Aubin in the Essonne département, a south-western suburb of Paris, near Gif-sur-Yvette and Saclay, which host other facilities for nuclear and particle physics.
The facility is an associate member of the University of Paris-Saclay.
SOLEIL also hosts IPANEMA, the European research platform on ancient materials (archaeology, palaeontology, past environments and cultural heritage), a joint CNRS / French Ministry of Culture and Communication research unit.
SOLEIL covers fundamental research needs in physics, chemistry, material sciences, life sciences (notably in the crystallography of biological macromolecules), earth sciences, and atmospheric sciences. It offers the use of a wide range of spectroscopic methods from infrared to X-rays, and structural methods such as X-ray diffraction and scattering.
Main parameters
SOLEIL contains electrons travelling with an energy of 2.75 GeV around a 354 m circumference storage ring. Moving at almost the speed of light, the electrons take about 1.2 μs to travel around the ring, circling it roughly 847,000 times per second.
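The quoted revolution time and rate follow directly from the circumference and the speed of light; the arithmetic below is an illustrative consistency check, not an official SOLEIL computation:

```python
circumference_m = 354.0            # storage ring circumference
c = 299_792_458.0                  # speed of light, m/s (electrons move at ~c)

revolution_time_s = circumference_m / c
revolutions_per_s = 1.0 / revolution_time_s

print(f"revolution time ≈ {revolution_time_s * 1e6:.2f} us")   # ≈ 1.18 us
print(f"revolutions per second ≈ {revolutions_per_s:,.0f}")    # ≈ 847,000
```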
Most Cited Scientists at Synchrotron SOLEIL
According to Google Scholar, as of 2024 these are the ten most cited scientists of Synchrotron SOLEIL:
John Bozek
Citations: 23,755
Research Areas: X-ray physics, synchrotron radiation, XFEL, chemical physics, ultrafast X-ray
Jose Avila
Citations: 15,273
Research Areas: Not specified
Amina Taleb Ibrahimi
Citations: 12,628
Research Areas: Condensed matter physics, low-dimensional systems
Timm Weitkamp
Citations: 11,773
Research Areas: X-ray imaging, microtomography, X-ray microscopy, X-ray phase contrast imaging
Laurent Nahon
Citations: 10,754
Research Areas: Chirality, circular dichroism, molecular photoionization, VUV spectroscopy, polarimetry
Andrea Zitolo
Citations: 8,535
Research Areas: Physical chemistry, Material sciences, materials for energy and hydrogen
Patrick Le Fèvre
Citations: 8,187
Research Areas: Physics
François Bertran
Citations: 7,977
Research Areas: Physics
Pavel Dudin
Citations: 7,775
Research Areas: Band structure, materials science, topological insulators, graphene, superconductors
Pierre Legrand
Citations: 7,706
Research Areas: Structural virology, nucleic acid-protein interaction, crystallography, tomography, synchrotron
References
External links
Official website
LURE website
Lightsources.org
Official website of IPANEMA
Synchrotron radiation facilities
Research institutes in France | SOLEIL | [
"Physics",
"Materials_science"
] | 696 | [
"Particle physics stubs",
"Materials testing",
"Particle physics",
"Synchrotron radiation facilities"
] |
2,925,371 | https://en.wikipedia.org/wiki/Radiative%20transfer | Radiative transfer (also called radiation transport) is the physical phenomenon of energy transfer in the form of electromagnetic radiation. The propagation of radiation through a medium is affected by absorption, emission, and scattering processes. The equation of radiative transfer describes these interactions mathematically. Equations of radiative transfer have application in a wide variety of subjects including optics, astrophysics, atmospheric science, and remote sensing. Analytic solutions to the radiative transfer equation (RTE) exist for simple cases but for more realistic media, with complex multiple scattering effects, numerical methods are required.
The present article is largely focused on the condition of radiative equilibrium.
Definitions
The fundamental quantity that describes a field of radiation is called spectral radiance in radiometric terms (in other fields it is often called specific intensity). For a very small area element in the radiation field, there can be electromagnetic radiation passing in both senses in every spatial direction through it. In radiometric terms, the passage can be completely characterized by the amount of energy radiated in each of the two senses in each spatial direction, per unit time, per unit area of surface of sourcing passage, per unit solid angle of reception at a distance, per unit wavelength interval being considered (polarization will be ignored for the moment).
In terms of the spectral radiance, $I_\nu$, the energy flowing across an area element of area $da$ located at $\mathbf{r}$ in time $dt$ in the solid angle $d\Omega$ about the direction $\hat{\mathbf{n}}$ in the frequency interval $\nu$ to $\nu + d\nu$ is

$$dE_\nu = I_\nu(\mathbf{r}, \hat{\mathbf{n}}, t)\, \cos\theta \, d\nu \, da \, d\Omega \, dt$$

where $\theta$ is the angle that the unit direction vector $\hat{\mathbf{n}}$ makes with a normal to the area element. The units of the spectral radiance are seen to be energy/time/area/solid angle/frequency. In MKS units this would be W·m−2·sr−1·Hz−1 (watts per square-metre-steradian-hertz).
The equation of radiative transfer
The equation of radiative transfer simply says that as a beam of radiation travels, it loses energy to absorption, gains energy by emission processes, and redistributes energy by scattering. The differential form of the equation for radiative transfer is:

$$\frac{1}{c}\frac{\partial I_\nu}{\partial t} + \hat{\Omega} \cdot \nabla I_\nu + (k_{\nu,s} + k_{\nu,a})\, \rho\, I_\nu = j_\nu\, \rho + \frac{1}{4\pi}\, k_{\nu,s}\, \rho \int_{\Omega} I_\nu \, d\Omega$$

where $c$ is the speed of light, $j_\nu$ is the emission coefficient, $k_{\nu,s}$ is the scattering opacity, $k_{\nu,a}$ is the absorption opacity, $\rho$ is the mass density, and the $\frac{1}{4\pi} k_{\nu,s}\, \rho \int_{\Omega} I_\nu\, d\Omega$ term represents radiation scattered from other directions onto a surface.
Solutions to the equation of radiative transfer
Solutions to the equation of radiative transfer form an enormous body of work. The differences, however, are essentially due to the various forms for the emission and absorption coefficients. If scattering is ignored, then a general steady state solution in terms of the emission coefficient $j_\nu$ and absorption coefficient $\alpha_\nu$ may be written:

$$I_\nu(s) = I_\nu(s_0)\, e^{-\tau_\nu(s_0, s)} + \int_{s_0}^{s} j_\nu(s')\, e^{-\tau_\nu(s', s)}\, ds'$$

where $\tau_\nu(s_1, s_2)$ is the optical depth of the medium between positions $s_1$ and $s_2$:

$$\tau_\nu(s_1, s_2) \equiv \int_{s_1}^{s_2} \alpha_\nu(s)\, ds$$
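As an illustration of how this formal solution is evaluated in practice, the minimal sketch below integrates it numerically along a ray with uniform emission and absorption coefficients; all names and the specific setup (a medium with total optical depth 5) are my own illustrative choices:

```python
import numpy as np

def trapz(y, x):
    """Plain trapezoidal rule over samples y(x)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def formal_solution(s, j, a, I0=0.0):
    """Radiance emerging at s[-1], given emission j(s) and absorption a(s)."""
    # Optical depth from each sample point to the end of the ray
    seg = 0.5 * (a[1:] + a[:-1]) * np.diff(s)
    tau_to_end = np.concatenate([seg[::-1].cumsum()[::-1], [0.0]])
    emitted = trapz(j * np.exp(-tau_to_end), s)
    return I0 * np.exp(-tau_to_end[0]) + emitted

s = np.linspace(0.0, 10.0, 2001)
j = np.full_like(s, 0.2)        # uniform emission coefficient
a = np.full_like(s, 0.5)        # uniform absorption coefficient (tau = 5)
print(formal_solution(s, j, a, I0=1.0))
# ≈ e^-5 + (j/a)(1 - e^-5) ≈ 0.404: the emergent radiance approaches the
# source function j/a once the medium is optically thick.
```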
Local thermodynamic equilibrium
A particularly useful simplification of the equation of radiative transfer occurs under the conditions of local thermodynamic equilibrium (LTE). It is important to note that local equilibrium may apply only to a certain subset of particles in the system. For example, LTE is usually applied only to massive particles. In a radiating gas, the photons being emitted and absorbed by the gas do not need to be in a thermodynamic equilibrium with each other or with the massive particles of the gas in order for LTE to exist.
In this situation, the absorbing/emitting medium consists of massive particles which are locally in equilibrium with each other, and therefore have a definable temperature (Zeroth Law of Thermodynamics). The radiation field is not, however, in equilibrium and is being entirely driven by the presence of the massive particles. For a medium in LTE, the emission coefficient and absorption coefficient are functions of temperature and density only, and are related by:

$$\frac{j_\nu}{\alpha_\nu} = B_\nu(T)$$

where $B_\nu(T)$ is the black body spectral radiance at temperature $T$. The solution to the equation of radiative transfer is then:

$$I_\nu(s) = I_\nu(s_0)\, e^{-\tau_\nu(s_0, s)} + \int_{s_0}^{s} B_\nu(T(s'))\, \alpha_\nu(s')\, e^{-\tau_\nu(s', s)} \, ds'$$
Knowing the temperature profile and the density profile of the medium is sufficient to calculate a solution to the equation of radiative transfer.
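A minimal worked example, assuming an isothermal slab with constant absorption coefficient, in which case the integral above collapses to $I_\nu = I_\nu(0)\, e^{-\tau_\nu} + B_\nu(T)\,(1 - e^{-\tau_\nu})$; the frequency, temperature, and optical depth below are illustrative values of my choosing:

```python
import numpy as np

h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23   # SI constants

def planck_B_nu(nu_hz, T_k):
    """Black-body spectral radiance B_nu(T), in W m^-2 sr^-1 Hz^-1."""
    return (2.0 * h * nu_hz**3 / c**2) / np.expm1(h * nu_hz / (kB * T_k))

nu, T, tau = 5.0e14, 5800.0, 2.0    # visible frequency, solar-like temperature
I_in = 0.0                          # no radiation entering the slab
I_out = I_in * np.exp(-tau) + planck_B_nu(nu, T) * (1.0 - np.exp(-tau))
print(f"I_nu = {I_out:.3e} W m^-2 sr^-1 Hz^-1")   # -> B_nu(T) as tau grows
```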
The Eddington approximation
The Eddington approximation is distinct from the two-stream approximation. The two-stream approximation assumes that the intensity is constant with angle in the upward hemisphere, with a different constant value in the downward hemisphere. The Eddington approximation instead assumes that the intensity is a linear function of $\mu = \cos\theta$, i.e.

$$I_\nu(\mu, z) = a(z) + \mu\, b(z)$$

where $z$ is the normal direction to the slab-like medium. Note that expressing angular integrals in terms of $\mu$ simplifies things because $d\mu = -\sin\theta \, d\theta$ appears in the Jacobian of integrals in spherical coordinates. The Eddington approximation can be used to obtain the spectral radiance in a "plane-parallel" medium (one in which properties only vary in the perpendicular direction) with isotropic frequency-independent scattering.

Extracting the first few moments of the spectral radiance with respect to $\mu$ yields

$$J_\nu = \frac{1}{2}\int_{-1}^{1} I_\nu \, d\mu = a, \qquad H_\nu = \frac{1}{2}\int_{-1}^{1} \mu\, I_\nu \, d\mu = \frac{b}{3}, \qquad K_\nu = \frac{1}{2}\int_{-1}^{1} \mu^2\, I_\nu \, d\mu = \frac{a}{3}.$$

Thus the Eddington approximation is equivalent to setting $K_\nu = J_\nu / 3$. Higher order versions of the Eddington approximation also exist, and consist of more complicated linear relations of the intensity moments. This extra equation can be used as a closure relation for the truncated system of moments.

Note that the first two moments have simple physical meanings. $J_\nu$ is the isotropic intensity at a point, and $H_\nu$ is the flux through that point in the $z$ direction.
The radiative transfer through an isotropically scattering medium with scattering coefficient $\sigma_\nu$ at local thermodynamic equilibrium is given by

$$\mu \frac{dI_\nu}{dz} = -\alpha_\nu (I_\nu - B_\nu) - \sigma_\nu (I_\nu - J_\nu).$$

Integrating over all angles yields

$$\frac{dH_\nu}{dz} = \alpha_\nu (B_\nu - J_\nu).$$

Premultiplying by $\mu$, and then integrating over all angles gives

$$\frac{dK_\nu}{dz} = -(\alpha_\nu + \sigma_\nu)\, H_\nu.$$

Substituting in the closure relation $K_\nu = J_\nu / 3$, and differentiating with respect to $z$ allows the two above equations to be combined to form the radiative diffusion equation

$$\frac{d^2 J_\nu}{dz^2} = 3 \alpha_\nu (\alpha_\nu + \sigma_\nu)(J_\nu - B_\nu).$$
This equation shows how the effective optical depth in scattering-dominated systems may be significantly different from that given by the scattering opacity if the absorptive opacity is small.
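A common way to state this quantitatively, included here as an illustration from standard radiative-transfer references rather than from the text above, is through an effective optical depth:

```latex
% With absorptive optical depth \tau_a and scattering optical depth \tau_s,
% the effective optical depth of a scattering-dominated medium scales as
\tau_{\mathrm{eff}} \simeq \sqrt{\tau_a \left( \tau_a + \tau_s \right)},
% so a medium that is very thick to scattering (\tau_s \gg 1) can still be
% effectively thin if its absorptive opacity \tau_a is sufficiently small.
```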
See also
Beer-Lambert law
Kirchhoff's law of thermal radiation
List of atmospheric radiative transfer codes
Optical depth
Planck's law
Radiative transfer equation and diffusion theory for photon transport in biological tissue
Schwarzschild's equation for radiative transfer
Vector radiative transfer
References
Further reading
Radiometry
Electromagnetic radiation
Atmospheric radiation | Radiative transfer | [
"Physics",
"Engineering"
] | 1,235 | [
"Physical phenomena",
"Telecommunications engineering",
"Electromagnetic radiation",
"Radiation",
"Radiometry"
] |
5,283,757 | https://en.wikipedia.org/wiki/Tearing | Tearing is the act of breaking apart a material by force, without the aid of a cutting tool. A tear in a piece of paper, fabric, or some other similar object may be the result of the intentional effort with one's bare hands, or be accidental. Unlike a cut, which is generally on a straight or patterned line controlled by a tool such as scissors, a tear is generally uneven and, for the most part, unplanned. An exception is a tear along a perforated line, as found on a roll of toilet paper or paper towels, which has been previously partially cut, so the effort of tearing will probably produce a straight line.
Materials vary in their susceptibility to tearing. Some materials may be quite resistant to tearing when they are in their full form, but when a small cut or tear is made, the material becomes compromised, and the effort needed to continue tearing along that line becomes less.
Materials can be characterized by standard test methods to measure their tear resistance. There are several applicable standards, which vary around the world. The variables that affect tear strength have been defined as "the shape of the test piece, speed of stretching and temperature."
See also
Cutting
Deformation
Ripstop
Screen tearing
References
Solid mechanics | Tearing | [
"Physics"
] | 252 | [
"Solid mechanics",
"Materials stubs",
"Materials",
"Mechanics",
"Matter"
] |
5,284,087 | https://en.wikipedia.org/wiki/Drug-eluting%20stent | A drug-eluting stent (DES) is a tube made of a mesh-like material used to treat narrowed arteries in medical procedures both mechanically (by providing a supporting scaffold inside the artery) and pharmacologically (by slowly releasing a pharmaceutical compound). A DES is inserted into a narrowed artery using a delivery catheter usually inserted through a larger artery in the groin or wrist. The stent assembly has the DES mechanism attached towards the front of the stent, and usually is composed of the collapsed stent over a collapsed polymeric balloon mechanism, the balloon mechanism is inflated and used to expand the meshed stent once in position. The stent expands, embedding into the occluded artery wall, keeping the artery open, thereby improving blood flow. The mesh design allows for stent expansion and also for new healthy vessel endothelial cells to grow through and around it, securing it in place.
A DES is different from other types of stents in that it has a coating that delivers medication directly into the blood vessel wall. The stent slowly releases a drug to prevent the growth of scar tissue and new obstructive plaque material which caused the original blood vessel stenosis, this clogging of a stent is termed restenosis. A DES is fully integrated with a catheter delivery system and is viewed as one integrated medical device.
DESs are commonly used in the treatment of narrowed arteries in the heart (coronary artery disease, or CAD), but also elsewhere in the body, especially the legs (peripheral artery disease). Over the last three decades, coronary stenting has matured into a primary minimally invasive treatment tool for managing CAD. Coronary artery stenting is inherently tied to percutaneous coronary intervention (PCI) procedures. PCI is a minimally invasive procedure performed via a catheter (not by open-chest surgery); it is the medical procedure used to place a DES in narrowed coronary arteries. PCI procedures are performed by an interventional cardiologist using fluoroscopic imaging techniques to see the location of the required DES placement. PCI uses larger peripheral arteries in the arms or the legs to thread a catheter/DES device through the arterial system and place the DES in the narrowed coronary artery or arteries. Multiple stents are often used, depending on the degree of blockage and the number of diseased coronary arteries being treated.
Design
A drug-eluting stent (DES) is a small mesh tube that is placed in the arteries to keep them open in the treatment of vascular disease. The stent slowly releases a drug to block cell proliferation (a biological process of cell growth and division), thus preventing the arterial narrowing (stenosis) that can occur after stent implantation. While such stents can be used in various arteries throughout the body, they are commonly placed in the coronary arteries to treat coronary heart disease. DES products are integrated medical devices and are part of a percutaneous coronary intervention (PCI) delivery system.
DES is a medical device with several key properties: it functions as a structural scaffold, physically keeping an artery open to ensure blood flow; the device has specific drug delivery features, and the chosen drug is critical for its effectiveness. The drug, the hallmark component of the device, is selected for its suitability in inhibiting restenosis and for its pharmacokinetics. Apart from the drug, the materials used in the fabrication of the device are also essential and are carefully chosen for their biocompatibility and durability in a biological environment, such as human blood; these materials must also withstand the constant motion of the heart's beat and be suitable for future patient imaging using magnetic resonance imaging (MRI) technologies, which employ high magnetic fields.
Other components, such as the catheter design, also play significant roles in the device's overall functionality and effectiveness.
DES are typically composed of metal alloys, most commonly stainless steel or cobalt-chromium, but can also be made of other materials such as platinum-chromium or nickel-titanium. The stent is often coated with a polymer to control the release of drugs. The role of polymers in drug delivery is significant as they regulate the rate at which the drug is released into the surrounding tissue. There are also polymer-free stents where the drug is directly coated on the stent or contained in reservoirs within the stent.
The design of the stent includes struts, which are thin wire structures that make up the stent frame. The strut thickness can influence the stent's performance, with thinner struts generally being associated with lower restenosis rates and reduced thrombosis risk.
Most DES are balloon-expandable, meaning they are mounted on a balloon catheter and expand when the balloon is inflated. There are also self-expanding stents, which automatically expand when deployed. The very first stent, introduced in 1986, was of this type.
The stent tube mesh is initially collapsed onto the catheter; in this collapsed state, it is small enough to be passed through relatively narrow arteries and then expanded at its destination, pushing firmly against the diseased artery wall.
The pharmaceutical compounds that DES emit are antiproliferative agents such as sirolimus, everolimus, zotarolimus, paclitaxel and biolimus. These drugs help prevent the arterial narrowing that can occur after stent implantation. These drugs are also used for other purposes, that involve moderating the immune system or treating cancer. They work by inhibiting cell growth. In DES, they are used in very small amounts and for a short time, and only in the area where the stent is placed.
There is a distinction between coronary stents and peripheral stents. While both are used to prevent the narrowing of arteries, coronary stents are specifically for the coronary arteries, while peripheral stents are for any other arteries in the body. Peripheral stents are mostly bare metal ones; some peripheral DES, of the self-expanding type, are used in arteries of the legs.
Bioresorbable DES are made of materials that can be absorbed by the body over time, potentially reducing potential long-term complications associated with permanent stents.
Uses
Atherosclerosis: a general background
Atherosclerosis is a chronic disease that affects the large and medium-sized arteries. It is characterized by the accumulation of calcium, fats (such as cholesterol) and other substances in the innermost layer of the endothelium, a layer of cells that line the interior surface of blood vessels. Atherosclerosis is considered to be the most common form of arteriosclerosis, which refers to the loss of arterial elasticity caused by thickening and stiffening of blood vessels.
Atherosclerosis can begin as early as childhood with the development of small "fatty streaks" within arteries. These streaks are essentially deposits of fat. Over time, these initial lesions grow larger and become thicker, forming atheromas (atherosclerotic plaques).
Drug-eluting stents (DESs) are used in the treatment of atherosclerosis in both coronary interventions and peripheral arterial interventions:
In coronary interventions, DESs are used to treat coronary artery disease, which is primarily caused by atherosclerosis. The stents are inserted into narrowed coronary arteries and then expanded to open up the narrowed artery. The drug compound released by the stents suppresses cellular growth in the newly stented area, reducing the potential for blockage within the stent area itself.
In peripheral arterial interventions, DESs have established themselves as the go-to choice for addressing symptomatic peripheral arterial disease (PAD). These highly effective stents are deployed in the treatment of peripheral arterial occlusive disease (PAOD), a condition that shares resemblances with coronary artery disease but specifically affects the peripheral arteries. By employing DESs, healthcare professionals can provide optimal care and intervention to manage PAOD, ultimately improving patient outcomes and mitigating associated complications.
DESs are used in the management of atherosclerosis in both coronary and peripheral arterial interventions. They help improve blood flow and reduce the risk of restenosis, thereby improving patient outcomes. The use of DESs is accompanied by appropriate medical therapy and lifestyle modifications to manage atherosclerosis effectively.
Stenosis and restenosis of blood vessels
Stenosis of blood vessels refers to the narrowing of the blood vessels, which can restrict blood flow to the organs and tissues. This condition is often caused by the buildup of fatty deposits in the arteries, a process also called atherosclerosis.
In the context of stents, stenosis is a significant concern. Stents are inserted into a narrowed artery during a procedure known as angioplasty. The stents help to open up the narrowed artery and improve blood flow. However, over time, the treated artery can close up again, a condition known as restenosis.
Restenosis, or in-stent restenosis, is a blockage or narrowing that comes back in the portion of the artery previously treated with a stent. Restenosis tends to happen three to six months after the procedure, and it is even more likely to occur when no stent is used at all.
When restenosis occurs, another procedure may be needed to correct the problem, such as the placement of a DES that gradually release a drug compound that suppresses cellular growth, thereby reducing the potential for blockage within the stent area itself. This therapy significantly reduces the occurrence of adverse events post-stenting.
Technically, a DES is a mesh tube implant device used in angioplasty procedures to treat stenosis of blood vessels and prevent restenosis: the stent, which elutes drugs, is implanted into the blood vessel to help keep the vessel open and improve blood flow. Specifically, drug-eluting stents are used in the treatment of various medical conditions, usually at the site of stenotic or occlusive arterial lesions, but one of the primary medical uses is in the treatment of coronary artery disease. Stents are inserted into narrowed coronary arteries, where the narrowing is primarily caused by atherosclerosis, and are then expanded to open up the narrowed artery. Such stents gradually release a drug compound into the newly stented area that suppresses cellular growth, thereby reducing the potential for blockage within the stent area itself. Such blockage is termed in-stent restenosis (ISR) and is most often caused by excessive cell proliferation or thrombi (blood clots). Anticoagulation therapy (blood thinners) has become a standard treatment following the placement of DES; this therapy significantly reduces the occurrence of adverse events post-stenting.
Coronary interventions
DESs have played a transformative role in the management of coronary artery disease. These stents are tiny, flexible mesh tubes employed during percutaneous coronary intervention (PCI) to address narrowed coronary arteries. What sets them apart is their special coating, which incorporates a drug delivery system that enables controlled release of medication over a specific period, typically within the first 30 to 45 days following implantation. This medication aids in inhibiting the formation of scar tissue within the stent and subsequent re-narrowing of the blood vessel.
PCI is a minimally invasive procedure. It involves the placement of a drug-eluting stent (DES) in a coronary artery. This procedure, previously known as angioplasty with a stent, is considered non-surgical as it is performed through a small puncture in a peripheral artery, avoiding the need to open the chest wall. While bleeding from the puncture site was once a concern, advancements in PCI practices have mitigated this issue through the use of pressure bands and arterial closure systems. Modern DES/PCI procedures are generally painless, although some mild discomfort may be experienced. In PCI, multiple DES are sometimes implanted within a single patient; the decision to use multiple stents is typically contingent on the extent of the coronary artery disease present and the number of diseased coronary arteries that require treatment.
Peripheral arterial interventions
DESs have emerged as the primary therapeutic approach for managing symptomatic peripheral arterial disease (PAD). These specialized stents are now widely utilized in the treatment of peripheral arterial occlusive disease (PAOD), a condition that shares similarities with coronary artery disease but affects the peripheral arteries. By deploying DESs, healthcare professionals can effectively address and alleviate the complications associated with PAOD, enhancing patient outcomes and quality of life. The use of DESs in peripheral arterial interventions has shown encouraging results in terms of primary patency (PP) and target lesion revascularization (TLR) compared with bare-metal stents (BMSs).
Different types of DESs are available on the market, each with different concentrations of drugs and showing varying efficacy. Among the different DESs, sirolimus-eluting stents and everolimus-eluting stents were found to be more effective than paclitaxel-eluting stents.
Clinical indications
PCI and stent placement are considered when someone shows signs of reduced blood flow in the arteries that supply the heart or when tests, such as different types of coronary artery imaging, show a blockage in those arteries.
Symptoms can include:
severe, pressure-like chest pain unrelieved by rest;
shortness of breath, fatigue, lightheadedness;
palpitations;
atypical symptoms: nausea, vomiting, indigestion, confusion, back pain.
In a medical setting, doctors cannot rely solely on what people say about where their pain comes from or how it feels, because descriptions of chest pain caused by reduced blood flow to the heart vary greatly from person to person and may not match what is typically taught in medical education or described in books and articles.
Contraindications
DES is not recommended in some cases as it may do more harm than good. DES is not suitable:
when individuals have a bleeding tendency;
when a coronary artery has no clear and identifiable narrowing;
when only one diseased coronary artery supplies oxygenated blood to the heart muscle. During stent placement, there is a short period of blood flow blockage by the balloon inflation. This blockage time is often longer than twenty seconds to allow the DES to expand and embed into the arterial wall. In this case, this time may be too long and cause serious events due to lack of blood to the heart muscle.
Bleeding disorders make DES unsuitable because of the need for anticoagulation drugs (blood thinners) during the procedure and in post-stenting aftercare. Other factors that could rule out the use of stents include a history of in-stent blockage, bleeding problems, complex or unsuitable coronary anatomy, or a short life expectancy due to other serious medical conditions.
Risks and complications
Risks from the procedure
Stent placement risks include bleeding, allergic reactions to the contrast agents used to visualize the coronary arteries, and myocardial infarction. With percutaneous coronary intervention (PCI), the requirement for emergency coronary artery bypass graft (CABG) surgery has decreased as better practices have been introduced. In some situations, coronary stenting is permitted in hospitals without cardiac surgery facilities, but such permission remains controversial because of the rare but unpredictable risk of coronary artery perforation.
Stent thrombosis risks
A complication of coronary stenting is stent thrombosis (blood clots). This occurs when a new clot forms within the stent and occludes blood flow, causing a heart attack.
In-stent restenosis risks (ISR)
DES were designed to specifically combat issues of restenosis that occurred with older bare-metal stents (BMS). Though less frequent with drug-eluting stents, restenosis can still occur.
Since the advent of DES technology, the incidence of ISR has significantly decreased.
Usage outside the scope of typical regulatory approval
DES have been shown to be superior to BMS in reducing short-term complications of stenting in saphenous vein grafts. However, the use of DESs in bypass grafts was neither their originally intended use nor within the scope of their original regulatory approval (US FDA, European Medicines Agency, etc.). The practice of using a medical device or drug in a way not specified in the original or current approved labeling is often referred to as "off-label" use.
In regions where cardiac stenting has become commonplace, think tanks and advocacy groups express concern about the overzealous use of stents, because patients who received stents for unapproved reasons often have worse outcomes compared to patients who received stents for approved uses.
Clinical procedure
DES placement
People who receive a coronary stent have different needs depending on their medical condition. Some patients are actually having a heart attack and need immediate life-saving emergency care. Other patients are at high risk of having a heart attack in the very near future. For people from each of these groups, PCI procedures may vary slightly, with particular modifications as to how they are sedated, pain management, and broader intensive care issues such as breathing support.
Many people who are not in critical care situations are usually fully awake during the PCI procedure and DES placement, but they receive local anesthetic at the site of catheter entry, to ensure there is no pain. Different sedation and pain management practices are used by different medical institutions and practitioners, but patient comfort is always a primary consideration.
The catheter/stent system is inserted into the body by piercing a peripheral artery (an artery in the arm or leg) and moved through the arterial system to deliver the DES into the blocked coronary artery. The stent is then expanded to widen (open) blocked or narrowed coronary arteries (narrowed by plaque buildup), caused by a condition called atherosclerosis. Peripheral arterial access is usually through the femoral (upper leg) or the radial artery (arm/wrist) and less often done through the brachial or ulnar artery (wrist/arm). In the past, controlling bleeding at the point of arterial access after the procedure was a problem. Modern arterial pressure bands and arterial closure systems now exist, which have helped control bleeding after the procedure, but it is still a concern.
Modern catheter/stent systems are integrated medical devices, made of a guidewire, catheter, balloon, and stent. The stent tube mesh is initially collapsed onto the balloon of the device, and it is small enough to be passed through relatively narrow peripheral arteries. When in position, the balloon is inflated by introducing physiological saline, which pushes the overlying stent firmly into the diseased artery wall; inflation time and pressure are recorded during this placement procedure. After placement, the balloon is deflated and the device is removed from the body, leaving the expanded stent in place and opening up the artery.
The interventional cardiologist decides how to treat the blockage in the best way during the PCI/DES placement, based on real-time data. The cardiologist uses imaging data provided by both intravascular ultrasound (IVUS), and fluoroscopic imaging (combined with a radiopaque dye). During the procedure, the information obtained from these two sources enables the cardiologist to track the path of the catheter-DES device as it moves through the arterial blood vessels. This information also helps determine both the location and characteristics of any plaque causing narrowing in the arteries. Data from these two techniques is used to correctly position the stent and to obtain detailed information relating to the coronary arterial anatomy. Given that this anatomy varies greatly among individuals, having this information becomes a prerequisite for effective treatment. The obtained data is recorded on video and may be used in cases when further treatment is needed.
Post-stenting recovery and rehabilitation
For many people the stenting procedure does not require staying in the hospital for any extended time period, most people leave the hospital the same day. Much of the time immediately after the stenting is spent in a recovery area to make sure the access site is not bleeding and to ensure vital signs are stable.
In most hospital settings, the interventional cardiologist who performed the procedure will speak directly with the patient and family, tell them how the procedure went, and give follow-up instructions. The nursing staff will keep an eye on the person's condition and use tools like ECG to monitor their heart. To prevent a blood clot from forming in the stent, medications are given right after the procedure. One common medication is Plavix (clopidogrel), a potent blood thinner that comes as a pill. Other medicines that thin the blood are also used, and it is typical to combine aspirin with clopidogrel. For people who have had a heart attack, the length of hospitalization depends on the degree of heart muscle damage caused by the event.
A catheter with DES is a medical device, so people who receive it are given a medical device card. This card has information on the implanted DES and a medical device serial number. This information is important for future medical procedures, because it helps the doctors to know what type of device is in the person's body. Some arterial closure systems, which are devices that help to seal the access site after the procedure, are also medical devices and have their own informational cards.
The access site is the place where the catheter enters the artery in the arm or leg. There is usually soreness and bruising at this site. This bruising and soreness usually get better after a week or so. People are advised to rest for a week or two and not to lift heavy things. This is mainly to make sure the access site heals well. It is normal to have follow-up appointments with a cardiologist or a primary care provider/general practitioner within a week or two of the procedure.
People who get a coronary stent usually have more check-ups every three to six months for the first year, but this can vary. They usually do not need to have another coronary angiography, which is a test that uses a special dye and X-rays to see the arteries of the heart. If the doctors suspect that the heart disease is getting worse, they can prescribe a stress test, which is a test that measures how the heart works during physical activity. People who have symptoms or show signs of reduced blood flow to the heart in a stress test may need to have a diagnostic cardiac re-catheterization.
After PCI-stenting procedures, physical examinations are important. People who have a high risk of complications or more complex coronary problems may need to have angiography. This may be the case even if the results of non-invasive stress tests, which are tests that measure how the heart works during physical activity, appear normal.
Cardiac rehabilitation activities depend on many factors, but mainly on how much the heart muscle was damaged before the PCI/DES procedure. Many people who have this procedure have not had a heart attack, and their hearts may be fine. Others may have had a heart attack and their hearts may have trouble pumping oxygen-rich blood to the body. Rehabilitation activities are tailored to each person's needs.
Efficacy
Benefits
DES are an improvement over older BMS devices as they reduce the chances of in-stent blockages. This reduces the incidence of serious post-stenting events such as angina occurrence or recurrence, heart attacks, and death. They also reduce the likelihood of requiring another PCI procedure to open a blockage caused by the stent itself.
The major benefit of drug-eluting stents (DES) compared to bare-metal stents (BMS) is the prevention of in-stent restenosis (ISR). Restenosis is a gradual re-narrowing of the stented segment that occurs most commonly between 3 and 12 months after stent placement. High rates of restenosis associated with BMS prompted the development of DES, which reduced the incidence of ISR to around 5–10%. Continued development of newer-generation DES has resulted in the near-elimination of BMS from clinical practice.
Procedure outcomes
A key benefit of DES usage compared to BMS is a lower incidence of repeat revascularization procedures (re-stenting, invasive bypass surgeries etc.). Revascularization procedures are treatments that restore blood flow to parts of the heart that are not getting enough blood, a problem called ischemia. This can happen because of plaque buildup in the arteries of the heart, which can narrow or block them. Rates of repeat revascularizations and stent thrombosis (blood clots) are significantly lower in those who received DES compared to BMS.
Newer generations of DES devices have substantially improved safety outcomes, specifically regarding stent thrombosis, recurrent myocardial infarctions, and death.
Considerations for regulatory submission, assessment and approval
There are a number of very detailed medical device design considerations for DES products; these considerations are included in submissions for approval to regulatory authorities such as the US FDA:
Aspects of the design that relate to a DES as structural devices that keep an artery open by purely physical means.
Choice of the construction materials, with a particular focus on biocompatibility, longevity in the human body, mechanical stress resistance and the suitability of the chosen material for future patient imaging using MRI technologies, due to the high magnetic fields used in such imaging.
Choice of a mechanism of the drug release: how long the drug lasts, and how to make the stent release the drug in a manner that inhibits in-stent restenosis.
Choice of chemical agent the stent will deliver.
Choice of the stent delivery technology as an integrated system: catheter design, placement visualization and assessment of the success of artery reperfusion (is the treated artery actually supplying cardiac muscle with sufficient oxygenated blood).
Quality assurance considerations such as those defined in ISO 13485.
Quality control considerations: what testing can be performed on each manufactured unit prior to release for sale to demonstrate its usage suitability.
Traceability issues, can a single stent be traced from the manufacturer to the patient it was implanted in. In the case of a recall of a product it is critical to be able to trace the stent from design, manufacture, and distribution to the patient.
The drug choice is a critical design element and determining its true effectiveness in inhibiting neointimal growth due to the proliferation of smooth muscle cells that would cause restenosis can be a design challenge. Much of the neointimal hyperplasia seems to be caused by inflammation.
Vascular stents are classified in the US as class III medical devices, meaning that they pose the highest risk to patients and are subject to both general controls and premarket approval, which requires clinical trials and scientific evidence of safety and effectiveness, as well as rigorous mechanical testing. During the mechanical testing process, universal testing machines bend, stretch, twist, and compress vascular stents from various angles.
The specific properties of each type of stent and its intended use depend on the results of testing, and vice versa: different types of stents may need different or additional tests based on where they will be placed in the body and what they will be used for. Some of these additional tests might include checking how well the stent can withstand being crushed or bent out of shape, its resistance to getting kinks in it, whether it resists corrosion or damage over time, as well as making sure any coatings on the device remain intact.
Alternatives to stenting procedures
Pharmacological therapy for coronary artery disease may be indicated instead of or in addition to invasive treatment. For those requiring percutaneous coronary intervention or surgery, medical therapy should be viewed as complementary to revascularization procedures, rather than an opposing strategy. Coronary artery bypass graft (CABG) surgery is an alternative to percutaneous coronary intervention (PCI) with drug-eluting stents (DES) for patients with ischemic left ventricular systolic dysfunction (LVSD). CABG is associated with lower risks of all-cause mortality, repeat revascularization, and myocardial infarction compared to PCI. However, there is no significant difference between the two procedures in terms of cardiovascular mortality, stroke, major adverse cardiovascular and cerebrovascular events, and ventricular tachycardia.
History
The first procedure to treat blocked coronary arteries was coronary artery bypass graft surgery (CABG), wherein a section of vein or artery from elsewhere in the body is used to bypass the diseased segment of the coronary artery. In 1977, Andreas Grüntzig introduced percutaneous transluminal coronary angioplasty (PTCA), also called balloon angioplasty, in which a catheter was introduced through a peripheral artery and a balloon expanded to dilate the narrowed segment of the artery.
As equipment and techniques improved, the use of PTCA rapidly increased, and by the mid-1980s, PTCA and CABG were being performed at equivalent rates. Balloon angioplasty was generally effective and safe, but restenosis was frequent, occurring in about 30–40% of cases, usually within the first year after dilation. In about 3% of balloon angioplasty cases, failure of the dilation and acute or threatened closure of the coronary artery (often because of dissection) prompted emergency CABGs.
Charles Theodore Dotter and Melvin Judkins had proposed using prosthetic devices inside arteries in the leg to maintain blood flow after dilation as early as 1964. In 1986, Puel and Sigwart implanted the first coronary stent in a human patient. Several trials in the 1990s showed the superiority of stent placement over balloon angioplasty. Restenosis was reduced because the stent acted as a scaffold to hold open the dilated segment of the artery. Acute closure of the coronary artery (and the requirement for emergency CABG) was reduced, because the stent repaired dissections of the arterial wall. By 1999, stents were used in 84% of percutaneous coronary interventions (i.e., those done via a catheter, and not by open-chest surgery).
Early difficulties with coronary stents included a risk of early thrombosis (clotting) resulting in occlusion of the stent. Coating stainless steel stents with other substances such as platinum or gold did not eliminate this problem. High-pressure balloon expansion of the stent to ensure its full apposition to the arterial wall, combined with drug therapy using aspirin and another inhibitor of platelet aggregation (usually ticlopidine or clopidogrel) nearly eliminated this risk of early stent thrombosis.
Though it occurred less frequently than with balloon angioplasty or other techniques, stents nonetheless remained vulnerable to restenosis, caused almost exclusively by neointimal tissue growth (tissue formation in the inner 'tube' structure of the artery). To address this issue, developers of drug-eluting stents used the devices themselves as a tool for delivering medication directly to the arterial wall. While initial efforts were unsuccessful, the release (elution) of drugs with certain specific physicochemical properties from the stent was shown in 2001 to achieve high concentrations of the drug locally, directly at the target lesion, with minimal systemic side effects. As currently used in clinical practice, "drug-eluting" stents refers to metal stents that elute a drug designed to limit the growth of neointimal scar tissue, thus reducing the likelihood of stent restenosis.
The first type of DES to be approved by the European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) were sirolimus-eluting stents (SES), which release a natural product called sirolimus, an immunosuppressant drug. SES were shown to reduce the need for repeat procedures and improve the outcomes of patients with coronary artery disease. The sirolimus-eluting Cypher stent received CE mark approval in Europe in 2002, and then underwent a larger trial to demonstrate its safety and effectiveness for the US market. The trial, published in 2003, enrolled 1058 patients with more complex lesions and confirmed the superiority of SES over bare metal stents in terms of angiographic and clinical outcomes. Based on these results, the Cypher stent received FDA approval and was released in the US in 2003. The FDA approval process for DES involves submitting an investigational device exemption (IDE) application to conduct clinical trials under 21 CFR Part 812, and then a premarket approval (PMA) application to obtain marketing authorization under 21 CFR Part 814. The FDA assigns the primary review responsibility to the Center for Devices and Radiological Health (CDRH), but also consults with the Center for Drug Evaluation and Research (CDER) for the drug component of the combination product.
The second type of DES to be approved by the EMA and the FDA were paclitaxel-eluting stents (PES), which release another natural product called paclitaxel. PES also reduced the need for repeat procedures and improved the outcomes of patients with different types of lesions and risk factors. The paclitaxel-eluting Taxus stent received FDA approval and was launched in the US in 2004, after a series of trials that compared it with a bare metal stent in various settings. The trials showed a significant reduction in target lesion revascularization and major adverse cardiac events with the Taxus stent at 9 and 12 months. Both SES and PES use natural products as the active agents to prevent the recurrence of blockages in the arteries. These DES have changed the practice of interventional cardiology and have become the preferred treatment for many patients with coronary artery disease.
The initial rapid acceptance of DES led to their peak usage in 2005, accounting for 90% of all stent implantations, but concerns about late stent thrombosis led to a decrease in DES usage in late 2006. Subsequent studies reassured the medical community about their safety, showing that while DES may have a slightly higher risk for very late stent thrombosis, they significantly reduce target vessel revascularization without increasing the incidence of death or myocardial infarction; these reassurances led to a resurgence in DES utilization, although it did not reach the peak usage rates seen in early 2006.
The concept of using absorbable (also called biodegradable, bioabsorbable, or bioresorbable) materials in stents was first reported in 1878 by Huse, who used magnesium wires as ligatures to halt the bleeding in vessels of three patients. Despite extensive search, the full name of this pioneer in the field remains elusive. In the 20th century, a resorbable stent tested in humans was developed by the Igaki Medical Planning Company in Japan and was constructed from poly-L-lactic acid (a form of polylactic acid); they published their initial results in 2000. The German company Biotronik developed a magnesium absorbable (bioresorbable) stent and published clinical results in 2007.
The first company to bring a bioresorbable stent to market was Abbott Vascular which received European marketing approval in September 2012; the second was Elixir which received its CE mark in May 2013.
Despite the initial promise, the first-generation bioresorbable stents, such as the Absorb bioresorbable stent by Abbott, faced significant challenges in their performance. In comparison to current-generation drug-eluting stents, numerous trials revealed that these first-generation bioresorbable stents exhibited poor outcomes. Specifically, they showed high rates of stent thrombosis (cases where an implanted coronary stent caused a thrombotic occlusion), target-lesion myocardial infarction (heart attack occurring at the site of the treated lesion), and target vessel revascularization (the need for further procedures to restore blood flow in the treated artery). In 2017, Abbott pulled its bioabsorbable stent, Absorb, from the European market after negative press regarding the device. Boston Scientific also announced termination of its Renuvia bioresorbable coronary stent program as studies showed a higher risk of serious adverse events.
Currently, fully bioresorbable stents do not play a significant role in coronary interventions. While various manufacturers are proposing new stents and continuing their development, it remains uncertain whether they will have a substantial impact, unless there will be more data from their clinical trials. As of now, these stents are not widely utilized in practice.
Due to challenges in developing resorbable stents, many manufacturers have focused efforts on targeting or reducing drug release through bioabsorbable-polymer coatings. Boston Scientific's Synergy bioabsorbable-polymer stent has shown potential to reduce the length of dual antiplatelet therapy post-implantation. MicroPort's Firehawk target-eluting stent has been shown to be non-inferior to traditional drug-eluting stents while using one-third of the amount of equivalent drug.
As for the materials used to make a DES, the first DES products available for treating patients were stainless steel alloys composed of iron, nickel, and chromium and were based on existing bare metal stents. These stents were hard to visualize with medical imaging, posed a risk of causing allergic responses, and were difficult to deliver. Subsequent new alloys were used, namely cobalt-chrome and platinum chrome, with improved performance. Bioresorbable stents have been developed in which the stent itself dissolves over time. Materials explored for use include magnesium, polylactic acid, polycarbonate polymers, and salicylic acid polymers. Resorbable stents have held the promise of providing an acute treatment that would eventually allow the vessel to function normally, without leaving a permanent device behind.
For the coating of DES, one to three or more layers of polymer can be used: a base layer for adhesion, a main layer that holds and elutes (releases) the drug into the arterial wall by contact transfer, and sometimes a top coat to slow down the release of the drug and extend its effect. The first few drug-eluting stents to be licensed used durable coatings. The first generation of coatings appears to have caused immunological reactions at times, and some possibly led to thrombosis. This has driven experimentation and the development of new coating approaches.
Research directions
A research direction for a DES is to improve the material from which a device is made. The first-generation DES were made of stainless steel, while contemporary DES mainly consist of different kinds of alloys such as cobalt chromium and platinum chromium. In the current generation DES, thinner struts are employed than in the first-generation DES with preserved radial strength and radio-opacity. The lower strut thickness is believed to be associated with better stent-related outcomes including target lesion revascularization, myocardial infarction, and stent thrombosis.
Another area of research for DES focuses on polymers. The current generation DES includes both durable polymer-coated stents and biodegradable polymer-coated stents. It has been reported that the presence of a durable polymer in the body over a long period can lead to chronic inflammation and neoatherosclerosis. To address this potential limitation, researchers have developed biodegradable polymer DES as an alternative solution.
Scientists are also studying different drugs that could be used in DES to prevent restenosis. These drugs, which have immunosuppressive and anti-cancer properties, aim to inhibit the growth of smooth muscle cells. Additionally, there is a specific type of stent that features an extra layer of anti-CD34 antibodies on its struts. This additional layer is positioned on top of the polymer coating and aims to capture circulating endothelial progenitor cells. The goal behind this design is to promote improved healing of the blood vessel lining, known as the endothelium.
A potential research focus for DES is moving away from polymer-based DES in clinical practice and instead using either a polymer-free DES or a drug-coated coronary stent. The polymer-free DES utilizes an abluminal coating of probucol to control the release of sirolimus, while the drug-coated coronary stent has a micro-structured abluminal surface that allows direct application of an anti-restenotic drug.
Society and culture
Brand names and manufacturers
There are over 20 different types of drug-eluting stents available, with differences in features and characteristics.
Economics
The economic evaluation of DES has been a topic of extensive research. In 2007, the overall incremental cost-effectiveness ratio in Europe was €98,827 per quality-adjusted life-year gained. Avoiding one revascularization with DES would cost €4,794, while a revascularization with BMS costs €3,260.
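As a rough illustration of how such an incremental cost-effectiveness ratio (ICER) is computed, the sketch below divides the extra cost of DES by the extra quality-adjusted life-years gained. The per-patient figures are assumed placeholders chosen only to land near the order of magnitude quoted above; they are not values from the cited study.

```python
# Illustrative ICER calculation; all per-patient inputs are assumed placeholders.

def icer(cost_new: float, cost_old: float, qaly_new: float, qaly_old: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

cost_des, cost_bms = 12_000.0, 10_500.0   # euros per patient (assumed)
qaly_des, qaly_bms = 8.215, 8.200         # QALYs per patient (assumed)

print(f"ICER: {icer(cost_des, cost_bms, qaly_des, qaly_bms):,.0f} EUR/QALY")
# (12000 - 10500) / (8.215 - 8.200) = 100,000 EUR/QALY, the same order of
# magnitude as the EUR 98,827 per QALY reported above.
```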
Controversies
There have been controversies related to the use of DES. In 2012, a meta-analysis of clinical trial data showed no benefit of DES for people with stable coronary artery disease compared to treatment with drugs. The New York Times interviewed David Brown, an author of the analysis, who said that more than half of patients with stable coronary artery disease were implanted with stents without even trying drug treatment, and that he believed this happened because hospitals and doctors wanted to make more money.
The interview sparked a debate among cardiologists, researchers, and patients about the appropriateness and effectiveness of DES for stable coronary artery disease: some agreed with the study's findings and questioned the overuse of stents, while others criticized the study's methods and limitations and defended the benefits of stents, arguing that the interviewee's statement was "outrageous and defamatory" and that he was "insulting the integrity of the entire profession".
In 2013 the Times of India reported that DES were widely overused and that Indian distributors used profits from high markups on DES to bribe doctors to use them.
In 2014 an investigation by the Maharashtra Food and Drug Administration found that high markups and bribery related to DES was still widespread.
Intellectual property disputes
There have been several patent disputes related to drug-eluting stents. In one of them, Boston Scientific Corporation (BSC) was found guilty of infringing upon a patent awarded to the University of Texas at Arlington in 2003 and licensed to TissueGen. This patent involves technology developed by TissueGen founder Kevin Nelson during his time as a faculty member at the university. The technology is designed to deliver drugs through an extruded fiber within an implanted vascular stent. As a result, BSC was ordered to pay $42 million in lost royalties to both TissueGen and the university.
Class action lawsuits
Drug-eluting stents have been associated with legal and ethical controversies, and there have been related class action lawsuits. In 2014, the former owners of St. Joseph Medical Center in Maryland settled a class action lawsuit for $37 million with hundreds of patients who received unnecessary DES implantation. The lawsuit alleged that Dr. Mark Midei, a cardiologist at the center, falsified the degree of coronary artery stenosis to justify the use of DES, exposing the patients to increased risks of thrombosis, bleeding, and infection. Another DES manufacturer, Cordis Corporation, a subsidiary of Johnson & Johnson, was involved in lawsuits from people who suffered adverse events from the Cypher Stent, a stainless-steel DES coated with sirolimus, an immunosuppressant drug. The Cypher Stent was approved by the FDA in 2003, but soon after, the FDA issued a Safety Warning following 290 reports of subacute thrombosis and at least 60 deaths related to the device.
References
Drug delivery devices
Implants (medicine)
Interventional cardiology | Drug-eluting stent | [
"Chemistry"
] | 9,246 | [
"Pharmacology",
"Drug delivery devices"
] |
5,284,206 | https://en.wikipedia.org/wiki/Bertrand%27s%20theorem | In classical mechanics, Bertrand's theorem states that among central-force potentials with bound orbits, there are only two types of central-force (radial) scalar potentials with the property that all bound orbits are also closed orbits.
The first such potential is an inverse-square central force such as the gravitational or electrostatic potential:
$V(r) = -\frac{k}{r},$
with force $f(r) = -\frac{dV}{dr} = -\frac{k}{r^2}$.
The second is the radial harmonic oscillator potential:
$V(r) = \tfrac{1}{2} k r^2,$
with force $f(r) = -\frac{dV}{dr} = -k r$.
The theorem is named after its discoverer, Joseph Bertrand.
Derivation
All attractive central forces can produce circular orbits, which are naturally closed orbits. The only requirement is that the central force exactly equals the centripetal force, which determines the required angular velocity for a given circular radius. Non-central forces (i.e., those that depend on the angular variables as well as the radius) are ignored here, since they do not produce circular orbits in general.
The equation of motion for the radius $r$ of a particle of mass $m$ moving in a central potential $V(r)$ is given by the equations of motion
$m \frac{d^2 r}{dt^2} - m r \omega^2 = m \frac{d^2 r}{dt^2} - \frac{L^2}{m r^3} = f(r),$
where $\omega \equiv \frac{d\theta}{dt}$, and the angular momentum $L = m r^2 \omega$ is conserved. For illustration, the first term on the left is zero for circular orbits, and the applied inwards force $f(r)$ equals the centripetal force requirement $m r \omega^2$, as expected.
The definition of angular momentum allows a change of independent variable from $t$ to $\theta$:
$\frac{d}{dt} = \frac{L}{m r^2} \frac{d}{d\theta},$
giving the new equation of motion that is independent of time:
$\frac{L}{r^2} \frac{d}{d\theta}\!\left( \frac{L}{m r^2} \frac{dr}{d\theta} \right) - \frac{L^2}{m r^3} = f(r).$
This equation becomes quasilinear on making the change of variables $u \equiv \frac{1}{r}$ and multiplying both sides by $\frac{m r^2}{L^2}$ (see also Binet equation):
$\frac{d^2 u}{d\theta^2} + u = -\frac{m}{L^2 u^2} f(1/u).$
As noted above, all central forces can produce circular orbits given an appropriate initial velocity. However, if some radial velocity is introduced, these orbits need not be stable (i.e., remain in orbit indefinitely) nor closed (repeatedly returning to exactly the same path). Here we show that a necessary condition for stable, exactly closed non-circular orbits is an inverse-square force or radial harmonic oscillator potential. In the following sections, we show that those two force laws produce stable, exactly closed orbits.
Define $J(u)$ as
$\frac{d^2 u}{d\theta^2} + u = J(u) \equiv -\frac{m}{L^2 u^2} f(1/u),$
where $f$ represents the radial force. The criterion for perfectly circular motion at a radius $r_0$ is that the first term on the left be zero:
$u_0 = J(u_0) = -\frac{m}{L^2 u_0^2} f(1/u_0), \qquad (1)$
where $u_0 \equiv 1/r_0$.
The next step is to consider the equation for $u$ under small perturbations $\eta \equiv u - u_0$ from perfectly circular orbits. On the right, the function $J$ can be expanded in a standard Taylor series:
$J(u) \approx J(u_0) + \eta J'(u_0) + \tfrac{1}{2} \eta^2 J''(u_0) + \tfrac{1}{6} \eta^3 J'''(u_0) + \cdots$
Substituting this expansion into the equation for $u$ and subtracting the constant terms yields
$\frac{d^2 \eta}{d\theta^2} + \eta = \eta J'(u_0) + \tfrac{1}{2} \eta^2 J''(u_0) + \tfrac{1}{6} \eta^3 J'''(u_0),$
which can be written as
$\frac{d^2 \eta}{d\theta^2} + \beta^2 \eta = \tfrac{1}{2} \eta^2 J''(u_0) + \tfrac{1}{6} \eta^3 J'''(u_0), \qquad (2)$
where $\beta^2 \equiv 1 - J'(u_0)$ is a constant. $\beta^2$ must be non-negative; otherwise, the radius of the orbit would vary exponentially away from its initial radius. (The solution $\beta = 0$ corresponds to a perfectly circular orbit.) If the right side may be neglected (i.e., for small perturbations), the solutions are
$\eta(\theta) = h_1 \cos(\beta \theta),$
where the amplitude $h_1$ is a constant of integration. For the orbits to be closed, $\beta$ must be a rational number. What's more, it must be the same rational number for all radii, since $\beta$ cannot change continuously; the rational numbers are totally disconnected from one another. Using the definition of $J$ along with equation (1),
$J'(u_0) = -2 - \frac{r_0}{f(r_0)} \left.\frac{df}{dr}\right|_{r = r_0} = 1 - \beta^2.$
Since this must hold for any value of $u_0$,
$\frac{df}{dr} = \left(\beta^2 - 3\right) \frac{f}{r},$
which implies that the force must follow a power law
$f(r) = -\frac{k}{r^{3 - \beta^2}}.$
Hence, $J$ must have the general form
$J(u) = \frac{m k}{L^2} u^{1 - \beta^2}. \qquad (3)$
For more general deviations from circularity (i.e., when we cannot neglect the higher-order terms in the Taylor expansion of $J$), $\eta$ may be expanded in a Fourier series, e.g.,
$\eta(\theta) = h_0 + h_1 \cos(\beta\theta) + h_2 \cos(2\beta\theta) + h_3 \cos(3\beta\theta) + \cdots$
We substitute this into equation (2) and equate the coefficients belonging to the same frequency, keeping only the lowest-order terms. As we show below, $h_0$ and $h_2$ are smaller than $h_1$, being of order $h_1^2$; $h_3$, and all further coefficients, are at least of order $h_1^3$. This makes sense, since $h_0, h_2, h_3, \ldots$ must all vanish faster than $h_1$ as a circular orbit is approached.
From the $\cos(\beta\theta)$ term, we get
$0 = \left(2 h_0 + h_2\right) \tfrac{1}{2} h_1 J''(u_0) + \tfrac{1}{8} h_1^3 J'''(u_0) = \left( \frac{5}{24 \beta^2} \left[J''(u_0)\right]^2 + \tfrac{1}{8} J'''(u_0) \right) h_1^3,$
where in the last step we substituted in the values $h_0 = \frac{h_1^2 J''(u_0)}{4 \beta^2}$ and $h_2 = -\frac{h_1^2 J''(u_0)}{12 \beta^2}$, which follow from the constant and $\cos(2\beta\theta)$ terms.
Using equations (1) and (3), we can calculate the second and third derivatives of $J$ evaluated at $u_0$:
$J''(u_0) = -\frac{\beta^2 \left(1 - \beta^2\right)}{u_0}, \qquad J'''(u_0) = \frac{\beta^2 \left(1 - \beta^2\right) \left(1 + \beta^2\right)}{u_0^2}.$
Substituting these values into the last equation yields the main result of Bertrand's theorem:
$\beta^2 \left(1 - \beta^2\right) \left(4 - \beta^2\right) = 0.$
Hence, the only potentials that can produce stable closed non-circular orbits are the inverse-square force law ($\beta = 1$) and the radial harmonic-oscillator potential ($\beta = 2$). The solution $\beta = 0$ corresponds to perfectly circular orbits, as noted above.
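The result can be checked numerically. The sketch below (an illustration, not part of the proof) integrates the Binet equation for a power-law force $f(r) = -k r^p$ with $k = m = 1$, starting from a slightly perturbed circular orbit, and measures the angle between successive pericenters; for small perturbations this should equal $2\pi/\beta$ with $\beta = \sqrt{3 + p}$, which is a full revolution only in the Kepler ($p = -2$) and harmonic ($p = 1$) cases.

```python
import math

def pericenter_spacing(p: float, r0: float = 1.0, kick: float = 1e-3) -> float:
    """Angle between successive pericenters for f(r) = -r**p (k = m = 1)."""
    L2 = r0 ** (p + 3)                      # L^2 for a circular orbit of radius r0
    def acc(u):                             # Binet: d2u/dtheta2 = -f(1/u)/(L^2 u^2) - u
        return (1.0 / u) ** p / (L2 * u * u) - u
    u, du, h, theta = 1.0 / r0, kick, 1e-4, 0.0
    maxima = []                             # angles where u (hence 1/r) peaks
    while len(maxima) < 2 and theta < 50.0:
        # one RK4 step for the system (u' = du, du' = acc(u))
        k1u, k1v = du, acc(u)
        k2u, k2v = du + 0.5 * h * k1v, acc(u + 0.5 * h * k1u)
        k3u, k3v = du + 0.5 * h * k2v, acc(u + 0.5 * h * k2u)
        k4u, k4v = du + h * k3v, acc(u + h * k3u)
        u_next = u + h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6
        du_next = du + h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        if du > 0 >= du_next:               # u passed a maximum: a pericenter
            maxima.append(theta)
        u, du, theta = u_next, du_next, theta + h
    return maxima[1] - maxima[0]

for p, label in [(-2.0, "inverse-square (beta = 1)"), (1.0, "harmonic (beta = 2)")]:
    beta = math.sqrt(3 + p)
    print(f"{label}: measured {pericenter_spacing(p):.4f} rad, "
          f"predicted 2*pi/beta = {2 * math.pi / beta:.4f} rad")
```

For an intermediate exponent such as $p = -1.5$, $\beta$ is irrational, so the measured spacing is an irrational multiple of $\pi$ and the perturbed orbit precesses instead of closing.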
Classical field potentials
For an inverse-square force law such as the gravitational or electrostatic potential, the potential can be written
$V(\mathbf{r}) = -\frac{k}{r}.$
The orbit u(θ) can be derived from the general equation
$\frac{d^2 u}{d\theta^2} + u = -\frac{m}{L^2 u^2} f(1/u) = \frac{k m}{L^2},$
whose solution is the constant $\frac{k m}{L^2}$ plus a simple sinusoid:
$u \equiv \frac{1}{r} = \frac{k m}{L^2} \left[1 + e \cos\left(\theta - \theta_0\right)\right],$
where e (the eccentricity), and θ0 (the phase offset) are constants of integration.
This is the general formula for a conic section that has one focus at the origin; e = 0 corresponds to a circle, 0 < e < 1 corresponds to an ellipse, e = 1 corresponds to a parabola, and e > 1 corresponds to a hyperbola. The eccentricity e is related to the total energy E (see Laplace–Runge–Lenz vector):
$e = \sqrt{1 + \frac{2 E L^2}{m k^2}}.$
Comparing these formulae shows that E < 0 corresponds to an ellipse, E = 0 corresponds to a parabola, and E > 0 corresponds to a hyperbola. In particular, $E = -\frac{m k^2}{2 L^2}$ for perfectly circular orbits.
Harmonic oscillator
To solve for the orbit under a radial harmonic-oscillator potential, it's easier to work in components r = (x, y, z). The potential can be written as
$V(\mathbf{r}) = \tfrac{1}{2} k r^2 = \tfrac{1}{2} k \left(x^2 + y^2 + z^2\right).$
The equation of motion for a particle of mass m is given by three independent Euler equations:
$\frac{d^2 x}{dt^2} + \omega_0^2 x = 0, \qquad \frac{d^2 y}{dt^2} + \omega_0^2 y = 0, \qquad \frac{d^2 z}{dt^2} + \omega_0^2 z = 0,$
where the constant $\omega_0^2 \equiv \frac{k}{m}$ must be positive (i.e., k > 0) to ensure bounded, closed orbits; otherwise, the particle will fly off to infinity. The solutions of these simple harmonic oscillator equations are all similar:
$x = A_x \cos\left(\omega_0 t + \phi_x\right), \qquad y = A_y \cos\left(\omega_0 t + \phi_y\right), \qquad z = A_z \cos\left(\omega_0 t + \phi_z\right),$
where the positive constants Ax, Ay and Az represent the amplitudes of the oscillations, and the angles φx, φy and φz represent their phases. The resulting orbit r(t) = [x(t), y(t), z(t)] is closed because it repeats exactly after one period
$T \equiv \frac{2\pi}{\omega_0}.$
The system is also stable because small perturbations in the amplitudes and phases cause correspondingly small changes in the overall orbit.
References
Further reading
Classical mechanics
Eponymous theorems of physics
Orbits | Bertrand's theorem | [
"Physics"
] | 1,279 | [
"Equations of physics",
"Classical mechanics",
"Eponymous theorems of physics",
"Mechanics",
"Physics theorems"
] |
5,284,787 | https://en.wikipedia.org/wiki/Sustainable%20drainage%20system | Sustainable drainage systems (also known as SuDS, SUDS, or sustainable urban drainage systems) are a collection of water management practices that aim to align modern drainage systems with natural water processes and are part of a larger green infrastructure strategy. SuDS efforts make urban drainage systems more compatible with components of the natural water cycle such as storm surge overflows, soil percolation, and bio-filtration. These efforts hope to mitigate the effect human development has had or may have on the natural water cycle, particularly surface runoff and water pollution trends.
SuDS have become popular in recent decades as understanding of how urban development affects natural environments, as well as concern for climate change and sustainability, have increased. SuDS often use built components that mimic natural features in order to integrate urban drainage systems into the natural drainage systems of a site as efficiently and quickly as possible. SuDS infrastructure has become a large part of the Blue-Green Cities demonstration project in Newcastle upon Tyne.
History of drainage systems
Drainage systems have been found in ancient cities over 5,000 years old, including Minoan, Indus, Persian, and Mesopotamian civilizations. These drainage systems focused mostly on reducing nuisances from localized flooding and waste water. Rudimentary systems made from brick or stone channels constituted the extent of urban drainage technologies for centuries. Cities in Ancient Rome also employed drainage systems to protect low-lying areas from excess rainfall. When builders began constructing aqueducts to import fresh water into cities, urban drainage systems became integrated into water supply infrastructure for the first time as a unified urban water cycle.
Modern drainage systems did not appear until the 19th century in Western Europe, although most of these systems were primarily built to deal with sewage issues rising from rapid urbanization. One such example is that of the London sewerage system, which was constructed to combat massive contamination of the River Thames. At the time, the River Thames was the primary component of London's drainage system, with human waste concentrating in the waters adjacent to the densely populated urban center. As a result, several epidemics plagued London's residents and even members of Parliament, including events known as the 1854 Broad Street cholera outbreak and the Great Stink of 1858. The concern for public health and quality of life launched several initiatives, which ultimately led to the creation of London's modern sewerage system designed by Joseph Bazalgette. This new system explicitly aimed to ensure waste water was redirected as far away from water supply sources as possible in order to reduce the threat of waterborne pathogens. Since then, most urban drainage systems have aimed for similar goals of preventing public health crises.
Within past decades, as climate change and urban flooding have become increasingly urgent challenges, drainage systems designed specifically for environmental sustainability have become more popular in both academia and practice. The first sustainable drainage system in the UK to utilize a full management train including source control was the Oxford services motorway station, designed by SuDS specialists Robert Bray Associates. Originally the term SUDS described the UK approach to sustainable urban drainage systems. These developments may not necessarily be in "urban" areas, and thus the "urban" part of SuDS is now usually dropped to reduce confusion. Other countries have similar approaches in place using different terminology, such as best management practice (BMP) and low-impact development in the United States, water-sensitive urban design (WSUD) in Australia, low impact urban design and development (LIUDD) in New Zealand, and comprehensive urban river basin management in Japan.
The National Research Council's definitive report on urban stormwater management describes how urban drainage systems began in the United States after World War II. These structures were based on simple catch basins and pipes to transfer the water outside of the cities. Urban stormwater management started to evolve further in the 1970s, when landscape architects focused more on low-impact development and began using practices such as infiltration channels. Parallel to this time, scientists became concerned with other stormwater hazards surrounding pollution. Studies such as the Nationwide Urban Runoff Program showed that urban runoff contained pollutants like heavy metals, sediments, and pathogens, all of which water can pick up as it flows off impermeable surfaces. It was at the beginning of the 21st century that stormwater infrastructure allowing runoff to infiltrate close to the source became popular, around the same time that the term green infrastructure was coined.
Background
Traditional urban drainage systems are limited by various factors including volume capacity, damage or blockage from debris, and contamination of drinking water. Many of these issues are addressed by SuDS by bypassing traditional drainage systems altogether and returning rainwater to natural water sources or streams as soon as possible. Increasing urbanisation has caused problems with flash flooding after sudden rain. As areas of vegetation are replaced by impervious surfaces such as concrete, asphalt, or roofed structures, the area loses its ability to absorb rainwater. This rain is instead directed into surface water drainage systems, often overloading them and causing floods.
The goal of all sustainable drainage systems is to use rainfall to recharge the water sources of a given site. These water sources are often underlying the water table, nearby streams, lakes, or other similar freshwater sources. For example, if a site is above an unconsolidated aquifer, then SuDS will aim to direct all rain that falls on the surface layer into the underground aquifer as quickly as possible. To accomplish this, SuDS use various forms of permeable layers to ensure the water is not captured or redirected to another location. Often these layers include soil and vegetation, though they can also be artificial materials.
The paradigm of SuDS solutions is a system that is easy to manage, requires little or no energy input (except from environmental sources such as sunlight), is resilient in use, and is environmentally as well as aesthetically attractive. Examples of this type of system are basins (shallow landscape depressions that are dry most of the time when it is not raining), rain gardens (shallow landscape depressions with shrub or herbaceous planting), swales (shallow, normally dry, wide-based ditches), filter drains (gravel-filled trench drains), bioretention basins (shallow depressions with gravel and/or sand filtration layers beneath the growing medium), reed beds, and other wetland habitats that collect, store, and filter dirty water along with providing a habitat for wildlife.
A common misconception about SuDS is that they reduce flooding on the development site itself. In fact, SuDS are designed to reduce the impact that the surface water drainage system of one site has on other sites. For instance, sewer flooding is a problem in many places, and paving or building over land can result in flash flooding, which happens when flows entering a sewer exceed its capacity and it overflows. The SuDS system aims to minimise or eliminate discharges from the site, thus reducing the impact; the idea is that if all development sites incorporated SuDS, then urban sewer flooding would be less of a problem. Unlike traditional urban stormwater drainage systems, SuDS can also help to protect and enhance ground water quality.
Example features
Because SuDS describe a collection of systems with similar components or goals, there is a large crossover between SuDS and other terminologies dealing with sustainable urban development. The following are examples generally accepted as components in a SuDS system:
Bioswales
Permeable pavement
Wetlands
Artificial wetlands can be constructed in areas that see large volumes of storm water surges or runoff. Built to replicate shallow marshes, wetlands as BMPs gather and filter water at scales larger than bioswales or rain gardens. Unlike bioswales, artificial wetlands are designed to replicate natural wetlands processes as opposed to having an engineered mechanism within the artificial wetland. Because of this, the ecology of the wetland (soil components, water, vegetation, microbes, sunlight processes, etc.) becomes the primary system to remove pollutants. Water in an artificial wetland tends to be filtered slowly in comparison to systems with mechanized or explicitly engineered components.
Wetlands can be used to concentrate large volumes of runoff from urban areas and neighborhoods. In 2012, the South Los Angeles Wetlands Park was constructed in a densely populated inner-city district as a renovation for a former LA Metro bus yard. The park is designed to capture runoff from surrounding surfaces as well as storm water overflow from the city's current drainage system.
Retention basins
Green roofs
Rain gardens
Rain gardens are a form of stormwater management using water capture. They are shallow depressed areas in the landscape, planted with shrubs and other plants, that collect rainwater from roofs or pavement and allow the stormwater to slowly infiltrate into the ground. Rain gardens mimic natural landscape functions by capturing stormwater, filtering out pollutants, and recharging groundwater. A study done in 2008 explains how rain gardens and stormwater planters are easy to incorporate into urban areas, where they improve streets by minimizing the effects of drought and helping with stormwater runoff. Stormwater planters can easily fit between other street landscapes and are ideal in areas where spacing is tight.
Downspout disconnection
Downspout disconnection is a form of green infrastructure that separates roof downspouts from the sewer system and redirects roof water runoff into permeable surfaces. It can be used for storing stormwater or allowing the water to penetrate the ground. Downspout disconnection is especially beneficial in cities with combined sewer systems. With high volumes of rain, downspouts on buildings can send 12 gallons of water a minute into the sewer system, which increases the risk of basement backups and sewer overflows.
Benefits for stormwater management
Green infrastructure keeps waterways clean and healthy in two primary ways: water retention and water quality. Different green infrastructure strategies prevent runoff by capturing the rain where it falls, allowing it to filter into the ground to recharge groundwater, return to the atmosphere through evapotranspiration, or be reused for another purpose like landscaping. Water quality is also improved by decreasing the amount of stormwater that reaches other waterways and removing contaminants. Vegetation and soil help capture and remove pollutants from stormwater in many ways, such as adsorption, filtration, and plant uptake. These processes break down or capture many of the common pollutants found in runoff.
Reduced flooding
With climate change intensifying, heavy storms are becoming more frequent, and so is the risk of flooding and sewer system overflows. According to the EPA, the average size of a 100-year floodplain is likely to increase by 45% in the next ten years. Another growing problem is urban flooding caused by too much rain on impervious surfaces; urban floods can destroy neighborhoods. They particularly affect minority and low-income neighborhoods and can leave behind health problems like asthma and illness caused by mold. Green infrastructure reduces flood risks and bolsters the climate resiliency of communities by keeping rain out of sewers and waterways, capturing it where it falls.
Increased water supply
More than half of the rain that falls in urban areas covered mostly by impervious surfaces ends up as runoff. Green infrastructure practices reduce runoff by capturing stormwater and allowing it to recharge groundwater supplies or be harvested for purposes like landscaping. Green infrastructure promotes rainfall conservation through the use of capture methods and infiltration techniques, for instance bioswales. As much as 75 percent of the rainfall that lands on a rooftop can be captured and used for other purposes.
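To put the rooftop-capture figure in perspective, the sketch below estimates the annual harvestable volume from roof area and rainfall depth. The roof size and rainfall are assumed example values; the 75% factor is the capture fraction quoted above.

```python
# Rooftop rainwater harvest estimate; 1 mm of rain on 1 m^2 yields 1 litre.

def rooftop_capture_litres(roof_area_m2: float, annual_rain_mm: float,
                           capture_fraction: float = 0.75) -> float:
    """Annual volume captured from a roof, in litres."""
    return roof_area_m2 * annual_rain_mm * capture_fraction

# Assumed example: a 150 m^2 roof in a city receiving 800 mm of rain per year.
print(f"{rooftop_capture_litres(150, 800):,.0f} L/year")  # 90,000 L/year
```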
Heat management
A city with miles of dark, hot pavement absorbs and radiates heat into the surrounding atmosphere at a greater rate than natural landscapes do. This urban heat island effect causes an increase in air temperatures. The EPA estimates that the average air temperature of a city with one million people or more can be warmer than surrounding areas. Higher temperatures reduce air quality by increasing smog. In Los Angeles, each 1-degree temperature increase makes the air roughly 3 percent smoggier. Green roofs and other forms of green infrastructure help improve air quality and reduce smog through their use of vegetation. Plants not only provide shade for cooling, but also absorb pollutants like carbon dioxide and help reduce air temperatures through evaporation and evapotranspiration.
Health benefits
By improving water quality, reducing air temperatures and pollution, green infrastructure provides many public health benefits. Cooler and cleaner air can help reduce heat related illnesses like exhaustion and heatstroke, as well as respiratory problems like asthma. Cleaner and healthier waterways also means less illness from contaminated waters and seafood. Greener areas also promote physical activity and can boost mental health.
Reduced costs
Green infrastructure is often cheaper than more conventional water management strategies. Philadelphia found that its new green infrastructure plan will cost $1.2 billion over 25 years, compared with the $6 billion a gray infrastructure approach would have cost. The expenses for implementing green infrastructure are often smaller: planting a rain garden to deal with drainage costs less than digging tunnels and installing pipes. Even when it is not cheaper, green infrastructure still pays off in the long term. A green roof lasts twice as long as a regular roof, and the low maintenance costs of permeable pavement can make for a good long-term investment. The Iowa town of West Union determined it could save $2.5 million over the lifespan of a single parking lot by using permeable pavement instead of traditional asphalt. Green infrastructure also improves the quality of water drawn from rivers and lakes for drinking, which reduces the costs associated with purification and treatment, in some cases by more than 25 percent. Green roofs can also reduce heating and cooling costs, leading to energy savings of as much as 15 percent.
See also
Aquifer storage and recovery
Blue roof
French drain
Low-impact development (U.S. and Canada)
Resin-bound paving
Retention basin
Sponge city
Stream restoration
Sustainable city
Tree box filter
Urban runoff
References
External links
SUDS solutions from the British Geological Survey
International Best Management Practices Database – Detailed data sets & summaries on performance of Urban BMPs
Portland Guide to Sustainable Stormwater – City of Portland, Oregon
Drainage
Environmental engineering
Hydrology and urban planning
Drainage system
Sustainable design
Waste treatment technology
Water pollution
Sustainable urban planning | Sustainable drainage system | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,884 | [
"Hydrology",
"Water treatment",
"Chemical engineering",
"Water pollution",
"Civil engineering",
"Hydrology and urban planning",
"Environmental engineering",
"Waste treatment technology"
] |
5,285,068 | https://en.wikipedia.org/wiki/Sustainable%20landscape%20architecture | Sustainable landscape architecture is a category of sustainable design concerned with the planning and design of the built and natural environments.
The design of a sustainable landscape encompasses the three pillars of sustainable development: economic well-being, social equity and environmental protections. The United Cities and Local Governments, UNESCO, and the World Summit on Sustainable Development further recommend including a fourth pillar of cultural preservation to create successful sustainable landscape designs. Creating a sustainable landscape requires consideration of ecology, history, cultural associations, sociopolitical dynamics, geology, topography, soils, land use, and architecture. Methods used to create sustainable landscapes include recycling, restoration, species reintroduction, and many more.
Goals of sustainable landscape architecture include a reduction of pollution, heightened water management and thoughtful vegetation choices.
An example of sustainable landscape architecture is the design of a sustainable urban drainage system, which can protect wildlife habitats, improve recreational facilities and save money through flood control. Another example is the design of a green roof or a roof garden that also contributes to the sustainability of a landscape architecture project. The roof will help manage surface water, decrease environmental impacts and provide space for recreation.
History
The first documented concern about the destruction of the modern landscape was expressed in the 1981 Wildlife and Countryside Act.
Historic Cultural Influences
Perspectives on what entails a sustainable landscape design vary in different cultural lenses. Historically, Eastern and Western civilizations have had opposing philosophies of how to interact with nature within the built environment.
Western Civilization
United States
In the United States, Frederick Law Olmsted (1822-1903) acted as a pioneer of American landscape architecture. Olmsted began his career as an agricultural correspondent before visiting England, where he experienced English landscape design and brought its ideas back to the United States. He used landscape design to transform societal dynamics: his projects encouraged the mingling of community members from different strata by creating communal passageways through the city, fostering an emphasis on socially sustainable communal urban spaces. His legacy lives on in projects such as New York City's Central Park, Boston's Emerald Necklace, and the U.S. Capitol Grounds. The cultural and philosophical doctrine that Olmsted introduced to North American landscape architecture created a basis for socially sustainable landscape designs. This idea falls under the old paradigm (1905-1940s) of scientific ecology, fathered by Frederic Clements, which excludes humans from being a part of natural ecology. This school of thought sees nature as a venue for humans to mingle and interact, rather than including humans as part of the natural world and ecosystem.

The next scientific ecology paradigm in American culture was the thermodynamic paradigm (1930s-1980s), most famously documented by Eugene Odum. Rather than seeing humans dominating and existing separate from ecology, this school of thought focused on the relationship between humanity and nature. This era was characterized by the New Deal in the United States and projects such as the Tennessee Valley Authority (TVA) and large-scale highway projects. Within this paradigm, the United States government began to foster support for natural conservation, inviting the public to travel and admire the conserved landscapes of the country. However, conservation efforts within this era are criticized for emphasizing the human component of landscapes. The TVA, for example, championed the creation of dams and the production of energy, creating a space for admiration of a landscape altered solely for human benefit. Although there was a clear connection between humanity and nature, the emphasis on human accomplishment continued the anthropocentric culture of landscape architecture in North America.

Within this same era, Ian McHarg was a leading landscape architect, and is seen as one of the first to challenge traditional ideas of landscape architecture through a more sustainable lens. McHarg emphasized the idea of designing with nature, instead of against it. He encouraged thorough analysis and observation of a landscape before designing. His ideas were revolutionary, as he put environmental restrictions at the forefront of landscape architecture, reasoning that a healthy living environment fosters occupant respect for their surroundings. Instead of seeing man and nature as forces against one another, McHarg sought to bring them together. Aldo Leopold was another influential figure during this time, who pushed for a symbiotic relationship between man and nature. In addition to Olmsted's emphasis on societal equality within the space of designed landscape, Leopold urged society to view landscapes as part of the community.

The widespread government support of natural conservation in American culture began to shatter once economic growth and political unrest began to overwhelm the country. These tensions gave way to the modern-day paradigm of scientific ecology: the evolutionary paradigm.
This perspective views humans as a piece within nature’s dynamics, emphasizing the consequences of human impact on the environment.
Europe
European culture offers similar influences on the practice of sustainable landscape architecture. Western European countries historically used landscaping to separate and organize natural habitats. As in North American practice, conservation of nature was routinely seen as "management" of wildlife by containing it in a space that is admired from a distance. Furthermore, landscapes have been designed to optimize the use of natural resources and the economic gain of the land. Research into landscapes of ancient Eastern European civilizations shows similar ideals. In the Aegean, coaxial field systems and terraces were frequently used to cultivate the land for food. This connects to the root of all artificial landscape manipulation: a method of survival. Agriculture and resource allocation are an instinctual way in which humans approach landscape architecture, demonstrating human dependence on continued and sustainable use of such landscapes.
Eastern Civilization
East Asian countries had a different cultural history in connection to sustainable landscape architecture. Whereas Western civilization focused on a human-centered built environment, Eastern countries used traditional philosophies that encourage the “unity of man and nature” to design landscapes. This idea is centered around the Chinese philosophy of Taoism, claiming that humans must be in tune with nature’s rhythms.
Instead of altering the environment to benefit humanity, this philosophy states that humans should create the built environment by taking advantage of natural patterns and tendencies. Furthermore, this school of thought considers The Peach Blossom Spring ideal, which expresses the cultural desire for nature to be a healing oasis for humans. In Eastern culture, nature is the overpowering entity that can care for humanity. This key difference from Western culture set a different trajectory for sustainable landscape architecture in Eastern civilization. The first instances of preservation and respect of nature were seen up to 2,000 years ago, via Chinese "gardens of literati", or scholar gardens. Landscapes like these emphasized controlled borders of landscape design, rather than growth and expansion.

Furthermore, ideas of yin-yang and Feng shui inspired sustainable landscape practices. Yin-yang emphasizes balance and, within the built environment, dictates that natural and manmade components of landscape architecture must be in harmony. Feng shui originates from traditional burial design and represents the "wind-water" balance. This relationship represents a fluidity of ecological processes, and Feng shui aims to protect these natural cycles. Additionally, Feng shui focuses on the "Qi" energy of design components, and how this energy can be influenced to inspire the ideal symbiosis between human and nature. Qi can include any resource, such as clean air, water, and suitable soil. Finding a location with Qi and maintaining the integrity of that Qi serves both the environment and the occupants. The principles defining Feng shui divide land into categories based on the direction plots face, their size, shape, and other parameters that dictate their "Qi", or energy potential. If the balance of a landscape does not cultivate optimal energy, to benefit both the ecosystem and humanity, then sustainable landscape architecture permits "bibo" (an addition) or "apseung" (a deletion) of materials. Human intervention, therefore, is only used when necessary to increase the symbiosis of a landscape, rather than simply to benefit the human occupants of the area.

The most influential aspect of Feng shui is its assessment of the built environment. Instead of quantitatively measuring the well-being of a community's environment, the environment is assessed based on how it serves the culture and whether processes are in balance. This integrates ecological well-being with cultural well-being, raising the stakes of landscape health and connecting it directly to the people. Although Eastern culture emphasizes a harmonious and mutual benefit in humanity's relationship with nature, critics point out that Eastern culture tends to emphasize landscape design for beauty, rather than function.
Main Differences Between Western and Eastern Civilizations
The cultural backgrounds that give way to sustainable landscape architecture in the Western and Eastern hemispheres differ on several grounds. Western culture aimed to domesticate and tame nature: humans and nature were considered separate entities, and humans would design for ways to overcome the "obstacle" of nature. Eastern culture, conversely, strived to be in harmony with nature: humans and nature were thought to be part of the same cycle, and designs were created to be in sync with natural processes for maximum benefit. A specific phenomenon that exemplifies these two cultures is the choice of building materials. In England and New England, whenever possible, brick and mortar were used, symbolic of the dominance that humans would exert over their built environment. These materials are strong, but energy-intensive to make, and not in sync with the natural world. In China and other Asian countries, wood was the preferred building material, as it was readily available and renewable. The design of buildings and landscape was purposefully done in a way that enhances the flow of "Qi", or energy, rather than stifling it.
Challenges to Sustainable Landscape Architecture
History and cultural norms have defined how landscape designs were approached in the past, largely in the context of how mankind could design untouched land. However, following the industrial revolution and a booming population rise, urbanization has become the main focus of landscape architecture. Some European countries have witnessed up to 80% of their population moving to urban centers. Along with these mass migrations comes a tremendous loss of biodiversity, as forest land is cleared for timber and residential land use. Due to urbanization and increased transportation, neighboring rural landscapes and even remote villages have been relegated to functional urban regions (FURs) and are slowly losing their cultural and heritage value. These patterns push landscape architecture to cater solely to urban spaces and to serve the metropolitan needs of economic meccas. Whereas culture was once the driving force behind how sustainable landscape designs were, financial worth now governs the design of landscapes in a largely urban world. Instead of designing untouched land and choosing how to interact with nature, landscape architects face the challenge of how to design and renovate an environment or city to be as sustainable as possible.
Western colonization of East Asian countries has overwhelmed traditional cultural perspectives of landscape architecture with Western ideals. Urbanization leads to environmental degradation such as fragmentation, a lack of green spaces, and poor water quality. All these side effects hinder the practice of Feng shui. For example, in Seoul, 20% of forests disappeared during urbanization in 1988-1999, due to an unplanned influx of population coupled with disorganization following the Korean War. To avoid the environmental downfalls of urbanization, methods such as planned spacing, sustainable transport systems, and purposeful vegetation implementation (on roofs, roads, and riversides) are recommended.
Due to a strong human presence and impact, sustainable landscape architecture is more important now than it has ever been. Tools within the built environment, such as natural filters, climate control tactics, and reconciliation ecology, are recommended to sustain the planet. This requires a combination of both Eastern and Western cultural drivers. In Southern Europe, domestic species are being re-introduced to urbanized areas to help with cultural identity, food production, and the lack of vegetation in the city. Athens is an example of this tactic. In the 1980s, Athens, Greece was a compact city. It slowly began to spread into peripheral farming land, consuming the landscapes bordering the urban space. The government has begun to plant olive trees in such areas, thereby benefiting from small green urban spaces in several ways. Olive trees offer cultural and traditional sustainability, due to their importance in Greek culture and history. They offer local food production and a source of income. Furthermore, the trees increase shade in urban heat islands and decrease the risk of fire. They are a low-maintenance crop, which emphasizes sustainable landscape design within multiple realms, making their implementation a favorable way to design challenging urban landscapes in a sustainable fashion. The emphasis on the cultural identity of the olive trees ensures cultural sustainability, the suggested fourth pillar of sustainable development.
With urbanization and industrialization discouraging participation in rural landscapes and communities, the United Nations has sought ways to restore cultural sustainability in these spaces through their touristic potential. In 1990, UNESCO emphasized the creation of GeoParks to instill geotourism and restore historical and cultural integrity to a site. By rooting these projects in cultural incentives, landscape designs can focus on rural community ideals, rather than metropolitan restrictions. Ţara Haţegului in Romania is an ideal example of such a project, which achieves sustainable landscape architecture by using cultural emphasis. Combining landscape architecture with cultural identity ensures that the land becomes a part of a community's heritage. The Council of Europe has created a framework convention on the value of cultural heritage, concluding that cultural integrity correlates with the social responsibility of a landscape. The council emphasizes approaching landscape architecture in a conscious manner that protects people's surroundings. By connecting people to their environment, it becomes part of their identity, and it gives motive to protect the ecosystem. The council concludes that such multidisciplinary policies are essential to cultivate sustainable landscape architecture.
Certifications
Green Business Certification Inc. has partnered with the U.S. Green Building Council to create the Sustainable SITES Initiative certification program. This program has adopted LEED strategies in promoting the sustainability of landscapes within the built environment.
See also
Built environment
Carbon cycle re-balancing
Climate-friendly gardening
Context theory
Cultural sustainability
Energy-efficient landscaping
Feng shui
Frederick Law Olmsted
Green roof
Green transport
GeoParks
Ian McHarg
Landscape planning
Sustainable agriculture
Sustainable gardening
Sustainable landscaping
Sustainable planting
Sustainable urban drainage
Urban agriculture
Urban forestry
Urbanization
Taoism
References
Landscape and sustainability John F. Benson, Maggie H. Roe (2007)
Sustainable Site Design: Criteria, Process, and Case Studies Claudia Dinep, Kristin Schwab (2009)
Sustainable urban design: perspectives and examples Work Group for Sustainable Urban Development (2005)
External links
The Sustainable Landscapes Conference at Utah State University
Information on designing a sustainable urban landscape
Resource guides from the American society of landscape artists.
Sustainable design
Sustainable gardening
Garden design
Landscape architecture
Sustainable architecture | Sustainable landscape architecture | [
"Engineering",
"Environmental_science"
] | 2,969 | [
"Landscape architecture",
"Sustainable architecture",
"Environmental social science",
"Architecture"
] |
5,286,471 | https://en.wikipedia.org/wiki/Treadmilling | In molecular biology, treadmilling is a phenomenon observed within protein filaments of the cytoskeletons of many cells, especially in actin filaments and microtubules. It occurs when one end of a filament grows in length while the other end shrinks, resulting in a section of filament seemingly "moving" across a stratum or the cytosol. This is due to the constant removal of protein subunits from one end of the filament, while protein subunits are constantly added at the other end. Treadmilling was discovered by Wegner, who defined its thermodynamic and kinetic constraints. Wegner recognized that "the equilibrium constant (K) for association of a monomer with a polymer is the same at both ends, since the addition of a monomer to each end leads to the same polymer"; consequently, a simple reversible polymer cannot treadmill, and nucleotide hydrolysis is required: ATP hydrolysis in the case of actin, and GTP hydrolysis in the case of microtubules.
Detailed process
Dynamics of the filament
The cytoskeleton is a highly dynamic part of a cell and cytoskeletal filaments constantly grow and shrink through addition and removal of subunits. Directed crawling motion of cells such as macrophages relies on directed growth of actin filaments at the cell front (leading edge).
Microfilaments
The two ends of an actin filament differ in their dynamics of subunit addition and removal. They are thus referred to as the plus end (with faster dynamics, also called barbed end) and the minus end (with slower dynamics, also called pointed end). This difference results from the fact that subunit addition at the minus end requires a conformational change of the subunits. Note that each subunit is structurally polar and has to attach to the filament in a particular orientation. As a consequence, the actin filaments are also structurally polar.
Elongation of the actin filament occurs when free actin (G-actin) bound to ATP associates with the filament. Under physiological conditions, it is easier for G-actin to associate at the positive end of the filament, and harder at the negative end. However, it is possible to elongate the filament at either end. Association of G-actin into F-actin is regulated by the critical concentration outlined below. Actin polymerization can further be regulated by profilin and cofilin. Cofilin functions by binding to ADP-actin at the negative end of the filament, destabilizing it and inducing depolymerization. Profilin induces ATP binding to G-actin so that it can be incorporated onto the positive end of the filament.
Microtubules
Two main theories exist on microtubule movement within the cell: dynamic instability and treadmilling. Dynamic instability occurs when the microtubule assembles and disassembles at one end only, while treadmilling occurs when one end polymerizes while the other end disassembles.
Critical concentration
The critical concentration is the concentration of either G-actin (for actin filaments) or the alpha-beta tubulin complex (for microtubules) at which the end will remain in an equilibrium state with no net growth or shrinkage. Whether the ends grow or shrink is entirely dependent on the cytosolic concentration of available monomer subunits in the surrounding area. The critical concentration differs between the plus end (CC+) and the minus end (CC−), and under normal physiological conditions, the critical concentration is lower at the plus end than at the minus end. Examples of how the cytosolic concentration relates to the critical concentration and polymerization are as follows:
A cytosolic concentration of subunits above both the CC+ and CC− ends results in subunit addition at both ends
A cytosolic concentration of subunits below both the CC+ and CC− ends results in subunit removal at both ends
Note that a cytosolic concentration of the monomer subunit between CC+ and CC− produces treadmilling, in which there is growth at the plus end and shrinking at the minus end.
The cell attempts to maintain a subunit concentration between the dissociation constants at the plus and minus ends of the polymer.
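A minimal kinetic sketch of these regimes is given below. Each end is modeled as growing at rate k_on·C − k_off, so its critical concentration is CC = k_off/k_on; the rate constants are assumed values chosen to be roughly actin-like, not measured parameters.

```python
# Two-ended filament growth model; rate constants are assumed, actin-like values.

k_on_plus, k_off_plus = 11.0, 1.4     # plus (barbed) end: uM^-1 s^-1, s^-1
k_on_minus, k_off_minus = 1.3, 0.8    # minus (pointed) end: uM^-1 s^-1, s^-1

cc_plus = k_off_plus / k_on_plus      # ~0.13 uM
cc_minus = k_off_minus / k_on_minus   # ~0.62 uM

for C in (0.05, 0.30, 1.00):          # free monomer concentration, uM
    v_plus = k_on_plus * C - k_off_plus       # elongation rate, subunits/s
    v_minus = k_on_minus * C - k_off_minus
    regime = ("shrinks at both ends" if C < cc_plus else
              "treadmills: grows at plus end, shrinks at minus end"
              if C < cc_minus else "grows at both ends")
    print(f"C = {C:.2f} uM: v+ = {v_plus:+.2f}, v- = {v_minus:+.2f} -> {regime}")
```

Steady-state treadmilling occurs at the concentration between CC+ and CC− where the plus-end gain exactly balances the minus-end loss, so total filament length stays constant while subunits flux through it.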
Microtubule treadmilling
Microtubules formed from pure tubulin undergo subunit uptake and loss at their ends both by random exchange diffusion and by a directional (treadmilling) element. Treadmilling is inefficient, and for microtubules at steady state the Wegner s-value (the reciprocal of the number of molecular events required for the net uptake of a subunit) is equal to 0.0005–0.001; i.e., it requires >1000 events. Microtubule treadmilling with pure tubulin also occurs with growing microtubules and is enhanced by proteins that bind to ends. Rapid treadmilling occurs in cells.
FtsZ treadmilling
The bacterial tubulin homolog FtsZ is one of the best documented treadmilling polymers. FtsZ assembles into protofilaments that are one subunit thick, which can further associate into small patches of parallel protofilaments. Single filaments and/or patches have been demonstrated to treadmill in vitro and inside bacterial cells. A Monte Carlo model of FtsZ treadmilling has been designed, based on a conformational change of subunits upon polymerization and GTP hydrolysis.
References
Biochemical reactions
Cell movement
Molecular biology | Treadmilling | [
"Chemistry",
"Biology"
] | 1,170 | [
"Biochemistry",
"Biochemical reactions",
"Molecular biology"
] |
5,289,693 | https://en.wikipedia.org/wiki/Lagrangian%20and%20Eulerian%20specification%20of%20the%20flow%20field |
In classical field theories, the Lagrangian specification of the flow field is a way of looking at fluid motion where the observer follows an individual fluid parcel as it moves through space and time. Plotting the position of an individual parcel through time gives the pathline of the parcel. This can be visualized as sitting in a boat and drifting down a river.
The Eulerian specification of the flow field is a way of looking at fluid motion that focuses on specific locations in the space through which the fluid flows as time passes. This can be visualized by sitting on the bank of a river and watching the water pass the fixed location.
The Lagrangian and Eulerian specifications of the flow field are sometimes loosely denoted as the Lagrangian and Eulerian frame of reference. However, in general both the Lagrangian and Eulerian specification of the flow field can be applied in any observer's frame of reference, and in any coordinate system used within the chosen frame of reference. The Lagrangian and Eulerian specifications are named after Joseph-Louis Lagrange and Leonhard Euler, respectively.
These specifications are reflected in computational fluid dynamics, where "Eulerian" simulations employ a fixed mesh while "Lagrangian" ones (such as meshfree simulations) feature simulation nodes that may move following the velocity field.
History
Leonhard Euler is credited with introducing both specifications in two publications written in 1755 and 1759. Joseph-Louis Lagrange studied the equations of motion in connection with the principle of least action in 1760, later in a treatise of fluid mechanics in 1781, and thirdly in his book Mécanique analytique. In this book Lagrange starts with the Lagrangian specification but later converts it into the Eulerian specification.
Description
In the Eulerian specification of a field, the field is represented as a function of position x and time t. For example, the flow velocity is represented by a function
$\mathbf{u}\left(\mathbf{x}, t\right).$
On the other hand, in the Lagrangian specification, individual fluid parcels are followed through time. The fluid parcels are labelled by some (time-independent) vector field x0. (Often, x0 is chosen to be the position of the center of mass of the parcels at some initial time t0. It is chosen in this particular manner to account for the possible changes of the shape over time. Therefore, the center of mass is a good parameterization of the flow velocity u of the parcel.) In the Lagrangian description, the flow is described by a function
$\mathbf{X}\left(\mathbf{x}_0, t\right),$
giving the position of the particle labeled x0 at time t.
The two specifications are related as follows:
$\mathbf{u}\left(\mathbf{X}(\mathbf{x}_0, t), t\right) = \frac{\partial \mathbf{X}}{\partial t}\left(\mathbf{x}_0, t\right),$
because both sides describe the velocity of the particle labeled x0 at time t.
Within a chosen coordinate system, x0 and x are referred to as the Lagrangian coordinates and Eulerian coordinates of the flow respectively.
Material derivative
The Lagrangian and Eulerian specifications of the kinematics and dynamics of the flow field are related by the material derivative (also called the Lagrangian derivative, convective derivative, substantial derivative, or particle derivative).
Suppose we have a flow field u, and we are also given a generic field with Eulerian specification F(x, t). Now one might ask about the total rate of change of F experienced by a specific flow parcel. This can be computed as
$\frac{\mathrm{D}\mathbf{F}}{\mathrm{D}t} = \frac{\partial \mathbf{F}}{\partial t} + \left(\mathbf{u} \cdot \nabla\right)\mathbf{F},$
where ∇ denotes the nabla operator with respect to x, and the operator u⋅∇ is to be applied to each component of F. This tells us that the total rate of change of the function F as the fluid parcels move through a flow field described by its Eulerian specification u is equal to the sum of the local rate of change and the convective rate of change of F. This is a consequence of the chain rule, since we are differentiating the function F(X(x0, t), t) with respect to t.
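A one-dimensional sketch of this identity is given below. The velocity field u(x, t) = a·x and the scalar field F(x, t) = x·e^(−t) are assumed toy choices, used only to check that the Eulerian sum ∂F/∂t + u ∂F/∂x equals the rate of change of F following a parcel.

```python
# Material derivative check in 1-D: DF/Dt = dF/dt + u * dF/dx.
import math

a = 0.5                                    # assumed constant strain rate

def u(x, t):     return a * x              # Eulerian velocity field (assumed)
def dF_dt(x, t): return -x * math.exp(-t)  # local rate of F(x, t) = x * exp(-t)
def dF_dx(x, t): return math.exp(-t)       # spatial gradient of F

def material_derivative(x, t):
    return dF_dt(x, t) + u(x, t) * dF_dx(x, t)

# Lagrangian view: a parcel labelled x0 follows X(x0, t) = x0 * exp(a*t),
# so F along the parcel is x0 * exp((a - 1) * t); differentiate in t directly.
x0, t = 2.0, 1.0
x = x0 * math.exp(a * t)                      # parcel position at time t
print(material_derivative(x, t))              # -0.6065...
print((a - 1) * x0 * math.exp((a - 1) * t))   # same value, as expected
```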
Conservation laws for a unit mass have a Lagrangian form, which together with mass conservation produce Eulerian conservation; on the contrary, when fluid particles can exchange a quantity (like energy or momentum), only Eulerian conservation laws exist.
See also
Brewer-Dobson Circulation
Conservation form
Contour advection
Displacement field (mechanics)
Equivalent latitude
Generalized Lagrangian mean
Trajectory (fluid mechanics)
Liouville's theorem (Hamiltonian)
Lagrangian particle tracking
Rolling
Streamlines, streaklines, and pathlines
Immersed Boundary Method
Semi-Lagrangian scheme
Stochastic Eulerian Lagrangian methods
Notes
References
Bennett, A. (2006). Lagrangian Fluid Dynamics. Cambridge, U.K.: Cambridge University Press.
External links
Objectivity in classical continuum mechanics: Motions, Eulerian and Lagrangian functions; Deformation gradient; Lie derivatives; Velocity-addition formula, Coriolis; Objectivity.
Fluid dynamics
Aerodynamics
Computational fluid dynamics | Lagrangian and Eulerian specification of the flow field | [
"Physics",
"Chemistry",
"Engineering"
] | 1,010 | [
"Computational fluid dynamics",
"Chemical engineering",
"Aerodynamics",
"Computational physics",
"Aerospace engineering",
"Piping",
"Fluid dynamics"
] |
24,570,445 | https://en.wikipedia.org/wiki/Matter%20collineation | A matter collineation (sometimes matter symmetry and abbreviated to MC) is a vector field $\xi$ that satisfies the condition
$\mathcal{L}_{\xi} T_{ab} = 0,$
where $T_{ab}$ are the energy–momentum tensor components. The intimate relation between geometry and physics may be highlighted here, as the vector field $\xi$ is regarded as preserving certain physical quantities along its flow lines, this being true for any two observers. In connection with this, it may be shown that every Killing vector field is a matter collineation (by the Einstein field equations (EFE), with or without cosmological constant). Thus, given a solution of the EFE, a vector field that preserves the metric necessarily preserves the corresponding energy–momentum tensor. When the energy–momentum tensor represents a perfect fluid, every Killing vector field preserves the energy density, pressure and the fluid flow vector field. When the energy–momentum tensor represents an electromagnetic field, a Killing vector field does not necessarily preserve the electric and magnetic fields.
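In component form, the defining condition expands via the standard formula for the Lie derivative of a (0,2) tensor field (the symbol ξ for the vector field is a label introduced here for clarity):

```latex
\mathcal{L}_{\xi} T_{ab}
  \;=\; \xi^{c}\,\partial_{c} T_{ab}
  \;+\; T_{cb}\,\partial_{a}\xi^{c}
  \;+\; T_{ac}\,\partial_{b}\xi^{c}
  \;=\; 0 .
```

Since the Lie derivative is tensorial, the partial derivatives here may equivalently be replaced by the covariant derivatives of any torsion-free connection.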
See also
Affine vector field
Conformal vector field
Curvature collineation
Homothetic vector field
Spacetime symmetries
Mathematical methods in general relativity | Matter collineation | [
"Physics"
] | 221 | [
"Relativity stubs",
"Theory of relativity"
] |
24,580,536 | https://en.wikipedia.org/wiki/Cobalt | Cobalt is a chemical element; it has symbol Co and atomic number 27. As with nickel, cobalt is found in the Earth's crust only in a chemically combined form, save for small deposits found in alloys of natural meteoric iron. The free element, produced by reductive smelting, is a hard, lustrous, somewhat brittle, gray metal.
Cobalt-based blue pigments (cobalt blue) have been used since antiquity for jewelry and paints, and to impart a distinctive blue tint to glass. The color was long thought to be due to the metal bismuth. Miners had long used the name kobold ore (German for goblin ore) for some of the blue pigment-producing minerals. They were so named because they were poor in known metals and gave off poisonous arsenic-containing fumes when smelted. In 1735, such ores were found to be reducible to a new metal (the first discovered since ancient times), which was ultimately named for the kobold.
Today, some cobalt is produced specifically from one of a number of metallic-lustered ores, such as cobaltite (CoAsS). The element is more usually produced as a by-product of copper and nickel mining. The Copperbelt in the Democratic Republic of the Congo (DRC) and Zambia yields most of the global cobalt production; according to Natural Resources Canada, the DRC alone accounted for more than 50% of world production in 2016.
Cobalt is primarily used in lithium-ion batteries, and in the manufacture of magnetic, wear-resistant and high-strength alloys. The compounds cobalt silicate and cobalt(II) aluminate (CoAl2O4, cobalt blue) give a distinctive deep blue color to glass, ceramics, inks, paints and varnishes. Cobalt occurs naturally as only one stable isotope, cobalt-59. Cobalt-60 is a commercially important radioisotope, used as a radioactive tracer and for the production of high-energy gamma rays. Cobalt is also used in the petroleum industry as a catalyst when refining crude oil. This is to purge it of sulfur, which is very polluting when burned and causes acid rain.
Cobalt is the active center of a group of coenzymes called cobalamins. Vitamin B12, the best-known example of the type, is an essential vitamin for all animals. Cobalt in inorganic form is also a micronutrient for bacteria, algae, and fungi.
The name cobalt derives from a type of ore considered a nuisance by 16th-century German silver miners, which in turn may have been named from a spirit or goblin held superstitiously responsible for it; this spirit is considered equatable to the kobold (a household spirit) by some, or categorized as a gnome (mine spirit) by others.
Characteristics
Cobalt is a ferromagnetic metal with a specific gravity of 8.9. The Curie temperature is about 1,115 °C and the magnetic moment is 1.6–1.7 Bohr magnetons per atom. Cobalt has a relative permeability two-thirds that of iron. Metallic cobalt occurs as two crystallographic structures: hcp and fcc. The ideal transition temperature between the hcp and fcc structures is about 450 °C, but in practice the energy difference between them is so small that random intergrowth of the two is common.
Cobalt is a weakly reducing metal that is protected from oxidation by a passivating oxide film. It is attacked by halogens and sulfur. Heating in oxygen produces Co3O4, which loses oxygen at about 900 °C to give the monoxide CoO. The metal reacts with fluorine (F2) at 520 K to give CoF3; with chlorine (Cl2), bromine (Br2) and iodine (I2), producing equivalent binary halides. It does not react with hydrogen gas (H2) or nitrogen gas (N2) even when heated, but it does react with boron, carbon, phosphorus, arsenic and sulfur. At ordinary temperatures, it reacts slowly with mineral acids, and very slowly with moist, but not dry, air.
Compounds
Common oxidation states of cobalt include +2 and +3, although compounds with oxidation states ranging from −3 to +5 are also known. A common oxidation state for simple compounds is +2 (cobalt(II)). These salts form the pink-colored metal aquo complex [Co(H2O)6]2+ in water. Addition of chloride gives the intensely blue [CoCl4]2−. In a borax bead flame test, cobalt shows deep blue in both oxidizing and reducing flames.
Oxygen and chalcogen compounds
Several oxides of cobalt are known. Green cobalt(II) oxide (CoO) has rocksalt structure. It is readily oxidized with water and oxygen to brown cobalt(III) hydroxide (Co(OH)3). At temperatures of 600–700 °C, CoO oxidizes to the blue cobalt(II,III) oxide (Co3O4), which has a spinel structure. Black cobalt(III) oxide (Co2O3) is also known. Cobalt oxides are antiferromagnetic at low temperature: CoO (Néel temperature 291 K) and Co3O4 (Néel temperature: 40 K), which is analogous to magnetite (Fe3O4), with a mixture of +2 and +3 oxidation states.
The principal chalcogenides of cobalt are the black cobalt(II) sulfides: CoS2 (pyrite structure), Co3S4 (spinel structure), and CoS (nickel arsenide structure).
Halides
Four dihalides of cobalt(II) are known: cobalt(II) fluoride (CoF2, pink), cobalt(II) chloride (CoCl2, blue), cobalt(II) bromide (CoBr2, green), cobalt(II) iodide (CoI2, blue-black). These halides exist in anhydrous and hydrated forms. Whereas the anhydrous dichloride is blue, the hydrate is red.
The reduction potential for the reaction Co3+ + e− → Co2+ is +1.92 V, beyond that for chlorine to chloride, +1.36 V. Consequently, cobalt(III) chloride would spontaneously reduce to cobalt(II) chloride and chlorine. Because the reduction potential for fluorine to fluoride is so high, +2.87 V, cobalt(III) fluoride is one of the few simple stable cobalt(III) compounds. Cobalt(III) fluoride, which is used in some fluorination reactions, reacts vigorously with water.
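As a quick check of the spontaneity claim (standard electrochemistry; the arithmetic is mine, not the article's), the cell built from the two couples has a positive potential and hence a negative free-energy change per mole of electrons transferred:

$$E_{\text{cell}} = 1.92\ \text{V} - 1.36\ \text{V} = 0.56\ \text{V}, \qquad \Delta G = -nFE_{\text{cell}} = -(1)(96485\ \text{C mol}^{-1})(0.56\ \text{V}) \approx -54\ \text{kJ mol}^{-1} < 0.$$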
Coordination compounds
The inventory of complexes is very large. Starting with higher oxidation states, complexes of Co(IV) and Co(V) are rare. Examples are found in caesium hexafluorocobaltate(IV) (Cs2CoF6) and potassium percobaltate (K3CoO4).
Cobalt(III) forms a wide variety of coordination complexes with ammonia and amines, which are called ammine complexes. Examples include [Co(NH3)6]3+, [Co(NH3)5Cl]2+ (chloropentamminecobalt(III)), and cis- and trans-[Co(NH3)4Cl2]+. The corresponding ethylenediamine complexes are also well known. Analogues are known where the halides are replaced by nitrite, hydroxide, carbonate, etc. Alfred Werner worked extensively on these complexes in his Nobel-prize-winning work. The robustness of these complexes is demonstrated by the optical resolution of tris(ethylenediamine)cobalt(III), [Co(en)3]3+.
Cobalt(II) forms a wide variety of complexes, but mainly with weakly basic ligands. The pink-colored cation hexaaquocobalt(II), [Co(H2O)6]2+, is found in several routine cobalt salts such as the nitrate and sulfate. Upon addition of excess chloride, solutions of the hexaaquo complex convert to the deep blue [CoCl4]2−, which is tetrahedral.
Softer ligands like triphenylphosphine form complexes with Co(II) and Co(I), examples being bis(triphenylphosphine)cobalt(II) chloride, CoCl2(PPh3)2, and tris(triphenylphosphine)cobalt(I) chloride, CoCl(PPh3)3. These Co(I) and Co(II) complexes represent a link to the organometallic complexes described below.
Organometallic compounds
Cobaltocene is a structural analog to ferrocene, with cobalt in place of iron. Cobaltocene is much more sensitive to oxidation than ferrocene. Cobalt carbonyl (Co2(CO)8) is a catalyst in carbonylation and hydrosilylation reactions. Vitamin B12 (see below) is an organometallic compound found in nature and is the only vitamin that contains a metal atom. An example of an alkylcobalt complex in the otherwise uncommon +4 oxidation state of cobalt is the homoleptic complex tetrakis(1-norbornyl)cobalt(IV) (Co(1-norb)4), a transition metal-alkyl complex that is notable for its resistance to β-hydrogen elimination, in accord with Bredt's rule. The cobalt(III) and cobalt(V) complexes [Co(1-norb)4]− and [Co(1-norb)4]+ are also known.
Isotopes
59Co is the only stable cobalt isotope and the only isotope that exists naturally on Earth. Twenty-two radioisotopes have been characterized: the most stable, 60Co, has a half-life of 5.2714 years; 57Co has a half-life of 271.8 days; 56Co has a half-life of 77.27 days; and 58Co has a half-life of 70.86 days. All the other radioactive isotopes of cobalt have half-lives shorter than 18 hours, and in most cases shorter than 1 second. This element also has 4 meta states, all of which have half-lives shorter than 15 minutes.
The isotopes of cobalt range in atomic weight from 50 u (50Co) to 73 u (73Co). The primary decay mode for isotopes with atomic mass unit values less than that of the only stable isotope, 59Co, is electron capture and the primary mode of decay in isotopes with atomic mass greater than 59 atomic mass units is beta decay. The primary decay products below 59Co are element 26 (iron) isotopes; above that the decay products are element 28 (nickel) isotopes.
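To make the half-lives above concrete, here is a minimal decay-law sketch in Python (the helper name is mine; the half-lives are the ones quoted above):

def remaining_fraction(t, t_half):
    # Fraction of a radioisotope remaining after time t (same units as t_half).
    return 2.0 ** (-t / t_half)

# Cobalt-60 (half-life 5.2714 y) and cobalt-57 (half-life 271.8 d):
print(remaining_fraction(10.0, 5.2714))   # ~0.27 of a 60Co source left after 10 years
print(remaining_fraction(365.25, 271.8))  # ~0.39 of a 57Co source left after 1 year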
Etymology
Many different stories about the origin of the word "cobalt" have been proposed. In one version, the element cobalt was named after "kobelt", the name which 16th-century German silver miners had given to a nuisance type of ore that was corrosive and issued poisonous gas. Although such ores had been used for blue pigmentation since antiquity, the Germans at that time did not have the technology to smelt the ore into metal (cf. below).
The authority on such kobelt ore (Latinized as cobaltum or cadmia) at the time was Georgius Agricola. In a separate work, he was also the oft-quoted authority on the mine spirits called "kobel" (Latinized as cobalus, pl. cobali).
Agricola did not make a connection between the similarly named ore and spirit. However, a causal connection (the ore blamed on the "kobel") was made by a contemporary, and a word-origin connection (the word "formed" from cobalus) by a late 18th-century writer. Later, Grimms' dictionary (1868) noted that the kobalt/kobelt ore was blamed on the mountain spirit, which was also held responsible for "stealing the silver and putting out an ore that caused poor mining atmosphere (Wetter) and other health hazards".
Grimms' dictionary entries equated the word "kobel" with "kobold" and listed it as a mere variant diminutive, but the latter is defined there as a household spirit, whereas some of the more recent commentators prefer to characterize the ore's namesake kobelt (recte kobel) as a gnome.
The early 20th century Oxford English Dictionary (1st edition, 1908) had upheld Grimm's etymology. However, by around the same time in Germany, the alternate etymology not endorsed by Grimm (kob/kof "house, chamber" + walt "power, ruler") was being proposed as more convincing.
Somewhat later, Paul Kretschmer (1928) explained that while this "house ruler" etymology was the proper one behind the original meaning of kobold as a household spirit, a corruption later occurred that introduced the idea of "mine demon" into it. The present edition of the Etymologisches Wörterbuch (25th ed., 2012) under "kobold" lists the latter etymology, not Grimm's, but still maintains, under its entry for "kobalt", that the cobalt ore may have got its name from "a type of mine spirit/demon" (daemon metallicus), while stating that this is "apparently" the kobold.
Joseph William Mellor (1935) also stated that cobalt may derive from kobalos (Greek κόβαλος, "rogue"), though other theories had been suggested.
Alternate theories
Several alternative etymologies have been suggested, which may not involve a spirit (kobel or kobold) at all. Karl Müller-Fraureuth conjectured that kobelt derived from the kobel/köbel (Latinized as modulus), a bucket used in mining frequently mentioned by Agricola.
Another theory, given by the Etymologisches Wörterbuch, derives the term from a Greek name for arsenic sulfide, which occurs as noxious fumes.
An etymology from Slavonic was suggested by Emanuel Merck (1902).
W. W. Skeat and J. Berendes construed the name as meaning "parasite", i.e., an ore parasitic to nickel, but this explanation is faulted for its anachronism, since nickel was not discovered until 1751.
History
Cobalt compounds have been used for centuries to impart a rich blue color to glass, glazes, and ceramics. Cobalt has been detected in Egyptian sculpture, Persian jewelry from the third millennium BC, in the ruins of Pompeii, destroyed in 79 AD, and in China, dating from the Tang dynasty (618–907 AD) and the Ming dynasty (1368–1644 AD).
Cobalt has been used to color glass since the Bronze Age. The excavation of the Uluburun shipwreck yielded an ingot of blue glass, cast during the 14th century BC. Blue glass from Egypt was either colored with copper, iron, or cobalt. The oldest cobalt-colored glass is from the eighteenth dynasty of Egypt (1550–1292 BC). The source for the cobalt the Egyptians used is not known.
The word cobalt is derived from the 16th-century German "kobelt", a type of ore, as aforementioned. The first attempts to smelt those ores for copper or silver failed, yielding simply powder (cobalt(II) oxide) instead. Because the primary ores of cobalt always contain arsenic, smelting the ore oxidized the arsenic into the highly toxic and volatile arsenic oxide, adding to the notoriety of the ore. Paracelsus, Georgius Agricola, and Basil Valentine all referred to such silicates as "cobalt".
Swedish chemist Georg Brandt (1694–1768) is credited with discovering cobalt around 1735, showing it to be a previously unknown element, distinct from bismuth and other traditional metals. Brandt called it a new "semi-metal", naming it for the mineral from which he had extracted it. He showed that compounds of cobalt were the source of the blue color in glass, which previously had been attributed to the bismuth found with cobalt. Cobalt thus became the first new metal discovered since prehistoric times; all previously known metals (iron, copper, silver, gold, zinc, mercury, tin, lead and bismuth) had no recorded discoverers.
During the 19th century, a significant part of the world's production of cobalt blue (a pigment made with cobalt compounds and alumina) and smalt (cobalt glass powdered for use for pigment purposes in ceramics and painting) was carried out at the Norwegian Blaafarveværket. The first mines for the production of smalt in the 16th century were located in Norway, Sweden, Saxony and Hungary. With the discovery of cobalt ore in New Caledonia in 1864, the mining of cobalt in Europe declined. With the discovery of ore deposits in Ontario, Canada, in 1904 and the discovery of even larger deposits in the Katanga Province in the Congo in 1914, mining operations shifted again. When the Shaba conflict started in 1978, the copper mines of Katanga Province nearly stopped production. The impact on the world cobalt economy from this conflict was smaller than expected: cobalt is a rare metal, the pigment is highly toxic, and the industry had already established effective ways for recycling cobalt materials. In some cases, industry was able to change to cobalt-free alternatives.
In 1938, John Livingood and Glenn T. Seaborg discovered the radioisotope cobalt-60. This isotope was famously used at Columbia University in the 1950s to establish parity violation in radioactive beta decay.
After World War II, the US wanted to guarantee the supply of cobalt ore for military uses (as the Germans had been doing) and prospected for cobalt within the US. High purity cobalt was highly sought after for its use in jet engines and gas turbines. An adequate supply of the ore was found in Idaho near Blackbird canyon. Calera Mining Company started production at the site.
Cobalt demand has further accelerated in the 21st century as an essential constituent of materials used in rechargeable batteries, superalloys, and catalysts. It has been argued that cobalt will be one of the main objects of geopolitical competition in a world running on renewable energy and dependent on batteries, but this perspective has also been criticised for underestimating the power of economic incentives for expanded production.
Occurrence
The stable form of cobalt is produced in supernovae through the r-process. It comprises 0.0029% of the Earth's crust. Except as recently delivered in meteoric iron, free cobalt (the native metal) is not found on Earth's surface because of its tendency to react with oxygen in the atmosphere. Small amounts of cobalt compounds are found in most rocks, soils, plants, and animals. In the ocean cobalt typically reacts with chlorine.
In nature, cobalt is frequently associated with nickel. Both are characteristic components of meteoric iron, though cobalt is much less abundant in iron meteorites than nickel. As with nickel, cobalt in meteoric iron alloys may have been well enough protected from oxygen and moisture to remain as the free (but alloyed) metal.
Cobalt in compound form occurs in copper and nickel minerals. It is the major metallic component that combines with sulfur and arsenic in the sulfidic cobaltite (CoAsS), safflorite (CoAs2), glaucodot ((Co,Fe)AsS), and skutterudite (CoAs3) minerals. The mineral cattierite is similar to pyrite and occurs together with vaesite in the copper deposits of Katanga Province. When exposed to the atmosphere, weathering occurs; the sulfide minerals oxidize and form pink erythrite ("cobalt bloom": Co3(AsO4)2·8H2O) and spherocobaltite (CoCO3).
Cobalt is also a constituent of tobacco smoke. The tobacco plant readily absorbs and accumulates heavy metals like cobalt from the surrounding soil in its leaves. These are subsequently inhaled during tobacco smoking.
Production
The main ores of cobalt are cobaltite, erythrite, glaucodot and skutterudite (see above), but most cobalt is obtained by reducing the cobalt by-products of nickel and copper mining and smelting.
Since cobalt is generally produced as a by-product, the supply of cobalt depends to a great extent on the economic feasibility of copper and nickel mining in a given market. Demand for cobalt was projected to grow 6% in 2017.
Primary cobalt deposits are rare, such as those occurring in hydrothermal deposits, associated with ultramafic rocks, typified by the Bou-Azzer district of Morocco. At such locations, cobalt ores are mined exclusively, albeit at a lower concentration, and thus require more downstream processing for cobalt extraction.
Several methods exist to separate cobalt from copper and nickel, depending on the concentration of cobalt and the exact composition of the used ore. One method is froth flotation, in which surfactants bind to ore components, leading to an enrichment of cobalt ores. Subsequent roasting converts the ores to cobalt sulfate, and the copper and the iron are oxidized to the oxide. Leaching with water extracts the sulfate together with the arsenates. The residues are further leached with sulfuric acid, yielding a solution of copper sulfate. Cobalt can also be leached from the slag of copper smelting.
The products of the above-mentioned processes are transformed into the cobalt oxide (Co3O4). This oxide is reduced to metal by the aluminothermic reaction or reduction with carbon in a blast furnace.
Extraction
The United States Geological Survey estimates world reserves of cobalt at 7,100,000 metric tons. The Democratic Republic of the Congo (DRC) currently produces 63% of the world's cobalt. This market share may reach 73% by 2025 if planned expansions by mining producers like Glencore Plc take place as expected. Bloomberg New Energy Finance has estimated that by 2030, global demand for cobalt could be 47 times more than it was in 2017.
Democratic Republic of the Congo
Changes that Congo made to mining laws in 2002 attracted new investments in Congolese copper and cobalt projects. In 2005, the top producer of cobalt was the copper deposits in the Democratic Republic of the Congo's Katanga Province. Formerly Shaba province, the area had almost 40% of global reserves, reported the British Geological Survey in 2009.
The Mukondo Mountain project, operated by the Central African Mining and Exploration Company (CAMEC) in Katanga Province, may be the richest cobalt reserve in the world. It produced an estimated one-third of the total global cobalt production in 2008. In July 2009, CAMEC announced a long-term agreement to deliver its entire annual production of cobalt concentrate from Mukondo Mountain to Zhejiang Galico Cobalt & Nickel Materials of China.
In 2016, Chinese ownership of cobalt production in the Congo was estimated at over 10% of global cobalt supply, forming a key input to the Chinese cobalt refining industry and granting China substantial influence over the global cobalt supply chain. Chinese control of Congolese cobalt has raised concern in Western nations which have sought to reduce supply chain reliance upon China and have expressed concern regarding labor and human rights violations in cobalt mines in the DRC.
Glencore's Mutanda Mine shipped 24,500 tons of cobalt in 2016, 40% of Congo DRC's output and nearly a quarter of global production. After oversupply, Glencore closed Mutanda for two years in late 2019. Glencore's Katanga Mining project is resuming as well and should produce 300,000 tons of copper and 20,000 tons of cobalt by 2019, according to Glencore.
In February 2018, global asset management firm AllianceBernstein defined the DRC as economically "the Saudi Arabia of the electric vehicle age", due to its cobalt resources, as essential to the lithium-ion batteries that drive electric vehicles.
On March 9, 2018, President Joseph Kabila updated the 2002 mining code, increasing royalty charges and declaring cobalt and coltan "strategic metals". The 2002 mining code was effectively updated on December 4, 2018.
Labor conditions
Artisanal mining supplied 17% to 40% of the DRC production as of 2016. Some 100,000 cobalt miners in Congo DRC use hand tools to dig hundreds of feet, with little planning and fewer safety measures, say workers and government and NGO officials, as well as The Washington Post reporters' observations on visits to isolated mines. The lack of safety precautions frequently causes injuries or death. Mining pollutes the vicinity and exposes local wildlife and indigenous communities to toxic metals thought to cause birth defects and breathing difficulties, according to health officials.
Child labor is used in mining cobalt from African artisanal mines. Human rights activists have highlighted this and investigative journalism reporting has confirmed it. This revelation prompted cell phone maker Apple Inc., on March 3, 2017, to stop buying ore from suppliers such as Zhejiang Huayou Cobalt who source from artisanal mines in the DRC, and begin using only suppliers that are verified to meet its workplace standards. In 2023, Apple announced it would convert to using recycled cobalt by 2025.
There is a push globally by the EU and major car manufacturers (OEMs) for cobalt to be sourced and produced sustainably and responsibly, with traceability of the supply chain. Mining companies are adopting and practising ESG initiatives in line with OECD Guidance and putting in place evidence of zero- to low-carbon-footprint activities in the supply-chain production of lithium-ion batteries. These initiatives are already taking place with major mining companies and artisanal and small-scale mining (ASM) companies. Car manufacturers and battery-manufacturer supply chains, including Tesla, VW, BMW, BASF and Glencore, are participating in several initiatives, such as the Responsible Cobalt Initiative and the Cobalt for Development study. In 2018 the BMW Group, in partnership with BASF, Samsung SDI and Samsung Electronics, launched a pilot project in the DRC at one pilot mine to improve conditions and address challenges for artisanal miners and the surrounding communities.
The political and ethnic dynamics of the region have in the past caused outbreaks of violence and years of armed conflict and displaced populations. This instability affected the price of cobalt and also created perverse incentives for the combatants in the First and Second Congo Wars to prolong the fighting, since access to diamond mines and other valuable resources helped to finance their military goals, which frequently amounted to genocide, and also enriched the fighters themselves. While DR Congo was not invaded by neighboring military forces in the 2010s, some of the richest mineral deposits adjoin areas where Tutsis and Hutus still frequently clash; unrest continues, although on a smaller scale, and refugees still flee outbreaks of violence.
Cobalt extracted from small Congolese artisanal mining endeavors in 2007 supplied a single Chinese company, Congo DongFang International Mining. A subsidiary of Zhejiang Huayou Cobalt, one of the world's largest cobalt producers, Congo DongFang supplied cobalt to some of the world's largest battery manufacturers, who produced batteries for ubiquitous products like the Apple iPhone. Because of alleged labour violations and environmental concerns, LG Chem subsequently audited Congo DongFang in accordance with OECD guidelines. LG Chem, which also produces battery materials for car companies, imposed a code of conduct on all suppliers that it inspects.
In December 2019, International Rights Advocates, a human rights NGO, filed a landmark lawsuit against Apple, Tesla, Dell, Microsoft and Google's parent company Alphabet for "knowingly benefiting from and aiding and abetting the cruel and brutal use of young children" in mining cobalt. The companies in question denied their involvement in child labour. In 2024 the court ruled that the suppliers facilitated forced labor but that the US tech companies are not liable, because they do not operate as a shared enterprise with the suppliers and the "alleged injuries are not fairly traceable" to any of the defendants' conduct. The book Cobalt Red alleges that workers, including children, suffer injuries, amputations, and death as the result of the hazardous working conditions and mine-tunnel collapses during artisanal mining of cobalt in the DRC.
Since child and slave labor have been repeatedly reported in cobalt mining, primarily in the artisanal mines of DR Congo, technology companies seeking an ethical supply chain have faced shortages of this raw material and the price of cobalt metal reached a nine-year high in October 2017, more than US$30 a pound, versus US$10 in late 2015. After oversupply, the price dropped to a more normal $15 in 2019. As a reaction to the issues with artisanal cobalt mining in DR Congo a number of cobalt suppliers and their customers have formed the Fair Cobalt Alliance (FCA) which aims to end the use of child labor and to improve the working conditions of cobalt mining and processing in the DR Congo. Members of FCA include Zhejiang Huayou Cobalt, Sono Motors, the Responsible Cobalt Initiative, Fairphone, Glencore and Tesla, Inc.
Canada
In 2017, some exploration companies were planning to survey old silver and cobalt mines in the area of Cobalt, Ontario, where significant deposits are believed to lie.
Cuba
Canada's Sherritt International processes cobalt ores in nickel deposits from the Moa mines in Cuba, and the island has several other mines in Mayarí, Camagüey, and Pinar del Río. Continued investments by Sherritt International in Cuban nickel and cobalt production, while acquiring mining rights for 17–20 years, made the communist country third in cobalt reserves in 2019, ahead of Canada itself.
Indonesia
Starting from smaller amounts in 2021, Indonesia began producing cobalt as a byproduct of nickel production. By 2022, the country had become the world's second-largest cobalt producer, with Benchmark Mineral Intelligence forecasting Indonesian output to make up 20 percent of global production by 2030.
Applications
Cobalt has historically been used mostly in the production of high-performance alloys. It is also used in some rechargeable batteries.
Alloys
Cobalt-based superalloys have historically consumed most of the cobalt produced. The temperature stability of these alloys makes them suitable for turbine blades for gas turbines and aircraft jet engines, although nickel-based single-crystal alloys surpass them in performance. Cobalt-based alloys are also corrosion- and wear-resistant, making them, like titanium, useful for making orthopedic implants that do not wear down over time. The development of wear-resistant cobalt alloys started in the first decade of the 20th century with the stellite alloys, containing chromium with varying quantities of tungsten and carbon. Alloys with chromium and tungsten carbides are very hard and wear-resistant. Special cobalt-chromium-molybdenum alloys like Vitallium are used for prosthetic parts (hip and knee replacements). Cobalt alloys are also used for dental prosthetics as a useful substitute for nickel, which may be allergenic. Some high-speed steels also contain cobalt for increased heat and wear resistance. The special alloys of aluminium, nickel, cobalt and iron, known as Alnico, and of samarium and cobalt (samarium–cobalt magnet) are used in permanent magnets. It is also alloyed with 95% platinum for jewelry, yielding an alloy suitable for fine casting, which is also slightly magnetic.
Batteries
Lithium cobalt oxide (LiCoO2, aka "LCO"), first sold commercially in 1991 by Sony, was widely used in lithium-ion battery cathodes until the 2010s. The material is composed of cobalt oxide layers with the lithium intercalated. These LCO batteries continue to dominate the market for consumer electronics. Batteries for electric cars however have shifted to lower cobalt technologies.
In 2018, most cobalt in batteries was used in mobile devices; a more recent application is rechargeable batteries for electric cars. This industry's demand for cobalt increased five-fold from 2016 to 2020, which made it urgent to find new raw materials in more stable areas of the world. Demand is expected to continue or increase as the prevalence of electric vehicles increases. Exploration in 2016–2017 included the area around Cobalt, Ontario, an area where many silver mines ceased operation decades ago. Cobalt use for electric vehicles increased 81% from the first half of 2018 to 7,200 tonnes in the first half of 2019, for a battery capacity of 46.3 GWh.
As of August 2020, battery makers had gradually reduced the cathode cobalt content from 1/3 (NMC 111) to 1/5 (NMC 442) and then to 1/10 (NMC 811), and had also introduced the cobalt-free lithium iron phosphate cathode into the battery packs of electric cars such as the Tesla Model 3.
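The fractions quoted above follow directly from the NMC naming convention, in which "NMC abc" gives the Ni:Mn:Co molar ratio among the transition metals; a quick sketch (the helper name is mine):

def cobalt_fraction(ni, mn, co):
    # Cobalt's molar share of the transition metals in an NMC cathode.
    return co / (ni + mn + co)

for name, (ni, mn, co) in {"NMC 111": (1, 1, 1),
                           "NMC 442": (4, 4, 2),
                           "NMC 811": (8, 1, 1)}.items():
    print(name, cobalt_fraction(ni, mn, co))  # 1/3, 1/5 and 1/10 respectively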
Research was also conducted by the European Union into the possibility of eliminating cobalt requirements in lithium-ion battery production.
In September 2020, Tesla outlined their plans to make their own, cobalt-free battery cells.
Nickel–cadmium (NiCd) and nickel metal hydride (NiMH) batteries also included cobalt to improve the oxidation of nickel in the battery.
Lithium iron phosphate batteries officially surpassed ternary cobalt batteries in 2021 with 52% of installed capacity. Analysts estimate that its market share will exceed 60% in 2024.
Catalysts
Several cobalt compounds are oxidation catalysts. Cobalt acetate is used to convert xylene to terephthalic acid, the precursor of the bulk polymer polyethylene terephthalate. Typical catalysts are the cobalt carboxylates (known as cobalt soaps). They are also used in paints, varnishes, and inks as "drying agents" through the oxidation of drying oils. However, their use is being phased out due to toxicity concerns. The same carboxylates are used to improve the adhesion between steel and rubber in steel-belted radial tires. In addition they are used as accelerators in polyester resin systems.
Cobalt-based catalysts are used in reactions involving carbon monoxide. Cobalt is also a catalyst in the Fischer–Tropsch process for the hydrogenation of carbon monoxide into liquid fuels. Hydroformylation of alkenes often uses cobalt octacarbonyl as a catalyst. The hydrodesulfurization of petroleum uses a catalyst derived from cobalt and molybdenum. This process helps to clean petroleum of sulfur impurities that interfere with the refining of liquid fuels.
Pigments and coloring
Before the 19th century, cobalt was predominantly used as a pigment. It has been used since the Middle Ages to make smalt, a blue-colored glass. Smalt is produced by melting a mixture of roasted mineral smaltite, quartz and potassium carbonate, which yields a dark blue silicate glass, which is finely ground after the production. Smalt was widely used to color glass and as pigment for paintings. In 1780, Sven Rinman discovered cobalt green, and in 1802 Louis Jacques Thénard discovered cobalt blue. Cobalt pigments such as cobalt blue (cobalt aluminate), cerulean blue (cobalt(II) stannate), various hues of cobalt green (a mixture of cobalt(II) oxide and zinc oxide), and cobalt violet (cobalt phosphate) are used as artist's pigments because of their superior chromatic stability.
Radioisotopes
Cobalt-60 (Co-60 or 60Co) is useful as a gamma-ray source because it can be produced in predictable amounts with high activity by bombarding cobalt with neutrons. It produces gamma rays with energies of 1.17 and 1.33 MeV.
Cobalt is used in external beam radiotherapy, sterilization of medical supplies and medical waste, radiation treatment of foods for sterilization (cold pasteurization), industrial radiography (e.g. weld integrity radiographs), density measurements (e.g. concrete density measurements), and tank fill height switches. The metal has the unfortunate property of producing a fine dust, causing problems with radiation protection. Cobalt from radiotherapy machines has been a serious hazard when not discarded properly, and one of the worst radiation contamination accidents in North America occurred in 1984, when a discarded radiotherapy unit containing cobalt-60 was mistakenly disassembled in a junkyard in Juarez, Mexico.
Cobalt-60 has a radioactive half-life of 5.27 years. Loss of potency requires periodic replacement of the source in radiotherapy and is one reason why cobalt machines have been largely replaced by linear accelerators in modern radiation therapy. Cobalt-57 (Co-57 or 57Co) is a cobalt radioisotope most often used in medical tests, as a radiolabel for vitamin B12 uptake, and for the Schilling test. Cobalt-57 is used as a source in Mössbauer spectroscopy and is one of several possible sources in X-ray fluorescence devices.
Nuclear weapon designs could intentionally incorporate 59Co, some of which would be activated in a nuclear explosion to produce 60Co. The 60Co, dispersed as nuclear fallout, is sometimes called a cobalt bomb.
Magnetic materials
Due to the ferromagnetic properties of cobalt, it is used in the production of various magnetic materials. It is used in permanent magnets such as Alnico magnets, which are known for their strong magnetic properties and are used in electric motors, sensors, and MRI machines. It is also used in the production of magnetic alloys like cobalt steel, widely used in magnetic recording media such as hard disks and tapes.
Cobalt's ability to maintain magnetic properties at high temperatures makes it valuable in magnetic recording applications, ensuring reliable data storage devices. Cobalt also contributes to specialized magnets such as samarium-cobalt and neodymium-iron-boron magnets, which are vital in electronics for components like sensors and actuators.
Other uses
Cobalt is used in electroplating for its attractive appearance, hardness, and resistance to oxidation.
It is also used as a base primer coat for porcelain enamels.
Biological role
Cobalt is essential to the metabolism of all animals. It is a key constituent of cobalamin, also known as vitamin B12, the primary biological reservoir of cobalt as an ultratrace element. Bacteria in the stomachs of ruminant animals convert cobalt salts into vitamin B12, a compound which can only be produced by bacteria or archaea. A minimal presence of cobalt in soils therefore markedly improves the health of grazing animals, and an uptake of 0.20 mg/kg a day is recommended, because they have no other source of vitamin B12.
Proteins based on cobalamin use corrin to hold the cobalt. Coenzyme B12 features a reactive C-Co bond that participates in the reactions. In humans, B12 has two types of alkyl ligand: methyl and adenosyl. MeB12 promotes methyl (−CH3) group transfers. The adenosyl version of B12 catalyzes rearrangements in which a hydrogen atom is directly transferred between two adjacent atoms with concomitant exchange of the second substituent, X, which may be a carbon atom with substituents, an oxygen atom of an alcohol, or an amine. Methylmalonyl coenzyme A mutase (MUT) converts methylmalonyl-CoA (MMl-CoA) to succinyl-CoA (Su-CoA), an important step in the extraction of energy from proteins and fats.
Although far less common than other metalloproteins (e.g. those of zinc and iron), other cobaltoproteins are known besides B12. These proteins include methionine aminopeptidase 2, an enzyme that occurs in humans and other mammals that does not use the corrin ring of B12, but binds cobalt directly. Another non-corrin cobalt enzyme is nitrile hydratase, an enzyme in bacteria that metabolizes nitriles.
Cobalt deficiency
In humans, consumption of cobalt-containing vitamin B12 meets all needs for cobalt. For cattle and sheep, which meet vitamin B12 needs via synthesis by resident bacteria in the rumen, there is a function for inorganic cobalt. In the early 20th century, during the development of farming on the North Island Volcanic Plateau of New Zealand, cattle suffered from what was termed "bush sickness". It was discovered that the volcanic soils lacked the cobalt salts essential for the cattle food chain. The "coast disease" of sheep in the Ninety Mile Desert of the Southeast of South Australia in the 1930s was found to originate in nutritional deficiencies of trace elements cobalt and copper. The cobalt deficiency was overcome by the development of "cobalt bullets", dense pellets of cobalt oxide mixed with clay given orally for lodging in the animal's rumen.
Health issues
The LD50 value for soluble cobalt salts has been estimated to be between 150 and 500 mg/kg. In the US, the Occupational Safety and Health Administration (OSHA) has designated a permissible exposure limit (PEL) in the workplace as a time-weighted average (TWA) of 0.1 mg/m3. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 0.05 mg/m3, time-weighted average. The IDLH (immediately dangerous to life and health) value is 20 mg/m3.
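As an illustration of how such a time-weighted average is applied in practice (the sampling numbers below are hypothetical, not from the article):

def eight_hour_twa(samples):
    # samples: (concentration in mg/m3, duration in hours), assuming an 8 h shift.
    return sum(c * h for c, h in samples) / 8.0

shift = [(0.05, 4.0), (0.20, 2.0), (0.00, 2.0)]  # hypothetical exposure record
print(eight_hour_twa(shift))          # 0.075 mg/m3
print(eight_hour_twa(shift) <= 0.1)   # True: within the OSHA PEL of 0.1 mg/m3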
However, chronic cobalt ingestion has caused serious health problems at doses far less than the lethal dose. In 1966, the addition of cobalt compounds to stabilize beer foam in Canada led to a peculiar form of toxin-induced cardiomyopathy, which came to be known as beer drinker's cardiomyopathy.
Furthermore, cobalt metal is suspected of causing cancer (i.e., possibly carcinogenic, IARC Group 2B) as per the International Agency for Research on Cancer (IARC) Monographs.
It causes respiratory problems when inhaled. It also causes skin problems when touched; after nickel and chromium, cobalt is a major cause of contact dermatitis.
Notes
References
Further reading
External links
Cobalt at The Periodic Table of Videos (University of Nottingham)
Centers for Disease Control and Prevention – Cobalt
Chemical elements
Transition metals
Dietary minerals
Ferromagnetic materials
IARC Group 2B carcinogens
Child labour
Cobalt mining
Informal economy in Africa
Resource economics
Mining communities in Africa
Extractive Industries Transparency Initiative
Chemical elements with hexagonal close-packed structure
Native element minerals | Cobalt | [
"Physics",
"Chemistry"
] | 8,666 | [
"Nuclear magnetic resonance",
"Chemical elements",
"Ferromagnetic materials",
"Materials",
"Nuclear physics",
"Atoms",
"Matter"
] |
24,580,596 | https://en.wikipedia.org/wiki/Cerium | Cerium is a chemical element; it has symbol Ce and atomic number 58. It is a soft, ductile, and silvery-white metal that tarnishes when exposed to air. Cerium is the second element in the lanthanide series, and while it often shows the oxidation state of +3 characteristic of the series, it also has a stable +4 state that does not oxidize water. It is considered one of the rare-earth elements. Cerium has no known biological role in humans but is not particularly toxic, except with intense or continued exposure.
Despite always occurring in combination with the other rare-earth elements in minerals such as those of the monazite and bastnäsite groups, cerium is easy to extract from its ores, as it can be distinguished among the lanthanides by its unique ability to be oxidized to the +4 state in aqueous solution. It is the most common of the lanthanides, followed by neodymium, lanthanum, and praseodymium. Its estimated abundance in the Earth's crust is 68 ppm.
Cerium was the first of the lanthanides to be discovered, in Bastnäs, Sweden. It was discovered by Jöns Jakob Berzelius and Wilhelm Hisinger in 1803, and independently by Martin Heinrich Klaproth in Germany in the same year. In 1839 Carl Gustaf Mosander became the first to isolate the metal. Today, cerium and its compounds have a variety of uses: for example, cerium(IV) oxide is used to polish glass and is an important part of catalytic converters. Cerium metal is used in ferrocerium lighters for its pyrophoric properties. Cerium-doped YAG phosphor is used in conjunction with blue light-emitting diodes to produce white light in most commercial white LED light sources.
Characteristics
Physical
Cerium is the second element of the lanthanide series. In the periodic table, it appears between the lanthanides lanthanum to its left and praseodymium to its right, and above the actinide thorium. It is a ductile metal with a hardness similar to that of silver. Its 58 electrons are arranged in the configuration [Xe]4f¹5d¹6s², of which the four outer electrons are valence electrons. The 4f, 5d, and 6s energy levels are very close to each other, and the transfer of one electron to the 5d shell is due to strong interelectronic repulsion in the compact 4f shell. This effect is overwhelmed when the atom is positively ionised; thus Ce³⁺ on its own has instead the regular configuration [Xe]4f¹, although in some solid solutions it may be [Xe]4f⁰5d¹. Most lanthanides can use only three electrons as valence electrons, as afterwards the remaining 4f electrons are too strongly bound: cerium is an exception because of the stability of the empty f-shell in Ce⁴⁺ and the fact that it comes very early in the lanthanide series, where the nuclear charge is still low enough until neodymium to allow the removal of the fourth valence electron by chemical means.
Cerium has a variable electronic structure. The energy of the 4f electron is nearly the same as that of the outer 5d and 6s electrons that are delocalized in the metallic state, and only a small amount of energy is required to change the relative occupancy of these electronic levels. This gives rise to dual valence states. For example, a volume change of about 10% occurs when cerium is subjected to high pressures or low temperatures. In its high-pressure phase (α-cerium), the 4f electrons are also delocalized and itinerant, as opposed to the localized 4f electrons of the low-pressure phase (γ-cerium). The valence appears to change from about 3 to 4 when it is cooled or compressed.
Chemical properties of the element
Like the other lanthanides, cerium metal is a good reducing agent, having a standard reduction potential of E° = −2.34 V for the Ce3+/Ce couple. It tarnishes in air, forming a passivating oxide layer like iron rust. A centimeter-sized sample of cerium metal corrodes completely in about a year. More dramatically, metallic cerium can be highly pyrophoric, igniting in air and burning to the dioxide:

Ce + O2 → CeO2
Being highly electropositive, cerium reacts with water. The reaction is slow with cold water but speeds up with increasing temperature, producing cerium(III) hydroxide and hydrogen gas:

2 Ce + 6 H2O → 2 Ce(OH)3 + 3 H2
Allotropes
Four allotropic forms of cerium are known to exist at standard pressure and are given the common labels of α to δ:
The high-temperature form, δ-cerium, has a bcc (body-centered cubic) crystal structure and exists above 726 °C.
The stable form below 726 °C to approximately room temperature is γ-cerium, with an fcc (face-centered cubic) crystal structure.
The DHCP (double hexagonal close-packed) form β-cerium is the equilibrium structure approximately from room temperature to −150 °C.
The fcc form α-cerium is stable below about −150 °C; it has a density of 8.16 g/cm³.
Other solid phases occurring only at high pressures are shown on the phase diagram.
Both γ and β forms are quite stable at room temperature, although the equilibrium transformation temperature is estimated at 75 °C.
At lower temperatures the behavior of cerium is complicated by the slow rates of transformation. Transformation temperatures are subject to substantial hysteresis, and values quoted here are approximate. Upon cooling below −15 °C, γ-cerium starts to change to β-cerium, but the transformation involves a volume increase and, as more β forms, the internal stresses build up and suppress further transformation. Cooling below approximately −160 °C will start formation of α-cerium, but this forms only from remaining γ-cerium. β-cerium does not significantly transform to α-cerium except in the presence of stress or deformation. At atmospheric pressure, liquid cerium is more dense than its solid form at the melting point.
Isotopes
Naturally occurring cerium is made up of four isotopes: 136Ce (0.19%), 138Ce (0.25%), 140Ce (88.4%), and 142Ce (11.1%). All four are observationally stable, though the light isotopes 136Ce and 138Ce are theoretically expected to undergo double electron capture to isotopes of barium, and the heaviest isotope 142Ce is expected to undergo double beta decay to 142Nd or alpha decay to 138Ba. Thus, 140Ce is the only theoretically stable isotope. None of these decay modes have yet been observed, though the double beta decay of 136Ce, 138Ce, and 142Ce has been experimentally searched for. The current experimental limits for their half-lives are:
136Ce: >3.8×10¹⁶ y
138Ce: >5.7×10¹⁶ y
142Ce: >5.0×10¹⁶ y
All other cerium isotopes are synthetic and radioactive. The most stable of them are 144Ce with a half-life of 284.9 days, 139Ce with a half-life of 137.6 days, and 141Ce with a half-life of 32.5 days. All other radioactive cerium isotopes have half-lives under four days, and most of them have half-lives under ten minutes. The isotopes between 140Ce and 144Ce inclusive occur as fission products of uranium. The primary decay mode of the isotopes lighter than 140Ce is inverse beta decay or electron capture to isotopes of lanthanum, while that of the heavier isotopes is beta decay to isotopes of praseodymium. Some isotopes of neodymium can alpha decay or are predicted to decay to isotopes of cerium.
The rarity of the proton-rich 136Ce and 138Ce is explained by the fact that they cannot be made in the most common processes of stellar nucleosynthesis for elements beyond iron, the s-process (slow neutron capture) and the r-process (rapid neutron capture). This is so because they are bypassed by the reaction flow of the s-process, and the r-process nuclides are blocked from decaying to them by more neutron-rich stable nuclides. Such nuclei are called p-nuclei, and their origin is not yet well understood: some speculated mechanisms for their formation include proton capture as well as photodisintegration. 140Ce is the most common isotope of cerium, as it can be produced in both the s- and r-processes, while 142Ce can only be produced in the r-process. Another reason for the abundance of 140Ce is that it is a magic nucleus, having a closed neutron shell (it has 82 neutrons), and hence it has a very low cross section towards further neutron capture. Although its proton number of 58 is not magic, it is granted additional stability, as its eight additional protons past the magic number 50 enter and complete the 1g7/2 proton orbital. The abundances of the cerium isotopes may differ very slightly in natural sources, because 138Ce and 140Ce are the daughters of the long-lived primordial radionuclides 138La and 144Nd, respectively.
Compounds
Cerium exists in two main oxidation states, Ce(III) and Ce(IV). This pair of adjacent oxidation states dominates several aspects of the chemistry of this element. Cerium(IV) aqueous solutions may be prepared by reacting cerium(III) solutions with the strong oxidizing agents peroxodisulfate or bismuthate. The value of E°(Ce4+/Ce3+) varies widely depending on conditions, due to the relative ease of complexation and hydrolysis with various anions, although +1.72 V is representative. Cerium is the only lanthanide which has important aqueous and coordination chemistry in the +4 oxidation state.
Halides
Cerium forms all four trihalides CeX3 (X = F, Cl, Br, I), usually by reaction of the oxides with the hydrogen halides. The anhydrous trihalides are pale-colored, paramagnetic, hygroscopic solids. Upon hydration, they convert to complexes containing aquo ligands, such as [Ce(H2O)9]3+. Unlike most lanthanides, Ce forms a tetrafluoride, CeF4, a white solid. It also forms a bronze-colored diiodide, CeI2, which has metallic properties. Aside from the binary halide phases, a number of anionic halide complexes are known: the fluoride gives Ce(IV) derivatives such as [CeF6]2−, and the chloride gives the orange [CeCl6]2−.
Oxides and chalcogenides
Cerium(IV) oxide ("ceria") has the fluorite structure, similarly to the dioxides of praseodymium and terbium. Ceria is a nonstoichiometric compound, meaning that the real formula is CeO2−x, where x is up to about 0.2. Thus, the material is not perfectly described as Ce(IV). Ceria reduces to cerium(III) oxide with hydrogen gas. Many nonstoichiometric chalcogenides are also known, along with the trivalent sesquichalcogenides Ce2Z3 (Z = S, Se, Te). The monochalcogenides CeZ conduct electricity and would better be formulated as Ce3+Z2−e−. While the dichalcogenides CeZ2 are known, they are polychalcogenides with cerium(III): cerium(IV) derivatives of S, Se, and Te are unknown.
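A one-line charge balance (my own arithmetic, assuming oxygen remains at −2) shows why nonstoichiometric ceria cannot be described as pure Ce(IV):

$$\text{CeO}_{2-x}: \quad \bar{q}_{\,\text{Ce}} = 2(2 - x) = 4 - 2x \;\xrightarrow{\;x\,=\,0.2\;}\; +3.6,$$

an average oxidation state between Ce(III) and Ce(IV).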
Cerium(IV) complexes
The compound ceric ammonium nitrate (CAN) is the most common cerium compound encountered in the laboratory. The six nitrate ligands bind as bidentate ligands. The complex is 12-coordinate, a high coordination number which emphasizes the large size of the Ce4+ ion. CAN is a popular oxidant in organic synthesis, both as a stoichiometric reagent and as a catalyst. It is inexpensive, stable in air, easily handled, and of low toxicity. It operates by one-electron redox. Cerium nitrates also form 4:3 and 1:1 complexes with 18-crown-6 (the ratio referring to that between the nitrate and the crown ether). Classically, CAN is a primary standard for quantitative analysis. Cerium(IV) salts, especially cerium(IV) sulfate, are often used as standard reagents for volumetric analysis in cerimetric titrations.
Due to ligand-to-metal charge transfer, aqueous cerium(IV) ions are orange-yellow. Aqueous cerium(IV) is metastable in water and is a strong oxidizing agent that oxidizes hydrochloric acid to give chlorine gas. In the Belousov–Zhabotinsky reaction, cerium oscillates between the +4 and +3 oxidation states to catalyze the reaction.
Organocerium compounds
Organocerium chemistry is similar to that of the other lanthanides, often involving complexes of cyclopentadienyl and cyclooctatetraenyl ligands. Cerocene adopts the uranocene molecular structure. The 4f electron in cerocene is poised ambiguously between being localized and delocalized and this compound is considered intermediate-valent.
Alkyl, alkynyl, and alkenyl organocerium derivatives are prepared from the transmetallation of the respective organolithium or Grignard reagents, and are more nucleophilic but less basic than their precursors.
History
Cerium was discovered in Bastnäs in Sweden by Jöns Jakob Berzelius and Wilhelm Hisinger, and independently in Germany by Martin Heinrich Klaproth, both in 1803. Cerium was named by Berzelius after the asteroid Ceres, formally 1 Ceres, discovered two years earlier. Ceres was initially considered to be a planet at the time. The asteroid is itself named after the Roman goddess Ceres, goddess of agriculture, grain crops, fertility and motherly relationships.
Cerium was originally isolated in the form of its oxide, which was named ceria, a term that is still used. The metal itself was too electropositive to be isolated by then-current smelting technology, a characteristic of rare-earth metals in general. After the development of electrochemistry by Humphry Davy five years later, the earths soon yielded the metals they contained. Ceria, as isolated in 1803, contained all of the lanthanides present in the cerite ore from Bastnäs, Sweden, and thus only contained about 45% of what is now known to be pure ceria. It was not until Carl Gustaf Mosander succeeded in removing lanthana and "didymia" in the late 1830s that ceria was obtained pure. Wilhelm Hisinger was a wealthy mine-owner and amateur scientist, and sponsor of Berzelius. He owned and controlled the mine at Bastnäs, and had been trying for years to find out the composition of the abundant heavy gangue rock (the "Tungsten of Bastnäs", which despite its name contained no tungsten), now known as cerite, that he had in his mine. Mosander and his family lived for many years in the same house as Berzelius, and Mosander was undoubtedly persuaded by Berzelius to investigate ceria further.
The element played a role in the Manhattan Project, where cerium compounds were investigated in the Berkeley site as materials for crucibles for uranium and plutonium casting. For this reason, new methods for the preparation and casting of cerium were developed within the scope of the Ames daughter project (now the Ames Laboratory). Production of extremely pure cerium in Ames commenced in mid-1944 and continued until August 1945.
Occurrence and production
Cerium is the most abundant of all the lanthanides and the 25th most abundant element, making up 68 ppm of the Earth's crust. This value is about the same as that of copper, and cerium is even more abundant than common metals such as lead (13 ppm) and tin (2.1 ppm). Cerium content in the soil varies between 2 and 150 ppm, with an average of 50 ppm; seawater contains 1.5 parts per trillion of cerium. Cerium occurs in various minerals, but the most important commercial sources are the minerals of the monazite and bastnäsite groups, where it makes up about half of the lanthanide content. Monazite-(Ce) is the most common representative of the monazites, with "-Ce" being the Levinson suffix indicating the dominance of that particular rare-earth element in the mineral. Also the cerium-dominant bastnäsite-(Ce) is the most important of the bastnäsites. Cerium is the easiest lanthanide to extract from its minerals because it is the only one that can reach a stable +4 oxidation state in aqueous solution. Because of the decreased solubility of cerium in the +4 oxidation state, cerium is sometimes depleted from rocks relative to the other rare-earth elements and is incorporated into zircon, since Ce4+ and Zr4+ have the same charge and similar ionic radii. In extreme cases, cerium(IV) can form its own minerals separated from the other rare-earth elements, such as cerianite-(Ce).
Bastnäsite, LnCO3F, is usually lacking in thorium and the heavy lanthanides beyond samarium and europium, and hence the extraction of cerium from it is quite direct. First, the bastnäsite is purified, using dilute hydrochloric acid to remove calcium carbonate impurities. The ore is then roasted in air to oxidize it to the lanthanide oxides: while most of the lanthanides are oxidized to the sesquioxides Ln2O3, cerium is oxidized to the dioxide CeO2. This is insoluble in water and can be leached out with 0.5 M hydrochloric acid, leaving the other lanthanides behind.
The procedure for monazite, (Ln,Th)PO4, which usually contains all the rare earths as well as thorium, is more involved. Monazite, because of its magnetic properties, can be separated by repeated electromagnetic separation. After separation, it is treated with hot concentrated sulfuric acid to produce water-soluble sulfates of rare earths. The acidic filtrates are partially neutralized with sodium hydroxide to pH 3–4. Thorium precipitates out of solution as hydroxide and is removed. After that, the solution is treated with ammonium oxalate to convert rare earths to their insoluble oxalates. The oxalates are converted to oxides by annealing. The oxides are dissolved in nitric acid, but cerium oxide is insoluble in HNO3 and hence precipitates out. Care must be taken when handling some of the residues as they contain 228Ra, the daughter of 232Th, which is a strong gamma emitter.
Applications
Cerium has two main applications, both of which use CeO2. The industrial application of ceria is for polishing, especially chemical-mechanical planarization (CMP). In its other main application, CeO2 is used to decolorize glass. It functions by converting green-tinted ferrous impurities to nearly colorless ferric oxides. Ceria has also been used as a substitute for its radioactive congener thoria, for example in the manufacture of electrodes used in gas tungsten arc welding, where ceria as an alloying element improves arc stability and ease of starting while decreasing burn-off.
Gas mantles and pyrophoric alloys
The first use of cerium was in gas mantles, invented by the Austrian chemist Carl Auer von Welsbach. He had experimented in 1885 with mixtures of magnesium, lanthanum, and yttrium oxides, but these gave green-tinted light and were unsuccessful. Six years later, he discovered that pure thorium oxide produced a much better, though blue, light, and that mixing it with cerium dioxide resulted in a bright white light. Cerium dioxide also acts as a catalyst for the combustion of thorium oxide.
This resulted in commercial success for von Welsbach and his invention, and created great demand for thorium. Its production caused a large amount of lanthanides to be extracted simultaneously as by-products. Applications were soon found for them, especially in the pyrophoric alloy known as "mischmetal", composed of about 50% cerium and 25% lanthanum, with the remainder being the other lanthanides, which is widely used for lighter flints. Usually iron is added to form the alloy ferrocerium, also invented by von Welsbach. Due to the chemical similarities of the lanthanides, chemical separation is not usually required for their applications, such as the addition of mischmetal to steel as an inclusion modifier to improve mechanical properties, or as catalysts for the cracking of petroleum. This property of cerium saved the life of the writer Primo Levi at the Auschwitz concentration camp, when he found a supply of ferrocerium alloy and bartered it for food.
Pigments and phosphors
The photostability of pigments can be enhanced by the addition of cerium, which provides them with lightfastness and prevents clear polymers from darkening in sunlight. An example of a cerium compound used on its own as an inorganic pigment is the vivid red cerium(III) sulfide (cerium sulfide red), which remains chemically inert up to very high temperatures. The pigment is a safer alternative to lightfast but toxic cadmium selenide-based pigments. The addition of cerium oxide to older cathode-ray tube television glass plates was beneficial, as it suppresses the darkening caused by the creation of F-center defects under the continuous electron bombardment during operation. Cerium is also an essential dopant for phosphors used in CRT TV screens, fluorescent lamps, and later white light-emitting diodes. The most commonly used example is cerium(III)-doped yttrium aluminium garnet (Ce:YAG), which emits green to yellow-green light (530–550 nm) and also behaves as a scintillator.
Other uses
Cerium salts, such as the sulfides Ce2S3 and Ce3S4, were considered during the Manhattan Project as advanced refractory materials for the construction of crucibles that could withstand the high temperatures and strongly reducing conditions encountered when casting plutonium metal. Despite their desirable properties, these sulfides were never widely adopted due to practical issues with their synthesis. Cerium is used as an alloying element in aluminium to create castable eutectic aluminium alloys with 6–16 wt.% Ce, to which other elements such as Mg, Ni, Fe and Mn can be added. These Al-Ce alloys have excellent high-temperature strength and are suitable for automotive applications (e.g. in cylinder heads). Other alloys of cerium include Pu-Ce and Pu-Ce-Co plutonium alloys, which have been used as nuclear fuel.
The lower sesquioxide also finds automotive application in catalytic converters for the oxidation of CO and NOx emissions in the exhaust gases of motor vehicles.
Biological role and precautions
The early lanthanides have been found to be essential to some methanotrophic bacteria living in volcanic mudpots, such as Methylacidiphilum fumariolicum, where they act as cofactors for the enzyme methanol dehydrogenase; lanthanum, cerium, praseodymium, and neodymium are about equally effective. Cerium is not known to have a biological role in any other organisms, but it is not very toxic either; it does not accumulate in the food chain to any appreciable extent. Because it often occurs together with calcium in phosphate minerals, and bones are primarily calcium phosphate, cerium can accumulate in bones in small amounts that are not considered dangerous.
Cerium nitrate is an effective topical antimicrobial treatment for third-degree burns, although large doses can lead to cerium poisoning and methemoglobinemia.
Like all rare-earth metals, cerium is of low to moderate toxicity. A strong reducing agent, it ignites spontaneously in air at 65 to 80 °C. Fumes from cerium fires are toxic. Cerium reacts with water to produce hydrogen gas, and thus cerium fires can only be effectively extinguished using class D dry powder extinguishing media. Workers exposed to cerium have experienced itching, sensitivity to heat, and skin lesions. Cerium is not toxic when eaten, but animals injected with large doses of cerium have died of cardiovascular collapse. Cerium is dangerous to aquatic organisms because it damages cell membranes; it is not very soluble in water and can persist as an environmental contaminant.
Cerium oxide, the most prevalent cerium compound in industrial applications, is not regulated in the United States by the Occupational Safety and Health Administration (OSHA) as a hazardous substance. In Russia, its occupational exposure limit is 5 mg/m3. Elemental cerium has no occupational or permissible exposure limits established by OSHA or the American Conference of Governmental Industrial Hygienists, though it is classified as a flammable solid and regulated as such under the Globally Harmonized System of Classification and Labelling of Chemicals. Toxicological reports on cerium compounds have noted their cytotoxicity and contributions to pulmonary interstitial fibrosis in workers.
References
Bibliography
Chemical elements
Chemical elements with double hexagonal close-packed structure
Lanthanides
Reducing agents
Pyrophoric materials
Materials that expand upon freezing | Cerium | [
"Physics",
"Chemistry",
"Technology"
] | 5,499 | [
"Physical phenomena",
"Phase transitions",
"Chemical elements",
"Redox",
"Reducing agents",
"Materials",
"Materials that expand upon freezing",
"Atoms",
"Matter"
] |
38,683,755 | https://en.wikipedia.org/wiki/AVP%20Research%20Foundation | The AVP Research Foundation (formerly known as AVT Institute for Advanced Research) was established in 2003 as a research department under The Ayurvedic Trust and became an independent not-for-profit research institution registered under section 25 of The Companies Act, 1956 in 2012. The foundation is known for its excellence in clinical research on Ayurvedic medicines, initiatives on practice based evidence, developing research and education oriented software for Ayurvedic fraternity and its journal indexing service in Ayurveda. The department of scientific and industrial research, Government of India has recognised the institution as a Scientific and Industrial Research Organisation.
History
The AVP Research Foundation (AVP RF) was established in 2003 as the AVT Institute for Advanced Research (AVTAR) under The Ayurvedic Trust to administer the research initiatives of the trust and The Arya Vaidya Pharmacy (Cbe) Ltd. The trust itself had pioneered research on arthritis from the 1980s, and later established AVTAR to concentrate on research in Ayurveda. The department has in recent times become an independent institution focusing on research and education in Ayurveda. The institution is steered by a research advisory board and functions under a Director & Chief Scientific Officer appointed by the Board of Directors and the Governing Council.
Activities
Research
The foundation is engaged in basic and clinical research in Ayurveda. A clinical trial conducted at The Ayurvedic Trust in collaboration with the UCLA Medical School, comparing Ayurvedic treatment with the Western drug methotrexate for rheumatoid arthritis, was pronounced by Edzard Ernst a blueprint for how clinical research can be done in complementary and alternative medicine. The Foundation also works to substantiate the practice of Ayurveda by generating practice-based evidence of efficacy from ongoing clinical practice. It is also involved in basic research to elucidate the probable biochemistry behind classical Ayurvedic medicines that have long been practised in India.
In 2020, the Arya Vaidya Pharmacy signed an MoU with the Central Council for Research in Ayurveda Systems to conduct a phase III multicentric clinical trial on rheumatoid arthritis in Bengaluru and Mumbai.
In 2017, the AVP Research Foundation signed MoUs to conduct joint training programmes in the USA, in a tie-up with South California University, and in Korea through Indo-Korean collaborators.
Education
The AVP Research Foundation is engaged in education by providing ancillary courses on Ayurveda for students from India and abroad.
Informatics
The Foundation has an informatics department which has developed software and programs that assist researchers and practitioners in Ayurveda in generating and documenting their routine clinical practice.
Journals
The AVP Research Foundation manages the largest number of scientific communications in Ayurveda, with two research journals: Ancient Science of Life, launched in 1981 and now the largest PubMed-indexed journal on Ayurveda, and ASL-Musculoskeletal Diseases, the first speciality research journal in Ayurveda, published in alliance with Medknow Publications.
Conferences
The AVP Research Foundation organises regular conferences, workshops and Continuing Medical Education programs in the field of Ayurveda. Insight Ayurveda is a biennial conference on Ayurveda hosted by the foundation.
Inter-disciplinary Research
The foundation also attempts inter-disciplinary research bridging Ayurveda with the modern sciences to elucidate the biochemistry behind Ayurveda's inherent theories.
Support for open access
The foundation supports open access to research, and all the journals it publishes are free to access online. The database developed by the foundation aims to make the data generated by research in Ayurveda available to a larger circle of stakeholders. The institution also encourages more journals to become open access, and has made the classical texts of Ayurveda available in digital format, likewise free to access.
Public engagement and outreach programs
The foundation is engaged in public outreach programs through regular medical camps on ano-rectal diseases and eye diseases.
Awards
The institution's research was recognised with the Excellence in Integrative Medicine Research Award by the European Society of Integrative Medicine in 2012.
References
Research institutes established in 2003
Biomedical research foundations
Ayurvedic organisations
Charities based in India | AVP Research Foundation | [
"Engineering",
"Biology"
] | 916 | [
"Biotechnology organizations",
"Biomedical research foundations"
] |
38,688,920 | https://en.wikipedia.org/wiki/Surgical%20mesh | Surgical mesh is a medical implant made of loosely woven mesh, which is used in surgery as either a permanent or temporary structural support for organs and other tissues. Surgical mesh can be made from both inorganic and biological materials and is used in a variety of surgeries, although hernia repair is the most common application. It can also be used for reconstructive work, such as in pelvic organ prolapse or to repair physical defects (mainly of body cavity walls) created by extensive resections or traumatic tissue loss.
Permanent meshes remain in the body, whereas temporary ones dissolve over time. One temporary mesh was shown in 2012 to fully dissolve after three years in a scientific trial on sheep. Some meshes combine permanent and temporary materials, for example both resorbable Vicryl, made from polyglycolic acid, and non-resorbable Prolene, a polypropylene.
Data on the mechanical and biological behavior of mesh in vivo may not always reflect conditions within the human body, because testing is performed in non-human organisms. Most published reports describe experiments on mice, so outcomes may differ when meshes are inserted into the human body. Also, much published research references meshes that are currently disallowed from the medical device market due to post-surgical complications. Additionally, the absence of FDA-approved regulatory protocols and universal standard operating procedures leads to a variety of different testing methods from researcher to researcher, and experimentation may find differing outcomes for some meshes.
Medical uses
The primary function of surgical mesh is to support prolapsed organs either temporarily or permanently. It is most commonly used in hernia surgery within the abdomen, which is required when an organ protrudes through the abdominal muscles. Surgical mesh may also be used for pelvic or vaginal wall reconstructions in women, and is implanted to act as a growth guide for damaged tissue. Ideally, these implants should be strong enough to survive the mechanical loads and actions of whichever body area they become a part of.
Hernia surgery
Hernia surgery is one of the most common current applications of surgical mesh. Hernias occur when organs or fatty tissue bulge through openings or debilitated areas of muscle, usually in the abdominal wall. Surgical mesh is implanted to strengthen the tissue repair and minimize the rate of recurrence. The surgery can be performed laparoscopically (internally) or open, with a variety of materials available for the prosthesis. Polypropylene (PP) is the most frequently used type of mesh, although it may be uncomfortable for the patient after implantation. Another type that is less utilized in hernia surgery is polyethylene terephthalate (PET), which faces complications because it easily degrades after some years of implantation, undoing the effects of the surgery. Polytetrafluoroethylene (PTFE) is used as well, but is manufactured in the form of a foil, has difficulty integrating into surrounding tissue, and therefore loses stability.
Pelvic surgery
Similar to hernia surgery, synthetic meshes may be used for organ prolapses in the pelvic region as well. Pelvic organ prolapse occurs in 50% of women above the age of 50 with a history of one or more vaginal childbirths in their lifetime. Mesh surgery can be performed in various areas of the pelvic region, such as the cystocele, rectocele, and vaginal vault or uterus. The most commonly used material, as in hernia surgery, is PP, which is considered to have acceptable biocompatibility within the region. It induces a mild inflammatory response but has a tendency to adhere to the viscera.
The vaginal wall has three layers: the tunica mucosa, muscularis, and adventitia. When prolapse occurs, smooth muscle fibers of the muscularis are compromised. Prolapse has also been seen to increase stiffness in the pelvis, particularly in post-menopausal women. Surgical mesh used in pelvic reconstruction must counter this stiffness, yet if the mesh is too compliant it will not sufficiently support the organs, and if it is too stiff, tissue will erode and inflammatory responses will cause post-surgical complications. Post-implantation, polypropylene mesh sometimes exhibits microcracks, flaking of fibers, and fibrosis.
Additionally, the mesh must be strong enough to withstand basic actions and tissue behavior under physiological conditions, particularly while tissue regenerates through the mesh itself. The area is subjected to a variety of loads from the abdominal contents, pressure from the abdominal muscles and diaphragm, and the genital organs, as well as from respiratory action. For the average woman of reproductive age, the pelvis must withstand loads of 20 N in the supine position, 25–35 N in the standing position, and 90–130 N whilst coughing. Any mesh implanted in the pelvic area must be strong enough to withstand these loads.
Regulation
In 2018, the United Kingdom temporarily halted vaginal mesh implants for treatment of urinary incontinence pending further investigation into the risks and available safeguards.
In the United States, the FDA reclassified transvaginal surgical mesh as "class III" (high risk) in 2016, and in late 2018 mandated premarket approval applications for mesh intended for transvaginal pelvic organ prolapse repair, with further investigation planned in 2019. Then on April 16, 2019, the FDA ordered all makers of transvaginal surgical meshes to immediately stop their sale and distribution.
Biocompatibility
Mesh implantation will naturally generate an inflammatory response to the inserted mesh, but biocompatibility ranges from how easily it is integrated to how severe the foreign body reaction is. A minimal response includes the formation of fibrosis around the prosthesis (much like in scar tissue formation); this response is generated with the best form of biocompatibility. A physical response triggers an acute inflammatory reaction, which involves the formation of giant cells and subsequently granulomas, meaning that the tissue is “tolerating” the mesh fairly well. Lastly, a chemical response allows for a severe inflammatory reaction during attempted tissue-mesh integration, including fibroblastic cell proliferation. Ultimately, the goal for surgical mesh creation is to formulate one that has a minimal in vivo reaction to maximize comfort for the patient, avoid infection, and ensure clean integration into the body for tissue repair.
A number of factors play into mesh biocompatibility. Mesh porosity, the ratio of pore area to total area, plays a role in the development of either bacterial infection or smooth tissue regeneration, depending on pore size. Pore sizes below 10 micrometers are susceptible to infection because bacteria may enter and proliferate, while macrophages and neutrophils are too large to fit through and cannot aid in eliminating them. With pore sizes exceeding 75 micrometers, fibroblasts, blood vessels, and collagen fibers are permitted through as part of tissue regeneration. Although there is no general consensus on the best pore size, it can be deduced that larger pores are better for the development of tissue and integration in vivo.
Knowing this, the current problem with a variety of the meshes used in all types of surgeries is that they are not sufficiently biocompatible. PP proves an effective mesh for adjusting prolapsed organs, but may cause severe discomfort for the patient due to its high modulus of elasticity. This stiffens the prosthesis and results in a more pronounced inflammatory response, which complicates integration into the body with tissue ingrowth. As previously mentioned, PET too easily degrades in vivo and tissue has a difficult time integrating with PTFE. For these reasons, researchers are beginning to look for different types of surgical mesh that may be suitable for the biological environment and provide better comfort while supporting prolapsed organs.
PVDF (nanofibrous mesh)
One particular type of mesh that is under study is polyvinylidene fluoride (PVDF), or nanofibrous mesh, which has been found to be more resistant to hydrolysis and disintegration than PET and, unlike PP, does not increase in stiffness as it ages. It is being tested for both hernia and pelvic/vaginal wall surgery and is produced by placing fibers layer by layer, whereas PP is constructed by a weaving-like process. This gives the nanofibrous mesh a heavyweight yet low-porosity structure, with greater stiffness and a higher stress threshold than PP. This is supported by the presence of HSP70 (an indicator of cell stress and a protector of cells against damage, beneficial for the prosthesis and tissue formation), which has been observed at higher levels in PVDF implants. In vitro observations of nanofibrous mesh have yielded evidence of cell migration and proliferation on the surface of the mesh. Successful cell growth has been noted, with long fusiform cell shapes and clear boundaries.
A significant advantage of using nanofibrous mesh is that it can carry far more stem cells than traditional PP mesh, which could improve cell-based therapy for pelvic organ prolapse and regeneration methods. Another important advantage of PVDF is the formation of capillaries after 12 weeks, which is essential for wound healing. The faster neovascularization occurs, the faster tissue can be repaired and regenerated, which decreases the likelihood of exposure or extrusion of the mesh.
Some enhancements to PVDF must also be made before it can be used for human surgery. Although the modulus of elasticity is higher than that of PP, resulting stretch under identical stress is much less, which could cause complications such as tissue degeneration and loss of mechanical soundness. Nanofibrous mesh currently also promotes a greater foreign body reaction and inflammatory response, which compromises the biocompatibility of the mesh. For these reasons, PVDF is still under consideration and experimentation for bodily implants.
Reduction of inflammatory response using MSCs
Inflammatory responses to mesh insertion promote tissue formation around the mesh fibers and the proliferation of fibroblasts, polymorphonucleocytes, and macrophages, which all aid in the integration of the mesh. Failure to resolve the inflammatory response may lead to a foreign body reaction and ultimately encapsulation of the implant, which negates any functional purpose the implant was supposed to serve. Mesenchymal stem cells (MSCs) are known to reduce inflammatory responses and, when combined with surgical mesh, could prevent these responses from becoming uncontrollable and too difficult to tame. MSCs combined with surgical meshes can be used as "off the shelf" products and enhance macrophage polarization in both in vivo and in vitro environments. This can encourage an anti-inflammatory response and can regulate the inflammatory reaction associated with mesh implantation.
Antimicrobial hernia meshes
Alongside mesh erosion, mesh migration, and enterocutaneous fistula, mesh-related surgical site infections (SSIs) remain a significant healthcare problem. Approximately 60,000 inguinal and ventral hernia repairs become infected annually in the United States, with similar numbers in Europe. The CDC distinguishes between superficial SSIs, which involve only the skin and subcutaneous tissue, and deep SSIs, in which the infection can settle on the implant, requiring more elaborate treatment protocols.
The pathogenesis of mesh-related contamination is mostly due to exposure to the patient's skin or mucosa during the primary incision and to clinical practices. The insertion of medical devices has been found to increase the susceptibility to the uptake of adherent bacteria by a factor of 10,000 up to 100,000. In the case of hernia operations, one-third to two-thirds of implanted meshes are contaminated at the point of insertion, although only a small number of them cause an infection. Many factors affect the chances of an infection occurring on a mesh material, among which the type of surgical procedure and its location are of the highest importance. For instance, the chance of an infection is 2%–4% for an open inguinal repair, but as high as 10% for incisional hernia repair. Laparoscopy carries the lowest infection rate, generally below 1% and as low as 0.1%. Another factor is the surgeon's learning curve, as a less experienced surgeon may require longer to perform the operation, increasing the time that the incision is exposed. Further, the type of mesh matters: the vast choice of prostheses available today can be distinguished by material and composition, filament architecture, absorbability, and weight. The patient's demographics also influence the chances of an infection occurring; risk factors include smoking, diabetes, immunocompromise, and obesity.
Early- and late-onset surgical site infections typically present with inflammation, fever, focal tenderness, erythema, swelling, discharging pus, redness, heat, or pain. They are assessed by CT or MRI, followed by aspiration of fluid and culturing. Staphylococcus species, more specifically S. aureus and S. epidermidis, account for approximately 90% of the infections, with a prevalent presence of methicillin-resistant Staphylococcus aureus (MRSA). Gram-negative species such as Pseudomonas sp. and Enterobacteriaceae are also commonly found, as are multi-species biofilms. If an infection settles on a mesh, administration of antibiotics is commonly ineffective, due to the blood-mesh barrier, and removal of the mesh is required in over 40% of deep surgical site infections.
From a materials science perspective, a mesh can play a passive role in antibacterial protection through its architecture, or an active role by incorporating therapeutics into its composition. For instance, monofilament meshes have been found to be half as likely to adhere bacteria as multifilament meshes. As a drug delivery system, a hernia mesh can be used to deliver antibiotics, antiseptics, antimicrobials, antimicrobial peptides or nanoparticles. Different techniques can be used to integrate such substances, including dipping/soaking, physical coating, chemical surface functionalization and electrospinning.
FDA Approved Antimicrobial Hernia Meshes
MycroMesh and DualMesh Plus by GORE
XenMatrix Surgical Graft by BD
Ventrio Hernia Patch by BD
See also
Adhesion barrier
Biomesh
Inguinal hernia surgery meshes
References
Medical devices
Implants (medicine)
Surgical suture material | Surgical mesh | [
"Biology"
] | 3,071 | [
"Medical devices",
"Medical technology"
] |
38,689,987 | https://en.wikipedia.org/wiki/Plastid%20terminal%20oxidase | Plastid terminal oxidase or plastoquinol terminal oxidase (PTOX) is an enzyme that resides on the thylakoid membranes of plant and algae chloroplasts and on the membranes of cyanobacteria. The enzyme was hypothesized to exist as a photosynthetic oxidase in 1982 and was verified by sequence similarity to the mitochondrial alternative oxidase (AOX). The two oxidases evolved from a common ancestral protein in prokaryotes, and they are so functionally and structurally similar that a thylakoid-localized AOX can restore the function of a PTOX knockout.
Function
Plastid terminal oxidase catalyzes the oxidation of the plastoquinone pool, which exerts a variety of effects on the development and functioning of plant chloroplasts.
Carotenoid biosynthesis and plastid development
The enzyme is important for carotenoid biosynthesis during chloroplast biogenesis. In developing plastids, its activity prevents the over-reduction of the plastoquinone pool. Knockout plants for PTOX exhibit phenotypes of variegated leaves with white patches. Without the enzyme, the carotenoid synthesis pathway slows down due to the lack of oxidized plastoquinone with which to oxidize phytoene, a carotenoid intermediate. The colorless compound phytoene accumulates in the leaves, resulting in white patches of cells. PTOX is also thought to determine the redox poise of the developing photosynthetic apparatus and without it, plants fail to assemble organized internal membrane structures in chloroplasts when exposed to high light during early development.
Photoprotection
Plants deficient in the IMMUTANS gene that encodes the oxidase are especially susceptible to photooxidative stress during early plastid development. The knockout plants exhibit a phenotype of variegated leaves with white patches that indicate a lack of pigmentation or photodamage. This effect is enhanced with increased light and temperature during plant development. The lack of plastid terminal oxidase indirectly causes photodamage during plastid development because protective carotenoids are not synthesized without the oxidase.
The enzyme is also thought to act as a safety valve for stress conditions in the photosynthetic apparatus. By providing an electron sink when the plastoquinone pool is over-reduced, the oxidase is thought to protect photosystem II from oxidative damage. Knockouts for Rubisco and photosystem II complexes, which would experience more photodamage than normal, exhibit an upregulation of plastid terminal oxidase. This effect is not universal because it requires plants to have additional PTOX regulation mechanisms. While many studies agree with the stress-protective role of the enzyme, one study showed that overexpression of PTOX increases the production of reactive oxygen species and causes more photodamage than normal. This finding suggests that an efficient antioxidant system is required for the oxidase to function as a safety valve for stress conditions and that it is more important during chloroplast biogenesis than in the regular functioning of the chloroplast.
Chlororespiration and electron flux
The best-established function of plastid terminal oxidase in developed chloroplasts is its role in chlororespiration. In this process, NADPH dehydrogenase (NDH) reduces the quinone pool and the terminal oxidase oxidizes it, serving the same function as cytochrome c oxidase in mitochondrial electron transport. In Chlamydomonas, there are two copies of the gene for the oxidase. PTOX2 significantly contributes to the flux of electrons through chlororespiration in the dark. There is also evidence from experiments with tobacco that it functions in plant chlororespiration as well.
In fully developed chloroplasts, prolonged exposure to light increases the activity of the oxidase. Because the enzyme acts at the plastoquinone pool in between photosystem II and photosystem I, it may play a role in controlling electron flow through photosynthesis by acting as an alternative electron sink. Similar to its role in carotenoid synthesis, its oxidase activity may prevent the over-reduction of photosystem I electron acceptors and damage by photoinhibition. A recent analysis of electron flux through the photosynthetic pathway shows that, even when activated, the electron flux that plastid terminal oxidase diverts is two orders of magnitude less than the total flux through photosynthetic electron transport. This suggests that the protein may play less of a role than previously thought in relieving oxidative stress in photosynthesis.
Structure
Plastid terminal oxidase is an integral membrane protein, or more specifically, an integral monotopic protein and is bound to the thylakoid membrane facing the stroma. Based on sequence homology, the enzyme is predicted to contain four alpha helix domains that encapsulate a di-iron center. The two iron atoms are ligated by six essential conserved histidine and glutamate residues – Glu136, Glu175, His171, Glu227, Glu296, and His299. The predicted structure is similar to that of the alternative oxidase, with an additional Exon 8 domain that is required for the plastid oxidase's activity and stability. The enzyme is anchored to the membrane by a short fifth alpha helix that contains a Tyr212 residue hypothesized to be involved in substrate binding.
Mechanism
The oxidase catalyzes the transfer of four electrons from reduced plastoquinone to molecular oxygen to form water. The net reaction is written below:
2 QH2 + O2 → 2 Q + 2 H2O
Analysis of substrate specificity revealed that the enzyme almost exclusively catalyzes the oxidation of plastoquinone over other quinones such as ubiquinone and duroquinone. Additionally, iron is essential for the catalytic function of the enzyme and cannot be substituted by another metal cation like Cu2+, Zn2+, or Mn2+ at the catalytic center.
It is unlikely that four electrons could be transferred at once in a single iron cluster, so all of the proposed mechanisms involve two separate two-electron transfers from reduced plastoquinone to the di-iron center. In the first step common to all proposed mechanisms, one plastoquinone is oxidized and both irons are reduced from iron(III) to iron(II). Four different mechanisms are proposed for the next step, oxygen capture. One mechanism proposes a peroxide intermediate, after which one oxygen atom is used to create water and another is left bound in a diferryl configuration. Upon one more plastoquinone oxidation, a second water molecule is formed and the irons return to a +3 oxidation state. The other mechanisms involve the formation of Fe(III)-OH or Fe(IV)-OH and a tyrosine radical. These radical-based mechanisms could explain why over-expression of the PTOX gene causes increased generation of reactive oxygen species.
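A hedged way to summarize this is as two two-electron plastoquinol oxidations feeding a single four-electron oxygen reduction at the di-iron center:

2 × ( QH2 → Q + 2 H+ + 2 e− )
O2 + 4 H+ + 4 e− → 2 H2O

Summing these half-reactions recovers the net reaction given above.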
Evolution
The enzyme is present in organisms capable of oxygenic photosynthesis, which include plants, algae, and cyanobacteria. Plastid terminal oxidase and the alternative oxidase are thought to have originated from a common ancestral di-iron carboxylate protein. Oxygen reductase activity was likely an ancient mechanism to scavenge oxygen in the early transition from an anaerobic to an aerobic world. The plastid oxidase first evolved in ancient cyanobacteria and the alternative oxidase in Pseudomonadota, before eukaryotic evolution and endosymbiosis events. Through endosymbiosis, the plastid oxidase was vertically inherited by the eukaryotes that evolved into plants and algae. Sequenced genomes of various plant and algae species show that the amino acid sequence is more than 25% conserved, a significant amount of conservation for an oxidase. This sequence conservation further supports the theory that both the alternative and plastid oxidases evolved before endosymbiosis and did not significantly change through eukaryote evolution.
There also exist PTOX cyanophages that contain copies of the gene for the plastid oxidase. They are known to act as viral vectors for movement of the gene between cyanobacterial species. Some evidence suggests that the phages may use the oxidase to influence photosynthetic electron flow to produce more ATP and less NADPH because viral synthesis utilizes more ATP.
See also
Alternative oxidase
Metalloprotein
References
External links
Uniprot entry for tomato PTOX
InterPro entry on alternative oxidases
Integral membrane proteins
Photosynthesis
Metabolism
EC 1.10.3 | Plastid terminal oxidase | [
"Chemistry",
"Biology"
] | 1,886 | [
"Biochemistry",
"Metabolism",
"Photosynthesis",
"Cellular processes"
] |
38,693,735 | https://en.wikipedia.org/wiki/Indium%28III%29%20hydroxide | Indium(III) hydroxide is the chemical compound with the formula . Its prime use is as a precursor to indium(III) oxide, . It is sometimes found as the rare mineral dzhalindite.
Structure
Indium(III) hydroxide has a cubic structure, space group Im3, a distorted ReO3-type structure.
Preparation and reactions
Neutralizing a solution containing an In3+ salt, such as indium nitrate (In(NO3)3) or indium trichloride (InCl3), gives a white precipitate that on aging forms indium(III) hydroxide. Thermal decomposition studies of the freshly prepared precipitate show that the first step is its conversion to cubic indium(III) hydroxide. The precipitation of indium hydroxide was a step in the separation of indium from zincblende ore by Reich and Richter, the discoverers of indium.
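The precipitation underlying this preparation can be summarized by the ionic equation (counter-ions omitted):

In3+ + 3 OH− → In(OH)3↓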
Indium(III) hydroxide is amphoteric, like gallium(III) hydroxide (Ga(OH)3) and aluminium hydroxide (Al(OH)3), but it is much less acidic than gallium hydroxide, having a lower solubility in alkaline solutions than in acid solutions. It is for all intents and purposes a basic hydroxide.
Dissolving indium(III) hydroxide in strong alkali gives solutions that probably contain a four-coordinate hydroxido species such as In(OH)4−.
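Its amphoteric behaviour can thus be sketched as follows, where the indate shown on the alkaline side is a plausible formulation rather than a definitively established species:

In(OH)3 + 3 H+ → In3+ + 3 H2O (dissolution in acid)
In(OH)3 + OH− → In(OH)4− (dissolution in strong alkali)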
Reaction with acetic acid or other carboxylic acids is likely to give basic acetate or carboxylate salts.
At 10 MPa pressure and 250-400 °C, indium(III) hydroxide converts to indium oxide hydroxide (InO(OH)), which has a distorted rutile structure.
Rapid decompression of samples of indium(III) hydroxide compressed at 34 GPa causes decomposition, yielding some indium metal.
Laser ablation of indium(III) hydroxide gives indium(I) hydroxide (InOH), a bent molecule with an In-O-H angle of around 132° and an In-O bond length of 201.7 pm.
References
Indium compounds
Hydroxides
Inorganic compounds | Indium(III) hydroxide | [
"Chemistry"
] | 431 | [
"Inorganic compounds",
"Bases (chemistry)",
"Hydroxides"
] |
38,693,879 | https://en.wikipedia.org/wiki/Flemingia%20macrophylla | Flemingia macrophylla is a tropical woody leguminous shrub in the family Fabaceae. It is a multipurpose plant widely used in agriculture, crop improvement, fodder, dyes and for various therapeutic purposes. Perhaps, it is the most versatile species of Flemingia in terms of adaptation, medicinal and agricultural applications.
Description
Flemingia macrophylla is a woody, perennial, deep-rooting, leafy shrub. It is about 0.6–2.4 m high (rarely 3 m). The main stem is prostrate or erect, with numerous stems arising from a single base. The young branches are greenish, ribbed, triangular in section and silky, while the old stems are brown and almost round in section. The leaves are trifoliate; the leaflets are papery, with a glabrous upper surface. Inflorescences are densely spicate-racemose or paniculate, and bracts are foliaceous or dry, persistent or deciduous. Pods are small and turn brown when ripening; they are dehiscent, generally with two shiny black seeds per pod. Seeds are globular, 2–3 mm in diameter, and shiny black. The leaves are disproportionately large, hence the origin of the specific name macrophylla, meaning 'large-leaved' (Greek makros = large; phyllon = leaf).
Distribution and habitat
It is a native plant of the subhumid to humid (sub)tropics, where average annual rainfall is typically 1100–3500 mm with up to 6 dry months, at altitudes up to 2000 m above mean sea level. Its natural range thus lies in Asia, including Bhutan, southern China, Cambodia, India, Indonesia, Laos, Myanmar, Malaysia, Nepal, northern Pakistan, Papua New Guinea, the Philippines, Sri Lanka, Taiwan, Thailand and Vietnam. It has been cultivated and naturalised in sub-Saharan Africa (such as Côte d'Ivoire, Ghana, Nigeria, Cameroon), Central and South America (e.g. Costa Rica, Panama, Colombia), and tropical Australia.
Its natural habitat is often in shaded locations, scrub, woodlands, grasslands, gallery forest edges and the like, on soils ranging from very low to intermediate (and even high) fertility, including acidic soils. The shrubs are mostly seen under trees along watercourses and in grasslands, on clay and lateritic soils. The plant is tolerant of light shade and is moderately able to survive fires. It can tolerate fairly long dry spells and is capable of surviving on very poorly drained soils with waterlogging. It thrives on a wide variety of soils within a pH range of 4–8 and tolerates high soluble aluminium (80% saturation). It requires a minimum rainfall of about 1,100 mm, and up to 3,500 mm/year, for normal propagation, and is very drought tolerant. It can flower and fruit throughout the year.
Chemical constituents
A number of bioactive compounds have been reported from F. macrophylla. Like other members of the Fabaceae, it is rich in flavonoids. Genistein, 5,7,3′,4′-tetrahydroxyisoflavone, 5,7,4′-trihydroxyisoflavone-7-O-β-D-glucopyranoside, 5,7,4′-trihydroxy-8,3′-diprenylflavanone, 5,7,4′-trihydroxy-6-prenylisoflavone, flemichin D, lespedezaflavanone A and ouratea-catechin have been isolated from the root, in which genistein and its isoflavone analogs are the main constituents. A novel flavanone, named fleminone, was isolated from a petrol extract of the stems. A new isoflavone, called flemiphyllin, was also isolated. Three new flavonoids, fleminginin (1), flemingichromone (2), and flemingichalcone (3), together with twenty known compounds, were isolated from the aerial parts.
Uses
Agronomy
Flemingia macrophylla is used in a variety of agricultural practices and by-products. Due to the slow decomposition rate of its leaves, along with its dense growth, moderate drought tolerance, ability to withstand occasional flooding, and coppicing ability, it is commonly used for mulching, weed control and sod protection. It is most commonly used in contour hedgerows for erosion control, often in association with Desmodium cinereum. Prunings are used for mulch and green manure in alley cropping systems. Probably the most interesting feature of the species is the relative resistance of its leaves to decomposition. It has been experimentally demonstrated that F. macrophylla is superior to the common Leucaena leucocephala as a mulch for plantain production.
It is also often used to shade young coffee and cocoa plants, for weed suppression and soil enrichment in orchards, and to provide fuel wood and stakes for climbing crop species. However, it is considered a poor forage, since its leaves have high fibre and condensed tannin concentrations and are not readily eaten by stock. Even so, it is used as a dietary supplement, mixed with grasses and other legumes, particularly during the dry season when regular forages are scarce.
In India it is used as a host plant for the lac insect, and is sometimes intercropped with food crops during its establishment period. It is also one of the major sources of the resinous powder called in Arabic ورس (wars), with variants waras, wurs and wurus, obtained from the fruits of the plant. This is a coarse purple or orange-brown powder, consisting of the glandular hairs rubbed from the dry pods, principally used for dyeing silk a brilliant orange colour; the active compound is flemingin. In Arabia, the powder is used as a cosmetic.
Folk medicine
Extracts from Flemingia species have been used as a traditional medicine for treating rheumatism.
References
External links
Information at Tropical Forages
Information at Flora of China
Taxonomy at the Plant List
Taxonomy at Flora of China
Information at Encyclopedia of Life
Wikispecies
Information at hear.org
Taxonomy at Botanica sistematica
Phaseoleae
Medicinal plants of Asia
Medicinal plants of Oceania
Pharmacognosy | Flemingia macrophylla | [
"Chemistry"
] | 1,329 | [
"Pharmacology",
"Pharmacognosy"
] |
33,208,041 | https://en.wikipedia.org/wiki/P-adic%20quantum%20mechanics | p-adic quantum mechanics is a collection of related research efforts in quantum physics that replace real numbers with p-adic numbers. Historically, this research was inspired by the discovery that the Veneziano amplitude of the open bosonic string, which is calculated using an integral over the real numbers, can be generalized to the p-adic numbers. This observation initiated the study of p-adic string theory. Another approach considers particles in a p-adic potential well, with the goal of finding solutions with smoothly varying complex-valued wave functions. Alternatively, one can consider particles in p-adic potential wells and seek p-adic valued wave functions, in which case the problem of the probabilistic interpretation of the p-adic valued wave function arises. As there does not exist a suitable p-adic Schrödinger equation, path integrals are employed instead. Some one-dimensional systems have been studied by means of the path integral formulation, including the free particle, the particle in a constant field, and the harmonic oscillator.
References
External links
P-adic numbers
Quantum mechanics
String theory | P-adic quantum mechanics | [
"Physics",
"Astronomy",
"Mathematics"
] | 233 | [
"Astronomical hypotheses",
"P-adic numbers",
"Theoretical physics",
"Quantum mechanics",
"String theory",
"Number theory"
] |
33,212,532 | https://en.wikipedia.org/wiki/SensorMedics%20high-frequency%20oscillatory%20ventilator | The SensorMedics High-Frequency Oscillatory Ventilator is a patented high-frequency (>150 Rf) mechanical ventilator designed and manufactured by SensorMedics Corp. of Yorba Linda, California. After a series of acquisitions, Vyaire Medical, Inc. marketed the product as 3100A/B HFOV Ventilators. Model 3100 (later 3100A) received premarket approval from the United States Food and Drug Administration (FDA) in 1991 for treatment of all forms of respiratory failure in neonatal patients. In 1995, it received pre-market approved for Pediatric Application with no upper weight limit for treating selected patients failing on conventional ventilation.
3100A
The 3100A model is used for infants and children weighing less than 35 kilograms.
3100B
The 3100B model is used for patients weighing more than 35 kilograms.
Controls and settings
Bias flow
Adjusting bias flow affects the mean airway pressure (Paw). Lowering bias flow may decrease the work of breathing and facilitate weaning.
Typical ranges
Premature 8–15 LPM
Near-term 10–20 LPM
Small child 15–25 LPM
Large Child 20–30 LPM
Adjust
This control sets the mean airway pressure, directly affecting lung volume and oxygenation.
The initial setting is slightly higher than the mean airway pressure for conventional ventilation.
Power
Piston displacement is controlled by the power setting. Power changes ventilation and thereby changes blood PaCO2 levels.
Typical values
Start with a power of 2.0 and adjust for chest wiggle to the umbilicus.
Inspiratory time %
Ti% is the percentage of time allotted for inspiration. Once this value is set, it rarely needs to be changed.
Typical values
33% is recommended by the manufacturer for almost all applications.
Up to 50% is recommended in situations where lung recruitment is necessary.
Any inspiratory time above 33% can cause air trapping and lead to barotrauma. Setting the mean airway pressure 1–2 cmH2O above the set MAP for a few minutes, then weaning back down to the original MAP, can recruit alveoli safely.
Frequency
Frequency (Rf) is the number of breaths in one second, expressed in hertz (Hz). One hertz equals 60 breaths per minute.
Typical values and ranges
The smaller the patient, the higher the frequency.
The larger the patient, the lower the frequency.
Changes in frequency
Decrease in frequency = increased tidal volume.
Increase in frequency = decreased tidal volume.
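These timing relationships can be illustrated with a short Python sketch; the 10 Hz example and 33% Ti below are taken from the typical values quoted above and are illustrative only, not clinical guidance:

def hfov_timing(freq_hz, ti_percent=33.0):
    # Convert oscillator frequency (Hz) and Ti% into per-cycle timing.
    breaths_per_min = freq_hz * 60               # 1 Hz = 60 breaths per minute
    cycle_time_s = 1.0 / freq_hz                 # duration of one oscillation
    insp_time_s = cycle_time_s * ti_percent / 100.0
    return breaths_per_min, cycle_time_s, insp_time_s

bpm, cycle, ti = hfov_timing(10.0)               # e.g. a patient oscillated at 10 Hz
print(bpm, cycle, ti)                            # 600.0 breaths/min, 0.1 s cycle, 0.033 s inspiration

Note how raising the frequency shortens the cycle time, which is the mechanism behind the smaller delivered tidal volumes described in the list above.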
Problems
Since neither the 3100A nor the 3100B measures actual tidal volumes, it is impossible to wean with precision; as a result, some clinicians find these machines problematic for oscillatory ventilation.
References
Mechanical ventilation
Respiratory therapy
Medical equipment
Pulmonology | SensorMedics high-frequency oscillatory ventilator | [
"Biology"
] | 576 | [
"Medical equipment",
"Medical technology"
] |
33,212,628 | https://en.wikipedia.org/wiki/Medical%20gas%20therapy | Medical gas therapy is a treatment involving the administration of various gases. It has been used in medicine since the use of oxygen therapy. Most of these gases are drugs, including oxygen. Many other gases, collectively known as factitious airs, were explored for medicinal value in the late eighteenth century. In addition to oxygen, medical gases include nitric oxide (NO), and helium-O2 mixtures (Heliox). Careful considerations and close monitoring needed when medical gases are in use. For the purpose of this article only gas mixtures are described.
Gas mixtures therapies
Nitric oxide
Nitric oxide is a substance that the body produces in every cell and every organ. It has a number of functions, taking part in vasodilation, platelet inhibition, immune regulation, enzyme regulation, and neurotransmission.
Inhaled nitric oxide is nitric oxide administered by inhalation. Nitric oxide was initially described in 1987 as an "endothelial-derived relaxing factor" and has since been used to treat pulmonary disorders. It works by relaxing smooth muscle to widen (dilate) blood vessels, especially in the lungs, and acts selectively on pulmonary smooth muscle. It has little or no effect on atelectatic or fluid-filled regions of the lung. It improves oxygenation and decreases pulmonary hypertension. Nitric oxide is used together with a mechanical ventilator to treat respiratory failure in premature infants. In adults, nitric oxide can be used to treat pulmonary hypertension in acute respiratory distress syndrome. Successful nitric oxide treatment can spare patients the need for extracorporeal membrane oxygenation. The U.S. Food and Drug Administration has approved the use of nitric oxide in term and near-term (greater than 34 weeks' gestational age) neonates with hypoxic respiratory failure accompanied by clinical or echocardiographic evidence of pulmonary hypertension.
Contraindications
Nitric oxide must not be used in neonates who depend on right-to-left shunting of blood.
Dosing of nitric oxide
The dose needed to achieve the desired effect while avoiding toxicity and adverse effects is relatively low in both neonates and adults, usually 5–20 ppm (parts per million). Regular arterial blood gas tests are needed to assess the response to therapy and watch for signs of toxicity. Improvement in the partial pressure of oxygen (PO2) and in oxygen saturation indicates a positive response to nitric oxide therapy. If there is evidence that nitric oxide is working, the same dose is used until the hypoxemia and pulmonary hypertension have resolved, after which the nitric oxide is titrated down or slowly weaned. Abrupt discontinuation of nitric oxide may compromise oxygenation and cause pulmonary hypertension to rebound.
Side effects of the nitric oxide therapy
The methemoglobin level in the blood increases with the use of nitric oxide. Methemoglobin is an abnormal form of hemoglobin that cannot carry oxygen and turns blood brown. Other medications can produce methemoglobin too, so monitoring of methemoglobin is needed when nitric oxide is in use.
Nitric oxide in combination with oxygen (O2) produces the by-product nitrogen dioxide (NO2). The higher the oxygen concentration, the longer the duration of nitric oxide therapy, and the lower the ventilator flow rate, the more NO2 is produced. NO2 is toxic, and its level should always be monitored during nitric oxide therapy. High levels of NO2 can lead to cell damage, hemorrhage, and pulmonary edema.
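The by-product arises from the well-known gas-phase oxidation of nitric oxide:

2 NO + O2 → 2 NO2

The rate of this reaction grows with the concentration of both gases, which is why high oxygen fractions and long residence times (low ventilator flow) raise NO2 production.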
Use of nitric oxide in patients with left heart failure or congestive heart failure may cause or worsen pulmonary edema.
Nobel Prize for Nitric oxide discoveries
Three US scientists, Robert F. Furchgott, PhD, Louis J. Ignarro, PhD, and Ferid Murad, MD, PhD, won the Nobel Prize in Physiology or Medicine in 1998 for their discoveries concerning the role of nitric oxide in the cardiovascular and nervous systems. Even though the effects of nitric oxide on the body have been known for more than 25 years, its clinical use is still in development.
Helium and oxygen
In medicine, Heliox generally refers to a mixture of 21% O2 (the same as air) and 79% He, although other combinations are available.
Heliox generates less airway resistance than air and thereby requires less mechanical energy to ventilate the lungs. "Work of Breathing" is reduced. It does this by two mechanisms:
increased tendency to laminar flow
reduced resistance in turbulent flow
The dry air we inhale consists of about 78.1% nitrogen, 20.95% oxygen and 0.93% argon. Heliox therapy substitutes helium for the nitrogen. Helium itself has no pharmacological value and does not react in the body; its only purpose is to make the flow less turbulent and help oxygen get into the lungs. Less turbulent flow requires less work of breathing.
Helium and Heliox properties
Helium (He) is a colorless, odorless, tasteless, inert noble gas, and the second-lightest gas after hydrogen.
Heliox has a similar viscosity to air but a significantly lower density (0.5 g/L versus 1.25 g/L at STP). Flow of gas through the airways comprises laminar flow, transitional flow and turbulent flow. The tendency for each type of flow is described by the Reynolds number. Heliox's low density produces a lower Reynolds number and hence a higher probability of laminar flow in any given airway. Laminar flow tends to generate less resistance than turbulent flow.
In the small airways where flow is laminar, resistance is proportional to gas viscosity and is not related to density and so heliox has little effect. The Hagen–Poiseuille equation describes laminar resistance. In the large airways where flow is turbulent, resistance is proportional to density, so Heliox has a significant effect.
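A back-of-the-envelope Python sketch makes the density effect concrete; the airway diameter, gas velocity, and viscosities below are illustrative order-of-magnitude values, not measured clinical data:

def reynolds(density_kg_m3, velocity_m_s, diameter_m, viscosity_pa_s):
    # Reynolds number Re = (density * velocity * diameter) / viscosity
    return density_kg_m3 * velocity_m_s * diameter_m / viscosity_pa_s

d = 0.018                                 # adult trachea diameter, ~18 mm (assumed)
v = 1.0                                   # gas velocity in m/s (assumed)
re_air = reynolds(1.2, v, d, 1.8e-5)      # air: density ~1.2 kg/m3
re_heliox = reynolds(0.5, v, d, 2.0e-5)   # heliox: ~0.5 kg/m3, similar viscosity
print(round(re_air), round(re_heliox))    # ~1200 versus ~450

At the same velocity and airway size, heliox sits much further into the laminar regime, which is exactly the resistance advantage described above.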
Heliox has been used medically since the early 1930s. It was the mainstay of treatment in acute asthma before the advent of bronchodilators. Currently, heliox is mainly used in conditions of large airway narrowing (upper airway obstruction from tumors or foreign bodies and vocal cord dysfunction). There is also some use of heliox in conditions of the medium airways (croup, asthma and chronic obstructive pulmonary disease).
Patients with these conditions may develop a range of symptoms including dyspnea (breathlessness), hypoxemia (below-normal oxygen content in the arterial blood) and eventually a weakening of the respiratory muscles due to exhaustion, which can lead to respiratory failure and require intubation and mechanical ventilation. Heliox may reduce all these effects, making it easier for the patient to breathe. Heliox has also found utility in the weaning of patients off mechanical ventilation, and in the nebulization of inhalable drugs, particularly for the elderly. Research has also indicated advantages in using helium–oxygen mixtures in delivery of anaesthesia.
Heliox side effect
A side effect of heliox is that inhaled helium changes the voice: speech sounds high-pitched. This effect is caused by the low-density gas passing through the vocal cords, and it is reversible.
References
Respiratory therapy
Pulmonology
Medical treatments
Industrial gases | Medical gas therapy | [
"Chemistry"
] | 1,540 | [
"Chemical process engineering",
"Industrial gases"
] |
37,252,648 | https://en.wikipedia.org/wiki/Moderne%20Algebra | Moderne Algebra is a two-volume German textbook on graduate abstract algebra by , originally based on lectures given by Emil Artin in 1926 and by from 1924 to 1928. The English translation of 1949–1950 had the title Modern algebra, though a later, extensively revised edition in 1970 had the title Algebra.
The book was one of the first textbooks to use an abstract axiomatic approach to groups, rings, and fields, and was by far the most successful, becoming the standard reference for graduate algebra for several decades. It "had a tremendous impact, and is widely considered to be the major text on algebra in the twentieth century."
In 1975 van der Waerden described the sources he drew upon to write the book.
In 1997 Saunders Mac Lane recollected the book's influence:
Upon its publication it was soon clear that this was the way that algebra should be presented.
Its simple but austere style set the pattern for mathematical texts in other subjects, from Banach algebras to topological group theory.
[Van der Waerden's] two volumes on modern algebra ... dramatically changed the way algebra is now taught by providing a decisive example of a clear and perspicacious presentation. It is, in my view, the most influential text of algebra of the twentieth century.
Publication history
Moderne Algebra has a rather confusing publication history, because it went through many different editions, several of which were extensively rewritten with chapters and major topics added, deleted, or rearranged. In addition the new editions of first and second volumes were issued almost independently and at different times, and the numbering of the English editions does not correspond to the numbering of the German editions. In 1955 the title was changed from "Moderne Algebra" to "Algebra" following a suggestion of Brandt, with the result that the two volumes of the third German edition do not even have the same title.
For volume 1, the first German edition was published in 1930, the second in 1937 (with the axiom of choice removed), the third in 1951 (with the axiom of choice reinstated, and with more on valuations). The fourth edition appeared in 1955 (with the title changed to Algebra), the fifth in 1960, the sixth in 1964, the seventh in 1966, the eighth in 1971, the ninth in 1993. For volume 2, the first edition was published in 1931, the second in 1940, the third in 1955 (with the title changed to Algebra), the fourth in 1959 (extensively rewritten, with elimination theory replaced by algebraic functions of 1 variable), the fifth in 1967, and the sixth in 1993. The German editions were all published by Springer.
The first English edition was published in 1949–1950 and was a translation of the second German edition. There was a second edition in 1953, and a third edition under the new title Algebra in 1970 translated from the 7th German edition of volume 1 and the 5th German edition of volume 2. The three English editions were originally published by Ungar, though the 3rd English edition was later reprinted by Springer.
There were also Russian editions published in 1976 and 1979, and Japanese editions published in 1959 and 1967–1971.
References
History of mathematics
Mathematics textbooks
1930 non-fiction books
Abstract algebra
Springer Science+Business Media books | Moderne Algebra | [
"Mathematics"
] | 665 | [
"Abstract algebra",
"Algebra"
] |
37,254,856 | https://en.wikipedia.org/wiki/Hemilability | In coordination chemistry and catalysis hemilability (hemi - half, lability - a susceptibility to change) refers to a property of many polydentate ligands which contain at least two electronically different coordinating groups, such as hard and soft donors. These hybrid or heteroditopic ligands form complexes where one coordinating group is easily displaced from the metal centre while the other group remains firmly bound; a behaviour which has been found to increase the reactivity of catalysts when compared to the use of more traditional ligands.
Overview
In general, catalytic cycles can be divided into 3 stages:
Coordination of the starting material(s)
Catalytic transformation of the starting material(s) to the product(s)
Displacement of the product(s) to regain the catalyst (or pre-catalyst)
Traditionally the focus of catalytic research has been on the reaction taking place in the second stage, however there will be energy changes associated with the beginning and end steps due to their effect on the coordination sphere and geometry of the complex, as well as its oxidation number in cases of oxidative addition and reductive elimination. When these energy changes are large they can dictate the turn-over rate of the catalyst and hence its effectiveness.
Hemilabile ligands reduce the activation energy of these changes by readily undergoing partial and reversible displacement from the metal centre. Hence a coordinatively saturated hemilabile complex will readily reorganise to allow the coordination of reagents but will also promote the ejection of products due to re-coordination of the labile section of the ligand. The low energy barrier between the fully and hemi coordinated states results in frequent interconversion between the two, which promotes a fast catalytic turn-over rate.
Hemilabile ligands dissociate in one of three main ways: an "on/off" mechanism, in which they constantly dissociate and re-associate; a displacement mechanism, in which they dissociate easily when exposed to a competing substrate; or redox switching, in which the oxidation state of the ligand is used to tune its affinity for the metal centre.
Examples
The oxidative addition of MeI to Ir(I) complexes was shown to proceed about 100 times faster with a hemilabile phosphane ligand compared to a very similar non-labile ligand.
Hydrovinylation (olefin dimerisation), which is typically difficult to carry out enantioselectively, has been shown to proceed with high enantiomeric excess when using a chiral phosphine ligand with an appropriately placed hemilabile coordinating group. The Pauson–Khand reaction, which is conceptually similar, has also been shown to give improved results when hemilabile P,S type hybrid ligands were used.
Iridium(I) complexes incorporating hemilabile ligands which contain methoxy, dimethylamino, and pyridine as donor functions have been shown to be effective catalysts for transfer hydrogenation.
See also
Scorpionate ligand
Pincer ligand
Weak-Link Approach (supramolecular chemistry)
2-(Diphenylphosphino)anisole
References
Catalysis
Coordination chemistry | Hemilability | [
"Chemistry"
] | 673 | [
"Catalysis",
"Chemical kinetics",
"Coordination chemistry"
] |
21,591,425 | https://en.wikipedia.org/wiki/Modified%20Newtonian%20dynamics | Modified Newtonian dynamics (MOND) is a theory that proposes a modification of Newton's second law to account for observed properties of galaxies. Its primary motivation is to explain galaxy rotation curves without invoking dark matter, and is one of the most well-known theories of this class. However, it has not gained widespread acceptance, with the majority of astrophysicists supporting the Lambda-CDM model as providing the better fit to observations.
MOND was developed in 1982 and presented in 1983 by Israeli physicist Mordehai Milgrom. Milgrom noted that galaxy rotation curve data, which seemed to show that galaxies contain more matter than is observed, could also be explained if the gravitational force experienced by a star in the outer regions of a galaxy decays more slowly than predicted by Newton's law of gravity. MOND modifies Newton's laws for extremely small accelerations (characteristic of the outer regions of galaxies, or the inter-galaxy forces within galaxy clusters), fitting the galaxy rotation curve data. In addition, the theory predicts that the mass of the Galactic Center should even affect the orbits of Kuiper Belt objects.
Since Milgrom's original proposal, MOND has seen scattered successes. It is capable of explaining several observations in galaxy dynamics, some of which can be difficult for Lambda-CDM to explain. However, MOND struggles to explain a range of other observations, such as the acoustic peaks of the cosmic microwave background and the Bullet cluster; furthermore, because MOND is not a relativistic theory, it struggles to explain relativistic effects such as gravitational lensing and gravitational waves. Finally, a major weakness of MOND is that galaxy clusters show a residual mass discrepancy even when analyzed using MOND.
A minority of astrophysicists continue to work on the theory. Jacob Bekenstein developed a relativistic generalization of MOND in 2004, TeVeS, which however had its own set of problems. Another notable attempt was by Constantinos Skordis and Tom Złośnik in 2021, which proposed a relativistic model of MOND compatible with cosmic microwave background observations.
Overview
Several independent observations suggest that the visible mass in galaxies and galaxy clusters is insufficient to account for their dynamics, when analyzed using Newton's laws. This discrepancy – known as the "missing mass problem" – was first identified for clusters by Swiss astronomer Fritz Zwicky in 1933 (who studied the Coma cluster), and subsequently extended to include spiral galaxies by the 1939 work of Horace Babcock on Andromeda.
These early studies were augmented and brought to the attention of the astronomical community in the 1960s and 1970s by the work of Vera Rubin at the Carnegie Institute in Washington, who mapped in detail the rotation velocities of stars in a large sample of spirals. While Newton's Laws predict that stellar rotation velocities should decrease with distance from the galactic centre, Rubin and collaborators found instead that they remain almost constant – the rotation curves are said to be "flat". This observation necessitates at least one of the following:
(1) There exist in galaxies large quantities of unseen matter which boost the stars' velocities beyond what would be expected on the basis of the visible mass alone, or
(2) Newton's Laws do not apply to galaxies.
Option (1) leads to the dark matter hypothesis; option (2) leads to MOND.
The basic premise of MOND is that while Newton's laws have been extensively tested in high-acceleration environments (in the Solar System and on Earth), they have not been verified for objects with extremely low acceleration, such as stars in the outer parts of galaxies. This led Milgrom to postulate a new effective gravitational force law (sometimes referred to as "Milgrom's law") that relates the true acceleration of an object to the acceleration that would be predicted for it on the basis of Newtonian mechanics. This law, the keystone of MOND, is chosen to reproduce the Newtonian result at high acceleration but leads to different ("deep-MOND") behavior at low acceleration:

F_N = m μ(a/a0) a

Here F_N is the Newtonian force, m is the object's (gravitational) mass, a is its acceleration, μ(x) is an as-yet unspecified function (called the interpolating function), and a0 is a new fundamental constant which marks the transition between the Newtonian and deep-MOND regimes. Agreement with Newtonian mechanics requires

μ(x) → 1 for x ≫ 1,

and consistency with astronomical observations requires

μ(x) → x for x ≪ 1.

Beyond these limits, the interpolating function is not specified by the hypothesis, although it is possible to weakly constrain it empirically. Two common choices are the "simple interpolating function"

μ(x) = x / (1 + x)

and the "standard interpolating function"

μ(x) = x / √(1 + x²).

Thus, in the deep-MOND regime (a ≪ a0):

F_N = m a² / a0.

Applying this to a star or other object of mass m in circular orbit around mass M (the total baryonic mass of the galaxy) produces

v⁴ = G M a0,

a flat rotation curve. By fitting his law to rotation curve data, Milgrom found a0 ≈ 1.2 × 10⁻¹⁰ m/s² to be optimal.
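To make the flat-rotation-curve behaviour concrete, here is a minimal numeric sketch comparing Newtonian and MONDian circular speeds around a point mass, using the simple interpolating function. The baryonic mass is an assumed illustrative figure, not fitted data:

```python
import math

G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
A0  = 1.2e-10     # Milgrom's constant, m/s^2 (fitted value)
M   = 1.0e41      # assumed baryonic mass, kg (~5e10 solar masses)
KPC = 3.086e19    # metres per kiloparsec

def v_newton(r):
    """Newtonian circular speed around a point mass."""
    return math.sqrt(G * M / r)

def v_mond(r):
    """Circular speed from Milgrom's law with the simple interpolating
    function mu(x) = x/(1+x): solve mu(a/a0)*a = GM/r^2 for a."""
    y = (G * M / r**2) / A0                      # Newtonian acceleration / a0
    x = 0.5 * (y + math.sqrt(y * y + 4.0 * y))   # positive root of x^2 - y*x - y = 0
    return math.sqrt(x * A0 * r)

for r_kpc in (1, 5, 20, 50, 100):
    r = r_kpc * KPC
    print(f"r = {r_kpc:3d} kpc: Newton {v_newton(r)/1e3:6.1f} km/s, "
          f"MOND {v_mond(r)/1e3:6.1f} km/s")

# Far out, the MOND curve flattens at v = (G*M*a0)**0.25, the BTFR scaling:
print(f"asymptotic flat speed: {(G * M * A0) ** 0.25 / 1e3:.0f} km/s")
```

At small radii the two speeds agree; beyond tens of kiloparsecs the Newtonian curve falls off while the MOND curve levels out near the asymptotic value.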
MOND holds that for accelerations smaller than an a0 value of roughly 1.2 × 10⁻¹⁰ m/s², accelerations increasingly depart from the standard Newtonian relationship of mass and distance, wherein gravitational strength is proportional to mass and the inverse square of distance. Specifically, the theory holds that when gravity is well below the a0 value, its rate of change—including the curvature of spacetime—increases with the square root of mass (rather than linearly as per Newtonian law) and decreases linearly with distance (rather than distance squared).
Whenever a small mass, m, is near a much larger mass, M, whether it be a star near the center of a galaxy or an object near or on Earth, MOND yields dynamics that are indistinguishably close to those of Newtonian gravity. This 1-to-1 correspondence between MOND and Newtonian dynamics is retained down to accelerations of about 1.2 × 10⁻¹⁰ m/s² (the a0 value); as accelerations decline below a0, MOND's dynamics rapidly diverge from the Newtonian description of gravity. For instance, there is a certain distance from the center of any given galaxy at which its gravitational acceleration equals a0; at ten times that distance, Newtonian gravity predicts a hundredfold decline in gravity whereas MOND predicts only a tenfold reduction.
It is important to note that the Newtonian component of MOND's dynamics remains active at accelerations well below the a0 value of 1.2 × 10⁻¹⁰ m/s²; the equations of MOND assert no minimum acceleration for the Newtonian component. However, because the residual Newtonian-like dynamics of MOND continue to decline as the inverse square of distance below a0—just as they do above—they comparatively vanish as they become overwhelmed by the stronger "deep-MOND" linear dynamics of the theory.
MOND predicts stellar velocities that closely match observations for an extraordinarily wide range of distances from galactic centers of mass. The magnitude of a0 establishes not only the distance from the center of the galaxy at which Newtonian and MOND dynamics diverge, but a0 also establishes the angle (when not plotted with log/log scales) of the non-Newtonian linear slope on velocity/radius graphs like Fig. 1.
MOND-compliant gravity, which explains galactic-scale observations, was not previously detected closer to Earth, such as in national laboratories or the trajectories of interplanetary spacecraft, because the a0 acceleration, 1.2 × 10⁻¹⁰ m/s², at which the dynamics of MOND begin diverging from Newtonian dynamics, is—as a practical matter—indistinguishably close to perfect weightlessness. Within the Solar System, the v⁴ = GMa0 equation makes the effect of the a0 term virtually nonexistent; it is overwhelmed by the enormous—and highly Newtonian—gravitational influence of the Sun as well as the variability of Earth's surface gravity.
On Earth's surface—and in national laboratories when performing ultra-precise gravimetry—the a0 value is equal to 0.012 microgal (μGal), which is only twelve-trillionths the strength of Earth's gravity. A change in the laws of gravity below this acceleration is far too small to be resolved with even the most sensitive free-fall-style absolute gravimeters available to national labs, like the FG5-X, which is accurate to just ±2 μGal. When considering why MOND's effects aren't detectable with precision gravimetry on Earth, it is important to remember that a0 doesn't represent a spurious force; it is the gravitational strength at which MOND is theorized to significantly begin departing from the Newtonian dynamic. Moreover, the a0 strength is equivalent to the change in Earth's gravity brought about by an elevation difference of 0.04 mm—the width of a fine human hair. Such subtle gravitational details, besides being unresolvable with current gravimeters, are overwhelmed by twice-daily distortions in Earth's shape due to lunar gravitational tides, which can cause local elevation changes nearly 10,000 times greater than 0.04 mm. Such disturbances in local gravity due to tidal distortions are even detectable as variations in the rate of a Shortt double-pendulum clock, which was a national timekeeping standard in the late 1920s.
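The two figures in this paragraph can be checked directly; a quick sketch, assuming the standard free-air gravity gradient of about 0.3086 mGal per metre of elevation:

```python
A0       = 1.2e-10   # Milgrom's constant, m/s^2
GAL      = 1e-2      # 1 Gal = 1 cm/s^2
GRADIENT = 3.086e-6  # free-air gravity gradient, (m/s^2) per metre elevation

print(f"a0 = {A0 / (1e-6 * GAL):.3f} microGal")               # ~0.012 uGal
print(f"elevation equivalent: {A0 / GRADIENT * 1e3:.2f} mm")  # ~0.04 mm
```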
Even at the edge of the Solar System, the a0 point at which MOND dynamics significantly diverge from Newtonian dynamics is overwhelmed and masked by the much stronger gravitational fields from the Sun and planets, which follow Newtonian gravity. To give a sense of scale to a0, a free-floating mass in space that was exposed for one hour to a0 would "fall" by just 0.8 millimeter—roughly the thickness of a credit card. An interplanetary spacecraft on a free-flying inertial path well above the Solar System's ecliptic plane (where it is isolated from the gravitational influence of individual planets) would, when at the same distance from the Sun as Neptune, experience a classic Newtonian gravitational strength that is 55,000 times stronger than a0. For small Solar System asteroids, gravitational effects in the realm of a0 are comparable in magnitude to the Yarkovsky effect, which subtly perturbs their orbits over long periods due to momentum transfer from the non-symmetric emission of thermal photons. The Sun's contribution to interstellar galactic gravity doesn't decline to the a0 threshold at which MOND's effects predominate until objects are 41 light-days from the Sun; this is 53 times further away from the Sun than Voyager 2 was in November 2022, which has been in the interstellar medium since 2012.
Despite its vanishingly small and undetectable effects on bodies that are on Earth, within the Solar System, and even in proximity to the Solar System and other planetary systems, MOND successfully explains significant observed galactic-scale rotational effects without invoking the existence of as-yet undetected dark matter particles lying outside of the highly successful Standard Model of particle physics. This is in large part due to MOND holding that exceedingly weak galactic-scale gravity holding galaxies together near their perimeters declines as a very slow linear relationship to distance from the center of a galaxy rather than declining as the inverse square of distance.
Milgrom's law can be interpreted in two ways:
One possibility is to treat it as a modification to Newton's second law, so that the force on an object is not proportional to the particle's acceleration but rather to In this case, the modified dynamics would apply not only to gravitational phenomena, but also those generated by other forces, for example electromagnetism.
Alternatively, Milgrom's law can be viewed as leaving Newton's Second Law intact and instead modifying the inverse-square law of gravity, so that the true gravitational force on an object of mass due to another of mass is roughly of the form In this interpretation, Milgrom's modification would apply exclusively to gravitational phenomena.
By itself, Milgrom's law is not a complete and self-contained physical theory, but rather an ad hoc empirically motivated variant of one of the several equations that constitute classical mechanics. Its status within a coherent non-relativistic hypothesis of MOND is akin to Kepler's Third Law within Newtonian mechanics; it provides a succinct description of observational facts, but must itself be explained by more fundamental concepts situated within the underlying hypothesis. Several complete classical hypotheses have been proposed (typically along "modified gravity" as opposed to "modified inertia" lines), which generally yield Milgrom's law exactly in situations of high symmetry and otherwise deviate from it slightly. A subset of these non-relativistic hypotheses have been further embedded within relativistic theories, which are capable of making contact with non-classical phenomena (e.g., gravitational lensing) and cosmology. Distinguishing both theoretically and observationally between these alternatives is a subject of current research.
The majority of astronomers, astrophysicists, and cosmologists accept dark matter as the explanation for galactic rotation curves (based on general relativity, and hence Newtonian mechanics), and are committed to a dark matter solution of the missing-mass problem. The primary difference between supporters of ΛCDM and MOND is in the observations for which they demand a robust, quantitative explanation, and those for which they are satisfied with a qualitative account, or are prepared to leave for future work. Proponents of MOND emphasize predictions made on galaxy scales (where MOND enjoys its most notable successes) and believe that a cosmological model consistent with galaxy dynamics has yet to be discovered. Proponents of ΛCDM require high levels of cosmological accuracy (which concordance cosmology provides) and argue that a resolution of galaxy-scale issues will follow from a better understanding of the complicated baryonic astrophysics underlying galaxy formation.
Observational evidence for MOND
Since MOND was specifically designed to produce flat rotation curves, these do not constitute evidence for the hypothesis, but every matching observation adds support to the empirical law. Nevertheless, proponents claim that a broad range of astrophysical phenomena at the galactic scale are neatly accounted for within the MOND framework. Many of these came to light after the publication of Milgrom's original papers and are difficult to explain using the dark matter hypothesis. The most prominent are the following:
In addition to demonstrating that rotation curves in MOND are flat, equation 2 provides a concrete relation between a galaxy's total baryonic mass (the sum of its mass in stars and gas) and its asymptotic rotation velocity. This predicted relation was called by Milgrom the mass-asymptotic speed relation (MASSR); its observational manifestation is known as the baryonic Tully–Fisher relation (BTFR), and is found to conform quite closely to the MOND prediction.
Milgrom's law fully specifies the rotation curve of a galaxy given only the distribution of its baryonic mass. In particular, MOND predicts a far stronger correlation between features in the baryonic mass distribution and features in the rotation curve than does the dark matter hypothesis (since dark matter dominates the galaxy's mass budget and is conventionally assumed not to closely track the distribution of baryons). Such a tight correlation is claimed to be observed in several spiral galaxies, a fact which has been referred to as "Renzo's rule".
Since MOND modifies Newtonian dynamics in an acceleration-dependent way, it predicts a specific relationship between the acceleration of a star at any radius from the centre of a galaxy and the amount of unseen (dark matter) mass within that radius that would be inferred in a Newtonian analysis. This is known as the mass discrepancy-acceleration relation, and has been measured observationally. One aspect of the MOND prediction is that the mass of the inferred dark matter goes to zero when the stellar centripetal acceleration becomes greater than a0, where MOND reverts to Newtonian mechanics. In a dark matter hypothesis, it is a challenge to understand why this mass should correlate so closely with acceleration, and why there appears to be a critical acceleration above which dark matter is not required.
Both MOND and dark matter halos stabilize disk galaxies, helping them retain their rotation-supported structure and preventing their transformation into elliptical galaxies. In MOND, this added stability is only available for regions of galaxies within the deep-MOND regime (i.e., with a < a0), suggesting that spirals with a > a0 in their central regions should be prone to instabilities and hence less likely to survive to the present day. This may explain the "Freeman limit" to the observed central surface mass density of spiral galaxies, which is roughly a0/G. This scale must be put in by hand in dark matter-based galaxy formation models.
Particularly massive galaxies are within the Newtonian regime (a > a0) out to radii enclosing the vast majority of their baryonic mass. At these radii, MOND predicts that the rotation curve should fall as 1/√r, in accordance with Kepler's Laws. In contrast, from a dark matter perspective one would expect the halo to significantly boost the rotation velocity and cause it to asymptote to a constant value, as in less massive galaxies. Observations of high-mass ellipticals bear out the MOND prediction.
In MOND, all gravitationally bound objects with a < a0 – regardless of their origin – should exhibit a mass discrepancy when analyzed using Newtonian mechanics, and should lie on the BTFR. Under the dark matter hypothesis, objects formed from baryonic material ejected during the merger or tidal interaction of two galaxies ("tidal dwarf galaxies") are expected to be devoid of dark matter and hence show no mass discrepancy. Three objects unambiguously identified as Tidal Dwarf Galaxies appear to have mass discrepancies in close agreement with the MOND prediction.
Recent work has shown that many of the dwarf galaxies around the Milky Way and Andromeda are located preferentially in a single plane and have correlated motions. This suggests that they may have formed during a close encounter with another galaxy and hence be Tidal Dwarf Galaxies. If so, the presence of mass discrepancies in these systems constitutes evidence for MOND. In addition, it has been claimed that a gravitational force stronger than Newton's (such as Milgrom's) is required for these galaxies to retain their orbits over time.
In 2020, a group of astronomers analyzing data from the Spitzer Photometry and Accurate Rotation Curves (SPARC) sample together with estimates of the large-scale external gravitational field from an all-sky galaxy catalog, concluded that there was highly statistically significant evidence of violations of the strong equivalence principle in weak gravitational fields in the vicinity of rotationally supported galaxies. They observed an effect consistent with the external field effect of modified Newtonian dynamics and inconsistent with tidal effects in the Lambda-CDM model paradigm commonly known as the Standard Model of Cosmology.
In a 2022 published survey of dwarf galaxies from the Fornax Deep Survey (FDS) catalogue, a group of astronomers and physicists conclude that 'observed deformations of dwarf galaxies in the Fornax Cluster and the lack of low surface brightness dwarfs towards its centre are incompatible with ΛCDM expectations but well consistent with MOND.'
In 2022, Kroupa et al. published a study of open star clusters, arguing that asymmetry in the population of leading and trailing tidal tails, and the observed lifetime of these clusters, are inconsistent with Newtonian dynamics but consistent with MOND.
In 2023, a study claimed that cold dark matter cannot explain galactic rotation curves, while MOND can.
In 2023, a study measured the acceleration of 26,615 wide binaries within 200 parsecs. The study showed that those binaries with accelerations less than 1 nm/s2 systematically deviate from Newtonian dynamics, but conform to MOND predictions, specifically to AQUAL. The results are disputed, with some authors arguing that the detection is caused by poor quality controls, while the original authors claimed that the added quality controls do not significantly affect the results.
In 2024, a study claimed that the universe's earliest galaxies formed and grew too quickly for the Lambda-CDM model to explain, but such rapid growth is predicted in MOND.
Complete MOND hypotheses
Milgrom's law requires incorporation into a complete hypothesis if it is to satisfy conservation laws and provide a unique solution for the time evolution of any physical system. Each of the theories described here reduce to Milgrom's law in situations of high symmetry (and thus enjoy the successes described above), but produce different behavior in detail.
Nonrelativistic
The first hypothesis of MOND (dubbed AQUAL) was constructed in 1984 by Milgrom and Jacob Bekenstein. AQUAL generates MONDian behavior by modifying the gravitational term in the classical Lagrangian from being quadratic in the gradient of the Newtonian potential to a more general function. (AQUAL is an acronym for A QUAdratic Lagrangian.) In formulae:

L_Newton = −|∇Φ|² / (8πG)

L_AQUAL = −(a0² / (8πG)) F(|∇Φ|² / a0²)

where Φ is the standard Newtonian gravitational potential and F is a new dimensionless function. Applying the Euler–Lagrange equations in the standard way then leads to a non-linear generalization of the Newton–Poisson equation:

∇·[μ(|∇Φ|/a0) ∇Φ] = 4πGρ

This can be solved given suitable boundary conditions and choice of F to yield Milgrom's law (up to a curl field correction which vanishes in situations of high symmetry).
An alternative way to modify the gravitational term in the lagrangian is to introduce a distinction between the true (MONDian) acceleration field a and the Newtonian acceleration field aN. The Lagrangian may be constructed so that aN satisfies the usual Newton-Poisson equation, and is then used to find a via an additional algebraic but non-linear step, which is chosen to satisfy Milgrom's law. This is called the "quasi-linear formulation of MOND", or QUMOND, and is particularly useful for calculating the distribution of "phantom" dark matter that would be inferred from a Newtonian analysis of a given physical situation.
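As a sketch of the quasi-linear idea, the following assumes spherical symmetry, where the QUMOND-style procedure reduces to an algebraic map g = ν(g_N/a0)·g_N. The ν used here is the inversion consistent with the simple interpolating function, and the baryonic mass is an assumed illustrative value; it computes the "phantom" dark matter mass a Newtonian analysis would infer:

```python
import math

G, A0 = 6.674e-11, 1.2e-10   # SI units
KPC = 3.086e19               # metres per kiloparsec

def nu(y):
    """Algebraic MOND step paired with the simple mu: g = nu(g_N/a0) * g_N."""
    return 0.5 + math.sqrt(0.25 + 1.0 / y)

def phantom_mass(m_baryon, r):
    """Dark mass a Newtonian analysis would infer inside radius r
    (spherically symmetric case)."""
    g_newton = G * m_baryon / r**2
    g_mond = nu(g_newton / A0) * g_newton
    return (g_mond - g_newton) * r**2 / G

M_B = 1e41  # assumed baryonic mass, kg
for r_kpc in (10, 50, 200):
    ratio = phantom_mass(M_B, r_kpc * KPC) / M_B
    print(f"r = {r_kpc:3d} kpc: phantom mass ~ {ratio:4.1f} x baryonic mass")
```

The inferred "dark" fraction grows with radius, mimicking an extended halo even though only baryons are present in the calculation.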
Both AQUAL and QUMOND propose changes to the gravitational part of the classical matter action, and hence interpret Milgrom's law as a modification of Newtonian gravity as opposed to Newton's second law. The alternative is to turn the kinetic term of the action into a functional depending on the trajectory of the particle. Such "modified inertia" theories, however, are difficult to use because they are time-nonlocal, require energy and momentum to be non-trivially redefined to be conserved, and have predictions that depend on the entirety of a particle's orbit.
Relativistic
In 2004, Jacob Bekenstein formulated TeVeS, the first complete relativistic hypothesis using MONDian behaviour. TeVeS is constructed from a local Lagrangian (and hence respects conservation laws), and employs a unit vector field, a dynamical and non-dynamical scalar field, a free function and a non-Einsteinian metric in order to yield AQUAL in the non-relativistic limit (low speeds and weak gravity). TeVeS has enjoyed some success in making contact with gravitational lensing and structure formation observations, but faces problems when confronted with data on the anisotropy of the cosmic microwave background, the lifetime of compact objects, and the relationship between the lensing and matter overdensity potentials.
Several alternative relativistic generalizations of MOND exist, including BIMOND and generalized Einstein aether theory. There is also a relativistic generalization of MOND that assumes a Lorentz-type invariance as the physical basis of MOND phenomenology.
External field effect
In Newtonian mechanics, an object's acceleration can be found as the vector sum of the acceleration due to each of the individual forces acting on it. This means that a subsystem can be decoupled from the larger system in which it is embedded simply by referring the motion of its constituent particles to their centre of mass; in other words, the influence of the larger system is irrelevant for the internal dynamics of the subsystem. Since Milgrom's law is non-linear in acceleration, MONDian subsystems cannot be decoupled from their environment in this way, and in certain situations this leads to behaviour with no Newtonian parallel. This is known as the "external field effect" (EFE), for which there exists observational evidence.
The external field effect is best described by classifying physical systems according to their relative values of ain (the characteristic acceleration of one object within a subsystem due to the influence of another), aex (the acceleration of the entire subsystem due to forces exerted by objects outside of it), and a0:
ain > a0 : Newtonian regime
aex < ain < a0 : Deep-MOND regime
ain < a0 < aex : The external field is dominant and the behavior of the system is Newtonian.
ain < aex < a0 : The external field is larger than the internal acceleration of the system, but both are smaller than the critical value. In this case, dynamics is Newtonian but the effective value of G is enhanced by a factor of a0/aex. (A minimal classifier for these four cases is sketched below.)
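The four regimes above can be written as a small classifier; a sketch, with a0 set to the usual fitted value and the example accelerations assumed purely for illustration:

```python
A0 = 1.2e-10  # m/s^2, Milgrom's constant (fitted value)

def efe_regime(a_in, a_ex):
    """Classify a subsystem with internal acceleration a_in embedded
    in an external gravitational field a_ex (both in m/s^2)."""
    if a_in > A0:
        return "Newtonian regime"
    if a_ex < a_in:                  # a_ex < a_in < a0
        return "deep-MOND regime"
    if a_ex > A0:                    # a_in < a0 < a_ex
        return "external field dominant: behaviour is Newtonian"
    # a_in < a_ex < a0
    return f"Newtonian, but with G enhanced by a0/a_ex = {A0 / a_ex:.1f}"

# Example: an open cluster with weak internal gravity, embedded in the
# Milky Way's field of roughly 2e-10 m/s^2 (assumed illustrative values)
print(efe_regime(a_in=5e-12, a_ex=2e-10))   # -> quasi-Newtonian
```

The example mirrors Milgrom's open-cluster argument in the next paragraph: a system with sub-a0 internal accelerations can still behave Newtonially if the external field is strong enough.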
The external field effect implies a fundamental break with the strong equivalence principle (but not necessarily the weak equivalence principle). The effect was postulated by Milgrom in the first of his 1983 papers to explain why some open clusters were observed to have no mass discrepancy even though their internal accelerations were below a0. It has since come to be recognized as a crucial element of the MOND paradigm.
The dependence in MOND of the internal dynamics of a system on its external environment (in principle, the rest of the universe) is strongly reminiscent of Mach's principle, and may hint towards a more fundamental structure underlying Milgrom's law. In this regard, Milgrom has commented:
It has been long suspected that local dynamics is strongly influenced by the universe at large, a-la Mach's principle, but MOND seems to be the first to supply concrete evidence for such a connection. This may turn out to be the most fundamental implication of MOND, beyond its implied modification of Newtonian dynamics and general relativity, and beyond the elimination of dark matter.
Indeed, the potential link between MONDian dynamics and the universe as a whole (that is, cosmology) is augmented by the observation that the value of a0 (determined by fits to internal properties of galaxies) is within an order of magnitude of cH0, where c is the speed of light and H0 is the Hubble constant (a measure of the present-day expansion rate of the universe). It is also close to the acceleration rate of the universe, and hence the cosmological constant. Recent work on a transactional formulation of entropic gravity by Schlatter and Kastner suggests a natural connection between a0, H0, and the cosmological constant.
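The numerical coincidence is easy to verify; a sketch assuming H0 = 70 km/s/Mpc (the exact value does not matter at this level of precision):

```python
import math

c  = 2.998e8          # speed of light, m/s
H0 = 70e3 / 3.086e22  # Hubble constant, s^-1 (70 km/s per Mpc, assumed)
A0 = 1.2e-10          # Milgrom's constant, m/s^2

print(f"c * H0          = {c * H0:.2e} m/s^2")                 # ~6.8e-10
print(f"c * H0 / (2 pi) = {c * H0 / (2*math.pi):.2e} m/s^2")   # ~1.1e-10
print(f"a0              = {A0:.2e} m/s^2")
```

cH0 lands within an order of magnitude of a0, and cH0/(2π) is close to a0 itself, which is the form of the coincidence often quoted.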
Responses and criticism
Dark matter explanation
While acknowledging that Milgrom's law provides a succinct and accurate description of a range of galactic phenomena, many physicists reject the idea that classical dynamics itself needs to be modified and attempt instead to explain the law's success by reference to the behavior of dark matter. Some effort has gone towards establishing the presence of a characteristic acceleration scale as a natural consequence of the behavior of cold dark matter halos, although Milgrom has argued that such arguments explain only a small subset of MOND phenomena. An alternative proposal is to ad hoc modify the properties of dark matter (e.g., to make it interact strongly with itself or baryons) in order to induce the tight coupling between the baryonic and dark matter mass that the observations point to. Finally, some researchers suggest that explaining the empirical success of Milgrom's law requires a more radical break with conventional assumptions about the nature of dark matter. One idea (dubbed "dipolar dark matter") is to make dark matter gravitationally polarizable by ordinary matter and have this polarization enhance the gravitational attraction between baryons.
Outstanding problems for MOND
The most serious problem facing Milgrom's law is that galaxy clusters show a residual mass discrepancy even when analyzed using MOND. This detracts from the adequacy of MOND as a solution to the missing mass problem, although the amount of extra mass required is a fifth that of a Newtonian analysis, and there is no requirement that the missing mass be non-baryonic. It has been speculated that 2 eV neutrinos could account for the cluster observations in MOND while preserving the hypothesis's successes at the galaxy scale. Indeed, analysis of sharp lensing data for the galaxy cluster Abell 1689 shows that MOND only becomes distinctive at Mpc-scale distances from the center, so that Zwicky's conundrum remains, and 1.8 eV neutrinos are needed in clusters.
The 2006 observation of a pair of colliding galaxy clusters known as the "Bullet Cluster", poses a significant challenge for all theories proposing a modified gravity solution to the missing mass problem, including MOND. Astronomers measured the distribution of stellar and gas mass in the clusters using visible and X-ray light, respectively, and in addition mapped the inferred dark matter density using gravitational lensing. In MOND, one would expect the "missing mass" to be centred on regions of visible mass which experience accelerations lower than a0 (assuming the external field effect is negligible). In ΛCDM, on the other hand, one would expect the dark matter to be significantly offset from the visible mass because the halos of the two colliding clusters would pass through each other (assuming, as is conventional, that dark matter is collisionless), whilst the cluster gas would interact and end up at the centre. An offset is clearly seen in the observations. It has been suggested, however, that MOND-based models may be able to generate such an offset in strongly non-spherically symmetric systems, such as the Bullet Cluster.
Some ultra diffuse galaxies, such as NGC 1052-DF2, originally appeared to be free of dark matter. Were this the case, it would have posed a problem for MOND because it cannot explain the rotation curves. However, further research showed that the galaxies were at a different distance than previously thought, leaving the galaxies with plenty of room for dark matter.
A significant piece of evidence in favor of standard dark matter is the observed anisotropies in the cosmic microwave background. While ΛCDM is able to explain the observed angular power spectrum, MOND has a much harder time, although it is possible to construct relativistic generalizations of MOND that can fit the observations too. MOND also encounters difficulties explaining structure formation, with density perturbations in MOND perhaps growing so rapidly that too much structure is formed by the present epoch. However, galaxy surveys appear to show massive galaxy formation occurring much more rapidly at early times than is possible according to ΛCDM.
Several other studies have noted observational difficulties with MOND. For example, it has been claimed that MOND offers a poor fit to the velocity dispersion profile of globular clusters and the temperature profile of galaxy clusters, that different values of a0 are required for agreement with different galaxies' rotation curves, and that MOND is naturally unsuited to forming the basis of cosmology. Furthermore, many versions of MOND predict that the speed of light is different from the speed of gravity, but in 2017 the speed of gravitational waves was measured to be equal to the speed of light to high precision. This is well understood in modern relativistic theories of MOND, with the constraint from gravitational waves actually helping by substantially restricting how a covariant theory might be constructed.
Besides these observational issues, MOND and its relativistic generalizations are plagued by theoretical difficulties. Several ad hoc and inelegant additions to general relativity are required to create a theory compatible with a non-Newtonian non-relativistic limit, though the predictions in this limit are rather clear. This is the case for the more commonly used modified gravity versions of MOND, but some formulations (most prominently those based on modified inertia) have long suffered from poor compatibility with cherished physical principles such as conservation laws. Researchers working on MOND generally do not interpret it as a modification of inertia, with only very limited work done on this area.
Proposals for testing MOND
Several observational and experimental tests have been proposed to help distinguish between MOND and dark matter-based models:
The detection of particles suitable for constituting cosmological dark matter would strongly suggest that ΛCDM is correct and no modification to Newton's laws is required.
If MOND is taken as a theory of modified inertia, it predicts the existence of anomalous accelerations on the Earth at particular places and times of the year. These could be detected in a precision experiment. This prediction would not hold if MOND is taken as a theory of modified gravity, as the external field effect produced by the Earth would cancel MONDian effects at the Earth's surface.
It has been suggested that MOND could be tested in the Solar System using the LISA Pathfinder mission (launched in 2015). In particular, it may be possible to detect the anomalous tidal stresses predicted by MOND to exist at the Earth-Sun saddlepoint of the Newtonian gravitational potential. It may also be possible to measure MOND corrections to the perihelion precession of the planets in the Solar System, or with a purpose-built spacecraft.
One potential astrophysical test of MOND is to investigate whether isolated galaxies behave differently from otherwise-identical galaxies that are under the influence of a strong external field. Another is to search for non-Newtonian behaviour in the motion of binary star systems where the stars are sufficiently separated for their accelerations to be below a0.
Testing MOND using the redshift-dependence of radial acceleration Sabine Hossenfelder and Tobias Mistele propose a parameter-free MOND model they call Covariant Emergent Gravity and suggest that as measurements of radial acceleration improve, various MOND models and particle dark matter might be distinguishable because MOND predicts a much smaller redshift-dependence.
See also
MOND researchers:
Notes
References
Further reading
Technical:
Merritt, David (2020). A Philosophical Approach to MOND: Assessing the Milgromian Research Program in Cosmology (Cambridge: Cambridge University Press), 282 pp.
Popular:
A non-Standard model, David Merritt, Aeon Magazine, July 2021
Dark matter critics focus on details, ignore big picture, Lee, 14 Nov 2012
"Dark matter" doubters not silenced yet , World Science, 2 Aug 2007
Does Dark Matter Really Exist?, Milgrom, Scientific American, Aug 2002
External links
The MOND pages, Stacy McGaugh
Mordehai Milgrom's website
"The Dark Matter Crisis" blog, Pavel Kroupa, Marcel Pawlowski
Pavel Kroupa's website
Superfluid dark matter may provide a more natural way to arrive at the MOND equation.
Astrophysics
Classical mechanics
Theories of gravity
Unsolved problems in astronomy
Unsolved problems in physics
Astronomical hypotheses
Dark matter
Celestial mechanics
Physics beyond the Standard Model
Matter | Modified Newtonian dynamics | [
"Physics",
"Astronomy"
] | 7,378 | [
"Dark matter",
"Astronomical hypotheses",
"Unsolved problems in astronomy",
"Astronomical sub-disciplines",
"Concepts in astronomy",
"Theoretical physics",
"Unsolved problems in physics",
"Classical mechanics",
"Astrophysics",
"Exotic matter",
"Mechanics",
"Particle physics",
"Astronomical c... |
21,593,473 | https://en.wikipedia.org/wiki/Swiss%201.2-metre%20Leonhard%20Euler%20Telescope | Leonhard Euler Telescope, or the Swiss EULER Telescope, is a national, fully automatic reflecting telescope, built and operated by the Geneva Observatory. It is located at ESO's La Silla Observatory site in the Chilean Norte Chico region, about 460 kilometers north of Santiago de Chile. The telescope, which saw its first light on 12 April 1998, is named after the Swiss mathematician Leonhard Paul Euler.
The Euler telescope uses the CORALIE instrument to search for exoplanets. In addition, the telescope uses the multi-purpose EulerCam (ecam), a high precision photometry instrument, and a smaller, piggyback mounted telescope, called "Pisco". Its first discovery was a planet in orbit around Gliese 86, determined to be a hot Jupiter with an orbital period of only 15.8 earth days and about four times the mass of Jupiter. Since then, many other exoplanets have been discovered or examined in follow-up observations.
Together with the Mercator Telescope, Euler was part of the Southern Sky extrasolar Planet search Programme, which has discovered numerous extrasolar planets. It has also been frequently employed for follow-up characterization to determine the mass of exoplanets discovered by the Wide Angle Search for Planets, SuperWASP.
CORALIE
The CORALIE spectrograph is an echelle-type spectrograph used for astronomy. It is a copy of the ELODIE spectrograph used by Michel Mayor and Didier Queloz to detect the planet orbiting the star 51 Pegasi. It was built and installed at the Euler Telescope in April 1998. In 2007 it was upgraded by Didier Queloz and his team to improve its performance in support of the Wide Angle Search for Planets program and the Next-Generation Transit Survey. The instrument is optimized to measure the Doppler effect on a star's electromagnetic spectrum with great precision, detecting the gravitational tug of an exoplanet orbiting around it. This technique, known as the "radial velocity" or "wobble" method, is an indirect detection method. The mass of the planet can be estimated from these measurements.
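The size of the radial-velocity signal follows from Kepler's laws; below is a sketch of the standard semi-amplitude formula, using rough published parameters for Gliese 86 b. The period, planet mass, and host-star mass here are approximate, and sin i is assumed to be 1:

```python
import math

G     = 6.674e-11  # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
M_JUP = 1.898e27   # kg
DAY   = 86400.0    # s

def rv_semi_amplitude(period_days, m_planet, m_star, e=0.0, sin_i=1.0):
    """Stellar radial-velocity semi-amplitude K for a single companion."""
    P = period_days * DAY
    return ((2 * math.pi * G / P) ** (1 / 3) * m_planet * sin_i
            / ((m_star + m_planet) ** (2 / 3) * math.sqrt(1 - e * e)))

# Rough parameters for Gliese 86 b: P ~ 15.8 d, m ~ 4 M_Jup, host ~ 0.8 M_Sun
K = rv_semi_amplitude(15.8, 4 * M_JUP, 0.8 * M_SUN)
print(f"K ~ {K:.0f} m/s")  # hundreds of m/s, well within CORALIE's precision
```

A hot Jupiter like Gliese 86 b produces a wobble of several hundred m/s, which explains why such planets were the first to be found with this method.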
The spectrograph participates in the Southern Sky extrasolar Planet search Programme initiated by Michel Mayor.
In 2010 the visible-light camera EulerCam was installed by Didier Queloz. The camera's main objective is to measure planets by the transit method, supporting ground-based programs such as the Wide Angle Search for Planets. The size of an exoplanet can be estimated using the transit method. By combining the measured size and mass from both methods, it can be determined whether the observed exoplanet is gaseous or rocky.
Characteristics
The resolution of CORALIE is fixed at R = 50,000 with three-pixel sampling. The detector is a 2k × 2k charge-coupled device with a 15-micrometre pixel size.
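Note that R = 50,000 corresponds to a velocity width per resolution element far coarser than the m/s-level shifts being measured; a quick check:

```python
c = 2.998e8   # speed of light, m/s
R = 50_000    # CORALIE resolving power

# Velocity width of one resolution element, c / R:
print(f"{c / R / 1e3:.0f} km/s per resolution element")  # ~6 km/s
```

Precision at the m/s level comes from simultaneously centroiding thousands of spectral lines against a stable wavelength reference, not from resolving the shift within a single line.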
Discovered exoplanets
The first five planetary objects discovered using CORALIE are:
See also
ELODIE spectrograph
List of largest optical telescopes in the 20th century
Stéphane Udry
WASP-15
References
External links
ESO La Silla 1.2m Leonhard Euler Telescope
Southern Sky extrasolar Planet search Programme
The CORALIE survey for southern extrasolar planets
www.exoplanets.ch
University of Geneva – The Geneva Observatory
daviddarling.info /Euler
ESO press release: 4 May 2000
Reflecting telescopes
European Southern Observatory
Spectrographs
Astronomical instruments
Telescope instruments
Exoplanet search projects
Articles containing video clips | Swiss 1.2-metre Leonhard Euler Telescope | [
"Physics",
"Chemistry",
"Astronomy"
] | 707 | [
"Exoplanet search projects",
"Telescope instruments",
"Spectrum (physical sciences)",
"Spectrographs",
"Astronomical instruments",
"Astronomy projects",
"Spectroscopy"
] |
21,600,649 | https://en.wikipedia.org/wiki/Dudley%27s%20theorem | In probability theory, Dudley's theorem is a result relating the expected upper bound and regularity properties of a Gaussian process to its entropy and covariance structure.
History
The result was first stated and proved by V. N. Sudakov, as pointed out in a paper by Richard M. Dudley. Dudley had earlier credited Volker Strassen with making the connection between entropy and regularity.
Statement
Let (X_t)_{t∈T} be a Gaussian process and let d_X be the pseudometric on T defined by

d_X(s, t) = √(E[(X_s − X_t)²]).

For ε > 0, denote by N(T, d_X; ε) the entropy number, i.e. the minimal number of (open) d_X-balls of radius ε required to cover T. Then

E[sup_{t∈T} X_t] ≤ 24 ∫₀^∞ √(log N(T, d_X; ε)) dε.
Furthermore, if the entropy integral on the right-hand side converges, then X has a version with almost all sample path bounded and (uniformly) continuous on (T, dX).
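A numerical illustration of the bound, using discretized Brownian motion (for which the canonical pseudometric is √|s − t|) and a greedy covering estimate; the constant 24 follows the statement above, and the greedy cover only over-estimates the true entropy number, which is safe since the integrand increases with N:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized Brownian motion: t = 1/n, ..., 1, Cov(X_s, X_t) = min(s, t)
n = 50
t = np.arange(1, n + 1) / n
cov = np.minimum.outer(t, t)
d = np.sqrt(np.abs(np.subtract.outer(t, t)))   # d_X(s, t) = sqrt(|s - t|)

def covering_number(d, eps):
    """Greedy upper estimate of the entropy number N(T, d; eps)."""
    uncovered = set(range(d.shape[0]))
    balls = 0
    while uncovered:
        centre = next(iter(uncovered))
        uncovered = {j for j in uncovered if d[centre, j] >= eps}
        balls += 1
    return balls

# Entropy integral 24 * int sqrt(log N) d(eps), via a Riemann sum
eps_grid = np.linspace(1e-3, 1.0, 400)
log_n = np.array([np.log(covering_number(d, e)) for e in eps_grid])
bound = 24 * float(np.sum(np.sqrt(log_n[:-1]) * np.diff(eps_grid)))

# Monte Carlo estimate of E[sup_t X_t] for comparison (continuum value ~0.80)
samples = rng.multivariate_normal(np.zeros(n), cov, size=20000)
print(f"E[sup X_t] ~ {samples.max(axis=1).mean():.2f}, Dudley bound ~ {bound:.1f}")
```

The bound is loose by an order of magnitude here, which is typical: Dudley's theorem guarantees finiteness and continuity rather than sharp constants.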
References
(See chapter 11)
Entropy
Theorems regarding stochastic processes | Dudley's theorem | [
"Physics",
"Chemistry",
"Mathematics"
] | 209 | [
"Thermodynamic properties",
"Physical quantities",
"Quantity",
"Theorems about stochastic processes",
"Theorems in probability theory",
"Entropy",
"Asymmetry",
"Wikipedia categories named after physical quantities",
"Symmetry",
"Dynamical systems"
] |
25,971,988 | https://en.wikipedia.org/wiki/Plano-convex%20ingot | Plano-convex ingots are lumps of metal with a flat or slightly concave top and a convex base. They are sometimes, misleadingly, referred to as bun ingots which imply the opposite concavity. They are most often made of copper, although other materials such as copper alloy, lead and tin are used. The first examples known were from the Near East during the 3rd and 2nd Millennia BC. By the end of the Bronze Age they were found throughout Europe and in Western and South Asia. Similar ingot forms continued in use during later Roman and Medieval periods.
Manufacture
Traditionally bun ingots were seen as a primary product of smelting, forming at the base of a furnace beneath a layer of less dense slag. However, experimental reconstruction of copper smelting showed that regular plano-convex ingots are difficult to form within the smelting furnace, producing only small ingots or copper prills that need to be remelted. High purity copper bun ingots found in Late Bronze Age Britain and the Mediterranean seem to have undergone a secondary refining procedure.
The metallographic structure and high iron compositions of some plano-convex ingots suggest that they are the product of primary smelting. Tylecote suggested that Roman plano-convex copper ingots may have been formed by tapping both slag and copper in one step into a mould or pit outside the furnace. A similar process was described by Agricola in book IX of his De Re Metallica and has been replicated experimentally.
Structure
Although all bun ingots share the same basic morphology, the details of their form and the texture of their convex base is dependent on the mould in which they cooled. Bun ingots made in purpose-dug depressions in sand can be highly variable in form even on the same site, whereas ingots cast in reusable moulds will form sets of identical “mould siblings”.
The composition of the metal and its cooling conditions affect structure. As the ingot cools gases are released giving the upper surface a “blistered” texture and if cooling takes place outside of the furnace, the outer surface often becomes oxidised. Casting in a warm mould or reheating furnace gives the ingot an even columnar structure running in the direction of cooling, whereas ingots cast in a cold mould have a distinctive two stage cooling structure with an outer chilled layer reflecting the rapid cooling of the bottom when it came into contact with the mould. A slightly concave upper surface can be produced if the top of the ingot cools more slowly than the bottom.
Britain
Late Bronze Age
By the Late Bronze Age, the copper bun ingot, either in a simple form or with a hole in its center, had become the main form of copper ingot, replacing the earlier ‘bar ingot’ or rippenbarre. Weights of complete examples average ~4 kg, but examples of up to about 7 kg are known. Many early finds of British LBA bun ingots were unstratified but recently bun-shaped ingots and ingot fragments have been found in hoards alongside bronze artifacts and scrap metal. Several offshore finds of probable LBA date suggest that copper bun ingots may have been traded by sea during this period.
Composition and Structure
The copper is of high purity, although earlier examples are sometimes composed of arsenical copper. Tylecote suggested that they are not primary smelting products and instead were refined and recast. The macrostructure of a half-sectioned example from Gillan, Cornwall shows a columnar structure that probably indicates slow cooling in a reheating furnace or a warm mould, rather than pouring into a cold mould.
Iron Age and Roman period
A second major group of British bun ingots date to the Roman period and are found mostly in the copper-rich highland areas of Wales and in Scotland. They are heavier than the LBA examples, with weights ranging between 12 and 22 kg.
Some have stamps clearly dating them to the Roman period including an example that reads SOCIO ROMAE NATSOL. The term "socio" suggests that the ingots were cast by a private company rather than by the state. Fraser Hunter reassessed the context of the Scottish examples and some of the unstamped Welsh examples and argues that they could in fact date to the Iron Age or at least reflect native rather than Roman copper working. Although ingots of any sort are not common in the British Iron Age, plano-convex or bun-shaped ingots exist, e.g. a tin ingot discovered within the Iron Age hillfort at Chun Castle, Cornwall.
Composition and Structure of Roman Ingots
The Roman Bun Ingots are less pure than the earlier LBA examples and Tylecote suggests that they may be a direct product of smelting. Theoretically such an ingot could be formed in the base of the furnace. However, this is problematic in the case of the stamped examples as this would require the furnace to be dismantled or else have a short shaft to allow access for stamping. As a solution the furnace could have been tapped into a mould at the completion of smelting. It is possible that both methods were used as several of the ingots seem to have had additional metal poured onto the top in order to allow stamping.
References
Sources
Metallurgy
Casting (manufacturing) | Plano-convex ingot | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,095 | [
"Metallurgy",
"Materials science",
"nan"
] |
25,972,348 | https://en.wikipedia.org/wiki/Helmholtz-Zentrum%20Dresden-Rossendorf | The Helmholtz-Zentrum Dresden-Rossendorf (HZDR) is a Dresden-based research laboratory. It conducts research in three of the Helmholtz Association's areas: materials, health, and energy. HZDR is a member of the Helmholtz Association of German Research Centres.
History
HZDR is located at the site of the former Central Institute for Nuclear Physics (later: Central Institute for Nuclear Research) in Dresden-Rossendorf, which was founded in 1956 and became the largest nuclear research institute in the GDR. The former research center in Rossendorf was part of the German Academy of Sciences. The German-born British physicist Klaus Fuchs, who took part in the Manhattan Project and acted as a spy for the Soviet Union, was deputy director until 1974.
In 1992, Forschungszentrum Rossendorf was founded at the research site. In 2006 the name changed to "Forschungszentrum Dresden-Rossendorf", to emphasize the connection to the research infrastructure in the city of Dresden. In 2011 the center became a member of the Helmholtz Association of German Research Centres.
Research programs
HZDR conducts research in the materials, health and energy sectors in Dresden and at four other locations in Germany and one in France. In Grenoble, it operates a beamline for radiochemistry research at the European Synchrotron Radiation Facility (ESRF). Three of HZDR's five large-scale facilities are available to international scientists.
Materials
HZDR scientists are investigating the structure and function of new materials in order to better understand, optimize, and use them for specific applications. This includes research on novel superconducting and semiconducting materials using high magnetic fields or ion beams. They are developing detectors for applications in medicine and technology, and are advancing technologies for particle acceleration.
Health
HZDR aims at making progress in early diagnosis and therapy of cancer. It collaborates with partners from university medicine (National Center for Radiation Research in Oncology, OncoRay, in Dresden). HZDR cancer research concentrates on three major fields: new radioactive pharmaceuticals for cancer diagnosis and therapy, innovative medical imaging methods used in oncology as well as particle acceleration using new laser technologies for radiation oncology.
Energy
HZDR researchers are looking for economically and ecologically feasible energy solutions. The Helmholtz Institute Freiberg for Resource Technology, a joint initiative of HZDR and TU Bergakademie Freiberg, is targeting new technologies for the exploration, mining, and use of strategically important metals and minerals, e.g. biotechnological methods for metal recycling. Scientists also study energy-intensive processes in industry, like steel casting or in the chemical industry. They are examining nuclear repositories and reactors. And they are contributing to new storage technologies, e.g., developing a liquid metal battery.
Research facilities
HZDR operates multiple research facilities:
ELBE is a Center for High-Power Radiation Sources and HZDR's largest research facility. It encompasses a superconducting Electron Linear accelerator for beams with high Brilliance and low Emittance (ELBE) and two free-electron lasers (FELs) for the mid- and far-infrared spectral ranges. In addition, the electron beam delivers multiple other secondary beams (quasi-monochromatic X-rays, polarized Bremsstrahlung, pulsed neutron beams and pulsed mono-energetic positrons).
The high-power laser Dresden Laser Acceleration Source (DRACO), a titanium:sapphire laser, achieves a power of 1 PW by means of chirped pulse amplification and is used to accelerate protons and electrons to high energies using laser plasma acceleration. DRACO is part of HZDR's ELBE Center for High-Power Radiation Sources.
With PEnELOPE, another laser system with petawatt-class power is under construction. It is a short-pulse laser source in the petawatt range pumped by diode lasers. In particular, it is intended to enable the laser-assisted acceleration of protons for medical applications. The ultimate goal is to replace the large particle accelerators required today for proton beam cancer therapy with much more compact facilities.
The Dresden High Magnetic Field Laboratory (Hochfeld-Magnetlabor Dresden, HLD) is located directly next to ELBE in order to be able to perform combined experiments. Here, particularly strong pulsed magnetic fields are generated. Magnetic fields of up to 100 tesla are available here for materials research. The coils, which were also developed at the site, can generate fields of 95 tesla for fractions of a second (as of May 2017). The coils are cooled to around -200 °C with liquid nitrogen and a current of several tens of thousands of amperes flows through them for a short time. A capacitor bank is used for this purpose. At HLD, the fundamental, quantum mechanical properties of magnetism are also investigated and new components such as high-temperature superconductors are developed. HLD is a user facility and partnering in the EU project European Magnetic Field Laboratory (EMFL), a consortium dedicated to unite and coordinate the existing European high magnetic field laboratories.
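For context, the energy stored in a magnetic field scales as B²/(2μ0); a quick sketch showing why fields approaching 100 T can only be sustained for fractions of a second:

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A

def energy_density(b_tesla):
    """Magnetic energy density u = B^2 / (2*mu0), in J/m^3."""
    return b_tesla**2 / (2 * MU0)

for b in (10, 50, 100):
    print(f"B = {b:3d} T: {energy_density(b):.1e} J/m^3")
# At 100 T the energy density (~4e9 J/m^3) is comparable to that of a high
# explosive, which is why such fields are restricted to short pulses fed
# from a capacitor bank.
```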
The Helmholtz International Beamline for Extreme Fields (HIBEF) was set up by the HZDR together with the Deutsches Elektronen-Synchrotron (DESY) at the X-ray laser European XFEL in Hamburg. HIBEF combines the X-ray radiation of the European XFEL with two superlasers, a powerful magnetic coil and a platform for research with diamond stamp cells. In this way, the behavior of matter under the influence of exceptionally high pressures, temperatures and magnetic fields can be studied with unprecedented precision.
The Ion Beam Center (IBC) offers the possibility of selectively bombarding samples with charged atoms of various light and heavy chemical elements coming from different sources. These plasma and ion sources generate ions of all species at energies between 10 eV and 50 MeV. Several machines can accelerate the projectiles to different energies, which allows their effect on the sample to be controlled. Depending on the element and energy, these ion beams are suitable for investigating or selectively modifying samples. These machines are used primarily for the development of tiny electronic components, layered semiconductor systems such as in solar cells, or optical materials such as the transparent but conductive surfaces of modern screens. IBC is funded as a user facility by the EU.
ROBL, the Rossendorf Beamline at the ESRF in Grenoble, France, comprises two facilities for radiochemical experiments.
The PET Center is operated together with Technische Universität Dresden and University Hospital Dresden. Researchers are developing imaging methods for cancer diagnosis as well as new approaches to cancer treatment. Together, these institutions also operate the National Center for Radiation Research in Oncology – OncoRay.
The thermohydraulic test facility TOPFLOW investigates stationary and transient phenomena in two-phase flows and develops and validates models for Computational Fluid Dynamics (CFD) codes.
The DREsden Sodium facility for DYNamo and thermohydraulic studies (DRESDYN) is intended as a platform both for large scale experiments related to geo- and astrophysics as well as for experiments related to thermohydraulic and safety aspects of liquid metal batteries and liquid metal fast reactors. Its most ambitious projects are a homogeneous hydromagnetic dynamo driven solely by precession and a large Taylor-Couette type experiment for the combined investigation of the magnetorotational instability and the Tayler instability.
Departments
The HZDR comprises eight institutes:
Institute of Ion Beam Physics and Materials Research
Institute Dresden High Magnetic Field Laboratory
Institute of Fluid Dynamics
Institute of Radiation Physics
Institute of Radiopharmaceutical Cancer Research
Institute of Radiooncology – OncoRay
Institute of Resource Ecology
Helmholtz Institute Freiberg for Resource Technology, together with the TU Bergakademie Freiberg.
In addition, there are research departments that cover specific research foci as independent units: CASUS (Center for Advanced Systems Understanding) as an institute in formation and the Department of Theoretical Physics.
Scientific-technical support is provided to all institutes and research departments by two central departments:
Central Department Research Technology, for the development and setup of research facilities and experiments.
Central Department Information Services and Computing, for the informatics infrastructure of all HZDR sites.
Collaborations
The HZDR is nationally and internationally connected to other institutions and organised in various research alliances.
International collaborations
European Synchrotron Radiation Facility (ESRF)
European XFEL
WHELMI Lab (Weizmann-Helmholtz Laboratory for Laser Matter Interaction)
LEAPS Initiative (League of European Accelerator-based Photon Sources)
ERF AISBL (Association of European-level Research Infrastructure Facilities)
ELI (Extreme Light Infrastructure)
European Magnetic Field Laboratory (EMFL)
EIT RawMaterials
INFACT (Innovative, Non-Invasive and Fully Acceptable Exploration Technologies)
Universität Breslau
ARIEL (Accelerator and Research reactor Infrastructures for Education and Learning)
Monash University Melbourne
Lightsources.org, a platform of the world-wide light source community
RADIATE (Research And Development with Ion Beams – Advancing Technology in Europe)
FineFuture (Resource optimization as a common challenge)
Helmholtz-SESAME Beamline (HESEB)
National collaborations
DRESDEN-concept (Dresden Research and Education Synergies for the Development of Excellence and Novelty)
TU Dresden
TU Bergakademie Freiberg
TU Chemnitz
German Working Group for Repository Research / Deutsche Arbeitsgemeinschaft Endlagerforschung
Competence Pool for Radiation Research / Kompetenzverbund Strahlenforschung
Competence Pool East for Nuclear Technology / Kompetenzverbund Ost für Kerntechnik
Universitätsklinikum Carl Gustav Carus Dresden
OncoRay – Zentrum für Strahlenforschung in der Onkologie
NCT/UCC – National Center for Tumor Diseases
DKTK – German Cancer Consortium
ROTOP Pharmaka GmbH
Universität Rostock
Staff and research sites
The HZDR employs about 1,400 staff, working at six research sites. The headquarters is in Dresden.
Technology transfer
The HZDR Innovation GmbH corporation offers industrial services using HZDR's know-how and infrastructures in ion implantation. This technology is applied for doping material surfaces with foreign atoms or to produce defects in semiconductors. It is also used to create materials with targeted features such as oxidation resistance, which is important for aviation or automotive lightweight construction, or biocompatibility for medical implants. Products of HZDR Innovation that have already been commercialized include a grid sensor and measuring instruments for analyzing multiphase flows.
Students and young scientists
Roughly 170 doctoral students work there. The HZDR installed junior research groups to promote excellent young scientists, the topics of which as of 2021 are:
Physical chemistry of biomolecular condensates
Bubbles go with the turbulent flows
Terahertz-driven dynamics at surfaces and interfaces
Artificial Intelligence for the future photon science
Advanced modelling of multiphase flows
Nano Safety
BioKollekt
Application-oriented laser particle acceleration
Another junior research group receives special funding from the Helmholtz Association:
Ultrafast X-ray Methods for Laboratory Astrophysics
In addition, there is a DFG-funded junior research group in the Emmy Noether Program:
Towards Fluid Dynamics of Foam and Froth
Another group receives funding from the European Research Council (ERC):
TOP: Towards the Bottom of the Periodic Table
HZDR operates an International Helmholtz Research School for Nanoelectronic Networks (NANONET) as well as a Summer Student Program.
Notes
Multidisciplinary research institutes
Physics research institutes
Medical research institutes in Germany
Particle physics facilities
Nuclear research institutes
Medical and health organisations based in Saxony
Scientific organisations based in East Germany | Helmholtz-Zentrum Dresden-Rossendorf | [
"Engineering"
] | 2,417 | [
"Nuclear research institutes",
"Nuclear organizations"
] |
25,975,488 | https://en.wikipedia.org/wiki/Course%20of%20Theoretical%20Physics | The Course of Theoretical Physics is a ten-volume series of books covering theoretical physics that was initiated by Lev Landau and written in collaboration with his student Evgeny Lifshitz starting in the late 1930s.
It is said that Landau composed much of the series in his head while in an NKVD prison in 1938–1939. However, almost all of the actual writing of the early volumes was done by Lifshitz, giving rise to the witticism, "not a word of Landau and not a thought of Lifshitz". The first eight volumes were finished in the 1950s, written in Russian and translated into English in the late 1950s by John Stewart Bell, together with John Bradbury Sykes, M. J. Kearsley, and W. H. Reid. The last two volumes were written in the early 1980s. Vladimir Berestetskii and Lev Pitaevskii also contributed to the series. The series is often referred to as "Landau and Lifshitz", "Landafshitz" (Russian: "Ландафшиц"), or "Lanlifshitz" (Russian: "Ланлифшиц") in informal settings.
Impact
The presentation of material is advanced and typically considered suitable for graduate-level study. Despite this specialized character, it is estimated that a million volumes of the Course were sold by 2005.
The series has been called "renowned" in Science and "celebrated" in American Scientist. A note in Mathematical Reviews states, "The usefulness and the success of this course have been proved by the great number of successive editions in Russian, English, French, German and other languages." At a centenary celebration of Landau's career, it was observed that the Course had shown "unprecedented longevity."
In 1962, Landau and Lifshitz were awarded the Lenin Prize for their work on the Course. This was the first occasion on which the Lenin Prize had been awarded for the teaching of physics.
English editions
The following list does not include reprints and revised editions.
Volume 1
Volume 1 covers classical mechanics without special or general relativity, in the Lagrangian and Hamiltonian formalisms.
Volume 2
Volume 2 covers relativistic mechanics of particles, and classical field theory for fields, specifically special relativity and electromagnetism, general relativity and gravitation.
Volume 3
Volume 3 covers quantum mechanics without special relativity.
Volume 4
The original edition comprised two books, labelled part 1 and part 2. The first covered general aspects of relativistic quantum mechanics and relativistic quantum field theory, leading onto quantum electrodynamics. The second continued with quantum electrodynamics and what was then known about the strong and weak interactions. These books were published in the early 1970s, at a time when the strong and weak forces were still not well understood. In the second edition, the corresponding sections were scrapped and replaced with more topics in the well-established quantum electrodynamics, and the two parts were unified into one, thus providing a one-volume exposition on relativistic quantum field theory with the electromagnetic interaction as the prototype of a quantum field theory.
Volume 5
Early version:
Volume 5 covers general statistical mechanics and thermodynamics and applications, including chemical reactions, phase transitions, and condensed matter physics.
Volume 6
Volume 6 covers fluid mechanics in a condensed but varied exposition, from ideal to viscous fluids, includes a chapter on relativistic fluid mechanics, and another on superfluids.
Volume 7
Volume 7 covers elasticity theory of solids, including viscous solids, vibrations and waves in crystals with dislocations, and a chapter on the mechanics of liquid crystals.
Volume 8
Volume 8 covers electromagnetism in materials, and includes a variety of topics in condensed matter physics, a chapter on magnetohydrodynamics, and another on nonlinear optics.
Volume 9
Volume 9 builds on the original statistical physics book, with more applications to condensed matter theory.
Volume 10
Volume 10 presents various applications of kinetic theory to condensed matter theory, and to metals, insulators, and phase transitions.
See also
Lectures on Theoretical Physics
List of textbooks on classical and quantum mechanics
List of textbooks in thermodynamics and statistical mechanics
List of textbooks in electromagnetism
The Theoretical Minimum
Notes
External links
Internet Archive: scanned copies of the individual volumes
Britannica Online: Course of Theoretical Physics
Internet Archive: Landau-Lifschitz Vol. 1-10
Classical mechanics
Nauka (publisher) books
Physics textbooks
Quantum mechanics
Series of non-fiction books
Statistical mechanics
Pergamon Press books | Course of Theoretical Physics | [
"Physics"
] | 959 | [
"Classical mechanics",
"Quantum mechanics",
"Mechanics",
"Statistical mechanics",
"Works about quantum mechanics"
] |
25,982,350 | https://en.wikipedia.org/wiki/Avizo%20%28software%29 | Avizo (pronounced 'a-VEE-zo') is a general-purpose commercial software application for scientific and industrial data visualization and analysis.
Avizo is developed by Thermo Fisher Scientific and was originally designed and developed by the Visualization and Data Analysis Group at Zuse Institute Berlin (ZIB) under the name Amira. Avizo was commercially released in November 2007. For the history of its development, see the Wikipedia article about Amira.
Overview
Avizo is a software application which enables users to perform interactive visualization and computation on 3D data sets.
The Avizo interface is modelled on the visual programming paradigm. Users manipulate data and module components, organized in an interactive graph representation (called Pool), or in a Tree view. Data and modules can be interactively connected together, and controlled with several parameters, creating a visual processing network whose output is displayed in a 3D viewer.
With this interface, complex data can be interactively explored and analyzed by applying a controlled sequence of computation and display processes resulting in a meaningful visual representation and associated derived data.
Application areas
Avizo has been designed to support different types of applications and workflows from 2D and 3D image data processing to simulations.
It is a versatile and customizable visualization tool used in many fields:
Scientific visualization
Materials Research
Tomography, Microscopy, etc.
Nondestructive testing, Industrial Inspection, and Visual Inspection
Computer-aided Engineering and simulation data post-processing
Porous medium analysis
Civil Engineering
Seismic Exploration, Reservoir Engineering, Microseismic Monitoring, Borehole Imaging
Geology, Digital Rock Physics (DRP), Earth Sciences
Archaeology
Food technology and agricultural science
Physics, Chemistry
Climatology, Oceanography, Environmental Studies
Astrophysics
Features
Data import:
2D and 3D image stack and volume data: from microscopes (electron, optical), X-ray tomography (CT, micro-/nano-CT, synchrotron), neutron tomography and other acquisition devices (MRI, radiography, GPR)
Geometric models (such as point sets, line sets, surfaces, grids)
Numerical simulation data (such as Computational fluid dynamics or Finite element analysis data)
Molecular data
Time series and animations
Seismic data
Well logs
4D Multivariate Climate Models
2D/3D data visualization:
Volume rendering
Digital Volume Correlation
Visualization of sections, through various slicing and clipping methods
Isosurface rendering
Polygonal meshes
Scalar fields, Vector fields, Tensor representations, Flow visualization (Illuminated Streamlines, Stream Ribbons)
Image processing:
2D/3D Alignment of image slices, Image registration
Image filtering
Mathematical Morphology (erode, dilate, open, close, tophat)
Watershed Transform, Distance Transform
Image segmentation
3D models reconstruction:
Polygonal surface generation from segmented objects
Generation of tetrahedral grids
Surface reconstruction from point clouds
Skeletonization (reconstruction of dendritic, porous or fracture network)
Surface model simplification
Quantification and analysis:
Measurements and statistics
Analysis spreadsheet and charting
Material properties computation, based on 3D images:
Absolute permeability
Thermal conductivity
Molecular diffusivity
Electrical resistivity/formation factor
3D image-based meshing for CFD and FEA:
From 3D imaging modalities (CT, micro-CT, MRI, etc.)
Surface and volume meshes generation
Export to FEA and CFD solvers for simulation
Post-processing for simulation analysis
Presentation, automation:
MovieMaker, Multiscreen, Video wall, collaboration, and VR support
TCL Scripting, C++ extension API
Avizo is based on Open Inventor 3D graphics toolkits (FEI Visualization Sciences Group).
References
External links
Scientific Publications
Official Avizo forum
Avizo videos
3D graphics software
3D imaging
Computational fluid dynamics
Computer vision software
Data and information visualization software
Earth sciences graphics software
Graphics software
Image processing software
Image segmentation
Mesh generators
Molecular dynamics software
Molecular modelling software
Nondestructive testing
Physics software
Science software
Simulation software
Software that uses Qt
Virtual reality | Avizo (software) | [
"Physics",
"Chemistry",
"Materials_science"
] | 815 | [
"Molecular dynamics software",
"Molecular modelling software",
"Computational chemistry software",
"Computational fluid dynamics",
"Computational physics",
"Molecular modelling",
"Molecular dynamics",
"Materials testing",
"Nondestructive testing",
"Fluid dynamics",
"Physics software"
] |
953,440 | https://en.wikipedia.org/wiki/Radiation%20exposure%20%28disambiguation%29 | Radiation exposure may refer to:
Exposure (radiation), caused by ionizing photons, namely X-rays and gamma rays; or ionizing particles, usually alpha particles, neutrons, protons, or electrons
Humans being subjected to an ionizing radiation hazard, either by irradiation or contamination
In modern radiology, and in scientific papers from the early 20th century, radiation exposure may refer to kerma (physics)
Exposure (photography), photographic film exposure to ionizing radiation
Any material being subjected to even everyday levels of any type of radiation, such as heat or light
Radiation Exposure Compensation Act, a federal statute of the United States providing for the monetary compensation of people who contracted cancer and a number of other specified diseases as a direct result of their exposure to radiation under certain circumstances
Radiation Exposure Monitoring, a system for monitoring exposure
Radiobiology
Radiation effects | Radiation exposure (disambiguation) | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering",
"Biology"
] | 172 | [
"Physical phenomena",
"Radiobiology",
"Materials science",
"Radiation",
"Condensed matter physics",
"Radiation effects",
"Radioactivity"
] |
953,723 | https://en.wikipedia.org/wiki/Historadiography | Historadiography is a technique formerly utilized in the fields of histology and cellular biology to provide semiquantitative information regarding the density of a tissue sample. It is usually synonymous with microradiography. This is achieved by layering a ground section of mineralized tissue (such as bone) with photographic emulsion on a glass slide and exposing the sample to a beam of X-rays. After developing the emulsion, the resulting radiograph can be viewed with a microscope. A side-by-side comparison with a slide containing radiographs of various substances of known mass can provide a rough mass estimate, and therefore a rough approximation of the concentration of calcium salts in the sample.
Historadiography has also been used to visualize staining of tissue, such as spinal cord samples with thorotrast, which contains thorium that is opaque to X-rays.
Over recent decades researchers have generally lost interest in historadiography. The most recent publication using the term (1998) to be indexed in PubMed referred to autoradiography of tritium incorporated in thymidine.
References
Histology | Historadiography | [
"Chemistry"
] | 226 | [
"Histology",
"Microscopy"
] |
954,006 | https://en.wikipedia.org/wiki/Classical%20XY%20model | The classical XY model (sometimes also called classical rotor (rotator) model or O(2) model) is a lattice model of statistical mechanics. In general, the XY model can be seen as a specialization of Stanley's n-vector model for $n = 2$.
Definition
Given a $D$-dimensional lattice $\Lambda$, per each lattice site $j \in \Lambda$ there is a two-dimensional, unit-length vector
$$\mathbf{s}_j = (\cos\theta_j, \sin\theta_j).$$
The spin configuration, $\mathbf{s} = (\mathbf{s}_j)_{j \in \Lambda}$, is an assignment of the angle $-\pi < \theta_j \le \pi$ for each $j \in \Lambda$.
Given a translation-invariant interaction $J_{ij} = J(i - j)$ and a point-dependent external field $\mathbf{h}_j = (h_j, 0)$, the configuration energy is
$$H(\mathbf{s}) = -\sum_{i \ne j} J_{ij}\, \mathbf{s}_i \cdot \mathbf{s}_j - \sum_{j} \mathbf{h}_j \cdot \mathbf{s}_j = -\sum_{i \ne j} J_{ij} \cos(\theta_i - \theta_j) - \sum_{j} h_j \cos\theta_j.$$
The case in which $J_{ij} = 0$ except for nearest-neighbour pairs is called the nearest-neighbour case.
The configuration probability is given by the Boltzmann distribution with inverse temperature $\beta \ge 0$:
$$P(\mathbf{s}) = \frac{e^{-\beta H(\mathbf{s})}}{Z(\beta)}, \qquad Z(\beta) = \int_{[-\pi,\pi]^{\Lambda}} \prod_{j \in \Lambda} d\theta_j \; e^{-\beta H(\mathbf{s})},$$
where $Z$ is the normalization, or partition function. The notation $\langle f(\mathbf{s}) \rangle$ indicates the expectation of the random variable $f(\mathbf{s})$ in the infinite volume limit, after periodic boundary conditions have been imposed.
Rigorous results
The existence of the thermodynamic limit for the free energy and spin correlations was proved by Ginibre, extending to this case the Griffiths inequality.
Using the Griffiths inequality in the formulation of Ginibre, Aizenman and Simon proved that the two-point spin correlation of the ferromagnetic XY model in dimension $D$, coupling $J > 0$ and inverse temperature $\beta$ is dominated by (i.e. has an upper bound given by) the two-point correlation of the ferromagnetic Ising model in dimension $D$, coupling $J > 0$ and inverse temperature $\beta/2$:
$$\langle \mathbf{s}_i \cdot \mathbf{s}_j \rangle_{J, \beta} \le \langle \sigma_i \sigma_j \rangle_{J, \beta/2}.$$
Hence the critical $\beta$ of the XY model cannot be smaller than double the critical inverse temperature of the Ising model:
$$\beta_c^{XY} \ge 2\,\beta_c^{\rm Is}.$$
One dimension
As in any 'nearest-neighbor' n-vector model with free (non-periodic) boundary conditions, if the external field is zero, there exists a simple exact solution. In the free boundary conditions case, the Hamiltonian is
$$H(\mathbf{s}) = -J\left[\cos(\theta_1 - \theta_2) + \cdots + \cos(\theta_{L-1} - \theta_L)\right],$$
therefore the partition function factorizes under the change of coordinates
$$\theta_j' = \theta_j - \theta_{j-1}, \qquad j \ge 2.$$
This gives
$$Z = \int_{-\pi}^{\pi} d\theta_1 \cdots d\theta_L \; \prod_{j=2}^{L} e^{\beta J \cos\theta_j'} = 2\pi \left[\int_{-\pi}^{\pi} d\theta' \, e^{\beta J \cos\theta'}\right]^{L-1} = 2\pi \left[2\pi I_0(\beta J)\right]^{L-1},$$
where $I_0$ is the modified Bessel function of the first kind. The partition function can be used to find several important thermodynamic quantities. For example, in the thermodynamic limit ($L \to \infty$), the free energy per spin is
$$f(\beta, h = 0) = -\lim_{L \to \infty} \frac{1}{\beta L} \ln Z = -\frac{1}{\beta} \ln\left[2\pi I_0(\beta J)\right].$$
Using the properties of the modified Bessel functions, the specific heat (per spin) can be expressed as
$$\frac{c}{k_{\rm B}} = K^2 \left(1 - \frac{\mu}{K} - \mu^2\right),$$
where $K = \beta J$, and $\mu = I_1(\beta J)/I_0(\beta J)$ is the short-range correlation function,
$$\mu = \langle \mathbf{s}_i \cdot \mathbf{s}_{i+1} \rangle.$$
Even in the thermodynamic limit, there is no divergence in the specific heat. Indeed, like the one-dimensional Ising model, the one-dimensional XY model has no phase transitions at finite temperature.
The same computation for periodic boundary conditions (and still $h = 0$) requires the transfer matrix formalism, though the result is the same.
This transfer matrix approach is also required when using free boundary conditions, but with an applied field $h \ne 0$. If the applied field is small enough that it can be treated as a perturbation to the system in zero-field, then the magnetic susceptibility can be estimated. This is done by using the eigenstates computed by the transfer matrix approach and computing the energy shift with second-order perturbation theory, then comparing with the free-energy expansion $F = F_0 - \tfrac{1}{2}\chi h^2$. One finds
$$\chi(T) \approx \frac{C}{T}\,\frac{1 + \mu}{1 - \mu},$$
where $C$ is the Curie constant (a value typically associated with the susceptibility in magnetic materials). This expression is also true for the one-dimensional Ising model, with the replacement $\mu \to \tanh(\beta J)$.
Two dimensions
The two-dimensional XY model with nearest-neighbor interactions is an example of a two-dimensional system with continuous symmetry that does not have long-range order as required by the Mermin–Wagner theorem. Likewise, there is not a conventional phase transition present that would be associated with symmetry breaking. However, as will be discussed later, the system does show signs of a transition from a disordered high-temperature state to a quasi-ordered state below some critical temperature, called the Kosterlitz-Thouless transition. In the case of a discrete lattice of spins, the two-dimensional XY model can be evaluated using the transfer matrix approach, reducing the model to an eigenvalue problem and utilizing the largest eigenvalue from the transfer matrix. Though the exact solution is intractable, it is possible to use certain approximations to get estimates for the critical temperature which occurs at low temperatures. For example, Mattis (1984) used an approximation to this model to estimate a critical temperature of the system.
The 2D XY model has also been studied in great detail using Monte Carlo simulations, for example with the Metropolis algorithm. These can be used to compute thermodynamic quantities like the system energy, specific heat, magnetization, etc., over a range of temperatures and time-scales. In the Monte Carlo simulation, each spin is associated to a continuously-varying angle $\theta_i$ (often, it can be discretized into finitely-many angles, like in the related Potts model, for ease of computation. However, this is not a requirement.) At each time step the Metropolis algorithm chooses one spin at random and rotates its angle by some random increment $\Delta\theta_i \in (-\Delta, \Delta)$. This change in angle causes a change $\Delta E$ in the energy of the system, which can be positive or negative. If negative, the algorithm accepts the change in angle; if positive, the configuration is accepted with probability $e^{-\beta \Delta E}$, the Boltzmann factor for the energy change. The Monte Carlo method has been used to verify, with various methods, the critical temperature of the system, and is estimated to be approximately $T_c \approx 0.893\, J/k_{\rm B}$. The Monte Carlo method can also compute average values that are used to compute thermodynamic quantities like magnetization, spin-spin correlation, correlation lengths, and specific heat. These are important ways to characterize the behavior of the system near the critical temperature. The magnetization and squared magnetization, for example, can be computed as
$$M = \frac{1}{N}\left\langle \left|\sum_i \mathbf{s}_i \right| \right\rangle, \qquad \langle M^2 \rangle = \frac{1}{N^2}\left\langle \left(\sum_i \mathbf{s}_i\right) \cdot \left(\sum_j \mathbf{s}_j\right) \right\rangle,$$
where $N$ is the number of spins. The mean magnetization characterizes the magnitude of the net magnetic moment of the system; in many magnetic systems this is zero above a critical temperature and becomes non-zero spontaneously at low temperatures. Similarly the mean-squared magnetization characterizes the average of the square of net components of the spins across the lattice. Either of these is commonly used to characterize the order parameter of a system. Rigorous analysis of the XY model shows the magnetization in the thermodynamic limit is zero, and that the square magnetization approximately follows $\langle M^2 \rangle \propto N^{-k_{\rm B}T/(4\pi J)}$, which vanishes in the thermodynamic limit. Indeed, at high temperatures this quantity approaches zero since the components of the spins will tend to be randomized and thus sum to zero. However at low temperatures for a finite system, the mean-square magnetization increases, suggesting there are regions of the spin space that are aligned to contribute to a non-zero contribution. The magnetization computed for a 25×25 lattice is one example of this that appears to suggest a phase transition, while no such transition exists in the thermodynamic limit.
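A minimal Metropolis sketch of the kind described above could look as follows in Python (illustrative only; the lattice size, temperature, sweep count and proposal width are arbitrary choices, not values from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
L, J, T = 16, 1.0, 0.7            # lattice size, coupling, temperature (k_B = 1)
beta = 1.0 / T
theta = rng.uniform(0.0, 2.0 * np.pi, size=(L, L))   # random initial angles

def metropolis_sweep(theta, beta, step=1.0):
    """One sweep = L*L single-spin rotation attempts."""
    n = theta.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(n, size=2)
        d = rng.uniform(-step, step)                 # proposed rotation
        nbrs = (theta[(i + 1) % n, j], theta[(i - 1) % n, j],
                theta[i, (j + 1) % n], theta[i, (j - 1) % n])
        # Energy change from H = -J * sum over bonds of cos(theta_i - theta_j)
        dE = -J * sum(np.cos(theta[i, j] + d - t) - np.cos(theta[i, j] - t)
                      for t in nbrs)
        if dE <= 0.0 or rng.random() < np.exp(-beta * dE):   # Metropolis rule
            theta[i, j] = (theta[i, j] + d) % (2.0 * np.pi)

for _ in range(200):                                  # equilibration sweeps
    metropolis_sweep(theta, beta)

# Magnetization per spin, M = |sum_i s_i| / N
m = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
print(f"T = {T}: M = {m:.3f}")
```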
Furthermore, using statistical mechanics one can relate thermodynamic averages to quantities like specific heat by calculating
$$\frac{c}{k_{\rm B}} = \frac{\langle H^2 \rangle - \langle H \rangle^2}{N\,(k_{\rm B} T)^2}.$$
The specific heat computed this way at low temperatures near the critical temperature shows no feature consistent with critical behavior (like a divergence) at this predicted temperature. Indeed, estimating the critical temperature comes from other methods, like from the helicity modulus, or the temperature dependence of the divergence of susceptibility. However, there is a feature in the specific heat in the form of a peak near $T \approx 1.04\, J/k_{\rm B}$, notably above the critical temperature. This peak position and height have been shown not to depend on system size, for lattices of linear size greater than 256; indeed, the specific heat anomaly remains rounded and finite for increasing lattice size, with no divergent peak.
The nature of the critical transitions and vortex formation can be elucidated by considering a continuous version of the XY model. Here, the discrete spins are replaced by a field $\theta(\mathbf{r})$ representing the spin's angle at any point in space. In this case the angle of the spins must vary smoothly over changes in position. Expanding the original cosine as a Taylor series, the Hamiltonian can be expressed in the continuum approximation as
$$H = \frac{J}{2} \int \left(\nabla\theta\right)^2 \, d^2\mathbf{r}.$$
The continuous version of the XY model is often used to model systems that possess order parameters with the same kinds of symmetry, e.g. superfluid helium, hexatic liquid crystals. This is what makes them peculiar from other phase transitions which are always accompanied with a symmetry breaking. Topological defects in the XY model lead to a vortex-unbinding transition from the low-temperature phase to the high-temperature disordered phase. Indeed, the fact that at high temperature correlations decay exponentially fast, while at low temperatures decay with power law, even though in both regimes $M(\beta) = 0$, is called the Kosterlitz–Thouless transition. Kosterlitz and Thouless provided a simple argument of why this would be the case: this considers the ground state consisting of all spins in the same orientation, with the addition then of a single vortex. The presence of these contributes an entropy of roughly $\Delta S = 2 k_{\rm B} \ln(L/a)$, where $L/a$ is an effective length scale (for example, the lattice size for a discrete lattice). Meanwhile, the energy of the system increases due to the vortex, by an amount $\Delta E = \pi J \ln(L/a)$. Putting these together, the free energy of a system would change due to the spontaneous formation of a vortex by an amount
$$\Delta F = \Delta E - T\,\Delta S = \left(\pi J - 2 k_{\rm B} T\right) \ln(L/a).$$
In the thermodynamic limit, the system does not favor the formation of vortices at low temperatures, but does favor them at high temperatures, above the critical temperature $T_c = \pi J / (2 k_{\rm B})$. This indicates that at low temperatures, any vortices that arise will want to annihilate with antivortices to lower the system energy. Indeed, this will be the case qualitatively if one watches 'snapshots' of the spin system at low temperatures, where vortices and antivortices gradually come together to annihilate. Thus, the low-temperature state will consist of bound vortex-antivortex pairs. Meanwhile at high temperatures, there will be a collection of unbound vortices and antivortices that are free to move about the plane.
To visualize the Ising model, a spin can be represented as an arrow pointing up or down, or as a point colored black or white, to indicate its state. To visualize the XY spin system, the spins can be represented as arrows pointing in some direction, or as points with some color. Here it is necessary to represent the spin with a spectrum of colors because the spin angle is a continuous variable. This can be done using, for example, a continuous and periodic red-green-blue spectrum. In such a scheme, cyan corresponds to a zero angle (pointing to the right), whereas red corresponds to a 180 degree angle (pointing to the left). One can then study snapshots of the spin configurations at different temperatures to elucidate what happens above and below the critical temperature of the XY model. At high temperatures, the spins will not have a preferred orientation and there will be unpredictable variation of angles between neighboring spins, as there will be no preferred energetically favorable configuration. In this case, the color map will look highly pixellated. Meanwhile at low temperatures, one possible ground-state configuration has all spins pointed in the same orientation (same angle); these would correspond to regions (domains) of the color map where all spins have roughly the same color.
To identify vortices (or antivortices) present as a result of the Kosterlitz–Thouless transition, one can determine the signed change in angle by traversing a circle of lattice points counterclockwise. If the total change in angle is zero, this corresponds to no vortex being present; whereas a total change in angle of $\pm 2\pi$ corresponds to a vortex (or antivortex). These vortices are topologically non-trivial objects that come in vortex-antivortex pairs, which can separate or pair-annihilate. In the colormap, these defects can be identified in regions where there is a large color gradient where all colors of the spectrum meet around a point. Qualitatively, these defects can look like inward- or outward-pointing sources of flow, or whirlpools of spins that collectively swirl clockwise or counterclockwise, or hyperbolic-looking features with some spins pointing toward and some spins pointing away from the defect. As the configuration is studied at long time scales and at low temperatures, it is observed that many of these vortex-antivortex pairs get closer together and eventually pair-annihilate. It is only at high temperatures that these vortices and antivortices are liberated and unbind from one another.
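A winding-number computation of the sort just described can be sketched as follows (illustrative Python, not from the original text; it reuses the `theta` array from the Metropolis sketch above and wraps each bond angle difference before summing around a plaquette):

```python
import numpy as np

def wrap(a):
    """Wrap angle differences into [-pi, pi)."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def vorticity(theta):
    """Winding number of each plaquette: +1 vortex, -1 antivortex, 0 none."""
    right = np.roll(theta, -1, axis=1)          # theta[i, j+1]
    up = np.roll(theta, -1, axis=0)             # theta[i+1, j]
    upright = np.roll(right, -1, axis=0)        # theta[i+1, j+1]
    # Signed angle changes around the plaquette, traversed counterclockwise
    winding = (wrap(right - theta) + wrap(upright - right)
               + wrap(up - upright) + wrap(theta - up))
    return np.rint(winding / (2.0 * np.pi)).astype(int)

q = vorticity(theta)    # 'theta' from the Metropolis sketch above
print("vortices:", int((q == 1).sum()), "antivortices:", int((q == -1).sum()))
```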
In the continuous XY model, the high-temperature spontaneous magnetization vanishes:
$$M(\beta) := \left|\langle \mathbf{s}_i \rangle\right| = 0.$$
Besides, cluster expansion shows that the spin correlations cluster exponentially fast: for instance
$$\left|\langle \mathbf{s}_i \cdot \mathbf{s}_j \rangle\right| \le C(\beta)\, e^{-c(\beta)\,|i-j|}.$$
At low temperatures, i.e. $\beta \gg 1$, the spontaneous magnetization remains zero (see the Mermin–Wagner theorem),
$$M(\beta) = 0,$$
but the decay of the correlations is only power law: Fröhlich and Spencer found the lower bound
$$\langle \mathbf{s}_i \cdot \mathbf{s}_j \rangle \ge \frac{C(\beta)}{|i-j|^{\eta(\beta)}},$$
while McBryan and Spencer found the upper bound, for any $\epsilon > 0$,
$$\langle \mathbf{s}_i \cdot \mathbf{s}_j \rangle \le \frac{C(\beta, \epsilon)}{|i-j|^{\eta(\beta, \epsilon)}}.$$
Three and higher dimensions
Independently of the range of the interaction, at low enough temperature the magnetization is positive.
At high temperature, the spontaneous magnetization vanishes: $M(\beta) = 0$. Besides, cluster expansion shows that the spin correlations cluster exponentially fast: for instance $\left|\langle \mathbf{s}_i \cdot \mathbf{s}_j \rangle\right| \le C(\beta)\, e^{-c(\beta)|i-j|}$.
At low temperature, the infrared bound shows that the spontaneous magnetization is strictly positive: $M(\beta) > 0$. Besides, there exists a 1-parameter family of extremal states, $\langle \cdot \rangle^{\theta}$, such that $\langle \mathbf{s}_j \rangle^{\theta} = M(\beta)\,(\cos\theta, \sin\theta)$, but, conjecturally, in each of these extremal states the truncated correlations decay algebraically.
Phase transition
As mentioned above, in one dimension the XY model does not have a phase transition, while in two dimensions it has the Berezinskii–Kosterlitz–Thouless transition between the phases with exponentially and power-law decaying correlation functions.
In three and higher dimensions the XY model has a ferromagnet-paramagnet phase transition. At low temperatures the spontaneous magnetization is nonzero: this is the ferromagnetic phase. As the temperature is increased, spontaneous magnetization gradually decreases and vanishes at a critical temperature. It remains zero at all higher temperatures: this is the paramagnetic phase.
In four and higher dimensions the phase transition has mean field theory critical exponents (with logarithmic corrections in four dimensions).
Three dimensional case: the critical exponents
The three dimensional case is interesting because the critical exponents at the phase transition are nontrivial. Many three-dimensional physical systems belong to the same universality class as the three dimensional XY model and share the same critical exponents, most notably easy-plane magnets and liquid Helium-4. The values of these critical exponents are measured by experiments, Monte Carlo simulations, and can also be computed by theoretical methods of quantum field theory, such as the renormalization group and the conformal bootstrap. Renormalization group methods are applicable because the critical point of the XY model is believed to be described by a renormalization group fixed point. Conformal bootstrap methods are applicable because it is also believed to be a unitary three dimensional conformal field theory.
Most important critical exponents of the three dimensional XY model are $\alpha, \beta, \gamma, \delta, \eta, \nu$. All of them can be expressed via just two numbers: the scaling dimensions $\Delta_\phi$ and $\Delta_s$ of the complex order parameter field $\phi$ and of the leading singlet operator $s$ (same as $|\phi|^2$ in the Ginzburg–Landau description). Another important field is $s'$ (same as $|\phi|^4$), whose dimension $\Delta_{s'}$ determines the correction-to-scaling exponent $\omega = \Delta_{s'} - 3$. According to a conformal bootstrap computation, these three dimensions are given by:
$$\Delta_\phi \approx 0.51909, \qquad \Delta_s \approx 1.5114, \qquad \Delta_{s'} \approx 3.794.$$
This gives the following values of the critical exponents:
$$\nu = \frac{1}{3 - \Delta_s} \approx 0.6718, \qquad \eta = 2\Delta_\phi - 1 \approx 0.0382, \qquad \alpha = 2 - 3\nu \approx -0.015,$$
$$\beta = \nu\,\Delta_\phi \approx 0.3487, \qquad \gamma = \nu\,(3 - 2\Delta_\phi) \approx 1.3179, \qquad \delta = \frac{3 - \Delta_\phi}{\Delta_\phi} \approx 4.78.$$
Monte Carlo methods give compatible determinations: $\nu \approx 0.6717$ and $\eta \approx 0.0381$.
See also
Classical Heisenberg model
Coulomb gas
Goldstone boson
Ising model
Potts model
n-vector model
Kosterlitz–Thouless transition
Topological defect
Superfluid film
Sigma model
Sine-Gordon model
Notes
References
Evgeny Demidov, Vortices in the XY model (2004)
Further reading
H. E. Stanley, Introduction to Phase Transitions and Critical Phenomena, (Oxford University Press, Oxford and New York 1971);
H. Kleinert, Gauge Fields in Condensed Matter, Vol. I, " SUPERFLOW AND VORTEX LINES", pp. 1–742, Vol. II, "STRESSES AND DEFECTS", pp. 743–1456, World Scientific (Singapore, 1989); Paperback (also available online: Vol. I and Vol. II)
External links
Real-time XY model WebGL simulation
Interactive Monte Carlo simulation of the Ising, XY and Heisenberg models with 3D graphics (requires WebGL compatible browser)
Lattice models | Classical XY model | [
"Physics",
"Materials_science"
] | 3,411 | [
"Statistical mechanics",
"Condensed matter physics",
"Lattice models",
"Computational physics"
] |
954,231 | https://en.wikipedia.org/wiki/Cuspy%20halo%20problem | The cuspy halo problem (also known as the core-cusp problem) is a discrepancy between the inferred dark matter density profiles of low-mass galaxies and the density profiles predicted by cosmological N-body simulations. Nearly all simulations form dark matter halos which have "cuspy" dark matter distributions, with density increasing steeply at small radii, while the rotation curves of most observed dwarf galaxies suggest that they have flat central dark matter density profiles ("cores").
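To make the contrast concrete, two parametrizations that are standard in this literature (quoted here as an illustration, not taken from the original text) are the cuspy Navarro–Frenk–White (NFW) profile produced by CDM simulations and the cored pseudo-isothermal profile often fitted to dwarf-galaxy rotation curves:

```latex
% Cuspy NFW profile: diverges as rho ~ 1/r for r -> 0
\rho_{\mathrm{NFW}}(r) = \frac{\rho_0}{\dfrac{r}{r_s}\left(1 + \dfrac{r}{r_s}\right)^{2}}
\qquad
% Cored pseudo-isothermal profile: flattens to rho_0 for r -> 0
\rho_{\mathrm{iso}}(r) = \frac{\rho_0}{1 + \left(\dfrac{r}{r_c}\right)^{2}}
```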
Several possible solutions to the core-cusp problem have been proposed. Many recent studies have shown that including baryonic feedback (particularly feedback from supernovae and active galactic nuclei) can "flatten out" the core of a galaxy's dark matter profile, since feedback-driven gas outflows produce a time-varying gravitational potential that transfers energy to the orbits of the collisionless dark matter particles. Other works have shown that the core-cusp problem can be solved outside of the most widely accepted Cold Dark Matter (CDM) paradigm: simulations with warm or self-interacting dark matter also produce dark matter cores in low-mass galaxies. It is also possible that the distribution of dark matter that minimizes the system energy has a flat central dark matter density profile.
Simulation results
According to W.J.G. de Blok "The presence of a cusp in the centers of CDM halos is one of the earliest and strongest results derived from N-body cosmological simulations." Numerical simulations for CDM structure formation predict some structure properties that conflict with astronomical observations.
Observations
The discrepancies range from galaxies to clusters of galaxies. "The main one that has attracted a lot of attention is the cuspy halo problem, namely that CDM models predict halos that have a high density core or have an inner profile that is too steep compared to observations."
Potential solutions
The conflict between numerical simulations and astronomical observations creates numerical constraints related to the core/cusp problem. Observational constraints on halo concentrations imply the existence of theoretical constraints on cosmological parameters. According to McGaugh, Barker, and de Blok, there might be 3 basic possibilities for interpreting the halo concentration limits stated by them or anyone else:
"CDM halos must have cusps, so the stated limits hold and provide new constraints on cosmological parameters."
"Something (e.g. feedback, modifications of the nature of dark matter) eliminates cusps and thus the constraints on cosmology."
"The picture of halo formation suggested by CDM simulations is wrong."
One approach to solving the cusp-core problem in galactic halos is to consider models that modify the nature of dark matter; theorists have considered warm, fuzzy, self-interacting, and meta-cold dark matter, among other possibilities. One straightforward solution could be that the distribution of dark matter that minimizes the system energy has a flat central dark matter density profile.
See also
Dwarf galaxy problem (also known as "the missing satellites problem")
List of unsolved problems in physics
References
Dark matter
Unsolved problems in physics | Cuspy halo problem | [
"Physics",
"Astronomy"
] | 643 | [
"Dark matter",
"Unsolved problems in astronomy",
"Concepts in astronomy",
"Unsolved problems in physics",
"Exotic matter",
"Physics beyond the Standard Model",
"Matter"
] |
954,234 | https://en.wikipedia.org/wiki/Dwarf%20galaxy%20problem | The dwarf galaxy problem, also known as the missing satellites problem, arises from a mismatch between observed dwarf galaxy numbers and collisionless numerical cosmological simulations that predict the evolution of the distribution of matter in the universe. In simulations, dark matter clusters hierarchically, in ever increasing numbers of halo "blobs" as halos' components' sizes become smaller-and-smaller. However, although there seem to be enough observed normal-sized galaxies to match the simulated distribution of dark matter halos of comparable mass, the number of observed dwarf galaxies is orders of magnitude lower than expected from such simulation.
Context
For example, around 38 dwarf galaxies have been observed in the Local Group, and only around 11 orbiting the Milky Way, yet dark matter simulations predict that there should be around 500 dwarf satellites for the Milky Way alone.
Prospective resolution
There are two main alternatives which may resolve the dwarf galaxy problem: The smaller-sized clumps of dark matter may be unable to obtain or retain the baryonic matter needed to form stars in the first place; or, after they form, dwarf galaxies may be quickly “eaten” by the larger galaxies that they orbit.
Baryonic matter too sparse
One proposal is that the smaller halos do exist but that only a few of them end up becoming visible, because they are unable to acquire enough baryonic matter to form a visible dwarf galaxy. In support of this, in 2007 the Keck telescopes observed eight newly discovered ultra-faint Milky Way dwarf satellites of which six were around 99.9% dark matter (with a mass-to-light ratio of about 1,000).
Early demise of young dwarfs
The other popular proposed solution is that dwarf galaxies may tend to merge into the galaxies they orbit shortly after star-formation, or to be quickly torn apart and tidally stripped by larger galaxies, due to complicated orbital interactions.
Tidal stripping may also have been part of the problem of detecting dwarf galaxies in the first place: Finding dwarf galaxies is an extremely difficult task, since they tend to have low surface brightness and are highly diffuse – so much so that they are close to blending into background and foreground stars.
See also
Dark galaxy
Cold dark matter
Cuspy halo problem (also known as "the core/cusp problem")
List of unsolved problems in physics
Footnotes
References
External links
Dark matter
Galaxies
Large-scale structure of the cosmos
Unsolved problems in physics | Dwarf galaxy problem | [
"Physics",
"Astronomy"
] | 492 | [
"Dark matter",
"Unsolved problems in astronomy",
"Concepts in astronomy",
"Galaxies",
"Unsolved problems in physics",
"Exotic matter",
"Astronomical objects",
"Physics beyond the Standard Model",
"Matter"
] |
954,328 | https://en.wikipedia.org/wiki/Ridge%20regression | Ridge regression is a method of estimating the coefficients of multiple-regression models in scenarios where the independent variables are highly correlated. It has been used in many fields including econometrics, chemistry, and engineering. Also known as Tikhonov regularization, named for Andrey Tikhonov, it is a method of regularization of ill-posed problems. It is particularly useful to mitigate the problem of multicollinearity in linear regression, which commonly occurs in models with large numbers of parameters. In general, the method provides improved efficiency in parameter estimation problems in exchange for a tolerable amount of bias (see bias–variance tradeoff).
The theory was first introduced by Hoerl and Kennard in 1970 in their Technometrics papers "Ridge regressions: biased estimation of nonorthogonal problems" and "Ridge regressions: applications in nonorthogonal problems".
Ridge regression was developed as a possible solution to the imprecision of least square estimators when linear regression models have some multicollinear (highly correlated) independent variables—by creating a ridge regression estimator (RR). This provides a more precise ridge parameters estimate, as its variance and mean square estimator are often smaller than the least square estimators previously derived.
Overview
In the simplest case, the problem of a near-singular moment matrix $X^\mathsf{T}X$ is alleviated by adding positive elements to the diagonals, thereby decreasing its condition number. Analogous to the ordinary least squares estimator, the simple ridge estimator is then given by
$$\hat{\beta}_R = \left(X^\mathsf{T}X + kI\right)^{-1} X^\mathsf{T} y,$$
where $y$ is the regressand, $X$ is the design matrix, $I$ is the identity matrix, and the ridge parameter $k \ge 0$ serves as the constant shifting the diagonals of the moment matrix. It can be shown that this estimator is the solution to the least squares problem subject to the constraint $\beta^\mathsf{T}\beta = c$, which can be expressed as a Lagrangian minimization:
$$\hat{\beta}_R = \underset{\beta}{\operatorname{arg\,min}}\; (y - X\beta)^\mathsf{T}(y - X\beta) + k\left(\beta^\mathsf{T}\beta - c\right),$$
which shows that $k$ is nothing but the Lagrange multiplier of the constraint. In fact, there is a one-to-one relationship between $k$ and $c$, and since, in practice, we do not know $c$, we define $k$ heuristically or find it via additional data-fitting strategies; see Determination of the Tikhonov factor.
Note that, when $k = 0$, in which case the constraint is non-binding, the ridge estimator reduces to ordinary least squares. A more general approach to Tikhonov regularization is discussed below.
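As a quick numerical illustration of the closed-form estimator (a sketch, not part of the original text; the synthetic data and the value k = 1.0 are arbitrary choices), ridge can be compared with OLS on nearly collinear predictors:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x1 = rng.normal(size=n)
x2 = x1 + 1e-3 * rng.normal(size=n)          # nearly a copy of x1: collinearity
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.1 * rng.normal(size=n)       # true coefficients (1, 1)

def ridge(X, y, k):
    """Ridge estimator (X'X + k I)^{-1} X'y; k = 0 recovers OLS."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

print("OLS  :", ridge(X, y, 0.0))            # unstable, inflated coefficients
print("ridge:", ridge(X, y, 1.0))            # shrunk, stable, near (1, 1)
```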
History
Tikhonov regularization was invented independently in many different contexts.
It became widely known through its application to integral equations in the works of Andrey Tikhonov and David L. Phillips. Some authors use the term Tikhonov–Phillips regularization.
The finite-dimensional case was expounded by Arthur E. Hoerl, who took a statistical approach, and by Manus Foster, who interpreted this method as a Wiener–Kolmogorov (Kriging) filter. Following Hoerl, it is known in the statistical literature as ridge regression, named after ridge analysis ("ridge" refers to the path from the constrained maximum).
Tikhonov regularization
Suppose that for a known real matrix $A$ and vector $\mathbf{b}$, we wish to find a vector $\mathbf{x}$ such that
$$A\mathbf{x} = \mathbf{b},$$
where $\mathbf{x}$ and $\mathbf{b}$ may be of different sizes and $A$ may be non-square.
The standard approach is ordinary least squares linear regression. However, if no satisfies the equation or more than one does—that is, the solution is not unique—the problem is said to be ill posed. In such cases, ordinary least squares estimation leads to an overdetermined, or more often an underdetermined system of equations. Most real-world phenomena have the effect of low-pass filters in the forward direction where maps to . Therefore, in solving the inverse-problem, the inverse mapping operates as a high-pass filter that has the undesirable tendency of amplifying noise (eigenvalues / singular values are largest in the reverse mapping where they were smallest in the forward mapping). In addition, ordinary least squares implicitly nullifies every element of the reconstructed version of that is in the null-space of , rather than allowing for a model to be used as a prior for .
Ordinary least squares seeks to minimize the sum of squared residuals, which can be compactly written as
$$\left\|A\mathbf{x} - \mathbf{b}\right\|_2^2,$$
where $\|\cdot\|_2$ is the Euclidean norm.
In order to give preference to a particular solution with desirable properties, a regularization term can be included in this minimization:
$$\left\|A\mathbf{x} - \mathbf{b}\right\|_2^2 + \left\|\Gamma \mathbf{x}\right\|_2^2$$
for some suitably chosen Tikhonov matrix $\Gamma$. In many cases, this matrix is chosen as a scalar multiple of the identity matrix ($\Gamma = \alpha I$), giving preference to solutions with smaller norms; this is known as $L_2$ regularization. In other cases, high-pass operators (e.g., a difference operator or a weighted Fourier operator) may be used to enforce smoothness if the underlying vector is believed to be mostly continuous.
This regularization improves the conditioning of the problem, thus enabling a direct numerical solution. An explicit solution, denoted by $\hat{x}$, is given by
$$\hat{x} = \left(A^\mathsf{T}A + \Gamma^\mathsf{T}\Gamma\right)^{-1} A^\mathsf{T} \mathbf{b}.$$
The effect of regularization may be varied by the scale of matrix $\Gamma$. For $\Gamma = 0$ this reduces to the unregularized least-squares solution, provided that $(A^\mathsf{T}A)^{-1}$ exists. Note that in case of a complex matrix $A$, as usual the transpose has to be replaced by the conjugate (Hermitian) transpose $A^\mathsf{H}$.
$L_2$ regularization is used in many contexts aside from linear regression, such as classification with logistic regression or support vector machines, and matrix factorization.
Application to existing fit results
Since Tikhonov regularization simply adds a quadratic term to the objective function in optimization problems, it is possible to do so after the unregularised optimisation has taken place. E.g., if the above problem with $\Gamma = 0$ yields the solution $\hat{x}_0$, the solution in the presence of $\Gamma \ne 0$ can be expressed as
$$\hat{x} = B\,\hat{x}_0,$$
with the "regularisation matrix" $B = \left(A^\mathsf{T}A + \Gamma^\mathsf{T}\Gamma\right)^{-1} A^\mathsf{T}A$.
If the parameter fit comes with a covariance matrix of the estimated parameter uncertainties $V_0$, then the regularisation matrix will be
$$B = \left(V_0^{-1} + \Gamma^\mathsf{T}\Gamma\right)^{-1} V_0^{-1},$$
and the regularised result will have a new covariance
$$V = B\, V_0\, B^\mathsf{T}.$$
In the context of arbitrary likelihood fits, this is valid, as long as the quadratic approximation of the likelihood function is valid. This means that, as long as the perturbation from the unregularised result is small, one can regularise any result that is presented as a best fit point with a covariance matrix. No detailed knowledge of the underlying likelihood function is needed.
Generalized Tikhonov regularization
For general multivariate normal distributions for $\mathbf{x}$ and the data error, one can apply a transformation of the variables to reduce to the case above. Equivalently, one can seek an $\mathbf{x}$ to minimize
$$\left\|A\mathbf{x} - \mathbf{b}\right\|_P^2 + \left\|\mathbf{x} - \mathbf{x}_0\right\|_Q^2,$$
where we have used $\|\mathbf{x}\|_Q^2 = \mathbf{x}^\mathsf{T} Q\, \mathbf{x}$ to stand for the weighted norm squared (compare with the Mahalanobis distance). In the Bayesian interpretation $P$ is the inverse covariance matrix of $\mathbf{b}$, $\mathbf{x}_0$ is the expected value of $\mathbf{x}$, and $Q$ is the inverse covariance matrix of $\mathbf{x}$. The Tikhonov matrix is then given as a factorization of the matrix $Q = \Gamma^\mathsf{T}\Gamma$ (e.g. the Cholesky factorization) and is considered a whitening filter.
This generalized problem has an optimal solution $\mathbf{x}^*$ which can be written explicitly using the formula
$$\mathbf{x}^* = \left(A^\mathsf{T}PA + Q\right)^{-1} \left(A^\mathsf{T}P\mathbf{b} + Q\mathbf{x}_0\right),$$
or equivalently,
$$\mathbf{x}^* = \mathbf{x}_0 + \left(A^\mathsf{T}PA + Q\right)^{-1} A^\mathsf{T}P\left(\mathbf{b} - A\mathbf{x}_0\right).$$
Lavrentyev regularization
In some situations, one can avoid using the transpose $A^\mathsf{T}$, as proposed by Mikhail Lavrentyev. For example, if $A$ is symmetric positive definite, i.e. $A = A^\mathsf{T} > 0$, so is its inverse $A^{-1}$, which can thus be used to set up the weighted norm squared $\|\mathbf{x}\|_{A^{-1}}^2 = \mathbf{x}^\mathsf{T} A^{-1} \mathbf{x}$ in the generalized Tikhonov regularization, leading to minimizing
$$\left\|A\mathbf{x} - \mathbf{b}\right\|_{A^{-1}}^2 + \left\|\mathbf{x} - \mathbf{x}_0\right\|_Q^2,$$
or, equivalently up to a constant term,
$$\mathbf{x}^\mathsf{T}\left(A + Q\right)\mathbf{x} - 2\,\mathbf{x}^\mathsf{T}\left(\mathbf{b} + Q\mathbf{x}_0\right).$$
This minimization problem has an optimal solution $\mathbf{x}^*$ which can be written explicitly using the formula
$$\mathbf{x}^* = \left(A + Q\right)^{-1}\left(\mathbf{b} + Q\mathbf{x}_0\right),$$
which is nothing but the solution of the generalized Tikhonov problem where $P = A^{-1}$.
The Lavrentyev regularization, if applicable, is advantageous to the original Tikhonov regularization, since the Lavrentyev matrix $A + Q$ can be better conditioned, i.e., have a smaller condition number, compared to the Tikhonov matrix $A^\mathsf{T}A + \Gamma^\mathsf{T}\Gamma$.
Regularization in Hilbert space
Typically discrete linear ill-conditioned problems result from discretization of integral equations, and one can formulate a Tikhonov regularization in the original infinite-dimensional context. In the above we can interpret $A$ as a compact operator on Hilbert spaces, and $x$ and $b$ as elements in the domain and range of $A$. The operator $A^* A + \Gamma^\mathsf{T}\Gamma$ is then a self-adjoint bounded invertible operator.
Relation to singular-value decomposition and Wiener filter
With $\Gamma = \alpha I$, this least-squares solution can be analyzed in a special way using the singular-value decomposition. Given the singular value decomposition
$$A = U \Sigma V^\mathsf{T}$$
with singular values $\sigma_i$, the Tikhonov regularized solution can be expressed as
$$\hat{x} = V D\, U^\mathsf{T} \mathbf{b},$$
where $D$ has diagonal values
$$D_{ii} = \frac{\sigma_i}{\sigma_i^2 + \alpha^2}$$
and is zero elsewhere. This demonstrates the effect of the Tikhonov parameter on the condition number of the regularized problem. For the generalized case, a similar representation can be derived using a generalized singular-value decomposition.
Finally, it is related to the Wiener filter:
$$\hat{x} = \sum_{i=1}^{q} f_i \,\frac{u_i^\mathsf{T} \mathbf{b}}{\sigma_i}\, v_i,$$
where the Wiener weights are $f_i = \frac{\sigma_i^2}{\sigma_i^2 + \alpha^2}$ and $q$ is the rank of $A$.
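The equivalence between the direct solution of the regularized normal equations and the SVD filter-factor form is easy to verify numerically (an illustrative sketch, not from the original text; the matrix sizes and the value of α are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(30, 8))
b = rng.normal(size=30)
alpha = 0.5                                   # Gamma = alpha * I

# Direct solution of the regularized normal equations
x_direct = np.linalg.solve(A.T @ A + alpha**2 * np.eye(8), A.T @ b)

# Same solution via SVD filter factors sigma_i / (sigma_i^2 + alpha^2)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((s / (s**2 + alpha**2)) * (U.T @ b))

print(np.allclose(x_direct, x_svd))           # True
```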
Determination of the Tikhonov factor
The optimal regularization parameter $\alpha$ is usually unknown and often in practical problems is determined by an ad hoc method. A possible approach relies on the Bayesian interpretation described below. Other approaches include the discrepancy principle, cross-validation, L-curve method, restricted maximum likelihood and unbiased predictive risk estimator. Grace Wahba proved that the optimal parameter, in the sense of leave-one-out cross-validation, minimizes
$$G = \frac{\operatorname{RSS}}{\tau^2} = \frac{\left\|X\hat{\beta} - \mathbf{y}\right\|^2}{\left[\operatorname{Tr}\!\left(I - X\left(X^\mathsf{T}X + \alpha^2 I\right)^{-1} X^\mathsf{T}\right)\right]^2},$$
where $\operatorname{RSS}$ is the residual sum of squares, and $\tau$ is the effective number of degrees of freedom.
Using the previous SVD decomposition, we can simplify the above expression:
$$\operatorname{RSS} = \left\|\mathbf{y} - \sum_{i=1}^{q} \frac{\sigma_i^2}{\sigma_i^2 + \alpha^2}\,(u_i^\mathsf{T}\mathbf{y})\, u_i\right\|^2$$
and
$$\tau = m - \sum_{i=1}^{q} \frac{\sigma_i^2}{\sigma_i^2 + \alpha^2} = m - q + \sum_{i=1}^{q} \frac{\alpha^2}{\sigma_i^2 + \alpha^2}.$$
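A direct implementation of this criterion might look as follows (a sketch under the Γ = αI assumption, not from the original text; for clarity the trace is computed naively from the hat matrix rather than through the SVD simplification, and it reuses A and b from the sketch above):

```python
import numpy as np

def gcv(alpha, A, b):
    """G = RSS / tau^2 for Gamma = alpha * I (naive hat-matrix version)."""
    m, n = A.shape
    H = A @ np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T)  # hat matrix
    r = b - H @ b                        # residuals
    tau = m - np.trace(H)                # effective residual degrees of freedom
    return (r @ r) / tau**2

alphas = np.logspace(-3, 1, 200)
scores = [gcv(a, A, b) for a in alphas]  # A, b from the sketch above
print("GCV-optimal alpha ~", alphas[int(np.argmin(scores))])
```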
Relation to probabilistic formulation
The probabilistic formulation of an inverse problem introduces (when all uncertainties are Gaussian) a covariance matrix $C_M$ representing the a priori uncertainties on the model parameters, and a covariance matrix $C_D$ representing the uncertainties on the observed parameters. In the special case when these two matrices are diagonal and isotropic, $C_M = \sigma_M^2 I$ and $C_D = \sigma_D^2 I$, and, in this case, the equations of inverse theory reduce to the equations above, with $\alpha = \sigma_D / \sigma_M$.
Bayesian interpretation
Although at first the choice of the solution to this regularized problem may look artificial, and indeed the matrix $\Gamma$ seems rather arbitrary, the process can be justified from a Bayesian point of view. Note that for an ill-posed problem one must necessarily introduce some additional assumptions in order to get a unique solution. Statistically, the prior probability distribution of $\mathbf{x}$ is sometimes taken to be a multivariate normal distribution. For simplicity here, the following assumptions are made: the means are zero; their components are independent; the components have the same standard deviation $\sigma_x$. The data are also subject to errors, and the errors in $\mathbf{b}$ are also assumed to be independent with zero mean and standard deviation $\sigma_b$. Under these assumptions the Tikhonov-regularized solution is the most probable solution given the data and the a priori distribution of $\mathbf{x}$, according to Bayes' theorem.
If the assumption of normality is replaced by assumptions of homoscedasticity and uncorrelatedness of errors, and if one still assumes zero mean, then the Gauss–Markov theorem entails that the solution is the minimal unbiased linear estimator.
See also
LASSO estimator is another regularization method in statistics.
Elastic net regularization
Matrix regularization
Notes
References
Further reading
Linear algebra
Estimation methods
Inverse problems
Regression analysis | Ridge regression | [
"Mathematics"
] | 2,333 | [
"Inverse problems",
"Applied mathematics",
"Linear algebra",
"Algebra"
] |
954,333 | https://en.wikipedia.org/wiki/Box%20topology | In topology, the cartesian product of topological spaces can be given several different topologies. One of the more natural choices is the box topology, where a base is given by the Cartesian products of open sets in the component spaces. Another possibility is the product topology, where a base is also given by the Cartesian products of open sets in the component spaces, but only finitely many of which can be unequal to the entire component space.
While the box topology has a somewhat more intuitive definition than the product topology, it satisfies fewer desirable properties. In particular, if all the component spaces are compact, the box topology on their Cartesian product will not necessarily be compact, although the product topology on their Cartesian product will always be compact. In general, the box topology is finer than the product topology, although the two agree in the case of finite direct products (or when all but finitely many of the factors are trivial).
Definition
Given $X$ such that
$$X := \prod_{i \in I} X_i,$$
or the (possibly infinite) Cartesian product of the topological spaces $X_i$, indexed by $i \in I$, the box topology on $X$ is generated by the base
$$\mathcal{B} = \left\{\prod_{i \in I} U_i \;\middle|\; U_i \text{ open in } X_i\right\}.$$
The name box comes from the case of Rn, in which the basis sets look like boxes. The set $\prod_{i \in I} X_i$ endowed with the box topology is sometimes denoted by $\underset{i \in I}{\square} X_i.$
Properties
Box topology on Rω:
The box topology is completely regular
The box topology is neither compact nor connected
The box topology is not first countable (hence not metrizable)
The box topology is not separable
The box topology is paracompact (and hence normal and completely regular) if the continuum hypothesis is true
Example — failure of continuity
The following example is based on the Hilbert cube. Let Rω denote the countable cartesian product of R with itself, i.e. the set of all sequences in R. Equip R with the standard topology and Rω with the box topology. Define:
$$f : \mathbf{R} \to \mathbf{R}^{\omega}, \qquad x \mapsto (x, x, x, \ldots)$$
So all the component functions are the identity and hence continuous, however we will show f is not continuous. To see this, consider the open set
$$B = \prod_{n=1}^{\infty} \left(-\tfrac{1}{n}, \tfrac{1}{n}\right).$$
Suppose f were continuous. Then, since:
$$f(0) = (0, 0, 0, \ldots) \in B,$$
there should exist $\varepsilon > 0$ such that $(-\varepsilon, \varepsilon) \subseteq f^{-1}(B).$ But this would imply that
$$(-\varepsilon, \varepsilon) \subseteq \left(-\tfrac{1}{n}, \tfrac{1}{n}\right) \quad \text{for all } n,$$
which is false since $\tfrac{1}{n} < \varepsilon$ for $n > 1/\varepsilon.$ Thus f is not continuous even though all its component functions are.
Example — failure of compactness
Consider the countable product $X = \prod_{i \in \mathbf{N}} X_i$ where, for each $i$, $X_i = \{0, 1\}$ with the discrete topology. The box topology on $X$ will also be the discrete topology. Since discrete spaces are compact if and only if they are finite, we immediately see that $X$ is not compact, even though its component spaces are.
$X$ is not sequentially compact either: consider the sequence $(x_n)_{n=1}^{\infty}$ given by
$$(x_n)_m = \begin{cases} 0 & \text{if } m < n \\ 1 & \text{if } m \ge n \end{cases}$$
Since no two points in the sequence are the same, the sequence has no limit point, and therefore $X$ is not sequentially compact.
Convergence in the box topology
Topologies are often best understood by describing how sequences converge. In general, a Cartesian product of a space $X$ with itself over an indexing set $S$ is precisely the space of functions from $S$ to $X$, denoted $X^S$. The product topology yields the topology of pointwise convergence; sequences of functions converge if and only if they converge at every point of $S$.
Because the box topology is finer than the product topology, convergence of a sequence in the box topology is a more stringent condition. Assuming $X$ is Hausdorff, a sequence $(f_n)$ of functions in $X^S$ converges in the box topology to a function $f \in X^S$ if and only if it converges pointwise to $f$ and
there is a finite subset $S_0 \subseteq S$ and there is an $N$ such that for all $n > N$ the sequence $(f_n(s))_{n > N}$ in $X$ is constant for all $s \in S \setminus S_0$. In other words, the sequence $(f_n(s))$ is eventually constant for nearly all $s$ and in a uniform way.
Comparison with product topology
The basis sets in the product topology have almost the same definition as the above, except with the qualification that all but finitely many Ui are equal to the component space Xi. The product topology satisfies a very desirable property for maps fi : Y → Xi into the component spaces: the product map f: Y → X defined by the component functions fi is continuous if and only if all the fi are continuous. As shown above, this does not always hold in the box topology. This actually makes the box topology very useful for providing counterexamples—many qualities such as compactness, connectedness, metrizability, etc., if possessed by the factor spaces, are not in general preserved in the product with this topology.
See also
Cylinder set
List of topologies
Notes
References
Steen, Lynn A. and Seebach, J. Arthur Jr.; Counterexamples in Topology, Holt, Rinehart and Winston (1970). .
External links
Topological spaces
Operations on structures | Box topology | [
"Mathematics"
] | 913 | [
"Topological spaces",
"Topology",
"Mathematical structures",
"Space (mathematics)"
] |