id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
25,668,921 | https://en.wikipedia.org/wiki/Volcano%20plot%20%28statistics%29 | In statistics, a volcano plot is a type of scatter-plot that is used to quickly identify changes in large data sets composed of replicate data. It plots significance versus fold-change on the y and x axes, respectively. These plots are increasingly common in omic experiments such as genomics, proteomics, and metabolomics where one often has a list of many thousands of replicate data points between two conditions and one wishes to quickly identify the most meaningful changes. A volcano plot combines a measure of statistical significance from a statistical test (e.g., a p value from an ANOVA model) with the magnitude of the change, enabling quick visual identification of those data-points (genes, etc.) that display large magnitude changes that are also statistically significant.
A volcano plot is constructed by plotting the negative logarithm of the p value on the y axis (usually base 10). This results in data points with low p values (highly significant) appearing toward the top of the plot. The x axis is the logarithm of the fold change between the two conditions. The logarithm of the fold change is used so that changes in both directions appear equidistant from the center. Plotting points in this way results in two regions of interest in the plot: those points that are found toward the top of the plot that are far to either the left- or right-hand sides. These represent values that display large magnitude fold changes (hence being left or right of center) as well as high statistical significance (hence being toward the top).
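As an illustration of this construction, the following is a minimal Python sketch (the data, colour choices and thresholds are made up for illustration and are not taken from the article):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical per-gene results: log2 fold changes and p values from some test.
rng = np.random.default_rng(0)
log2_fc = rng.normal(0.0, 1.0, 5000)       # x axis: log of the fold change (base 2)
p_values = rng.uniform(1e-12, 1.0, 5000)   # p values from the statistical test

neg_log10_p = -np.log10(p_values)          # y axis: negative log10 of the p value

# Points of interest: large fold change AND small p value (thresholds are illustrative).
interesting = (np.abs(log2_fc) > 1.0) & (p_values < 0.05)

plt.scatter(log2_fc, neg_log10_p, s=4, c=np.where(interesting, "red", "grey"))
plt.xlabel("log2 fold change")
plt.ylabel("-log10(p value)")
plt.title("Volcano plot (illustrative data)")
plt.show()
```

Points toward the upper left and upper right of the resulting scatter are exactly the ones described above: large-magnitude fold changes that are also highly significant.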
Additional information can be added by coloring the points according to a third dimension of data (such as signal intensity), but this is not uniformly employed. Volcano plots are also used to graphically display a significance analysis of microarrays (SAM) gene selection criterion, an example of regularization.
The concept of a volcano plot can be generalized to other applications, where the x axis is related to a measure of the strength of a statistical signal, and the y axis is related to a measure of the statistical significance of the signal. For example, in a genetic association case-control study, such as a genome-wide association study, a point in a volcano plot represents a single-nucleotide polymorphism. Its x value can be the logarithm of the odds ratio and its y value can be −log10 of the p value from a chi-square test, or a chi-square test statistic.
Volcano plots show a characteristic upwards two-armed shape because the x-axis values, i.e. the underlying log2 fold changes, are generally normally distributed, whereas the y-axis values, the −log10 p values, tend toward greater significance for fold changes that deviate more strongly from zero.
The density of the normal distribution takes the form exp(−x²/(2σ²)), up to a constant factor. The logarithm of that is −x²/(2σ²), and the negative logarithm is x²/(2σ²), which is a parabola whose arms reach upwards on the left and right sides. The upper bound of the data is one parabola and the lower bound is another parabola.
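A compact worked version of this argument, writing the log2 fold change as x and assuming it is normally distributed with standard deviation σ (a sketch, not quoted from the article):

```latex
% The negative base-10 logarithm of a normal density is an upward-opening parabola in x.
\[
  f(x) \;=\; \frac{1}{\sigma\sqrt{2\pi}}\, e^{-x^{2}/(2\sigma^{2})}
  \quad\Longrightarrow\quad
  -\log_{10} f(x) \;=\; \frac{x^{2}}{2\sigma^{2}\ln 10} \;+\; \log_{10}\!\bigl(\sigma\sqrt{2\pi}\bigr),
\]
% a quadratic in x, which is why the envelope of the plotted points forms upward-reaching arms.
```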
References
External links
NCI Documentation describing statistical methods to analyze microarrays, including volcano plots
Description of volcano plots at MathWorks
Bioinformatics
Statistical charts and diagrams | Volcano plot (statistics) | [
"Engineering",
"Biology"
] | 653 | [
"Bioinformatics",
"Biological engineering"
] |
2,792,254 | https://en.wikipedia.org/wiki/Proton%20Synchrotron | The Proton Synchrotron (PS, sometimes also referred to as CPS) is a particle accelerator at CERN. It is CERN's first synchrotron, beginning its operation in 1959. For a brief period the PS was the world's highest energy particle accelerator. It has since served as a pre-accelerator for the Intersecting Storage Rings (ISR) and the Super Proton Synchrotron (SPS), and is currently part of the Large Hadron Collider (LHC) accelerator complex. In addition to protons, PS has accelerated alpha particles, oxygen and sulfur nuclei, electrons, positrons, and antiprotons.
Today, the PS is part of CERN's accelerator complex. It accelerates protons for the LHC as well as a number of other experimental facilities at CERN. Using a negative hydrogen ion source, the ions are first accelerated to the energy of 160 MeV in the linear accelerator Linac 4. The hydrogen ion is then stripped of both electrons, leaving only the nucleus containing one proton, which is injected into the Proton Synchrotron Booster (PSB), which accelerates the protons to 2 GeV, followed by the PS, which pushes the beam to 25 GeV. The protons are then sent to the Super Proton Synchrotron, and accelerated to 450 GeV before they are injected into the LHC. The PS also accelerates heavy ions from the Low Energy Ion Ring (LEIR) at an energy of 72 MeV, for collisions in the LHC.
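The injection chain described above can be summarized as a small lookup table; the Python sketch below simply restates the beam energies given in this section (a summary aid, not an official CERN data source):

```python
# LHC proton injection chain as described above, with the beam energy after each stage.
injection_chain = [
    ("Linac 4 (H- ions)",                 "160 MeV"),
    ("Proton Synchrotron Booster (PSB)",  "2 GeV"),
    ("Proton Synchrotron (PS)",           "25 GeV"),
    ("Super Proton Synchrotron (SPS)",    "450 GeV"),
    ("Large Hadron Collider (LHC)",       "injected at 450 GeV"),
]

for stage, energy in injection_chain:
    print(f"{stage:36s} -> {energy}")
```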
Background
The synchrotron (as in Proton Synchrotron) is a type of cyclic particle accelerator, descended from the cyclotron, in which the accelerating particle beam travels around a fixed path. The magnetic field which bends the particle beam into its fixed path increases with time, and is synchronized to the increasing energy of the particles. As the particles travel around the fixed circular path they oscillate around their equilibrium orbit, a phenomenon called betatron oscillations.
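A worked form of this synchronization condition, using the standard bending relation for a machine of fixed radius (a textbook relation stated here for clarity, not taken from this article):

```latex
% Bending condition for a fixed-radius synchrotron:
\[
  p(t) \;=\; q\, B(t)\, \rho
  \qquad\Longrightarrow\qquad
  B(t) \;=\; \frac{p(t)}{q\,\rho},
\]
% so for charge q and bending radius \rho, the dipole field B(t) must be ramped up
% in step with the particle momentum p(t) as the beam is accelerated.
```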
In a conventional synchrotron the focusing of the circulating particles is achieved by weak focusing: the magnetic field that guides the particles around the fixed radius decreases slightly with radius, causing the orbits of particles with slightly different positions to approach each other. The amount of focusing obtained in this way is not very great, and consequently the amplitudes of the betatron oscillations are large. Weak focusing requires a large vacuum chamber, and consequently big magnets; most of the cost of a conventional synchrotron is the magnets. The PS was the first accelerator at CERN that made use of the alternating-gradient principle, also called strong focusing: quadrupole magnets are used to alternately focus horizontally and vertically many times around the circumference of the accelerator. The focusing of the particles can in theory be made as strong as desired, and the amplitude of the betatron oscillations as small as desired. The net result is a reduction in the cost of the magnets.
Operational history
Preliminary studies
When early in the 1950s the plans for a European laboratory of particle physics began to take shape, two different accelerator projects emerged. One machine was to be of standard type, easy and relatively fast and cheap to build: a synchrocyclotron with a beam energy of 600 MeV. The second device was a much more ambitious undertaking: an accelerator bigger than any other then existing, a synchrotron that could accelerate protons up to an energy of 10 GeV – the PS.
By May 1952 a design group was set up with Odd Dahl in charge. Other members of the group included Rolf Widerøe, Frank Kenneth Goward, and John Adams. After a visit to the Cosmotron at Brookhaven National Laboratory in the US, the group learnt of a new idea for making cheaper and higher-energy machines: alternating-gradient focusing. The idea was so attractive that the study of a 10 GeV synchrotron was dropped, and a study of a machine implementing the new idea was initiated. Using this principle a 30 GeV accelerator could be built for the same cost as a 10 GeV accelerator using weak focusing. However, the stronger the focusing, the higher the precision of magnet alignment required. This proved a serious problem in the construction of the accelerator.
A second problem during the construction period was the machine's behavior at an energy called the "transition energy". At this point the relative increase in particle velocity changes from being greater to being smaller, causing the amplitude of the betatron oscillations to go to zero and a loss of stability in the beam. This was solved by a jump, a sudden shift in the acceleration, in which pulsed quadrupoles made the protons traverse the transition energy much faster.
The PS was approved in October 1953 as a synchrotron of 25 GeV energy with a radius of 72 meters, and a budget of 120 million Swiss francs. The focusing strength chosen required a vacuum chamber 12 cm wide and 8 cm high, with magnets of about 4000 tonnes total mass. Dahl resigned as head of the project in October 1954 and was replaced by John Adams. By August 1959 the PS was ready for its first beam, and on 24 November the machine reached a beam energy of 24 GeV.
1960–1976: Fixed-target and pre-accelerator to ISR
By the end of 1965 the PS was the center of a spider's web of beam lines: It supplied protons to the South Hall (Meyrin site) where an internal target produced five secondary beams, serving a neutrino experiment and a muon storage ring; the North Hall (Meyrin site) where two bubble chambers (80 cm hydrogen Saclay, heavy liquid CERN) were fed by an internal target; when the East Hall (Meyrin site) became available in 1963, protons from the PS hit an internal target producing a secondary beam filtered by electrostatic separators to the CERN 2 m bubble chamber and additional experiments.
Together with the construction of the Intersecting Storage Rings (ISR), an improvement program for the PS was decided in 1965, also making space for the Gargamelle and the Big European Bubble Chamber experiments. The injection energy of the PS was raised by constructing an 800 MeV four ring booster — the Proton Synchrotron Booster (PSB) — which became operational in 1972.
1976–1991: Pre-accelerator to SPS/SppS and LEAR
In 1976 the Super Proton Synchrotron (SPS) became a new client of the PS. When the SPS started to operate as a proton–antiproton collider — the SppS — the PS had the double task of producing an intense 26 GeV/c proton beam for generating antiprotons at 3.5 GeV/c to be stored in the Antiproton Accumulator (AA), and then accelerating the antiprotons to 26 GeV/c for transfer to the SPS.
The linear accelerator serving the PSB was replaced in 1978 by Linac 2, leading to a further increase in intensity. During this period the acceleration of light ions entered the scene. Linac 1, which had been replaced by Linac 2, was equipped to accelerate deuterons, which were accelerated in the PS and transferred to the ISR, where they collided with protons or deuterons.
When the Low Energy Antiproton Ring (LEAR), for deceleration and storage of antiprotons, became operational in 1982, the PS assumed the new role of an antiproton decelerator. It decelerated antiprotons from the AA to 180 MeV and injected them into LEAR. During this period the PS complex truly earned its nickname of "versatile particle factory". Up to 1996, the PS would regularly accelerate ions for SPS fixed-target experiments, protons for the East Hall or antiproton production at the AA, decelerate protons for LEAR, and later accelerate electrons and positrons for the Large Electron–Positron Collider (LEP).
1991–2001: Pre-accelerator to LEP
To provide leptons to LEP, three more machines had to be added to the PS complex: the LIL-V electron linear accelerator, the LIL-W electron and positron linear accelerator, and the EPA (Electron–Positron Accumulator) storage ring. A modest amount of additional hardware had to be added to convert the PS from a 25 GeV proton synchrotron into a 3.5 GeV lepton synchrotron.
During this period the demand for heavier ions to be delivered as a primary beam to the SPS North experimental hall (Prévessin site) also increased. Both sulfur and oxygen ions were accelerated with great success.
2001–today: Pre-accelerator to LHC
After the end of operation as a LEP injector, the PS started a new period of operation in preparation as LHC injector and for new fixed-target experiments. New experiments started running in the East area, such as the CLOUD experiment. The PS complex was also remodeled when the AA area was replaced by the Antiproton Decelerator and its experimental area.
By increasing the energy of the PSB and the Linac 2, the PS achieved record intensities in 2000 and 2001. During the whole of 2005 PS was shut down: radiation damage had caused aging of the main magnets. The magnets, originally estimated to have a lifetime of less than 10 years, had exceeded the estimate by more than a factor of four, and went through a refurbishment program. The tunnel was emptied, magnets refurbished, and the machine realigned. In 2008 PS started operating as a pre-accelerator to the LHC. Simultaneously the ion operation changed: LEAR was converted into a storage ring — the Low Energy Ion Ring (LEIR) — and the PSB stopped being an ion injector.
Construction and operation
The PS is built in a tunnel in which the temperature is controlled to ±1°. Around the circumference of 628 meters there are 100 magnet units of 4.4 m nominal length, 80 short straight sections of 1.6 m, and 20 straight sections of 3 m. Sixteen long straight sections are equipped with accelerating cavities, 20 short ones with quadrupole correction lenses, and 20 short ones with sets of sextupole and octupole lenses. Other straight sections are reserved for beam observation stations, injection devices, targets, and ejection magnets.
As the alignment of the magnets is of paramount importance, the units are mounted on a free floating ring of concrete, 200 meters in diameter. As a further precaution, the concrete ring has steel pipes cast in it, where water passes through the ring to keep a constant temperature in the magnets.
Findings and discoveries
Using a neutrino beam produced by a proton beam from PS, the Gargamelle experiment discovered neutral currents in 1973.
References
External links
Accelerator physics
Buildings and structures in the canton of Geneva
CERN accelerators
Particle accelerators
Particle physics facilities
CERN facilities | Proton Synchrotron | [
"Physics"
] | 2,296 | [
"Accelerator physics",
"Applied and interdisciplinary physics",
"Experimental physics"
] |
2,792,708 | https://en.wikipedia.org/wiki/Solar%20physics | Solar physics is the branch of astrophysics that specializes in the study of the Sun. It intersects with many disciplines of pure physics and astrophysics.
Because the Sun is uniquely situated for close-range observing (other stars cannot be resolved with anything like the spatial or temporal resolution that the Sun can), there is a split between the related discipline of observational astrophysics (of distant stars) and observational solar physics.
The study of solar physics is also important as it provides a "physical laboratory" for the study of plasma physics.
History
Ancient times
Babylonians were keeping a record of solar eclipses, with the oldest record originating from the ancient city of Ugarit, in modern-day Syria. This record dates to about 1300 BC. Ancient Chinese astronomers were also observing solar phenomena (such as solar eclipses and visible sunspots) with the purpose of keeping track of calendars, which were based on lunar and solar cycles. Unfortunately, records kept before 720 BC are very vague and offer no useful information. However, after 720 BC, 37 solar eclipses were noted over the course of 240 years.
Medieval times
Astronomical knowledge flourished in the Islamic world during medieval times. Many observatories were built in cities from Damascus to Baghdad, where detailed astronomical observations were made. In particular, a few solar parameters were measured and detailed observations of the Sun were taken. Solar observations were made for navigation, but mostly for timekeeping: Islam requires its followers to pray five times a day, at specific positions of the Sun in the sky, so accurate observations of the Sun and of its trajectory across the sky were needed. In the late 10th century, the Iranian astronomer Abu-Mahmud Khojandi built a massive observatory near Tehran. There he took accurate measurements of a series of meridian transits of the Sun, which he later used to calculate the obliquity of the ecliptic.
Following the fall of the Western Roman Empire, Western Europe was cut off from all sources of ancient scientific knowledge, especially those written in Greek. This, together with de-urbanisation and diseases such as the Black Death, led to a decline in scientific knowledge in medieval Europe, especially in the early Middle Ages. During this period, observations of the Sun were made either in relation to the zodiac, or to assist in building places of worship such as churches and cathedrals.
Renaissance period
In astronomy, the Renaissance period started with the work of Nicolaus Copernicus. He proposed that the planets revolve around the Sun and not around the Earth, as was believed at the time. This model is known as the heliocentric model. His work was later expanded by Johannes Kepler and Galileo Galilei. In particular, Galileo used his new telescope to look at the Sun. In 1610, he discovered sunspots on its surface. In the autumn of 1611, Johannes Fabricius wrote the first book on sunspots, De Maculis in Sole Observatis ("On the spots observed in the Sun").
Modern times
Modern-day solar physics is focused on understanding the many phenomena observed with the help of modern telescopes and satellites. Of particular interest are the structure of the solar photosphere, the coronal heating problem, and sunspots.
Research
The Solar Physics Division of the American Astronomical Society boasts 555 members (as of May 2007), compared to several thousand in the parent organization.
A major thrust of current (2009) effort in the field of solar physics is integrated understanding of the entire Solar System including the Sun and its effects throughout interplanetary space within the heliosphere and on planets and planetary atmospheres. Studies of phenomena that affect multiple systems in the heliosphere, or that are considered to fit within a heliospheric context, are called heliophysics, a new coinage that entered usage in the early years of the current millennium.
Space based
Helios
Helios-A and Helios-B are a pair of spacecraft launched in December 1974 and January 1976 from Cape Canaveral, as a joint venture between the German Aerospace Center and NASA. Their orbits approach the Sun closer than Mercury. They included instruments to measure the solar wind, magnetic fields, cosmic rays, and interplanetary dust. Helios-A continued to transmit data until 1986.
SOHO
The Solar and Heliospheric Observatory, SOHO, is a joint project between NASA and ESA that was launched in December 1995. It was launched to probe the interior of the Sun, make observations of the solar wind and phenomena associated with it and investigate the outer layers of the Sun.
HINODE
A publicly funded mission led by the Japanese Aerospace Exploration Agency, the HINODE satellite, launched in 2006, consists of a coordinated set of optical, extreme ultraviolet and X-ray instruments. These investigate the interaction between the solar corona and the Sun's magnetic field.
SDO
The Solar Dynamics Observatory (SDO) was launched by NASA in February 2010 from Cape Canaveral. The main goals of the mission are understanding how solar activity arises and how it affects life on Earth by determining how the Sun's magnetic field is generated and structured and how the stored magnetic energy is converted and released into space.
PSP
The Parker Solar Probe (PSP) was launched in 2018 with the mission of making detailed observations of the outer solar corona. It has made the closest approaches to the Sun of any artificial object.
Ground based
ATST
The Advanced Technology Solar Telescope (ATST) is a solar telescope facility that is under construction in Maui. Twenty-two institutions are collaborating on the ATST project, with the main funding agency being the National Science Foundation.
SSO
Sunspot Solar Observatory (SSO) operates the Richard B. Dunn Solar Telescope (DST) on behalf of the NSF.
Big Bear
The Big Bear Solar Observatory in California houses several telescopes, including the New Solar Telescope (NST), a 1.6-meter, clear-aperture, off-axis Gregorian telescope. The NST saw first light in December 2008. Until the ATST comes on line, the NST remains the largest solar telescope in the world. The Big Bear Observatory is one of several facilities operated by the Center for Solar-Terrestrial Research at the New Jersey Institute of Technology (NJIT).
Other
EUNIS
The Extreme Ultraviolet Normal Incidence Spectrograph (EUNIS) is a two-channel imaging spectrograph that first flew in 2006. It observes the solar corona with high spectral resolution. So far, it has provided information on the nature of coronal bright points, cool transients, and coronal loop arcades. Data from it have also helped calibrate SOHO and a few other telescopes.
See also
Aeronomy
Helioseismology
Heliophysics
Institute for Solar Physics (in La Palma in the Canary Islands)
Further reading
References
External links
Living Reviews in Solar Physics
NASA's Marshall Space Flight Center Solar Physics Page
NASA's Goddard Space Flight Center Solar Physics Laboratory
MPS Solar Physics Group
SUPARCO Solar physics Page
Center for Solar-Terrestrial Research
Sun
Space science | Solar physics | [
"Physics",
"Astronomy"
] | 1,426 | [
"Space science",
"Astronomical sub-disciplines",
"Outer space",
"Astrophysics"
] |
2,793,007 | https://en.wikipedia.org/wiki/Madhava%20of%20Sangamagrama | Mādhava of Sangamagrāma (Mādhavan) () was an Indian mathematician and astronomer who is considered to be the founder of the Kerala school of astronomy and mathematics in the Late Middle Ages. Madhava made pioneering contributions to the study of infinite series, calculus, trigonometry, geometry and algebra. He was the first to use infinite series approximations for a range of trigonometric functions, which has been called the "decisive step onward from the finite procedures of ancient mathematics to treat their limit-passage to infinity".
Biography
Little is known about Madhava's life with certainty. However, from scattered references to Madhava found in diverse manuscripts, historians of the Kerala school have pieced together information about the mathematician. In a manuscript preserved in the Oriental Institute, Baroda, Madhava has been referred to as Mādhavan vēṇvārōhādīnām karttā ... Mādhavan Ilaññippaḷḷi Emprān. It has been noted that the epithet 'Emprān' refers to the Emprāntiri community, to which Madhava might have belonged.
The term "Ilaññippaḷḷi" has been identified as a reference to the residence of Madhava. This is corroborated by Madhava himself. In his short work on the moon's positions titled Veṇvāroha, Madhava says that he was born in a house named bakuḷādhiṣṭhita . . . vihāra. This is clearly Sanskrit for Ilaññippaḷḷi. Ilaññi is the Malayalam name of the evergreen tree Mimusops elengi and the Sanskrit name for the same is Bakuḷa. Palli is a term for village. The Sanskrit house name bakuḷādhiṣṭhita . . . vihāra has also been interpreted as a reference to the Malayalam house name Iraññi ninna ppaḷḷi and some historians have tried to identify it with one of two currently existing houses with names Iriññanavaḷḷi and Iriññārapaḷḷi both of which are located near Irinjalakuda town in central Kerala. This identification is far fetched because both names have neither phonetic similarity nor semantic equivalence to the word "Ilaññippaḷḷi".
Most of the writers of astronomical and mathematical works who lived after Madhava's period have referred to Madhava as "Sangamagrama Madhava" and as such it is important that the real import of the word "Sangamagrama" be made clear. The general view among many scholars is that Sangamagrama is the town of Irinjalakuda some 70 kilometers south of the Nila river and about 70 kilometers south of Cochin. It seems that there is not much concrete ground for this belief except perhaps the fact that the presiding deity of an early medieval temple in the town, the Koodalmanikyam Temple, is worshiped as Sangameswara meaning the Lord of the Samgama and so Samgamagrama can be interpreted as the village of Samgameswara. But there are several places in Karnataka with samgama or its equivalent kūḍala in their names and with a temple dedicated to Samgamḗsvara, the lord of the confluence. (Kudalasangama in Bagalkot district is one such place with a celebrated temple dedicated to the Lord of the Samgama.)
There is a small town on the southern banks of the Nila river, around 10 kilometers upstream from Tirunavaya, called Kūḍallūr. The exact literal Sanskrit translation of this place name is Samgamagram: kūṭal in Malayalam means a confluence (which in Sanskrit is samgama) and ūr means a village (which in Sanskrit is grama). Also the place is at the confluence of the Nila river and its most important tributary, namely, the Kunti river. (There is no confluence of rivers near Irinjalakuada.) Incidentally there is still existing a Nambudiri (Malayali Brahmin) family by name Kūtallūr Mana a few kilometers away from the Kudallur village. The family has its origins in Kudallur village itself. For many generations this family hosted a great Gurukulam specialising in Vedanga. That the only available manuscript of Sphuṭacandrāpti, a book authored by Madhava, was obtained from the manuscript collection of Kūtallūr Mana might strengthen the conjecture that Madhava might have had some association with Kūtallūr Mana. Thus the most plausible possibility is that the forefathers of Madhava migrated from the Tulu land or thereabouts to settle in Kudallur village, which is situated on the southern banks of the Nila river not far from Tirunnavaya, a generation or two before his birth and lived in a house known as Ilaññippaḷḷi whose present identity is unknown.
Date
There is also no definite evidence to pinpoint the period during which Madhava flourished. In his Venvaroha, Madhava gives a date in 1400 CE as the epoch. Madhava's pupil Parameshvara Nambudiri, the only known direct pupil of Madhava, is known to have completed his seminal work Drigganita in 1430, and Parameshvara's dates have been determined as extending to 1455. From such circumstantial evidence, historians have assigned Madhava's period to approximately 1340–1425 CE.
Historiography
Although there is some evidence of mathematical work in Kerala prior to Madhava (e.g., Sadratnamala c. 1300, a set of fragmentary results), it is clear from citations that Madhava provided the creative impulse for the development of a rich mathematical tradition in medieval Kerala. However, except for a couple, most of Madhava's original works have been lost. He is referred to in the work of subsequent Kerala mathematicians, particularly in Nilakantha Somayaji's Tantrasangraha (c. 1500), as the source for several infinite series expansions, including sin θ and arctan θ. The 16th-century text Mahajyānayana prakāra (Method of Computing Great Sines) cites Madhava as the source for several series derivations for π. In Jyeṣṭhadeva's Yuktibhāṣā (c. 1530), written in Malayalam, these series are presented with proofs in terms of the Taylor series expansions for functions like 1/(1 + x²), with x = tan θ, etc.
Thus, what is explicitly Madhava's work is a source of some debate. The Yukti-dipika (also called the Tantrasangraha-vyakhya), possibly composed by Sankara Variar, a student of Jyeṣṭhadeva, presents several versions of the series expansions for sin θ, cos θ, and arctan θ, as well as some products with radius and arclength, most versions of which appear in Yuktibhāṣā. For those that do not, Rajagopal and Rangachari have argued, quoting extensively from the original Sanskrit, that since some of these have been attributed by Nilakantha to Madhava, some of the other forms might also be the work of Madhava.
Others have speculated that the early text Karanapaddhati (c. 1375–1475), or the Mahajyānayana prakāra was written by Madhava, but this is unlikely.
Karanapaddhati, along with the even earlier Keralite mathematics text Sadratnamala, as well as the Tantrasangraha and Yuktibhāṣā, were considered in an 1834 article by C. M. Whish, which was the first to draw attention to their priority over Newton in discovering the Fluxion (Newton's name for differentials). In the mid-20th century, the Russian scholar Jushkevich revisited the legacy of Madhava, and a comprehensive look at the Kerala school was provided by Sarma in 1972.
Lineage
There are several known astronomers who preceded Madhava, including Kǖṭalur Kizhār (2nd century), Vararuci (4th century), and Śaṅkaranārāyaṇa (866 AD). It is possible that other unknown figures preceded him. However, we have a clearer record of the tradition after Madhava. Parameshvara was a direct disciple. According to a palm leaf manuscript of a Malayalam commentary on the Surya Siddhanta, Parameswara's son Damodara (c. 1400–1500) had Nilakantha Somayaji as one of his disciples. Jyeshtadeva was a disciple of Nilakantha. Achyutha Pisharadi of Trikkantiyur is mentioned as a disciple of Jyeṣṭhadeva, and the grammarian Melpathur Narayana Bhattathiri as his disciple.
Contributions
If we consider mathematics as a progression from finite processes of algebra to considerations of the infinite, then the first steps towards this transition typically come with infinite series expansions. It is this transition to the infinite series that is attributed to Madhava. In Europe, the first such series were developed by James Gregory in 1667. Madhava's work is notable for the series, but what is truly remarkable is his estimate of an error term (or correction term). This implies that he understood very well the limit nature of the infinite series. Thus, Madhava may have invented the ideas underlying infinite series expansions of functions, power series, trigonometric series, and rational approximations of infinite series.
However, as stated above, which results are precisely Madhava's and which are those of his successors is difficult to determine. The following presents a summary of results that have been attributed to Madhava by various scholars.
Infinite series
Among his many contributions, he discovered infinite series for the trigonometric functions of sine, cosine, and arctangent, and many methods for calculating the circumference of a circle. One of Madhava's series is known from the text Yuktibhāṣā, which contains the derivation and proof of the power series for the inverse tangent, discovered by Madhava. In the text, Jyeṣṭhadeva describes the series verbally; in modern notation it yields
arctan x = x − x³/3 + x⁵/5 − x⁷/7 + ...,
or equivalently, with x = tan θ,
θ = tan θ − tan³θ/3 + tan⁵θ/5 − tan⁷θ/7 + ...
This series is Gregory's series (named after James Gregory, who rediscovered it three centuries after Madhava). Even if we consider this particular series as the work of Jyeṣṭhadeva, it would pre-date Gregory by a century, and certainly other infinite series of a similar nature had been worked out by Madhava. Today, it is referred to as the Madhava-Gregory-Leibniz series.
Trigonometry
Madhava composed an accurate table of sines. Madhava's values are accurate to the seventh decimal place. Marking a quarter circle at twenty-four equal intervals, he gave the lengths of the half-chord (sines) corresponding to each of them. It is believed that he may have computed these values based on the series expansions:
sin q = q − q³/3! + q⁵/5! − q⁷/7! + ...
cos q = 1 − q²/2! + q⁴/4! − q⁶/6! + ...
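A quick numerical check of these expansions (an illustrative Python sketch, not Madhava's method: it sums the series above at the twenty-four arcs of a quarter circle and compares the result with a library sine):

```python
import math

def sine_series(q: float, terms: int = 10) -> float:
    """Partial sum of q - q^3/3! + q^5/5! - q^7/7! + ..."""
    return sum((-1) ** k * q ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

# The 24 equally spaced arcs of a quarter circle mentioned above: q = k * pi/48.
worst = max(abs(sine_series(k * math.pi / 48) - math.sin(k * math.pi / 48))
            for k in range(1, 25))
print(f"largest error over the 24 arcs: {worst:.2e}")   # well below 1e-7
```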
The value of π (pi)
Madhava's work on the value of the mathematical constant π (pi) is cited in the Mahajyānayana prakāra ("Methods for the great sines"). While some scholars such as Sarma feel that this book may have been composed by Madhava himself, it is more likely the work of a 16th-century successor. This text attributes most of the expansions to Madhava, and gives the following infinite series expansion of π, now known as the Madhava–Leibniz series:
π/4 = 1 − 1/3 + 1/5 − 1/7 + 1/9 − ...,
which he obtained from the power-series expansion of the arc-tangent function. However, what is most impressive is that he also gave a correction term Rn for the error after computing the sum up to n terms, namely:
Rn = (−1)ⁿ / (4n), or
Rn = (−1)ⁿ · n / (4n² + 1), or
Rn = (−1)ⁿ · (n² + 1) / (4n³ + 5n),
where the third correction leads to highly accurate computations of π.
It has long been speculated how Madhava found these correction terms. They are the first three convergents of a finite continued fraction which, when combined with the original Madhava series evaluated to n terms, yields about 3n/2 correct digits.
The absolute value of the correction term in the next higher order is
|Rn| = (4n³ + 13n) / (16n⁴ + 56n² + 9).
He also gave a more rapidly converging series by transforming the original infinite series for π, obtaining the infinite series
π = √12 · (1 − 1/(3·3) + 1/(5·3²) − 1/(7·3³) + ...).
By using the first 21 terms to compute an approximation of π, he obtains a value correct to 11 decimal places (3.14159265359).
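The numerical claims above can be checked with a short Python sketch (illustrative only; the correction formula and term counts are those quoted in the text):

```python
from math import sqrt, pi

def pi_over_4_partial(n: int) -> float:
    """Partial sum of pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..., using n terms."""
    return sum((-1) ** k / (2 * k + 1) for k in range(n))

n = 50
plain = 4 * pi_over_4_partial(n)
# Third correction term quoted above: Rn = (-1)^n (n^2 + 1) / (4 n^3 + 5 n).
corrected = 4 * (pi_over_4_partial(n) + (-1) ** n * (n * n + 1) / (4 * n ** 3 + 5 * n))

# Transformed series: pi = sqrt(12) * sum_{k>=0} (-1/3)^k / (2k + 1), first 21 terms.
transformed = sqrt(12) * sum((-1 / 3) ** k / (2 * k + 1) for k in range(21))

print(f"plain sum, 50 terms:        error {abs(plain - pi):.1e}")
print(f"with third correction term: error {abs(corrected - pi):.1e}")
print(f"sqrt(12) series, 21 terms:  error {abs(transformed - pi):.1e}")
```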
The value of 3.1415926535898, correct to 13 decimals, is sometimes attributed to Madhava, but may be due to one of his followers. These were the most accurate approximations of π given since the 5th century (see History of numerical approximations of π).
The text Sadratnamala appears to give the astonishingly accurate value of π = 3.14159265358979324 (correct to 17 decimal places). Based on this, R. Gupta has suggested that this text was also composed by Madhava.
Madhava also carried out investigations into other series for arc lengths and the associated approximations to rational fractions of π.
Calculus
Madhava developed power series expansions for some trigonometric functions, which were further developed by his successors at the Kerala school of astronomy and mathematics. (Certain ideas of calculus were known to earlier mathematicians.) Madhava also extended some results found in earlier works, including those of Bhāskara II. However, he and his successors did not combine many differing ideas under the two unifying themes of the derivative and the integral, show the connection between the two, or turn calculus into the powerful problem-solving tool we have today.
Madhava's works
K. V. Sarma has identified Madhava as the author of the following works:
Golavada
Madhyamanayanaprakara
Mahajyanayanaprakara (Method of Computing Great Sines)
Lagnaprakarana
Venvaroha
Sphuṭacandrāpti
Aganita-grahacara
Chandravakyani (Table of Moon-mnemonics)
Kerala School of Astronomy and Mathematics
The Kerala school of astronomy and mathematics, founded by Madhava, flourished between the 14th and 16th centuries, and included among its members Parameshvara, Neelakanta Somayaji, Jyeshtadeva, Achyuta Pisharati, Melpathur Narayana Bhattathiri and Achyuta Panikkar. The group is known for the series expansions of the three trigonometric functions sine, cosine and arctangent; proofs of their results were later given in the Yuktibhasa. The group also did much other work in astronomy: more pages are devoted to astronomical computations than to purely mathematical results.
The Kerala school also contributed to linguistics (the relation between language and mathematics is an ancient Indian tradition, see Kātyāyana). The ayurvedic and poetic traditions of Kerala can be traced back to this school. The famous poem, Narayaniyam, was composed by Narayana Bhattathiri.
Influence
Madhava has been called "the greatest mathematician-astronomer of medieval India"; some of his discoveries in this field show him to have possessed extraordinary intuition. O'Connor and Robertson state that a fair assessment of Madhava is that
he took the decisive step towards modern classical analysis.
Possible propagation to Europe
The Kerala school was well known in the 15th and 16th centuries, in the period of the first contact with European navigators on the Malabar Coast. At the time, the port of Muziris, near Sangamagrama, was a major center for maritime trade, and a number of Jesuit missionaries and traders were active in this region. Given the fame of the Kerala school, and the interest shown by some of the Jesuit groups during this period in local scholarship, some scholars, including G. Joseph of the University of Manchester, have suggested that the writings of the Kerala school may have also been transmitted to Europe around this time, which was still about a century before Newton. However, there is no direct evidence by way of relevant manuscripts that such a transmission actually took place. According to David Bressoud, "there is no evidence that the Indian work of series was known beyond India, or even outside of Kerala, until the nineteenth century."
See also
Madhava Observatory
Madhava's sine table
Madhava series
Madhava's correction term
Venvaroha
Yuktibhāṣā
Kerala school of astronomy and mathematics
List of astronomers and mathematicians of the Kerala school
List of Indian mathematicians
Indian mathematics
History of calculus
References
External links
Biography on MacTutor
1340s births
1420s deaths
Scientists from Kerala
History of calculus
Indian Hindus
Kerala school of astronomy and mathematics
14th-century Indian mathematicians
15th-century Indian mathematicians
People from Irinjalakuda
15th-century Indian astronomers
14th-century Indian astronomers
Scholars from Kerala
Mathematical series | Madhava of Sangamagrama | [
"Mathematics"
] | 3,653 | [
"Sequences and series",
"Mathematical structures",
"Series (mathematics)",
"Calculus",
"Mathematics of infinitesimals",
"History of calculus"
] |
2,795,027 | https://en.wikipedia.org/wiki/Exploded-view%20drawing | An exploded-view drawing is a diagram, picture, schematic or technical drawing of an object, that shows the relationship or order of assembly of various parts.
It shows the components of an object slightly separated by distance, or suspended in surrounding space in the case of a three-dimensional exploded diagram. An object is represented as if there had been a small controlled explosion emanating from the middle of the object, causing the object's parts to be separated an equal distance away from their original locations.
The exploded-view drawing is used in parts catalogs, assembly and maintenance manuals and other instructional material.
The projection of an exploded view is usually shown from slightly above and diagonally from the left or right side of the drawing. (In the exploded-view drawing of a gear pump, for example, the object is viewed slightly from above and diagonally from the left side.)
Overview
An exploded-view drawing is a type of drawing that shows the intended assembly of mechanical or other parts. It shows all parts of the assembly and how they fit together. In mechanical systems, the component closest to the center is usually assembled first, or is the main part into which the other parts are assembled. The drawing can also help to represent the disassembly of parts, where the parts on the outside are normally removed first.
Exploded diagrams are common in descriptive manuals showing parts placement, or parts contained in an assembly or sub-assembly. Usually such diagrams have the part identification number and a label indicating which part fills the particular position in the diagram. Many spreadsheet applications can automatically create exploded diagrams, such as exploded pie charts.
In patent drawings, exploded views, with the separated parts embraced by a bracket to show the relationship or order of assembly of the various parts, are permissible. When an exploded view is shown in a figure that is on the same sheet as another figure, the exploded view should be placed in brackets.
Exploded views can also be used in architectural drawing, for example in the presentation of landscape design. An exploded view can create an image in which the elements are flying through the air above the architectural plan, almost like a cubist painting. The locations of the elements can be shadowed or dotted in the site plan.
History
The exploded view was among the many graphic inventions of the Renaissance, which were developed to clarify pictorial representation in a renewed naturalistic way. The exploded view can be traced back to the early fifteenth century notebooks of Marino Taccola (1382–1453), and were perfected by Francesco di Giorgio (1439–1502) and Leonardo da Vinci (1452–1519).
One of the first clearer examples of an exploded view was created by Leonardo in his design drawing of a reciprocating motion machine. Leonardo applied this method of presentation in several other studies, including those on human anatomy.
The term "Exploded-View Drawing" emerged in the 1940s, and is one of the first times defined in 1965 as "Three-dimensional (isometric) illustration that shows the mating relationships of parts, subassemblies, and higher assemblies. May also show the sequence of assembling or disassembling the detail parts."
See also
Cross-section
Cutaway drawing
Cutaway (industrial)
Perspective
References
Technical drawing
Methods of representation | Exploded-view drawing | [
"Engineering"
] | 676 | [
"Design engineering",
"Technical drawing",
"Civil engineering"
] |
2,795,314 | https://en.wikipedia.org/wiki/Airborne%20wind%20turbine | An airborne wind turbine is a design concept for a wind turbine with a rotor supported in the air without a tower, thus benefiting from the higher velocity and persistence of wind at high altitudes, while avoiding the expense of tower construction, or the need for slip rings or yaw mechanism. An electrical generator may be on the ground or airborne. Challenges include safely suspending and maintaining turbines hundreds of meters off the ground in high winds and storms, transferring the harvested and/or generated power back to earth, and interference with aviation.
Airborne wind turbines may operate at low or high altitudes; they are part of a wider class of Airborne Wind Energy Systems (AWES) addressed by high-altitude wind power and crosswind kite power. When the generator is on the ground, the tethered aircraft need not carry the generator mass or have a conductive tether. When the generator is aloft, a conductive tether would be used to transmit energy to the ground, or the energy would be used aloft or beamed to receivers using microwave or laser. Kites and helicopters come down when there is insufficient wind; kytoons and blimps may resolve the matter, though with other disadvantages. Also, bad weather such as lightning or thunderstorms could temporarily suspend use of the machines, probably requiring them to be brought back down to the ground and covered. Some schemes require a long power cable and, if the turbine is high enough, a prohibited airspace zone. As of 2022, few commercial airborne wind turbines are in regular operation.
Aerodynamic variety
An aerodynamic airborne wind power system relies on the wind for support.
In one class, the generator is aloft; an aerodynamic structure resembling a kite, tethered to the ground, extracts wind energy by supporting a wind turbine. In another class of devices, such as crosswind kite power, generators are on the ground; one or more airfoils or kites exert force on a tether, which is converted to electrical energy. An airborne turbine requires conductors in the tether or some other apparatus to transmit power to the ground. Systems that rely on a winch can instead place the weight of the generator at ground level, and the tethers need not conduct electricity.
Aerodynamic wind energy systems have been a subject of research interest since at least 1980. Multiple proposals have been put forth but no commercial products are available.
Other projects for airborne wind energy systems include:
Ampyx Power
Kitepower
KiteGen
Rotokite
HAWE System
SkySails
X-Wind technology
Makani Power crosswind hybrid kite system
Windswept and Interesting Kite Turbine Ring to Ring Torque Transfer
Kitemill
Aerostat variety
An aerostat-type wind power system relies at least in part on buoyancy to support the wind-collecting elements. Aerostats vary in their designs and resulting lift-to-drag ratio; the kiting effect of higher lift-over-drag shapes for the aerostat can effectively keep an airborne turbine aloft; a variety of such kiting balloons were made famous in the kytoon by Domina Jalbert.
Balloons can be incorporated to keep systems up without wind, but balloons leak slowly and have to be resupplied with lifting gas, possibly patched as well. Very large, sun-heated balloons may solve the helium or hydrogen leakage problems.
An Ontario-based company called Magenn was developing a turbine called the Magenn Air Rotor System (MARS). A planned MARS system would use a horizontal rotor in a helium-suspended apparatus tethered to a transformer on the ground. Magenn claims that their technology provides high torque, low starting speeds, and superior overall efficiency thanks to its ability to deploy higher in comparison to non-aerial solutions. The first prototypes were built by TCOM in April 2008. No production units have been delivered.
Boston-based Altaeros Energies uses a helium-filled balloon shroud to lift a wind turbine into the air, transferring the resultant power down to a base station through the same cables used to control the shroud. A 35-foot prototype using a standard Skystream 2.5kW 3.7m wind turbine was flown and tested in 2012. In fall 2013, Altaeros was at work on its first commercial-scale demonstration in Alaska.
Another concept, released in 2023, proposed a helium-filled balloon with attached sails, which create pressure and drive the rotation of the system around its horizontal axis. The kinetic energy is transferred to a generator on the ground through ropes in circular motion.
See also
High-altitude wind power
Ram air turbine
References
Bibliography
Vance, E. Wind power: High hopes. Nature 460, 564–566 (2009). https://doi.org/10.1038/460564a
Houska, Boris, and Moritz Diehl. "Optimal control for power generating kites." 2007 European Control Conference (ECC). IEEE, 2007.
External links
Kitemill - Taking windpower to new heights
Energy Kite Systems
Why Airborne Wind Energy Airborne Wind Energy Labs
Aerodynamics
Electrical generators
Electromechanical engineering
Kites
Airborne wind power
Wind turbines | Airborne wind turbine | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 1,038 | [
"Electrical generators",
"Machines",
"Aerodynamics",
"Physical systems",
"Electromechanical engineering",
"Mechanical engineering by discipline",
"Aerospace engineering",
"Electrical engineering",
"Fluid dynamics"
] |
2,796,079 | https://en.wikipedia.org/wiki/Electrical%20enclosure | An electrical enclosure is a cabinet for electrical or electronic equipment to mount switches, knobs and displays and to prevent electrical shock to equipment users and protect the contents from the environment. The enclosure is the only part of the equipment which is seen by users. It may be designed not only for its utilitarian requirements, but also to be pleasing to the eye. Regulations may dictate the features and performance of enclosures for electrical equipment in hazardous areas, such as petrochemical plants or coal mines. Electronic packaging may place many demands on an enclosure for heat dissipation, radio frequency interference and electrostatic discharge protection, as well as functional, esthetic and commercial constraints.
Standards
Internationally, IEC 60529 classifies the IP Codes (ingress protection rating) of enclosures.
In the United States, the National Electrical Manufacturers Association (NEMA) publishes NEMA enclosure type standards for the performance of various classes of electrical enclosures. The NEMA standards cover corrosion resistance, ability to protect from rain and submersion, etc.
Materials
Electrical enclosures are usually made from rigid plastics, or metals such as steel, stainless steel, or aluminum. Steel cabinets may be painted or galvanized. Mass-produced equipment will generally have a customized enclosure, but standardized enclosures are made for custom-built or small production runs of equipment. For plastic enclosures ABS is used for indoor applications not in harsh environments. Polycarbonate, glass-reinforced, and fiberglass boxes are used where stronger cabinets are required, and may additionally have a gasket to exclude dust and moisture.
Metal cabinets may meet the conductivity requirements for electrical safety bonding and shielding of enclosed equipment from electromagnetic interference. Non-metallic enclosures may require additional installation steps to ensure metallic conduit systems are properly bonded.
Stainless steel and carbon steel
Carbon steel and stainless steel are both used for enclosure construction due to their high durability and corrosion resistance. These materials are also moisture resistant and chemical resistant. They are the strongest of the construction options. Carbon steel can be hot or cold rolled. Hot rolled carbon steel is used for stamping and moderate forming applications. Cold rolled sheet is produced from low carbon steel and then cold reduced to a certain thickness and can meet ASTM A366 and ASTM A611 requirements.
Stainless steel enclosures are suited for medical, pharma, and food industry applications since they are bacterial and fungal resistant due to their non-porous quality. Stainless steel enclosures may be specified to permit wash-down cleaning in, for example, food manufacturing areas.
Aluminum
Aluminum is chosen because of its light weight, relative strength, low cost, and corrosion resistance. It performs well in harsh environments and it is sturdy, capable of withstanding high impact with a high malleable strength. Aluminum also acts as a shield against electromagnetic interference.
Polycarbonate
Polycarbonate used for electrical enclosures is strong but light, non-conductive and non-magnetic. It is also resistant to corrosion and some acidic environments; however, it is sensitive to abrasive cleaners. Polycarbonate is the easiest material to modify.
Fiberglass
Fiberglass enclosures resist chemicals in corrosive applications. The material can be used over all indoor and outdoor temperature ranges. Fiberglass can be installed in environments that are constantly wet.
Terminology
Enclosures for some purposes have partially punched openings (knockouts) which can be removed to accommodate cables, connectors, or conduits. Where they are small and primarily intended to conceal electrical junctions from sight, or protect them from tampering, they are also known as junction boxes, street cabinets or technically as serving area interface.
Telecommunications
Telecommunication enclosures are fully assembled or modular field-assembled transportable structures capable of housing an electronic communications system. These enclosures provide a controlled internal environment for the communications equipment and occasional craftspeople. The enclosures are designed with locks, security, and alarms to discourage access by unauthorized persons. Enclosures can be provided with a decorative facade to comply with local building requirements.
Fire risk
Electrical enclosures are prone to fires that can be very intense (on the order of a megawatt) and are hence an important topic in fire safety engineering.
See also
19 inch rack
Cable management
DIN rail
Housing (engineering)
Rack unit
Telco can
Utility box art
Utility vault
References
External links
IEC IP definitions, and a comparison of IEC<>NEMA definitions
Types of Enclosures
Electrical Enclosure with Terminal
IP Protection Ratings vs. NEMA Equivalency
What Is an Electrical Enclosure? Definition, Using, Requirements | Electrical enclosure | [
"Engineering"
] | 923 | [
"Electrical enclosures",
"Electrical engineering"
] |
2,796,131 | https://en.wikipedia.org/wiki/Introduction%20to%20quantum%20mechanics | Quantum mechanics is the study of matter and its interactions with energy on the scale of atomic and subatomic particles. By contrast, classical physics explains matter and energy only on a scale familiar to human experience, including the behavior of astronomical bodies such as the moon. Classical physics is still used in much of modern science and technology. However, towards the end of the 19th century, scientists discovered phenomena in both the large (macro) and the small (micro) worlds that classical physics could not explain. The desire to resolve inconsistencies between observed phenomena and classical theory led to a revolution in physics, a shift in the original scientific paradigm: the development of quantum mechanics.
Many aspects of quantum mechanics are counterintuitive and can seem paradoxical because they describe behavior quite different from that seen at larger scales. In the words of quantum physicist Richard Feynman, quantum mechanics deals with "nature as She is—absurd". Features of quantum mechanics often defy simple explanations in everyday language. One example of this is the uncertainty principle: precise measurements of position cannot be combined with precise measurements of velocity. Another example is entanglement: a measurement made on one particle (such as an electron that is measured to have spin 'up') will correlate with a measurement on a second particle (an electron will be found to have spin 'down') if the two particles have a shared history. This will apply even if it is impossible for the result of the first measurement to have been transmitted to the second particle before the second measurement takes place.
Quantum mechanics helps us understand chemistry, because it explains how atoms interact with each other and form molecules. Many remarkable phenomena can be explained using quantum mechanics, like superfluidity. For example, if liquid helium cooled to a temperature near absolute zero is placed in a container, it spontaneously flows up and over the rim of its container; this is an effect which cannot be explained by classical physics.
History
James C. Maxwell's unification of the equations governing electricity, magnetism, and light in the late 19th century led to experiments on the interaction of light and matter. Some of these experiments had aspects which could not be explained until quantum mechanics emerged in the early part of the 20th century.
Evidence of quanta from the photoelectric effect
The seeds of the quantum revolution appear in the discovery by J.J. Thomson in 1897 that cathode rays were not continuous but "corpuscles" (electrons). Electrons had been named just six years earlier as part of the emerging theory of atoms. In 1900, Max Planck, unconvinced by the atomic theory, discovered that he needed discrete entities like atoms or electrons to explain black-body radiation.
Very hot – red hot or white hot – objects look similar when heated to the same temperature. This look results from a common curve of light intensity at different frequencies (colors), which is called black-body radiation. White hot objects have intensity across many colors in the visible range. Frequencies just below the visible range are infrared light, which also conveys heat. Continuous wave theories of light and matter cannot explain the black-body radiation curve. Planck spread the heat energy among individual "oscillators" of an undefined character but with discrete energy capacity; this model explained black-body radiation.
At the time, electrons, atoms, and discrete oscillators were all exotic ideas to explain exotic phenomena. But in 1905 Albert Einstein proposed that light was also corpuscular, consisting of "energy quanta", in contradiction to the established science of light as a continuous wave, stretching back a hundred years to Thomas Young's work on diffraction.
Einstein's revolutionary proposal started by reanalyzing Planck's black-body theory, arriving at the same conclusions by using the new "energy quanta". Einstein then showed how energy quanta connected to Thomson's electron. In 1902, Philipp Lenard directed light from an arc lamp onto freshly cleaned metal plates housed in an evacuated glass tube. He measured the electric current coming off the metal plate at higher and lower intensities of light and for different metals. Lenard showed that the amount of current – the number of electrons – depended on the intensity of the light, but that the velocity of these electrons did not depend on intensity. This is the photoelectric effect. The continuous wave theories of the time predicted that more light intensity would accelerate the same amount of current to higher velocity, contrary to this experiment. Einstein's energy quanta explained the increase in current: one electron is ejected for each quantum, so more quanta mean more electrons.
Einstein then predicted that the electron velocity would increase in direct proportion to the light frequency above a fixed value that depended upon the metal. Here the idea is that energy in energy-quanta depends upon the light frequency; the energy transferred to the electron comes in proportion to the light frequency. The type of metal gives a barrier, the fixed value, that the electrons must climb over to exit their atoms, to be emitted from the metal surface and be measured.
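In modern notation Einstein's prediction is usually written as the photoelectric equation below (a standard textbook form, included for clarity rather than quoted from the article):

```latex
% Photoelectric effect: maximum kinetic energy of an emitted electron.
\[
  E_{\mathrm{kin}}^{\max} \;=\; h\nu \;-\; \phi ,
\]
% where h\nu is the energy of one light quantum of frequency \nu, and \phi
% (the work function) is the metal-dependent barrier. No electrons are emitted
% for h\nu < \phi, no matter how intense the light.
```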
Ten years elapsed before Millikan's definitive experiment verified Einstein's prediction. During that time many scientists rejected the revolutionary idea of quanta. But Planck's and Einstein's concept was in the air and soon began to affect other physics and quantum theories.
Quantization of bound electrons in atoms
Experiments with light and matter in the late 1800s uncovered a reproducible but puzzling regularity. When light was shone through purified gases, certain frequencies (colors) did not pass. These dark absorption 'lines' followed a distinctive pattern: the gaps between the lines decreased steadily. By 1889, the Rydberg formula predicted the lines for hydrogen gas using only a constant number and the integers to index the lines. The origin of this regularity was unknown. Solving this mystery would eventually become the first major step toward quantum mechanics.
Throughout the 19th century evidence grew for the atomic nature of matter. With Thomson's discovery of the electron in 1897, scientists began the search for a model of the interior of the atom. Thomson proposed negative electrons swimming in a pool of positive charge. Between 1908 and 1911, Rutherford showed that the positive part was only 1/3000th of the diameter of the atom.
Models of "planetary" electrons orbiting a nuclear "Sun" were proposed, but cannot explain why the electron does not simply fall into the positive charge. In 1913 Niels Bohr and Ernest Rutherford connected the new atom models to the mystery of the Rydberg formula: the orbital radius of the electrons were constrained and the resulting energy differences matched the energy differences in the absorption lines. This meant that absorption and emission of light from atoms was energy quantized: only specific energies that matched the difference in orbital energy would be emitted or absorbed.
Trading one mystery – the regular pattern of the Rydberg formula – for another mystery – constraints on electron orbits – might not seem like a big advance, but the new atom model summarized many other experimental findings. The quantization of the photoelectric effect and now the quantization of the electron orbits set the stage for the final revolution.
Throughout both the early and the modern era of quantum mechanics, the concept that classical mechanics must remain valid macroscopically constrained possible quantum models. This concept was formalized by Bohr in 1923 as the correspondence principle. It requires quantum theory to converge to classical limits.
A related concept is Ehrenfest's theorem, which shows that the average values obtained from quantum mechanics (e.g. position and momentum) obey classical laws.
Quantization of spin
In 1922 Otto Stern and Walther Gerlach demonstrated that the magnetic properties of silver atoms defy classical explanation, the work contributing to Stern’s 1943 Nobel Prize in Physics. They fired a beam of silver atoms through a magnetic field. According to classical physics, the atoms should have emerged in a spray, with a continuous range of directions. Instead, the beam separated into two, and only two, diverging streams of atoms. Unlike the other quantum effects known at the time, this striking result involves the state of a single atom. In 1927, T.E. Phipps and J.B. Taylor obtained a similar, but less pronounced effect using hydrogen atoms in their ground state, thereby eliminating any doubts that may have been caused by the use of silver atoms.
In 1924, Wolfgang Pauli called it "two-valuedness not describable classically" and associated it with electrons in the outermost shell. The experiments led to the formulation of its theory, described as arising from the spin of the electron, in 1925 by Samuel Goudsmit and George Uhlenbeck, under the advice of Paul Ehrenfest.
Quantization of matter
In 1924 Louis de Broglie proposed that electrons in an atom are constrained not in "orbits" but as standing waves. In detail his solution did not work, but his hypothesis – that the electron "corpuscle" moves in the atom as a wave – spurred Erwin Schrödinger to develop a wave equation for electrons; when applied to hydrogen the Rydberg formula was accurately reproduced.
Max Born's 1924 paper "Zur Quantenmechanik" was the first use of the words "quantum mechanics" in print. His later work included developing quantum collision models; in a footnote to a 1926 paper he proposed the Born rule connecting theoretical models to experiment.
In 1927 at Bell Labs, Clinton Davisson and Lester Germer fired slow-moving electrons at a crystalline nickel target and observed a diffraction pattern, indicating the wave nature of the electron; the theory was fully explained by Hans Bethe. A similar experiment by George Paget Thomson and Alexander Reid, firing electrons at thin celluloid foils and later metal films and observing rings, independently demonstrated the matter-wave nature of electrons.
Further developments
In 1928 Paul Dirac published his relativistic wave equation simultaneously incorporating relativity, predicting anti-matter, and providing a complete theory for the Stern–Gerlach result. These successes launched a new fundamental understanding of our world at small scale: quantum mechanics.
Planck and Einstein started the revolution with quanta that broke down the continuous models of matter and light. Twenty years later "corpuscles" like electrons came to be modeled as continuous waves. This result came to be called wave-particle duality, one iconic idea along with the uncertainty principle that sets quantum mechanics apart from older models of physics.
Quantum radiation, quantum fields
In 1923 Compton demonstrated that the Planck–Einstein energy quanta from light also had momentum; three years later the "energy quanta" acquired a new name, "photon". Despite its role in almost all stages of the quantum revolution, no explicit model for light quanta existed until 1927, when Paul Dirac began work on a quantum theory of radiation that became quantum electrodynamics. Over the following decades this work evolved into quantum field theory, the basis for modern quantum optics and particle physics.
Wave–particle duality
The concept of wave–particle duality says that neither the classical concept of "particle" nor of "wave" can fully describe the behavior of quantum-scale objects, either photons or matter. Wave–particle duality is an example of the principle of complementarity in quantum physics. An elegant example of wave-particle duality is the double-slit experiment.
In the double-slit experiment, as originally performed by Thomas Young in 1803, and then Augustin Fresnel a decade later, a beam of light is directed through two narrow, closely spaced slits, producing an interference pattern of light and dark bands on a screen. The same behavior can be demonstrated in water waves: the double-slit experiment was seen as a demonstration of the wave nature of light.
Variations of the double-slit experiment have been performed using electrons, atoms, and even large molecules, and the same type of interference pattern is seen. Thus it has been demonstrated that all matter possesses wave characteristics.
If the source intensity is turned down, the same interference pattern will slowly build up, one "count" or particle (e.g. photon or electron) at a time. The quantum system acts as a wave when passing through the double slits, but as a particle when it is detected. This is a typical feature of quantum complementarity: a quantum system acts as a wave in an experiment to measure its wave-like properties, and like a particle in an experiment to measure its particle-like properties. The point on the detector screen where any individual particle shows up is the result of a random process. However, the distribution pattern of many individual particles mimics the diffraction pattern produced by waves.
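This build-up of an interference pattern from individual, randomly placed detections can be illustrated numerically. The Python sketch below draws single "counts" one at a time from a two-slit probability density of the form cos²(π d x / (λ L)); the wavelength, slit separation, and screen distance are assumed, illustrative values.

```python
# Minimal sketch with assumed parameters: individual "counts" drawn one at a time
# from a two-slit probability density P(x) ~ cos^2(pi * d * x / (lambda * L)).
import numpy as np

rng = np.random.default_rng(0)
wavelength, slit_sep, screen_dist = 500e-9, 20e-6, 1.0   # assumed values (m)
x = np.linspace(-0.1, 0.1, 2001)                          # detector coordinate (m)
p = np.cos(np.pi * slit_sep * x / (wavelength * screen_dist)) ** 2
p /= p.sum()                                              # discrete probability density

hits = rng.choice(x, size=5000, p=p)      # one random detection at a time
counts, edges = np.histogram(hits, bins=50)
print(counts)   # bright and dark fringes emerge only after many individual detections
```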
Uncertainty principle
Suppose it is desired to measure the position and speed of an object—for example, a car going through a radar speed trap. It can be assumed that the car has a definite position and speed at a particular moment in time. How accurately these values can be measured depends on the quality of the measuring equipment. If the precision of the measuring equipment is improved, it provides a result closer to the true value. It might be assumed that the speed of the car and its position could be operationally defined and measured simultaneously, as precisely as might be desired.
In 1927, Heisenberg proved that this last assumption is not correct. Quantum mechanics shows that certain pairs of physical properties, for example, position and speed, cannot be simultaneously measured, nor defined in operational terms, to arbitrary precision: the more precisely one property is measured, or defined in operational terms, the less precisely can the other be thus treated. This statement is known as the uncertainty principle. The uncertainty principle is not only a statement about the accuracy of our measuring equipment but, more deeply, is about the conceptual nature of the measured quantities—the assumption that the car had simultaneously defined position and speed does not work in quantum mechanics. On a scale of cars and people, these uncertainties are negligible, but when dealing with atoms and electrons they become critical.
Heisenberg gave, as an illustration, the measurement of the position and momentum of an electron using a photon of light. In measuring the electron's position, the higher the frequency of the photon, the more accurate is the measurement of the position of the impact of the photon with the electron, but the greater is the disturbance of the electron. This is because from the impact with the photon, the electron absorbs a random amount of energy, rendering the measurement obtained of its momentum increasingly uncertain, for one is necessarily measuring its post-impact disturbed momentum from the collision products and not its original momentum (momentum which should be simultaneously measured with position). With a photon of lower frequency, the disturbance (and hence uncertainty) in the momentum is less, but so is the accuracy of the measurement of the position of the impact.
At the heart of the uncertainty principle is a fact that for any mathematical analysis in the position and velocity domains, achieving a sharper (more precise) curve in the position domain can only be done at the expense of a more gradual (less precise) curve in the speed domain, and vice versa. More sharpness in the position domain requires contributions from more frequencies in the speed domain to create the narrower curve, and vice versa. It is a fundamental tradeoff inherent in any such related or complementary measurements, but is only really noticeable at the smallest (Planck) scale, near the size of elementary particles.
The uncertainty principle shows mathematically that the product of the uncertainty in the position and momentum of a particle (momentum is velocity multiplied by mass) could never be less than a certain value, and that this value is related to the Planck constant.
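A minimal numerical illustration of this bound, assuming a Gaussian wave packet (the case in which the product of the two uncertainties is smallest), is sketched below in Python; the packet width is an arbitrary, assumed value.

```python
# Minimal numerical check, assuming a Gaussian wave packet: the product of the
# position and momentum spreads equals hbar/2 (the smallest value allowed).
import numpy as np

HBAR = 1.054571817e-34
sigma = 1e-10                                   # assumed position spread, ~atomic size (m)
x = np.linspace(-20 * sigma, 20 * sigma, 8001)
dx = x[1] - x[0]

psi = np.exp(-x**2 / (4 * sigma**2))            # Gaussian wave packet (real, unnormalized)
norm = np.sum(np.abs(psi)**2) * dx

delta_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx / norm)   # spread in position
dpsi = np.gradient(psi, dx)
delta_p = HBAR * np.sqrt(np.sum(np.abs(dpsi)**2) * dx / norm)  # spread in momentum

print(delta_x * delta_p / (HBAR / 2))           # ~1.0: the Gaussian saturates the bound
```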
Wave function collapse
Wave function collapse means that a measurement has forced or converted a quantum (probabilistic or potential) state into a definite measured value. This phenomenon is only seen in quantum mechanics rather than classical mechanics.
For example, before a photon actually "shows up" on a detection screen it can be described only with a set of probabilities for where it might show up. When it does appear, for instance in the CCD of an electronic camera, the time and space where it interacted with the device are known within very tight limits. However, the photon has disappeared in the process of being captured (measured), and its quantum wave function has disappeared with it. In its place, some macroscopic physical change in the detection screen has appeared, e.g., an exposed spot in a sheet of photographic film, or a change in electric potential in some cell of a CCD.
Eigenstates and eigenvalues
Because of the uncertainty principle, statements about both the position and momentum of particles can assign only a probability that the position or momentum has some numerical value. Therefore, it is necessary to formulate clearly the difference between the state of something indeterminate, such as an electron in a probability cloud, and the state of something having a definite value. When an object can definitely be "pinned-down" in some respect, it is said to possess an eigenstate.
In the Stern–Gerlach experiment discussed above, the quantum model predicts two possible values of spin for the atom compared to the magnetic axis. These two eigenstates are named arbitrarily 'up' and 'down'. The quantum model predicts these states will be measured with equal probability, but no intermediate values will be seen. This is what the Stern–Gerlach experiment shows.
The eigenstates of spin about the vertical axis are not simultaneously eigenstates of spin about the horizontal axis, so this atom has an equal probability of being found to have either value of spin about the horizontal axis. As described in the section above, measuring the spin about the horizontal axis can allow an atom that was spun up to spin down: measuring its spin about the horizontal axis collapses its wave function into one of the eigenstates of this measurement, which means it is no longer in an eigenstate of spin about the vertical axis, so can take either value.
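A minimal sketch of this statement, using the standard 2×2 spin-1/2 operators, is given below in Python: the "up" eigenstate of spin about the vertical (z) axis has equal overlap with both eigenstates of spin about a horizontal (x) axis, so either horizontal outcome is found with probability 1/2.

```python
# Minimal sketch with the standard 2x2 spin-1/2 operators: a definite ("up")
# eigenstate of spin about z overlaps equally with both eigenstates of spin about x.
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2

_, z_vecs = np.linalg.eigh(sz)          # columns: eigenstates along z (ascending eigenvalue)
_, x_vecs = np.linalg.eigh(sx)          # columns: eigenstates along x

z_up = z_vecs[:, 1]                     # the z 'up' eigenstate
probs = np.abs(x_vecs.conj().T @ z_up) ** 2
print(probs)                            # -> [0.5, 0.5]: either x outcome is equally likely
```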
The Pauli exclusion principle
In 1924, Wolfgang Pauli proposed a new quantum degree of freedom (or quantum number), with two possible values, to resolve inconsistencies between observed molecular spectra and the predictions of quantum mechanics. In particular, the spectrum of atomic hydrogen had a doublet, or pair of lines differing by a small amount, where only one line was expected. Pauli formulated his exclusion principle, stating, "There cannot exist an atom in such a quantum state that two electrons within [it] have the same set of quantum numbers."
A year later, Uhlenbeck and Goudsmit identified Pauli's new degree of freedom with the property called spin whose effects were observed in the Stern–Gerlach experiment.
Dirac wave equation
In 1928, Paul Dirac extended the Pauli equation, which described spinning electrons, to account for special relativity. The result was a theory that dealt properly with events, such as the speed at which an electron orbits the nucleus, occurring at a substantial fraction of the speed of light. By using the simplest electromagnetic interaction, Dirac was able to predict the value of the magnetic moment associated with the electron's spin and found the experimentally observed value, which was too large to be that of a spinning charged sphere governed by classical physics. He was able to solve for the spectral lines of the hydrogen atom and to reproduce from physical first principles Sommerfeld's successful formula for the fine structure of the hydrogen spectrum.
Dirac's equations sometimes yielded a negative value for energy, for which he proposed a novel solution: he posited the existence of an antielectron and a dynamical vacuum. This led to the many-particle quantum field theory.
Quantum entanglement
In quantum physics, a group of particles can interact or be created together in such a way that the quantum state of each particle of the group cannot be described independently of the state of the others, including when the particles are separated by a large distance. This is known as quantum entanglement.
An early landmark in the study of entanglement was the Einstein–Podolsky–Rosen (EPR) paradox, a thought experiment proposed by Albert Einstein, Boris Podolsky and Nathan Rosen which argues that the description of physical reality provided by quantum mechanics is incomplete. In a 1935 paper titled "Can Quantum-Mechanical Description of Physical Reality be Considered Complete?", they argued for the existence of "elements of reality" that were not part of quantum theory, and speculated that it should be possible to construct a theory containing these hidden variables.
The thought experiment involves a pair of particles prepared in what would later become known as an entangled state. Einstein, Podolsky, and Rosen pointed out that, in this state, if the position of the first particle were measured, the result of measuring the position of the second particle could be predicted. If instead the momentum of the first particle were measured, then the result of measuring the momentum of the second particle could be predicted. They argued that no action taken on the first particle could instantaneously affect the other, since this would involve information being transmitted faster than light, which is forbidden by the theory of relativity. They invoked a principle, later known as the "EPR criterion of reality", positing that: "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity." From this, they inferred that the second particle must have a definite value of both position and of momentum prior to either quantity being measured. But quantum mechanics considers these two observables incompatible and thus does not associate simultaneous values for both to any system. Einstein, Podolsky, and Rosen therefore concluded that quantum theory does not provide a complete description of reality. In the same year, Erwin Schrödinger used the word "entanglement" and declared: "I would not call that one but rather the characteristic trait of quantum mechanics."
The Irish physicist John Stewart Bell carried the analysis of quantum entanglement much further. He deduced that if measurements are performed independently on the two separated particles of an entangled pair, then the assumption that the outcomes depend upon hidden variables within each half implies a mathematical constraint on how the outcomes on the two measurements are correlated. This constraint would later be named the Bell inequality. Bell then showed that quantum physics predicts correlations that violate this inequality. Consequently, the only way that hidden variables could explain the predictions of quantum physics is if they are "nonlocal", which is to say that somehow the two particles are able to interact instantaneously no matter how widely they ever become separated. Performing experiments like those that Bell suggested, physicists have found that nature obeys quantum mechanics and violates Bell inequalities. In other words, the results of these experiments are incompatible with any local hidden variable theory.
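The size of the violation can be illustrated with the CHSH form of the Bell inequality. For the entangled singlet state, quantum mechanics predicts the correlation E(a, b) = −cos(a − b) for measurements along directions a and b, while any local hidden-variable model obeys |S| ≤ 2. The short Python sketch below evaluates S at the standard angle choices; it is a numerical illustration, not a simulation of an actual experiment.

```python
# Numerical illustration (not an experiment): the CHSH combination of correlations.
# For the singlet state quantum mechanics predicts E(a, b) = -cos(a - b); any local
# hidden-variable theory obeys |S| <= 2.
import numpy as np

def E(a: float, b: float) -> float:
    """Quantum correlation of spin measurements along angles a and b (singlet state)."""
    return -np.cos(a - b)

a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S), "vs local bound 2")       # ~2.828, violating the Bell-CHSH inequality
```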
Quantum field theory
The idea of quantum field theory began in the late 1920s with British physicist Paul Dirac, when he attempted to quantize the energy of the electromagnetic field; just as in quantum mechanics the energy of an electron in the hydrogen atom was quantized. Quantization is a procedure for constructing a quantum theory starting from a classical theory.
Merriam-Webster defines a field in physics as "a region or space in which a given effect (such as magnetism) exists". Other effects that manifest themselves as fields are gravitation and static electricity. In 2008, physicist Richard Hammond wrote:
Sometimes we distinguish between quantum mechanics (QM) and quantum field theory (QFT). QM refers to a system in which the number of particles is fixed, and the fields (such as the electromagnetic field) are continuous classical entities. QFT ... goes a step further and allows for the creation and annihilation of particles ...
He added, however, that quantum mechanics is often used to refer to "the entire notion of quantum view".
In 1931, Dirac proposed the existence of particles that later became known as antimatter. Dirac shared the Nobel Prize in Physics for 1933 with Schrödinger "for the discovery of new productive forms of atomic theory".
Quantum electrodynamics
Quantum electrodynamics (QED) is the name of the quantum theory of the electromagnetic force. Understanding QED begins with understanding electromagnetism. Electromagnetism can be called "electrodynamics" because it is a dynamic interaction between electrical and magnetic forces. Electromagnetism begins with the electric charge.
Electric charges are the sources of, and create, electric fields. An electric field is a field that exerts a force on any particles that carry electric charges, at any point in space. This includes the electron, proton, and even quarks, among others. As a force is exerted, electric charges move, a current flows, and a magnetic field is produced. The changing magnetic field, in turn, causes electric current (often moving electrons). The physical description of interacting charged particles, electrical currents, electrical fields, and magnetic fields is called electromagnetism.
In 1928 Paul Dirac produced a relativistic quantum theory of electromagnetism. This was the progenitor to modern quantum electrodynamics, in that it had essential ingredients of the modern theory. However, the problem of unsolvable infinities developed in this relativistic quantum theory. Years later, renormalization largely solved this problem. Initially viewed as a provisional, suspect procedure by some of its originators, renormalization eventually was embraced as an important and self-consistent tool in QED and other fields of physics. Also, in the late 1940s Feynman diagrams provided a way to make predictions with QED by finding a probability amplitude for each possible way that an interaction could occur. The diagrams showed in particular that the electromagnetic force is the exchange of photons between interacting particles.
The Lamb shift is an example of a quantum electrodynamics prediction that has been experimentally verified. It is an effect whereby the quantum nature of the electromagnetic field makes the energy levels in an atom or ion deviate slightly from what they would otherwise be. As a result, spectral lines may shift or split.
Similarly, within a freely propagating electromagnetic wave, the current can also be just an abstract displacement current, instead of involving charge carriers. In QED, its full description makes essential use of short-lived virtual particles. There, QED again validates an earlier, rather mysterious concept.
Standard Model
The Standard Model of particle physics is the quantum field theory that describes three of the four known fundamental forces (electromagnetic, weak and strong interactions – excluding gravity) in the universe and classifies all known elementary particles. It was developed in stages throughout the latter half of the 20th century, through the work of many scientists worldwide, with the current formulation being finalized in the mid-1970s upon experimental confirmation of the existence of quarks. Since then, proof of the top quark (1995), the tau neutrino (2000), and the Higgs boson (2012) have added further credence to the Standard Model. In addition, the Standard Model has predicted various properties of weak neutral currents and the W and Z bosons with great accuracy.
Although the Standard Model is believed to be theoretically self-consistent and has demonstrated success in providing experimental predictions, it leaves some physical phenomena unexplained and so falls short of being a complete theory of fundamental interactions. For example, it does not fully explain baryon asymmetry, incorporate the full theory of gravitation as described by general relativity, or account for the universe's accelerating expansion as possibly described by dark energy. The model does not contain any viable dark matter particle that possesses all of the required properties deduced from observational cosmology. It also does not incorporate neutrino oscillations and their non-zero masses. Accordingly, it is used as a basis for building more exotic models that incorporate hypothetical particles, extra dimensions, and elaborate symmetries (such as supersymmetry) to explain experimental results at variance with the Standard Model, such as the existence of dark matter and neutrino oscillations.
Interpretations
The physical measurements, equations, and predictions pertinent to quantum mechanics are all consistent and hold a very high level of confirmation. However, the question of what these abstract models say about the underlying nature of the real world has received competing answers. These interpretations are widely varying and sometimes somewhat abstract. For instance, the Copenhagen interpretation states that before a measurement, statements about a particle's properties are completely meaningless, while the many-worlds interpretation describes the existence of a multiverse made up of every possible universe.
Light behaves in some aspects like particles and in other aspects like waves. Matter—the "stuff" of the universe consisting of particles such as electrons and atoms—exhibits wavelike behavior too. Some light sources, such as neon lights, give off only certain specific frequencies of light, a small set of distinct pure colors determined by neon's atomic structure. Quantum mechanics shows that light, along with all other forms of electromagnetic radiation, comes in discrete units, called photons, and predicts its spectral energies (corresponding to pure colors), and the intensities of its light beams. A single photon is a quantum, or smallest observable particle, of the electromagnetic field. A partial photon is never experimentally observed. More broadly, quantum mechanics shows that many properties of objects, such as position, speed, and angular momentum, that appeared continuous in the zoomed-out view of classical mechanics, turn out to be (in the very tiny, zoomed-in scale of quantum mechanics) quantized. Such properties of elementary particles are required to take on one of a set of small, discrete allowable values, and since the gap between these values is also small, the discontinuities are only apparent at very tiny (atomic) scales.
Applications
Everyday applications
The relationship between the frequency of electromagnetic radiation and the energy of each photon is why ultraviolet light can cause sunburn, but visible or infrared light cannot. A photon of ultraviolet light delivers a high amount of energy—enough to contribute to cellular damage such as occurs in a sunburn. A photon of infrared light delivers less energy—only enough to warm one's skin. So, an infrared lamp can warm a large surface, perhaps large enough to keep people comfortable in a cold room, but it cannot give anyone a sunburn.
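The point can be made quantitative with the photon energy E = hc/λ. The Python sketch below uses assumed, representative wavelengths for ultraviolet, visible, and infrared light.

```python
# Illustrative numbers (assumed wavelengths): energy of a single photon, E = h*c / wavelength.
H_PLANCK = 6.626e-34   # J*s
C_LIGHT = 2.998e8      # m/s
EV = 1.602e-19         # J per electronvolt

for name, wavelength in [("ultraviolet", 300e-9), ("visible (green)", 550e-9), ("infrared", 1000e-9)]:
    energy_ev = H_PLANCK * C_LIGHT / wavelength / EV
    print(f"{name:15s} {wavelength * 1e9:6.0f} nm -> {energy_ev:.2f} eV per photon")
```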
Technological applications
Applications of quantum mechanics include the laser, the transistor, the electron microscope, and magnetic resonance imaging. A special class of quantum mechanical applications is related to macroscopic quantum phenomena such as superfluid helium and superconductors. The study of semiconductors led to the invention of the diode and the transistor, which are indispensable for modern electronics.
In even a simple light switch, quantum tunneling is absolutely vital, as otherwise the electrons in the electric current could not penetrate the potential barrier made up of a layer of oxide. Flash memory chips found in USB drives also use quantum tunneling, to erase their memory cells.
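As a rough illustration of why barrier thickness matters, the tunneling probability through a thin rectangular barrier falls off roughly as exp(−2κL), with κ = √(2m(V − E))/ħ. The Python sketch below uses assumed barrier parameters; it is an order-of-magnitude estimate, not a model of a real oxide layer.

```python
# Rough order-of-magnitude estimate with assumed numbers: tunneling probability
# through a thin rectangular barrier, T ~ exp(-2*kappa*L), kappa = sqrt(2 m (V - E)) / hbar.
import math

HBAR = 1.054571817e-34   # J*s
M_E = 9.109e-31          # electron mass, kg
EV = 1.602e-19           # J per electronvolt

V_minus_E = 3.0 * EV     # assumed barrier height above the electron energy
kappa = math.sqrt(2 * M_E * V_minus_E) / HBAR
for L_nm in (0.5, 1.0, 2.0):                   # assumed barrier thicknesses
    T = math.exp(-2 * kappa * L_nm * 1e-9)
    print(f"L = {L_nm} nm -> T ~ {T:.1e}")
```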
See also
Einstein's thought experiments
Macroscopic quantum phenomena
Philosophy of physics
Quantum computing
Virtual particle
Teaching quantum mechanics
List of textbooks on classical and quantum mechanics
Notes
References
Bibliography
Scientific American Reader, 1953.
Van Vleck, J. H.,1928, "The Correspondence Principle in the Statistical Interpretation of Quantum Mechanics", Proc. Natl. Acad. Sci. 14: 179.
Further reading
The following titles, all by working physicists, attempt to communicate quantum theory to laypeople, using a minimum of technical apparatus.
Jim Al-Khalili (2003). Quantum: A Guide for the Perplexed. Weidenfeld & Nicolson.
Chester, Marvin (1987). Primer of Quantum Mechanics. John Wiley.
Brian Cox and Jeff Forshaw (2011). The Quantum Universe. Allen Lane.
Richard Feynman (1985). QED: The Strange Theory of Light and Matter. Princeton University Press.
Ford, Kenneth (2005). The Quantum World. Harvard Univ. Press. Includes elementary particle physics.
Ghirardi, GianCarlo (2004). Sneaking a Look at God's Cards, Gerald Malsbary, trans. Princeton Univ. Press. The most technical of the works cited here. Passages using algebra, trigonometry, and bra–ket notation can be passed over on a first reading.
Tony Hey and Walters, Patrick (2003). The New Quantum Universe. Cambridge Univ. Press. Includes much about the technologies quantum theory has made possible.
Vladimir G. Ivancevic, Tijana T. Ivancevic (2008). Quantum leap: from Dirac and Feynman, Across the universe, to human body and mind. World Scientific Publishing Company. Provides an intuitive introduction in non-mathematical terms and an introduction in comparatively basic mathematical terms.
J. P. McEvoy and Oscar Zarate (2004). Introducing Quantum Theory. Totem Books.
N. David Mermin (1990). "Spooky actions at a distance: mysteries of the QT" in his Boojums all the way through. Cambridge Univ. Press: 110–76. The author is a rare physicist who tries to communicate to philosophers and humanists.
Roland Omnès (1999). Understanding Quantum Mechanics. Princeton Univ. Press.
Victor Stenger (2000). Timeless Reality: Symmetry, Simplicity, and Multiple Universes. Buffalo NY: Prometheus Books. Chpts. 5–8.
Martinus Veltman (2003). Facts and Mysteries in Elementary Particle Physics. World Scientific Publishing Company.
External links
"Microscopic World – Introduction to Quantum Mechanics". by Takada, Kenjiro, emeritus professor at Kyushu University
The Quantum Exchange (tutorials and open-source learning software).
Atoms and the Periodic Table
Single and double slit interference
Time-Evolution of a Wavepacket in a Square Well An animated demonstration of a wave packet dispersion over time.
Articles containing video clips | Introduction to quantum mechanics | [
"Physics"
] | 6,984 | [
"Theoretical physics",
"Quantum mechanics"
] |
2,796,616 | https://en.wikipedia.org/wiki/Service-oriented%20development%20of%20applications | In the field of software application development, service-oriented development of applications (or SODA)
is a way of producing service-oriented architecture applications. The term SODA was first used by the Gartner research firm.
SODA represents one possible activity for company to engage in when making the transition to service-oriented architecture (SOA). However, it has been argued that an overreliance on SODA can reduce overall system flexibility, reuse, and business agility. This danger is greater for sites that use an application server, which could diminish flexibility in redeployment and composition of services.
See also
Enterprise service bus
Service-oriented modeling
References
External links
Gartner articles on the ROI aspects of SODA (Registration and fee required.)
Pillars of Service-Oriented development
What's the Big Deal About SOA
Software architecture
Service-oriented (business computing) | Service-oriented development of applications | [
"Technology"
] | 180 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
2,796,758 | https://en.wikipedia.org/wiki/Space%20Exploration%20Alliance | The Space Exploration Alliance (SEA) is an umbrella organization formed by 13 United States space advocacy groups, industry associations, and space policy organizations. It was established on June 3, 2004. The SEA's primary objective is to support the refocus of NASA's human space activities toward exploration beyond low Earth orbit (LEO).
The initial effort, officially known as the Vision for Space Exploration (VSE), includes plans for crewed return missions to the Moon, with the intent of establishing a permanent lunar base. Once those goals are attained, the focus will shift to missions to Mars and beyond. This plan was announced on January 15, 2004 by US President George W. Bush at NASA Headquarters.
The organizations involved in supporting the Space Exploration Alliance include:
Explore Mars Inc
Federation of Galaxy Explorers
Moon Society
The Mars Society
National Society of Black Engineers
Buzz Aldrin's Human SpaceFlight Institute
Students for the Exploration and Development of Space
Department of Space
History
The Space Exploration Alliance (SEA) initially aimed to gain widespread congressional support for the new national Vision for Space Exploration outside Low Earth orbit, which the SEA refers to as "Moon, Mars and Beyond". The SEA's efforts included a campaign on Capitol Hill in Washington, D.C. held from July 11 to July 13, 2004.
Many signed petitions from National Space Society (NSS) members were presented during the congressional visits. The NSS members were able to secure their first-year funding for the Vision for Space Exploration initiative.
A second campaign was held on May 17 through May 19, 2005, in conjunction with the NSS's annual International Space Development Conference.
In a press release issued on October 15, 2005, the Space Frontier Foundation announced its intent to leave the Alliance, citing "philosophical differences" and an unwillingness to become "a fan club for a status quo that has failed so miserably time after time in our nation's quest for space."
The SEA holds an annual "Legislative Blitz" in Washington, D.C.
See also
Space colonization
Vision for Space Exploration
References
External links
Official website
2004 establishments in the United States
Non-profit organizations based in the United States
Space advocacy organizations
Organizations established in 2004
Space policy | Space Exploration Alliance | [
"Astronomy"
] | 449 | [
"Space advocacy organizations",
"Astronomy organizations"
] |
2,797,401 | https://en.wikipedia.org/wiki/COMSOL%20Multiphysics | COMSOL Multiphysics is a finite element analyzer, solver, and simulation software package for various physics and engineering applications, especially coupled phenomena and multiphysics. The software facilitates conventional physics-based user interfaces and coupled systems of partial differential equations (PDEs). COMSOL Multiphysics provides an IDE and unified workflow for electrical, mechanical, fluid, acoustics, and chemical applications.
Besides the classical problems that can be addressed with application modules, the core Multiphysics package can be used to solve PDEs in weak form. An API for Java and MATLAB can be used to control the software externally. The program also serves as an application builder for physics applications. Several modules are available for COMSOL, categorized according to the application areas of Electrical, Mechanical, Fluid, Acoustic, Chemical, Multipurpose, and Interfacing.
See also
Finite element method
Multiphysics
List of computer simulation software
References
External links
Finite element software
Finite element software for Linux
Computer-aided engineering software
Physics software | COMSOL Multiphysics | [
"Physics"
] | 201 | [
"Physics software",
"Computational physics stubs",
"Computational physics"
] |
30,256,643 | https://en.wikipedia.org/wiki/Calcium%20ammonium%20nitrate | Calcium ammonium nitrate or CAN, also known as nitro-limestone or nitrochalk, is a widely used inorganic fertilizer, accounting for 4% of all nitrogen fertilizer used worldwide in 2007.
Production
The term "calcium ammonium nitrate" is applied to multiple different, but closely related formulations. One variety of calcium ammonium nitrate is made by adding powdered limestone to ammonium nitrate; another, fully water-soluble version, is a mixture of calcium nitrate and ammonium nitrate, which crystallizes as a hydrated double salt: 5Ca(NO3)2•NH4NO3•10H2O. Unlike ammonium nitrate, these calcium containing formulations are not classified as oxidizers by the United States Department of Transportation.
Consumption of CAN was 3.54 million tonnes in 1973/74, 4.45 million tonnes in 1983/84, 3.58 million tonnes in 1993/94. Production of calcium ammonium nitrate consumed 3% of world ammonia production in 2003.
Physical and chemical properties
Calcium ammonium nitrate is hygroscopic. Its dissolution in water is endothermic, leading to its use in some instant cold packs.
Use
Most calcium ammonium nitrate is used as a fertilizer. Fertilizer grade CAN contains roughly 8% calcium and 21-27% nitrogen. CAN is preferred for use on acid soils, as it acidifies soil less than many common nitrogen fertilizers. It is also used in place of ammonium nitrate where ammonium nitrate is banned.
Calcium ammonium nitrate is used in some instant cold packs as an alternative to ammonium nitrate.
Calcium ammonium nitrate has seen use in improvised explosives. The CAN is not used directly, but is instead first converted to ammonium nitrate; "More than 85% of the IEDs used against U.S. forces in Afghanistan contain homemade explosives, and of those, about 70% are made with ammonium nitrate derived from calcium ammonium nitrate". CAN and other fertilizers were banned in the Malakand Division and in Afghanistan following reports of its use by militants to make explosives. Due to these bans, "Potassium chlorate — the stuff that makes matches catch fire — has surpassed fertilizer as the explosive of choice for insurgents."
References
Calcium compounds
Ammonium compounds
Nitrates
Inorganic fertilizers
Double salts | Calcium ammonium nitrate | [
"Chemistry"
] | 484 | [
"Double salts",
"Oxidizing agents",
"Nitrates",
"Salts",
"Ammonium compounds"
] |
30,258,075 | https://en.wikipedia.org/wiki/FIGLA | Folliculogenesis-specific basic helix-loop-helix, also known as factor in the germline alpha (FIGalpha) or transcription factor FIGa, is a protein that in humans is encoded by the FIGLA gene. The FIGLA gene is a germ cell-specific transcription factor preferentially expressed in oocytes that can be found on human chromosome 2p13.3.
Function
This gene encodes a protein that functions in postnatal oocyte-specific gene expression. The protein is a basic helix-loop-helix transcription factor that regulates multiple oocyte-specific genes, including genes involved in folliculogenesis, oocyte differentiation, and those that encode the zona pellucida. Among the genes regulated by FIGLA are the zona pellucida genes ZP1, ZP2, and ZP3.
Clinical significance
Mutations in the FIGLA gene are associated with premature ovarian failure. Premature ovarian failure is a genetic disorder that leads to hypergonadotropic ovarian failure and infertility. It is believed that premature ovarian failure in humans is caused by FIGLA haploinsufficiency, which disrupts the formation of the primordial follicles. This was observed in Figla knockout mice, which had a diminished follicular endowment and accelerated oocyte loss throughout their reproductive life span. Women with mutations in the FIGLA gene were shown to have a form of premature ovarian failure. As well as failing to form primordial follicles, the knockout mice also lacked expression of the zona pellucida genes Zp1, Zp2, and Zp3.
References
Further reading
Transcription factors | FIGLA | [
"Chemistry",
"Biology"
] | 355 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
30,258,345 | https://en.wikipedia.org/wiki/Magnetoelectrochemistry | Magnetoelectrochemistry is a branch of electrochemistry dealing with magnetic effects in electrochemistry.
History
These effects have been supposed to exist since the time of Michael Faraday.
There have also been observations of the Hall effect in electrolytes. For a long time magnetoelectrochemistry remained an esoteric curiosity, but the field has developed rapidly in recent years and is now an active area of research. Other scientific fields which have contributed to the development of magnetoelectrochemistry are magnetohydrodynamics and convective diffusion theory.
Effects of magnetic field
There are three types of magnetic effects in electrochemistry:
on electrolytes
on mass transfer
on metal deposition
Notes
See also
Electrochemical engineering
Magnetochemistry
Electrochemical energy conversion
Magnetic mineralogy
Magnetohydrodynamics
External links
Encyclopedia of Electrochemistry
Magnetoelectrochem
HZDR
TU Dresden
Electrochemistry | Magnetoelectrochemistry | [
"Chemistry"
] | 195 | [
"Electrochemistry",
"Physical chemistry stubs",
"Electrochemistry stubs"
] |
30,268,344 | https://en.wikipedia.org/wiki/CGHS%20model | The Callan–Giddings–Harvey–Strominger model or CGHS model in short is a toy model of general relativity in 1 spatial and 1 time dimension.
Overview
General relativity is a highly nonlinear model, and as such, its 3+1D version is usually too complicated to analyze in detail. In 3+1D and higher, propagating gravitational waves exist, but not in 2+1D or 1+1D. In 2+1D, general relativity becomes a topological field theory with no local degrees of freedom, and all 1+1D models are locally flat. However, a slightly more complicated generalization of general relativity which includes dilatons will turn the 2+1D model into one admitting mixed propagating dilaton-gravity waves, as well as making the 1+1D model geometrically nontrivial locally. The 1+1D model still does not admit any propagating gravitational (or dilaton) degrees of freedom, but with the addition of matter fields, it becomes a simplified, but still nontrivial model. With other numbers of dimensions, a dilaton-gravity coupling can always be rescaled away by a conformal rescaling of the metric, converting the Jordan frame to the Einstein frame. But not in two dimensions, because the conformal weight of the dilaton is now 0. The metric in this case is more amenable to analytical solutions than the general 3+1D case. And of course, 0+1D models cannot capture any nontrivial aspect of relativity because there is no space at all.
This class of models retains just enough complexity to include among its solutions black holes, their formation, FRW cosmological models, gravitational singularities, etc. In the quantized version of such models with matter fields, Hawking radiation also shows up, just as in higher-dimensional models.
Action
A very specific choice of couplings and interactions leads to the CGHS model,
S = (1/2π) ∫ d²x √(−g) [ e^(−2φ) ( R + 4(∇φ)² + 4λ² ) − ½ Σᵢ (∇fᵢ)² ],
where g is the metric tensor, φ is the dilaton field, the fᵢ are the matter fields, and λ² is the cosmological constant. In particular, the cosmological constant is nonzero, and the matter fields are massless real scalars.
This specific choice is classically integrable, but still not amenable to an exact quantum solution. It is also the action for non-critical string theory and for the dimensional reduction of certain higher-dimensional models. This also distinguishes it from Jackiw–Teitelboim gravity and Liouville gravity, which are entirely different models.
The matter fields couple only to the causal structure, and in the light-cone gauge, ds² = −e^(2ρ) dx⁺ dx⁻, they have the simple generic form
fᵢ = fᵢ₊(x⁺) + fᵢ₋(x⁻),
with a factorization between left- and right-movers.
The Raychaudhuri equations are
and
.
The dilaton evolves according to
,
while the metric evolves according to
.
The conformal anomaly due to matter induces a Liouville term in the effective action.
Black hole
A vacuum black hole solution is given by
e^(−2φ) = e^(−2ρ) = M/λ − λ² x⁺ x⁻,
where M is the ADM mass.
Singularities appear at x⁺ x⁻ = M/λ³, where e^(−2φ) vanishes.
The masslessness of the matter fields allow a black hole to completely evaporate away via Hawking radiation. In fact, this model was originally studied to shed light upon the black hole information paradox.
See also
dilaton
general relativity
quantum gravity
RST model
Jackiw–Teitelboim gravity
Liouville gravity
References
Quantum gravity
General relativity | CGHS model | [
"Physics"
] | 704 | [
"Unsolved problems in physics",
"General relativity",
"Quantum gravity",
"Theory of relativity",
"Physics beyond the Standard Model"
] |
3,751,191 | https://en.wikipedia.org/wiki/Torricelli%27s%20law | Torricelli's law, also known as Torricelli's theorem, is a theorem in fluid dynamics relating the speed of fluid flowing from a hole to the height of fluid above the hole. The law states that the speed of efflux of a fluid through a sharp-edged hole in the wall of the tank filled to a height above the hole is the same as the speed that a body would acquire in falling freely from a height ,
where is the acceleration due to gravity. This expression comes from equating the kinetic energy gained, , with the potential energy lost, , and solving for . The law was discovered (though not in this form) by the Italian scientist Evangelista Torricelli, in 1643. It was later shown to be a particular case of Bernoulli's principle.
Derivation
Under the assumptions of an incompressible fluid with negligible viscosity, Bernoulli's principle states that the hydraulic energy is constant,
v²/2 + gz + p/ρ = v_A²/2 + g·z_A + p_A/ρ_A = const,
at any two points in the flowing liquid. Here v is fluid speed, g is the acceleration due to gravity, z is the height above some reference point, p is the pressure, and ρ is the density.
In order to derive Torricelli's formula, the first point, with no index, is taken at the liquid's surface, and the second, with index A, just outside the opening. Since the liquid is assumed to be incompressible, ρ is equal to ρ_A; both can be represented by one symbol ρ. The pressures p and p_A are typically both atmospheric pressure, so p = p_A. Furthermore z − z_A is equal to the height h of the liquid's surface over the opening:
z − z_A = h.
The velocity of the surface can be related to the outflow velocity v_A by the continuity equation v·A_vessel = v_A·A, where A is the orifice's cross section (A like Aperture) and A_vessel is the (cylindrical) vessel's cross section. Combining these relations and solving for the outflow velocity gives:
v_A = √( 2gh / (1 − (A/A_vessel)²) ).
Torricelli's law is obtained as a special case when the opening A is very small relative to the horizontal cross-section A_vessel of the container:
v_A = √(2gh).
Torricelli's law can only be applied when viscous effects can be neglected which is the case for water flowing out through orifices in vessels.
Experimental verification: Spouting can experiment
Every physical theory must be verified by experiments. The spouting can experiment consists of a cylindrical vessel filled up with water and with several holes in different heights. It is designed to show that in a liquid with an open surface, pressure increases with depth. The fluid exit velocity is greater further down the vessel.
The outflowing jet forms a downward parabola, and every parabola reaches farther out the larger the distance between the orifice and the surface is. The shape of the parabola depends only on the outflow velocity and can be determined from the fact that every molecule of the liquid follows a ballistic trajectory (see projectile motion) whose initial velocity is the outflow velocity v: measured from the orifice, the horizontal distance x and the vertical drop y of the jet are related by y = g·x²/(2v²).
The results confirm the correctness of Torricelli's law very well.
Discharge and time to empty a cylindrical vessel
Assuming that a vessel is cylindrical with fixed cross-sectional area A, with an orifice of area A_hole at the bottom, the rate of change of the water level height h is not constant. The water volume in the vessel is changing due to the discharge out of the vessel:
A·(dh/dt) = −A_hole·√(2gh).
Integrating both sides and re-arranging, we obtain
T = (A/A_hole)·√(2H/g),
where H is the initial height of the water level and T is the total time taken to drain all the water and hence empty the vessel.
This formula has several implications. If a tank with volume V, cross section A and height H, so that V = A·H, is fully filled, then the time to drain all the water is
T = (V/A_hole)·√(2/(gH)).
This implies that high tanks with the same filling volume drain faster than wider ones.
Lastly, we can re-arrange the above equation to determine the height of the water level as a function of time as
h(t) = H·(1 − t/T)²,
where H is the height of the container while T is the discharge time as given above.
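A small Python sketch of these formulas is given below; the tank and orifice dimensions are assumed, illustrative values.

```python
# Minimal sketch (assumed tank dimensions): emptying time of a cylindrical vessel,
# T = (A / A_hole) * sqrt(2*H/g), and the water level h(t) = H * (1 - t/T)**2.
import math

g = 9.81          # m/s^2
A = 0.50          # vessel cross-section, m^2 (assumed)
A_hole = 1.0e-4   # orifice cross-section, m^2 (assumed)
H = 1.2           # initial water height above the hole, m (assumed)

T = (A / A_hole) * math.sqrt(2 * H / g)
print(f"time to empty: {T:.0f} s (~{T / 60:.1f} min)")

def height(t: float) -> float:
    """Water level at time t, valid for 0 <= t <= T."""
    return H * (1 - t / T) ** 2

for t in (0.0, T / 2, T):
    print(f"h({t:7.1f} s) = {height(t):.3f} m")
```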
Discharge experiment, coefficient of discharge
The discharge theory can be tested by measuring the emptying time or the time series of the water level within the cylindrical vessel. In many cases such experiments do not confirm the presented discharge theory: when comparing the theoretical predictions of the discharge process with measurements, very large differences can be found. In reality, the tank usually drains much more slowly. Looking at the discharge formula
Q = A·√(2gh),
two quantities could be responsible for this discrepancy: the outflow velocity or the effective outflow cross section.
In 1738 Daniel Bernoulli attributed the discrepancy between the theoretical and the observed outflow behavior to the formation of a vena contracta, which reduces the outflow cross-section from the orifice's cross-section A to the contracted cross-section A_c, and stated that the discharge is:
Q = A_c·√(2gh).
Actually this is confirmed by state-of-the-art experiments (see ) in which the discharge, the outflow velocity and the cross-section of the vena contracta were measured. Here it was also shown that the outflow velocity is predicted extremely well by Torricelli's law and that no velocity correction (like a "coefficient of velocity") is needed.
The problem remains how to determine the cross-section of the vena contracta. This is normally done by introducing a discharge coefficient C_d which relates the discharge to the orifice's cross-section and Torricelli's law:
Q = C_d·A·√(2gh).
For low viscosity liquids (such as water) flowing out of a round hole in a tank, the discharge coefficient is in the order of 0.65. By discharging through a round tube or hose, the coefficient of discharge can be increased to over 0.9. For rectangular openings, the discharge coefficient can be up to 0.67, depending on the height-width ratio.
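The effect of the discharge coefficient can be illustrated with a short Python sketch; the water height, orifice diameter, and the value C_d = 0.65 are assumed, representative numbers.

```python
# Minimal sketch (assumed values): real discharge through a sharp-edged round hole,
# Q = C_d * A * sqrt(2*g*h), with a typical discharge coefficient of about 0.65.
import math

g, h = 9.81, 0.80        # gravity (m/s^2) and assumed water height above the orifice (m)
d = 0.02                 # assumed orifice diameter, m
A = math.pi * d**2 / 4
C_d = 0.65               # typical value for a round hole in a tank wall

Q_ideal = A * math.sqrt(2 * g * h)
Q_real = C_d * Q_ideal
print(f"ideal: {Q_ideal * 1000:.2f} L/s, with vena contracta: {Q_real * 1000:.2f} L/s")
```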
Applications
Horizontal distance covered by the jet of liquid
If h is the height of the orifice above the ground and H is the height of the liquid column from the ground (the height of the liquid's surface), then the horizontal distance covered by the jet of liquid to reach the same level as the base of the liquid column can be easily derived. Since h is the vertical height traveled by a particle of the jet stream, we have from the laws of falling bodies
h = ½·g·t²,
where t is the time taken by the jet particle to fall from the orifice to the ground. If the horizontal efflux velocity is v, then the horizontal distance traveled by the jet particle during the time duration t is
x = v·t.
Since the water level is H − h above the orifice, the horizontal efflux velocity is v = √(2g(H − h)), as given by Torricelli's law. Thus, from the two equations we have
x = √(2g(H − h))·√(2h/g) = 2·√(h(H − h)).
The location of the orifice that yields the maximum horizontal range is obtained by differentiating the above equation for x with respect to h and solving dx/dh = 0. Here we have
dx/dh = (H − 2h)/√(h(H − h)).
Solving dx/dh = 0 we obtain
h = H/2,
and the maximum range
x_max = H.
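The optimization above can be checked symbolically. The short sympy sketch below maximizes x(h) = 2√(h(H − h)) and recovers h = H/2 and a maximum reach equal to H.

```python
# Symbolic check of the optimization above: x(h) = 2*sqrt(h*(H - h)) is maximal
# at h = H/2, where the reach equals H.
import sympy as sp

h, H = sp.symbols("h H", positive=True)
x = 2 * sp.sqrt(h * (H - h))

best_h = sp.solve(sp.diff(x, h), h)
print(best_h)                               # [H/2]
print(sp.simplify(x.subs(h, best_h[0])))    # H
```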
Clepsydra problem
A clepsydra is a clock that measures time by the flow of water. It consists of a pot with a small hole at the bottom through which the water can escape. The amount of escaping water gives the measure of time. As given by the Torricelli's law, the rate of efflux through the hole depends on the height of the water; and as the water level diminishes, the discharge is not uniform. A simple solution is to keep the height of the water constant. This can be attained by letting a constant stream of water flow into the vessel, the overflow of which is allowed to escape from the top, from another hole. Thus having a constant height, the discharging water from the bottom can be collected in another cylindrical vessel with uniform graduation to measure time. This is an inflow clepsydra.
Alternatively, by carefully selecting the shape of the vessel, the water level in the vessel can be made to decrease at a constant rate. By measuring the level of water remaining in the vessel, the time can be measured with uniform graduation. This is an example of an outflow clepsydra. Since the water outflow rate is higher when the water level is higher (due to more pressure), the fluid's volume should be more than a simple cylinder when the water level is high. That is, the radius should be larger when the water level is higher. Let the radius r increase with the height h of the water level above the exit hole of area A; that is, r = r(h). We want to find the radius such that the water level has a constant rate of decrease, i.e. dh/dt = constant.
At a given water level h, the water surface area is π·r². The instantaneous rate of change in water volume is
dV/dt = π·r²·(dh/dt).
From Torricelli's law, the rate of outflow is
dV/dt = −A·√(2gh).
From these two equations,
π·r²·(dh/dt) = −A·√(2gh),
so for dh/dt to be constant, r² must be proportional to √h. Thus, the radius of the container should change in proportion to the quartic root of its height,
r ∝ h^(1/4).
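This can be checked symbolically: substituting r = c·h^(1/4) into π·r²·(dh/dt) = −A·√(2gh) leaves a rate of descent that does not depend on h. A short sympy sketch (with c an arbitrary proportionality constant) is given below.

```python
# Symbolic check: with r = c * h**(1/4), the balance pi*r**2 * dh/dt = -A*sqrt(2*g*h)
# yields a rate of descent dh/dt that no longer depends on h.
import sympy as sp

h, c, A, g = sp.symbols("h c A g", positive=True)
r = c * h ** sp.Rational(1, 4)                       # proposed vessel shape
dh_dt = -A * sp.sqrt(2 * g * h) / (sp.pi * r**2)     # solve the volume balance for dh/dt
print(sp.simplify(dh_dt))                            # -sqrt(2)*A*sqrt(g)/(pi*c**2): constant
```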
Likewise, if the shape of the vessel of the outflow clepsydra cannot be modified according to the above specification, then we need to use a non-uniform graduation to measure time. The emptying time formula above tells us the time should be calibrated as the square root of the discharged water height. More precisely,
Δt = (A_vessel/A)·√(2/g)·(√h₁ − √h₂),
where Δt is the time taken by the water level to fall from the height h₁ to the height h₂.
Torricelli's original derivation
Evangelista Torricelli's original derivation can be found in the second book 'De motu aquarum' of his 'Opera Geometrica'. He starts a tube AB (Figure (a)) filled up with water to the level A. Then a narrow opening is drilled at the level of B and connected to a second vertical tube BC. Due to the hydrostatic principle of communicating vessels the water lifts up to the same filling level AC in both tubes (Figure (b)). When finally the tube BC is removed (Figure (c)) the water should again lift up to this height, which is named AD in Figure (c). The reason for that behavior is the fact that a droplet's falling velocity from a height A to B is equal to the initial velocity that is needed to lift up a droplet from B to A.
When performing such an experiment only the height C (instead of D in figure (c)) will be reached which contradicts the proposed theory. Torricelli attributes this defect to the air resistance and to the fact that the descending drops collide with ascending drops.
Torricelli's argumentation is, as a matter of fact, wrong because the pressure in free jet is the surrounding atmospheric pressure, while the pressure in a communicating vessel is the hydrostatic pressure. At that time the concept of pressure was unknown.
See also
Darcy's law
Dynamic pressure
Fluid statics
Hagen–Poiseuille equation
Helmholtz's theorems
Kirchhoff equations
Knudsen equation
Manning equation
Mild-slope equation
Morison equation
Navier–Stokes equations
Oseen flow
Pascal's law
Poiseuille's law
Potential flow
Pressure
Static pressure
Pressure head
Relativistic Euler equations
Reynolds decomposition
Stokes flow
Stokes stream function
Stream function
Streamlines, streaklines and pathlines
References
Further reading
Stanley Middleman, An Introduction to Fluid Dynamics: Principles of Analysis and Design (John Wiley & Sons, 1997)
Eponymous theorems of physics
Fluid dynamics
Physics experiments | Torricelli's law | [
"Physics",
"Chemistry",
"Engineering"
] | 2,243 | [
"Physics experiments",
"Equations of physics",
"Chemical engineering",
"Eponymous theorems of physics",
"Experimental physics",
"Piping",
"Fluid dynamics",
"Physics theorems"
] |
3,752,883 | https://en.wikipedia.org/wiki/A15%20phases | The A15 phases (also known as β-W or Cr3Si structure types) are series of intermetallic compounds with the chemical formula A3B (where A is a transition metal and B can be any element) and a specific structure. The A15 phase is also one of the members in the Frank–Kasper phases family. Many of these compounds have superconductivity at around , which is comparatively high, and remain superconductive in magnetic fields of tens of teslas (hundreds of kilogauss). This kind of superconductivity (Type-II superconductivity) is an important area of study as it has several practical applications.
History
The first time that the A15 structure was observed was in 1931, when an electrolytically deposited layer of tungsten was examined. Discussion of whether the β-tungsten structure is an allotrope of tungsten or the structure of a tungsten suboxide was long-standing, but since the 1950s there have been many publications showing that the material is a true allotrope of tungsten.
The first intermetallic compound discovered with the typical A3B composition was chromium silicide, Cr3Si, found in 1933. Several other compounds with the A15 structure were discovered in the following years, but there was little interest in research on them. This changed with the discovery in 1953 that vanadium silicide, V3Si, showed superconductivity at around 17 K. In the following years, several other A3B superconductors were found. Niobium–germanium held the record for the highest critical temperature, 23.2 K, from 1973 until the discovery of the cuprate superconductors in 1986. It took time to establish a method for producing wires from the very brittle A15 phase materials, and this method is still complicated. Though some A15 phase materials can withstand higher magnetic field intensity and have higher critical temperatures than the NbZr and NbTi alloys, NbTi is still used for most applications due to easier manufacturing.
Nb3Sn is used for some high field applications, for example high-end MRI scanners and NMR spectrometers.
A relaxed form of the Voronoi diagram of the A15 phase seems to have the least surface area among all the possible partitions of three-dimensional Euclidean space in regions of equal volume. This partition, also known as the Weaire–Phelan structure, is often present in clathrate hydrates.
Examples
Vanadium-silicon
Vanadium-gallium
Niobium-germanium
Niobium-tin
Titanium-gold
Tungsten (β-phase)
See also
Weaire–Phelan structure, for the space tessellation generated by A15 phases
References
Further reading
Intermetallics
Superconductors
Crystal structure types | A15 phases | [
"Physics",
"Chemistry",
"Materials_science"
] | 587 | [
"Inorganic compounds",
"Metallurgy",
"Superconductivity",
"Crystal structure types",
"Crystallography",
"Intermetallics",
"Condensed matter physics",
"Alloys",
"Superconductors"
] |
3,753,477 | https://en.wikipedia.org/wiki/McMaster%20Nuclear%20Reactor | The McMaster Nuclear Reactor (MNR) is a 5 MWth open pool reactor located on the campus of McMaster University, in Hamilton, Ontario, Canada.
Description
MNR began operating in April 1959, as the first university-based research reactor in the Commonwealth of Nations, and has been the highest-flux research reactor in Canada since the closing of the National Research Universal (NRU) reactor at Chalk River Laboratories in 2018. The reactor consists of two connected pools; the core can be located and operated in either one. This allows the core to be moved away from experimental apparatus for maintenance. MNR is an example of a reactor where the core is visible while the reactor is operating. The core itself appears to be glowing blue when looked at from the surface, as a result of the Cherenkov radiation.
MNR is the only research reactor in Canada with a full containment structure. The reactor is fuelled with low-enrichment uranium and cooled and moderated with light water. Heat is transported to the atmosphere through the secondary coolant system via two cooling towers adjacent to the reactor building.
The reactor is used for a variety of purposes: undergraduate education involves NAA (Neutron Activation Analysis), reactor physics experiments and radioisotopes for tracers and counting experiments. Graduate studies use neutron beams for neutron radiography, neutron diffraction, prompt gamma NAA and geochronological techniques. Commercial activities include radioisotope production and neutron radiography. The facilities also include a Hot Cell and high-activity cobalt source and high level radioisotope laboratories. Researchers using MNR are based at McMaster as well as other universities in Canada and around the world.
The MNR also produces half of the world's supply of iodine-125, a radioisotope that is used to treat various types of cancer. During the 2009 shutdown of the Chalk River reactor, however, the university increased production of iodine-125 by 20% and offered to retrofit the MNR to handle the production of molybdenum-99. The MNR had previously handled the production of molybdenum in the 1970s when the Chalk River facilities underwent a vessel replacement.
Alleged links to terrorism and lawsuit
Paul L. Williams, an American author, has published books which claim that McMaster University in general, and the nuclear reactor specifically, have been infiltrated by terrorist groups who have managed to steal a quantity of unspecified nuclear material.
The University strenuously denies these claims, and in 2007 was in the process of suing Williams for upwards of $2 million. The Canadian Nuclear Safety Commission, which regulates all radioactive material in Canada, has released a letter stating that "We can confirm that there has never been a report of any nuclear material that has been lost or stolen from McMaster's reactor".
See also
Nuclear reactor
Research reactor
References
McMaster University
Nuclear research institutes
Nuclear technology in Canada
Buildings and structures in Hamilton, Ontario
Nuclear research reactors | McMaster Nuclear Reactor | [
"Engineering"
] | 599 | [
"Nuclear research institutes",
"Nuclear organizations"
] |
3,753,827 | https://en.wikipedia.org/wiki/9%2C10-Bis%28phenylethynyl%29anthracene | 9,10-Bis(phenylethynyl)anthracene (BPEA) is an aromatic hydrocarbon with the chemical formula C30H18. It displays strong fluorescence and is used as a chemiluminescent fluorophore with high quantum efficiency.
It is used in lightsticks as a fluorophore producing ghostly green light. It is also used as a dopant for organic semiconductors in OLEDs.
The emission wavelength can be shifted by substituting the anthracene core with halogens or alkyls. 2-Ethyl and 1,2-dimethyl substituted BPEAs are also in use.
1-chloro-9,10-bis(phenylethynyl)anthracene emits yellow-green light, used in 30-minute high-intensity Cyalume sticks
2-chloro-9,10-bis(phenylethynyl)anthracene emits green light, used in 12-hour low-intensity Cyalume sticks
See also
Lightstick
Organic light-emitting diode
5,12-Bis(phenylethynyl)naphthacene
9,10-Diphenylanthracene
External links
Absorption and emission spectra
National Pollutant Inventory - Polycyclic Aromatic Hydrocarbon Fact Sheet
Novel red-emitting BPEAs
Fluorescent dyes
Organic semiconductors
Anthracenes
Diynes
Phenyl compounds
Alkyne derivatives | 9,10-Bis(phenylethynyl)anthracene | [
"Chemistry"
] | 307 | [
"Semiconductor materials",
"Molecular electronics",
"Organic semiconductors"
] |
3,754,843 | https://en.wikipedia.org/wiki/Supervisory%20control%20theory | The supervisory control theory (SCT), also known as the Ramadge–Wonham framework (RW framework), is a method for automatically synthesizing supervisors that restrict the behavior of a plant so that as much as possible of the given specifications are fulfilled. The plant is assumed to generate events spontaneously. Each event falls into one of two categories: controllable or uncontrollable. The supervisor observes the string of events generated by the plant and may prevent the plant from generating a subset of the controllable events. However, the supervisor has no means of forcing the plant to generate an event.
In its original formulation, SCT modeled the plant and the specification as general formal languages, not necessarily the regular languages generated by finite automata that were used in most subsequent work.
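As an illustration of the supervision idea only (not the Ramadge–Wonham synthesis algorithm), the sketch below treats the supervisor as a map from the observed event string to the set of controllable events it disables; the event names and the disabling policy are hypothetical:

```python
# Minimal sketch of supervisory control: the plant generates events, the supervisor
# may only disable *controllable* events; uncontrollable events can never be blocked.

CONTROLLABLE = {"start", "load"}        # hypothetical controllable events
UNCONTROLLABLE = {"breakdown", "done"}  # hypothetical uncontrollable events

def supervisor(observed: list[str]) -> set[str]:
    """Return the set of controllable events to disable after the observed string."""
    # Hypothetical specification: never allow a new "load" while a "start" is pending.
    if observed.count("start") > observed.count("done"):
        return {"load"}
    return set()

def allowed_events(plant_enabled: set[str], observed: list[str]) -> set[str]:
    """Events the closed-loop system may generate next."""
    disabled = supervisor(observed) & CONTROLLABLE  # the supervisor cannot disable uncontrollable events
    return plant_enabled - disabled

print(allowed_events({"load", "breakdown"}, ["start"]))  # {'breakdown'}
```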
See also
References
Control theory | Supervisory control theory | [
"Mathematics"
] | 171 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
3,755,359 | https://en.wikipedia.org/wiki/Confluence | In geography, a confluence (also: conflux) occurs where two or more watercourses join to form a single channel. A confluence can occur in several configurations: at the point where a tributary joins a larger river (main stem); or where two streams meet to become the source of a river of a new name (such as the confluence of the Monongahela and Allegheny rivers, forming the Ohio River); or where two separated channels of a river (forming a river island) rejoin at the downstream end. The point of confluence where the channel flows into a larger body of water may be called the river mouth.
Scientific study of confluences
Confluences are studied in a variety of sciences. Hydrology studies the characteristic flow patterns of confluences and how they give rise to patterns of erosion, bars, and scour pools. The water flows and their consequences are often studied with mathematical models. Confluences are relevant to the distribution of living organisms (i.e., ecology) as well; "the general pattern [downstream of confluences] of increasing stream flow and decreasing slopes drives a corresponding shift in habitat characteristics."
Another science relevant to the study of confluences is chemistry, because sometimes the mixing of the waters of two streams triggers a chemical reaction, particularly in a polluted stream. The United States Geological Survey gives an example: "chemical changes occur when a stream contaminated with acid mine drainage combines with a stream with near-neutral pH water; these reactions happen very rapidly and influence the subsequent transport of metals downstream of the mixing zone."
A natural phenomenon at confluences that is obvious even to casual observers is a difference in color between the two streams; see images in this article for several examples. According to Lynch, "the color of each river is determined by many things: type and amount of vegetation in the watershed, geological properties, dissolved chemicals, sediments and biologic content – usually algae." Lynch also notes that color differences can persist for miles downstream before they finally blend completely.
River confluence flow zones
Hydrodynamic behaviour of flow in a confluence can be divided into six distinct features which are commonly called confluence flow zones (CFZ). These include
Stagnation zone
Flow deflection zone
Flow separation zone / recirculation zone
Maximum velocity zone
Flow recovery zone
Shear layers
Confluences in engineering
The broader field of engineering encompasses a vast assortment of subjects which concern confluences.
In hydraulic civil engineering, where two or more underground culverted / artificially buried watercourses intersect, great attention should be paid to the hydrodynamic aspects of the system to ensure the longevity and efficiency of the structure.
Engineers have to design these systems while considering a range of factors that ensure the discharge point is structurally stable, since the entrance of the lateral culvert into the main structure may compromise stability due to the lack of support at the discharge; this often requires additional supports in the form of structural bracing. The velocities and hydraulic efficiencies should be carefully calculated and can be altered by integrating different combinations of geometries and components such as gradients, cascades and a junction angle that is sympathetic to the direction of the watercourse's flow, in order to minimise turbulent flow, maximise evacuation velocity and ultimately maximise hydraulic efficiency.
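As a back-of-the-envelope illustration of the continuity consideration behind such designs (not a substitute for a full hydraulic analysis), the sketch below combines two culvert discharges at a junction and estimates the downstream mean velocity for an assumed downstream flow area; all values are illustrative:

```python
def junction_discharge(q_main_m3s: float, q_lateral_m3s: float) -> float:
    """Continuity at a junction: downstream discharge is the sum of the incoming discharges."""
    return q_main_m3s + q_lateral_m3s

def mean_velocity(discharge_m3s: float, area_m2: float) -> float:
    """Mean downstream velocity from discharge and flow cross-sectional area."""
    return discharge_m3s / area_m2

q_out = junction_discharge(2.0, 0.5)       # illustrative discharges in m^3/s
v_out = mean_velocity(q_out, area_m2=1.2)  # illustrative downstream flow area in m^2
print(f"downstream discharge: {q_out:.2f} m^3/s, mean velocity: {v_out:.2f} m/s")
```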
Cultural and societal significance
Since rivers often serve as political boundaries, confluences sometimes demarcate three abutting political entities, such as nations, states, or provinces, forming a tripoint. Various examples are found in the list below.
A number of major cities, such as Chongqing, St. Louis, and Khartoum, arose at confluences; further examples appear in the list. Within a city, a confluence often forms a visually prominent point, so that confluences are sometimes chosen as the site of prominent public buildings or monuments, as in Koblenz, Lyon, and Winnipeg. Cities also often build parks at confluences, sometimes as projects of municipal improvement, as at Portland and Pittsburgh. In other cases, a confluence is an industrial site, as in Philadelphia or Mannheim. Often a confluence lies in the shared floodplain of the two rivers and nothing is built on it, for example at Manaus, described below.
One other way that confluences may be exploited by humans is as sacred places in religions. Rogers suggests that for the ancient peoples of the Iron Age in northwest Europe, watery locations were often sacred, especially sources and confluences. Pre-Christian Slavic peoples chose confluences as the sites for fortified triangular temples, where they practiced human sacrifice and other sacred rites. In Hinduism, the confluence of two sacred rivers often is a pilgrimage site for ritual bathing. In Pittsburgh, a number of adherents to Mayanism consider their city's confluence to be sacred.
Notable confluences
Africa
At Lokoja, Nigeria, the Benue River flows into the Niger.
At Kazungula in Zambia, the Chobe River flows into the Zambezi. The confluence defines the tripoint of Zambia (north of the rivers), Botswana (south of the rivers) and Namibia (west of the rivers). The land border between Botswana and Zimbabwe to the east also reaches the Zambezi at this confluence, so there is a second tripoint (Zambia-Botswana-Zimbabwe) only 150 meters downstream from the first. See Kazungula and Quadripoint, and Gallery below for image.
The Sudanese capital of Khartoum is located at the confluence of the White Nile and the Blue Nile, the beginning of the Nile.
Asia
82 km north of Basra in Iraq at the town of Al-Qurnah is the confluence of the rivers Tigris and Euphrates, forming the Shatt al-Arab.
At Devprayag in India, the Ganges River originates at the confluence of the Bhagirathi and the Alaknanda; see images above.
Near Allahabad, India, the Yamuna flows into the Ganges. In Hinduism, this is a pilgrimage site for ritual bathing; during a Kumbh Mela event tens of millions of people visit the site. In Hindu belief the site is held to be a triple confluence (Triveni Sangam), the third river being the metaphysical (not physically present) Sarasvati.
Karad, in Maharashtra, India, is the site of the Pritisangam (meaning: Lovely Confluence), a T-shaped confluence of the Krishna River and Koyna River, where the Koyna River merges into the Krishna River, forming a T-shape; the merged rivers then flow to the east as the Krishna River.
Kuala Lumpur, the capital of Malaysia, is where the Gombak River (previously known as Sungai Lumpur, which means "muddy river") flows into the Klang River at the site of the Jamek Mosque. Recently, the Kolam Biru (Blue Pool), a pool with elaborate fountains, has been installed at the apex of the confluence.
Both Taipei and New Taipei are where the Dahan and Xindian meet and flow into the Tamsui River.
The Nam Khan River flows into the Mekong at Luang Prabang in Laos.
Pak Nam Pho, the downtown of Nakhon Sawan in Thailand, is the confluence of the rivers Ping and Nan, forming the Chao Phraya the main artery of central Thailand.
Pak Phraek, the old town zone of Kanchanaburi in Thailand, is the confluence of the rivers Khwae Yai and Khwae Noi, forming the Mae Klong the main artery of western Thailand.
The Pa Sak joins the Chao Phraya at Ayutthaya in Thailand. The confluence is the location of the historic monastery Wat Phanan Choeng, built around 26 years before the founding of the Ayutthaya Kingdom.
The Jialing flows into the Yangtze at Chongqing in China. The confluence forms a focal point in the city, marked by Chaotianmen Square, built in 1998.
In the Far East, the Amur forms the international boundary between China and Russia. The Ussuri, which also demarcates the border, flows into the Amur at a point midway between Fuyuan in China and Khabarovsk in Russia. The apex of the confluence is located in a rural area, part of China, where a commemorative park, Dongji Square, has been built; it features an enormous sculpture representing the Chinese character for "East". The Amur-Ussuri border region was the location of the Sino-Soviet border conflict of 1969; the borderline near the confluence was settled peacefully by treaty in 2008.
In Georgia, in the town of Pasanauri on the southern slopes of the Caucasus Mountains, the Tetri Aragvi ("White Aragvi") is joined by the Shavi Aragvi ("Black Aragvi"). Together, these two rivers continue as the Aragvi River. The conflux is known for its dramatic visual contrast of the two rivers.
Australia
The two largest rivers in Australia, the Murray and its tributary the Darling, converge at Wentworth, New South Wales.
Europe
Seine
The Seine divides in the historical center of Paris, flowing around two river islands, the Île Saint-Louis and the Île de la Cité. At the downstream confluence, where the river becomes a single channel again, the Île de la Cité is crossed by the famous Pont Neuf, adjacent to an equestrian statue of King Henri IV and the historically more recent Vert Galant park. The site has repeatedly been portrayed by artists including Monet, Renoir, and Pissarro.
Further upstream, the Marne empties into the Seine at Charenton-le-Pont and Alfortville, just southeast of the Paris city limits. The site is dominated by the Huatian Chinagora, a four-star hotel under Chinese management.
Rhine
The Rhine carries much river traffic, and major inland ports are found at its confluence with the Ruhr at Duisburg, and with the Neckar at Mannheim; see Mannheim Harbour.
The Main flows into the Rhine just south of Mainz.
The Mosel flows into the Rhine further north at Koblenz. The name "Koblenz" itself has its origin in the Latin name "Confluentes". In German, this confluence is known as the "Deutsches Eck" ("German corner") and is the site of an imposing monument to German unification featuring an equestrian statue of Kaiser Wilhelm I.
Upstream in Switzerland, a small town also named Koblenz (for the same reason) is where the Aare joins the Rhine.
Danube basin
Passau, Germany, sometimes called the City of Three Rivers, is the site of a triple confluence, described thus in a guidebook: "from the north the little Ilz sluices brackish water down from the peat-rich Bavarian Forest, meeting the cloudy brown of the Danube as it flows from the west and the pale snow-melt jade of the Inn from the south [i.e., the Alps] to create a murky tricolour."
The Thaya flows into the Morava in a rural location near Hohenau an der March in Austria, forming the tripoint of Austria, Czechia, and Slovakia.
The Morava flows into the Danube at Devín, on the border between Slovakia and Austria.
The Sava flows into the Danube at Belgrade, the capital of Serbia.
In karst topography, which arises in soluble rock, rivers sometimes flow underground and form subterranean confluences, as at Planina Cave in Slovenia, where the Pivka and Rak merge to form the Unica.
Other
Lyon, France lies where the Saône flows into the Rhone. A major new museum of science and anthropology, the Musée des Confluences, opened on the site in 2014.
Near Toulouse, France lies where the Ariège (river) flows into the Garonne. Both take their source in the Pyrenees.
The Lusatian Neisse flows into the Oder at a rural location in Poland opposite the German village of Ratzdorf. The two rivers form the Oder-Neisse line, the postwar boundary of Germany and Poland.
The Triangle of Three Emperors, a former political tripoint, lies in present-day Poland. The empires that abutted (in the decades before World War I) were the Austrian, German, and Russian.
Rovaniemi, the capital of Finnish Lapland and one of the largest towns above the Arctic Circle, is at the confluence of rivers Ounasjoki and Kemijoki.
Kryvyi Rih, Ukraine, is located at (and named after) the confluence of the Saksahan and the Inhulets River.
The Oka flows into the Volga at Nizhny Novgorod in Russia. The Alexander Nevsky Cathedral overlooks the site.
The English city of Southampton is built at the confluence of the tidal estuaries of the River Test and River Itchen which combine to form Southampton Water estuary.
North America
Mississippi basin
The Greater Twin Cities area of Minneapolis and St. Paul, Minnesota features two important Mississippi confluences. Near historical Fort Snelling and the town of Mendota—about 9 miles downstream on the Mississippi from Minneapolis—the Minnesota River flows into the Mississippi at Pike Island. The area around this confluence is a location of spiritual, cultural, and historical significance to the Dakota people and is also the site of the earliest European settlements in the Twin Cities area. About 30 miles further downstream from the Minnesota-Mississippi confluence—and 25 miles downstream from St. Paul—the Mississippi joins with the St. Croix River near Hastings, Minnesota, and Prescott, Wisconsin.
Vicksburg, Mississippi lies atop bluffs overlooking the confluence of the Mississippi River with its tributary the Yazoo. Both rivers, as well as the bluffs, played an important role in the Vicksburg Campaign, a pivotal event of the American Civil War.
The Missouri River flows into the Mississippi River at Jones-Confluence Point State Park, just north of St. Louis, Missouri. Slightly further upstream, the Illinois River flows into the Mississippi.
The Madison, Jefferson and Gallatin Rivers in Three Forks, Montana form the confluence of the Missouri River.
At Keokuk, Iowa, the Des Moines River flows into the Mississippi. This forms the political tripoint between the U.S. states of Iowa, Missouri, and Illinois.
Just south of Cairo, Illinois, the Ohio River flows into the Mississippi, forming the tripoint between the states of Illinois, Missouri, and Kentucky.
The Ohio River is formed by the confluence of the Monongahela and Allegheny rivers, located in Pittsburgh, Pennsylvania. The site is of great historical significance; in the 1970s it was upgraded by the creation of Point State Park, highlighted by a large fountain.
Atlantic watersheds
At Harpers Ferry, West Virginia, the Shenandoah River flows into the Potomac River, at the tripoint of the U.S. states of Virginia, West Virginia, and Maryland.
At Philadelphia, Pennsylvania, the Schuylkill River flows into the Delaware River, next to the former Philadelphia Naval Shipyard; the site remains industrial.
At Cohoes, New York, a few miles north of Albany, the Mohawk River flows into the Hudson in three channels separated by islands. The confluence is historically important: upstream traffic on or along the Hudson often took a left turn at the Mohawk, which offers a uniquely level passageway through the Appalachian Mountains that assisted commerce and the settlement of the West.
At Ottawa, the capital of Canada, the Rideau River flows—unusually, as a waterfall—into the Ottawa River; see Rideau Falls. On the island separating the two portions of the falls is a park with military monuments, among them the Ottawa Memorial.
The Hochelaga Archipelago, including the island and city of Montreal, is located where the Ottawa River flows into the St. Lawrence River in Quebec, Canada.
Winnipeg, Canada, is at the confluence of the Red River, and the Assiniboine River. The area is referred to as The Forks by locals, and has been an important trade location for over 6000 years.
Pacific watersheds
The Green River flows into the Colorado River at the heart of Canyonlands National Park in Utah's Canyon Country.
The Snake River flows into the Columbia River at Sacagawea State Park near the Tri-Cities of Washington. The Yakima River also flows into the Columbia just a few miles upstream, giving the region the unofficial designation of Three Rivers.
In Portland, Oregon, the Willamette River flows into the Columbia at Kelley Point Park, built on land acquired from the Port of Portland in 1984.
Lytton, British Columbia, Canada, is located at the confluence of the muddy Fraser River and the clearer Thompson River.
South America
Manaus, Brazil is on the Rio Negro near its confluence with the Amazon (see Meeting of Waters). It is the chief port and a hub for the region's extensive river system.
The Iguazú flows into the Paraná at the "Triple Frontier", the tripoint for Paraguay, Argentina, and Brazil.
In Ciudad Guayana, Venezuela, there is a confluence between the Orinoco River and the Caroní River.
Confluences of non-rivers
Occasionally, "confluence" is used to describe the meeting of tidal or other non-riverine bodies of water, such as two canals or a canal and a lake. A one-mile (1.6 km) portion of the Industrial Canal in New Orleans accommodates the Gulf Intracoastal Waterway and the Mississippi River-Gulf Outlet Canal; therefore those three waterways are confluent there.
The term confluence can also apply to the process of merging or flowing together of other substance. For example, it may refer to the merger of the flow of two glaciers.
See also
References
Letizia, Chiara (2017) "The Sacred Confluence, between Nature and Culture," in Marie Lecomte-Tilouine (ed.) Nature, Culture and Religion at the Crossroads of Asia. Routledge. Extracts available on line at Google Books.
External links
A collection of full-size, vivid photographs of confluences, most of them mentioned in the list above.
Physical geography
Rivers
Bodies of water
River morphology
Hydraulic engineering
Hydrology
Fluid dynamics | Confluence | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 3,730 | [
"Hydrology",
"Chemical engineering",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Environmental engineering",
"Piping",
"Hydraulic engineering",
"Fluid dynamics"
] |
3,755,801 | https://en.wikipedia.org/wiki/SU-8%20photoresist | SU-8 is a commonly used epoxy-based negative photoresist. Negative refers to a photoresist whereby the parts exposed to UV become cross-linked, while the remainder of the film remains soluble and can be washed away during development.
As shown in the structural diagram, SU-8 derives its name from the presence of 8 epoxy groups. This is a statistical average per moiety. It is these epoxies that cross-link to give the final structure.
It can be made into a viscous polymer that can be spun or spread over a thickness ranging from below 1 micrometer up to above 300 micrometers, or Thick Film Dry Sheets (TFDS) for lamination up to above 1 millimetre thick. Up to 500 μm, the resist can be processed with standard contact lithography. Above 500 μm, absorption leads to increasing sidewall undercuts and poor curing at the substrate interface. It can be used to pattern high aspect ratio structures. An aspect ratio of (> 20) has been achieved with the solution formulation and (> 40) has been demonstrated from the dry resist. Its maximum absorption is for ultraviolet light with a wavelength of the i-line: 365 nm (it is not practical to expose SU-8 with g-line ultraviolet light). When exposed, SU-8's long molecular chains cross-link, causing the polymerisation of the material. SU-8 series photoresists use gamma-butyrolactone or cyclopentanone as the primary solvent.
SU-8 was originally developed as a photoresist for the microelectronics industry, to provide a high-resolution mask for fabrication of semiconductor devices.
It is now mainly used in the fabrication of microfluidics (mainly via soft lithography, but also with other imprinting techniques such as nanoimprint lithography) and microelectromechanical systems parts. It is also one of the most biocompatible materials known and is often used in bio-MEMS for life science applications.
Composition and processing
SU-8 is composed of Bisphenol A Novolac epoxy that is dissolved in an organic solvent (gamma-butyrolactone GBL or cyclopentanone, depending on the formulation) and up to 10 wt% of mixed Triarylsulfonium/hexafluoroantimonate salt as the photoacid generator.
SU-8 absorbs light in the UV region, allowing fabrication of relatively thick (hundreds of micrometers) structures with nearly vertical side walls. The fact that a single photon can trigger multiple polymerizations makes SU-8 a chemically amplified resist which is polymerized by photoacid generation. The light irradiated on the resist interacts with the salt in the solution, creating hexafluoroantimonic acid, which then protonates the epoxide groups in the resin monomers. The monomers are thus activated, but the polymerization will not proceed significantly until the temperature is raised during the post-exposure bake. It is at this stage that the epoxy groups in the resin cross-link to form the cured structure. When fully cured, the high degree of crosslinking gives the resist its excellent mechanical properties.
The processing of SU-8 is similar to other negative resists with particular attention on the control of the temperature in the baking steps. The baking times depend on the SU-8 layer thickness; the thicker the layer, the longer the baking time. The temperature is controlled during the baking in order to reduce stress formation in the thick layer (leading to cracks) as the solvent evaporates.
The soft bake is the most important of the bake steps for stress formation. It is performed after spin coating. Its function is to remove the solvent from the resist and make the layer solid. Typically at least 5% of the solvent remains in the layer after the soft bake; the thicker the coating, the harder it becomes to remove the solvent. The bake is performed on a programmable hot plate to reduce the skinning effect, in which solvent depletion at the surface creates a dense layer that makes the remainder of the solvent more difficult to remove. In order to reduce stress, the bake procedure is generally a two-step process: holding at 65 °C, then ramping to 95 °C and holding again for a time dependent on the layer thickness. The temperature is then lowered slowly to room temperature.
When dry films are used, the photoresist is laminated rather than spin-coated. As this formulation is essentially solventless (less than 1% solvent remaining), it does not require a soft bake step and does not suffer stress or skinning. For enhanced adhesion, a post lamination bake can be added. This step is carried out in a similar way to the solution based resist - i.e. holding at 65 °C then 95 °C, the time dependent on film thickness.
After this stage the SU-8 layer can be exposed, typically through a photomask with an inverse pattern, as the resist is negative. The exposure time is a function of exposure dose and film thickness. After exposure, the SU-8 needs to be baked again to complete the polymerization. This baking step is not as critical as the prebake, but the temperature ramp (again to 95 °C) needs to be slow and controlled. At this point the resist is ready to be developed.
The main developer for SU-8 is 1-methoxy-2-propanol acetate. Development time is primarily a function of SU-8 thickness.
After exposing and developing, its highly cross-linked structure gives it high stability to chemicals and radiation damage - hence the name "resist". Cured cross-linked SU-8 shows very low levels of outgassing in a vacuum.
However it is very difficult to remove, and tends to outgas in an unexposed state.
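The spin speed, bake times and development times described above all scale with the target thickness. A minimal sketch of the generic spin-coating scaling (thickness roughly proportional to the inverse square root of spin speed) is shown below; the calibration point is hypothetical, and real process parameters should be taken from the manufacturer's datasheet:

```python
def spin_speed_for_thickness(target_um: float, ref_um: float, ref_rpm: float) -> float:
    """
    Estimate the spin speed needed for a target resist thickness using the generic
    spin-coating scaling (thickness ~ rpm**-0.5), calibrated from one reference point.
    Real process values should come from the resist datasheet, not this approximation.
    """
    return ref_rpm * (ref_um / target_um) ** 2

# Hypothetical calibration: assume a formulation that gives 50 um at 2000 rpm.
print(f"{spin_speed_for_thickness(100.0, ref_um=50.0, ref_rpm=2000):.0f} rpm")  # thicker film -> slower spin
```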
Newer formulations
SU-8 2000 series resists use cyclopentanone for the primary solvent and can be used to create films between 0.5 and 100 μm in thickness. This formulation may offer improved adhesion on some substrates versus the original formulation.
SU-8 3000 series resists also use cyclopentanone for the primary solvent and are designed to be spun into thicker films ranging from 2 to 75 μm in a single coat.
SU-8 GLM2060 series of low-stress photoresist consist of epoxy GBL and silica formulation CTE 14.
SU-8 GCM3060 Series of GERSTELTEC conductive SU8 with nanoparticles of silver.
SU-8 GMC10xx Series of GERSTELTEC colored SU8 Red, Bleau, Green, black and others.
SU-8 GMJB10XX Series of GERSTELTEC low viscosities epoxy for inkjet applications.
SU8 GM10XX Series of Classic GERSTELTEC epoxy.
Its polymerization proceeds upon photoactivation of a photoacid generator (triarylsulfonium salts, for example) and subsequent post-exposure baking. The polymerization is a cationic chain growth, which takes place by ring-opening polymerization of the epoxide groups.
SUEX is a Thick Dry Film Sheet (TDFS) which is a solventless formulation applied by lamination. As this formulation is a dry sheet, there is high uniformity, no edge-bead formation and very little waste. These sheets come in a range of thicknesses from 100 μm to over 1mm. DJMicrolaminates also sell a thinner range, ADEX TFDS, which are available in thicknesses from 5 μm through to 75 μm.
External links
SU-8: Thick Photo-Resist for MEMS A webpage with a lot of material data and process tricks.
http://www.gersteltec.ch/
Microchem data sheet
SU 8 Information Provides information on how to use SU 8 to create desired thicknesses.
SU-8 Spin Speed Calculator Selects a SU-8 type and calculates RPM for a given thickness.
Suppliers: The solution-based SU-8 can be obtained from Microchem or Gersteltec; the SUEX dry sheets are obtained from DJ Microlaminates, formerly known as DJ Devcorp
References
Polymer chemistry
Polymers
Materials science | SU-8 photoresist | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,755 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan",
"Polymer chemistry",
"Polymers"
] |
35,544,664 | https://en.wikipedia.org/wiki/Yang%E2%80%93Mills%20equations | In physics and mathematics, and especially differential geometry and gauge theory, the Yang–Mills equations are a system of partial differential equations for a connection on a vector bundle or principal bundle. They arise in physics as the Euler–Lagrange equations of the Yang–Mills action functional. They have also found significant use in mathematics.
Solutions of the equations are called Yang–Mills connections or instantons. The moduli space of instantons was used by Simon Donaldson to prove Donaldson's theorem.
Motivation
Physics
In their foundational paper on the topic of gauge theories, Robert Mills and Chen-Ning Yang developed (essentially independent of the mathematical literature) the theory of principal bundles and connections in order to explain the concept of gauge symmetry and gauge invariance as it applies to physical theories. The gauge theories Yang and Mills discovered, now called Yang–Mills theories, generalised the classical work of James Maxwell on Maxwell's equations, which had been phrased in the language of a gauge theory by Wolfgang Pauli and others. The novelty of the work of Yang and Mills was to define gauge theories for an arbitrary choice of Lie group , called the structure group (or in physics the gauge group, see Gauge group (mathematics) for more details). This group could be non-Abelian as opposed to the case corresponding to electromagnetism, and the right framework to discuss such objects is the theory of principal bundles.
The essential points of the work of Yang and Mills are as follows. One assumes that the fundamental description of a physical model is through the use of fields, and derives that under a local gauge transformation (change of local trivialisation of principal bundle), these physical fields must transform in precisely the way that a connection (in physics, a gauge field) on a principal bundle transforms. The gauge field strength is the curvature of the connection, and the energy of the gauge field is given (up to a constant) by the Yang–Mills action functional
The principle of least action dictates that the correct equations of motion for this physical theory should be given by the Euler–Lagrange equations of this functional, which are the Yang–Mills equations derived below:
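In standard notation (a reconstruction; normalisations and sign conventions vary by author), the Yang–Mills action functional and its Euler–Lagrange equations can be written:

```latex
% Yang--Mills action functional (up to a constant) and its Euler--Lagrange equations,
% for a connection A with curvature F_A on a bundle over a Riemannian manifold X.
\mathrm{YM}(A) = \int_X \lVert F_A \rVert^2 \,\mathrm{dvol}_g ,
\qquad
d_A^{*} F_A = 0 .
```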
Mathematics
In addition to the physical origins of the theory, the Yang–Mills equations are of important geometric interest. There is in general no natural choice of connection on a vector bundle or principal bundle. In the special case where this bundle is the tangent bundle to a Riemannian manifold, there is such a natural choice, the Levi-Civita connection, but in general there is an infinite-dimensional space of possible choices. A Yang–Mills connection gives some kind of natural choice of a connection for a general fibre bundle, as we now describe.
A connection is defined by its local forms for a trivialising open cover for the bundle . The first attempt at choosing a canonical connection might be to demand that these forms vanish. However, this is not possible unless the trivialisation is flat, in the sense that the transition functions are constant functions. Not every bundle is flat, so this is not possible in general. Instead one might ask that the local connection forms are themselves constant. On a principal bundle the correct way to phrase this condition is that the curvature vanishes. However, by Chern–Weil theory if the curvature vanishes (that is to say, is a flat connection), then the underlying principal bundle must have trivial Chern classes, which is a topological obstruction to the existence of flat connections: not every principal bundle can have a flat connection.
The best one can hope for is then to ask that instead of vanishing curvature, the bundle has curvature as small as possible. The Yang–Mills action functional described above is precisely (the square of) the -norm of the curvature, and its Euler–Lagrange equations describe the critical points of this functional, either the absolute minima or local minima. That is to say, Yang–Mills connections are precisely those that minimize their curvature. In this sense they are the natural choice of connection on a principal or vector bundle over a manifold from a mathematical point of view.
Definition
Let be a compact, oriented, Riemannian manifold. The Yang–Mills equations can be phrased for a connection on a vector bundle or principal -bundle over , for some compact Lie group . Here the latter convention is presented. Let denote a principal -bundle over . Then a connection on may be specified by a Lie algebra-valued differential form on the total space of the principal bundle. This connection has a curvature form , which is a two-form on with values in the adjoint bundle of . Associated to the connection is an exterior covariant derivative , defined on the adjoint bundle. Additionally, since is compact, its associated compact Lie algebra admits an invariant inner product under the adjoint representation.
Since is Riemannian, there is an inner product on the cotangent bundle, and combined with the invariant inner product on there is an inner product on the bundle of -valued two-forms on . Since is oriented, there is an -inner product on the sections of this bundle. Namely,
where inside the integral the fiber-wise inner product is being used, and is the Riemannian volume form of . Using this -inner product, the formal adjoint operator of is defined by
.
Explicitly this is given by where is the Hodge star operator acting on two-forms.
Assuming the above set up, the Yang–Mills equations are a system of (in general non-linear) partial differential equations given by
Since the Hodge star is an isomorphism, by the explicit formula for the Yang–Mills equations can equivalently be written
A connection satisfying () or () is called a Yang–Mills connection.
Every connection automatically satisfies the Bianchi identity , so Yang–Mills connections can be seen as a non-linear analogue of harmonic differential forms, which satisfy
.
In this sense the search for Yang–Mills connections can be compared to Hodge theory, which seeks a harmonic representative in the de Rham cohomology class of a differential form. The analogy being that a Yang–Mills connection is like a harmonic representative in the set of all possible connections on a principal bundle.
Derivation
The Yang–Mills equations are the Euler–Lagrange equations of the Yang–Mills functional, defined by
To derive the equations from the functional, recall that the space of all connections on is an affine space modelled on the vector space . Given a small deformation of a connection in this affine space, the curvatures are related by
To determine the critical points of (), compute
The connection is a critical point of the Yang–Mills functional if and only if this vanishes for every , and this occurs precisely when () is satisfied.
Moduli space of Yang–Mills connections
The Yang–Mills equations are gauge invariant. Mathematically, a gauge transformation is an automorphism of the principal bundle , and since the inner product on is invariant, the Yang–Mills functional satisfies
and so if satisfies (), so does .
There is a moduli space of Yang–Mills connections modulo gauge transformations. Denote by the gauge group of automorphisms of . The set classifies all connections modulo gauge transformations, and the moduli space of Yang–Mills connections is a subset. In general neither or is Hausdorff or a smooth manifold. However, by restricting to irreducible connections, that is, connections whose holonomy group is given by all of , one does obtain Hausdorff spaces. The space of irreducible connections is denoted , and so the moduli spaces are denoted and .
Moduli spaces of Yang–Mills connections have been intensively studied in specific circumstances. Michael Atiyah and Raoul Bott studied the Yang–Mills equations for bundles over compact Riemann surfaces. There the moduli space obtains an alternative description as a moduli space of holomorphic vector bundles. This is the Narasimhan–Seshadri theorem, which was proved in this form relating Yang–Mills connections to holomorphic vector bundles by Donaldson. In this setting the moduli space has the structure of a compact Kähler manifold. Moduli of Yang–Mills connections have been most studied when the dimension of the base manifold is four. Here the Yang–Mills equations admit a simplification from a second-order PDE to a first-order PDE, the anti-self-duality equations.
Anti-self-duality equations
When the dimension of the base manifold is four, a coincidence occurs: the Hodge star operator maps two-forms to two-forms,
.
The Hodge star operator squares to the identity in this case, and so has eigenvalues and . In particular, there is a decomposition
into the positive and negative eigenspaces of , the self-dual and anti-self-dual two-forms. If a connection on a principal -bundle over a four-manifold satisfies either or , then by (), the connection is a Yang–Mills connection. These connections are called either self-dual connections or anti-self-dual connections, and the equations the self-duality (SD) equations and the anti-self-duality (ASD) equations. The spaces of self-dual and anti-self-dual connections are denoted by and , and similarly for and .
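In standard notation (again a reconstruction; sign conventions vary), the decomposition of two-forms and the (anti-)self-duality conditions can be written:

```latex
% Splitting of two-forms into self-dual and anti-self-dual parts in dimension four,
% and the corresponding first-order equations for a connection A with curvature F_A.
\Omega^2 = \Omega^{+} \oplus \Omega^{-} ,
\qquad
\star F_A = F_A \ \ (\text{self-dual}),
\qquad
\star F_A = -F_A \ \ (\text{anti-self-dual}).
```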
The moduli space of ASD connections, or instantons, was most intensively studied by Donaldson in the case where and is simply-connected. In this setting, the principal -bundle is classified by its second Chern class, . For various choices of principal bundle, one obtains moduli spaces with interesting properties. These spaces are Hausdorff, even when allowing reducible connections, and are generically smooth. It was shown by Donaldson that the smooth part is orientable. By the Atiyah–Singer index theorem, one may compute that the dimension of , the moduli space of ASD connections when , to be
where is the first Betti number of , and is the dimension of the positive-definite subspace of with respect to the intersection form on . For example, when and , the intersection form is trivial and the moduli space has dimension . This agrees with existence of the BPST instanton, which is the unique ASD instanton on up to a 5 parameter family defining its centre in and its scale. Such instantons on may be extended across the point at infinity using Uhlenbeck's removable singularity theorem. More generally, for positive the moduli space has dimension
Applications
Donaldson's theorem
The moduli space of Yang–Mills equations was used by Donaldson to prove Donaldson's theorem about the intersection form of simply-connected four-manifolds. Using analytical results of Clifford Taubes and Karen Uhlenbeck, Donaldson was able to show that in specific circumstances (when the intersection form is definite) the moduli space of ASD instantons on a smooth, compact, oriented, simply-connected four-manifold gives a cobordism between a copy of the manifold itself, and a disjoint union of copies of the complex projective plane . We can count the number of copies of in two ways: once using that signature is a cobordism invariant, and another using a Hodge-theoretic interpretation of reducible connections. Interpreting these counts carefully, one can conclude that such a smooth manifold has diagonalisable intersection form.
The moduli space of ASD instantons may be used to define further invariants of four-manifolds. Donaldson defined polynomials on the second homology group of a suitably restricted class of four-manifolds, arising from pairings of cohomology classes on the moduli space. This work has subsequently been surpassed by Seiberg–Witten invariants.
Dimensional reduction and other moduli spaces
Through the process of dimensional reduction, the Yang–Mills equations may be used to derive other important equations in differential geometry and gauge theory. Dimensional reduction is the process of taking the Yang–Mills equations over a four-manifold, typically , and imposing that the solutions be invariant under a symmetry group. For example:
By requiring the anti-self-duality equations to be invariant under translations in a single direction of , one obtains the Bogomolny equations which describe magnetic monopoles on .
By requiring the self-duality equations to be invariant under translation in two directions, one obtains Hitchin's equations first investigated by Hitchin. These equations naturally lead to the study of Higgs bundles and the Hitchin system.
By requiring the anti-self-duality equations to be invariant in three directions, one obtains the Nahm equations on an interval.
There is a duality between solutions of the dimensionally reduced ASD equations on and called the Nahm transform, after Werner Nahm, who first described how to construct monopoles from Nahm equation data. Hitchin showed the converse, and Donaldson proved that solutions to the Nahm equations could further be linked to moduli spaces of rational maps from the complex projective line to itself.
The duality observed for these solutions is theorized to hold for arbitrary dual groups of symmetries of a four-manifold. Indeed there is a similar duality between instantons invariant under dual lattices inside , instantons on dual four-dimensional tori, and the ADHM construction can be thought of as a duality between instantons on and dual algebraic data over a single point.
Symmetry reductions of the ASD equations also lead to a number of integrable systems, and Ward's conjecture is that in fact all known integrable ODEs and PDEs come from symmetry reduction of ASDYM. For example reductions of SU(2) ASDYM give the sine-Gordon and Korteweg–de Vries equation, of ASDYM gives the Tzitzeica equation, and a particular reduction to dimensions gives the integrable chiral model of Ward. In this sense it is a 'master theory' for integrable systems, allowing many known systems to be recovered by picking appropriate parameters, such as choice of gauge group and symmetry reduction scheme. Other such master theories are four-dimensional Chern–Simons theory and the affine Gaudin model.
Chern–Simons theory
The moduli space of Yang–Mills equations over a compact Riemann surface can be viewed as the configuration space of Chern–Simons theory on a cylinder . In this case the moduli space admits a geometric quantization, discovered independently by Nigel Hitchin and Axelrod–Della Pietra–Witten.
See also
Connection (vector bundle)
Connection (principal bundle)
Donaldson theory
Stable Yang–Mills connection
F-Yang–Mills equations
Bi-Yang–Mills equations
Hermitian Yang–Mills equations
Deformed Hermitian Yang–Mills equations
Yang–Mills–Higgs equations
Notes
References
Differential geometry
Mathematical physics
Partial differential equations | Yang–Mills equations | [
"Physics",
"Mathematics"
] | 3,044 | [
"Applied mathematics",
"Theoretical physics",
"Mathematical physics"
] |
35,547,268 | https://en.wikipedia.org/wiki/Bacterial%20morphological%20plasticity | Bacterial morphological plasticity refers to changes in the shape and size that bacterial cells undergo when they encounter stressful environments. Although bacteria have evolved complex molecular strategies to maintain their shape, many are able to alter their shape as a survival strategy in response to protist predators, antibiotics, the immune response, and other threats.
Bacterial shape and size under selective forces
Normally, bacteria have various shapes and sizes, including coccus, rod and helical/spiral (among other less common forms), which allow for their classification. For instance, rod shapes may allow bacteria to attach more readily in environments with shear stress (e.g., in flowing water). Cocci may have access to small pores, creating more attachment sites per cell and hiding themselves from external shear forces. Spiral bacteria combine some of the characteristics of cocci (small footprints) and of filaments (more surface area on which shear forces can act) with the ability to form an unbroken set of cells to build biofilms. Several bacteria alter their morphology in response to the types and concentrations of external compounds. Bacterial morphology changes help to optimize interactions with cells and the surfaces to which they attach. This mechanism has been described in bacteria such as Escherichia coli and Helicobacter pylori.
Bacterial filamentation
Physiological mechanisms
Oxidative stress, nutrient limitation, DNA damage and antibiotic exposure are examples of stressors that cause bacteria to halt septum formation and cell division. Filamentous bacteria have been considered to be over-stressed, sick and dying members of the population. However, the filamentous members of some communities have vital roles in the population's continued existence, since the filamentous phenotype can confer protection against lethal environments. Filamentous bacteria can be over 90 μm in length and play an important role in the pathogenesis of human cystitis. Filamentous forms arise via several different mechanisms.
Base Excision Repair (BER) mechanism
This is a strategy to repair DNA damage observed in E. coli. This involves two types of enzymes:
Bifunctional glycosylases: the endonuclease III (encoded by nth gene)
Apurinic/Apirimidinic (AP)-endonucleases: endonuclease IV (encoded by nfo gene) and exonuclease III (encoded by xth gene).
Under this mechanism, daughter cells are protected from receiving damaged copies of the bacterial chromosome, while bacterial survival is promoted. Mutants for these genes lack BER activity, and strong formation of filamentous structures is observed.
SulA/FtsZ mediated filamentation
This is a mechanism to halt cell division and repair DNA. In the presence of single-stranded DNA regions, caused by various external cues that induce mutations, the major bacterial recombinase (RecA) binds to these DNA regions and is activated by the presence of free nucleotide triphosphates. Activated RecA stimulates the autoproteolysis of the SOS transcriptional repressor LexA. The LexA regulon includes a cell division inhibitor, SulA, which prevents the transmission of mutant DNA to the daughter cells. SulA is a dimer that binds FtsZ (a tubulin-like GTPase) in a 1:1 ratio and acts specifically on its polymerization, which results in the formation of non-septated bacterial filaments. A similar mechanism may occur in Mycobacterium tuberculosis, which also elongates after being phagocytized.
M. tuberculosis
Septum site determining protein (Ssd), encoded by rv3660c, promotes filamentation in response to the stressful intracellular environment. Ssd inhibits septum formation and is also found in Mycobacterium smegmatis. The bacterial filament ultrastructure is consistent with inhibition of FtsZ polymerization (previously described). Ssd is believed to be part of a global regulatory mechanism in this bacterium that promotes a shift into an altered metabolic state.
Helicobacter pylori
In this spiral-shaped Gram-negative bacterium, filamentation is regulated by two mechanisms: the peptidases that cause peptidoglycan relaxation, and the coiled-coil-rich proteins (Ccrp) that are responsible for the helical cell shape in vitro as well as in vivo. A rod shape probably confers an advantage for motility over the regular helical shape. In this model there is another protein, Mre, which is involved not in the maintenance of cell shape but in the cell cycle. It has been demonstrated that mutant cells were highly elongated due to a delay in cell division and contained non-segregated chromosomes.
Environmental cues
Immune response
Some of the strategies by which bacteria bypass host defenses include the generation of filamentous structures. As has been observed in other organisms (such as fungi), filamentous forms are resistant to phagocytosis. As an example, during urinary tract infection, filamentous structures of uropathogenic E. coli (UPEC) start to develop in response to the host innate immune response (more precisely, in response to Toll-like receptor 4, TLR4). TLR4 is stimulated by lipopolysaccharide (LPS) and recruits neutrophils (PMNs), which are important leukocytes for eliminating these bacteria. By adopting filamentous structures, bacteria resist these phagocytic cells and their neutralizing activity (which includes antimicrobial peptides, degradative enzymes and reactive oxygen species).
It is believed that filamentation is induced as a response to DNA damage (by the mechanisms described above), involving the SulA mechanism and additional factors. Furthermore, the length of the filamentous bacteria could allow a stronger attachment to the epithelial cells, with an increased number of adhesins participating in the interaction, making the work of the PMNs even harder. The interaction between phagocytic cells and bacteria adopting a filamentous shape thus provides an advantage for bacterial survival. In this regard, filamentation could be not only a virulence factor but also a resistance factor in these bacteria.
Predator protist
Bacteria exhibit a high degree of "morphological plasticity" that protects them from predation. Bacterial capture by protozoa is affected by size and irregularities in shape of bacteria. Oversized, filamentous, or prosthecate bacteria may be too large to be ingested. On the other hand, other factors such as extremely tiny cells, high-speed motility, tenacious attachment to surfaces, formation of biofilms and multicellular conglomerates may also reduce predation. Several phenotypic features of bacteria are adapted to escape protistan-grazing pressure.
Protistan grazing, or bacterivory, is protozoan feeding on bacteria. It affects prokaryotic size and the distribution of microbial groups. Protists use several feeding mechanisms to seek and capture prey, and bacteria have to avoid being consumed by them. Six feeding mechanisms are listed by Kevin D. Young:
Filter feeding: transport water through a filter or sieve
Sedimentation: allows prey to settle into a capture device
Interception: capture by predator-induced current or motility and phagocytosis
Raptorial: predator grasps and ingests prey through pharynx or by pseudopods
Pallium: prey engulfed e.g. by extrusion of feeding membrane
Myzocytosis: punctures prey and sucks out cytoplasm and contents
Bacterial responses are elicited depending on the predator and prey combination, because feeding mechanisms differ among the protists. Moreover, the grazing protists also produce by-products which directly lead to the morphological plasticity of prey bacteria. For example, the morphological phenotypes of Flectobacillus spp. were evaluated in the presence and absence of the flagellate grazer Ochromonas spp. under environmentally controlled laboratory conditions in a chemostat. Without the grazer and with an adequate nutrient supply, Flectobacillus spp. grew mainly as medium-sized rods (4-7 μm), averaging 6.2 μm in length. With the predator present, the average cell length shifted to 18.6 μm, a size resistant to grazing. If the bacteria are exposed to the soluble by-products of grazing Ochromonas spp. that pass through a dialysis membrane, bacterial length can increase to an average of 11.4 μm. Filamentation thus occurs as a direct response to effectors produced by the predator, and there is a size preference for grazing that varies for each species of protist. Filamentous bacteria larger than 7 μm in length are generally inedible by marine protists. This morphological class is called grazing resistant. Thus, filamentation prevents phagocytosis and killing by the predator.
Bimodal effect
The bimodal effect is the situation in which bacterial cells in an intermediate size range are consumed more rapidly than the very large or the very small. Bacteria smaller than 0.5 μm in diameter are grazed by protists four to six times less than larger cells. Moreover, filamentous cells or cells with diameters greater than 3 μm are often too large for protists to ingest, or are grazed at substantially lower rates than smaller bacteria. The specific effects vary with the size ratio between predator and prey. Pernthaler et al. classified susceptible bacteria into four groups by rough size, as listed below (a minimal classification sketch follows the list).
Bacterial size < 0.4 μm were not grazed well
Bacterial size between 0.4 μm and 1.6 μm were "grazing vulnerable"
Bacterial size between 1.6 μm and 2.4 μm were "grazing suppressed"
Bacterial size > 2.4 μm were "grazing resistant"
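As a purely illustrative Python sketch (the function name and return labels are invented here, not part of the cited study), these size classes can be expressed as a simple lookup over the boundary values quoted above:

```python
def grazing_class(cell_size_um):
    """Bucket a cell size (in micrometres) into the rough Pernthaler et al.
    classes quoted above; purely illustrative, not a published algorithm."""
    if cell_size_um < 0.4:
        return "not grazed well"
    elif cell_size_um < 1.6:
        return "grazing vulnerable"
    elif cell_size_um < 2.4:
        return "grazing suppressed"
    return "grazing resistant"

print([grazing_class(s) for s in (0.3, 1.0, 2.0, 7.0)])
```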
Filamentous prey is resistant to protist predation in a number of marine environments, although no bacterium is entirely safe: some predators graze the larger filaments to some degree. The morphological plasticity of some bacterial strains appears under different growth conditions. For instance, at enhanced growth rates some strains can form large thread-like morphotypes, while filament formation in subpopulations can also occur during starvation or under suboptimal growth conditions. These morphological shifts could be triggered by external chemical cues that might be released by the predator itself.
Besides bacterial size, several other factors affect predation by protists. Regarding bacterial shape, a spiral morphology may play a defensive role against predatory feeding. For example, Arthrospira may reduce its susceptibility to predation by altering its spiral pitch; this alteration interferes with the geometry of the protist's ingestion apparatus. Multicellular complexes of bacterial cells also limit the protist's ability to ingest them. Cells in biofilms or microcolonies are often more resistant to predation. For instance, the swarm cells of Serratia liquefaciens resist predation by its predator, Tetrahymena. Because the normal-sized cells that first contact a surface are the most susceptible, elongated swarm cells are needed to protect the population from predation until the biofilm matures. Aquatic bacteria can produce a wide range of extracellular polymeric substances (EPS), comprising proteins, nucleic acids, lipids, polysaccharides and other biological macromolecules. EPS secretion protects bacteria from grazing by heterotrophic nanoflagellates (HNFs). EPS-producing planktonic bacteria typically develop subpopulations of single cells and microcolonies that are embedded in an EPS matrix. The larger microcolonies are also protected from flagellate predation because of their size. The shift to the colonial type may be a passive consequence of selective feeding on single cells. However, microcolony formation can also be specifically induced in the presence of predators by cell-to-cell communication (quorum sensing).
As for motility, bacteria with high-speed motility sometimes avoid grazing better than nonmotile or slower strains, especially the smallest, fastest bacteria. Moreover, a cell's movement strategy may be altered by predation: bacteria may move by a run-and-reverse strategy, which helps them beat a hasty retreat before being trapped, instead of the run-and-tumble strategy. However, one study showed that the probability of random contact between predator and prey increases with bacterial swimming, and motile bacteria can be consumed at higher rates by HNFs. In addition, bacterial surface properties also affect predation. For example, there is evidence that protists prefer gram-negative over gram-positive bacteria: protists consume gram-positive cells at much lower rates than gram-negative cells, and heterotrophic nanoflagellates actively avoid grazing on gram-positive actinobacteria. Grazing on gram-positive cells also requires a longer digestion time than on gram-negative cells, so the predator cannot handle more prey until the previously ingested material has been digested or expelled. Bacterial cell surface charge and hydrophobicity have also been suggested to reduce grazing. Another strategy bacteria can use to avoid predation is to poison their predator. For example, certain bacteria such as Chromobacterium violaceum and Pseudomonas aeruginosa can secrete toxic agents related to quorum sensing to kill their predators.
Antibiotics
Antibiotics can induce a broad range of morphological changes in bacterial cells, including spheroplast, protoplast and ovoid cell formation, filamentation (cell elongation), localized swelling, bulge formation, blebbing, branching, bending, and twisting. Some of these changes are accompanied by altered antibiotic susceptibility or altered bacterial virulence. For example, filamentous bacteria are commonly found in the clinical specimens of patients treated with β-lactam antibiotics. Filamentation is accompanied by both a decrease in antibiotic susceptibility and an increase in bacterial virulence. This has implications for both disease treatment and disease progression.
Antibiotics used to treat Burkholderia pseudomallei infection (melioidosis), for example β-lactams, fluoroquinolones and thymidine synthesis inhibitors, can induce filamentation and other physiological changes. The ability of some β-lactam antibiotics to induce bacterial filamentation is attributable to their inhibition of certain penicillin-binding proteins (PBPs). PBPs are responsible for assembly of the peptidoglycan network in the bacterial cell wall. Inhibition of PBP-2 changes normal cells to spheroplasts, while inhibition of PBP-3 changes normal cells to filaments. PBP-3 synthesizes the septum in dividing bacteria, so inhibition of PBP-3 leads to the incomplete formation of septa in dividing bacteria, resulting in cell elongation without separation. Ceftazidime, ofloxacin, trimethoprim and chloramphenicol have all been shown to induce filamentation. Treatment at or below the minimal inhibitory concentration (MIC) induces bacterial filamentation and decreases killing within human macrophages. B. pseudomallei filaments revert to normal forms when the antibiotics are removed, and daughter cells maintain cell-division capacity and viability when re-exposed to antibiotics. Thus, filamentation may be a bacterial survival strategy. In Pseudomonas aeruginosa, antibiotic-induced filamentation appears to trigger a change from normal growth phase to stationary growth phase. Filamentous bacteria also release more endotoxin (lipopolysaccharide), one of the toxins responsible for septic shock.
In addition to the mechanism described above, some antibiotics induce filamentation via the SOS response. During repair of DNA damage, the SOS response aids bacterial propagation by inhibiting cell division. DNA damage induces the SOS response in E. coli through the DpiBA two-component signal transduction system, leading to inactivation of the ftsI gene product, penicillin binding protein 3 (PBP-3). The ftsI gene is one of a group of filamentation temperature-sensitive (fts) genes involved in cell division. Its product, PBP-3, as mentioned above, is a membrane transpeptidase required for peptidoglycan synthesis at the septum. Inactivation of the ftsI gene product requires the SOS-promoting recA and lexA genes as well as dpiA, and transiently inhibits bacterial cell division. DpiA is the effector for the DpiB two-component system. Interaction of DpiA with replication origins competes with the binding of the replication proteins DnaA and DnaB. When overexpressed, DpiA can interrupt DNA replication and induce the SOS response, resulting in inhibition of cell division.
Nutritional stress
Nutritional stress can change bacterial morphology. A common shape alteration is filamentation, which can be triggered by a limited availability of one or more substrates, nutrients or electron acceptors, since a filament increases a cell's uptake surface area without appreciably changing its volume. Moreover, filamentation benefits bacterial cells attaching to a surface because it increases the specific surface area in direct contact with the solid medium. In addition, filamentation may allow bacterial cells to access nutrients by increasing the chance that part of the filament will contact a nutrient-rich zone and pass compounds on to the rest of the cell's biomass. For example, Actinomyces israelii grows as filamentous or branched rods in the absence of phosphate, cysteine, or glutathione, but returns to a regular rod-like morphology when these nutrients are added back.
See also
Filamentation
Protoplasts
Spheroplasts
References
Bacteriology
Morphology (biology) | Bacterial morphological plasticity | [
"Biology"
] | 3,805 | [
"Morphology (biology)"
] |
35,548,424 | https://en.wikipedia.org/wiki/Systematic%20Protein%20Investigative%20Research%20Environment | Systematic Protein Investigative Research Environment (SPIRE) provides web-based experiment-specific mass spectrometry (MS) proteomics analysis in order to identify proteins and peptides, and label-free expression and relative expression analyses. SPIRE provides a web-interface and generates results in both interactive and simple data formats.
Methodology
SPIRE's analyses are based on an experimental design that generates false discovery rates and local false discovery rates (FDR, LFDR) and integrates open-source search and data analysis methods. By combining X! Tandem, OMSSA and SpectraST, SPIRE can produce an increase in protein IDs (50–90%) over current combinations of scoring and single search engines, while also providing accurate multi-faceted error estimation. SPIRE combines its analysis results with data on protein function, pathways and protein expression from model organisms.
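The FDR idea referred to above can be illustrated with a generic target–decoy estimate; this is only a minimal sketch of the concept and does not reproduce SPIRE's actual scoring or LFDR machinery (the function name and example scores are made up):

```python
def target_decoy_fdr(target_scores, decoy_scores, threshold):
    """Generic target-decoy FDR estimate at a score threshold: decoy hits
    above the threshold divided by target hits above the threshold."""
    t = sum(s >= threshold for s in target_scores)
    d = sum(s >= threshold for s in decoy_scores)
    return d / t if t else 0.0

print(target_decoy_fdr([0.9, 0.8, 0.7, 0.6], [0.65, 0.3], threshold=0.6))  # 0.25
```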
Integration with other information
SPIRE also connects results to publicly available proteomics data through its Multi-Omics Profiling Expression Database (MOPED). SPIRE can provide analysis and annotation for user-supplied protein ID and expression data. Users can upload data (standardized appropriately) or mail in data files.
References
Further reading
Mass spectrometry software
Proteomics
Protein databases | Systematic Protein Investigative Research Environment | [
"Physics",
"Chemistry"
] | 248 | [
"Mass spectrometry software",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Chemistry software"
] |
35,549,075 | https://en.wikipedia.org/wiki/Regularized%20meshless%20method | In numerical mathematics, the regularized meshless method (RMM), also known as the singular meshless method or desingularized meshless method, is a meshless boundary collocation method designed to solve certain partial differential equations whose fundamental solution is explicitly known. The RMM is a strong-form collocation method with merits being meshless, integration-free, easy-to-implement, and high stability. Until now this method has been successfully applied to some typical problems, such as potential, acoustics, water wave, and inverse problems of bounded and unbounded domains.
Description
The RMM employs the double layer potentials from the potential theory as its basis/kernel functions. Like the method of fundamental solutions (MFS), the numerical solution is approximated by a linear combination of double layer kernel functions with respect to different source points. Unlike the MFS, the collocation and source points of the RMM, however, are coincident and placed on the physical boundary without the need of a fictitious boundary in the MFS. Thus, the RMM overcomes the major bottleneck in the MFS applications to the real world problems.
When the collocation and source points coincide, the double layer kernel functions present various orders of singularity. Thus, a subtracting and adding-back regularizing technique is introduced to remove or cancel these singularities.
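For contrast with the RMM's coincident source and collocation points, the following minimal Python/NumPy sketch shows the plain MFS for Laplace's equation on the unit disk, with sources on a fictitious circle. It does not reproduce the RMM's subtracting-and-adding-back desingularization, and the parameter choices (64 points, source radius 1.5) are arbitrary assumptions:

```python
import numpy as np

def mfs_laplace_disk(n=64, r_src=1.5):
    """Plain MFS sketch: Laplace's equation in the unit disk with Dirichlet
    data g(theta) = cos(theta), whose exact harmonic extension is u(x, y) = x."""
    th = 2 * np.pi * np.arange(n) / n
    bnd = np.column_stack([np.cos(th), np.sin(th)])          # collocation points
    src = r_src * np.column_stack([np.cos(th), np.sin(th)])  # fictitious sources
    # Fundamental solution of the 2-D Laplacian: G(x, s) = -log|x - s| / (2*pi).
    G = -np.log(np.linalg.norm(bnd[:, None, :] - src[None, :, :], axis=2)) / (2 * np.pi)
    coeff = np.linalg.solve(G, np.cos(th))                   # fit the boundary data
    x = np.array([0.3, 0.2])                                 # an interior point
    u = (-np.log(np.linalg.norm(x - src, axis=1)) / (2 * np.pi)) @ coeff
    return u, x[0]                                           # numerical vs exact value

print(mfs_laplace_disk())
```

The two printed numbers should agree to several digits, which illustrates why the MFS is attractive and why removing its fictitious boundary, as the RMM does, is worthwhile.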
History and recent development
Today the finite element method (FEM), finite difference method (FDM), finite volume method (FVM), and boundary element method (BEM) are the dominant numerical techniques in the numerical modeling of many fields of engineering and science. Mesh generation is tedious and can be very challenging when solving high-dimensional problems with moving or complex-shaped boundaries; it is computationally costly and often mathematically troublesome.
The BEM has long been claimed to alleviate such drawbacks thanks to its boundary-only discretization and semi-analytical nature. Despite these merits, however, the BEM involves quite sophisticated mathematics and some tricky singular integrals. Moreover, surface meshing in a three-dimensional domain remains a nontrivial task. Over the past decades, considerable effort has been devoted to alleviating or eliminating these difficulties, leading to the development of meshless/meshfree boundary collocation methods which require neither domain nor boundary meshing. Among these methods, the MFS is the most popular, with the merits of easy programming, mathematical simplicity, high accuracy, and fast convergence.
In the MFS, a fictitious boundary outside the problem domain is required in order to avoid the singularity of the fundamental solution. However, determining the optimal location of the fictitious boundary is a nontrivial task that is still being studied. Considerable effort has since been devoted to removing this long-perplexing issue. Recent advances include, for example, the boundary knot method (BKM), the regularized meshless method (RMM), the modified MFS (MMFS), and the singular boundary method (SBM)
The methodology of the RMM was first proposed by Young and his collaborators in 2005. The key idea is to introduce a subtracting and adding-back regularizing technique to remove the singularity of the double layer kernel function at the origin, so that the source points can be placed directly on the real boundary. Up to now, the RMM has successfully been applied to a variety of physical problems, such as potential, exterior acoustics, antiplane piezoelectricity, acoustic eigenproblems with multiply-connected domains, inverse problems, Poisson's equation and water wave problems. Furthermore, some improved formulations have been made aiming to further improve the feasibility and efficiency of this method; see, for example, the weighted RMM for irregular domain problems and the analytical RMM for 2D Laplace problems.
See also
Radial basis function
Boundary element method
Method of fundamental solutions
Boundary knot method
Boundary particle method
Singular boundary method
References
Numerical analysis
Numerical differential equations | Regularized meshless method | [
"Mathematics"
] | 817 | [
"Computational mathematics",
"Mathematical relations",
"Approximations",
"Numerical analysis"
] |
35,550,129 | https://en.wikipedia.org/wiki/State%20Batteries%20in%20Western%20Australia | State Batteries in Western Australia were government owned and run ore-crushing facilities for the gold mining industry. Western Australia was the only Australian state to provide batteries to assist gold prospectors and small mines. They existed in almost all of the mineral fields of Western Australia.
State Batteries were gold batteries where ore was crushed so that the gold could be separated out. Stamp mills were gauged by the number of heads they had in operation for the crushing of ore.
Many of the government operated batteries had very short operating times, some for a year or two, while a few were 50 years or more in operation. They were part of the Western Australian Department of Mines operations.
Origins
The first private battery in Kalgoorlie was constructed at the Croesus mine in 1894.
As early as 1897 there was consideration of ore-crushing facilities being funded by private or government means.
The first government battery was constructed at Norseman in 1898. But by 1906 there was a Batteries Inquiry Board.
Decline
In the 1930s, despite the depression, a significant number still operated.
There were close to 100 operating Batteries in Western Australia – either private or Government in 1949, and by 1958 there were less than 50. Currently there are no operating state batteries, but ore processing continues in some of the same locations, such as Tuckabianna.
By 1982 a government review of State Battery operations had led to a functional review, and ultimately to the closure of the State Batteries in 1987.
List
The following State Batteries are known to have existed in Western Australia.
Bamboo 1913–1962
Black Range-. see also Sandstone
Bulong 1898–1899
Carlaminda
Coolgardie 1904
Cue 1919–1968
Darlot 1901-1980s
Desdemona 1909–1912
Devon 1908–?
Donnybrook 1900–1904
Duketon 1905–1907
Dumpling Gully
Kalgoorlie 1932
Kalpini 1906–1911
Laverton 1902 Known to be operating between 1916 and 1941.
Leonora 1898
Linden 1908
Marble Bar 1910–?
Marvel Loch – Known to be operating between 1912 and 1950
Meekatharra 1901–
Messenger's Patch 1909–?
Menzies 1904 – ?
Mt Egerton State Battery 1912–1921
Mt Ida 1898–1953
Mt Jackson 1912–1921
Mt Keith 1913–1928
Mt Sir Samuel 1910–1921
Mulline 1898–1921
Mulwarrie 1901–1920
Nannine 1907–1912
Niagara 1900–1922
Norseman 1898–? modernised in 1950
Ora Banda 1913–?
Paddington 1903–?
Paynes Find 1912–?
Paynesville 1900–1902
Pig well 1904–1912
Pinjin 1905–1914
Quinns 1911–1920
Randalls 1905–1908
Ravelstone -Peak Hill 1917–1965
Ravensthorpe ? –
Sandstone 1911
South Greenbushes 1906
Southern Cross 1903–1905
Tuckabianna 1918–?
Tuckanurra 1898–1923
Warriedar 1940 ?
Widgiemooltha 1900–1911
Wiluna 1904–1950
Yalgoo 1898–1941
Yarri 1905–?
Yerilla 1898–1920
Youanmi 1909–1940
Yundamindera 1903–1907
See also
Hints to Prospectors and Owners of Treatment Plants
References
Gold mining in Western Australia
Stamp mills | State Batteries in Western Australia | [
"Chemistry",
"Engineering"
] | 630 | [
"Stamp mills",
"Metallurgical facilities",
"Mining equipment"
] |
35,551,743 | https://en.wikipedia.org/wiki/Carlitz%20exponential | In mathematics, the Carlitz exponential is a characteristic p analogue to the usual exponential function studied in real and complex analysis. It is used in the definition of the Carlitz module – an example of a Drinfeld module.
Definition
We work over the polynomial ring Fq[T] of one variable over a finite field Fq with q elements. The completion C∞ of an algebraic closure of the field Fq((T−1)) of formal Laurent series in T−1 will be useful. It is a complete and algebraically closed field.
First we need analogues of the factorials, which appear in the definition of the usual exponential function. For i > 0 we define
$D_i := \prod_{j=0}^{i-1}\left(T^{q^{i}} - T^{q^{j}}\right)$
and D0 := 1. Note that the usual factorial is inappropriate here, since n! vanishes in Fq[T] unless n is smaller than the characteristic of Fq[T].
Using this we define the Carlitz exponential eC:C∞ → C∞ by the convergent sum
$e_C(z) := \sum_{i=0}^{\infty} \frac{z^{q^{i}}}{D_i}.$
Relation to the Carlitz module
The Carlitz exponential satisfies the functional equation
$e_C(Tz) = T\,e_C(z) + \left(e_C(z)\right)^{q} = (T+\tau)\,e_C(z),$
where we may view $\tau$ as the $q$-th power map $z \mapsto z^{q}$, or as an element of the ring $C_\infty\{\tau\}$ of noncommutative (twisted) polynomials. By the universal property of polynomial rings in one variable this extends to a ring homomorphism ψ:Fq[T]→C∞{τ}, defining a Drinfeld Fq[T]-module over C∞{τ}. It is called the Carlitz module.
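As a quick sanity check of the definitions above, the following Python/SymPy sketch verifies, in characteristic p, the recursion $D_i = (T^{q^i} - T)\,D_{i-1}^{q}$ that makes the functional equation work term by term (the choices q = p = 3 and the truncation at i = 3 are arbitrary assumptions for the demonstration):

```python
from sympy import symbols, Poly, GF, prod

p = 3                      # an assumed small prime; here q = p for simplicity
q = p
T = symbols('T')
dom = GF(p)

def D(i):
    """Carlitz factorial D_i = prod_{j=0}^{i-1} (T^{q^i} - T^{q^j}), with D_0 = 1."""
    if i == 0:
        return Poly(1, T, domain=dom)
    return prod(Poly(T**(q**i) - T**(q**j), T, domain=dom) for j in range(i))

# Check the recursion D_i = (T^{q^i} - T) * D_{i-1}^q in characteristic p;
# this recursion is what makes e_C(Tz) = T e_C(z) + e_C(z)^q hold coefficient-wise.
for i in range(1, 4):
    assert D(i) == Poly(T**(q**i) - T, T, domain=dom) * D(i - 1)**q
print("Carlitz factorial recursion verified for i = 1, 2, 3")
```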
References
Algebraic number theory
Finite fields | Carlitz exponential | [
"Mathematics"
] | 304 | [
"Algebraic number theory",
"Number theory"
] |
35,552,831 | https://en.wikipedia.org/wiki/Ciliogenesis | Ciliogenesis is defined as the building of the cell's antenna (primary cilia) or extracellular fluid mediation mechanism (motile cilium). It includes the assembly and disassembly of the cilia during the cell cycle. Cilia are important appendages of cells and are involved in numerous activities such as cell signaling, processing developmental signals, and directing the flow of fluids such as mucus over and around cells. Due to the importance of these cell processes, defects in ciliogenesis can lead to numerous human diseases related to non-functioning cilia known as ciliopathies.
Assembly
Primary cilia are formed when a cell exits the cell cycle. Cilia consist of four main compartments: the basal body at the base, the transition zone, the axoneme (an arrangement of nine doublet microtubules considered to be the core of the cilium), and the ciliary membrane. Primary cilia contain nine doublet microtubules arranged as a cylinder in their axoneme and are denoted as a 9+0 pattern. Motile cilia are denoted as a 9+2 pattern because they contain two extra microtubules in the center of the cylinder that forms the axoneme. Because of these differences between primary and motile cilia, differences are also seen in the formation process.
Ciliogenesis occurs through an ordered set of steps. Basal bodies migrate to the surface of the cell and attach to the cell cortex. Along the way, the basal bodies attach to membrane vesicles that fuse with the plasma membrane of the cell. The alignment of cilia is determined by the positioning and orientation of the basal bodies at this step. Once the alignment is determined, axonemal microtubules extend from the basal body and form the cilia.
Proteins must be synthesized in the cytoplasm of the cell and cannot be synthesized within cilia. For the cilium to elongate, proteins must be selectively imported from the cytoplasm into the cilium and transported to the tip of the cilium by intraflagellar transport (IFT). Once the cilium is completely formed, it continues to incorporate new tubulin at the tip of the cilia while older tubulin is simultaneously degraded. This requires an active mechanism that maintains ciliary length. Impairments in these mechanisms can affect the motility of the cell and cell signaling between cells.
There are two noted types of ciliogenesis: compartmentalized and cytosolic. Most cells undergo compartmentalized ciliogenesis, in which cilia are enveloped by extensions of the plasma membrane for the entirety of development. In cytosolic ciliogenesis, the axoneme must interact with proteins in the cytoplasm and is therefore directly exposed to it. In some cells, cytosolic ciliogenesis occurs after compartmentalized ciliogenesis.
Disassembly
Cilia disassembly is much less understood than cilia assembly. From recent discoveries, three distinct types of cilia disassembly have been identified. One variety occurs when the length of the cilium is gradually reduced until it is no longer functional. Another is shedding, in which cilia are severed from the main cell body. An example of this is Chlamydomonas, in which a severing enzyme known as katanin separates basal bodies from axonemes.
In some organisms, a third method of cilia disassembly has been seen, in which the entire axoneme is internalized and later disintegrated.
The presence of cilia is inversely related to the progression of the cell cycle, as assembly occurs during cellular quiescence and disassembly occurs when the cell cycle is stimulated.
Regulation
Different cells use their cilia for different purposes, such as sensory signaling or the movement of fluid. For this reason, when cilia form and how long they are can differ from cell to cell. The processes controlling ciliary formation, degradation, and length must be regulated to ensure that each cell is able to perform its necessary tasks.
Each type of cell has an optimal length for its cilia which must be regulated to ensure optimal function of the cell. Some of the same processes that are used to control the formation and removal of cilia (such as IFT) are thought to be used in the regulation of cilia length. Cilia length also differs depending on where a cell is in the cell cycle.
Three categories of molecular events that potentially regulate cilia disassembly include activation of AurA kinase and deacetylation of microtubules, depolymerization of microtubules, and ciliary membrane remodeling.
Cilia regulation is grossly understudied; however, dysregulation of ciliogenesis is linked to several diseases.
Ciliopathies
Ciliary defects can lead to a broad range of human diseases known as ciliopathies that are caused by mutations in ciliary proteins. Because of how widespread cilia are, defects can cause ciliopathies in many different regions of the body.
Cilia also play a role in cell signaling and the cell cycle therefore defects to them can have a serious impact on the cell’s ability to function.
Some common ciliopathies include primary ciliary dyskinesia, hydrocephalus, polycystic liver and kidney disease, some forms of retinal degeneration, nephronophthisis, Bardet–Biedl syndrome, Alström syndrome, and Meckel–Gruber syndrome.
References
Cell biology
Organelles | Ciliogenesis | [
"Biology"
] | 1,178 | [
"Cell biology"
] |
35,554,502 | https://en.wikipedia.org/wiki/Multi-Omics%20Profiling%20Expression%20Database | The Multi-Omics Profiling Expression Database (MOPED) was an expanding multi-omics resource that supported rapid browsing of transcriptomics and proteomics information from publicly available studies on model organisms and humans. As of 2021, it has ceased activities and is inaccessible online.
Systematic Protein Investigative Research Environment
MOPED is designed to simplify the comparison and sharing of data for the greater research community. MOPED employs the standardized analysis pipeline SPIRE to uniquely provide protein level absolute and relative expression data, meta-analysis capabilities and quantitative data. Processed relative expression transcriptomics data were obtained from the Gene Expression Omnibus (GEO). Data can be queried for specific proteins and genes, browsed based on organism, tissue, localization and condition, and sorted by false discovery rate and expression. MOPED empowers users to visualize their own expression data and compare it with existing studies. Further, MOPED links to various protein and pathway databases, including GeneCards, Panther, Entrez, UniProt, KEGG, SEED, and Reactome. Protein and gene identifiers are integrated from GeneCards (cross-referenced with MOPED), Genbank, RefSeq, UniProt, WormBase, and Saccharomyces Genome Database (SGD). The current version of MOPED (MOPED 2.5, 2014) contains approximately 5 million total records including ~260 experiments and ~390 conditions. MOPED is developed and supported by the Kolker team at Seattle Children's Research Institute.
Model Organism Protein Expression Database
MOPED was previously known as the Model Organism Protein Expression Database, before changing its name to the Multi-Omics Profiling Expression Database.
References
Further reading
Biological databases
Proteomics
Omics | Multi-Omics Profiling Expression Database | [
"Biology"
] | 362 | [
"Bioinformatics",
"Omics",
"Biological databases"
] |
28,807,941 | https://en.wikipedia.org/wiki/%CE%95-net%20%28computational%20geometry%29 | In computational geometry, an ε-net (pronounced epsilon-net) is the approximation of a general set by a collection of simpler subsets. In probability theory it is the approximation of one probability distribution by another.
Background
Let X be a set and R be a set of subsets of X; such a pair is called a range space or hypergraph, and the elements of R are called ranges or hyperedges. An ε-net of a subset P of X is a subset N of P such that any range r ∈ R with |r ∩ P| ≥ ε|P| intersects N. In other words, any range that intersects at least a proportion ε of the elements of P must also intersect the ε-net N.
For example, suppose X is the set of points in the two-dimensional plane, R is the set of closed filled rectangles (products of closed intervals), and P is the unit square [0, 1] × [0, 1]. Then the set N consisting of the 8 points shown in the adjacent diagram is a 1/4-net of P, because any closed filled rectangle intersecting at least 1/4 of the unit square must intersect one of these points. In fact, any (axis-parallel) square, regardless of size, will have a similar 8-point 1/4-net.
For any range space with finite VC dimension d, regardless of the choice of P, there exists an ε-net of P of size
$O\!\left(\frac{d}{\varepsilon}\log\frac{d}{\varepsilon}\right);$
because the size of this set is independent of P, any set P can be described using a set of fixed size.
This facilitates the development of efficient approximation algorithms. For example, suppose we wish to estimate an upper bound on the area of a given region, that falls inside a particular rectangle P. One can estimate this to within an additive factor of ε times the area of P by first finding an ε-net of P, counting the proportion of elements in the ε-net falling inside the region with respect to the rectangle P, and then multiplying by the area of P. The runtime of the algorithm depends only on ε and not P. One straightforward way to compute an ε-net with high probability is to take a sufficient number of random points, where the number of random points also depends only on ε. For example, in the diagram shown, any rectangle in the unit square containing at most three points in the 1/4-net has an area of at most 3/8 + 1/4 = 5/8.
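The random-sampling remark above can be turned into a small Python experiment. The sample-size formula used here only mimics the asymptotic shape (d/ε)·log(d/ε) with made-up constants, and the check probes finitely many random rectangles rather than all of them, so it demonstrates the net property only empirically:

```python
import math
import random

def eps_net_by_sampling(points, eps, d=4, seed=0):
    """Draw a random subset of `points` whose size grows like (d/eps)*log(d/eps).
    The constant factors are illustrative assumptions, not the tight theorem bounds."""
    rng = random.Random(seed)
    m = max(1, int((d / eps) * math.log(d / eps)))
    return rng.sample(points, min(m, len(points)))

def in_rect(p, r):
    x0, y0, x1, y1 = r
    return x0 <= p[0] <= x1 and y0 <= p[1] <= y1

# Empirical check of the net property against random axis-parallel rectangles.
rng = random.Random(1)
P = [(rng.random(), rng.random()) for _ in range(2000)]
eps = 0.25
N = eps_net_by_sampling(P, eps)
for _ in range(500):
    xs = sorted(rng.random() for _ in range(2))
    ys = sorted(rng.random() for _ in range(2))
    r = (xs[0], ys[0], xs[1], ys[1])
    if sum(in_rect(p, r) for p in P) >= eps * len(P):      # a "heavy" rectangle
        assert any(in_rect(p, r) for p in N)               # holds with high probability
print("every sampled heavy rectangle was hit by the net")
```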
ε-nets also provide approximation algorithms for the NP-complete hitting set and set cover problems.
Probability theory
Let D be a probability distribution over some set X. An ε-net for a class H of subsets of X is any subset S ⊆ X such that
h ∩ S ≠ ∅ for any h ∈ H with D(h) ≥ ε.
Intuitively S approximates the probability distribution.
A stronger notion is ε-approximation. An ε-approximation for the class H is a subset S ⊆ X such that for any h ∈ H it holds that
$\left|D(h) - \frac{|h \cap S|}{|S|}\right| \le \varepsilon.$
References
Computational geometry
Probability theory | Ε-net (computational geometry) | [
"Mathematics"
] | 599 | [
"Computational geometry",
"Computational mathematics"
] |
28,808,092 | https://en.wikipedia.org/wiki/AlgaePARC | Wageningen UR (University & Research centre) has constructed AlgaePARC (Algae Production And Research Centre) at the Wageningen Campus. The goal of AlgaePARC is to fill the gap between fundamental research on algae and full-scale algae production facilities. This will be done by setting up flexible pilot scale facilities to perform applied research and obtain direct practical experience. It is a joint initiative of BioProcess Engineering and Food & Biobased Research of Wageningen University.
AlgaePARC facility
AlgaePARC uses four different photobioreactors occupying 24 m2 of ground surface (an open pond, two types of tubular reactors and a plastic film bioreactor), plus a number of smaller systems for testing new technologies. The facility is unique because it is the first in which the productivity of four different production systems can be compared throughout the year under identical conditions. At the same time, knowledge is gained for the development of new photobioreactors and the design of production-scale systems.
For the construction of the facility 2.25 M€ has been made available by the Ministry of Agriculture, Nature and Food Quality (1.5 M€) and the Provincie Gelderland (0.75 M€).
Microalgae
Microalgae are currently seen by some as a promising source of biodiesel and of chemical building blocks that can be used in paint and plastics. Biomass from algae offers a sustainable alternative to products and fuels from the petrochemical industry. When fully developed, this would contribute to a biobased economy, as algae help to reduce emissions of carbon dioxide (CO2) and make the economy less dependent on fossil fuels.
AlgaePARC research
The costs of biomass produced from algae for biofuels are still ten times too high to be able to compete with today’s other fuels. Within the business community, the question being asked is how it could be produced more cheaply, making it economically viable. Companies within the energy, food, oil and chemical sectors, the Ministry of Agriculture, Nature & Food Quality, the Provincial Government of Gelderland, Oost NV and Wageningen UR are all working together in or contributing to the algae research centre AlgaePARC in order to answer that question.
See also
Algae
Microalgae
Microbiofuels
Photobioreactors
Phytoplankton
Planktonic algae
Biofuel
External links
AlgaePARC
Algae at the WUR
References
Biotechnology
Biological engineering
Bioreactors | AlgaePARC | [
"Chemistry",
"Engineering",
"Biology"
] | 507 | [
"Bioreactors",
"Biological engineering",
"Bioengineering stubs",
"Chemical reactors",
"Biotechnology stubs",
"Biochemical engineering",
"Biotechnology",
"Microbiology equipment",
"nan"
] |
28,812,064 | https://en.wikipedia.org/wiki/Rising%20step%20load%20testing | Rising Step Load Testing (or RSL testing) is a testing system that can apply loads in tension or bending to evaluate hydrogen-induced cracking (also called hydrogen embrittlement). It was specifically designed to conduct the accelerated ASTM F1624 step-modified, slow strain rate tests on a variety of test coupons or structural components. It can also function to conduct conventional ASTM E8 tensile tests; ASTM F519 200-hr Sustained Load Tests with subsequent programmable step loads to rupture for increased reliability; and ASTM G129 Slow Strain Rate Tensile tests.
Testing
The RSL Testing System can be applied to all of the specimen geometries in ASTM F519, including notched round tensile bars, notched C-rings, and notched square bars. Product testing of actual hardware, such as fasteners, can also be conducted. Taking mechanical advantage by testing in bending allows large-diameter bolts to be tested with only a 1-kip load cell.
Test precision
The RSL Test Method has been demonstrated as a valuable tool in the testing of high-performance materials for determining susceptibility to hydrogen embrittlement. This test is dependent upon the test machine's capability to provide a profile with incremental increases in the applied stress as a function of time. It is imperative that the load increases do not overshoot the next elevation in applied stress. This is achieved through careful design and operation of the loading mechanisms. Once this is achieved, repeatability is good, with variance in the low single digits that is probably related more to surface roughness, internal defects, and other intrinsic differences in material properties than to the testing equipment.
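A step-modified loading profile of the kind described above can be sketched as follows; the equal step sizes and hold times are illustrative assumptions, not the schedule prescribed by ASTM F1624:

```python
def rising_step_load_profile(max_load, n_steps, hold_time_s):
    """Generate a simple rising step load profile as (time_s, load) pairs.
    Step sizes and hold times are illustrative only; consult ASTM F1624 for
    the actual step-modified slow strain rate schedule."""
    profile = []
    t = 0.0
    for k in range(1, n_steps + 1):
        load = max_load * k / n_steps   # equal increments up to max_load
        profile.append((t, load))       # step up, then hold at this level
        t += hold_time_s
    return profile

# Example: 20 equal steps to 10 kN, each held for one hour.
for t, load in rising_step_load_profile(10_000.0, 20, 3600.0)[:3]:
    print(f"t={t:7.0f} s  load={load:7.1f} N")
```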
Crack sensitivity detection
Precision in controlling the load allows for greater sensitivity in measuring crack extension via load drop and compliance correlation than obtainable with high-voltage electrical resistivity measurements and eliminates the need for clip gages. This capability allows for precise electronic detection of the maximum load required for Crack Tip Opening Displacement calculations of Fracture Toughness and precise detection of the onset of crack growth required for measurement of the threshold stress for hydrogen embrittlement, environmental or stress corrosion cracking.
Benefits
Speed: One of the major advantages of the RSL Test Method is the time in which valid and reproducible results can be obtained. One example is the long-used Sustained Load Test of notched round bar tensile specimens found in ASTM F519. In this test, the exposed test specimens are subjected to a load equal to 75% of the fracture strength and held for 200 hours. If the specimen does not break, it has passed; if it fractures before 200 hours, it has failed. The RSL test method can provide the same information in less than two days and also gives a percentage-of-fracture-strength value, which provides valuable additional information: it shows whether the threshold exceeds the 75% mark and by how much.
Simultaneous testing: Four or five specimens can be run at the same time, allowing very rapid characterization of materials on a batch-by-batch basis.
Accuracy: Another benefit is the quantitative nature of the test, which makes it much more useful for comparing batch-to-batch or for evaluating different coatings, materials, machining effects, cleaners, and other variables that may affect the hydrogen embrittlement of materials.
Fast results: Similarly, an ASTM E1681 KIscc determination can take 12 specimens and up to 14 months because of the necessary run-out times to confirm the threshold level. The RSL Test Method can provide the same results with 5 coupons in one to two days.
Ability to evaluate multiple material properties: with just two specimens, the RSL testing equipment can be quickly used to determine (1) yield and ultimate tensile strength, (2) fracture toughness including K1_CTOD, (3) the environmentally assisted cracking threshold K1C_SCC, and (4) the dynamic tearing modulus.
References
Materials testing
Fracture mechanics | Rising step load testing | [
"Materials_science",
"Engineering"
] | 815 | [
"Structural engineering",
"Fracture mechanics",
"Materials science",
"Materials testing",
"Materials degradation"
] |
21,284,666 | https://en.wikipedia.org/wiki/Helmholtz-Zentrum%20Berlin | Helmholtz-Zentrum Berlin für Materialien und Energie (Helmholtz Centre for Materials and Energy, HZB) is part of the Helmholtz Association of German Research Centres. The institute studies the structure and dynamics of materials and investigates solar cell technology. It also runs the third-generation BESSY II synchrotron in Adlershof. Until the end of 2019 it ran the 10 megawatt BER II nuclear research reactor at the Lise Meitner campus in Wannsee.
History
Following the renaming of Hahn-Meitner-Institut Berlin GmbH to Helmholtz-Zentrum Berlin für Materialien und Energie GmbH on 5 June 2008, the legal merger of Berliner Elektronenspeicherring-Gesellschaft für Synchrotronstrahlung (BESSY) with HZB became visible on 1 January 2009.
The Hahn-Meitner-Institut Berlin für Kernforschung (HMI), named after Otto Hahn and Lise Meitner, received this name on 14 March 1959 in Berlin-Wannsee; it was set up to operate the BER I research reactor, which had begun operation at 50 kW on 24 July 1958, when the institute was still called the Institut für Kernforschung (IKB). The IKB was founded by the senate of Berlin (West) in the winter of 1956/57 as a dependent authority. Research originally focused on radiochemistry. In 1971, the federal government took over 90% of the shares in the HMI by converting it into a GmbH (limited liability company).
The Berliner Elektronenspeicherring-Gesellschaft für Synchrotronstrahlung GmbH (BESSY) was founded in 1979. The first synchrotron BESSY I in Berlin-Wilmersdorf began operations in 1982.
In May 2022, South African President Cyril Ramaphosa and German Chancellor Olaf Scholz, who was in South Africa on the final leg of a trip to Africa, publicly announced that HZB and Sasol had agreed to conduct research into substances to help produce sustainable aviation fuel on a commercial scale.
References
External links
Helmholtz-Zentrum Berlin: Official English language website
Helmholtz-Zentrum Berlin: Official German language website
Nuclear technology
Nuclear research institutes
Neutron facilities
Research institutes in Berlin
Synchrotron radiation facilities | Helmholtz-Zentrum Berlin | [
"Physics",
"Materials_science",
"Engineering"
] | 466 | [
"Nuclear research institutes",
"Nuclear organizations",
"Nuclear technology",
"Materials testing",
"Nuclear physics",
"Synchrotron radiation facilities"
] |
21,284,746 | https://en.wikipedia.org/wiki/NPDGamma%20experiment | NPDGamma is an ongoing effort to measure the parity-violating asymmetry in polarized cold neutron capture on parahydrogen.
Polarized neutrons of energies 2 meV – 15 meV are incident on a liquid parahydrogen target. The neutrons are captured, forming deuterium and releasing a gamma ray with an energy of 2.2 MeV (the binding energy of deuterium), which is subsequently detected. The detector array consists of 4 rings of 12 detectors each, where each ring is concentric around the neutron beam. The polarization of the incoming neutron beam is alternated rapidly to study the spin correlation of the direction of the emitted gamma ray.
The measured quantity is the difference in the number of gamma rays emitted within a solid angle between the two neutron spin states. This difference (divided by the sum) forms an asymmetry with possible parity-violating and parity-conserving components, where the former is known as the parity-violating asymmetry A_γ. By studying the parity-violating correlation between the spin of the incoming neutron (its polarization) and the direction of the emitted gamma ray, one primarily probes the weak pion–nucleon coupling h_π^1 (traditionally known as f_π), the long-range coupling constant used to describe the hadronic weak interaction.
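The raw up/down asymmetry described above reduces to a simple ratio per detector; extracting the physical parity-violating component from it (beam polarization, geometry factors, backgrounds) is not modelled in this sketch, and the example counts are invented:

```python
def raw_asymmetry(n_up, n_down):
    """Raw spin up/down asymmetry per detector: (N+ - N-)/(N+ + N-).
    Corrections for polarization, detector geometry and backgrounds,
    needed to isolate the physics asymmetry, are not included here."""
    return (n_up - n_down) / (n_up + n_down)

print(raw_asymmetry(1_000_050, 999_950))   # ~5e-5 for these illustrative counts
```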
The first phase of the NPDGamma experiment was carried out at Los Alamos National Lab in 2006, but did not have the statistical sensitivity to test theoretical predictions.
The current phase of the NPDGamma experiment is running at the Spallation Neutron Source at Oak Ridge National Lab. Production data began in the spring of 2012.
In 2018 the NPDGamma collaboration reported a successful measurement of parity violation.
References
External links
Experiment Wiki
Quantum mechanics
Particle experiments
Nuclear physics | NPDGamma experiment | [
"Physics"
] | 356 | [
"Theoretical physics",
"Quantum mechanics",
"Particle physics",
"Nuclear physics",
"Particle physics stubs"
] |
21,291,452 | https://en.wikipedia.org/wiki/Lugeon | A Lugeon is a unit devised to quantify the water permeability of bedrock and the hydraulic conductivity resulting from fractures; it is named after Maurice Lugeon, a Swiss geologist who first formulated the method in 1933. More specifically, the Lugeon test is used to measure the amount of water injected into a segment of the bored hole under a steady pressure; the value (Lugeon value) is defined as the loss of water in litres per minute and per metre borehole at an over-pressure of 1 MPa.
Although the Lugeon test may serve other purposes, its main objective is to determine the Lugeon coefficient, which by definition is the water absorption measured in litres per metre of test stage per minute at a pressure of 10 kg/cm2 (1 MN/m2).
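As a hedged illustration, the Lugeon value can be computed from a packer-test reading as follows; the linear scaling to the 1 MPa reference pressure is an assumption that only holds under laminar flow conditions, and the function name and example values are invented:

```python
def lugeon_value(flow_l_per_min, test_length_m, test_pressure_mpa, ref_pressure_mpa=1.0):
    """Water take per metre of test stage per minute, scaled linearly to the
    1 MPa reference over-pressure (linear scaling assumes laminar flow)."""
    return (flow_l_per_min / test_length_m) * (ref_pressure_mpa / test_pressure_mpa)

# Example: 15 L/min over a 5 m stage at 0.5 MPa over-pressure -> 6 Lugeons.
print(lugeon_value(15.0, 5.0, 0.5))
```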
References
Hydrology | Lugeon | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 170 | [
"Hydrology",
"Hydrology stubs",
"Environmental engineering"
] |
2,008,215 | https://en.wikipedia.org/wiki/Quark%20model | In particle physics, the quark model is a classification scheme for hadrons in terms of their valence quarks—the quarks and antiquarks that give rise to the quantum numbers of the hadrons. The quark model underlies "flavor SU(3)", or the Eightfold Way, the successful classification scheme organizing the large number of lighter hadrons that were being discovered starting in the 1950s and continuing through the 1960s. It received experimental verification beginning in the late 1960s and is a valid and effective classification of them to date. The model was independently proposed by physicists Murray Gell-Mann, who dubbed them "quarks" in a concise paper, and George Zweig, who suggested "aces" in a longer manuscript. André Petermann also touched upon the central ideas from 1963 to 1965, without as much quantitative substantiation. Today, the model has essentially been absorbed as a component of the established quantum field theory of strong and electroweak particle interactions, dubbed the Standard Model.
Hadrons are not really "elementary", and can be regarded as bound states of their "valence quarks" and antiquarks, which give rise to the quantum numbers of the hadrons. These quantum numbers are labels identifying the hadrons, and are of two kinds. One set comes from the Poincaré symmetry—JPC, where J, P and C stand for the total angular momentum, P-symmetry, and C-symmetry, respectively.
The other set is the flavor quantum numbers such as the isospin, strangeness, charm, and so on. The strong interactions binding the quarks together are insensitive to these quantum numbers, so variation of them leads to systematic mass and coupling relationships among the hadrons in the same flavor multiplet.
All quarks are assigned a baryon number of 1/3. Up, charm and top quarks have an electric charge of +2/3, while the down, strange, and bottom quarks have an electric charge of −1/3. Antiquarks have the opposite quantum numbers. Quarks are spin-1/2 particles, and thus fermions. Each quark or antiquark obeys the Gell-Mann–Nishijima formula individually, so any additive assembly of them will as well.
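As a quick numerical illustration (not part of the original text), the Gell-Mann–Nishijima formula restricted to the light quarks, Q = I3 + (B + S)/2, reproduces these charges:

```python
from fractions import Fraction as F

# Gell-Mann-Nishijima for the light quarks: Q = I3 + (B + S)/2,
# with baryon number B = 1/3 for every quark and S the strangeness.
quarks = {"u": (F(1, 2), 0), "d": (F(-1, 2), 0), "s": (F(0), -1)}   # name: (I3, S)
B = F(1, 3)
for name, (i3, s) in quarks.items():
    Q = i3 + (B + s) / 2
    print(name, Q)    # expect u -> 2/3, d -> -1/3, s -> -1/3
```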
Mesons are made of a valence quark–antiquark pair (thus have a baryon number of 0), while baryons are made of three quarks (thus have a baryon number of 1). This article discusses the quark model for the up, down, and strange flavors of quark (which form an approximate flavor SU(3) symmetry). There are generalizations to larger number of flavors.
History
Developing classification schemes for hadrons became a timely question after new experimental techniques uncovered so many of them that it became clear that they could not all be elementary. These discoveries led Wolfgang Pauli to exclaim "Had I foreseen that, I would have gone into botany." and Enrico Fermi to advise his student Leon Lederman: "Young man, if I could remember the names of these particles, I would have been a botanist." These new schemes earned Nobel prizes for experimental particle physicists, including Luis Alvarez, who was at the forefront of many of these developments. Constructing hadrons as bound states of fewer constituents would thus organize the "zoo" at hand. Several early proposals, such as the ones by Enrico Fermi and Chen-Ning Yang (1949), and the Sakata model (1956), ended up satisfactorily covering the mesons, but failed with baryons, and so were unable to explain all the data.
The Gell-Mann–Nishijima formula, developed by Murray Gell-Mann and Kazuhiko Nishijima, led to the Eightfold Way classification, invented by Gell-Mann, with important independent contributions from Yuval Ne'eman, in 1961. The hadrons were organized into SU(3) representation multiplets, octets and decuplets, of roughly the same mass, due to the strong interactions; and smaller mass differences linked to the flavor quantum numbers, invisible to the strong interactions. The Gell-Mann–Okubo mass formula systematized the quantification of these small mass differences among members of a hadronic multiplet, controlled by the explicit symmetry breaking of SU(3).
The spin-3/2 Ω− baryon, a member of the ground-state decuplet, was a crucial prediction of that classification. After it was discovered in an experiment at Brookhaven National Laboratory, Gell-Mann received a Nobel Prize in Physics for his work on the Eightfold Way, in 1969.
Finally, in 1964, Gell-Mann and George Zweig, discerned independently what the Eightfold Way picture encodes: They posited three elementary fermionic constituents—the "up", "down", and "strange" quarks—which are unobserved, and possibly unobservable in a free form. Simple pairwise or triplet combinations of these three constituents and their antiparticles underlie and elegantly encode the Eightfold Way classification, in an economical, tight structure, resulting in further simplicity. Hadronic mass differences were now linked to the different masses of the constituent quarks.
It would take about a decade for the unexpected nature—and physical reality—of these quarks to be appreciated more fully (See Quarks). Counter-intuitively, they cannot ever be observed in isolation (color confinement), but instead always combine with other quarks to form full hadrons, which then furnish ample indirect information on the trapped quarks themselves. Conversely, the quarks serve in the definition of quantum chromodynamics, the fundamental theory fully describing the strong interactions; and the Eightfold Way is now understood to be a consequence of the flavor symmetry structure of the lightest three of them.
Mesons
The Eightfold Way classification is named after the following fact: If we take three flavors of quarks, then the quarks lie in the fundamental representation, 3 (called the triplet), of flavor SU(3). The antiquarks lie in the complex conjugate representation 3̄. The nine states (nonet) made out of a quark–antiquark pair can be decomposed into the trivial representation, 1 (called the singlet), and the adjoint representation, 8 (called the octet). The notation for this decomposition is
3 ⊗ 3̄ = 8 ⊕ 1.
Figure 1 shows the application of this decomposition to the mesons. If the flavor symmetry were exact (as in the limit that only the strong interactions operate, but the electroweak interactions are notionally switched off), then all nine mesons would have the same mass. However, the physical content of the full theory includes consideration of the symmetry breaking induced by the quark mass differences, and considerations of mixing between various multiplets (such as the octet and the singlet).
N.B. Nevertheless, the mass splitting between the η and the η′ is larger than the quark model can accommodate, and this "η–η′ puzzle" has its origin in topological peculiarities of the strong interaction vacuum, such as instanton configurations.
Mesons are hadrons with zero baryon number. If the quark–antiquark pair are in an orbital angular momentum L state, and have spin S, then
|L − S| ≤ J ≤ L + S, where S = 0 or 1,
P = (−1)^(L+1), where the 1 in the exponent arises from the intrinsic parity of the quark–antiquark pair.
C = (−1)^(L+S) for mesons which have no flavor. Flavored mesons have an indefinite value of C.
For isospin I = 1 and I = 0 states, one can define a new multiplicative quantum number called the G-parity, such that G = (−1)^(I+L+S).
If P = (−1)^J, then it follows that S = 1, thus PC = +1. States with these quantum numbers are called natural parity states; all other combinations of quantum numbers are called exotic (for example, the state J^PC = 0^−−).
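A short Python sketch makes the natural/exotic distinction concrete by enumerating the J^PC combinations reachable from the rules above. The cutoff max_L = 6 is an arbitrary assumption for the demonstration; the exotic examples shown are in fact unreachable for any L:

```python
def allowed_jpc(max_L=6):
    """J^PC values reachable by a flavorless quark-antiquark pair with orbital
    angular momentum L <= max_L, using P = (-1)^(L+1), C = (-1)^(L+S) and
    |L - S| <= J <= L + S."""
    states = set()
    for L in range(max_L + 1):
        for S in (0, 1):
            P = (-1) ** (L + 1)
            C = (-1) ** (L + S)
            for J in range(abs(L - S), L + S + 1):
                states.add((J, P, C))
    return states

def label(J, P, C):
    return f"{J}^{{{'+' if P > 0 else '-'}{'+' if C > 0 else '-'}}}"

allowed = allowed_jpc()
# 0^{--}, 0^{+-} and 1^{-+} never appear, whatever the cutoff: they are exotic,
# while 1^{--} (e.g. the rho meson) is an ordinary natural-parity state.
for cand in [(0, -1, -1), (0, 1, -1), (1, -1, 1), (1, -1, -1)]:
    print(label(*cand), "exotic" if cand not in allowed else "allowed")
```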
Baryons
Since quarks are fermions, the spin–statistics theorem implies that the wavefunction of a baryon must be antisymmetric under the exchange of any two quarks. This antisymmetric wavefunction is obtained by making it fully antisymmetric in color, discussed below, and symmetric in flavor, spin and space put together. With three flavors, the decomposition in flavor is
3 ⊗ 3 ⊗ 3 = 10_S ⊕ 8_M ⊕ 8_M ⊕ 1_A.
The decuplet is symmetric in flavor, the singlet antisymmetric and the two octets have mixed symmetry. The space and spin parts of the states are thereby fixed once the orbital angular momentum is given.
It is sometimes useful to think of the basis states of quarks as the six states of three flavors and two spins per flavor. This approximate symmetry is called spin-flavor SU(6). In terms of this, the decomposition is
6 ⊗ 6 ⊗ 6 = 56_S ⊕ 70_M ⊕ 70_M ⊕ 20_A.
The 56 states with symmetric combination of spin and flavour decompose under flavor SU(3) into
56 = 10^(3/2) ⊕ 8^(1/2),
where the superscript denotes the spin, S, of the baryon. Since these states are symmetric in spin and flavor, they should also be symmetric in space, a condition that is easily satisfied by making the orbital angular momentum L = 0. These are the ground-state baryons.
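The dimension bookkeeping behind these decompositions can be checked in a few lines of purely arithmetic Python (no group theory library involved); this only verifies that the quoted multiplicities are consistent, not the group-theoretic derivation itself:

```python
# Dimension bookkeeping for the flavor and spin-flavor decompositions quoted above.
flavor = {"decuplet": 10, "octet": 8, "octet'": 8, "singlet": 1}
assert sum(flavor.values()) == 3 ** 3          # 3x3x3 = 27 flavor states

spin_flavor = {"56_S": 56, "70_M": 70, "70_M'": 70, "20_A": 20}
assert sum(spin_flavor.values()) == 6 ** 3     # 6x6x6 = 216 spin-flavor states

# The symmetric 56 splits into a spin-3/2 decuplet and a spin-1/2 octet:
assert 10 * 4 + 8 * 2 == 56                    # (2S+1) spin states per multiplet
print("all multiplet dimensions consistent")
```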
The octet baryons are the two nucleons (p, n), the three Sigmas (Σ+, Σ0, Σ−), the two Xis (Ξ0, Ξ−), and the Lambda (Λ). The decuplet baryons are the four Deltas (Δ++, Δ+, Δ0, Δ−), three Sigmas (Σ*+, Σ*0, Σ*−), two Xis (Ξ*0, Ξ*−), and the Omega (Ω−).
For example, the constituent quark model wavefunction for the proton is
Mixing of baryons, mass splittings within and between multiplets, and magnetic moments are some of the other quantities that the model predicts successfully.
The group theory approach described above assumes that the quarks are eight components of a single particle, so the anti-symmetrization applies to all the quarks. A simpler approach is to consider the eight flavored quarks as eight separate, distinguishable, non-identical particles. Then the anti-symmetrization applies only to two identical quarks (like uu, for instance).
Then, the proton wavefunction can be written in a simpler form:
and the
If quark–quark interactions are limited to two-body interactions, then all the successful quark model predictions, including sum rules for baryon masses and magnetic moments, can be derived.
Discovery of color
Color quantum numbers are the characteristic charges of the strong force, and are completely uninvolved in electroweak interactions. They were discovered as a consequence of the quark model classification, when it was appreciated that the spin-3/2 baryon, the Δ++, required three up quarks with parallel spins and vanishing orbital angular momentum. Therefore, it could not have an antisymmetric wavefunction (as required by the Pauli exclusion principle). Oscar Greenberg noted this problem in 1964, suggesting that quarks should be para-fermions.
Instead, six months later, Moo-Young Han and Yoichiro Nambu suggested the existence of a hidden degree of freedom, which they labeled as the group SU(3)′ (but later called 'color'). This led to three triplets of quarks whose wavefunction was antisymmetric in the color degree of freedom.
Flavor and color were intertwined in that model: they did not commute.
The modern concept of color completely commuting with all other charges and providing the strong force charge was articulated in 1973, by William Bardeen, Harald Fritzsch, and Murray Gell-Mann.
States outside the quark model
While the quark model is derivable from the theory of quantum chromodynamics, the structure of hadrons is more complicated than this model allows. The full quantum mechanical wavefunction of any hadron must include virtual quark pairs as well as virtual gluons, and allows for a variety of mixings. There may be hadrons which lie outside the quark model. Among these are the glueballs (which contain only valence gluons), hybrids (which contain valence quarks as well as gluons) and exotic hadrons (such as tetraquarks or pentaquarks).
See also
Subatomic particles
Hadrons, baryons, mesons and quarks
Exotic hadrons: exotic mesons and exotic baryons
Quantum chromodynamics, flavor, the QCD vacuum
Notes
References
Thomson, M A (2011), Lecture notes
Hadrons
Quarks
Particle physics
Concepts in physics
Standard Model | Quark model | [
"Physics"
] | 2,641 | [
"Standard Model",
"Hadrons",
"Subatomic particles",
"Particle physics",
"nan",
"Matter"
] |
2,008,633 | https://en.wikipedia.org/wiki/Ansys | Ansys, Inc. is an American multinational company with its headquarters based in Canonsburg, Pennsylvania. It develops and markets CAE/multiphysics engineering simulation software for product design, testing and operation and offers its products and services to customers worldwide.
History
Origins
Ansys was founded in 1970 as Swanson Analysis Systems, Inc. (SASI) by John Swanson. The idea for Ansys was first conceived by Swanson while working at the Westinghouse Astronuclear Laboratory in the 1960s. At the time, engineers performed finite element analysis (FEA) by hand. Westinghouse rejected Swanson's idea to automate FEA by developing general purpose engineering software, so Swanson left the company in 1969 to develop the software on his own. He founded SASI the next year, working out of his farmhouse in Pittsburgh.
Swanson developed the initial ANSYS software on punch cards and used a mainframe computer that was rented by the hour. Westinghouse hired him as a consultant, under the condition that any code he developed for Westinghouse could also be included in the Ansys product line. Westinghouse became the first Ansys user.
Swanson sold his interest in the company to venture capitalists in 1994, and the company was renamed "Ansys" after the software. Ansys went public on NASDAQ in 1996. In the 2000s, the company acquired other engineering design companies, obtaining additional technology for fluid dynamics, electronics design, and physics analysis. Ansys became a component of the NASDAQ-100 index on December 23, 2019.
Growth
By 1991, SASI had 153 employees and $29 million in annual revenue, controlling 10 percent of the market for finite element analysis software. According to The Engineering Design Revolution, the company became "well-respected" among engineering circles, but remained small. In 1992, SASI acquired Compuflo, which marketed and developed fluid dynamics analysis software. In 1994, Swanson sold his majority interest in the company to venture capitalist firm TA Associates. Peter Smith was appointed CEO and SASI was renamed after the software, Ansys, the following year.
Ansys went public in 1996, raising about $46 million in an initial public offering. By 1997, Ansys had grown to $50.5 million in annual revenue. In the late 1990s, Ansys shifted its business model away from software licenses, and corresponding revenue declined. However, revenue from services increased. From 1996 to 2000, profits at Ansys grew an average of 160% per year. In February 2000, Jim Cashman was appointed CEO.
Current CEO Ajei S. Gopal was appointed in early 2017. In November 2020, South China Morning Post reported that Ansys software had been used for Chinese military research in the development of hypersonic missile technology. In October 2022, Washington Post reviewed procurement documents and confirmed that Ansys technology had been acquired by seven Chinese entities present on either the export blacklist or with known links to Chinese missile technology. Ansys said that it and its subsidiaries have no records of the indicated sales or shipments and suggested that piracy may have been involved. In January 2024 Synopsys and Ansys announced a definitive agreement under which Synopsys would acquire Ansys in a deal valued at around $35 billion.
List of acquisitions
Engineering simulation software
Ansys develops and markets engineering simulation software for use across the product life cycle. Ansys Mechanical finite element analysis software uses computer models to simulate structures, electronics, or machine components to evaluate the strength, toughness, elasticity, temperature distribution, electromagnetism, fluid flow, and other attributes. Ansys is used to determine how a product will function with different specifications, without building test products or conducting crash tests. For example, Ansys software may simulate how a bridge will hold up after years of traffic, how to best process salmon in a cannery to reduce waste, or how to design a slide that uses less material without sacrificing safety.
Most Ansys simulations are performed using the Ansys Workbench system, which is one of the company's main products. Typically Ansys users break down larger structures into small components that are each modeled and tested individually. A user may start by defining the dimensions of an object, and then adding weight, pressure, temperature and other physical properties. Finally, the Ansys software simulates and analyzes movement, fatigue, fractures, fluid flow, temperature distribution, electromagnetic efficiency and other effects over time.
Ansys also develops software for data management and backup, academic research and teaching. Ansys software is sold on an annual subscription basis.
Software history
The first commercial version of Ansys software was labeled version 2.0 and released in 1971. At the time, the software was made up of boxes of punch cards, and the program was typically run overnight to get results the following morning. In 1975, non-linear and thermo-electric features were added. The software was exclusively used on mainframes, until version 3.0 (the second release) was introduced for the VAXstation in 1979. Version 3 had a command-line interface like DOS.
In 1980, the Apple II version was released, allowing Ansys to convert to a graphical user interface in version 4 later that year. Version 4 of the Ansys software was easier to use and added features to simulate electromagnetism. In 1989, Ansys began working with Compuflo. Compuflo's Flotran fluid dynamics software was integrated into Ansys by version 5, which was released in 1993. Performance improvements in version 5.1 shortened processing time two- to four-fold and were followed by a series of further improvements to keep pace with advancements in computing. Ansys also began integrating its software with CAD software, such as Autodesk.
In 1996, Ansys released the DesignSpace structural analysis software, the LS-DYNA crash and drop test simulation product, and the Ansys Computational Fluid Dynamics (CFD) simulator. Ansys also added parallel processing support for PCs with multiple processors. The educational product Ansys/ed was introduced in 1998. Version 6.0 of the main Ansys product was released in December 2001. Version 6.0 made large-scale modeling practical for the first time, but many users were frustrated by a new blue user interface. The interface was redone a few months later in 6.1. Version 8.0 introduced the Ansys multi-field solver, which allows users to simulate how multiple physics problems would interact with one another.
Version 8.0 was published in 2005 and introduced Ansys' fluid–structure interaction software, which simulates the effect structures and fluids have on one another. Ansys also released its Probabilistic Design System and DesignXplorer software products, which both deal with probabilities and randomness of physical elements. In 2009 version 12 was released with an overhauled second version of Workbench. Ansys also began increasingly consolidating features into the Workbench software.
Version 15 of Ansys was released in 2014. It added new features for composites, bolted connections, and better mesh tools. In February 2015, version 16 introduced the AIM physics engine and Electronics Desktop, which is used for semiconductor design. The following year, version 17 introduced a new user interface and performance improvements for computing fluid dynamics problems. In January 2017, Ansys released version 18. Version 18 allowed users to collect real-world data from products and then incorporate that data into future simulations. The Ansys Application Builder, which allows engineers to build, use, and sell custom engineering tools, was also introduced with version 18.
Released in January 2020, Ansys R1 2020 updates Ansys' simulation process and data management (SPDM), materials information and electromagnetics product offerings. In early 2020, the Ansys Academic Program surpassed one million student downloads.
In May 2020, Ansys joined Microsoft, Dell and Lendlease on the steering committee of the Digital Twin Consortium, which aims to advance the use of digital twin technology. The company collaborated with the US Army and L3Harris to advance the use of the FACE technical standard. In April 2020, Samsung Foundry certified Ansys' RaptorH EM simulation solution for developing 2.5D/3D-ICs and systems-on-chip using Samsung's signoff flow. In August 2020, Ansys received TSMC certification for its SoIC 3D chip stacking technology. In October 2020, the company signed an agreement to acquire Analytical Graphics Inc. for $700 million.
In 2021, Optimo Medical AG integrated its Optimeyes digital twin technology with Ansys Mechanical to create identical copies of the cornea for surgical-procedure testing purposes. Ansys and Siemens Energy collaborated to improve additive manufacturing (AM) processes. In May 2021, Ansys acquired Phoenix Integration, Inc. for an undisclosed amount.
In November 2021, the company was certified for Samsung's 3 nm and 4 nm process technologies. The same year, Ansys acquired Zemax for an undisclosed amount. The company began supporting Arm-based Graviton2 processors, the first time that Ansys' EDA semiconductor simulation solutions were made available on the Arm Neoverse architecture. In partnership with Cornell University, Ansys developed simulation courses.
In March 2022, the company announced a collaboration with GlobalFoundries to address issues facing data centres. In April 2022, Ansys announced that it had signed a definitive agreement to acquire OnScale to expand its cloud portfolio.
In May 2022, Ansys acquired Motor Design Limited (MDL) for an undisclosed amount. In October 2022, the company acquired C&R Technologies, a company that specialised in providing orbital thermal analysis.
In December 2022, Ansys announced that it had signed a definitive agreement to acquire DYNAmore, which specialises in developing simulation solutions for the automotive industry.
References
External links
Engineering software companies
Software companies based in Pennsylvania
Software companies established in 1970
Computational fluid dynamics
Finite element software for Linux
1970 establishments in Pennsylvania
Mesh generators
Companies based in Washington County, Pennsylvania
Canonsburg, Pennsylvania
1996 initial public offerings
Software companies of the United States
Announced information technology acquisitions | Ansys | [
"Physics",
"Chemistry",
"Technology"
] | 2,130 | [
"Announced information technology acquisitions",
"Computational fluid dynamics",
"Computational physics",
"Information technology",
"Fluid dynamics"
] |
2,009,061 | https://en.wikipedia.org/wiki/Regularization%20%28mathematics%29 | In mathematics, statistics, finance, and computer science, particularly in machine learning and inverse problems, regularization is a process that converts the answer of a problem to a simpler one. It is often used in solving ill-posed problems or to prevent overfitting.
Although regularization procedures can be divided in many ways, the following delineation is particularly helpful:
Explicit regularization is regularization whenever one explicitly adds a term to the optimization problem. These terms could be priors, penalties, or constraints. Explicit regularization is commonly employed with ill-posed optimization problems. The regularization term, or penalty, imposes a cost on the optimization function to make the optimal solution unique.
Implicit regularization is all other forms of regularization. This includes, for example, early stopping, using a robust loss function, and discarding outliers. Implicit regularization is essentially ubiquitous in modern machine learning approaches, including stochastic gradient descent for training deep neural networks, and ensemble methods (such as random forests and gradient boosted trees).
In explicit regularization, independent of the problem or model, there is always a data term that corresponds to the likelihood of the measurement, and a regularization term that corresponds to a prior. By combining both using Bayesian statistics, one can compute a posterior that includes both information sources and therefore stabilizes the estimation process. By trading off both objectives, one chooses to be more aligned with the data or to enforce regularization (to prevent overfitting). There is a whole research branch dealing with all possible regularizations. In practice, one usually tries a specific regularization and then figures out the probability density that corresponds to that regularization to justify the choice. It can also be physically motivated by common sense or intuition.
In machine learning, the data term corresponds to the training data and the regularization is either the choice of the model or modifications to the algorithm. It is always intended to reduce the generalization error, i.e. the error score with the trained model on the evaluation set and not the training data.
One of the earliest uses of regularization is Tikhonov regularization (ridge regression), related to the method of least squares.
Regularization in machine learning
In machine learning, a key challenge is enabling models to accurately predict outcomes on unseen data, not just on familiar training data. Regularization is crucial for addressing overfitting—where a model memorizes training data details but can't generalize to new data. The goal of regularization is to encourage models to learn the broader patterns within the data rather than memorizing it. Techniques like early stopping, L1 and L2 regularization, and dropout are designed to prevent overfitting and underfitting, thereby enhancing the model's ability to adapt to and perform well with new data, thus improving model generalization.
Early Stopping
Stops training when validation performance deteriorates, preventing overfitting by halting before the model memorizes training data.
L1 and L2 Regularization
Adds penalty terms to the cost function to discourage complex models (an illustrative sketch follows the list below):
L1 regularization (also called LASSO) leads to sparse models by adding a penalty based on the absolute value of coefficients.
L2 regularization (also called ridge regression) encourages smaller, more evenly distributed weights by adding a penalty based on the square of the coefficients.
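As an illustration of how these penalties behave in practice, the following minimal sketch (an editorial example, not part of the original article) fits ridge (L2) and lasso (L1) models with scikit-learn on synthetic data in which only two features are informative; all data, variable names, and regularization strengths are arbitrary illustrative choices.

import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
true_w = np.array([3.0, -2.0, 0, 0, 0, 0, 0, 0, 0, 0])  # only the first two features matter
y = X @ true_w + 0.1 * rng.normal(size=100)

ridge = Ridge(alpha=1.0).fit(X, y)   # L2 penalty: shrinks all coefficients toward zero
lasso = Lasso(alpha=0.1).fit(X, y)   # L1 penalty: tends to drive irrelevant coefficients to exactly zero

print("ridge coefficients:", np.round(ridge.coef_, 2))
print("lasso coefficients:", np.round(lasso.coef_, 2))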
Dropout
In the context of neural networks, the Dropout technique repeatedly ignores random subsets of neurons during training, which simulates the training of multiple neural network architectures at once to improve generalization.
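A hedged sketch of the core dropout operation follows; it shows only the random masking and rescaling applied to one layer's activations during training (the common "inverted dropout" variant), with the drop probability and array shapes chosen arbitrarily for illustration.

import numpy as np

def dropout(activations, p_drop=0.5, training=True, rng=np.random.default_rng(0)):
    """Randomly zero a fraction p_drop of the activations and rescale the survivors."""
    if not training or p_drop == 0.0:
        return activations
    keep_mask = rng.random(activations.shape) >= p_drop
    return activations * keep_mask / (1.0 - p_drop)  # rescaling keeps the expected activation unchanged

h = np.ones((4, 5))  # stand-in for hidden-layer activations
print(dropout(h, p_drop=0.5))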
Classification
Empirical learning of classifiers (from a finite data set) is always an underdetermined problem, because it attempts to infer a function defined for every possible input given only a finite number of examples.
A regularization term (or regularizer) R(f) is added to a loss function:

\min_f \sum_{i=1}^{n} V(f(x_i), y_i) + \lambda R(f)

where V is an underlying loss function that describes the cost of predicting f(x) when the label is y, such as the square loss or hinge loss; and \lambda is a parameter which controls the importance of the regularization term. R(f) is typically chosen to impose a penalty on the complexity of f. Concrete notions of complexity used include restrictions for smoothness and bounds on the vector space norm.
A theoretical justification for regularization is that it attempts to impose Occam's razor on the solution (as depicted in the figure above, where the green function, the simpler one, may be preferred). From a Bayesian point of view, many regularization techniques correspond to imposing certain prior distributions on model parameters.
Regularization can serve multiple purposes, including learning simpler models, inducing models to be sparse and introducing group structure into the learning problem.
The same idea arose in many fields of science. A simple form of regularization applied to integral equations (Tikhonov regularization) is essentially a trade-off between fitting the data and reducing a norm of the solution. More recently, non-linear regularization methods, including total variation regularization, have become popular.
Generalization
Regularization can be motivated as a technique to improve the generalizability of a learned model.
The goal of this learning problem is to find a function f that fits or predicts the outcome (label) y and that minimizes the expected error over all possible inputs and labels. The expected error of a function f is:

I[f] = \int_{X \times Y} V(f(x), y)\, \rho(x, y)\, dx\, dy

where X and Y are the domains of input data x and their labels y respectively, and \rho(x, y) is their joint distribution.
Typically in learning problems, only a subset of input data and labels are available, measured with some noise. Therefore, the expected error is unmeasurable, and the best surrogate available is the empirical error over the n available samples:

I_S[f] = \frac{1}{n} \sum_{i=1}^{n} V(f(x_i), y_i)
Without bounds on the complexity of the function space (formally, the reproducing kernel Hilbert space) available, a model will be learned that incurs zero loss on the surrogate empirical error. If measurements (e.g. of ) were made with noise, this model may suffer from overfitting and display poor expected error. Regularization introduces a penalty for exploring certain regions of the function space used to build the model, which can improve generalization.
Tikhonov regularization (ridge regression)
These techniques are named for Andrey Nikolayevich Tikhonov, who applied regularization to integral equations and made important contributions in many other areas.
When learning a linear function f, characterized by an unknown vector w such that f(x) = w \cdot x, one can add the L_2-norm of the vector w to the loss expression in order to prefer solutions with smaller norms. Tikhonov regularization is one of the most common forms. It is also known as ridge regression. It is expressed as:

\min_w \sum_{i=1}^{n} (y_i - w \cdot x_i)^2 + \lambda \|w\|_2^2

where (x_i, y_i), 1 \le i \le n, would represent the samples used for training.
In the case of a general function, the norm of the function in its reproducing kernel Hilbert space is:
As the norm is differentiable, learning can be advanced by gradient descent.
Tikhonov-regularized least squares
The learning problem with the least squares loss function and Tikhonov regularization can be solved analytically. Written in matrix form, with X the matrix of training inputs and Y the vector of labels, the optimal w is the one for which the gradient of the loss function with respect to w is 0:

\nabla_w \left( \|Y - Xw\|_2^2 + \lambda \|w\|_2^2 \right) = -2 X^\mathsf{T}(Y - Xw) + 2\lambda w = 0
\quad\Longrightarrow\quad
w = (X^\mathsf{T} X + \lambda I)^{-1} X^\mathsf{T} Y,

where the last statement is a first-order condition.
By construction of the optimization problem, other values of w give larger values for the loss function. This can be verified by examining the second derivative \nabla_w^2, which is positive definite.
During training, this algorithm takes O(d^3 + n d^2) time for n samples with d features. The terms correspond to the matrix inversion and calculating X^\mathsf{T} X, respectively. Testing takes O(nd) time.
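The closed-form solution described above can be checked numerically. The sketch below is an illustrative example with made-up data; it uses the summed-squared-error-plus-\lambda\|w\|^2 objective written above, so the solution is w = (XᵀX + λI)⁻¹XᵀY.

import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 3
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -1.0, 0.5]) + 0.05 * rng.normal(size=n)

lam = 0.5
# First-order condition of  sum_i (y_i - w.x_i)^2 + lam * ||w||^2  gives:
w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
print(w)  # close to the generating vector [1, -1, 0.5], slightly shrunk by the penalty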
Early stopping
Early stopping can be viewed as regularization in time. Intuitively, a training procedure such as gradient descent tends to learn more and more complex functions with increasing iterations. By regularizing for time, model complexity can be controlled, improving generalization.
Early stopping is implemented using one data set for training, one statistically independent data set for validation and another for testing. The model is trained until performance on the validation set no longer improves and then applied to the test set.
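The training loop below is a minimal sketch of this procedure on a synthetic least-squares problem (an editorial example; the data, learning rate, and patience rule are arbitrary illustrative choices): gradient descent runs on the training set and stops once the validation error has not improved for a fixed number of iterations.

import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 20))
y = X[:, 0] - 2 * X[:, 1] + 0.5 * rng.normal(size=200)
X_tr, y_tr, X_val, y_val = X[:150], y[:150], X[150:], y[150:]

w = np.zeros(20)
lr, patience, best_val, since_best = 0.001, 20, np.inf, 0
for step in range(10000):
    grad = 2 * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)
    w -= lr * grad
    val_err = np.mean((X_val @ w - y_val) ** 2)
    if val_err < best_val:
        best_val, best_w, since_best = val_err, w.copy(), 0
    else:
        since_best += 1
    if since_best > patience:  # stop once the validation error no longer improves
        break
print(step, best_val)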
Theoretical motivation in least squares
Consider the finite approximation of Neumann series for an invertible matrix where :
This can be used to approximate the analytical solution of unregularized least squares, if is introduced to ensure the norm is less than one.
The exact solution to the unregularized least squares learning problem minimizes the empirical error, but may fail. By limiting , the only free parameter in the algorithm above, the problem is regularized for time, which may improve its generalization.
The algorithm above is equivalent to restricting the number of gradient descent iterations for the empirical risk
with the gradient descent update:
The base case is trivial. The inductive case is proved as follows:
Regularizers for sparsity
Assume that a dictionary with dimension is given such that a function in the function space can be expressed as:
Enforcing a sparsity constraint on can lead to simpler and more interpretable models. This is useful in many real-life applications such as computational biology. An example is developing a simple predictive test for a disease in order to minimize the cost of performing medical tests while maximizing predictive power.
A sensible sparsity constraint is the \ell_0 norm \|w\|_0, defined as the number of non-zero elements in w. Solving an \ell_0-regularized learning problem, however, has been demonstrated to be NP-hard.
The \ell_1 norm (see also Norms) can be used to approximate the optimal \ell_0 norm via convex relaxation. It can be shown that the \ell_1 norm induces sparsity. In the case of least squares, this problem is known as LASSO in statistics and basis pursuit in signal processing.
\ell_1 regularization can occasionally produce non-unique solutions. A simple example is provided in the figure when the space of possible solutions lies on a 45 degree line. This can be problematic for certain applications, and is overcome by combining \ell_1 with \ell_2 regularization in elastic net regularization, which takes the following form:

\min_w \sum_{i=1}^{n} (y_i - w \cdot x_i)^2 + \lambda_1 \|w\|_1 + \lambda_2 \|w\|_2^2
Elastic net regularization tends to have a grouping effect, where correlated input features are assigned equal weights.
Elastic net regularization is commonly used in practice and is implemented in many machine learning libraries.
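The grouping effect can be seen in a small scikit-learn sketch (an editorial example with two almost identical, i.e. highly correlated, features; all constants are arbitrary). Lasso typically keeps only one of the correlated pair, while elastic net tends to share the weight between them.

import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

rng = np.random.default_rng(3)
z = rng.normal(size=200)
X = np.column_stack([z + 0.01 * rng.normal(size=200),   # feature 1
                     z + 0.01 * rng.normal(size=200),   # feature 2, nearly identical to feature 1
                     rng.normal(size=(200, 3))])        # three irrelevant features
y = z + 0.1 * rng.normal(size=200)

print("lasso      :", np.round(Lasso(alpha=0.1).fit(X, y).coef_, 2))
print("elastic net:", np.round(ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y).coef_, 2))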
Proximal methods
While the \ell_1 norm does not result in an NP-hard problem, the \ell_1 norm is convex but is not strictly differentiable due to the kink at x = 0. Subgradient methods which rely on the subderivative can be used to solve \ell_1-regularized learning problems. However, faster convergence can be achieved through proximal methods.
For a problem \min_w F(w) + R(w) such that F is convex, continuous, differentiable, with Lipschitz continuous gradient (such as the least squares loss function), and R is convex, continuous, and proper, then the proximal method to solve the problem is as follows. First define the proximal operator

\operatorname{prox}_R(v) = \underset{w}{\operatorname{argmin}} \left\{ R(w) + \tfrac{1}{2}\|w - v\|_2^2 \right\},

and then iterate

w_{k+1} = \operatorname{prox}_{\gamma R}\!\left( w_k - \gamma \nabla F(w_k) \right).
The proximal method iteratively performs gradient descent and then projects the result back into the space permitted by .
When R is the \ell_1 regularizer \lambda\|w\|_1, the proximal operator is equivalent to the soft-thresholding operator,

S_\lambda(v)_i = \operatorname{sign}(v_i)\, \max(|v_i| - \lambda,\, 0).
This allows for efficient computation.
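A minimal sketch of the resulting iteration (ISTA, the proximal gradient method for \ell_1-regularized least squares) is given below; it is an editorial example, the data are synthetic, and the step size is simply taken as one over the squared spectral norm of X (a standard safe choice) rather than tuned.

import numpy as np

def soft_threshold(v, thresh):
    # proximal operator of thresh * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 30))
w_true = np.zeros(30)
w_true[[2, 7, 15]] = [1.5, -2.0, 1.0]            # sparse ground truth
y = X @ w_true + 0.05 * rng.normal(size=100)

lam = 0.1
step = 1.0 / np.linalg.norm(X, 2) ** 2           # 1 / (largest singular value)^2
w = np.zeros(30)
for _ in range(500):
    w = soft_threshold(w - step * X.T @ (X @ w - y), step * lam)  # gradient step, then prox
print(np.nonzero(w)[0])  # the recovered support is expected to be close to {2, 7, 15}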
Group sparsity without overlaps
Groups of features can be regularized by a sparsity constraint, which can be useful for expressing certain prior knowledge into an optimization problem.
In the case of a linear model with non-overlapping known groups, a regularizer can be defined:
where
This can be viewed as inducing a regularizer composed of the \ell_2 norm over the members of each group followed by an \ell_1 norm over the groups.
This can be solved by the proximal method, where the proximal operator is a block-wise soft-thresholding function:
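The block-wise operation can be written compactly; the sketch below (an editorial example with an arbitrary grouping of coefficients) shrinks the Euclidean norm of each group and zeroes any group whose norm falls below the threshold.

import numpy as np

def group_soft_threshold(w, groups, thresh):
    """Block-wise soft-thresholding: each group is scaled by max(0, 1 - thresh/||w_g||)."""
    out = w.copy()
    for g in groups:
        norm_g = np.linalg.norm(w[g])
        out[g] = 0.0 if norm_g <= thresh else (1.0 - thresh / norm_g) * w[g]
    return out

w = np.array([0.1, -0.2, 3.0, 2.0, 0.05])
groups = [[0, 1], [2, 3], [4]]   # non-overlapping groups of coefficient indices
print(group_soft_threshold(w, groups, thresh=0.5))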
Group sparsity with overlaps
The algorithm described for group sparsity without overlaps can be applied to the case where groups do overlap, in certain situations. This will likely result in some groups with all zero elements, and other groups with some non-zero and some zero elements.
If it is desired to preserve the group structure, a new regularizer can be defined:
For each , is defined as the vector such that the restriction of to the group equals and all other entries of are zero. The regularizer finds the optimal disintegration of into parts. It can be viewed as duplicating all elements that exist in multiple groups. Learning problems with this regularizer can also be solved with the proximal method with a complication. The proximal operator cannot be computed in closed form, but can be effectively solved iteratively, inducing an inner iteration within the proximal method iteration.
Regularizers for semi-supervised learning
When labels are more expensive to gather than input examples, semi-supervised learning can be useful. Regularizers have been designed to guide learning algorithms to learn models that respect the structure of unsupervised training samples. If a symmetric weight matrix is given, a regularizer can be defined:
If encodes the result of some distance metric for points and , it is desirable that . This regularizer captures this intuition, and is equivalent to:
where is the Laplacian matrix of the graph induced by .
The optimization problem can be solved analytically if the constraint is applied for all supervised samples. The labeled part of the vector is therefore obvious. The unlabeled part of is solved for by:
The pseudo-inverse can be taken because has the same range as .
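A small sketch of this idea (sometimes called the harmonic-function or label-propagation solution) is shown below; the toy weight matrix, the choice of which points are labeled, and the use of the pseudo-inverse are illustrative, and the split of the Laplacian into labeled and unlabeled blocks follows the description above.

import numpy as np

# Toy similarity (weight) matrix W for 5 points: the first 2 are labeled, the last 3 are not.
W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 0, 1, 0],
              [1, 0, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W        # graph Laplacian induced by W
y_labeled = np.array([1.0, -1.0])     # known labels for the first two points

L_uu = L[2:, 2:]                      # block acting on the unlabeled points
L_ul = L[2:, :2]                      # coupling between unlabeled and labeled points
f_unlabeled = -np.linalg.pinv(L_uu) @ L_ul @ y_labeled
print(f_unlabeled)                    # smooth labels propagated from the two labeled points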
Regularizers for multitask learning
In the case of multitask learning, problems are considered simultaneously, each related in some way. The goal is to learn functions, ideally borrowing strength from the relatedness of tasks, that have predictive power. This is equivalent to learning the matrix .
Sparse regularizer on columns
This regularizer defines an L2 norm on each column and an L1 norm over all columns. It can be solved by proximal methods.
Nuclear norm regularization
where is the eigenvalues in the singular value decomposition of .
Mean-constrained regularization
This regularizer constrains the functions learned for each task to be similar to the overall average of the functions across all tasks. This is useful for expressing prior information that each task is expected to share with each other task. An example is predicting blood iron levels measured at different times of the day, where each task represents an individual.
Clustered mean-constrained regularization
where is a cluster of tasks.
This regularizer is similar to the mean-constrained regularizer, but instead enforces similarity between tasks within the same cluster. This can capture more complex prior information. This technique has been used to predict Netflix recommendations. A cluster would correspond to a group of people who share similar preferences.
Graph-based similarity
More generally than above, similarity between tasks can be defined by a function. The regularizer encourages the model to learn similar functions for similar tasks.
for a given symmetric similarity matrix .
Other uses of regularization in statistics and machine learning
Bayesian learning methods make use of a prior probability that (usually) gives lower probability to more complex models. Well-known model selection techniques include the Akaike information criterion (AIC), minimum description length (MDL), and the Bayesian information criterion (BIC). Alternative methods of controlling overfitting not involving regularization include cross-validation.
Examples of applications of different methods of regularization to the linear model are:
See also
Bayesian interpretation of regularization
Bias–variance tradeoff
Matrix regularization
Regularization by spectral filtering
Regularized least squares
Lagrange multiplier
Variance reduction
Notes
References
Mathematical analysis
Inverse problems | Regularization (mathematics) | [
"Mathematics"
] | 3,078 | [
"Inverse problems",
"Applied mathematics",
"Mathematical analysis"
] |
2,009,062 | https://en.wikipedia.org/wiki/Regularization%20%28physics%29 | In physics, especially quantum field theory, regularization is a method of modifying observables which have singularities in order to make them finite by the introduction of a suitable parameter called the regulator. The regulator, also known as a "cutoff", models our lack of knowledge about physics at unobserved scales (e.g. scales of small size or large energy levels). It compensates for (and requires) the possibility of separation of scales that "new physics" may be discovered at those scales which the present theory is unable to model, while enabling the current theory to give accurate predictions as an "effective theory" within its intended scale of use.
It is distinct from renormalization, another technique to control infinities without assuming new physics, by adjusting for self-interaction feedback.
Regularization was for many decades controversial even amongst its inventors, as it combines physical and epistemological claims into the same equations. However, it is now well understood and has proven to yield useful, accurate predictions.
Overview
Regularization procedures deal with infinite, divergent, and nonsensical expressions by introducing an auxiliary concept of a regulator (for example, the minimal distance in space which is useful, in case the divergences arise from short-distance physical effects). The correct physical result is obtained in the limit in which the regulator goes away (in our example, ), but the virtue of the regulator is that for its finite value, the result is finite.
However, the result usually includes terms proportional to expressions like which are not well-defined in the limit . Regularization is the first step towards obtaining a completely finite and meaningful result; in quantum field theory it must be usually followed by a related, but independent technique called renormalization. Renormalization is based on the requirement that some physical quantities — expressed by seemingly divergent expressions such as — are equal to the observed values. Such a constraint allows one to calculate a finite value for many other quantities that looked divergent.
The existence of a limit as ε goes to zero and the independence of the final result from the regulator are nontrivial facts. The underlying reason for them lies in universality as shown by Kenneth Wilson and Leo Kadanoff and the existence of a second order phase transition. Sometimes, taking the limit as ε goes to zero is not possible. This is the case when we have a Landau pole and for nonrenormalizable couplings like the Fermi interaction. However, even for these two examples, if the regulator only gives reasonable results for (where is a superior energy cuttoff) and we are working with scales of the order of , regulators with still give pretty accurate approximations. The physical reason why we can't take the limit of ε going to zero is the existence of new physics below Λ.
It is not always possible to define a regularization such that the limit of ε going to zero is independent of the regularization. In this case, one says that the theory contains an anomaly. Anomalous theories have been studied in great detail and are often founded on the celebrated Atiyah–Singer index theorem or variations thereof (see, for example, the chiral anomaly).
Classical physics example
The problem of infinities first arose in the classical electrodynamics of point particles in the 19th and early 20th century.
The mass of a charged particle should include the mass–energy in its electrostatic field (electromagnetic mass). Assume that the particle is a charged spherical shell of radius r_e carrying charge q. The mass–energy in the field is

m_{\mathrm{em}} = \frac{q^2}{8\pi\varepsilon_0 r_e c^2},

which becomes infinite as r_e \to 0. This implies that the point particle would have infinite inertia, making it unable to be accelerated. Incidentally, the value of r_e that makes m_{\mathrm{em}} equal to the electron mass is called the classical electron radius, which (setting q = e and restoring factors of c and \varepsilon_0) turns out to be

r_e = \frac{e^2}{4\pi\varepsilon_0 m_e c^2} = \alpha \frac{\hbar}{m_e c} \approx 2.8 \times 10^{-15}\ \mathrm{m},

where \alpha is the fine-structure constant, and \hbar/(m_e c) is the reduced Compton wavelength of the electron.
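The number involved is easy to check; the short calculation below is an editorial addition that evaluates the classical electron radius from the formula above using the CODATA constants shipped with scipy.

from math import pi
from scipy.constants import c, e, epsilon_0, m_e

r_e = e**2 / (4 * pi * epsilon_0 * m_e * c**2)
print(f"classical electron radius: {r_e:.4e} m")  # about 2.818e-15 m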
Regularization: Classical physics theory breaks down at small scales, e.g., the difference between an electron and a point particle shown above. Addressing this problem requires new kinds of additional physical constraints. For instance, in this case, assuming a finite electron radius (i.e., regularizing the electron mass-energy) suffices to explain the system below a certain size. Similar regularization arguments work in other renormalization problems. For example, a theory may hold under one narrow set of conditions, but due to calculations involving infinities or singularities, it may breakdown under other conditions or scales. In the case of the electron, another way to avoid infinite mass-energy while retaining the point nature of the particle is to postulate tiny additional dimensions over which the particle could 'spread out' rather than restrict its motion solely over 3D space. This is precisely the motivation behind string theory and other multi-dimensional models including multiple time dimensions. Rather than the existence of unknown new physics, assuming the existence of particle interactions with other surrounding particles in the environment, renormalization offers an alternative strategy to resolve infinities in such classical problems.
Specific types
Specific types of regularization procedures include
Dimensional regularization
Pauli–Villars regularization
Lattice regularization
Zeta function regularization
Causal regularization
Hadamard regularization
Realistic regularization
Conceptual problem
Perturbative predictions by quantum field theory about quantum scattering of elementary particles, implied by a corresponding Lagrangian density, are computed using the Feynman rules, a regularization method to circumvent ultraviolet divergences so as to obtain finite results for Feynman diagrams containing loops, and a renormalization scheme. Regularization method results in regularized n-point Green's functions (propagators), and a suitable limiting procedure (a renormalization scheme) then leads to perturbative S-matrix elements. These are independent of the particular regularization method used, and enable one to model perturbatively the measurable physical processes (cross sections, probability amplitudes, decay widths and lifetimes of excited states). However, so far no known regularized n-point Green's functions can be regarded as being based on a physically realistic theory of quantum-scattering since the derivation of each disregards some of the basic tenets of conventional physics (e.g., by not being Lorentz-invariant, by introducing either unphysical particles with a negative metric or wrong statistics, or discrete space-time, or lowering the dimensionality of space-time, or some combination thereof). So the available regularization methods are understood as formalistic technical devices, devoid of any direct physical meaning. In addition, there are qualms about renormalization. For a history and comments on this more than half-a-century old open conceptual problem, see e.g.
Pauli's conjecture
As it seems that the vertices of non-regularized Feynman series adequately describe interactions in quantum scattering, it is taken that their ultraviolet divergences are due to the asymptotic, high-energy behavior of the Feynman propagators. So it is a prudent, conservative approach to retain the vertices in Feynman series, and modify only the Feynman propagators to create a regularized Feynman series. This is the reasoning behind the formal Pauli–Villars covariant regularization by modification of Feynman propagators through auxiliary unphysical particles, cf. and representation of physical reality by Feynman diagrams.
In 1949 Pauli conjectured there is a realistic regularization, which is implied by a theory that respects all the established principles of contemporary physics. So its propagators (i) do not need to be regularized, and (ii) can be regarded as such a regularization of the propagators used in quantum field theories that might reflect the underlying physics. The additional parameters of such a theory do not need to be removed (i.e. the theory needs no renormalization) and may provide some new information about the physics of quantum scattering, though they may turn out experimentally to be negligible. By contrast, any present regularization method introduces formal coefficients that must eventually be disposed of by renormalization.
Opinions
Paul Dirac was persistently, extremely critical about procedures of renormalization. In 1963, he wrote, "… in the renormalization theory we have a theory that has defied all the attempts of the mathematician to make it sound. I am inclined to suspect that the renormalization theory is something that will not survive in the future,…" He further observed that "One can distinguish between two main procedures for a theoretical physicist. One of them is to work from the experimental basis ... The other procedure is to work from the mathematical basis. One examines and criticizes the existing theory. One tries to pin-point the faults in it and then tries to remove them. The difficulty here is to remove the faults without destroying the very great successes of the existing theory."
Abdus Salam remarked in 1972, "Field-theoretic infinities first encountered in Lorentz's computation of electron have persisted in classical electrodynamics for seventy and in quantum electrodynamics for some thirty-five years. These long years of frustration have left in the subject a curious affection for the infinities and a passionate belief that they are an inevitable part of nature; so much so that even the suggestion of a hope that they may after all be circumvented - and finite values for the renormalization constants computed - is considered irrational."
However, in Gerard ’t Hooft’s opinion, "History tells us that if we hit upon some obstacle, even if it looks like a pure formality or just a technical complication, it should be carefully scrutinized. Nature might be telling us something, and we should find out what it is."
The difficulty with a realistic regularization is that so far there is none, although nothing could be destroyed by its bottom-up approach; and there is no experimental basis for it.
Minimal realistic regularization
Considering distinct theoretical problems, Dirac in 1963 suggested: "I believe separate ideas will be needed to solve these distinct problems and that they will be solved one at a time through successive stages in the future evolution of physics. At this point I find myself in disagreement with most physicists. They are inclined to think one master idea will be discovered that will solve all these problems together. I think it is asking too much to hope that anyone will be able to solve all these problems together. One should separate them one from another as much as possible and try to tackle them separately. And I believe the future development of physics will consist of solving them one at a time, and that after any one of them has been solved there will still be a great mystery about how to attack further ones."
According to Dirac, "Quantum electrodynamics is the domain of physics that we know most about, and presumably it will have to be put in order before we can hope to make any fundamental progress with other field theories, although these will continue to develop on the experimental basis."
Dirac’s two preceding remarks suggest that we should start searching for a realistic regularization in the case of quantum electrodynamics (QED) in the four-dimensional Minkowski spacetime, starting with the original QED Lagrangian density.
The path-integral formulation provides the most direct way from the Lagrangian density to the corresponding Feynman series in its Lorentz-invariant form. The free-field part of the Lagrangian density determines the Feynman propagators, whereas the rest determines the vertices. As the QED vertices are considered to adequately describe interactions in QED scattering, it makes sense to modify only the free-field part of the Lagrangian density so as to obtain such regularized Feynman series that the Lehmann–Symanzik–Zimmermann reduction formula provides a perturbative S-matrix that: (i) is Lorentz-invariant and unitary; (ii) involves only the QED particles; (iii) depends solely on QED parameters and those introduced by the modification of the Feynman propagators—for particular values of these parameters it is equal to the QED perturbative S-matrix; and (iv) exhibits the same symmetries as the QED perturbative S-matrix. Let us refer to such a regularization as the minimal realistic regularization, and start searching for the corresponding, modified free-field parts of the QED Lagrangian density.
Transport theoretic approach
According to Bjorken and Drell, it would make physical sense to sidestep ultraviolet divergences by using more detailed description than can be provided by differential field equations. And Feynman noted about the use of differential equations: "... for neutron diffusion it is only an approximation that is good when the distance over which we are looking is large compared with the mean free path. If we looked more closely, we would see individual neutrons running around." And then he wondered, "Could it be that the real world consists of little X-ons which can be seen only at very tiny distances? And that in our measurements we are always observing on such a large scale that we can’t see these little X-ons, and that is why we get the differential equations? ... Are they [therefore] also correct only as a smoothed-out imitation of a really much more complicated microscopic world?"
Already in 1938, Heisenberg proposed that a quantum field theory can provide only an idealized, large-scale description of quantum dynamics, valid for distances larger than some fundamental length, expected also by Bjorken and Drell in 1965. Feynman's preceding remark provides a possible physical reason for its existence; either that or it is just another way of saying the same thing (there is a fundamental unit of distance) but having no new information.
Hints at new physics
The need for regularization terms in any quantum field theory of quantum gravity is a major motivation for physics beyond the standard model. Infinities of the non-gravitational forces in QFT can be controlled via renormalization only but additional regularization - and hence new physics—is required uniquely for gravity. The regularizers model, and work around, the breakdown of QFT at small scales and thus show clearly the need for some other theory to come into play beyond QFT at these scales. A. Zee (Quantum Field Theory in a Nutshell, 2003) considers this to be a benefit of the regularization framework—theories can work well in their intended domains but also contain information about their own limitations and point clearly to where new physics is needed.
See also
Zeldovich regularization
References
Concepts in physics
Quantum field theory
Summability methods | Regularization (physics) | [
"Physics",
"Mathematics"
] | 3,048 | [
"Sequences and series",
"Quantum field theory",
"Mathematical structures",
"Summability methods",
"Quantum mechanics",
"nan"
] |
2,009,207 | https://en.wikipedia.org/wiki/Small-angle%20approximation | For small angles, the trigonometric functions sine, cosine, and tangent can be calculated with reasonable accuracy by the following simple approximations:
provided the angle is measured in radians. Angles measured in degrees must first be converted to radians by multiplying them by .
These approximations have a wide range of uses in branches of physics and engineering, including mechanics, electromagnetism, optics, cartography, astronomy, and computer science. One reason for this is that they can greatly simplify differential equations that do not need to be solved with absolute precision.
There are a number of ways to demonstrate the validity of the small-angle approximations. The most direct method is to truncate the Maclaurin series for each of the trigonometric functions. Depending on the order of the approximation, is approximated as either or as .
Justifications
Graphic
The accuracy of the approximations can be seen below in Figure 1 and Figure 2. As the measure of the angle approaches zero, the difference between the approximation and the original function also approaches 0.
Geometric
The red section on the right, , is the difference between the lengths of the hypotenuse, , and the adjacent side, . As is shown, and are almost the same length, meaning is close to 1 and helps trim the red away.
The opposite leg, , is approximately equal to the length of the blue arc, . Gathering facts from geometry, , from trigonometry, and , and from the picture, and leads to:
Simplifying leaves,
Calculus
Using the squeeze theorem, we can prove that

\lim_{\theta \to 0} \frac{\sin\theta}{\theta} = 1,

which is a formal restatement of the approximation \sin\theta \approx \theta for small values of θ.
A more careful application of the squeeze theorem proves that

\lim_{\theta \to 0} \frac{1 - \cos\theta}{\theta} = 0,

from which we conclude that \cos\theta \approx 1 for small values of θ.
Finally, L'Hôpital's rule tells us that

\lim_{\theta \to 0} \frac{1 - \cos\theta}{\theta^2} = \frac{1}{2},

which rearranges to \cos\theta \approx 1 - \tfrac{\theta^2}{2} for small values of θ. Alternatively, we can use the double angle formula \cos 2A = 1 - 2\sin^2 A. By letting A = \tfrac{\theta}{2}, we get that \cos\theta = 1 - 2\sin^2\tfrac{\theta}{2} \approx 1 - \tfrac{\theta^2}{2}.
Algebraic
The Taylor series expansions of trigonometric functions sine, cosine, and tangent near zero are:

\sin\theta = \theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \cdots, \qquad
\cos\theta = 1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - \cdots, \qquad
\tan\theta = \theta + \frac{\theta^3}{3} + \frac{2\theta^5}{15} + \cdots,

where \theta is the angle in radians. For very small angles, higher powers of \theta become extremely small, for instance if \theta = 0.01, then \theta^3 = 0.000001, just one ten-thousandth of \theta. Thus for many purposes it suffices to drop the cubic and higher terms and approximate the sine and tangent of a small angle using the radian measure of the angle, \sin\theta \approx \tan\theta \approx \theta, and drop the quadratic term and approximate the cosine as \cos\theta \approx 1.
If additional precision is needed the quadratic and cubic terms can also be included,

\sin\theta \approx \theta - \frac{\theta^3}{6},

\cos\theta \approx 1 - \frac{\theta^2}{2}, and

\tan\theta \approx \theta + \frac{\theta^3}{3}.
Dual numbers
One may also use dual numbers, defined as numbers of the form a + b\varepsilon, with a, b real and \varepsilon satisfying by definition \varepsilon^2 = 0 and \varepsilon \neq 0. By using the Maclaurin series of cosine and sine, one can show that \sin(\theta\varepsilon) = \theta\varepsilon and \cos(\theta\varepsilon) = 1. Furthermore, it is not hard to prove that the Pythagorean identity holds:

\sin^2(\theta\varepsilon) + \cos^2(\theta\varepsilon) = (\theta\varepsilon)^2 + 1^2 = \theta^2\varepsilon^2 + 1 = 0 + 1 = 1.
Error of the approximations
Near zero, the relative error of the approximations \sin\theta \approx \theta, \cos\theta \approx 1, and \tan\theta \approx \theta is quadratic in \theta: for each order of magnitude smaller the angle is, the relative error of these approximations shrinks by two orders of magnitude. The approximation \cos\theta \approx 1 - \tfrac{\theta^2}{2} has relative error which is quartic in \theta: for each order of magnitude smaller the angle is, the relative error shrinks by four orders of magnitude.
Figure 3 shows the relative errors of the small angle approximations. The angles at which the relative error exceeds 1% are as follows (a numerical check follows the list):
\cos\theta \approx 1 at about 0.14 radians (8.1°)
\tan\theta \approx \theta at about 0.17 radians (9.9°)
\sin\theta \approx \theta at about 0.24 radians (14.0°)
\cos\theta \approx 1 - \tfrac{\theta^2}{2} at about 0.66 radians (37.9°)
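These thresholds can be reproduced numerically; the sketch below is an editorial check that scans over angles and reports where the relative error of each approximation first exceeds 1%.

import numpy as np

theta = np.linspace(1e-4, 1.0, 200000)
approximations = {
    "sin x ~ x":           np.abs((theta - np.sin(theta)) / np.sin(theta)),
    "tan x ~ x":           np.abs((theta - np.tan(theta)) / np.tan(theta)),
    "cos x ~ 1":           np.abs((1.0 - np.cos(theta)) / np.cos(theta)),
    "cos x ~ 1 - x^2/2":   np.abs((1.0 - theta**2 / 2 - np.cos(theta)) / np.cos(theta)),
}
for name, rel_err in approximations.items():
    first = theta[np.argmax(rel_err > 0.01)]  # first angle where the relative error exceeds 1%
    print(f"{name}: {first:.2f} rad ({np.degrees(first):.1f} deg)")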
Angle sum and difference
The angle addition and subtraction theorems reduce to the following when one of the angles is small (β ≈ 0):
{|
|style="text-align:right;"| cos(α + β) ||≈ cos(α) − β sin(α),
|-
|style="text-align:right;"| cos(α − β) ||≈ cos(α) + β sin(α),
|-
|style="text-align:right;"| sin(α + β) ||≈ sin(α) + β cos(α),
|-
|style="text-align:right;"| sin(α − β) ||≈ sin(α) − β cos(α).
|}
Specific uses
Astronomy
In astronomy, the angular size or angle subtended by the image of a distant object is often only a few arcseconds (denoted by the symbol ″), so it is well suited to the small angle approximation. The linear size (D) is related to the angular size (X) and the distance from the observer (d) by the simple formula:

D = \frac{X d}{206\,265},

where X is measured in arcseconds.
The quantity 206 265 is approximately equal to the number of arcseconds in a circle (1 296 000), divided by 2\pi, or, the number of arcseconds in 1 radian.
The exact formula is

D = d \tan\!\left( X \frac{\pi}{648\,000} \right),

and the above approximation follows when \tan X is replaced by X.
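As a worked illustration (the angular size and distance below are arbitrary example values, not data from the article), the approximate and exact formulas agree far beyond the leading digits for small angles.

from math import pi, tan

X = 20.0      # angular size in arcseconds (example value)
d = 4.0e13    # distance to the object in kilometres (example value)

D_approx = X * d / 206265
D_exact = d * tan(X * pi / (180 * 3600))
print(D_approx, D_exact)  # the two linear sizes differ only in the trailing digits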
Motion of a pendulum
The second-order cosine approximation is especially useful in calculating the potential energy of a pendulum, which can then be applied with a Lagrangian to find the indirect (energy) equation of motion.
When calculating the period of a simple pendulum, the small-angle approximation for sine is used to allow the resulting differential equation to be solved easily by comparison with the differential equation describing simple harmonic motion.
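The size of the error introduced by the approximation can be illustrated with the leading large-amplitude correction to the pendulum period, T ≈ T₀(1 + θ₀²/16); the sketch below is an editorial example with arbitrary pendulum length and amplitudes.

from math import pi, radians, sqrt

g, length = 9.81, 1.0                      # example values (SI units)
T0 = 2 * pi * sqrt(length / g)             # small-angle (simple harmonic) period
for amplitude_deg in (5, 10, 30):
    theta0 = radians(amplitude_deg)
    T = T0 * (1 + theta0**2 / 16)          # first correction term of the exact series
    print(amplitude_deg, round(100 * (T / T0 - 1), 3), "% longer than the small-angle period")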
Optics
In optics, the small-angle approximations form the basis of the paraxial approximation.
Wave Interference
The sine and tangent small-angle approximations are used in relation to the double-slit experiment or a diffraction grating to develop simplified equations like the following, where y is the distance of a fringe from the center of maximum light intensity, m is the order of the fringe, \lambda is the wavelength of the light, L is the distance between the slits and projection screen, and d is the distance between the slits:

y \approx \frac{m \lambda L}{d}.
Structural mechanics
The small-angle approximation also appears in structural mechanics, especially in stability and bifurcation analyses (mainly of axially-loaded columns ready to undergo buckling). This leads to significant simplifications, though at a cost in accuracy and insight into the true behavior.
Piloting
The 1 in 60 rule used in air navigation has its basis in the small-angle approximation, plus the fact that one radian is approximately 60 degrees.
Interpolation
The formulas for addition and subtraction involving a small angle may be used for interpolating between trigonometric table values:
Example: sin(0.755)

\sin(0.755) = \sin(0.75 + 0.005) \approx \sin(0.75) + 0.005\cos(0.75) \approx 0.6816 + 0.005 \times 0.7317 \approx 0.6853,

where the values for sin(0.75) = 0.6816 and cos(0.75) = 0.7317 are obtained from a trigonometric table. The result is accurate to the four digits given.
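This can be checked directly (an editorial numerical check of the example above):

from math import cos, sin

approx = sin(0.75) + 0.005 * cos(0.75)          # small-angle interpolation between table values
print(round(approx, 4), round(sin(0.755), 4))   # both round to 0.6853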
See also
Skinny triangle
Versine
Exsecant
References
Trigonometry
Equations of astronomy | Small-angle approximation | [
"Physics",
"Astronomy"
] | 1,435 | [
"Concepts in astronomy",
"Equations of astronomy"
] |
2,009,397 | https://en.wikipedia.org/wiki/Lithium%20borate | Lithium borate, also known as lithium tetraborate is an inorganic compound with the formula Li2B4O7. A colorless solid, lithium borate is used in making glasses and ceramics.
Structure
Its structure consists of a polymeric borate backbone. The Li+ centers are bound to four and five oxygen ligands. Boron centers are trigonal and tetrahedral.
Lithium borate can be used in the laboratory as LB buffer for gel electrophoresis of DNA and RNA. It is also used in the borax fusion method to vitrify mineral powder specimens for analysis by WDXRF spectroscopy.
See also
LB buffer
Lithium metaborate (LiBO2)
References
Borates
Lithium compounds | Lithium borate | [
"Chemistry"
] | 149 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
2,009,410 | https://en.wikipedia.org/wiki/Lithium%20acetate | Lithium acetate (CH3COOLi) is a salt of lithium and acetic acid. It is often abbreviated as LiOAc.
Uses
Lithium acetate is used in the laboratory as buffer for gel electrophoresis of DNA and RNA. It has a lower electrical conductivity and can be run at higher speeds than can gels made from TAE buffer (5-30V/cm as compared to 5-10V/cm). At a given voltage, the heat generation and thus the gel temperature is much lower than with TAE buffers, therefore the voltage can be increased to speed up electrophoresis so that a gel run takes only a fraction of the usual time. Downstream applications, such as isolation of DNA from a gel slice or Southern blot analysis, work as expected when using lithium acetate gels.
Lithium boric acid or sodium boric acid are usually preferable to lithium acetate or TAE when analyzing smaller fragments of DNA (less than 500 bp) due to the higher resolution of borate-based buffers in this size range as compared to acetate buffers.
Lithium acetate is also used to permeabilize the cell wall of yeast for use in DNA transformation. It is believed that the beneficial effect of LiOAc is caused by its chaotropic effect; denaturing DNA, RNA and proteins.
References
Acetates
Lithium salts
Organolithium compounds | Lithium acetate | [
"Chemistry"
] | 288 | [
"Lithium salts",
"Salts"
] |
2,009,899 | https://en.wikipedia.org/wiki/Cicero%20%28typography%29 | A cicero is a unit of measure used in typography in Italy, France and other continental European countries, first used by Pannartz and Sweynheim in 1468 for the edition of Cicero's Epistulae ad Familiares. The font size thus acquired the name cicero.
It is one sixth of the historical French inch, and is divided into 12 points, known in English as French points or Didot points. The unit of the cicero is similar to an English pica, although the French inch was slightly larger than the English inch. There are about 1.066 picas to a cicero; a pica is 4.23333333 mm and a cicero is 4.51165812456 mm.
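The figures quoted above are mutually consistent, as a quick check shows (an editorial addition; the millimetre values are those given in the text).

pica_mm = 4.23333333
cicero_mm = 4.51165812456
print(round(cicero_mm / pica_mm, 3))  # about 1.066 picas per cicero
print(round(cicero_mm * 6, 2))        # six ciceros make one historical French inch, about 27.07 mm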
Cicero (and the points derived from cicero) was used in the early days of typography in continental Europe. In modern times, all computers use pica (and the points derived from pica) as font size measurement – alongside millimeters in countries using the metric system – for line length and paper size measurement.
References
Typography
Units of length | Cicero (typography) | [
"Mathematics"
] | 222 | [
"Quantity",
"Units of measurement",
"Units of length"
] |
6,701,870 | https://en.wikipedia.org/wiki/NOON%20state | In quantum optics, a NOON state or N00N state is a quantum-mechanical many-body entangled state:
which represents a superposition of N particles in mode a with zero particles in mode b, and vice versa. Usually, the particles are photons, but in principle any bosonic field can support NOON states.
Applications
NOON states are an important concept in quantum metrology and quantum sensing for their ability to make precision phase measurements when used in an optical interferometer. For example, consider the observable

A = |N, 0\rangle\langle 0, N| + |0, N\rangle\langle N, 0|.

The expectation value of A for a system in a NOON state switches between +1 and −1 when \theta changes from 0 to \pi/N. Moreover, the error in the phase measurement becomes

\Delta\theta = \frac{1}{N}.

This is the so-called Heisenberg limit, and gives a quadratic improvement over the standard quantum limit (\Delta\theta = 1/\sqrt{N}). NOON states are closely related to Schrödinger cat states and GHZ states, and are extremely fragile.
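The behaviour can be illustrated with a short numerical sketch (an editorial example; it simply evaluates the expectation value cos Nθ described above and tabulates the two limits, with the photon number chosen arbitrarily).

import numpy as np

N = 5                                  # photon number of the NOON state (example value)
for theta in (0.0, np.pi / (2 * N), np.pi / N):
    print(theta, np.cos(N * theta))    # <A> = cos(N*theta) swings from +1 to -1 over [0, pi/N]

for n in (1, 4, 16, 64):
    print(n, 1 / np.sqrt(n), 1 / n)    # standard quantum limit vs Heisenberg limit for n photons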
Towards experimental realization
There have been several theoretical proposals for creating photonic NOON states. Pieter Kok, Hwang Lee, and Jonathan Dowling proposed the first general method based on post-selection via photodetection. The down-side of this method was its exponential scaling of the success probability of the protocol. Pryde and White subsequently introduced a simplified method using intensity-symmetric multiport beam splitters, single photon inputs, and either heralded or conditional measurement. Their method, for example, allows heralded production of the N = 4 NOON state without the need for postselection or zero photon detections, and has the same success probability of 3/64 as the more complicated circuit of Kok et al. Cable and Dowling proposed a method that has polynomial scaling in the success probability, which can therefore be called efficient.
Two-photon NOON states, where N = 2, can be created deterministically from two identical photons and a 50:50 beam splitter. This is called the Hong–Ou–Mandel effect in quantum optics. Three- and four-photon NOON states cannot be created deterministically from single-photon states, but they have been created probabilistically via post-selection using spontaneous parametric down-conversion. A different approach, involving the interference of non-classical light created by spontaneous parametric down-conversion and a classical laser beam on a 50:50 beam splitter, was used by I. Afek, O. Ambar, and Y. Silberberg to experimentally demonstrate the production of NOON states up to N = 5.
Super-resolution had previously been used as an indicator of NOON state production, but in 2005 Resch et al. showed that it could equally well be produced by classical interferometry. They showed that only phase super-sensitivity is an unambiguous indicator of a NOON state; furthermore, they introduced criteria for determining whether it has been achieved based on the observed visibility and efficiency. Phase super-sensitivity of NOON states with N = 2 was demonstrated experimentally, and super-resolution (but not super-sensitivity, as the efficiency was too low) was also demonstrated for NOON states with up to N = 4 photons.
History and terminology
NOON states were first introduced by Barry C. Sanders in the context of studying quantum decoherence in Schrödinger cat states. They were independently rediscovered in 2000 by Jonathan P. Dowling's group at JPL, who introduced them as the basis for the concept of quantum lithography. The term "NOON state" first appeared in print as a footnote in a paper published by Hwang Lee, Pieter Kok, and Jonathan Dowling on quantum metrology, where it was spelled N00N, with zeros instead of Os.
References
Quantum information science
Quantum states | NOON state | [
"Physics"
] | 749 | [
"Quantum states",
"Quantum mechanics"
] |
6,703,729 | https://en.wikipedia.org/wiki/Ramanujan%27s%20congruences | In mathematics, Ramanujan's congruences are the congruences for the partition function p(n) discovered by Srinivasa Ramanujan:
In plain words, e.g., the first congruence means that If a number is 4 more than a multiple of 5, i.e. it is in the sequence
4, 9, 14, 19, 24, 29, . . .
then the number of its partitions is a multiple of 5.
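The first congruence is easy to verify computationally; the sketch below is an editorial addition that computes p(n) with Euler's pentagonal-number recurrence and checks divisibility by 5 for n = 4, 9, 14, ...

def partition_numbers(n_max):
    """Compute p(0..n_max) using Euler's pentagonal number recurrence."""
    p = [1] + [0] * n_max
    for n in range(1, n_max + 1):
        total, k = 0, 1
        while True:
            pentagonals = [k * (3 * k - 1) // 2, k * (3 * k + 1) // 2]
            if pentagonals[0] > n:
                break
            sign = -1 if k % 2 == 0 else 1
            for g in pentagonals:
                if g <= n:
                    total += sign * p[n - g]
            k += 1
        p[n] = total
    return p

p = partition_numbers(50)
print([(n, p[n], p[n] % 5) for n in range(4, 50, 5)])  # every residue should be 0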
Later other congruences of this type were discovered, for numbers and for Tau-functions.
Background
In his 1919 paper, he proved the first two congruences using the following identities (using q-Pochhammer symbol notation):
He then stated that "It appears there are no equally simple properties for any moduli involving primes other than these".
After Ramanujan died in 1920, G. H. Hardy extracted proofs of all three congruences from an unpublished manuscript of Ramanujan on p(n) (Ramanujan, 1921). The proof in this manuscript employs the Eisenstein series.
In 1944, Freeman Dyson defined the rank function for a partition and conjectured the existence of a "crank" function for partitions that would provide a combinatorial proof of Ramanujan's congruences modulo 11. Forty years later, George Andrews and Frank Garvan found such a function, and proved the celebrated result that the crank simultaneously "explains" the three Ramanujan congruences modulo 5, 7 and 11.
In the 1960s, A. O. L. Atkin of the University of Illinois at Chicago discovered additional congruences for small prime moduli. For example:
Extending the results of A. Atkin, Ken Ono in 2000 proved that there are such Ramanujan congruences modulo every integer coprime to 6. For example, his results give
Later Ken Ono conjectured that the elusive crank also satisfies exactly the same types of general congruences. This was proved by his Ph.D. student Karl Mahlburg in his 2005 paper Partition Congruences and the Andrews–Garvan–Dyson Crank, linked below. This paper won the first Proceedings of the National Academy of Sciences Paper of the Year prize.
A conceptual explanation for Ramanujan's observation was finally discovered in January 2011 by considering the Hausdorff dimension of the following function in the l-adic topology:
It is seen to have dimension 0 only in the cases where ℓ = 5, 7 or 11 and since the partition function can be written as a linear combination of these functions this can be considered a formalization and proof of Ramanujan's observation.
In 2001, R.L. Weaver gave an effective algorithm for finding congruences of the partition function, and tabulated 76,065 congruences. This was extended in 2012 by F. Johansson to 22,474,608,014 congruences, one large example being
References
External links
Dyson's rank, crank and adjoint. A list of references.
Theorems in number theory
Srinivasa Ramanujan
Equivalence (mathematics) | Ramanujan's congruences | [
"Mathematics"
] | 669 | [
"Mathematical theorems",
"Mathematical problems",
"Number theory",
"Theorems in number theory"
] |
1,370,229 | https://en.wikipedia.org/wiki/Electrorheological%20fluid | Electrorheological (ER) fluids are suspensions of extremely fine non-conducting but electrically active particles (up to 50 micrometres diameter) in an electrically insulating fluid. The apparent viscosity of these fluids changes reversibly by an order of up to 100,000 in response to an electric field. For example, a typical ER fluid can go from the consistency of a liquid to that of a gel, and back, with response times on the order of milliseconds. The effect is sometimes called the Winslow effect after its discoverer, the American inventor Willis Winslow, who obtained a US patent on the effect in 1947 and wrote an article published in 1949.
The ER effect
The change in apparent viscosity is dependent on the applied electric field, i.e. the potential divided by the distance between the plates. The change is not a simple change in viscosity, hence these fluids are now known as ER fluids, rather than by the older term Electro Viscous fluids. The effect is better described as an electric field dependent shear yield stress. When activated an ER fluid behaves as a Bingham plastic (a type of viscoelastic material), with a yield point which is determined by the electric field strength. After the yield point is reached, the fluid shears as a fluid, i.e. the incremental shear stress is proportional to the rate of shear (in a Newtonian fluid there is no yield point and stress is directly proportional to shear). Hence the resistance to motion of the fluid can be controlled by adjusting the applied electric field.
Composition and theory
ER fluids are a type of smart fluid. A simple ER fluid can be made by mixing cornflour in a light vegetable oil or (better) silicone oil.
There are two main theories to explain the effect: the interfacial tension or 'water bridge' theory, and the electrostatic theory. The water bridge theory assumes a three phase system, the particles contain the third phase which is another liquid (e.g. water) immiscible with the main phase liquid (e.g. oil). With no applied electric field the third phase is strongly attracted to and held within the particles. This means the ER fluid is a suspension of particles, which behaves as a liquid. When an electric field is applied the third phase is driven to one side of the particles by electro osmosis and binds adjacent particles together to form chains. This chain structure means the ER fluid has become a solid. The electrostatic theory assumes just a two phase system, with dielectric particles forming chains aligned with an electric field in an analogous way to how magnetorheological fluid (MR) fluids work. An ER fluid has been constructed with the solid phase made from a conductor coated in an insulator. This ER fluid clearly cannot work by the water bridge model. However, although demonstrating that some ER fluids work by the electrostatic effect, it does not prove that all ER fluids do so. The advantage of having an ER fluid which operates on the electrostatic effect is the elimination of leakage current, i.e. potentially there is no direct current. Of course, since ER devices behave electrically as capacitors, and the main advantage of the ER effect is the speed of response, an alternating current is to be expected.
The particles are electrically active. They can be ferroelectric or, as mentioned above, made from a conducting material coated with an insulator, or electro-osmotically active particles. In the case of ferroelectric or conducting material, the particles would have a high dielectric constant. There may be some confusion here as to the dielectric constant of a conductor, but "if a material with a high dielectric constant is placed in an electric field, the magnitude of that field will be measurably reduced within the volume of the dielectric" (see main page: Dielectric constant), and since the electric field is zero in an ideal conductor, then in this context the dielectric constant of a conductor is infinite.
Another factor that influences the ER effect is the geometry of the electrodes. The introduction of parallel grooved electrodes showed slight increase in the ER effect but perpendicular grooved electrodes doubled the ER effect. A much larger increase in ER effect can be obtained by coating the electrodes with electrically polarisable materials. This turns the usual disadvantage of dielectrophoresis into a useful effect. It also has the effect of reducing leakage currents in the ER fluid.
The giant electrorheological (GER) fluid was discovered in 2003, and is able to sustain higher yield strengths than many other ER fluids. The GER fluid consists of urea-coated nanoparticles of barium titanium oxalate suspended in silicone oil. The high yield strength is due to the high dielectric constant of the particles, the small size of the particles and the urea coating. Another advantage of the GER fluid is that the relationship between the electric field strength and the yield strength is linear once the electric field exceeds 1 kV/mm. Compared with many other ER fluids, the GER fluid achieves a high yield strength at relatively low electric field strength and low current density. The procedure for preparing the suspension is described in the literature. A major concern is the use of oxalic acid in the preparation of the particles, as it is a strong organic acid.
Applications
The normal application of ER fluids is in fast acting hydraulic valves and clutches, with the separation between plates being in the order of 1 mm and the applied potential being in the order of 1 kV. In simple terms, when the electric field is applied, an ER hydraulic valve is shut or the plates of an ER clutch are locked together, when the electric field is removed the ER hydraulic valve is open or the clutch plates are disengaged. Other common applications are in ER brakes (think of a brake as a clutch with one side fixed) and shock absorbers (which can be thought of as closed hydraulic systems where the shock is used to try to pump fluid through a valve).
There are many novel uses for these fluids. Potential uses are in accurate abrasive polishing and as haptic controllers and tactile displays.
ER fluid has also been proposed to have potential applications in flexible electronics, with the fluid incorporated in elements such as rollable screens and keypads, in which the viscosity-changing qualities of the fluid allowing the rollable elements to become rigid for use, and flexible to roll and retract for storing when not in use. Motorola filed a patent application for mobile device applications in 2006.
Problems and advantages
A major problem is that ER fluids are suspensions and hence tend to settle out over time. Advanced ER fluids tackle this problem by means such as matching the densities of the solid and liquid components, or by using nanoparticles, which brings ER fluids into line with the development of magnetorheological fluids. Another problem is that the breakdown voltage of air is ~3 kV/mm, which is close to the electric field strength needed for ER devices to operate.
An advantage is that an ER device can control considerably more mechanical power than the electrical power used to control the effect, i.e. it can act as a power amplifier. But the main advantage is the speed of response. There are few other effects able to control such large amounts of mechanical or hydraulic power so rapidly.
Unfortunately, the increase in apparent viscosity experienced by most Electrorheological fluids used in shear or flow modes is relatively limited. The ER fluid changes from a Newtonian liquid to a partially crystalline "semi-hard slush". However, an almost complete liquid to solid phase change can be obtained when the electrorheological fluid additionally experiences compressive stress. This effect has been used to provide electrorheological Braille displays and very effective clutches.
See also
Continuum mechanics
Debye–Falkenhagen effect
Electroactive polymers
Electroadhesion
Electroviscous effects
Ferrofluid
Fluid mechanics
Magnetorheological fluid
Electrowetting
Smart fluid
References
Smart materials | Electrorheological fluid | [
"Materials_science",
"Engineering"
] | 1,653 | [
"Smart materials",
"Materials science"
] |
1,371,370 | https://en.wikipedia.org/wiki/NIPRNet | The Non-classified Internet Protocol (IP) Router Network (NIPRNet) is an IP network used to exchange unclassified information, including information subject to controls on distribution, among the private network's users. The NIPRNet also provides its users access to the Internet.
It is one of the United States Department of Defense's three main networks. The others include SIPRNet and JWICS.
History
NIPRNet is composed of Internet Protocol routers owned by the United States Department of Defense (DOD). It was created in the 1980s and managed by the Defense Information Systems Agency (DISA) to supersede the earlier MILNET.
Security improvements
In the years leading up to 2010, NIPRNet grew faster than the U.S. Department of Defense could monitor. DoD spent $10 million in 2010 to map out the current state of the NIPRNet, in an effort to analyze its expansion and identify unauthorized users, who were suspected to have quietly joined the network. The NIPRNet survey, which used IPSonar software developed by Lumeta Corporation, also looked for weaknesses in security caused by network configuration. The Department of Defense made a major effort in the years leading up to 2010 to improve network security. The Pentagon announced it was requesting $2.3 billion in the 2012 budget to bolster network security within the Defense Department and to strengthen ties with its counterparts at the Department of Homeland Security.
Alternative names
SIPRNet and NIPRNet are referred to colloquially as SIPPERnet and NIPPERnet (or simply sipper and nipper), respectively.
See also
Classified website
SIPRNet
RIPR
Joint Worldwide Intelligence Communications System (JWICS)
Intellipedia
Protective distribution system
NATO CRONOS
References
External links
DISA
Army and Defense Knowledge Online
Wide area networks
Cryptography | NIPRNet | [
"Mathematics",
"Engineering"
] | 371 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
1,371,504 | https://en.wikipedia.org/wiki/Gear%20pump | A gear pump uses the meshing of gears to pump fluid by displacement. They are one of the most common types of pumps for hydraulic fluid power applications. The gear pump was invented around 1600 by Johannes Kepler.
Gear pumps are also widely used in chemical installations to pump high-viscosity fluids. There are two main variations: external gear pumps which use two external spur gears, and internal gear pumps which use an external and an internal spur gear (internal spur gear teeth face inwards, see below). Gear pumps provide positive displacement (or fixed displacement), meaning they pump a constant amount of fluid for each revolution. Some gear pumps are designed to function as either a motor or a pump.
Theory of operation
As the gears rotate they separate on the intake side of the pump, creating a void and suction which is filled by fluid. The fluid is carried by the gears to the discharge side of the pump, where the meshing of the gears displaces the fluid. The mechanical clearances are small, on the order of 10 μm. The tight clearances, along with the speed of rotation, effectively prevent the fluid from leaking backwards.
The rigid design of the gears and housing allows for very high pressures and the ability to pump highly viscous fluids.
Many variations exist, including helical and herringbone gear sets (instead of spur gears), lobe shaped rotors similar to Roots blowers (commonly used as superchargers), and mechanical designs that allow the stacking of pumps. The most common variations are shown below (the drive gear is shown blue and the idler is shown purple).
External precision gear pumps are usually limited in their maximum working pressure and to maximum rotation speeds of around 3,000 RPM. Some manufacturers produce gear pumps with higher working pressures and speeds, but these types of pumps tend to be noisy and special precautions may have to be taken.
Suction and pressure ports need to interface where the gears mesh (shown as dim gray lines in the internal pump images). Some internal gear pumps have an additional, crescent-shaped seal (shown above, right). This crescent functions to keep the gears separated and also reduces eddy currents.
Pump formulas:
Flow rate = pumped volume per rotation × rotational speed
Power = flow rate × pressure
Power in HP ≈ flow rate in US gal/min × (pressure in lbf/in2)/1714
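A minimal numeric sketch of these formulas in Python (the displacement, speed, pressure and flow values are illustrative only):

# Gear pump formulas stated above, in SI units plus the US-customary HP rule of thumb.
def flow_rate_m3_per_s(displacement_per_rev_m3, speed_rpm):
    """Flow rate = pumped volume per rotation x rotational speed (rev/s)."""
    return displacement_per_rev_m3 * speed_rpm / 60.0

def power_watts(flow_m3_per_s, pressure_pa):
    """Power = flow rate x pressure."""
    return flow_m3_per_s * pressure_pa

def power_hp(flow_gpm, pressure_psi):
    """Power in HP ~= flow (US gal/min) x pressure (lbf/in^2) / 1714."""
    return flow_gpm * pressure_psi / 1714.0

q = flow_rate_m3_per_s(displacement_per_rev_m3=20e-6, speed_rpm=3000)  # 20 cm^3/rev
print(q)                      # 1.0e-3 m^3/s
print(power_watts(q, 20e6))   # ~20 kW at 20 MPa
print(power_hp(15.85, 2900))  # ~26.8 HP, consistent with ~20 kW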
Efficiency
Gear pumps are generally very efficient, especially in high-pressure applications.
Factors affecting efficiency:
Clearances: Geometric clearances at the end and outer diameter of the gears allows leakage and back flow. However sometimes higher clearances help reduce hydrodynamic friction and improve efficiency.
Gear backlash: High backlash between gears also allows fluid leakage. However, this helps to reduce wasted energy from trapping the fluid between gear teeth (known as pressure trapping).
Applications
Petrochemicals: Pure or filled bitumen, pitch, diesel oil, crude oil, lube oil etc.
Chemicals: Sodium silicate, acids, plastics, mixed chemicals, isocyanates etc.
Paint and ink
Resins and adhesives
Pulp and paper: acid, soap, lye, black liquor, kaolin, lime, latex, sludge etc.
Food: Chocolate, cacao butter, fillers, sugar, vegetable fats and oils, molasses, animal food etc.
Aviation: Jet engine fuel pumps
Development
The question of who invented the gear pump is not settled. On the one hand, the invention is traced back to Johannes Kepler in 1604; on the other hand, Gottfried Heinrich Graf zu Pappenheim is mentioned, who is said to have constructed a capsule blower with two rotating shafts for pumping air and water. Pappenheim is said to have adopted Kepler's design without mentioning his name.
See also
Gerotor
Hydraulic pump
Vane pump
References
External links
External gear pump description
Internal gear pump description
Pumps
Hydraulics | Gear pump | [
"Physics",
"Chemistry"
] | 789 | [
"Pumps",
"Turbomachinery",
"Physical systems",
"Hydraulics",
"Fluid dynamics"
] |
1,372,446 | https://en.wikipedia.org/wiki/Gravitational%20anomaly | In theoretical physics, a gravitational anomaly is an example of a gauge anomaly: it is an effect of quantum mechanics — usually a one-loop diagram—that invalidates the general covariance of a theory of general relativity combined with some other fields. The adjective "gravitational" is derived from the symmetry of a gravitational theory, namely from general covariance. A gravitational anomaly is generally synonymous with diffeomorphism anomaly, since general covariance is symmetry under coordinate reparametrization; i.e. diffeomorphism.
General covariance is the basis of general relativity, the classical theory of gravitation. Moreover, it is necessary for the consistency of any theory of quantum gravity, since it is required in order to cancel unphysical degrees of freedom with a negative norm, namely gravitons polarized along the time direction. Therefore, all gravitational anomalies must cancel out.
The anomaly usually appears as a Feynman diagram with a chiral fermion running in the loop (a polygon) with n external gravitons attached to the loop, where n = 1 + D/2 and D is the spacetime dimension.
Gravitational anomalies
Consider a classical gravitational field represented by the vielbein and a quantized Fermi field . The generating functional for this quantum field is
where is the quantum action and the factor before the Lagrangian is the vielbein determinant, the variation of the quantum action renders
in which we denote a mean value with respect to the path integral by the bracket . Let us label the Lorentz, Einstein and Weyl transformations respectively by their parameters ; they spawn the following anomalies:
Lorentz anomaly
which readily indicates that the energy-momentum tensor has an anti-symmetric part.
Einstein anomaly
this is related to the non-conservation of the energy-momentum tensor, i.e. .
Weyl anomaly
which indicates that the trace is non-zero.
See also
Mixed anomaly
Green–Schwarz mechanism
Gravitational instanton
References
External links
Anomalies (physics)
Anomaly | Gravitational anomaly | [
"Physics"
] | 411 | [
"Unsolved problems in physics",
"Quantum mechanics",
"Quantum gravity",
"Physics beyond the Standard Model",
"Quantum physics stubs"
] |
1,372,610 | https://en.wikipedia.org/wiki/Coleman%E2%80%93Mandula%20theorem | In theoretical physics, the Coleman–Mandula theorem is a no-go theorem stating that spacetime and internal symmetries can only combine in a trivial way. This means that the charges associated with internal symmetries must always transform as Lorentz scalars. Some notable exceptions to the no-go theorem are conformal symmetry and supersymmetry. It is named after Sidney Coleman and Jeffrey Mandula who proved it in 1967 as the culmination of a series of increasingly generalized no-go theorems investigating how internal symmetries can be combined with spacetime symmetries. The supersymmetric generalization is known as the Haag–Łopuszański–Sohnius theorem.
History
In the early 1960s, the global flavor symmetry associated with the eightfold way was shown to successfully describe the hadron spectrum for hadrons of the same spin. This led to efforts to expand the global symmetry to a larger SU(6) symmetry mixing both flavour and spin, an idea similar to that previously considered in nuclear physics by Eugene Wigner in 1937 for an SU(4) symmetry. This non-relativistic model united vector and pseudoscalar mesons of different spin into a 35-dimensional multiplet and it also united the spin-1/2 baryon octet and the spin-3/2 baryon decuplet into a 56-dimensional multiplet. While this was reasonably successful in describing various aspects of the hadron spectrum, from the perspective of quantum chromodynamics this success is merely a consequence of the flavour and spin independence of the force between quarks. There were many attempts to generalize this non-relativistic model into a fully relativistic one, but these all failed.
At the time it was also an open question whether there existed a symmetry for which particles of different masses could belong to the same multiplet. Such a symmetry could then account for the mass splitting found in mesons and baryons. It was only later understood that this is instead a consequence of the differing up-, down-, and strange-quark masses which leads to a breakdown of the internal flavor symmetry.
These two motivations led to a series of no-go theorems to show that spacetime symmetries and internal symmetries could not be combined in any but a trivial way. The first notable theorem was proved by William McGlinn in 1964, with a subsequent generalization by Lochlainn O'Raifeartaigh in 1965. These efforts culminated with the most general theorem by Sidney Coleman and Jeffrey Mandula in 1967.
Little notice was given to this theorem in subsequent years. As a result, the theorem played no role in the early development of supersymmetry, which instead emerged in the early 1970s from the study of dual resonance models, which are the precursor to string theory, rather than from any attempts to overcome the no-go theorem. Similarly, the Haag–Łopuszański–Sohnius theorem, a supersymmetric generalization of the Coleman–Mandula theorem, was proved in 1975 after the study of supersymmetry was already underway.
Theorem
Consider a theory that can be described by an S-matrix and that satisfies the following conditions
The symmetry group is a Lie group which includes the Poincaré group as a subgroup,
Below any mass, there are only a finite number of particle types,
Any two-particle state undergoes some reaction at almost all energies,
The amplitudes for elastic two-body scattering are analytic functions of the scattering angle at almost all energies and angles,
A technical assumption that the group generators are distributions in momentum space.
The Coleman–Mandula theorem states that the symmetry group of this theory is necessarily a direct product of the Poincaré group and an internal symmetry group. The last technical assumption is unnecessary if the theory is described by a quantum field theory and is only needed to apply the theorem in a wider context.
A kinematic argument for why the theorem should hold was provided by Edward Witten. The argument is that Poincaré symmetry acts as a very strong constraint on elastic scattering, leaving only the scattering angle unknown. Any additional spacetime dependent symmetry would overdetermine the amplitudes, making them nonzero only at discrete scattering angles. Since this conflicts with the assumption of the analyticity of the scattering angles, such additional spacetime dependent symmetries are ruled out.
Limitations
Conformal symmetry
The theorem does not apply to a theory of massless particles, with these allowing for conformal symmetry as an additional spacetime dependent symmetry. In particular, the algebra of this group is the conformal algebra, which consists of the Poincaré algebra together with the commutation relations for the dilatation generator and the special conformal transformations generator.
Supersymmetry
The Coleman–Mandula theorem assumes that the only symmetry algebras are Lie algebras, but the theorem can be generalized by instead considering Lie superalgebras. Doing this allows for additional anticommutating generators known as supercharges which transform as spinors under Lorentz transformations. This extension gives rise to the super-Poincaré algebra, with the associated symmetry known as supersymmetry. The Haag–Łopuszański–Sohnius theorem is the generalization of the Coleman–Mandula theorem to Lie superalgebras, with it stating that supersymmetry is the only new spacetime dependent symmetry that is allowed. For a theory with massless particles, the theorem is again evaded by conformal symmetry which can be present in addition to supersymmetry giving a superconformal algebra.
Low dimensions
In a one or two dimensional theory the only possible scattering is forwards and backwards scattering so analyticity of the scattering angles is no longer possible and the theorem no longer holds. Spacetime dependent internal symmetries are then possible, such as in the massive Thirring model which can admit an infinite tower of conserved charges of ever higher tensorial rank.
Quantum groups
Models with nonlocal symmetries whose charges do not act on multiparticle states as if they were a tensor product of one-particle states, evade the theorem. Such an evasion is found more generally for quantum group symmetries which avoid the theorem because the corresponding algebra is no longer a Lie algebra.
Other limitations
For other spacetime symmetries besides the Poincaré group, such as theories with a de Sitter background or non-relativistic field theories with Galilean invariance, the theorem no longer applies. It also does not hold for discrete symmetries, since these are not Lie groups, or for spontaneously broken symmetries since these do not act on the S-matrix level and thus do not commute with the S-matrix.
See also
Extended supersymmetry
Supergroup
Supersymmetry algebra
Notes
Further reading
Coleman–Mandula theorem on Scholarpedia
Sascha Leonhardt on the Coleman–Mandula theorem
Quantum field theory
Supersymmetry
Theorems in quantum mechanics
No-go theorems | Coleman–Mandula theorem | [
"Physics",
"Mathematics"
] | 1,429 | [
"Theorems in quantum mechanics",
"Quantum field theory",
"No-go theorems",
"Equations of physics",
"Unsolved problems in physics",
"Quantum mechanics",
"Theorems in mathematical physics",
"Physics beyond the Standard Model",
"Supersymmetry",
"Symmetry",
"Physics theorems"
] |
1,372,638 | https://en.wikipedia.org/wiki/LSZ%20reduction%20formula | In quantum field theory, the Lehmann–Symanzik–Zimmermann (LSZ) reduction formula is a method to calculate S-matrix elements (the scattering amplitudes) from the time-ordered correlation functions of a quantum field theory. It is a step of the path that starts from the Lagrangian of some quantum field theory and leads to prediction of measurable quantities. It is named after the three German physicists Harry Lehmann, Kurt Symanzik and Wolfhart Zimmermann.
Although the LSZ reduction formula cannot handle bound states, massless particles and topological solitons, it can be generalized to cover bound states, by use of composite fields which are often nonlocal. Furthermore, the method, or variants thereof, have turned out to be also fruitful in other fields of theoretical physics. For example, in statistical physics they can be used to get a particularly general formulation of the fluctuation-dissipation theorem.
In and out fields
S-matrix elements are amplitudes of transitions between in states and out states. An in state describes the state of a system of particles which, in a far away past, before interacting, were moving freely with definite momenta and, conversely, an out state describes the state of a system of particles which, long after interaction, will be moving freely with definite momenta
In and out states are states in Heisenberg picture so they should not be thought to describe particles at a definite time, but rather to describe the system of particles in its entire evolution, so that the S-matrix element:
is the probability amplitude for a set of particles which were prepared with definite momenta to interact and be measured later as a new set of particles with momenta
The easy way to build in and out states is to seek appropriate field operators that provide the right creation and annihilation operators. These fields are called respectively in and out fields:
Just to fix ideas, suppose we deal with a Klein–Gordon field that interacts in some way which doesn't concern us:
may contain a self interaction or interaction with other fields, like a Yukawa interaction . From this Lagrangian, using Euler–Lagrange equations, the equation of motion follows:
where, if does not contain derivative couplings:
We may expect the in field to resemble the asymptotic behaviour of the free field as , making the assumption that in the far away past interaction described by the current is negligible, as particles are far from each other. This hypothesis is named the adiabatic hypothesis. However self interaction never fades away and, besides many other effects, it causes a difference between the Lagrangian mass and the physical mass of the boson. This fact must be taken into account by rewriting the equation of motion as follows:
This equation can be solved formally using the retarded Green's function of the Klein–Gordon operator :
allowing us to split interaction from asymptotic behaviour. The solution is:
The factor is a normalization factor that will come handy later, the field is a solution of the homogeneous equation associated with the equation of motion:
and hence is a free field which describes an incoming unperturbed wave, while the last term of the solution gives the perturbation of the wave due to interaction.
The field is indeed the in field we were seeking, as it describes the asymptotic behaviour of the interacting field as , though this statement will be made more precise later. It is a free scalar field so it can be expanded in plane waves:
where:
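The plane-wave expansion and the accompanying definitions did not survive in this text; as a sketch in one common convention (the relativistically invariant measure and the form of ω_k below are assumptions about the intended normalization, which differs between authors):

\varphi_{\mathrm{in}}(x)=\int\frac{\mathrm{d}^{3}k}{(2\pi)^{3}\,2\omega_{k}}\left[a_{\mathrm{in}}(\mathbf{k})\,e^{-ik\cdot x}+a_{\mathrm{in}}^{\dagger}(\mathbf{k})\,e^{ik\cdot x}\right],\qquad \omega_{k}=\sqrt{\mathbf{k}^{2}+m^{2}}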
The inverse function for the coefficients in terms of the field can be easily obtained and put in the elegant form:
where:
The Fourier coefficients satisfy the algebra of creation and annihilation operators:
and they can be used to build in states in the usual way:
The relation between the interacting field and the in field is not very simple to use, and the presence of the retarded Green's function tempts us to write something like:
implicitly making the assumption that all interactions become negligible when particles are far away from each other. Yet the current contains also self interactions like those producing the mass shift from to . These interactions do not fade away as particles drift apart, so much care must be used in establishing asymptotic relations between the interacting field and the in field.
The correct prescription, as developed by Lehmann, Symanzik and Zimmermann, requires two normalizable states and , and a normalizable solution of the Klein–Gordon equation . With these pieces one can state a correct and useful but very weak asymptotic relation:
The second member is indeed independent of time as can be shown by differentiating and remembering that both and satisfy the Klein–Gordon equation.
With appropriate changes the same steps can be followed to construct an out field that builds out states. In particular the definition of the out field is:
where is the advanced Green's function of the Klein–Gordon operator. The weak asymptotic relation between out field and interacting field is:
The reduction formula for scalars
The asymptotic relations are all that is needed to obtain the LSZ reduction formula. For future convenience we start with the matrix element:
which is slightly more general than an S-matrix element. Indeed, is the expectation value of the time-ordered product of a number of fields between an out state and an in state. The out state can contain anything from the vacuum to an undefined number of particles, whose momenta are summarized by the index . The in state contains at least a particle of momentum , and possibly many others, whose momenta are summarized by the index . If there are no fields in the time-ordered product, then is obviously an S-matrix element. The particle with momentum can be 'extracted' from the in state by use of a creation operator:
where the prime on denotes that one particle has been taken out. With the assumption that no particle with momentum is present in the out state, that is, we are ignoring forward scattering, we can write:
because acting on the left gives zero. Expressing the construction operators in terms of in and out fields, we have:
Now we can use the asymptotic condition to write:
Then we notice that the field can be brought inside the time-ordered product, since it appears on the right when and on the left when :
In the following, dependence in the time-ordered product is what matters, so we set:
It can be shown by explicitly carrying out the time integration that:
so that, by explicit time derivation, we have:
By its definition we see that is a solution of the Klein–Gordon equation, which can be written as:
Substituting into the expression for and integrating by parts, we arrive at:
That is:
Starting from this result, and following the same path another particle can be extracted from the in state, leading to the insertion of another field in the time-ordered product. A very similar routine can extract particles from the out state, and the two can be iterated to get vacuum both on right and on left of the time-ordered product, leading to the general formula:
Which is the LSZ reduction formula for Klein–Gordon scalars. It gains a much better looking aspect if it is written using the Fourier transform of the correlation function:
Using the inverse transform to substitute in the LSZ reduction formula, with some effort, the following result can be obtained:
Leaving aside normalization factors, this formula asserts that S-matrix elements are the residues of the poles that arise in the Fourier transform of the correlation functions as four-momenta are put on-shell.
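A hedged sketch of that momentum-space statement (the placement of the normalization factor and the sign conventions for incoming versus outgoing momenta vary between authors): near the mass shell the Fourier-transformed correlation function behaves as

\tilde{G}(p_{1},\dots,p_{n};q_{1},\dots,q_{m})\;\sim\;\left(\prod_{i=1}^{n}\frac{i\sqrt{Z}}{p_{i}^{2}-m^{2}+i\varepsilon}\right)\left(\prod_{j=1}^{m}\frac{i\sqrt{Z}}{q_{j}^{2}-m^{2}+i\varepsilon}\right)\langle p_{1}\cdots p_{n}\ \mathrm{out}\,|\,q_{1}\cdots q_{m}\ \mathrm{in}\rangle,

so the S-matrix element is read off from the residue after amputating the external-leg poles.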
Reduction formula for fermions
Recall that solutions to the quantized free-field Dirac equation may be written as
where the metric signature is mostly plus, is an annihilation operator for b-type particles of momentum and spin , is a creation operator for d-type particles of spin , and the spinors and satisfy and . The Lorentz-invariant measure is written as , with . Consider now a scattering event consisting of an in state of non-interacting particles approaching an interaction region at the origin, where scattering occurs, followed by an out state of outgoing non-interacting particles. The probability amplitude for this process is given by
where no extra time-ordered product of field operators has been inserted, for simplicity. The situation considered will be the scattering of b-type particles to b-type particles. Suppose that the in state consists of particles with momenta and spins , while the out state contains particles of momenta and spins . The in and out states are then given by
Extracting an in particle from yields a free-field creation operator acting on the state with one less particle. Assuming that no outgoing particle has that same momentum, we then can write
where the prime on denotes that one particle has been taken out. Now recall that in the free theory, the b-type particle operators can be written in terms of the field using the inverse relation
where . Denoting the asymptotic free fields by and , we find
The weak asymptotic condition needed for a Dirac field, analogous to that for scalar fields, reads
and likewise for the out field. The scattering amplitude is then
where now the interacting field appears in the inner product. Rewriting the limits in terms of the integral of a time derivative, we have
where the row vector of matrix elements of the barred Dirac field is written as . Now, recall that is a solution to the Dirac equation:
Solving for , substituting it into the first term in the integral, and performing an integration by parts, yields
Switching to Dirac index notation (with sums over repeated indices) allows for a neater expression, in which the quantity in square brackets is to be regarded as a differential operator:
Consider next the matrix element appearing in the integral. Extracting an out state creation operator and subtracting the corresponding in state operator, with the assumption that no incoming particle has the same momentum, we have
Remembering that , where , we can replace the annihilation operators with in fields using the adjoint of the inverse relation. Applying the asymptotic relation, we find
Note that a time-ordering symbol has appeared, since the first term requires on the left, while the second term requires it on the right. Following the same steps as before, this expression reduces to
The rest of the in and out states can then be extracted and reduced in the same way, ultimately resulting in
The same procedure can be done for the scattering of d-type particles, for which 's are replaced by 's, and 's and 's are swapped.
Field strength normalization
The reason of the normalization factor in the definition of in and out fields can be understood by taking that relation between the vacuum and a single particle state with four-moment on-shell:
Remembering that both and are scalar fields with their Lorentz transform according to:
where is the four-momentum operator, we can write:
Applying the Klein–Gordon operator on both sides, remembering that the four-moment is on-shell and that is the Green's function of the operator, we obtain:
So we arrive to the relation:
which accounts for the need of the factor . The in field is a free field, so it can only connect one-particle states with the vacuum. That is, its expectation value between the vacuum and a many-particle state is null. On the other hand, the interacting field can also connect many-particle states to the vacuum, thanks to interaction, so the expectation values on the two sides of the last equation are different, and need a normalization factor in between. The right hand side can be computed explicitly, by expanding the in field in creation and annihilation operators:
Using the commutation relation between and we obtain:
leading to the relation:
by which the value of may be computed, provided that one knows how to compute .
Notes
References
Quantum field theory | LSZ reduction formula | [
"Physics"
] | 2,438 | [
"Quantum field theory",
"Quantum mechanics"
] |
1,372,820 | https://en.wikipedia.org/wiki/C2H2 | C2H2 may mean:
Molecular formulae
The molecular formula C2H2 (molar mass: 26.04 g/mol, exact mass: 26.01565 u) may refer to:
Acetylene (or ethyne)
Methylidenecarbene
Vinylidene group
Transcription factors
C2H2 zinc finger, short for Cys2His2, a class of transcription factors containing a small protein structural motif stabilised by zinc ions
"Chemistry"
] | 105 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
34,363,332 | https://en.wikipedia.org/wiki/Diffusion%20layer | In electrochemistry, the diffusion layer, according to IUPAC, is defined as the "region in the vicinity of an electrode where the concentrations are different from their value in the bulk solution. The definition of the thickness of the diffusion layer is arbitrary because the concentration approaches asymptotically the value in the bulk solution". The diffusion layer thus depends on the diffusion coefficient (D) of the analyte and, for voltammetric measurements, on the scan rate ν (V/s). It is usually considered to be some multiple of √(Dt), where t is the time scale of the experiment set by the scan rate ν.
The value is physically relevant since the concentration of the solute varies according to the expression derived from Fick's laws:
c(x,t) = c* · erf(x / (2√(Dt)))
where erf is the error function and c* is the bulk concentration. When x = √(Dt), the concentration is approximately 52% of the bulk concentration, since erf(1/2) ≈ 0.52.
At slow scan rates, the diffusion layer is large, on the order of micrometers, whereas at fast scan rates the diffusion layer is nanometers in thickness. The relationship is described in part by the Cottrell equation.
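A rough order-of-magnitude check of these statements in Python (assumptions: a typical diffusion coefficient of 1 × 10⁻⁵ cm²/s, the "some multiple" prefactor taken as 1, and the characteristic time taken as RT/Fν, which is one common but not unique choice):

# Diffusion-layer thickness estimate: delta ~ sqrt(D * t) with t ~ R*T/(F*v).
import math

R, T, F = 8.314, 298.15, 96485.0   # gas constant, temperature (K), Faraday constant
D = 1e-9                           # diffusion coefficient in m^2/s (= 1e-5 cm^2/s)

def diffusion_layer_thickness(scan_rate_volts_per_s):
    t = R * T / (F * scan_rate_volts_per_s)   # characteristic experiment time, s
    return math.sqrt(D * t)                   # thickness in metres

for v in (0.1, 1e6):   # slow scan vs. very fast scan, V/s
    print(v, diffusion_layer_thickness(v))
# ~1.6e-5 m (micrometres) at 0.1 V/s; ~5e-9 m (nanometres) at 1e6 V/s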
Relevant to cyclic voltammetry, the diffusion layer has negligible volume compared the volume of the bulk solution. For this reason, cyclic voltammetry experiments have an inexhaustible supply of fresh analyte.
References
Diffusion | Diffusion layer | [
"Physics",
"Chemistry"
] | 255 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion"
] |
34,365,252 | https://en.wikipedia.org/wiki/Estimated%20ultimate%20recovery | Estimated ultimate recovery or Expected ultimate recovery (EUR) of a resource is the sum of the proven reserves at a specific time and the cumulative production up to that point.
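Expressed as code, the definition is a single sum (the function and field names are illustrative, and both inputs must share the same unit):

def estimated_ultimate_recovery(proven_reserves, cumulative_production):
    """EUR = proven reserves at a given date + cumulative production up to that date."""
    return proven_reserves + cumulative_production

print(estimated_ultimate_recovery(proven_reserves=120e6, cumulative_production=80e6))  # 200e6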
References
Petroleum production
Petroleum economics | Estimated ultimate recovery | [
"Chemistry"
] | 41 | [
"Petroleum",
"Petroleum stubs"
] |
27,528,308 | https://en.wikipedia.org/wiki/Kharkiv%20Institute%20of%20Physics%20and%20Technology | The National Science Center Kharkiv Institute of Physics and Technology (KIPT) (), formerly the Ukrainian Physics and Technology Institute (UPTI) is the oldest and largest physical science research centre in Ukraine. Today it is known as a science center as it consists of several institutes that are part of the Kharkiv Institute of Physics and Technology science complex.
History
The institute was founded on 30 October 1928, by the Government of Soviet Ukraine on an initiative of Abram Ioffe on the northern outskirts of Kharkiv (in khutir Piatykhatky) as the Ukrainian Institute of Physics and Technology for the purpose of research on nuclear physics and condensed matter physics.
From the moment of its creation, the institute was run by the People's Commissariat of Heavy Industry.
On 10 October 1932 the first experiment in the Soviet Union on splitting the atomic nucleus was conducted here: the Soviet nuclear physicists Anton Valter, Georgiy Latyshev, Cyril Sinelnikov, and Aleksandr Leipunskii split the nucleus of a lithium atom. Later the Ukrainian Institute of Physics and Technology was able to produce liquid hydrogen and helium. It also constructed the first three-coordinate radar station, and the institute became a pioneer of Soviet high-vacuum engineering, which was developed into industrial vacuum metallurgy.
During Stalin's Great Terror in 1938, the institute suffered the so-called UPTI Affair: three leading physicists of the Kharkiv Institute (Lev Landau, Yuri Rumer and Moisey Korets) were arrested by the Soviet secret police.
The Ukrainian Institute of Physics and Technology was the "Laboratory no. 1" for nuclear physics, and was responsible for the first conceptual development of a nuclear bomb in the USSR.
The institute was shelled during the 2022 Russian invasion of Ukraine, resulting in heavy damage to its Neutron Source nuclear facility. It is guarded by the 4th State Objects Protection Regiment.
Directors
1929 — 1933: Ivan Obreimov
1933 — 1934: Aleksandr Leipunskii
1934 — 1936: Semyon Davidovich
1936 — 1938: Aleksandr Leipunskii
1938 — 1941: Aleksandr Shpetny
1944 — 1965: Cyril Sinelnikov
1965 — 1980: Victor Ivanov
1980 — 1996: Viktor Zelensky
1996 — 2004: Vladimir Lapshin
2004 — 2017: Ivan Neklyudov
2017 — 2024: Nikolay Shulga
Important institutes
Science and education institutions in Pyatykhatky.
Kharkiv Institute of Physics and Technology
The Lev Shubnikov Low Temperature Laboratory at the Ukrainian Institute of Physics and Technology was founded in 1931. Lev Shubnikov headed the cryogenic laboratory at the Ukrainian Physics and Technology Institute from 1931 to 1937. In 1935, Rjabinin and Shubnikov experimentally discovered type-II superconductors at the institute's cryogenic laboratory.
Institute of condensed matter physics, materials studies and technology
Plasma physics institute, Institute of high energy and nuclear physics
Institute of plasma electronics and new methods of acceleration
Akhiezer Institute of theoretical physics
Other institutes
Kharkiv University faculty of physics and technology, located nearby.
Notable alumni
Aleksander Akhiezer
Naum Akhiezer
Semion Braude
Dmitri Ivanenko
Fritz Houtermans
Arnold Kosevich
Eduard Kuraev
Igor Kurchatov
Lev Landau
Oleg Lavrentiev
Aleksandr Leipunskii
Ilya Lifshitz
Evgeny Lifshitz
Boris Podolsky
Isaak Pomeranchuk
Antonina Prikhot'ko
Lev Shubnikov
Cyril Sinelnikov
László Tisza
See also
List of science centers
References
External links
National Science Center, Kharkiv Institute of Physics and Technology
Kharkiv Institute of Physics and Technology . National Academy of Sciences of Ukraine
Research institutes established in 1928
Nuclear research institutes
Research institutes in Kharkiv
Institutes of the National Academy of Sciences of Ukraine
Science museums in Ukraine
Kyivskyi District (Kharkiv)
Research institutes in the Soviet Union
People's Commissariat of Heavy Industry | Kharkiv Institute of Physics and Technology | [
"Engineering"
] | 820 | [
"Nuclear research institutes",
"Nuclear organizations"
] |
27,530,273 | https://en.wikipedia.org/wiki/Retro-Diels%E2%80%93Alder%20reaction | The retro-Diels–Alder reaction (rDA reaction) is the reverse of the Diels–Alder (DA) reaction, a [4+2] cycloelimination. It involves the formation of a diene and dienophile from a cyclohexene. It can be accomplished spontaneously with heat, or with acid or base mediation.
In principle, it becomes thermodynamically favorable for the Diels–Alder reactions to proceed in the reverse direction if the temperature is high enough. In practice, this reaction generally requires some special structural features in order to proceed at temperatures of synthetic relevance. For instance, the cleavage of cyclohexene to give butadiene and ethene has been observed, but only at temperatures exceeding 800 K. With an appropriate driving force, however, the Diels–Alder reaction proceeds in reverse under relatively mild conditions, providing diene and dienophile from starting cyclohexene derivatives. As early as 1929, this process was known and applied to the detection of cyclohexadienes, which released ethylene and aromatic compounds after reacting with acetylenes through a Diels–Alder/retro-Diels–Alder sequence. Since then, a variety of substrates have been subject to the rDA, yielding many different dienes and dienophiles. Additionally, conducting the rDA in the presence of a scavenging diene or dienophile has led to the capture of many transient reactive species.
Mechanism and stereochemistry
Prevailing mechanism
The retro-Diels–Alder reaction proper is the microscopic reverse of the Diels–Alder reaction: a concerted (but not necessarily synchronous), pericyclic, single-step process. Evidence for the retro-Diels–Alder reaction was provided by the observation of endo-exo isomerization of Diels–Alder adducts. It was postulated that at high temperatures, isomerization of kinetic endo adducts to more thermodynamically stable exo products occurred via an rDA/DA sequence. However, such isomerization may take place via a completely intramolecular, [3,3]-sigmatropic (Cope) process. Evidence for the latter was provided by the reaction below—none of the "head-to-head" isomer was obtained, suggesting a fully intramolecular isomerization process.
Stereochemistry
Like the Diels–Alder reaction, the rDA preserves configuration in the diene and dienophile. Much less is known about the relative rates of reversion of endo and exo adducts, and studies have pointed to no correlation between relative configuration in the cyclohexene starting material and reversion rate.
Scope and limitations
A few rDA reactions occur spontaneously at room temperature because of the high reactivity or volatility of the emitted dienophile. Most, however, require additional thermal or chemical activation. The relative tendencies of a variety of dienes and dienophiles to form via rDA are described below:
Diene: furan, pyrrole > benzene > naphthalene > fulvene > cyclopentadiene > anthracene > butadiene
Dienophile: N2 > CO2 > naphthalene > benzene, nitriles > methacrylate > maleimides > cyclopentadiene, imines, alkenes > alkynes
All-carbon dienophiles
Because the Diels–Alder reaction exchanges two π bonds for two σ bonds, it is intrinsically thermodynamically favored in the forward direction. However, a variety of strategies for overcoming this inherent thermodynamic bias are known. Complexation of Lewis acids to basic functionality in the starting material may induce the retro-Diels–Alder reaction, even in cases when the forward reaction is intramolecular.
Base mediation can be used to induce rDA in cases when the separated products are less basic than the starting material. This strategy has been used, for instance, to generate aromatic cyclopentadienyl anions from adducts of cyclopentadiene. Strategically placed electron-withdrawing groups in the starting material can render this process essentially irreversible.
If isolation or reaction of an elusive diene or dienophile is the goal, one of two strategies may be used. Flash vacuum pyrolysis of Diels–Alder adducts synthesized by independent means can provide extremely reactive, short-lived dienophiles (which can then be captured by a unique diene). Alternatively, the rDA reaction may be carried out in the presence of a scavenger. The scavenger reacts with either the diene or (more typically) the dienophile to drive the equilibrium of the retro-DA process toward products. Highly reactive cyanoacrylates may be isolated from Diels–Alder adducts (synthesized independently) with the use of a scavenger.
Heteroatomic dienophiles
Nitriles may be released in rDA reactions of DA adducts of pyrimidines or pyrazines. The resulting highly substituted pyridines can be difficult to access by other means.
Release of isocyanates from Diels–Alder adducts of pyridones can be used to generate highly substituted aromatic compounds. The isocyanates may be isolated or trapped if they are the desired product.
Release of nitrogen from six-membered cyclic diazenes is common and often spontaneous at room temperature. Such a reaction is utilized in click chemistry, where strained alkenes or alkynes react with a 1,2,4,5-tetrazine in a Diels–Alder and then retro-Diels–Alder reaction with the loss of nitrogen. In another example, the epoxide shown undergoes rDA at 0 °C. The isomer with a cis relationship between the diazene and epoxide reacts only after heating to >180 °C.
The concerted release of oxygen via rDA results in the formation of singlet oxygen. Very high yields of singlet oxygen result from rDA reactions of some cyclic peroxides—in this example, a greater than 90% yield of singlet oxygen was obtained.
Carbon dioxide is a common dienophile released during rDA reactions. Diels–Alder adducts of alkynes and 2-pyrones can undergo rDA to release carbon dioxide and generate aromatic compounds.
Experimental conditions and procedure
Typical conditions
Internal energy is the only factor controlling the extent of rDA reactions, and temperature is usually the only variable cited for these reactions. Thus, there are no conditions which can be regarded as "typical." For rDA reactions that afford a volatile product, removal of this product may facilitate the reaction, although most of these reactions (nitrogen- and oxygen-releasing rDA, for instance) are irreversible without any extra inducement.
References
Cycloadditions
Tandem mass spectrometry
Name reactions | Retro-Diels–Alder reaction | [
"Physics",
"Chemistry"
] | 1,499 | [
"Name reactions",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Tandem mass spectrometry"
] |
27,534,842 | https://en.wikipedia.org/wiki/Advances%20in%20Space%20Research | Advances in Space Research is a peer-reviewed scientific journal that is published 24 times per year by Elsevier. It was established in 1981 and is the official journal of the Committee on Space Research (COSPAR). The editor-in-chief is Pascal Willis.
Topics of interest for this journal are all interactions observed in space research, including space studies of the Earth's surface, meteorology, and climate. Acceptable articles in the context of space research are from the perspective of astrophysics, materials science, the life sciences, and fundamental physics. Also included in this context is the study of planetary meteorologies, and planetary climates. Other research encompasses Earth-based astronomy observations, the study of space debris, and space weather.
Abstracting and indexing
The journal is abstracted and indexed in the following databases:
Chemical Abstracts
Current Contents/Physics
Current Contents/Chemistry & Earth Science
Geographical Abstracts
Geological Abstracts
Inspec
Index to Scientific & Technical Proceedings
Meteorological & Geoastrophysical Abstracts
Science Citation Index
Scopus
According to the Journal Citation Reports, Advance in Space Research has a 2020 impact factor of 2.152.
References
External links
Space science journals
English-language journals
Elsevier academic journals
Semi-monthly journals
Academic journals established in 1981
Academic journals associated with international learned and professional societies
Aerospace engineering journals
Astronomy journals
Earth and atmospheric sciences journals | Advances in Space Research | [
"Astronomy",
"Engineering"
] | 268 | [
"Astronomy journals",
"Aerospace engineering journals",
"Works about astronomy",
"Aerospace engineering"
] |
24,254,714 | https://en.wikipedia.org/wiki/Optomechanics | Optomechanics is the manufacture and maintenance of optical parts and devices. This includes the design and manufacture of hardware used to hold and align elements in optical systems, such as:
Optical tables, breadboards, and rails
Mirror mounts
Optical mounts
Translation stages
Rotary stage
Optical fiber aligners
Pedestals and posts
Micrometers, screws and screw sets
Optomechanics also covers the methods used to design and package compact and rugged optical trains, and the manufacture and maintenance of fiber optic materials
References
Optical devices | Optomechanics | [
"Materials_science",
"Engineering"
] | 102 | [
"Glass engineering and science",
"Optical devices"
] |
24,255,023 | https://en.wikipedia.org/wiki/Ambroxide | Ambroxide, widely known by the brand name Ambroxan, is a naturally occurring terpenoid and one of the key constituents responsible for the odor of ambergris. It is an autoxidation product of ambrein. Ambroxide is used in perfumery for creating ambergris notes and as a fixative. Small amounts (< 0.01 ppm) are used as a flavoring in food.
Synthesis
Ambroxide is synthesized from sclareol, a component of the essential oil of clary sage. Sclareol is oxidatively degraded to a lactone, which is hydrogenated to the corresponding diol. The resulting compound is dehydrated to form ambroxide.
References
Perfume ingredients
Flavors
Terpenes and terpenoids
Tetrahydrofurans
Decalins
Heterocyclic compounds with 3 rings | Ambroxide | [
"Chemistry"
] | 179 | [
"Organic compounds",
"Biomolecules by chemical classification",
"Terpenes and terpenoids",
"Natural products"
] |
24,255,209 | https://en.wikipedia.org/wiki/Khimera | Khimera is a software product from Kintech Lab intended for calculation of the kinetic parameters of microscopic processes, thermodynamic and transport properties of substances and their mixtures in gases, plasmas and also of heterogeneous processes.
The development of a kinetic mechanism is a key stage of present-day technologies for the creation of hi-tech devices and processes in a wide range of fields, such as microelectronics, chemical industry, and the design and optimization of combustion engines and power stations.
Khimera, together with Chemical WorkBench, another software product from Kintech Lab, allows both the development of complex physical and chemical mechanisms and their validation. An essential feature of Khimera is its user-friendly interface for importing and utilizing the results of quantum-chemical calculations to estimate rate constants of elementary processes and thermodynamic and transport properties.
Fields of application
Khimera incorporates up to date achievements in the development of the wide range of models of elementary physicochemical processes; these models are of particular importance for hi-tech applications in:
microelectronics
materials science
chemical industry
automobile and aviation industry
power engineering.
Basic capabilities
The computation modules of Khimera allow one to calculate the kinetic parameters of elementary processes and thermodynamic and transport properties from the data on the molecular structures and properties obtained from quantum-chemical calculations or from an experiment. The molecular properties and the parameters of molecular interactions can be calculated using quantum-chemical software (Gaussian, GAMESS, Jaguar, ADF) and directly imported into Khimera in an automatic mode. The results of calculations can be presented visually and exported for the further use in kinetic modeling and CFD packages.
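As a generic illustration of the kind of kinetic parameter such tools report (this is a stand-alone sketch, not Khimera's interface or output; the pre-exponential factor, temperature exponent and activation energy are placeholder values):

# Modified Arrhenius rate constant k(T) = A * T**n * exp(-Ea / (R*T)).
import math

R = 8.314  # J/(mol K)

def rate_constant(T, A=1.0e13, n=0.0, Ea=150e3):
    """Return k(T); the units of k follow those chosen for A."""
    return A * T**n * math.exp(-Ea / (R * T))

for T in (800.0, 1200.0, 1600.0):  # temperatures in kelvin
    print(T, rate_constant(T))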
References
1. J Comput Chem 23: 1375–1389, 2002
2. https://web.archive.org/web/20160611153527/http://www.softscout.com/software/Science-and-Laboratory/Laboratory-Information-Management-LIMS/Khimera.html
Chemical engineering
Chemical kinetics
Combustion
Computational chemistry software
Molecular modelling software | Khimera | [
"Chemistry",
"Engineering"
] | 435 | [
"Chemical reaction engineering",
"Molecular modelling software",
"Computational chemistry software",
"Chemistry software",
"Chemical engineering",
"Molecular modelling",
"Combustion",
"Computational chemistry",
"nan",
"Chemical kinetics"
] |
24,257,028 | https://en.wikipedia.org/wiki/NDepend | NDepend is a static analysis tool for C# and .NET code to manage code quality and security. The tool proposes a large number of features, from CI/CD Web Reporting to Quality Gate and Dependencies Visualization. For that reason, the community refers to it as the "Swiss Army Knife" for .NET Developers.
Features
The main features of NDepend are:
Interactive Web Reports about all aspects of .NET code quality and security (sample reports are available online). Reports can be built on any platform: Windows, Linux and macOS
Roslyn Analyzers Issues Import (https://www.ndepend.com/docs/reporting-roslyn-analyzers-issues)
Quality Gates
CI/CD Integration with Azure DevOps, GitHub Action, Bamboo, Jenkins, TeamCity, AppVeyor
Dependency Visualization (using dependency graphs, and dependency matrix)
Smart Technical Debt Estimation
Declarative code rule over C# LINQ query (CQLinq).
Software Metrics (NDepend currently supports more than 100 code metrics: Cyclomatic complexity; Afferent and Efferent Coupling; Relational Cohesion; Google page rank of .NET types; Percentage of code covered by tests, etc.)
Code Coverage data import from Visual Studio coverage, dotCover, OpenCover, NCover, NCrunch.
All results are compared against a baseline allowing the user to focus on newly identified issues.
Integration with Visual Studio 2022, 2019, 2017, 2015, 2013, 2012, 2010, or can run as a standalone through VisualNDepend.exe, side by side with JetBrains Rider or Visual Studio Code.
Code rules through LINQ queries (CQLinq)
Live code queries and code rules through LINQ queries is the backbone of NDepend, all features use it extensively. Here are some sample code queries:
Base class should not use derivatives:
// <Name>Base class should not use derivatives</Name>
warnif count > 0
from baseClass in JustMyCodeTypes
where baseClass.IsClass && baseClass.NbChildren > 0 // <-- for optimization!
let derivedClassesUsed = baseClass.DerivedTypes.UsedBy(baseClass)
where derivedClassesUsed.Count() > 0
select new { baseClass, derivedClassesUsed }
Avoid making complex methods even more complex (source code cyclomatic complexity):
// <Name>Avoid making complex methods even more complex (source code cyclomatic complexity)</Name>
warnif count > 0
from m in JustMyCodeMethods where
!m.IsAbstract &&
m.IsPresentInBothBuilds() &&
m.CodeWasChanged()
let oldCC = m.OlderVersion().CyclomaticComplexity
where oldCC > 6 && m.CyclomaticComplexity > oldCC
select new { m,
oldCC,
newCC = m.CyclomaticComplexity,
oldLoc = m.OlderVersion().NbLinesOfCode,
newLoc = m.NbLinesOfCode,
}
Additionally, the tool provides a live CQLinq query editor with code completion and embedded documentation.
See also
Design Structure Matrix
List of tools for static code analysis
Software visualization
Sourcetrail
External links
NDepend reviewed by the .NET community
Exiting The Zone Of Pain: Static Analysis with NDepend.aspx (Program Manager, Microsoft) discusses NDepend
Stack Overflow discussion: use of NDepend
Abhishek Sur, on NDepend
NDepend code metrics by Andre Loker
Static analysis with NDepend by Henry Cordes
Hendry Luk discusses Continuous software quality with NDepend
Jim Holmes (Author of the book "Windows Developer Power Tools"), on NDepend.
Mário Romano discusses Metrics and Dependency Matrix with NDepend
Nates Stuff review
Scott Mitchell (MSDN Magazine), Code Exploration using NDepend
Travis Illig on NDepend
Books that mention NDepend
Girish Suryanarayana, Ganesh Samarthyam, and Tushar Sharma. Refactoring for Software Design Smells: Managing Technical Debt (2014)
Marcin Kawalerowicz and Craig Berntson. Continuous Integration in .NET (2010)
James Avery and Jim Holmes. Windows developer power tools (2006)
Patrick Cauldwell and Scott Hanselman. Code Leader: Using People, Tools, and Processes to Build Successful Software (2008)
Yogesh Shetty and Samir Jayaswal. Practical .NET for financial markets (2006)
Paul Duvall. Continuous Integration (2007)
Rick Leinecker and Vanessa L. Williams. Visual Studio 2008 All-In-One Desk Reference For Dummies (2008)
Patrick Smacchia. Practical .Net 2 and C# 2: Harness the Platform, the Language, the Framework (2006)
Static program analysis tools
.NET programming tools
Software metrics | NDepend | [
"Mathematics",
"Engineering"
] | 1,064 | [
"Software engineering",
"Quantity",
"Metrics",
"Software metrics"
] |
24,261,417 | https://en.wikipedia.org/wiki/Ferroelectric%20polymer | Ferroelectric polymers
are a group of crystalline polar polymers that are also ferroelectric, meaning that they maintain a permanent electric polarization that can be reversed, or switched, in an external electric field.
Ferroelectric polymers, such as polyvinylidene fluoride (PVDF), are used in acoustic transducers and electromechanical actuators because of their inherent piezoelectric response, and as heat sensors because of their inherent pyroelectric response.
Background
First reported in 1971, ferroelectric polymers are polymer chains that must exhibit ferroelectric behavior, hence piezoelectric and pyroelectric behavior.
A ferroelectric polymer must contain permanent electrical polarization that can be reversed repeatedly, by an opposing electric field. In the polymer, dipoles can be randomly oriented, but application of an electric field will align the dipoles, leading to ferroelectric behavior. In order for this effect to happen, the material must be below its Curie Temperature. Above the Curie Temperature, the polymer exhibits paraelectric behavior, which does not allow for ferroelectric behavior because the electric fields do not align.
Ferroelectric behavior gives rise to piezoelectric behavior, in which the polymer generates an electric field when stress is applied, or changes shape upon application of an electric field. This is observed as shrinking or conformational changes of the polymer in an electric field, or, conversely, as an electric field generated when the polymer is stretched or compressed. Pyroelectric behavior stems from a change in temperature causing an electrical response in the material. While only ferroelectric behavior is required for a ferroelectric polymer, current ferroelectric polymers also exhibit pyroelectric and piezoelectric behavior.
In order to have an electric polarization that can be reversed, ferroelectric polymers are often crystalline, much like other ferroelectric materials. Ferroelectric properties are derived from electrets, which are defined as a dielectric body that polarizes when an electric field and heat are applied. Ferroelectric polymers differ in that the entire body undergoes polarization and heat is not required; nevertheless, they are often referred to as electrets. Ferroelectric polymers fall into a category of ferroelectric materials known as 'order-disorder' materials. Such a material undergoes a change from randomly oriented dipoles, which are paraelectric, to ordered dipoles, which become ferroelectric.
After the discovery of PVDF, many other polymers with ferroelectric, piezoelectric, and pyroelectric properties have been sought. Initially, different blends and copolymers of PVDF were discovered, such as polyvinylidene fluoride blended with poly(methyl methacrylate).
Other structures were discovered to possess ferroelectric properties, such as polytrifluoroethylene
and odd-numbered nylon.
History
The concept of ferroelectricity was first discovered in 1921. This phenomenon began to play a much larger role in electronic applications during the 1950s after the increased use of BaTiO3. This ferroelectric material is part of the corner-sharing oxygen octahedral structure, but ferroelectrics can also be grouped into three other categories. These categories include organic polymers, ceramic polymer composites, and compounds containing hydrogen-bonded radicals. It wasn't until 1969 that Kawai first observed the piezoelectric effect in the polymer polyvinylidene fluoride. Two years later, the ferroelectric properties of the same polymer were reported. Throughout the 1970s and 1980s, these polymers were applied to data storage and retrieval. Subsequently, there has been tremendous growth during the past decade in exploring the materials science, physics, and technology of poly(vinylidene fluoride) and other fluorinated polymers. Copolymers of PVDF with trifluoroethylene and odd-numbered nylons were additional polymers that were discovered to be ferroelectric. This propelled the development of a number of applications based on piezoelectricity and pyroelectricity.
Polyvinylidene fluoride
Synthesis
Polyvinylidene fluoride is produced by the radical polymerization of vinylidene fluoride.
Study of Structure
To minimize the potential energy of the chains arising from internal steric and electrostatic interactions, rotation about single bonds occurs along the PVDF chain. There are two most favorable torsional bond arrangements: trans (t) and gauche± (g±). In the case of "t", the substituents are at 180° to each other; in the case of "g±", the substituents are at ±60° to each other. PVDF molecules contain two hydrogen and two fluorine atoms per repeat unit, so they have a choice of multiple conformations. However, because rotational barriers are relatively high, the chains can be stabilized into favorable conformations other than that of lowest energy. The three known conformations of PVDF are all-trans, tg+tg−, and tttg+tttg−. The first two conformations are the most common ones. In the tg+tg− conformation, the inclination of the dipoles to the chain axis gives polar components both perpendicular (4.0 × 10−30 C·m per repeat) and parallel (3.4 × 10−30 C·m per repeat) to the chain. In the all-trans structure, all of the dipoles are aligned in the same direction, normal to the chain axis, so the all-trans form is expected to be the most highly polar conformation of PVDF (7.0 × 10−30 C·m per repeat). These polar conformations are the crucial factors that lead to the ferroelectric properties.
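To connect the repeat-unit dipole moments quoted above to a macroscopic quantity, the following rough estimate (added for illustration) divides the all-trans dipole moment per repeat by the volume per repeat unit; the density of about 1.78 g/cm³ and the repeat-unit molar mass of 64 g/mol are assumed values, and perfect alignment of all repeat dipoles is assumed, so the result is only an upper-bound estimate of the spontaneous polarization:
\[
V_{\text{repeat}} \approx \frac{M}{\rho N_A} = \frac{64\ \text{g/mol}}{(1.78\ \text{g/cm}^3)(6.022 \times 10^{23}\ \text{mol}^{-1})} \approx 6.0 \times 10^{-23}\ \text{cm}^3 = 6.0 \times 10^{-29}\ \text{m}^3,
\]
\[
P \approx \frac{\mu_{\text{repeat}}}{V_{\text{repeat}}} \approx \frac{7.0 \times 10^{-30}\ \text{C·m}}{6.0 \times 10^{-29}\ \text{m}^3} \approx 0.12\ \text{C/m}^2,
\]
i.e. on the order of 100 mC/m², consistent with the all-trans phase being the most polar conformation.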
Current research
Ferroelectric polymers and other materials have been incorporated into many applications, but cutting-edge research is still being carried out. For example, research is being conducted on novel ferroelectric polymer composites with high dielectric constants. Ferroelectric polymers, such as polyvinylidene fluoride and poly(vinylidene fluoride-co-trifluoroethylene), are very attractive for many applications because they exhibit good piezoelectric and pyroelectric responses and a low acoustic impedance, which matches that of water and human skin. More importantly, they can be tailored to meet various requirements. A common approach for enhancing the dielectric constant is to disperse a high-dielectric-constant ceramic powder into the polymers. Popular ceramic powders are lead-based complexes, which can be disadvantageous because lead is potentially harmful, and at high particulate loading the polymers lose their flexibility and a low-quality composite is obtained. Current advances use a blending procedure to make composites based on the simple combination of PVDF and cheap metal powders; specifically, Ni powders have been used to make up the composites. The dielectric constants were enhanced from values that were less than 10 to approximately 400. This large enhancement is explained by percolation theory.
These ferroelectric materials have also been used as sensors. More specifically, these types of polymers have been used for high pressure and shock compression sensors. It has been discovered that ferroelectric polymers exhibit piezoluminescence upon the application of stress. Piezoluminescence has been looked for in materials that are piezoelectric.
It is useful to distinguish among the several regimes in a typical stress–strain curve for a solid material. The three regimes of the stress–strain curve are elastic, plastic, and fracture. Light emitted in the elastic regime is known as piezoluminescence.
These types of polymers have played a role in biomedical and robotic applications and in liquid crystalline polymers. In 1974, R.B. Meyer predicted ferroelectricity in chiral smectic liquid crystals from pure symmetry considerations. Shortly after, Clark and Lagerwall worked on the fast electro-optic effect in the surface-stabilized ferroelectric liquid crystal (SSFLC) structure. This opened up the promising possibility of technical applications of ferroelectric liquid crystals in high-information display devices. Applied research showed that the SSFLC structure has faster switching times and bistable behavior in comparison with commonly used nematic liquid crystal displays. In the same period, the first side-chain liquid crystalline polymers (SCLCPs) were synthesized. These comb-like polymers have mesogenic side chains that are covalently bonded (via flexible spacer units) to the polymer backbone. The most important feature of SCLCPs is their glassy state: these polymers have a "frozen" ordered state along one axis when cooled below their glass transition temperature. This is advantageous for research in the area of nonlinear optics and optical data storage devices. The disadvantage is that SCLCPs suffer from slow switching times due to their high rotational viscosity.
Applications
Nonvolatile memory
The ferroelectric property exhibits a polarization–electric-field hysteresis loop, which is related to "memory". One application is integrating ferroelectric polymer Langmuir–Blodgett (LB) films with semiconductor technology to produce nonvolatile ferroelectric random-access memory and data-storage devices. Recent research with LB films and more conventional solvent-formed films shows that the VDF copolymers (consisting of 70% vinylidene fluoride (VDF) and 30% trifluoroethylene (TrFE)) are promising materials for nonvolatile memory applications. The device is built in the form of a metal–ferroelectric–insulator–semiconductor (MFIS) capacitance memory. The results demonstrated that LB films can provide devices with low-voltage operation.
Thin Film Electronics successfully demonstrated roll-to-roll printed non-volatile memories based on ferroelectric polymers in 2009.
Transducers
The ferroelectric effect relates mechanical force to electrical properties, which can be exploited in transducers. The flexibility and low cost of polymers facilitate the application of ferroelectric polymers in transducers. The device configuration is simple: it usually consists of a piece of ferroelectric film with an electrode on the top and bottom surfaces. Contacts to the two electrodes complete the design.
Sensors
When the device functions as a sensor, a mechanical or acoustic force applied to one of the surfaces causes a compression of the material. Via the direct piezoelectric effect, a voltage is generated between the electrodes.
Actuators
In actuators, a voltage applied between the electrodes causes a strain on the film through the inverse piezoelectric effect.
Soft transducers in the form of ferroelectric polymer foams have been proved to have great potential.
See also
Polyvinylidene fluoride
Ferroelectricity
Piezoelectricity
Pyroelectricity
References
Ferroelectric materials
Polymers | Ferroelectric polymer | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,362 | [
"Physical phenomena",
"Ferroelectric materials",
"Materials",
"Electrical phenomena",
"Polymer chemistry",
"Polymers",
"Hysteresis",
"Matter"
] |
24,264,072 | https://en.wikipedia.org/wiki/Carpuject | The carpuject is a syringe device for the administration of injectable fluid medication. It was patented by the Sterling Drug Company (which later became Sterling Winthrop) after World War II. It is designed with a luer-lock device to accept a sterile hypodermic needle or to be linked directly to an intravenous tubing line. The product can deliver an intravenous or intramuscular injection by means of a holder that attaches to the barrel and a plunger that attaches to the barrel plug. Medication is prefilled into the syringe barrel. When the plug at the end of the barrel is advanced to the head of the barrel, it discharges and releases the contents through the needle or into the lumen of the tubing.
The carpuject competed with the tubex injection system developed by Wyeth. It has been redesigned several times to comply with sterility and infection control standards.
In 1974, Sterling opened a manufacturing plant in McPherson, Kansas. In 1988 Kodak purchased Winthrop Labs, and in 1994 it sold the injectable drug division and all intellectual property rights to Sanofi, a French pharmaceutical company, now Sanofi Aventis. In 1997 Sanofi sold the injectable carpuject line of business to Abbott Laboratories of Abbott Park, IL for US$200 million. Abbott added generic injectable drugs to the injectable line. In about 2004, Abbott separated its hospital supply line from its drug division into a separate hospital supply company, Hospira, which took over all of Abbott's hospital products. In 2015, Hospira, including the carpuject device, was purchased by Pfizer.
References
Medical equipment | Carpuject | [
"Biology"
] | 349 | [
"Medical equipment",
"Medical technology"
] |
24,270,179 | https://en.wikipedia.org/wiki/FlowMon | Flowmon is the name of a monitoring probe that is the result of academic research activity at CESNET, and also the name of a commercial product marketed by the university spin-off company Flowmon Networks.
Flowmon probe - result of research activities
The Flowmon probe is an appliance for monitoring and reporting information about IP flows in high-speed computer networks. The probe is being developed by the Liberouter team within the scope of the CESNET research plan Optical National Research Network and its New Applications, research activity 602 - Programmable hardware.
The Flowmon probe is built upon a pair of programmable network cards, called COMBO, and a host computer running the Linux operating system. The pair of COMBO cards consists of a main card with a PCI, PCI-X or PCI-Express connector for connection to the motherboard of the host computer, and an add-on card with 2 or 4 network interfaces. Both cards contain programmable chips (FPGAs) which are able to process a high volume of data at multi-gigabit speeds. The flow monitoring process itself is split between the hardware (acceleration cards) and the application software running on the host computer. Following the principle of hardware/software codesign, all time-critical tasks are implemented in the FPGA chips on the acceleration cards, while more complex operations are carried out by the application software. This concept enables monitoring of modern high-speed (1 Gbps, 10 Gbps) networks with no packet loss and no need for input sampling. At the same time, a flexible and user-friendly interface is provided by the software.
The Flowmon probe is a passive monitoring device, i.e. it does not alter passing traffic in any way, which makes it very difficult to detect. When connected to a network, the Flowmon probe observes all passing traffic, extracts information about IP flows, and aggregates it into flow records. The probe is able to export the aggregated data to external collectors in NetFlow (version 5 and 9) and IPFIX formats. Collectors receive the incoming flow records and store them for automated or manual and visual analysis (automated malicious traffic detection, filter rules, graphs and statistical schemas). The whole system allows monitoring of the current state of the monitored network as well as long-term traffic analysis.
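As an illustration of the flow aggregation just described, the short Python sketch below groups packets into flow records keyed on the usual 5-tuple; it is not FlowMon's actual implementation (which splits the work between FPGA firmware and host software), and all names in it are illustrative.
from collections import namedtuple
from dataclasses import dataclass

# A packet observed on the wire, reduced to the fields needed for flow accounting.
Packet = namedtuple("Packet", "src_ip dst_ip src_port dst_port proto length timestamp")

@dataclass
class FlowRecord:
    packets: int = 0
    octets: int = 0
    first_seen: float = 0.0
    last_seen: float = 0.0

def aggregate(packets):
    """Group packets into flow records keyed on the classic 5-tuple."""
    flows = {}
    for p in packets:
        key = (p.src_ip, p.dst_ip, p.src_port, p.dst_port, p.proto)
        rec = flows.get(key)
        if rec is None:
            rec = flows[key] = FlowRecord(first_seen=p.timestamp)
        rec.packets += 1
        rec.octets += p.length
        rec.last_seen = p.timestamp
    return flows  # a real exporter would emit these records to a collector as NetFlow v5/v9 or IPFIX

# Example: two packets of the same TCP connection collapse into one flow record.
pkts = [Packet("10.0.0.1", "10.0.0.2", 12345, 80, 6, 1500, 1.0),
        Packet("10.0.0.1", "10.0.0.2", 12345, 80, 6, 400, 1.2)]
print(aggregate(pkts))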
Flowmon probe is part of GÉANT2 Security Toolset, which consists of the NetFlow analysis tools NfSen and NfDump and the Flowmon appliance.
See also
Network traffic measurement
IP Flow Information Export
NetFlow
Network analyzers
Networking hardware | FlowMon | [
"Engineering"
] | 516 | [
"Computer networks engineering",
"Networking hardware"
] |
537,422 | https://en.wikipedia.org/wiki/National%20Electrical%20Code | The National Electrical Code (NEC), or NFPA 70, is a regionally adoptable standard for the safe installation of electrical wiring and equipment in the United States. It is part of the National Fire Code series published by the National Fire Protection Association (NFPA), a private trade association. Despite the use of the term "national," it is not a federal law. It is typically adopted by states and municipalities in an effort to standardize their enforcement of safe electrical practices. In some cases, the NEC is amended, altered and may even be rejected in lieu of regional regulations as voted on by local governing bodies.
The "authority having jurisdiction" inspects for compliance with the standards.
The NEC should not be confused with the National Electrical Safety Code (NESC), published by the Institute of Electrical and Electronics Engineers (IEEE). The NESC is used for electric power and communication utility systems including overhead lines, underground lines, and power substations.
Background
The NEC is developed by NFPA's Committee on the National Electrical Code, which consists of twenty code-making panels and a technical correlating committee. Work on the NEC is sponsored by the National Fire Protection Association. The NEC is approved as an American national standard by the American National Standards Institute (ANSI). It is formally identified as ANSI/NFPA 70.
First published in 1897, the NEC is updated and published every three years, with the 2023 edition being the most current. Most states adopt the most recent edition within a few years of its publication. As with any "uniform" code, jurisdictions may regularly omit or modify some sections, or add their own requirements (sometimes based upon earlier versions of the NEC, or locally accepted practices). However, no court has faulted anyone for using the latest version of the NEC, even when the local code was not updated.
In the United States, anyone, including the city issuing building permits, may face a civil liability lawsuit for negligently creating a situation that results in loss of life or property. Those who fail to adhere to well known best practices for safety have been held negligent. This liability and the desire to protect residents has motivated cities to adopt and enforce building codes that specify standards and practices for electrical systems (as well as other departments such as water and fuel-gas systems). That creates a system whereby a city can best avoid lawsuits by adopting a single standard set of building code laws. This has led to the NEC becoming the de facto standard set of electrical requirements. A licensed electrician will have spent years of apprenticeship studying and practicing the NEC requirements prior to obtaining their license.
The Deactivation and Decommissioning (D&D) customized extension of the electrical code standard defined by the National Electrical Code was developed because current engineering standards and code requirements do not adequately address the unique situations arising during D&D activities at U.S. Department of Energy (DOE) facilities. Additional guidance is needed to clarify the current electrical code for these situations. The guidance document explains how to interpret selected articles of NFPA 70, "National Electrical Code" (NEC), in particular certain articles within Article 590, "Temporary Power," for D&D electrical activities at DOE sites.
The NEC also contains information about the official definition of HAZLOC and the related standards given by the Occupational Safety and Health Administration and dealing with hazardous locations such as explosive atmospheres.
Public access
The NEC is available as a bound book containing approximately 1000 pages. It has been available in electronic form since the 1993 edition. Although the code is updated every three years, some jurisdictions do not immediately adopt the new edition.
The NEC is also available as a restricted, digitized coding model that can be read online free of charge on certain computing platforms that support the restricted viewer software; however this digital version cannot be saved, copied, or printed.
In the United States, statutory law cannot be copyrighted and is freely accessible and copyable by anyone. When a standards organization develops a new coding model and it is not yet accepted by any jurisdiction as law, it is still the private property of the standards organization and the reader may be restricted from downloading or printing the text for offline viewing. For that privilege, the coding model must still be purchased as either printed media or electronic format (e.g. PDF.) Once the coding model has been accepted as law, it loses copyright protection and may be freely obtained at no cost.
Structure
The NEC is composed of an introduction, nine chapters, annexes A through J, and the index. The introduction sets forth the purpose, scope, enforcement, and rules or information that are general in nature. The first four chapters cover definitions and rules for installations (voltages, connections, markings, etc.), circuits and circuit protection, methods and materials for wiring (wiring devices, conductors, cables, etc.), and general-purpose equipment (cords, receptacles, switches, heaters, etc.). The next three chapters deal with special occupancies (high risk to multiple persons), special equipment (signs, machinery, etc.) and special conditions (emergency systems, alarms, etc.). Chapter 8 is specific to additional requirements for communications systems (telephone, radio/TV, etc.) and chapter 9 is composed of tables regarding conductor, cable and conduit properties, among other things. Annexes A-J relate to referenced standards, calculations, examples, additional tables for proper implementation of various code articles (for example, how many wires fit in a conduit) and a model adoption ordinance.
The introduction and the first 8 chapters contain numbered parts, articles, sections (or lists or tables), item, specifics, inclusions/exclusions, precise inclusion/exclusion, italicized exceptions, and explanatory material – explanations that are not part of the rules. Articles are coded with numerals and letters, as ###.###(A)(#)(a). For example, 805.133(A)(1)(a)(1), would be read as "article 805, section 133, item (A) Separation from Other Conductors, specific (1) In Raceways, cable Trays, Boxes,... inclusion (a) Other Circuits, precise inclusion (1) Class 2 and Class 3...." and would be found in Chapter 8, Part IV Installation Methods Within Buildings. For internal references, some lengthy articles are further broken into "parts" with Roman-numerals (parts I, II, III, etc.).
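As an informal illustration of this numbering scheme, the Python sketch below splits a reference such as 805.133(A)(1)(a)(1) into its article, section, and nested item designators; the function name and the returned structure are illustrative and not part of the NEC itself.
import re

def parse_nec_reference(ref):
    """Split an NEC-style reference like '805.133(A)(1)(a)(1)' into its parts."""
    m = re.fullmatch(r"(\d+)\.(\d+)((?:\([A-Za-z0-9]+\))*)", ref)
    if not m:
        raise ValueError(f"not a recognized NEC-style reference: {ref!r}")
    article, section, tail = m.groups()
    designators = re.findall(r"\(([A-Za-z0-9]+)\)", tail)
    # The chapter is the leading digit of the article number (article 805 is in chapter 8).
    return {"chapter": article[0], "article": article, "section": section, "designators": designators}

print(parse_nec_reference("805.133(A)(1)(a)(1)"))
# {'chapter': '8', 'article': '805', 'section': '133', 'designators': ['A', '1', 'a', '1']}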
Each code article is numbered based on the chapter it is in. Those wiring methods acceptable by the NEC are found in chapter 3, thus all approved wiring method code articles are in the 300s. Efforts have been underway for some time to make the code easier to use. Some of those efforts include using the same extension for both code articles and for the support of wiring methods.
The NFPA also publishes a 1,497-page NEC Handbook (for each new NEC edition) that contains the entire code, plus additional illustrations and explanations, and helpful cross-references within the code and to earlier versions of the code. The explanations are only for reference and are not enforceable.
Many NEC requirements refer to "listed" or "labeled" devices and appliances, and this means that the item has been designed, manufactured, tested or inspected, and marked in accordance with requirements of the listing agency. To be listed, the device must meet testing and other requirements set by a listing agency such as Underwriters Laboratories (UL), SGS North America, Intertek (formerly ETL), Canadian Standards Association (CSA), or FM Approvals (FM). These are examples of "nationally recognized testing laboratories" (NRTL) approved by the U.S. Department of Labor's Occupational Safety and Health Administration (OSHA) under the requirements of 29 CFR 1910.7. Only a listed device can carry the listing brand (or "mark") of the listing agency. Upon payment of an investigation fee to determine suitability, an investigation is started. To be labeled as fit for a particular purpose (for example "wet locations", "domestic range") a device must be tested for that specific use by the listing agency and then the appropriate label applied to the device. A fee is paid to the listing agency for each item so labeled, that is, for each label. Most NRTLs will also require that the manufacturer's facilities and processes be inspected as evidence that a product will be manufactured reliably and with the same qualities as the sample or samples submitted for evaluation. An NRTL may also conduct periodic sample testing of off-the-shelf products to confirm that safety design criteria are being upheld during production. Because of the reputation of these listing agencies, the "authority having jurisdiction" (or "AHJ", as they are commonly known) usually will quickly accept any device, appliance, or piece of equipment having such a label, provided that an end user or installer uses the product in accordance with manufacturer's instructions and the limitations of the listing standard. However, an AHJ, under the National Electrical Code provisions, has the authority to deny approval for even listed and labeled products. Likewise, an AHJ may make a written approval of an installation or product that does not meet either NEC or listing requirements, although this is normally done only after an appropriate review of the specific conditions of a particular case or location.
Requirements
Article 210 addresses "branch circuits" (as opposed to service or feeder circuits) and receptacles and fixtures on branch circuits (Electrical Construction and Maintenance Magazine, Branch Circuits, Part 2). There are requirements for the minimum number of branches, and placement of receptacles, according to the location and purpose of the receptacle outlet. Ten important items in Article 210 have been summarized in a codebook.
Feeder and branch circuit wiring systems are designed primarily for copper conductors. Aluminum wiring is listed by Underwriters Laboratories for interior wiring applications and became increasingly used around 1966 due to its lower cost. Prior to 1972, however, the aluminum wire used was manufactured to conform to the 1350 series aluminum alloy, but this alloy was eventually deemed unsuitable for branch circuits due to galvanic corrosion where the copper and aluminum touched, resulting in poor contact and resistance to current flow, connector overheating problems, and potential fire risk. Today, a new aluminum wire (AA-8000) has been approved for branch circuits that does not cause corrosion where it contacts copper, but it is not readily available and is not manufactured below size #8 AWG. Hence, copper wire is used almost exclusively in branch circuitry.
A ground fault circuit interrupter (GFCI) is required for all receptacles in wet locations defined in the Code. The NEC also has rules about how many circuits and receptacles should be placed in a given residential dwelling, and how far apart they can be in a given type of room, based upon the typical cord length of small appliances.
As of 1962, the NEC required that new 120 Volt household receptacle outlets, for general purpose use, be both grounded and polarized. NEMA connectors implement these requirements.
The NEC also permits grounding-type receptacles in non-grounded wiring protected by a GFCI; this only applies when old non-grounded receptacles are replaced with grounded receptacles, and the new receptacles must be marked with 'No equipment ground' and 'GFCI Protected'.
The 1999 Code required that new 120/240 volt receptacles, such as those for electric ranges and dryers, be grounded also, which necessitates a fourth slot in their faces. Changes in standards often create problems for new work in old buildings.
Unlike circuit breakers and fuses, which only open the circuit when the current exceeds a fixed value for a fixed time, a GFCI device will interrupt electrical service when more than 4 to 6 milliamperes of current in either conductor leaks to ground. A GFCI detects an imbalance between the current in the "hot" side and the current in the "neutral" side. One GFCI receptacle can serve as protection for several downstream conventional receptacles. GFCI devices come in many configurations including circuit-breakers, portable devices and receptacles.
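The imbalance test described above can be sketched in a few lines of Python; the 5 mA threshold sits inside the 4 to 6 mA window mentioned in the paragraph, but the exact value, the function name, and the direct current readings are illustrative (real devices sense the imbalance with a differential current transformer and integrate it over time).
def gfci_should_trip(hot_current_a, neutral_current_a, threshold_a=0.005):
    """Trip when the hot and neutral currents differ by more than ~5 mA,
    i.e. when that much current is leaking to ground instead of returning
    on the neutral conductor."""
    leakage = abs(hot_current_a - neutral_current_a)
    return leakage > threshold_a

print(gfci_should_trip(10.000, 10.000))  # False: balanced circuit, no leakage
print(gfci_should_trip(10.006, 10.000))  # True: about 6 mA leaking to ground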
Another safety device introduced with the 1999 code is the arc-fault circuit interrupter (AFCI). This device detects arcs from hot to neutral that can develop when insulation between wires becomes frayed or damaged. While arcs from hot to neutral would not trip a GFCI device since current is still balanced, circuitry in an AFCI device detects those arcs and will shut down a circuit. AFCI devices generally replace the circuit breaker in the circuit. As of the 1999 National Electrical Code, AFCI protection is required in new construction on all 15- and 20-amp, 125-volt circuits to bedrooms.
Conduit and cable protection
The NEC requires that conductors of a circuit must be inside a raceway, cable, trench, cord, or cable tray. Additional protection such as NM cable inside raceway is needed if the installation method is subjected to physical damage as determined by the authority having jurisdiction.
Temperature rating
The temperature rating of a wire or cable is generally the maximum safe ambient temperature that the wire can carry full-load power without the cable insulation melting, oxidizing, or self-igniting. A full-load wire does heat up slightly due to the metallic resistance of the wire, but this wire heating is factored into the cable's temperature rating. (NEC 310.10)
The NEC specifies acceptable numbers of conductors in crowded areas such as inside conduit, referred to as the fill rating. If the accepted fill rating is exceeded, then all the cables in the conduit are derated, lowering their acceptable maximum ambient operating temperature. Derating is necessary because multiple conductors carrying full-load power generate heat that may exceed the normal insulation temperature rating. (NEC 310.16)
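In practice the fill derating is usually expressed as an adjustment factor applied to a conductor's base ampacity; the sketch below shows only the shape of that calculation, and the factors and conductor-count brackets in it are placeholders for illustration that must be replaced by the values in the current NEC ampacity-adjustment tables.
# Placeholder adjustment factors keyed on the number of current-carrying conductors
# in a raceway; the real brackets and factors must come from the NEC tables.
ILLUSTRATIVE_ADJUSTMENT = [
    (3, 1.00),  # up to 3 conductors: no adjustment (illustrative)
    (6, 0.80),  # 4-6 conductors (illustrative)
    (9, 0.70),  # 7-9 conductors (illustrative)
]

def adjusted_ampacity(base_ampacity, conductor_count):
    """Apply a fill-based adjustment factor to a conductor's base ampacity."""
    for max_count, factor in ILLUSTRATIVE_ADJUSTMENT:
        if conductor_count <= max_count:
            return base_ampacity * factor
    return base_ampacity * 0.50  # more crowded raceways are derated further (illustrative)

print(adjusted_ampacity(30, 8))  # a 30 A conductor drops to 21 A with 8 conductors in the conduit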
The NEC also specifies adjustments of the ampacity for wires in circular raceways exposed to sunlight on rooftops, due to the heating effects of solar radiation (Electrical Construction and Maintenance Magazine, Conductors for General Use; Chapter 3 articles in the NEC, starting with Article 342). This section is expected to be modified to include cables in future editions.
In certain situations, temperature rating can be higher than normal, such as for knob-and-tube wiring where two or more load-carrying wires are never likely to be in close proximity. A knob-and-tube installation uses wires suspended in air. This gives them a greater heat dissipation rating than standard three-wire NM-2 cable, which includes two tightly bundled load and return wires.
Copyright status
NEC, like many NFPA standards, relies on sales of its copyrighted standards to fund its development. In 2016, the group PUBLIC.RESOURCE.ORG, INC published copies of the code online free of cost, arguing that as a standard adopted as law, it should be publicly available. The case challenges the nature of funding sources for development of the standards, which are often adopted as law, but created without taxpayer dollars. NFPA in response has pointed to its making a free version of its standards available online, albeit in a less convenient forum than the standard that is available for purchase.
See also
Electrical code
IEEE C2
Slash rating
Central Electricity Authority Regulations
References
External links
Free, restricted access to the NEC online, 1968 through 2023 versions (Must register with the NFPA to access these documents)
Electrical safety
Electrical wiring
Safety codes
NFPA Standards | National Electrical Code | [
"Physics",
"Engineering"
] | 3,237 | [
"Electrical systems",
"Building engineering",
"Physical systems",
"Electrical engineering",
"Electrical wiring"
] |
537,599 | https://en.wikipedia.org/wiki/AMP-activated%20protein%20kinase | 5' AMP-activated protein kinase or AMPK or 5' adenosine monophosphate-activated protein kinase is an enzyme (EC 2.7.11.31) that plays a role in cellular energy homeostasis, largely to activate glucose and fatty acid uptake and oxidation when cellular energy is low. It belongs to a highly conserved eukaryotic protein family and its orthologues are SNF1 in yeast, and SnRK1 in plants. It consists of three proteins (subunits) that together make a functional enzyme, conserved from yeast to humans. It is expressed in a number of tissues, including the liver, brain, and skeletal muscle. In response to binding AMP and ADP, the net effect of AMPK activation is stimulation of hepatic fatty acid oxidation, ketogenesis, stimulation of skeletal muscle fatty acid oxidation and glucose uptake, inhibition of cholesterol synthesis, lipogenesis, and triglyceride synthesis, inhibition of adipocyte lipogenesis, inhibition of adipocyte lipolysis, and modulation of insulin secretion by pancreatic β-cells.
It should not be confused with cyclic AMP-activated protein kinase (protein kinase A).
Structure
AMPK is a heterotrimeric protein complex that is formed by α, β, and γ subunits. Each of these three subunits takes on a specific role in both the stability and activity of AMPK. Specifically, the γ subunit includes four particular Cystathionine-β-synthase (CBS) domains, giving AMPK its ability to sensitively detect shifts in the AMP/ATP ratio. AMPK is deactivated upon AMP displacement by ATP at CBS site 3, suggesting CBS3 to be the primary allosteric regulatory site. The four CBS domains create two binding sites for AMP commonly referred to as Bateman domains. Binding of one AMP to a Bateman domain cooperatively increases the binding affinity of the second AMP to the other Bateman domain. As AMP binds both Bateman domains the γ subunit undergoes a conformational change which exposes the catalytic domain found on the α subunit. It is in this catalytic domain where AMPK becomes activated when phosphorylation takes place at threonine-172 (on α1 isoform) or Thr-174 (on α2 isoform) by an upstream AMPK kinase (AMPKK). The α, β, and γ subunits can also be found in different isoforms: the γ subunit can exist as either the γ1, γ2 or γ3 isoform; the β subunit can exist as either the β1 or β2 isoform; and the α subunit can exist as either the α1 or α2 isoform. Although the most common isoforms expressed in most cells are the α1, β1, and γ1 isoforms, it has been demonstrated that the α2, β2, γ2, and γ3 isoforms are also expressed in cardiac and skeletal muscle.
The following human genes encode AMPK subunits:
α – PRKAA1, PRKAA2
β – PRKAB1, PRKAB2
γ – PRKAG1, PRKAG2, PRKAG3
The crystal structure of mammalian AMPK regulatory core domain (α C terminal, β C terminal, γ) has been solved in complex with AMP, ADP or ATP.
Regulation
Due to the presence of isoforms of its components, there are 12 versions of AMPK in mammals, each of which can have different tissue localizations, and different functions under different conditions. AMPK is regulated allosterically and by post-translational modification, which work together.
If residue Thr-172 of AMPK's α1-subunit (or Thr-174 of AMPK's α2-subunit) is phosphorylated, AMPK is activated around 100-fold; bound AMP or ADP blocks access to that residue by phosphatases, while ATP can displace AMP and ADP and restore that access. That residue is phosphorylated by at least three kinases (liver kinase B1 (LKB1), which works in a complex with STRAD and MO25; calcium/calmodulin-dependent protein kinase kinase II (CaMKK2); and TGFβ-activated kinase 1 (TAK1)) and is dephosphorylated by three phosphatases (protein phosphatase 2A (PP2A), protein phosphatase 2C (PP2C), and Mg2+-/Mn2+-dependent protein phosphatase 1E (PPM1E)).
Regulation of AMPK by CaMKK2 requires a direct interaction of these two proteins via their kinase domains. The interaction of CaMKK2 with AMPK only involves the α and β subunits of AMPK (AMPK γ is absent from the CaMKK2 complex), thus rendering regulation of AMPK in this context to changes in calcium levels but not AMP or ADP.
AMPK is regulated allosterically mostly by competitive binding to the CBS sites on its γ subunit between ATP (which allows phosphatase access to Thr-172) and AMP or ADP (each of which blocks access to phosphatases). It thus appears that AMPK is a sensor of AMP/ATP or ADP/ATP ratios and thus cell energy level. AMPK undergoes a large conformational change upon ATP binding. A region on the α subunit known as the kinase domain (KD) dissociates from its active-state conformation and loosely associates with the γ subunit ~100Å away. The KD also rotates ~180° in the conformational change. Upon KD dissociation, the active loop (AL) of the α subunit which contains the critical phosphorylated Thr residue is fully exposed to upstream phosphatases. This conformational change represents a plausible mechanism for AMPK modulation. When cellular energy states are low (high AMP/ATP or ADP/ATP levels), AMPK adopts the KD-associated conformation and AMPK is protected from dephosphorylation and remains activated. When cellular energy states are high, AMPK adopts the KD-displaced conformation, the AL is exposed to upstream phosphatases, and AMPK is deactivated.
The pharmacological compounds Merck Compound 991 and Abbott A769662 bind to the allosteric drug and metabolism site (ADaM) on the β subunit and have been shown to activate AMPK up to 10-fold. ADaM site binding may have roles in AMPK activation as well as protection against dephosphorylation.
There are other mechanisms by which AMPK is inhibited or activated by insulin, leptin, and diacylglycerol by inducing various other phosphorylations.
AMPK may be inhibited or activated by various tissue-specific ubiquitinations.
It is also regulated by several protein-protein interactions, and may either be activated or inhibited by oxidative factors; the role of oxidation in regulating AMPK was controversial as of 2016.
Function
When AMPK phosphorylates acetyl-CoA carboxylase 1 (ACC1) or sterol regulatory element-binding protein 1c (SREBP1c), it inhibits synthesis of fatty acids, cholesterol, and triglycerides, and activates fatty acid uptake and β-oxidation.
AMPK stimulates glucose uptake in skeletal muscle by phosphorylating Rab-GTPase-activating protein TBC1D1, which ultimately induces fusion of GLUT4 vesicles with the plasma membrane. AMPK stimulates glycolysis by activating phosphorylation of 6-phosphofructo-2-kinase/fructose-2,6-bisphosphatase 2/3 and activating phosphorylation of glycogen phosphorylase, and it inhibits glycogen synthesis through inhibitory phosphorylation of glycogen synthase. In the liver, AMPK inhibits gluconeogenesis by inhibiting transcription factors including hepatocyte nuclear factor 4 (HNF4) and CREB regulated transcription coactivator 2 (CRTC2).
AMPK inhibits the energy-intensive protein biosynthesis process and can also force a switch from cap-dependent translation to cap-independent translation, which requires less energy, by phosphorylation of TSC2, RPTOR, transcription initiation factor 1A, and eEF2K. When TSC2 is activated it inhibits mTORC1. As a result of inhibition of mTORC1 by AMPK, protein synthesis comes to a halt. Activation of AMPK signifies low energy within the cell, so all of the energy-consuming pathways like protein synthesis are inhibited, and pathways that generate energy are activated to restore appropriate energy levels in the cell.
AMPK activates autophagy by directly and indirectly activating ULK1. AMPK also appears to stimulate mitochondrial biogenesis by regulating PGC-1α which in turn promotes gene transcription in mitochondria. AMPK also activates anti-oxidant defenses.
Clinical significance
Exercise/training
Many biochemical adaptations of skeletal muscle that take place during a single bout of exercise or an extended duration of training, such as increased mitochondrial biogenesis and capacity, increased muscle glycogen, and an increase in enzymes which specialize in glucose uptake in cells such as GLUT4 and hexokinase II are thought to be mediated in part by AMPK when it is activated. Additionally, recent discoveries can conceivably suggest a direct AMPK role in increasing blood supply to exercised/trained muscle cells by stimulating and stabilizing both vasculogenesis and angiogenesis. Taken together, these adaptations most likely transpire as a result of both temporary and maintained increases in AMPK activity brought about by increases in the AMP:ATP ratio during single bouts of exercise and long-term training.
During a single acute exercise bout, AMPK allows the contracting muscle cells to adapt to the energy challenges by increasing expression of hexokinase II, translocation of GLUT4 to the plasma membrane, for glucose uptake, and by stimulating glycolysis. If bouts of exercise continue through a long-term training regimen, AMPK and other signals will facilitate contracting muscle adaptations by escorting muscle cell activity to a metabolic transition resulting in a fatty-acid oxidation approach to ATP generation as opposed to a glycolytic approach. AMPK accomplishes this transition to the oxidative mode of metabolism by upregulating and activating oxidative enzymes such as hexokinase II, PPAR-α, PPAR-δ, PGC-1, UCP-3, cytochrome C and TFAM.
Mutations in the skeletal muscle calcium release channel (RYR1) underlie a life-threatening response to heat in patients with malignant hyperthermia susceptibility (MHS). Upon acute exposure to heat, these mutations cause uncontrolled Ca2+ release from the sarcoplasmic reticulum, leading to sustained muscle contractures, severe hyperthermia, and sudden death. At basal conditions, the temperature-dependent Ca2+ leak also leads to increased energy demand and activation of the energy-sensing AMP kinase (AMPK) in skeletal muscle. The activated AMPK increases muscle metabolic activity, including glycolysis, which leads to marked elevation of circulating lactate.
AMPK activity increases with exercise and the LKB1/MO25/STRAD complex is considered to be the major upstream AMPKK of the 5'-AMP-activated protein kinase phosphorylating the α subunit of AMPK at Thr-172. This fact is puzzling considering that although AMPK protein abundance has been shown to increase in skeletal tissue with endurance training, its level of activity has been shown to decrease with endurance training in both trained and untrained tissue. Currently, the activity of AMPK immediately following a 2-hour bout of exercise in an endurance-trained rat is unclear. It is possible that a direct link exists between the observed decrease in AMPK activity in endurance-trained skeletal muscle and the apparent decrease in the AMPK response to exercise with endurance training.
Although AMPKα2 activation has been thought to be important for mitochondrial adaptations to exercise training, a recent study investigating the response to exercise training in AMPKα2 knockout mice opposes this idea. Their study compared the response to exercise training of several proteins and enzymes in wild type and AMPKα2 knockout mice. And even though the knockout mice had lower basal markers of mitochondrial density (COX-1, CS, and HAD), these markers increased similarly to the wild type mice after exercise training. These findings are supported by another study also showing no difference in mitochondrial adaptations to exercise training between wild type and knockout mice.
Maximum life span
The C. elegans homologue of AMPK, aak-2, has been shown by Michael Ristow and colleagues to be required for extension of life span in states of glucose restriction mediating a process named mitohormesis.
Lipid metabolism
One of the effects of exercise is an increase in fatty acid metabolism, which provides more energy for the cell. One of the key pathways in AMPK's regulation of fatty acid oxidation is the phosphorylation and inactivation of acetyl-CoA carboxylase. Acetyl-CoA carboxylase (ACC) converts acetyl-CoA to malonyl-CoA, an inhibitor of carnitine palmitoyltransferase 1 (CPT-1). CPT-1 transports fatty acids into the mitochondria for oxidation. Inactivation of ACC, therefore, results in increased fatty acid transport and subsequent oxidation. It is also thought that the decrease in malonyl-CoA occurs as a result of malonyl-CoA decarboxylase (MCD), which may be regulated by AMPK. MCD is an antagonist to ACC, decarboxylating malonyl-CoA to acetyl-CoA, resulting in decreased malonyl-CoA and increased CPT-1 and fatty acid oxidation.
AMPK also plays an important role in lipid metabolism in the liver. It has long been known that hepatic ACC has been regulated in the liver by phosphorylation. AMPK also phosphorylates and inactivates 3-hydroxy-3-methylglutaryl-CoA reductase (HMGCR), a key enzyme in cholesterol synthesis. HMGR converts 3-hydroxy-3-methylglutaryl-CoA, which is made from acetyl-CoA, into mevalonic acid, which then travels down several more metabolic steps to become cholesterol. AMPK, therefore, helps regulate fatty acid oxidation and cholesterol synthesis.
Glucose transport
Insulin is a hormone which helps regulate glucose levels in the body. When blood glucose is high, insulin is released from the Islets of Langerhans. Insulin, among other things, will then facilitate the uptake of glucose into cells via increased expression and translocation of glucose transporter GLUT-4. Under conditions of exercise, however, blood sugar levels are not necessarily high, and insulin is not necessarily activated, yet muscles are still able to bring in glucose. AMPK seems to be responsible in part for this exercise-induced glucose uptake. Goodyear et al. observed that with exercise, the concentration of GLUT-4 was increased in the plasma membrane, but decreased in the microsomal membranes, suggesting that exercise facilitates the translocation of vesicular GLUT-4 to the plasma membrane. While acute exercise increases GLUT-4 translocation, endurance training will increase the total amount of GLUT-4 protein available. It has been shown that both electrical contraction and AICA ribonucleotide (AICAR) treatment increase AMPK activation, glucose uptake, and GLUT-4 translocation in perfused rat hindlimb muscle, linking exercise-induced glucose uptake to AMPK. Chronic AICAR injections, simulating some of the effects of endurance training, also increase the total amount of GLUT-4 protein in the muscle cell.
Two proteins are essential for the regulation of GLUT-4 expression at a transcriptional level – myocyte enhancer factor 2 (MEF2) and GLUT4 enhancer factor (GEF). Mutations in the DNA binding regions for either of these proteins results in ablation of transgene GLUT-4 expression. These results prompted a study in 2005 which showed that AMPK directly phosphorylates GEF, but it doesn't seem to directly activate MEF2. AICAR treatment has been shown, however, to increase transport of both proteins into the nucleus, as well as increase the binding of both to the GLUT-4 promoter region.
There is another protein involved in carbohydrate metabolism that is worthy of mention along with GLUT-4. The enzyme hexokinase phosphorylates a six-carbon sugar, most notably glucose, which is the first step in glycolysis. When glucose is transported into the cell it is phosphorylated by hexokinase. This phosphorylation keeps glucose from leaving the cell, and by changing the structure of glucose through phosphorylation, it decreases the concentration of glucose molecules, maintaining a gradient for more glucose to be transported into the cell. Hexokinase II transcription is increased in both red and white skeletal muscle upon treatment with AICAR. With chronic injections of AICAR, total protein content of hexokinase II increases in rat skeletal muscle.
Mitochondria
Mitochondrial enzymes, such as cytochrome c, succinate dehydrogenase, malate dehydrogenase, α-ketoglutarate dehydrogenase, and citrate synthase, increase in expression and activity in response to exercise. AICAR stimulation of AMPK increases cytochrome c and δ-aminolevulinate synthase (ALAS), a rate-limiting enzyme involved in the production of heme. Malate dehydrogenase and succinate dehydrogenase also increase, as well as citrate synthase activity, in rats treated with AICAR injections. Conversely, in LKB1 knockout mice, there are decreases in cytochrome c and citrate synthase activity, even if the mice are "trained" by voluntary exercise.
AMPK is required for increased peroxisome proliferator-activated receptor γ coactivator-1α (PGC-1α) expression in skeletal muscle in response to creatine depletion. PGC-1α is a transcriptional regulator for genes involved in fatty acid oxidation, gluconeogenesis, and is considered the master regulator for mitochondrial biogenesis.
To do this, it enhances the activity of transcription factors like nuclear respiratory factor 1 (NRF-1), myocyte enhancer factor 2 (MEF2), host cell factor (HCF), and others. It also has a positive feedback loop, enhancing its own expression. Both MEF2 and cAMP response element (CRE) are essential for contraction-induced PGC-1α promoter activity. LKB1 knockout mice show a decrease in PGC-1α, as well as mitochondrial proteins.
Thyroid hormone
AMPK and thyroid hormone regulate some similar processes. Knowing these similarities, Winder and Hardie et al. designed an experiment to see if AMPK was influenced by thyroid hormone. They found that all of the subunits of AMPK were increased in skeletal muscle, especially in the soleus and red quadriceps, with thyroid hormone treatment. There was also an increase in phospho-ACC, a marker of AMPK activity.
Glucose sensing systems
Loss of AMPK has been reported to alter the sensitivity of glucose sensing cells, through poorly defined mechanisms. Loss of the AMPKα2 subunit in pancreatic β-cells and hypothalamic neurons decreases the sensitivity of these cells to changes in extracellular glucose concentration. Moreover, exposure of rats to recurrent bouts of insulin induced hypoglycemia/glucopenia, reduces the activation of AMPK within the hypothalamus, whilst also suppressing the counterregulatory response to hypoglycemia.
Pharmacological activation of AMPK by delivery of AMPK activating drug AICAR, directly into the hypothalamus can increase the counterregulatory response to hypoglycaemia.
Lysosomal damage, inflammatory diseases, and metformin
AMPK is recruited to lysosomes and regulated at the lysosomes via several systems of clinical significance. This includes the AXIN - LKB1 complex, acting in response to glucose limitations functioning independently of AMP sensing, which detects low glucose as absence of fructose-1,6-bisphosphate via a dynamic set of interactions between lysosomally localized V-ATPase-aldolase in contact with the endoplasmic reticulum localized TRPV. A second AMPK-control system localized to lysosomes depends on the Galectin-9-TAK1 system and ubiquitination responses at controlled by deubiquitinating enzymes such as USP9X leading to AMPK activation in response to lysosomal damage, a condition that can occur biochemically, physically via protein aggregates such as proteopathic tau in Alzheimer's disease, crystalline silica causing silicosis, cholesterol crystals causing inflammation via NLRP3 inflammasome and rupture of atherosclerotic lesions, urate crystals associated with gout, or during microbial invasion such as Mycobacterium tuberculosis or coronaviruses causing SARS. Both of the above lysosomally localized systems controlling AMPK activate it in response to metformin, a widely prescribed anti-diabetic drug.
Tumor suppression and promotion
Some evidence indicates that AMPK may have a role in tumor suppression. Studies have found that AMPK may exert most, or even all of, the tumor suppressing properties of liver kinase B1 (LKB1). Additionally, studies where the AMPK activator metformin was used to treat diabetes found a correlation with a reduced risk of cancer, compared to other medications. Gene knockout and knockdown studies with mice found that mice without the gene to express AMPK had greater risks of developing lymphomas, though as the gene was knocked out globally instead of just in B cells, it was impossible to conclude that AMPK knockout had cell-autonomous effects within tumor progenitor cells.
In contrast, some studies have linked AMPK with a role as a tumor promoter by protecting cancer cells from stress. Thus, once cancerous cells have formed in an organism, AMPK may swap from protecting against cancer to protecting the cancer itself. Studies have found that tumor cells with AMPK knockout are more susceptible to death by glucose starvation or extracellular matrix detachment, which may indicate AMPK has a role in preventing these two outcomes. A recent study on pancreatic cancer suggests that AMPKα may play a role in the metastatic cascade and the phenotype of cancer cells. Mechanistically, the authors propose that in the absence of AMPKα, pancreatic cancer cells are more vulnerable to oxidative stress, supporting a tumor-promoting function of AMPKα.
Controversy over role in adaption to exercise/training
A seemingly paradoxical role of AMPK emerges when we take a closer look at the energy-sensing enzyme in relation to exercise and long-term training. Similar to the short-term acute exercise scale, long-term endurance training studies also reveal increases in oxidative metabolic enzymes, GLUT-4, mitochondrial size and quantity, and an increased dependency on the oxidation of fatty acids; however, Winder et al. reported in 2002 that despite observing these increased oxidative biochemical adaptations to long-term endurance training (similar to those mentioned above), the AMPK response (activation of AMPK with the onset of exercise) to acute bouts of exercise decreased in red quadriceps (RQ) with training. Conversely, the study did not observe the same results in white quadriceps (WQ) and soleus (SOL) muscles as it did in RQ. The trained rats used for that endurance study ran on treadmills 5 days/wk in two 1-h sessions, morning and afternoon, at speeds of up to 31 m/min (15% grade). Finally, following training, the rats were sacrificed either at rest or following 10 minutes of exercise.
Because the AMPK response to exercise decreases with increased training duration, many questions arise that would challenge the AMPK role with respect to biochemical adaptations to exercise and endurance training. This is due in part to the marked increases in the mitochondrial biogenesis, upregulation of GLUT-4, UCP-3, Hexokinase II along with other metabolic and mitochondrial enzymes despite decreases in AMPK activity with training. Questions also arise because skeletal muscle cells which express these decreases in AMPK activity in response to endurance training also seem to be maintaining an oxidative dependent approach to metabolism, which is likewise thought to be regulated to some extent by AMPK activity.
If the AMPK response to exercise is responsible in part for biochemical adaptations to training, how then can these adaptations to training be maintained if the AMPK response to exercise is being attenuated with training? It is hypothesized that these adaptive roles to training are maintained by AMPK activity and that the increases in AMPK activity in response to exercise in trained skeletal muscle have not yet been observed due to biochemical adaptations that the training itself stimulated in the muscle tissue to reduce the metabolic need for AMPK activation. In other words, due to previous adaptations to training, AMPK will not be activated, and further adaptation will not occur, until the intracellular ATP levels become depleted from an even higher intensity energy challenge than prior to those previous adaptations.
See also
Salicylic acid
Aspirin
Salsalate
Notes
References
External links
Exercise biochemistry
Protein kinases
EC 2.7.11 | AMP-activated protein kinase | [
"Chemistry",
"Biology"
] | 5,557 | [
"Biochemistry",
"Exercise biochemistry"
] |
540,732 | https://en.wikipedia.org/wiki/Poincar%C3%A9%20duality | In mathematics, the Poincaré duality theorem, named after Henri Poincaré, is a basic result on the structure of the homology and cohomology groups of manifolds. It states that if M is an n-dimensional oriented closed manifold (compact and without boundary), then the kth cohomology group of M is isomorphic to the (n − k)th homology group of M, for all integers k: $H^k(M) \cong H_{n-k}(M)$.
Poincaré duality holds for any coefficient ring, so long as one has taken an orientation with respect to that coefficient ring; in particular, since every manifold has a unique orientation mod 2, Poincaré duality holds mod 2 without any assumption of orientation.
History
A form of Poincaré duality was first stated, without proof, by Henri Poincaré in 1893. It was stated in terms of Betti numbers: The kth and (n − k)th Betti numbers of a closed (i.e., compact and without boundary) orientable n-manifold are equal. The cohomology concept was at that time about 40 years from being clarified. In his 1895 paper Analysis Situs, Poincaré tried to prove the theorem using topological intersection theory, which he had invented. Criticism of his work by Poul Heegaard led him to realize that his proof was seriously flawed. In the first two complements to Analysis Situs, Poincaré gave a new proof in terms of dual triangulations.
Poincaré duality did not take on its modern form until the advent of cohomology in the 1930s, when Eduard Čech and Hassler Whitney invented the cup and cap products and formulated Poincaré duality in these new terms.
Modern formulation
The modern statement of the Poincaré duality theorem is in terms of homology and cohomology: if M is a closed oriented n-manifold, then there is a canonically defined isomorphism $H^k(M, \mathbb{Z}) \to H_{n-k}(M, \mathbb{Z})$ for any integer k. To define such an isomorphism, one chooses a fixed fundamental class [M] of M, which will exist if M is oriented. Then the isomorphism is defined by mapping an element $\alpha \in H^k(M, \mathbb{Z})$ to the cap product $[M] \frown \alpha$.
Homology and cohomology groups are defined to be zero for negative degrees, so Poincaré duality in particular implies that the homology and cohomology groups of orientable closed n-manifolds are zero for degrees bigger than n.
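As a concrete illustration of the closed-manifold statement (added here, not part of the original text), consider a closed orientable surface $\Sigma_g$ of genus $g$, for which $n = 2$:
\[
H_0(\Sigma_g) \cong \mathbb{Z}, \qquad H_1(\Sigma_g) \cong \mathbb{Z}^{2g}, \qquad H_2(\Sigma_g) \cong \mathbb{Z},
\]
\[
H^k(\Sigma_g) \cong H_{2-k}(\Sigma_g) \quad \text{for } k = 0, 1, 2,
\]
so the Betti numbers are symmetric about the middle dimension: $b_0 = b_2 = 1$ and $b_1 = 2g$.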
Here, homology and cohomology are integral, but the isomorphism remains valid over any coefficient ring. In the case where an oriented manifold is not compact, one has to replace homology by Borel–Moore homology
or replace cohomology by cohomology with compact support
Dual cell structures
Given a triangulated manifold, there is a corresponding dual polyhedral decomposition. The dual polyhedral decomposition is a cell decomposition of the manifold such that the k-cells of the dual polyhedral decomposition are in bijective correspondence with the ()-cells of the triangulation, generalizing the notion of dual polyhedra.
Precisely, let be a triangulation of an -manifold . Let be a simplex of . Let be a top-dimensional simplex of containing , so we can think of as a subset of the vertices of . Define the dual cell corresponding to so that is the convex hull in of the barycentres of all subsets of the vertices of that contain . One can check that if is -dimensional, then is an -dimensional cell. Moreover, the dual cells to form a CW-decomposition of , and the only ()-dimensional dual cell that intersects an -cell is . Thus the pairing given by taking intersections induces an isomorphism , where is the cellular homology of the triangulation , and and are the cellular homologies and cohomologies of the dual polyhedral/CW decomposition the manifold respectively. The fact that this is an isomorphism of chain complexes is a proof of Poincaré duality. Roughly speaking, this amounts to the fact that the boundary relation for the triangulation is the incidence relation for the dual polyhedral decomposition under the correspondence .
Naturality
Note that is a contravariant functor while is covariant. The family of isomorphisms
is natural in the following sense: if
is a continuous map between two oriented n-manifolds which is compatible with orientation, i.e. which maps the fundamental class of M to the fundamental class of N, then
where and are the maps induced by in homology and cohomology, respectively.
Note the very strong and crucial hypothesis that maps the fundamental class of M to the fundamental class of N. Naturality does not hold for an arbitrary continuous map , since in general is not an injection on cohomology. For example, if is a covering map then it maps the fundamental class of M to a multiple of the fundamental class of N. This multiple is the degree of the map .
Bilinear pairings formulation
Assuming the manifold M is compact, boundaryless, and orientable, let $\tau H_i(M)$ denote the torsion subgroup of $H_i(M)$ and let $fH_i(M)$ be the free part – all homology groups taken with integer coefficients in this section. Then there are two bilinear maps which are duality pairings (explained below), one defined on the free parts and one on the torsion parts.
Here $\mathbb{Q}/\mathbb{Z}$ is the quotient of the rationals by the integers, taken as an additive group. Notice that in the torsion linking form, there is a −1 in the dimension, so the paired dimensions add up to n − 1, rather than to n.
The first form is typically called the intersection product and the 2nd the torsion linking form. Assuming the manifold M is smooth, the intersection product is computed by perturbing the homology classes to be transverse and computing their oriented intersection number. For the torsion linking form, one computes the pairing of x and y by realizing nx as the boundary of some class z. The form then takes the value equal to the fraction whose numerator is the transverse intersection number of z with y, and whose denominator is n.
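In the notation introduced above (fH for the free part and τH for the torsion part of the integral homology), the two pairings are standardly written as
$fH_k(M) \otimes fH_{n-k}(M) \to \mathbb{Z} \qquad \text{and} \qquad \tau H_k(M) \otimes \tau H_{n-k-1}(M) \to \mathbb{Q}/\mathbb{Z}.$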
The statement that the pairings are duality pairings means that the adjoint maps
$fH_k(M) \to \operatorname{Hom}(fH_{n-k}(M), \mathbb{Z}) \qquad \text{and} \qquad \tau H_k(M) \to \operatorname{Hom}(\tau H_{n-k-1}(M), \mathbb{Q}/\mathbb{Z})$
are isomorphisms of groups.
This result is an application of Poincaré duality, $H_k(M) \cong H^{n-k}(M)$, together with the universal coefficient theorem, which gives identifications $fH^{n-k}(M) \cong \operatorname{Hom}(H_{n-k}(M), \mathbb{Z})$ and $\tau H^{n-k}(M) \cong \operatorname{Ext}(H_{n-k-1}(M), \mathbb{Z}) \cong \tau H_{n-k-1}(M)$.
Thus, Poincaré duality says that $fH_k(M)$ and $fH_{n-k}(M)$ are isomorphic, although there is no natural map giving the isomorphism, and similarly $\tau H_k(M)$ and $\tau H_{n-k-1}(M)$ are also isomorphic, though not naturally.
Middle dimension
While for most dimensions, Poincaré duality induces a bilinear pairing between different homology groups, in the middle dimension it induces a bilinear form on a single homology group. The resulting intersection form is a very important topological invariant.
What is meant by "middle dimension" depends on parity. For even dimension , which is more common, this is literally the middle dimension k, and there is a form on the free part of the middle homology:
By contrast, for odd dimension , which is less commonly discussed, it is most simply the lower middle dimension k, and there is a form on the torsion part of the homology in that dimension:
However, there is also a pairing between the free part of the homology in the lower middle dimension k and in the upper middle dimension :
The resulting groups, while not a single group with a bilinear form, are a simple chain complex and are studied in algebraic L-theory.
Applications
This approach to Poincaré duality was used by Józef Przytycki and Akira Yasuhara to give an elementary homotopy and diffeomorphism classification of 3-dimensional lens spaces.
Application to Euler Characteristics
An immediate result from Poincaré duality is that any closed odd-dimensional manifold M has Euler characteristic zero, which in turn gives that any manifold that bounds has even Euler characteristic.
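The first claim can be verified directly from the Betti numbers; a sketch of the computation, carried out with $\mathbb{Z}/2$ coefficients so that no orientability assumption is needed, is
$\chi(M) = \sum_{k=0}^{n} (-1)^k\, b_k(M), \qquad b_k(M) = b_{n-k}(M).$
When n is odd, the terms for k and n − k carry opposite signs and cancel in pairs, so $\chi(M) = 0$.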
Thom isomorphism formulation
Poincaré duality is closely related to the Thom isomorphism theorem. Let be a compact, boundaryless oriented n-manifold, and the product of M with itself. Let V be an open tubular neighbourhood of the diagonal in . Consider the maps:
the Homology cross product
inclusion.
excision map where is the normal disc bundle of the diagonal in .
the Thom isomorphism. This map is well-defined as there is a standard identification which is an oriented bundle, so the Thom isomorphism applies.
Combined, this gives a map , which is the intersection product, generalizing the intersection product discussed above. A similar argument with the Künneth theorem gives the torsion linking form.
This formulation of Poincaré duality has become popular as it defines Poincaré duality for any generalized homology theory, given a Künneth theorem and a Thom isomorphism for that homology theory. A Thom isomorphism theorem for a homology theory is now viewed as the generalized notion of orientability for that theory. For example, a spinC-structure on a manifold is a precise analog of an orientation within complex topological k-theory.
Generalizations and related results
The Poincaré–Lefschetz duality theorem is a generalisation for manifolds with boundary. In the non-orientable case, taking into account the sheaf of local orientations, one can give a statement that is independent of orientability: see twisted Poincaré duality.
Blanchfield duality is a version of Poincaré duality which provides an isomorphism between the homology of an abelian covering space of a manifold and the corresponding cohomology with compact supports. It is used to get basic structural results about the Alexander module and can be used to define the signatures of a knot.
With the development of homology theory to include K-theory and other extraordinary theories from about 1955, it was realised that the homology could be replaced by other theories, once the products on manifolds were constructed; and there are now textbook treatments in generality. More specifically, there is a general Poincaré duality theorem for a generalized homology theory which requires a notion of orientation with respect to a homology theory, and is formulated in terms of a generalized Thom isomorphism theorem. The Thom isomorphism theorem in this regard can be considered as the germinal idea for Poincaré duality for generalized homology theories.
Verdier duality is the appropriate generalization to (possibly singular) geometric objects, such as analytic spaces or schemes, while intersection homology was developed by Robert MacPherson and Mark Goresky for stratified spaces, such as real or complex algebraic varieties, precisely so as to generalise Poincaré duality to such stratified spaces.
There are many other forms of geometric duality in algebraic topology, including Lefschetz duality, Alexander duality, Hodge duality, and S-duality.
More algebraically, one can abstract the notion of a Poincaré complex, which is an algebraic object that behaves like the singular chain complex of a manifold, notably satisfying Poincaré duality on its homology groups, with respect to a distinguished element (corresponding to the fundamental class). These are used in surgery theory to algebraicize questions about manifolds. A Poincaré space is one whose singular chain complex is a Poincaré complex. These are not all manifolds, but their failure to be manifolds can be measured by obstruction theory.
See also
Bruhat decomposition
Fundamental class
Weyl group
References
Further reading
External links
Intersection form at the Manifold Atlas
Linking form at the Manifold Atlas
Homology theory
Manifolds
Duality theories
Theorems in algebraic geometry | Poincaré duality | [
"Mathematics"
] | 2,347 | [
"Theorems in algebraic geometry",
"Mathematical structures",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Category theory",
"Duality theories",
"Geometry",
"Manifolds",
"Theorems in geometry"
] |
540,979 | https://en.wikipedia.org/wiki/Magnetic%20moment | In electromagnetism, the magnetic moment or magnetic dipole moment is the combination of strength and orientation of a magnet or other object or system that exerts a magnetic field. The magnetic dipole moment of an object determines the magnitude of torque the object experiences in a given magnetic field. When the same magnetic field is applied, objects with larger magnetic moments experience larger torques. The strength (and direction) of this torque depends not only on the magnitude of the magnetic moment but also on its orientation relative to the direction of the magnetic field. Its direction points from the south pole to north pole of the magnet (i.e., inside the magnet).
The magnetic moment also expresses the magnetic force effect of a magnet. The magnetic field of a magnetic dipole is proportional to its magnetic dipole moment. The dipole component of an object's magnetic field is symmetric about the direction of its magnetic dipole moment, and decreases as the inverse cube of the distance from the object.
Examples of objects or systems that produce magnetic moments include: permanent magnets; astronomical objects such as many planets, including the Earth, and some moons, stars, etc.; various molecules; elementary particles (e.g. electrons); composites of elementary particles (protons and neutrons, as in the nucleus of an atom); and loops of electric current such as those carried by electromagnets.
Definition, units, and measurement
Definition
The magnetic moment can be defined as a vector (really a pseudovector) relating the aligning torque on the object from an externally applied magnetic field to the field vector itself. The relationship is given by:
$\boldsymbol{\tau} = \mathbf{m} \times \mathbf{B}$
where $\boldsymbol{\tau}$ is the torque acting on the dipole, $\mathbf{B}$ is the external magnetic field, and $\mathbf{m}$ is the magnetic moment.
This definition is based on how one could, in principle, measure the magnetic moment of an unknown sample. For a current loop, this definition leads to the magnitude of the magnetic dipole moment equaling the product of the current times the area of the loop. Further, this definition allows the calculation of the expected magnetic moment for any known macroscopic current distribution.
An alternative definition is useful for thermodynamic calculations of the magnetic moment. In this definition, the magnetic dipole moment of a system is the negative gradient of its intrinsic energy, $U_{\mathrm{int}}$, with respect to the external magnetic field:
$\mathbf{m} = -\frac{\partial U_{\mathrm{int}}}{\partial \mathbf{B}}.$
Generically, the intrinsic energy includes the self-field energy of the system plus the energy of the internal workings of the system. For example, for a hydrogen atom in a 2p state in an external field, the self-field energy is negligible, so the internal energy is essentially the eigenenergy of the 2p state, which includes Coulomb potential energy and the kinetic energy of the electron. The interaction-field energy between the internal dipoles and external fields is not part of this internal energy.
Units
The unit for magnetic moment in International System of Units (SI) base units is A⋅m2, where A is ampere (SI base unit of current) and m is meter (SI base unit of distance). This unit has equivalents in other SI derived units including:
where N is newton (SI derived unit of force), T is tesla (SI derived unit of magnetic flux density), and J is joule (SI derived unit of energy). Although torque (N·m) and energy (J) are dimensionally equivalent, torques are never expressed in units of energy.
In the CGS system, there are several different sets of electromagnetism units, of which the main ones are ESU, Gaussian, and EMU. Among these, there are two alternative (non-equivalent) units of magnetic dipole moment:
where statA is statamperes, cm is centimeters, erg is ergs, and G is gauss. The ratio of these two non-equivalent CGS units (EMU/ESU) is equal to the speed of light in free space, expressed in cm⋅s−1.
All formulae in this article are correct in SI units; they may need to be changed for use in other unit systems. For example, in SI units, a loop carrying current I and enclosing area A has magnetic moment IA (see below), but in Gaussian units the magnetic moment is IA/c.
Other units for measuring the magnetic dipole moment include the Bohr magneton and the nuclear magneton.
Measurement
The magnetic moments of objects are typically measured with devices called magnetometers, though not all magnetometers measure magnetic moment: Some are configured to measure magnetic field instead. If the magnetic field surrounding an object is known well enough, though, then the magnetic moment can be calculated from that magnetic field.
Relation to magnetization
The magnetic moment is a quantity that describes the magnetic strength of an entire object. Sometimes, though, it is useful or necessary to know how much of the net magnetic moment of the object is produced by a particular portion of that magnet. Therefore, it is useful to define the magnetization field as:
$\mathbf{M} = \frac{\mathbf{m}_{\Delta V}}{\Delta V}$
where $\mathbf{m}_{\Delta V}$ and $\Delta V$ are the magnetic dipole moment and volume of a sufficiently small portion of the magnet. This equation is often represented using derivative notation such that
$\mathbf{M} = \frac{\mathrm{d}\mathbf{m}}{\mathrm{d}V}$
where $\mathrm{d}\mathbf{m}$ is the elementary magnetic moment and $\mathrm{d}V$ is the volume element. The net magnetic moment of the magnet therefore is
$\mathbf{m} = \iiint \mathbf{M}\,\mathrm{d}V$
where the triple integral denotes integration over the volume of the magnet. For uniform magnetization, where both the magnitude and the direction of $\mathbf{M}$ are the same for the entire magnet (such as a straight bar magnet), the last equation simplifies to:
$\mathbf{m} = \mathbf{M} V$
where $V$ is the volume of the bar magnet.
The magnetization is often not listed as a material parameter for commercially available ferromagnetic materials, though. Instead the parameter that is listed is residual flux density (or remanence), denoted $B_r$. The formula needed in this case to calculate $m$ (in units of A⋅m2) is:
$m = \frac{1}{\mu_0} B_r V$
where:
$B_r$ is the residual flux density, expressed in teslas.
$V$ is the volume of the magnet (in m3).
$\mu_0$ is the permeability of vacuum (4π×10−7 H/m).
Models
The preferred classical explanation of a magnetic moment has changed over time. Before the 1930s, textbooks explained the moment using hypothetical magnetic point charges. Since then, most have defined it in terms of Ampèrian currents. In magnetic materials, the cause of the magnetic moment are the spin and orbital angular momentum states of the electrons, and varies depending on whether atoms in one region are aligned with atoms in another.
Magnetic pole model
The sources of magnetic moments in materials can be represented by poles in analogy to electrostatics. This is sometimes known as the Gilbert model. In this model, a small magnet is modeled by a pair of fictitious magnetic monopoles of equal magnitude but opposite polarity. Each pole is the source of magnetic force which weakens with distance. Since magnetic poles always come in pairs, their forces partially cancel each other because while one pole pulls, the other repels. This cancellation is greatest when the poles are close to each other i.e. when the bar magnet is short. The magnetic force produced by a bar magnet, at a given point in space, therefore depends on two factors: the strength of its poles (magnetic pole strength), and the vector separating them. The magnetic dipole moment is related to the fictitious poles as
$\mathbf{m} = p\,\boldsymbol{\ell}$
where $p$ is the magnetic pole strength and $\boldsymbol{\ell}$ is the vector from the south pole to the north pole.
It points in the direction from South to North pole. The analogy with electric dipoles should not be taken too far because magnetic dipoles are associated with angular momentum (see Relation to angular momentum). Nevertheless, magnetic poles are very useful for magnetostatic calculations, particularly in applications to ferromagnets. Practitioners using the magnetic pole approach generally represent the magnetic field by the irrotational field , in analogy to the electric field .
Ampèrian loop model
After Hans Christian Ørsted discovered that electric currents produce a magnetic field and André-Marie Ampère discovered that electric currents attract and repel each other similar to magnets, it was natural to hypothesize that all magnetic fields are due to electric current loops. In this model developed by Ampère, the elementary magnetic dipole that makes up all magnets is a sufficiently small amperian loop of current I. The dipole moment of this loop is
$\mathbf{m} = I\,\mathbf{S}$
where $\mathbf{S}$ is the vector area of the loop. The direction of the magnetic moment is normal to the area enclosed by the current, consistent with the direction of the current using the right hand rule.
Localized current distributions
The magnetic dipole moment can be calculated for a localized (does not extend to infinity) current distribution assuming that we know all of the currents involved. Conventionally, the derivation starts from a multipole expansion of the vector potential. This leads to the definition of the magnetic dipole moment as:
$\mathbf{m} = \tfrac{1}{2} \iiint \mathbf{r} \times \mathbf{j}\,\mathrm{d}V$
where × is the vector cross product, $\mathbf{r}$ is the position vector, and $\mathbf{j}$ is the electric current density and the integral is a volume integral. When the current density in the integral is replaced by a loop of current I in a plane enclosing an area S then the volume integral becomes a line integral and the resulting dipole moment becomes
$\mathbf{m} = I\,\mathbf{S}$
which is how the magnetic dipole moment for an Amperian loop is derived.
Practitioners using the current loop model generally represent the magnetic field by the solenoidal field , analogous to the electrostatic field .
Magnetic moment of a solenoid
A generalization of the above current loop is a coil, or solenoid. Its moment is the vector sum of the moments of individual turns. If the solenoid has N identical turns (single-layer winding) and vector area $\mathbf{S}$, its magnetic moment is
$\mathbf{m} = N I \mathbf{S}.$
Quantum mechanical model
When calculating the magnetic moments of materials or molecules on the microscopic level it is often convenient to use a third model for the magnetic moment that exploits the linear relationship between the angular momentum and the magnetic moment of a particle. While this relation is straightforward to develop for macroscopic currents using the amperian loop model (see below), neither the magnetic pole model nor the amperian loop model truly represents what is occurring at the atomic and molecular levels. At that level quantum mechanics must be used. Fortunately, the linear relationship between the magnetic dipole moment of a particle and its angular momentum still holds, although it is different for each particle. Further, care must be used to distinguish between the intrinsic angular momentum (or spin) of the particle and the particle's orbital angular momentum. See below for more details.
Effects of an external magnetic field
Torque on a moment
The torque $\boldsymbol{\tau}$ on an object having a magnetic dipole moment $\mathbf{m}$ in a uniform magnetic field $\mathbf{B}$ is:
$\boldsymbol{\tau} = \mathbf{m} \times \mathbf{B}.$
This is valid for the moment due to any localized current distribution provided that the magnetic field is uniform. For non-uniform B the equation is also valid for the torque about the center of the magnetic dipole provided that the magnetic dipole is small enough.
An electron, nucleus, or atom placed in a uniform magnetic field will precess with a frequency known as the Larmor frequency. See Resonance.
Force on a moment
A magnetic moment $\mathbf{m}$ in an externally produced magnetic field $\mathbf{B}$ has a potential energy $U$:
$U = -\mathbf{m} \cdot \mathbf{B}.$
In a case when the external magnetic field is non-uniform, there will be a force, proportional to the magnetic field gradient, acting on the magnetic moment itself. There are two expressions for the force acting on a magnetic dipole, depending on whether the model used for the dipole is a current loop or two monopoles (analogous to the electric dipole). The force obtained in the case of a current loop model is
$\mathbf{F} = \nabla\left(\mathbf{m} \cdot \mathbf{B}\right).$
Assuming the existence of magnetic monopoles, the force is modified accordingly.
In the case of a pair of monopoles being used (i.e. electric dipole model), the force is
$\mathbf{F} = \left(\mathbf{m} \cdot \nabla\right)\mathbf{B}.$
And one can be put in terms of the other via the relation
$\nabla\left(\mathbf{m} \cdot \mathbf{B}\right) = \left(\mathbf{m} \cdot \nabla\right)\mathbf{B} + \mathbf{m} \times \left(\nabla \times \mathbf{B}\right).$
In all these expressions $\mathbf{m}$ is the dipole and $\mathbf{B}$ is the magnetic field at its position. Note that if there are no currents or time-varying electric fields or magnetic charge, $\nabla \times \mathbf{B} = 0$ and the two expressions agree.
Relation to free energy
One can relate the magnetic moment of a system to the free energy of that system. In a uniform magnetic field $\mathbf{B}$, the free energy $F$ can be related to the magnetic moment of the system as
$\mathrm{d}F = -S\,\mathrm{d}T - \mathbf{m} \cdot \mathrm{d}\mathbf{B}$
where $S$ is the entropy of the system and $T$ is the temperature. Therefore, the magnetic moment can also be defined in terms of the free energy of a system as
$\mathbf{m} = -\left(\frac{\partial F}{\partial \mathbf{B}}\right)_{T}.$
Magnetism
In addition, an applied magnetic field can change the magnetic moment of the object itself; for example by magnetizing it. This phenomenon is known as magnetism. An applied magnetic field can flip the magnetic dipoles that make up the material causing both paramagnetism and ferromagnetism. Additionally, the magnetic field can affect the currents that create the magnetic fields (such as the atomic orbits) which causes diamagnetism.
Effects on environment
Magnetic field of a magnetic moment
Any system possessing a net magnetic dipole moment will produce a dipolar magnetic field (described below) in the space surrounding the system. While the net magnetic field produced by the system can also have higher-order multipole components, those will drop off with distance more rapidly, so that only the dipole component will dominate the magnetic field of the system at distances far away from it.
The magnetic field of a magnetic dipole depends on the strength and direction of a magnet's magnetic moment but drops off as the cube of the distance such that:
$\mathbf{H}(\mathbf{r}) = \frac{1}{4\pi}\left(\frac{3\mathbf{r}\,(\mathbf{m} \cdot \mathbf{r})}{|\mathbf{r}|^{5}} - \frac{\mathbf{m}}{|\mathbf{r}|^{3}}\right)$
where $\mathbf{H}$ is the magnetic field produced by the magnet and $\mathbf{r}$ is a vector from the center of the magnetic dipole to the location where the magnetic field is measured. The inverse cube nature of this equation is more readily seen by expressing the location vector as the product of its magnitude times the unit vector in its direction ($\mathbf{r} = |\mathbf{r}|\,\hat{\mathbf{r}}$) so that:
$\mathbf{H}(\mathbf{r}) = \frac{1}{4\pi}\,\frac{3\hat{\mathbf{r}}(\hat{\mathbf{r}} \cdot \mathbf{m}) - \mathbf{m}}{|\mathbf{r}|^{3}}$
The equivalent equations for the magnetic B-field are the same except for a multiplicative factor of μ0 = 4π×10−7 H/m, where μ0 is known as the vacuum permeability.
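A standard way of writing the corresponding B-field expression (an equivalent restatement of the relation just described, using the same symbols) is
$\mathbf{B}(\mathbf{r}) = \mu_0\,\mathbf{H}(\mathbf{r}) = \frac{\mu_0}{4\pi}\,\frac{3\hat{\mathbf{r}}(\hat{\mathbf{r}} \cdot \mathbf{m}) - \mathbf{m}}{|\mathbf{r}|^{3}}.$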
Forces between two magnetic dipoles
As discussed earlier, the force exerted by a dipole loop with moment on another with moment is
where is the magnetic field due to moment . The result of calculating the gradient is
where is the unit vector pointing from magnet 1 to magnet 2 and is the distance. An equivalent expression is
The force acting on is in the opposite direction.
Torque of one magnetic dipole on another
The torque of magnet 1 on magnet 2 is
Theory underlying magnetic dipoles
The magnetic field of any magnet can be modeled by a series of terms for which each term is more complicated (having finer angular detail) than the one before it. The first three terms of that series are called the monopole (represented by an isolated magnetic north or south pole) the dipole (represented by two equal and opposite magnetic poles), and the quadrupole (represented by four poles that together form two equal and opposite dipoles). The magnitude of the magnetic field for each term decreases progressively faster with distance than the previous term, so that at large enough distances the first non-zero term will dominate.
For many magnets the first non-zero term is the magnetic dipole moment. (To date, no isolated magnetic monopoles have been experimentally detected.) A magnetic dipole is the limit of either a current loop or a pair of poles as the dimensions of the source are reduced to zero while keeping the moment constant. As long as these limits only apply to fields far from the sources, they are equivalent. However, the two models give different predictions for the internal field (see below).
Magnetic potentials
Traditionally, the equations for the magnetic dipole moment (and higher order terms) are derived from theoretical quantities called magnetic potentials which are simpler to deal with mathematically than the magnetic fields.
In the magnetic pole model, the relevant magnetic field is the demagnetizing field . Since the demagnetizing portion of does not include, by definition, the part of due to free currents, there exists a magnetic scalar potential such that
In the amperian loop model, the relevant magnetic field is the magnetic induction . Since magnetic monopoles do not exist, there exists a magnetic vector potential such that
Both of these potentials can be calculated for any arbitrary current distribution (for the amperian loop model) or magnetic charge distribution (for the magnetic charge model) provided that these are limited to a small enough region to give:
where is the current density in the amperian loop model, is the magnetic pole strength density in analogy to the electric charge density that leads to the electric potential, and the integrals are the volume (triple) integrals over the coordinates that make up . The denominators of these equation can be expanded using the multipole expansion to give a series of terms that have larger of power of distances in the denominator. The first nonzero term, therefore, will dominate for large distances. The first non-zero term for the vector potential is:
where is:
where is the vector cross product, is the position vector, and is the electric current density and the integral is a volume integral.
In the magnetic pole perspective, the first non-zero term of the scalar potential is
Here may be represented in terms of the magnetic pole strength density but is more usefully expressed in terms of the magnetization field as:
The same symbol is used for both equations since they produce equivalent results outside of the magnet.
External magnetic field produced by a magnetic dipole moment
The magnetic flux density for a magnetic dipole in the amperian loop model, therefore, is
Further, the magnetic field strength is
Internal magnetic field of a dipole
The two models for a dipole (magnetic poles or current loop) give the same predictions for the magnetic field far from the source. However, inside the source region, they give different predictions. The magnetic field between poles (see the figure for Magnetic pole model) is in the opposite direction to the magnetic moment (which points from the negative charge to the positive charge), while inside a current loop it is in the same direction (see the figure to the right). The limits of these fields must also be different as the sources shrink to zero size. This distinction only matters if the dipole limit is used to calculate fields inside a magnetic material.
If a magnetic dipole is formed by taking a "north pole" and a "south pole", bringing them closer and closer together but keeping the product of magnetic pole charge and distance constant, the limiting field is
If a magnetic dipole is formed by making a current loop smaller and smaller, but keeping the product of current and area constant, the limiting field is
Unlike the expressions in the previous section, this limit is correct for the internal field of the dipole.
These fields are related by , where is the magnetization.
Relation to angular momentum
The magnetic moment has a close connection with angular momentum called the gyromagnetic effect. This effect is expressed on a macroscopic scale in the Einstein–de Haas effect, or "rotation by magnetization", and its inverse, the Barnett effect, or "magnetization by rotation". Further, a torque applied to a relatively isolated magnetic dipole such as an atomic nucleus can cause it to precess (rotate about the axis of the applied field). This phenomenon is used in nuclear magnetic resonance.
Viewing a magnetic dipole as a current loop brings out the close connection between magnetic moment and angular momentum. Since the particles creating the current (by rotating around the loop) have charge and mass, both the magnetic moment and the angular momentum increase with the rate of rotation. The ratio of the two is called the gyromagnetic ratio, usually denoted $\gamma$, so that:
$\mathbf{m} = \gamma\,\mathbf{L}$
where $\mathbf{L}$ is the angular momentum of the particle or particles that are creating the magnetic moment.
In the amperian loop model, which applies for macroscopic currents, the gyromagnetic ratio is one half of the charge-to-mass ratio. This can be shown as follows. The angular momentum of a moving charged particle is defined as:
where is the mass of the particle and is the particle's velocity. The angular momentum of the very large number of charged particles that make up a current therefore is:
where is the mass density of the moving particles. By convention the direction of the cross product is given by the right-hand rule.
This is similar to the magnetic moment created by the very large number of charged particles that make up that current:
where and is the charge density of the moving charged particles.
Comparing the two equations results in:
where $q$ is the charge of the particle and $m$ is the mass of the particle.
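Collecting the steps of this argument in formulas, under the stated assumption of a uniform charge-to-mass ratio q/m (so that the charge density is $\rho_q = (q/m)\,\rho_m$), gives
$\mathbf{L} = \iiint \mathbf{r} \times \rho_m \mathbf{v}\,\mathrm{d}V, \qquad \mathbf{m} = \tfrac{1}{2} \iiint \mathbf{r} \times \rho_q \mathbf{v}\,\mathrm{d}V = \frac{q}{2m}\,\mathbf{L}, \qquad \text{so that}\ \gamma = \frac{q}{2m}.$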
Even though atomic particles cannot be accurately described as orbiting (and spinning) charge distributions of uniform charge-to-mass ratio, this general trend can be observed in the atomic world so that:
where the g-factor depends on the particle and configuration. For example, the g-factor for the magnetic moment due to an electron orbiting a nucleus is one while the g-factor for the magnetic moment of the electron due to its intrinsic angular momentum (spin) is a little larger than 2. The g-factor of atoms and molecules must account for the orbital and intrinsic moments of its electrons and possibly the intrinsic moment of its nuclei as well.
In the atomic world the angular momentum (spin) of a particle is an integer (or half-integer in the case of fermions) multiple of the reduced Planck constant ħ. This is the basis for defining the magnetic moment units of Bohr magneton (assuming charge-to-mass ratio of the electron) and nuclear magneton (assuming charge-to-mass ratio of the proton). See electron magnetic moment and Bohr magneton for more details.
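Explicitly, these two units are
$\mu_{\mathrm{B}} = \frac{e\hbar}{2 m_{\mathrm{e}}} \approx 9.274 \times 10^{-24}\ \mathrm{J/T} \qquad \text{and} \qquad \mu_{\mathrm{N}} = \frac{e\hbar}{2 m_{\mathrm{p}}} \approx 5.051 \times 10^{-27}\ \mathrm{J/T},$
where $m_{\mathrm{e}}$ and $m_{\mathrm{p}}$ are the electron and proton masses and e is the elementary charge.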
Atoms, molecules, and elementary particles
Fundamentally, contributions to any system's magnetic moment may come from sources of two kinds: 1) motion of electric charges, such as electric currents; and 2) the intrinsic magnetism due spin of elementary particles, such as the electron.
Contributions due to the sources of the first kind can be calculated from knowing the distribution of all the electric currents (or, alternatively, of all the electric charges and their velocities) inside the system, by using the formulas below.
Contributions due to particle spin sum the magnitudes of each elementary particle's intrinsic magnetic moment, a fixed number, often measured experimentally to a great precision. For example, any electron's magnetic moment is measured to be approximately −9.285×10−24 J/T. The direction of the magnetic moment of any elementary particle is entirely determined by the direction of its spin, with the negative value indicating that any electron's magnetic moment is antiparallel to its spin.
The net magnetic moment of any system is a vector sum of contributions from one or both types of sources.
For example, the magnetic moment of an atom of hydrogen-1 (the lightest hydrogen isotope, consisting of a proton and an electron) is a vector sum of the following contributions:
the intrinsic moment of the electron,
the orbital motion of the electron around the proton,
the intrinsic moment of the proton.
Similarly, the magnetic moment of a bar magnet is the sum of the contributing magnetic moments, which include the intrinsic and orbital magnetic moments of the unpaired electrons of the magnet's material and the nuclear magnetic moments.
Magnetic moment of an atom
For an atom, individual electron spins are added to get a total spin, and individual orbital angular momenta are added to get a total orbital angular momentum. These two then are added using angular momentum coupling to get a total angular momentum. For an atom with no nuclear magnetic moment, the magnitude of the atomic dipole moment, $m_{\text{atom}}$, is then
$m_{\text{atom}} = g_J\,\mu_{\mathrm{B}} \sqrt{J(J+1)}$
where $J$ is the total angular momentum quantum number, $g_J$ is the Landé g-factor, and $\mu_{\mathrm{B}}$ is the Bohr magneton. The component of this magnetic moment along the direction of the magnetic field is then
$m_z = -g_J\,\mu_{\mathrm{B}}\, m$
The negative sign occurs because electrons have negative charge.
The integer $m$ (not to be confused with the moment, $m_{\text{atom}}$) is called the magnetic quantum number or the equatorial quantum number, which can take on any of the 2J + 1 values
$m = -J, -J+1, \ldots, J.$
Due to the angular momentum, the dynamics of a magnetic dipole in a magnetic field differs from that of an electric dipole in an electric field. The field does exert a torque on the magnetic dipole tending to align it with the field. However, torque is proportional to rate of change of angular momentum, so precession occurs: the direction of spin changes. This behavior is described by the Landau–Lifshitz–Gilbert equation:
where is the gyromagnetic ratio, is the magnetic moment, is the damping coefficient and is the effective magnetic field (the external field plus any self-induced field). The first term describes precession of the moment about the effective field, while the second is a damping term related to dissipation of energy caused by interaction with the surroundings.
Magnetic moment of an electron
Electrons and many elementary particles also have intrinsic magnetic moments, an explanation of which requires a quantum mechanical treatment and relates to the intrinsic angular momentum of the particles as discussed in the article Electron magnetic moment. It is these intrinsic magnetic moments that give rise to the macroscopic effects of magnetism, and other phenomena, such as electron paramagnetic resonance.
The magnetic moment of the electron is
$\mathbf{m}_{\mathrm{S}} = -g_{\mathrm{S}}\,\mu_{\mathrm{B}}\,\frac{\mathbf{S}}{\hbar}$
where $\mu_{\mathrm{B}}$ is the Bohr magneton, $\mathbf{S}$ is the electron spin, and the g-factor $g_{\mathrm{S}}$ is 2 according to Dirac's theory, but due to quantum electrodynamic effects it is slightly larger in reality: approximately 2.00232. The deviation from 2 is known as the anomalous magnetic dipole moment.
Again it is important to notice that is a negative constant multiplied by the spin, so the magnetic moment of the electron is antiparallel to the spin. This can be understood with the following classical picture: if we imagine that the spin angular momentum is created by the electron mass spinning around some axis, the electric current that this rotation creates circulates in the opposite direction, because of the negative charge of the electron; such current loops produce a magnetic moment which is antiparallel to the spin. Hence, for a positron (the anti-particle of the electron) the magnetic moment is parallel to its spin.
Magnetic moment of a nucleus
The nuclear system is a complex physical system consisting of nucleons, i.e., protons and neutrons. The quantum mechanical properties of the nucleons include the spin among others. Since the electromagnetic moments of the nucleus depend on the spin of the individual nucleons, one can look at these properties with measurements of nuclear moments, and more specifically the nuclear magnetic dipole moment.
Most common nuclei exist in their ground state, although nuclei of some isotopes have long-lived excited states. Each energy state of a nucleus of a given isotope is characterized by a well-defined magnetic dipole moment, the magnitude of which is a fixed number, often measured experimentally to a great precision. This number is very sensitive to the individual contributions from nucleons, and a measurement or prediction of its value can reveal important information about the content of the nuclear wave function. There are several theoretical models that predict the value of the magnetic dipole moment and a number of experimental techniques aiming to carry out measurements in nuclei along the nuclear chart.
Magnetic moment of a molecule
Any molecule has a well-defined magnitude of magnetic moment, which may depend on the molecule's energy state. Typically, the overall magnetic moment of a molecule is a combination of the following contributions, in the order of their typical strength:
magnetic moments due to its unpaired electron spins (paramagnetic contribution), if any
orbital motion of its electrons, which in the ground state is often proportional to the external magnetic field (diamagnetic contribution)
the combined magnetic moment of its nuclear spins, which depends on the nuclear spin configuration.
Examples of molecular magnetism
The dioxygen molecule, O2, exhibits strong paramagnetism, due to unpaired spins of its outermost two electrons.
The carbon dioxide molecule, CO2, mostly exhibits diamagnetism, a much weaker magnetic moment of the electron orbitals that is proportional to the external magnetic field. The nuclear magnetism of a magnetic isotope such as 13C or 17O will contribute to the molecule's magnetic moment.
The dihydrogen molecule, H2, in a weak (or zero) magnetic field exhibits nuclear magnetism, and can be in a para- or an ortho- nuclear spin configuration.
Many transition metal complexes are magnetic. The spin-only formula is a good first approximation for high-spin complexes of first-row transition metals.
Elementary particles
In atomic and nuclear physics, the Greek symbol represents the magnitude of the magnetic moment, often measured in Bohr magnetons or nuclear magnetons, associated with the intrinsic spin of the particle and/or with the orbital motion of the particle in a system. Values of the intrinsic magnetic moments of some particles are given in the table below:
See also
Orders of magnitude (magnetic moment)
Moment (physics)
Electric dipole moment
Toroidal dipole moment
Magnetic susceptibility
Orbital magnetization
Magnetic dipole–dipole interaction
References and notes
External links
Magnetostatics
Magnetism
Electric and magnetic fields in matter
Physical quantities
Moment (physics)
Magnetic moment | Magnetic moment | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 5,859 | [
"Physical phenomena",
"Physical quantities",
"Quantity",
"Electric and magnetic fields in matter",
"Materials science",
"Magnetic moment",
"Condensed matter physics",
"Physical properties",
"Moment (physics)"
] |
541,026 | https://en.wikipedia.org/wiki/ASARCO | ASARCO (American Smelting and Refining Company) is a mining, smelting, and refining company based in Tucson, Arizona, which mines and processes primarily copper. The company has been a subsidiary of Grupo México since 1999.
Its three largest open-pit mines are the Mission, Silver Bell and Ray mines in Arizona. Its mines produce of copper a year. ASARCO conducts solvent extraction and electrowinning at the Ray and Silver Bell mines in Pima County, Arizona, and Pinal County, Arizona, and operates a smelter in Hayden, Arizona. ASARCO's smelting plant in El Paso, Texas, was suspended in 1999 and then demolished on April 13, 2013. Before closing, the plant produced of anodes each year. Refining at the mines as well as at a copper refinery in Amarillo, Texas, produce of refined copper each year.
ASARCO's hourly workers are primarily represented by the United Steelworkers.
ASARCO has 20 superfund sites across the United States, and it is subject to considerable litigation over pollution. After emerging from bankruptcy in 2008, it made a settlement with the government of $1.79 billion for contamination at various sites; the funds were allotted to the Environmental Protection Agency (EPA) for cleanup at 26 sites around the country.
History
ASARCO was founded in 1888 as the American Smelting and Refining Company by Henry H. Rogers, William Rockefeller, Adolph Lewisohn, Robert S. Towne, Anton Eilers, and Leonard Lewisohn. From 1901 to 1959, American Smelting and Refining was included in the Dow Jones Industrial Average.
In April 1901, the Guggenheim family gained control of the company, and in 1905, bought the Tacoma smelter from the Bunker Hill Mining Company. ASARCO eventually controlled 90% of the U.S. lead production, essentially becoming a smelter trust.
On January 11, 1916, sixteen ASARCO employees were killed and mutilated by Pancho Villa's men near the town of Santa Isabel, Chihuahua. It was one of the incidents that sparked the Mexican Expedition, a United States Army attempt to capture or kill Villa.
Based in Tucson, Arizona, the company grew to conduct mining, smelting, and refining of primarily copper. Open-pit mining is primarily utilized as the most efficient method of recovering this metal; the company's three largest such works are the Mission, Silver Bell, and the Ray mines in Arizona. The company had also operated in silver mining in Idaho. Its mines produce of copper a year.
ASARCO conducts solvent extraction and electrowinning at the Ray and Silver Bell mines in Pima County, Arizona, and Pinal County, Arizona, and operates a smelter in Hayden, Arizona. It also had a smelting plant in El Paso, Texas, operations of which were suspended.
In 1975 it officially changed its name to ASARCO Incorporated. In 1999 it was acquired by Grupo México, which had begun as ASARCO's 49%-owned Mexican subsidiary in 1965.
On August 9, 2005, the company filed for Chapter 11 bankruptcy in Corpus Christi, Texas under then-president Daniel Tellechea.
As of 2019, ASARCO operates two primary locations in the United States, a mining and smelting complex in Arizona and a copper refinery in Amarillo, Texas.
Pollution and environmental issues
ASARCO has been found responsible for environmental pollution at 20 Superfund sites across the U.S. by the Environmental Protection Agency. Among those sites are:
American Smelting and Refining Co., located in Omaha, Nebraska. Plant disassembled, remediation completed, and site reused.
Interstate Lead Company, or ILCO, labeled EPA Site ALD041906173, and located in Leeds, Jefferson County, Alabama
Argo Smelter, Omaha & Grant Smelter, labeled EPA Site COD002259588, and located at Vasquez Boulevard and I-70 in Denver, Colorado
"Smeltertown", El Paso County, Texas, where the copper plant's furnaces were illegally used to dispose of hazardous waste. The plant has since been dismantled.
California Gulch mine and river systems in Leadville, Colorado;
Summitville Consolidated Mining Corp., Inc. (SCMCI), now bankrupt, EPA Site COD983778432, in Del Norte, Rio Grande County, Colorado;
ASARCO Globe Plant, EPA Site COD007063530, Globeville, near South Platte River, Denver and Adams County, Colorado;
Bunker Hill Mining and Metallurgical, Coeur d'Alene River Basin, Idaho;
Kin-Buc Landfill in New Jersey;
Tar Creek (Ottawa County) lead and zinc operations and surrounding residences in Oklahoma;
Commencement Bay, Near Shore/Tide Flats smelter, groundwater, and residences in Tacoma and Ruston, Washington.
Everett Smelter, Everett, Washington.
Murray, Utah lead smelter operation, since reclaimed as part of EPA Superfund program and now the location of the Intermountain Medical Center.
Litigation history
After the Colorado Department of Public Health and Environment sued ASARCO for damages to natural resources in 1983, the EPA placed the ASARCO Globe Plant on its National Priorities List of Superfund sites, with ASARCO to pay for the site's cleanup.
In 1972 ASARCO's downtown Omaha plant in Nebraska was found to be releasing high amounts of lead into the air and ground surrounding the plant. In 1995 ASARCO submitted a demolition and site cleanup plan to the Nebraska Department of Environmental Quality for their impact on the local residential area. Fined $3.6 million in 1996 for discharging lead and other pollutants into the Missouri River, ASARCO closed its Omaha plant in July 1997. After extensive site cleanup, the land was turned over to the City of Omaha as a park. All of East Omaha, comprising more than 8,000 acres (32 km2), was declared a Superfund site. As of 2003, 290 acres (1.2 km2) had been cleaned.
In 1991 the Coeur d'Alene Tribe filed suit under CERCLA against Hecla Mining Company, ASARCO and other defendants for damages and cleanup costs downstream of what has been designated as the Bunker Hill Mine and Smelting Complex Superfund site. Contamination had affected Lake Coeur d'Alene and the Saint Joe River, as well as related waters and lands, and cleanup had been under way since the early 1980s. In 1996 the United States joined the suit. In 2008 after emerging from bankruptcy, ASARCO LLC settled for $452 million for contributions to this site. This was part of a nearly $2 billion settlement (see below) with the US for a total of 26 sites.
In 2007, the Environmental Protection Agency released the results of soil and air tests in Hayden, Arizona, taken adjacent to the ASARCO Hayden Smelter. The results showed abnormally high amounts of pollutants that violate prescribed health standards. Arsenic, lead and copper were among the most egregious pollutants found in Hayden. As a consequence of the contamination, the EPA proposed to add Hayden, Arizona, to the list of Federal Superfund sites. This action would provide funding to clean up the contamination. ASARCO fought the action, supported by Democratic Gov. Janet Napolitano, who said: "I am asking that the EPA delay final decision on listing until March 31, 2008. This would provide ample time for the EPA, in close coordination with ADEQ, to enter an agreement with ASARCO to conduct remedial actions..." After emerging from Chapter 11 bankruptcy in 2008, ASARCO made a settlement with the government of $1.79 billion for contamination at various sites; the funds were allotted to the Environmental Protection Agency (EPA) for cleanup at 26 sites around the country. A final settlement for $1.79 billion was made in 2009 for up to 80 sites, including one of the most notorious, the smelting plant at El Paso, Texas, for which cleanup was set to start in 2010.
Documentary
ASARCO's Tar Creek Superfund site was the subject of the film documentary Tar Creek (2009), made by Matt Myers. At one time, Tar Creek was considered to be the worst environmental problem on the EPA's list of more than 1200 sites.
See also
1913 El Paso smelters' strike
List of Superfund sites in Alabama
List of Superfund sites in Colorado
List of Superfund sites in Illinois
List of Superfund sites in Oklahoma
Picher, Oklahoma
Francis H. Brownell
References
External links
Official website
profile in International Directory of Company Histories, Vol. 4. St. James Press, 1991 (via fundinguniverse.com)
Grupo México history
A Toxic Century: Mining Giant ASARCO Must Clean Up Mess : NPR 2010
Link to CNN transcript of the ASARCO El Paso Video 2008
Marilyn Berlin Snell, "Going for Broke" Sierra Club Magazine, May/June 2006.
Michael E. Ketterer, The ASARCO El Paso Smelter: A Source of Local Contamination of Soils in El Paso (Texas), Ciudad Juarez (Chihuahua, Mexico), and Anapra (New Mexico), 2006.
Jake Bernstein, Clean up or Cover Up? "The Texas Observer", 2004.
Corpus Christi's Refinery row
Describes criminal conviction of an ASARCO supplier
ASARCO Taylor Springs Illinois , Historical Society of Montgomery County Illinois
Companies based in Tucson, Arizona
Companies that filed for Chapter 11 bankruptcy in 2005
Former components of the Dow Jones Industrial Average
Metal companies of the United States
Copper mining companies of the United States
Smelting
Superfund sites in Colorado
Mines in Arizona
Grupo México
1888 establishments in Arizona Territory
American companies established in 1888 | ASARCO | [
"Chemistry"
] | 1,999 | [
"Metallurgical processes",
"Smelting"
] |
541,158 | https://en.wikipedia.org/wiki/Java%20Management%20Extensions | Java Management Extensions (JMX) is a Java technology that supplies tools for managing and monitoring applications, system objects, devices (such as printers) and service-oriented networks. Those resources are represented by objects called MBeans (for Managed Bean). In the API, classes can be dynamically loaded and instantiated.
Managing and monitoring applications can be designed and developed using the Java Dynamic Management Kit.
JSR 003 of the Java Community Process defined JMX 1.0, 1.1 and 1.2. JMX 2.0 was being developed under JSR 255, but this JSR was subsequently withdrawn. The JMX Remote API 1.0 for remote management and monitoring is specified by JSR 160. An extension of the JMX Remote API for Web Services was being developed under JSR 262.
Adopted early on by the J2EE community, JMX has been a part of J2SE since version 5.0. "JMX" is a trademark of Oracle Corporation.
Architecture
JMX uses a three-level architecture:
The Probe level – also called the Instrumentation level – contains the probes (called MBeans) instrumenting the resources
The Agent level, or MBeanServer – the core of JMX. It acts as an intermediary between the MBean and the applications.
The Remote Management level enables remote applications to access the MBeanServer through connectors and adaptors. A connector provides full remote access to the MBeanServer API using various communication (RMI, IIOP, JMS, WS-* …), while an adaptor adapts the API to another protocol (SNMP, …) or to Web-based GUI (HTML/HTTP, WML/HTTP, …).
Applications can be generic consoles (such as JConsole and MC4J) or domain-specific (monitoring) applications. External applications can interact with the MBeans through the use of JMX connectors and protocol adapters. Connectors serve to connect an agent with a remote JMX-enabled management application. This form of communication involves a connector in the JMX agent and a connector client in the management application.
The Java Platform, Standard Edition ships with one connector, the RMI connector, which uses the Java Remote Method Protocol that is part of the Java remote method invocation API. This is the connector which most management applications use.
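For example, a management application might connect to a remote agent through this connector roughly as follows (the service URL, host, and port are placeholders; error handling is omitted):
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
public class JmxClientSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder address of an agent exposing the RMI connector.
        JMXServiceURL url =
            new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url, null);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // Read an attribute of a platform MXBean as a simple demonstration.
            Object heap = connection.getAttribute(
                    new ObjectName("java.lang:type=Memory"), "HeapMemoryUsage");
            System.out.println("HeapMemoryUsage = " + heap);
        } finally {
            connector.close();
        }
    }
}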
Protocol adapters provide a management view of the JMX agent through a given protocol. Management applications that connect to a protocol adapter are usually specific to the given protocol.
Managed beans
A managed bean – sometimes simply referred to as an MBean – is a type of JavaBean, created with dependency injection. Managed Beans are particularly used in the Java Management Extensions technology – but with Java EE 6 the specification provides for a more detailed meaning of a managed bean.
The MBean represents a resource running in the Java virtual machine, such as an application or a Java EE technical service (transactional monitor, JDBC driver, etc.). They can be used for collecting statistics on concerns like performance, resources usage, or problems (pull); for getting and setting application configurations or properties (push/pull); and notifying events like faults or state changes (push).
Java EE 6 provides that a managed bean is a bean that is implemented by a Java class, which is called its bean class. A top-level Java class is a managed bean if it is defined to be a managed bean by any other Java EE technology specification (for example, the JavaServer Faces technology specification), or if it meets all of the following conditions:
It is not a non-static inner class.
It is a concrete class, or is annotated @Decorator.
It is not annotated with an EJB component-defining annotation or declared as an EJB bean class in ejb-jar.xml.
No special declaration, such as an annotation, is required to define a managed bean.
A MBean can notify the MBeanServer of its internal changes (for the attributes) by implementing the javax.management.NotificationEmitter. The application interested in the MBean's changes registers a listener (javax.management.NotificationListener) to the MBeanServer. Note that JMX does not guarantee that the listeners will receive all notifications.
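As an illustration, an application interested in such notifications might register a listener along the following lines (the object name com.example:type=Cache is a placeholder for an MBean that implements NotificationEmitter):
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.Notification;
import javax.management.NotificationListener;
import javax.management.ObjectName;
public class ListenerSketch {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("com.example:type=Cache"); // placeholder MBean name
        NotificationListener listener = new NotificationListener() {
            public void handleNotification(Notification notification, Object handback) {
                // React to the MBean's state change; here we simply log the notification type.
                System.out.println("Received notification: " + notification.getType());
            }
        };
        // A null filter and null handback mean: deliver every notification emitted by the MBean.
        mbs.addNotificationListener(name, listener, null, null);
    }
}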
Types
There are two basic types of MBean:
Standard MBeans implement a business interface containing setters and getters for the attributes and the operations (i.e., methods); a minimal example is given after this list.
Dynamic MBeans implement the javax.management.DynamicMBean interface that provides a way to list the attributes and operations, and to get and set the attribute values.
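A minimal Standard MBean might consist of a management interface and an implementing class, plus a registration call (the names CacheMBean, Cache, and com.example:type=Cache are illustrative placeholders, not part of the JMX API):
// CacheMBean.java – the management interface; getters and setters define attributes, other methods define operations.
public interface CacheMBean {
    int getSize();
    void clear();
}
// Cache.java – the managed resource; the naming convention (implementation class name plus the suffix "MBean" on its interface) is what makes this a Standard MBean.
public class Cache implements CacheMBean {
    private int size = 0;
    public int getSize() { return size; }
    public void clear()  { size = 0; }
}
// Registration with the platform MBean server, e.g. in application start-up code:
//     MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
//     mbs.registerMBean(new Cache(), new ObjectName("com.example:type=Cache"));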
Additional types are Open MBeans, Model MBeans and Monitor MBeans. Open MBeans are dynamic MBeans that rely on the basic data types. They are self-explanatory and more user-friendly. Model MBeans are dynamic MBeans that can be configured during runtime. A generic MBean class is also provided for dynamically configuring the resources during program runtime.
A MXBean (Platform MBean) is a special type of MBean that reifies Java virtual machine subsystems such as garbage collection, JIT compilation, memory pools, multi-threading, etc.
A MLet (Management applet) is a utility MBean to load, instantiate and register MBeans in a MBeanServer from an XML description. The format of the XML descriptor is:
<MLET CODE = ''class'' | OBJECT = ''serfile''
ARCHIVE = ''archiveList''
[CODEBASE = ''codebaseURL'']
[NAME = ''objectName'']
[VERSION = ''version'']
>
[arglist]
</MLET>
Support
JMX is supported at various levels by different vendors:
JMX is supported by Java application servers such as OpenCloud Rhino Application Server, JBoss, JOnAS, WebSphere Application Server, WebLogic, SAP NetWeaver Application Server, Oracle Application Server 10g and Sun Java System Application Server.
JMX is supported by the UnboundID Directory Server, Directory Proxy Server, and Synchronization Server.
Systems management tools that support the protocol include Empirix OneSight, GroundWork Monitor, Hyperic, HP OpenView, IBM Director, ITRS Geneos, Nimsoft NMS, OpenNMS, Zabbix, Zenoss Core, and Zyrion, SolarWinds, Uptime Infrastructure Monitor, and LogicMonitor.
JMX is also supported by servlet containers such as Apache Tomcat and Jetty (web server).
MX4J is Open Source JMX for Enterprise Computing.
jManage is an open source enterprise-grade JMX Console with Web and command-line interfaces.
MC4J is an open source visual console for connecting to servers supporting JMX
snmpAdaptor4j is an open source providing a simple access to MBeans via the SNMP protocol.
jvmtop is a lightweight open source JMX monitoring tool for the command-line
Prometheus can ingest JMX data via the JMX exporter which exposes metrics in Prometheus format.
New Relic's on-host infrastructure agent collects JMX data which is shown in various charts in its observability platform's dashboard.
Jolokia is a j2ee application which exposes JMX over HTTP.
See also
Jini
Network management
Simple Network Management Protocol
References
Further reading
Articles
"Enabling Component Architectures with JMX" by Marc Fleury and Juha Lindfors
"Introducing A New Vendor-Neutral J2EE Management API" by Andreas Schaefer
"Java in the management sphere" by Max Goff 1999
Oct 20
Nov 20
Dec 29
JMX/JBoss – The microkernel design
"Manage your JMX-enabled applications with jManage 1.0" by Rakesh Kalra Jan 16, 2006
"Managing J2EE Systems with JMX and JUnit " by Lucas McGregor
Sun Java Overview of Monitoring and Management
The Java EE 6 Tutorial: About managed beans
Books
Benjamin G Sullins, Mark B Whipple : JMX in Action: You will also get your first JMX application up and running, Manning Publications Co. 2002,
J. Steven Perry: Java Management Extensions, O'Reilly,
Jeff Hanson: Connecting JMX Clients and Servers: Understanding the Java Management Extensions, APress L. P.,
Marc Fleury, Juha Lindfors: JMX: Managing J2EE with Java Management Extensions, Sams Publishing,
External links
JMX 1.4 (JMX 1.4, part of Java 6)
JMX at JBoss.com
JMX on www.oracle.com
JSR 255 (JMX 2.0)
JSR 3 (JMX 1.0, 1.1, and 1.2)
Java APIs
Management Extensions
Management Extensions
Network management | Java Management Extensions | [
"Engineering"
] | 1,861 | [
"Computer networks engineering",
"Network management"
] |
22,732,293 | https://en.wikipedia.org/wiki/Internal%20model%20%28motor%20control%29 | In the subject area of control theory, an internal model is a process that simulates the response of the system in order to estimate the outcome of a system disturbance. The internal model principle was first articulated in 1976 by B. A. Francis and W. M. Wonham as an explicit formulation of the Conant and Ashby good regulator theorem. It stands in contrast to classical control, in that the classical feedback loop fails to explicitly model the controlled system (although the classical controller may contain an implicit model).
The internal model theory of motor control argues that the motor system is controlled by the constant interactions of the “plant” and the “controller.” The plant is the body part being controlled, while the internal model itself is considered part of the controller. Information from the controller, such as information from the central nervous system (CNS), feedback information, and the efference copy, is sent to the plant which moves accordingly.
Internal models can be controlled through either feed-forward or feedback control. Feed-forward control computes its input into a system using only the current state and its model of the system. It does not use feedback, so it cannot correct for errors in its control. In feedback control, some of the output of the system can be fed back into the system's input, and the system is then able to make adjustments or compensate for errors from its desired output. Two primary types of internal models have been proposed: forward models and inverse models. In simulations, models can be combined to solve more complex movement tasks.
Forward models
In their simplest form, forward models take the input of a motor command to the “plant” and output a predicted position of the body.
The motor command input to the forward model can be an efference copy, as seen in Figure 1. The output from that forward model, the predicted position of the body, is then compared with the actual position of the body. The actual and predicted position of the body may differ due to noise introduced into the system by either internal (e.g. body sensors are not perfect, sensory noise) or external (e.g. unpredictable forces from outside the body) sources. If the actual and predicted body positions differ, the difference can be fed back as an input into the entire system again so that an adjusted set of motor commands can be formed to create a more accurate movement.
Inverse models
Inverse models use the desired and actual position of the body as inputs to estimate the necessary motor commands which would transform the current position into the desired one. For example, in an arm reaching task, the desired position (or a trajectory of consecutive positions) of the arm is input into the postulated inverse model, and the inverse model generates the motor commands needed to control the arm and bring it into this desired configuration (Figure 2). Inverse internal models are also in close connection with the uncontrolled manifold hypothesis (UCM), see also here.
Combined forward and inverse models
Theoretical work has shown that in models of motor control, when inverse models are used in combination with a forward model, the efference copy of the motor command output from the inverse model can be used as an input to a forward model for further predictions. For example, if, in addition to reaching with the arm, the hand must be controlled to grab an object, an efference copy of the arm motor command can be input into a forward model to estimate the arm's predicted trajectory. With this information, the controller can then generate the appropriate motor command telling the hand to grab the object. It has been proposed that if they exist, this combination of inverse and forward models would allow the CNS to take a desired action (reach with the arm), accurately control the reach and then accurately control the hand to grip an object.
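The interplay of an inverse model (desired state to motor command) and a forward model (efference copy to predicted state) can be sketched in a toy simulation. The following Python sketch is purely illustrative: the one-dimensional "plant", its gains and the noise level are invented for the example and do not come from the motor-control literature.

import random

PLANT_GAIN = 0.9   # true response of the plant, unknown to the controller
MODEL_GAIN = 1.0   # the controller's imperfect internal estimate of the plant

def inverse_model(desired, current):
    # Estimate the motor command needed to move from the current to the desired position.
    return (desired - current) / MODEL_GAIN

def forward_model(position, command):
    # Predict the next position from the efference copy of the motor command.
    return position + MODEL_GAIN * command

def plant(position, command):
    # The real body part: it responds with a different gain plus sensory noise.
    return position + PLANT_GAIN * command + random.gauss(0.0, 0.01)

position, target = 0.0, 1.0
for step in range(5):
    command = inverse_model(target, position)      # inverse model output
    predicted = forward_model(position, command)   # prediction from the efference copy
    position = plant(position, command)            # actual movement
    error = position - predicted                   # prediction error fed back as input
    print(f"step {step}: position={position:.3f}, prediction error={error:+.3f}")

In this sketch the repeated corrections drive the plant toward the target, and the prediction error shrinks accordingly, mirroring the feedback loop described above.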
Adaptive Control theory
With the assumption that new models can be acquired and pre-existing models can be updated, the efference copy is important for the adaptive control of a movement task. Throughout the duration of a motor task, an efference copy is fed into a forward model known as a dynamics predictor whose output allows prediction of the motor output. When applying adaptive control theory techniques to motor control, efference copy is used in indirect control schemes as the input to the reference model.
Scientists
A wide range of scientists contribute to progress on the internal model hypothesis. Michael I. Jordan, Emanuel Todorov and
Daniel Wolpert contributed significantly to the mathematical formalization. Sandro Mussa-Ivaldi, Mitsuo Kawato, Claude Ghez, Reza Shadmehr, Randy Flanagan and Konrad Kording contributed with numerous behavioral experiments. The DIVA model of speech production developed by Frank H. Guenther and colleagues uses combined forward and inverse models to produce auditory trajectories with simulated speech articulators. Two interesting inverse internal models for the control of speech production were developed by Iaroslav Blagouchine & Eric Moreau. Both models combine the optimum principles and the equilibrium-point hypothesis (motor commands λ are taken as coordinates of the internal space). The input motor command λ is found by minimizing the length of the path traveled in the internal space, either under the acoustical constraint (the first model), or under the both acoustical and mechanical constraints (the second model). The acoustical constraint is related to the quality of the produced speech (measured in terms of formants), while the mechanical one is related to the stiffness of the tongue's body. The first model, in which the stiffness remains uncontrolled, is in agreement with the standard UCM hypothesis. In contrast, the second optimum internal model, in which the stiffness is prescribed, displays the good variability of speech (at least, in the reasonable range of stiffness) and is in agreement with the more recent versions of the uncontrolled manifold hypothesis (UCM). There is also a rich clinical literature on internal models including work from John Krakauer, Pietro Mazzoni, Maurice A. Smith, Kurt Thoroughman, Joern Diedrichsen, and Amy Bastian.
See also
Repetitive control
Efference copy
References
Motor control
Neuroscience
Control theory
Management cybernetics | Internal model (motor control) | [
"Mathematics",
"Biology"
] | 1,268 | [
"Behavior",
"Neuroscience",
"Applied mathematics",
"Motor control",
"Control theory",
"Dynamical systems"
] |
22,732,649 | https://en.wikipedia.org/wiki/Pisces%20V | Pisces V is a type of crewed submersible ocean exploration device, powered by battery, and capable of operating to depths of about 2,000 m (6,500 ft), a depth that is optimum for use in the sea waters around the Hawaiian Islands. It is used by scientists to explore the deep sea around the underwater banks in the main Hawaiian Islands, as well as the underwater features and seamounts in the Northwestern Hawaiian Islands, specifically around Kamaʻehuakanaloa Seamount (formerly Loihi).
In 1973, Pisces V took part in the rescue of Roger Mallinson and Roger Chapman, who were trapped on the seabed in Pisces V's sister submersible Pisces III. In August 2002, Pisces V and her sister Pisces IV discovered a World War II Japanese midget submarine outside of Pearl Harbor which had been sunk by the destroyer USS Ward in the first American shots fired in World War II. In 2011, marine scientists from HURL celebrated the 1,000 dives of Pisces V and Pisces IV.
Uses
The advantage of having two submersibles is that it allows preparation for an emergency. While one of the submersibles is conducting its dive, the other remains at readiness should there be an emergency, ready to be loaded onto a ship and hurried to the site of the problem. Such an emergency could include the submersible becoming tangled in fishing nets or trapped in rocks or debris on the ocean floor. In such cases, the second submersible heads to the rescue. There are also research experiments where it is advantageous to use the two vessels together.
In August 2002, Pisces V and her sister vessel Pisces IV discovered a Japanese midget submarine; sunk on December 7, 1941 by the destroyer USS Ward in the first American shots fired in World War II, the submarine was hit by a 4"/50 caliber gun shot and depth charged shortly before the attack on Pearl Harbor began. The submarine was found off the mouth of Pearl Harbor. This was the culmination of a 61-year search for the vessel and has been called "the most significant modern marine archeological find ever in the Pacific, second only to the finding of Titanic in the Atlantic". In 2003, Pisces V visited the Japanese midget submarine it had found in Pearl Harbor the year before. The U.S. State Department worked in conjunction with the Japanese Foreign Ministry to determine Japanese wishes regarding the fate of the midget submarine. The submersibles are used by HURL as teaching devices. In 2008, two members of the Tampa Bay Chapter of SCUBAnauts were invited to team with HURL and to visit the historic wreck of the Japanese submarine. One SCUBAnaut said as he stepped on Pisces V that "it looked and felt as if I were in a space shuttle preparing for lift-off".
A mock-up of the control panel of Pisces V can be visited by the public at the Mokupāpapa Discovery Center in Hilo, Hawaii.
On March 5, 2009, scientists discovered seven new species of bamboo coral, six of which may be of a new genus, an extraordinary finding in a genus so broad. They were able to find these specimens through the use of Pisces V which allowed them to reach depths beyond those attained by scuba divers. They also discovered a giant sponge approximately three feet tall and three feet wide that scientists named the "cauldron sponge".
Notes
Further reading
External links
Submarines of Canada
Pisces-class deep submergence vehicles
Ships built in North Vancouver
1973 ships
Submarines of the United States
Hydrology
Physical geography | Pisces V | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 738 | [
"Hydrology",
"Environmental engineering"
] |
22,736,152 | https://en.wikipedia.org/wiki/Fermat%E2%80%93Catalan%20conjecture | In number theory, the Fermat–Catalan conjecture is a generalization of Fermat's Last Theorem and of Catalan's conjecture. The conjecture states that the equation
a^m + b^n = c^k  (1)
has only finitely many solutions (a, b, c, m, n, k) with distinct triplets of values (a^m, b^n, c^k) where a, b, c are positive coprime integers and m, n, k are positive integers satisfying
1/m + 1/n + 1/k < 1.  (2)
The inequality on m, n, and k is a necessary part of the conjecture. Without the inequality there would be infinitely many solutions, for instance with k = 1 (for any a, b, m, and n and with c = a^m + b^n) or with m, n, and k all equal to two (for the infinitely many known Pythagorean triples).
Known solutions
As of 2015 the following ten solutions to equation (1) which meet the criteria of equation (2) are known:
1^m + 2^3 = 3^2 (for m > 6 to satisfy Eq. 2)
The first of these (1^m + 2^3 = 3^2) is the only solution where one of a, b or c is 1, according to the Catalan conjecture, proven in 2002 by Preda Mihăilescu. While this case leads to infinitely many solutions of (1) (since one can pick any m for m > 6), these solutions only give a single triplet of values (a^m, b^n, c^k).
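A candidate solution can be checked mechanically against both conditions. The sketch below is only an illustration of equations (1) and (2) — the function name is ad hoc — using the known solution 2^5 + 7^2 = 3^4 and, as a counterexample, a Pythagorean triple that fails the exponent inequality.

import math
from fractions import Fraction

def satisfies_fermat_catalan(a, m, b, n, c, k):
    # Eq. (1): a^m + b^n = c^k with a, b, c positive and coprime.
    coprime = math.gcd(math.gcd(a, b), c) == 1
    equation_holds = a**m + b**n == c**k
    # Eq. (2): 1/m + 1/n + 1/k < 1, checked with exact rational arithmetic.
    exponent_condition = Fraction(1, m) + Fraction(1, n) + Fraction(1, k) < 1
    return coprime and equation_holds and exponent_condition

print(satisfies_fermat_catalan(2, 5, 7, 2, 3, 4))   # True: 32 + 49 = 81 and 1/5 + 1/2 + 1/4 < 1
print(satisfies_fermat_catalan(3, 2, 4, 2, 5, 2))   # False: 9 + 16 = 25 but 1/2 + 1/2 + 1/2 >= 1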
Partial results
It is known by the Darmon–Granville theorem, which uses Faltings's theorem, that for any fixed choice of positive integers m, n and k satisfying (2), only finitely many coprime triples (a, b, c) solving (1) exist. However, the full Fermat–Catalan conjecture is stronger as it allows for the exponents m, n and k to vary.
The abc conjecture implies the Fermat–Catalan conjecture.
For a list of results for impossible combinations of exponents, see Beal conjecture#Partial results. Beal's conjecture is true if and only if all Fermat–Catalan solutions have m = 2, n = 2, or k = 2.
See also
Sums of powers, a list of related conjectures and theorems
References
External links
Perfect Powers: Pillai's works and their developments. Waldschmidt, M.
Conjectures
Unsolved problems in number theory
Diophantine equations
Abc conjecture | Fermat–Catalan conjecture | [
"Mathematics"
] | 502 | [
"Unsolved problems in mathematics",
"Mathematical objects",
"Equations",
"Unsolved problems in number theory",
"Diophantine equations",
"Conjectures",
"Abc conjecture",
"Mathematical problems",
"Number theory"
] |
22,738,815 | https://en.wikipedia.org/wiki/Conductivity%20%28electrolytic%29 | Conductivity or specific conductance of an electrolyte solution is a measure of its ability to conduct electricity. The SI unit of conductivity is siemens per meter (S/m).
Conductivity measurements are used routinely in many industrial and environmental applications as a fast, inexpensive and reliable way of measuring the ionic content in a solution. For example, the measurement of product conductivity is a typical way to monitor and continuously trend the performance of water purification systems.
In many cases, conductivity is linked directly to the total dissolved solids (TDS). High quality deionized water has a conductivity of about 0.055 μS/cm at 25 °C. This corresponds to a specific resistivity of about 18.2 MΩ·cm.
The preparation of salt solutions often takes place in unsealed beakers. In this case the conductivity of purified water often is 10 to 20 times higher. A discussion can be found below.
Typical drinking water is in the range of 200–800 μS/cm, while sea water is about 50 mS/cm (or 0.05 S/cm).
Conductivity is traditionally determined by connecting the electrolyte in a Wheatstone bridge. Dilute solutions follow Kohlrausch's law of concentration dependence and additivity of ionic contributions. Lars Onsager gave a theoretical explanation of Kohlrausch's law by extending Debye–Hückel theory.
Units
The SI unit of conductivity is S/m and, unless otherwise qualified, it refers to 25 °C. More generally encountered is the traditional unit of μS/cm.
The commonly used standard cell has a width of 1 cm, and thus for very pure water in equilibrium with air would have a resistance of about 10^6 ohms, known as a megohm. Ultra-pure water could achieve 18 megohms or more. Thus in the past, megohm-cm was used, sometimes abbreviated to "megohm". Sometimes, conductivity is given in "microsiemens" (omitting the distance term in the unit). While this is an error, it can often be assumed to be equal to the traditional μS/cm. Often, by typographic limitations μS/cm is expressed as uS/cm.
The conversion of conductivity to the total dissolved solids depends on the chemical composition of the sample and can vary between 0.54 and 0.96. Typically, the conversion is done assuming that the solid is sodium chloride; 1 μS/cm is then equivalent to about 0.64 mg of NaCl per kg of water.
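As an illustration of the conversion just described, the following sketch multiplies a measured conductivity by a factor in the stated 0.54–0.96 range, defaulting to 0.64 for the assumption that the dissolved solid is NaCl; the function name and defaults are only illustrative.

def tds_from_conductivity(conductivity_us_cm, factor=0.64):
    # Estimate total dissolved solids in mg per kg of water (ppm) from conductivity in uS/cm.
    # The factor depends on the chemical composition and typically lies between 0.54 and 0.96;
    # 0.64 corresponds to assuming the dissolved solid is sodium chloride.
    if not 0.54 <= factor <= 0.96:
        raise ValueError("conversion factor outside the usual 0.54-0.96 range")
    return conductivity_us_cm * factor

print(tds_from_conductivity(500))   # typical drinking water: roughly 320 mg/kg as NaCl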
Molar conductivity has the SI unit S·m²·mol⁻¹. Older publications use the unit Ω⁻¹·cm²·mol⁻¹.
Measurement
The electrical conductivity of a solution of an electrolyte is measured by determining the resistance of the solution between two flat or cylindrical electrodes separated by a fixed distance. An alternating voltage is generally used in order to minimize water electrolysis. The resistance is measured by a conductivity meter. Typical frequencies used are in the range 1–3 kHz. The dependence on the frequency is usually small, but may become appreciable at very high frequencies, an effect known as the Debye–Falkenhagen effect.
A wide variety of instrumentation is commercially available. Most commonly, two types of sensors are used: electrode-based sensors and inductive sensors. Electrode sensors with a static design are suitable for low and moderate conductivities, and exist in various types, having either two or four electrodes, where the electrodes can be arranged oppositely, flat or in a cylinder. Electrode cells with a flexible design, where the distance between two oppositely arranged electrodes can be varied, offer high accuracy and can also be used for the measurement of highly conductive media. Inductive sensors are suitable for harsh chemical conditions but require larger sample volumes than electrode sensors. Conductivity sensors are typically calibrated with KCl solutions of known conductivity. Electrolytic conductivity is highly temperature-dependent, but many commercial systems offer automatic temperature correction.
Tables of reference conductivities are available for many common solutions.
Definitions
Resistance, R, is proportional to the distance, l, between the electrodes and is inversely proportional to the cross-sectional area of the sample, A (noted on the figure above). Writing ρ (rho) for the specific resistance, or resistivity, R = ρ·l/A.
In practice the conductivity cell is calibrated by using solutions of known specific resistance, ρ*, so the individual quantities l and A need not be known precisely, but only their ratio. If the resistance of the calibration solution is R*, a cell-constant, C, defined as the ratio of R* and ρ* (C = R*/ρ*), is derived.
The specific conductance (conductivity), κ (kappa), is the reciprocal of the specific resistance: κ = 1/ρ = C/R.
Conductivity is also temperature-dependent.
Sometimes the conductance (the reciprocal of the resistance) is denoted as G = 1/R. Then the specific conductance, κ (kappa), is κ = C·G.
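The calibration described above reduces to simple arithmetic once the cell constant is known. The sketch below only illustrates the relations R = ρ·l/A, C = R*/ρ* and κ = C/R = C·G; the calibration value of roughly 709 Ω·cm for 0.01 mol/L KCl at 25 °C is a commonly tabulated figure, and all names are arbitrary.

def cell_constant(calibration_resistance_ohm, calibration_resistivity_ohm_cm):
    # Cell constant C = l/A (in 1/cm), obtained from a calibration solution of known resistivity.
    return calibration_resistance_ohm / calibration_resistivity_ohm_cm

def specific_conductance(resistance_ohm, cell_constant_per_cm):
    # kappa = C / R = C * G, returned in S/cm.
    return cell_constant_per_cm / resistance_ohm

C = cell_constant(calibration_resistance_ohm=705.0,
                  calibration_resistivity_ohm_cm=709.0)   # C close to 1 cm^-1
print(specific_conductance(resistance_ohm=1000.0, cell_constant_per_cm=C))   # about 1 mS/cm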
Theory
The specific conductance of a solution containing one electrolyte depends on the concentration of the electrolyte. Therefore, it is convenient to divide the specific conductance by concentration. This quotient, termed molar conductivity, is denoted by Λm: Λm = κ/c.
Strong electrolytes
Strong electrolytes are hypothesized to dissociate completely in solution. The conductivity of a solution of a strong electrolyte at low concentration follows Kohlrausch's law
Λm = Λ°m − K·√c,
where Λ°m is known as the limiting molar conductivity, K is an empirical constant and c is the electrolyte concentration. (Limiting here means "at the limit of the infinite dilution".) In effect, the observed conductivity of a strong electrolyte becomes directly proportional to concentration, at sufficiently low concentrations, i.e. when K·√c ≪ Λ°m.
As the concentration is increased however, the conductivity no longer rises in proportion.
Moreover, Kohlrausch also found that the limiting conductivity of an electrolyte is the sum of independent contributions from its constituent cation and anion:
Λ°m = ν₊·λ₊ + ν₋·λ₋,
where λ₊ and λ₋ are the limiting molar conductivities of the individual ions, and ν₊ and ν₋ are the numbers of cations and anions per formula unit of the electrolyte.
The following table gives values for the limiting molar conductivities for some selected ions.
An interpretation of these results was based on the theory of Debye and Hückel, yielding the Debye–Hückel–Onsager equation:
Λm = Λ°m − (A + B·Λ°m)·√c,
where A and B are constants that depend only on known quantities such as temperature, the charges on the ions and the dielectric constant and viscosity of the solvent. As the name suggests, this is an extension of the Debye–Hückel theory, due to Onsager. It is very successful for solutions at low concentration.
Weak electrolytes
A weak electrolyte is one that is never fully dissociated (there are a mixture of ions and complete molecules in equilibrium). In this case there is no limit of dilution below which the relationship between conductivity and concentration becomes linear. Instead, the solution becomes ever more fully dissociated at weaker concentrations, and for low concentrations of "well behaved" weak electrolytes, the degree of dissociation of the weak electrolyte becomes proportional to the inverse square root of the concentration.
Typical weak electrolytes are weak acids and weak bases. The concentration of ions in a solution of a weak electrolyte is less than the concentration of the electrolyte itself. For acids and bases the concentrations can be calculated when the value or values of the acid dissociation constant are known.
For a monoprotic acid, HA, obeying the inverse square root law, with a dissociation constant Ka, an explicit expression for the conductivity as a function of concentration, c, known as Ostwald's dilution law, can be obtained.
Various solvents exhibit the same dissociation if the ratio of relative permittivities equals the ratio of the cube roots of the concentrations of the electrolytes (Walden's rule).
Higher concentrations
Both Kohlrausch's law and the Debye–Hückel–Onsager equation break down as the concentration of the electrolyte increases above a certain value. The reason for this is that as concentration increases the average distance between cation and anion decreases, so that there are more interactions between close ions. Whether this constitutes ion association is a moot point. However, it has often been assumed that cation and anion interact to form an ion pair. So, an "ion-association" constant, K, can be derived for the association equilibrium between ions A+ and B−:
A+ + B− ⇌ A+B−, with K = [A+B−] / ([A+][B−]).
Davies describes the results of such calculations in great detail, but states that K should not necessarily be thought of as a true equilibrium constant; rather, the inclusion of an "ion-association" term is useful in extending the range of good agreement between theory and experimental conductivity data. Various attempts have been made to extend Onsager's treatment to more concentrated solutions.
The existence of a so-called conductance minimum in solvents having the relative permittivity under 60 has proved to be a controversial subject as regards interpretation. Fuoss and Kraus suggested that it is caused by the formation of ion triplets, and this suggestion has received some support recently.
Other developments on this topic have been done by Theodore Shedlovsky, E. Pitts, R. M. Fuoss, Fuoss and Shedlovsky, Fuoss and Onsager.
Mixed solvents systems
The limiting equivalent conductivity of solutions based on mixed solvents, such as water–alcohol mixtures, has minima depending on the nature of the alcohol. For methanol the minimum is at 15 molar % water, and for ethanol at 6 molar % water.
Conductivity versus temperature
Generally the conductivity of a solution increases with temperature, as the mobility of the ions increases. For comparison purposes reference values are reported at an agreed temperature, usually 298 K (≈ 25 °C or 77 °F), although occasionally 20 °C (68 °F) is used. So called 'compensated' measurements are made at a convenient temperature but the value reported is a calculated value of the expected value of conductivity of the solution, as if it had been measured at the reference temperature. Basic compensation is normally done by assuming a linear increase of conductivity versus temperature of typically 2% per kelvin. This value is broadly applicable for most salts at room temperature. Determination of the precise temperature coefficient for a specific solution is simple and instruments are typically capable of applying the derived coefficient (i.e. other than 2%).
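A basic linear compensation of the kind described above can be written in a few lines. The sketch assumes the common default coefficient of 2 % per kelvin and a 25 °C reference temperature; in a real instrument both are adjustable parameters, and the function name is illustrative.

def compensate_conductivity(measured_value, measured_temp_c,
                            reference_temp_c=25.0, alpha_per_kelvin=0.02):
    # Return the conductivity expected at the reference temperature, assuming a linear
    # temperature dependence with coefficient alpha (default 2 % per kelvin).
    return measured_value / (1.0 + alpha_per_kelvin * (measured_temp_c - reference_temp_c))

print(compensate_conductivity(1500.0, 30.0))   # a reading of 1500 uS/cm at 30 degC is about 1364 uS/cm at 25 degC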
Measurements of conductivity versus temperature can be used to determine the activation energy Ea, using the Arrhenius equation:
κ = A·exp(−Ea / (R·T)),
where A is the exponential prefactor, R the gas constant, and T the absolute temperature in kelvins.
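Given measurements at two temperatures, the activation energy follows from the Arrhenius form by taking logarithms. The sketch below only illustrates that rearrangement; the sample values are invented.

import math

R_GAS = 8.314  # gas constant, J/(mol*K)

def activation_energy(kappa1, temp1_k, kappa2, temp2_k):
    # Activation energy in J/mol, assuming kappa = A * exp(-Ea / (R * T)).
    return R_GAS * math.log(kappa2 / kappa1) / (1.0 / temp1_k - 1.0 / temp2_k)

print(activation_energy(1.0, 298.15, 1.5, 318.15))   # roughly 16 kJ/mol for this invented pair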
Solvent isotopic effect
The change in conductivity due to the isotope effect for deuterated electrolytes is sizable.
Applications
Despite the difficulty of theoretical interpretation, measured conductivity is a good indicator of the presence or absence of conductive ions in solution, and measurements are used extensively in many industries. For example, conductivity measurements are used to monitor quality in public water supplies, in hospitals, in boiler water and industries that depend on water quality such as brewing. This type of measurement is not ion-specific; it can sometimes be used to determine the amount of total dissolved solids (TDS) if the composition of the solution and its conductivity behavior are known. Conductivity measurements made to determine water purity will not respond to non conductive contaminants (many organic compounds fall into this category), therefore additional purity tests may be required depending on application.
Applications of TDS measurements are not limited to industrial use; many people use TDS as an indicator of the purity of their drinking water. Additionally, aquarium enthusiasts are concerned with TDS, both for freshwater and salt water aquariums. Many fish and invertebrates require quite narrow parameters for dissolved solids. Especially for successful breeding of some invertebrates normally kept in freshwater aquariums—snails and shrimp primarily—brackish water with higher TDS, specifically higher salinity, water is required. While the adults of a given species may thrive in freshwater, this is not always true for the young and some species will not breed at all in non-brackish water.
Sometimes, conductivity measurements are linked with other methods to increase the sensitivity of detection of specific types of ions. For example, in the boiler water technology, the boiler blowdown is continuously monitored for "cation conductivity", which is the conductivity of the water after it has been passed through a cation exchange resin. This is a sensitive method of monitoring anion impurities in the boiler water in the presence of excess cations (those of the alkalizing agent usually used for water treatment). The sensitivity of this method relies on the high mobility of H+ in comparison with the mobility of other cations or anions. Beyond cation conductivity, there are analytical instruments designed to measure Degas conductivity, where conductivity is measured after dissolved carbon dioxide has been removed from the sample, either through reboiling or dynamic degassing.
Conductivity detectors are commonly used with ion chromatography.
Conductivity of purified water in electrochemical experiments
The electrical conductivity of purified distilled water in electrochemical laboratory settings at room temperature is often between 0.05 and 1 μS/cm. Environmental influences during the preparation of salt solutions, such as gas absorption due to storing the water in an unsealed beaker, may immediately increase the conductivity and lead to values between 0.5 and 1 μS/cm.
When distilled water is heated during the preparation of salt solutions, the conductivity increases even without adding salt. This is often not taken into account.
In a typical experiment under the fume hood in an unsealed beaker, the conductivity of purified water typically increases non-linearly from values below 1 μS/cm to values close to 3.5 μS/cm as the water is heated. This temperature dependence has to be taken into account, particularly in dilute salt solutions.
See also
Einstein relation (kinetic theory)
Born equation
Debye–Falkenhagen effect
Law of dilution
Ion transport number
Ionic atmosphere
Wien effect
Conductimetric titration - methods to determine the equivalence point
References
Further reading
Hans Falkenhagen, Theorie der Elektrolyte, S. Hirzel Verlag, Leipzig, 1971
Conductivity of concentrated solutions of electrolytes in methyl and ethyl alcohols
Concentrated solutions and ionic cloud model
H. L. Friedman, F. Franks, Aqueous Simple Electrolytes Solutions
Electrochemical concepts
Physical chemistry
Water quality indicators | Conductivity (electrolytic) | [
"Physics",
"Chemistry",
"Environmental_science"
] | 2,966 | [
"Applied and interdisciplinary physics",
"Water pollution",
"Electrochemical concepts",
"Electrochemistry",
"Water quality indicators",
"nan",
"Physical chemistry"
] |
22,739,594 | https://en.wikipedia.org/wiki/Cryogenic%20rocket%20engine | A cryogenic rocket engine is a rocket engine that uses a cryogenic fuel and oxidizer; that is, both its fuel and oxidizer are gases which have been liquefied and are stored at very low temperatures. These highly efficient engines were first flown on the US Atlas-Centaur and were one of the main factors of NASA's success in reaching the Moon by the Saturn V rocket.
Rocket engines burning cryogenic propellants remain in use today on high performance upper stages and boosters. Upper stages are numerous. Boosters include ESA's Ariane 5, JAXA's H-II, ISRO's GSLV, LVM3, United States Delta IV and Space Launch System. The United States, Russia, Japan, India, France and China are the only countries that have operational cryogenic rocket engines.
Cryogenic propellants
Rocket engines need high mass flow rates of both oxidizer and fuel to generate useful thrust. Oxygen, the simplest and most common oxidizer, is in the gas phase at standard temperature and pressure, as is hydrogen, the simplest fuel. While it is possible to store propellants as pressurized gases, this would require large, heavy tanks that would make achieving orbital spaceflight difficult if not impossible. On the other hand, if the propellants are cooled sufficiently, they exist in the liquid phase at higher density and lower pressure, simplifying tankage. These cryogenic temperatures vary depending on the propellant, with liquid oxygen existing below about −183 °C (90 K) and liquid hydrogen below about −253 °C (20 K). Since one or more of the propellants is in the liquid phase, all cryogenic rocket engines are by definition liquid-propellant rocket engines.
Various cryogenic fuel-oxidizer combinations have been tried, but the combination of liquid hydrogen (LH2) fuel and the liquid oxygen (LOX) oxidizer is one of the most widely used. Both components are easily and cheaply available, and when burned have one of the highest enthalpy releases in combustion, producing a specific impulse of up to 450 s at an effective exhaust velocity of about 4.4 km/s.
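The quoted specific impulse and effective exhaust velocity are related through the standard gravity constant, v_e = I_sp · g0; the snippet below simply carries out that conversion and is illustrative arithmetic rather than data for any particular engine.

G0 = 9.80665  # standard gravity in m/s^2

def exhaust_velocity(specific_impulse_s):
    # Effective exhaust velocity in m/s from a specific impulse given in seconds.
    return specific_impulse_s * G0

print(exhaust_velocity(450))   # about 4,400 m/s for a high-performance LOX/LH2 engine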
Components and combustion cycles
The major components of a cryogenic rocket engine are the combustion chamber, pyrotechnic initiator, fuel injector, fuel and oxidizer turbopumps, cryo valves, regulators, the fuel tanks, and rocket engine nozzle. In terms of feeding propellants to the combustion chamber, cryogenic rocket engines are almost exclusively pump-fed. Pump-fed engines work in a gas-generator cycle, a staged-combustion cycle, or an expander cycle. Gas-generator engines tend to be used on booster engines due to their lower efficiency, staged-combustion engines can fill both roles at the cost of greater complexity, and expander engines are exclusively used on upper stages due to their low thrust.
LOX+LH2 rocket engines by country
Currently, six countries have successfully developed and deployed cryogenic rocket engines:
Comparison of first stage cryogenic rocket engines
Comparison of upper stage cryogenic rocket engines
References
External links
USA's Cryogenic Rocket engine RL10B-2
Russian Cryogenic Rocket Engines
Rocket propulsion
Rocket engines
Rocket
Hydrogen technologies | Cryogenic rocket engine | [
"Physics",
"Technology"
] | 650 | [
"Rocket engines",
"Applied and interdisciplinary physics",
"Cryogenics",
"Engines"
] |
22,740,176 | https://en.wikipedia.org/wiki/Suction%20excavator | A suction excavator, or vacuum excavator, is a construction vehicle that removes heavy debris or other materials from a hole on land using vacuuming. Suction excavators are meant to be less destructive than regular excavators. The suction excavator uses suction fans for the airflow to suck up the material that is then transported into the holding tank.
Hydro excavation, a type of suction excavator using high-pressure water jets, is sometimes referred to as daylighting, as the underground utilities are exposed to daylight during the process. Some suction excavators also use an air filter.
History
Since 1993, RSP UK Suction Excavators Ltd. has produced suction structures mounted onto two, three, and four-axle vehicles, stationary suction units, and custom-made machines. Pacific Tek, also founded in 1993, has created the Angled Vacuum Excavator Tank (1997) and the 180° Swivel Mount Valve Operator (1999).
In 1998, the Mobile Tiefbau Saugsysteme produced another type of suction excavator.
The global market size for suction excavators was estimated to be valued at in 2020.
Uses
Suction excavators are sometimes used for removing earth around buried utilities and tree roots. It can suck up liquids, e.g., water from a hollow. Typically, vacuum excavation loosens the soil with a blunt-nosed high-pressure air lance or water source and vacuums away loosened material.
Depending on the machine used and soil conditions, a 12-inch-square, 5-foot-deep pothole can be completed in 20 minutes or less. Vacuum excavation is sometimes used in conjunction with conventional underground (one-call) locating services. Because of overlapping buried utility lines, locating devices often miss some of the buried utilities on a site and cannot completely or accurately mark a site.
According to New Mexico 811's (NM811) Aligning Change, Locating with Potholing, "One-call paint marks and flags are the first steps in making the process of locating underground utilities safer, the use of vacuum excavation technology adds an additional margin of safety."
See also
Dredging
Gully emptier
Street sweeper
Suction dredger, used for dredging underwater
Suction (medicine)
References
External links
Engineering vehicles
Excavations | Suction excavator | [
"Engineering"
] | 495 | [
"Engineering vehicles"
] |
22,740,629 | https://en.wikipedia.org/wiki/Floating%20sheerleg | A floating sheerleg (also: shearleg) is a floating water vessel with a crane built on shear legs. Unlike other types of crane vessel, it is not capable of rotating its crane independently of its hull.
There is a huge variety in sheerleg capacity. The smaller cranes start at around 50 tons in lifting capacity, with the largest being able to lift 20,000 tons. The bigger sheerlegs usually have their own propulsion system and have a large accommodation facility on board, while smaller units are floating pontoons that need to be towed to their workplace by tugboats.
Sheerlegs are commonly used for salvaging ships, assistance in shipbuilding, loading and unloading large cargo into ships, and bridge building. They have grown considerably larger over the last decades due to a marked increase in vessel, cargo, and component size (of ships, offshore oil rigs, and other large fabrications), resulting in heavier lifts both during construction and in salvage operations.
List of floating sheerlegs by lifting capacity
Notes
References
Cranes (machines)
Floating cranes | Floating sheerleg | [
"Engineering"
] | 214 | [
"Engineering vehicles",
"Cranes (machines)"
] |
22,740,726 | https://en.wikipedia.org/wiki/Rigidity%20%28electromagnetism%29 | In particle physics, rigidity is a measure of the resistance of a particle to deflection by magnetic fields, defined as the particle's momentum divided by its charge. For a fully ionised nucleus moving at relativistic speed, this is equivalent to the energy per atomic number. It is an important quantity in accelerator physics and astroparticle physics.
Definitions
Motion within a magnetic field
The concept of rigidity is derived from the motion of a charged particle within a magnetic field: two particles follow the same trajectory through a magnetic field if they have the same rigidity, even if they have different masses and charges. This situation arises in many particle accelerator and particle detector designs.
If a charged particle enters a uniform magnetic field, with the field orientated perpendicular to the initial velocity, the Lorentz force accelerates the particle in the direction which is perpendicular to both the velocity and magnetic field vectors. The resulting circular motion of the particle has a radius known as the gyroradius, r_g. The rigidity is then defined as:
R = B·r_g = p/q,
where B is the magnetic field. In this definition, the units of rigidity R are tesla-metres (N·s/C).
Energy per unit charge
Alternatively, an entirely equivalent definition of rigidity is
R = p·c / q,
where p is the momentum of the particle, c is the speed of light, and q is the electric charge of the particle. For a fully ionised atomic nucleus moving at relativistic speed, this simplifies to
R ≈ E/Z,
where E is the particle energy (expressed in electronvolts) and Z is the atomic number. In this case the units of rigidity R are volts. This definition is often utilised in the study of cosmic rays, where the mass and charge of each particle is generally unknown.
Conversions
If the particle momentum p is given in units of GeV/c, then the rigidity of a particle carrying one elementary charge, in tesla-metres, is
R [T·m] = 3.3356 × p [GeV/c],
where the factor 3.3356 (which has units of seconds per metre) is 10^9 (giga-) divided by the speed of light in m/s.
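Both definitions can be exercised numerically. The sketch below converts a momentum in GeV/c into rigidity in tesla-metres and, for a uniform field, into a gyroradius; the constants are the usual SI values and the function names are ad hoc.

SPEED_OF_LIGHT = 2.99792458e8              # m/s
TM_PER_GEV = 1.0e9 / SPEED_OF_LIGHT        # = 3.3356 T*m per GeV/c for one elementary charge

def rigidity_tesla_metres(momentum_gev_per_c, charge_in_e=1):
    # Rigidity R = p/q expressed in tesla-metres, for momentum in GeV/c.
    return TM_PER_GEV * momentum_gev_per_c / charge_in_e

def gyroradius_metres(momentum_gev_per_c, b_field_tesla, charge_in_e=1):
    # Gyroradius r_g = R / B for motion perpendicular to a uniform field B.
    return rigidity_tesla_metres(momentum_gev_per_c, charge_in_e) / b_field_tesla

print(gyroradius_metres(10.0, 1.0))   # a 10 GeV/c proton in a 1 T field bends with r_g of about 33 m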
References
Accelerator physics | Rigidity (electromagnetism) | [
"Physics"
] | 408 | [
"Accelerator physics",
"Applied and interdisciplinary physics",
"Experimental physics"
] |
31,380,244 | https://en.wikipedia.org/wiki/Wafer%20bond%20characterization | Wafer bond characterization refers to the process of evaluating the quality and strength of a bond between two semiconductor wafers. The wafer bond characterization is based on different methods and tests. Of high importance are successfully bonded wafers without flaws. Those flaws can be caused by void formation in the interface due to unevenness or impurities. The bond connection is characterized for wafer bond development or quality assessment of fabricated wafers and sensors.
Overview
Wafer bonds are commonly characterized by three important encapsulation parameters: bond strength, hermeticity of encapsulation and bonding induced stress.
The bond strength can be evaluated using double cantilever beam tests or chevron and micro-chevron tests. Other pull tests as well as burst, direct shear or bend tests also enable the determination of the bond strength. The packaging hermeticity is characterized using membrane, He-leak, or resonator/pressure tests.
Three additional possibilities to evaluate the bond connection are optical, electron and acoustic measurements and instrumentation. First, optical measurement techniques use an optical microscope, IR transmission microscopy and visual inspection. Second, electron measurements are commonly carried out with an electron microscope, e.g. scanning electron microscopy (SEM), high voltage transmittance electron microscopy (HVTEM) and high resolution scanning electron microscopy (HRSEM). Finally, typical acoustic measurement approaches are the scanning acoustic microscope (SAM), the scanning laser acoustic microscope (SLAM) and the C-mode scanning acoustic microscope (C-SAM).
The specimen preparation is sophisticated, and the mechanical and electronic properties are important for characterizing and comparing bonding technologies.
Infrared (IR) transmission microscopy
Infrared (IR) void imaging is possible if the analyzed materials are IR transparent, e.g. silicon. This method gives a rapid qualitative examination and is very suitable due to its sensitivity to the surface and to the buried interface. It obtains information on the chemical nature of the surface and the interface.
Infrared transmitted light is based on the fact that silicon is translucent at wavelength ≥ 1.2 μm. The equipment consists of an infrared lamp as light source and an infrared video system (compare to figure "Schematic infrared transmission microscopy setup").
The IR imaging system enables the analysis of the bond wave and additionally of micromechanical structures as well as deformities in the silicon. This procedure also allows multiple-layer bonds to be analyzed. The image contrast depends on the distance between the wafers. Usually, when monochromatic IR illumination is used, the center of the wafer pair is displayed brighter because of the close proximity of the surfaces there. Particles in the bond interface generate highly visible spots with differing contrast because of interference (wave propagation) fringes. Unbonded areas can be shown if the void opening (height) is ≥ 1 nm.
Fourier transform infrared (FT-IR) spectroscopy
Fourier transform infrared (FT-IR) spectroscopy is a non-destructive hermeticity characterization method. The absorption of radiation at wavelengths specific to particular gases enables the analysis of the enclosed atmosphere.
Ultrasonic microscopy
Ultrasonic microscopy uses high frequency sound waves to image bonded interfaces. Deionized water is used as the acoustic interconnect medium between the electromagnetic acoustic transducer and the wafer.
This method works with an ultrasonic transducer scanning the wafer bond. The reflected sound signal is used to create the image. The lateral resolution depends on the ultrasonic frequency, the acoustic beam diameter and the signal-to-noise ratio (contrast).
Unbonded areas, i.e. impurities or voids, do not reflect the ultrasonic beam like bonded areas, therefore a quality assessment of the bond is possible.
Double cantilever beam (DCB) test
The double cantilever beam test, also referred to as the crack opening or razor blade method, is a method to determine the strength of the bond. This is achieved by determining the surface energy of the bonded interface. A blade of a specific thickness is inserted between the bonded wafer pair, which locally splits the bond connection. The crack length equals the distance between the blade tip and the crack tip and is determined using IR transmitted light. The IR light is able to illuminate the crack when the materials used are transparent to IR or visible light. If the fracture surface toughness is very high, it is very difficult to insert the blade and the wafers risk breaking as the blade slides in.
The DCB test characterizes the time-dependent strength by mechanical fracture evaluation and is therefore well suited for lifetime predictions. A disadvantage of this method is that the results can be influenced in the time between the insertion of the blade and the taking of the IR image. In addition, the measurement inaccuracy increases for high surface fracture toughness, which results in a shorter crack length or in wafers breaking at blade insertion, and the derived surface energy depends on the fourth power of the measured crack length. The measured crack length determines the surface energy of a rectangular, beam-shaped specimen according to
γ = 3·E·t_w³·t_b² / (32·L⁴).
Thereby E is the Young's modulus, t_w the wafer thickness, t_b the blade thickness and L the measured crack length.
In literature different DCB models are mentioned, i.e. measurement approaches by Maszara, Gillis and Gilman, Srawley and Gross, Kanninen or Williams. The most commonly used approaches are by Maszara or Gillis and Gilman.
Maszara model
The Maszara model neglects shear stress as well as stress in the un-cleaved part for the obtained crack lengths. The compliance of a symmetric DCB specimen is described as follows:
C = 8·L³ / (E·w·t³).
The compliance is determined out of the crack length L, the width w and the beam thickness t. E defines the Young's modulus. The surface fracture energy is:
γ = 3·E·t³·d² / (32·L⁴),
with d as load-point displacement.
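Under the Maszara approximation the surface energy then follows directly from the measured crack length. The sketch below evaluates γ = 3·E·t_w³·t_b²/(32·L⁴) for two identical wafers; the numerical inputs are invented and serve mainly to show the fourth-power sensitivity to the crack length.

def maszara_surface_energy(youngs_modulus_pa, wafer_thickness_m, blade_thickness_m, crack_length_m):
    # Bond surface energy in J/m^2 from the crack-opening (razor blade) test,
    # Maszara approximation for two identical wafers: gamma = 3 E t_w^3 t_b^2 / (32 L^4).
    return (3.0 * youngs_modulus_pa * wafer_thickness_m**3 * blade_thickness_m**2
            / (32.0 * crack_length_m**4))

# Invented example: silicon wafers (E roughly 130 GPa), 725 um thick, 100 um blade, 20 mm crack length.
print(maszara_surface_energy(130e9, 725e-6, 100e-6, 20e-3))   # about 0.3 J/m^2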
Gillis and Gilman model
The Gillis and Gilman approach considers bend and shear forces in the beam and expresses the compliance as the sum of three terms.
The first term describes the strain energy in the cantilever due to bending. The second term is the contribution from elastic deformations in the un-cleaved specimen part, and the third term considers the shear deformation. The coefficients of the first two terms depend on the conditions at the fixed end of the cantilever; the shear coefficient depends on the cross-section geometry of the beam.
Chevron test
The chevron test is used to determine the fracture toughness of brittle construction materials. The fracture toughness is a basic material parameter for analyzing the bond strength.
The chevron test uses a special notch geometry for the specimen that is loaded with an increasing tensile force. The chevron notch geometry is commonly in the shape of a triangle with different bond patterns. At a specific tensile load the crack starts at the chevron tip and grows under continued loading until a critical length is reached. The crack growth then becomes unstable and accelerates, resulting in fracture of the specimen. The critical length depends only on the specimen geometry and the loading condition. The fracture toughness is commonly determined by measuring the recorded fracture load of the test. This improves the test quality and accuracy and decreases measurement scatter.
Two approaches, based on the energy release rate G or the stress intensity factor K, can be used for explaining the chevron test method. The fracture occurs when G or K reaches a critical value, describing the fracture toughness G_c or K_IC.
The advantage using chevron notch specimen is due to the formation of a specified crack of well-defined length. The disadvantage of the approach is that the gluing required for loading is time consuming and may induce data scatter due to misalignment.
Micro chevron (MC) test
The micro chevron (MC) test is a modification of the chevron test using a specimen of defined and reproducible size and shape. The test allows the determination of the critical energy release rate G_c and the critical fracture toughness K_IC. It is commonly used to characterize the wafer bond strength as well as the reliability. The reliability characterization is based on the fracture-mechanical evaluation of critical failure, analyzing the fracture toughness as well as the resistance against crack propagation.
The fracture toughness allows comparison of the strength properties independent of the particular specimen geometry. In addition, the bond strength of the bonded interface can be determined. The chevron specimen is designed out of bonded stripes in the shape of a triangle. The space at the tip of the chevron triangle is used as a lever arm for the applied force. This reduces the force required to initiate the crack. The dimensions of the micro chevron structures are in the range of several millimeters, usually with a chevron notch angle of 70°. This chevron pattern is fabricated using wet or reactive ion etching.
The MC test is applied with a special specimen stamp glued onto the non-bonded edge of the processed structures. The specimen is loaded in a tensile tester and the load is applied perpendicular to the bonded area. When the load reaches the maximum bearable value, a crack is initiated at the tip of the chevron notch.
By increasing the mechanical stress by means of a higher loading, two opposing effects can be observed. First, the resistance against crack expansion increases, owing to the increasing bonded width of the triangular first half of the chevron pattern. Second, the lever arm gets longer with increased crack length a. From the critical crack length a_c onwards, unstable crack expansion and the destruction of the specimen are initiated. The critical crack length corresponds to the maximum force in a force–length diagram and to a minimum of the geometric function Y*.
The fracture toughness can be calculated from the maximum force F_max, the specimen width W and the thickness B:
K_IC = (F_max / (B·√W)) · Y*_min.
The maximum force F_max is determined during the test and the minimum of the stress intensity coefficient, Y*_min, is determined by FE simulation. In addition, the energy release rate can be determined as
G_c = K_IC²·(1 − ν²) / E,
with E as modulus of elasticity and ν as Poisson's ratio.
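Evaluating a micro chevron test then amounts to inserting the recorded maximum force and the simulated minimum of the geometric function into the two relations above. The sketch below is only an illustration; the sample values are invented, and Y*_min would in practice come from a finite element model of the actual specimen geometry.

def fracture_toughness(max_force_n, width_m, thickness_m, y_star_min):
    # K_IC = F_max / (B * sqrt(W)) * Y*_min, returned in Pa*sqrt(m).
    return max_force_n / (thickness_m * width_m**0.5) * y_star_min

def energy_release_rate(k_ic, youngs_modulus_pa, poisson_ratio):
    # G_c = K_IC^2 * (1 - nu^2) / E, returned in J/m^2.
    return k_ic**2 * (1.0 - poisson_ratio**2) / youngs_modulus_pa

k_ic = fracture_toughness(max_force_n=12.0, width_m=4e-3, thickness_m=1.45e-3, y_star_min=2.5)
print(k_ic, energy_release_rate(k_ic, 130e9, 0.28))   # order of 0.3 MPa*sqrt(m) and below 1 J/m^2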
The advantage of this test is the high accuracy compared to other tensile or bend tests. It is an effective, reliable and precise approach for the development of wafer bonds as well as for the quality control of the micro mechanical device production.
Bond testing
Bond strength measurement or bond testing is performed in two basic methods: pull testing and shear testing. Both can be done destructively, which is more common (also on wafer level), or non destructively. They are used to determine the integrity of materials and manufacturing procedures, and to evaluate the overall performance of the bonding frame, as well as to compare various bonding technologies with each other. The success or failure of the bond is based on measuring the applied force, the failure type due to the applied force and the visual appearance of the residual medium used.
A development in bond strength testing of adhesively bonded composite structures is laser bond inspection (LBI). LBI provides a relative strength quotient derived from the fluence level of the laser energy delivered onto the material for the strength test compared to the strength of bonds previously mechanically tested at the same laser fluence. LBI provides nondestructive testing of bonds that were adequately prepared and meet engineering intent.
Pull testing
Measuring bond strength by pull testing is often the best way to get the failure mode in which you are interested. Additionally, and unlike a shear test, as the bond separates, the fracture surfaces are pulled away from each other, cleanly enabling accurate failure mode analysis. To pull a bond requires the substrate and interconnect to be gripped; because of size, shape and material properties, this can be difficult, particularly for the interconnection. In these cases, a set of accurately formed and aligned tweezer tips with precision control of their opening and closing is likely to make the difference between success and failure.
The most common type of pull tests is a Wire Pull test. Wire Pull testing applies an upward force under the wire, effectively pulling it away from the substrate or die.
Shear testing
Shear testing is the alternative method to determine the strength a bond can withstand. Various variants of shear testing exist. Like with pull testing, the objective is to recreate the failure mode of interest in the test. If that is not possible, the operator should focus on putting the highest possible load on the bond.
White Light Interferometers
White light interferometry is commonly used for detecting deformations of the wafer surface based on optical measurements. Low-coherence light from a white light source passes through the optical top wafer, e.g. glass wafer, to the bond interface. Usually there are three different white light interferometers:
diffraction grating interferometers
vertical scanning or coherence probe interferometers
white light scatter plate interferometers
For the white light interferometer, the position of the zero-order interference fringe and the spacing of the interference fringes need to be independent of wavelength.
White light interferometry is utilized to detect deformations of the wafer. Low coherence light from a white light source passes through the top wafer to the sensor. The white light is generated by a halogen lamp and modulated. The spectrum of the reflected light of the sensor cavity is detected by a spectrometer. The captured spectrum is used to obtain the cavity length of the sensor. The cavity length corresponds to the applied pressure and is determined by the spectrum of the reflection of the light of the sensor. This pressure value is subsequently displayed on a screen. The cavity length is determined using
L = λ₁·λ₂ / (2·n·(λ₁ − λ₂)), with n as the refractive index of the sensor cavity material, and λ₁ and λ₂ as the wavelengths of adjacent peaks in the reflection spectrum.
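Two adjacent peak wavelengths in the reflection spectrum are enough to evaluate the cavity length expression given above. The snippet below is a small illustration with invented wavelengths; the names are arbitrary.

def cavity_length_nm(refractive_index, wavelength_1_nm, wavelength_2_nm):
    # Cavity length L = lambda1 * lambda2 / (2 * n * |lambda1 - lambda2|), returned in nm.
    return (wavelength_1_nm * wavelength_2_nm
            / (2.0 * refractive_index * abs(wavelength_1_nm - wavelength_2_nm)))

print(cavity_length_nm(1.0, 600.0, 610.0))   # about 18300 nm, i.e. roughly 18.3 um for an air gap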
The advantage of using white light interferometry as characterization method is the influence reduction of the bending loss.
References
Electronics manufacturing
Packaging (microfabrication)
Semiconductor technology
Wafer bonding
Microtechnology
Semiconductor device fabrication | Wafer bond characterization | [
"Materials_science",
"Engineering"
] | 2,775 | [
"Electronics manufacturing",
"Microtechnology",
"Packaging (microfabrication)",
"Materials science",
"Semiconductor device fabrication",
"Electronic engineering",
"Semiconductor technology"
] |
31,381,409 | https://en.wikipedia.org/wiki/Railworthiness | Railworthiness is the property or ability of a locomotive, passenger car, freight car, train or any kind of railway vehicle to be in proper operating condition or to meet acceptable safety standards of project, manufacturing, maintenance and railway use for transportation of persons, luggage or cargo.
Railworthiness is the condition of the rail system and its suitability for rail operations in that it has been designed, constructed, maintained and operated to approved standards and limitations by competent and authorised individuals, who are acting as members of an approved organisation and whose work is both certified as correct and accepted on behalf of the rail system owner.
See also
Airworthiness
Anticlimber
Buff strength
Crashworthiness
Cyberworthiness
Roadworthiness
Seaworthiness
Spaceworthiness
References
Mechanical engineering
Rolling stock
Rail regulation | Railworthiness | [
"Physics",
"Engineering"
] | 157 | [
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
31,382,667 | https://en.wikipedia.org/wiki/Bioabsorbable%20metallic%20glass | Bioresorbable (or bioabsorbable) metallic glass is a type of amorphous metal, which is based on the Mg-Zn-Ca ternary system. Containing only elements which already exist inside the human body, namely Mg, Zn and Ca, these amorphous alloys are a special type of biodegradable metal.
History
The first reported metallic glass was an alloy produced at Caltech by W. Klement (Jr.), Willens and Duwez in 1960. This and other early glass-forming alloys had to be cooled extremely rapidly (in the order of one megakelvin per second, 10^6 K/s) to avoid crystallization. An important consequence of this was that metallic glasses could only be produced in a limited number of forms (typically ribbons, foils, or wires) in which one or more dimensions were small so that heat could be extracted quickly enough to achieve the necessary cooling rates. As a result, metallic glass specimens (with a few exceptions) were limited to thicknesses of less than one hundred micrometers.
Mg-Zn-Ca based metallic glasses are a relatively new group of amorphous metals, possessing commercial and technical advantages over early compositions. Gu and co-workers produced the first Mg-Zn-Ca BMG in 2005, reporting high glass forming ability, high strength and most importantly exceptional plasticity. This lanthanide-free, Mg-based glass attracted immediate interest due to its low density and cost, and particularly because of its uncharacteristically high ductility. This property was unexpected for such compositions, as the constituent elements are found to be of relatively low Poisson ratio, and hence contribute little to the inherent plasticity of the glass. This unlikely asset was seized upon by Li in 2008, who made use of the Poisson ratio principle and increased Mg content at the expense of Zn to further enhance plasticity. Further improvements were achieved by incremental addition of Ca to the binary composition, producing numerous ternary alloys along the 350 °C isotherm of the Mg-Zn-Ca system.
Ternary Ca-Mg-Zn bulk metallic glasses were also discovered in 2005. Similar to the Mg-Zn-Ca, these two amorphous alloys are both bioresorbable metallic glasses and are based on the same Mg-Zn-Ca ternary system. The elements are displayed in order of decreasing atomic concentration. Hence, the distinction between these two metallic glasses lies in their most dominant element, namely Ca and Mg. These Ca-based bulk glassy alloys had compositions of , , and , where x = 0, 5 and 10; y = 0, 5, 7.5, 10, and 15; and z = 0, 5, 7.5, 10, and 15. Critical casting thicknesses of up to 10 mm were achieved.
Properties
Unlike traditional steel or titanium, this material dissolves in organisms at a rate of roughly 1 millimeter per month and is replaced with bone tissue. This speed can be adjusted by varying the content of zinc.
Amorphous Ca65Zn20Mg15 alloy exhibits extremely poor corrosion resistance. Wang et al. reported that the said amorphous alloy completely disintegrated after no more than 3 hours exposure in biocorrosion environment. In static distilled water at room temperature, Dahlman et al. also reported destructive corrosion reactions of the same material, decomposing into a multiphase powder.
Ca-BMGs with higher Zn contents as reported by Cao et al. showed an elastic modulus in the range of 35–46 GPa, and a hardness of 0.7–1.4 GPa.
Recent developments
Metallic glasses based on the Mg-Zn-Ca ternary alloy system only consist of the elements which already exist inside the human body. As such, it is being explored as a potential bioresorbable biomaterial for use in orthopaedic applications.
See also
Bioresorbable stents
Materials science
References
External links
Bioabsorbable metallic glasses
"New materials for bone repair become nutrients, not poison"
Metallurgy
Alloys
Amorphous metals
Biomaterials | Bioabsorbable metallic glass | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering",
"Biology"
] | 866 | [
"Biomaterials",
"Metallurgy",
"Unsolved problems in physics",
"Materials science",
"Amorphous metals",
"Materials",
"Chemical mixtures",
"Alloys",
"nan",
"Amorphous solids",
"Matter",
"Medical technology"
] |
31,384,365 | https://en.wikipedia.org/wiki/Cyclamide | Cyclamides are a class of oligopeptides produced by cyanobacteria algae strains such as Microcystis aeruginosa. Some of them can be toxic.
Cyclamides are cyclopeptides with either six or eight amino acids, some of which are modified from their natural proteinogenic form. They are typically characterized by thiazole and oxazole rings which are thought to be cysteine and threonine derivatives, respectively. Cyclamides are biosynthesized through ribosomic pathways.
See also
Cyanopeptolin
Microcystin
References
External links
Peptides
Cyanotoxins | Cyclamide | [
"Chemistry"
] | 139 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |
897,539 | https://en.wikipedia.org/wiki/Hamilton%E2%80%93Jacobi%20equation | In physics, the Hamilton–Jacobi equation, named after William Rowan Hamilton and Carl Gustav Jacob Jacobi, is an alternative formulation of classical mechanics, equivalent to other formulations such as Newton's laws of motion, Lagrangian mechanics and Hamiltonian mechanics.
The Hamilton–Jacobi equation is a formulation of mechanics in which the motion of a particle can be represented as a wave. In this sense, it fulfilled a long-held goal of theoretical physics (dating at least to Johann Bernoulli in the eighteenth century) of finding an analogy between the propagation of light and the motion of a particle. The wave equation followed by mechanical systems is similar to, but not identical with, Schrödinger's equation, as described below; for this reason, the Hamilton–Jacobi equation is considered the "closest approach" of classical mechanics to quantum mechanics. The qualitative form of this connection is called Hamilton's optico-mechanical analogy.
In mathematics, the Hamilton–Jacobi equation is a necessary condition describing extremal geometry in generalizations of problems from the calculus of variations. It can be understood as a special case of the Hamilton–Jacobi–Bellman equation from dynamic programming.
Overview
The Hamilton–Jacobi equation is a first-order, non-linear partial differential equation
for a system of particles at coordinates . The function is the system's Hamiltonian, giving the system's energy. The solution of this equation is the action, , called Hamilton's principal function.
The solution can be related to the system Lagrangian by an indefinite integral of the form used in the principle of least action:
Geometrical surfaces of constant action are perpendicular to system trajectories, creating a wavefront-like view of the system dynamics. This property of the Hamilton–Jacobi equation connects classical mechanics to quantum mechanics.
Mathematical formulation
Notation
Boldface variables such as represent a list of generalized coordinates,
A dot over a variable or list signifies the time derivative (see Newton's notation). For example,
The dot product notation between two lists of the same number of coordinates is a shorthand for the sum of the products of corresponding components, such as
The action functional (a.k.a. Hamilton's principal function)
Definition
Let the Hessian matrix be invertible. The relation
shows that the Euler–Lagrange equations form a system of second-order ordinary differential equations. Inverting the matrix transforms this system into
Let a time instant and a point in the configuration space be fixed. The existence and uniqueness theorems guarantee that, for every , the initial value problem with the conditions and has a locally unique solution . Additionally, let there be a sufficiently small time interval such that extremals with different initial velocities do not intersect in . The latter means that, for any and any , there can be at most one extremal for which and . Substituting this extremal into the action functional results in Hamilton's principal function (HPF)
where
Formula for the momenta
The momenta are defined as the quantities . This section shows that the dependency of on disappears once the HPF is known.
Indeed, let a time instant and a point in the configuration space be fixed. For every time instant and a point , let be the (unique) extremal from the definition of Hamilton's principal function . Call the velocity at . Then
Formula
Given the Hamiltonian of a mechanical system, the Hamilton–Jacobi equation is a first-order, non-linear partial differential equation for Hamilton's principal function ,
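In conventional notation, with generalized coordinates and the principal function written as S(q, t), the equation takes the standard form (stated here only for concreteness):

```latex
\frac{\partial S}{\partial t}
  + H\!\left(q_1,\dots,q_N;\;
      \frac{\partial S}{\partial q_1},\dots,\frac{\partial S}{\partial q_N};\; t\right) = 0 ,
```

that is, the conjugate momenta appearing in the Hamiltonian are replaced by the partial derivatives of S with respect to the corresponding coordinates.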
Alternatively, as described below, the Hamilton–Jacobi equation may be derived from Hamiltonian mechanics by treating as the generating function for a canonical transformation of the classical Hamiltonian
The conjugate momenta correspond to the first derivatives of with respect to the generalized coordinates
As a solution to the Hamilton–Jacobi equation, the principal function contains undetermined constants, the first of them denoted as , and the last one coming from the integration of .
The relationship between and then describes the orbit in phase space in terms of these constants of motion. Furthermore, the quantities
are also constants of motion, and these equations can be inverted to find as a function of all the and constants and time.
Comparison with other formulations of mechanics
The Hamilton–Jacobi equation is a single, first-order partial differential equation for the function of the generalized coordinates and the time . The generalized momenta do not appear, except as derivatives of , the classical action.
For comparison, in the equivalent Euler–Lagrange equations of motion of Lagrangian mechanics, the conjugate momenta also do not appear; however, those equations are a system of , generally second-order equations for the time evolution of the generalized coordinates. Similarly, Hamilton's equations of motion are another system of 2N first-order equations for the time evolution of the generalized coordinates and their conjugate momenta .
Since the HJE is an equivalent expression of an integral minimization problem such as Hamilton's principle, the HJE can be useful in other problems of the calculus of variations and, more generally, in other branches of mathematics and physics, such as dynamical systems, symplectic geometry and quantum chaos. For example, the Hamilton–Jacobi equations can be used to determine the geodesics on a Riemannian manifold, an important variational problem in Riemannian geometry. However, as a computational tool, the partial differential equations are notoriously complicated to solve except when it is possible to separate the independent variables; in this case the HJE becomes computationally useful.
Derivation using a canonical transformation
Any canonical transformation involving a type-2 generating function leads to the relations
and Hamilton's equations in terms of the new variables and new Hamiltonian have the same form:
To derive the HJE, a generating function is chosen in such a way that it makes the new Hamiltonian . Hence, all its derivatives are also zero, and the transformed Hamilton's equations become trivial
so the new generalized coordinates and momenta are constants of motion. As they are constants, in this context the new generalized momenta are usually denoted , i.e. and the new generalized coordinates are typically denoted as , so .
Setting the generating function equal to Hamilton's principal function, plus an arbitrary constant :
the HJE automatically arises
When solved for , these also give us the useful equations
or written in components for clarity
Ideally, these N equations can be inverted to find the original generalized coordinates as a function of the constants and , thus solving the original problem.
Separation of variables
When the problem allows additive separation of variables, the HJE leads directly to constants of motion. For example, the time t can be separated if the Hamiltonian does not depend on time explicitly. In that case, the time derivative in the HJE must be a constant, usually denoted (), giving the separated solution
where the time-independent function is sometimes called the abbreviated action or Hamilton's characteristic function and sometimes written (see action principle names). The reduced Hamilton–Jacobi equation can then be written
To illustrate separability for other variables, a certain generalized coordinate and its derivative are assumed to appear together as a single function
in the Hamiltonian
In that case, the function S can be partitioned into two functions, one that depends only on qk and another that depends only on the remaining generalized coordinates
Substitution of these formulae into the Hamilton–Jacobi equation shows that the function ψ must be a constant (denoted here as ), yielding a first-order ordinary differential equation for
In fortunate cases, the function can be separated completely into functions
In such a case, the problem devolves to ordinary differential equations.
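As a concrete one-dimensional illustration of this separation (a standard textbook case, sketched here with the harmonic oscillator Hamiltonian), separating off the time reduces the problem to a single quadrature for the characteristic function W:

```latex
H = \frac{p^{2}}{2m} + \frac{1}{2} m \omega^{2} q^{2}, \qquad
S(q,t) = W(q) - Et
\;\Longrightarrow\;
\frac{1}{2m}\left(\frac{dW}{dq}\right)^{2} + \frac{1}{2} m \omega^{2} q^{2} = E,
\qquad
W(q) = \int \sqrt{\,2mE - m^{2}\omega^{2} q^{2}\,}\, dq .
```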
The separability of S depends both on the Hamiltonian and on the choice of generalized coordinates. For orthogonal coordinates and Hamiltonians that have no time dependence and are quadratic in the generalized momenta, will be completely separable if the potential energy is additively separable in each coordinate, where the potential energy term for each coordinate is multiplied by the coordinate-dependent factor in the corresponding momentum term of the Hamiltonian (the Staeckel conditions). For illustration, several examples in orthogonal coordinates are worked in the next sections.
Examples in various coordinate systems
Spherical coordinates
In spherical coordinates the Hamiltonian of a free particle moving in a conservative potential U can be written
The Hamilton–Jacobi equation is completely separable in these coordinates provided that there exist functions such that can be written in the analogous form
Substitution of the completely separated solution
into the HJE yields
This equation may be solved by successive integrations of ordinary differential equations, beginning with the equation for
where is a constant of the motion that eliminates the dependence from the Hamilton–Jacobi equation
The next ordinary differential equation involves the generalized coordinate
where is again a constant of the motion that eliminates the dependence and reduces the HJE to the final ordinary differential equation
whose integration completes the solution for .
Elliptic cylindrical coordinates
The Hamiltonian in elliptic cylindrical coordinates can be written
where the foci of the ellipses are located at on the -axis. The Hamilton–Jacobi equation is completely separable in these coordinates provided that has an analogous form
where , and are arbitrary functions. Substitution of the completely separated solution
into the HJE yields
Separating the first ordinary differential equation
yields the reduced Hamilton–Jacobi equation (after re-arrangement and multiplication of both sides by the denominator)
which itself may be separated into two independent ordinary differential equations
that, when solved, provide a complete solution for .
Parabolic cylindrical coordinates
The Hamiltonian in parabolic cylindrical coordinates can be written
The Hamilton–Jacobi equation is completely separable in these coordinates provided that has an analogous form
where , , and are arbitrary functions. Substitution of the completely separated solution
into the HJE yields
Separating the first ordinary differential equation
yields the reduced Hamilton–Jacobi equation (after re-arrangement and multiplication of both sides by the denominator)
which itself may be separated into two independent ordinary differential equations
that, when solved, provide a complete solution for .
Waves and particles
Optical wave fronts and trajectories
The HJE establishes a duality between trajectories and wavefronts. For example, in geometrical optics, light can be considered either as “rays” or waves. The wave front can be defined as the surface that the light emitted at time has reached at time . Light rays and wave fronts are dual: if one is known, the other can be deduced.
More precisely, geometrical optics is a variational problem where the “action” is the travel time along a path, where is the medium's index of refraction and is an infinitesimal arc length. From the above formulation, one can compute the ray paths using the Euler–Lagrange formulation; alternatively, one can compute the wave fronts by solving the Hamilton–Jacobi equation. Knowing one leads to knowing the other.
The above duality is very general and applies to all systems that derive from a variational principle: either compute the trajectories using Euler–Lagrange equations or the wave fronts by using Hamilton–Jacobi equation.
The wave front at time , for a system initially at at time , is defined as the collection of points such that . If is known, the momentum is immediately deduced.
Once is known, tangents to the trajectories are computed by solving the equation for , where is the Lagrangian. The trajectories are then recovered from the knowledge of .
Relationship to the Schrödinger equation
The isosurfaces of the function can be determined at any time t. The motion of an -isosurface as a function of time is defined by the motions of the particles beginning at the points on the isosurface. The motion of such an isosurface can be thought of as a wave moving through -space, although it does not obey the wave equation exactly. To show this, let S represent the phase of a wave
where is a constant (the Planck constant) introduced to make the exponential argument dimensionless; changes in the amplitude of the wave can be represented by having be a complex number. The Hamilton–Jacobi equation is then rewritten as
which is the Schrödinger equation.
Conversely, starting with the Schrödinger equation and our ansatz for , it can be deduced that
The classical limit () of the Schrödinger equation above becomes identical to the following variant of the Hamilton–Jacobi equation,
Applications
HJE in a gravitational field
Using the energy–momentum relation in the form
for a particle of rest mass travelling in curved space, where are the contravariant components of the metric tensor (i.e., the inverse metric) solved from the Einstein field equations, and is the speed of light. Setting the four-momentum equal to the four-gradient of the action ,
gives the Hamilton–Jacobi equation in the geometry determined by the metric :
in other words, in a gravitational field.
HJE in electromagnetic fields
For a particle of rest mass and electric charge moving in electromagnetic field with four-potential in vacuum, the Hamilton–Jacobi equation in geometry determined by the metric tensor has a form
and can be solved for the Hamilton principal action function to obtain further solution for the particle trajectory and momentum:
where and with the cycle average of the vector potential.
A circularly polarized wave
In the case of circular polarization,
Hence
where , implying that the particle moves along a circular trajectory with a constant radius and an unchanging magnitude of momentum directed along the magnetic field vector.
A monochromatic linearly polarized plane wave
For the flat, monochromatic, linearly polarized wave with a field directed along the axis
hence
implying a figure-8 particle trajectory with its long axis oriented along the electric field vector.
An electromagnetic wave with a solenoidal magnetic field
For the electromagnetic wave with axial (solenoidal) magnetic field:
hence
where is the magnetic field magnitude in a solenoid with effective radius , inductance , number of windings , and electric current magnitude through the solenoid windings. The particle motion occurs along a figure-8 trajectory in a plane perpendicular to the solenoid axis, at an arbitrary azimuth angle owing to the axial symmetry of the solenoidal magnetic field.
See also
Canonical transformation
Constant of motion
Hamiltonian vector field
Hamilton–Jacobi–Einstein equation
WKB approximation
Action-angle coordinates
References
Further reading
Hamiltonian mechanics
Symplectic geometry
Partial differential equations
William Rowan Hamilton | Hamilton–Jacobi equation | [
"Physics",
"Mathematics"
] | 2,962 | [
"Hamiltonian mechanics",
"Theoretical physics",
"Classical mechanics",
"Dynamical systems"
] |
897,776 | https://en.wikipedia.org/wiki/Arm%20solution | In the engineering field of robotics, an arm solution is a set of calculations that allow the real-time computation of the control commands needed to place the end of a robotic arm at a desired position and orientation in space.
A typical industrial robot is built with fixed length segments that are connected either at joints whose angles can be controlled, or along linear slides whose length can be controlled. If each angle and slide distance is known, the position and orientation of the end of the robot arm relative to its base can be computed efficiently with simple trigonometry.
Going the other way — calculating the angles and slides needed to achieve a desired position and orientation — is much harder. The mathematical procedure for doing this is called an arm solution. For some robot designs, such as the Stanford arm, Vicarm SCARA robot or cartesian coordinate robots, this can be done in closed form. Other robot designs require an iterative solution, which requires more computer resources.
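As an illustration of a closed-form arm solution, the two joint angles of a simple planar two-link arm (a toy example, not one of the robots named above) follow from elementary trigonometry. A minimal Python sketch, with the function name and the elbow-configuration flag chosen here purely for illustration:

```python
import math

def two_link_ik(x, y, l1, l2, elbow_up=True):
    """Closed-form inverse kinematics for a planar two-link arm.

    Returns the joint angles (theta1, theta2) in radians that place the
    end of the arm at (x, y); l1 and l2 are the fixed segment lengths.
    Raises ValueError if the target is out of reach.
    """
    # Law of cosines gives the elbow angle.
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target is outside the arm's workspace")
    s2 = math.sqrt(1.0 - c2 * c2)
    if not elbow_up:
        s2 = -s2                      # the mirrored (elbow-down) solution
    theta2 = math.atan2(s2, c2)
    # Shoulder angle: direction to the target, corrected for the bent elbow.
    theta1 = math.atan2(y, x) - math.atan2(l2 * s2, l1 + l2 * c2)
    return theta1, theta2

# Example: place the tip of a 1 m + 1 m arm at (1.2, 0.8).
print(two_link_ik(1.2, 0.8, 1.0, 1.0))
```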
See also
321 kinematic structure
Inverse kinematics
Motion planning
External links
infolab.stanford.edu - The Stanford Arm (1969), with a configuration such that the mathematical computations (arm solutions) were simplified to speed up computations
D. L. Pieper, The kinematics of manipulators under computer control. PhD thesis, Stanford University, Department of Mechanical Engineering, 1968.
Robot kinematics
Trigonometry | Arm solution | [
"Mathematics",
"Engineering"
] | 284 | [
"Applied mathematics",
"Applied mathematics stubs",
"Robotics engineering",
"Robot kinematics"
] |
897,855 | https://en.wikipedia.org/wiki/Secure%20Hash%20Algorithms | The Secure Hash Algorithms are a family of cryptographic hash functions published by the National Institute of Standards and Technology (NIST) as a U.S. Federal Information Processing Standard (FIPS), including:
SHA-0: A retronym applied to the original version of the 160-bit hash function published in 1993 under the name "SHA". It was withdrawn shortly after publication due to an undisclosed "significant flaw" and replaced by the slightly revised version SHA-1.
SHA-1: A 160-bit hash function which resembles the earlier MD5 algorithm. This was designed by the National Security Agency (NSA) to be part of the Digital Signature Algorithm. Cryptographic weaknesses were discovered in SHA-1, and the standard was no longer approved for most cryptographic uses after 2010.
SHA-2: A family of two similar hash functions, with different block sizes, known as SHA-256 and SHA-512. They differ in the word size; SHA-256 uses 32-bit words whereas SHA-512 uses 64-bit words. There are also truncated versions of each standard, known as SHA-224, SHA-384, SHA-512/224 and SHA-512/256. These were also designed by the NSA.
SHA-3: A hash function formerly called Keccak, chosen in 2012 after a public competition among non-NSA designers. It supports the same hash lengths as SHA-2, and its internal structure differs significantly from the rest of the SHA family.
The corresponding standards are FIPS PUB 180 (original SHA), FIPS PUB 180-1 (SHA-1), and FIPS PUB 180-2 (SHA-1, SHA-256, SHA-384, and SHA-512). NIST later issued FIPS Publication 202, the SHA-3 Standard, separately from the Secure Hash Standard (SHS).
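The SHA-1, SHA-2 and SHA-3 functions described above are exposed through Python's standard hashlib module; a minimal sketch (the message is arbitrary, and SHA-1 is included only for comparison, not for security use):

```python
import hashlib

message = b"The quick brown fox jumps over the lazy dog"

print("SHA-1:   ", hashlib.sha1(message).hexdigest())      # legacy, weak
# SHA-2 family: same construction, different word and digest sizes.
print("SHA-256: ", hashlib.sha256(message).hexdigest())
print("SHA-512: ", hashlib.sha512(message).hexdigest())
# SHA-3 family (Keccak-based), same digest lengths as SHA-2.
print("SHA3-256:", hashlib.sha3_256(message).hexdigest())
print("SHA3-512:", hashlib.sha3_512(message).hexdigest())
```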
Comparison of SHA functions
In the table below, internal state means the "internal hash sum" after each compression of a data block.
Validation
All SHA-family algorithms, as FIPS-approved security functions, are subject to official validation by the CMVP (Cryptographic Module Validation Program), a joint program run by the American National Institute of Standards and Technology (NIST) and the Canadian Communications Security Establishment (CSE).
References
Cryptography | Secure Hash Algorithms | [
"Mathematics",
"Engineering"
] | 471 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
898,010 | https://en.wikipedia.org/wiki/Hodge%20cycle | In differential geometry, a Hodge cycle or Hodge class is a particular kind of homology class defined on a complex algebraic variety V, or more generally on a Kähler manifold. A homology class x in a homology group
where V is a non-singular complex algebraic variety or Kähler manifold is a Hodge cycle, provided it satisfies two conditions. Firstly, k is an even integer , and in the direct sum decomposition of H shown to exist in Hodge theory, x is purely of type . Secondly, x is a rational class, in the sense that it lies in the image of the abelian group homomorphism
defined in algebraic topology (as a special case of the universal coefficient theorem). The conventional term Hodge cycle therefore is slightly inaccurate, in that x is considered as a class (modulo boundaries); but this is normal usage.
The importance of Hodge cycles lies primarily in the Hodge conjecture, to the effect that Hodge cycles should always be algebraic cycles, for V a complete algebraic variety. This is an unsolved problem, one of the Millennium Prize Problems. It is known that being a Hodge cycle is a necessary condition to be an algebraic cycle that is rational, and numerous particular cases of the conjecture are known.
References
Hodge theory | Hodge cycle | [
"Engineering"
] | 253 | [
"Tensors",
"Differential forms",
"Hodge theory"
] |
898,161 | https://en.wikipedia.org/wiki/Geopark | A geopark is a protected area with internationally significant geology within which sustainable development is sought and which includes tourism, conservation, education and research concerning not just geology but other relevant sciences.
In 2005, a European Geopark was defined as being:
"a territory with a particular geological heritage and with a sustainable territorial development....the ultimate aim of a European Geopark is to bring enhanced employment opportunities for the people who live there."
Today the geopark is virtually synonymous with the UNESCO geopark, which is defined and managed under the voluntary authority of UNESCO's International Geoscience and Geoparks Programme (IGGP). UNESCO provides a standard for geoparks and a certification service to territories that apply for it. The service is available to member states of UNESCO.
The list of members is not the same as the member states of the United Nations. Membership in the UN does not automatically imply membership of UNESCO, even though UNESCO is part of the UN. Both lists have about 193 member nations, but not exactly the same 193. The UN list covers most of the geopolitical world, but the UNESCO list lacks Israel, for example, which resigned in 2018 because they believed UNESCO is anti-Israel.
The UNESCO Global Geoparks Network co-ordinates the activities of the many UNESCO Global Geoparks (UGGp's) around the world. It is divided into regional networks, such as the European Geoparks Network. The EGN historically preceded the UGGN, being founded in 2000 with the first four geoparks. It joined with UNESCO in 2001 and in 2004 agreed, in the Madonie Declaration, to be a regional network of the UGGps, which had been created by UNESCO in 2004.
The Madonie Declaration of 2004, which was signed by Nikolas Zouros for the EGN and Wolfgang Eder for UNESCO, established what was later called a "bottom up" system of precedence. An applicant geopark must first be a member of the EGN before applying to the UGGN. Furthermore, another level was created, the National Geoparks Network, which at first glance seems a contradiction in terms. Geoparks are international. What the Declaration meant was, if a potentially international type of site (a possible geosite) existed within the candidate park's country, the park must belong to it before it can apply to the regional network. This type was dubbed an NGN. Its sites could then be included under the geopark umbrella by being candidates for the international network. In 2014 the creation of other regions besides the EGN was allowed and encouraged, permitting geoparks to fulfill their declared global nature.
Etymology and usage of geopark
Ge- or geo- is a word-formative prefix derived from the ancient Greek word for "Earth." Due to the use of ancient Greek and Latin words to form international scientific vocabulary, geo- might appear in any modern language of any type by the process of compounding. Since geo- is well known in most modern languages it is especially amenable to word production, the impromptu manufacture of words of self-evident meaning. Geopark and all its associated new geo- words began as produced neologisms but are fast becoming legitimate scientific compounds.
Produced words are often open to interpretation: they mean whatever the writer intended them to mean or whatever the reader interpreted them to mean. Eventually the word receives a common understanding that can be dictionary-defined. "Geopark" is right at that point. Henriques and Brilha, after listing four interpretations not to be allowed now, cite features that must be present in the application of "geopark:" a development plan, a geoheritage, conservation, and sustainability. These are features that must receive the credibility of the international organizations certifying the park as a geopark, without which certification they cannot be scientific geoparks. The overall qualification, therefore, is that they must be certified as geoparks by the accepted international organizations. No certification, no geopark.
The innovation of geo-compounds is neither new nor recent, the most ancient perhaps being the geo-metria, "earth measurement," of ancient Greece. There have been a smattering of "Earth" words ever since. Geo-logia is a relative newcomer, in mediaeval Latin "the study of earthly things" (such as law) in contrast to divine things. It was preempted to refer to the 18th century topics of fossils and rock stratification. Most geo-compounds come from the 19th and early 20th century. Geo- means "Earth" rather than "geological," which would be redundant.
After a floruit of international exploration, scientific research, and park-building in the later 19th century, the world wars represented a sharp decline of conservation and tourism, as the goals of war are opposite those of peace. Even the League of Nations, predecessor of the United Nations, did not unite. The last world war saw the irrecoverable destruction of national heritages and the terrible misuse of science. The United Nations and its educational, scientific, and cultural branch, UNESCO, heir to the League's International Committee on Intellectual Cooperation, both founded in 1945 to do a better job at peace-keeping and cooperation, were at first hindered by the Cold War. As the Cold War manifestly drew toward its end in the 1970s and the countries of eastern Europe were soon to open once more, UNESCO began to be more effective, formulating organizations to respond to a growing demand for the protection of the heritage that was left.
The current round of innovation to which geo-park belongs dates to the last decades of the 20th century and the first of the 21st, although it may not be over yet. They began as marketing terms in the vending of what Farsani calls "sustainable tourism," characterizing it as "a new niche market," the key words being, in addition to geopark, geotourism, geoheritage, geosite, geoconservation, and geodiversity. It is not possible to discover what individuals first innovated the words. Authors such as Farsani can only state the groups among which they were thought to be first current.
The term “geopark” was apparently first used to describe a newly instituted park in the west Vulkaneifel district of the Eifel Mountains of Rhineland-Palatinate, Germany. The region had tended to be economically depressed due to the preference of buyers and sellers for markets in nearby France. They did have a noted geological asset: a now dormant forested volcanic range. The land shows evidence of ancient volcanos, including crater lakes, mineral springs, and pipe formations. The place also abounds in fossils. Although of interest to scientists and hikers, the terrain was generally regarded as a liability, some 19th-century plans even having been made to fill lakes.
Types of geopark
The word geopark is no longer open to the process of innovation through word production. It has been defined by various organizations in the field of earth science. An essential element of the definition is that a geopark must be branded as part of an international geopark network. A national park is not necessarily a geopark. For example, the United States has a system of national parks, but none of them are geoparks. Canada, on the other hand, has several.
A geopark network requires the branding of an international scientific association. They only brand protected areas that meet certain standards, as presented above. The branding has no effect on the previous status of an area. It might already have been other types of park, such as a national park. If the geopark branding is removed, it is still those other types of park. No matter what the type, management, the exercise of authority over the area, is always national; the scientific organizations have no sovereignty; they are simply advisory and certifying agencies guided by decisions made at international conventions.
National geoparks
A "national geopark" is a post de facto designation by UNESCO of a "geographical area" or a transnational geographical area already known to be "of international, regional, and/or national importance" as a candidate geopark. It has not yet been certified as belonging to a regional or the global UNESCO geopark network. It has been "already inscribed" as a member of some other network; that is, "national geopark" is a sort of floating candidacy that can be attached to any other parkland of interest, after which attachment the parkland qualifies for the designation of geopark. The candidates so designated are termed a "national network for geoparks." If it exists in a member nation all geoparks of the regional network in that nation as well as the global network must also belong to it.
Some of the networks from which UNESCO national geoparks might be chosen are World Heritage Sites, Agenda 21, Man and the Biosphere Programme. UNESCO also provides a list of recommended geosite types, such as "minerals and mineral resources," "fossils," etc.
The national networks (one for each nation) are intended as the bottom level of the bottom-up system. They support national conservation, education, cultural development, research, as well as economic sustainability. There is some effort to control conflict of mandate; for example, fossils are not allowed to be sold, a practice that would favor sustainability but work against conservation. For some geoparks, such as Sitia geopark (east Crete), the conflict between geotourist development and the conservation of archaeological sites is a severe one, reaching the law courts. As with the other levels of geopark, the parks are subject to review for recertification every four years.
Transnational geoparks
A transnational geopark crosses a national border to extend continuously in two member nations. The park must belong to two national geoparks, one in each nation, and one regional geopark. Both national geoparks collaborate to prepare a single application, which is submitted by both to the regional and global networks. Both member nations must endorse the park. The management bodies in each nation must collaborate to establish a single set of activities and strategies for the entire park. They can appoint either two collaborating managements or one management.
The certified transnational geoparks are:
Geopark Karawanken
Marble Arch Caves Global Geopark
Novohrad – Nógrád Geopark
Muskau Arch / Łuk Mużakowa UNESCO Global Geopark
Regional geoparks
A regional geopark is a member of an independent network of geoparks that has agreed with UNESCO to provide candidates for the global network. All members of the regional network are a priori members of a national geopark network. They are also members of the global network if they are certified for it. A regional geopark would not be a global geopark if it has not yet been certified as such or its certification has lapsed and it has applied for recertification (Yellow Card status).
A region is more than one country. A current list of accepted regions is:
African Geoparks Network
Asia Pacific Geoparks Network (APGN)
European Geoparks Network (EGN)
Latin America and the Caribbean Geoparks Network (LACGN)
Acting regional geopark
Canada has some geoparks. The most logical regional classification for these might have been the "North American Regional Geopark Network," following a proposed continental tradition for geopark regions. However, the United States does not have any geoparks, and Mexico is covered under Latin America. There are no other nations in North America that can be combined into a region. The United States and Israel resigned from UNESCO in 2018 because they believed that UNESCO is anti-Israel, though the US re-joined in 2023.
According to the rules, Canadian geoparks must belong to a regional network before they can apply for global status, but there is none, and there may not be any in the foreseeable future. UNESCO therefore, treating Canada as a special case, allows the national geoparks network, the Canadian Geoparks Network, to give global and green-card certification. A regionalization based strictly on continents did not turn out to be practical for other regions either.
UNESCO Global Geoparks
A global geopark is one that has been certified to the fullest extent, and is therefore a member of UNESCO's global network of geoparks. It is per se also a member of a regional geopark network and also a member of a national geopark network, if its nation has one, or a transnational geopark. A certification is good for four years, after which it must be certified again. In the language of certification, a recertified global geopark is termed a "green-card geopark." If a geopark fails recertification it is given two years to pass, in which it is a "yellow-card geopark." After two years if it is still unrecertified it is a "red-card geopark;" that is, no longer a geopark, and is removed from connection with or concern by UNESCO. To reapply, it must start the application over. Recertified geoparks do not have to keep the same borders; only a portion may be recertified.
See also
Geoconservation
Geotourism
Notes
Citations
Reference bibliography
External links
Asia Pacific Geoparks Network
European Geoparks Network, a list and map of 94 regional geoparks, 2022
Global Geoparks Network, a list and map of 177 certified Geoparks, 2022
Protected areas
Earth sciences
Geology
Geography education
Environmental education
Disaster preparedness | Geopark | [
"Environmental_science"
] | 2,800 | [
"Environmental education",
"Environmental social science"
] |
32,867,251 | https://en.wikipedia.org/wiki/Daylight%20factor | In architecture, a daylight factor (DF) is the ratio of the light level inside a structure to the light level outside the structure. It is defined as:
DF = (Ei / Eo) x 100%
where,
Ei = illuminance due to daylight at a point on the indoors working plane,
Eo = simultaneous outdoor illuminance on a horizontal plane from an unobstructed hemisphere of overcast sky.
Calculating Ei requires knowing the amount of outside light received inside a building. Light can reach a room through a glazed window, rooflight, or other aperture via three paths:
Direct light from a patch of sky visible at the point considered, known as the sky component (SC),
Light reflected from an exterior surface and then reaching the point considered, known as the externally reflected component (ERC),
Light entering through the window but reaching the point only after reflection from an internal surface, known as the internally reflected component (IRC).
The sum of the three components gives the illuminance level (typically measured in lux) at the point considered:
Illuminance = SC + ERC + IRC
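A minimal sketch of the calculation in Python (the component values and the outdoor illuminance below are made-up illustrative numbers, not measured data):

```python
def daylight_factor(sc, erc, irc, e_outdoor):
    """Daylight factor (%) at a point: the three components (lux) summed
    to give Ei, divided by the simultaneous outdoor illuminance Eo (lux)."""
    e_indoor = sc + erc + irc          # Ei = SC + ERC + IRC
    return 100.0 * e_indoor / e_outdoor

# Illustrative sky, externally and internally reflected components at a desk.
print(daylight_factor(sc=150.0, erc=30.0, irc=60.0, e_outdoor=11921.0))
# -> about 2.0 %
```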
The daylight factor can be improved by increasing SC (for example placing a window so it "sees" more of the sky rather than adjacent buildings), increasing ERC (for example by painting surrounding buildings white), increasing IRC (for example by using light colours for room surfaces). In most rooms, the ceiling and floor are a fixed colour, and much of the walls are covered by furnishings. This gives less flexibility than might be expected in changing the daylight factor through different wall colours, meaning that changing SC is often the key to good daylight design.
Architects and engineers use daylight factors in architecture and building design to assess the internal natural lighting levels as perceived on working planes or surfaces. They use this information to determine if light is sufficient for occupants to carry out normal activities. The design day for daylight factor calculations is based on the standard CIE overcast sky for 21 September at 12:00 pm, where the ground ambient light level is 11,921 lux. CIE is the Commission Internationale de l'Éclairage, or International Commission on Illumination.
Calculating daylight factors requires many repeated calculations and is therefore generally undertaken with software such as Radiance, a suite of tools for lighting simulation that includes a renderer as well as tools for measuring simulated light levels; it uses ray tracing to perform all lighting calculations. One failing of many such calculations is that they are often completed without wall hangings or furniture against the walls, which can lead to predictions of the daylight factor that are higher than is correct.
To assess the effect of a poor or good daylight factor, one might compare the results for a given calculation against published design guidance. In the UK this is likely to be CIBSE Lighting Guide 10 (LG10-1999), which broadly bands average daylight factors into the following categories:
Under 2 – Not adequately lit – artificial lighting is required all of the time
Over 5 – Well lit – artificial lighting generally not required, except at dawn and dusk – but glare and solar gain may cause problems
See also
Daylighting
Right to light
Climate based daylight modelling
Notes
External links
International Commission on Illumination
Light
Visibility
Energy-saving lighting
Lighting | Daylight factor | [
"Physics",
"Mathematics"
] | 678 | [
"Visibility",
"Physical phenomena",
"Physical quantities",
"Spectrum (physical sciences)",
"Quantity",
"Electromagnetic spectrum",
"Waves",
"Light",
"Wikipedia categories named after physical quantities"
] |
32,869,192 | https://en.wikipedia.org/wiki/List%20of%20dualities |
Mathematics
In mathematics, a duality, generally speaking, translates concepts, theorems or mathematical structures into other concepts, theorems or structures, in a one-to-one fashion, often (but not always) by means of an involution operation: if the dual of A is B, then the dual of B is A.
Alexander duality
Alvis–Curtis duality
Artin–Verdier duality
Beta-dual space
Coherent duality
Conjugate hyperbola
De Groot dual
Dual abelian variety
Dual basis in a field extension
Dual bundle
Dual curve
Dual (category theory)
Dual graph
Dual group
Dual object
Dual pair
Dual polygon
Dual polyhedron
Dual problem
Dual representation
Dual q-Hahn polynomials
Dual q-Krawtchouk polynomials
Dual space
Dual topology
Dual wavelet
Duality (optimization)
Duality (order theory)
Duality of stereotype spaces
Duality (projective geometry)
Duality theory for distributive lattices
Dualizing complex
Dualizing sheaf
Eckmann–Hilton duality
Esakia duality
Fenchel's duality theorem
Hodge dual
Isbell duality
Jónsson–Tarski duality
Lagrange duality
Langlands dual
Lefschetz duality
Local Tate duality
Opposite category
Poincaré duality
Twisted Poincaré duality
Poitou–Tate duality
Pontryagin duality
S-duality (homotopy theory)
Schur–Weyl duality
Series-parallel duality
Serre duality
Spanier–Whitehead duality
Stone's duality
Tannaka–Krein duality
Verdier duality
Grothendieck local duality
Philosophy and religion
Dualism (philosophy of mind)
Epistemological dualism
Dualistic cosmology
Soul dualism
Yin and yang
Engineering
Duality (electrical circuits)
Duality (mechanical engineering)
Observability/Controllability in control theory
Physics
Complementarity (physics)
Dual resonance model
Duality (electricity and magnetism)
Englert–Greenberger duality relation
Holographic duality
AdS/CFT correspondence
Kramers–Wannier duality
Mirror symmetry
3D mirror symmetry
Montonen–Olive duality
Mysterious duality (M-theory)
Seiberg duality
String duality
S-duality
T-duality
U-duality
Wave–particle duality
Economics and finance
Convex duality
See also
Mechanical–electrical analogies
References
Mathematics-related lists
Physics-related lists | List of dualities | [
"Mathematics"
] | 496 | [
"Mathematical structures",
"Category theory",
"Duality theories",
"Geometry"
] |
32,872,804 | https://en.wikipedia.org/wiki/Vortex%20core%20line | In scientific visualization, a vortex core line is a line-like feature tracing the center of a vortex with in a velocity field.
Detection methods
Several methods exist to detect vortex core lines in a flow field. One survey studied and compared nine methods for vortex detection, including five methods for the identification of vortex core lines. Although this list is incomplete, the authors considered it representative of the state of the art (as of 2004).
One of these five methods works as follows: in a velocity field v(x,t), a point x lies on a vortex core line if v(x,t) is an eigenvector of the tensor derivative (the velocity gradient) and the other, non-corresponding eigenvalues are complex.
Another is the Lambda2 method, which is Galilean invariant and thus produces the same results when a uniform velocity field is added to the existing velocity field or when the field is translated.
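A rough numerical sketch of the eigenvector-based test described above, for a single sample point: the velocity v and its gradient tensor J are assumed to have been estimated beforehand (for example by finite differences on a grid), and the function name and tolerance are chosen here purely for illustration.

```python
import numpy as np

def on_core_line(v, J, tol=1e-6):
    """Return True if v is (numerically) an eigenvector of the velocity
    gradient tensor J and the remaining eigenvalue pair is complex,
    i.e. the flow locally swirls around the direction of v."""
    nv = np.linalg.norm(v)
    if nv < tol:
        return False                              # near-stagnation point
    w = J @ v
    # v is an eigenvector of J iff J v is parallel to v.
    misalignment = np.linalg.norm(np.cross(w, v)) / (np.linalg.norm(w) * nv + 1e-30)
    if misalignment > tol:
        return False
    # A real 3x3 matrix has either three real eigenvalues or one real
    # eigenvalue plus a complex-conjugate pair; require the latter.
    eigenvalues = np.linalg.eigvals(J)
    return int(np.sum(np.abs(eigenvalues.imag) > tol)) == 2

# Toy example: rigid rotation about the z axis plus weak axial stretching.
J = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.1]])
print(on_core_line(np.array([0.0, 0.0, 1.0]), J))   # True: on the core line
print(on_core_line(np.array([1.0, 0.0, 1.0]), J))   # False: off the axis
```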
See also
Flow visualization
References
Visualization (graphics)
Vortices | Vortex core line | [
"Chemistry",
"Mathematics",
"Technology"
] | 196 | [
"Vortices",
"Computer science stubs",
"Computer science",
"Computing stubs",
"Fluid dynamics",
"Dynamical systems"
] |
32,873,814 | https://en.wikipedia.org/wiki/Image-based%20flow%20visualization | In scientific visualization, image-based flow visualization (or visualisation) is a computer modelling technique developed by Jarke van Wijk to visualize two dimensional flows of liquids such as water and air, like the wind movement of a tornado. Compared with integration techniques it has the advantage of producing a whole image at every step, as the technique relies upon graphical computing methods for frame-by-frame capture of the model of advective transport of a decaying dye. It is a method from the texture advection family.
Principle
The core idea is to create a noise texture on a regular grid and then bend this grid according to the flow (the vector field). The bent grid is then sampled at the original grid locations. Thus, the output is a version of the noise that is displaced according to the flow.
The advantage of this approach is that it can be accelerated on modern graphics hardware, thus allowing for real-time or almost real-time simulation of 2D flow data. This is particularly handy if one wants to visualise multiple scaled versions of the vector field to first gain an overview and then concentrate on the details.
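A much-simplified sketch of the idea in Python/NumPy, using nearest-neighbour backward lookup on a small grid; the real method performs this per frame on graphics hardware with interpolated texture sampling, and the grid size, time step and decay factor below are illustrative:

```python
import numpy as np

def ibfv_step(tex, vx, vy, noise, dt=1.0, decay=0.9):
    """One texture-advection step: sample the current texture at the
    upstream position of every pixel, then blend in fresh noise."""
    h, w = tex.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Backward lookup: where did the material now at (x, y) come from?
    src_x = np.clip(xs - dt * vx, 0, w - 1).astype(int)
    src_y = np.clip(ys - dt * vy, 0, h - 1).astype(int)
    advected = tex[src_y, src_x]
    return decay * advected + (1.0 - decay) * noise

# Tiny demo: a rigid rotation about the centre of a 64 x 64 grid.
h = w = 64
ys, xs = np.mgrid[0:h, 0:w].astype(float)
vx, vy = -(ys - h / 2) * 0.05, (xs - w / 2) * 0.05
rng = np.random.default_rng(0)
tex = rng.random((h, w))
for _ in range(50):
    tex = ibfv_step(tex, vx, vy, rng.random((h, w)))
# 'tex' now shows streaks that follow the circular flow.
```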
References
External links
Website of Jarke van Wijk with demo software and pictures
Scientific visualization
Numerical function drawing
Fluid dynamics | Image-based flow visualization | [
"Chemistry",
"Engineering"
] | 255 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
5,087,444 | https://en.wikipedia.org/wiki/SCIAMACHY | SCIAMACHY (SCanning Imaging Absorption SpectroMeter for Atmospheric CHartographY; Greek: σκιαμαχεί: analogously: "Fighting shadows") was one of ten instruments aboard of ESA's ENVIronmental SATellite, ENVISAT. It was a satellite spectrometer designed to measure sunlight, transmitted, reflected and scattered by the Earth's atmosphere or surface in the ultraviolet, visible and near infrared wavelength region (240 nm - 2380 nm) at moderate spectral resolution (0.2 nm - 1.5 nm). SCIAMACHY was built by Netherlands and Germany at TNO/TPD, SRON and Dutch Space.
Launch and termination
SCIAMACHY, aboard the ENVISAT satellite, was launched by ESA (European Space Agency) from Kourou, French Guiana, in March 2002. ENVISAT's mission was ended in May 2012, after loss of contact one month earlier.
Operation
The absorption, reflection and scattering characteristics of the atmosphere were determined by measuring the extraterrestrial solar irradiance and the upwelling radiance observed in different viewing geometries. The ratio of extraterrestrial irradiance and the upwelling radiance can be inverted to provide information about the amounts and distribution of important atmospheric constituents, which absorb or scatter light, and the spectral reflectance (or albedo) of the Earth's surface.
Purpose
SCIAMACHY was conceived to improve global knowledge and understanding of a variety of issues of importance for the chemistry and physics of the Earth's atmosphere (troposphere, stratosphere and mesosphere) and potential changes resulting from either anthropogenic behavior or natural phenomena.
References
External links
ESA: SCIAMACHY
SCIAMACHY-News of DLR
SCIAMACHY-Homepage of the institute IUP-IFE Bremen
SCIAMACHY Portal
Spectrometers
Earth observation satellite sensors | SCIAMACHY | [
"Physics",
"Chemistry"
] | 389 | [
"Spectrometers",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
5,087,910 | https://en.wikipedia.org/wiki/Fracton | A fracton is a collective quantized vibration on a substrate with a fractal structure.
Fractons are the fractal analog of phonons. Phonons are the result of applying translational symmetry to the potential in a Schrödinger equation. Fractal self-similarity can be thought of as a symmetry somewhat comparable to translational symmetry. Translational symmetry is symmetry under displacement or change of position, and fractal self-similarity is symmetry under change of scale. The quantum mechanical solutions to such a problem in general lead to a continuum of states with different frequencies. In other words, a fracton band is comparable to a phonon band. The vibrational modes are restricted to part of the substrate and are thus not fully delocalized, unlike phonon vibrational modes. Instead, there is a hierarchy of vibrational modes that encompass smaller and smaller parts of the substrate.
References
Literature
External links
The ‘Weirdest’ Matter, Made of Partial Particles, Defies Description
Fractals
Quasiparticles | Fracton | [
"Physics",
"Materials_science",
"Mathematics"
] | 214 | [
"Matter",
"Materials science stubs",
"Mathematical analysis",
"Functions and mappings",
"Mathematical analysis stubs",
"Mathematical objects",
"Fractals",
"Mathematical relations",
"Condensed matter physics",
"Quasiparticles",
"Condensed matter stubs",
"Subatomic particles"
] |
5,094,808 | https://en.wikipedia.org/wiki/Bond%20valence%20method | The bond valence method or mean method (or bond valence sum) (not to be mistaken for the valence bond theory in quantum chemistry) is a popular method in coordination chemistry to estimate the oxidation states of atoms. It is derived from the bond valence model, which is a simple yet robust model for validating chemical structures with localized bonds or used to predict some of their properties. This model is a development of Pauling's rules.
Method
The basic method is that the valence V of an atom is the sum of the individual bond valences vi surrounding the atom:
The individual bond valences in turn are calculated from the observed bond lengths.
Ri is the observed bond length, R0 is a tabulated parameter expressing the (ideal) bond length when the element i has exactly valence 1, and b is an empirical constant, typically 0.37 Å.
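Written out, the exponential expression for an individual bond valence is vi = exp[(R0 - Ri)/b]. A minimal sketch of the resulting bond valence sum in Python; the numbers are illustrative only (R0 of about 1.815 Å is a commonly tabulated value for the Ti4+-O2- pair, and the distances roughly approximate the TiO6 octahedron in rutile):

```python
import math

def bond_valence_sum(bond_lengths, r0, b=0.37):
    """Sum of the individual bond valences v_i = exp((R0 - Ri) / b)
    over all bonds formed by one atom (lengths in angstroms)."""
    return sum(math.exp((r0 - ri) / b) for ri in bond_lengths)

# Illustrative values: R0 for a Ti(IV)-O bond and six Ti-O distances
# approximating a slightly distorted octahedron (4 short + 2 long bonds).
ti_o_bonds = [1.946] * 4 + [1.984] * 2
print(bond_valence_sum(ti_o_bonds, r0=1.815))   # close to 4, i.e. Ti(IV)
```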
Another formula for has also been used:
Theory
Introduction
Although the bond valence model is mostly used for validating newly determined structures, it is capable of predicting many of the properties of those chemical structures that can be described by localized bonds.
In the bond valence model, the valence of an atom, V, is defined as the number of electrons the atom uses for bonding. This is equal to the number of electrons in its valence shell if all the valence shell electrons are used for bonding. If they are not, the remainder will form non-bonding electron pairs, usually known as lone pairs.
The valence of a bond, S, is defined as the number of electron pairs forming the bond. In general this is not an integral number. Since each of the terminal atoms contributes equal numbers of electrons to the bond, the bond valence is also equal to the number of valence electrons that each atom contributes. Further, since within each atom, the negatively charged valence shell is linked to the positively charged core by an electrostatic flux that is equal to the charge on the valence shell, it follows that the bond valence is also equal to the electrostatic flux that links the core to the electrons forming the bond. The bond valence is thus equal to three different quantities: the number of electrons each atom contributes to the bond, the number of electron pairs that form the bond, and the electrostatic flux linking each core to the bonding electron pair.
The valence sum rule
It follows from these definitions, that the valence of an atom is equal to the sum of the valences of all the bonds it forms. This is known as the valence sum rule, Eq. 1, which is central to the bond valence model.
(Eq. 1)
A bond is formed when the valence shells of two atoms overlap. It is apparent that the closer two atoms approach each other, the larger the overlap region and the more electrons are associated with the bond. We therefore expect a correlation between the bond valence and the bond length and find empirically that for most bonds it can be described by Eq. 2:
(Eq. 2)
where is the valence and is the length of the bond, and and are parameters that are empirically determined for each bond type. For many bond types (but not all), is found to be close to 0.37 Å. A list of bond valence parameters for different bond types (i.e., for different pairs of cation and anion in given oxidation states) can be found at the web site. It is this empirical relation that links the formal theorems of the bond valence model to the real world and allows the bond valence model to be used to predict the real structure, geometry, and properties of a compound.
If the structure of a compound is known, the empirical bond valence - bond length correlation of Eq. 2 can be used to estimate the bond valences from their observed bond lengths. Eq. 1 can then be used to check that the structure is chemically valid; any deviation between the atomic valence and the bond valence sum needs to be accounted for.
The distortion theorem
Eq. 2 is used to derive the distortion theorem, which states that the more the individual bond lengths in a coordination sphere deviate from their average, the more the average bond length increases, provided the valence sum is kept constant. Alternatively, if the average bond length is kept constant, the bond valence sum increases the more the individual bond lengths deviate from their average.
The valence matching rule
If the structure is not known, the average bond valence, Sa can be calculated from the atomic valence, V, if the coordination number, N, of the atom is known using Eq. 3.
(Eq. 3)
If the coordination number is not known, a typical coordination number for the atom can be used instead. Some atoms, such as sulfur(VI), are only found with one coordination number with oxygen, in this case 4, but others, such as sodium, are found with a range of coordination numbers, though most lie close to the average, which for sodium is 6.2. In the absence of any better information, the average coordination number observed with oxygen is a convenient approximation, and when this number is used in Eq. 3, the resulting average bond valence is known as the bonding strength of the atom.
Since the bonding strength of an atom is the valence expected for a bond formed by that atom, it follows that the most stable bonds will be formed between atoms with the same bonding strengths. In practice some tolerance is allowed, but bonds are rarely formed if the ratio of the bonding strengths of the two atoms exceeds two, a condition expressed by the inequality shown in Eq. 4. This is known as the valence matching rule.
(Eq. 4)
Atoms with non-bonding valence electrons, i.e., with lone pairs, have more flexibility in their bonding strength than those without lone pairs depending on whether the lone pairs are stereoactive or not. If the lone pairs are not stereoactive, they are spread uniformly around the valence shell, if they are stereoactive they are concentrated in one portion of the coordination sphere preventing that portion from forming bonds. This results in the atom having a smaller coordination number, hence a higher bonding strength, when the lone pair is stereoactive. Ions with lone pairs have a greater ability to adapt their bonding strength to match that of the counter-ion. The lone pairs become stereoactive when the bonding strength of the counter-ion exceeds twice the bonding strength of the ion when its lone pairs are inactive.
Compounds that do not satisfy Eq. 4 are difficult, if not impossible, to prepare, and chemical reactions tend to favour the compounds that provide the best valence match. For example, the aqueous solubility of a compound depends on whether its ions are better matched to water than they are to each other.
Electronegativity
Several factors influence the coordination number of an atom, but the most important of these is its size; larger atoms have larger coordination numbers. The coordination number depends on the surface area of the atom, and so is proportional to r2. If VE is the charge on the atomic core (which is the same as the valence of the atom when all the electrons in the valence shell are bonding), and NE is the corresponding average coordination number, VE/NE is proportional to the electric field at the surface of the core, represented by SE in Eq. 5:
(Eq. 5)
Not surprisingly, SE gives the same ordering of the main group elements as the electronegativity, though it differs in its numerical value from traditional electronegativity scales. Because it is defined in structural terms, SE is the preferred measure of electronegativity in the bond valence model.
The ionic model
The bond valence model can be reduced to the traditional ionic model if certain conditions are satisfied. These conditions require that atoms be divided into cations and anions in such a way that (a) the electronegativity of every anion is equal to, or greater than, the electronegativity of any of the cations, (b) that the structure is electroneutral when the ions carry charges equal to their valence, and (c) that all the bonds have a cation at one end and an anion at the other. If these conditions are satisfied, as they are in many ionic and covalent compounds, the electrons forming a bond can all be formally assigned to the anion. The anion thus acquires a formal negative charge and the cation a formal positive charge, which is the picture on which the ionic model is based. The electrostatic flux that links the cation core to its bonding electrons now links the cation core to the anion. In this picture, a cation and anion are bonded to each other if they are linked by electrostatic flux, with the flux being equal to the valence of the bond. In a representative set of compounds Preiser et al. have confirmed that the electrostatic flux is the same as the bond valence determined from the bond lengths using Eq. 2.
The association of the cation bonding electrons with the anion in the ionic model is purely formal. There is no change in physical locations of any electrons, and there is no change in the bond valence. The terms "anion" and "cation" in the bond valence model are defined in terms of the bond topology, not the chemical properties of the atoms. This extends the scope of the ionic model well beyond compounds in which the bonding would normally be considered as "ionic". For example, methane, CH4, obeys the conditions for the ionic model with carbon as the cation and hydrogen as the anion (or vice versa, since carbon and hydrogen have the same electronegativity).
For compounds that contain cation-cation or anion-anion bonds it is usually possible to transform these homoionic bonds into cation-anion bonds either by treating the atoms linked by the homoionic bond as a single complex cation (e.g., Hg22+), or by treating the bonding electrons in the homoionic bond as a pseudo-anion to transform a cation-cation bond into two cation - pseudo-anion bonds, e.g., Hg2+-e2−-Hg2+.
The covalent model
Structures containing covalent bonds can be treated using the ionic model providing they satisfy the topological conditions given above, but a special situation applies to hydrocarbons which allows the bond valence model to be reduced to the traditional bond model of organic chemistry. If an atom has a valence, V, that is equal to its coordination number, N, its bonding strength according to Eq. 3 is exactly 1.0 vu (valence units), a condition that greatly simplifies the model. This condition is obeyed by carbon, hydrogen and silicon. Since these atoms all have bonding strengths of 1.0 vu the bonds between them are all predicted to have integral valences with carbon forming four single bonds and hydrogen one. Under these conditions, the bonds are all single bonds (or multiples of single bonds). Compounds can be constructed by linking carbon and hydrogen atoms with bonds that are all exactly equivalent. Under certain conditions, nitrogen can form three bonds and oxygen two, but since nitrogen and oxygen typically also form hydrogen bonds, the resulting N-H and O-H bonds have valences less than 1.0 vu, leading through the application of Eq. 1, to the C-C and C-H bonds having valences that differ from 1.0 vu. Nevertheless, the simple bonding rules of organic chemistry are still good approximations, though the rules of the bond valence model are better.
Predicting bonding geometry
A chemical structure can be represented by a bond network of the kind familiar in molecular diagrams. The infinitely connected bond networks found in crystals can be simplified into finite networks by extracting one formula unit and reconnecting any broken bonds to each other. If the bond network is not known, a plausible network can be created by connecting well matched cations and anions that satisfy Eq. 4. If the finite network contains only cation-anion bonds, every bond can be treated as an electric capacitor (two equal and opposite charges linked by electrostatic flux). The bond network is thus equivalent to a capacitive electrical circuit with the charge on each capacitor being equivalent to the bond valence. The individual bond capacitors are not initially known, but in the absence of any information to the contrary we assume that they are all equal. In this case the circuit can be solved using the Kirchhoff equations, yielding the valences of each bond. Eq. 2 can then be used to calculate bond lengths which are found to lie within a few picometres of the observed bond lengths if no additional constraints are present. Additional constraints include electronic anisotropies (lone pairs and Jahn-Teller distortions) or steric constraints, (bonds stretched or compressed in order to fit them into three-dimensional space). Hydrogen bonds are an example of a steric constraint. The repulsion resulting from the close approach of the donor and acceptor atoms causes the bonds to be stretched, and under this constraint the distortion theorem predicts that the hydrogen atom will move off-center.
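As a minimal sketch of the last step (turning network-derived bond valences into predicted bond lengths with Eq. 2), the following Python snippet inverts the bond valence expression; the R0 and B values are placeholders chosen purely for illustration, not recommended parameters for any particular bond type.

```python
import math

def bond_length(s, r0, b):
    """Invert Eq. 2, s = exp((R0 - R)/B), to get the bond length R from a valence s."""
    return r0 - b * math.log(s)

# Example: a tetrahedral S-O group, where the network equations with equal
# "capacitors" give each S-O bond a valence of 6/4 = 1.5 vu.
R0, B = 1.62, 0.37   # placeholder parameters (illustrative only)
print(f"Predicted S-O bond length: {bond_length(1.5, R0, B):.3f} angstroms")
```

Observed bond lengths are then expected to lie within a few picometres of such a prediction when no additional constraints are present.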
The bond valence is a vector directed along the bond since it represents the electrostatic field linking the ions. If the atom is unconstrained, the sum of the bond valence vectors around an atom is expected to be zero, a condition that limits the range of possible bond angles.
Strengths and limitations of the model
The bond valence model is an extension of the electron counting rules and its strength lies in its simplicity and robustness. Unlike most models of chemical bonding, it does not require a prior knowledge of the atomic positions and so can be used to construct chemically plausible structures given only the composition. The empirical parameters of the model are tabulated and are readily transferable between bonds of the same type. The concepts used are familiar to chemists and provide ready insight into the chemical restraints acting on the structure. The bond valence model uses mostly classical physics, and with little more than a pocket calculator, it gives quantitative predictions of bond lengths and places limits on what structures can be formed.
However, like all models, the bond valence model has its limitations. It is restricted to compounds with localized bonds; it does not, in general, apply to metals or aromatic compounds where the electrons are delocalized. It cannot in principle predict electron density distributions or energies since these require the solution of the Schrödinger equation using the long-range Coulomb potential, which is incompatible with the concept of a localized bond.
History
The bond valence method is a development of Pauling's rules. In 1930, Lawrence Bragg showed that Pauling's electrostatic valence rule could be represented by electrostatic lines of force emanating from cations in proportion to the cation charge and ending on anions. The lines of force are divided equally between the bonds to the corners of the coordination polyhedron.
Starting with Pauling in 1947 a correlation between cation–anion bond length and bond strength was noted. It was shown later that if bond lengths were included in the calculation of bond strength, its accuracy was improved, and this revised method of calculation was termed the bond valence. These new insights were developed by later workers culminating in the set of rules termed the bond valence model.
Actinide oxides
It is possible by bond valence calculations to estimate how great a contribution a given oxygen atom is making to the assumed valence of uranium. Zachariasen lists the parameters to allow such calculations to be done for many of the actinides. Bond valence calculations use parameters which are estimated after examining a large number of crystal structures of uranium oxides (and related uranium compounds); note that the oxidation states which this method provides are only a guide which assists in the understanding of a crystal structure.
For uranium bound to oxygen, the constants R0 and B for each oxidation state are tabulated in the table below.
Doing the calculations
It is possible to do these simple calculations on paper or with software; a program that does them can be obtained free of charge. In 2020 David Brown published a nearly comprehensive set of bond valence parameters on the IUCr web site.
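As a minimal illustration of such a calculation (a sketch only, independent of any particular program), the following Python snippet computes a bond valence sum for a uranium site from assumed U-O bond lengths using Eq. 2; the R0 and B values and the bond lengths are hypothetical placeholders, not Zachariasen's tabulated parameters.

```python
import math

def bond_valence(r, r0, b):
    """Bond valence from bond length via Eq. 2: s = exp((R0 - r)/B)."""
    return math.exp((r0 - r) / b)

# Hypothetical parameters and U-O bond lengths (angstroms), for illustration only;
# a real analysis should take R0 and B from the tabulated actinide parameters.
R0, B = 2.05, 0.37
uo_bond_lengths = [1.78, 1.78, 2.30, 2.30, 2.30, 2.30]   # a uranyl-like environment

valence_sum = sum(bond_valence(r, R0, B) for r in uo_bond_lengths)
print(f"Bond valence sum at the uranium site: {valence_sum:.2f} vu")
# A sum close to 6 vu would support an assignment of U(VI), bearing in mind that
# the oxidation states obtained this way are only a guide to understanding the structure.
```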
References
Chemical bonding
Coordination chemistry | Bond valence method | [
"Physics",
"Chemistry",
"Materials_science"
] | 3,385 | [
"Chemical bonding",
"Coordination chemistry",
"Condensed matter physics",
"nan"
] |
35,557,310 | https://en.wikipedia.org/wiki/Isotropic%20formulations | Isotropic formulations are thermodynamically stable microemulsions possessing lyotropic liquid crystal properties. They inhabit a state of matter and physical behaviour somewhere between that of conventional liquids and that of solid crystals. Isotropic formulations are amphiphilic, interacting selectively with both the water and lipid phases of the substrate to which they are applied. Most recently, isotropic formulations have been used extensively in dermatology for drug delivery.
Uses
While it is well established that the skin provides an ideal site for the administration of local and systemic drugs, it presents a formidable barrier to the permeation of most substances. Isotropic formulations have been used to deliver drugs locally and systemically via the skin appendages, intercellular and transcellular routes.
References
Phases of matter
Routes of administration | Isotropic formulations | [
"Physics",
"Chemistry"
] | 173 | [
"Pharmacology",
"Phases of matter",
"Matter",
"Routes of administration"
] |
20,210,883 | https://en.wikipedia.org/wiki/Quantum%20t-design | A quantum t-design is a probability distribution over either pure quantum states or unitary operators which can duplicate properties of the probability distribution over the Haar measure for polynomials of degree t or less. Specifically, the average of any polynomial function of degree t over the design is exactly the same as the average over Haar measure. Here the Haar measure is a uniform probability distribution over all quantum states or over all unitary operators. Quantum t-designs are so called because they are analogous to t-designs in classical statistics, which arose historically in connection with the problem of design of experiments. Two particularly important types of t-designs in quantum mechanics are projective and unitary t-designs.
A spherical design is a collection of points on the unit sphere for which polynomials of bounded degree can be averaged over to obtain the same value that integrating over surface measure on the sphere gives. Spherical and projective t-designs derive their names from the works of Delsarte, Goethals, and Seidel in the late 1970s, but these objects played earlier roles in several branches of mathematics, including numerical integration and number theory. Particular examples of these objects have found uses in quantum information theory, quantum cryptography, and other related fields.
Unitary t-designs are analogous to spherical designs in that they reproduce the entire unitary group via a finite collection of unitary matrices. The theory of unitary 2-designs was developed in 2006 specifically to achieve a practical means of efficient and scalable randomized benchmarking to assess the errors in quantum computing operations, called gates. Since then unitary t-designs have been found useful in other areas of quantum computing and more broadly in quantum information theory and applied to problems as far reaching as the black hole information paradox. Unitary t-designs are especially relevant to randomization tasks in quantum computing since ideal operations are usually represented by unitary operators.
Motivation
In a d-dimensional Hilbert space, when averaging over all quantum pure states, the natural group is SU(d), the special unitary group of dimension d. The Haar measure is, by definition, the unique group-invariant measure, so it is used to average properties that are not unitarily invariant over all states, or over all unitaries.
A particularly widely used example of this is the spin system. For this system the relevant group is SU(2) which is the group of all 2x2 unitary operators with determinant 1. Since every operator in SU(2) is a rotation of the Bloch sphere, the Haar measure for spin-1/2 particles is invariant under all rotations of the Bloch sphere. This implies that the Haar measure is the rotationally invariant measure on the Bloch sphere, which can be thought of as a constant density distribution over the surface of the sphere.
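This rotational invariance is easy to check numerically. The following Python sketch (a generic illustration, not tied to any particular experiment) samples Haar-random qubit pure states by normalising complex Gaussian vectors and verifies that they average to the maximally mixed state, the unique density operator invariant under all rotations of the Bloch sphere:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_random_state(d, rng):
    """Normalised complex Gaussian vector: a Haar-distributed pure state in C^d."""
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

d, n_samples = 2, 100_000
avg = np.zeros((d, d), dtype=complex)
for _ in range(n_samples):
    psi = haar_random_state(d, rng)
    avg += np.outer(psi, psi.conj())
avg /= n_samples

# Unitary (rotational) invariance of the Haar measure forces the average
# to be the maximally mixed state I/2, up to sampling noise.
print(np.round(avg, 3))
```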
An important class of complex projective t-designs are symmetric informationally complete positive operator-valued measures (SIC-POVMs), which are complex projective 2-designs. Since such 2-designs must have at least $d^2$ elements, a SIC-POVM is a minimal-sized complex projective 2-design.
Spherical t-Designs
Complex projective t-designs have been studied in quantum information theory as quantum t-designs. These are closely related to spherical 2t-designs of vectors in the unit sphere of $\mathbb{C}^d$ which, when naturally embedded in $\mathbb{R}^{2d}$, give rise to complex projective t-designs.
Formally, we define a probability distribution over quantum states, $\{p_x, |\phi_x\rangle\}$, to be a complex projective t-design if

$$\sum_x p_x \bigl(|\phi_x\rangle\langle\phi_x|\bigr)^{\otimes t} = \int \bigl(|\psi\rangle\langle\psi|\bigr)^{\otimes t}\, d\psi .$$

Here, the integral over states is taken over the Haar measure on the unit sphere in $\mathbb{C}^d$.
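A standard finite example in dimension d = 2 is the set of six single-qubit stabiliser states (the eigenstates of the Pauli operators), which form a complex projective 2-design. The following Python sketch verifies the defining condition for t = 2 numerically, using the standard fact that the Haar integral equals the projector onto the symmetric subspace of $(\mathbb{C}^d)^{\otimes t}$ divided by its dimension $\binom{d+t-1}{t}$:

```python
import numpy as np
from itertools import product

# The six single-qubit stabilizer states: eigenstates of the Pauli X, Y, Z operators.
s = 1 / np.sqrt(2)
states = [
    np.array([1, 0], dtype=complex),        # |0>
    np.array([0, 1], dtype=complex),        # |1>
    np.array([s, s], dtype=complex),        # |+>
    np.array([s, -s], dtype=complex),       # |->
    np.array([s, 1j * s], dtype=complex),   # |+i>
    np.array([s, -1j * s], dtype=complex),  # |-i>
]
d, t = 2, 2

# Left-hand side: the average of (|phi><phi|)^(tensor t) over the finite set.
lhs = sum(np.kron(np.outer(phi, phi.conj()), np.outer(phi, phi.conj())) for phi in states)
lhs = lhs / len(states)

# Right-hand side: the Haar integral, i.e. the projector onto the symmetric
# subspace of (C^2)^(tensor 2), divided by its dimension C(3, 2) = 3.
swap = np.zeros((d * d, d * d))
for i, j in product(range(d), repeat=2):
    swap[j * d + i, i * d + j] = 1
rhs = (np.eye(d * d) + swap) / 2 / 3

print("2-design condition satisfied:", np.allclose(lhs, rhs))   # True
```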
Exact t-designs over quantum states cannot be distinguished from the uniform probability distribution over all states when using t copies of a state from the probability distribution. However, in practice exact t-designs may be difficult to construct.
Approximate t-designs are most useful because they can be implemented efficiently, i.e. a quantum state distributed according to the probability distribution can be generated efficiently.
This efficient construction also implies that the POVM formed from the design's operators can be implemented efficiently.
The technical definition of an approximate t-design is:

If

$$(1-\epsilon)\int \bigl(|\psi\rangle\langle\psi|\bigr)^{\otimes t}\, d\psi \;\le\; \sum_x p_x \bigl(|\phi_x\rangle\langle\phi_x|\bigr)^{\otimes t}$$

and

$$\sum_x p_x \bigl(|\phi_x\rangle\langle\phi_x|\bigr)^{\otimes t} \;\le\; (1+\epsilon)\int \bigl(|\psi\rangle\langle\psi|\bigr)^{\otimes t}\, d\psi$$

(the inequalities being understood in the operator sense), then $\{p_x, |\phi_x\rangle\}$ is an $\epsilon$-approximate t-design.
It is possible, though perhaps inefficient, to find an $\epsilon$-approximate t-design consisting of quantum pure states for a fixed t.
Construction
For convenience d is assumed to be a power of 2.
Using the fact that for any d there exists a set of functions {0,...,d-1} {0,...,d-1} such that for any distinct {0,...,d-1} the image under f, where f is chosen at random from S, is exactly the uniform distribution over tuples of N elements of {0,...,d-1}.
Let be drawn from the Haar measure. Let be the probability distribution of and let . Finally let be drawn from P. If we define with probability and with probability then:
for odd j and for even j.
Using this and Gaussian quadrature we can construct so that is an approximate t-design.
Unitary t-Designs
Unitary t-designs are analogous to spherical designs in that they reproduce the entire unitary group via a finite collection of unitary matrices. The theory of unitary 2-designs was developed in 2006 specifically to achieve a practical means of efficient and scalable randomized benchmarking to assess the errors in quantum computing operations, called gates. Since then unitary t-designs have been found useful in other areas of quantum computing and more broadly in quantum information theory and in fields as far reaching as black hole physics. Unitary t-designs are especially relevant to randomization tasks in quantum computing since ideal operations are usually represented by unitary operators.
Elements of a unitary t-design are elements of the unitary group, U(d), the group of d × d unitary matrices. A t-design of unitary operators will generate a t-design of states.
Suppose $\{U_k\}$ is a unitary t-design (i.e. a set of unitary operators). Then for any pure state $|\phi\rangle$ let $|\phi_k\rangle = U_k|\phi\rangle$. Then $\{|\phi_k\rangle\}$ will always be a t-design for states.
Formally, define a set X of unitary operators to be a unitary t-design if

$$\frac{1}{|X|}\sum_{U \in X} U^{\otimes t} \otimes \bigl(U^{*}\bigr)^{\otimes t} = \int_{U(d)} U^{\otimes t} \otimes \bigl(U^{*}\bigr)^{\otimes t}\, dU .$$
Observe that the space linearly spanned by the matrices over all choices of U is identical to the restriction and This observation leads to a conclusion about the duality between unitary designs and unitary codes.
Using the permutation maps it is possible to verify directly that a set of unitary matrices forms a t-design.
One direct result of this is that, for any finite $X \subset U(d)$,

$$\frac{1}{|X|^{2}}\sum_{U,V \in X} \bigl|\operatorname{tr}\bigl(U^{\dagger}V\bigr)\bigr|^{2t} \;\ge\; \int_{U(d)}\!\int_{U(d)} \bigl|\operatorname{tr}\bigl(U^{\dagger}V\bigr)\bigr|^{2t}\, dU\, dV,$$

with equality if and only if X is a t-design.
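As a hedged numerical sketch of this criterion (using the frame-potential form of the bound together with the standard fact that the Haar-side value is $t!$ for $t \le d$), the four single-qubit Pauli matrices saturate the bound for t = 1 but not for t = 2, so they form a unitary 1-design but not a unitary 2-design:

```python
import math
import numpy as np

# Single-qubit Pauli matrices.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I, X, Y, Z]

def frame_potential(unitaries, t):
    """F_t(X) = (1/|X|^2) * sum over U, V of |tr(U^dagger V)|^(2t)."""
    n = len(unitaries)
    total = sum(abs(np.trace(U.conj().T @ V)) ** (2 * t)
                for U in unitaries for V in unitaries)
    return total / n**2

for t in (1, 2):
    print(f"t = {t}: frame potential = {frame_potential(paulis, t):.1f}, "
          f"Haar value = {math.factorial(t)}")
# t = 1: 1.0 == 1  -> the Paulis form a unitary 1-design.
# t = 2: 4.0 >  2  -> they do not form a unitary 2-design.
```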
1 and 2-designs have been examined in some detail and absolute bounds for the dimension of X, |X|, have been derived.
Bounds for unitary designs
Define $\mathrm{Hom}(U(d), t, t)$ as the set of functions on $U(d)$ that are homogeneous of degree t in the matrix entries of U and homogeneous of degree t in the entries of $U^{*}$. If

$$\frac{1}{|X|}\sum_{U \in X} f(U) = \int_{U(d)} f(U)\, dU$$

for every $f \in \mathrm{Hom}(U(d), t, t)$, then X is a unitary t-design.
We further define the inner product for functions $f$ and $g$ on $U(d)$ as the average value of $\bar{f} g$ over $U(d)$:

$$\langle f, g\rangle = \int_{U(d)} \overline{f(U)}\, g(U)\, dU ,$$

and $\langle f, g\rangle_X$ as the average value of $\overline{f(U)}\, g(U)$ over any finite subset $X \subset U(d)$:

$$\langle f, g\rangle_X = \frac{1}{|X|}\sum_{U \in X} \overline{f(U)}\, g(U) .$$
It follows that X is a unitary t-design if and only if $\langle f, 1\rangle_X = \langle f, 1\rangle$ for every $f \in \mathrm{Hom}(U(d), t, t)$.
From the above it is demonstrable that if X is a t-design then is an absolute bound for the design. This imposes an upper bound on the size of a unitary design. This bound is absolute meaning it depends only on the strength of the design or the degree of the code, and not the distances in the subset, X.
A unitary code is a finite subset of the unitary group in which a few inner product values occur between elements. Specifically, a unitary code is defined as a finite subset $X \subset U(d)$ in which $\bigl|\operatorname{tr}\bigl(U^{\dagger}M\bigr)\bigr|$ takes only a few distinct values as U and M range over distinct elements of X.
It follows that and if U and M are orthogonal:
See also
Spherical design
Equiangular lines
Welch bounds
References
Quantum information science
Information theory
Quantum information theory | Quantum t-design | [
"Mathematics",
"Technology",
"Engineering"
] | 1,588 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
20,212,950 | https://en.wikipedia.org/wiki/Chameleon%20particle | The chameleon is a hypothetical scalar particle that couples to matter more weakly than gravity, postulated as a dark energy candidate. Due to a non-linear self-interaction, it has a variable effective mass which is an increasing function of the ambient energy density—as a result, the range of the force mediated by the particle is predicted to be very small in regions of high density (for example on Earth, where it is less than 1 mm) but much larger in low-density intergalactic regions: out in the cosmos chameleon models permit a range of up to several thousand parsecs. As a result of this variable mass, the hypothetical fifth force mediated by the chameleon is able to evade current constraints on equivalence principle violation derived from terrestrial experiments even if it couples to matter with a strength equal or greater than that of gravity. Although this property would allow the chameleon to drive the currently observed acceleration of the universe's expansion, it also makes it very difficult to test for experimentally.
In 2021, physicists suggested that an excess reported by the dark matter detector XENON1T could be a signature of dark energy rather than dark matter, specifically of chameleon particles, but in July 2022 a new analysis by XENONnT discarded the excess.
Hypothetical properties
Chameleon particles were proposed in 2003 by Khoury and Weltman.
In most theories, chameleons have an effective mass that scales as some power of the local energy density, $m_{\text{eff}} \propto \rho^{\alpha}$, where the exponent $\alpha$ is positive, so that the mass grows with the ambient density.
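As an illustrative worked example (a common textbook-style choice of self-interaction, not the model used in any particular experiment), take an inverse power-law potential with a linear matter coupling and minimise the effective potential:

$$V_{\text{eff}}(\phi) = \frac{\Lambda^{5}}{\phi} + \beta\,\frac{\rho\,\phi}{M_{\text{Pl}}}, \qquad V_{\text{eff}}'(\phi_{\min}) = 0 \;\Rightarrow\; \phi_{\min} = \sqrt{\frac{\Lambda^{5} M_{\text{Pl}}}{\beta\rho}},$$

$$m_{\text{eff}}^{2} = V_{\text{eff}}''(\phi_{\min}) = \frac{2\Lambda^{5}}{\phi_{\min}^{3}} \propto \rho^{3/2},$$

so in this example $m_{\text{eff}} \propto \rho^{3/4}$: the denser the environment, the heavier the chameleon and the shorter the range of the force it mediates.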
Chameleons also couple to photons, allowing photons and chameleons to oscillate between each other in the presence of an external magnetic field.
Chameleons can be confined in hollow containers because their mass increases rapidly as they penetrate the container wall, causing them to reflect. One strategy to search experimentally for chameleons is to direct photons into a cavity, confining the chameleons produced, and then to switch off the light source. Chameleons would be indicated by the presence of an afterglow as they decay back into photons.
Experimental searches
A number of experiments have attempted to detect chameleons along with axions.
The GammeV experiment is a search for axions, but it has been used to look for chameleons too. It consists of a cylindrical chamber inserted in a 5 T magnetic field. The ends of the chamber are glass windows, allowing light from a laser to enter and afterglow to exit. GammeV set limits on the chameleon coupling to photons in 2009.
CHASE (CHameleon Afterglow SEarch) results published in November 2010, improve the limits on mass by 2 orders of magnitude and 5 orders for photon coupling.
A 2014 neutron mirror measurement excluded the chameleon field over a range of values of the coupling constant $\beta$, where the effective potential of the chameleon quanta is written as $V_{\text{eff}}(\phi) = V(\phi) + \beta\,\rho\,\phi/M_{\text{Pl}}$, with $\rho$ the mass density of the environment, $V(\phi)$ the chameleon potential and $M_{\text{Pl}}$ the reduced Planck mass.
The CERN Axion Solar Telescope has been suggested as a tool for detecting chameleons.
References
Citations
Journal articles
Astroparticle physics
Bosons
Dark energy
Force carriers
Hypothetical elementary particles
Physical cosmology
Subatomic particles with spin 0 | Chameleon particle | [
"Physics",
"Astronomy"
] | 661 | [
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Fundamental interactions",
"Dark energy",
"Hypothetical elementary particles",
"Physics beyond the Standard Model",
"Astronomical sub-disciplines",
"Concepts in astronomy",
"Astroparticle physics",
"Energy (physics)",
... |
20,213,962 | https://en.wikipedia.org/wiki/Quantum%20reference%20frame | A quantum reference frame is a reference frame which is treated quantum theoretically. It, like any reference frame, is an abstract coordinate system which defines physical quantities, such as time, position, momentum, spin, and so on. Because it is treated within the formalism of quantum theory, it has some interesting properties which do not exist in a normal classical reference frame.
Reference frame in classical mechanics and inertial frame
Consider a simple physics problem: a car is moving such that it covers a distance of 1 mile in every 2 minutes, what is its velocity in metres per second? With some conversion and calculation, one can come up with the answer "13.41m/s"; on the other hand, one can instead answer "0, relative to itself". The first answer is correct because it recognises a reference frame is implied in the problem. The second one, albeit pedantic, is also correct because it exploits the fact that there is not a particular reference frame specified by the problem. This simple problem illustrates the importance of a reference frame: a reference frame is quintessential in a clear description of a system, whether it is included implicitly or explicitly.
When speaking of a car moving towards east, one is referring to a particular point on the surface of the Earth; moreover, as the Earth is rotating, the car is actually moving towards a changing direction, with respect to the Sun. In fact, this is the best one can do: describing a system in relation to some reference frame. Describing a system with respect to an absolute space does not make much sense because an absolute space, if it exists, is unobservable. Hence, it is impossible to describe the path of the car in the above example with respect to some absolute space. This notion of absolute space troubled a lot of physicists over the centuries, including Newton. Indeed, Newton was fully aware of this and stated that all inertial frames are observationally equivalent to each other. Simply put, relative motions of a system of bodies do not depend on the inertial motion of the whole system.
An inertial reference frame (or inertial frame in short) is a frame in which all the physical laws hold. For instance, in a rotating reference frame, Newton's laws have to be modified because there is an extra Coriolis force (such frame is an example of non-inertial frame). Here, "rotating" means "rotating with respect to some inertial frame". Therefore, although it is true that a reference frame can always be chosen to be any physical system for convenience, any system has to be eventually described by an inertial frame, directly or indirectly. Finally, one may ask how an inertial frame can be found, and the answer lies in the Newton's laws, at least in Newtonian mechanics: the first law guarantees the existence of an inertial frame while the second and third law are used to examine whether a given reference frame is an inertial one or not.
It may appear that an inertial frame can now be easily found given Newton's laws, as empirical tests are accessible. Quite the contrary; an absolutely inertial frame is not and will most likely never be known. Instead, an inertial frame is approximated. As long as the error of the approximation is undetectable by measurements, the approximately inertial frame (or simply "effective frame") is reasonably close to an absolutely inertial frame. With the effective frame, and assuming the physical laws are valid in such a frame, descriptions of systems will end up as good as if the absolutely inertial frame were used. As a digression, the effective frame astronomers use is a system called the "International Celestial Reference Frame" (ICRF), defined by 212 radio sources and with sub-milliarcsecond accuracy. However, it is likely that a better one will be needed when a more accurate approximation is required.
Reconsidering the problem at the very beginning, one can certainly find a flaw of ambiguity in it, but it is generally understood that a standard reference frame is implicitly used in the problem. In fact, when a reference frame is classical, whether or not including it in the physical description of a system is irrelevant. One will get the same prediction by treating the reference frame internally or externally.
To illustrate the point further, a simple system with a ball bouncing off a wall is used. In this system, the wall can be treated either as an external potential or as a dynamical system interacting with the ball. The former involves putting the external potential in the equations of motion of the ball while the latter treats the position of the wall as a dynamical degree of freedom. Both treatments provide the same prediction, and neither is particularly preferred over the other. However, as will be discussed below, such freedom of choice ceases to exist when the system is quantum mechanical.
Quantum reference frame
A reference frame can be treated in the formalism of quantum theory, and, in this case, such is referred as a quantum reference frame. Despite different name and treatment, a quantum reference frame still shares much of the notions with a reference frame in classical mechanics. It is associated to some physical system, and it is relational.
For example, if a spin-1/2 particle is said to be in the state , a reference frame is implied, and it can be understood to be some reference frame with respect to an apparatus in a lab. It is obvious that the description of the particle does not place it in an absolute space, and doing so would make no sense at all because, as mentioned above, absolute space is empirically unobservable. On the other hand, if a magnetic field along y-axis is said to be given, the behaviour of the particle in such field can then be described. In this sense, y and z are just relative directions. They do not and need not have absolute meaning.
One can observe that a z direction used in a laboratory in Berlin is generally totally different from a z direction used in a laboratory in Melbourne. Two laboratories trying to establish a single shared reference frame will face important issues involving alignment. The study of this sort of communication and coordination is a major topic in quantum information theory.
Just as in this spin-1/2 particle example, quantum reference frames are almost always treated implicitly in the definition of quantum states, and the process of including the reference frame in a quantum state is called quantisation/internalisation of reference frame while the process of excluding the reference frame from a quantum state is called dequantisation/externalisation of reference frame. Unlike the classical case, in which treating a reference internally or externally is purely an aesthetic choice, internalising and externalising a reference frame does make a difference in quantum theory.
One final remark may be made on the existence of a quantum reference frame. After all, a reference frame, by definition, has a well-defined position and momentum, while quantum theory, namely the uncertainty principle, states that one cannot describe any quantum system with well-defined position and momentum simultaneously, so it seems there is some contradiction between the two. It turns out that an effective frame, in this case a classical one, is used as a reference frame, just as in Newtonian mechanics a nearly inertial frame is used, and physical laws are assumed to be valid in this effective frame. In other words, whether motion in the chosen reference frame is inertial or not is irrelevant.
The following treatment of a hydrogen atom motivated by Aharonov and Kaufherr can shed light on the matter. Supposing a hydrogen atom is given in a well-defined state of motion, how can one describe the position of the electron? The answer is not to describe the electron's position relative to the same coordinates in which the atom is in motion, because doing so would violate the uncertainty principle, but to describe its position relative to the nucleus. As a result, more can be said about the general case from this: in general, it is permissible, even in quantum theory, to have a system with well-defined position in one reference frame and well-defined motion in some other reference frame.
Further considerations of quantum reference frame
An example of treatment of reference frames in quantum theory
Consider a hydrogen atom. The Coulomb potential depends only on the distance between the proton and the electron:

$$V(\mathbf{r}_p, \mathbf{r}_e) = -\frac{e^{2}}{4\pi\varepsilon_0\,\bigl|\mathbf{r}_e - \mathbf{r}_p\bigr|} .$$
With this symmetry, the problem is reduced to that of a particle in a central potential:

$$\left[-\frac{\hbar^{2}}{2m}\nabla^{2} + V(r)\right]\psi(\mathbf{r}) = E\,\psi(\mathbf{r}) .$$
Using separation of variables, the solutions of the equation can be written as products of radial and angular parts:

$$\psi_{n\ell m}(r, \theta, \varphi) = R_{n\ell}(r)\, Y_{\ell m}(\theta, \varphi),$$

where $\ell$, $m$ and $n$ are the orbital angular momentum, magnetic, and energy quantum numbers, respectively.
Now consider the Schrödinger equation for the proton and the electron:

$$\left[-\frac{\hbar^{2}}{2m_p}\nabla^{2}_{\mathbf{r}_p} - \frac{\hbar^{2}}{2m_e}\nabla^{2}_{\mathbf{r}_e} + V\bigl(|\mathbf{r}_e - \mathbf{r}_p|\bigr)\right]\Psi(\mathbf{r}_p, \mathbf{r}_e) = E\,\Psi(\mathbf{r}_p, \mathbf{r}_e) .$$
A change of variables to relational and centre-of-mass coordinates, $\mathbf{r} = \mathbf{r}_e - \mathbf{r}_p$ and $\mathbf{R} = (m_p\mathbf{r}_p + m_e\mathbf{r}_e)/M$, yields

$$\left[-\frac{\hbar^{2}}{2M}\nabla^{2}_{\mathbf{R}} - \frac{\hbar^{2}}{2\mu}\nabla^{2}_{\mathbf{r}} + V(r)\right]\Psi(\mathbf{R}, \mathbf{r}) = E\,\Psi(\mathbf{R}, \mathbf{r}),$$

where $M = m_p + m_e$ is the total mass and $\mu = m_p m_e / M$ is the reduced mass. A final change to spherical coordinates followed by a separation of variables will yield the equation for $\psi_{n\ell m}$ from above.
However, if the change of variables done earlier is now to be reversed, the centre-of-mass coordinate needs to be put back into the wavefunction, which then takes the form

$$\Psi(\mathbf{r}_p, \mathbf{r}_e) = \chi\!\left(\frac{m_p\mathbf{r}_p + m_e\mathbf{r}_e}{M}\right)\psi_{n\ell m}\bigl(\mathbf{r}_e - \mathbf{r}_p\bigr),$$

where $\chi$ is the centre-of-mass wavefunction.
The importance of this result is that it shows the wavefunction for the compound system is entangled, contrary to what one would normally think in a classical standpoint. More importantly, it shows the energy of the hydrogen atom is not only associated with the electron but also with the proton, and the total state is not decomposable into a state for the electron and one for the proton separately.
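A small numerical sketch of this point (one-dimensional, with Gaussian wave packets and hypothetical widths and masses chosen purely for illustration): a wavefunction that factorises into centre-of-mass and relative parts has more than one Schmidt coefficient when re-expressed in the two particles' own coordinates, i.e. it is entangled in that partition.

```python
import numpy as np

# Hypothetical parameters in arbitrary units; only the structure of the state matters.
m_p, m_e = 1836.0, 1.0
M = m_p + m_e
sigma_cm, sigma_rel = 1.0, 0.5     # assumed widths of the two Gaussian factors

x = np.linspace(-6, 6, 200)
xp, xe = np.meshgrid(x, x, indexing="ij")  # proton and electron coordinates

R = (m_p * xp + m_e * xe) / M              # centre-of-mass coordinate
r = xe - xp                                # relative coordinate

# A product state in (R, r) ...
psi = np.exp(-R**2 / (2 * sigma_cm**2)) * np.exp(-r**2 / (2 * sigma_rel**2))
psi /= np.linalg.norm(psi)

# ... is entangled in the (x_p, x_e) partition: the Schmidt (singular value)
# decomposition has more than one significant coefficient.
schmidt_coefficients = np.linalg.svd(psi, compute_uv=False)
print("Schmidt coefficients above 1e-3:", int(np.sum(schmidt_coefficients > 1e-3)))
```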
Superselection rules
Superselection rules, in short, are postulated rules forbidding the preparation of quantum states that exhibit coherence between eigenstates of certain observables. It was originally introduced to impose additional restriction to quantum theory beyond those of selection rules. As an example, superselection rules for electric charges disallow the preparation of a coherent superposition of different charge eigenstates.
As it turns out, the lack of a reference frame is mathematically equivalent to superselection rules. This is a powerful statement because superselection rules have long been thought to have axiomatic nature, and now its fundamental standing and even its necessity are questioned. Nevertheless, it has been shown that it is, in principle, always possible (though not always easy) to lift all superselection rules on a quantum system.
Degradation of a quantum reference frame
During a measurement, whenever the relations between the system and the reference frame used is inquired, there is inevitably a disturbance to both of them, which is known as measurement back action. As this process is repeated, it decreases the accuracy of the measurement outcomes, and such reduction of the usability of a reference frame is referred to as the degradation of a quantum reference frame. A way to gauge the degradation of a reference frame is to quantify the longevity, namely, the number of measurements that can be made against the reference frame until certain error tolerance is exceeded.
For example, for a spin-$j$ reference frame, the maximum number of measurements that can be made before the error tolerance, $\epsilon$, is exceeded grows quadratically with the size of the reference frame; the longevity and the size of the reference frame are thus quadratically related in this particular case.
In this spin-$j$ system, the degradation is due to the loss of purity of the reference frame state. On the other hand, degradation can also be caused by misalignment of the background reference. It has been shown that, in such a case, the longevity has a linear relation with the size of the reference frame.
See also
Frame of reference
Information theory
Quantum information
Quantum spacetime
References
Quantum mechanics | Quantum reference frame | [
"Physics"
] | 2,355 | [
"Theoretical physics",
"Quantum mechanics"
] |
20,214,363 | https://en.wikipedia.org/wiki/Western%20Institute%20of%20Nanoelectronics | The Western Institute of Nanoelectronics (WIN) is a research institute founded in 2006 and headquartered at the UCLA Henry Samueli School of Engineering and Applied Science in Los Angeles, California, US. The WIN Center networks multiple universities with industry- and government-based sponsors (members of the Semiconductor Industry Association consortium NRI) and the National Institute of Standards and Technology (NIST) in pursuit of replacing Complementary Metal-Oxide-Semiconductor Field-Effect Transistors (CMOS FETs). WIN's research is focused on spintronics, spanning materials, devices and device interactions, metrology, and circuits/architectures. Sponsors include:
Nanoelectronics Research Initiative (NRI)
UC Discovery
NIST
Intel Corporation
WIN is one of four research centers within the Nanoelectronics Research Initiative (NRI). Dr. Kang L. Wang serves as director. Current WIN university participants include four University of California campuses (Los Angeles, Berkeley, Santa Barbara and Irvine) and Stanford University, Denver University, Portland State University, and University of Iowa. NRI is a research initiative of the Nanoelectronics Research Corporation (NERC). NERC in turn is a subsidiary of the Semiconductor Research Corporation.
References
External links
Western Institute of Nanoelectronics website
WIN at SRC
Nanoelectronics
2006 establishments in California
University of California, Los Angeles
UCLA research institutes | Western Institute of Nanoelectronics | [
"Materials_science"
] | 284 | [
"Nanotechnology",
"Nanoelectronics"
] |