id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
1,358,959 | https://en.wikipedia.org/wiki/Sobol%20sequence | Sobol’ sequences (also called LPτ sequences or (t, s) sequences in base 2) are a type of quasi-random low-discrepancy sequence. They were first introduced by the Russian mathematician Ilya M. Sobol’ (Илья Меерович Соболь) in 1967.
These sequences use a base of two to form successively finer uniform partitions of the unit interval and then reorder the coordinates in each dimension.
Good distributions in the s-dimensional unit hypercube
Let I^s = [0,1]^s be the s-dimensional unit hypercube, and f a real integrable function over I^s. The original motivation of Sobol’ was to construct a sequence x_n in I^s so that
$\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n} f(x_i) = \int_{I^s} f\,dx$
and the convergence be as fast as possible.
It is more or less clear that for the sum to converge towards the integral, the points x_n should fill I^s minimizing the holes. Another good property would be that the projections of x_n on a lower-dimensional face of I^s leave very few holes as well. Hence the homogeneous filling of I^s does not qualify because in lower dimensions many points will be at the same place, therefore useless for the integral estimation.
These good distributions are called (t,m,s)-nets and (t,s)-sequences in base b. To introduce them, first define an elementary s-interval in base b as a subset of I^s of the form
$\prod_{j=1}^{s} \left[ \frac{a_j}{b^{d_j}},\ \frac{a_j + 1}{b^{d_j}} \right),$
where a_j and d_j are non-negative integers, and $a_j < b^{d_j}$ for all j in {1, ..., s}.
Given two integers $0 \le t \le m$, a (t,m,s)-net in base b is a sequence x_n of b^m points of I^s such that every elementary interval P in base b of hypervolume λ(P) = b^(t−m) contains exactly b^t points of the sequence.
Given a non-negative integer t, a (t,s)-sequence in base b is an infinite sequence of points x_n such that for all integers $g \ge 0$ and $m \ge t$, the points $x_n$ with $g b^m \le n < (g+1) b^m$ form a (t,m,s)-net in base b.
In his article, Sobol’ described Πτ-meshes and LPτ sequences, which are (t,m,s)-nets and (t,s)-sequences in base 2 respectively. The terms (t,m,s)-nets and (t,s)-sequences in base b (also called Niederreiter sequences) were coined in 1988 by Harald Niederreiter. The term Sobol’ sequences was introduced in later English-language papers in comparison with Halton, Faure and other low-discrepancy sequences.
A fast algorithm
A more efficient Gray code implementation was proposed by Antonov and Saleev.
As for the generation of Sobol’ numbers, they are clearly aided by the use of Gray code $G(n) = n \oplus \lfloor n/2 \rfloor$ instead of n for constructing the n-th point draw.
Suppose we have already generated all the Sobol’ sequence draws up to n − 1 and kept in memory the values x_{n−1,j} for all the required dimensions. Since the Gray code G(n) differs from that of the preceding one G(n − 1) by just a single, say the k-th, bit (which is the rightmost zero bit of n − 1), all that needs to be done is a single XOR operation for each dimension in order to propagate all of the x_{n−1} to x_n, i.e.
$x_{n,j} = x_{n-1,j} \oplus v_{k,j}.$
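A minimal sketch of this Gray-code (Antonov–Saleev) update in Python is given below. The direction integers are built from one commonly published convention for the first two dimensions (dimension 1: m_k = 1 for all k; dimension 2: the degree-1 primitive polynomial x + 1 with m_1 = 1); these particular choices are an assumption of the example, not taken from the text above.

```python
# Minimal 2-D Sobol'-type generator using the Gray-code (Antonov-Saleev)
# update x_n = x_{n-1} XOR v_k described above. The direction numbers follow
# one commonly used convention and are illustrative, not canonical.
BITS = 32

def direction_integers(degree, coeffs, m_init, bits=BITS):
    """Direction integers V_k = m_k * 2^(bits - k) from the usual recurrence."""
    m = list(m_init)                      # initial odd integers m_1 .. m_degree
    for k in range(degree, bits):
        new = m[k - degree] ^ (m[k - degree] << degree)   # m_{k-d} XOR 2^d m_{k-d}
        for i, a in enumerate(coeffs, start=1):           # middle polynomial terms
            if a:
                new ^= m[k - i] << i                      # XOR 2^i a_i m_{k-i}
        m.append(new)
    return [m[k] << (bits - k - 1) for k in range(bits)]

V = [
    [1 << (BITS - k - 1) for k in range(BITS)],  # dim 1: m_k = 1 (van der Corput)
    direction_integers(1, [], [1]),              # dim 2: polynomial x + 1, m_1 = 1
]

def sobol_2d(n_points):
    """First n_points of the sequence in Gray-code order (origin skipped)."""
    x = [0, 0]
    points = []
    for i in range(1, n_points + 1):
        n = i - 1
        c = ((~n) & (n + 1)).bit_length()   # position of the rightmost zero bit of n
        for j in range(2):
            x[j] ^= V[j][c - 1]             # the single XOR per dimension
        points.append(tuple(v / 2.0**BITS for v in x))
    return points

print(sobol_2d(4))  # (0.5, 0.5), (0.75, 0.25), (0.25, 0.75), (0.375, 0.375)
```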
Additional uniformity properties
Sobol’ introduced additional uniformity conditions known as property A and A’.
Definition A low-discrepancy sequence is said to satisfy Property A if for any binary segment (not an arbitrary subset) of the d-dimensional sequence of length 2^d there is exactly one draw in each of the 2^d hypercubes that result from subdividing the unit hypercube along each of its length extensions into halves.
Definition A low-discrepancy sequence is said to satisfy Property A’ if for any binary segment (not an arbitrary subset) of the d-dimensional sequence of length 4^d there is exactly one draw in each of the 4^d hypercubes that result from subdividing the unit hypercube along each of its length extensions into four equal parts.
There are mathematical conditions that guarantee properties A and A'.
Theorem The d-dimensional Sobol’ sequence possesses Property A iff
where Vd is the d × d binary matrix defined by
with vk,j,m denoting the m-th digit after the binary point of the direction number vk,j = (0.vk,j,1vk,j,2...)2.
Theorem The d-dimensional Sobol’ sequence possesses Property A' iff
where Ud is the 2d × 2d binary matrix defined by
with vk,j,m denoting the m-th digit after the binary point of the direction number vk,j = (0.vk,j,1vk,j,2...)2.
Tests for properties A and A’ are independent. Thus it is possible to construct a Sobol’ sequence that satisfies both properties A and A’ or only one of them.
The initialisation of Sobol’ numbers
To construct a Sobol’ sequence, a set of direction numbers vi,j needs to be selected. There is some freedom in the selection of initial direction numbers. Therefore, it is possible to receive different realisations of the Sobol’ sequence for selected dimensions. A bad selection of initial numbers can considerably reduce the efficiency of Sobol’ sequences when used for computation.
Arguably the easiest choice for the initialisation numbers is just to have the l-th leftmost bit set, and all other bits to be zero, i.e. mk,j = 1 for all k and j. This initialisation is usually called unit initialisation. However, such a sequence fails the test for Property A and A’ even for low dimensions and hence this initialisation is bad.
Implementation and availability
Good initialisation numbers for different numbers of dimensions are provided by several authors. For example, Sobol’ provides initialisation numbers for dimensions up to 51. The same set of initialisation numbers is used by Bratley and Fox.
Initialisation numbers for high dimensions are available on Joe and Kuo. Peter Jäckel provides initialisation numbers up to dimension 32 in his book "Monte Carlo methods in finance".
Other implementations are available as C, Fortran 77, or Fortran 90 routines in the Numerical Recipes collection of software. A free/open-source implementation in up to 1111 dimensions, based on the Joe and Kuo initialisation numbers, is available in C, and up to 21201 dimensions in Python and Julia. A different free/open-source implementation in up to 1111 dimensions is available for C++, Fortran 90, Matlab, and Python.
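As a usage illustration, the sketch below draws Sobol’ points with SciPy's quasi-Monte Carlo module (scipy.stats.qmc); identifying the Python implementation mentioned above with SciPy is an assumption of this example.

```python
# Illustrative library usage with SciPy's quasi-Monte Carlo module (scipy >= 1.7);
# treating the Python implementation mentioned above as SciPy is an assumption.
import numpy as np
from scipy.stats import qmc

sampler = qmc.Sobol(d=2, scramble=False)   # 2-dimensional, unscrambled
points = sampler.random_base2(m=10)        # 2^10 = 1024 points in [0, 1)^2

# Quasi-Monte Carlo estimate of the integral of f(x, y) = x*y over the unit square
print(np.mean(points[:, 0] * points[:, 1]))   # close to the exact value 0.25
```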
Commercial Sobol’ sequence generators are available within, for example, the NAG Library. BRODA Ltd. provides Sobol' and scrambled Sobol' sequence generators with additional uniformity properties A and A' up to a maximum dimension of 131,072. These generators were co-developed with Prof. I. Sobol'. MATLAB contains Sobol' sequence generators up to dimension 1111 as part of its Statistics Toolbox.
See also
Notes
References
External links
Collected Algorithms of the ACM (See algorithms 647, 659, and 738.)
Collection of Sobol’ sequences generator programming codes
Freeware C++ generator of Sobol’ sequence
Low-discrepancy sequences
Sequences and series | Sobol sequence | [
"Mathematics"
] | 1,558 | [
"Sequences and series",
"Mathematical analysis",
"Mathematical structures",
"Mathematical objects"
] |
1,359,360 | https://en.wikipedia.org/wiki/Ansatz | In physics and mathematics, an ansatz (; , meaning: "initial placement of a tool at a work piece", plural ansatzes or, from German, ansätze ; ) is an educated guess or an additional assumption made to help solve a problem, and which may later be verified to be part of the solution by its results.
Use
An ansatz is the establishment of the starting equation(s), the theorem(s), or the value(s) describing a mathematical or physical problem or solution. It typically provides an initial estimate or framework to the solution of a mathematical problem, and can also take into consideration the boundary conditions (in fact, an ansatz is sometimes thought of as a "trial answer" and an important technique in solving differential equations).
After an ansatz, which constitutes nothing more than an assumption, has been established, the equations are solved more precisely for the general function of interest, which then constitutes a confirmation of the assumption. In essence, an ansatz makes assumptions about the form of the solution to a problem so as to make the solution easier to find.
It has been demonstrated that machine learning techniques can be applied to provide initial estimates similar to those invented by humans and to discover new ones in case no ansatz is available.
Examples
Given a set of experimental data that looks to be clustered about a line, a linear ansatz could be made to find the parameters of the line by a least squares curve fit. Variational approximation methods use ansätze and then fit the parameters.
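A small illustration of the linear ansatz just described is sketched below: it fits y = a·x + b to synthetic noisy data by least squares. The data and parameter values are invented for the example.

```python
# Minimal sketch: fit the linear ansatz y = a*x + b to noisy data by least squares.
# The "experimental" data are synthetic, generated only for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.5 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

a, b = np.polyfit(x, y, deg=1)   # least-squares estimates of the ansatz parameters
print(a, b)                      # close to the underlying values 2.5 and 1.0
```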
Another example could be the mass, energy, and entropy balance equations that, considered simultaneously for purposes of the elementary operations of linear algebra, are the ansatz to most basic problems of thermodynamics.
Another example of an ansatz is to suppose the solution of a homogeneous linear differential equation to take an exponential form, or a power form in the case of a difference equation. More generally, one can guess a particular solution of a system of equations, and test such an ansatz by directly substituting the solution into the system of equations. In many cases, the assumed form of the solution is general enough that it can represent arbitrary functions, in such a way that the set of solutions found this way is a full set of all the solutions.
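The exponential ansatz for a homogeneous linear differential equation can be tested by direct substitution, as described above. The sketch below does this symbolically for the illustrative equation y'' − 3y' + 2y = 0, which is not taken from the article but chosen only to demonstrate the procedure.

```python
# Substituting the exponential ansatz y = exp(r*x) into the illustrative ODE
# y'' - 3 y' + 2 y = 0 reduces it to an algebraic equation for r.
import sympy as sp

x, r = sp.symbols("x r")
y = sp.exp(r * x)                        # the ansatz

residual = sp.diff(y, x, 2) - 3 * sp.diff(y, x) + 2 * y
char_eq = sp.simplify(residual / y)      # exp(r*x) factors out: r**2 - 3*r + 2
roots = sp.solve(sp.Eq(char_eq, 0), r)   # [1, 2], so e**x and e**(2*x) solve the ODE
print(char_eq, roots)
```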
See also
Method of undetermined coefficients
Bayesian inference
Bethe ansatz
Coupled cluster, a technique for solving the many-body problem that is based on an exponential Ansatz
Demarcation problem
Guesstimate
Heuristic
Hypothesis
Trial and error
Train of thought
References
Bibliography
Philosophy of physics
Concepts in physics
Mathematical terminology
German words and phrases | Ansatz | [
"Physics",
"Mathematics"
] | 543 | [
"Philosophy of physics",
"Applied and interdisciplinary physics",
"nan"
] |
1,359,407 | https://en.wikipedia.org/wiki/Pacific%20Tsunami%20Warning%20Center | The Pacific Tsunami Warning Center (PTWC), located on Ford Island, Hawaii, is one of two tsunami warning centers in the United States, covering Hawaii, Guam, American Samoa and the Northern Mariana Islands in the Pacific, as well as Puerto Rico, the U.S. Virgin Islands and the British Virgin Islands in the Caribbean Sea. Other parts of the United States are covered by the National Tsunami Warning Center.
PTWC is also the operational center of the Pacific Tsunami Warning System and issued tsunami warnings for dozens of countries from 1965 to 2014. In October 2014, the authority to issue tsunami warnings was delegated to individual member states. As a result, the center now issues advice rather than official warnings for non-U.S. coastlines, with the exception of the British Virgin Islands.
The PTWC uses seismic data as its starting point, but then takes into account oceanographic data when calculating possible threats. Tide gauges in the area of the earthquake are checked to establish if a tsunami has formed. The center then forecasts the future of the tsunami.
History
Up until the late 1940s, the United States had no way to warn the public about tsunami threats. After the 1946 Aleutian Islands earthquake, which generated a tsunami and killed more than 170 people in Hawaii, a plan was devised to warn the public of possible tsunami inundation. The facility became operational in 1948 and was called the Seismic Sea Wave Warning System (SSWWS), headquartered at the Coast and Geodetic Survey's seismological observatory in Honolulu, Hawaii.
Initially, the Seismic Sea Wave Warning System covered only the Hawaiian Islands and was limited to teletsunamis (distant events), using data from 4 seismic stations and 9 tide gauges. The 1960 Valdivia earthquake and tsunami, which killed thousands of people, led to the establishment of the Pacific Tsunami Warning System under the auspices of UNESCO's Intergovernmental Oceanographic Commission, with the Seismic Sea Wave Warning System as its operational center. As a result, the name of the facility was changed to the Pacific Tsunami Warning Center.
The expanded system became operational in April 1965 but, like its local predecessor, was limited to teletsunamis – tsunamis which are capable of causing damage far away from their source. The system covered all countries of the Pacific Ocean with data from 20 seismic stations around the world and 40 tide stations.
In the aftermath of the 1964 Alaska earthquake and tsunami, which killed 131 people, it was decided to create another warning system to provide timely warnings about local events for coastal areas of Alaska. After Congress approved funding in 1965, the Alaska Regional Tsunami Warning System was launched in September 1967 with observatories in Palmer, Adak and Sitka. At that time, PTWC ended its coverage of Alaska.
The 1975 Hawaii earthquake and tsunami, which killed several people, highlighted the threat of tsunamis caused by nearby events. As a result, PTWC began issuing tsunami warnings for local events near Hawaii.
In 1982, the Alaska Tsunami Warning Center's area of responsibility was enlarged to include California, Oregon and Washington, as well as British Columbia in Canada, but only for earthquakes in the vicinity of the West Coast. PTWC continued to provide coverage of teletsunamis. The Alaska center's responsibilities were expanded in 1996 to include all Pacific-wide sources, after which it became known as the West Coast/Alaska Tsunami Warning Center (WCATWC). As a result, PTWC's area of responsibility was further reduced.
On December 1, 2001, the PTWC was re-dedicated as the Richard H. Hagemeyer Pacific Tsunami Warning Center, in honor of the former U.S. Tsunami Program Manager and National Weather Service Pacific Region Director who managed the center for many years.
In 2005, in the aftermath of the 2004 Indian Ocean earthquake and tsunami, the Pacific Tsunami Warning Center's responsibilities were expanded to include tsunami guidance for the Indian Ocean, the South China Sea and the Caribbean Sea, though its authority to issue warnings was limited to Puerto Rico and the U.S. Virgin Islands. For all other areas, the decision to issue tsunami warnings was left to individual countries.
The responsibility for Puerto Rico and the U.S. Virgin Islands was passed to the West Coast/Alaska Tsunami Warning Center in June 2007, while PTWC continued to issue advice for other parts of the Caribbean Sea. In 2013, the West Coast/Alaska Tsunami Warning Center became known as the National Tsunami Warning Center.
PTWC discontinued its messages for the Indian Ocean in 2013 after regional tsunami warning centers were opened in Australia, India and Indonesia.
In October 2014, the authority to issue official tsunami warnings for coastlines in the Pacific was delegated to individual member states. This happened because warnings and watches issued by PTWC caused confusion when they conflicted with a country's independently derived level of alert. As a result, the center now issues advice rather than official warnings for all non-U.S. coastlines, with the exception of the British Virgin Islands.
In 2015, the annual operating cost of the Pacific Tsunami Warning System was estimated to be between 50 and 80 million U.S. dollars.
In April 2017, the responsibility for Puerto Rico and the U.S. Virgin Islands returned to PTWC, along with the British Virgin Islands, to consolidate Caribbean responsibilities under one warning center.
As of 2023, the Pacific Tsunami Warning System has access to about 600 high-quality seismic stations around the world and about 500 coastal and deep-ocean sea level stations. It has 46 member states: Brunei, Cambodia, Canada, Chile (including Easter Island and the Juan Fernández Islands), China (which is considered to include Hong Kong and Macau), Colombia, Costa Rica, East Timor, North Korea, Ecuador (including the Galapagos Islands), El Salvador, Guatemala, Honduras, Indonesia, Japan, Malaysia, Mexico, Nicaragua, Panama, Peru, Philippines, South Korea, Russia, Singapore, Thailand, United States (including Guam, Northern Mariana Islands, and the Minor Outlying Islands), Vietnam, Australia (including Norfolk Island), Cook Islands, Fiji, France (including French Polynesia, New Caledonia and Wallis and Futuna), Kiribati (including the Gilbert Islands, the Phoenix Islands and Kiritimati), the Marshall Islands (including Kwajalein Atoll and Majuro), the Federated States of Micronesia, Nauru, New Zealand (including the Kermadec Islands), Niue, Palau, Papua New Guinea, Samoa, the Solomon Islands, Tokelau, Tonga, Tuvalu, the United Kingdom (including the Pitcairn Islands), and Vanuatu.
Coverage area
Alert levels
Official tsunami warnings and watches are limited to U.S. coastlines, with the exception of the British Virgin Islands. PTWC messages for other regions do not include alerts, but rather advice, as the authority to issue tsunami warnings was delegated to member states in 2014 to avoid confusion among the public.
Current format
Old format (before 2014)
The alert levels below were retired on October 1, 2014.
Distribution
Local populations in the United States of America receive tsunami information through radio and television receivers connected to the Emergency Alert System, and in some places (such as Hawaii) civil defense sirens and roving loudspeaker broadcasts from police vehicles. The public can subscribe to the RSS feed or email alerts from the PTWC web site, and the UNESCO site. Email and text messages are also available from the USGS Earthquake Notification Service which includes tsunami alerts.
Deep-ocean tsunami detection
In 1995, NOAA began developing the Deep-ocean Assessment and Reporting of Tsunamis (DART) system. By 2001, an array of six stations had been deployed in the Pacific Ocean.
Beginning in 2005, as a result of the tsunami caused by the 2004 Indian Ocean earthquake, plans were announced to add 32 more DART buoys to be operational by mid-2007.
These stations give detailed information about tsunamis while they are still far off shore. Each station consists of a sea-bed bottom pressure recorder (at a depth of 1000–6000 m) which detects the passage of a tsunami and transmits the data to a surface buoy via acoustic modem. The surface buoy then radios the information to the PTWC via the GOES satellite system. The bottom pressure recorder lasts for two years while the surface buoy is replaced every year. The system has considerably improved the forecasting and warning of tsunamis in the Pacific Ocean.
References
External links
US Tsunami Warning System
National Tsunami Warning Center
Northwest Pacific Tsunami Advisory
DART
How the US Tsunami Warning System works
Warning systems
Tsunami
Earthquake and seismic risk mitigation
Seismological observatories, organisations and projects
Disaster preparedness in the United States
Emergency management in Oceania
1949 establishments in Hawaii | Pacific Tsunami Warning Center | [
"Technology",
"Engineering"
] | 1,786 | [
"Structural engineering",
"Safety engineering",
"Measuring instruments",
"Warning systems",
"Earthquake and seismic risk mitigation"
] |
1,359,420 | https://en.wikipedia.org/wiki/Particle%20beam | A particle beam is a stream of charged or neutral particles. In particle accelerators, these particles can move with a velocity close to the speed of light. There is a difference between the creation and control of charged particle beams and neutral particle beams, as only the first type can be manipulated to a sufficient extent by devices based on electromagnetism. The manipulation and diagnostics of charged particle beams at high kinetic energies using particle accelerators are main topics of accelerator physics.
Sources
Charged particles such as electrons, positrons, and protons may be separated from their common surrounding. This can be accomplished by e.g. thermionic emission or arc discharge. The following devices are commonly used as sources for particle beams:
Ion source
Cathode-ray tube, or more specifically in one of its parts called electron gun. This is also part of traditional television and computer screens.
Photocathodes may also be built in as a part of an electron gun, using the photoelectric effect to separate particles from their substrate.
Neutron beams may be created by energetic proton beams which impact on a target, e.g. of beryllium material. (see article Particle therapy)
Firing a petawatt laser at a titanium foil to produce a proton beam.
Manipulation
Acceleration
Charged beams may be further accelerated by use of high resonant, sometimes also superconducting, microwave cavities. These devices accelerate particles by interaction with an electromagnetic field. Since the wavelength of hollow macroscopic, conducting devices is in the radio frequency (RF) band, the design of such cavities and other RF devices is also a part of accelerator physics.
More recently, plasma acceleration has emerged as a possibility to accelerate particles in a plasma medium, using the electromagnetic energy of pulsed high-power laser systems or the kinetic energy of other charged particles. This technique is under active development, but cannot provide reliable beams of sufficient quality at present.
Guidance
In all cases, the beam is steered with dipole magnets and focused with quadrupole magnets, with the end goal of reaching the desired position and beam spot size in the experiment.
Applications
High-energy physics
High-energy particle beams are used for particle physics experiments in large facilities; the most common examples being the Large Hadron Collider and the Tevatron.
Synchrotron radiation
Electron beams are employed in synchrotron light sources to produce X-ray radiation with a continuous spectrum over a wide frequency band which is called synchrotron radiation. This X-ray radiation is used at beamlines of the synchrotron light sources for a variety of spectroscopies (XAS, XANES, EXAFS, μ-XRF, μ-XRD) in order to probe and to characterize the structure and the chemical speciation of solids and biological materials.
Particle therapy
Energetic particle beams consisting of protons, neutrons, or positive ions (also called particle microbeams) may also be used for cancer treatment in particle therapy.
Linear accelerators use electron beams at near the speed of light to treat deep cancers in patients. A tungsten/molybdenum target can be moved into the beam to create X-rays to treat surface cancers.
Astrophysics and space physics
Many phenomena in astrophysics are attributed to particle beams of various kinds. Solar Type III radio bursts, the most common impulsive radio signatures from the Sun, are used by scientists as a tool to better understand solar accelerated electron beams. Additionally, particle beams cause instabilities when interacting with plasma, which may lead to conditions causing electrostatic solitary waves.
Military
The U.S. Advanced Research Projects Agency started work on particle beam weapons in 1958. The general idea of such weaponry is to hit a target object with a stream of accelerated particles with high kinetic energy, which is then transferred to the atoms, or molecules, of the target. The power needed to project a high-powered beam of this kind surpasses the production capabilities of any standard battlefield powerplant, thus such weapons are not anticipated to be produced in the foreseeable future.
See also
Electron beam
Ion beam
Astrophysical jet
Atomic beam
Accelerator neutrino
References
Accelerator physics
| Particle beam | [
"Physics"
] | 856 | [
"Applied and interdisciplinary physics",
"Accelerator physics",
"Experimental physics"
] |
1,359,541 | https://en.wikipedia.org/wiki/Solar%20rotation | Solar rotation varies with latitude. The Sun is not a solid body, but is composed of a gaseous plasma. Different latitudes rotate at different periods. The source of this differential rotation is an area of current research in solar astronomy. The rate of surface rotation is observed to be the fastest at the equator (latitude ) and to decrease as latitude increases. The solar rotation period is 25.67 days at the equator and 33.40 days at 75 degrees of latitude.
Surface rotation as an equation
The differential rotation rate of the photosphere can be approximated by the equation:
$\omega = A + B \sin^2(\varphi) + C \sin^4(\varphi)$
where $\omega$ is the angular velocity in degrees per day, $\varphi$ is the solar latitude, A is the angular velocity at the equator, and B, C are constants controlling the decrease in velocity with increasing latitude. The values of A, B, and C differ depending on the techniques used to make the measurement, as well as the time period studied. A current set of accepted average values is:
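A sketch of this rotation profile in Python, assuming the frequently cited Snodgrass & Ulrich (1990) coefficients (A ≈ 14.713, B ≈ −2.396, C ≈ −1.787 degrees per day; these specific numbers are an assumption of the example rather than values quoted above):

```python
# Surface differential-rotation profile omega = A + B*sin^2(phi) + C*sin^4(phi),
# using the Snodgrass & Ulrich (1990) coefficients as an assumed example set.
import numpy as np

A, B, C = 14.713, -2.396, -1.787          # degrees per day (assumed values)

def angular_velocity(latitude_deg):
    """Sidereal angular velocity (deg/day) at a given solar latitude."""
    s = np.sin(np.radians(latitude_deg))
    return A + B * s**2 + C * s**4

for lat in (0, 30, 60, 75):
    omega = angular_velocity(lat)
    print(f"latitude {lat:2d} deg: {omega:6.3f} deg/day, period {360.0 / omega:.2f} days")
```

At the equator this gives a sidereal period of about 24.5 days, consistent with the value quoted in the next section.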
Sidereal rotation
At the equator, the solar rotation period is 24.47 days. This is called the sidereal rotation period, and should not be confused with the synodic rotation period of 26.24 days, which is the time for a fixed feature on the Sun to rotate to the same apparent position as viewed from Earth (the Earth's orbital rotation is in the same direction as the Sun's rotation). The synodic period is longer because the Sun must rotate for a sidereal period plus an extra amount due to the orbital motion of Earth around the Sun. Note that astrophysical literature does not typically use the equatorial rotation period, but instead often uses the definition of a Carrington rotation: a synodic rotation period of 27.2753 days or a sidereal period of 25.38 days. This chosen period roughly corresponds to the prograde rotation at a latitude of 26° north or south, which is consistent with the typical latitude of sunspots and corresponding periodic solar activity. When the Sun is viewed from the "north" (above Earth's north pole), solar rotation is counterclockwise (eastward). To a person standing on Earth's North Pole at the time of equinox, sunspots would appear to move from left to right across the Sun's face.
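The relation between the sidereal and synodic periods quoted above can be checked directly: because Earth orbits in the same direction as the Sun rotates, the synodic rate is the sidereal rate minus Earth's orbital rate.

```python
# Check of the sidereal/synodic relation: 1/T_syn = 1/T_sid - 1/T_orbit
# for prograde rotation, using the periods quoted in the text.
T_sidereal = 25.38       # days (Carrington sidereal period)
T_earth_orbit = 365.25   # days

T_synodic = 1.0 / (1.0 / T_sidereal - 1.0 / T_earth_orbit)
print(T_synodic)         # about 27.28 days, matching the 27.2753-day Carrington period
```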
In Stonyhurst heliographic coordinates, the left side of the Sun's face is called East, and the right side of the Sun's face is called West. Therefore, sunspots are said to move across the Sun's face from east to west.
Bartels' Rotation Number
Bartels' Rotation Number is a serial count that numbers the apparent rotations of the Sun as viewed from Earth, and is used to track certain recurring or shifting patterns of solar activity. For this purpose, each rotation has a length of exactly 27 days, close to the synodic Carrington rotation rate. Julius Bartels arbitrarily assigned rotation day one to 8 February 1832. The serial number serves as a kind of calendar to mark the recurrence periods of solar and geophysical parameters.
Carrington rotation
The Carrington rotation is a system for comparing locations on the Sun over a period of time, allowing the following of sunspot groups or reappearance of eruptions at a later time.
Because solar rotation is variable with latitude, depth and time, any such system is necessarily arbitrary and only makes comparison meaningful over moderate periods of time. Solar rotation is taken to be 27.2753 days (see below) for the purpose of Carrington rotations. Each rotation of the Sun under this scheme is given a unique number called the Carrington Rotation Number, starting from November 9, 1853. (The Bartels Rotation Number is a similar numbering scheme that uses a period of exactly 27 days and starts from February 8, 1832.)
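Both serial counts can be estimated from a calendar date using the epochs and periods given above; a hedged sketch follows. The Bartels count uses exactly 27 days from 8 February 1832, while the Carrington estimate uses the mean synodic period of 27.2753 days from 9 November 1853, so it can be off by one near a rotation boundary because the actual start times are tabulated rather than strictly periodic.

```python
# Rough estimates of Bartels and Carrington rotation numbers from a date,
# using only the epochs and mean periods stated above. The Carrington value
# is approximate: real start times are tabulated, not strictly periodic.
from datetime import date

def bartels_number(d):
    return (d - date(1832, 2, 8)).days // 27 + 1

def carrington_number_estimate(d):
    return int((d - date(1853, 11, 9)).days / 27.2753) + 1

d = date(2023, 1, 1)
print(bartels_number(d), carrington_number_estimate(d))
```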
The heliographic longitude of a solar feature conventionally refers to its angular distance relative to the central meridian crossed by the Sun-Earth radial line.
The "Carrington longitude" of the same feature refers to an arbitrary fixed reference point of an imagined rigid rotation, as defined originally by Richard Christopher Carrington.
Carrington determined the solar rotation rate from low latitude sunspots in the 1850s and arrived at 25.38 days for the sidereal rotation period. Sidereal rotation is measured relative to the stars, but because the Earth is orbiting the Sun, we see this period as 27.2753 days.
It is possible to construct a diagram with the longitude of sunspots horizontally and time vertically. The longitude is measured by the time of crossing the central meridian and based on the Carrington rotations. In each rotation, plotted under the preceding ones, most sunspots or other phenomena will reappear directly below the same phenomenon on the previous rotation. There may be slight drifts left or right over longer periods of time.
The Bartels "musical diagram" or the Condegram spiral plot are other techniques for expressing the approximate 27-day periodicity of various phenomena originating at the solar surface.
Start of Carrington Rotation
Start dates of a new synodical solar rotation number according to Carrington.
Using sunspots to measure rotation
The rotation constants have been measured by measuring the motion of various features ("tracers") on the solar surface. The first and most widely used tracers are sunspots. Though sunspots had been observed since ancient times, it was only when the telescope came into use that they were observed to turn with the Sun, and thus the period of the solar rotation could be defined. The English scholar Thomas Harriot was probably the first to observe sunspots telescopically as evidenced by a drawing in his notebook dated December 8, 1610, and the first published observations (June 1611) entitled “De Maculis in Sole Observatis, et Apparente earum cum Sole Conversione Narratio” ("Narration on Spots Observed on the Sun and their Apparent Rotation with the Sun") were by Johannes Fabricius who had been systematically observing the spots for a few months and had noted also their movement across the solar disc. This can be considered the first observational evidence of the solar rotation. Christoph Scheiner (“Rosa Ursine sive solis”, book 4, part 2, 1630) was the first to measure the equatorial rotation rate of the Sun and noticed that the rotation at higher latitudes is slower, so he can be considered the discoverer of solar differential rotation.
Each measurement gives a slightly different answer, yielding the above standard deviations (shown as +/−). St. John (1918) was perhaps the first to summarise the published solar rotation rates, and concluded that the differences in series measured in different years can hardly be attributed to personal observation or to local disturbances on the Sun, and are probably due to time variations in the rate of rotation, and Hubrecht (1915) was the first one to find that the two solar hemispheres rotate differently. A study of magnetograph data showed a synodic period in agreement with other studies of 26.24 days at the equator and almost 38 days at the poles.
Internal solar rotation
Until the advent of helioseismology, the study of wave oscillations in the Sun, very little was known about the internal rotation of the Sun. The differential profile of the surface was thought to extend into the solar interior as rotating cylinders of constant angular momentum. Through helioseismology this is now known not to be the case and the rotation profile of the Sun has been found. On the surface, the Sun rotates slowly at the poles and quickly at the equator. This profile extends on roughly radial lines through the solar convection zone to the interior. At the tachocline the rotation abruptly changes to solid-body rotation in the solar radiation zone.
See also
Differential rotation in stars
Solar coordinate systems
Magnetohydrodynamics
Orbital period
Tachocline
References
Cox, Arthur N. (ed.), Allen's Astrophysical Quantities, 4th Ed, Springer, 1999.
Javaraiah, J., 2003. "Long-Term Variations in the Solar Differential Rotation", Solar Physics, 212 (1): 23–49.
St. John, C., 1918. "The Present Condition of the Problem of Solar Rotation", Publications of the Astronomical Society of the Pacific, 30, 319–325.
External links
Carrington Rotation Commencement Dates 1853–2016
Carrington Rotation Start and Stop Times
Carrington Rotation Number
Sun
Rotation | Solar rotation | [
"Physics"
] | 1,745 | [
"Physical phenomena",
"Motion (physics)",
"Classical mechanics",
"Rotation"
] |
1,360,654 | https://en.wikipedia.org/wiki/Gauss%E2%80%93Kuzmin%E2%80%93Wirsing%20operator | In mathematics, the Gauss–Kuzmin–Wirsing operator is the transfer operator of the Gauss map that takes a positive number to the fractional part of its reciprocal. (This is not the same as the Gauss map in differential geometry.) It is named after Carl Gauss, Rodion Kuzmin, and Eduard Wirsing. It occurs in the study of continued fractions; it is also related to the Riemann zeta function.
Relationship to the maps and continued fractions
The Gauss map
The Gauss function (map) h is:
$h(x) = \frac{1}{x} - \left\lfloor \frac{1}{x} \right\rfloor,$
where $\lfloor \cdot \rfloor$ denotes the floor function.
It has an infinite number of jump discontinuities at x = 1/n, for positive integers n. It is hard to approximate it by a single smooth polynomial.
Operator on the maps
The Gauss–Kuzmin–Wirsing operator acts on functions $f$ as
$[Gf](x) = \sum_{n=1}^{\infty} \frac{1}{(n+x)^2}\, f\!\left(\frac{1}{n+x}\right);$
it has the fixed point $\frac{1}{1+x}$, unique up to scaling, which is the density of the measure invariant under the Gauss map.
Eigenvalues of the operator
The first eigenfunction of this operator is
$\frac{1}{\ln 2}\,\frac{1}{1+x},$
which corresponds to an eigenvalue of λ1 = 1. This eigenfunction gives the probability of the occurrence of a given integer in a continued fraction expansion, and is known as the Gauss–Kuzmin distribution. This follows in part because the Gauss map acts as a truncating shift operator for the continued fractions: if
$x = [0; a_1, a_2, a_3, \dots]$
is the continued fraction representation of a number 0 < x < 1, then
$h(x) = [0; a_2, a_3, a_4, \dots].$
Because the Gauss map is conjugate to a Bernoulli shift, the eigenvalue 1 is simple, and since the operator leaves the Gauss–Kuzmin measure invariant, the operator is ergodic with respect to that measure. This fact allows a short proof of the existence of Khinchin's constant.
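The Gauss–Kuzmin probabilities can be checked empirically by expanding many random reals into continued fractions and counting their partial quotients; the sketch below uses the commonly stated form P(k) = −log₂(1 − 1/(k+1)²), and the sample sizes are arbitrary.

```python
# Empirical check of the Gauss-Kuzmin distribution: the digit k should appear
# in continued fraction expansions with probability -log2(1 - 1/(k+1)^2).
import math
import random
from collections import Counter

def cf_digits(x, n_digits):
    """First n_digits partial quotients of x in (0, 1) via the Gauss map."""
    digits = []
    for _ in range(n_digits):
        if x == 0:
            break
        a = int(1.0 / x)
        digits.append(a)
        x = 1.0 / x - a            # one application of the Gauss map
    return digits

random.seed(1)
counts, total = Counter(), 0
for _ in range(5000):
    d = cf_digits(random.random(), 10)   # 10 digits keeps double precision honest
    counts.update(d)
    total += len(d)

for k in range(1, 6):
    theory = -math.log2(1.0 - 1.0 / (k + 1) ** 2)
    print(k, round(counts[k] / total, 4), round(theory, 4))
```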
Additional eigenvalues can be computed numerically; the next eigenvalue is λ2 = −0.3036630029...
and its absolute value is known as the Gauss–Kuzmin–Wirsing constant. Analytic forms for additional eigenfunctions are not known. It is not known if the eigenvalues are irrational.
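One way to compute these eigenvalues numerically is to discretise the operator by polynomial collocation at Chebyshev nodes, as in the sketch below; this is a different route from the Taylor-coefficient matrix described later in the article, and the truncation parameters M and N are arbitrary choices.

```python
# Collocation sketch for the leading GKW eigenvalues. Functions on [0,1] are
# represented by their values at M Chebyshev nodes; the operator
# [Gf](x) = sum_{n>=1} f(1/(n+x)) / (n+x)^2 is truncated at n = N.
import numpy as np
from scipy.interpolate import BarycentricInterpolator

M, N = 30, 1000
nodes = 0.5 * (1.0 + np.cos(np.pi * (2 * np.arange(M) + 1) / (2 * M)))
n = np.arange(1, N + 1)

A = np.zeros((M, M))
for k in range(M):
    e = np.zeros(M)
    e[k] = 1.0
    Lk = BarycentricInterpolator(nodes, e)   # k-th Lagrange cardinal polynomial
    for i, x in enumerate(nodes):
        y = 1.0 / (n + x)                    # the points 1/(n+x), all in (0, 1)
        A[i, k] = np.sum(y * y * Lk(y))      # (G L_k)(x_i)

eigs = np.linalg.eigvals(A)
eigs = eigs[np.argsort(-np.abs(eigs))]
print(np.real(eigs[:3]))   # leading two close to 1 and -0.3036... (the GKW constant)
```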
Let us arrange the eigenvalues of the Gauss–Kuzmin–Wirsing operator according to their absolute value:
$1 = |\lambda_1| > |\lambda_2| \geq |\lambda_3| \geq \cdots.$
It was conjectured in 1995 by Philippe Flajolet and Brigitte Vallée that
In 2018, Giedrius Alkauskas gave a convincing argument that this conjecture can be refined to a much stronger statement:
here the function is bounded, and is the Riemann zeta function.
Continuous spectrum
The eigenvalues form a discrete spectrum, when the operator is limited to act on functions on the unit interval of the real number line. More broadly, since the Gauss map is the shift operator on Baire space , the GKW operator can also be viewed as an operator on the function space (considered as a Banach space, with basis functions taken to be the indicator functions on the cylinders of the product topology). In the latter case, it has a continuous spectrum, with eigenvalues in the unit disk of the complex plane. That is, given the cylinder , the operator G shifts it to the left: . Taking to be the indicator function which is 1 on the cylinder (when ), and zero otherwise, one has that . The series
then is an eigenfunction with eigenvalue . That is, one has whenever the summation converges: that is, when .
A special case arises when one wishes to consider the Haar measure of the shift operator, that is, a function that is invariant under shifts. This is given by the Minkowski measure . That is, one has that .
Ergodicity
The Gauss map is in fact much more than ergodic: it is exponentially mixing, but the proof is not elementary.
Entropy
The Gauss map, over the Gauss measure, has entropy $\pi^2 / (6 \ln 2) \approx 2.373$. This can be proved by the Rokhlin formula for entropy. Then using the Shannon–McMillan–Breiman theorem, with its equipartition property, we obtain Lochs' theorem.
Measure-theoretic preliminaries
A covering family is a set of measurable sets, such that any open set is a disjoint union of sets in it. Compare this with base in topology, which is less restrictive as it allows non-disjoint unions.
Knopp's lemma. Let be measurable, let be a covering family and suppose that . Then .
Proof. Since any open set is a disjoint union of sets in , we have for any open set , not just any set in .
Take the complement . Since the Lebesgue measure is outer regular, we can take an open set that is close to , meaning the symmetric difference has arbitrarily small measure .
At the limit, becomes have .
The Gauss map is ergodic
Fix a sequence of positive integers. Let . Let the interval be the open interval with end-points .
Lemma. For any open interval , we haveProof. For any we have by standard continued fraction theory. By expanding the definition, is an interval with end points . Now compute directly. To show the fraction is , use the fact that .
Theorem. The Gauss map is ergodic.
Proof. Consider the set of all open intervals in the form . Collect them into a single family . This is a covering family, because any open interval where are rational, is a disjoint union of finitely many sets in .
Suppose a set is -invariant and has positive measure. Pick any . Since Lebesgue measure is outer regular, there exists an open set which differs from by only . Since is -invariant, we also have . Therefore, By the previous lemma, we haveTake the limit, we have . By Knopp's lemma, it has full measure.
Relationship to the Riemann zeta function
The GKW operator is related to the Riemann zeta function. Note that the zeta function can be written as
which implies that
by change-of-variable.
Matrix elements
Consider the Taylor series expansions at x = 1 for a function f(x) and . That is, let
and write likewise for g(x). The expansion is made about x = 1 because the GKW operator is poorly behaved at x = 0. The expansion is made about 1 − x so that we can keep x a positive number, 0 ≤ x ≤ 1. Then the GKW operator acts on the Taylor coefficients as
where the matrix elements of the GKW operator are given by
This operator is extremely well formed, and thus very numerically tractable. The Gauss–Kuzmin constant is easily computed to high precision by numerically diagonalizing the upper-left n by n portion. There is no known closed-form expression that diagonalizes this operator; that is, there are no closed-form expressions known for the eigenvectors.
Riemann zeta
The Riemann zeta can be written as
where the are given by the matrix elements above:
Performing the summations, one gets:
where $\gamma$ is the Euler–Mascheroni constant. These play the analog of the Stieltjes constants, but for the falling factorial expansion. By writing
one gets: a0 = −0.0772156... and a1 = −0.00474863... and so on. The values get small quickly but are oscillatory. Some explicit sums on these values can be performed. They can be explicitly related to the Stieltjes constants by re-expressing the falling factorial as a polynomial with Stirling number coefficients, and then solving. More generally, the Riemann zeta can be re-expressed as an expansion in terms of Sheffer sequences of polynomials.
This expansion of the Riemann zeta is investigated in the following references. The coefficients are decreasing as
References
General references
A. Ya. Khinchin, Continued Fractions, 1935, English translation University of Chicago Press, 1961 (See section 15).
K. I. Babenko, On a Problem of Gauss, Soviet Mathematical Doklady 19:136–140 (1978)
K. I. Babenko and S. P. Jur'ev, On the Discretization of a Problem of Gauss, Soviet Mathematical Doklady 19:731–735 (1978).
A. Durner, On a Theorem of Gauss–Kuzmin–Lévy. Arch. Math. 58, 251–256, (1992).
A. J. MacLeod, High-Accuracy Numerical Values of the Gauss–Kuzmin Continued Fraction Problem. Computers Math. Appl. 26, 37–44, (1993).
E. Wirsing, On the Theorem of Gauss–Kuzmin–Lévy and a Frobenius-Type Theorem for Function Spaces. Acta Arith. 24, 507–528, (1974).
Further reading
Keith Briggs, A precise computation of the Gauss–Kuzmin–Wirsing constant (2003) (Contains a very extensive collection of references.)
Phillipe Flajolet and Brigitte Vallée, On the Gauss–Kuzmin–Wirsing Constant (1995).
Linas Vepstas The Bernoulli Operator, the Gauss–Kuzmin–Wirsing Operator, and the Riemann Zeta (2004) (PDF)
External links
Continued fractions
Dynamical systems | Gauss–Kuzmin–Wirsing operator | [
"Physics",
"Mathematics"
] | 1,968 | [
"Continued fractions",
"Mechanics",
"Number theory",
"Dynamical systems"
] |
1,361,116 | https://en.wikipedia.org/wiki/Control%20volume | In continuum mechanics and thermodynamics, a control volume (CV) is a mathematical abstraction employed in the process of creating mathematical models of physical processes. In an inertial frame of reference, it is a fictitious region of a given volume fixed in space or moving with constant flow velocity through which the continuuum (a continuous medium such as gas, liquid or solid) flows. The closed surface enclosing the region is referred to as the control surface.
At steady state, a control volume can be thought of as an arbitrary volume in which the mass of the continuum remains constant. As a continuum moves through the control volume, the mass entering the control volume is equal to the mass leaving the control volume. At steady state, and in the absence of work and heat transfer, the energy within the control volume remains constant. It is analogous to the classical mechanics concept of the free body diagram.
Overview
Typically, to understand how a given physical law applies to the system under consideration, one first begins by considering how it applies to a small control volume, or "representative volume". There is nothing special about a particular control volume; it simply represents a small part of the system to which physical laws can be easily applied. This gives rise to what is termed a volumetric, or volume-wise formulation of the mathematical model.
One can then argue that since the physical laws behave in a certain way on a particular control volume, they behave the same way on all such volumes, since that particular control volume was not special in any way. In this way, the corresponding point-wise formulation of the mathematical model can be developed so it can describe the physical behaviour of an entire (and maybe more complex) system.
In continuum mechanics the conservation equations (for instance, the Navier-Stokes equations) are in integral form. They therefore apply on volumes. Finding forms of the equation that are independent of the control volumes allows simplification of the integral signs. The control volumes can be stationary or they can move with an arbitrary velocity.
Substantive derivative
Computations in continuum mechanics often require that the regular time derivative operator $d/dt$ is replaced by the substantive derivative operator $D/Dt$. This can be seen as follows.
Consider a bug that is moving through a volume where there is some scalar, e.g. pressure, that varies with time and position: $p = p(t, x, y, z)$.
If the bug during the time interval from $t$ to $t + dt$ moves from $(x, y, z)$ to $(x + dx,\, y + dy,\, z + dz)$, then the bug experiences a change $dp$ in the scalar value,
$dp = \frac{\partial p}{\partial t}\,dt + \frac{\partial p}{\partial x}\,dx + \frac{\partial p}{\partial y}\,dy + \frac{\partial p}{\partial z}\,dz$
(the total differential). If the bug is moving with a velocity $\mathbf{v} = (v_x, v_y, v_z)$, the change in particle position is $(dx, dy, dz) = (v_x\,dt,\, v_y\,dt,\, v_z\,dt)$ and we may write
$\frac{dp}{dt} = \frac{\partial p}{\partial t} + v_x\frac{\partial p}{\partial x} + v_y\frac{\partial p}{\partial y} + v_z\frac{\partial p}{\partial z} = \frac{\partial p}{\partial t} + \mathbf{v}\cdot\nabla p,$
where $\nabla p$ is the gradient of the scalar field p.
If the bug is just moving with the flow, the same formula applies, but now the velocity vector, $\mathbf{v}$, is that of the flow, $\mathbf{u}$. The last expression, $\partial p/\partial t + \mathbf{u}\cdot\nabla p$, is the substantive derivative of the scalar pressure.
Since the pressure p in this computation is an arbitrary scalar field, we may abstract it and write the substantive derivative operator as
$\frac{D}{Dt} = \frac{\partial}{\partial t} + \mathbf{u}\cdot\nabla.$
See also
Continuum mechanics
Cauchy momentum equation
Special relativity
Substantive derivative
References
James R. Welty, Charles E. Wicks, Robert E. Wilson & Gregory Rorrer Fundamentals of Momentum, Heat, and Mass Transfer
Notes
External links
PDFs
Integral Approach to the Control Volume analysis of Fluid Flow
Continuum mechanics
Thermodynamics | Control volume | [
"Physics",
"Chemistry",
"Mathematics"
] | 674 | [
"Dynamical systems",
"Classical mechanics",
"Thermodynamics",
"Continuum mechanics"
] |
1,361,454 | https://en.wikipedia.org/wiki/Stochastic%20differential%20equation | A stochastic differential equation (SDE) is a differential equation in which one or more of the terms is a stochastic process, resulting in a solution which is also a stochastic process. SDEs have many applications throughout pure mathematics and are used to model various behaviours of stochastic models such as stock prices, random growth models or physical systems that are subjected to thermal fluctuations.
SDEs have a random differential that is in the most basic case random white noise calculated as the distributional derivative of a Brownian motion or more generally a semimartingale. However, other types of random behaviour are possible, such as jump processes like Lévy processes or semimartingales with jumps.
Stochastic differential equations are in general neither differential equations nor random differential equations. Random differential equations are conjugate to stochastic differential equations. Stochastic differential equations can also be extended to differential manifolds.
Background
Stochastic differential equations originated in the theory of Brownian motion, in the work of Albert Einstein and Marian Smoluchowski in 1905, although Louis Bachelier was the first person credited with modeling Brownian motion in 1900, giving a very early example of a stochastic differential equation now known as Bachelier model. Some of these early examples were linear stochastic differential equations, also called Langevin equations after French physicist Langevin, describing the motion of a harmonic oscillator subject to a random force.
The mathematical theory of stochastic differential equations was developed in the 1940s through the groundbreaking work of Japanese mathematician Kiyosi Itô, who introduced the concept of stochastic integral and initiated the study of nonlinear stochastic differential equations. Another approach was later proposed by Russian physicist Stratonovich, leading to a calculus similar to ordinary calculus.
Terminology
The most common form of SDEs in the literature is an ordinary differential equation with the right hand side perturbed by a term dependent on a white noise variable. In most cases, SDEs are understood as continuous time limit of the corresponding stochastic difference equations. This understanding of SDEs is ambiguous and must be complemented by a proper mathematical definition of the corresponding integral. Such a mathematical definition was first proposed by Kiyosi Itô in the 1940s, leading to what is known today as the Itô calculus.
Another construction was later proposed by Russian physicist Stratonovich,
leading to what is known as the Stratonovich integral.
The Itô integral and Stratonovich integral are related, but different, objects and the choice between them depends on the application considered. The Itô calculus is based on the concept of non-anticipativeness or causality, which is natural in applications where the variable is time.
The Stratonovich calculus, on the other hand, has rules which resemble ordinary calculus and has intrinsic geometric properties which render it more natural when dealing with geometric problems such as random motion on manifolds, although it is possible and in some cases preferable to model random motion on manifolds through Itô SDEs, for example when trying to optimally approximate SDEs on submanifolds.
An alternative view on SDEs is the stochastic flow of diffeomorphisms. This understanding is unambiguous and corresponds to the Stratonovich version of the continuous time limit of stochastic difference equations. Associated with SDEs is the Smoluchowski equation or the Fokker–Planck equation, an equation describing the time evolution of probability distribution functions. The generalization of the Fokker–Planck evolution to temporal evolution of differential forms is provided by the concept of stochastic evolution operator.
In physical science, there is an ambiguity in the usage of the term "Langevin SDEs". While Langevin SDEs can be of a more general form, this term typically refers to a narrow class of SDEs with gradient flow vector fields. This class of SDEs is particularly popular because it is a starting point of the Parisi–Sourlas stochastic quantization procedure, leading to a N=2 supersymmetric model closely related to supersymmetric quantum mechanics. From the physical point of view, however, this class of SDEs is not very interesting because it never exhibits spontaneous breakdown of topological supersymmetry, i.e., (overdamped) Langevin SDEs are never chaotic.
Stochastic calculus
Brownian motion or the Wiener process was discovered to be exceptionally complex mathematically. The Wiener process is almost surely nowhere differentiable; thus, it requires its own rules of calculus. There are two dominating versions of stochastic calculus, the Itô stochastic calculus and the Stratonovich stochastic calculus. Each of the two has advantages and disadvantages, and newcomers are often confused whether one is more appropriate than the other in a given situation. Guidelines exist (e.g. Øksendal, 2003) and conveniently, one can readily convert an Itô SDE to an equivalent Stratonovich SDE and back again. Still, one must be careful which calculus to use when the SDE is initially written down.
Numerical solutions
Numerical methods for solving stochastic differential equations include the Euler–Maruyama method, Milstein method, Runge–Kutta method (SDE), Rosenbrock method, and methods based on different representations of iterated stochastic integrals.
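A minimal sketch of the Euler–Maruyama method applied to geometric Brownian motion (introduced later in the article) is given below; the parameter values are arbitrary, and the exact solution of that SDE is used only as a sanity check.

```python
# Euler-Maruyama sketch for dS_t = mu*S_t dt + sigma*S_t dW_t, compared with the
# exact solution S_T = S_0 * exp((mu - sigma^2/2) T + sigma W_T). Parameters are
# arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, S0 = 0.1, 0.3, 1.0
T, n_steps = 1.0, 1000
dt = T / n_steps

dW = rng.normal(scale=np.sqrt(dt), size=n_steps)   # Brownian increments
S = S0
for k in range(n_steps):
    S += mu * S * dt + sigma * S * dW[k]           # Euler-Maruyama update

exact = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dW.sum())
print(S, exact)                                    # the two values should be close
```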
Use in physics
In physics, SDEs have wide applicability ranging from molecular dynamics to neurodynamics and to the dynamics of astrophysical objects. More specifically, SDEs describe all dynamical systems, in which quantum effects are either unimportant or can be taken into account as perturbations. SDEs can be viewed as a generalization of the dynamical systems theory to models with noise. This is an important generalization because real systems cannot be completely isolated from their environments and for this reason always experience external stochastic influence.
There are standard techniques for transforming higher-order equations into several coupled first-order equations by introducing new unknowns. Therefore, the following is the most general class of SDEs:
where is the position in the system in its phase (or state) space, , assumed to be a differentiable manifold, the is a flow vector field representing deterministic law of evolution, and is a set of vector fields that define the coupling of the system to Gaussian white noise, . If is a linear space and are constants, the system is said to be subject to additive noise, otherwise it is said to be subject to multiplicative noise. This term is somewhat misleading as it has come to mean the general case even though it appears to imply the limited case in which .
For a fixed configuration of noise, SDE has a unique solution differentiable with respect to the initial condition. Nontriviality of stochastic case shows up when one tries to average various objects of interest over noise configurations. In this sense, an SDE is not a uniquely defined entity when noise is multiplicative and when the SDE is understood as a continuous time limit of a stochastic difference equation. In this case, SDE must be complemented by what is known as "interpretations of SDE" such as Itô or a Stratonovich interpretations of SDEs. Nevertheless, when SDE is viewed as a continuous-time stochastic flow of diffeomorphisms, it is a uniquely defined mathematical object that corresponds to Stratonovich approach to a continuous time limit of a stochastic difference equation.
In physics, the main method of solution is to find the probability distribution function as a function of time using the equivalent Fokker–Planck equation (FPE). The Fokker–Planck equation is a deterministic partial differential equation. It tells how the probability distribution function evolves in time similarly to how the Schrödinger equation gives the time evolution of the quantum wave function or the diffusion equation gives the time evolution of chemical concentration. Alternatively, numerical solutions can be obtained by Monte Carlo simulation. Other techniques include the path integration that draws on the analogy between statistical physics and quantum mechanics (for example, the Fokker-Planck equation can be transformed into the Schrödinger equation by rescaling a few variables) or by writing down ordinary differential equations for the statistical moments of the probability distribution function.
Use in probability and mathematical finance
The notation used in probability theory (and in many applications of probability theory, for instance in signal processing with the filtering problem and in mathematical finance) is slightly different. It is also the notation used in publications on numerical methods for solving stochastic differential equations. This notation makes the exotic nature of the random function of time in the physics formulation more explicit. In strict mathematical terms, this random function cannot be chosen as an ordinary function, but only as a generalized function. The mathematical formulation treats this complication with less ambiguity than the physics formulation.
A typical equation is of the form
$dX_t = \mu(X_t, t)\,dt + \sigma(X_t, t)\,dB_t,$
where $B_t$ denotes a Wiener process (standard Brownian motion).
This equation should be interpreted as an informal way of expressing the corresponding integral equation
$X_{t+s} - X_t = \int_t^{t+s} \mu(X_u, u)\,du + \int_t^{t+s} \sigma(X_u, u)\,dB_u.$
The equation above characterizes the behavior of the continuous time stochastic process Xt as the sum of an ordinary Lebesgue integral and an Itô integral. A heuristic (but very helpful) interpretation of the stochastic differential equation is that in a small time interval of length δ the stochastic process Xt changes its value by an amount that is normally distributed with expectation μ(Xt, t) δ and variance σ(Xt, t)2 δ and is independent of the past behavior of the process. This is so because the increments of a Wiener process are independent and normally distributed. The function μ is referred to as the drift coefficient, while σ is called the diffusion coefficient. The stochastic process Xt is called a diffusion process, and satisfies the Markov property.
The formal interpretation of an SDE is given in terms of what constitutes a solution to the SDE. There are two main definitions of a solution to an SDE, a strong solution and a weak solution. Both require the existence of a process Xt that solves the integral equation version of the SDE. The difference between the two lies in the underlying probability space (Ω, F, P). A weak solution consists of a probability space and a process that satisfies the integral equation, while a strong solution is a process that satisfies the equation and is defined on a given probability space. The Yamada–Watanabe theorem makes a connection between the two.
An important example is the equation for geometric Brownian motion
$dS_t = \mu S_t\,dt + \sigma S_t\,dW_t,$
which is the equation for the dynamics of the price of a stock in the Black–Scholes options pricing model of financial mathematics.
Generalizing the geometric Brownian motion, it is also possible to define SDEs admitting strong solutions and whose distribution is a convex combination of densities coming from different geometric Brownian motions or Black Scholes models, obtaining a single SDE whose solution is distributed as a mixture dynamics of lognormal distributions of different Black Scholes models. This leads to models that can deal with the volatility smile in financial mathematics.
The simpler SDE called arithmetic Brownian motion
$dS_t = \mu\,dt + \sigma\,dW_t$
was used by Louis Bachelier as the first model for stock prices in 1900, known today as the Bachelier model.
There are also more general stochastic differential equations where the coefficients μ and σ depend not only on the present value of the process Xt, but also on previous values of the process and possibly on present or previous values of other processes too. In that case the solution process, X, is not a Markov process, and it is called an Itô process and not a diffusion process. When the coefficients depend only on present and past values of X, the defining equation is called a stochastic delay differential equation.
A generalization of stochastic differential equations with the Fisk-Stratonovich integral to semimartingales with jumps are the SDEs of Marcus type. The Marcus integral is an extension of McShane's stochastic calculus.
An innovative application in stochastic finance derives from the usage of the equation for the Ornstein–Uhlenbeck process
$dX_t = \theta\,(\mu - X_t)\,dt + \sigma\,dW_t,$
which is the equation for the dynamics of the return of the price of a stock under the hypothesis that returns display a Log-normal distribution.
Under this hypothesis, the methodologies developed by Marcello Minenna determine a prediction interval able to identify abnormal returns that could hide market abuse phenomena.
SDEs on manifolds
More generally one can extend the theory of stochastic calculus onto differential manifolds and for this purpose one uses the Fisk-Stratonovich integral. Consider a manifold , some finite-dimensional vector space , a filtered probability space with satisfying the usual conditions and let be the one-point compactification and be -measurable. A stochastic differential equation on written
is a pair , such that
is a continuous -valued semimartingale,
is a homomorphism of vector bundles over .
For each the map is linear and for each .
A solution to the SDE on with initial condition is a continuous -adapted -valued process up to life time , s.t. for each test function the process is a real-valued semimartingale and for each stopping time with the equation
holds -almost surely, where is the differential at . It is a maximal solution if the life time is maximal, i.e.,
-almost surely. It follows from the fact that for each test function is a semimartingale, that is a semimartingale on . Given a maximal solution we can extend the time of onto full and after a continuation of on we get
up to indistinguishable processes.
Although Stratonovich SDEs are the natural choice for SDEs on manifolds, given that they satisfy the chain rule and that their drift and diffusion coefficients behave as vector fields under changes of coordinates, there are cases where Itô calculus on manifolds is preferable. A theory of Itô calculus on manifolds was first developed by Laurent Schwartz through the concept of Schwartz morphism; see also the related 2-jet interpretation of Itô SDEs on manifolds based on the jet bundle. This interpretation is helpful when trying to optimally approximate the solution of an SDE given on a large space with the solutions of an SDE given on a submanifold of that space, in that a Stratonovich-based projection does not turn out to be optimal. This has been applied to the filtering problem, leading to optimal projection filters.
As rough paths
Usually the solution of an SDE requires a probabilistic setting, as the integral implicit in the solution is a stochastic integral. If it were possible to deal with the differential equation path by path, one would not need to define a stochastic integral and one could develop a theory independently of probability theory.
This points to considering the SDE
as a single deterministic differential equation for every ω ∈ Ω, where Ω is the sample space in the given probability space (Ω, ℱ, P). However, a direct path-wise interpretation of the SDE is not possible, as the Brownian motion paths have unbounded variation and are nowhere differentiable with probability one, so that there is no naive way to give meaning to terms like dBt(ω), precluding also a naive path-wise definition of the stochastic integral as an integral against every single ω. However, motivated by the Wong–Zakai result for limits of solutions of SDEs with regular noise, and using rough paths theory while adding a chosen definition of iterated integrals of Brownian motion, it is possible to define a deterministic rough integral for every single ω that coincides, for example, with the Itô integral with probability one for a particular choice of the iterated Brownian integral. Other definitions of the iterated integral lead to deterministic pathwise equivalents of different stochastic integrals, like the Stratonovich integral. This has been used, for example, in financial mathematics to price options without probability.
Existence and uniqueness of solutions
As with deterministic ordinary and partial differential equations, it is important to know whether a given SDE has a solution, and whether or not it is unique. The following is a typical existence and uniqueness theorem for Itô SDEs taking values in n-dimensional Euclidean space Rn and driven by an m-dimensional Brownian motion B; the proof may be found in Øksendal (2003, §5.2).
Let T > 0, and let
μ : Rn × [0, T] → Rn and σ : Rn × [0, T] → Rn×m
be measurable functions for which there exist constants C and D such that
|μ(x, t)| + |σ(x, t)| ≤ C(1 + |x|),
|μ(x, t) − μ(y, t)| + |σ(x, t) − σ(y, t)| ≤ D|x − y|,
for all t ∈ [0, T] and all x and y ∈ Rn, where
|σ|² = Σi,j |σij|².
Let Z be a random variable that is independent of the σ-algebra generated by Bs, s ≥ 0, and with finite second moment, E[|Z|²] < +∞.
Then the stochastic differential equation/initial value problem
dXt = μ(Xt, t) dt + σ(Xt, t) dBt for t ∈ [0, T], with X0 = Z,
has a P-almost surely unique t-continuous solution (t, ω) ↦ Xt(ω) such that X is adapted to the filtration FtZ generated by Z and Bs, s ≤ t, and
E[∫0T |Xt|² dt] < ∞.
General case: local Lipschitz condition and maximal solutions
The stochastic differential equation above is only a special case of a more general form
where
is a continuous semimartingale in and is a continuous semimartingale in
is a map from some open nonempty set , where is the space of all linear maps from to .
More generally one can also look at stochastic differential equations on manifolds.
Whether the solution of this equation explodes depends on the choice of . Suppose satisfies some local Lipschitz condition, i.e., for and some compact set and some constant the condition
where is the Euclidean norm. This condition guarantees the existence and uniqueness of a so-called maximal solution.
Suppose is continuous and satisfies the above local Lipschitz condition and let be some initial condition, meaning it is a measurable function with respect to the initial σ-algebra. Let be a predictable stopping time with almost surely. A -valued semimartingale is called a maximal solution of
with life time if
for one (and hence all) announcing the stopped process is a solution to the stopped stochastic differential equation
on the set we have almost surely that with .
is also a so-called explosion time.
Some explicitly solvable examples
Explicitly solvable SDEs include:
Linear SDE: General case
where
Reducible SDEs: Case 1
for a given differentiable function is equivalent to the Stratonovich SDE
which has a general solution
where
Reducible SDEs: Case 2
for a given differentiable function is equivalent to the Stratonovich SDE
which is reducible to
where is defined as before.
Its general solution is
SDEs and supersymmetry
In supersymmetric theory of SDEs, stochastic dynamics is defined via a stochastic evolution operator acting on the differential forms on the phase space of the model. In this exact formulation of stochastic dynamics, all SDEs possess topological supersymmetry, which represents the preservation of the continuity of the phase space by continuous time flow. The spontaneous breakdown of this supersymmetry is the mathematical essence of the ubiquitous dynamical phenomenon known across disciplines as chaos, turbulence, self-organized criticality, etc., and the Goldstone theorem explains the associated long-range dynamical behavior, i.e., the butterfly effect, 1/f and crackling noises, and scale-free statistics of earthquakes, neuroavalanches, solar flares, etc.
See also
Backward stochastic differential equation
Langevin dynamics
Local volatility
Stochastic process
Stochastic volatility
Stochastic partial differential equations
Diffusion process
Stochastic difference equation
References
Further reading
Evans, Lawrence C. (2013). An Introduction to Stochastic Differential Equations. American Mathematical Society.
Desmond Higham and Peter Kloeden: "An Introduction to the Numerical Simulation of Stochastic Differential Equations", SIAM, 2021.
Differential equations
Stochastic processes | Stochastic differential equation | [
"Mathematics"
] | 4,074 | [
"Applied mathematics",
"Mathematical objects",
"Differential equations",
"Equations",
"Mathematical finance"
] |
1,362,465 | https://en.wikipedia.org/wiki/Classical%20electron%20radius | The classical electron radius is a combination of fundamental physical quantities that define a length scale for problems involving an electron interacting with electromagnetic radiation. It links the classical electrostatic self-interaction energy of a homogeneous charge distribution to the electron's relativistic mass-energy. According to modern understanding, the electron is a point particle with a point charge and no spatial extent. Nevertheless, it is useful to define a length that characterizes electron interactions in atomic-scale problems. The classical electron radius is given as
r_e = e² / (4πε₀ m_e c²) ≈ 2.818 × 10⁻¹⁵ m,
where e is the elementary charge, m_e is the electron mass, c is the speed of light, and ε₀ is the permittivity of free space. This numerical value is several times larger than the radius of the proton.
In cgs units, the permittivity factor ε₀ and the 4π do not enter, but the classical electron radius has the same value.
The classical electron radius is sometimes known as the Lorentz radius or the Thomson scattering length. It is one of a trio of related scales of length, the other two being the Bohr radius a₀ and the reduced Compton wavelength of the electron ƛ_e. Any one of these three length scales can be written in terms of any other using the fine-structure constant α: r_e = α ƛ_e = α² a₀.
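A quick numerical check of the formula and of the α-relations between the three length scales can be done with SciPy's CODATA constants; the snippet below is a sketch for illustration only.

```python
from math import pi
from scipy.constants import e, m_e, c, epsilon_0, hbar

r_e = e**2 / (4 * pi * epsilon_0 * m_e * c**2)     # classical electron radius
lambda_bar = hbar / (m_e * c)                      # reduced Compton wavelength
a_0 = 4 * pi * epsilon_0 * hbar**2 / (m_e * e**2)  # Bohr radius
alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)     # fine-structure constant

print(f"r_e = {r_e:.4e} m")                              # ~2.818e-15 m
print(f"r_e / lambda_bar = {r_e / lambda_bar:.6f}, alpha = {alpha:.6f}")
print(f"r_e / a_0 = {r_e / a_0:.3e}, alpha^2 = {alpha**2:.3e}")
```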
Derivation
The classical electron radius length scale can be motivated by considering the energy necessary to assemble an amount of charge q into a sphere of a given radius r. The electrostatic potential at a distance r from a charge q is
V(r) = q / (4πε₀ r).
To bring an additional amount of charge dq from infinity necessitates putting energy into the system, dU, by an amount
dU = V(r) dq.
If the sphere is assumed to have constant charge density, ρ, then
q = (4/3) π r³ ρ and dq = 4π r² ρ dr.
Integrating for r from zero to the final radius r yields the expression for the total energy E necessary to assemble the total charge q into a uniform sphere of radius r:
E = (3/5) q² / (4πε₀ r).
This is called the electrostatic self-energy of the object. The charge is now interpreted as the electron charge, e, and the energy is set equal to the relativistic mass–energy of the electron, m_e c², and the numerical factor 3/5 is ignored as being specific to the special case of a uniform charge density. The radius is then defined to be the classical electron radius, r_e, and one arrives at the expression given above.
Note that this derivation does not say that is the actual radius of an electron. It only establishes a dimensional link between electrostatic self energy and the mass–energy scale of the electron.
Discussion
The classical electron radius appears in the classical limit of modern theories as well, including non-relativistic Thomson scattering and the relativistic Klein–Nishina formula. Also, r_e is roughly the length scale at which renormalization becomes important in quantum electrodynamics. That is, at short-enough distances, quantum fluctuations within the vacuum of space surrounding an electron begin to have calculable effects that have measurable consequences in atomic and particle physics.
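For example, in the low-energy (classical) limit the total Thomson scattering cross-section is expressed directly in terms of the classical electron radius; the standard relation is recorded below for concreteness.

```latex
\sigma_T = \frac{8\pi}{3}\, r_e^{2} \approx 6.65 \times 10^{-29}\ \mathrm{m}^2 .
```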
Attempts to model the electron as a non-point particle, based on the assumption of a simple mechanical model, have been described by some as ill-conceived and counter-pedagogic.
See also
Electromagnetic mass
References
Further reading
Arthur N. Cox, Ed. "Allen's Astrophysical Quantities", 4th Ed, Springer, 1999.
External links
Length Scales in Physics: the Classical Electron Radius
Physical constants
Atomic physics
Electron
Radii | Classical electron radius | [
"Physics",
"Chemistry",
"Mathematics"
] | 657 | [
"Electron",
"Molecular physics",
"Physical quantities",
"Quantity",
"Quantum mechanics",
"Physical constants",
"Atomic physics",
" molecular",
"Atomic",
" and optical physics"
] |
1,362,724 | https://en.wikipedia.org/wiki/Kazhdan%27s%20property%20%28T%29 | In mathematics, a locally compact topological group G has property (T) if the trivial representation is an isolated point in its unitary dual equipped with the Fell topology. Informally, this means that if G acts unitarily on a Hilbert space and has "almost invariant vectors", then it has a nonzero invariant vector. The formal definition, introduced by David Kazhdan (1967), gives this a precise, quantitative meaning.
Although originally defined in terms of irreducible representations, property (T) can often be checked even when there is little or no explicit knowledge of the unitary dual. Property (T) has important applications to group representation theory, lattices in algebraic groups over local fields, ergodic theory, geometric group theory, expanders, operator algebras and the theory of networks.
Definitions
Let G be a σ-compact, locally compact topological group and π : G → U(H) a unitary representation of G on a (complex) Hilbert space H. If ε > 0 and K is a compact subset of G, then a unit vector ξ in H is called an (ε, K)-invariant vector if
‖π(g)ξ − ξ‖ < ε for all g in K.
The following conditions on G are all equivalent to G having property (T) of Kazhdan, and any of them can be used as the definition of property (T).
(1) The trivial representation is an isolated point of the unitary dual of G with Fell topology.
(2) Any sequence of continuous positive definite functions on G converging to 1 uniformly on compact subsets, converges to 1 uniformly on G.
(3) Every unitary representation of G that has an (ε, K)-invariant unit vector for any ε > 0 and any compact subset K, has a non-zero invariant vector.
(4) There exists an ε > 0 and a compact subset K of G such that every unitary representation of G that has an (ε, K)-invariant unit vector, has a nonzero invariant vector.
(5) Every continuous affine isometric action of G on a real Hilbert space has a fixed point (property (FH)).
If H is a closed subgroup of G, the pair (G,H) is said to have relative property (T) of Margulis if there exists an ε > 0 and a compact subset K of G such that whenever a unitary representation of G has an (ε, K)-invariant unit vector, then it has a non-zero vector fixed by H.
Discussion
Definition (4) evidently implies definition (3). To show the converse, let G be a locally compact group satisfying (3), and assume by contradiction that for every K and ε there is a unitary representation that has a (K, ε)-invariant unit vector and does not have an invariant vector. The direct sum of all such representations has a (K, ε)-invariant unit vector for every K and ε but no non-zero invariant vector, which contradicts (3).
The equivalence of (4) and (5) (Property (FH)) is the Delorme-Guichardet theorem. The fact that (5) implies (4) requires the assumption that G is σ-compact (and locally compact) (Bekka et al., Theorem 2.12.4).
General properties
Property (T) is preserved under quotients: if G has property (T) and H is a quotient group of G then H has property (T). Equivalently, if a homomorphic image of a group G does not have property (T) then G itself does not have property (T).
If G has property (T) then G/[G, G] is compact.
Any countable discrete group with property (T) is finitely generated.
An amenable group which has property (T) is necessarily compact. Amenability and property (T) are in a rough sense opposite: they make almost invariant vectors easy or hard to find.
Kazhdan's theorem: If Γ is a lattice in a Lie group G then Γ has property (T) if and only if G has property (T). Thus for n ≥ 3, the special linear group SL(n, Z) has property (T).
Examples
Compact topological groups have property (T). In particular, the circle group, the additive group Zp of p-adic integers, compact special unitary groups SU(n) and all finite groups have property (T).
Simple real Lie groups of real rank at least two have property (T). This family of groups includes the special linear groups SL(n, R) for n ≥ 3 and the special orthogonal groups SO(p,q) for p > q ≥ 2 and SO(p,p) for p ≥ 3. More generally, this holds for simple algebraic groups of rank at least two over a local field.
The pairs (Rn ⋊ SL(n, R), Rn) and (Zn ⋊ SL(n, Z), Zn) have relative property (T) for n ≥ 2.
For n ≥ 2, the noncompact Lie group Sp(n, 1) of isometries of a quaternionic hermitian form of signature (n,1) is a simple Lie group of real rank 1 that has property (T). By Kazhdan's theorem, lattices in this group have property (T). This construction is significant because these lattices are hyperbolic groups; thus, there are groups that are hyperbolic and have property (T). Explicit examples of groups in this category are provided by arithmetic lattices in Sp(n, 1) and certain quaternionic reflection groups.
Examples of groups that do not have property (T) include
The additive groups of integers Z, of real numbers R and of p-adic numbers Qp.
The special linear groups SL(2, Z) and SL(2, R), as a result of the existence of complementary series representations near the trivial representation, although SL(2,Z) has property (τ) with respect to principal congruence subgroups, by Selberg's theorem.
Noncompact solvable groups.
Nontrivial free groups and free abelian groups.
Discrete groups
Historically property (T) was established for discrete groups Γ by embedding them as lattices in real or p-adic Lie groups with property (T). There are now several direct methods available.
The algebraic method of Shalom applies when Γ = SL(n, R) with R a ring and n ≥ 3; the method relies on the fact that Γ can be boundedly generated, i.e. can be expressed as a finite product of easier subgroups, such as the elementary subgroups consisting of matrices differing from the identity matrix in one given off-diagonal position.
The geometric method has its origins in ideas of Garland, Gromov and Pierre Pansu. Its simplest combinatorial version is due to Zuk: let Γ be a discrete group generated by a finite subset S, closed under taking inverses and not containing the identity, and define a finite graph with vertices S and an edge between g and h whenever g−1h lies in S. If this graph is connected and the smallest non-zero eigenvalue of the Laplacian of the corresponding simple random walk is greater than 1/2, then Γ has property (T). A more general geometric version, due to Zuk and others, states that if a discrete group Γ acts properly discontinuously and cocompactly on a contractible 2-dimensional simplicial complex with the same graph-theoretic conditions placed on the link at each vertex, then Γ has property (T). Many new examples of hyperbolic groups with property (T) can be exhibited using this method.
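The spectral condition in Zuk's criterion is straightforward to test numerically for any given finite graph. The sketch below builds a small graph, forms the normalized Laplacian of the simple random walk on it, and checks whether the smallest non-zero eigenvalue exceeds 1/2; the complete graph used here is an arbitrary toy example, not the generating-set graph of any particular group.

```python
import numpy as np

def smallest_nonzero_eigenvalue(adj):
    """Smallest non-zero eigenvalue of the normalized Laplacian I - D^{-1/2} A D^{-1/2}."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt
    eigenvalues = np.sort(np.linalg.eigvalsh(lap))
    return eigenvalues[1]  # eigenvalues[0] is ~0 when the graph is connected

# Toy example: the complete graph on 5 vertices, whose spectral gap is 5/4.
n = 5
adj = np.ones((n, n)) - np.eye(n)
lam1 = smallest_nonzero_eigenvalue(adj)
print(f"lambda_1 = {lam1:.3f}; Zuk-type criterion lambda_1 > 1/2: {lam1 > 0.5}")
```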
The computer-assisted method is based on a suggestion by Narutaka Ozawa and has been successfully implemented by several researchers. It is based on the algebraic characterization of property (T) in terms of an inequality in the real group algebra, for which a solution may be found by solving a semidefinite programming problem numerically on a computer. Notably, this method has confirmed property (T) for the automorphism group of the free group of rank at least 5. No human proof is known for this result.
Applications
Grigory Margulis used the fact that SL(n, Z) (for n ≥ 3) has property (T) to construct explicit families of expanding graphs, that is, graphs with the property that every subset has a uniformly large "boundary". This connection led to a number of recent studies giving an explicit estimate of Kazhdan constants, quantifying property (T) for a particular group and a generating set.
Alain Connes used discrete groups with property (T) to find examples of type II1 factors with countable fundamental group, so in particular not the whole of the positive reals. Sorin Popa subsequently used relative property (T) for discrete groups to produce a type II1 factor with trivial fundamental group.
Groups with property (T) also have Serre's property FA.
Toshikazu Sunada observed that the positivity of the bottom of the spectrum of a "twisted" Laplacian on a closed manifold is related to property (T) of the fundamental group. This observation yields Brooks' result which says that the bottom of the spectrum of the Laplacian on the universal covering manifold over a closed Riemannian manifold M equals zero if and only if the fundamental group of M is amenable.
References
Lubotzky, A. and A. Zuk, On property (τ), monograph to appear.
Unitary representation theory
Topological groups
Geometric group theory
Computer-assisted proofs | Kazhdan's property (T) | [
"Physics",
"Mathematics"
] | 1,993 | [
"Geometric group theory",
"Group actions",
"Computer-assisted proofs",
"Space (mathematics)",
"Topological spaces",
"Topological groups",
"Symmetry"
] |
1,363,288 | https://en.wikipedia.org/wiki/Gyrotron | A gyrotron is a class of high-power linear-beam vacuum tubes that generates millimeter-wave electromagnetic waves by the cyclotron resonance of electrons in a strong magnetic field. Output frequencies range from about 20 to 527 GHz, covering wavelengths from microwave to the edge of the terahertz gap. Typical output powers range from tens of kilowatts to 1–2 megawatts. Gyrotrons can be designed for pulsed or continuous operation. The gyrotron was invented by Soviet scientists at NIRFI, based in Nizhny Novgorod, Russia.
Principle
The gyrotron is a type of free-electron maser that generates high-frequency electromagnetic radiation by stimulated cyclotron resonance of electrons moving through a strong magnetic field. It can produce high power at millimeter wavelengths because, as a fast-wave device, its dimensions can be much larger than the wavelength of the radiation. This is unlike conventional microwave vacuum tubes such as klystrons and magnetrons, in which the wavelength is determined by a single-mode resonant cavity, a slow-wave structure. Thus, as operating frequencies increase, the resonant cavity structures must decrease in size, which limits their power-handling capability.
In the gyrotron, a hot filament in an electron gun (1) at one end of the tube emits an annular-shaped (hollow tubular) beam of electrons (6), which is accelerated by a high-voltage DC anode (10) and then travels through a large tubular resonant cavity structure (2) in a strong axial magnetic field, usually created by a superconducting magnet around the tube (8). The field causes the electrons to move helically in tight circles around the magnetic field lines as they travel lengthwise through the tube. At the position in the tube where the magnetic field reaches its maximum (2), the electrons radiate electromagnetic waves, parallel to the axis of the tube, at their cyclotron resonance frequency. The millimeter radiation forms standing waves in the tube, which acts as an open-ended resonant cavity, and is formed into a beam. The beam is converted by a mode converter (9) and reflected by mirrors (4), which direct it through a window (5) in the side of the tube into a microwave waveguide (7). A collector electrode absorbs the spent electron beam at the end of the tube (3).
As in other linear-beam microwave tubes, the energy of the output electromagnetic waves comes from the kinetic energy of the electron beam, which is due to the accelerating anode voltage (10). In the region before the resonant cavity where the magnetic field strength is increasing, it compresses the electron beam, converting the longitudinal drift velocity to transverse orbital velocity, in a process similar to that occurring in a magnetic mirror used in plasma confinement. The orbital velocity of the electrons is 1.5 to 2 times their axial beam velocity. Due to the standing waves in the resonant cavity, the electrons become "bunched"; that is, their phase becomes coherent (synchronized), so they are all at the same point in their orbit at the same time. Therefore, they emit coherent radiation.
The electron speed in a gyrotron is slightly relativistic (on the order of but not close to the speed of light). This contrasts to the free-electron laser (and xaser) that work on different principles and whose electrons are highly relativistic.
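The operating frequency is tied to the applied magnetic field through the mildly relativistic electron cyclotron relation f = eB/(2πγm_e), roughly 28 GHz per tesla for non-relativistic electrons. The short calculation below, with an assumed field strength and beam energy, is only meant to show the orders of magnitude involved.

```python
from math import pi
from scipy.constants import e, m_e, c

def cyclotron_frequency(B_tesla, kinetic_energy_keV=0.0):
    """Electron cyclotron frequency in Hz for a field B, including the
    relativistic factor gamma from the beam's kinetic energy."""
    gamma = 1.0 + kinetic_energy_keV * 1e3 * e / (m_e * c**2)
    return e * B_tesla / (2 * pi * gamma * m_e)

# Assumed example values: a 5.6 T superconducting magnet and an 80 keV beam.
f = cyclotron_frequency(5.6, kinetic_energy_keV=80.0)
print(f"fundamental cyclotron frequency ~ {f / 1e9:.0f} GHz")
```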
Applications
Gyrotrons are used for many industrial and high-technology heating applications. For example, gyrotrons are used in nuclear fusion research experiments to heat plasmas and also in the manufacturing industry as a rapid heating tool in processing glass, composites, and ceramics, as well as for annealing (solar and semiconductors). Military applications include the Active Denial System.
In 2021 Quaise Energy announced the idea of using a gyrotron as a boring machine to drill a hole 20 kilometers in depth and use it to produce geothermal energy. The technique would use frequencies of 30 to 300 GHz and would transfer energy into the rock 10^12 times more efficiently than a laser. Lasers would also be disrupted by the vaporized rock, which would affect the longer wavelengths much less. Drilling rates of 70 meters per hour appear to be possible with a 1-MW gyrotron.
Types
The output window of the tube from which the microwave beam emerges can be in two locations. In the transverse-output gyrotron, the beam exits through a window on the side of the tube. This requires a 45° mirror at the end of the cavity to reflect the microwave beam, positioned at one side so the electron beam misses it. In the axial-output gyrotron, the beam exits through a window at the end of the tube at the far end of the cylindrical collector electrode which collects the electrons.
The original gyrotron developed in 1964 was an oscillator, but since that time gyrotron amplifiers have been developed. The helical gyrotron electron beam can amplify an applied microwave signal similarly to the way a straight electron beam amplifies in classical microwave tubes such as the klystron, so there is a series of gyrotrons that function analogously to these tubes. Their advantage is that they can operate at much higher frequencies.
The gyro-monotron (gyro-oscillator) is a single-cavity gyrotron that functions as an oscillator.
A gyro-klystron is an amplifier that functions analogously to a klystron tube. It has two microwave cavities along the electron beam: an input cavity upstream, to which the signal to be amplified is applied, and an output cavity downstream, from which the output is taken.
A gyro-TWT is an amplifier that functions analogously to a travelling-wave tube (TWT). It has a slow-wave structure similar to a TWT paralleling the beam, with the input microwave signal applied to the upstream end and the amplified output signal taken from the downstream end.
A gyro-BWO is an oscillator that functions analogously to a backward wave oscillator (BWO). It generates oscillations traveling in the opposite direction to the electron beam, which are output at the upstream end of the tube.
A gyro-twystron is an amplifier that functions analogously to a twystron, a tube that combines a klystron and a TWT. Like a klystron, it has an input cavity at the upstream end followed by buncher cavities to bunch the electrons, which are followed by a TWT-type slow-wave structure that develops the amplified output signal. Like a TWT, it has a wide bandwidth.
Manufacturers
The gyrotron was invented in the Soviet Union. Present makers include Communications & Power Industries (USA), Gycom (Russia), Thales Group (EU), Toshiba (Japan, now Canon, Inc., also from Japan), and Bridge12 Technologies. System developers include Gyrotron Technology.
See also
Electron cyclotron resonance
Fusion power
Terahertz radiation
References
External links
Gyrotron
Microwave technology
Terahertz technology
Soviet inventions
Vacuum tubes
Particle accelerators | Gyrotron | [
"Physics"
] | 1,539 | [
"Spectrum (physical sciences)",
"Terahertz technology",
"Vacuum tubes",
"Electromagnetic spectrum",
"Vacuum",
"Matter"
] |
1,363,423 | https://en.wikipedia.org/wiki/Human%20Proteome%20Folding%20Project | The Human Proteome Folding Project (HPF) is a collaborative effort between New York University (Bonneau Lab), the Institute for Systems Biology (ISB) and the University of Washington (Baker Lab), using the Rosetta software developed by the Rosetta Commons. The project is managed by the Bonneau lab.
HPF Phase 1 applied Rosetta v4.2x software on the human genome and 89 others, starting in November 2004. Phase 1 ended in July 2006. HPF Phase 2 (HPF2) applies the Rosetta v4.8x software in higher resolution, "full atom refinement" mode, concentrating on cancer biomarkers (proteins found at dramatically increased levels in cancer tissues), human secreted proteins and malaria.
Phase 1 ran on two volunteer computing grids: on United Devices' grid.org, and on the World Community Grid, an IBM philanthropic initiative. Phase 2 of the project ran exclusively on the World Community Grid; it terminated in 2013 after more than 9 years of IBM involvement.
The Institute for Systems Biology will use the results of the computations within its larger research efforts.
Publications
See also
BOINC
Folding@home
Foldit
Human proteome project
List of volunteer computing projects
References
External links
HPF page at WCG
HPF updates by Dr. Bonneau
The Yeast Resource Center Public Data Repository offers predicted structures for many organisms (humans, yeast, bacteria, phages) produced by the HPF
Berkeley Open Infrastructure for Network Computing projects
Proteomics
Protein structure
Volunteer computing projects | Human Proteome Folding Project | [
"Chemistry"
] | 333 | [
"Protein structure",
"Structural biology"
] |
25,642,027 | https://en.wikipedia.org/wiki/Amanita%20aestivalis | Amanita aestivalis, commonly known as the white American star-footed amanita, is a species of fungus in the mushroom family Amanitaceae. The cap of the white fruit body is in diameter. It sits atop a stem that is long. The entire fruit body will slowly stain a reddish-brown color in response to bruising. A. aestivalis may be a synonym for A. brunnescens, and may be confused with several other white-bodied amanitas. The fungus is distributed in eastern North America.
Description
The cap of the fruit body is in diameter, and depending on its age, may range from egg-shaped to convex to somewhat flattened. Older specimens may have edges that are curved upwards. The color is white or pale tan in the center of the cap; older specimens may have areas of discolored tissue colored brownish-red shades. Sometimes, the edge of the cap has radial grooves—up to long—that mirror the position of the underlying gills. When moist, the cap is sticky to the touch; when dry, it is shiny, usually without any remnants of the thin volva. The white gills are crowded close together, and are free from attachment to the stem. They are subventricose: slightly swollen in the middle, and tapering near the ends.
The stem is long by thick, and slightly thicker at the base than at the top. It is stuffed with whitish hyphae that resemble cotton. The surface of the stem is smooth or has delicate tufts of soft, white, woolly hairs. There is a rimmed bulb at the base of the stem, which can reach a diameter of over . The ring—located on the upper portion of the stem, from the top—is white, membranous, and long-lasting. The volva remains closely attached to the bulb, although a portion may stretch out like a thin membrane and adhere to the base of the stem before collapsing. The flesh will slowly turn pinkish-brown to chocolate-brown when it has been injured or bruised. Young specimens do not have any distinct odor, but fruit bodies may smell slightly of onions or garlic in age.
Microscopic characteristics
Viewed in deposit, as with a spore print, the basidiospores of A. aestivalis are white. Examination with a microscope reveals further details: they are roughly spherical, hyaline (translucent) and thin-walled, with dimensions of 7.8–8.8 μm. The spores are amyloid, meaning that they will absorb iodine when stained with Melzer's reagent and appear blue to blackish-blue. The spore-bearing cells, the basidia, are four-spored, thin-walled, and measure 32–60 μm long by 4–13 μm thick. There are no clamps present at the bases of the basidia.
Similar species
According to Singer, the species is often mistaken for A. verna in the eastern United States. A. verna, however, has ellipsoid spores. Other white amanitas within the range of A. aestivalis include the deadly toxic species A. virosa (which has a looser, more cottony stem), A. phalloides (whose cap usually has an olive-green tint) and A. bisporigera (which typically has two-spored basidia). A. aestivalis is sometimes considered a white form of A. brunnescens, but this latter species has dusky brownish-gray radial stripes and usually has many fibrils (short sections of hyphae) projecting from the surface, producing a fine, hairy appearance. Further, it stains more rapidly than A. aestivalis. A. asteropus (the "European star-footed Amanita") is cream to yellow in color, and differs from A. aestivalis in its reaction to chemical tests. It is only known from Europe.
Taxonomy
American mycologist Rolf Singer first described the species in 1949 based on specimens he had collected in Massachusetts, Michigan, New York and Virginia. Because this original report was published without a Latin description (contrary to the naming conventions of the International Code of Botanical Nomenclature), he later amended his description in 1959. There is some doubt as to whether A. aestivalis is a distinct species from A. brunnescens (the "brown American star-footed Amanita"), as described by George F. Atkinson in 1918. Singer claimed that the latter species could be distinguished from the former by the consistent absence of dusky brownish-gray radial stripes on the cap. However, in 1927, mycologist Louis Charles Christopher Krieger described the variant A. brunnescens var. pallida, which he said was identical to A. brunnescens except for the white or very pale cap. In his 1986 monograph on North American species of Amanita, David T. Jenkins preferred to reserve judgment on the matter.
Amanita aestivalis is classified in the section Vallidae of the genus Amanita, a grouping of amanitas characterized by having spherical spores, well-developed rings, weakly reddening flesh, and "limbate" volvals (with narrow "limbs" protruding from a soft, margined bulb).
The specific epithet is derived from the Latin adjective aestivalis, meaning "pertaining to the summer". Its vernacular name is the "white American star-footed Amanita".
Distribution and habitat
Fruit bodies typically appear from late June until autumn. In North America, it has been found in the states of New England, as well as Alabama, New York, and Virginia. The distribution extends north to the southeastern provinces of Canada and south to Florida.
Fruit bodies of the fungus grow on the ground in deciduous, coniferous, and mixed forests. A preference has been noted for oak woods containing Tsuga or Pinus species, as well as beech wood with Picea, Abies, and Betula.
Ecology
A. aestivalis is a mycorrhizal species, meaning it forms a mutualistic relationship in which the vegetative hyphae of the fungus grow around and enclose the tiny roots of trees and shrubs. In this way, the plant is better able to absorb phosphorus and other soil nutrients, while the fungus receives moisture, protection, and nutritive byproducts of the plant's metabolism.
Edibility
Although the edibility has not been documented for this species, some sources have noted that toxicity is suspected.
See also
List of Amanita species
References
External links
Aestivalis
Fungi of North America
Fungi described in 1959
Taxa named by Rolf Singer
Fungus species | Amanita aestivalis | [
"Biology"
] | 1,385 | [
"Fungi",
"Fungus species"
] |
25,642,802 | https://en.wikipedia.org/wiki/Tensor%20software | Tensor software is a class of mathematical software designed for manipulation and calculation with tensors.
Standalone software
SPLATT is an open source software package for high-performance sparse tensor factorization. SPLATT ships a stand-alone executable, C/C++ library, and Octave/MATLAB API.
Cadabra is a computer algebra system (CAS) designed specifically for the solution of problems encountered in field theory. It has extensive functionality for tensor polynomial simplification including multi-term symmetries, fermions and anti-commuting variables, Clifford algebras and Fierz transformations, implicit coordinate dependence, multiple index types and many more. The input format is a subset of TeX. Both a command-line and a graphical interface are available.
Tela is a software package similar to MATLAB and GNU Octave, but designed specifically for tensors.
Software for use with Mathematica
Tensor is a tensor package written for the Mathematica system. It provides many functions relevant for General Relativity calculations in general Riemann–Cartan geometries.
Ricci is a system for Mathematica 2.x and later for doing basic tensor analysis, available for free.
TTC Tools of Tensor Calculus is a Mathematica package for doing tensor and exterior calculus on differentiable manifolds.
EDC and RGTC, "Exterior Differential Calculus" and "Riemannian Geometry & Tensor Calculus," are free Mathematica packages for tensor calculus especially designed but not only for general relativity.
Tensorial "Tensorial 4.0" is a general purpose tensor calculus package for Mathematica.
xAct: Efficient Tensor Computer Algebra for Mathematica. xAct is a collection of packages for fast manipulation of tensor expressions.
GREAT is a free package for Mathematica that computes the Christoffel connection and the basic tensors of General Relativity from a given metric tensor.
Atlas 2 for Mathematica is a powerful Mathematica toolbox which allows one to do a wide range of modern differential geometry calculations.
GRTensorM is a computer algebra package for performing calculations in the general area of differential geometry.
MathGR is a package to manipulate tensor and GR calculations with either abstract or explicit indices, simplify tensors with permutational symmetries, decompose tensors from abstract indices to partially or completely explicit indices and convert partial derivatives into total derivatives.
TensoriaCalc is a tensor calculus package written for Mathematica 9 and higher, aimed at providing user-friendly functionality and a smooth consistency with the Mathematica language itself. As of January 2015, given a metric and the coordinates used, TensoriaCalc can compute Christoffel symbols, the Riemann curvature tensor, and Ricci tensor/scalar; it allows for user-defined tensors and is able to perform basic operations such as taking the covariant derivatives of tensors. TensoriaCalc is continuously under development due to time constraints faced by its developer.
OGRe is a modern free and open-source Mathematica package for tensor calculus, released in 2021 for Mathematica 12.0 and later. It is designed to be both powerful and user-friendly, and is especially suitable for general relativity. OGRe allows performing arbitrarily complicated tensor operations, and automatically transforms between index configurations and coordinate systems behind the scenes as needed for each operation.
Software for use with Maple
GRTensorII is a computer algebra package for performing calculations in the general area of differential geometry.
Atlas 2 for Maple is a modern differential geometry package for Maple.
DifferentialGeometry is a package which performs fundamental operations of calculus on manifolds, differential geometry, tensor calculus, General Relativity, Lie algebras, Lie groups, transformation groups, jet spaces, and the variational calculus. It is included with Maple.
Physics is a package developed as part of Maple, which implements symbolic computations with most of the objects used in mathematical physics. It includes objects from general relativity (tensors, metrics, covariant derivatives, tetrads etc.), quantum mechanics (Kets, Bras, commutators, noncommutative variables) etc.
Software for use with Matlab
Tensorlab is a MATLAB toolbox for multilinear algebra and structured data fusion.
Tensor Toolbox Multilinear algebra MATLAB software.
MPCA and MPCA+LDA Multilinear subspace learning software: Multilinear principal component analysis.
UMPCA Multilinear subspace learning software: Uncorrelated multilinear principal component analysis.
UMLDA Multilinear subspace learning software: Uncorrelated multilinear discriminant analysis.
Software for use with Maxima
Maxima is a free open source general purpose computer algebra system which includes several packages for tensor algebra calculations in its core distribution.
It is particularly useful for calculations with abstract tensors, i.e., when one wishes to do calculations without defining all components of the tensor explicitly. It comes with three tensor packages:
itensor for abstract (indicial) tensor manipulation,
ctensor for component-defined tensors, and
atensor for algebraic tensor manipulation.
Software for use with R
Tensor is an R package for basic tensor operations.
rTensor provides several tensor decomposition approaches.
nnTensor provides several non-negative tensor decomposition approaches.
ttTensor provides several tensor-train decomposition approaches.
tensorBF is an R package for Bayesian Tensor decomposition.
MTF Bayesian Multi-Tensor Factorization for data fusion and Bayesian versions of Tensor PCA and Tensor CCA. Software: MTF.
Software for use with Python
TensorLy provides several tensor decomposition approaches (a minimal usage sketch follows this list).
OGRePy is Python port of the Mathematica package OGRe (see ), released in 2024 for Python 3.12 and later. It utilizes SymPy for symbolic computations and Jupyter as a notebook interface. OGRePy allows calculating arbitrary tensor formulas using any combination of addition, multiplication by scalar, trace, contraction, partial derivative, covariant derivative, and permutation of indices, and provides facilities for calculating various curvature tensors and geodesic equations.
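A minimal TensorLy usage sketch is shown below; the random tensor, the chosen rank, and the reconstruction check are arbitrary illustrative choices (the parafac and cp_to_tensor calls are part of TensorLy's public API in recent versions).

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Build a small random 3-way tensor and compute a rank-3 CP (PARAFAC) decomposition.
rng = np.random.default_rng(0)
tensor = tl.tensor(rng.standard_normal((4, 5, 6)))

cp = parafac(tensor, rank=3)        # weights and factor matrices
approx = tl.cp_to_tensor(cp)        # reassemble the rank-3 approximation

rel_err = tl.norm(tensor - approx) / tl.norm(tensor)
print(f"relative reconstruction error: {rel_err:.3f}")
```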
Software for use with Julia
TensorDecompositions.jl provides several tensor decomposition approaches.
TensorToolbox.jl provides several tensor decomposition approaches. This follows the functionality of MATLAB Tensor toolbox and Hierarchical Tucker Toolbox.
ITensors.jl is a library for rapidly creating correct and efficient tensor network algorithms. This is the Julia version of ITensor, not a wrapper around the C++ version but full implementations by Julia language.
Software for use with SageMath
SageManifolds: tensor calculus on smooth manifolds; all SageManifolds code is included in SageMath since version 7.5; it allows for computations in various vector frames and coordinate charts, the manifold not being required to be parallelizable.
Software for use with Java
ND4J: N-dimensional arrays for the JVM is a Java library for basic tensor operations and scientific computing.
Tensor: computation for regular or unstructured multi-dimensional tensors. Scalar entries are either in numeric or exact precision. API inspired by Mathematica. Java 8 library with no external dependencies.
Libraries
Redberry is an open source computer algebra system designed for symbolic tensor manipulation. Redberry provides common tools for expression manipulation, generalized on tensorial objects, as well as tensor-specific features: indices symmetries, LaTeX-style input, natural dummy indices handling, multiple index types etc. The HEP package includes tools for Feynman diagrams calculation: Dirac and SU(N) algebra, Levi-Civita simplifications, tools for calculation of one-loop counterterms etc. Redberry is written in Java and provides extensive Groovy-based programming language.
libxm is a lightweight distributed-parallel tensor library written in C.
FTensor is a high performance tensor library written in C++.
TL is a multi-threaded tensor library implemented in C++ used in Dynare++. The library allows for folded/unfolded, dense/sparse tensor representations and general ranks (symmetries). The library implements the Faà di Bruno formula and is adaptive to available memory. Dynare++ is a standalone package solving higher-order Taylor approximations to equilibria of non-linear stochastic models with rational expectations.
vmmlib is a C++ linear algebra library that supports 3-way tensors, emphasizing computation and manipulation of several tensor decompositions.
Spartns is a Sparse Tensor framework for Common Lisp.
FAstMat is a thread-safe general tensor algebra library written in C++ and specially designed for FEM/FVM/BEM/FDM element/edge wise computations.
Cyclops Tensor Framework is a distributed memory library for efficient decomposition of tensors of arbitrary type and parallel MPI+OpenMP execution of tensor contractions/functions.
TiledArray is a scalable, block-sparse tensor library that is designed to aid in rapid composition of high-performance algebraic tensor equations. It is designed to scale from a single multicore computer to a massively parallel, distributed-memory system.
libtensor is a set of performance linear tensor algebra routines for large tensors found in post-Hartree–Fock methods in quantum chemistry.
ITensor features automatic contraction of matching tensor indices. It is written in C++ and has higher-level features for quantum physics algorithms based on tensor networks.
Fastor is a high performance C++ tensor algebra library that supports tensors of arbitrary dimensions and all possible contractions and permutations thereof. It employs compile-time graph search optimisations to find the optimal contraction sequence between an arbitrary number of tensors in a network. It has high-level domain-specific features for solving nonlinear multiphysics problems using FEM.
Xerus is a C++ tensor algebra library for tensors of arbitrary dimensions and tensor decomposition into general tensor networks (focusing on matrix product states). It offers Einstein notation like syntax and optimizes the contraction order of any network of tensors at runtime so that dimensions need not be fixed at compile-time.
References
Computer algebra systems
Tensors | Tensor software | [
"Mathematics",
"Engineering"
] | 2,101 | [
"Computer algebra systems",
"Tensors",
"Mathematical software"
] |
25,646,652 | https://en.wikipedia.org/wiki/C22H34O2 | {{DISPLAYTITLE:C22H34O2}}
The molecular formula C22H34O2 (molar mass: 330.51 g/mol, exact mass: 330.2559 u) may refer to:
Anagestone
Docosapentaenoic acid
E-EPA
Hexahydrocannabihexol
3β-Methoxypregnenolone
Topterone
Molecular formulas | C22H34O2 | [
"Physics",
"Chemistry"
] | 89 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
25,646,687 | https://en.wikipedia.org/wiki/C20H38O2 | {{DISPLAYTITLE:C20H38O2}}
The molecular formula C20H38O2 may refer to:
Eicosanolide
Eicosenoic acids
Paullinic acid, also called 7-eicosenoic acid
Prostanoic acid, also called 9-eicosenoic acid
11-Eicosenoic acid, also called gondoic acid
Ethyl oleate
Gadoleic acid
Vaccenyl acetate
Molecular formulas | C20H38O2 | [
"Physics",
"Chemistry"
] | 98 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
18,952,167 | https://en.wikipedia.org/wiki/Formulary%20%28pharmacy%29 | A formulary is a list of pharmaceutical drugs, often decided upon by a group of people, for various reasons such as insurance coverage or use at a medical facility. Traditionally, a formulary contained a collection of formulas for the compounding and testing of medication (a resource closer to what would be referred to as a pharmacopoeia today). Today, the main function of a prescription formulary is to specify particular medications that are approved to be prescribed at a particular hospital, in a particular health system, or under a particular health insurance policy. The development of prescription formularies is based on evaluations of efficacy, safety, and cost-effectiveness of drugs.
Depending on the individual formulary, it may also contain additional clinical information, such as side effects, contraindications, and doses.
By the turn of the millennium, 156 countries had national or provincial essential medicines lists and 135 countries had national treatment guidelines.
Australia
In Australia, where there is a public health care system, medications are subsidised under the Pharmaceutical Benefits Scheme (PBS) and medications that are available under the PBS and the indications for which they can be obtained under said scheme can be found in at least two places, the PBS webpage and the Australian Medicines Handbook.
Canada
The Prescription Drug List is the national formulary that lists all medicinal ingredients for human and animal use available with a prescription, with the exception of those under the Controlled Drugs and Substances Act. The Canadian Agency for Drugs and Technologies in Health (CADTH) is the advisory body that evaluates new medical technologies and prescription medication. Based on its recommendations, the provincial and territorial governments decide whether or not to implement changes to their healthcare systems and public drug formularies. Provincial and territorial governments provide partial prescription drug coverage, and the overall drug payment is a mix of public taxation, private insurance and out-of-pocket expenses. Insurance coverage differs regionally, although each public drug coverage plan must meet standards set by the federal government. Regional health authorities are in charge of regulating and providing insurance for their residents, while the federal government provides insurance for specifically eligible veterans, First Nations, Inuit, Canadian Forces, federal inmates and some refugees.
United States
In the US, where a system of quasi-private healthcare is in place, a formulary is a list of prescription drugs available to enrollees, and a tiered formulary provides financial incentives for patients to select lower-cost drugs. For example, under a 3-tier formulary, the first tier typically includes generic drugs with the lowest cost sharing (e.g., 10% coinsurance), the second includes preferred brand-name drugs with higher cost sharing (e.g., 25%), and the third includes non-preferred brand-name drugs with the highest cost-sharing (e.g., 40%).
When used appropriately, formularies can help manage drug costs imposed on the insurance policy. However, for drugs that are not on formulary, patients must pay a larger percentage of the cost of the drug, sometimes 100%. Formularies vary between drug plans and differ in the breadth of drugs covered and costs of co-pay and premiums. Most formularies cover at least one drug in each drug class, and encourage generic substitution (also known as a preferred drug list). Formularies have been shown to cause issues in hospitals when patients are discharged and the hospital formulary is not aligned with their outpatient drug insurance plan.
United Kingdom
In the UK, the National Health Service (NHS) provides publicly funded universal health care, financed by national health insurance. Here, formularies exist to specify which drugs are available on the NHS. The two main reference sources providing this information are the British National Formulary (BNF) and the Drug Tariff. There is a section in the Drug Tariff, known unofficially as the "Blacklist", detailing medicines which are not to be prescribed under the NHS and must be paid for privately by the patient. Recommendations for additions to the NHS formulary are provided by the National Institute for Health and Care Excellence.
In addition to this, local NHS hospital trusts and Primary Care (General Practitioners) Clinical Commissioning Groups (CCGs), produce their own lists of medicines deemed preferable for prescribing within their locality or organisation; such lists are usually a subset of the more comprehensive BNF. These formularies are not absolutely binding, and physicians may prescribe a non-formulary medicine if they consider it necessary and justifiable. Often, these local formularies are shared between a Primary Care Organisation (PCO) and hospitals within that PCO's jurisdiction, in order to facilitate the procedure of transferring a patient from primary care to secondary care, thus causing fewer "interfacing" issues in the process.
As in the United States, the NHS actively encourages prescribing of generic drugs, in order to save more of the budget allocated to them by the Department of Health.
National formulary
A national formulary contains a list of medicines that are approved for prescription throughout the country, indicating which products are interchangeable. It includes key information on the composition, description, selection, prescribing, dispensing and administration of medicines. Those drugs considered less suitable for prescribing are clearly identified.
Examples of national formularies are:
Australian Pharmaceutical Formulary (APF)
Österreichisches Arzneibuch (ÖAB), the Austrian national formulary
British National Formulary (BNF) and British National Formulary for Children (BNFC)
Farmacotherapeutisch Kompas (FK), the Dutch national formulary
Formularium Nasional (Fornas), the Indonesian national formulary
Hrvatska Farmakopeja, the Croatian national formulary
Japan National Health Insurance Drug Price List
Pharmaceutical Schedule, New Zealand's publicly funded national formulary
United States National Formulary, later bought out and merged with the United States Pharmacopeia (USP-NF)
Farmaceutiska Specialiteter i Sverige (FASS), the Swedish national formulary. Usage of the database is free of charge and it has no promotional texts or advertising. FASS has been developed by the Swedish Association of the Pharmaceutical Industry (LIF) in close cooperation with Sweden's pharmaceutical industry, with additional assistance from the Medical Products Agency, the Pharmaceutical Benefits Board and the National Corporation of Pharmacies. Information on interactions is derived from a joint development between the Department of Pharmaceutical Biosciences at Uppsala University and the Swedish Association of the Pharmaceutical Industry (LIF).
See also
References
External links
A National Formulary for Canada, Department of Economics, University of Calgary, 2005 (archived 6 July 2011)
The Kazakhstan National Formulary (archived 27 March 2022)
Pharmacy
Pharmaceuticals policy
Pharmacological classification systems
Pharmaceutical terminology
Health care management
Health care quality
Health economics | Formulary (pharmacy) | [
"Chemistry"
] | 1,380 | [
"Pharmacological classification systems",
"Pharmacology",
"Pharmacy"
] |
18,952,443 | https://en.wikipedia.org/wiki/Osmoregulation | Osmoregulation is the active regulation of the osmotic pressure of an organism's body fluids, detected by osmoreceptors, to maintain the homeostasis of the organism's water content; that is, it maintains the fluid balance and the concentration of electrolytes (salts in solution, in this case the salts dissolved in the body fluids) to keep the body fluids from becoming too dilute or too concentrated. Osmotic pressure is a measure of the tendency of water to move into one solution from another by osmosis. The higher the osmotic pressure of a solution, the more water tends to move into it. Pressure must be exerted on the hypertonic side of a selectively permeable membrane to prevent diffusion of water by osmosis from the side containing pure water.
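For dilute solutions, the osmotic pressure referred to here is commonly approximated by the van 't Hoff relation, recorded below only as a quantitative reminder; Π is the osmotic pressure, i the dissociation (van 't Hoff) factor, c the molar solute concentration, R the gas constant, and T the absolute temperature.

```latex
\Pi = i\,c\,R\,T
```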
Although there may be hourly and daily variations in osmotic balance, an animal is generally in an osmotic steady state over the long term. Organisms in aquatic and terrestrial environments must maintain the right concentration of solutes and amount of water in their body fluids; this involves excretion (getting rid of metabolic nitrogen wastes and other substances such as hormones that would be toxic if allowed to accumulate in the blood) through organs such as the skin and the kidneys.
Regulators and conformers
With respect to osmoregulation, organisms fall into two major groups: osmoconformers and osmoregulators. Osmoconformers match their body osmolarity to their environment actively or passively. Most marine invertebrates are osmoconformers, although their ionic composition may be different from that of seawater. In a strictly osmoregulating animal, the amounts of internal salt and water are held relatively constant in the face of environmental changes. This requires that intake and outflow of water and salts be equal over an extended period of time.
Organisms that maintain an internal osmolarity different from the medium in which they are immersed have been termed osmoregulators. They tightly regulate their body osmolarity, maintaining constant internal conditions. They are more common in the animal kingdom. Osmoregulators actively control salt concentrations regardless of the salt concentration in the environment. An example is freshwater fish. The gills actively take up salt from the environment by the use of mitochondria-rich cells. Water will diffuse into the fish, so it excretes a very hypotonic (dilute) urine to expel all the excess water. A marine fish has an internal osmotic concentration lower than that of the surrounding seawater, so it tends to lose water and gain salt. It actively excretes salt out from the gills. Most fish are stenohaline, which means they are restricted to either salt or fresh water and cannot survive in water with a different salt concentration than they are adapted to. However, some fish show an ability to effectively osmoregulate across a broad range of salinities; fish with this ability are known as euryhaline species, e.g., flounder. Flounder have been observed to inhabit two disparate environments, marine and fresh water, and adapt to both through behavioral and physiological modifications.
Some marine fish, such as sharks, have adopted a different, efficient osmoregulatory mechanism to conserve water: they retain urea in their blood at relatively high concentration. Urea damages living tissues, so to cope with this problem some fish also retain trimethylamine oxide, which helps to counteract urea's destabilizing effects on cells. Having a solute concentration slightly higher than that of seawater (i.e., above 1000 mOsm, the solute concentration of seawater), sharks do not drink water, much like freshwater fish.
In plants
While there are no specific osmoregulatory organs in higher plants, the stomata are important in regulating water loss through evapotranspiration, and on the cellular level the vacuole is crucial in regulating the concentration of solutes in the cytoplasm. Strong winds, low humidity and high temperatures all increase evapotranspiration from leaves. Abscisic acid is an important hormone in helping plants to conserve water—it causes stomata to close and stimulates root growth so that more water can be absorbed.
Plants share with animals the problems of obtaining water but, unlike in animals, the loss of water in plants is crucial to create a driving force to move nutrients from the soil to tissues. Certain plants have evolved methods of water conservation.
Xerophytes are plants that can survive in dry habitats, such as deserts, and are able to withstand prolonged periods of water shortage. Succulent plants such as the cacti store water in the vacuoles of large parenchyma tissues. Other plants have leaf modifications to reduce water loss, such as needle-shaped leaves, sunken stomata, and thick, waxy cuticles as in the pine. The sand-dune marram grass has rolled leaves with stomata on the inner surface.
Hydrophytes are plants that grow in aquatic habitats; they may be floating, submerged, or emergent, and may grow in seasonal (rather than permanent) wetlands. In these plants the water absorption may occur through the whole surface of the plant, e.g., the water lily, or solely through the roots, as in sedges. These plants do not face major osmoregulatory challenges from water scarcity, but aside from species adapted for seasonal wetlands, have few defenses against desiccation.
Halophytes are plants living in soils with high salt concentrations, such as salt marshes or alkaline soils in desert basins. They have to absorb water from soil that has a higher salt concentration and therefore lower water potential (higher osmotic pressure). Halophytes cope with this situation by actively accumulating salts in their roots. As a consequence, the cells of the roots develop lower water potential, which brings in water by osmosis. The excess salt can be stored in cells or excreted from salt glands on the leaves. In some species, such as glasswort and cord-grass, the salt thus secreted helps them trap water vapour from the air, which is then absorbed as liquid water by leaf cells, providing an additional way of obtaining water from the air.
Mesophytes are plants living in the temperate zone, growing in well-watered soil. They can easily compensate for the water lost by transpiration by absorbing water from the soil. To prevent excessive transpiration, they have developed a waterproof external covering called the cuticle.
In animals
Humans
Kidneys play a very large role in human osmoregulation by regulating the amount of water reabsorbed from glomerular filtrate in kidney tubules, which is controlled by hormones such as antidiuretic hormone (ADH), aldosterone, and angiotensin II. For example, a decrease in water potential is detected by osmoreceptors in the hypothalamus, which stimulates ADH release from the pituitary gland to increase the permeability of the walls of the collecting ducts in the kidneys. Therefore, a large proportion of water is reabsorbed from fluid in the kidneys to prevent too much water from being excreted.
Marine mammals
Drinking is not common behavior in pinnipeds and cetaceans. Water balance is maintained in marine mammals by metabolic and dietary water, while accidental ingestion and dietary salt may help maintain homeostasis of electrolytes. The kidneys of pinnipeds and cetaceans are lobed in structure, unlike those of terrestrial mammals other than bears, but this specific adaptation does not confer any greater concentrating ability. Unlike most other aquatic mammals, manatees frequently drink fresh water and sea otters frequently drink saltwater.
Teleosts
In teleost (advanced ray-finned) fishes, the gills, kidney and digestive tract are involved in maintenance of body fluid balance, as the main osmoregulatory organs. Gills in particular are considered the primary organ by which ionic concentration is controlled in marine teleosts.
Unusually, the catfishes in the eeltail family Plotosidae have an extra-branchial salt-secreting dendritic organ. The dendritic organ is likely a product of convergent evolution with other vertebrate salt-secreting organs. The role of this organ was discovered by its high NKA and NKCC activity in response to increasing salinity. However, the Plotosidae dendritic organ may be of limited use under extreme salinity conditions, compared to more typical gill-based ionoregulation.
In protists
Amoeba makes use of contractile vacuoles to collect excretory wastes, such as ammonia, from the intracellular fluid by diffusion and active transport. As osmotic action pushes water from the environment into the cytoplasm, the vacuole moves to the surface and pumps the contents into the environment.
In bacteria
Bacteria respond to osmotic stress by rapidly accumulating electrolytes or small organic solutes via transporters whose activities are stimulated by increases in osmolarity. The bacteria may also turn on genes encoding transporters of osmolytes and enzymes that synthesize osmoprotectants. The EnvZ/OmpR two-component system, which regulates the expression of porins, is well characterized in the model organism E. coli.
Vertebrate excretory systems
Waste products of the nitrogen metabolism
Ammonia is a toxic by-product of protein metabolism and is generally converted to less toxic substances after it is produced then excreted; mammals convert ammonia to urea, whereas birds and reptiles form uric acid to be excreted with other wastes via their cloacas.
Achieving osmoregulation in vertebrates
Four processes occur:
filtration – fluid portion of blood (plasma) is filtered from a nephron (functional unit of vertebrate kidney) structure known as the glomerulus into Bowman's capsule or glomerular capsule (in the kidney's cortex) and flows down the proximal convoluted tubule to a "u-turn" called the Loop of Henle (loop of the nephron) in the medulla portion of the kidney.
reabsorption – most of the viscous glomerular filtrate is returned to blood vessels that surround the convoluted tubules.
secretion – the remaining fluid becomes urine, which travels down collecting ducts to the medullary region of the kidney.
excretion – the urine (in mammals) is stored in the urinary bladder and exits via the urethra; in other vertebrates, the urine mixes with other wastes in the cloaca before leaving the body (frogs also have a urinary bladder).
See also
References
E. Solomon, L. Berg, D. Martin, Biology 6th edition. Brooks/Cole Publishing. 2002
Human homeostasis
Cell biology
Membrane biology | Osmoregulation | [
"Chemistry",
"Biology"
] | 2,290 | [
"Cell biology",
"Human homeostasis",
"Membrane biology",
"Homeostasis",
"Molecular biology"
] |
18,952,492 | https://en.wikipedia.org/wiki/Anthocyanin | Anthocyanins (), also called anthocyans, are water-soluble vacuolar pigments that, depending on their pH, may appear red, purple, blue, or black. In 1835, the German pharmacist Ludwig Clamor Marquart named a chemical compound that gives flowers a blue color, Anthokyan, in his treatise "Die Farben der Blüthen" (English: The Colors of Flowers). Food plants rich in anthocyanins include the blueberry, raspberry, black rice, and black soybean, among many others that are red, blue, purple, or black. Some of the colors of autumn leaves are derived from anthocyanins.
Anthocyanins belong to a parent class of molecules called flavonoids synthesized via the phenylpropanoid pathway. They can occur in all tissues of higher plants, including leaves, stems, roots, flowers, and fruits. Anthocyanins are derived from anthocyanidins by adding sugars. They are odorless and moderately astringent.
Although approved as food and beverage colorants in the European Union, anthocyanins are not approved for use as a food additive in the United States, because they have not been verified as safe when used as food or supplement ingredients. There is no conclusive evidence that anthocyanins have any effect on human biology or diseases.
Anthocyanin-rich plants
Coloration
In flowers, the coloration that is provided by anthocyanin accumulation may attract a wide variety of animal pollinators, while in fruits, the same coloration may aid in seed dispersal by attracting herbivorous animals to the potentially-edible fruits bearing these red, blue, or purple colors.
Plant physiology
Anthocyanins may have a protective role in plants against extreme temperatures. Tomato plants protect against cold stress with anthocyanins countering reactive oxygen species, leading to a lower rate of cell death in leaves.
Light absorbance
The absorbance pattern responsible for the red color of anthocyanins may be complementary to that of green chlorophyll in photosynthetically active tissues such as young Quercus coccifera leaves. It may protect the leaves from attacks by herbivores that may be attracted by green color.
Occurrence
Anthocyanins are found in the cell vacuole, mostly in flowers and fruits, but also in leaves, stems, and roots. In these parts, they are found predominantly in outer cell layers such as the epidermis and peripheral mesophyll cells.
Most frequently occurring in nature are the glycosides of cyanidin, delphinidin, malvidin, pelargonidin, peonidin, and petunidin. Roughly 2% of all hydrocarbons fixed in photosynthesis are converted into flavonoids and their derivatives, such as the anthocyanins. Not all land plants contain anthocyanin; in the Caryophyllales (including cactus, beets, and amaranth), they are replaced by betalains. Anthocyanins and betalains have never been found in the same plant.
Sometimes bred purposely for high anthocyanin content, ornamental plants such as sweet peppers may have unusual culinary and aesthetic appeal.
In flowers
Anthocyanins occur in the flowers of many plants, such as the blue poppies of some Meconopsis species and cultivars. Anthocyanins have also been found in various tulip flowers, such as Tulipa gesneriana, Tulipa fosteriana and Tulipa eichleri.
In food
Plants rich in anthocyanins are Vaccinium species, such as blueberry, cranberry, and bilberry; Rubus berries, including black raspberry, red raspberry, and blackberry; blackcurrant, cherry, eggplant (aubergine) peel, black rice, ube, Okinawan sweet potato, Concord grape, muscadine grape, red cabbage, and violet petals. Red-fleshed peaches and apples contain anthocyanins. Anthocyanins are less abundant in banana, asparagus, pea, fennel, pear, and potato, and may be totally absent in certain cultivars of green gooseberries.
The highest recorded amounts appear to be in the seed coat of black soybean (Glycine max L. Merr.), which contains approximately 2 g per 100 g, in purple corn kernels and husks, and in the skins and pulp of black chokeberry (Aronia melanocarpa L.). Because of critical differences in sample origin, preparation, and extraction methods used to determine anthocyanin content, reported values are not directly comparable.
Nature, traditional agriculture methods, and plant breeding have produced various uncommon crops containing anthocyanins, including blue- or red-flesh potatoes and purple or red broccoli, cabbage, cauliflower, carrots, and corn. Garden tomatoes have been subjected to a breeding program using introgression lines of genetically modified organisms (but not incorporating them in the final purple tomato) to define the genetic basis of purple coloration in wild species that originally were from Chile and the Galapagos Islands. The variety known as "Indigo Rose" became available commercially to the agricultural industry and home gardeners in 2012. Investing tomatoes with high anthocyanin content doubles their shelf-life and inhibits growth of a post-harvest mold pathogen, Botrytis cinerea.
Some tomatoes also have been modified genetically with transcription factors from snapdragons to produce high levels of anthocyanins in the fruits. Anthocyanins also may be found in naturally ripened olives, and are partly responsible for the red and purple colors of some olives.
In leaves of plant foods
Content of anthocyanins in the leaves of colorful plant foods such as purple corn, blueberries, or lingonberries, is about ten times higher than in the edible kernels or fruit.
The color spectrum of grape berry leaves may be analysed to evaluate the amount of anthocyanins. Fruit maturity, quality, and harvest time may be evaluated on the basis of the spectrum analysis.
Autumn leaf color
The reds, purples, and their blended combinations responsible for autumn foliage are derived from anthocyanins. Unlike carotenoids, anthocyanins are not present in the leaf throughout the growing season, but are produced actively, toward the end of summer. They develop in late summer in the sap of leaf cells, resulting from complex interactions of factors inside and outside the plant. Their formation depends on the breakdown of sugars in the presence of light as the level of phosphate in the leaf is reduced. Orange leaves in autumn result from a combination of anthocyanins and carotenoids.
Anthocyanins are present in approximately 10% of tree species in temperate regions, although in certain areas such as New England, up to 70% of tree species may produce anthocyanins.
Colorant safety
Anthocyanins are approved for use as food colorants in the European Union, Australia, and New Zealand, having colorant code E163. In 2013, a panel of scientific experts for the European Food Safety Authority concluded that anthocyanins from various fruits and vegetables have been insufficiently characterized by safety and toxicology studies to approve their use as food additives. Extending from a safe history of using red grape skin extract and blackcurrant extracts to color foods produced in Europe, the panel concluded that these extract sources were exceptions to the ruling and were sufficiently shown to be safe.
Anthocyanin extracts are not specifically listed among approved color additives for foods in the United States; however, grape juice, red grape skin and many fruit and vegetable juices, which are approved for use as colorants, are rich in naturally occurring anthocyanins. No anthocyanin sources are included among approved colorants for drugs or cosmetics. When esterified with fatty acids, anthocyanins can be used as a lipophilic colorant for foods.
In human consumption
Although anthocyanins have been shown to have antioxidant properties in vitro, there is no evidence for antioxidant effects in humans after consuming foods rich in anthocyanins. Unlike controlled test-tube conditions, the fate of anthocyanins in vivo shows they are poorly conserved (less than 5%), with most of what is absorbed existing as chemically modified metabolites that are excreted rapidly. The increase in antioxidant capacity of blood seen after the consumption of anthocyanin-rich foods may not be caused directly by the anthocyanins in the food, but instead by increased uric acid levels derived from metabolizing flavonoids (anthocyanin parent compounds) in the food. It is possible that metabolites of ingested anthocyanins are reabsorbed in the gastrointestinal tract from where they may enter the blood for systemic distribution and have effects as smaller molecules.
In a 2010 review of scientific evidence concerning the possible health benefits of eating foods claimed to have "antioxidant properties" due to anthocyanins, the European Food Safety Authority concluded that 1) there was no basis for a beneficial antioxidant effect from dietary anthocyanins in humans, 2) there was no evidence of a cause-and-effect relationship between the consumption of anthocyanin-rich foods and protection of DNA, proteins, and lipids from oxidative damage, and 3) there was no evidence generally for consumption of anthocyanin-rich foods having any "antioxidant", "anti-cancer", "anti-aging", or "healthy aging" effects.
Chemical properties
Flavylium cation derivatives
Glycosides of anthocyanidins
The anthocyanins, anthocyanidins with sugar group(s), are mostly 3-glucosides of the anthocyanidins. The anthocyanins are subdivided into the sugar-free anthocyanidin aglycones and the anthocyanin glycosides. As of 2003, more than 400 anthocyanins had been reported, while later literature in early 2006 put the number at more than 550 different anthocyanins. The difference in chemical structure that occurs in response to changes in pH is the reason why anthocyanins are often used as pH indicators, as they change from red in acids to blue in bases through a process called halochromism.
Stability
Anthocyanins are thought to be subject to physiochemical degradation in vivo and in vitro. Structure, pH, temperature, light, oxygen, metal ions, intramolecular association, and intermolecular association with other compounds (copigments, sugars, proteins, degradation products, etc.) generally are known to affect the color and stability of anthocyanins. B-ring hydroxylation status and pH have been shown to mediate the degradation of anthocyanins to their phenolic acid and aldehyde constituents. Indeed, significant portions of ingested anthocyanins are likely to degrade to phenolic acids and aldehyde in vivo, following consumption. This characteristic confounds scientific isolation of specific anthocyanin mechanisms in vivo.
pH
Anthocyanins generally are degraded at higher pH. However, some anthocyanins, such as petanin (petunidin 3-[6-O-(4-O-(E)-p-coumaroyl-O-α-L-rhamnopyranosyl)-β-D-glucopyranoside]-5-O-β-D-glucopyranoside), are resistant to degradation at pH 8 and may be used effectively as a food colorant.
Use as environmental pH indicator
Anthocyanins may be used as pH indicators because their color changes with pH; they are red or pink in acidic solutions (pH < 7), purple in neutral solutions (pH ≈ 7), greenish-yellow in alkaline solutions (pH > 7), and colorless in very alkaline solutions, where the pigment is completely reduced.
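A minimal sketch of how the ranges above translate into a crude indicator readout; the exact band boundaries are assumptions, and real extracts vary between plant sources:

```python
def anthocyanin_color(ph: float) -> str:
    """Approximate color of an anthocyanin extract at a given pH, following the
    ranges described above; the band boundaries chosen here are assumptions."""
    if ph < 6.5:
        return "red/pink"
    if ph <= 7.5:
        return "purple"
    if ph <= 11.0:
        return "greenish-yellow"
    return "colorless"

for ph in (3.0, 7.0, 9.0, 13.0):
    print(f"pH {ph:4.1f} -> {anthocyanin_color(ph)}")
```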
Biosynthesis
Anthocyanin pigments are assembled like all other flavonoids from two different streams of chemical raw materials in the cell:
One stream involves the shikimate pathway to produce the amino acid phenylalanine, (see phenylpropanoids)
The other stream produces three molecules of malonyl-CoA, a C3 unit from a C2 unit (acetyl-CoA),
These streams meet and are coupled together by the enzyme chalcone synthase, which forms an intermediate chalcone-like compound via a polyketide folding mechanism that is commonly found in plants,
The chalcone is subsequently isomerized by the enzyme chalcone isomerase to the prototype pigment naringenin,
Naringenin is subsequently oxidized by enzymes such as flavanone hydroxylase, flavonoid 3'-hydroxylase, and flavonoid 3',5'-hydroxylase,
These oxidation products are further reduced by the enzyme dihydroflavonol 4-reductase to the corresponding colorless leucoanthocyanidins,
Leucoanthocyanidins once were believed to be the immediate precursors of the next enzyme, a dioxygenase referred to as anthocyanidin synthase, or, leucoanthocyanidin dioxygenase. Flavan-3-ols, the products of leucoanthocyanidin reductase (LAR), recently have been shown to be their true substrates,
The resulting unstable anthocyanidins are further coupled to sugar molecules by enzymes such as UDP-3-O-glucosyltransferase, to yield the final relatively-stable anthocyanins.
Thus, more than five enzymes are required to synthesize these pigments, each working in concert. Even a minor disruption in any of the mechanisms of these enzymes by either genetic or environmental factors, would halt anthocyanin production. While the biological burden of producing anthocyanins is relatively high, plants benefit significantly from the environmental adaptation, disease tolerance, and pest tolerance provided by anthocyanins.
In the anthocyanin biosynthetic pathway, L-phenylalanine is converted to naringenin by phenylalanine ammonia-lyase, cinnamate 4-hydroxylase, 4-coumarate-CoA ligase, chalcone synthase, and chalcone isomerase. Naringenin is then converted to the aglycone and finally to the anthocyanin by flavanone 3-hydroxylase, flavonoid 3'-hydroxylase, dihydroflavonol 4-reductase, anthocyanidin synthase, UDP-glucoside:flavonoid glucosyltransferase, and methyl transferase.
Genetic analysis
The phenolic metabolic pathways and enzymes may be studied by means of transgenesis of genes. The Arabidopsis regulatory gene in the production of anthocyanin pigment 1 (AtPAP1) may be expressed in other plant species.
Dye-sensitized solar cells
Anthocyanins have been used in organic solar cells because of their ability to convert light energy into electrical energy. The many benefits of using dye-sensitized solar cells instead of traditional p-n junction silicon cells include lower purity requirements, the abundance of component materials, and the fact that they may be produced on flexible substrates, making them amenable to roll-to-roll printing processes.
Visual markers
Anthocyanins fluoresce, providing a tool for plant cell research that allows live cell imaging without the need for other fluorophores. Anthocyanin production may be engineered into genetically modified materials to enable their identification visually.
See also
Phenolic compounds in wine
p-Coumaroylated anthocyanin
References
Further reading
External links
Anthocyanins FAQ MadSci Network
PH indicators
E-number additives
Biological pigments | Anthocyanin | [
"Chemistry",
"Materials_science",
"Biology"
] | 3,482 | [
"Titration",
"PH indicators",
"Chromism",
"Chemical tests",
"Equilibrium chemistry",
"Pigmentation",
"Biological pigments",
"Anthocyanins"
] |
18,952,779 | https://en.wikipedia.org/wiki/Spaser | A spaser or plasmonic laser is a type of laser which aims to confine light at a subwavelength scale far below Rayleigh's diffraction limit of light, by storing some of the light energy in electron oscillations called surface plasmon polaritons. The phenomenon was first described by David J. Bergman and Mark Stockman in 2003. The word spaser is an acronym for "surface plasmon amplification by stimulated emission of radiation". The first such devices were announced in 2009 by three groups: a 44-nanometer-diameter nanoparticle with a gold core surrounded by a dyed silica gain medium created by researchers from Purdue, Norfolk State and Cornell universities, a nanowire on a silver screen by a Berkeley group, and a semiconductor layer of 90 nm surrounded by silver pumped electrically by groups at the Eindhoven University of Technology and at Arizona State University. While the Purdue-Norfolk State-Cornell team demonstrated the confined plasmonic mode, the Berkeley team and the Eindhoven-Arizona State team demonstrated lasing in the so-called plasmonic gap mode. In 2018, a team from Northwestern University demonstrated a tunable nanolaser that can preserve its high mode quality by exploiting hybrid quadrupole plasmons as an optical feedback mechanism.
The spaser is a proposed nanoscale source of optical fields that is being investigated in a number of leading laboratories around the world. Spasers could find a wide range of applications, including nanoscale lithography, fabrication of ultra-fast photonic nano circuits, single-molecule biochemical sensing, and microscopy.
From Nature Photonics:
Study of the quantum mechanical model of the spaser suggests that it should be possible to manufacture a spasing device analogous in function to the MOSFET transistor, but this has not yet been experimentally verified.
See also
Nanolaser
Polariton laser
Surface-enhanced Raman spectroscopy
References
Further reading
Condensed matter physics
Laser types
Nanoelectronics
Plasmonics | Spaser | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 422 | [
"Plasmonics",
"Phases of matter",
"Materials science",
"Surface science",
"Condensed matter physics",
"Nanoelectronics",
"Nanotechnology",
"Solid state engineering",
"Matter"
] |
18,952,860 | https://en.wikipedia.org/wiki/Transpiration | Transpiration is the process of water movement through a plant and its evaporation from aerial parts, such as leaves, stems and flowers. It is a passive process that requires no energy expense by the plant. Transpiration also cools plants, changes osmotic pressure of cells, and enables mass flow of mineral nutrients. When water uptake by the roots is less than the water lost to the atmosphere by evaporation, plants close small pores called stomata to decrease water loss, which slows down nutrient uptake and decreases CO2 absorption from the atmosphere limiting metabolic processes, photosynthesis, and growth.
Water and nutrient uptake
Water is necessary for plants, but only a small amount of water taken up by the roots is used for growth and metabolism. The remaining 97–99.5% is lost by transpiration and guttation. Water with any dissolved mineral nutrients is absorbed into the roots by osmosis, which travels through the xylem by way of water molecule adhesion and cohesion to the foliage and out small pores called stomata (singular "stoma"). The stomata are bordered by guard cells and their stomatal accessory cells (together known as stomatal complex) that open and close the pore. The cohesion-tension theory explains how leaves pull water through the xylem. Water molecules stick together or exhibit cohesion. As a water molecule evaporates from the leaf's surface it pulls on the adjacent water molecule, creating a continuous water flow through the plant.
Two major factors influence the rate of water flow from the soil to the roots: the hydraulic conductivity of the soil and the magnitude of the pressure gradient through the soil. Both of these factors influence the rate of bulk flow of water moving from the roots to the stomatal pores in the leaves via the xylem. Mass flow of liquid water from the roots to the leaves is driven in part by capillary action, but primarily driven by water potential differences. If the water potential in the ambient air is lower than that in the leaf airspace of the stomatal pore, water vapor will travel down the gradient and move from the leaf airspace to the atmosphere. This movement lowers the water potential in the leaf airspace and causes evaporation of liquid water from the mesophyll cell walls. This evaporation increases the tension on the water menisci in the cell walls and decreases their radius, thus exerting tension in the cells' water. Because of the cohesive properties of water, the tension travels through the leaf cells to the leaf and stem xylem, where a momentary negative pressure is created as water is pulled up the xylem from the roots. In taller plants and trees, the force of gravity pulling the water inside can only be overcome by the decrease in hydrostatic pressure in the upper parts of the plants due to the diffusion of water out of stomata into the atmosphere.
Etymology
The word transpiration comes from trans, a Latin preposition meaning "across," and spiration, which comes from the Latin verb spīrāre, meaning "to breathe." The suffix -tion adds the sense of "the act of," giving the overall meaning "the act of breathing across."
Capillary action
Capillary action is the process of a liquid flowing in narrow spaces without the assistance of, or even in opposition to, external forces like gravity. The effect can be seen in the drawing up of liquids between the hairs of a paint-brush, in a thin tube, in porous materials such as paper and plaster, in some non-porous materials such as sand and liquefied carbon fiber, or in a biological cell. It occurs because of intermolecular forces between the liquid and surrounding solid surfaces. If the diameter of the tube is sufficiently small, then the combination of surface tension (which is caused by cohesion within the liquid) and adhesive forces between the liquid and container wall act to propel the liquid.
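As a rough illustration of the scale of this effect, Jurin's law h = 2γ·cos(θ)/(ρ·g·r) gives the equilibrium rise of water in a narrow tube; the radii below are illustrative, and the calculation shows why capillarity alone cannot account for water transport to the tops of tall trees:

```python
# Capillary rise of water from Jurin's law: h = 2*gamma*cos(theta) / (rho*g*r)
import math

gamma = 0.0728   # surface tension of water near 20 C, N/m
theta = 0.0      # contact angle in radians (perfect wetting assumed)
rho = 1000.0     # density of water, kg/m^3
g = 9.81         # gravitational acceleration, m/s^2

for r in (1e-3, 100e-6, 25e-6):   # tube radii in metres (illustrative, xylem-like sizes)
    h = 2 * gamma * math.cos(theta) / (rho * g * r)
    print(f"radius {r * 1e6:7.1f} um -> capillary rise {h:6.2f} m")
```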
Regulation
Plants regulate the rate of transpiration by controlling the size of the stomatal apertures. The rate of transpiration is also influenced by the evaporative demand of the atmosphere surrounding the leaf, such as boundary layer conductance, humidity, temperature, wind, and incident sunlight. Along with above-ground factors, soil temperature and moisture can influence stomatal opening, and thus transpiration rate. The amount of water lost by a plant also depends on its size and the amount of water absorbed at the roots. Factors that affect root absorption of water include the moisture content of the soil, excessive soil fertility or salt content, poorly developed root systems, and root systems damaged by pathogenic bacteria and fungi such as Pythium or Rhizoctonia.
During a growing season, a leaf will transpire many times more water than its own weight. An acre of corn gives off about 3,000–4,000 gallons (11,400–15,100 liters) of water each day, and a large oak tree can transpire 40,000 gallons (151,000 liters) per year. The transpiration ratio is the ratio of the mass of water transpired to the mass of dry matter produced; the transpiration ratio of crops tends to fall between 200 and 1000 (i.e., crop plants transpire 200 to 1000 kg of water for every kg of dry matter produced).
Transpiration rates of plants can be measured by a number of techniques, including potometers, lysimeters, porometers, photosynthesis systems and thermometric sap flow sensors. Isotope measurements indicate transpiration is the larger component of evapotranspiration. Recent evidence from a global study of water stable isotopes shows that transpired water is isotopically different from groundwater and streams. This suggests that soil water is not as well mixed as widely assumed.
Desert plants have specially adapted structures, such as thick cuticles, reduced leaf areas, sunken stomata and hairs to reduce transpiration and conserve water. Many cacti conduct photosynthesis in succulent stems, rather than leaves, so the surface area of the shoot is very low. Many desert plants have a special type of photosynthesis, termed crassulacean acid metabolism or CAM photosynthesis, in which the stomata are closed during the day and open at night when transpiration will be lower.
Cavitation
To maintain the pressure gradient necessary to remain healthy, a plant must continuously take up water with its roots to meet the demand created by water lost to transpiration. If a plant cannot bring in enough water to remain in equilibrium with transpiration, an event known as cavitation occurs: the plant can no longer supply its xylem with adequate water, so instead of being filled with water the xylem begins to fill with water vapor. These pockets of water vapor come together and form blockages within the xylem, preventing the plant from transporting water through its vascular system. There is no apparent pattern of where cavitation occurs throughout the plant's xylem. If not dealt with effectively, cavitation can cause a plant to reach its permanent wilting point and die. The plant must therefore have a way to remove the cavitation blockage, or it must create a new connection of vascular tissue throughout the plant. It does this by closing its stomata overnight, which halts transpiration; the roots can then generate over 0.05 MPa of pressure, which is capable of destroying the blockage and refilling the xylem with water, reconnecting the vascular system. If a plant is unable to generate enough pressure to eradicate the blockage, it must prevent the blockage from spreading with the use of pit pairs and then create new xylem that can re-connect its vascular system.
Scientists have begun using magnetic resonance imaging (MRI) to monitor the internal status of the xylem during transpiration in a non-invasive manner. This method of imaging allows scientists to visualize the movement of water throughout the entire plant. It can also reveal what phase the water is in while in the xylem, which makes it possible to visualize cavitation events. Scientists observed that, over the course of 20 hours of sunlight, more than 10 xylem vessels began filling with gas and became cavitated. MRI also made it possible to view the process by which these xylem structures are repaired in the plant. After three hours in darkness, the vascular tissue was seen to be resupplied with liquid water. This was possible because in darkness the stomata of the plant are closed and transpiration no longer occurs; once transpiration halts, the cavitation bubbles are destroyed by the pressure generated by the roots. These observations suggest that MRI can monitor the functional status of xylem and allowed scientists to view cavitation events for the first time.
Effects on the environment
Cooling
Transpiration cools plants, as the evaporating water carries away heat energy due to its large latent heat of vaporization of 2260 kJ per liter.
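Combining this latent heat with the transpiration volumes quoted earlier gives a feel for the cooling involved; the arithmetic below simply restates those figures as average power (a rough estimate, ignoring seasonal variation):

```python
# Average evaporative cooling power implied by the transpiration volumes quoted above,
# using a latent heat of vaporization of roughly 2260 kJ per litre of water.
LATENT_HEAT_KJ_PER_L = 2260.0

def cooling_power_kw(litres: float, seconds: float) -> float:
    """Average cooling power in kW for a volume transpired over a given period."""
    return litres * LATENT_HEAT_KJ_PER_L / seconds

corn_low = cooling_power_kw(11_400, 86_400)          # one acre of corn, per day
corn_high = cooling_power_kw(15_100, 86_400)
oak = cooling_power_kw(151_000, 365 * 86_400)        # large oak tree, per year

print(f"acre of corn: roughly {corn_low:.0f}-{corn_high:.0f} kW of average cooling")
print(f"large oak tree: roughly {oak:.1f} kW of average cooling")
```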
See also
Antitranspirant – a substance to prevent transpiration
Canopy conductance
Ecohydrology
Eddy covariance flux (aka eddy correlation, eddy flux)
Hydrology (agriculture)
Latent heat flux
Perspiration
Soil plant atmosphere continuum
Stomatal conductance
Transpiration stream
Turgor pressure
Water Evaluation And Planning system (WEAP)
References
External links
USGS The Water Cycle: Evapotranspiration
Hydrology
Plant physiology | Transpiration | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 2,003 | [
"Plant physiology",
"Hydrology",
"Plants",
"Environmental engineering"
] |
18,953,543 | https://en.wikipedia.org/wiki/Tabby%20concrete | Tabby is a type of concrete made by burning oyster shells to create lime, then mixing it with water, sand, ash and broken oyster shells. Tabby was used by early Spanish settlers in present-day Florida, then by British colonists primarily in coastal South Carolina and Georgia. It is a man-made analogue of coquina, a naturally-occurring sedimentary rock derived from shells and also used for building.
Revivals in the use of tabby spread northward and continued into the early 19th century. Tabby was normally protected with a coating of plaster or stucco.
Origin
"Tabby" or "tapia" derives from the Spanish tabique de ostión (literally, "adobe wall of oyster [shell]").
There is evidence that North African Moors brought a predecessor form of tabby to Spain when they invaded the peninsula, but there is also evidence that the Iberian use is earlier and that it spread from there south to Morocco. A form of tabby is used in Morocco today and some tabby structures survive in Spain, though in both instances the aggregate is granite, not oyster shells.
It is likely that Spanish explorers first brought tabby (which appears under several variant spellings in early documents) to the coast of Florida in the sixteenth century. Tapia is Spanish for 'mud wall'; other proposed source words include an Arabic term meaning 'a mixture of mortar and lime' and the African word tabi. In fact, the mortar used to chink the earliest cabins in this area was a mixture of mud and Spanish moss.
The oldest known example of tabby concrete in North America is the Spanish Fort San Antón de Carlos located on Mound Key in Florida.
Some researchers believe that English colonists developed their own process independently of the Spanish.
James Oglethorpe is credited with introducing "Oglethorpe tabby" into Georgia after seeing Spanish forts in Florida and encouraging its use, using it himself for his house near Fort Frederica. Later Thomas Spalding, who had grown up in Oglethorpe's house, led a tabby revival in the second quarter of the 19th century sometimes referred to as "Spalding tabby". Another revival occurred with the development of Jekyll Island in the 1880s.
Regions of use
Limestone to make building lime was not locally available to early settlers, so lime was imported or made from oyster shells. Shell middens along the coast were a supply of shells to make tabby, which diffused from two primary centers or hearths: one at Saint Augustine, Florida, and the other at Beaufort, South Carolina.
The British tradition began later (some time close to, but earlier than, 1700, upon introduction of the techniques from Spanish Florida) than the Spanish (1580), and spread far more widely as a building material, reaching at least as far north as Staten Island, New York, where it can be found in the still-standing Abraham Manee House, erected circa 1670. Beaufort, South Carolina, was both the primary center for British tabby and the location of the earliest British tabby in the southeastern US. It was here that the British tradition first developed, and from this hearth tabby eventually spread throughout the sea island district.
Herbert Eugene Bolton, John Tate Lanning, and other historians believed, from the mid-19th century into the middle of the 20th century, that tabby ruins in coastal Georgia and northeastern Florida were the remains of Spanish missions, even though local residents had earlier identified the ruins as those of late-18th century plantation buildings. The fact that the ruins were of structures built after the establishment of the Georgia Colony by Great Britain was not fully accepted by historians until late in the 20th century. With the exception of St. Augustine and, possibly, a few other important places, Spanish mission buildings were built with wooden posts supporting the roof and walls of palmetto thatch, wattle and daub or planks, or left open.
The LaPointe Krebs House, also known as the Old Spanish Fort (Pascagoula, Mississippi) is an extant tabby structure on the U.S. Gulf of Mexico. The house was constructed in 1757 in Louisiane, during the French Colonial period.
Tabby was used in the West Indies, including the islands of Antigua and Barbados.
Process
The labor-intensive process depended on slave labor to crush and burn the oyster shells into quicklime. The quicklime was then slaked (hydrated) and combined with more shells, sand, and water. It was poured or tamped into wood forms called cradles, built up in layers in a similar manner to rammed earth. Tabby was used in place of bricks, which could not be made locally because of the absence of local clay.
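The lime chemistry behind this process is the standard calcination–slaking–carbonation cycle; the equations below are the textbook reactions rather than anything specific to the historical accounts:

```latex
\mathrm{CaCO_3} \xrightarrow{\text{heat}} \mathrm{CaO} + \mathrm{CO_2}
    \quad \text{(burning the shells to quicklime)} \\
\mathrm{CaO} + \mathrm{H_2O} \rightarrow \mathrm{Ca(OH)_2}
    \quad \text{(slaking)} \\
\mathrm{Ca(OH)_2} + \mathrm{CO_2} \rightarrow \mathrm{CaCO_3} + \mathrm{H_2O}
    \quad \text{(carbonation as the tabby cures)}
```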
Tabby was used like concrete for floors, foundations, columns, roofs. Besides replacing bricks, it was also used as "oyster shell mortar" or "burnt shell mortar".
Significant examples
St. Simons Island Light, Georgia (foundation only)
Wormsloe Plantation house ruins, Isle of Hope, Georgia
McIntosh Sugarmill, Camden County, Georgia. Link to historical marker on McIntosh sugar works
Horton House, Jekyll Island, Georgia. There is a historical marker.
Slave quarters at Kingsley Plantation, Fort George Island, near Jacksonville, Florida
Bodiford Drug Store, Cedar Key, Florida
Heron Restaurant, Cedar Key, Florida
Island Hotel, Cedar Key, Florida
Isaac Fripp House Ruins, Saint Helena Island near Frogmore, South Carolina
Along Florida's west coast, Gamble Plantation Mansion, Ellenton
George Adderley House in Marathon, Florida
Colonial Dorchester State Historic Site near Charleston, South Carolina
Tabby Manse (Thomas Fuller House), Beaufort, South Carolina
The LaPointe Krebs House, also known as the Old Spanish Fort (Pascagoula, Mississippi)
See also
Bahareque
References
Further reading
External links
"Tabby: The Oyster Shell Concrete of the Lowcountry", Beaufort County, South Carolina Public Library.
Colin Brooker, "The Conservation and Repair of Tabby in Beaufort County, South Carolina", revised version of formal talk, "The Conservation of Tabby in Beaufort County, South Carolina," given at Jekyll Island Club Hotel, Jekyll Island, Georgia, on February 25, 1998.
Paper on Tabby by the Henry Ford Museum
Tabby historical marker at Jekyll Island
Building materials
Concrete
Tabby buildings | Tabby concrete | [
"Physics",
"Engineering"
] | 1,289 | [
"Structural engineering",
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Concrete",
"Matter",
"Architecture"
] |
18,958,074 | https://en.wikipedia.org/wiki/Closure%20%28business%29 | Business closure is the term used to refer to when a business ceases operations. While the term is often associated with the failure of a commercial enterprise, businesses may also close because the owners have sold it at a higher value than what they invested in it. Entrepreneurs may close a business for personal reasons, such as retirement or moving into full-time employment themselves.
Reasons for closure
Closure may be the result of a bankruptcy, where the organization lacks sufficient funds to continue operations, as a result of the proprietor of the business dying, as a result of a business being purchased by another organization (or a competitor) and shut down as superfluous, or because it is the non-surviving entity in a corporate merger. A closure may occur because the purpose for which the organization was created is no longer necessary.
While a closure is typically of a business or a non-profit organization, any entity which is created by human beings can be subject to a closure, from a single church to a whole religion, up to and including an entire country if, for some reason, it ceases to exist.
Closures are of two types, voluntary or involuntary. Voluntary closures of organizations are much rarer than involuntary ones, since, in the absence of some change making operations impossible or unnecessary, most organizations will continue operating until such a change occurs.
The most common form of voluntary closure would be when those involved in an organization such as a social club, a band, or other non-profit organization decide to cease operating. Once the organization has paid any outstanding debts and completed any pending operations, closure may simply mean that the organization ceases to exist.
If an organization has debts that cannot be paid, it may be necessary to perform a liquidation of its assets. If there is anything left after the assets are converted to cash, in the case of a for-profit organization, the remainder is distributed to the stockholders; in the case of a non-profit, by law any remaining assets must be distributed to another non-profit.
If an organization has more debts than assets, it may have to declare bankruptcy. If the organization is viable, it may reorganize itself as a result of the bankruptcy and continue operations. If it is not viable for the business to continue operating, then a closure occurs through a bankruptcy liquidation: its assets are liquidated, the creditors are paid from whatever assets could be liquidated, and the business ceases operations.
Examples
Possibly the largest "closure" in history (but more closely analogous to a demerger) was the split of the Soviet Union into its constituent countries. In comparison, the end of East Germany can be considered a merger rather than a closure as West Germany assumed all of the assets and liabilities of East Germany. The end of the Soviet Union was the equivalent of a closure through a bankruptcy liquidation, because while Russia assumed most of the assets and responsibilities of the former Soviet Union, it did not assume all of them. There have been issues over who is responsible for unpaid parking tickets accumulated by motor vehicles operated on behalf of diplomatic missions operated by the former Soviet Union in other countries, as Russia claims it is not responsible for them.
Several major business closures include the bankruptcy of the Penn Central railroad, the Enron scandals, and MCI Worldcom's bankruptcy and eventual merger into Verizon.
References
Business
Endings | Closure (business) | [
"Physics"
] | 682 | [
"Spacetime",
"Endings",
"Physical quantities",
"Time"
] |
2,777,780 | https://en.wikipedia.org/wiki/Optimal%20maintenance | Optimal maintenance is the discipline within operations research concerned with maintaining a system in a manner that maximizes profit or minimizes cost. Cost functions depending on the reliability, availability and maintainability characteristics of the system of interest determine the parameters to minimize. Parameters often considered are the cost of failure, the cost per time unit of "downtime" (for example: revenue losses), the cost (per time unit) of corrective maintenance, the cost per time unit of preventive maintenance and the cost of repairable system replacement [Cassady and Pohl]. The foundation of any maintenance model relies on the correct description of the underlying deterioration process and failure behavior of the component, and on the relationships between maintained components in the product breakdown (system / sub-system / assembly / sub-assembly...).
Optimal Maintenance strategies are often constructed using stochastic models and focus on finding an optimal inspection time or the optimal acceptable degree of system degradation before maintenance and/or replacement. Cost considerations on an Asset scale may also lead to select a "run-to-failure" approach for specific components.
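A minimal sketch of one classical formulation, the age-replacement policy: the component is replaced preventively at age T at cost c_p, or correctively on failure at cost c_f > c_p, and T is chosen to minimize the long-run cost rate C(T) = [c_p·R(T) + c_f·(1 − R(T))] / ∫₀ᵀ R(t) dt, where R is the survival function. The Weibull parameters and costs below are purely illustrative:

```python
# Age-replacement policy under a Weibull lifetime, optimized by a simple grid search.
# All numerical parameters are illustrative assumptions.
import numpy as np

beta, eta = 2.5, 1000.0   # Weibull shape and scale (hours), assumed
c_p, c_f = 1.0, 10.0      # preventive vs. corrective replacement cost, assumed

def survival(t):
    """Weibull survival function R(t)."""
    return np.exp(-(t / eta) ** beta)

def cost_rate(T, n=2000):
    """Long-run cost per unit time of replacing at age T (renewal-reward argument)."""
    t = np.linspace(0.0, T, n + 1)
    r = survival(t)
    dt = T / n
    expected_cycle_length = float(np.sum((r[:-1] + r[1:]) * 0.5) * dt)  # trapezoid rule
    expected_cycle_cost = c_p * survival(T) + c_f * (1.0 - survival(T))
    return expected_cycle_cost / expected_cycle_length

candidates = np.linspace(50.0, 3000.0, 500)
best_T = min(candidates, key=cost_rate)
print(f"optimal preventive replacement age ~ {best_T:.0f} h "
      f"(cost rate {cost_rate(best_T):.5f} per hour)")
```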
Four main survey papers cover the spectrum of optimal maintenance:
Y.S. Sherif, M.L. Smith, "Optimal maintenance models for systems subject to failure – a review", Naval Research Logistics Quarterly, 1981.
C. Valdez-Flores, R.M. Feldman, “A survey of preventive maintenance models for stochastically deteriorating single-unit systems”, Naval Research Logistics, vol 36, 1989 Aug, pp 419–446.
J.J. McCall, “Maintenance policies for stochastically failing equipment:a survey”, Management Science, vol 11, 1965 Mar, pp 493–524.
W.P. Pierskalla, J.A. Voelker, “A survey of maintenance models: The control and surveillance of deteriorating systems”, Naval Research Logistics Quarterly, vol 23, 1976 Sep, pp 353–388.
Operations research | Optimal maintenance | [
"Mathematics"
] | 407 | [
"Applied mathematics",
"Operations research"
] |
2,778,009 | https://en.wikipedia.org/wiki/Spin%20tensor | In mathematics, mathematical physics, and theoretical physics, the spin tensor is a quantity used to describe the rotational motion of particles in spacetime. The spin tensor has application in
general relativity and special relativity, as well as quantum mechanics, relativistic quantum mechanics, and quantum field theory.
The special Euclidean group SE(d) of direct isometries is generated by translations and rotations. Its Lie algebra is written 𝔰𝔢(d).
This article uses Cartesian coordinates and tensor index notation.
Background on Noether currents
The Noether current for translations in space is momentum, while the current for increments in time is energy. These two statements combine into one in spacetime: translations in spacetime, i.e. a displacement between two events, is generated by the four-momentum P. Conservation of four-momentum is given by the continuity equation:
∂_ν T^{μν} = 0,
where T^{μν} is the stress–energy tensor, and ∂ are partial derivatives that make up the four-gradient (in non-Cartesian coordinates this must be replaced by the covariant derivative). Integrating over space:
P^μ(t) = ∫ T^{μ0}(t, x) d³x
gives the four-momentum vector at time t.
The Noether current for a rotation about the point y is given by a tensor of 3rd order, denoted M_y^{μνλ}. Because of the Lie algebra relations
M_y^{μνλ}(x) = M_0^{μνλ}(x) − y^μ T^{νλ}(x) + y^ν T^{μλ}(x),
where the 0 subscript indicates the origin (unlike momentum, angular momentum depends on the origin), the integral:
M^{μν}(t) = ∫ M_0^{μν0}(t, x) d³x
gives the angular momentum tensor M^{μν} at time t.
Definition
The spin tensor is defined at a point x to be the value of the Noether current at x of a rotation about x,
S^{μνλ}(x) = M_x^{μνλ}(x) = M_0^{μνλ}(x) − x^μ T^{νλ}(x) + x^ν T^{μλ}(x).
The continuity equation
∂_λ M_0^{μνλ} = 0
implies:
∂_λ S^{μνλ} = T^{μν} − T^{νμ} ≠ 0
and therefore, the stress–energy tensor is not a symmetric tensor.
The quantity S is the density of spin angular momentum (spin in this case is not only for a point-like particle, but also for an extended body), and M is the density of orbital angular momentum. The total angular momentum is always the sum of spin and orbital contributions.
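Written out explicitly in the conventions used above (these are the standard textbook forms; the original article's index placement may differ), the rotation current about the origin splits into orbital and spin densities, and its time component integrates to the conserved total angular momentum:

```latex
M_0^{\mu\nu\lambda}(x) =
    \underbrace{x^{\mu} T^{\nu\lambda}(x) - x^{\nu} T^{\mu\lambda}(x)}_{\text{orbital density}}
    + \underbrace{S^{\mu\nu\lambda}(x)}_{\text{spin density}},
\qquad
M^{\mu\nu}(t) = \int \left( x^{\mu} T^{\nu 0} - x^{\nu} T^{\mu 0} + S^{\mu\nu 0} \right) \, d^{3}x .
```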
The relation:
τ^{μν} = T^{νμ} − T^{μν} = −∂_λ S^{μνλ}
gives the torque density τ^{μν}, showing the rate of conversion between the orbital angular momentum and spin.
Examples
Examples of materials with a nonzero spin density are molecular fluids, the electromagnetic field and turbulent fluids. For molecular fluids, the individual molecules may be spinning. The electromagnetic field can have circularly polarized light. For turbulent fluids, we may arbitrarily make a distinction between long wavelength phenomena and short wavelength phenomena. A long wavelength vorticity may be converted via turbulence into tinier and tinier vortices transporting the angular momentum into smaller and smaller wavelengths while simultaneously reducing the vorticity. This can be approximated by the eddy viscosity.
See also
Belinfante–Rosenfeld stress–energy tensor
Poincaré group
Lorentz group
Relativistic angular momentum
Mathisson–Papapetrou–Dixon equations
Pauli–Lubanski pseudovector
References
External links
Tensors
Special relativity
General relativity
Quantum mechanics
Quantum field theory
Lie groups | Spin tensor | [
"Physics",
"Mathematics",
"Engineering"
] | 588 | [
"Quantum field theory",
"Lie groups",
"Mathematical structures",
"Tensors",
"Theoretical physics",
"Quantum mechanics",
"General relativity",
"Special relativity",
"Algebraic structures",
"Theory of relativity"
] |
2,779,670 | https://en.wikipedia.org/wiki/Quintuple%20bond | A quintuple bond in chemistry is an unusual type of chemical bond, first reported in 2005 for a dichromium compound. Single bonds, double bonds, and triple bonds are commonplace in chemistry. Quadruple bonds are rarer and are currently known only among the transition metals, especially for Cr, Mo, W, and Re, e.g. [Mo2Cl8]4− and [Re2Cl8]2−. In a quintuple bond, ten electrons participate in bonding between the two metal centers, allocated as σ2π4δ4.
In some cases of high-order bonds between metal atoms, the metal-metal bonding is facilitated by ligands that link the two metal centers and reduce the interatomic distance. By contrast, the chromium dimer with quintuple bonding is stabilized by bulky terphenyl (2,6-[(2,6-diisopropyl)phenyl]phenyl) ligands. The species is stable up to 200 °C. The chromium–chromium quintuple bond has been analyzed with multireference ab initio and DFT methods, which were also used to elucidate the role of the terphenyl ligand, in which the flanking aryls were shown to interact very weakly with the chromium atoms, causing only a small weakening of the quintuple bond. A 2007 theoretical study identified two global minima for quintuple bonded RMMR compounds: a trans-bent molecular geometry and, surprisingly, another trans-bent geometry with the R substituent in a bridging position.
In 2005, a quintuple bond was postulated to exist in the hypothetical uranium molecule U2 based on computational chemistry. Diuranium compounds are rare, but they do exist, for example as molecular anions.
In 2007 the shortest-ever metal–metal bond (180.28 pm) was reported to exist also in a compound containing a quintuple chromium-chromium bond with diazadiene bridging ligands. Other metal–metal quintuple bond containing complexes that have been reported include quintuply bonded dichromium with [6-(2,4,6-triisopropylphenyl)pyridin-2-yl](2,4,6-trimethylphenyl)amine bridging ligands and a dichromium complex with amidinate bridging ligands.
Synthesis of quintuple bonds is usually achieved through reduction of a dimetal species using potassium graphite, which adds valence electrons to the metal centers and gives them the number of electrons needed to participate in quintuple bonding.
Dimolybdenum quintuple bonds
In 2009 a dimolybdenum compound with a quintuple bond and two diamido bridging ligands was reported with a Mo–Mo bond length of 202 pm. The compound was synthesised starting from potassium octachlorodimolybdate (which already contains a Mo2 quadruple bond) and a lithium amidinate, followed by reduction with potassium graphite.
Bonding
As stated above metal-metal quintuple bonds have a σ2π4δ4 configuration. Among the five bonds present between the metal centers, one is a sigma bond, two are pi bonds, and two are delta bonds. The σ-bond is the result of mixing between the dz2 orbital on each metal center. The first π-bond comes from mixing of the dyz orbitals from each metal while the other π-bond comes from the dxz orbitals on each metal mixing. Finally the δ-bonds come from mixing of the dxy orbitals as well as mixing between the dx2−y2 orbitals from each metal.
Molecular orbital calculations have elucidated the relative energies of the orbitals created by these bonding interactions. In a simplified molecular orbital diagram, the lowest energy orbitals are the π bonding orbitals, followed by the σ bonding orbital. The next highest are the δ bonding orbitals, which represent the HOMO. Because the 10 valence electrons of the metals are used to fill these first five orbitals, the next highest orbital, the δ* antibonding orbital, becomes the LUMO. Though the π and δ orbitals are represented as degenerate in such a diagram, they in fact are not, because the simple model neglects the hybridization of s, p, and d orbitals that is believed to take place, which shifts the orbital energy levels.
Ligand role in metal–metal quintuple bond length
Quintuple bond lengths are heavily dependent on the ligands bound to the metal centers. Nearly all complexes containing a metal–metal quintuple bond have bidentate bridging ligands, and even those that do not, such as the terphenyl complex mentioned earlier, have some bridging characteristic to it through metal–ipso-carbon interactions.
The bidentate ligand can act as a sort of tweezer: for chelation to occur, the metal atoms must move closer together, thereby shortening the quintuple bond length. There are two ways to obtain shorter metal–metal distances: either reduce the distance between the chelating atoms by changing the ligand's structure, or use steric effects to force a conformational change that bends the ligand in a way that pushes the chelating atoms closer together. An example of the latter is the bridging ligand used in the dimolybdenum complex described earlier.
When the carbon between the two nitrogens of this ligand has a hydrogen bound to it, the steric repulsion is small. However, when the hydrogen is replaced with a much bulkier phenyl ring, the steric repulsion increases dramatically and the ligand "bows", changing the orientation of the lone pairs of electrons on the nitrogen atoms. Because these lone pairs form the bonds to the metal centers, forcing them closer together also forces the metal centers closer together, decreasing the length of the quintuple bond. When this ligand is bound to quintuply bonded dimolybdenum, the quintuple bond length shortens from 201.87 pm to 201.57 pm upon replacing the hydrogen with a phenyl group. Similar results have been demonstrated in dichromium quintuple bond complexes as well.
Research trends
Efforts continue to prepare shorter quintuple bonds.
Quintuple-bonded dichromium complexes appear to act like magnesium to produce Grignard reagents.
References
See also
Chemical bonding | Quintuple bond | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,433 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
2,780,729 | https://en.wikipedia.org/wiki/ABINIT | ABINIT is an open-source suite of programs for materials science, distributed under the GNU General Public License. ABINIT implements density functional theory, using a plane wave basis set and pseudopotentials, to compute the electronic density and derived properties of materials ranging from molecules to surfaces to solids. It is developed collaboratively by researchers throughout the world.
A web-based easy-to-use graphical version, which includes access to a limited set of ABINIT's full functionality, is available for free use through the nanohub.
The latest version 9.10.3 was released on June 24, 2023.
Overview
ABINIT implements density functional theory by solving the Kohn–Sham equations describing the electrons in a material, expanded in a plane wave basis set and using a self-consistent conjugate gradient method to determine the energy minimum. Computational efficiency is achieved through the use of fast Fourier transforms, and pseudopotentials to describe core electrons. As an alternative to standard norm-conserving pseudopotentials, the projector augmented-wave method may be used. In addition to total energy, forces and stresses are also calculated so that geometry optimizations and ab initio molecular dynamics may be carried out. Materials that can be treated by ABINIT include insulators, metals, and magnetically ordered systems including Mott-Hubbard insulators.
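As a schematic illustration of the self-consistent solution of the Kohn–Sham equations described above — not of ABINIT's plane-wave, conjugate-gradient implementation — the following Python sketch iterates a toy one-dimensional single-particle problem, in which the effective potential depends on the density, until the density stops changing; the potential, grid, and mixing parameter are invented for illustration.

```python
# Toy self-consistent field (SCF) loop: dense diagonalization on a 1-D grid
# instead of plane waves, with a crude density-dependent mean-field term.
import numpy as np

n_grid, n_electrons, mixing = 200, 2, 0.3
x = np.linspace(-5.0, 5.0, n_grid)
dx = x[1] - x[0]
v_ext = 0.5 * x**2                                   # harmonic external potential (toy)
density = np.full(n_grid, n_electrons / (n_grid * dx))  # uniform initial guess

# Finite-difference kinetic operator, -1/2 d^2/dx^2
kinetic = (np.diag(np.full(n_grid, 1.0))
           - 0.5 * np.diag(np.ones(n_grid - 1), 1)
           - 0.5 * np.diag(np.ones(n_grid - 1), -1)) / dx**2

for step in range(200):
    v_eff = v_ext + density                          # density-dependent potential (toy)
    energies, orbitals = np.linalg.eigh(kinetic + np.diag(v_eff))
    occupied = orbitals[:, : n_electrons // 2]       # fill the lowest orbitals (2 e- each)
    new_density = 2.0 * np.sum(occupied**2, axis=1) / dx
    change = np.abs(new_density - density).max()
    density = (1.0 - mixing) * density + mixing * new_density  # linear mixing
    if change < 1e-6:
        break

print(f"SCF converged after {step + 1} iterations; lowest eigenvalue = {energies[0]:.4f}")
```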
Derived properties
In addition to computing the electronic ground state of materials, ABINIT implements density functional perturbation theory to compute response functions including
Phonons
Dielectric response
Born effective charges and IR oscillator strength tensor
Response to strain and elastic properties
Nonlinear responses, including piezoelectric response, Raman cross sections, and electro-optic response.
ABINIT can also compute excited state properties via
time-dependent density functional theory
many-body perturbation theory, using the GW approximation and Bethe–Salpeter equation.
See also
List of quantum chemistry and solid state physics software
References
External links
Graphical version (web-based) of ABINIT
Computational chemistry software
Density functional theory software
Free physics software | ABINIT | [
"Physics",
"Chemistry",
"Materials_science"
] | 430 | [
"Materials science stubs",
"Computational chemistry software",
"Chemistry software",
"Computational chemistry",
"Density functional theory software",
"Condensed matter physics",
"Condensed matter stubs"
] |
2,781,586 | https://en.wikipedia.org/wiki/Fluor%20Corporation | Fluor Corporation is an American engineering and construction firm, headquartered in Irving, Texas. It is a holding company that provides services through its subsidiaries in three main areas: oil and gas, industrial and infrastructure, government and power. It is the largest publicly traded engineering and construction company in the Fortune 500 rankings and is listed as 265th overall.
Fluor was founded in 1912 by John Simon Fluor as Fluor Construction Company. It grew quickly, predominantly by building oil refineries, pipelines, and other facilities for the oil and gas industry, at first in California, and then in the Middle East and globally. In the late 1960s, it began diversifying into oil drilling, coal mining and other raw materials like lead. A global recession in the oil and gas industry and losses from its mining operation led to restructuring and layoffs in the 1980s. Fluor sold its oil operations and diversified its construction work into a broader range of services and industries.
In the 1990s, Fluor introduced new services like equipment rentals and staffing. Nuclear waste cleanup projects and other environmental work became a significant portion of Fluor's revenues. The company also did projects related to the Manhattan Project, rebuilding after the Iraq War, recovering from Hurricane Katrina and building the Trans-Alaska Pipeline System.
Corporate history
Early history
Fluor Corporation's predecessor, Rudolph Fluor & Brother, was founded in 1890 by John Simon Fluor and his two brothers in Oshkosh, Wisconsin as a saw and paper mill. John Fluor acted as its president and contributed $100 in personal savings to help the business get started. The company was renamed Fluor Bros. Construction Co. in 1903.
In 1912 John Fluor moved to Santa Ana, California for health reasons without his brothers and founded Fluor Corporation out of his garage under the name Fluor Construction Company. By 1924 the business had annual revenues of $100,000 ($1.56 million in 2021 dollars) and a staff of 100 employees. John Fluor delegated most of the company's operations to his sons, Peter and Simon Fluor. A $100,000 capital investment was made that year and it was incorporated. John's eldest son Peter served as head of sales and grew the company to $1.5 million ($20.4 million in 2013 dollars) in revenues by 1929. In 1929 the company re-incorporated as Fluor Corporation. By the 1930s, Fluor had operations in Europe, the Middle East and Australia. Business declined rapidly during the Great Depression, but picked up again during World War II. During the war Fluor manufactured synthetic rubber and was responsible for a substantial portion of high-octane gasoline production in the United States. A Gas-Gasoline division of Fluor was created in Houston in 1948.
Fluor's headquarters were moved to Alhambra, an inner suburb of Los Angeles, in 1940 in order to be closer to its oil and gas clients, before moving again to Orange County, California in the 1960s due to concerns about the cost of living and traffic. John Simon Fluor died in 1944. He was succeeded by his son Peter Fluor, who died three years later. Peter was followed by Shirley Meserve (1947) and Donald Darnell (1949), then John Simon "Si" Fluor Jr. in 1952 and J. Robert (Bob) Fluor in 1962. Fluor was listed on the New York Stock Exchange in the 1950s. In 1961, Fluor acquired an interest in construction, design and contracting firm William J. Moran.
Diversification and restructuring
Fluor diversified its business more extensively in 1967, when five companies were merged into a division called Coral Drilling and it started a deep-water oil exploration business in Houston called Deep Oil Technology. It also created Fluor Ocean Services in Houston in 1968 and acquired an interest in other fossil fuel operations in the 1970s. Fluor acquired a construction company, Pike Corp. of America and the engineering division of its prior partner in Australia, Utah Construction. In 1972, Fluor bought land in Irvine, California and started building its new headquarters on it. The following year, the company's oil and gas operations were consolidated under a new entity, Fluor Oil and Gas Corp.
In 1977, Fluor acquired Daniel International Corporation. Fluor's business had become predominantly international, while Daniel International's $1 billion construction business was mostly domestic. The acquisition allowed the company to use union labor at Fluor, or non-union labor at Daniel, for each client. Fluor made a $2.9 billion acquisition of a zinc, gold, lead and coal mining operation, St. Joe Minerals, in 1981 after a bidding competition for the business with Seagram.
By the 1980s, Fluor's primary business was building large refineries, petrochemical plants, oil pipelines and other facilities for the gas and oil industry, especially in the Middle East. By 1981, Fluor's staff had grown to 29,000 and revenue, backlog, and profits had each increased more than 30 percent over the prior year. However, by 1984 the mining operation was causing heavy losses and the oil and gas industry Fluor served was in a worldwide recession due to declining oil prices. From 1981 to 1984, Fluor's backlog went from $16 billion to $4 billion. In 1985 it reported $633 million in losses. David Tappan took Bob Fluor's place as CEO in 1984 after Bob died from cancer and led a difficult restructuring.
The company sold $750 million in assets, including Fluor's headquarters in Irvine, in order to pay $1 billion in debt. Staff were reduced from 32,000 to 14,000. In 1986 Fluor sold all of its oil assets and some of its gold mining operations. Fluor Engineers, Inc. and Daniel International were merged, forming Fluor Daniel. By 1987, Fluor had returned to profitability with $26.6 million in profits and $108.5 million by 1989. By the end of the restructuring, Fluor had three major divisions: Fluor Daniel, Fluor Construction International and St. Joe Minerals Corp. Each division had its own smaller subsidiaries. Fluor started being named by Engineering News as the largest construction and engineering company in the United States. Fluor's international revenues rebounded. Having postponed his retirement to help Fluor, Tappan stepped down at the end of 1989 and was replaced by Leslie McCraw.
Recent history
During the restructuring, Fluor's core construction and engineering work was diversified into 30 industries including food, paper manufacturers, prisons and others to reduce its vulnerability to market changes in the oil and gas market. In the 1990s, the company tried to change its image, calling itself a "diversified technical services" firm. It started offering equipment rentals, staffing services, and financing for construction projects. The company began offering environmental cleanup and pollution control services, which grew to half of its new business by 1992. Fluor's mining business grew from $300 million in 1990 to $1 billion in 1994. The US government passed environmental regulations in 1995 that led to growth for the Massey Coal Co. business, because it had large reserves of low-sulfur coal. In 1992, Fluor sold its ownership of Doe Run Company, the world's largest producer of refined lead, which was losing money at the time due to declining lead prices. By 1993, Fluor had revenues of $4.17 billion and 22,000 staff.
In 1997, Fluor's revenues fell almost 50 percent, in part due to the Asian financial crisis and a decrease in overseas business. Additionally, it suffered losses from an over-budget power plant project in Rabigh, Saudi Arabia. Fluor was a sub-contractor to General Electric for the project. Fluor's subsidiaries sued GE alleging that it misrepresented the complexity of the project. Though revenues declined further the following year, profits were increasing. In 1999, nearly 5,000 workers were laid off from Fluor Daniel and 15 offices were closed. Fluor Daniel was re-structured into four business groups: an engineering and construction firm called Fluor Daniel; an equipment rental, staffing and telecommunications division called Fluor Global Services; a coal-mining business called A.T. Massey Coal Co. and an administrative and support division called Fluor Signature Services.
In January 1998, McCraw (age 63) resigned after being diagnosed with bladder cancer and was replaced by former Shell Oil President, Philip J. Carroll. That same year, IT Group purchased a 54 percent interest in Fluor Daniel GTI, Fluor's environmental division, for $36.3 million. Two years later, the coal mining operation under the A.T. Massey Coal Co. name (part of St. Joe) was spun off into its own business. In 2001, Fluor's four primary subsidiaries were consolidated into a single Fluor Corporation. In 2002 Alan Boeckmann was appointed as the CEO, followed by David Seaton in 2011. In 2005, Fluor's headquarters were moved to the Las Colinas area in Irving, Texas.
In December 2015, Fluor announced that it would take over Dutch industrial services company Stork. The acquisition of this company, which modifies and maintains large power plants, was completed in March 2016, in a stock purchase worth $755 million.
In May 2019, David Seaton stepped down as CEO and was replaced by Carlos Hernandez, who joined the firm in 2007.
Organization
Fluor was ranked 259th in Fortune 500 companies for the year 2022. It has offices in 25 countries. Many of Fluor's operations are located near natural resources, such as uranium in Canada, oil reserves in the Middle East and mines in Australia. About 30 percent of Fluor's revenues are based in the United States as of 2011.
Fluor received an "A" ranking in Transparency International's 2012 anti-corruption study. The company hosts online and in-person anti-corruption training sessions for staff and operates an ethics hotline. Former CEO Alan Boeckmann helped create the Partnering Against Corruption Initiative (PACI), whereby companies agree to a set of ethics principles. A MarketLine SWOT analysis said Fluor's environmental work "enhances the company's brand image," while often lengthy and unpredictable legal disputes "tarnish the company's brand image and will erode customer confidence."
It started the Fluor Foundation for its charitable work in 1952 and Fluor Cares in 2010. The company started the largest employer-sponsored apprenticeship program in California with a four-year program for designers in 1982. Fluor operates a virtual college for employees called Fluor University.
Services
Fluor is a holding company that provides services through its subsidiaries. Its subsidiaries provide engineering, procurement, construction, maintenance and project management services. The company has also developed pollution control products, such as the Econamine lineup of carbon capture products. Fluor's work includes designing and building power plants, petrochemical factories, mining facilities, roads and bridges, government buildings, and manufacturing facilities. The company also performs nuclear cleanup, and other services.
Separate teams of experts, procurement staff, project managers and workers are provided for large projects that are supported by a centralized administrative staff. Fluor has trained more than 100,000 craft workers in Indonesia, the Philippines, Korea, Pakistan, Kuwait and other countries, where the needed labor skills weren't available locally. It may also serve clients through a joint venture with another construction firm when a local infrastructure or niche expertise is needed.
Fluor acquired shares of Genentech Inc. in 1981, and it bought a 10 percent interest in a smelter and refinery facility in Gresik, Indonesia in 1995 for $550 million. In 1994, it invested $650 million with the Beacon Group Energy Investment fund to finance energy projects. Fluor also has a majority interest in NuScale LLC., which is developing a new type of 45-megawatt nuclear reactor called a small modular reactor (SMR).
Notable projects
Fluor's first projects were in constructing and grading roads, but by the 1920s it was known for building public facilities, industrial complexes and serving a growing California oil and gas industry. It started building office and meter manufacturing facilities for the Southern California Gas Company in 1915, as well as a compressor station for the Industrial Fuel Supply Company in 1919. Fluor built the first "Buddha Tower" in 1921 in Signal Hill, California, for the Industrial Fuel Supply Company. The Buddha Tower was a design of water-cooling tower named after the Buddha temples they resemble. The following year Fluor was awarded a contract by Richfield Oil to build a 10,000-gallon-per-day gasoline plant.
Against his father's wishes, Peter Fluor expanded Fluor's business outside of California in the 1930s. It built refineries in Texas, as well as oil pipelines and compressor stations from Panhandle, Texas, to Indianapolis, Indiana, for the Panhandle Eastern Pipeline Company. Fluor constructed the Escondida in Chile, which is the second-largest copper mine in the world. In 1942, Fluor constructed cooling towers and other facilities in Hanford, Washington, for the Manhattan Project. It built an expansion of the Dhahran Airfield in Saudi Arabia for the United States Army in the 1950s and accepted its first international project for ARAMCO in the Middle East.
In the 1960s and 1970s, Fluor built the first all-hydrogen refinery in Kuwait and the first exclusively offshore power plant for the Atlantic Richfield Company. It also constructed pumps and ports for the Trans-Alaska Pipeline System, which traversed 800 miles from northern Alaska to Valdez, Alaska, and the world's largest offshore facility for natural gas on the island of Java in Indonesia. In 1976, it was awarded a $5 billion project for ARAMCO in Saudi Arabia, to design facilities that capture sour gas, which is expelled from oil wells as waste, in order to refine it into fuel. That same year a partially completed copper and cobalt mine in Africa was cancelled due to a war in the neighboring region of Angola and declining copper prices. In 1979, Fluor had 13 projects for building United States power plants and had served more than half of the world's government-owned oil companies.
Fluor has been working on the cleanup and shutdown of atomic energy plants in Ohio and Washington since the 1990s. In 1992, Fluor won a contract with the United States Energy Department to clean up nuclear waste. By 1996 Hanford was the most contaminated nuclear site in the US and the US Department of Energy was conducting a $50 billion to $60 billion cleanup of the site. Fluor Hanford Inc. replaced Westinghouse Hanford Co. on the project. After a chemical explosion in 1997, 11 workers filed a lawsuit alleging they were denied appropriate medical attention and protective gear. Fluor and the workers disagreed on whether the explosion resulted in any injuries. In 2005 the US Department of Energy fined Fluor for safety violations and that same year a jury awarded $4.7 million in damages to eleven pipe fitters who claimed they were fired after complaining that a valve rated for 1,975 pounds per square inch (psi) was being used where a valve rated at 2,235 psi was needed.
Fluor built the Aladdin Hotel & Casino in Las Vegas in 2001 for $1.4 billion. In 2004, the company was awarded a $1.1 billion project with AMEC to help rebuild the water, power and civic infrastructure of Iraq after the Iraq War. Fluor has also built a rail line in Europe and missile sites in California and possibly Arizona.
The company provided disaster recovery services in Louisiana after Hurricane Katrina. In 2010 Fluor provided workers to clean up oil tar on beaches in Florida and Alabama after the Deepwater Horizon oil spill. In December 2012, Fluor was awarded a $3.14 billion contract to build a new Tappan Zee Bridge over the Hudson River.
See also
NuScale Power
Genentech
References
External links
Official Fluor Corporation website
Construction and civil engineering companies of the United States
Oilfield services companies
Petroleum engineering
Petroleum in Texas
Companies based in Irving, Texas
Non-renewable resource companies established in 1912
1912 establishments in California
Companies listed on the New York Stock Exchange
Companies in the S&P 400
Multinational companies headquartered in the United States
Construction and civil engineering companies established in 1912 | Fluor Corporation | [
"Engineering"
] | 3,350 | [
"Petroleum engineering",
"Energy engineering"
] |
2,782,914 | https://en.wikipedia.org/wiki/Prentice%20position | The Prentice position is an orientation of a prism, used in optics, optometry and ophthalmology. In this position, named after the optician Charles F. Prentice, the prism is oriented such that light enters it at an angle of 90° to the first surface, so that the beam does not refract at that surface. All the deviation caused by the prism takes place at the exit surface.
In ophthalmology, glass prisms were classically calibrated for use in the Prentice position, while plastic prisms were calibrated for use in the frontal position.
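As a rough numerical illustration of the geometry described above — normal incidence at the first surface, with all deviation occurring at the exit surface — the following Python sketch computes the deviation and the corresponding prism power; the apex angle and refractive index are assumed example values.

```python
# Deviation of a prism used in the Prentice position: the ray enters normal
# to the first face, so all refraction happens at the exit face.
import math

def prentice_deviation_deg(apex_angle_deg: float, n: float) -> float:
    a = math.radians(apex_angle_deg)      # angle of incidence at the exit face
    s = n * math.sin(a)                   # Snell's law: n sin(a) = sin(t)
    if s > 1.0:
        raise ValueError("total internal reflection at the exit face")
    t = math.asin(s)                      # refraction angle in air
    return math.degrees(t - a)            # deviation produced by the prism

deviation = prentice_deviation_deg(apex_angle_deg=10.0, n=1.5)   # assumed values
prism_dioptres = 100 * math.tan(math.radians(deviation))          # 1 Δ = 1 cm at 1 m
print(f"deviation ≈ {deviation:.2f} degrees, ≈ {prism_dioptres:.1f} prism dioptres")
```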
See also
Prism correction
Prentice's rule
References
Optics
Ophthalmology | Prentice position | [
"Physics",
"Chemistry"
] | 137 | [
"Applied and interdisciplinary physics",
"Optics",
" molecular",
"Atomic",
" and optical physics"
] |
2,783,119 | https://en.wikipedia.org/wiki/Rational%20mapping | In mathematics, in particular the subfield of algebraic geometry, a rational map or rational mapping is a kind of partial function between algebraic varieties. This article uses the convention that varieties are irreducible.
Definition
Formal definition
Formally, a rational map f : X ⇢ Y between two varieties X and Y is an equivalence class of pairs (fU, U) in which fU is a morphism of varieties from a non-empty open set U ⊆ X to Y, and two such pairs (fU, U) and (fU′, U′) are considered equivalent if fU and fU′ coincide on the intersection U ∩ U′ (this is, in particular, vacuously true if the intersection is empty, but since X is assumed irreducible, this is impossible). The proof that this defines an equivalence relation relies on the following lemma:
If two morphisms of varieties are equal on some non-empty open set, then they are equal.
A rational map f is said to be dominant if one (equivalently, every) representative fU in the equivalence class is a dominant morphism, i.e. has a dense image. f is said to be birational if there exists a rational map g : Y ⇢ X which is its inverse, where the composition is taken in the above sense.
The importance of rational maps to algebraic geometry is in the connection between such maps and maps between the function fields of and . By definition, a rational function is just a rational map whose range is the projective line. Composition of functions then allows us to "pull back" rational functions along a rational map, so that a single rational map induces a homomorphism of fields . In particular, the following theorem is central: the functor from the category of projective varieties with dominant rational maps (over a fixed base field, for example ) to the category of finitely generated field extensions of the base field with reverse inclusion of extensions as morphisms, which associates each variety to its function field and each map to the associated map of function fields, is an equivalence of categories.
Examples
Rational maps of projective spaces
There is a rational map P^2 ⇢ P^1 sending a ratio [x : y : z] to [x : y]. Since the point [0 : 0 : 1] cannot have an image, this map is only rational, and not a morphism of varieties. More generally, there are rational maps P^m ⇢ P^n for m > n sending an (m + 1)-tuple to an (n + 1)-tuple by forgetting the last coordinates.
Inclusions of open subvarieties
On a connected variety , the inclusion of any open subvariety is a birational equivalence since the two varieties have equivalent function fields. That is, every rational function can be restricted to a rational function and conversely, a rational function defines a rational equivalence class on . An excellent example of this phenomenon is the birational equivalence of and , hence .
Covering spaces on open subsets
Covering spaces on open subsets of a variety give ample examples of rational maps which are not birational. For example, Belyi's theorem states that every algebraic curve admits a map to P^1 which ramifies at three points. Then, there is an associated covering space which defines a dominant rational morphism which is not birational. Another class of examples comes from hyperelliptic curves, which are double covers of P^1 ramified at a finite number of points. A further class of examples is given by taking a hypersurface and restricting a rational map to it. This gives a ramified cover. For example, the cubic surface given by the vanishing locus has a rational map to sending . This rational map can be expressed as the degree field extension
Resolution of singularities
One of the canonical examples of a birational map is the resolution of singularities. Over a field of characteristic 0, every singular variety has an associated nonsingular variety with a birational map . This map has the property that it is an isomorphism on and the fiber over is a normal crossing divisor. For example, a nodal curve such as is birational to since topologically it is an elliptic curve with one of the circles contracted. Then, the birational map is given by normalization.
Birational equivalence
Two varieties are said to be birationally equivalent if there exists a birational map between them; this theorem states that birational equivalence of varieties is identical to isomorphism of their function fields as extensions of the base field. This is somewhat more liberal than the notion of isomorphism of varieties (which requires a globally defined morphism to witness the isomorphism, not merely a rational map), in that there exist varieties which are birational but not isomorphic.
The usual example is that is birational to the variety contained in consisting of the set of projective points such that , but not isomorphic. Indeed, any two lines in intersect, but the lines in defined by and cannot intersect since their intersection would have all coordinates zero. To compute the function field of we pass to an affine subset (which does not change the field, a manifestation of the fact that a rational map depends only on its behavior in any open subset of its domain) in which ; in projective space this means we may take and therefore identify this subset with the affine -plane. There, the coordinate ring of is
via the map . And the field of fractions of the latter is just , isomorphic to that of . Note that at no time did we actually produce a rational map, though tracing through the proof of the theorem it is possible to do so.
See also
Birational geometry
Blowing up
Function field of an algebraic variety
Resolution of singularities
Minimal model program
Log structure
References
, section I.4.
Algebraic geometry | Rational mapping | [
"Mathematics"
] | 1,093 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
2,783,368 | https://en.wikipedia.org/wiki/HEPnet | HEPnet or the High-Energy Physics Network is a telecommunications network for researchers in high-energy physics. It originated in the United States, but has spread to - or been connected with - most places involved in such research, including Japan, Europe, and Canada, amongst other places. Well-known mainland US sites include Argonne National Laboratory, Brookhaven National Laboratory and Lawrence Berkeley.
See also
Energy Sciences Network
References
External links
HEPnet site
Computational particle physics
United States Department of Energy
Academic computer network organizations
Lawrence Berkeley National Laboratory | HEPnet | [
"Physics"
] | 109 | [
"Particle physics stubs",
"Particle physics",
"Computational particle physics",
"Computational physics"
] |
2,783,513 | https://en.wikipedia.org/wiki/Complex%20dimension | In mathematics, complex dimension usually refers to the dimension of a complex manifold or a complex algebraic variety. These are spaces in which the local neighborhoods of points (or of non-singular points in the case of a variety) are modeled on a Cartesian product of the form for some , and the complex dimension is the exponent in this product. Because can in turn be modeled by , a space with complex dimension will have real dimension . That is, a smooth manifold of complex dimension has real dimension ; and a complex algebraic variety of complex dimension , away from any singular point, will also be a smooth manifold of real dimension .
However, for a real algebraic variety (that is a variety defined by equations with real coefficients), its dimension refers commonly to its complex dimension, and its real dimension refers to the maximum of the dimensions of the manifolds contained in the set of its real points. The real dimension is not greater than the dimension, and equals it if the variety is irreducible and has real points that are nonsingular.
For example, the equation x^2 + y^2 + z^2 = 0 defines a variety of (complex) dimension 2 (a surface), but of real dimension 0 — it has only one real point, (0, 0, 0), which is singular.
The same considerations apply to codimension. For example a smooth complex hypersurface in complex projective space of dimension n will be a manifold of dimension 2(n − 1). A complex hyperplane does not separate a complex projective space into two components, because it has real codimension 2.
References
Complex manifolds
Algebraic geometry
Dimension | Complex dimension | [
"Physics",
"Mathematics"
] | 324 | [
"Geometric measurement",
"Mathematical analysis",
"Algebraic geometry",
"Physical quantities",
"Mathematical analysis stubs",
"Fields of abstract algebra",
"Theory of relativity",
"Dimension"
] |
2,783,891 | https://en.wikipedia.org/wiki/Noetherian%20scheme | In algebraic geometry, a Noetherian scheme is a scheme that admits a finite covering by open affine subsets , where each is a Noetherian ring. More generally, a scheme is locally Noetherian if it is covered by spectra of Noetherian rings. Thus, a scheme is Noetherian if and only if it is locally Noetherian and compact. As with Noetherian rings, the concept is named after Emmy Noether.
It can be shown that, in a locally Noetherian scheme, if U = Spec A is an open affine subset, then A is a Noetherian ring; in particular, Spec A is a Noetherian scheme if and only if A is a Noetherian ring. For a locally Noetherian scheme X, the local rings OX,x are also Noetherian rings.
A Noetherian scheme is a Noetherian topological space. But the converse is false in general; consider, for example, the spectrum of a non-Noetherian valuation ring.
The definitions extend to formal schemes.
Properties and Noetherian hypotheses
Having a (locally) Noetherian hypothesis for a statement about schemes generally makes a lot of problems more accessible because they sufficiently rigidify many of its properties.
Dévissage
One of the most important structure theorems about Noetherian rings and Noetherian schemes is the dévissage theorem. This makes it possible to decompose arguments about coherent sheaves into inductive arguments. Given a short exact sequence of coherent sheaves
0 → E′ → E → E′′ → 0,
proving one of the sheaves has some property is equivalent to proving the other two have the property. In particular, given a fixed coherent sheaf E and a coherent subsheaf E′, showing E has some property can be reduced to looking at E′ and the quotient E/E′. Since this process can be non-trivially applied only a finite number of times, this makes many induction arguments possible.
Number of irreducible components
Every Noetherian scheme can only have finitely many components.
Morphisms from Noetherian schemes are quasi-compact
Every morphism from a Noetherian scheme is quasi-compact.
Homological properties
There are many nice homological properties of Noetherian schemes.
Čech and sheaf cohomology
Čech cohomology and sheaf cohomology agree on an affine open cover. This makes it possible to compute the sheaf cohomology of using Čech cohomology for the standard open cover.
Compatibility of colimits with cohomology
Given a direct system of sheaves of abelian groups on a Noetherian scheme X, there is a canonical isomorphism
colim_j H^i(X, F_j) ≅ H^i(X, colim_j F_j),
meaning the functors H^i(X, −) preserve direct limits and coproducts.
Derived direct image
Given a locally finite type morphism to a Noetherian scheme and a complex of sheaves with bounded coherent cohomology such that the sheaves have proper support over , then the derived pushforward has bounded coherent cohomology over , meaning it is an object in .
Examples
Most schemes of interest are Noetherian schemes.
Locally of finite type over a Noetherian base
Another class of examples of Noetherian schemes are families of schemes where the base is Noetherian and is of finite type over . This includes many examples, such as the connected components of a Hilbert scheme, i.e. with a fixed Hilbert polynomial. This is important because it implies many moduli spaces encountered in the wild are Noetherian, such as the Moduli of algebraic curves and Moduli of stable vector bundles. Also, this property can be used to show many schemes considered in algebraic geometry are in fact Noetherian.
Quasi-projective varieties
In particular, quasi-projective varieties are Noetherian schemes. This class includes algebraic curves, elliptic curves, abelian varieties, Calabi–Yau schemes, Shimura varieties, K3 surfaces, and cubic surfaces. Essentially all of the objects from classical algebraic geometry fit into this class of examples.
Infinitesimal deformations of Noetherian schemes
In particular, infinitesimal deformations of Noetherian schemes are again Noetherian. For example, given a curve , any deformation is also a Noetherian scheme. A tower of such deformations can be used to construct formal Noetherian schemes.
Non-examples
Schemes over Adelic bases
One of the natural rings which is non-Noetherian is the ring of adeles of an algebraic number field K. In order to deal with such rings, a topology is considered, giving topological rings. There is a notion of algebraic geometry over such rings developed by Weil and Alexander Grothendieck.
Rings of integers over infinite extensions
Given an infinite Galois field extension, such as the extension of the rationals obtained by adjoining all roots of unity, the ring of integers is a non-Noetherian ring of dimension one. This breaks the intuition that finite-dimensional schemes are necessarily Noetherian. Also, this example provides motivation for why studying schemes over a non-Noetherian base can be an interesting and fruitful subject.
One special case of such an extension is taking the maximal unramified extension and considering its ring of integers. The induced morphism forms the universal covering.
Polynomial ring with infinitely many generators
Another example of a non-Noetherian finite-dimensional scheme (in fact zero-dimensional) is given by the following quotient of a polynomial ring with infinitely many generators.
See also
Excellent ring - slightly more rigid than Noetherian rings, but with better properties
Chevalley's theorem on constructible sets
Zariski's main theorem
Dualizing complex
Nagata's compactification theorem
References
Algebraic geometry | Noetherian scheme | [
"Mathematics"
] | 1,151 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
22,714,557 | https://en.wikipedia.org/wiki/Diffusion%20current | Diffusion current is a current in a semiconductor caused by the diffusion of charge carriers (electrons and/or electron holes). This is the current which is due to the transport of charges occurring because of non-uniform concentration of charged particles in a semiconductor. The drift current, by contrast, is due to the motion of charge carriers due to the force exerted on them by an electric field. Diffusion current can be in the same or opposite direction of a drift current. The diffusion current and drift current together are described by the drift–diffusion equation.
It is necessary to consider the part of diffusion current when describing many semiconductor devices. For example, the current near the depletion region of a p–n junction is dominated by the diffusion current. Inside the depletion region, both diffusion current and drift current are present. At equilibrium in a p–n junction, the forward diffusion current in the depletion region is balanced with a reverse drift current, so that the net current is zero.
The diffusion constant for a doped material can be determined with the Haynes–Shockley experiment. Alternatively, if the carrier mobility is known, the diffusion coefficient may be determined from the Einstein relation on electrical mobility.
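As a small illustration of the Einstein relation mentioned above, the following Python sketch converts an assumed electron mobility (a typical figure for lightly doped silicon at room temperature) into a diffusion coefficient.

```python
# Diffusion coefficient from carrier mobility via the Einstein relation
# D = mu * kB * T / q.  The mobility is an assumed, typical value.
kB = 1.380649e-23      # J/K
q = 1.602176634e-19    # C
T = 300.0              # K
mu_n = 0.14            # m^2/(V*s), roughly 1400 cm^2/(V*s) (assumed)

D_n = mu_n * kB * T / q
print(f"D_n ≈ {D_n * 1e4:.1f} cm^2/s")   # about 36 cm^2/s at 300 K
```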
Overview
Diffusion current versus drift current
The following table compares the two types of current:
{| class="wikitable"
|-
! scope="col" width="450px" | Diffusion current
! scope="col" width="450px" | Drift current
|-
| Diffusion current = the movement caused by variation in the carrier concentration.
| Drift current = the movement caused by electric fields.
|-
| Direction of the diffusion current depends on the slope of the carrier concentration.
| Direction of the drift current is always in the direction of the electric field.
|-
| Obeys Fick's law:
| Obeys Ohm's law:
|}
Carrier actions
No external electric field across the semiconductor is required for a diffusion current to take place. This is because diffusion takes place due to the change in concentration of the carrier particles and not the concentrations themselves. The carrier particles, namely the holes and electrons of a semiconductor, move from a place of higher concentration to a place of lower concentration. Hence, due to the flow of holes and electrons there is a current. This current is called the diffusion current. The drift current and the diffusion current make up the total current in the conductor. The change in the concentration of the carrier particles develops a gradient. Due to this gradient, an electric field is produced in the semiconductor.
Derivation
In a region where n and p vary with distance, a diffusion current is superimposed on that due to conductivity. This diffusion current is governed by Fick's law:
F = −De ∇n
where:
F is flux.
De is the diffusion coefficient or diffusivity
∇n is the concentration gradient of electrons
there is a minus sign because the direction of diffusion is opposite to that of the concentration gradient
The diffusion coefficient for a charge carrier is related to its mobility by the Einstein relation:
De = μe kB T / e
where:
kB is the Boltzmann constant
T is the absolute temperature
e is the electrical charge of an electron
Now let's focus on the diffusive current in one-dimension along the x-axis:
F = −De dn/dx
The electron current density Je is related to flux, F, by:
Je = −e F
Thus
Je,diff = e De dn/dx
Similarly for holes:
Jh,diff = −e Dh dp/dx
Notice that for electrons the diffusive current is in the same direction as the electron density gradient because the minus sign from the negative charge and Fick's law cancel each other out. However, holes have positive charges and therefore the minus sign from Fick's law is carried over.
Superimpose the diffusive current on the drift current to get
Je = e μe n E + e De dn/dx
for electrons
and
Jh = e μh p E − e Dh dp/dx
for holes
Consider electrons in a constant electric field E. Electrons will flow (i.e. there is a drift current) until the density gradient builds up enough for the diffusion current to exactly balance the drift current. So at equilibrium there is no net current flow:
0 = e μe n E + e De dn/dx
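As a numerical check of this equilibrium condition, the following Python sketch compares drift and diffusion currents for a Boltzmann-like carrier profile; the field strength, mobility, and profile are assumed example values.

```python
# For n(x) = n0 * exp(-e*E*x / (kB*T)) the drift and diffusion currents
# cancel when D_n and mu_n obey the Einstein relation.  Assumed values.
import numpy as np

kB, e = 1.380649e-23, 1.602176634e-19
T, E_field, mu_n = 300.0, 1e4, 0.14              # K, V/m, m^2/(V*s) (assumed)
D_n = mu_n * kB * T / e                          # Einstein relation

x = np.linspace(0.0, 1e-7, 400)                  # m
n = 1e21 * np.exp(-e * E_field * x / (kB * T))   # electrons per m^3 (assumed profile)

J_drift = e * mu_n * n * E_field                 # A/m^2
J_diff = e * D_n * np.gradient(n, x)             # A/m^2
print("max |J_drift + J_diff| / max |J_drift| =",
      np.max(np.abs(J_drift + J_diff)) / np.max(np.abs(J_drift)))
```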
Example
To derive the diffusion current in a semiconductor diode, the depletion layer must be large compared to the mean free path.
One begins with the equation for the net current density J in a semiconductor diode,
where D is the diffusion coefficient for the electron in the considered medium, n is the number of electrons per unit volume (i.e. number density), q is the magnitude of charge of an electron, μ is electron mobility in the medium, and E = −dΦ/dx (Φ potential difference) is the electric field as the potential gradient of the electric potential. According to the Einstein relation on electrical mobility and . Thus, substituting E for the potential gradient in the above equation () and multiplying both sides with exp(−Φ/Vt), () becomes:
Integrating equation () over the depletion region gives
which can be written as
where
The denominator in equation () can be solved by using the following equation:
Therefore, Φ* can be written as:
Since the x ≪ xd, the term (xd − x/2) ≈ xd, using this approximation equation () is solved as follows:
,
since (Φi − Va) > Vt. One obtains the equation of current caused due to diffusion:
From equation (), one can observe that the current depends exponentially on the input voltage Va, also the barrier height ΦB. From equation (), Va can be written as the function of electric field intensity, which is as follows:
Substituting equation () in equation () gives:
From equation (), one can observe that when a zero voltage is applied to the semi-conductor diode, the drift current totally balances the diffusion current. Hence, the net current in a semiconductor diode at zero potential is always zero.
The equation above can be applied to model semiconductor devices. When the density of electrons is not in equilibrium, diffusion of electrons will occur. For example, when a bias is applied to two ends of a chunk of semiconductor, or a light is shining in one place (see right figure), electrons will diffuse from high density regions (center) to low density regions (two ends), forming a gradient of electron density. This process generates diffusion current.
See also
Alternating current
Conduction band
Convection–diffusion equation
Direct current
Drift current
Free electron model
Random walk
Maximal entropy random walk – diffusion in agreement with quantum predictions
References
Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC publishers, 1991, ISBN (Verlagsgesellschaft) 3-527-26954-1, ISBN (VHC Inc.) 0-89573-752-3
Concepts of Modern Physics (4th Edition), A. Beiser, Physics, McGraw-Hill (International), 1987,
Solid State Physics (2nd Edition), J.R. Hook, H.E. Hall, Manchester Physics Series, John Wiley & Sons, 2010,
Ben G. Streetman, Santay Kumar Banerjee; Solid State Electronic Devices (6th Edition), Pearson International Edition; pp. 126–135.
Semiconductors
Electric current
Charge carriers | Diffusion current | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,466 | [
"Electrical resistance and conductance",
"Physical phenomena",
"Physical quantities",
"Charge carriers",
"Semiconductors",
"Materials",
"Electrical phenomena",
"Electronic engineering",
"Condensed matter physics",
"Electric current",
"Wikipedia categories named after physical quantities",
"Sol... |
22,717,850 | https://en.wikipedia.org/wiki/Molecular%20Ecology%20%28journal%29 | Molecular Ecology is a twice monthly scientific journal covering investigations that use molecular techniques to address questions in ecology, evolution, behavior, and conservation. It is published by Wiley-Blackwell. Its 2022 impact factor is 4.9. It was established in 1992 with Harry Smith as founding editor-in-chief, while Loren Rieseberg is the current editor.
See also
Molecular Ecology Resources
References
External links
Ecology journals
English-language journals
Wiley-Blackwell academic journals
Semi-monthly journals
Academic journals established in 1992
Molecular ecology | Molecular Ecology (journal) | [
"Chemistry",
"Environmental_science"
] | 103 | [
"Ecology journals",
"Molecular ecology",
"Environmental science journals",
"Molecular biology",
"Environmental science journal stubs"
] |
22,720,110 | https://en.wikipedia.org/wiki/Taylor%20microscale | In fluid dynamics, the Taylor microscale, which is sometimes called the turbulence length scale, is a length scale used to characterize a turbulent fluid flow. This microscale is named after Geoffrey Ingram Taylor. The Taylor microscale is the intermediate length scale at which fluid viscosity significantly affects the dynamics of turbulent eddies in the flow. This length scale is traditionally applied to turbulent flow which can be characterized by a Kolmogorov spectrum of velocity fluctuations. In such a flow, length scales which are larger than the Taylor microscale are not strongly affected by viscosity. These larger length scales in the flow are generally referred to as the inertial range. Below the Taylor microscale the turbulent motions are subject to strong viscous forces and kinetic energy is dissipated into heat. These shorter length scale motions are generally termed the dissipation range.
Calculation of the Taylor microscale is not entirely straightforward, requiring formation of certain flow correlation function(s), then expanding in a Taylor series and using the first non-zero term to characterize an osculating parabola. The Taylor microscale is proportional to Re^(−1/2), while the Kolmogorov microscale is proportional to Re^(−3/4), where Re is the integral scale Reynolds number. A turbulence Reynolds number calculated based on the Taylor microscale is given by
Re_λ = v′_rms λ / ν
where v′_rms is the root mean square of the velocity fluctuations.
The Taylor microscale is given as
λ = (15 ν / ε)^(1/2) v′_rms
where ν is the kinematic viscosity, and ε is the rate of energy dissipation. A relation with turbulence kinetic energy k can be derived as
λ ≈ (10 ν k / ε)^(1/2)
The Taylor microscale gives a convenient estimation for the fluctuating strain rate field:
⟨(∂v′/∂x)^2⟩ = ⟨v′^2⟩ / λ^2
Other relations
The Taylor microscale falls in between the large-scale eddies and the small-scale eddies, which can be seen by calculating the ratios between and the Kolmogorov microscale . Given the length scale of the larger eddies , and the turbulence Reynolds number referred to these eddies, the following relations can be obtained:
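As a rough numerical illustration of the relations above, the following Python sketch evaluates the Taylor microscale, the associated Reynolds number, and the Kolmogorov scale; the viscosity, dissipation rate, and rms velocity fluctuation are assumed example values.

```python
# Taylor microscale, Taylor-scale Reynolds number, and Kolmogorov scale
# from assumed flow statistics.
import math

nu = 1.5e-5        # kinematic viscosity of air, m^2/s
eps = 1.0          # turbulent dissipation rate, m^2/s^3 (assumed)
v_rms = 0.5        # rms velocity fluctuation, m/s (assumed)

lam = math.sqrt(15.0 * nu / eps) * v_rms     # Taylor microscale
re_lambda = v_rms * lam / nu                 # Taylor-scale Reynolds number
eta = (nu**3 / eps) ** 0.25                  # Kolmogorov length scale for comparison

print(f"lambda = {lam * 1e3:.2f} mm, Re_lambda = {re_lambda:.0f}, eta = {eta * 1e6:.0f} um")
```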
Notes
References
Fluid dynamics
Turbulence | Taylor microscale | [
"Chemistry",
"Engineering"
] | 414 | [
"Piping",
"Chemical engineering",
"Turbulence",
"Fluid dynamics"
] |
22,721,042 | https://en.wikipedia.org/wiki/Convex%20volume%20approximation | In the analysis of algorithms, several authors have studied the computation of the volume of high-dimensional convex bodies, a problem that can also be used to model many other problems in combinatorial enumeration.
Often these works use a black box model of computation in which the input is given by a subroutine for testing whether a point is inside or outside of the convex body, rather than by an explicit listing of the vertices or faces of a convex polytope.
It is known that, in this model, no deterministic algorithm can achieve an accurate approximation, and even for an explicit listing of faces or vertices the problem is #P-hard.
However, a joint work by Martin Dyer, Alan M. Frieze and Ravindran Kannan provided a randomized polynomial time approximation scheme for the problem,
providing a sharp contrast between the capabilities of randomized and deterministic algorithms.
The main result of the paper is a randomized algorithm for finding an approximation to the volume of a convex body K in n-dimensional Euclidean space by assuming the existence of a membership oracle. The algorithm takes time bounded by a polynomial in n, the dimension of K, and 1/ε, where ε is the relative error of the approximation.
The algorithm combines two ideas:
By using a Markov chain Monte Carlo (MCMC) method, it is possible to generate points that are nearly uniformly randomly distributed within a given convex body. The basic scheme of the algorithm is a nearly uniform sampling from within the body by placing a grid consisting of n-dimensional cubes and doing a random walk over these cubes. By using the theory of rapidly mixing Markov chains, they show that it takes a polynomial time for the random walk to settle down to being a nearly uniform distribution.
By using rejection sampling, it is possible to compare the volumes of two convex bodies, one nested within another, when their volumes are within a small factor of each other. The basic idea is to generate random points within the outer of the two bodies, and to count how often those points are also within the inner body.
The given convex body can be approximated by a sequence of nested bodies, eventually reaching one of known volume (a hypersphere), with this approach used to estimate the factor by which the volume changes at each step of this sequence. Multiplying these factors gives the approximate volume of the original body.
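The following Python sketch illustrates only the ratio-chaining idea in low dimension; it replaces the rapidly mixing grid walk with naive rejection sampling inside bounding balls, so it is not the polynomial-time algorithm itself, and the example body (a cross-polytope), radii, and sample counts are assumptions.

```python
# Estimate vol(K) by chaining ratios vol(K ∩ B(r_i-1)) / vol(K ∩ B(r_i))
# for a geometric sequence of ball radii, starting from a small ball known
# to lie inside K.  Uniform sampling is done by naive rejection (low dim only).
import math
import random

def in_body(p):                              # membership oracle for an example body:
    return sum(abs(c) for c in p) <= 1.0     # the cross-polytope |x|_1 <= 1 (assumed)

def sample_ball(d, r):                       # uniform point in the ball of radius r
    g = [random.gauss(0, 1) for _ in range(d)]
    norm = math.sqrt(sum(c * c for c in g))
    rad = r * random.random() ** (1.0 / d)
    return [rad * c / norm for c in g]

def ball_volume(d, r):
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1) * r ** d

def estimate_volume(d=3, r_in=0.5, r_out=1.0, samples=20000):
    radii, r = [r_in], r_in
    while r < r_out:                         # geometric sequence of radii
        r = min(r * 1.25, r_out)
        radii.append(r)
    volume = ball_volume(d, radii[0])        # B(r_in) is assumed to lie inside the body
    for r_small, r_big in zip(radii, radii[1:]):
        hits_small = hits_big = 0
        while hits_big < samples:
            p = sample_ball(d, r_big)
            if in_body(p):                   # uniform sample from K ∩ B(r_big)
                hits_big += 1
                if sum(c * c for c in p) <= r_small ** 2:
                    hits_small += 1
        volume /= hits_small / hits_big      # vol(K_i) = vol(K_i-1) / ratio
    return volume

print(estimate_volume())   # exact volume of the 3-D cross-polytope is 4/3
```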
This work earned its authors the 1991 Fulkerson Prize.
Improvements
Although the time for this algorithm is polynomial, it has a high exponent.
Subsequent authors improved the running time of this method by providing more quickly mixing Markov chains for the same problem.
Generalizations
The polynomial-time approximability result has been generalized to more complex structures such as the union and intersection of objects. This relates to Klee's measure problem.
References
Computational geometry
Approximation algorithms | Convex volume approximation | [
"Mathematics"
] | 558 | [
"Computational mathematics",
"Approximation algorithms",
"Computational geometry",
"Mathematical relations",
"Approximations"
] |
22,721,223 | https://en.wikipedia.org/wiki/CGh%20physics | cGh physics refers to the historical attempts in physics to unify relativity, gravitation, and quantum mechanics, in particular following the ideas of Matvei Petrovich Bronstein and George Gamow. The letters are the standard symbols for the speed of light (), the gravitational constant (), and the Planck constant ().
If one considers these three universal constants as the basis for a 3-D coordinate system and envisions a cube, then this pedagogic construction provides a framework, which is referred to as the cGh cube, or physics cube, or cube of theoretical physics (CTP). This cube can be used for organizing major subjects within physics as occupying each of the eight corners. The eight corners of the cGh physics cube are:
Classical mechanics (_, _, _)
Special relativity (, _, _), gravitation (_, , _), quantum mechanics (_, _, )
General relativity (, , _), quantum field theory (, _, ), non-relativistic quantum theory with gravity (_, , )
Theory of everything, or relativistic quantum gravity (, , )
Other cGh physics topics include Hawking radiation and black-hole thermodynamics.
While there are several other physical constants, these three are given special consideration because they can be used to define all Planck units and thus all physical quantities. The three constants are therefore sometimes used as a framework for philosophical study and as a pedagogical device.
Overview
Before the first successful estimate of the speed of light in 1676, it was not known whether light was transmitted instantaneously or not. Because of the tremendously large value of the speed of light—c (i.e. 299,792,458 metres per second in vacuum)—compared to the range of human perceptual response and visual processing, the propagation of light is normally perceived as instantaneous. Hence, the ratio 1/c is sufficiently close to zero that all subsequent differences of calculations in relativistic mechanics are similarly 'invisible' relative to human perception. However, at speeds comparable to the speed of light (c), Lorentz transformation (as per special relativity) produces substantially different results which agree more accurately with (sufficiently precise) experimental measurement. Non-relativistic theory can then be derived by taking the limit as the speed of light tends to infinity—i.e. ignoring terms (in the Taylor expansion) with a factor of 1/c—producing a first-order approximation of the formulae.
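As a small symbolic check of this limit, the following sketch (using the sympy library, an assumed dependency) expands the relativistic kinetic energy in powers of 1/c; dropping those terms leaves the Newtonian expression.

```python
# Expand (gamma - 1) m c^2 in powers of 1/c.  The variable inv_c plays the
# role of 1/c so that the non-relativistic limit is the series about 0.
import sympy as sp

v, m, inv_c = sp.symbols("v m inv_c", positive=True)
gamma = 1 / sp.sqrt(1 - v**2 * inv_c**2)        # Lorentz factor
kinetic = (gamma - 1) * m / inv_c**2            # (gamma - 1) * m * c^2

print(sp.series(kinetic, inv_c, 0, 4))
# expected leading terms: m*v**2/2 + 3*m*v**4*inv_c**2/8 + O(inv_c**4)
```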
The gravitational constant (G) is irrelevant for a system where gravitational forces are negligible. For example, the special theory of relativity is the special case of general relativity in the limit G → 0.
Similarly, in the theories where the effects of quantum mechanics are irrelevant, the value of Planck constant (h) can be neglected. For example, setting h → 0 in the commutation relation of quantum mechanics, the uncertainty in the simultaneous measurement of two conjugate variables tends to zero, approximating quantum mechanics with classical mechanics.
In popular culture
George Gamow chose "C. G. H." as the initials of his fictitious character, Mr C. G. H. Tompkins.
References
Theoretical physics | CGh physics | [
"Physics"
] | 680 | [
"Theoretical physics"
] |
22,721,456 | https://en.wikipedia.org/wiki/Computational%20sustainability | Computational sustainability is an emerging field that attempts to balance societal, economic, and environmental resources for the future well-being of humanity using methods from mathematics, computer science, and information science fields. Sustainability in this context refers to the world's ability to sustain biological, social, and environmental systems in the long term.
Using the power of computers to process large quantities of information, decision making algorithms allocate resources based on real-time information. Applications advanced by this field are widespread across various areas. For example, artificial intelligence and machine learning techniques are created to promote long-term biodiversity conservation and species protection. Smart grids implement renewable resources and storage capabilities to control the production and expenditure of energy. Intelligent transportation system technologies can analyze road conditions and relay information to drivers so they can make smarter, more environmentally-beneficial decisions based on real-time traffic information.
History and motivations
The field of computational sustainability has been motivated by Our Common Future, a 1987 report from the World Commission on Environment and Development about the future of humanity. More recently, computational sustainability research has also been driven by the United Nation's sustainable development goals, a set of 17 goals for the sustainability of human economic, social, and environmental well-being world-wide. Researchers in computational sustainability have primarily focused on addressing problems in areas related to the environment (e.g., biodiversity conservation), sustainable energy infrastructure and natural resources, and societal aspects (e.g., global hunger crises). The computational aspects of computational sustainability leverage techniques from mathematics and computer science, in the areas of artificial intelligence, machine learning, algorithms, game theory, mechanism design, information science, optimization (including combinatorial optimization), dynamical systems, and multi-agent systems.
While the formal emergence of computational sustainability is often traced back to the years 2008 and 2009, marked by the initiation of an NSF-funded award, and specific conferences and workshops, the exploration of computational methods to tackle environmental and societal sustainability issues predates this period. The use of statistical and mathematical models for sustainability-related problems has a long history, paralleling the evolution of computing technology itself. A notable example is the early attempts at climate modeling, which were constrained by the limited computing resources available at the time, necessitating simplified models.
In the realm of artificial intelligence, particularly within machine learning, the 1990s saw research efforts addressing ecological modeling and wastewater management, among other sustainability issues. This work continued into the 2000s, supported by groups like the "Machine Learning for the Environment" working group established by the National Center for Ecological Analysis and Synthesis in 2006. Research on optimization to aid sustainability challenges, such as designing wildlife reserves, can be traced back to the 1980s.
The early 2000s also witnessed a growing concern over the environmental impact of computing technology itself, with green information and communications technology (ICT) gaining attention among ICT companies. This interest extended beyond the immediate environmental effects of computing to consider second-order and higher-order impacts, such as the potential of ICT to reduce the carbon footprint of air travel through online conferencing or to optimize delivery routes to lower CO2 emissions. International policy efforts, particularly by the Organization for Economic Cooperation and Development (OECD), have since focused on a framework recognizing these multi-tiered effects of ICT, a focus that continues today.
Before the OECD's 2008 conference, mathematicians proposed using their expertise to combat climate change, signaling a growing recognition of the research community's role in sustainability. This period also saw the establishment of the Institute for Computational Sustainability in 2008 and the launch of the International Conference on Computational Sustainability in 2009, pivotal moments that significantly advanced the field. The inclusion of sustainability themes in major AI conferences further integrated sustainability into the broader computing and scientific discourse.
The field of computational sustainability has continued to expand, with significant initiatives like the Sustainability-focused Expeditions in Computing award to the University of Minnesota in 2010, aiming to advance climate understanding through data mining and visualization. The establishment of sustainability-related tracks and awards at various conferences, along with targeted funding by organizations like the NSF, underscores the growing importance of computing in addressing sustainability challenges.
Sustainability areas
Balancing environmental and socioeconomic needs
Policy planning
Human health
Biodiversity and conservation
Biodiversity conservation focuses primarily on preserving the diversity of species, sustainable utilization of species and ecosystems, and maintaining life-supporting systems and essential ecological processes.
Conservation of species is an important sustainability goal to prevent biodiversity loss. As urbanization is expanding across the globe, it threatens wildlife in and around cities. An effort towards conservation has included the creation of wildlife corridors that are used to connect wildlife populations that have become isolated from man-made habitat fragmentation. Building these wildlife corridors is a challenge due to barriers between habitats and property owners (Zellmer, Goto). Moving species to connect core conservation areas through corridors results in an optimization problem. This is where technology can help, not only in optimizing corridors but in terms of helping with cost-benefit analysis.
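A toy illustration of this corridor-design optimization is to model the landscape as a resistance grid and connect two habitat patches by a least-cost path; the following Python sketch (using the networkx library) does this with an invented grid, costs, and patch locations, whereas real corridor planning involves many more ecological and economic constraints.

```python
# Corridor design as a least-cost path on a resistance raster: each cell has
# a movement cost, and we connect two habitat patches with the cheapest chain
# of cells.  Grid values and patch locations are invented for illustration.
import networkx as nx

resistance = [
    [1, 1, 5, 9, 9],
    [2, 1, 5, 9, 1],
    [9, 1, 1, 1, 1],
    [9, 9, 5, 1, 9],
    [9, 9, 5, 1, 1],
]
rows, cols = len(resistance), len(resistance[0])

graph = nx.grid_2d_graph(rows, cols)             # 4-neighbour grid of cells
for a, b in graph.edges():
    # cost of moving between two cells = mean of their resistances
    graph.edges[a, b]["weight"] = 0.5 * (resistance[a[0]][a[1]] + resistance[b[0]][b[1]])

habitat_a, habitat_b = (0, 0), (4, 4)            # the two patches to connect
corridor = nx.shortest_path(graph, habitat_a, habitat_b, weight="weight")
cost = nx.path_weight(graph, corridor, weight="weight")
print(f"corridor cells: {corridor}\ntotal resistance: {cost}")
```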
Moreover, artificial intelligence serves as a tool in the ongoing battle against biodiversity loss and illegal activities such as poaching. In recent years there has been significant research on wildlife monitoring strategies to better understand patterns and enhance security to combat poaching (Gomes). This integration of AI into wildlife conservation efforts represents a significant step forward in collective efforts to safeguard and protect natural ecosystems.
United Nations' Sustainability Development Goals
The United Nations lists seventeen different Sustainable Development Goals (SDGs) to protect the planet, all of which are important in different ways. Sustainable Development Goal 14 emphasizes protecting life under water, and Sustainable Development Goal 15 addresses protecting life on land. While technology has historically favored profitable sectors, its potential to transform environmental sustainability, particularly wildlife conservation, remains largely untapped. By examining the challenges and the existing and potential contributions of technological advances toward Sustainable Development Goals 14 and 15, computational innovations can be harnessed to protect life under water and on land.
Machine learning techniques have been applied to address challenges in fire prediction and management in Alaska's boreal forests. Studies have underscored the importance of adapting existing fire management strategies to the evolving fire landscape, especially considering the impact of climate change on fire frequencies. By incorporating diverse variables such as topography, vegetation, and meteorological factors, this research aligns with the computational sustainability paradigm, which seeks to leverage computational models for sustainable environmental practices.
One novel machine learning framework for fire prediction represents a significant contribution to computational sustainability in environmental monitoring. The model, centered on identifying the specific ignitions likely to lead to large fires, provides a more straightforward and interpretable alternative to existing, more complex prediction models. Its emphasis on two key variables, vapour pressure deficit (VPD) and spruce fraction, reflects a commitment to practical and actionable computational approaches to environmental assessment. The assessment of how active fire management influences fire regimes highlights the role of human intervention in shaping environmental outcomes, illustrating the potential of computational sustainability for informed decision-making in environmental monitoring and assessment.
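The following is an illustrative sketch only, not the published model: a two-variable classifier in the spirit of the framework described above, trained on synthetic data, with vapour pressure deficit and spruce fraction as the assumed features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
vpd = rng.uniform(0.1, 3.0, n)       # vapour pressure deficit (kPa), synthetic
spruce = rng.uniform(0.0, 1.0, n)    # spruce fraction around the ignition, synthetic

# Synthetic ground truth: drier air and more spruce make a large fire more likely.
p_large = 1 / (1 + np.exp(-(2.0 * vpd + 3.0 * spruce - 4.0)))
y = rng.random(n) < p_large

X = np.column_stack([vpd, spruce])
model = LogisticRegression().fit(X, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("P(large fire | VPD=2.5, spruce=0.8) ≈", model.predict_proba([[2.5, 0.8]])[0, 1])
```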
Environmental monitoring and assessment
Species distribution modeling
Computational sustainability researchers have advanced techniques to combat the biodiversity loss facing the world during the current sixth extinction. Researchers have created computational methods for geospatially mapping the distribution, migration patterns, and wildlife corridors of species, which enable scientists to quantify conservation efforts and recommend effective policies.
Renewable and sustainable energy and materials
Using "affordable and clean energy" is one of the seventeen Sustainable Development Goals (SDG) worldwide (Gomes et al., 2019).
The Sun, the only star in the solar system, can provide clean and renewable energy to meet the demands of growing populations. Unlike fossil fuels, solar energy production generates no pollutants or greenhouse gases, so relying on it can reduce carbon footprints, abating global warming and ecosystem destruction. The Sun will shine for approximately another 5 billion years, making it a long-term and stable energy source. If humans can extract and convert this energy efficiently, both the environment and the economy can benefit, contributing to sustainability.
However, renewable energy, including wind and solar energy, is non-dispatchable: humans cannot control these sources or reliably predict their output in advance. When relying on renewables, other sources must compensate for shortfalls, and these are usually fossil fuels, which are considered unsustainable. Alternatively, energy from renewable sources can be stored to cover the difference, which can be expensive.
Scientists must consider several factors when designing a storage system, including frequency regulation, energy shifting, peak shifting, and backup power (Gomes et al., 2019). Deciding whether to draw on diverse energy sources or to store energy for unexpected situations is difficult, and the approaches for each strategy are complicated. Scientists have therefore cast this scenario as an optimization problem that involves the three "broad sustainability themes"—simulation, machine learning, and citizen science (Gomes et al., 2019).
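A hedged toy example of the storage-versus-backup decision described above (it is not the SMART-Invest model): an hour-by-hour greedy dispatch that stores surplus solar energy in a battery and falls back on fossil generation only when the battery runs out. The solar and demand profiles and battery capacities are made up for illustration.

```python
def dispatch(solar, demand, capacity):
    """Greedy hourly dispatch: store surplus, drain the battery before using backup."""
    charge, backup_used = 0.0, 0.0
    for s, d in zip(solar, demand):
        if s >= d:
            charge = min(capacity, charge + (s - d))   # surplus: charge the battery
        else:
            shortfall = d - s
            from_battery = min(charge, shortfall)      # deficit: use stored energy first
            charge -= from_battery
            backup_used += shortfall - from_battery    # remainder comes from fossil backup
    return backup_used

solar  = [0, 0, 3, 6, 7, 5, 2, 0]    # toy hourly generation profile
demand = [2, 2, 2, 3, 3, 4, 5, 4]    # toy hourly demand profile
for cap in (0, 2, 5, 10):
    print(f"battery capacity {cap}: fossil backup needed = {dispatch(solar, demand, cap)}")
```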
Climate change and renewable energy are closely interrelated. Renewable energy sources such as the Sun and wind depend strongly on the climate: on a cloudy day, less solar energy is collected because of cloud cover, and the UV index also affects solar energy production. Conversely, using renewable energy at large scales benefits the environment by reducing global warming and extreme weather. Constructing an accurate climate model and predicting the weather therefore becomes essential for renewable energy production.
In his article, Jones explores the use of artificial intelligence (AI) in simulating the climate (2018). The major problems with computer climate modeling are a lack of detail and slow simulation. Different computing approaches can also yield different and inaccurate results; for example, if the atmospheric carbon dioxide level doubles, one model predicts a temperature increase more than three times that predicted by another. Scientists therefore incorporate machine learning frameworks into existing climate models. This combination lets computers efficiently pick up subtler details than traditional methods, even with slight uncertainties and deviations, to give accurate simulations and predictions (Jones, 2018). At the same time, machine learning techniques, including normalizing flows, can infer long-term patterns and behaviors from data covering only a short period.
Small-scale simulations that are more efficient can be used for prediction, especially with characteristic models. For instance, information on how clouds evolve over a region a few miles across and over a short period is sufficient for "Cloud Brain", a deep learning code that infers climate change due to increasing carbon dioxide emissions. The framework can then work out the climate model on a large scale and over long periods. This model is more efficient than traditional high-resolution simulations yet gives similar and realistic results (Jones, 2018). Normalizing flows perform functions similar to those of "Cloud Brain": after initial and final conditions are fed into the neural network, the algorithm learns a chain of transformations. The given conditions generally come from a short period, while the chain can be applied universally to infer and predict long-term scenarios.
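A minimal sketch of the normalizing-flow idea mentioned above, not of "Cloud Brain" itself: an invertible transformation maps a simple base distribution to a more complex one, and the change-of-variables formula yields exact densities for the transformed samples. The specific map used here is arbitrary.

```python
import numpy as np

def forward(z, a=2.0, b=1.0):
    """Invertible map: affine transform followed by a smooth, invertible nonlinearity."""
    y = a * z + b
    return np.sinh(y)                                   # derivative is cosh(y)

def log_prob_x(x, a=2.0, b=1.0):
    """Exact density of x under the flow via the change-of-variables formula."""
    y = np.arcsinh(x)                                   # invert the nonlinearity
    z = (y - b) / a                                     # invert the affine map
    log_base = -0.5 * (z**2 + np.log(2 * np.pi))        # standard normal base density
    log_det = np.log(np.cosh(y)) + np.log(a)            # log |dx/dz|
    return log_base - log_det

z = np.random.default_rng(1).standard_normal(5)
x = forward(z)
print("samples:", x)
print("log-densities:", log_prob_x(x))
```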
However, developing these machine learning techniques to predict the physical world is still challenging. Machine learning operates "intuitively" and may not follow the rules of the physical world. When predicting and building the climate model, AI cannot, for efficiency, take into account different physical factors such as gravity and temperature gradients, and the lack of such rules in the framework can lead to unrealistic results. These frameworks can also be inflexible and fail to adapt to new and diverse environments; "Cloud Brain" cannot predict accurately when the temperature is high (Jones, 2018). Like the "black-box function" in SMART-Invest (Gomes et al., 2019), these machine-learning techniques offer little transparency, and people struggle to interpret and comprehend the models (Jones, 2018). In normalizing flows, learning the exact bijective transformations takes extra effort, and few packages can express each transformation explicitly. Some transformations may disobey physical laws, and scientists have no way to identify and fix the issue. Training the model comprehensively, with appropriate supervision by physical laws, therefore becomes necessary. However, heliophysics can be complex, and scientists remain uncertain about the nuclear fusion process inside the Sun. In such cases, no established physics equation describes the energy conversion process, which affects the amount of solar energy humans can extract. Without a "rulebook", machine learning is the best available approach for finding patterns and correlations (Jones, 2018). When implementing normalizing flows in solar energy and heliophysics, the neural network must be allowed some freedom to discover patterns in the unknown regimes of solar physics.
Agriculture
Spatial planning
Spatial planning refers to the methods and approaches used by the public sector to influence the distribution of people and activities in spaces of various scales. It encompasses a broad spectrum of activities related to the use and management of land and public spaces, aiming to ensure sustainable development and to improve the built and natural environments.
Spatial planning covers a wide range of concerns including urban, suburban, and rural development, land use, transportation systems, infrastructure planning, and environmental protection. It aims to coordinate the various aspects of policy and regulation over land use, housing, public amenities, and transport infrastructure, ensuring that these elements work together to promote economic development, environmental sustainability, and quality of life for communities in all types of areas.
This term is often used in a European context and can be seen as an integrated approach that looks beyond traditional urban planning to address the needs and development strategies of a wider range of environments. It involves strategic decision-making to guide the future development and spatial organization of land use in a way that is efficient, sustainable, and equitable.
Urban planning
Transportation
Intelligent transportation systems (ITS) seek to improve safety and travel times while minimizing greenhouse gas emissions for all travelers, though they focus mainly on drivers. ITS comprises two systems: one for data collection and relaying, and another for data processing. Data collection can be achieved with video cameras over busy areas, with sensors that detect everything from the location of particular vehicles to infrastructure that is breaking down, and even by drivers who notice an accident and use a mobile app, like Waze, to report its whereabouts.
Advanced public transportation systems (APTS) aim to make public transportation more efficient and convenient for riders. Electronic payment methods allow users to add money to their smart cards at stations and online. APTS relay information about current vehicle locations to transit facilities, giving riders expected wait times on screens at stations and directly on their smartphones. Advanced traffic management systems (ATMS) collect information using cameras and other sensors that gauge how congested roads are. Ramp meters regulate the number of cars entering highways to limit backups. Traffic signals use algorithms to optimize travel times depending on the number of cars on the road. Electronic highway signs relay information about travel times, detours, and accidents that may affect drivers' ability to reach their destinations.
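A toy illustration of the kind of optimization adaptive traffic signals perform, assuming a hypothetical two-phase intersection and made-up vehicle counts; it is not any deployed ATMS algorithm.

```python
def green_split(counts, cycle_s=90, lost_time_s=10):
    """Split the usable green time of a signal cycle in proportion to observed demand."""
    usable = cycle_s - lost_time_s
    total = sum(counts)
    return [usable * c / total for c in counts]

north_south, east_west = 120, 60            # vehicles counted per interval (hypothetical)
g_ns, g_ew = green_split([north_south, east_west])
print(f"green time N-S: {g_ns:.1f} s, E-W: {g_ew:.1f} s per 90 s cycle")
```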
With the rise of consumer connectivity, less infrastructure is needed for these ITS to make informed decisions. Google Maps uses smartphone crowdsourcing to obtain real-time traffic information, allowing motorists to make decisions based on toll roads, travel times, and overall distance traveled. Cars communicate with their manufacturers to remotely install software updates when new features are added or bugs are patched; Tesla Motors even uses these updates to increase its cars' efficiency and performance. These connections give ITS a means to collect information accurately and relay it to drivers with no other infrastructure needed.
Future ITS systems will aid in car communication with not just the infrastructure, but with other cars as well.
Utilities
The electrical grid was designed to send consumers electricity from generators for a monthly fee based on usage. Homeowners are now installing solar panels, along with large batteries to store the power they create. A smart grid is being created to accommodate these new energy sources: rather than electricity simply being sent to a household to be consumed by its appliances, electricity can flow in either direction. Additional sensors along the grid will improve information collection and decrease downtime during power outages. These sensors can also relay information directly to consumers about how much energy they are using and what the costs will be.
Computational synergies
Active information gathering
Another way computational strategies are used is in active information gathering. Using technology to measure large volumes of data and sort through it is a powerful tool in many fields of study. For example, NASA uses satellites to obtain SAR (synthetic aperture radar) data in order to map the surface of the Earth, and can actively collect data in the visible, near-infrared, and short-wave-infrared portions of the electromagnetic spectrum. These observations can help identify deforestation and rising sea levels and predict future changes to ecosystems based on the wavelengths and polarization of the radar. NASA has made such data publicly available beginning with the European Space Agency's (ESA) Sentinel-1a in 2014.
Sequential decision making
Stochastic optimization
Uncertainty
Probabilistic graphical models
Ensemble methods
Spatiotemporal modeling
Remote sensing
Information retrieval
Vision and learning
Computer vision and machine learning play a crucial role in advancing computational sustainability, offering innovative solutions to complex environmental challenges. By harnessing the power of these technologies, researchers and practitioners are able to analyze vast amounts of data, extract meaningful patterns, and develop sustainable strategies for managing natural resources and ecosystems.
Applications
Wildlife conservation
Computer vision is used to monitor and track endangered species, such as tracking the movements of animals in their natural habitats or identifying individual animals for population studies. For example, camera traps equipped with computer vision algorithms can automatically detect and identify species, allowing researchers to study their behaviors without disturbing them. Machine learning algorithms can analyze these data to understand animal behavior, habitat preferences, and population dynamics, aiding in conservation efforts. This is helpful in assessing the effectiveness of conservation measures and identifying areas in need of protection.
Environmental monitoring
Remote sensing technologies combined with machine learning can monitor air and water quality, detecting pollutants and assessing environmental health. For example, satellite imagery can be used to monitor algal blooms in water bodies, which can be harmful to aquatic life and human health. Computer vision techniques can analyze satellite imagery to detect deforestation and illegal logging activities. By identifying areas at risk, conservationists and authorities can take action to protect forests and biodiversity.
Sustainable agriculture
Computer vision is used to monitor crop health, detecting diseases and nutrient deficiencies early. For example, drones equipped with multispectral cameras can capture images of crops, which are then analyzed using machine learning algorithms to identify health issues. Machine learning algorithms can analyze data from sensors and drones to optimize resource allocation in agriculture. By providing insights into soil health, moisture levels, and crop growth, these algorithms help farmers make informed decisions to improve productivity and sustainability.
Climate change mitigation
Machine learning models can analyze historical climate data to predict future climate patterns. This information is crucial for developing strategies to mitigate the impacts of climate change, such as planning for extreme weather events and sea level rise. Computer vision techniques can be used to monitor renewable energy sources, such as solar panels and wind turbines. By analyzing data on energy production and environmental conditions, these techniques help optimize the use of renewable energy and reduce reliance on fossil fuels.
Significance
Data-driven decision making
Computer vision and machine learning enable data-driven decision-making in sustainability efforts. By analyzing large datasets, researchers can identify trends, predict outcomes, and make informed choices to conserve natural resources and protect the environment.
Efficiency and accuracy
These technologies improve the efficiency and accuracy of environmental monitoring and management. They can process data faster and more accurately than traditional methods, enabling timely interventions to prevent environmental degradation.
Conservation impact
By enabling more precise monitoring and analysis, computer vision and machine learning enhance conservation efforts, helping to protect endangered species, preserve biodiversity, and mitigate the effects of climate change.
Sustainable development
Insights from computer vision and machine learning contribute to the development of sustainable practices in agriculture, forestry, and other industries. By optimizing resource use and minimizing environmental impact, these technologies support long-term sustainability.
Crowdsourced data
Agent-based modeling
Agent-based modeling (ABM) has many applications across a variety of domains within computational sustainability. In wildlife conservation and ecosystem management, ABM simulates animal behaviors and interactions in ecosystems, revealing the impacts of habitat destruction or climate change on biodiversity. In sustainable agriculture, ABM assesses how farmers decide on crop selection, land use, and the adoption of sustainable practices. Urban planning benefits from ABM through simulation of traffic and pedestrian patterns, which can help optimize public transportation systems, reduce carbon emissions, and improve urban quality of life. These examples show how ABM provides a powerful tool for exploring sustainable solutions and helps stakeholders make informed decisions for a sustainable future.
NetLogo is one of the leading and most popular ABM software packages. It allows researchers to design, develop, and implement complex ABMs in a manner that remains accessible to those without programming backgrounds. Because of this, it has widespread applicability and is used in educational settings to help students develop a broader understanding of sustainability issues. The base version of NetLogo comes with many sample models, including 7 models under the Earth Science folder. These models tackle various sustainability issues, ranging from the impact of carbon dioxide emissions on climate change to the percolation of oil through permeable soils during oil spills.
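A hedged sketch of an agent-based model in the spirit described above (written in Python rather than NetLogo, and not one of the bundled sample models): farmer agents on a ring adopt a sustainable practice once a threshold fraction of their neighbours has adopted it. All parameters are hypothetical.

```python
import random

def run_abm(n_agents=50, threshold=0.3, seed_adopters=3, steps=20, rng_seed=0):
    """Threshold adoption model on a ring of agents; returns the final adopter count."""
    rng = random.Random(rng_seed)
    adopted = [False] * n_agents
    for i in rng.sample(range(n_agents), seed_adopters):
        adopted[i] = True                       # a few early adopters seed the process
    for _ in range(steps):
        new = adopted[:]
        for i in range(n_agents):
            neighbours = [adopted[(i - 1) % n_agents], adopted[(i + 1) % n_agents]]
            if sum(neighbours) / len(neighbours) >= threshold:
                new[i] = True                   # adopt once enough neighbours have
        adopted = new
    return sum(adopted)

print("adopters after 20 steps:", run_abm(), "of 50 agents")
```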
Constraint-based reasoning
Game theory and mechanism design
Databases
Mobile apps
Mobile applications are increasingly being used in biodiversity monitoring and conservation citizen science projects. These apps allow volunteers to easily record and share species observations, photos and other ecological data directly from the field using their smartphones. By harnessing the power of mobile technology and an active citizen community, these projects can gather large amounts of valuable biodiversity data across a variety of settings in a cost-effective way, compared to traditional survey methods conducted by professional scientists alone.
Some popular examples of mobile apps for biodiversity monitoring include iNaturalist, eBird, and Merlin. iNaturalist allows users to record observations, share them with fellow naturalists, and contribute to biodiversity science by sharing findings with scientific data repositories. eBird, managed by the Cornell Lab of Ornithology, enables birdwatchers to enter their sightings and access tools that make birding more rewarding, such as managing lists, photos, and audio recordings, and seeing real-time species distribution maps. Merlin, also from the Cornell Lab, helps users identify bird species through AI-powered visual recognition and question-based filtering, and contributes sightings to the eBird database. These apps showcase effective design practices that have enabled them to gather significant biodiversity data through public participation.
See also
Carla Gomes, pioneer of computational sustainability
Environmental informatics
eBird
Green computing
Institute for Computational Sustainability (ICS)
The Nature Conservancy
United States Fish and Wildlife Service
United States Geological Survey
References
External links
Computational Sustainability
Institute for Computational Sustainability (ICS)
Sustainable development
Computational science
Computational fields of study
Environmental technology | Computational sustainability | [
"Mathematics",
"Technology"
] | 4,671 | [
"Computational science",
"Applied mathematics",
"Computational fields of study",
"Computing and society"
] |
21,252,213 | https://en.wikipedia.org/wiki/CACNA1G | Calcium channel, voltage-dependent, T type, alpha 1G subunit, also known as CACNA1G or Cav3.1 is a protein which in humans is encoded by the CACNA1G gene. It is one of the primary targets in the pharmacology of absence seizure.
Function
Cav3.1 is a type of low-voltage-activated calcium channel, known as "T-type" for its transient opening and closing. It is expressed in the thalamocortical relay nuclei and is involved in slow-wave sleep and absence seizures. During slow-wave sleep, Cav3.1 enters burst mode, forming a self-sustaining synchronous cycle between cortex and thalamus that isolates sensory inputs from the cortex; while awake, the thalamus instead relays sensory inputs from outside the central nervous system. Because the mechanism of absence seizures has much in common with slow-wave sleep, a blocker that inhibits burst-mode activation of Cav3.1 is effective in treating absence seizures. Common drugs include ethosuximide and trimethadione.
Interactive pathway map
See also
T-type calcium channel
References
External links
Ion channels
Integral membrane proteins | CACNA1G | [
"Chemistry"
] | 261 | [
"Neurochemistry",
"Ion channels"
] |
21,254,434 | https://en.wikipedia.org/wiki/Spin%20states%20%28d%20electrons%29 | Spin states when describing transition metal coordination complexes refers to the potential spin configurations of the central metal's d electrons. For several oxidation states, metals can adopt high-spin and low-spin configurations. The ambiguity only applies to first row metals, because second- and third-row metals are invariably low-spin. These configurations can be understood through the two major models used to describe coordination complexes; crystal field theory and ligand field theory (a more advanced version based on molecular orbital theory).
High-spin vs. low-spin
Octahedral complexes
The Δ splitting of the d orbitals plays an important role in the electron spin state of a coordination complex. Three factors affect Δ: the period (row in periodic table) of the metal ion, the charge of the metal ion, and the field strength of the complex's ligands as described by the spectrochemical series. Only octahedral complexes of first row transition metals adopt high-spin states.
In order for low spin splitting to occur, the energy cost of placing an electron into an already singly occupied orbital must be less than the cost of placing the additional electron into an eg orbital at an energy cost of Δ. If the energy required to pair two electrons is greater than the energy cost of placing an electron in an eg, Δ, high spin splitting occurs.
If the separation between the orbitals is large, then the lower energy orbitals are completely filled before population of the higher orbitals according to the Aufbau principle. Complexes such as this are called "low-spin" since filling an orbital pairs electrons and reduces the total electron spin. If the separation between the orbitals is small enough, then it is easier to put electrons into the higher energy orbitals than it is to put two into the same low-energy orbital, because of the repulsion that results from pairing two electrons in the same orbital. So, one electron is put into each of the five d orbitals before any pairing occurs, in accord with Hund's rule, resulting in what is known as a "high-spin" complex. Complexes such as this are called "high-spin" since populating the upper orbital avoids pairing electrons of opposite spin.
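A small sketch of the rule just described: for an octahedral d^n ion, compare the pairing energy P with the splitting Δ to decide whether electrons pair in the lower t2g set (low spin) or occupy the upper eg set (high spin). The numerical Δ and P values below are hypothetical inputs, not measured data.

```python
def octahedral_config(n_d, delta, pairing):
    """Return the spin regime and number of unpaired electrons for an octahedral d^n ion."""
    spin = "low" if delta > pairing else "high"
    if spin == "high":
        # Hund's rule: fill all five d orbitals singly first, then pair.
        unpaired = n_d if n_d <= 5 else 10 - n_d
    else:
        # Fill the three t2g orbitals (pairing as needed) before any eg occupation.
        t2g = min(n_d, 6)
        eg = n_d - t2g
        unpaired_t2g = t2g if t2g <= 3 else 6 - t2g
        unpaired_eg = eg if eg <= 2 else 4 - eg
        unpaired = unpaired_t2g + unpaired_eg
    return spin, unpaired

for n in range(4, 8):
    print(f"d{n}:", octahedral_config(n, delta=20000, pairing=15000),
          octahedral_config(n, delta=10000, pairing=15000))
```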
The charge of the metal center plays a role in the ligand field and the Δ splitting. The higher the oxidation state of the metal, the stronger the ligand field that is created. In the event that there are two metals with the same d electron configuration, the one with the higher oxidation state is more likely to be low spin than the one with the lower oxidation state; for example, Fe2+ and Co3+ are both d6; however, the higher charge of Co3+ creates a stronger ligand field than Fe2+. All other things being equal, Fe2+ is more likely to be high spin than Co3+.
Ligands also affect the magnitude of Δ splitting of the d orbitals according to their field strength as described by the spectrochemical series. Strong-field ligands, such as CN− and CO, increase the Δ splitting and are more likely to be low-spin. Weak-field ligands, such as I− and Br− cause a smaller Δ splitting and are more likely to be high-spin.
Some octahedral complexes exhibit spin crossover, where the high and low spin states exist in dynamic equilibrium.
Tetrahedral complexes
The Δ splitting energy for tetrahedral metal complexes (four ligands), Δtet, is smaller than that for an octahedral complex. Consequently, tetrahedral complexes are almost always high spin. Examples of low spin tetrahedral complexes include Fe(2-norbornyl)4, [Co(4-norbornyl)4]+, and the nitrosyl complex Cr(NO)(N(tms)2)3.
Square planar complexes
Many d8 complexes of the first row metals exist in tetrahedral or square planar geometry. In some cases these geometries exist in measurable equilibria. For example, dichlorobis(triphenylphosphine)nickel(II) has been crystallized in both tetrahedral and square planar geometries.
Ligand field theory vs crystal field theory
In terms of d-orbital splitting, ligand field theory (LFT) and crystal field theory (CFT) give similar results. CFT is an older, simpler model that treats ligands as point charges. LFT is more chemical, emphasizes covalent bonding and accommodates pi-bonding explicitly.
High-spin and low-spin systems
In the case of octahedral complexes, the question of high spin vs low spin first arises for d4, since it has more than the 3 electrons to fill the non-bonding d orbitals according to ligand field theory or the stabilized d orbitals according to crystal field splitting.
All complexes of second and third row metals are low-spin.
d4
Octahedral high-spin: 4 unpaired electrons, paramagnetic, substitutionally labile. Includes Cr2+ (many complexes assigned as Cr(II) are however Cr(III) with reduced ligands), Mn3+.
Octahedral low-spin: 2 unpaired electrons, paramagnetic, substitutionally inert. Includes Cr2+, Mn3+.
d5
Octahedral high-spin: 5 unpaired electrons, paramagnetic, substitutionally labile. Includes Fe3+, Mn2+. Example: Tris(acetylacetonato)iron(III).
Octahedral low-spin: 1 unpaired electron, paramagnetic, substitutionally inert. Includes Fe3+. Example: [Fe(CN)6]3−.
d6
Octahedral high-spin: 4 unpaired electrons, paramagnetic, substitutionally labile. Includes Fe2+, Co3+. Examples: [Fe(H2O)6]2+, [CoF6]3−.
Octahedral low-spin: no unpaired electrons, diamagnetic, substitutionally inert. Includes Fe2+, Co3+, Ni4+. Example: [Co(NH3)6]3+.
d7
Octahedral high-spin: 3 unpaired electrons, paramagnetic, substitutionally labile. Includes Co2+, Ni3+.
Octahedral low-spin: 1 unpaired electron, paramagnetic, substitutionally labile. Includes Co2+, Ni3+. Example: [Co(NH3)6]2+.
d8
Octahedral high-spin: 2 unpaired electrons, paramagnetic, substitutionally labile. Includes Ni2+. Example: [Ni(NH3)6]2+.
Tetrahedral high-spin: 2 unpaired electrons, paramagnetic, substitutionally labile. Includes Ni2+. Example: [NiCl4]2-.
Square planar low-spin: no unpaired electrons, diamagnetic, substitutionally inert. Includes Ni2+. Example: [Ni(CN)4]2−.
Ionic radii
The spin state of the complex affects an atom's ionic radius. For a given d-electron count, high-spin complexes are larger.
d4
Octahedral high spin: Cr2+, 64.5 pm.
Octahedral low spin: Mn3+, 58 pm.
d5
Octahedral high spin: Fe3+, the ionic radius is 64.5 pm.
Octahedral low spin: Fe3+, the ionic radius is 55 pm.
d6
Octahedral high spin: Fe2+, the ionic radius is 78 pm, Co3+ ionic radius 61 pm.
Octahedral low spin: Includes Fe2+ ionic radius 62 pm, Co3+ ionic radius 54.5 pm, Ni4+ ionic radius 48 pm.
d7
Octahedral high spin: Co2+ ionic radius 74.5 pm, Ni3+ ionic radius 60 pm.
Octahedral low spin: Co2+ ionic radius 65 pm, Ni3+ ionic radius 56 pm.
d8
Octahedral high spin: Ni2+ ionic radius 69 pm.
Square planar low-spin: Ni2+ ionic radius 49 pm.
Ligand exchange rates
Generally, the rates of ligand dissociation from low spin complexes are lower than dissociation rates from high spin complexes. In the case of octahedral complexes, electrons in the eg levels are anti-bonding with respect to the metal-ligand bonds. Famous "exchange inert" complexes are octahedral complexes of d3 and low-spin d6 metal ions, illustrated respectively by Cr3+ and Co3+.
References
Coordination chemistry
Electron states | Spin states (d electrons) | [
"Chemistry"
] | 1,841 | [
"Electron",
"Coordination chemistry",
"Electron states"
] |
21,256,480 | https://en.wikipedia.org/wiki/Fisher-Porter%20tube | A Fisher-Porter tube or Fisher-Porter vessel is a glass pressure vessel used in the chemical laboratory. The reaction vessel consists of a lipped heavy-wall borosilicate glass tube and a lid made from stainless steel. The lid is sealed with an o-ring and held in place with a coupling.
The advantage over steel autoclaves is that the progress of a reaction can be followed by eye. The maximum pressure that can be achieved is much lower than that in a metal bomb. For example, typical pressure ratings are 7 bar for a large 335 mL Fisher-Porter vessel and 15 bar for a small 90 mL one, whereas the usual kind of bomb is safe to use with 200 bar. Illustrative applications involve reactions at elevated temperatures using volatile reagents.
Name
The name has become something of a genericised trademark. For decades these flasks used to be made by the Fisher & Porter Company until it became a part of ABB. Nowadays they are sold by Andrews Glass under the Lab-Crest label.
Alternatives
Ace Glass offers thick-walled glass tubes with their proprietary Ace-Thred screw caps. Caps are available to fit gas plunger valves to admit gases under pressure. Similar arrangements are available from Q Labtech and sold through Sigma-Aldrich.
External links
Manufacturer's webpage
References
Laboratory equipment
Pressure vessels
Q-Tube, a safer alternative pressure vessel | Fisher-Porter tube | [
"Physics",
"Chemistry",
"Engineering"
] | 281 | [
"Structural engineering",
"Chemical equipment",
"Physical systems",
"Hydraulics",
"Pressure vessels"
] |
21,256,927 | https://en.wikipedia.org/wiki/RTK%20class%20III | RTK class III is a class of receptor tyrosine kinases.
It includes PDGFRα, PDGFRβ, C-KIT, CSF1R, and FLT3.
References
Tyrosine kinase receptors | RTK class III | [
"Chemistry"
] | 49 | [
"Tyrosine kinase receptors",
"Signal transduction"
] |
21,257,247 | https://en.wikipedia.org/wiki/Arrival%20theorem | In queueing theory, a discipline within the mathematical theory of probability, the arrival theorem (also referred to as the random observer property, ROP or job observer property) states that "upon arrival at a station, a job observes the system as if in steady state at an arbitrary instant for the system without that job."
The arrival theorem always holds in open product-form networks with unbounded queues at each node, but it also holds in more general networks. A necessary and sufficient condition for the arrival theorem to be satisfied in product-form networks is given in terms of Palm probabilities in Boucherie & Dijk, 1997. A similar result also holds in some closed networks. Examples of product-form networks where the arrival theorem does not hold include reversible Kingman networks and networks with a delay protocol.
Mitrani offers the intuition that "The state of node i as seen by an incoming job has a different distribution from the state seen by a random observer. For instance, an incoming job can never see all k jobs present at node i, because it itself cannot be among the jobs already present."
Theorem for arrivals governed by a Poisson process
For Poisson processes the property is often referred to as the PASTA property (Poisson Arrivals See Time Averages) and states that the probability of the state as seen by an outside random observer is the same as the probability of the state seen by an arriving customer. The property also holds for the case of a doubly stochastic Poisson process where the rate parameter is allowed to vary depending on the state.
Theorem for Jackson networks
In an open Jackson network with m queues, write (n1, ..., nm) for the state of the network. Suppose π(n1, ..., nm) is the equilibrium probability that the network is in state (n1, ..., nm). Then the probability that the network is in state (n1, ..., nm) immediately before an arrival to any node is also π(n1, ..., nm).
Note that this theorem does not follow from Jackson's theorem, where the steady state in continuous time is considered. Here we are concerned with particular points in time, namely arrival times. This theorem was first published by Sevcik and Mitrani in 1981.
Theorem for Gordon–Newell networks
In a closed Gordon–Newell network with m queues and a fixed number of customers, write (n1, ..., nm) for the state of the network. For a customer in transit to a node, let α(n1, ..., nm) denote the probability that immediately before arrival the customer 'sees' the state of the system to be (n1, ..., nm).
This probability, α(n1, ..., nm), is the same as the steady state probability for state (n1, ..., nm) in a network of the same type with one customer less. It was published independently by Sevcik and Mitrani, and by Reiser and Lavenberg, where the result was used to develop mean value analysis.
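A hedged illustration of the related PASTA property for a single M/M/1 queue, a much simpler system than the networks above: the distribution of the queue length seen by Poisson arrivals matches the time-averaged distribution. Parameters are arbitrary.

```python
import random

def simulate(lam=0.5, mu=1.0, horizon=100000, seed=1):
    """Simulate an M/M/1 queue and compare time-averaged and arrival-seen distributions."""
    rng = random.Random(seed)
    t, n, arrivals = 0.0, 0, 0
    time_in_state = {}          # time-weighted occupancy of each queue length
    seen_by_arrivals = {}       # queue length observed just before each arrival
    while t < horizon:
        rate = lam + (mu if n > 0 else 0.0)
        dt = rng.expovariate(rate)
        time_in_state[n] = time_in_state.get(n, 0.0) + dt
        t += dt
        if rng.random() < lam / rate:
            # Next event is a Poisson arrival: record the state it "sees".
            seen_by_arrivals[n] = seen_by_arrivals.get(n, 0) + 1
            arrivals += 1
            n += 1
        else:
            # Next event is a departure (only possible when n > 0).
            n -= 1
    p_time = {k: v / t for k, v in time_in_state.items()}
    p_arr = {k: v / arrivals for k, v in seen_by_arrivals.items()}
    return p_time, p_arr

p_time, p_arr = simulate()
for k in range(4):
    print(k, round(p_time.get(k, 0.0), 3), round(p_arr.get(k, 0.0), 3))
```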
Notes
Queueing theory
Probability theorems | Arrival theorem | [
"Mathematics"
] | 545 | [
"Theorems in probability theory",
"Mathematical theorems",
"Mathematical problems"
] |
31,344,199 | https://en.wikipedia.org/wiki/UT-VPN | University of Tsukuba Virtual Private Network, UT-VPN is a free and open source software application that implements virtual private network (VPN) techniques for creating secure point-to-point or site-to-site connections in routed or bridged configurations and remote access facilities. It uses SSL/TLS security for encryption and is capable of traversing network address translators (NATs) and firewalls. It was written by Daiyuu Nobori and SoftEther Corporation, and is published under the GNU General Public License (GPL) by University of Tsukuba.
UT-VPN is compatible with the PacketiX VPN product of SoftEther Corporation. UT-VPN was developed on the basis of PacketiX VPN, but some functions were removed. For example, the RADIUS client is supported by PacketiX VPN Server but not by UT-VPN Server.
Architecture
Encryption
UT-VPN uses the OpenSSL library to provide encryption to packets.
Authentication
UT-VPN offers username/password-based authentication.
Networking
UT-VPN consists of UT-VPN Server and UT-VPN Client. UT-VPN functions as an L2 VPN (over SSL/TLS).
UT-VPN Client
A 'Virtual NIC' (virtual network interface card) is installed in the operating system on which UT-VPN Client is installed. The Virtual NIC is recognized by the OS as a physical NIC. UT-VPN encapsulates the L2 frames from the Virtual NIC into TCP (or SSL/TLS) packets.
UT-VPN Client connects to UT-VPN Server. If authentication with the UT-VPN Server succeeds, the UT-VPN Client establishes a connection with a Virtual HUB.
UT-VPN Server
UT-VPN Server hosts one or more 'Virtual HUBs', which function as virtual L2 switches. A Virtual HUB handles the frames received from UT-VPN Clients. When necessary, UT-VPN Server forwards encapsulated L2 frames to UT-VPN Clients.
A Virtual HUB on a UT-VPN Server can make a cascading connection to a Virtual HUB on another UT-VPN Server. Site-to-site connections are realized through such cascading connections.
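A generic illustration of the underlying idea of carrying Layer-2 Ethernet frames inside a TLS-protected TCP stream; this is not the actual UT-VPN (or PacketiX) wire protocol, and the host name and framing scheme are invented for the example.

```python
import socket
import ssl
import struct

def send_frame(tls_sock, ethernet_frame: bytes) -> None:
    # Prefix each frame with its length so the receiver can find frame boundaries.
    tls_sock.sendall(struct.pack("!H", len(ethernet_frame)) + ethernet_frame)

def recv_exact(tls_sock, n: int) -> bytes:
    data = b""
    while len(data) < n:
        chunk = tls_sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("connection closed")
        data += chunk
    return data

def recv_frame(tls_sock) -> bytes:
    (length,) = struct.unpack("!H", recv_exact(tls_sock, 2))
    return recv_exact(tls_sock, length)

# Hypothetical usage (host name invented): open a TLS connection and tunnel one frame.
# context = ssl.create_default_context()
# with socket.create_connection(("vpn.example.org", 443)) as raw:
#     with context.wrap_socket(raw, server_hostname="vpn.example.org") as tls:
#         send_frame(tls, b"\xff" * 6 + b"\x02\x00\x00\x00\x00\x01" + b"\x08\x06" + b"payload")
```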
L2 Bridge
UT-VPN Server can bridge between any NIC of the host operating system and a Virtual HUB.
L3 Switch
UT-VPN Server has a virtual L3 switch function. The virtual L3 switch performs L3 switching between Virtual HUBs on the same UT-VPN Server.
Operational Environment
UT-VPN Server
Windows
Windows 98 / Millennium Edition
Windows NT 4.0
Windows 2000
Windows XP
Windows Server 2003
Windows Vista
Windows Server 2008
Hyper-V Server
Windows 7
Windows Server 2008 R2
* Supported for x86/x64
UNIX
Linux (2.4 or later)
FreeBSD (6.0 or later)
Solaris (8.0 or later)
Mac OS X (Tiger or later)
* UT-VPN Server works in any environment in which the source code can be compiled.
UT-VPN Client
Windows
Windows 98
Windows ME
Windows 2000
Windows XP
Windows Server 2003
Windows Vista
Windows Server 2008
Hyper-V Server
Windows 7
Windows Server 2008 R2
*Supported for x86/x64
UNIX
Linux (2.4 or later)
* The Virtual NIC does not work in other UNIX operating systems.
Community
The primary method for community support is through the SoftEther mailing lists.
See also
University of Tsukuba
SoftEther Corporation
OpenVPN, The well-known open source VPN software.
References
External links
Official links
UT-VPN OpenSource Project (Japanese)
UT-VPN Download (Japanese, require email address)
Computer network security
Tunneling protocols
Free security software
Unix network-related software | UT-VPN | [
"Engineering"
] | 778 | [
"Cybersecurity engineering",
"Computer networks engineering",
"Computer network security",
"Tunneling protocols"
] |
31,344,475 | https://en.wikipedia.org/wiki/Isotope%20electrochemistry | Isotope electrochemistry is a field within electrochemistry concerned with various topics like electrochemical separation of isotopes, electrochemical estimation of isotopic exchange equilibrium constants, electrochemical kinetic isotope effect, electrochemical isotope sensors, etc.
It is an active domain of investigation. It overlaps with many other domains of both theoretical and practical importance like nuclear engineering, radiochemistry, electrochemical technology, geochemistry, sensors and instrumentation.
See also
Bioelectrochemical reactor
Concentration cell
Electrochemical cell
Electrochemical engineering
Equilibrium fractionation
Transient kinetic isotope fractionation
Notes
External links
electrochemical investigation using isotopic effects
http://hydrobor.com.tr/bilgi-bankasi/.../Ti%20Ni%20intermetallic%20phases.pdf
electrochemical isotope separation
electrochemical isotope cell
ACS radioelectrochemistry
electroplating isotope effects
JES electrochemical pumping isotope effects
electrochemical isotope fractionation
https://web.archive.org/web/20120319145651/http://wwwsoc.nii.ac.jp/aesj/publication/JNST2002/No.4/39_367-370.pdf
DOI.org
http://onlinelibrary.wiley.com/doi/10.1002/anie.196707571/abstract
Electrochemistry
Chemical engineering
Isotopes | Isotope electrochemistry | [
"Physics",
"Chemistry",
"Engineering"
] | 296 | [
"Physical chemistry stubs",
"Chemical engineering",
"Isotope stubs",
"Isotopes",
"Electrochemistry",
"Nuclear chemistry stubs",
"nan",
"Nuclear physics",
"Electrochemistry stubs"
] |
31,349,351 | https://en.wikipedia.org/wiki/Quantum%20spin%20liquid | In condensed matter physics, a quantum spin liquid is a phase of matter that can be formed by interacting quantum spins in certain magnetic materials. Quantum spin liquids (QSL) are generally characterized by their long-range quantum entanglement, fractionalized excitations, and absence of ordinary magnetic order.
The quantum spin liquid state was first proposed by physicist Phil Anderson in 1973 as the ground state for a system of spins on a triangular lattice that interact antiferromagnetically with their nearest neighbors, i.e. neighboring spins seek to be aligned in opposite directions. Quantum spin liquids generated further interest when in 1987 Anderson proposed a theory that described high-temperature superconductivity in terms of a disordered spin-liquid state.
Basic properties
The simplest kind of magnetic phase is a paramagnet, where each individual spin behaves independently of the rest, just like atoms in an ideal gas. This highly disordered phase is the generic state of magnets at high temperatures, where thermal fluctuations dominate. Upon cooling, the spins will often enter a ferromagnet (or antiferromagnet) phase. In this phase, interactions between the spins cause them to align into large-scale patterns, such as domains, stripes, or checkerboards. These long-range patterns are referred to as "magnetic order," and are analogous to the regular crystal structure formed by many solids.
Quantum spin liquids offer a dramatic alternative to this typical behavior. One intuitive description of this state is as a "liquid" of disordered spins, in comparison to a ferromagnetic spin state, much in the way liquid water is in a disordered state compared to crystalline ice. However, unlike other disordered states, a quantum spin liquid state preserves its disorder to very low temperatures. A more modern characterization of quantum spin liquids involves their topological order, long-range quantum entanglement properties, and anyon excitations.
Examples
Several physical models have a disordered ground state that can be described as a quantum spin liquid.
Frustrated magnetic moments
Localized spins are frustrated if there exist competing exchange interactions that cannot all be satisfied at the same time, leading to a large degeneracy of the system's ground state. A triangle of Ising spins (meaning that the only possible orientations of the spins are "up" or "down") that interact antiferromagnetically is a simple example of frustration. In the ground state, two of the spins can be antiparallel but the third one cannot. This leads to an increased number of possible spin configurations in the ground state (six in this case), enhancing fluctuations and thus suppressing magnetic ordering.
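A quick check of the statement above: enumerating all 2^3 configurations of three antiferromagnetically coupled Ising spins on a triangle confirms that one bond is always frustrated and that six of the eight configurations share the ground-state energy.

```python
from itertools import product

J = 1.0                                     # antiferromagnetic coupling (J > 0 penalizes aligned pairs)
bonds = [(0, 1), (1, 2), (0, 2)]            # the three bonds of the triangle
energies = {}
for spins in product((-1, +1), repeat=3):
    energies[spins] = sum(J * spins[i] * spins[j] for i, j in bonds)

ground = min(energies.values())
ground_states = [s for s, e in energies.items() if e == ground]
print("ground-state energy:", ground)                  # -J: one bond is always frustrated
print("number of ground states:", len(ground_states))  # 6 of the 8 configurations
```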
A recent research work used this concept in analyzing brain networks and surprisingly indicated frustrated interactions in the brain corresponding to flexible neural interactions. This observation highlights the generalization of the frustration phenomenon and proposes its investigation in biological systems.
Resonating valence bonds (RVB)
To build a ground state without magnetic moment, valence bond states can be used, where two electron spins form a spin 0 singlet due to the antiferromagnetic interaction. If every spin in the system is bound like this, the state of the system as a whole has spin 0 too and is non-magnetic. The two spins forming the bond are maximally entangled, while not being entangled with the other spins. If all spins are distributed to certain localized static bonds, this is called a valence bond solid (VBS).
There are two things that still distinguish a VBS from a spin liquid: First, by ordering the bonds in a certain way, the lattice symmetry is usually broken, which is not the case for a spin liquid. Second, this ground state lacks long-range entanglement. To achieve this, quantum mechanical fluctuations of the valence bonds must be allowed, leading to a ground state consisting of a superposition of many different partitionings of spins into valence bonds. If the partitionings are equally distributed (with the same quantum amplitude), there is no preference for any specific partitioning ("valence bond liquid"). This kind of ground state wavefunction was proposed by P. W. Anderson in 1973 as the ground state of spin liquids and is called a resonating valence bond (RVB) state. These states are of great theoretical interest as they are proposed to play a key role in high-temperature superconductor physics.
Excitations
The valence bonds do not have to be formed by nearest neighbors only and their distributions may vary in different materials. Ground states with large contributions of long range valence bonds have more low-energy spin excitations, as those valence bonds are easier to break up. On breaking, they form two free spins. Other excitations rearrange the valence bonds, leading to low-energy excitations even for short-range bonds. Something very special about spin liquids is that they support exotic excitations, meaning excitations with fractional quantum numbers. A prominent example is the excitation of spinons which are neutral in charge and carry spin . In spin liquids, a spinon is created if one spin is not paired in a valence bond. It can move by rearranging nearby valence bonds at low energy cost.
Realizations of (stable) RVB states
The first discussion of the RVB state on the square lattice using the RVB picture considered only nearest-neighbour bonds connecting different sub-lattices. The constructed RVB state is an equal-amplitude superposition of all nearest-neighbour bond configurations. Such an RVB state is believed to contain an emergent gapless gauge field which may confine the spinons. Thus the equal-amplitude nearest-neighbour RVB state on the square lattice is unstable and does not correspond to a quantum spin phase; it may instead describe a critical phase transition point between two stable phases. One version of the RVB state that is stable and contains deconfined spinons is the chiral spin state. Later, another stable RVB state with deconfined spinons, the Z2 spin liquid, was proposed, which realizes the simplest topological order, Z2 topological order. Both the chiral spin state and the Z2 spin liquid state have long RVB bonds that connect the same sub-lattice. In the chiral spin state, different bond configurations can have complex amplitudes, while in the Z2 spin liquid state they have only real amplitudes. The RVB state on the triangular lattice also realizes the Z2 spin liquid, where different bond configurations have only real amplitudes. The toric code model is yet another realization of the Z2 spin liquid (and Z2 topological order) that explicitly breaks spin rotation symmetry and is exactly solvable.
Experimental signatures and probes
Since there is no single experimental feature which identifies a material as a spin liquid, several experiments have to be conducted to gain information on different properties which characterize a spin liquid.
Magnetic susceptibility
In a high-temperature, classical paramagnet phase, the magnetic susceptibility is given by the Curie–Weiss law, χ = C / (T − Θcw).
Fitting experimental data to this equation determines a phenomenological Curie–Weiss temperature, Θcw. There is a second temperature, Tc, where magnetic order in the material begins to develop, as evidenced by a non-analytic feature in χ(T). The ratio of these is called the frustration parameter, f = |Θcw| / Tc.
In a classic antiferromagnet, the two temperatures should coincide and give f = 1. An ideal quantum spin liquid would not develop magnetic order at any temperature and so would have a diverging frustration parameter, f → ∞. A large value of f is therefore a good indication of a possible spin liquid phase. Some frustrated materials with different lattice structures and their Curie–Weiss temperature are listed in the table below. All of them are proposed spin liquid candidates.
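A small sketch of how the frustration parameter is estimated in practice, assuming synthetic susceptibility data: fit the high-temperature data to the Curie–Weiss form and compare |Θcw| with an assumed ordering temperature Tc.

```python
import numpy as np

# Synthetic "measured" susceptibility following chi = C / (T - Theta_CW).
theta_cw_true, C_true = -300.0, 1.0        # hypothetical antiferromagnetic Theta_CW
T = np.linspace(150.0, 400.0, 50)
chi = C_true / (T - theta_cw_true)

# Fit the Curie-Weiss form via linear regression on 1/chi = T/C - Theta_CW/C.
slope, intercept = np.polyfit(T, 1.0 / chi, 1)
C_fit = 1.0 / slope
theta_fit = -intercept * C_fit

T_c = 2.0                                  # hypothetical ordering temperature in kelvin
f = abs(theta_fit) / T_c                   # frustration parameter
print(f"Theta_CW ≈ {theta_fit:.1f} K, frustration parameter f ≈ {f:.0f}")
```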
Other
Some of the most direct evidence for the absence of magnetic ordering comes from NMR or μSR experiments. If a local magnetic field is present, the nuclear or muon spin is affected, which can be measured. 1H-NMR measurements on κ-(BEDT-TTF)2Cu2(CN)3 have shown no sign of magnetic ordering down to 32 mK, which is four orders of magnitude smaller than the coupling constant J ≈ 250 K between neighboring spins in this compound. Further investigations include:
Specific heat measurements give information about the low-energy density of states, which can be compared to theoretical models.
Thermal transport measurements can determine if excitations are localized or itinerant.
Neutron scattering gives information about the nature of excitations and correlations (e.g. spinons).
Reflectance measurements can uncover spinons, which couple via emergent gauge fields to the electromagnetic field, giving rise to a power-law optical conductivity.
Candidate materials
RVB type
Neutron scattering measurements of cesium chlorocuprate Cs2CuCl4, a spin-1/2 antiferromagnet on a triangular lattice, displayed diffuse scattering. This was attributed to spinons arising from a 2D RVB state. Later theoretical work challenged this picture, arguing that all experimental results were instead consequences of 1D spinons confined to individual chains.
Afterwards, it was observed in an organic Mott insulator (κ-(BEDT-TTF)2Cu2(CN)3) by Kanoda's group in 2003. It may correspond to a gapless spin liquid with spinon Fermi surface (the so-called uniform RVB state). The peculiar phase diagram of this organic quantum spin liquid compound was first thoroughly mapped using muon spin spectroscopy.
Herbertsmithite
Herbertsmithite is one of the most extensively studied QSL candidate materials. It is a mineral with chemical composition ZnCu3(OH)6Cl2 and a rhombohedral crystal structure. Notably, the copper ions within this structure form stacked two-dimensional layers of kagome lattices. Additionally, superexchange over the oxygen bonds creates a strong antiferromagnetic interaction between the copper spins within a single layer, whereas coupling between layers is negligible. Therefore, it is a good realization of the antiferromagnetic spin-1/2 Heisenberg model on the kagome lattice, which is a prototypical theoretical example of a quantum spin liquid.
Synthetic, polycrystalline herbertsmithite powder was first reported in 2005, and initial magnetic susceptibility studies showed no signs of magnetic order down to 2K. In a subsequent study, the absence of magnetic order was verified down to 50 mK, inelastic neutron scattering measurements revealed a broad spectrum of low energy spin excitations, and low-temperature specific heat measurements had power law scaling. This gave compelling evidence for a spin liquid state with gapless spinon excitations. A broad array of additional experiments, including 17O NMR, and neutron spectroscopy of the dynamic magnetic structure factor, reinforced the identification of herbertsmithite as a gapless spin liquid material, although the exact characterization remained unclear as of 2010.
Large (millimeter size) single crystals of herbertsmithite were grown and characterized in 2011. These enabled more precise measurements of possible spin liquid properties. In particular, momentum-resolved inelastic neutron scattering experiments showed a broad continuum of excitations. This was interpreted as evidence for gapless, fractionalized spinons. Follow-up experiments (using 17O NMR and high-resolution, low-energy neutron scattering) refined this picture and determined there was actually a small spinon excitation gap of 0.07–0.09 meV.
Some measurements were suggestive of quantum critical behavior. Magnetic response of this material displays scaling relation in both the bulk ac susceptibility and the low energy dynamic susceptibility, with the low temperature heat capacity strongly depending on magnetic field. This scaling is seen in certain quantum antiferromagnets, heavy-fermion metals, and two-dimensional 3He as a signature of proximity to a quantum critical point.
In 2020, monodisperse single-crystal nanoparticles of herbertsmithite (~10 nm) were synthesized at room temperature, using gas-diffusion electrocrystallization, showing that their spin liquid nature persists at such small dimensions.
It may realize a U(1)-Dirac spin liquid.
Kitaev spin liquids
Another evidence of quantum spin liquid was observed in a 2-dimensional material in August 2015. The researchers of Oak Ridge National Laboratory, collaborating with physicists from the University of Cambridge, and the Max Planck Institute for the Physics of Complex Systems in Dresden, Germany, measured the first signatures of these fractional particles, known as Majorana fermions, in a two-dimensional material with a structure similar to graphene. Their experimental results successfully matched with one of the main theoretical models for a quantum spin liquid, known as a Kitaev honeycomb model.
Strongly correlated quantum spin liquid
The strongly correlated quantum spin liquid (SCQSL) is a specific realization of a possible quantum spin liquid (QSL) representing a new type of strongly correlated electrical insulator (SCI) that possesses properties of heavy fermion metals with one exception: it resists the flow of electric charge. At low temperatures T the specific heat of this type of insulator is proportional to Tn, with n less or equal 1 rather than n=3, as it should be in the case of a conventional insulator whose heat capacity is proportional to T3. When a magnetic field B is applied to SCI the specific heat depends strongly on B, contrary to conventional insulators. There are a few candidates of SCI; the most promising among them is Herbertsmithite, a mineral with chemical structure ZnCu3(OH)6Cl2.
Kagome type
Ca10Cr7O28 is a frustrated kagome bilayer magnet, which does not develop long-range order even below 1 K, and has a diffuse spectrum of gapless excitations.
Toric code type
In December 2021, the first direct measurement of a quantum spin liquid of the toric code type was reported, it was achieved by two teams: one exploring ground state and anyonic excitations on a quantum processor and the other implementing a theoretical blueprint of atoms on a ruby lattice held with optical tweezers on a quantum simulator.
Specific properties: topological fermion condensation quantum phase transition
The experimental facts collected on heavy fermion (HF) metals and two-dimensional Helium-3 demonstrate that the quasiparticle effective mass M* is very large, or even diverges. The topological fermion condensation quantum phase transition (FCQPT) preserves quasiparticles and forms a flat energy band at the Fermi level. The emergence of FCQPT is directly related to the unlimited growth of the effective mass M*. Near FCQPT, M* starts to depend on temperature T, number density x, magnetic field B and other external parameters such as pressure P. In contrast to the Landau paradigm, which assumes that the effective mass is approximately constant, in the FCQPT theory the effective mass of the new quasiparticles depends strongly on T, x, B, etc. Therefore, to explain the numerous experimental facts, an extended quasiparticle paradigm based on FCQPT has to be introduced. The main point is that well-defined quasiparticles determine the thermodynamic, relaxation, scaling and transport properties of strongly correlated Fermi systems, and M* becomes a function of T, x, B, P, etc.
The data collected for very different strongly correlated Fermi systems demonstrate universal scaling behavior; in other words distinct materials with strongly correlated fermions unexpectedly turn out to be uniform, thus forming a new state of matter that consists of HF metals, quasicrystals, quantum spin liquid, two-dimensional Helium-3, and compounds exhibiting high-temperature superconductivity.
Applications
Materials supporting quantum spin liquid states may have applications in data storage and memory. In particular, it is possible to realize topological quantum computation by means of spin-liquid states. Developments in quantum spin liquids may also help in the understanding of high temperature superconductivity.
References
Correlated electrons
Liquids
Phases of matter
Condensed matter physics
Quasiparticles | Quantum spin liquid | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,331 | [
"Matter",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Correlated electrons",
"Quasiparticles",
"Subatomic particles",
"Liquids"
] |
31,350,335 | https://en.wikipedia.org/wiki/Pulsed%20electron%20paramagnetic%20resonance | Pulsed electron paramagnetic resonance (EPR) is an electron paramagnetic resonance technique that involves the alignment of the net magnetization vector of the electron spins in a constant magnetic field. This alignment is perturbed by applying a short oscillating field, usually a microwave pulse. One can then measure the emitted microwave signal which is created by the sample magnetization. Fourier transformation of the microwave signal yields an EPR spectrum in the frequency domain. With a vast variety of pulse sequences it is possible to gain extensive knowledge on structural and dynamical properties of paramagnetic compounds. Pulsed EPR techniques such as electron spin echo envelope modulation (ESEEM) or pulsed electron nuclear double resonance (ENDOR) can reveal the interactions of the electron spin with its surrounding nuclear spins.
Scope
Electron paramagnetic resonance (EPR) or electron spin resonance (ESR) is a spectroscopic technique widely used in biology, chemistry, medicine and physics to study systems with one or more unpaired electrons. Because of the specific relation between the magnetic parameters, electronic wavefunction and the configuration of the surrounding non-zero spin nuclei, EPR and ENDOR provide information on the structure, dynamics and the spatial distribution of the paramagnetic species. However, these techniques are limited in spectral and time resolution when used with traditional continuous wave methods. This resolution can be improved in pulsed EPR by investigating interactions separately from each other via pulse sequences.
Historical overview
R. J. Blume reported the first electron spin echo in 1958, which came from a solution of sodium in ammonia at its boiling point, −33.8 °C. A magnetic field of 0.62 mT was used, requiring a frequency of 17.4 MHz. The first microwave electron spin echoes were reported in the same year by Gordon and Bowers using 23 GHz excitation of dopants in silicon.
Much of the pioneering early pulsed EPR was conducted in the group of W. B. Mims at Bell Labs during the 1960s. In the first decade only a small number of groups worked in the field, because of the expensive instrumentation, the lack of suitable microwave components and slow digital electronics. The first observation of electron spin echo envelope modulation (ESEEM) was made in 1961 by Mims, Nassau and McGee. Pulsed electron nuclear double resonance (ENDOR) was invented in 1965 by Mims. In this experiment, pulsed NMR transitions are detected with pulsed EPR. ESEEM and pulsed ENDOR continue to be important for studying nuclear spins coupled to electron spins.
In the 1980s, the advent of the first commercial pulsed EPR and ENDOR spectrometers in the X band frequency range led to rapid growth of the field. In the 1990s, in parallel with the rise of high-field EPR, pulsed EPR and ENDOR became a fast-advancing magnetic resonance spectroscopy tool, and the first commercial pulsed EPR and ENDOR spectrometer at W band frequencies appeared on the market.
Principles
The basic principle of pulsed EPR and NMR is similar. Differences can be found in the relative size of the magnetic interactions and in the relaxation rates, which are orders of magnitude larger (faster) in EPR than in NMR. A full description of the theory is given within the quantum mechanical formalism, but since the magnetization is measured as a bulk property, a more intuitive picture can be obtained with a classical description. For a better understanding of the concept of pulsed EPR, consider the effects on the magnetization vector in the laboratory frame as well as in the rotating frame. As the animation below shows, in the laboratory frame the static magnetic field B0 is assumed to be parallel to the z-axis and the microwave field B1 parallel to the x-axis. When an electron spin is placed in a magnetic field it experiences a torque which causes its magnetic moment to precess around the magnetic field. The precession frequency is known as the Larmor frequency ωL.
ωL = γB0, where γ is the gyromagnetic ratio and B0 the magnetic field. The electron spins are characterized by two quantum mechanical states, one parallel and one antiparallel to B0. Because of the lower energy of the parallel state, more electron spins can be found in this state according to the Boltzmann distribution. This imbalanced population results in a net magnetization, which is the vector sum of all magnetic moments in the sample, parallel to the z-axis and the magnetic field. To better comprehend the effects of the microwave field B1 it is easier to move to the rotating frame.
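A minimal numerical sketch of the Larmor relation above: for the free-electron gyromagnetic ratio, a static field of 0.35 T (an assumed, illustrative value) gives a precession frequency near 9.8 GHz, i.e. the X band commonly used in EPR.

```python
# Minimal sketch: electron Larmor frequency nu_L = (gamma_e / 2*pi) * B0.
# The field value B0 = 0.35 T below is an illustrative assumption, not from the article.

GAMMA_E_OVER_2PI = 28.024951e9  # electron gyromagnetic ratio / 2*pi, in Hz per tesla

def larmor_frequency(b0_tesla: float) -> float:
    """Return the electron Larmor frequency in Hz for a static field B0 (tesla)."""
    return GAMMA_E_OVER_2PI * b0_tesla

if __name__ == "__main__":
    b0 = 0.35  # tesla (assumed example value)
    print(f"nu_L = {larmor_frequency(b0) / 1e9:.2f} GHz")  # ~9.8 GHz, X band
```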
EPR experiments usually use a microwave resonator designed to create a linearly polarized microwave field B1, perpendicular to the much stronger applied magnetic field B0. The rotating frame is fixed to the rotating B1 components. First we assume that we are on resonance with the precessing magnetization vector M0.
Therefore, the component of B1 will appear stationary. In this frame the precessing magnetization components also appear to be stationary, which leads to the disappearance of B0, and we need only consider B1 and M0. The M0 vector is under the influence of the stationary field B1, leading to another precession of M0, this time around B1 at the frequency ω1 = γB1.
This angular frequency ω1 is also called the Rabi frequency. Assuming B1 to be parallel to the x-axis, the magnetization vector will rotate around the +x-axis in the zy-plane as long as the microwaves are applied. The angle by which M0 is rotated is called the tip angle α and is given by α = ω1 tp = γB1 tp.
Here tp is the duration for which B1 is applied, also called the pulse length. The pulses are labeled by the rotation of M0 which they cause and the direction from which they come, since the microwaves can be phase-shifted from the x-axis on to the y-axis. For example, a +y π/2 pulse means that a B1 field, which has been 90 degrees phase-shifted out of the +x into the +y direction, has rotated M0 by a tip angle of π/2, hence the magnetization would end up along the –x-axis. That means the end position of the magnetization vector M0 depends on the length, the magnitude and the direction of the microwave pulse B1. In order to understand how the sample emits microwaves after the intense microwave pulse, we need to go back to the laboratory frame. In the rotating frame and on resonance the magnetization appeared to be stationary along the x- or y-axis after the pulse. In the laboratory frame it becomes a rotating magnetization in the x-y plane at the Larmor frequency. This rotation generates a signal which is maximized if the magnetization vector is exactly in the xy-plane. This microwave signal generated by the rotating magnetization vector is called free induction decay (FID).
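As a rough sketch of the tip-angle relation, the pulse length needed for a π/2 rotation follows directly from α = γB1 tp; the B1 amplitude of 0.5 mT used below is an assumed example value, not taken from the article.

```python
import math

# Minimal sketch of the tip-angle relation alpha = gamma_e * B1 * t_p.
# B1 = 0.5 mT is an assumed illustrative microwave field amplitude.

GAMMA_E = 2 * math.pi * 28.024951e9  # electron gyromagnetic ratio in rad s^-1 T^-1

def pulse_length(tip_angle_rad: float, b1_tesla: float) -> float:
    """Pulse duration t_p (seconds) needed to rotate M0 by tip_angle_rad."""
    return tip_angle_rad / (GAMMA_E * b1_tesla)

if __name__ == "__main__":
    b1 = 0.5e-3  # tesla (assumed)
    tp_90 = pulse_length(math.pi / 2, b1)
    print(f"pi/2 pulse length: {tp_90 * 1e9:.1f} ns")  # roughly 18 ns
```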
Another assumption we have made is the exact resonance condition, in which the Larmor frequency is equal to the microwave frequency. In reality EPR spectra have many different frequencies and not all of them can be exactly on resonance; therefore we need to take off-resonance effects into account. The off-resonance effects lead to three main consequences. The first consequence can be better understood in the rotating frame. A π/2 pulse leaves magnetization in the xy-plane, but since the microwave field (and therefore the rotating frame) does not have the same frequency as the precessing magnetization vector, the magnetization vector rotates in the xy-plane, either faster or slower than the microwave magnetic field B1. The rotation rate is governed by the frequency difference Δω between the Larmor frequency and the microwave frequency.
If Δω is 0 then the microwave field rotates as fast as the magnetization vector and both appear to be stationary with respect to each other. If Δω>0 then the magnetization rotates faster than the microwave field component, in a counter-clockwise motion, and if Δω<0 then the magnetization is slower and rotates clockwise. This means that the individual frequency components of the EPR spectrum will appear as magnetization components rotating in the xy-plane with the rotation frequency Δω. The second consequence appears in the laboratory frame. Here B1 tips the magnetization differently out of the z-axis, since B0 does not disappear when not on resonance due to the precession of the magnetization vector at Δω. That means that the magnetization is now tipped by an effective magnetic field Beff, which originates from the vector sum of B1 and the residual (off-resonance) field along B0. The magnetization is then tipped around Beff at a faster effective rate ωeff = √(ω1² + Δω²).
This leads directly to the third consequence: the magnetization cannot be efficiently tipped into the xy-plane because Beff does not lie in the xy-plane, as B1 does. The motion of the magnetization now defines a cone. That means that as Δω becomes larger, the magnetization is tipped less effectively into the xy-plane, and the FID signal decreases. In broad EPR spectra, where Δω > ω1, it is not possible to tip all the magnetization into the xy-plane to generate a strong FID signal. This is why it is important to maximize ω1 or minimize the π/2 pulse length for broad EPR signals.
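A small sketch of the off-resonance effect described above, assuming the standard relations ωeff = √(ω1² + Δω²) and sin θ = ω1/ωeff for the tilt of Beff away from the z-axis; the numerical Rabi frequency and offsets are illustrative assumptions, not from the article.

```python
import math

# Minimal sketch of the off-resonance effective field. All numbers are assumed.

def effective_rotation(omega_1: float, delta_omega: float):
    """Return (omega_eff, maximum transverse fraction) for a given Rabi frequency and offset."""
    omega_eff = math.hypot(omega_1, delta_omega)   # sqrt(omega_1^2 + delta_omega^2)
    return omega_eff, omega_1 / omega_eff          # sin(theta): how far M can be tipped

if __name__ == "__main__":
    omega_1 = 2 * math.pi * 14e6                   # rad/s, assumed Rabi frequency (~14 MHz)
    for offset_mhz in (0, 7, 14, 28, 56):          # resonance offsets in MHz (assumed)
        d = 2 * math.pi * offset_mhz * 1e6
        w_eff, frac = effective_rotation(omega_1, d)
        print(f"offset {offset_mhz:3d} MHz -> omega_eff/2pi = "
              f"{w_eff / (2 * math.pi * 1e6):5.1f} MHz, max M_xy fraction = {frac:.2f}")
```

The printout illustrates the statement in the text: once the offset exceeds ω1, the achievable transverse magnetization (and hence the FID) drops markedly.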
So far the magnetization was tipped into the xy-plane and it remained there with the same magnitude. However, in reality the electron spins interact with their surroundings and the magnetization in the xy-plane decays and eventually returns to alignment with the z-axis. This relaxation process is described by the spin-lattice relaxation time T1, which is the characteristic time needed by the magnetization to return to the z-axis, and by the spin-spin relaxation time T2, which describes the vanishing time of the magnetization in the xy-plane. Spin-lattice relaxation results from the tendency of the system to return to thermal equilibrium after it has been perturbed by the B1 pulse; return of the magnetization parallel to B0 is achieved through interactions with the surroundings. The corresponding relaxation time needs to be considered when extracting a signal from noise, where the experiment needs to be repeated several times, as quickly as possible. In order to repeat the experiment, one needs to wait until the magnetization along the z-axis has recovered, because if there is no magnetization in the z direction, there is nothing to tip into the xy-plane to create a significant signal.
The spin-spin relaxation time, also called the transverse relaxation time, is related to homogeneous and inhomogeneous broadening. Inhomogeneous broadening results from the fact that different spins experience local magnetic field inhomogeneities (different surroundings), creating a large number of spin packets characterized by a distribution of Δω. As the net magnetization vector precesses, some spin packets slow down due to lower fields and others speed up due to higher fields, leading to a fanning out of the magnetization vector that results in the decay of the EPR signal. Homogeneous broadening also contributes to the decay of the transverse magnetization: in this process all the spins in one spin packet experience the same magnetic field and interact with each other, which can lead to mutual and random spin flip-flops. These fluctuations contribute to a faster fanning out of the magnetization vector.
All the information about the frequency spectrum is encoded in the motion of the transverse magnetization. The frequency spectrum is reconstructed using the time behavior of the transverse magnetization, which is made up of y- and x-axis components. It is convenient to treat these two as the real and imaginary components of a complex quantity and to use Fourier theory to transform the measured time-domain signal into the frequency-domain representation. This is possible because both the absorption (real) and the dispersion (imaginary) signals are detected.
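A minimal sketch of this quadrature-detection idea: the x- and y-components of a synthetic FID are combined into one complex signal and Fourier transformed. The dwell time, decay constant and offset frequencies are assumed illustrative values, not from the article.

```python
import numpy as np

# Minimal sketch: complex time-domain FID -> frequency-domain spectrum via FFT.
# All numerical parameters below are assumptions chosen for illustration.

dt = 1e-9                      # 1 ns dwell time (assumed)
t = np.arange(4096) * dt
offsets = [5e6, -12e6, 20e6]   # Hz, assumed frequency components of the spectrum
t2 = 400e-9                    # assumed transverse decay constant (seconds)

# x-component as real part, y-component as imaginary part (quadrature detection)
fid = sum(np.exp(2j * np.pi * f * t) for f in offsets) * np.exp(-t / t2)

spectrum = np.fft.fftshift(np.fft.fft(fid))
freqs = np.fft.fftshift(np.fft.fftfreq(t.size, d=dt))

# Peaks of the spectrum appear at the assumed offset frequencies.
for f in offsets:
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f/1e6:+.0f} MHz -> |spectrum| = {abs(spectrum[idx]):.1f}")
```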
The FID signal decays away, and for very broad EPR spectra this decay is rather fast due to the inhomogeneous broadening. To obtain more information one can recover the disappeared signal with another microwave pulse to produce a Hahn echo. After applying a π/2 pulse (90°), the magnetization vector is tipped into the xy-plane, producing an FID signal. Different frequencies in the EPR spectrum (inhomogeneous broadening) cause this signal to "fan out", meaning that the slower spin-packets trail behind the faster ones. After a certain time t, a π pulse (180°) is applied to the system, inverting the magnetization, and the fast spin-packets, now behind, catch up with the slow spin-packets. A complete refocusing of the signal then occurs at time 2t. An echo caused by the second microwave pulse in this way removes all inhomogeneous broadening effects. After all of the spin-packets bunch up, they dephase again just like an FID. In other words, a spin echo is a reversed FID followed by a normal FID, which can be Fourier transformed to obtain the EPR spectrum. The longer the time between the pulses becomes, the smaller the echo will be due to spin relaxation. When this relaxation leads to an exponential decay in the echo height, the decay constant is the phase memory time TM, which can have many contributions such as transverse relaxation, spectral diffusion, spin diffusion and instantaneous diffusion. Changing the times between the pulses leads to a direct measurement of TM, as shown in the spin echo decay animation below.
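As a rough sketch of how TM is extracted from such a measurement, the snippet below fits a simple exponential to synthetic echo amplitudes recorded at increasing inter-pulse delays; the data, the decay model and the "true" TM value are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch: extract the phase memory time T_M from echo heights measured at
# increasing delays, assuming a simple exponential decay E(2*tau) = E0 * exp(-2*tau / T_M).
# The data below are synthetic and purely illustrative.

def echo_decay(two_tau, e0, t_m):
    return e0 * np.exp(-two_tau / t_m)

rng = np.random.default_rng(0)
two_tau = np.linspace(0.2e-6, 8e-6, 30)            # 2*tau values in seconds (assumed)
true_tm = 2.5e-6                                    # assumed "true" T_M for the synthetic data
amps = echo_decay(two_tau, 1.0, true_tm) + 0.02 * rng.standard_normal(two_tau.size)

popt, _ = curve_fit(echo_decay, two_tau, amps, p0=[1.0, 1e-6])
print(f"fitted T_M = {popt[1] * 1e6:.2f} us")       # close to the assumed 2.5 us
```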
Applications
ESEEM and pulsed ENDOR are widely used echo experiments, in which the interaction of electron spins with the nuclei in their environment can be studied and controlled.
A popular pulsed EPR experiment currently is double electron-electron resonance (DEER), which is also known as pulsed electron-electron double resonance (PELDOR). In this experiment, two frequencies control two spins in order to probe their coupling. The distance between the spins can then be inferred from their coupling strength. This information is used to elucidate structures of large biomolecules. PELDOR spectroscopy is a versatile tool for structural investigations of proteins, even in a cellular environment.
See also
Nuclear magnetic resonance
Electron nuclear double resonance
References
Electron paramagnetic resonance
Quantum mechanics | Pulsed electron paramagnetic resonance | [
"Physics",
"Chemistry"
] | 2,916 | [
"Spectrum (physical sciences)",
"Theoretical physics",
"Quantum mechanics",
"Spectroscopy",
"Electron paramagnetic resonance"
] |
534,003 | https://en.wikipedia.org/wiki/Duality%20%28electricity%20and%20magnetism%29 | In physics, the electromagnetic dual concept is based on the idea that, in the static case, electromagnetism has two separate facets: electric fields and magnetic fields. Expressions in one of these will have a directly analogous, or dual, expression in the other. The reason for this can ultimately be traced to special relativity, where applying the Lorentz transformation to the electric field will transform it into a magnetic field. These are special cases of duality in mathematics.
The electric field (E) is the dual of the magnetic field (H).
The electric displacement field (D) is the dual of the magnetic flux density (B).
Faraday's law of induction is the dual of Ampère's circuital law.
Gauss's law for electric field is the dual of Gauss's law for magnetism.
The electric potential is the dual of the magnetic potential.
Permittivity is the dual of permeability.
Electrostriction is the dual of magnetostriction.
Piezoelectricity is the dual of piezomagnetism.
Ferroelectricity is the dual of ferromagnetism.
An electrostatic motor is the dual of a magnetic motor;
Electrets are the dual of permanent magnets;
The Faraday effect is the dual of the Kerr effect;
The Aharonov–Casher effect is the dual to the Aharonov–Bohm effect;
The hypothetical magnetic monopole is the dual of electric charge.
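A compact way to summarize this pairing is the duality substitution for the source-free Maxwell equations. The Gaussian-units form below is a standard textbook presentation and is offered here only as a sketch; it is not taken from the list above.

```latex
% Source-free Maxwell equations are invariant under the duality substitution
% (Gaussian units):
\[
  \mathbf{E} \;\longrightarrow\; \mathbf{B},
  \qquad
  \mathbf{B} \;\longrightarrow\; -\mathbf{E}.
\]
% Under this map, Faraday's law
%   \nabla \times \mathbf{E} = -\tfrac{1}{c}\,\partial_t \mathbf{B}
% becomes the source-free Ampere--Maxwell law
%   \nabla \times \mathbf{B} = \tfrac{1}{c}\,\partial_t \mathbf{E},
% and vice versa.
```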
See also
Maxwell's equations
Duality (electrical circuits)
List of dualities
Electromagnetism
Duality theories | Duality (electricity and magnetism) | [
"Physics",
"Mathematics"
] | 324 | [
"Electromagnetism",
"Physical phenomena",
"Mathematical structures",
"Category theory",
"Duality theories",
"Fundamental interactions",
"Geometry"
] |
535,349 | https://en.wikipedia.org/wiki/Chebotarev%27s%20density%20theorem | Chebotarev's density theorem in algebraic number theory describes statistically the splitting of primes in a given Galois extension K of the field of rational numbers. Generally speaking, a prime integer will factor into several ideal primes in the ring of algebraic integers of K. There are only finitely many patterns of splitting that may occur. Although the full description of the splitting of every prime p in a general Galois extension is a major unsolved problem, the Chebotarev density theorem says that the frequency of the occurrence of a given pattern, for all primes p less than a large integer N, tends to a certain limit as N goes to infinity. It was proved by Nikolai Chebotaryov in his thesis in 1922, published in .
A special case that is easier to state says that if K is an algebraic number field which is a Galois extension of of degree n, then the prime numbers that completely split in K have density
1/n
among all primes. More generally, splitting behavior can be specified by assigning to (almost) every prime number an invariant, its Frobenius element, which is a representative of a well-defined conjugacy class in the Galois group
Gal(K/Q).
Then the theorem says that the asymptotic distribution of these invariants is uniform over the group, so that a conjugacy class with k elements occurs with frequency asymptotic to
k/n.
History and motivation
When Carl Friedrich Gauss first introduced the notion of complex integers Z[i], he observed that the ordinary prime numbers may factor further in this new set of integers. In fact, if a prime p is congruent to 1 mod 4, then it factors into a product of two distinct prime Gaussian integers, or "splits completely"; if p is congruent to 3 mod 4, then it remains prime, or is "inert"; and if p is 2 then it becomes a product of the square of the prime (1+i) and the invertible Gaussian integer −i; we say that 2 "ramifies". For instance,
5 = (2 + i)(2 − i) splits completely;
3 is inert;
2 = −i(1 + i)² ramifies.
From this description, it appears that as one considers larger and larger primes, the frequency of a prime splitting completely approaches 1/2, and likewise for the primes that remain primes in Z[i]. Dirichlet's theorem on arithmetic progressions demonstrates that this is indeed the case. Even though the prime numbers themselves appear rather erratically, splitting of the primes in the extension Q(i)/Q follows a simple statistical law.
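This statistical law is easy to probe numerically. The sketch below, a minimal experiment and not part of the article, tallies odd primes by residue class mod 4 (those congruent to 1 split in Z[i], those congruent to 3 stay inert); the bound 200000 is an arbitrary choice.

```python
from sympy import primerange

# Minimal sketch: empirical splitting densities of primes in Z[i].
# Bound 200_000 is an arbitrary choice for the experiment.

split = inert = 0
for p in primerange(3, 200_000):
    if p % 4 == 1:
        split += 1   # p = 1 mod 4 splits completely in Z[i]
    else:
        inert += 1   # p = 3 mod 4 remains inert

total = split + inert
print(f"split: {split/total:.3f}, inert: {inert/total:.3f}")  # both close to 0.5
```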
Similar statistical laws also hold for splitting of primes in the cyclotomic extensions, obtained from the field of rational numbers by adjoining a primitive root of unity of a given order. For example, the ordinary integer primes group into four classes, each with probability 1/4, according to their pattern of splitting in the ring of integers corresponding to the 8th roots of unity.
In this case, the field extension has degree 4 and is abelian, with the Galois group isomorphic to the Klein four-group. It turned out that the Galois group of the extension plays a key role in the pattern of splitting of primes. Georg Frobenius established the framework for investigating this pattern and proved a special case of the theorem. The general statement was proved by Nikolai Grigoryevich Chebotaryov in 1922.
Relation with Dirichlet's theorem
The Chebotarev density theorem may be viewed as a generalisation of Dirichlet's theorem on arithmetic progressions. A quantitative form of Dirichlet's theorem states that if N≥2 is an integer and a is coprime to N, then the proportion of the primes p congruent to a mod N is asymptotic to 1/n, where n=φ(N) is the Euler totient function. This is a special case of the Chebotarev density theorem for the Nth cyclotomic field K. Indeed, the Galois group of K/Q is abelian and can be canonically identified with the group of invertible residue classes mod N. The splitting invariant of a prime p not dividing N is simply its residue class because the number of distinct primes into which p splits is φ(N)/m, where m is multiplicative order of p modulo N; hence by the Chebotarev density theorem, primes are asymptotically uniformly distributed among different residue classes coprime to N.
Formulation
An earlier result of Frobenius in this area is given in the survey article. Suppose K is a Galois extension of the rational number field Q, and P(t) a monic integer polynomial such that K is a splitting field of P. It makes sense to factorise P modulo a prime number p. Its 'splitting type' is the list of degrees of irreducible factors of P mod p, i.e. the way in which P factorizes over the prime field Fp. If n is the degree of P, then the splitting type is a partition Π of n. Considering also the Galois group G of K over Q, each g in G is a permutation of the roots of P in K; in other words, by choosing an ordering of a root α of P and its algebraic conjugates, G is faithfully represented as a subgroup of the symmetric group Sn. We can write g by means of its cycle representation, which gives a 'cycle type' c(g), again a partition of n.
The theorem of Frobenius states that for any given choice of Π, the primes p for which the splitting type of P mod p is Π have a natural density δ, with δ equal to the proportion of g in G that have cycle type Π.
The statement of the more general Chebotarev theorem is in terms of the Frobenius element of a prime (ideal), which is in fact an associated conjugacy class C of elements of the Galois group G. If we fix C then the theorem says that asymptotically a proportion |C|/|G| of primes have their associated Frobenius element in C. When G is abelian the classes each have size 1. For the case of a non-abelian group of order 6 the classes have sizes 1, 2 and 3, and there are correspondingly (for example) 50% of primes p that have an order 2 element as their Frobenius. These primes have residue degree 2, so they split into exactly three prime ideals in a degree 6 extension of Q with that group as Galois group.
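The Frobenius statement can be checked empirically. The sketch below uses the polynomial t³ − t − 1 as an assumed standard example (its Galois group is S3 and only the prime 23 ramifies; neither the example nor the code comes from the article): the cycle-type proportions in S3 predict splitting-type densities 1/6 for three linear factors, 1/2 for a linear times a quadratic, and 1/3 for an irreducible cubic.

```python
from collections import Counter
from sympy import primerange, factor_list, degree
from sympy.abc import x

# Minimal sketch of Frobenius's theorem for P(t) = t^3 - t - 1 (assumed example,
# Galois group S3, discriminant -23). Expected densities: (1,1,1) ~ 1/6,
# (1,2) ~ 1/2, (3,) ~ 1/3.

P = x**3 - x - 1
counts = Counter()

for p in primerange(5, 20_000):
    if p == 23:                           # 23 divides the discriminant; skip the ramified prime
        continue
    _, factors = factor_list(P, x, modulus=p)
    pattern = tuple(sorted(degree(f, x) for f, mult in factors for _ in range(mult)))
    counts[pattern] += 1

total = sum(counts.values())
for pattern, c in sorted(counts.items()):
    print(pattern, round(c / total, 3))   # roughly 0.167, 0.5, 0.333
```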
Statement
Let L be a finite Galois extension of a number field K with Galois group G. Let X be a subset of G that is stable under conjugation. The set of primes v of K that are unramified in L and whose associated Frobenius conjugacy class Fv is contained in X has density |X|/|G|.
The statement is valid when the density refers to either the natural density or the analytic density of the set of primes.
Effective Version
The Generalized Riemann hypothesis implies an effective version of the Chebotarev density theorem: if L/K is a finite Galois extension with Galois group G, and C a union of conjugacy classes of G, the number of unramified primes of K of norm below x with Frobenius conjugacy class in C is
(|C|/|G|) (li(x) + O(√x (n log x + log|Δ|))),
where the constant implied in the big-O notation is absolute, n is the degree of L over Q, and Δ is its discriminant.
The effective form of Chebotarev's density theorem becomes much weaker without GRH. Take L to be a finite Galois extension of Q with Galois group G and degree d. Take ρ to be a nontrivial irreducible representation of G of degree n, and take f(ρ) to be the Artin conductor of this representation. Suppose that, for ρ′ a subrepresentation of ρ⊗ρ or ρ⊗ρ̄, the Artin L-function L(s, ρ′) is entire; that is, the Artin conjecture is satisfied for all such ρ′. Take χ to be the character associated to ρ. Then there is an absolute positive constant c such that, for all sufficiently large x,
where δ(χ) is 1 if χ is trivial and is otherwise 0, and where β is an exceptional real zero of L(s, χ); if there is no such zero, the β term can be ignored. The implicit constant of this expression is absolute.
Infinite extensions
The statement of the Chebotarev density theorem can be generalized to the case of an infinite Galois extension L / K that is unramified outside a finite set S of primes of K (i.e. if there is a finite set S of primes of K such that any prime of K not in S is unramified in the extension L / K). In this case, the Galois group G of L / K is a profinite group equipped with the Krull topology. Since G is compact in this topology, there is a unique Haar measure μ on G. For every prime v of K not in S there is an associated Frobenius conjugacy class Fv. The Chebotarev density theorem in this situation can be stated as follows:
Let X be a subset of G that is stable under conjugation and whose boundary has Haar measure zero. Then the set of primes v of K not in S such that Fv ⊆ X has density μ(X).
This reduces to the finite case when L / K is finite (the Haar measure is then just the counting measure).
A consequence of this version of the theorem is that the Frobenius elements of the unramified primes of L are dense in G.
Important consequences
The Chebotarev density theorem reduces the problem of classifying Galois extensions of a number field to that of describing the splitting of primes in extensions. Specifically, it implies that as a Galois extension of K, L is uniquely determined by the set of primes of K that split completely in it. A related corollary is that if almost all prime ideals of K split completely in L, then in fact L = K.
See also
Splitting of prime ideals in Galois extensions
Grothendieck–Katz p-curvature conjecture
Notes
References
Theorems in algebraic number theory
Analytic number theory | Chebotarev's density theorem | [
"Mathematics"
] | 2,122 | [
"Theorems in algebraic number theory",
"Analytic number theory",
"Theorems in number theory",
"Number theory"
] |
536,059 | https://en.wikipedia.org/wiki/Qanat | A qanāt () or kārīz () is a water supply system that was developed in ancient Iran for the purpose of transporting usable water to the surface from an aquifer or a well through an underground aqueduct. Originating approximately 3,000 years ago, its function is essentially the same across the Middle East and North Africa, but it is known by a variety of regional names beyond today's Iran, including: kārēz in Afghanistan and Pakistan; foggāra in Algeria; khettāra in Morocco; falaj in Oman and the United Arab Emirates; and ʿuyūn in Saudi Arabia. In addition to those in Iran, the largest extant and functional qanats are located in Afghanistan, Algeria, China (i.e., the Turpan water system), Oman, and Pakistan.
Proving crucial to water supply in areas with hot and dry climates, a qanat enables water to be transported over long distances by largely eliminating the risk of much of it evaporating on the journey. The system also has the advantage of being fairly resistant to natural disasters, such as floods and earthquakes, as well as to man-made disasters, such as wartime destruction and water supply terrorism. Furthermore, it is almost insensitive to varying levels of precipitation, delivering a flow with only gradual variations from wet to dry years.
The typical design of a qanat is that of a series of well-like vertical shafts, which are all connected by a gently sloping tunnel. This taps into groundwater and delivers it to the surface via gravity, therefore eliminating the need for pumping. The vertical shafts along the underground channel are for maintenance purposes, and water is typically used only once it emerges from the daylight point.
To date, the qanat system still ensures a reliable supply of water for consumption and irrigation across human settlements in hot, arid, and semi-arid climates, but its value to a population is directly related to the quality, volume, and regularity of the groundwater in the inhabited region. Since their adoption outside of the Iranian mainland in antiquity, qanats have come to be heavily relied upon by much of the Middle Eastern and North African populations for sustenance. Likewise, many of the continuously inhabited settlements in these regions are established in areas where conditions have historically been favourable for creating and sustaining a qanat system.
Names
Common variants of qanat in English include kanat, khanat, kunut, kona, konait, ghanat, ghundat.
() is an Arabic word that means "channel". In Persian, the words for "qanat" are (or ; ) and is derived from earlier word (). The word () is also used in Persian. Other names for qanat include (); (Balochi); (Azerbaijan); (Morocco); , or (Spain); () (United Arab Emirates and Oman); (North Africa). Alternative terms for qanats in Asia and North Africa are kakuriz, chin-avulz, and mayun.
Origins
According to most sources, qanat technology was developed by the ancient Iranians sometime in the early 1st millennium BCE and slowly spread westward and eastward from there. Other sources suggest a Southeast Arabian origin. Analogous systems appear to have been developed independently in China and in South America (specifically, southern Peru).
A cotton species, Gossypium arboreum, is indigenous to South Asia and has been cultivated on the Indian subcontinent for a long time. Cotton appears in the Inquiry into Plants by Theophrastus and is mentioned in the Laws of Manu. As transregional trade networks expanded and intensified, cotton spread from its homeland to India and into the Middle East. One theory is that the qanat was developed to irrigate cotton fields, first in what is now Iran, where it doubled the amount of available water for irrigation and urban use. Because of this, Persia enjoyed larger surpluses of agricultural products, thus increasing urbanization and social stratification. The qanat technology subsequently spread from Persia westward and eastward.
In the arid coastal desert of Peru, a technology of water supply similar to that of the qanats, called puquios, was developed. Most archaeologists believe that the puquios are indigenous and date to about 500 CE, but a few believe they are of Spanish origin, brought to the Americas in the 16th century. Puquios were still in use in the Nazca region in the 21st century.
Features
Qanats are constructed as a series of well-like vertical shafts, connected by a gently sloping tunnel which carries a water canal. Qanats efficiently deliver large amounts of subterranean water to the surface without need for pumping. The water drains by gravity, typically from an upland aquifer, with the destination lower than the source. Qanats allow water to be transported over long distances in hot dry climates without much water loss to evaporation.
It is very common for a qanat to start below the foothills of mountains, where the water table is closest to the surface. From this source, the qanat tunnel slopes gently downward, slowly converging with the steeper slope of the land surface above, and the water finally flows out above ground where the two levels meet. To connect a populated or agricultural area with an aquifer, qanats must often extend for long distances.
Qanats are sometimes split into an underground distribution network of smaller canals called kariz. Like qanats, these smaller canals are below ground to avoid contamination and evaporation. In some cases water from a qanat is stored in a reservoir, typically with night flow stored for daytime use. An ab anbar is an example of a traditional Persian qanat-fed reservoir for drinking water.
The qanat system has the advantage of being resistant to natural disasters such as floods, and to deliberate destruction in war. Furthermore, it is almost insensitive to the levels of precipitation, delivering a flow with only gradual variations from wet to dry years. From a sustainability perspective, qanats are powered only by gravity and thus have low operation and maintenance costs. Qanats transfer fresh water from the mountain plateau to the lower-lying plains with saltier soil. This helps to control soil salinity and prevent desertification.
The qanat should not be confused with the spring-flow tunnel typical to the mountainous area around Jerusalem. Although both are excavated tunnels designed to extract water by gravity flow, there are crucial differences. Firstly, the origin of the qanat was a well that was turned into an artificial spring. In contrast, the origin of the spring-flow tunnel was the development of a natural spring to renew or increase flow following a recession of the water table. Secondly, the shafts essential for the construction of qanats are not essential to spring-flow tunnels.
Impact on settlement patterns
A typical town or city in Iran, and elsewhere where the qanat is used, has more than one qanat. Fields and gardens are located both over the qanats a short distance before they emerge from the ground and below the surface outlet. Water from the qanats define both the social regions in the city and the layout of the city.
The water is freshest, cleanest, and coolest in the upper reaches, and more prosperous people live at the outlet or immediately upstream of the outlet. When the qanat is still below ground, the water is drawn to the surface via wells or animal driven Persian wells. Private subterranean reservoirs could supply houses and buildings for domestic use and garden irrigation as well. Air flow from the qanat is used to cool an underground summer room (shabestan) found in many older houses and buildings.
Downstream of the outlet, the water runs through surface canals called jubs (jūbs) which run downhill, with lateral branches to carry water to the neighborhood, gardens and fields. The streets normally parallel the jubs and their lateral branches. As a result, the cities and towns are oriented consistent with the gradient of the land; this is a practical response to efficient water distribution over varying terrain.
The lower reaches of the canals are less desirable for both residences and agriculture. The water grows progressively more polluted as it passes downstream. In dry years the lower reaches are the most likely to see substantial reductions in flow.
Construction
Traditionally qanats are built by a group of skilled laborers, muqannīs, with hand labor. The profession historically paid well and was typically handed down from father to son.
Preparations
The critical, initial step in qanat construction is identification of an appropriate water source. The search begins at the point where the alluvial fan meets the mountains or foothills; water is more abundant in the mountains because of orographic lifting, and excavation in the alluvial fan is relatively easy. The muqannīs follow the track of the main water courses coming from the mountains or foothills to identify evidence of subsurface water such as deep-rooted vegetation or seasonal seeps. A trial well is then dug to determine the depth of the water table and determine whether a sufficient flow is available to justify construction. If these prerequisites are met, the route is laid out aboveground.
Equipment must be assembled. The equipment is straightforward: containers (usually leather bags), ropes, reels to raise the container to the surface at the shaft head, hatchets and shovels for excavation, lights, and spirit levels or plumb bobs and string. Depending upon the soil type, qanat liners (usually fired clay hoops) may also be required.
Although the construction methods are simple, the construction of a qanat requires a detailed understanding of subterranean geology and a degree of engineering sophistication. The gradient of the qanat must be carefully controlled: too shallow a gradient yields no flow and too steep a gradient will result in excessive erosion, collapsing the qanat. And misreading the soil conditions leads to collapses, which at best require extensive rework and at worst are fatal for the crew.
Excavation
Construction of a qanat is usually performed by a crew of 3–4 muqannīs. For a shallow qanat, one worker typically digs the horizontal shaft, one raises the excavated earth from the shaft and one distributes the excavated earth at the top.
The crew typically begins digging from the destination to which the water will be delivered and works toward the source (the test well). Vertical shafts are excavated along the route, separated at a distance of . The separation of the shafts is a balance between the amount of work required to excavate them and the amount of effort required to excavate the space between them, as well as the ultimate maintenance effort. In general, the shallower the qanat, the closer the vertical shafts. If the qanat is long, excavation may begin from both ends at once. Tributary channels are sometimes also constructed to supplement the water flow.
Most qanats in Iran run less than , while some have been measured at ≈ in length near Kerman. The vertical shafts usually range from in depth, although qanats in the province of Khorasan have been recorded with vertical shafts of up to . The vertical shafts support construction and maintenance of the underground channel as well as air interchange. Deep shafts require intermediate platforms to facilitate the process of removing soil.
The construction speed depends on the depth and nature of the ground. If the earth is soft and easy to work, at depth a crew of four workers can excavate a horizontal length of per day. When the vertical shaft reaches , they can excavate only 20 meters horizontally per day and at in depth this drops below 5 horizontal meters per day. In Algeria, a common speed is just per day at a depth of . Deep, long qanats (which many are) require years and even decades to construct.
The excavated material is usually transported by means of leather bags up the vertical shafts. It is mounded around the vertical shaft exit, providing a barrier that prevents windblown or rain driven debris from entering the shafts. These mounds may be covered to provide further protection to the qanat. From the air, these shafts look like a string of bomb craters.
The qanat's water-carrying channel must have a sufficient downward slope that water flows easily. However the downward gradient must not be so great as to create conditions under which the water transitions between supercritical and subcritical flow. If this occurs, the waves that result can result in severe erosion that can damage or destroy the qanat. The choice of the slope is a trade off between erosion and sedimentation. Highly sloped tunnels are subject to more erosion as water flows at a higher speed. On the other hand, less sloped tunnels need frequent maintenance due to the problem of sedimentation. A lower downward gradient also contributes to reducing the solid contents and contamination in water. In shorter qanats the downward gradient varies between 1:1000 and 1:1500, while in longer qanats it may be almost horizontal. Such precision is routinely obtained with a spirit level and string.
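To make the gradient figures above concrete, the small sketch below converts a slope between 1:1000 and 1:1500 into the total vertical drop of the water channel; the 10 km tunnel length is a hypothetical example, not a value from the article.

```python
# Minimal sketch: vertical drop of a qanat channel of given length and slope.
# The 10 km length is an assumed example.

def channel_drop(length_m: float, slope_ratio: float) -> float:
    """Vertical drop (metres) of a channel of given length at a slope of 1:slope_ratio."""
    return length_m / slope_ratio

if __name__ == "__main__":
    length = 10_000  # metres (assumed example qanat length)
    for ratio in (1000, 1500):
        print(f"slope 1:{ratio} over {length} m -> drop {channel_drop(length, ratio):.1f} m")
```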
In cases where the gradient is steeper, underground waterfalls may be constructed with appropriate design features (usually linings) to absorb the energy with minimal erosion. In some cases the water power has been harnessed to drive underground mills. If it is not possible to bring the outlet of the qanat out near the settlement, it is necessary to run a jub or canal overground. This is avoided when possible to limit pollution, warming and water loss due to evaporation.
Maintenance
The vertical shafts may be covered to minimize blown-in sand. The channels of qanats must be periodically inspected for erosion or cave-ins, cleaned of sand and mud and otherwise repaired. For safety, air flow must be assured before entry.
Some damaged qanats have been restored. To be sustainable, restoration needs to take into account many nontechnical factors beginning with the process of selecting the qanat to be restored. In Syria, three sites were chosen based on a national inventory conducted in 2001. One of them, the Drasiah qanat of Dmeir, was completed in 2002. Selection criteria included the availability of a steady groundwater flow, social cohesion and willingness to contribute of the community using the qanat, and the existence of a functioning water-rights system.
Applications
The primary applications of qanats are for irrigation, providing cattle with water, and drinking water supply. Other applications include watermills, cooling and ice storage.
Watermills
Watermills within a qanat system had to be carefully situated, to make best use of the slow flow of water. In Iran, there were subterranean mills at Yazd and Boshruyeh; at Taft and Ardestan mills were placed at the outflow from the qanat, before irrigation of the fields.
Cooling
Qanats used in conjunction with a wind tower can provide cooling as well as a water supply. A wind tower is a chimney-like structure positioned above the house; of its four openings, the one opposite the wind direction is opened to move air out of the house. Incoming air is pulled from a qanat below the house. The air flow across the vertical shaft opening creates a lower pressure (see Bernoulli effect) and draws cool air up from the qanat tunnel, mixing with it.
The air from the qanat is drawn into the tunnel at some distance away and is cooled both by contact with the cool tunnel walls/water and by the transfer of latent heat of evaporation as water evaporates into the air stream. In dry desert climates this can result in a greater than 15 °C reduction in the air temperature coming from the qanat; the mixed air still feels dry, so the basement is cool and only comfortably moist (not damp). Wind tower and qanat cooling have been used in desert climates for over 1,000 years.
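A rough plausibility check of the temperature drop mentioned above can be made with a generic direct-evaporative-cooling estimate, T_out = T_dry − ε(T_dry − T_wet); this model and all numbers in the sketch are assumptions introduced here, not measurements or formulas from the article.

```python
# Rough sketch, not from the article: generic evaporative-cooling estimate
# T_out = T_dry - eff * (T_dry - T_wet). The 40 degC dry-bulb, 21 degC wet-bulb and
# effectiveness of 0.8 are all assumed illustrative values for dry desert air.

def evaporative_outlet_temp(t_dry_c: float, t_wet_c: float, effectiveness: float) -> float:
    """Air temperature after evaporative cooling toward the wet-bulb temperature."""
    return t_dry_c - effectiveness * (t_dry_c - t_wet_c)

if __name__ == "__main__":
    t_out = evaporative_outlet_temp(40.0, 21.0, 0.8)
    print(f"outlet air ~{t_out:.1f} degC, a drop of {40.0 - t_out:.1f} degC")
```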
Ice storage
By 400 BCE, Persian engineers had mastered the technique of storing ice in the middle of summer in the desert.
The ice could be brought in during the winters from nearby mountains, but in a more usual and sophisticated method they built a wall in the east–west direction near a yakhchal (ice pit). In winter, the qanat water would be channeled to the north side of the wall, whose shade made the water freeze more quickly, increasing the ice formed per winter day. Then the ice was stored in yakhchals—specially designed, naturally cooled refrigerators.
A large underground space with thick insulated walls was connected to a qanat, and a system of windcatchers or wind towers was used to draw cool subterranean air up from the qanat to maintain temperatures inside the space at low levels, even during hot summer days. As a result, the ice melted slowly and was available year-round.
By country
Africa
Algeria
Qanats (designated foggaras in Algeria) are the source of water for irrigation in large oases like Gourara. The foggaras are also found at Touat (an area of Adrar 200 km from Gourara). The length of the foggaras in this region is estimated to be thousands of kilometers. Although sources suggest that the foggaras may have been in use as early as 200 CE, they were clearly in use by the 11th century after the Arabs took possession of the oases in the 10th century and the residents embraced Islam. The water is metered to the various users through the use of distribution weirs that meter flow to the various canals, each for a separate user.
The humidity of the oases is also used to supplement the water supply to the foggara. The temperature gradient in the vertical shafts causes air to rise by natural convection, causing a draft to enter the foggara. The moist air of the agricultural area is drawn into the foggara in the opposite direction to the water run-off. In the foggara it condenses on the tunnel walls and the air passes out of the vertical shafts. This condensed moisture is available for reuse.
Egypt
Qanat irrigation technology was introduced to Egypt by the Achaemenid king Darius I during his reign of 522 BCE-486 BCE, which is supported by the historian Albert T. Olmstead. There are four main oases in the Egyptian desert. The Kharga Oasis is one that has been extensively studied. There is evidence that as early as the second half of the 5th century BCE water brought in qanats was being used. The qanats were excavated through water-bearing sandstone rock, which seeps into the channel, with water collected in a basin behind a small dam at the end. The width is approximately , but the height ranges from 5 to 9 meters; it is likely that the qanat was deepened to enhance seepage when the water table dropped (as is also seen in Iran). From there the water was used to irrigate fields.
There is another instructive structure located at the Kharga Oasis. A well that apparently dried up was improved by driving a side shaft through the easily penetrated sandstone (presumably in the direction of greatest water seepage) into the hill of Ayn-Manâwîr to allow collection of additional water. After this side shaft had been extended, another vertical shaft was driven to intersect the side shaft. Side chambers were built, and holes bored into the rock (presumably at points where water seeped from the rocks) are evident.
Libya
David Mattingly reports foggara extending for hundreds of miles in the Garamantes area near Germa in Libya: "The channels were generally very narrow – less than 2 feet wide and 5 high – but some were several miles long, and in total some 600 foggara extended for hundreds of miles underground. The channels were dug out and maintained using a series of regularly spaced vertical shafts, one every 30 feet or so, 100,000 in total, averaging 30 feet in depth, but sometimes reaching 130."
Morocco
In southern Morocco, the qanat (locally khettara) is also used. On the margins of the Sahara Desert, the isolated oases of the Draa River valley and Tafilalt have relied on qanat water for irrigation since the late 14th century. In Marrakech and the Haouz plain, the qanats have been abandoned since the early 1970s, having dried up. In the Tafilalt area, half of the 400 khettaras are still in use. The construction of the Hassan Adahkil Dam on the main course of the Ziz River in 1971, and its subsequent impact on local water tables, is said to be one of the many reasons for the loss of half of the khettara.
The black Berbers (Haratin) of the south were the hereditary class of qanat diggers in Morocco who built and repaired these systems. Their work was hazardous.
Tunisia
The foggara water management system in Tunisia, used to create oases, is similar to that of the Iranian qanat. The foggara is dug into the foothills of a fairly steep mountain range such as the eastern ranges of the Atlas Mountains. Rainfall in the mountains enters the aquifer and moves toward the Saharan region to the south. The foggara, in length, penetrates the aquifer and collects water. Families maintain the foggara and own the land it irrigates over a ten-meter width, with length reckoned by the size of plot that the available water will irrigate.
Asia
Afghanistan
The qanats are called kariz in Dari (Persian) and Pashto and have been in use since the pre-Islamic period. It is estimated that more than 9,370 karizes were in use in the 20th century. The oldest functional kariz, which is more than 300 years old and 8 kilometers long, is located in Wardak province and still provides water to nearly 3,000 people.
Many of these ancient structures were destroyed during the Soviet–Afghan War and the War in Afghanistan. Maintenance has not always been possible. The cost of labour has become very high, and maintaining the kariz structures is no longer possible. A lack of skilled artisans who have the traditional knowledge also poses difficulties. A number of the large farmers are abandoning their kariz, which have sometimes been in their families for centuries, and moving to tube and dug wells backed by diesel pumps. However, the government of Afghanistan was aware of the importance of these structures, and all efforts were made to repair, reconstruct and maintain (through the community) the kariz. The Ministry of Rural Rehabilitation and Development, along with national and international NGOs, made the effort. There were still functional qanat systems in 2009. American forces were reported to have unintentionally destroyed some of the channels during expansion of a military base, creating tensions between them and the local community. Some of these tunnels were used to store supplies, and to move men and equipment underground.
Armenia
Qanats have been preserved in Armenia in the community of Shvanidzor, in the southern province of Syunik, bordering with Iran. Qanats are named kahrezes in Armenian. There are 5 kahrezes in Shvanidzor. Four of them were constructed before the village was founded. The fifth kahrez was constructed in 2005. Potable water runs through three of them, and two are in poor condition. In the summer, especially in July and August, the amount of water reaches its minimum, creating a critical situation in the water supply system. Still, kahrezes are the main source of potable and irrigation water for the community.
Azerbaijan
The territory of Azerbaijan was home to numerous kahrizes many centuries ago. Archaeological findings suggest that long before the 9th century CE, kahrizes by which the inhabitants brought potable and irrigation water to their settlements were in use in Azerbaijan. Traditionally, kahrizes were built and maintained by a group of masons called 'Kankans' with manual labour. The profession was handed down from father to son.
It is estimated that until the 20th century, nearly 1,500 kahrizes, of which as many as 400 were in the Nakhichevan Autonomous Republic, existed in Azerbaijan. However, following the introduction of electric and fuel-pumped wells during Soviet times, kahrizes were neglected. Today, it is estimated that 800 are still functioning in Azerbaijan. These operational kahrizes are key to the life of many communities.
In 1999, upon the request of the communities in Nakhichevan, the International Organization for Migration (IOM) began implementing a pilot programme to rehabilitate the kahrizes. By 2018 IOM rehabilitated more than 163 kahrizes with funds from the United Nations Development Programme, European Commission, Canadian International Development Agency, Swiss Agency for Development and Cooperation and the Bureau of Population, Refugees, and Migration, US State Department, and the self-contribution of the local communities.
In 2010, IOM began a kahriz rehabilitation project with funds from the Korea International Cooperation Agency. During the First Phase of the action which lasted until January 2013, a total of 20 kahrizes in the mainland of Azerbaijan have been renovated.
China
The oasis of Turpan, in the deserts of Xinjiang in northwestern China, uses water provided by qanat (locally called karez). There are nearly 1,000 karez systems in the area, and the total length of the canals is about 5,000 kilometers.
Turpan has long been the center of a fertile oasis and an important trade center along the Northern Silk Road; it was adjacent to the kingdoms of Korla and Karashahr to the southwest. The historical record of the karez extends back to the Han dynasty. The Turfan Water Museum is a Protected Area of the People's Republic of China because of the importance of the Turpan karez water system to the history of the area.
Iran
In the middle of the 20th century, an estimated 50,000 qanats were in use in Iran, each commissioned and maintained by local users. Of these, only 37,000 remain in use as of 2015. One of the oldest and largest known qanats is in the Iranian city of Gonabad, and after 2,700 years still provides drinking and agricultural water to nearly 40,000 people. Its main well depth is more than 360 meters and its length is 45 kilometers. Yazd, Khorasan and Kerman are zones known for their dependence on an extensive system of qanats.
In 2016, UNESCO inscribed the Persian Qanat as a World Heritage Site, listing the following eleven qanats: Qasebeh Qanat, Qanat of Baladeh, Qanat of Zarch, Hasan Abad-e Moshir Qanat, Ebrāhim Ābād Qanat in Markazi Province, Qanat of Vazvān in Esfahan Province, Mozd Ābād Qanat in Esfahan Province, Qanat of the Moon in Esfahan Province, Qanat of Gowhar-riz in Kerman Province, Jupār – Ghāsem Ābād Qanat in Kerman Province, and Akbar Ābād Qanat in Kerman Province. Since 2002, UNESCO's International Hydrological Programme Intergovernmental Council began investigating the possibility of an international qanat research center to be located in Yazd, Iran.
The Qanats of Gonabad, also called kariz Kai Khosrow, is one of the oldest and largest qanats in the world built between 700 BCE to 500 BCE. It is located at Gonabad, Razavi Khorasan Province. This property contains 427 water wells with total length of .
According to Callisthenes, the Persians were using water clocks in 328 BCE to ensure a just and exact distribution of water from qanats to their shareholders for agricultural irrigation. The use of water clocks in Iran, especially in Qanats of Gonabad and kariz Zibad, dates back to 500 BCE. Later they were also used to determine the exact holy days of pre-Islamic religions, such as the Nowruz, Chelah, or Yaldā – the shortest, longest, and equal-length days and nights of the years.
The water clock, or Fenjaan, was the most accurate and commonly used timekeeping device for calculating the amount or the time that a farmer must take water from the Qanats of Gonabad until it was replaced by more accurate current clocks.
Many of the Iranian qanats have characteristics that allow us to call them feats of engineering, considering the intricate techniques used in their construction. The eastern and central regions of Iran hold the most qanats due to low precipitation and the lack of permanent surface streams, whereas a small number of qanats can be found in the northern and western parts, which receive more rainfall and enjoy some permanent rivers. The provinces of Khorasan Razavi, Southern Khorasan, Isfahan, and Yazd have the most qanats, but in terms of water discharge the provinces of Isfahan, Khorasan Razavi, Fars and Kerman rank first to fourth.
Henri Goblot explored the genesis of the qanat in his 1979 publication (The Qanats: A Technique for Obtaining Water). He argues that the ancient Iranians made use of the water that miners wished to get rid of, and founded a basic system named qanat or kariz to supply the required water to their farmlands. According to Goblot, this innovation took place in the northwest of present-day Iran, somewhere on the border with Turkey, and was later introduced to the neighboring Zagros Mountains.
According to an inscription left by Sargon II, the king of Assyria, in 714 BCE he invaded the city of Uhlu, lying to the northwest of Lake Urmia in the territory of the Urartu empire, and noticed that the occupied area enjoyed very rich vegetation even though there was no river running across it. He managed to discover the reason the area could stay green and realized that there were qanats behind it. In fact it was Ursa, the king of the region, who had rescued the people from thirst and turned Uhlu into a prosperous and green land. Goblot believes that the influence of the Medes and the Achaemenids spread the technology of the qanat from Urartu, in the northwest of Iran near the present border between Iran and Turkey, across the whole Iranian plateau.
It was an Achaemenid ruling that in case someone succeeded in constructing a qanat and bringing groundwater to the surface in order to cultivate land, or in renovating an abandoned qanat, the tax he was supposed to pay the government would be waived not only for him but also for his successors for up to 5 generations. During this period, the technology of qanat was in its heyday and it even spread to other countries. For example, following Darius's order, Silaks the naval commander of the Persian army and Khenombiz the royal architect managed to construct a qanat in the oasis of Kharagha in Egypt.
Beadnell believes that qanat construction dates back to two distinct periods: they were first constructed by the Persians, and later the Romans dug some other qanats during their reign in Egypt from 30 BCE to 395 CE. The magnificent temple built in this area during Darius's reign shows that there was a considerable population depending on the water of qanats. Ragerz has estimated this population to be 10,000 people. The most reliable document confirming the existence of qanats at this time was written by Polybius who states that: "the streams are running down from everywhere at the base of Alborz mountain, and people have transferred too much water from a long distance through some subterranean canals by spending much cost and labor."
During the Seleucid era, which began after the occupation of Iran by Alexander the Great, it seems that the qanats were abandoned. Some historical records concerning the situation of qanats during this era have been found. A study by Russian orientalist scholars mentions that the Persians used the side branches of rivers, mountain springs, wells and qanats to supply water. The subterranean galleries excavated to obtain groundwater were named qanats. These galleries were linked to the surface through vertical shafts, which were sunk in order to give access to the gallery for repair when necessary.
According to the historical records, the Parthian kings did not care about the qanats the way the Achaemenid kings and even Sassanid kings did. As an instance, Arsac III, one of the Parthian kings, destroyed some qanats in order to make it difficult for Seleucid Antiochus to advance further while fighting him. The historical records from this time indicate a perfect regulation on both water distribution and farmlands. All the water rights were recorded in a special document which was referred to in case of any transaction. The lists of farmlands – whether private or governmental – were kept at the tax department. During this period there were some official rulings on qanats, streams, construction of dam, operation and maintenance of qanats, etc.
The government proceeded to repair or dredge the qanats that were abandoned or destroyed, and to construct new qanats if necessary. A document written in the Pahlavi language points out the important role of qanats in developing the cities at that time. In Iran, the advent of Islam, which coincided with the overthrow of the Sassanid dynasty, brought about a profound change in religious, political, social and cultural structures. But the qanats stayed intact because the economic infrastructure, including qanats, was of great importance to the Arabs. As an instance, M. Lombard reports that the Moslem clerics who lived during the Abbasid period, such as Abooyoosef Ya'qoob (died 798 CE), stipulated that whoever could bring water to idle lands in order to cultivate them would have his tax waived and would be entitled to the lands cultivated. Therefore, this policy did not differ from that of the Achaemenids in not taking any tax from the people who revived abandoned lands.
The Arabs' supportive policy on qanats was so successful that even Mecca gained a qanat. The Persian historian Hamdollah Mostowfi writes: "Zobeyde Khatoon (Haroon al-Rashid's wife) constructed a qanat in Mecca. After the time of Haroon al-Rashid, during the caliph Moghtader's reign this qanat fell into decay, but he rehabilitated it, and the qanat was rehabilitated again after it collapsed during the reign of two other caliphs named Ghaem and Naser. After the era of the caliphs this qanat completely fell into ruin because the desert sand filled it up, but later Amir Choopan repaired the qanat and made it flow again in Mecca."
There are also other historical texts proving that the Abbasids were concerned about qanats. For example, according to the "Incidents of Abdollah bin Tahir's Time" written by Gardizi, in 830 CE a terrible earthquake struck the town of Forghaneh and reduced many homes to rubble. The inhabitants of Neyshaboor came to Abdollah bin Tahir to request his intervention, for they were fighting over their qanats and could find relevant instruction or law on qanats neither in the prophet's sayings nor in the clerics' writings. So Abdollah bin Tahir brought together all the clergymen from throughout Khorasan and Iraq to compile a book entitled Alghani (The Book of Qanat). This book collected all the rulings on qanats that could be of use to anyone wanting to judge a dispute over the issue. Gardizi added that this book was still in use in his own time, and that everyone made reference to it. One can deduce from these facts that during this period the number of qanats was so considerable that the authorities were prompted to put together legal instructions concerning them. It also shows that from the 9th to 11th centuries the qanats, which were the hub of the agricultural systems, were also of interest to the government.
Apart from "The Book of Alghani", which is considered a law booklet focusing on qanat-related rulings based on Islamic principles, there is another book about groundwater, written by Karaji in 1010. This book, entitled Extraction of Hidden Waters, examines the technical issues associated with qanats and tries to answer common questions such as how to construct and repair a qanat, how to find a groundwater supply, and how to do leveling. Some of the innovations described in this book were introduced for the first time in the history of hydrogeology, and some of its technical methods are still valid and can be applied in qanat construction. The content of the book implies that its writer (Karaji) had no idea that another book on qanats had been compiled by the clergymen.
There are also records dating back to that time concerning the legally protected vicinity of qanats. For example, Mohammad bin Hasan quotes Aboo-Hanifeh as ruling that if someone constructs a qanat in abandoned land, someone else may dig another qanat in the same land on the condition that the second qanat is 500 zera' (375 meters) away from the first one.
Ms. Lambton quotes Moeen al-din Esfarzi who wrote the book Rowzat al-Jannat (the garden of paradise) that Abdollah bin Tahir (from the Taherian dynasty) and Ismaeel Ahmed Samani (from the Samani dynasty) had several qanats constructed in Neyshaboor. Later, in the 11th century, a writer named Nasir Khosrow acknowledged all those qanats with the following words: "Neyshaboor is located in a vast plain at a distance of 40 Farsang (≈240 km) from Serakhs and 70 Farsang (≈420 km) from Mary (Marv) ... all the qanats of this city run underground, and it is said that an Arab who was offended by the people of Neyshaboor has complained that; what a beautiful city Neyshaboor could have become if its qanats would have flowed on the ground surface and instead its people would have been underground." These documents all certify the importance of qanats during the Islamic history within the cultural territories of Iran.
In the 13th century, the invasion of Iran by Mongolian tribes reduced many qanats and irrigation systems to ruin, and many qanats were deserted and dried up. Later, in the era of the Ilkhanid dynasty especially at the time of Ghazan Khan and his Persian minister Rashid al-Din Fazl-Allah, some measures were taken to revive the qanats and irrigation systems. There is a 14th-century book entitled Al-Vaghfiya Al-Rashidiya (Rashid's Deeds of Endowment) that names all the properties located in Yazd, Shiraz, Maraghe, Tabriz, Isfahan and Mowsel that Rashid Fazl-Allah donated to the public or religious places. This book mentions many qanats running at that time and irrigating a considerable area of farmland.
At the same time, another book, entitled Jame' al-Kheyrat, was written by Seyyed Rokn al-Din on the same subject as Rashid's book. In this book, Seyyed Rokn al-Din names the properties he donated in the region of Yazd. These deeds of endowment indicate that much attention was given to the qanats during the reign of Ilkhanids, but it is attributable to their Persian ministers, who influenced them.
In 1984–1985 the Ministry of Energy took a census of 28,038 qanats, whose total discharge was 9 billion cubic meters. In the years 1992–1993 a census of 28,054 qanats showed a total discharge of 10 billion cubic meters. Ten years later, in 2002–2003, the number of qanats was reported as 33,691, with a total discharge of 8 billion cubic meters.
In the restricted regions there are 317,225 wells, qanats and springs that discharge 36,719 million cubic meters of water per year, of which 3,409 million cubic meters is surplus to the aquifer capacity. In 2005, in the country as a whole, there were 130,008 deep wells with a discharge of 31,403 million cubic meters, 338,041 semi-deep wells with a discharge of 13,491 million cubic meters, 34,355 qanats with a discharge of 8,212 million cubic meters, and 55,912 natural springs with a discharge of 21,240 million cubic meters.
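For a sense of scale, the short sketch below simply sums the 2005 national figures quoted above; it is an illustrative arithmetic check only, and the per-source numbers are those given in the paragraph.

```python
# Arithmetic check of the 2005 national figures quoted above
# (counts of extraction points; discharges in million cubic metres per year).
sources = {
    "deep wells":      (130_008, 31_403),
    "semi-deep wells": (338_041, 13_491),
    "qanats":          (34_355,  8_212),
    "natural springs": (55_912,  21_240),
}

total_points = sum(count for count, _ in sources.values())
total_discharge = sum(discharge for _, discharge in sources.values())
qanat_share = sources["qanats"][1] / total_discharge

print(total_points)          # 558316 extraction points nationwide
print(total_discharge)       # 74346 million cubic metres per year
print(f"{qanat_share:.1%}")  # qanats supply roughly 11.0% of the total
```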
In 2021, the British-trained architect Margot Krasojević designed a luxury eco-hotel, called Qanat, based on the principles of qanats and windcatchers in a desert in Iran. The project has not yet been built, but it offers ideas for applying ancient technology to modern-day cooling problems in the desert.
Iraq
A survey of qanat systems in the Kurdistan region of Iraq conducted by the Department of Geography at Oklahoma State University (US) on behalf of UNESCO in 2009 found that out of 683 karez systems, some 380 were still active in 2004, but only 116 were active by 2009. Reasons for the decline of qanats include "abandonment and neglect" prior to 2004, "excessive pumping from wells" and, since 2005, drought. Water shortages are said to have forced, since 2005, over 100,000 people who depended for their livelihoods on karez systems to leave their homes.
The study says that a single karez has the potential to provide enough household water for nearly 9,000 individuals and irrigate over 200 hectares of farmland. UNESCO and the government of Iraq plan to rehabilitate the karez through a Karez Initiative for Community Revitalization launched in 2010. Most of the karez are in Sulaymaniyah Governorate (84%). A large number are also found in Erbil Governorate (13%), especially on the broad plain around and in Erbil city.
India
In India, karez systems are located at Bidar, Bijapur, Burhanpur ("Kundi Bhandara") and Aurangabad. The Bidar karez systems were probably the first dug in India; they date to the Bahmani period. Bidar has three karez systems according to Ghulam Yazdani's documentation: other than Naubad there are two more systems, "Shukla Theerth" and "Jamna Mori". Shukla Theerth is the longest karez system in Bidar; the mother well of this karez has been discovered near Gornalli Kere, a historic embankment. The third system, Jamna Mori, is more of a distribution system within the old city area, with many channels crisscrossing the city lanes. Restoration efforts commenced in 2014, with the desilting and excavation of the Naubad karez in 2015 uncovering 27 vertical shafts linked to the karez. The rejuvenation of the system has had a significant impact on the water-deficit city of Bidar. A seventh line of the system was discovered in 2016 during a sewage line excavation.
Valliyil Govindankutty, assistant professor in geography at Government College, Chittur, was responsible for the rediscovery and mapping of the Naubad karez system in 2012–2013. Later, in 2014–2016, team YUVAA joined Govindankutty to help uncover the other two karez systems in Bidar. Detailed documentation of the Naubad karez system was carried out in August 2013, and a report presenting several new findings was submitted to the District Administration of Bidar. The research led to the clearing of debris and collapsed sections, paving the way for the system's rejuvenation. The cleaning of the karez has brought water to higher areas of the plateau, which in turn has recharged the wells in the vicinity.
The Bijapur karez system is much more complicated; it has both surface water and groundwater connections. The Bijapur karez is a network of shallow masonry aqueducts, terracotta and ceramic pipes, embankments, reservoirs and tanks, which together form a network ensuring that water reaches the old city. The system starts at Torwi and extends as shallow aqueducts and then as pipes; from the Sainik School area onward it becomes deeper, existing as a tunnel dug through the local geology. The system can be clearly traced up to Ibrahim Roja.
In Aurangabad the karez systems are called nahars. These are shallow aqueducts running through the city; there are 14 of them in Aurangabad. The Nahar-i-Ambari is the oldest and longest. It is again a combination of shallow aqueducts, open channels, pipes and cisterns. The source of water is a surface water body: the karez has been constructed right below the bed of a lake, and the lake water seeps through the soil into the karez gallery.
In Burhanpur the karez is called "Kundi Bhandara", sometimes wrongly referred to as "Khuni Bhandara". The system is approximately 6 km long and starts from the alluvial fans of the Satpura hills in the north of the town. Unlike those at Bidar, Bijapur and Aurangabad, the system's air vents are round in shape. Inside the karez, lime deposits can be seen on the walls. The system ends in a pipeline that carries water onward to palaces and public fountains.
Indonesia
It has been suggested that underground temples at Gua Made in Java reached by shafts, in which masks of a green metal were found, originated as a qanat.
Japan
In Japan there are several dozen qanat-like structures, locally known as 'mambo' or 'manbo', most notably in the Mie and Gifu Prefectures. Whereas some link their origin clearly to the Chinese karez, and therefore to the Iranian source, a Japanese conference in 2008 found insufficient scientific studies to evaluate the origins of the mambo.
Jordan
Among the qanats built in the Roman Empire, the long Gadara Aqueduct in northern Jordan was possibly the longest continuous qanat ever built. Partly following the course of an older Hellenistic aqueduct, excavation work arguably started after a visit by emperor Hadrian in 129–130 CE. The Gadara Aqueduct was never totally finished and was put in service only in sections.
Pakistan
In Pakistan, the qanat irrigation system is endemic only to Balochistan. The major concentration is in the north and northwest along the Pakistan–Afghanistan border and in the oases of the Makoran division. The karez system of the Balochistan desert is on the tentative list of future World Heritage Sites in Pakistan.
The acute shortage of water resources gives water a decisive role in the regional conflicts that have arisen over the course of Balochistan's history. In Balochistan, therefore, the possession of water resources is more important than the ownership of land, and a complex system for the collection, channeling and distribution of water developed there. Likewise, the distribution and fair allocation of water among different stakeholders underpins the importance of the different societal classes in Balochistan in general, and in Makoran in particular.
For instance, the sarrishta (literally, head of the chain) is responsible for the administration of the channel and normally owns the largest water quota. Under the sarrishta there are several heads of owners, the issadar, who also possess larger water quotas. The social hierarchy within the Baloch society of Makoran depends upon the possession of the largest quotas of water. The role of sarrishta is in some cases hereditary, passing down the generations within a family, and its holder must know the criteria for the unbiased distribution of water among the different issadar.
The sharing of water is based on a complex indigenous system of measurement that depends upon time and space, particularly the phases of the moon; these units are the hangams. Based on seasonal variations and shares of water, the hangams are apportioned among the various owners over periods of seven or fourteen days. In some places, however, the anna is used instead of the hangam, based on a twelve-hour period for each quota. Thus, if a person owns 16 quotas, he is entitled to water for eight days (16 × 12 hours) in the high season and for 16 days in winter, when the water level drops and winter rain (Baharga) is expected in the Makran region. The twelve-hour water quota is again subdivided into several sub-fractions using local measuring scales such as the tas or pad (Dr Gul Hasan, Pro VC LUAWMS, two-day national conference on Kech).
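As a rough illustration of the anna arithmetic just described, the sketch below converts water quotas into days of entitlement. The twelve-hour quota and the 16-quota example come from the text above; the doubling of each quota to 24 hours in winter is an assumption inferred from the stated 16-day winter entitlement, not an explicit rule from the source.

```python
# Illustrative sketch of the 'anna' water-quota arithmetic described above.
HOURS_PER_QUOTA_HIGH_SEASON = 12   # stated in the text
HOURS_PER_QUOTA_WINTER = 24        # assumption inferred from the 16-day winter figure

def entitlement_days(quotas: int, hours_per_quota: int) -> float:
    """Convert a number of quotas into days of water entitlement."""
    return quotas * hours_per_quota / 24

print(entitlement_days(16, HOURS_PER_QUOTA_HIGH_SEASON))  # 8.0 days in the high season
print(entitlement_days(16, HOURS_PER_QUOTA_WINTER))       # 16.0 days in winter
```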
The Chagai district is in the north west corner of Balochistan, Pakistan, bordering with Afghanistan and Iran. Qanats, locally known as Kahn, are found more broadly in this region. They are spread from Chaghai district all the way up to Zhob district.
Syria
Qanats were found over much of Syria. The widespread installation of groundwater pumps has lowered the water table, undermining the qanat systems. Qanats have gone dry and been abandoned across the country.
Oman
In Oman, from the Iron Age period onward (with examples found at Salut, Bat and other sites), a system of underground aqueducts called falaj was constructed: a series of well-like vertical shafts connected by gently sloping horizontal tunnels. There are three types of falaj: the Daudi, with underground aqueducts; the Ghaili, requiring a dam to collect the water; and the Aini, whose source is a water spring. These enabled large-scale agriculture to flourish in a dryland environment. According to UNESCO, some 3,000 aflaj (plural of falaj) are still in use in Oman today. Nizwa, the former capital city of Oman, was built around a falaj which is in use to this day. These systems date to before the Iron Age in Oman. In July 2006, five representative examples of this irrigation system were inscribed as a World Heritage Site.
United Arab Emirates
The oases of the city of Al Ain (particularly Al-Ain, Al-Qattarah, Al-Mu'taredh, Al-Jimi, Al-Muwaiji, and Hili), adjacent to Al-Buraimi in Oman, continue traditional falaj (qanat) irrigations for the palm groves and gardens, and form part of the city's ancient heritage. Multiple aflaj have been found from the early Iron Age, as early as 1100 BC, and some sources claim these to be the earliest examples of the qanat irrigation system.
The falaj system continued in use into the early Pre-Islamic period (300 BC – 300 AD) and was reintroduced after the early Islamic conquests in the souqs of Julfar, Dibba and Tawwam. The Islamic geographer Al Muqqadasi stated in the 10th century that "Hafit {Tuwwam} abounds in palm trees", indicating extensive use of the falaj system. The falaj system is still operating in the city of Al Ain, as well as in several mountainous settlements, including the villages of Wadi Shees and Masafi.
Europe
Greece
The Tunnel of Eupalinos on Samos runs for 1 kilometre through a hill to supply water to Pythagorion. It was built on the order of Polycrates around 550 BCE. At either end of the tunnel proper, shallow qanat-like tunnels carried the water from the spring and to the town.
Italy
The long Tunnels of Claudius, intended to partially drain the largest Italian inland body of water, Fucine Lake, were constructed using the qanat technique; they featured shafts up to 122 m deep. The entire ancient town of Palermo in Sicily was equipped with a huge qanat system built during the Arab period (827–1072). Many of the qanats are now mapped and some can be visited. The famous Scirocco room has an air-conditioning system cooled by the flow of water in a qanat and a "wind tower", a structure able to catch the wind and use it to draw the cooled air up into the room.
Luxembourg
The Raschpëtzer near Helmsange in southern Luxembourg is a particularly well preserved example of a Roman qanat. It is probably the most extensive system of its kind north of the Alps. To date, some 330 m of the total tunnel length of 600 m have been explored. Thirteen of the 20 to 25 shafts have been investigated. The qanat appears to have provided water for a large Roman villa on the slopes of the Alzette valley. It was built during the Gallo-Roman period, probably around the year 150 and functioned for about 120 years thereafter.
Spain
There are still many examples of qanat systems in Spain, most likely brought to the area by the Moors during their rule of the Iberian peninsula. Turrillas in Andalusia, on the north-facing slopes of the Sierra de Alhamilla, has evidence of a qanat system, and Granada is another site with an extensive qanat system. In Madrid such galleries were used until the construction of the Canal de Isabel II.
The Americas
Qanats in the Americas, usually referred to as puquios or filtration galleries, can be found in the Nazca Province of Peru and in northern Chile. The origin and dating of the Nazca puquios is disputed, although some archaeologists have asserted that they were constructed by the indigenous people of the Nazca culture beginning about 500 CE.
The Spanish introduced qanats into Mexico in 1520 CE.
In the Atacama Desert of northern Chile the shafts of puquios are known as socavones. Socavones are known to exist in Azapa Valley and the oasis of Sibaya, Pica-Matilla, and Puquio de Núñez. In 1918 geologist Juan Brüggen mentioned the existence of 23 socavones in the Pica oasis, yet these have since then been abandoned due to economic and social changes.
Symbolism in Iranian culture
In an August 21, 1906, letter written from Tehran, Florence Khanum, the American wife of the Persian diplomat Ali Kuli Khan, described the use of qanats for the garden at the home of her brother-in-law, General Husayn Kalantar.
An old tradition in Iran was to hold symbolic wedding ceremonies between widows and qanats in which the widow became the "wife" of the qanat. This was believed to help ensure the continued flow of water.
See also
Aflaj Irrigation Systems of Oman
Notes
References
Hadden, Robert Lee. 2005. "Adits, Caves, Karizi-Qanats, and Tunnels in Afghanistan: An Annotated Bibliography," United States Army Corps of Engineers, Army Geospatial Center.
Ozden, Dursun (director and writer). Anatolian Water Civilization & Anatolian Karizes-Qanats, documentary film and book, 2004–2011, Istanbul, Turkey. http://www.dursunozden.com.tr
External links
WaterHistory.org Article on Karez in Turpan, Xinjiang, China
World Wildlife Fund Editorial on Karez in Afghanistan
Useful information on qanats provided by Farzad Kohandel, in Arabic and English
Qanat
Information on Qanats (includes photo of access shafts from above)
International Center on Qanats and Historic Hydraulic Structures
The origin and spread of qanats in the Old World – by PW English, in Proceedings of the American Philosophical Society Volume 112, Number 3 June 21, 1968.
The art and science of water, in Saudi Aramco May/June 2006
Carlo Trabia: "Kanats of Sicily", in: Best of Sicily Magazine, March 2005, with Photo
Lynn Teo Simarski, Oman's "Unfailing Springs", 1992, Saudi Aramco World
"Engines of Ingenuity," episode no. 1250, "Water in the Desert," University of Houston, College of Engineering
Water wells
Iranian inventions
Irrigation
Water supply
Ancient technology
Aqueducts in Iran
World Heritage Sites in Iran | Qanat | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 11,774 | [
"Hydrology",
"Water wells",
"Water supply",
"Environmental engineering"
] |
536,063 | https://en.wikipedia.org/wiki/Exothermic%20reaction | In thermochemistry, an exothermic reaction is a "reaction for which the overall standard enthalpy change ΔH⚬ is negative." Exothermic reactions usually release heat. The term is often confused with exergonic reaction, which IUPAC defines as "... a reaction for which the overall standard Gibbs energy change ΔG⚬ is negative." A strongly exothermic reaction will usually also be exergonic because ΔH⚬ makes a major contribution to ΔG⚬. Most of the spectacular chemical reactions that are demonstrated in classrooms are exothermic and exergonic. The opposite is an endothermic reaction, which usually takes up heat and is driven by an entropy increase in the system.
Examples
Examples are numerous: combustion, the thermite reaction, combining strong acids and bases, polymerizations. As an example in everyday life, hand warmers make use of the oxidation of iron to achieve an exothermic reaction:
4Fe + 3O2 → 2Fe2O3 ΔH⚬ = −1648 kJ/mol
A particularly important class of exothermic reactions is combustion of a hydrocarbon fuel, e.g. the burning of natural gas:
CH4 + 2O2 → CO2 + 2H2O ΔH⚬ = −890 kJ/mol
These sample reactions are strongly exothermic.
Uncontrolled exothermic reactions, those leading to fires and explosions, are wasteful because it is difficult to capture the released energy. Nature effects combustion reactions under highly controlled conditions, avoiding fires and explosions, in aerobic respiration so as to capture the released energy, e.g. for the formation of ATP.
Measurement
The enthalpy of a chemical system is essentially its energy. The enthalpy change ΔH for a reaction is equal to the heat q transferred out of (or into) a closed system at constant pressure without input or output of electrical energy. Heat production or absorption in a chemical reaction is measured using calorimetry, e.g. with a bomb calorimeter. One common laboratory instrument is the reaction calorimeter, where the heat flow from or into the reaction vessel is monitored. The heat release and corresponding energy change, ΔcombH, of a combustion reaction can be measured particularly accurately.
The measured heat energy released in an exothermic reaction is converted to ΔH⚬ in joules per mole (formerly cal/mol). The standard enthalpy change ΔH⚬ is essentially the enthalpy change when the stoichiometric coefficients in the reaction are taken as the amounts of reactants and products (in moles); usually, the initial and final temperatures are assumed to be 25 °C. For gas-phase reactions, ΔH⚬ values are related to bond energies to a good approximation by:
ΔH⚬ = total bond energy of reactants − total bond energy of products
In an exothermic reaction, by definition, the enthalpy change has a negative value:
ΔH = Hproducts − Hreactants < 0
where a larger value (the higher energy of the reactants) is subtracted from a smaller value (the lower energy of the products). For example, when hydrogen burns:
2H2 (g) + O2 (g) → 2H2O (g)
ΔH⚬ = −483.6 kJ/mol
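As a sanity check on the bond-energy approximation given above, the sketch below estimates ΔH⚬ for the two combustion reactions quoted in this article. The average bond enthalpies used are approximate textbook values, not figures from this article, so the results are only rough; note also that the −890 kJ/mol quoted for methane refers to combustion producing liquid water, whereas the gas-phase bond-energy estimate gives a smaller magnitude.

```python
# Rough bond-enthalpy estimate of gas-phase reaction enthalpies (kJ/mol).
# Average bond enthalpies below are approximate textbook values (assumption),
# not taken from this article.
BOND_ENTHALPY = {
    "C-H": 413,
    "O=O": 498,
    "C=O(CO2)": 799,
    "O-H": 463,
    "H-H": 436,
}

def delta_h(reactant_bonds, product_bonds):
    """ΔH⚬ ≈ total bond energy of reactants − total bond energy of products."""
    total = lambda bonds: sum(BOND_ENTHALPY[b] * n for b, n in bonds.items())
    return total(reactant_bonds) - total(product_bonds)

# CH4 + 2 O2 → CO2 + 2 H2O(g)
print(delta_h({"C-H": 4, "O=O": 2}, {"C=O(CO2)": 2, "O-H": 4}))  # ≈ −802 kJ/mol
# 2 H2 + O2 → 2 H2O(g)
print(delta_h({"H-H": 2, "O=O": 1}, {"O-H": 4}))                 # ≈ −482 kJ/mol, near the −483.6 quoted
```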
See also
Chemical thermodynamics
Differential scanning calorimetry
Endergonic
Exergonic
Endergonic reaction
Exergonic reaction
Exothermic process
Endothermic reaction
Endotherm
References
External links
Thermochemistry | Exothermic reaction | [
"Chemistry"
] | 741 | [
"Thermochemistry"
] |
536,249 | https://en.wikipedia.org/wiki/Storm%20Prediction%20Center | The Storm Prediction Center (SPC) is a US government agency that is part of the National Centers for Environmental Prediction (NCEP), operating under the control of the National Weather Service (NWS), which in turn is part of the National Oceanic and Atmospheric Administration (NOAA) of the United States Department of Commerce (DoC).
Headquartered at the National Weather Center in Norman, Oklahoma, the Storm Prediction Center is tasked with forecasting the risk of severe thunderstorms and tornadoes in the contiguous United States. It issues convective outlooks, mesoscale discussions, and watches as a part of this process. Convective outlooks are issued for the following eight days (issued separately for Day 1, Day 2, Day 3, and Days 4–8), and detail the risk of severe thunderstorms and tornadoes during the given forecast period, although tornado, hail and wind details are only available for Days 1 and 2. Days 3–8 use a probabilistic scale, determining the probability for a severe weather event in percentage categories (15%/yellow and 30%/orange).
Mesoscale discussions are issued to provide information on certain individual regions where severe weather is becoming a threat and states whether a watch is likely and details thereof, particularly concerning conditions conducive for the development of severe thunderstorms in the short term, as well as situations of isolated severe weather when watches are not necessary. Watches are issued when forecasters are confident that severe weather will occur, and usually precede the onset of severe weather by one hour, although this sometimes varies depending on certain atmospheric conditions that may inhibit or accelerate convective development.
The agency is also responsible for forecasting fire weather (indicating conditions that are favorable for wildfires) in the contiguous U.S., issuing fire weather outlooks for Days 1, 2, and 3–8, which detail areas with various levels of risk for fire conditions (such as fire levels and fire alerts).
History
The Storm Prediction Center began in 1952 as SELS (Severe Local Storms Unit), the U.S. Weather Bureau in Washington, D.C. In 1954, the unit moved its forecast operations to Kansas City, Missouri. SELS began issuing convective outlooks for predicted thunderstorm activity in 1955, and began issuing radar summaries in three-hour intervals in 1960; with the increased duties of compiling and disseminating radar summaries, this unit became the National Severe Storms Forecast Center (NSSFC) in 1966, remaining headquartered in Kansas City.
In 1968, the National Severe Storms Forecast Center began issuing status reports on weather watches; the agency then made its first computerized data transmission in 1971. On April 2, 1982, the agency issued the first "Particularly Dangerous Situation" watch, which indicates the imminent threat of a major severe weather event over the watch's timespan. In 1986, the NSSFC introduced two new forecast products: the Day 2 Convective Outlook (which include probabilistic forecasts for outlined areas of thunderstorm risk for the following day) and the Mesoscale Discussion (a short-term forecast outlining specific areas under threat for severe thunderstorm development).
In October 1995, the National Severe Storms Forecast Center relocated its operations to Norman, Oklahoma, and was rechristened the Storm Prediction Center. At that time, the guidance center was housed at Max Westheimer Airport (now the University of Oklahoma Westheimer Airport), co-located in the same building as the National Severe Storms Laboratory and the local National Weather Service Weather Forecast Office (the latter of which, in addition to disseminating forecasts, oversees the issuance of weather warnings and advisories for the western two-thirds of Oklahoma and western portions of North Texas, and issues outline and status updates for SPC-issued severe thunderstorm and tornado watches that include areas served by the Norman office). In 1998, the center began issuing the National Fire Weather Outlook to provide forecasts for areas potentially susceptible to the development and spread of wildfires based on certain meteorological factors. The Day 3 Convective Outlook (which is similar in format to the Day 2 forecast) was first issued on an experimental basis in 2000, and was made an official product in 2001.
In 2006, the Storm Prediction Center, National Severe Storms Laboratory and National Weather Service Norman Forecast Office moved their respective operations into the newly constructed National Weather Center, near Westheimer Airport. Since the agency's relocation to Norman, the 557th Weather Wing at Offutt Air Force Base would assume control of issuing the Storm Prediction Center's severe weather products in the event that the SPC is no longer able to issue them in the event of an outage (such as a computer system failure or building-wide power disruption) or emergency (such as an approaching strong tornadic circulation or tornado on the ground) affecting the Norman campus; on April 1, 2009, the SPC reassigned responsibilities for issuing the center's products in such situations to the 15th Operational Weather Squadron based out of Scott Air Force Base.
Brief history timeline
1948: Following Weather Bureau (WB) researchers' work on a 20 March tornado at Tinker AFB, two officers (Fawbush and Miller) successfully predict another one five days later, on 25 March, at the same base, and are given responsibility for Air Force tornado predictions.
1951: Severe Weather Warning Center (SWWC) established as an Air Weather Service unit, headed by Fawbush and Miller.
1952: WB establishes its own Weather Bureau-Army-Navy (WBAN) Analysis Center in Washington in March as a trial unit, made permanent on 21 May as the Weather Bureau Severe Weather Unit (SWU).
1953: SWU renamed Severe Local Storm (SELS) Warning Center on 17 June.
1954: SELS relocates from the WBAN Center in Washington to the WB's District Forecast Office (DFO) in downtown Kansas City in September.
1955: National Severe Storms Project (NSSP) formed as SELS' research component.
1958: SELS assumes authority for all public severe weather forecasts.
1962: Some from NSSP move to Norman's Weather Radar Laboratory to work with a new Weather Surveillance Radar-1957 (WSR-57).
1964: Remainder of NSSP moves to Norman and is reorganized as National Severe Storms Laboratory (NSSL).
1965: Environmental Science Services Administration (ESSA) formed, and entire WB office (SELS and DFO) in Kansas City renamed National Severe Storms Forecast Center (NSSFC).
1976: Techniques Development Unit (TDU) established in April to provide software development and evaluate forecast methods.
1995: NSSFC renamed Storm Prediction Center (SPC) in October.
1997: SPC moves from Kansas City to Norman.
2006: SPC moves a few miles south to the National Weather Center (NWC) on the University of Oklahoma Research Campus.
2023: Meteorologist Liz Leitman becomes the first woman at the SPC to issue a convective weather watch.
2024: On February 15, Leitman becomes the first woman meteorologist to issue a severe thunderstorm watch.
Overview
The Storm Prediction Center is responsible for forecasting the risk of severe weather caused by severe thunderstorms, specifically those producing tornadoes, hail 1 inch (2.5 cm) in diameter or larger, and/or winds of 58 mph (50 knots) or greater. The agency also forecasts hazardous winter and fire weather conditions. It does so primarily by issuing convective outlooks, severe thunderstorm watches, tornado watches and mesoscale discussions.
There is a three-stage process in which the area, time period, and details of a severe weather forecast are refined from a broad-scale forecast of potential hazards to a more specific and detailed forecast of what hazards are expected, and where and in what time frame they are expected to occur. If warranted, forecasts will also increase in severity through this three-stage process.
The Storm Prediction Center employs a total of 43 personnel, including five lead forecasters, ten mesoscale/outlook forecasters, and seven assistant mesoscale forecasters. Many SPC forecasters and support staff are heavily involved in scientific research into severe and hazardous weather. This involves conducting applied research and writing technical papers, developing training materials, giving seminars and other presentations locally and nationwide, attending scientific conferences, and participating in weather experiments.
Convective outlooks
The Storm Prediction Center issues convective outlooks (AC), consisting of categorical and probabilistic forecasts describing the general threat of severe convective storms over the contiguous United States for the next six to 192 hours (Day 1 through Day 8). These outlooks are labeled and issued by day, and are issued up to five times per day.
The categorical levels of risk are TSTM (for Thunderstorm: light green shaded area – rendered as a brown line prior to April 2011 – indicating a risk of general thunderstorms); "MRGL" (for Marginal: darker green shaded area, indicating a very low but present risk of severe weather); "SLGT" (for Slight: yellow shaded area – previously rendered as a green line – indicating a slight risk of severe weather); "ENH" (for Enhanced: orange shaded area, which replaced the upper end of the SLGT category on October 22, 2014); "MDT" (for Moderate: red shaded area – previously rendered as a red line – indicating a moderate risk of severe weather); and "HIGH" (pink shaded area – previously rendered as a fuchsia line – indicating a high risk of severe weather). Significant severe areas (referred to as "hatched areas" because of their representation on outlook maps) refer to a threat of storm intensity at "significant severe" levels (an F2/EF2 or stronger tornado, hail 2 inches (5 cm) or larger, or winds of 75 mph (65 knots) or greater).
In April 2011, the SPC introduced a new graphical format for its categorical and probability outlooks, which included the shading of risk areas (with the colors corresponding to each category, as mentioned above, being changed as well) and population, county/parish/borough and interstate overlays. The new shaded maps also incorporated a revised color palette for the shaded probability categories in each outlook.
In 2013, the SPC incorporated a small table under the Convective Outlook's risk category map that indicates the total coverage area by square miles, the total estimated population affected and major cities included within a severe weather risk area.
Public severe weather outlooks (PWO) are issued when a significant or widespread outbreak is expected, especially for tornadoes. From November to March, it can also be issued for any threat of significant tornadoes in the nighttime hours, noting the lower awareness and greater danger of tornadoes at that time of year.
Categories
A marginal risk day indicates storms of only limited organization, longevity, coverage and/or intensity, typically isolated severe or near-severe storms with limited wind damage, large hail and possibly a low tornado risk. Wind gusts of at least 58 mph (93 km/h) and hailstones of around 1 inch (2.5 cm) in diameter are common storm threats within a marginal risk; if sufficient wind shear is present, a tornado – usually of weak (EF0 to EF1) intensity and short duration – may be possible. This category replaced the "SEE TEXT" category on October 22, 2014.
A slight risk day typically indicates that the threat exists for scattered severe weather, including scattered wind damage (produced by straight-line sustained winds and/or gusts of 60 to 70 mph), scattered severe hail (varying in size from about 1 to 2 inches) and/or isolated tornadoes (often of shorter duration and weak to moderate intensity, depending on the available wind shear and other atmospheric parameters). During the peak severe weather season, most days will have a slight risk somewhere in the United States. Isolated significant severe events are possible in some circumstances, but are generally not widespread.
An enhanced risk day indicates that there is a greater threat for severe weather than that which would be indicated by a slight risk, but conditions are not adequate for the development of widespread significant severe weather to necessitate a moderate category, with more numerous areas of wind damage (often with wind gusts of to ), along with severe hail (occasionally over ) and several tornadoes (in some setups, isolated strong tornadoes are possible). Severe storms are expected to be more concentrated and of varying intensities. These days are quite frequent in the peak severe weather season and occur occasionally at other times of year. This risk category replaced the upper end of "slight" on October 22, 2014, although a few situations that previously warranted a moderate risk were reclassified as enhanced (i.e. 45% wind or 15% tornado with no significant area).
A moderate risk day indicates that more widespread and/or more dangerous severe weather is possible, with significant severe weather often more likely. Numerous tornadoes (some of which may be strong and potentially long-track), more widespread or severe wind damage (often with gusts over ) and/or very large/destructive hail (up to or exceeding in diameter) could occur. Major events, such as large tornado outbreaks or widespread straight-line wind events, are sometimes also possible on moderate risk days, but with greater uncertainty. Moderate risk days are not terribly uncommon, and typically occur several times a month during the peak of the severe weather season, and occasionally at other times of the year. Slight and enhanced risk areas typically surround areas under a moderate risk, where the threat is lower.
A high risk day indicates a considerable likelihood of significant to extreme severe weather, generally a major tornado outbreak or (much less often) an extreme derecho event. On these days, the potential exists for extremely severe and life-threatening weather. This includes a large number of tornadoes - many of which will likely be strong to violent and on the ground for a half-hour or longer, or widespread and very destructive straight-line winds, likely in excess of . Hail cannot verify or produce a high risk on its own, although such a day usually involves a threat for widespread very large and damaging hail as well. Many of the most prolific severe weather days were high risk days. Such days are rare; a high risk is typically issued (at the most) only a few times each year (see List of Storm Prediction Center high risk days). High risk areas are usually surrounded by a larger moderate risk area, where uncertainty is greater or the threat is somewhat lower.
The Storm Prediction Center began asking for public comment on proposed categorical additions to the Day 1-3 Convective Outlooks on April 21, 2014, for a two-month period. The Storm Prediction Center broadened this system beginning on October 22, 2014 by adding two new risk categories to the three used originally. The new categories that were added are a "marginal risk" (replacing the "SEE TEXT" contours, see below) and an "enhanced risk". The latter is used to delineate areas where severe weather will occur that would fall under the previous probability criteria of an upper-end slight risk, but do not warrant the issuance of a moderate risk. In order from least to greatest threat, these categories are ranked as: marginal, slight, enhanced, moderate, and high.
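To make the ordering concrete, the sketch below encodes the five categorical risk levels (plus the non-severe general-thunderstorm level) in the order just described. It is a hypothetical data structure for illustration only, not an official SPC format.

```python
# Hypothetical encoding of SPC categorical risk levels, ordered from least
# to greatest threat as described above; illustration only, not an SPC format.
from enum import IntEnum

class ConvectiveRisk(IntEnum):
    TSTM = 0      # general (non-severe) thunderstorms
    MARGINAL = 1  # MRGL - isolated severe storms
    SLIGHT = 2    # SLGT - scattered severe storms
    ENHANCED = 3  # ENH  - more numerous/concentrated severe storms
    MODERATE = 4  # MDT  - widespread and/or more dangerous severe weather
    HIGH = 5      # HIGH - major severe weather outbreak expected

def highest_risk(risks):
    """Return the most severe category among several outlook areas."""
    return max(risks)

print(highest_risk([ConvectiveRisk.SLIGHT, ConvectiveRisk.ENHANCED]).name)  # ENHANCED
```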
Issuance and usage
Convective outlooks are issued by the Storm Prediction Center in Zulu time (also known as Universal Coordinated Time or UTC).
The categories described above refer to the risk levels for the specific severe weather event occurring within 25 miles (40 km) of any point in the delineated region, as described in the previous section. The Day 1 Convective Outlook, issued five times per day at 0600Z (valid from 1200Z of the current day until 1200Z the following day), 1300Z and 1630Z (the "morning updates," valid until 1200Z the following day), 2000Z (the "afternoon update," valid until 1200Z the following day), and 0100Z (the "evening update," valid until 1200Z the following day), provides a textual forecast, a map of categories and probabilities, and a chart of probabilities. Prior to January 28, 2020, the Day 1 outlook was the only one to give specific probabilities for tornadoes, hail or wind. It is the most descriptive and highest-accuracy outlook, and typically has the highest probability levels.
Day 2 outlooks, issued twice daily at 0600Z in Daylight Saving Time or 0700Z in Standard Time and 1730Z, refer to predicted risks of convective weather for the following day (1200Z to 1200Z of the next calendar day; for example, a Day 2 outlook issued on April 12, 2100, would be valid from 1200Z on April 13, 2100, through 1200Z on April 14, 2100) and include only a categorical outline, textual description, and a map of categories and probabilities. Day 2 moderate risks are fairly uncommon, and a Day 2 high risk has only been issued twice (for April 7, 2006 and for April 14, 2012). Probabilities for tornadoes, hail and wind applying to the Day 1 Convective Outlook were incorporated into the Day 2 Convective Outlook on January 28, 2020, citing research to SPC operations and improvements in numerical forecast guidance that have increased forecaster confidence in risk estimation for those hazards in that timeframe. The individual hazard probabilistic forecasts replaced the existing "total severe" probability graph for general severe convective storms that had been used for the Day 2 outlook beforehand.
Day 3 outlooks refer to the day after tomorrow, issued twice daily since August 13, 2024 at 0730Z in Daylight Saving Time or 0830Z in Standard Time and 1930Z and include the same products (categorical outline, text description, and probability graph) as the Day 2 outlook. As of June 2012, the SPC forecasts general thunderstorm risk areas. Higher probability forecasts are less and less likely as the forecast period increases due to lessening forecast ability farther in advance. Day 3 moderate risks are quite rare; these have been issued only twenty times since the product became operational (most recently for March 22, 2022). Day 3 high risks are never issued and the operational standards do not allow for such. This is most likely because it would require both a very high degree of certainty (60%) for an event which was still at least 48 hours away and a reasonable level of confidence that said severe thunderstorm outbreak would include significant severe weather (EF2+ tornadoes, hurricane-force winds, and/or egg-sized hail).
Day 4–8 outlooks are the longest-term official SPC Forecast Product, and often change significantly from day to day. This extended forecast for severe weather was an experimental product until March 22, 2007, when the Storm Prediction Center incorporated it as an official product. Areas are delineated in this forecast that have least a 15% or 30% chance of severe weather in the Day 4–8 period (equivalent to a slight risk and an enhanced risk, respectively); as forecaster confidence is not fully resolute on how severe weather will evolve more than three days out, the Day 4–8 outlook only outlines the areas in which severe thunderstorms are forecast to occur during the period at the 15% and 30% likelihood, and does not utilize other categorical risk areas or outline where general (non-severe) thunderstorm activity will occur.
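The issuance schedule described in the preceding paragraphs can be summarized as a simple lookup table; the sketch below is an illustrative restatement of those times (all in UTC/"Zulu"), not an official SPC product or feed.

```python
# Illustrative summary of the convective outlook issuance times described
# above (UTC / "Zulu"); not an official SPC data feed.
OUTLOOK_ISSUANCE_Z = {
    "Day 1":   ["0600", "1300", "1630", "2000", "0100"],
    "Day 2":   ["0600 (DST) / 0700 (standard)", "1730"],
    "Day 3":   ["0730 (DST) / 0830 (standard)", "1930"],
    "Day 4-8": ["issued daily; time not stated above (15% and 30% areas only)"],
}

for outlook, times in OUTLOOK_ISSUANCE_Z.items():
    print(f"{outlook} outlook: {', '.join(times)}")
```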
Local forecast offices of the National Weather Service, radio and television stations, and emergency planners often use the forecasts to gauge the potential severe weather threats to their areas. Even after the marginal and enhanced risk categories were added in October 2014, some television stations have continued to use the original three-category system to outline forecasted severe weather risks (though stations that do this may utilize in-house severe weather outlooks that vary to some degree from the SPC convective outlooks), while certain others that have switched to the current system have chosen not to outline marginal risk areas.
Generally, the convective outlook boundaries or lines – general thunderstorms (light green), marginal (dark green), slight (yellow), enhanced (orange), moderate (red) and high (purple) – will be continued as an arrow or line not filled with color if the risk area enters another country (Canada or Mexico) or across waters beyond the United States coastline. This indicates that the risk for severe weather is also valid in that general area of the other side of the border or oceanic boundary.
Mesoscale discussions
SPC mesoscale discussions (MDs) once covered convection (mesoscale convective discussions [MCDs]) and precipitation (mesoscale precipitation discussions [MPDs]); MPDs are now issued by the Weather Prediction Center (WPC). MCDs generally precede the issuance of a tornado or severe thunderstorm watch, by one to three hours when possible. Mesoscale discussions are designed to give local forecasters an update on a region where a severe weather threat is emerging and an indication of whether a watch is likely and details thereof, as well as situations of isolated severe weather when watches are not necessary. MCDs contain meteorological information on what is happening and what is expected to occur in the next few hours, and forecast reasoning in regard to weather watches. Mesoscale discussions are often issued to update information on watches already in effect, and sometimes when one is to be canceled. Mesoscale discussions are occasionally used as advance notice of a categorical upgrade of a scheduled convective outlook.
Meso-gamma mesoscale discussion
SPC mesoscale discussions issued for high-impact, high-confidence situations involving strong tornadoes (EF2+) or exceptionally damaging winds are called meso-gamma mesoscale discussions. Meso-gamma mesoscale discussions are rarely issued; to date, the Storm Prediction Center has issued 42 of them.
Weather watches
Watches (WWs) issued by the SPC are generally less than in area and are normally preceded by a mesoscale discussion. Watches are intended to be issued preceding the arrival of severe weather by one to six hours. They indicate that conditions are favorable for thunderstorms capable of producing various modes of severe weather, including large hail, damaging straight-line winds and/or tornadoes. In the case of severe thunderstorm watches organized severe thunderstorms are expected but conditions are not thought to be especially favorable for tornadoes (although they can occur in such areas where one is in effect, and some severe thunderstorm watch statements issued by the SPC may note a threat of isolated tornadic activity if conditions are of modest favorability for storm rotation capable of inducing them), whereas for tornado watches conditions are thought to be favorable for severe thunderstorms to produce tornadoes.
In situations where a forecaster expects a significant threat of extremely severe and life-threatening weather, a watch with special enhanced wording, "Particularly Dangerous Situation" (PDS), is subjectively issued. It is occasionally issued with tornado watches, normally for the potential of major tornado outbreaks, especially those with a significant threat of multiple tornadoes capable of producing F4/EF4 and F5/EF5 damage and/or staying on the ground for long-duration – sometimes uninterrupted – paths. A PDS severe thunderstorm watch is very rare and is typically reserved for derecho events impacting densely populated areas.
Watches are not "warnings", where there is an immediate severe weather threat to life and property. Although severe thunderstorm and tornado warnings are ideally the next step after watches, watches cover a threat of organized severe thunderstorms over a larger area and may not always precede a warning; watch "busts" do sometimes occur should thunderstorm activity not occur at all or that which does develop never reaches the originally forecast level of severity. Warnings are issued by local National Weather Service offices, not the Storm Prediction Center, which is a national guidance center.
The process of issuing a convective watch begins with a conference call from SPC to local NWS offices. If after collaboration a watch is deemed necessary, the Storm Prediction Center will issue a watch approximation product which is followed by the local NWS office issuing a specific county-based watch product. The latter product is responsible for triggering public alert messages via television, radio stations and NOAA Weather Radio. The watch approximation product outlines specific regions covered by the watch (including the approximate outlined area in statute miles) and its time of expiration (based on the local time zone(s) of the areas under the watch), associated potential threats, a meteorological synopsis of atmospheric conditions favorable for severe thunderstorm development, forecasted aviation conditions, and a pre-determined message informing the public of the meaning behind the watch and to be vigilant of any warnings or weather statements that may be issued by their local National Weather Service office.
Watch outline products provide a visual map depiction of the issued watch; the SPC typically delineates watches within this product in the form of "boxes," which technically are represented as either squares, rectangles (horizontal or vertical) or parallelograms depending on the area it covers. Jurisdictions outlined by the county-based watch product as being included in the watch area may differ from the actual watch box; as such, certain counties, parishes or boroughs not covered by the fringes of the watch box may actually be included in the watch and vice versa. Watches can be expanded, contracted (by removing jurisdictions where SPC and NWS forecasters no longer consider there to be a viable threat of severe weather, in which case, the watch box may take on a trapezoidal representation in map-based watch products) or canceled before their set time of expiration by local NWS offices.
Fire weather products
The Storm Prediction Center also is responsible for issuing fire weather outlooks (FWD) for the continental United States. These outlooks are a guidance product for local, state and federal government agencies, including local National Weather Service offices, in forecasting the potential for wildfires. The outlooks issued are for Day 1, Day 2, and Days 3–8. The Day 1 product is issued at 4:00 a.m. Central Time and is updated at 1700Z, and is valid from 1200Z to 1200Z the following day. The Day 2 outlook is issued at 1000Z and is updated at 2000Z for the forecast period of 1200Z to 1200Z the following day. The Day 3–8 outlook is issued at 2200Z, and is valid from 1200Z two days after the current calendar date to 1200Z seven days after the current calendar date.
There are four types of Fire Weather Outlook areas: "See Text", a "Critical Fire Weather Area for Wind and Relative Humidity", an "Extremely Critical Fire Weather Area for Wind and Relative Humidity", and a "Critical Fire Weather Area for Dry Thunderstorms". The outlook type depends on the forecast weather conditions, the severity of the predicted threat, and the local climatology of a forecast region. "See Text" is a map label used for outlining areas where fire potential is great enough to pose a limited threat, but not enough to warrant a critical area, similar to areas using the same notation that were formerly outlined in convective outlooks. Critical Fire Weather Areas for Wind and Relative Humidity are typically issued when strong winds (with a lower threshold for Florida) and low relative humidity (usually < 20%) are expected to occur where dried fuels exist, similar to a slight, enhanced, or moderate risk of severe weather. Critical Fire Weather Areas for Dry Thunderstorms are typically issued when widespread or numerous thunderstorms producing too little rainfall to wet the ground sufficiently are expected to occur where dried fuels exist. Extremely Critical Fire Weather Areas for Wind and Relative Humidity are issued when very strong winds and very low humidity are expected to occur with very dry fuels. Extremely Critical areas are issued relatively rarely, similar to the very low frequency of high risk areas in convective outlooks (see List of Storm Prediction Center extremely critical days).
See also
National Weather Service Norman, Oklahoma – the Weather Forecast Office located adjacent to the Storm Prediction Center within the National Weather Center, which serves central and western Oklahoma and northwestern Texas
Severe weather terminology (United States)
Chris Broyles, a forecaster at the Storm Prediction Center
References
External links
SPC products descriptions
Norman, Oklahoma
History of Kansas City, Missouri
National Centers for Environmental Prediction
Weather prediction
1995 establishments in Oklahoma | Storm Prediction Center | [
"Physics"
] | 5,858 | [
"Weather",
"Weather prediction",
"Physical phenomena"
] |
536,313 | https://en.wikipedia.org/wiki/Polycarbonate | Polycarbonates (PC) are a group of thermoplastic polymers containing carbonate groups in their chemical structures. Polycarbonates used in engineering are strong, tough materials, and some grades are optically transparent. They are easily worked, molded, and thermoformed. Because of these properties, polycarbonates find many applications. Polycarbonates do not have a unique resin identification code (RIC) and are identified as "Other", 7 on the RIC list. Products made from polycarbonate can contain the precursor monomer bisphenol A (BPA).
Structure
Carbonate esters have planar OC(OC)2 cores, which confer rigidity. The unique O=C bond is short (about 1.17 Å), while the C–O bonds are more ether-like (bond distances of about 1.33 Å). Polycarbonates received their name because they are polymers containing carbonate groups (−O−(C=O)−O−). A balance of useful features, including temperature resistance, impact resistance and optical properties, positions polycarbonates between commodity plastics and engineering plastics.
Production
Phosgene route
The main polycarbonate material is produced by the reaction of bisphenol A (BPA) and phosgene (COCl2). The overall reaction can be written as follows:
(HOC6H4)2CMe2 + COCl2 → 1/n [OC(OC6H4)2CMe2]n + 2 HCl
The first step of the synthesis involves treatment of bisphenol A with sodium hydroxide, which deprotonates the hydroxyl groups of the bisphenol A.
(HOC6H4)2CMe2 + 2 NaOH → Na2(OC6H4)2CMe2 + 2 H2O
The diphenoxide (Na2(OC6H4)2CMe2) reacts with phosgene to give a chloroformate, which subsequently is attacked by another phenoxide. The net reaction from the diphenoxide is:
Na2(OC6H4)2CMe2 + COCl2 → 1/n [OC(OC6H4)2CMe2]n + 2 NaCl
In this way, approximately one billion kilograms of polycarbonate is produced annually. Many other diols have been tested in place of bisphenol A, e.g. 1,1-bis(4-hydroxyphenyl)cyclohexane and dihydroxybenzophenone. The cyclohexane is used as a comonomer to suppress crystallisation tendency of the BPA-derived product. Tetrabromobisphenol A is used to enhance fire resistance. Tetramethylcyclobutanediol has been developed as a replacement for BPA.
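A minimal stoichiometry sketch of the phosgene route described above: one BPA unit and one phosgene molecule yield one carbonate repeat unit plus two equivalents of salt. The molar masses below are computed from standard atomic weights (not given in the article), and end groups are neglected, so the figures are approximate.

```python
# Stoichiometry sketch for the phosgene route shown above (end groups neglected).
# Atomic weights are standard values, not taken from the article.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999, "Cl": 35.45}

def molar_mass(formula):
    """Molar mass of a formula given as {element: count}, in g/mol."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

bpa         = molar_mass({"C": 15, "H": 16, "O": 2})  # (HOC6H4)2CMe2, ~228.3 g/mol
phosgene    = molar_mass({"C": 1, "O": 1, "Cl": 2})   # COCl2, ~98.9 g/mol
repeat_unit = molar_mass({"C": 16, "H": 14, "O": 3})  # [OC(OC6H4)2CMe2], ~254.3 g/mol

# Approximate feedstock required per kilogram of polymer:
print(round(bpa / repeat_unit, 2))       # ~0.90 kg BPA per kg polycarbonate
print(round(phosgene / repeat_unit, 2))  # ~0.39 kg phosgene per kg polycarbonate
```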
Transesterification route
An alternative route to polycarbonates entails transesterification from BPA and diphenyl carbonate:
(HOC6H4)2CMe2 + (C6H5O)2CO → 1/n [OC(OC6H4)2CMe2]n + 2 C6H5OH
Properties and processing
Polycarbonate is a durable material. Although it has high impact resistance, it has low scratch resistance; therefore, a hard coating is applied to polycarbonate eyewear lenses and polycarbonate exterior automotive components. The characteristics of polycarbonate are comparable to those of polymethyl methacrylate (PMMA, acrylic), but polycarbonate is stronger and holds up longer at extreme temperatures. Thermally processed material is usually totally amorphous, and as a result is highly transparent to visible light, with better light transmission than many kinds of glass.
Polycarbonate has a glass transition temperature of about 147 °C (297 °F), so it softens gradually above this point and flows above about 155 °C (311 °F). Tools must be held at high temperatures, generally above 80 °C (176 °F), to make strain-free and stress-free products. Low molecular mass grades are easier to mold than higher grades, but their strength is lower as a result. The toughest grades have the highest molecular mass, but are more difficult to process.
Unlike most thermoplastics, polycarbonate can undergo large plastic deformations without cracking or breaking. As a result, it can be processed and formed at room temperature using sheet metal techniques, such as bending on a brake. Even for sharp angle bends with a tight radius, heating may not be necessary. This makes it valuable in prototyping applications where transparent or electrically non-conductive parts are needed, which cannot be made from sheet metal. PMMA/Acrylic, which is similar in appearance to polycarbonate, is brittle and cannot be bent at room temperature.
Main transformation techniques for polycarbonate resins:
extrusion into tubes, rods and other profiles including multiwall
extrusion with cylinders (calenders) into sheets () and films (below ), which can be used directly or manufactured into other shapes using thermoforming or secondary fabrication techniques, such as bending, drilling, or routing. Due to its chemical properties it is not conducive to laser-cutting.
injection molding into ready articles
Polycarbonate may become brittle when exposed to ionizing radiation above
Applications
Electronic components
Polycarbonate is mainly used for electronic applications that capitalize on its collective safety features. A good electrical insulator with heat-resistant and flame-retardant properties, it is used in products associated with power systems and telecommunications hardware. It can serve as a dielectric in high-stability capacitors. Commercial manufacture of polycarbonate capacitors mostly stopped after sole manufacturer Bayer AG stopped making capacitor-grade polycarbonate film at the end of 2000.
Construction materials
The second largest consumer of polycarbonates is the construction industry, e.g. for domelights, flat or curved glazing, roofing sheets and sound walls.
Polycarbonates are used to create materials used in buildings that must be durable but light.
3D printing
Polycarbonates are used extensively in 3D FDM printing, producing strong, durable plastic products with a high melting point. Polycarbonate is relatively difficult for casual hobbyists to print compared to thermoplastics such as polylactic acid (PLA) or acrylonitrile butadiene styrene (ABS) because of the high melting point, difficulty with print bed adhesion, tendency to warp during printing, and tendency to absorb moisture in humid environments. Despite these issues, 3D printing using polycarbonates is common in the professional community.
Data storage
A major polycarbonate market is the production of compact discs, DVDs, and Blu-ray discs. These discs are produced by injection-molding polycarbonate into a mold cavity that has on one side a metal stamper containing a negative image of the disc data, while the other mold side is a mirrored surface. Typical products of sheet/film production include applications in advertisement (signs, displays, poster protection).
Automotive, aircraft, and security components
In the automotive industry, injection-molded polycarbonate can produce very smooth surfaces that make it well-suited for sputter deposition or evaporation deposition of aluminium without the need for a base-coat. Decorative bezels and optical reflectors are commonly made of polycarbonate.
Its low weight and high impact resistance have made polycarbonate the dominant material for automotive headlamp lenses. However, headlamp lenses require outer surface coatings because of polycarbonate's low scratch resistance and susceptibility to ultraviolet degradation (yellowing). The use of polycarbonate in automobiles is limited to low-stress applications: stress from fasteners, plastic welding and molding renders polycarbonate susceptible to stress corrosion cracking when it comes in contact with certain accelerants such as salt water and plastisol. It can be laminated to make bullet-proof "glass", although "bullet-resistant" is more accurate for the thinner windows used in automobiles. The thicker barriers of transparent plastic used in teller windows and other barriers in banks are also polycarbonate.
So-called "theft-proof" large plastic packaging for smaller items, which cannot be opened by hand, is typically made from polycarbonate.
The cockpit canopy of the Lockheed Martin F-22 Raptor jet fighter is fabricated from high optical quality polycarbonate. It is the largest item of its type.
Niche applications
Polycarbonate, being a versatile material with attractive processing and physical properties, has attracted myriad smaller applications. The use of injection molded drinking bottles, glasses and food containers is common, but the use of BPA in the manufacture of polycarbonate has stirred concerns (see Potential hazards in food contact applications), leading to development and use of "BPA-free" plastics in various formulations.
Polycarbonate is commonly used in eye protection, as well as in other projectile-resistant viewing and lighting applications that would normally indicate the use of glass, but require much higher impact-resistance. Polycarbonate lenses also protect the eye from UV light. Many kinds of lenses are manufactured from polycarbonate, including automotive headlamp lenses, lighting lenses, sunglass/eyeglass lenses, camera lenses, swimming goggles and SCUBA masks, and safety glasses/goggles/visors including visors in sporting helmets/masks and police riot gear (helmet visors, riot shields, etc.). Windscreens in small motorized vehicles are commonly made of polycarbonate, such as for motorcycles, ATVs, golf carts, and small airplanes and helicopters.
The light weight of polycarbonate as opposed to glass has led to development of electronic display screens that replace glass with polycarbonate, for use in mobile and portable devices. Such displays include newer e-ink and some LCD screens, though CRT, plasma screen and other LCD technologies generally still require glass for its higher melting temperature and its ability to be etched in finer detail.
As more and more governments are restricting the use of glass in pubs and clubs due to the increased incidence of glassings, polycarbonate glasses are becoming popular for serving alcohol because of their strength, durability, and glass-like feel.
Other miscellaneous items include durable, lightweight luggage, MP3/digital audio player cases, ocarinas, computer cases, riot shields, instrument panels, tealight candle containers and food blender jars. Many toys and hobby items are made from polycarbonate parts, like fins, gyro mounts, and flybar locks in radio-controlled helicopters, and transparent LEGO (ABS is used for opaque pieces).
Standard polycarbonate resins are not suitable for long term exposure to UV radiation. To overcome this, the primary resin can have UV stabilisers added. These grades are sold as UV stabilized polycarbonate to injection moulding and extrusion companies. Other applications, including polycarbonate sheets, may have the anti-UV layer added as a special coating or a coextrusion for enhanced weathering resistance.
Polycarbonate is also used as a printing substrate for nameplate and other forms of industrial grade under printed products. The polycarbonate provides a barrier to wear, the elements, and fading.
Medical applications
Many polycarbonate grades are used in medical applications and comply with both ISO 10993-1 and USP Class VI standards (occasionally referred to as PC-ISO). Class VI is the most stringent of the six USP ratings. These grades can be sterilized using steam at 120 °C, gamma radiation, or the ethylene oxide (EtO) method. Trinseo strictly limits the use of all of its plastics in medical applications. Aliphatic polycarbonates have been developed with improved biocompatibility and degradability for nanomedicine applications.
Mobile phones
Some smartphone manufacturers use polycarbonate. Nokia used polycarbonate in their phones starting with the N9's unibody case in 2011. This practice continued with various phones in the Lumia series. Samsung started using polycarbonate with Galaxy S III's hyperglaze-branded removable battery cover in 2012. This practice continues with various phones in the Galaxy series. Apple started using polycarbonate with the iPhone 5C's unibody case in 2013.
Benefits over glass and metal back covers include durability against shattering (advantage over glass), bending and scratching (advantage over metal), shock absorption, low manufacturing costs, and no interference with radio signals and wireless charging (advantage over metal).
Polycarbonate back covers are available in glossy or matte surface textures.
History
Polycarbonates were first discovered in 1898 by Alfred Einhorn, a German scientist working at the University of Munich. However, after 30 years' laboratory research, this class of materials was abandoned without commercialization. Research resumed in 1953, when Hermann Schnell at Bayer in Uerdingen, Germany patented the first linear polycarbonate. The brand name "Makrolon" was registered in 1955.
Also in 1953, and one week after the invention at Bayer, Daniel Fox at General Electric (GE) in Pittsfield, Massachusetts, independently synthesized a branched polycarbonate. Both companies filed for U.S. patents in 1955, and agreed that the company lacking priority would be granted a license to the technology.
Patent priority was resolved in Bayer's favor, and Bayer began commercial production under the trade name Makrolon in 1958. GE began production under the name Lexan in 1960, creating the GE Plastics division in 1973.
After 1970, the original brownish polycarbonate tint was improved to "glass-clear".
Potential hazards in food contact applications
The use of polycarbonate containers for food storage is controversial. The basis of this controversy is that hydrolysis (degradation by water, often referred to as leaching) of the polymer at high temperature releases bisphenol A:
1/n [OC(OC6H4)2CMe2]n + H2O → (HOC6H4)2CMe2 + CO2
More than 100 studies have explored the bioactivity of bisphenol A derived from polycarbonates. Bisphenol A appeared to be released from polycarbonate animal cages into water at room temperature and it may have been responsible for enlargement of the reproductive organs of female mice. However, the animal cages used in the research were fabricated from industrial grade polycarbonate, rather than FDA food grade polycarbonate.
An analysis of the literature on bisphenol A leachate low-dose effects by vom Saal and Hughes published in August 2005 seems to have found a suggestive correlation between the source of funding and the conclusion drawn. Industry-funded studies tend to find no significant effects whereas government-funded studies tend to find significant effects.
Sodium hypochlorite bleach and other alkali cleaners catalyze the release of the bisphenol A from polycarbonate containers. Polycarbonate is incompatible with ammonia and acetone. Alcohol is a recommended organic solvent for cleaning grease and oils from polycarbonate.
Environmental impact
Disposal
Studies have shown that at temperatures above 70 °C and high humidity, polycarbonate will hydrolyze to bisphenol A (BPA). After about 30 days at 85 °C/96% RH, surface crystals form that consist of about 70% BPA. BPA is currently regarded as a potentially hazardous chemical to the environment and is on the watch list of many countries, such as the United States and Germany.
[(OC6H4)2C(CH3)2CO]n + H2O → (CH3)2C(C6H4OH)2 + CO2
The leaching of BPA from polycarbonate can also occur at environmental temperature and normal pH (in landfills). The amount of leaching increases as the polycarbonate parts get older. A study found that the decomposition of BPA in landfills (under anaerobic conditions) will not occur, so it will be persistent in landfills. Eventually, it will find its way into water bodies and contribute to aquatic pollution.
Photo-oxidation of polycarbonate
In the presence of UV light, oxidation of this polymer yields compounds such as ketones, phenols, o-phenoxybenzoic acid, benzyl alcohol and other unsaturated compounds. This has been suggested through kinetic and spectral studies. The yellow color that forms after long exposure to sun can also be related to further oxidation of the phenolic end group:
[(OC6H4)2C(CH3)2CO]n + O2, R* → [(OC6H4)2C(CH3CH2)CO]n
This product can be further oxidized to form smaller unsaturated compounds. This can proceed via two different pathways; the products formed depend on which mechanism takes place.
Pathway A
[(OC6H4)2C(CH3CH2)CO]n + O2, H* → HO(OC6H4)OCO + CH3COCH2(OC6H4)OCO
Pathway B
[(OC6H4)2C(CH3CH2)CO]n + O2, H* → OCO(OC6H4)CH2OH + OCO(OC6H4)COCH3
Photo-aging reaction
Photo-aging is another degradation route for polycarbonates. Polycarbonate molecules (such as the aromatic ring) absorb UV radiation. This absorbed energy causes cleavage of covalent bonds which initiates the photo-aging process. The reaction can be propagated via side chain oxidation, ring oxidation or photo-Fries rearrangement. Products formed include phenyl salicylate, dihydroxybenzophenone groups, and hydroxydiphenyl ether groups.
(C16H14O3)n → C16H17O3 + C13H10O3
Thermal degradation
Waste polycarbonate will degrade at high temperatures to form solid, liquid and gaseous pollutants. A study showed that the products were about 40–50 wt.% liquid and 14–16 wt.% gases, while 34–43 wt.% remained as solid residue. The liquid products consisted mainly of phenol derivatives (~75 wt.%), with bisphenol (~10 wt.%) also present. Polycarbonate, however, can be safely used as a carbon source in the steel-making industry.
Phenol derivatives are environmental pollutants, classified as volatile organic compounds (VOC). Studies show they are likely to facilitate ground level ozone formation and increase photo-chemical smog. In aquatic bodies, they can potentially accumulate in organisms. They are persistent in landfills, do not readily evaporate and would remain in the atmosphere.
Effect of fungi
In 2001 a species of fungus in Belize, Geotrichum candidum, was found to consume the polycarbonate found in compact discs (CD). This has prospects for bioremediation. However, this effect has not been reproduced.
See also
CR-39, allyl diglycol carbonate (ADC) used for eyeglasses
Mobile phone accessories
Organic electronics
Thermoplastic polyurethane
Vapor polishing
References
External links
Commodity chemicals
Dielectrics
Optical materials
Plastics
Thermoplastics
Transparent materials
German inventions | Polycarbonate | [
"Physics",
"Chemistry"
] | 4,018 | [
"Physical phenomena",
"Commodity chemicals",
"Products of chemical industry",
"Unsolved problems in physics",
"Optical phenomena",
"Materials",
"Optical materials",
"Transparent materials",
"Dielectrics",
"Amorphous solids",
"Matter",
"Plastics"
] |
536,317 | https://en.wikipedia.org/wiki/Modus%20ponendo%20tollens | Modus ponendo tollens (MPT; Latin: "mode that denies by affirming") is a valid rule of inference for propositional logic. It is closely related to modus ponens and modus tollendo ponens.
Overview
MPT is usually described as having the form:
Not both A and B
A
Therefore, not B
For example:
Ann and Bill cannot both win the race.
Ann won the race.
Therefore, Bill cannot have won the race.
As E. J. Lemmon describes it: "Modus ponendo tollens is the principle that, if the negation of a conjunction holds and also one of its conjuncts, then the negation of its other conjunct holds."
In logic notation this can be represented as:
¬(A ∧ B), A ∴ ¬B
Based on the Sheffer stroke (alternative denial), "|", the inference can also be formalized in this way:
A | B, A ∴ ¬B
Proof
Strong form
Modus ponendo tollens can be made stronger by using exclusive disjunction instead of non-conjunction as a premise:
A ⊕ B, A ∴ ¬B
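Because MPT is a propositional rule, its validity (and that of the stronger exclusive-disjunction form) can be checked mechanically by enumerating all truth-value assignments. The following Python sketch does exactly that; it is only an illustration of the rule, not drawn from the cited literature.

```python
from itertools import product

def is_valid(premises, conclusion):
    """An argument is valid if the conclusion holds whenever all premises hold."""
    return all(conclusion(a, b)
               for a, b in product([False, True], repeat=2)
               if all(p(a, b) for p in premises))

# Modus ponendo tollens: not both A and B; A; therefore not B.
mpt = is_valid([lambda a, b: not (a and b), lambda a, b: a],
               lambda a, b: not b)

# Strong form: A xor B (exclusive disjunction); A; therefore not B.
strong = is_valid([lambda a, b: a != b, lambda a, b: a],
                  lambda a, b: not b)

print(mpt, strong)  # True True
```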
See also
Modus tollendo ponens
Stoic logic
References
Latin logical phrases
Rules of inference
Theorems in propositional logic
nl:Modus tollens#Modus ponendo tollens | Modus ponendo tollens | [
"Mathematics"
] | 261 | [
"Theorems in propositional logic",
"Rules of inference",
"Theorems in the foundations of mathematics",
"Proof theory"
] |
536,505 | https://en.wikipedia.org/wiki/Rod%20%28unit%29 | The rod, perch, or pole (sometimes also lug) is a surveyor's tool and unit of length of various historical definitions. In British imperial and US customary units, it is defined as 16.5 feet, equal to exactly 1/320 of a mile, or 5.5 yards (a quarter of a surveyor's chain), and is exactly 5.0292 meters. The rod is useful as a unit of length because integer multiples of it can form one acre of square measure (area). The 'perfect acre' is a rectangular area of 43,560 square feet, bounded by sides 660 feet (a furlong) long and 66 feet (a chain) wide (220 yards by 22 yards) or, equivalently, 40 rods by 4 rods. An acre is therefore 160 square rods or 10 square chains.
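The relationships in the paragraph above are all simple integer ratios, so they are easy to verify numerically. The Python sketch below restates them; the only inputs are the definition of the international foot in metres and of the rod in feet.

```python
FOOT_M = 0.3048          # international foot, metres
ROD_FT = 16.5            # rod = 16.5 feet

rod_m      = ROD_FT * FOOT_M          # 5.0292 m
chain_ft   = 4 * ROD_FT               # 66 ft
furlong_ft = 40 * ROD_FT              # 660 ft
mile_ft    = 8 * furlong_ft           # 5280 ft, so a rod is 1/320 mile

acre_sqft      = chain_ft * furlong_ft                        # 66 ft x 660 ft
acre_sq_rods   = (chain_ft / ROD_FT) * (furlong_ft / ROD_FT)  # 4 x 40
acre_sq_chains = acre_sqft / chain_ft**2                      # 10 square chains

print(round(rod_m, 4), mile_ft / ROD_FT, acre_sqft, acre_sq_rods, acre_sq_chains)
# 5.0292  320.0  43560.0  160.0  10.0
```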
The name perch derives from the Ancient Roman unit, the pertica.
The measure also has a relationship with the military pike of about the same size. Both measures date from the sixteenth century, when the pike was still utilized in national armies. The tool has been supplanted, first by steel tapes and later by electronic tools such as surveyor lasers and optical target devices for surveying lands. In dialectal English, the term lug has also been used, although the Oxford English Dictionary states that this unit, while usually of 16.5 feet, may also be of 15, 18, 20, or 21 feet.
In the United States until 1 January 2023, the rod was often defined as 16.5 US survey feet, or approximately 5.029 210 058 m.
History
In England, the perch was officially discouraged in favour of the rod as early as the 15th century; however, local customs maintained its use. In the 13th century, perches were variously recorded in lengths of , , and ; and even as late as 1820, a House of Commons report notes lengths of , , , , and even . In Ireland, a perch was standardized at 21 feet, making an Irish chain, furlong and mile proportionately longer by 27.27% than the "standard" English measure.
Until English King Henry VIII seized the lands of the Roman Catholic Church in 1536, land measures as we now know them were essentially unknown. Instead a narrative system of landmarks and lists was used. Henry wanted to raise even more funds for his wars than he'd seized directly from church property (he'd also assumed the debts of the monasteries), and, as James Burke writes in the book Connections, the English monk Richard Benese "produced a book on how to survey land using the simple tools of the time, a rod with cord carrying knots at certain intervals, waxed and resined against wet weather." Benese poetically described the measure of an acre in terms of a perch:
The practice of using surveyor's chains, and perch-length rods made into a detachable stiff chain, came about a century later when iron was a more plentiful and common material. A chain is a larger unit of length measuring 66 feet, or 22 yards, or 100 links, or 4 rods (20.1168 meters). There are 10 chains or 40 rods in a furlong (eighth-mile), and so 80 chains or 320 rods in one statute mile (1760 yards, 1609.344 m, 1.609344 km), the definition of which was legally set in 1593 and popularized by the Royal surveyor (called the 'sworn viewer') John Ogilby only after the Great Fire of London (1666).
An acre is defined as the area of 10 square chains (that is, an area of one chain by one furlong), and derives from the shapes of new-tech plows and the desire to quickly survey seized church lands into a quantity of squares for quick sales by Henry VIII's agents; buyers simply wanted to know what they were buying, whereas Henry was raising cash for wars against Scotland and France. Consequently, the surveyor's chain and surveyor rods or poles (the perch) have been used for several centuries in Britain and in many other countries influenced by British practices, such as North America and Australia. By the time of the industrial revolution, with the quickening of land sales and of canal and railway surveys, surveyor's rods such as those used by George Washington were generally made of dimensionally stable metal—semi-flexible drawn wrought iron linkable bar stock (not steel)—such that the four folded elements of a chain were easily transportable through brush and branches when carried by a single man of a surveyor's crew. With a direct ratio to the length of a surveyor's chain and the sides of both an acre and a square (mile), they were common tools used by surveyors, if only to lay out a known plottable baseline in rough terrain, thereafter serving as the reference line for instrumental (theodolite) triangulations.
The rod as a survey measure was standardized by Edmund Gunter in England in 1607 as a quarter of a chain (of 66 feet), or 16.5 feet long.
In ancient cultures
The perch (pertica) as a lineal measure in Rome (also decempeda) was 10 Roman feet (2.96 metres), and in France varied from 10 feet (perche romanie) to 22 feet (perche d'arpent—apparently of "the range of an arrow"—about 220 feet). To confuse matters further, by ancient Roman definition, an arpent equalled 120 Roman feet. The related unit of square measure was the scrupulum or decempeda quadrata, equivalent to about .
In continental Europe
Units comparable to the perch, pole or rod were used in many European countries, with names that include and canne, , and pertica, and . They were subdivided in many different ways, and were of many different lengths.
In Britain and Ireland
In England, the rod or perch was first defined in law by the Composition of Yards and Perches, one of the statutes of uncertain date from the late 13th to early 14th centuries: tres pedes faciunt ulnam, quinque ulne & dimidia faciunt perticam (three feet make a yard, five and a half yards make a perch).
The length of the chain was standardized in 1620 by Edmund Gunter at exactly four rods. Fields were measured in acres, which were one chain (four rods) by one furlong (in the United Kingdom, ten chains).
Bars of metal one rod long were used as standards of length when surveying land. The rod was still in use as a common unit of measurement in the mid-19th century, when Henry David Thoreau used it frequently when describing distances in his work, Walden.
In traditional Scottish units, a Scottish rood (ruid in Lowland Scots, ròd in Scottish Gaelic), also called a fall, measures 222 inches (6 ells).
Modern use
The rod was phased out as a legal unit of measurement in the United Kingdom as part of a ten-year metrication process that began on 24 May 1965.
In the United States, the rod, along with the chain, furlong, and statute mile (as well as the survey inch and survey foot) were based on the pre-1959 values for United States customary units of linear measurement until 1 January 2023. The Mendenhall Order of 1893 defined the yard as exactly 3600/3937 meters, with all other units of linear measurement, including the rod, based on the yard. In 1959, an international agreement (the international yard and pound agreement) defined the yard as the fundamental unit of length in the Imperial/USCU system, set at exactly 0.9144 metres. However, the above-noted units, when used in surveying, may retain their pre-1959 values, depending on the legislation in each state. The U.S. National Geodetic Survey and National Institute of Standards and Technology have replaced the definition for the above-mentioned units by the international 1959 definition of the foot, being exactly 0.3048 meters.
Despite no longer being in widespread use, the rod is still employed in certain specialized fields. In recreational canoeing, maps measure portages (overland paths where canoes must be carried) in rods; typical canoes are approximately one rod long. The term is also in widespread use in the acquisition of pipeline easements, as the offers for an easement are often expressed as a "price per rod".
In the United Kingdom, the sizes of allotment gardens continue to be measured in square poles in some areas, sometimes being referred to simply as poles rather than square poles.
In Vermont, the default right-of-way width of state and town highways and trails is three rods (49.5 feet). Rods can also be found in older legal descriptions of tracts of land in the United States, following the "metes and bounds" method of land survey, as shown in this actual legal description of rural real estate:
Area and volume
The terms pole, perch, rod and rood have been used as units of area, and perch is also used as a unit of volume. As a unit of area, a square perch (the perch being standardized to equal 16.5 feet, or 5.5 yards) is equal to a square rod, or 1/160 of an acre. There are 40 square perches to a rood (for example a rectangular area of 40 rods times one rod), and 160 square perches to an acre (for example a rectangular area of 40 rods times 4 rods). This unit is usually referred to as a perch or pole even though square perch and square pole were the more precise terms. Rod was also sometimes used as a unit of area to refer to a rood.
However, in the traditional French-based system in some countries, 1 square perche is 42.21 square metres.
As of August 2013, perches and roods are used as government survey units in Jamaica. They appear on most property title documents. The perch is also in extensive use in Sri Lanka, being favored even over the rood and acre in real estate listings there. Perches were informally used as a measure in Queensland real estate until the early 21st century, mostly for historical gazetted properties in older suburbs.
Volume
A traditional unit of volume for stone and other masonry. A perch of masonry is the volume of a stone wall one perch () long, high, and thick. This is equivalent to exactly .
There are two different measurements for a perch depending on the type of masonry that is being built:
A dressed stone work is measured by the -cubic foot perch () long, high, and thick. This is equivalent to exactly .
a brick work or rubble wall made of broken stone of irregular size, shape and texture, made of undressed stone, is measured by the () long, high, and thick. This is equivalent to exactly .
See also
Anthropic units
English units
References
Imperial units
Units of length
Customary units of measurement in the United States
Obsolete units of measurement
Units of measurement
Area | Rod (unit) | [
"Physics",
"Mathematics"
] | 2,226 | [
"Scalar physical quantities",
"Obsolete units of measurement",
"Units of measurement",
"Physical quantities",
"Units of length",
"Quantity",
"Size",
"Wikipedia categories named after physical quantities",
"Area"
] |
537,026 | https://en.wikipedia.org/wiki/List%20of%20mathematical%20topics%20in%20quantum%20theory | This is a list of mathematical topics in quantum theory, by Wikipedia page. See also list of functional analysis topics, list of Lie group topics, list of quantum-mechanical systems with analytical solutions.
Mathematical formulation of quantum mechanics
bra–ket notation
canonical commutation relation
complete set of commuting observables
Heisenberg picture
Hilbert space
Interaction picture
Measurement in quantum mechanics
quantum field theory
quantum logic
quantum operation
Schrödinger picture
semiclassical
statistical ensemble
wavefunction
wave–particle duality
Wightman axioms
WKB approximation
Schrödinger equation
quantum mechanics, matrix mechanics, Hamiltonian (quantum mechanics)
particle in a box
particle in a ring
particle in a spherically symmetric potential
quantum harmonic oscillator
hydrogen atom
ring wave guide
particle in a one-dimensional lattice (periodic potential)
Fock symmetry in theory of hydrogen
Symmetry
identical particles
angular momentum
angular momentum operator
rotational invariance
rotational symmetry
rotation operator
translational symmetry
Lorentz symmetry
Parity transformation
Noether's theorem
Noether charge
Spin (physics)
isospin
Gell-Mann matrices
scale invariance
spontaneous symmetry breaking
supersymmetry breaking
Quantum states
quantum number
Pauli exclusion principle
quantum indeterminacy
uncertainty principle
wavefunction collapse
zero-point energy
bound state
coherent state
squeezed coherent state
density state
Fock state, Fock space
vacuum state
quasinormal mode
no-cloning theorem
quantum entanglement
Dirac equation
spinor, spinor group, spinor bundle
Dirac sea
Spin foam
Poincaré group
gamma matrices
Dirac adjoint
Wigner's classification
anyon
Interpretations of quantum mechanics
Copenhagen interpretation
locality principle
Bell's theorem
Bell test loopholes
CHSH inequality
hidden variable theory
path integral formulation, quantum action
Bohm interpretation
many-worlds interpretation
Tsirelson's bound
Quantum field theory
Feynman diagram
One-loop Feynman diagram
Schwinger's quantum action principle
Propagator
Annihilation operator
S-matrix
Standard Model
Local quantum physics
Nonlocal
Effective field theory
Correlation function (quantum field theory)
Renormalizable
Cutoff
Infrared divergence, infrared fixed point
Ultraviolet divergence
Fermi's interaction
Path-ordering
Landau pole
Higgs mechanism
Wilson line
Wilson loop
Tadpole (physics)
Lattice gauge theory
BRST charge
Anomaly (physics)
Chiral anomaly
Braid statistics
Plekton
Computation
quantum computing
qubit
qutrit
pure qubit state
quantum dot
Kane quantum computer
quantum cryptography
quantum decoherence
quantum circuit
universal quantum computer
measurement based Quantum Computing
timeline of quantum computing
Supersymmetry
Lie superalgebra
supergroup (physics)
supercharge
supermultiplet
supergravity
Quantum gravity
theory of everything
loop quantum gravity
spin network
black hole thermodynamics
Non-commutative geometry
Quantum group
Hopf algebra
Noncommutative quantum field theory
String theory
See list of string theory topics
Matrix model
Quantum theory
Mathematics | List of mathematical topics in quantum theory | [
"Physics"
] | 578 | [
"Theoretical physics",
"Quantum mechanics"
] |
20,172,703 | https://en.wikipedia.org/wiki/Polarizable%20continuum%20model | The polarizable continuum model (PCM) is a commonly used method in computational chemistry to model solvation effects. If it is necessary to consider each solvent molecule as a separate molecule, the computational cost of modeling a solvent-mediated chemical reaction would grow prohibitively high. Modeling the solvent as a polarizable continuum, rather than individual molecules, makes ab initio computation feasible. Two types of PCMs have been popularly used: the dielectric PCM (D-PCM) in which the continuum is polarizable (see dielectrics) and the conductor-like PCM (C-PCM) in which the continuum is conductor-like similar to COSMO Solvation Model.
The molecular free energy of solvation is computed as the sum of three terms:
Gsol = Ges + Gdr + Gcav
Ges = electrostatic
Gdr = dispersion-repulsion
Gcav = cavitation
Charge-transfer effects are also considered part of solvation in some cases.
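As a toy illustration of the decomposition above, the sketch below adds the three contributions and estimates the electrostatic term with the simple Born model for a spherical ion in a dielectric continuum. The Born expression is only the crudest stand-in for Ges — actual PCM calculations solve for apparent surface charges on a molecule-shaped cavity — and the numerical inputs (charge, cavity radius, dispersion-repulsion and cavitation values) are arbitrary placeholders, not data from this article.

```python
import math

def born_electrostatic(charge_e: float, radius_m: float, eps_r: float) -> float:
    """Born estimate of the electrostatic solvation free energy, in J/mol.

    Simplest dielectric-continuum result for a point charge in a spherical
    cavity; used here only to illustrate the sign and rough magnitude of Ges.
    """
    e = 1.602176634e-19        # elementary charge, C
    eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
    avogadro = 6.02214076e23
    q = charge_e * e
    return -avogadro * q**2 / (8 * math.pi * eps0 * radius_m) * (1 - 1 / eps_r)

# Placeholder inputs: a +1 ion, 2.0 angstrom cavity, water-like dielectric
g_es  = born_electrostatic(charge_e=1.0, radius_m=2.0e-10, eps_r=78.4)
g_dr  = -10e3   # dispersion-repulsion term, J/mol (placeholder)
g_cav = +20e3   # cavitation term, J/mol (placeholder)

g_sol = g_es + g_dr + g_cav     # Gsol = Ges + Gdr + Gcav
print(f"Ges = {g_es/1000:.0f} kJ/mol, Gsol = {g_sol/1000:.0f} kJ/mol")
```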
The PCM solvation model is available for calculating energies and gradients at the Hartree–Fock and density functional theory (DFT) levels in several quantum chemical computational packages such as Gaussian, GAMESS and JDFTx.
The authors of a 2002 paper observe that PCM has limitations where non-electrostatic effects dominate the solute-solvent interactions. They write in the abstract: "Since only electrostatic solute-solvent interactions are included in the PCM, our results lead to the conclusion that, for the seven molecules studied, in cyclohexane, acetone, methanol, and acetonitrile electrostatic effects are dominant while in carbon tetrachloride, benzene, and chloroform other nonelectrostatic effects are more important."
There is an integral equation formalism (IEF) version of the PCM which is very commonly used.
PCM is also used to model outer solvation layers in multi-layered solvation approach.
See also
COSMO Solvation Model
References
Computational chemistry | Polarizable continuum model | [
"Chemistry"
] | 431 | [
"Theoretical chemistry",
"Computational chemistry"
] |
20,174,278 | https://en.wikipedia.org/wiki/Clovoxamine | Clovoxamine (INN) (developmental code name DU-23811) is a drug that was discovered in the 1970s and was subsequently investigated as an antidepressant and anxiolytic agent but was never marketed. It acts as a serotonin-norepinephrine reuptake inhibitor (SNRI), with little affinity for the muscarinic acetylcholine, histamine, adrenergic, and serotonin receptors. The compound is structurally related to fluvoxamine.
References
Amines
Antidepressants
4-Chlorophenyl compounds
Ethers
Ketoxime ethers
Serotonin–norepinephrine reuptake inhibitors | Clovoxamine | [
"Chemistry"
] | 148 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
30,234,468 | https://en.wikipedia.org/wiki/Formability | Formability is the ability of a given metal workpiece to undergo plastic deformation without being damaged. The plastic deformation capacity of metallic materials, however, is limited to a certain extent, at which point, the material could experience tearing or fracture (breakage).
Processes affected by the formability of a material include: rolling, extrusion, forging, rollforming, stamping, and hydroforming.
Fracture strain
A general parameter that indicates the formability and ductility of a material is the fracture strain which is determined by a uniaxial tensile test (see also fracture toughness). The strain identified by this test is defined by elongation with respect to a reference length. For example, a length of is used for the standardized uniaxial test of flat specimens, pursuant to EN 10002. It is important to note that deformation is homogeneous up to uniform elongation. Strain subsequently localizes until fracture occurs. Fracture strain is not an engineering strain since distribution of the deformation is inhomogeneous within the reference length. Fracture strain is nevertheless a rough indicator of the formability of a material. Typical values of the fracture strain are: 7% for ultra-high-strength material, and over 50% for mild-strength steel.
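In practice the fracture strain is simply the elongation at fracture divided by the reference gauge length. The snippet below shows the calculation; the gauge length and measured final length are invented numbers for illustration, not values taken from any standard.

```python
def fracture_strain(gauge_length_mm: float, length_at_fracture_mm: float) -> float:
    """Elongation at fracture relative to the reference gauge length."""
    return (length_at_fracture_mm - gauge_length_mm) / gauge_length_mm

# Hypothetical flat tensile specimen: 80 mm gauge length, 96 mm after fracture
strain = fracture_strain(80.0, 96.0)
print(f"{strain:.0%}")   # 20%, between the ultra-high-strength and mild-steel values quoted above
```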
Forming limits for sheet forming
One main failure mode is caused by tearing of the material. This is typical for sheet-forming applications.
A neck may appear at a certain forming stage. This is an indication of localized plastic deformation. Whereas more or less homogeneous deformation takes place in and around the subsequent neck location in the early stable deformation stage, almost all deformation is concentrated in the neck zone during the quasi-stable and unstable deformation phase. This leads to material failure manifested by tearing. Forming-limit curves depict the extreme, but still possible, deformation which a sheet material may undergo during any stage of the stamping process. These limits depend on the deformation mode and the ratio of the surface strains. The major surface strain has a minimum value when plane strain deformation occurs, which means that the corresponding minor surface strain is zero. Forming limits are a specific material property. Typical plane strain values range from 10% for high-strength grades and 50% or above for mild-strength materials and those with very good formability.
Forming limit diagrams are often used to graphically or mathematically represent formability. It is recognized by many authors that the nature of fracture, and therefore the forming limit diagram, is intrinsically non-deterministic, since large variations may be observed even within a single experimental campaign.
Deep drawability
A classic form of sheet forming is deep drawing, in which a sheet is drawn by a punch tool pressing on the inner region of the sheet while the side material, held by a blankholder, can be drawn toward the center. It has been observed that materials with outstanding deep drawability behave anisotropically (see: anisotropy): plastic deformation in the plane of the sheet is much more pronounced than in the thickness. The Lankford coefficient (r) is a specific material property indicating the ratio between width deformation and thickness deformation in the uniaxial tensile test. Materials with very good deep drawability have an r value of about 2 or above. The positive aspect of formability with respect to the forming limit curve (forming limit diagram) is seen in the deformation paths of such materials, which are concentrated toward the extreme left of the diagram, where the forming limits become very large.
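The Lankford coefficient can be computed directly from width and thickness measurements taken before and after a uniaxial tensile test. The sketch below uses true (logarithmic) strains; the specimen dimensions are invented for illustration only.

```python
import math

def lankford_r(w0: float, w1: float, t0: float, t1: float) -> float:
    """Ratio of true width strain to true thickness strain from a tensile test."""
    eps_width     = math.log(w1 / w0)
    eps_thickness = math.log(t1 / t0)
    return eps_width / eps_thickness

# Hypothetical deep-drawing steel: width 20.0 -> 17.0 mm, thickness 1.00 -> 0.92 mm
r = lankford_r(20.0, 17.0, 1.00, 0.92)
print(round(r, 2))   # ~1.95: strongly anisotropic, good deep drawability
```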
Ductility
Another failure mode that may occur without any tearing is ductile fracture after plastic deformation (ductility). This may occur as a result of bending or shear deformation (in-plane or through the thickness). The failure mechanism may be due to void nucleation and expansion on a microscopic level. Microcracks and subsequent macrocracks may appear when deformation of the material between the voids has exceeded the limit. Extensive research has focused in recent years on understanding and modeling ductile fracture. The approach has been to identify ductile forming limits using various small-scale tests that show different strain ratios or stress triaxialities. An effective measure of this type of forming limit is the minimum bend radius in roll-forming applications (half the sheet thickness for materials with good formability and three times the sheet thickness for materials with low formability).
Use of formability parameters
Knowledge of the material formability is very important to the layout and design of any industrial forming process. Simulations using the finite-element method and use of formability criteria such as the forming limit curve (forming limit diagram) enhance and, in some cases, are indispensable to certain tool design processes (also see: Sheet metal forming simulation and Sheet metal forming analysis).
IDDRG
One major objective of the International Deep Drawing Research Group (IDDRG, from 1957) is the investigation, exchange and dissemination of knowledge and experience about the formability of sheet materials.
References
Metal forming
Mechanical engineering | Formability | [
"Physics",
"Engineering"
] | 998 | [
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
30,243,152 | https://en.wikipedia.org/wiki/Pegasus%20Toroidal%20Experiment | The Pegasus Toroidal Experiment is a plasma confinement experiment relevant to fusion power production, run by the Department of Engineering Physics of the University of Wisconsin–Madison. It is a spherical tokamak, a very low-aspect-ratio version of the tokamak configuration, i.e. the minor radius of the torus is comparable to the major radius.
Local Helicity Injection
Pegasus is used to study start up of spherical tokamaks using local helicity injection.
URANIA
Pegasus is being upgraded in 2019 (e.g. by removal of the central solenoid) to build the Unified Reduced Non-Inductive Assessment (URANIA) experiment. This will study plasma startup using transient coaxial helicity injection (CHI).
The maximum toroidal field is being increased from 0.15 T to 0.6 T, and the pulse duration from 25 ms to 100 ms.
References
Tokamaks
University of Wisconsin–Madison | Pegasus Toroidal Experiment | [
"Physics"
] | 193 | [
"Plasma physics stubs",
"Plasma physics"
] |
32,836,952 | https://en.wikipedia.org/wiki/Microelectromechanical%20system%20oscillator | Microelectromechanical system oscillators (MEMS oscillators) are devices that generate highly stable reference frequencies used to sequence electronic systems, manage data transfer, define radio frequencies, and measure elapsed time. The core technologies used in MEMS oscillators have been in development since the mid-1960s, but have only been sufficiently advanced for commercial applications since 2006. MEMS oscillators incorporate MEMS resonators, which are microelectromechanical structures that define stable frequencies. MEMS clock generators are MEMS timing devices with multiple outputs for systems that need more than a single reference frequency. MEMS oscillators are a valid alternative to older, more established quartz crystal oscillators, offering better resilience against vibration and mechanical shock, and reliability with respect to temperature variation.
MEMS timing devices
Resonators
MEMS resonators are small electromechanical structures that vibrate at high frequencies. They are used for timing references, signal filtering, mass sensing, biological sensing, motion sensing, and other diverse applications. This article concerns their application in frequency and timing references.
For frequency and timing references, MEMS resonators are attached to electronic circuits, often called sustaining amplifiers, to drive them in continuous motion. In most cases these circuits are located near the resonators and in the same physical package. In addition to driving the resonators, these circuits produce output signals for downstream electronics.
Oscillators
By convention, the term oscillators usually denotes integrated circuits (ICs) that supply single output frequencies. MEMS oscillators include MEMS resonators, sustaining amps, and additional electronics to set or adjust their output frequencies. These circuits often include phase-locked loops (PLLs) that produce selectable or programmable output frequencies from the upstream MEMS reference frequencies.
MEMS oscillators are commonly available as 4- or 6-pin ICs that conform to printed circuit board (PCB) solder footprints previously standardized for quartz crystal oscillators.
Clock generators
The term clock generator usually denotes a timing IC with multiple outputs. Following this custom, MEMS clock generators are multi-output MEMS timing devices. These are used to supply timing signals in complex electronic systems that require multiple frequencies or clock phases. For example, most computers require independent clocks for processor timing, disk I/O, serial I/O, video generation, Ethernet I/O, audio conversion, and other functions.
Clock generators are usually specialised for their applications, including the number and selection of frequencies, various auxiliary features, and package configurations. They often include multiple PLLs to generate multiple output frequencies or phases.
Real-time clocks
MEMS Real-time clocks (RTCs) are ICs that track time of day and date. They include MEMS resonators, sustaining amps, and registers that increment with time, for instance counting days, hours, minutes and seconds. They also include auxiliary functions like alarm outputs and battery management.
RTCs must run continuously to keep track of elapsed time. To do this they must sometimes run from small batteries and therefore must operate at very low power levels. They are generally moderate-sized ICs with up to 20 pins for power, battery backup, digital interface, and various other functions.
History of MEMS timing devices
First demonstration
Motivated by the shortcomings of quartz crystal oscillators, researchers have been developing the resonance properties of MEMS structures since 1965. However, until recently various accuracy, stability, and manufacturability issues related to sealing, packaging, and adjusting the resonator elements prevented cost-effective commercial manufacturing. Five technical challenges had to be overcome:
Finding stable and predictable resonator materials,
Developing sufficiently clean hermetic packaging technologies,
Trimming and compensating the output frequencies,
Increasing the quality factor of the resonator elements, and
Improving the signal integrity to meet various application requirements.
The first MEMS resonators were built with metallic resonator elements. These resonators were envisioned as audio filters and had moderate quality factors (Qs) of 500 and frequencies of 1 kHz to 100 kHz. Filtering applications, now for high frequency radio, are still important and are an active area for MEMS research and commercial products.
However, early MEMS resonators did not have sufficiently stable frequencies to be used for timing references or clock generation. The metallic resonator elements tended to shift frequency with time (they aged) and with use (they fatigued). Under temperature variation they tended to have large and not entirely predictable frequency shifts (they had large temperature sensitivity) and when they were temperature cycled they tended to return to different frequencies (they were hysteretic).
Material development
Work in the 1970s through the 1990s identified sufficiently stable resonator materials and associated fabrication techniques. In particular, single and polycrystalline silicon was found to be suitable for frequency references with effectively zero aging, fatigue and hysteresis, and with moderate temperature sensitivity.
Material development is still ongoing in MEMS resonator research. Significant effort has been invested in silicon-germanium (SiGe) for its low temperature fabrication and aluminium nitride (AlN) for its piezoelectric transduction. Work on micromachined quartz continues, while polycrystalline diamond has been used for high frequency resonators for its exceptional stiffness-to-mass ratio.
Packaging development
MEMS resonators require cavities in which they can move freely, and for frequency references these cavities must be evacuated. Early resonators were built on top of silicon wafers and tested in vacuum chambers, but individual resonator encapsulation was clearly needed.
The MEMS community had employed bonded cover techniques to enclose other MEMS components, for instance pressure sensors, accelerometers, and gyroscopes, and these techniques were adapted to resonators. In this approach, cover wafers were micromachined with small cavities and bonded to the resonator wafers, enclosing the resonators in small evacuated cavities. Initially these wafers were bonded with low melting temperature glass, called glass frit, but recently other bonding technologies including metallic compression and metallic amalgams, have replaced glass frit.
Thin film encapsulation techniques were developed to form enclosed cavities by building covers directly over the resonators in the fabrication process rather than bonding covers onto the resonators. These techniques had the advantage that they did not use as much die area for the sealing structure, they did not require preparation of second wafers to form the covers, and the resulting device wafers were thinner.
Frequency references generally require frequency stabilities of 100 parts per million (ppm) or better. However, the early cover and encapsulation technologies left significant amounts of contamination in the cavities. Because MEMS resonators are small, and particularly because they have small volume-to-surface area, they are especially sensitive to mass loading. Even single-atomic layers of contaminants like water or hydrocarbons can shift the resonator's frequencies out of specification.
When resonators are aged or temperature cycled, the contaminants can move in the chambers, and transfer onto or off of the resonators. The change in mass on the resonators can produce hysteresis of thousands of ppm, which is unacceptable for virtually all frequency reference applications.
Early covered resonators with glass frit seals were unstable because contaminants outgassed from the sealing material. To overcome this, getters were built into the cavities. Getters are materials that can absorb gas and contaminants after cavities are sealed. However, getters can also release contaminants and can be costly, so their use in this application is being discontinued in favor of cleaner cover bonding processes.
Likewise, thin film encapsulation can trap fabrication byproducts in the cavities. A high temperature thin film encapsulation based on epitaxial silicon deposition was developed to eliminate this. This epitaxial sealing (EpiSeal) process has been found to be exceptionally clean and produces the highest stability resonators.
Electronic frequency selection and trimming
In early MEMS resonator development, researchers tried to build resonators at the target application frequencies and to maintain those frequencies over temperature. Approaches to solving this problem included trimming and temperature compensating the MEMS resonators in ways analogous to those used for quartz crystal.
However, these techniques were found to be technically limiting and expensive. A more effective solution was to electronically shift the resonators' frequencies to the oscillators' output frequencies. This had the advantage that the resonators did not need to be individually trimmed; instead their frequencies could be measured and appropriate scaling coefficients recorded in the oscillator ICs. In addition, the resonators' temperatures could be electronically measured, and the frequency scaling could be adjusted to compensate for the resonators' frequency variation over temperature.
Improving signal integrity
Various applications require clocks with predefined signal and performance specifications. Of these, the key specifications are phase noise and frequency stability.
Phase noise has been optimized by raising the resonator's natural frequencies (f) and quality factors (Q). The Q specifies how long resonators continue to ring after drive to them is stopped, or equivalently when viewed as filters how narrow their pass-bands are. In particular, the Q times f, or Qf product, determines the near-carrier phase noise. Early MEMS resonators showed unacceptably low Qf products for reference. Significant theoretical work clarified the underlying physics while experimental work developed high Qf resonators. The presently available MEMS Qf performance is suitable for virtually all applications.
Resonator structural design, particularly in mode control, anchoring methods, narrow-gap transducers, linearity, and arrayed structures consumed significant research effort.
The required frequency accuracies range from relatively loose for processor clocking, typically 50 to 100 ppm, to precise for high speed data clocking, often 2.5 ppm and below. Research demonstrated MEMS resonators and oscillators could be built to well within these levels. Commercial products are now available to 0.5 ppm, which covers the majority of application requirements.
Finally, the frequency control electronics and associated support circuitry needed to be developed and optimized. Key areas were in temperature sensors and PLL design. Recent circuit developments have produced MEMS oscillators suitable for high speed serial applications with sub-picosecond integrated jitter.
Commercialization
The U.S. Defense Advanced Research Projects Agency (DARPA) funded a wide range of MEMS research that provided the base technologies for the developments described above. In 2001 and 2002, DARPA launched the Nano Mechanical Array Signal Processors (NMASP) and Harsh Environment Robust Micromechanical Technology (HERMIT) programs to specifically develop MEMS high stability resonator and packaging technologies. This work was fruitful and advanced the technology to a level at which venture capital funded startups could develop commercial products. These startups included Discera in 2001, SiTime in 2004, Silicon Clocks in 2006, and Harmonic Devices in 2006.
SiTime introduced the first production MEMS oscillator in 2006, followed by Discera in 2007. Harmonic Devices changed its focus to sensor products and was bought by Qualcomm in 2010. Silicon Clocks never introduced commercial products and was bought by Silicon Labs in 2010. Additional entrants have announced their intention to produce MEMS oscillators, including Sand 9 and VTI Technologies.
By sales volume, MEMS oscillator suppliers rank in descending order as SiTime and Discera. A number of quartz oscillator suppliers resell MEMS oscillators. SiTime announced it has cumulatively shipped 50 million units as of mid-2011. Others have not disclosed sales volumes.
Operation
One can think of MEMS resonators as small bells that ring at high frequencies. Small bells ring at higher frequencies than large bells, and since MEMS resonators are small they can ring at high frequencies. Common bells are meters down to centimeters across and ring at hundreds of hertz to kilohertz; MEMS resonators are a tenth of a millimeter across and ring at tens of kilohertz to hundreds of megahertz. MEMS resonators have operated at over a gigahertz.
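The bell analogy can be made quantitative with the simplest possible lumped model: a resonator behaves like a mass on a spring, with natural frequency f = (1/2π)·√(k/m). The numbers below are purely illustrative — they are not the parameters of any commercial MEMS resonator — but they show how very small masses push the frequency into the megahertz range.

```python
import math

def natural_frequency(stiffness_n_per_m: float, mass_kg: float) -> float:
    """Natural frequency of an ideal spring-mass resonator, in hertz."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2 * math.pi)

# Illustrative values only: a stiff micro-beam (~1e4 N/m) weighing ~10 nanograms
f = natural_frequency(stiffness_n_per_m=1e4, mass_kg=1e-11)
print(f"{f/1e6:.1f} MHz")   # ~5.0 MHz
```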
Common bells are mechanically struck, while MEMS resonators are electrically driven. There are two base technologies used to build MEMS resonators that differ in how electrical drive and sense signals are transduced from the mechanical motion. These are electrostatic and piezoelectric. All commercial MEMS oscillators use electrostatic transduction while MEMS filters use piezoelectric transduction. Piezoelectric resonators have not shown sufficient frequency stability or quality factor (Q) for frequency reference applications.
Electronic sustaining amps drive the resonators in continuous oscillation. These amplifiers detect the resonator motion and drive additional energy into the resonators. They are carefully designed to maintain the resonators motion at appropriate amplitudes and to extract low noise output clock signals.
Additional circuits called fractional-n phase lock loops (frac-N PLLs) multiply the resonator's mechanical frequencies to the oscillator's output frequencies. These highly specialized PLLs set the output frequencies under control of digital state machines. The state machines are controlled by calibration and program data stored in non-volatile memory and adjust the PLL configurations to compensate for temperature variations.
The state machines can also be built to provide additional user functions, for instance spread-spectrum clocking and voltage controlled frequency trimming.
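A minimal sketch of the compensation idea, assuming a single polynomial temperature model: the resonator runs at whatever frequency its temperature dictates, and the control logic recomputes the fractional multiplication ratio so that the PLL output stays at the programmed frequency. The temperature coefficient, calibration numbers and update scheme below are invented placeholders, not any vendor's actual compensation algorithm.

```python
def resonator_freq_hz(temp_c: float) -> float:
    """Modelled resonator frequency vs. temperature (placeholder first-order model)."""
    f_nominal = 5.0e6         # Hz at the calibration temperature (assumed)
    tempco_ppm_per_c = -30.0  # assumed linear drift coefficient
    return f_nominal * (1 + tempco_ppm_per_c * 1e-6 * (temp_c - 25.0))

def frac_n_ratio(f_out_hz: float, temp_c: float) -> float:
    """Fractional multiplication ratio the PLL must apply at this temperature."""
    return f_out_hz / resonator_freq_hz(temp_c)

# Keep a 100 MHz output on frequency while the die temperature moves
for t in (-40.0, 25.0, 85.0):
    n = frac_n_ratio(100e6, t)
    print(f"{t:+6.1f} C  ratio = {n:.6f}  output = {n * resonator_freq_hz(t):.0f} Hz")
```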
MEMS clock generators are built with MEMS oscillators at their core and include additional circuitry to supply the additional outputs. This additional circuitry is usually designed to provide the specific features required by the applications.
MEMS RTCs work like oscillators but are optimized for low power consumption and include auxiliary circuits to track the date and time. To operate at low power they are built with low frequency MEMS resonators. Care is taken in circuit design to minimize power consumption while providing the required timing accuracies.
Manufacturing
Resonators
Depending upon the type of resonator, the fabrication process is either done in a specialized MEMS fab or a CMOS foundry.
The manufacturing process varies with resonator and encapsulation design, but in general the resonator structures are lithographically patterned and plasma-etched in or on silicon wafers. All commercial MEMS oscillators are built from poly or single crystal silicon.
It is important in electrostatically transduced resonators to form narrow and well controlled drive and sense capacitor gaps. These can be either lateral for instance under the resonators, or vertical beside the resonators. Each option has its advantages and both are used commercially.
The resonators are encapsulated either by bonding cover wafers onto the resonator wafers or by depositing thin film encapsulation layers over the resonators. Here again, both methods are used commercially.
Bonded cover wafers must be attached with an adhesive. Two options are used, a glass frit bond ring or a metallic bond ring. The glass frit has been found to generate too much contamination, and thus drift, and is no longer commonly used.
For thin film encapsulation the resonators' structures are covered with layers of oxide and silicon, then released by removing the surrounding oxide to form freestanding resonators, and finally sealed with an additional deposition.
Circuitry
The sustaining amps, PLLs, and auxiliary circuits are built with standard mixed-signal CMOS processes fabricated in CMOS foundries.
Integrated MEMS oscillators with CMOS circuits on the same IC die have been demonstrated but to date this homogeneous integration is not commercially viable. Instead, it is advantageous to produce the MEMS resonators and CMOS circuitry on separate die and combine them at the packaging stage. Combining multiple die in a single package in this way is called heterogeneous integration or simply die stacking.
Packaging
The completed MEMS devices, enclosed in small chip-level vacuum chambers, are diced from their silicon wafers, and the resonator die are stacked on CMOS die and molded into plastic packages to form oscillators.
MEMS oscillators are packaged in the same factories and with the same equipment and materials that are used for standard IC packaging. This is a significant contributor to their cost-effectiveness and reliability as compared to quartz oscillators, which are assembled with specialized ceramic packages in custom-built factories.
Package dimensions and pad shapes match those of standard quartz oscillator packages so the MEMS oscillators can be soldered directly on PCBs designed for quartz without requiring board modification or re-design.
Testing and calibration
Production tests check and calibrate the MEMS resonators and CMOS ICs to verify they are performing to specification and trim their frequencies. In addition, many MEMS oscillators have programmable output frequencies that can be configured at test time. Of course the various types of oscillators are configured from specialized CMOS and MEMS die. For instance, low power and high performance oscillators are not built with the same die. In addition, high precision oscillators often require more careful calibration than lower precision oscillators.
MEMS oscillators are tested much like standard ICs. Like packaging, this is done in standard IC factories with standard IC test equipment.
Using standard IC packaging and test facilities (called subcons in the IC industry) gives MEMS oscillators production scalability. These facilities are capable of large production volumes, often hundreds of millions of ICs per day. This capacity is shared across many IC companies, so ramping production volumes of specific ICs, or in this case specific MEMS oscillators, is a function of allocating standard production equipment. Conversely, quartz oscillator factories are single-function in nature, so that ramping production requires installing custom equipment, which is more costly and time-consuming than allocating standard equipment.
Comparing MEMS and quartz oscillators
Quartz oscillators are sold in much larger quantities than MEMS oscillators, and are widely used and understood by electronics engineers. Therefore, quartz oscillators provide the baseline from which MEMS oscillators are compared.
Recent advances have enabled MEMS-based timing devices to offer performance levels similar, and sometimes superior, to quartz devices. MEMS oscillator signal quality as measured by phase noise is now sufficient for most applications. Phase noise of −150 dBc/Hz at a 10 kHz offset from a 10 MHz carrier is now available, a level that is generally only needed for radio frequency (RF) applications. MEMS oscillators are now available with integrated jitter under 1.0 picosecond, measured over the 12 kHz to 20 MHz band, a level that is normally required for high speed serial data links, such as SONET and SyncE, and some instrumentation applications.
Short term stability, startup time, and power consumption, are similar to those of quartz. In some cases, MEMS oscillators show lower power consumption than that of quartz.
High precision MEMS temperature-compensated oscillators (TCXOs) have recently been announced with ±0.1 ppm frequency stability over temperature. This exceeds the performance of all but the very high-end quartz TCXOs and oven-controlled oscillators (OCXOs). MEMS TCXOs are now available with output frequencies over 100 MHz, a capability that only a few specialized quartz oscillators (e.g., inverted mesa) can provide.
In RTC applications MEMS oscillators are performing slightly better than the best quartz tuning forks in terms of frequency stability over temperature and solder-down shift, while quartz is still superior for the lowest power applications.
Manufacturing and stocking quartz oscillators to the wide variety of specifications that users require is difficult. Various applications require oscillators with specific frequencies, accuracy levels, signal quality levels, package sizes, supply voltages, and special features. The combination of these leads to a proliferation of part numbers which makes stocking impractical and can lead to long production lead times.
MEMS oscillator suppliers solve the diversity problem by leveraging circuit technology. While quartz oscillators are usually built with the quartz crystals driven at the desired output frequencies, MEMS oscillators commonly drive the resonators at one frequency and multiply this to the designed output frequency. In this way, the hundreds of standard application frequencies and the occasional custom frequency can be provided without redesigning the MEMS resonators or circuits.
There are, of course, differences in the resonator, circuits, or calibration required for different categories of parts, but within these categories the frequency translation parameters can often be programmed into the MEMS oscillators late in the production process. Because the components are not differentiated until late in the process the lead times can be short, typically a few weeks. Technologically, quartz oscillators can be made with circuit-centric programmable architectures like those used in MEMS, but historically only a minority have been built this way.
MEMS oscillators are also highly resistant to shock and vibration and have shown production quality levels higher than those associated with quartz.
Quartz oscillators are secure in specific applications where suitable MEMS oscillators have not been introduced. One of those applications, for instance, is voltage-controlled TCXOs (VCTCXOs) for cell phone handsets. This application requires a very specific set of capabilities for which quartz products are highly optimized.
Quartz oscillators are superior in the extreme high ends of the performance range. These include OCXOs that can maintain stabilities within a few parts per billion (ppb), and surface acoustic wave (SAW) oscillators that can deliver jitter under 100 femtoseconds at high frequencies. Until recently, MEMS oscillators did not compete in the TCXO product range, but new product introductions have brought MEMS oscillators into that market.
Quartz is still dominant in clock generator applications. These applications require highly specialized output combinations and custom packages. The supply chain for these products is specialized and does not include a MEMS oscillator supplier.
Typical applications
MEMS oscillators are replacing quartz oscillators in a variety of applications such as computing, consumer, networking, communications, automotive and industrial systems.
Programmable MEMS oscillators can be used in most applications where fixed-frequency quartz oscillators are used, such as PCI-Express, SATA, SAS, PCI, USB, Gigabit Ethernet, MPEG video, and cable modems.
MEMS clock generators are useful in complex systems that require multiple frequencies, such as data servers and telecom switches.
MEMS real-time clocks are used in systems that require precise time measurements. Smart meters for gas and electricity are an example that is consuming significant quantities of these devices.
The "X" in the names of oscillator types originally denoted "crystal". Some manufacturers have adopted this convention to include MEMS oscillators. Others are substituting "M" for "X" (as in "VCMO" versus "VCXO") to differentiate MEMS-based oscillators from quartz-based oscillators.
Limitations
MEMS oscillators may be detrimentally affected by helium. In 2018, a helium leak from a hospital MRI machine caused widespread failure of nearby iPhones due to their MEMS oscillators. A helium concentration of as little as 2% has been shown to cause complete failure of a MEMS oscillator.
See also
Electronic component
References
American inventions
Oscillators
Microelectronic and microelectromechanical systems | Microelectromechanical system oscillator | [
"Materials_science",
"Engineering"
] | 5,055 | [
"Microelectronic and microelectromechanical systems",
"Materials science",
"Microtechnology"
] |
32,840,582 | https://en.wikipedia.org/wiki/Natural%20antisense%20short%20interfering%20RNA | Natural antisense short interfering RNA (natsiRNA) is a type of siRNA. They are endogenous RNA regulators which are between 21 and 24 nucleotides in length, and are generated from complementary mRNA transcripts which are further processed into siRNA.
natsiRNA has been implicated in several developmental and response mechanisms in plants, such as pathogen resistance, salt tolerance and cell wall biosynthesis. natsiRNA has also been shown to alter gene expression in plants responding to environmental stressors.
References
small interfering RNA | Natural antisense short interfering RNA | [
"Chemistry",
"Biology"
] | 110 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
21,265,224 | https://en.wikipedia.org/wiki/Capstan%20equation | The capstan equation or belt friction equation, also known as Euler–Eytelwein formula (after Leonhard Euler and Johann Albert Eytelwein), relates the hold-force to the load-force if a flexible line is wound around a cylinder (a bollard, a winch or a capstan).
It also applies for fractions of one turn as occur with rope drives or band brakes.
Because of the interaction of frictional forces and tension, the tension on a line wrapped around a capstan may be different on either side of the capstan. A small holding force exerted on one side can carry a much larger loading force on the other side; this is the principle by which a capstan-type device operates.
A holding capstan is a ratchet device that can turn only in one direction; once a load is pulled into place in that direction, it can be held with a much smaller force. A powered capstan, also called a winch, rotates so that the applied tension is multiplied by the friction between rope and capstan. On a tall ship a holding capstan and a powered capstan are used in tandem so that a small force can be used to raise a heavy sail and then the rope can be easily removed from the powered capstan and tied off.
In rock climbing this effect allows a lighter person to hold (belay) a heavier person when top-roping, and also produces rope drag during lead climbing.
The formula is
T_load = T_hold · e^(μφ),
where T_load is the applied tension on the line, T_hold is the resulting force exerted at the other side of the capstan, μ is the coefficient of friction between the rope and capstan materials, and φ is the total angle swept by all turns of the rope, measured in radians (i.e., with one full turn the angle φ = 2π).
For dynamic applications such as belt drives or brakes the quantity of interest is the force difference between T_load and T_hold. The formula for this is
T_load − T_hold = (e^(μφ) − 1) · T_hold.
Several assumptions must be true for the equations to be valid:
The rope is on the verge of full sliding, i.e. T_load is the maximum load that one can hold. Smaller loads can be held as well, resulting in a smaller effective contact angle φ.
It is important that the line is not rigid, in which case significant force would be lost in the bending of the line tightly around the cylinder. (The equation must be modified for this case.) For instance a Bowden cable is to some extent rigid and doesn't obey the principles of the capstan equation.
The line is non-elastic.
It can be observed that the force gain increases exponentially with the coefficient of friction, the number of turns around the cylinder, and the angle of contact. Note that the radius of the cylinder has no influence on the force gain.
The table below lists values of the gain factor e^(μφ) based on the number of turns and the coefficient of friction μ.
From the table it is evident why one seldom sees a sheet (a rope to the loose side of a sail) wound more than three turns around a winch. The force gain would be extreme and also counter-productive, since there is a risk of a riding turn, the result being that the sheet will foul, forming a knot and not running out when eased (by slacking grip on the tail (free end)).
It is both ancient and modern practice for anchor capstans and jib winches to be slightly flared out at the base, rather than cylindrical, to prevent the rope (anchor warp or sail sheet) from sliding down. The rope wound several times around the winch can slip upwards gradually, with little risk of a riding turn, provided it is tailed (loose end is pulled clear), by hand or a self-tailer.
For instance, the factor "153,552,935" (5 turns around a capstan with a coefficient of friction of 0.6) means, in theory, that a newborn baby would be capable of holding (not moving) the weight of two supercarriers (97,000 tons each, but for the baby it would be only a little more than 1 kg). The large number of turns around the capstan combined with such a high friction coefficient mean that very little additional force is necessary to hold such heavy weight in place. The cables necessary to support this weight, as well as the capstan's ability to withstand the crushing force of those cables, are separate considerations.
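The exponential gain factors quoted above are easy to reproduce numerically. The following is a minimal sketch (plain Python, no external libraries) that evaluates e^(μφ) for a few turn counts and friction coefficients; the specific combinations of μ and number of turns are illustrative choices, not taken from the table referred to above.

```python
import math

def capstan_gain(turns, mu):
    """Force gain factor e^(mu*phi) for a rope wrapped `turns` times
    around a cylinder with friction coefficient `mu`."""
    phi = 2 * math.pi * turns          # total wrap angle in radians
    return math.exp(mu * phi)

# Illustrative values, ending with the 5-turn, mu = 0.6 case from the text.
for turns, mu in [(1, 0.3), (3, 0.3), (3, 0.6), (5, 0.6)]:
    print(f"turns={turns}, mu={mu}: gain = {capstan_gain(turns, mu):,.0f}")

# The last case reproduces the factor of roughly 153.5 million quoted above.
```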
Derivation
The applied tension T(φ) is a function of the total angle φ subtended by the rope on the capstan. On the verge of slipping, this is also the frictional force, which is by definition μ times the normal force R(φ). By simple geometry, the additional normal force generated when increasing the angle by a small angle dφ is well approximated by T(φ)·dφ. Combining these and considering infinitesimally small dφ yields the differential equation
dT/dφ = μ T(φ),
whose solution is
T(φ) = T(0) · e^(μφ).
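The differential equation above can also be checked numerically. Below is a small sketch that integrates dT/dφ = μT with a classical Runge-Kutta step and compares the result against the closed-form solution T(0)·e^(μφ); the starting tension and friction coefficient are arbitrary illustrative values.

```python
import math

def integrate_tension(T0, mu, phi_total, steps=10_000):
    """Integrate dT/dphi = mu * T with RK4 and return T(phi_total)."""
    h = phi_total / steps
    T = T0
    f = lambda t: mu * t
    for _ in range(steps):
        k1 = f(T)
        k2 = f(T + 0.5 * h * k1)
        k3 = f(T + 0.5 * h * k2)
        k4 = f(T + h * k3)
        T += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return T

T0, mu, phi = 10.0, 0.25, 3 * 2 * math.pi   # 10 N hold force, 3 full turns
numeric = integrate_tension(T0, mu, phi)
analytic = T0 * math.exp(mu * phi)
print(numeric, analytic)   # the two values agree to many decimal places
```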
Generalizations
Generalization of the capstan equation for a V-belt
The belt friction equation for a V-belt is:
T_load = T_hold · e^(μφ / sin(α/2)),
where α is the angle (in radians) between the two flat sides of the pulley that the V-belt presses against. A flat belt has an effective angle of α = π (180°), for which sin(α/2) = 1 and the equation reduces to the basic capstan equation.
The material of a V-belt or multi-V serpentine belt tends to wedge into the mating groove in a pulley as the load increases, improving torque transmission.
For the same power transmission, a V-belt requires less tension than a flat belt, increasing bearing life.
Generalization of the capstan equation for a rope lying on an arbitrary orthotropic surface
If a rope is lying in equilibrium under tangential forces on a rough orthotropic surface then all three following conditions are satisfied:
No separation – normal reaction is positive for all points of the rope curve:
, where is a normal curvature of the rope curve.
The dragging coefficient of friction and angle satisfy the following criteria for all points of the curve
Limit values of the tangential forces:
The forces at both ends of the rope satisfy the following inequality
with
where is a geodesic curvature of the rope curve, is a curvature of a rope curve, is a coefficient of friction in the tangential direction.
If then
This generalization has been obtained by Konyukhov.
See also
Belt friction
Frictional contact mechanics
Torque amplifier, a device that exploits the capstan effect
References
Further reading
Arne Kihlberg, Kompendium i Mekanik för E1, del II, Göteborg 1980, 60–62.
External links
Capstan equation calculator
Equations of physics
Winches | Capstan equation | [
"Physics",
"Mathematics"
] | 1,328 | [
"Mathematical objects",
"Equations of physics",
"Equations"
] |
21,265,548 | https://en.wikipedia.org/wiki/Piper%20diagram | A Piper diagram is a graphic procedure proposed by Arthur M. Piper in 1944 for presenting water chemistry data to help in understanding the sources of the dissolved constituent salts in water. This procedure is based on the premise that cations and anions in water are in such amounts to assure the electroneutrality of the dissolved salts, in other words the algebraic sum of the electric charges of cations and anions is zero.
A Piper diagram is a graphical representation of the chemistry of a water sample or samples.
The cations and anions are shown by separate ternary plots. The apexes of the cation plot are calcium, magnesium, and sodium plus potassium cations. The apexes of the anion plot are sulfate, chloride, and carbonate plus hydrogen carbonate anions. The two ternary plots are then projected onto a diamond. The diamond is a matrix transformation of a graph of the anions (as (sulfate + chloride)/total anions) and cations (as (sodium + potassium)/total cations).
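Because the plotted quantities are percentages of milliequivalents, constructing a Piper plot starts from a unit conversion of the analytical data. The sketch below is a minimal, hedged illustration of that preprocessing step: it converts mg/L concentrations to meq/L using approximate equivalent weights and then computes the cation-triangle, anion-triangle and diamond percentages described above. The sample concentrations are invented for illustration, and the plotting itself (placing the points in the triangles and projecting onto the diamond) is left to a graphics library.

```python
# Approximate equivalent weights (g/eq): molar mass divided by ionic charge.
EQ_WEIGHT = {
    "Ca": 20.04, "Mg": 12.15, "Na": 22.99, "K": 39.10,      # cations
    "Cl": 35.45, "SO4": 48.03, "HCO3": 61.02, "CO3": 30.00,  # anions
}

def to_meq(sample_mg_per_l):
    """Convert mg/L concentrations to meq/L."""
    return {ion: c / EQ_WEIGHT[ion] for ion, c in sample_mg_per_l.items()}

def piper_percentages(sample_mg_per_l):
    """Percentages used for the two ternary plots and the diamond projection."""
    meq = to_meq(sample_mg_per_l)
    cations = {k: meq.get(k, 0.0) for k in ("Ca", "Mg", "Na", "K")}
    anions = {k: meq.get(k, 0.0) for k in ("Cl", "SO4", "HCO3", "CO3")}
    tot_cat, tot_an = sum(cations.values()), sum(anions.values())
    return {
        # cation triangle apexes
        "%Ca": 100 * cations["Ca"] / tot_cat,
        "%Mg": 100 * cations["Mg"] / tot_cat,
        "%Na+K": 100 * (cations["Na"] + cations["K"]) / tot_cat,
        # anion triangle apexes
        "%Cl": 100 * anions["Cl"] / tot_an,
        "%SO4": 100 * anions["SO4"] / tot_an,
        "%HCO3+CO3": 100 * (anions["HCO3"] + anions["CO3"]) / tot_an,
        # diamond axes
        "%(SO4+Cl)/anions": 100 * (anions["SO4"] + anions["Cl"]) / tot_an,
        "%(Na+K)/cations": 100 * (cations["Na"] + cations["K"]) / tot_cat,
    }

# Invented example water analysis in mg/L.
sample = {"Ca": 60, "Mg": 20, "Na": 30, "K": 4, "Cl": 40, "SO4": 50, "HCO3": 180}
print(piper_percentages(sample))
```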
The Piper diagram is suitable for comparing the ionic composition of a set of water samples, but does not lend itself to spatial comparisons. For geographical applications, the Stiff diagram and Maucha diagram are more applicable, because they can be used as markers on a map. Colour coding of the background of the Piper diagram allows linking Piper diagrams and maps.
Water samples shown on the Piper diagram can be grouped in hydrochemical facies. The cation and anion triangles can be separated in regions based on the dominant cation(s) or anion(s) and their combination creates regions in the diamond shaped part of the diagram.
See also
Ternary diagram, just one triangle
QAPF diagram, a common application
References
Diagrams
Water chemistry
Physical chemistry | Piper diagram | [
"Physics",
"Chemistry"
] | 351 | [
"Physical chemistry",
"Applied and interdisciplinary physics",
"nan"
] |
27,492,898 | https://en.wikipedia.org/wiki/Betaine%20transporter | The Betaine/Carnitine/Choline Transporter (BCCT) family proteins are found in Gram-negative and Gram-positive bacteria and archaea. The BCCT family members a large group of secondary transporters, the APC superfamily. Their common functional feature is that they all transport molecules with a quaternary ammonium group [R-N (CH3)3]. The BCCT family proteins vary in length between 481 and 706 amino acyl residues and possess 12 putative transmembrane α-helical spanners (TMSs). The x-ray structures reveal two 5 TMS repeats, with the total TMSs being 10. These porters catalyze bidirectional uniport or are energized by pmf-driven or smf-driven proton or sodium ion symport, respectively, or substrate: substrate antiport. Some of these permeases exhibit osmosensory and osmoregulatory properties inherent to their polypeptide chains.
Structure
The structures of the sodium-independent carnitine/butyrobetaine antiporter CaiT from Proteus mirabilis (PmCaiT) and E. coli (EcCaiT) were determined.
Most members of the BCCT family are Na+- or H+-dependent, whereas EcCaiT is a Na+- and H+-independent substrate:product antiporter. The three-dimensional architecture of CaiT resembles that of the Na+-dependent transporters LeuT and BetP, but in CaiT, methionine sulphur takes the place of the Na+ to coordinate the substrate in the central transport site, accounting for Na+ independence. Both CaiT structures show the fully open, inward-facing conformation and thus complete the set of functional states that describe the alternating access mechanism. EcCaiT contains two bound butyrobetaine substrate molecules, one in the central transport site and the other in an extracellular binding pocket. In the structure of PmCaiT, a tryptophan side chain occupies the transport site, and access to the extracellular site is blocked. The binding of both substrates to CaiT reconstituted into proteoliposomes is cooperative, with Hill coefficients of up to 1.7, indicating that the extracellular site is regulatory. Schulze et al. (2010) proposed a mechanism whereby the occupied regulatory site increases the binding affinity of the transport site and initiates substrate translocation. Glycine betaine transporters have been found to contain a conserved motif of four tryptophans in their central region.
Function
Most secondary-active transporters transport their substrates using an electrochemical ion gradient, but the carnitine transporter (CaiT) is an ion-independent, L-carnitine/gamma-butyrobetaine antiporter. Crystal structures of CaiT from E. coli and Proteus mirabilis revealed the inverted five-transmembrane-helix repeat similar to that in the amino acid/Na+ symporter, LeuT. Kalayil et al. (2013) showed that mutations of arginine 262 (R262) made CaiT Na+-dependent with increased transport activity in the presence of membrane potential, in agreement with substrate/Na+ cotransport. R262 also plays a role in substrate binding by stabilizing the partly unwound TM1' helix.
Modeling CaiT from P. mirabilis in the outward-open and closed states on the corresponding structures of the related symporter BetP revealed alternating orientations of the buried R262 side chain, which mimic sodium binding and unbinding in the Na+-coupled substrate symporters. A similar mechanism may be operative in other Na+/H+-independent transporters, in which a positively charged amino acid replaces the cotransporter cation. The oscillation of the R262 side chain in CaiT indicates how a positive charge triggers the change between outward-open and inward-open conformations.
Transport reactions
The generalized transport reactions catalyzed by members of the BCCT family are:
Substrate (out) + nH+ (out) → Substrate (in) + nH+ (in)
Substrate (out) + Na+ (out) → Substrate (in) + Na+ (in)
Substrate-1 (out) + Substrate-2 (in) → Substrate-1 (in) + Substrate-2 (out)
Substrate (out) ⇌ Substrate (in)
Substrate = a quaternary amine
Other betaine transporters
The mammalian betaine transporter: Sodium- and chloride-dependent betaine transporter
See also
Betaine
Carnitine
Choline
Transporter Classification Database
References
Protein domains
Protein families
Membrane proteins
Transport proteins
Integral membrane proteins
Transmembrane proteins
Transmembrane transporters | Betaine transporter | [
"Biology"
] | 1,020 | [
"Protein families",
"Protein domains",
"Protein classification",
"Membrane proteins"
] |
27,498,772 | https://en.wikipedia.org/wiki/Vibrational%20temperature | The vibrational temperature is commonly used in thermodynamics, to simplify certain equations. It has units of temperature and is defined as
where is the Boltzmann constant, is the speed of light, is the wavenumber, and (Greek letter nu) is the characteristic frequency of the oscillator.
The vibrational temperature is used commonly when finding the vibrational partition function.
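As a concrete illustration, the conversion from a spectroscopic wavenumber to a vibrational temperature, and its use in the vibrational partition function, can be written in a few lines. This is a minimal sketch; the wavenumber used below (roughly that of the N2 stretching mode) and the convention of measuring vibrational energies from the ground vibrational state are illustrative assumptions.

```python
import math

H = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e10       # speed of light in cm/s (wavenumbers are in cm^-1)
KB = 1.380649e-23       # Boltzmann constant, J/K

def vibrational_temperature(wavenumber_cm):
    """theta_vib = h*c*nu_tilde / k_B for a wavenumber given in cm^-1."""
    return H * C * wavenumber_cm / KB

def vibrational_partition_function(theta_vib, T):
    """q_vib = 1 / (1 - exp(-theta_vib/T)), with energies measured
    from the ground vibrational state (zero-point energy excluded)."""
    return 1.0 / (1.0 - math.exp(-theta_vib / T))

theta = vibrational_temperature(2359.0)   # ~N2 stretch, illustrative value
print(round(theta), vibrational_partition_function(theta, 298.15))
# Prints a vibrational temperature of roughly 3400 K, so at room temperature
# q_vib is very close to 1 (almost no thermal vibrational excitation).
```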
References
Statistical thermodynamics University Arizona
See also
Rotational temperature
Rotational spectroscopy
Vibrational spectroscopy
Infrared spectroscopy
Spectroscopy
Atomic physics
Molecular physics | Vibrational temperature | [
"Physics",
"Chemistry"
] | 108 | [
"Thermodynamics stubs",
"Molecular physics",
"Quantum mechanics",
"Atomic physics",
"Thermodynamics",
" molecular",
"Atomic",
"nan",
"Molecular physics stubs",
"Physical chemistry stubs",
" and optical physics"
] |
27,499,693 | https://en.wikipedia.org/wiki/Partial%20permutation | In combinatorial mathematics, a partial permutation, or sequence without repetition, on a finite set S
is a bijection between two specified subsets of S. That is, it is defined by two subsets U and V of equal size, and a one-to-one mapping from U to V. Equivalently, it is a partial function on S that can be extended to a permutation.
Representation
It is common to consider the case when the set S is simply the set {1, 2, ..., n} of the first n integers. In this case, a partial permutation may be represented by a string of n symbols, some of which are distinct numbers in the range from 1 to n and the remaining ones of which are a special "hole" symbol ◊. In this formulation, the domain U of the partial permutation consists of the positions in the string that do not contain a hole, and each such position is mapped to the number in that position. For instance, the string "1 ◊ 2" would represent the partial permutation that maps 1 to itself and maps 3 to 2.
The seven partial permutations on two items are
◊◊, ◊1, ◊2, 1◊, 2◊, 12, 21.
Combinatorial enumeration
The number of partial permutations on n items, for n = 0, 1, 2, ..., is given by the integer sequence
1, 2, 7, 34, 209, 1546, 13327, 130922, 1441729, 17572114, 234662231, ...
where the nth item in the sequence (writing P(n) for the number of partial permutations on n items) is given by the summation formula
P(n) = Σ_{i=0}^{n} C(n, i)² · i!,
in which the ith term counts the number of partial permutations with support of size i, that is, the number of partial permutations with i non-hole entries.
Alternatively, it can be computed by the recurrence relation
P(n) = 2n · P(n−1) − (n−1)² · P(n−2).
This is determined as follows (a short numeric check of both formulas is sketched after this list):
P(n−1) partial permutations where the final elements of each set are omitted;
P(n−1) partial permutations where the final elements of each set map to each other;
(n−1) · P(n−1) partial permutations where the final element of the first set is included, but does not map to the final element of the second set;
(n−1) · P(n−1) partial permutations where the final element of the second set is included, but does not map to the final element of the first set;
minus (n−1)² · P(n−2), the partial permutations included in both counts 3 and 4: those permutations where the final elements of both sets are included, but do not map to each other.
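The following is a small sketch verifying that the summation formula and the recurrence above agree with the listed values; the function names are arbitrary.

```python
from math import comb, factorial

def partial_permutations_sum(n):
    """Summation formula: sum over support sizes i of C(n, i)^2 * i!."""
    return sum(comb(n, i) ** 2 * factorial(i) for i in range(n + 1))

def partial_permutations_recurrence(n):
    """Recurrence P(n) = 2n*P(n-1) - (n-1)^2 * P(n-2), with P(0) = 1, P(1) = 2."""
    if n == 0:
        return 1
    if n == 1:
        return 2
    p_prev2, p_prev = 1, 2   # P(0), P(1)
    for k in range(2, n + 1):
        p_prev2, p_prev = p_prev, 2 * k * p_prev - (k - 1) ** 2 * p_prev2
    return p_prev

values = [partial_permutations_sum(n) for n in range(7)]
print(values)  # [1, 2, 7, 34, 209, 1546, 13327], matching the sequence above
assert values == [partial_permutations_recurrence(n) for n in range(7)]
```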
Restricted partial permutations
Some authors restrict partial permutations so that either the domain
or the range of the bijection is forced to consist of the first k items in the set of n items being permuted, for some k. In the former case, a partial permutation of length k from an n-set is just a sequence of k terms from the n-set without repetition. (In elementary combinatorics, these objects are sometimes confusingly called "k-permutations" of the n-set.)
References
Combinatorics
Functions and mappings | Partial permutation | [
"Mathematics"
] | 638 | [
"Mathematical analysis",
"Functions and mappings",
"Discrete mathematics",
"Mathematical objects",
"Combinatorics",
"Mathematical relations"
] |
27,499,844 | https://en.wikipedia.org/wiki/Hoffman%20modulation%20contrast%20microscopy | Hoffman modulation contrast microscopy (HMC microscopy) is an optical microscopy technique for enhancing the contrast in unstained biological specimens. The technique was invented by Robert Hoffman in 1975. Like differential interference contrast microscopy (DIC microscopy), contrast is increased by using components in the light path which convert phase gradients in the specimen into differences in light intensity that are rendered in an image that appears three-dimensional. The 3D appearance may be misleading, as a feature which appears to cast a shadow may not necessarily have a distinct physical geometry corresponding to the shadow. The technique is particularly suitable for optical sectioning at lower magnifications.
An example of the use of HMC illumination is in in-vitro fertilisation, where under brightfield illumination the near-transparent oocyte is hard to see clearly.
HMC systems typically consist of a condenser with a slit aperture, an objective with a slit aperture, and a polariser which is fitted between the condenser and the illumination source and is used to control the degree of contrast. The principle of HMC is used by a number of microscope manufacturers who have introduced their own variants of the technique.
References
Microscopy | Hoffman modulation contrast microscopy | [
"Chemistry"
] | 238 | [
"Microscopy"
] |
27,499,917 | https://en.wikipedia.org/wiki/Redox%20gradient | A redox gradient is a series of reduction-oxidation (redox) reactions sorted according to redox potential. The redox ladder displays the order in which redox reactions occur based on the free energy gained from redox pairs. These redox gradients form both spatially and temporally as a result of differences in microbial processes, chemical composition of the environment, and oxidative potential. Common environments where redox gradients exist are coastal marshes, lakes, contaminant plumes, and soils.
The Earth has a global redox gradient with an oxidizing environment at the surface and increasingly reducing conditions below the surface. Redox gradients are generally understood at the macro level, but characterization of redox reactions in heterogeneous environments at the micro-scale require further research and more sophisticated measurement techniques.
Measuring redox conditions
Redox conditions are measured according to the redox potential (Eh) in volts, which represents the tendency for electrons to transfer from an electron donor to an electron acceptor. Eh can be calculated using half reactions and the Nernst equation. An Eh of zero represents the redox couple of the standard hydrogen electrode H+/H2, a positive Eh indicates an oxidizing environment (electrons will be accepted), and a negative Eh indicates a reducing environment (electrons will be donated). In a redox gradient, the most energetically favorable chemical reaction occurs at the “top” of the redox ladder and the least energetically favorable reaction occurs at the “bottom” of the ladder.
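To make the measurement scale concrete, the sketch below evaluates the Nernst equation for a single reduction half-reaction (ox + n e⁻ → red); the Fe³⁺/Fe²⁺ couple and the activity ratio used are illustrative values, not taken from the text.

```python
import math

R = 8.314462    # gas constant, J/(mol*K)
F = 96485.332   # Faraday constant, C/mol

def nernst_eh(E0, n, ratio_red_over_ox, T=298.15):
    """Eh = E0 - (R*T / (n*F)) * ln([red]/[ox]) for a reduction half-reaction."""
    return E0 - (R * T / (n * F)) * math.log(ratio_red_over_ox)

# Illustrative example: Fe3+ + e- -> Fe2+ with E0 ~ +0.77 V and ten times
# more of the reduced species than the oxidized species in solution.
print(round(nernst_eh(0.77, 1, 10.0), 3))   # ~0.711 V, an oxidizing Eh
```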
Eh can be measured by collecting samples in the field and performing analyses in the lab, or by inserting an electrode into the environment to collect in situ measurements. Typical environments to measure redox potential are in bodies of water, soils, and sediments, all of which can exhibit high levels of heterogeneity. Collecting a high number of samples can produce high spatial resolution, but at the cost of low temporal resolution since samples only reflect a single snapshot in time. In situ monitoring can provide high temporal resolution by collecting continuous real-time measurements, but low spatial resolution since the electrode is in a fixed location.
Redox properties can also be tracked with high spatial and temporal resolution through the use of induced-polarization imaging, however, further research is needed to fully understand contributions of redox species to polarization.
Environmental conditions
Redox gradients are commonly found in the environment as functions of both space and time, particularly in soils and aquatic environments. Gradients are caused by varying physiochemical properties including availability of oxygen, soil hydrology, chemical species present, and microbial processes. Specific environments that are commonly characterized by redox gradients include waterlogged soils, wetlands, contaminant plumes, and marine pelagic and hemipelagic sediments.
The following is a list of common reactions that occur in the environment in order from oxidizing to reducing (organisms performing the reaction in parentheses):
Aerobic respiration (aerobes: aerobic organisms)
Denitrification (denitrifiers: denitrifying bacteria)
Manganese reduction (Manganese reducers)
Iron reduction (iron reducers: iron-reducing bacteria)
Sulfate reduction (sulfate reducers: Sulfur-reducing bacteria)
Methanogenesis (methanogens)
Aquatic environments
Redox gradients form in water columns and their sediments. Varying levels of oxygen (oxic, suboxic, hypoxic) within the water column alter redox chemistry and which redox reactions can occur. Development of oxygen minimum zones also contributes to formation of redox gradients.
Benthic sediments exhibit redox gradients produced by variations in mineral composition, organic matter availability, structure, and sorption dynamics. Limited transport of dissolved electron donors and acceptors through subsurface sediments, combined with varying pore sizes, creates significant heterogeneity in benthic sediments. Oxygen availability in sediments determines which microbial respiration pathways can occur, resulting in a vertical stratification of redox processes as oxygen availability decreases with depth.
Terrestrial environments
Soil Eh is also largely a function of hydrological conditions. In the event of a flood, saturated soils can shift from oxic to anoxic, creating a reducing environment as anaerobic microbial processes dominate. Moreover, small anoxic hotspots may develop within soil pore spaces, creating reducing conditions. With time, the starting Eh of a soil can be restored as water drains and the soil dries out. Soils with redox gradients formed by ascending groundwater are classified as gleysols, while soils with gradients formed by stagnant water are classified as stagnosols and planosols.
Soil Eh generally ranges from −300 to +900 mV. The table below summarizes typical Eh values for various soil conditions:
Generally accepted Eh limits that are tolerable by plants are +300 mV < Eh < +700 mV. 300 mV is the boundary value that separates aerobic from anaerobic conditions in wetland soils. Redox potential (Eh) is also closely tied to pH, and both have significant influence on the function of soil-plant-microorganism systems. The main source of electrons in soil is organic matter. Organic matter consumes oxygen as it decomposes, resulting in reducing soil conditions and lower Eh.
Role of microorganisms
Redox gradients form based on resource availability and physiochemical conditions (pH, salinity, temperature) and support stratified communities of microbes. Microbes carry out differing respiration processes (methanogenesis, sulfate reduction, etc.) based on the conditions around them and further amplify redox gradients present in the environment. However, distribution of microorganisms cannot solely be determined from thermodynamics (redox ladder), but is also influenced by ecological and physiological factors.
Redox gradients form along contaminant plumes, in both aquatic and terrestrial settings, as a function of the contaminant concentration and the impacts it has on relevant chemical processes and microbial communities. The highest rates of organic pollutant degradation along a redox gradient are found at the oxic-anoxic interface. In groundwater, this oxic-anoxic environment is referred to as the capillary fringe, where the water table meets soil and fills empty pores. Because this transition zone is both oxic and anoxic, electron acceptors and donors are in high abundance and there is a high level of microbial activity, leading to the highest rates of contaminant biodegradation.
Benthic sediments are heterogeneous in nature and subsequently exhibit redox gradients. Due to this heterogeneity, gradients of reducing and oxidizing chemical species do not always overlap enough to support electron transport needs of niche microbial communities. Cable bacteria have been characterized as sulfide-oxidizing bacteria that assist in connecting these areas of undersupplied and excess electrons to complete the electron transport for otherwise unavailable redox reactions.
Biofilms, found in tidal flats, glaciers, hydrothermal vents, and at the bottoms of aquatic environments, also exhibit redox gradients. The community of microbes—often metal- or sulfate-reducing bacteria—produces redox gradients on the micrometer scale as a function of spatial physiochemical variability.
See sulfate-methane transition zone for coverage of microbial processes in SMTZs.
See also
Anaerobic respiration
Chemocline
Gibbs free energy
Dead zone (ecology)
Hypoxia (environmental)
Marine sediment
Redox
Redox potential
Remineralization
Sediment-water interface
Sulfate-methane transition zone
References
Aquatic ecology
Biogeochemistry
Chemical oceanography
Electrochemistry
Environmental chemistry
Environmental science
Limnology
Marine geology
Oceanographical terminology
Oceanography
Redox
Sediments
Soil science | Redox gradient | [
"Physics",
"Chemistry",
"Biology",
"Environmental_science"
] | 1,611 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Redox",
"Oceanography",
"Environmental chemistry",
"Chemical oceanography",
"Electrochemistry",
"Biogeochemistry",
"Ecosystems",
"nan",
"Aquatic ecology"
] |
27,500,554 | https://en.wikipedia.org/wiki/GPS%20Block%20IIF | GPS Block IIF, or GPS IIF is an interim class of GPS (satellite) which were used to bridge the gap between previous Navstar Global Positioning System generations until the GPS Block III satellites became operational. They were built by Boeing, operated by the United States Air Force, and launched by the United Launch Alliance (ULA) using Evolved Expendable Launch Vehicles (EELV). They are the final component of the Block II GPS constellation to be launched. On 5 February 2016, the final Block IIF satellite was successfully launched, completing the series.
The spacecraft have a mass of approximately 1,630 kg and a design life of 12 years. Like earlier GPS satellites, Block IIF spacecraft operate in semi-synchronous medium Earth orbits, with an altitude of approximately 20,200 km and an orbital period of twelve hours.
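The quoted twelve-hour (semi-synchronous) period is consistent with the orbit altitude via Kepler's third law. The following is a quick back-of-the-envelope check; the mean Earth radius and the altitude value are rounded, so the result is approximate.

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371e3           # mean Earth radius, m (rounded)
ALTITUDE = 20_200e3         # approximate GPS orbit altitude, m

a = R_EARTH + ALTITUDE                            # semi-major axis of a circular orbit
period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
print(period_s / 3600)   # ~11.97 hours, i.e. about half a sidereal day
```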
The satellites supplement and partially replace the GPS Block IIA satellites that were launched between 1990 and 1997 with a design life of 7.5 years. The final satellite of the Block IIA series was decommissioned on 9 October 2019. The operational constellation now includes Block IIR, IIR-M, IIF and III variants.
Because the Evolved Expendable Launch Vehicles are more powerful than the Delta II, which was used to orbit earlier Block II GPS satellites, they can place the satellites directly into their operational orbits. As a result, Block IIF satellites do not carry apogee kick motors. The original contract for Block IIF, signed in 1996, called for 33 spacecraft. This was later reduced to 12, and program delays and technical problems pushed the first launch from 2006 to 2010.
New characteristics
Broadcasting L5 "safety of life" navigation signal demonstrated on USA-203
Broadcasting a new M-code signal
Doubling in the predicted accuracy
Better resistance to jamming
Reprogrammable processors that can receive software uploads
The first GPS satellites not to have Selective Availability (SA) hardware installed, which degraded civilian accuracy when turned on in the original satellite fleet
Launch history
Overall, 12 GPS Block IIF satellites were launched, all of which are currently operational:
See also
BeiDou Navigation Satellite System
BeiDou-2 (COMPASS) navigation system
Galileo (satellite navigation)
GLONASS
Quasi-Zenith Satellite System
References
Global Positioning System | GPS Block IIF | [
"Technology",
"Engineering"
] | 448 | [
"Global Positioning System",
"Aerospace engineering",
"Wireless locating",
"Aircraft instruments"
] |
27,500,605 | https://en.wikipedia.org/wiki/Canonical%20model | A canonical model is a design pattern used to communicate between different data formats. Essentially: create a data model which is a superset of all the others ("canonical"), and create a "translator" module or layer to/from which all existing modules exchange data with other modules. The canonical model acts as a middleman. Each model now only needs to know how to communicate with the canonical model and doesn't need to know the implementation details of the other modules.
A form of enterprise application integration, it is intended to reduce costs and standardize on agreed data definitions associated with integrating business systems. A canonical model is any model that is canonical in nature, i.e. a model which is in the simplest form possible based on a standard enterprise application integration (EAI) solution. The term canonical model is often used interchangeably with canonical data model, an enterprise design pattern which provides common data naming, definitions and values within a generalized data framework, and adopting one usually entails a move from point-to-point interfacing to a message-based integration methodology. Advantages of using a canonical data model are reducing the number of data translations and reducing the maintenance effort.
Adoption of message-based integration across an enterprise begins with a decision on the middleware to be used to transport messages between endpoints. Often this decision results in the adoption of an enterprise service bus (ESB) or enterprise application integration (EAI) solution. Most organizations also adopt a set of standards for message structure and content (message payload). The desire for a consistent message payload results in the construction of an enterprise or business domain canonical model, a common view within a given context, typically expressed as an enterprise form of XML schema built from the common model objects, thus providing the desired consistency and re-usability while ensuring data integrity.
See also
Canonical schema pattern
Common data model
Enterprise information integration
Enterprise integration
Information architecture
List of XML schemas
Service-oriented architecture
Web service
XML schema
References
External links
Forrester Research, Canonical Model Management Forum
Canonical Model, Canonical Schema, and Event Driven SOA
Forrester Research, Canonical Information Modeling
Enterprise Integration Patterns: Canonical Data Model
Metadata Hub and Spokes (Canonical Data Domain)
Enterprise application integration
Enterprise architecture
Enterprise modelling
Software design patterns | Canonical model | [
"Engineering"
] | 485 | [
"Systems engineering",
"Enterprise modelling"
] |
27,501,362 | https://en.wikipedia.org/wiki/Holstein%E2%80%93Herring%20method | The Holstein–Herring method, also called the surface integral method, or Smirnov's method is an effective means of getting the exchange energy splittings of asymptotically degenerate energy states in molecular systems. Although the exchange energy becomes elusive at large internuclear systems, it is of prominent importance in theories of molecular binding and magnetism. This splitting results from the symmetry under exchange of identical nuclei (Pauli exclusion principle). The basic idea pioneered by Theodore Holstein, Conyers Herring and Boris M. Smirnov in the 1950-1960.
Theory
The method can be illustrated for the hydrogen molecular ion or more generally, atom-ion systems or one-active electron systems, as follows. We consider states that are represented by even or odd functions with respect to behavior under space inversion. This is denoted with the suffixes g and u from the German gerade and ungerade and are standard practice for the designation of electronic states of diatomic molecules, whereas for atomic states the terms even and odd are used.
The electronic time-independent Schrödinger equation can be written as:
where E is the (electronic) energy of a given quantum mechanical state (eigenstate), with the electronic state function depending on the spatial coordinates of the electron and where is the electron-nuclear Coulomb potential energy function. For the hydrogen molecular ion, this is:
For any gerade (or even) state, the electronic Schrödinger wave equation can be written in atomic units () as:
For any ungerade (or odd) state, the corresponding wave equation can be written as:
For simplicity, we assume real functions (although the result can be generalized to the complex case). We then multiply the gerade wave equation by on the left and the ungerade wave equation on the left by and subtract to obtain:
where is the exchange energy splitting. Next, without loss of generality, we define orthogonal single-particle functions, and , located at the nuclei and write:
This is similar to the LCAO (linear combination of atomic orbitals) method used in quantum chemistry, but we emphasize that these functions are in general polarized, i.e. they are not pure eigenfunctions of angular momentum with respect to their nuclear center (see also below). Note, however, that in the limit of infinite internuclear separation, these localized functions collapse into the well-known atomic (hydrogenic) psi functions. We denote the mid-plane located exactly between the two nuclei (see diagram for hydrogen molecular ion for more details), whose unit normal vector lies along the internuclear axis, so that the full space is divided into left and right halves. By considerations of symmetry:
This implies that:
Also, these localized functions are normalized, which leads to:
and conversely. Integration of the above in the whole space left to the mid-plane yields:
and
From a variation of the divergence theorem on the above, we finally obtain:
where is a differential surface element of the mid-plane. This is the Holstein–Herring formula. From the latter, Conyers Herring was the first to show that the lead term for the asymptotic expansion of the energy difference between the two lowest states of the hydrogen molecular ion, namely the first excited state and the ground state (as expressed in molecular notation—see graph for energy curves), was found to be:
Previous calculations based on the LCAO of atomic orbitals had erroneously given a lead coefficient of instead of . While it is true that for the Hydrogen molecular ion, the eigenenergies can be mathematically expressed in terms of a generalization of the Lambert W function, these asymptotic formulae are more useful in the long range and the Holstein–Herring method has a much wider range of applications than this particular molecule.
Applications
The Holstein–Herring formula had limited applications until around 1990 when Kwong-Tin Tang, Jan Peter Toennies, and C. L. Yiu demonstrated that the localized function can be a polarized wave function, i.e. an atomic wave function localized at a particular nucleus but perturbed by the other nuclear center, and consequently without apparent gerade or ungerade symmetry, and that nonetheless the Holstein–Herring formula above can be used to generate the correct asymptotic series expansions for the exchange energies. In this way, one has successfully recast a two-center formulation into an effective one-center formulation. Subsequently, it has been applied with success to one-active-electron systems. Later, Scott et al. explained and clarified their results while sorting out subtle but important issues concerning the true convergence of the polarized wave function.
The outcome meant that it was possible to solve for the asymptotic exchange energy splittings to any order. The Holstein–Herring method has been extended to the two-active electron case i.e. the hydrogen molecule for the two lowest discrete states of and also for general atom-atom systems.
Physical interpretation
The Holstein–Herring formula can be physically interpreted as the electron undergoing "quantum tunnelling" between both nuclei, thus creating a current whose flux through the mid-plane allows us to isolate the exchange energy. The energy is thus shared, i.e. exchanged, between the two nuclear centers. Related to the tunnelling effect, a complementary interpretation from Sidney Coleman's Aspects of Symmetry (1985) has an "instanton" travelling near and about the classical paths within the path integral formulation. Note that the volume integral in the denominator of the Holstein–Herring formula is sub-dominant at large internuclear distances. Consequently this denominator is almost unity for sufficiently large internuclear distances and only the surface integral of the numerator need be considered.
See also
Dirac delta function model (1-D version of H2+)
Exchange interaction
Exchange symmetry
Conyers Herring
Hydrogen molecular ion
Lambert W function
Quantum tunneling
List of quantum-mechanical systems with analytical solutions
References
Quantum chemistry | Holstein–Herring method | [
"Physics",
"Chemistry"
] | 1,228 | [
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"Atomic",
" and optical physics"
] |
27,502,485 | https://en.wikipedia.org/wiki/Protein%20crystallization | Protein crystallization is the process of formation of a regular array of individual protein molecules stabilized by crystal contacts. If the crystal is sufficiently ordered, it will diffract. Some proteins naturally form crystalline arrays, like aquaporin in the lens of the eye.
In the process of protein crystallization, proteins are dissolved in an aqueous sample solution until they reach the supersaturated state. Different methods are used to reach that state such as vapor diffusion, microbatch, microdialysis, and free-interface diffusion. Developing protein crystals is a difficult process influenced by many factors, including pH, temperature, ionic strength in the crystallization solution, and even gravity. Once formed, these crystals can be used in structural biology to study the molecular structure of the protein, particularly for various industrial or medical purposes.
Development
For over 150 years, scientists from all around the world have known about the crystallization of protein molecules.
In 1840, Friedrich Ludwig Hünefeld accidentally discovered the formation of crystalline material in samples of earthworm blood held under two glass slides and occasionally observed small plate-like crystals in desiccated swine or human blood samples. These crystals were named as 'haemoglobin', by Felix Hoppe-Seyler in 1864. The seminal findings of Hünefeld inspired many scientists in the future.
In 1851, Otto Funke described the process of producing human haemoglobin crystals by diluting red blood cells with solvents, such as pure water, alcohol or ether, followed by slow evaporation of the solvent from the protein solution. In 1871, William T. Preyer, Professor at University of Jena, published a book entitled Die Blutkrystalle (The Crystals of Blood), reviewing the features of haemoglobin crystals from around 50 species of mammals, birds, reptiles and fishes.
In 1909, the physiologist Edward T. Reichert, together with the mineralogist Amos P. Brown, published a treatise on the preparation, physiology and geometrical characterization of haemoglobin crystals from several hundreds animals, including extinct species such as the Tasmanian wolf. Increasing protein crystals were found.
In 1934, John Desmond Bernal and his student Dorothy Hodgkin discovered that protein crystals surrounded by their mother liquor gave better diffraction patterns than dried crystals. Using pepsin, they were the first to discern the diffraction pattern of a wet, globular protein. Prior to Bernal and Hodgkin, protein crystallography had only been performed in dry conditions with inconsistent and unreliable results. This is the first X‐ray diffraction pattern of a protein crystal.
In 1958, the structure of myoglobin (a red protein containing heme), determined by X-ray crystallography, was first reported by John Kendrew. Kendrew shared the 1962 Nobel Prize in Chemistry with Max Perutz for this discovery.
Now, based on the protein crystals, the structures of them play a significant role in biochemistry and translational medicine.
Background
The theory of protein crystallization
Protein crystallization is governed by the same physics that governs the formation of inorganic crystals. For crystallization to occur spontaneously, the crystal state must be favored thermodynamically. This is described by the Gibbs free energy (∆G), defined as ∆G = ∆H − T∆S, which captures how the enthalpy change of a process, ∆H, trades off with the corresponding change in entropy, ∆S. Entropy, roughly, describes the disorder of a system. Highly ordered states, such as protein crystals, are disfavored thermodynamically compared to more disordered states, such as solutions of proteins in solvent, because the transition to a more ordered state would decrease the total entropy of the system (negative ∆S). For crystals to form spontaneously, the ∆G of crystal formation must be negative. In other words, the entropic penalty must be paid by a corresponding decrease in the total energy of the system (∆H). Familiar inorganic crystals such as sodium chloride spontaneously form at ambient conditions because the crystal state decreases the total energy of the system. However, crystallization of some proteins under ambient conditions would both decrease the entropy (negative ∆S) and increase the total energy (positive ∆H) of the system, and thus does not occur spontaneously. To achieve crystallization of such proteins conditions are modified to make crystal formation energetically favorable. This is often accomplished by creation of a supersaturated solution of the sample.
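The sign convention in the paragraph above can be made concrete with a two-line calculation. The enthalpy and entropy changes below are invented round numbers chosen only to show how a crystallization that is unfavorable at one temperature can become favorable at a lower one; they are not measured values for any real protein.

```python
def gibbs_free_energy(dH_kJ, dS_J_per_K, T_K):
    """Delta G = Delta H - T * Delta S, returned in kJ/mol."""
    return dH_kJ - T_K * dS_J_per_K / 1000.0

dH = -56.0    # kJ/mol, illustrative enthalpy change of crystal formation
dS = -195.0   # J/(mol*K), illustrative entropy penalty of ordering

for T in (298.0, 277.0):   # room temperature vs a cold-room at 4 C
    dG = gibbs_free_energy(dH, dS, T)
    print(T, round(dG, 1), "spontaneous" if dG < 0 else "not spontaneous")
# At 298 K Delta G is positive (~ +2 kJ/mol), at 277 K it is negative (~ -2 kJ/mol).
```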
A molecular view going from solution to crystal
Crystal formation requires two steps: nucleation and growth. Nucleation is the initiation step for crystallization. At the nucleation phase, protein molecules in solution come together as aggregates to form a stable solid nucleus. As the nucleus forms, the crystal grows bigger and bigger by molecules attaching to this stable nucleus. The nucleation step is critical for crystal formation since it is the first-order phase transition of samples moving from having a high degree of freedom to obtaining an ordered state (aqueous to solid). For the nucleation step to succeed, the manipulation of crystallization parameters is essential. The general approach is to lower the solubility of the targeted protein in solution until the solution becomes supersaturated; once the solubility limit is exceeded, nucleation and crystal growth can proceed.
Methods
Vapor diffusion
Vapor diffusion is the most commonly employed method of protein crystallization. In this method, droplets containing purified protein, buffer, and precipitant are allowed to equilibrate with a larger reservoir containing similar buffers and precipitants in higher concentrations. Initially, the droplet of protein solution contains comparatively low precipitant and protein concentrations, but as the drop and reservoir equilibrate, the precipitant and protein concentrations increase in the drop. If the appropriate crystallization solutions are used for a given protein, crystal growth occurs in the drop. This method is used because it allows for gentle and gradual changes in concentration of protein and precipitant concentration, which aid in the growth of large and well-ordered crystals.
Vapor diffusion can be performed in either hanging-drop or sitting-drop format. Hanging-drop apparatus involve a drop of protein solution placed on an inverted cover slip, which is then suspended above the reservoir. Sitting-drop crystallization apparatus place the drop on a pedestal that is separated from the reservoir. Both of these methods require sealing of the environment so that equilibration between the drop and reservoir can occur.
Microbatch
A microbatch usually involves immersing a very small volume of protein droplets in oil (as little as 1 μL). The reason that oil is required is that such a low volume of protein solution is used, and therefore evaporation must be inhibited to carry out the experiment aqueously. Although there are various oils that can be used, the two most common sealing agents are paraffin oils (described by Chayen et al.) and silicon oils (described by D'Arcy). There are also other methods for microbatching that do not use a liquid sealing agent and instead require a scientist to quickly place a film or some tape on a welled plate after placing the drop in the well.
Besides the very limited amounts of sample needed, this method also has as a further advantage that the samples are protected from airborne contamination, as they are never exposed to the air during the experiment.
Microdialysis
Microdialysis takes advantage of a semi-permeable membrane, across which small molecules and ions can pass, while proteins and large polymers cannot cross. By establishing a gradient of solute concentration across the membrane and allowing the system to progress toward equilibrium, the system can slowly move toward supersaturation, at which point protein crystals may form.
Microdialysis can produce crystals by salting out, employing high concentrations of salt or other small membrane-permeable compounds that decrease the solubility of the protein. Very occasionally, some proteins can be crystallized by dialysis salting in, by dialyzing against pure water, removing solutes, driving self-association and crystallization.
Free-interface diffusion
This technique brings together protein and precipitation solutions without premixing them, but instead, injecting them through either sides of a channel, allowing equilibrium through diffusion. The two solutions come into contact in a reagent chamber, both at their maximum concentrations, initiating spontaneous nucleation. As the system comes into equilibrium, the level of supersaturation decreases, favouring crystal growth.
Influencing factors
pH
The basic driving force for protein crystallization is to optimize the number of bonds a protein molecule can form with its neighbors through intermolecular interactions. These interactions depend on the electron densities of the molecules and on the protein side chains, whose charges change as a function of pH. The tertiary and quaternary structure of proteins are determined by intermolecular interactions between the amino acids' side groups, in which the hydrophilic groups usually face outwards to the solution to form a hydration shell with the solvent (water). As the pH changes, the charge on these polar side groups also changes with respect to the solution pH and the protein's pKa. Hence, the choice of pH is essential to promote crystal formation, in which bonding between protein molecules is more favorable than bonding with water molecules. pH is one of the most powerful manipulations that one can assign for the optimal crystallization condition.
Temperature
Temperature is another important parameter since protein solubility is a function of temperature. In protein crystallization, manipulation of temperature to yield successful crystals is one common strategy. Unlike pH, the temperature of different components of the crystallography experiments can impact the final results, such as the temperature of buffer preparation and the temperature of the actual crystallization experiment.
Chemical additives
Chemical additives are small chemical compounds that are added to the crystallization process to increase the yield of crystals. The role of small molecules in protein crystallization received little consideration in the early days since they were regarded as contaminants in most cases. Smaller molecules crystallize better than macromolecules such as proteins; therefore, the use of chemical additives had been limited prior to the study by McPherson. However, this is a powerful aspect of the experimental parameters for crystallization that is important for biochemists and crystallographers to further investigate and apply.
Technologies
High throughput crystallization screening
High-throughput methods exist to help streamline the large number of experiments required to explore the various conditions that are necessary for successful crystal growth. There are numerous commercial kits available which supply preassembled ingredients in systems designed to maximize the chance of successful crystallization. Using such a kit, a scientist avoids the hassle of formulating individual screening solutions and of determining appropriate crystallization conditions by trial and error.
Liquid-handling robots can be used to set up and automate large numbers of crystallization experiments simultaneously. What would otherwise be a slow and potentially error-prone process carried out by a human can be accomplished efficiently and accurately with an automated system. Robotic crystallization systems use the same components described above, but carry out each step of the procedure quickly and with a large number of replicates. Each experiment utilizes tiny amounts of solution, and the advantage of the smaller size is two-fold: the smaller sample sizes not only cut down on expenditure of purified protein, but smaller amounts of solution lead to quicker crystallizations. Each experiment is monitored by a camera which detects crystal growth.
Protein engineering
Proteins can be engineered to improve the chance of successful protein crystallization by using techniques such as surface entropy reduction or engineering in crystal contacts. Frequently, problematic cysteine residues can be replaced by alanine to avoid disulfide-mediated aggregation, and residues such as lysine, glutamate, and glutamine can be changed to alanine to reduce intrinsic protein flexibility, which can hinder crystallization.
Applications
Macromolecular structures can be determined from protein crystals using a variety of methods, including X-ray diffraction/X-ray crystallography, cryogenic electron microscopy (CryoEM) (including electron crystallography and microcrystal electron diffraction (MicroED)), small-angle X-ray scattering, and neutron diffraction. See also Structural biology.
Crystallization of proteins can also be useful in the formulation of proteins for pharmaceutical purposes.
See also
Crystal engineering
Crystal growth
Crystal optics
Crystal system
Crystallization processes
Crystallographic database
Crystallographic group
Diffraction
Electron crystallography
Electron diffraction
Neutron crystallography
Neutron diffraction
Structural biology
X-ray diffraction
References
Further reading
External links
This page was reproduced (with modifications) with expressed consent from Dr. A. Malcolm Campbell. As of 2010, the original page can be found at
Protein structure
Crystallography | Protein crystallization | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,638 | [
"Materials science",
"Crystallography",
"Condensed matter physics",
"Structural biology",
"Protein structure"
] |
34,333,919 | https://en.wikipedia.org/wiki/Human%20Metabolome%20Database | The Human Metabolome Database (HMDB) is a comprehensive, high-quality, freely accessible, online database of small molecule metabolites found in the human body. It has been created by the Human Metabolome Project funded by Genome Canada and is one of the first dedicated metabolomics databases. The HMDB facilitates human metabolomics research, including the identification and characterization of human metabolites using NMR spectroscopy, GC-MS spectrometry and LC/MS spectrometry. To aid in this discovery process, the HMDB contains three kinds of data: 1) chemical data, 2) clinical data, and 3) molecular biology/biochemistry data (Fig. 1–3). The chemical data includes 41,514 metabolite structures with detailed descriptions along with nearly 10,000 NMR, GC-MS and LC/MS spectra.
The clinical data includes information on >10,000 metabolite-biofluid concentrations and metabolite concentration information on more than 600 different human diseases. The biochemical data includes 5,688 protein (and DNA) sequences and more than 5,000 biochemical reactions that are linked to these metabolite entries. Each metabolite entry in the HMDB contains more than 110 data fields with 2/3 of the information being devoted to chemical/clinical data and the other 1/3 devoted to enzymatic or biochemical data. Many data fields are hyperlinked to other databases (KEGG, MetaCyc, PubChem, Protein Data Bank, ChEBI, Swiss-Prot, and GenBank) and a variety of structure and pathway viewing applets. The HMDB database supports extensive text, sequence, spectral, chemical structure and relational query searches. It has been widely used in metabolomics, clinical chemistry, biomarker discovery and general biochemistry education.
Four additional databases, DrugBank, T3DB, SMPDB and FooDB are also part of the HMDB suite of databases. DrugBank contains equivalent information on ~1,600 drug and drug metabolites, T3DB contains information on 3,100 common toxins and environmental pollutants, SMPDB contains pathway diagrams for 700 human metabolic and disease pathways, while FooDB contains equivalent information on ~28,000 food components and food additives.
Version history
The first version of HMDB was released on January 1, 2007, followed by subsequent versions on January 1, 2009 (version 2.0), August 1, 2009 (version 2.5), September 18, 2012 (version 3.0), January 1, 2013 (version 3.5), 2017 (version 4.0), and 2022 (version 5.0). Details for each of the major HMDB versions (up to version 5.0) are provided in Table 1.
Scope and access
All data in HMDB is non-proprietary or is derived from a non-proprietary source. It is freely accessible and available to anyone. In addition, nearly every data item is fully traceable and explicitly referenced to the original source. HMDB data is available through a public web interface and downloads.
See also
KEGG
DrugBank
SMPDB
MetaCyc
Bovine Metabolome Database
Metabolome
Metabolomics
List of biological databases
References
Biochemistry databases
Metabolomic databases
Medical databases
Food databases
Human metabolites | Human Metabolome Database | [
"Chemistry",
"Biology"
] | 691 | [
"Biochemistry",
"Biochemistry databases",
"Chemical databases",
"Food databases"
] |
34,333,996 | https://en.wikipedia.org/wiki/CTAG | CTAG is a computational fluid dynamics model for the behaviour of air pollutants on and near roadways.
CTAG, which stands for Comprehensive Turbulent Aerosol Dynamics and Gas Chemistry, is an environmental turbulent reacting flow model designed to simulate the transport and transformation of air pollutants in complex environments. It is developed by the Energy and the Environmental Research Laboratory (EERL) at Cornell University.
CTAG’s plume transport model designed for on-road and near-road applications is called CFD-VIT-RIT. CTAG has been applied to investigate the plume dispersion near different highway configurations, chemical evolution of nitrogen oxides near roadways, spatial variations of air pollutants in highway-building environments, and effects of vegetation barriers on near-road air quality.
References
External links
EERL
Atmospheric dispersion modeling
Computational fluid dynamics | CTAG | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 172 | [
"Computational fluid dynamics",
"Computational physics",
"Atmospheric dispersion modeling",
"Environmental engineering",
"Environmental modelling",
"Fluid dynamics"
] |
34,334,126 | https://en.wikipedia.org/wiki/Miniwiz | Miniwiz () is a Taiwanese company that upcycles consumer and industrial waste into construction and consumer products. The company was founded by Arthur Huang. It is headquartered in Taiwan with offices in Milan, Singapore, Beijing, and Shanghai.
History
Miniwiz was founded in March 2005 by Arthur Huang and Jarvis Liu.
In 2010, it constructed EcoARK, a nine-story tall pavilion used as the main exhibition hall for the 2010 Taipei International Flora Exposition. It was built with plastic bottles upcycled as construction material, reportedly saving 300 tons of plastic from ending up in landfills.
In 2010, its Polli-Brick construction material was a finalist in The Earth Awards.
In 2014, it collaborated with Nike, Inc. to recycle athletic shoes and shoe manufacturing by-product into construction material, named Nike Grind.
In May 2015, it announced the Ecofighter project, which would modify a Rutan VariEze by replacing its elements with recycled material, with a flight planned for 2016.
In April 2016, Miniwiz collaborated with tobacco company Philip Morris International to apply recycled filters of iQos heatsticks to architect Cesare Leonard's 1960s furniture designs at the Milan Design Week.
In 2017, Miniwiz collaborated with Bonotto to exhibit textiles created from recycled material at Fuorisalone in Milan and NYCxDesign in New York City.
In 2019, its Trashpresso mobile recycling plant won an IDEAT award. That same year, it also launched a smaller version of the recycling plant named the mini Trashpresso.
See also
List of companies of Taiwan
References
External links
Companies based in Taipei
Taiwanese companies established in 2005
Sustainable building
Engineering companies of Taiwan
Polymers
Technology companies established in 2005 | Miniwiz | [
"Chemistry",
"Materials_science",
"Engineering"
] | 344 | [
"Sustainable building",
"Building engineering",
"Construction",
"Polymer chemistry",
"Polymers"
] |
34,340,344 | https://en.wikipedia.org/wiki/Toxicity%20label | Toxicity labels viz; red label, yellow label, blue label and green label are mandatory labels employed on pesticide containers in India identifying the level of toxicity (that is, the toxicity class) of the contained pesticide. The schemes follows from the Insecticides Act of 1968 and the Insecticides Rules of 1971.
The labeling follows a general scheme as laid down in the Insecticides Rules, 1971, and contains information such as brand name, name of manufacturer, name of the antidote in case of accidental consumption etc. A major aspect of the label is a color mark which represents the toxicity of the material by a color code. Thus the labelling scheme proposes four different colour labels: viz red, yellow, blue, and green.
The toxicity classification applies only to pesticides which are allowed to be sold in India. Some of the classified pesticides may be banned in some states of India, by decision of the state governments. Some of the red-label and yellow-label pesticides were banned in the state of Kerala following the Endosulfan protests of 2011.
See also
Toxicity class listing international regulations outside India
References
Chemical safety
Pesticides
Certification marks in India | Toxicity label | [
"Chemistry",
"Biology",
"Environmental_science"
] | 234 | [
"Chemical accident",
"Pesticides",
"Biocides",
"Toxicology",
"nan",
"Chemical safety"
] |
34,340,842 | https://en.wikipedia.org/wiki/Modular%20Mining%20Systems | Modular Mining is a privately held company that develops, manufactures, markets, and services mining equipment management systems, headquartered in Tucson, Arizona, U.S.A. Modular's DISPATCH Fleet Management System is available in eight languages, and has been deployed at more than 250 active mine sites; among these are nine of the ten highest-producing surface mines in the world.
Modular Mining was founded in Tucson, Arizona, in 1979. Over the next two years, Modular Mining developed and successfully implemented the DISPATCH system, a computerized fleet management system designed to optimize haul truck assignments to loading and dumping points in an open-pit mine and to produce operating reports during the shift. Following the DISPATCH system's development, Modular Mining has gone on to create a number of other software and hardware products that seek to improve different areas of mine operations.
In 1996, Komatsu America Corporation (KAC) of Rolling Meadows, Illinois, in concert with their parent company, heavy equipment manufacturer Komatsu, Limited, of Tokyo, Japan (Komatsu), acquired a controlling interest in Modular Mining. In 2003, KAC acquired the remaining shares of the company, and today Modular Mining remains a wholly owned Komatsu subsidiary.
History
First Installations
From 1980 to 1981, Modular worked with Phelps Dodge Corporation (a wholly owned subsidiary of Freeport-McMoRan Copper & Gold since 2007) to develop and install the first version of the DISPATCH system at the company's open–pit copper mine in Tyrone, New Mexico. Following this implementation, Phelps Dodge conducted a productivity study on the system, showing a 10% production increase.
In 1985, Phelps Dodge contracted Modular Mining to develop and install a DISPATCH system tailored to the rail operation at its mine in Morenci, Arizona, and the company also had the system installed at El Chino Mine in Santa Rita, New Mexico.
Global Growth
Following initial installations in the Southwestern U.S., Modular Mining went on to install the DISPATCH System at various mines around the world. International customers included Iron Ore Co. of Canada (IOC) at Labrador in 1985; Palabora Mining Co. Ltd. in South Africa in 1985, and the Cerrejón mine in Colombia in 1988. After a decade of rising industry demand for the DISPATCH system, Modular established its first subsidiary, Modular Mining Systems, Pty. Ltd., in Caboolture, Australia in 1989 to support its expanding Australasian customer base. The Australia office was relocated to Tuggerah, Australia in 1993.
Modular has established the following global subsidiaries:
1989 – Modular Mining Systems, Pty. Ltd., New South Wales, Australia
1993 – Modular Mining Systems, Inc. y Cia Ltda., Santiago, Chile
1996 – P.T. Modular Mining Systems, Balikpapan, Indonesia
1998 – Modular Mining Systems China, Beijing, China
2000 – Modular Mining System Africa, Pty. Ltd., Johannesburg, South Africa
2000 – Modular Mining Systems do Brasil Ltda., Recife, Brazil
2000 – Modular Mining Systems Canada Ltd., Port Coquitlam, Canada
2002 – Modular Mining Systems SCRL, Lima, Peru
2004 – Modular Mining Systems India Pvt. Ltd., Pune, India
2006 – Modular Mining Systems Eurasia, Moscow, Russia
2009 – Modular Mining Systems do Brasil Ltda., Belo Horizonte, Brazil
Modular Today
Modular Mining has established itself in mine management technology, with systems currently running in eight languages at sites around the world. In October 2010, Modular announced its two–hundredth customer: Vale S.A.’s Moatize mine, an open–pit coalmine in Tete Province, Mozambique, which marked the third greenfield mine that year to implement Modular technology. Recent innovations from Modular have come in the areas of machine guidance, remote vital signs monitoring, and mine safety.
Products
The IntelliMine Suite
Modular provides software applications and hardware products designed to enhance mining operations’ productivity, safety, and equipment availability/utilization. The core Modular product line, the Intelli–Mine integrated asset management suite, includes the following products:
Modular Mining's flagship product, the DISPATCH Fleet Management system (FMS) for open–pit mines, has become established as a standard for fleet management software in the mining industry. At its core, the system optimizes haul truck assignments, reducing truck queuing at loading and dumping locations, through the use of multiple optimization algorithms, including linear programming, best path, and dynamic programming. A mine's dispatcher uses the system to centrally manage mine operations, including equipment allocation, shift change, refueling, and equipment downtime events. Other features of the system include GPS-based equipment positioning, equipment health monitoring, maintenance tracking, blending, and production reporting. The DISPATCH system enables real-time, computerized, central management of mine operations to maximize production and efficiency, while increasing safety and control.
The DISPATCH Underground system, first deployed in 1991, is the only fully integrated mine management technology for underground operations. The system uses rugged field computer components designed for underground mining environments. DISPATCH Underground supports all major underground mining methods and processes, providing tools for inventory and material movement, crew management, and fleet monitoring, and location management.
The MineCare maintenance management system is a software application designed to minimize maintenance response time, perform predictive and reliability-centered maintenance (RCM), provide critical failure analysis information, and report key performance indicators (KPIs) to mine maintenance personnel in real time. Modular offers over 175 interfaces to original equipment manufacturer (OEM) systems, giving operators access to equipment data, and allowing maintenance personnel to monitor the health of components such as engines, hydraulics, electrical systems, and tires. The MineCare system has helped save considerable maintenance costs and equipment downtime at mining operations around the world.
The ProVision machine guidance system utilizes high–precision Global Navigation Satellite Systems or GNSS (including GPS and GLONASS) to enhance safety, provide reporting and immediate operator feedback, highlight opportunities for improvement initiatives, and reduce operating costs. ProVision technology is available for excavators, loaders, backhoes, drills, and dozers. To improve safety, the ProVision system offers features such as the Proximity Detection module, which is designed to alert equipment operators of encroaching equipment, hazards, and keep–out zones by use of GNSS and on–board camera.
The RoadMap position and safety tracking system enhances safety for light vehicles at mine sites. It monitors light vehicle positions and provides real–time awareness of changing conditions to visitors and mine personnel. Any equipment with a RoadMap system installed continuously tracks its own position using GPS, comparing it in real time to a preconfigured road network. The basic configuration uses a PDA with internal GPS receiver.
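The DISPATCH system described above optimizes haul truck assignments using methods such as linear programming. As a rough illustration of that kind of assignment problem, and not Modular's actual, proprietary algorithm, the sketch below matches trucks to loading points so that the total estimated travel time is minimized; the travel-time matrix and equipment indices are hypothetical.

# Illustrative only: assign haul trucks to shovels by minimizing total
# estimated travel time with the Hungarian algorithm. This is not the
# DISPATCH system's algorithm; all numbers are made up.
import numpy as np
from scipy.optimize import linear_sum_assignment

# travel_time[i][j]: estimated minutes for truck i to reach shovel j
travel_time = np.array([
    [4.0, 9.5, 7.2],
    [6.1, 3.8, 10.4],
    [8.3, 5.0, 4.6],
    [5.5, 7.7, 6.9],   # rectangular matrices are allowed: extra trucks stay unassigned
])

trucks, shovels = linear_sum_assignment(travel_time)   # minimizes the summed cost
for t, s in zip(trucks, shovels):
    print(f"truck {t} -> shovel {s} ({travel_time[t, s]:.1f} min)")
print("total assigned travel time:", round(float(travel_time[trucks, shovels].sum()), 1), "min")

A real dispatching system would of course re-solve this kind of problem continuously as trucks report their positions, and would fold in queuing, blending, and refueling constraints; the sketch only shows the core assignment step.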
Autonomous Haulage Systems
From early on, the development of autonomous (driverless) haulage systems (AHS), i.e. vehicle automation has played a significant role in Modular Mining's affiliation with Komatsu. In November 2011, multinational mining and resources company Rio Tinto signed an agreement with Komatsu Limited for the purchase of 150 autonomous haul trucks, to be implemented into the company's Western Australian Pilbara operations by the end of 2015. The deal marked the first large–scale commercial deployment of autonomous haulage in mining, and came as a major milestone for Modular as a company. Modular's contribution to the Komatsu AHS includes: the supervisory system, operational intelligence, communications infrastructure, operation reporting, and vehicle–interaction safety technologies. This implementation of autonomous haulage has been credited with the ability to create high–tech jobs and help improve mine productivity, safety, and environmental performance.
Presence in Underground Mining
In 1991, Modular Mining Systems developed and installed the world's first underground fleet management system at a diamond mine in South Africa. The DISPATCH Underground system increased productivity by detecting equipment positions in real time using infrared beacons, as well as by operator interaction with a field computer on board the Load–Haul–Dump (LHD) machines used to extract ore-bearing rock. Field information from the on-board computers was delivered to the central server. Today, some of the world's largest underground mining operations use DISPATCH Underground to proactively manage underground mining equipment. The solution was updated in 2016 in line with current market requirements.
ModularReady OEM Interfaces
Modular has relationships with all major Original Equipment Manufacturers (OEMs) of heavy equipment in mining to provide asset health monitoring. ModularReady interfaces allow 3rd–party systems to interface with Intelli–Mine products. In total, Modular has over 175 interfaces that provide thousands of parameters to issue payload readings, automate haul cycle events, issue OEM and user–defined alarms, and analyze trends that indicate impending equipment failure. ModularReady interfaces provide the ability to aggregate min, max, and average parameter readings, streaming them to central applications such as MineCare for use in RCM analysis.
Services
Value Added Services
Modular provides value-added services, to help mines discover and address areas for improvement in various parts of mine operations and maintenance. The Modular Value Add Services (VAS) division specializes in design, implementation, and optimization of mine management systems, for both surface and underground mines. Services include product user training, consulting, business process mapping (BPM), performance benchmarking, change management, and lean services, among others. In April 2011, Modular expanded its offer of change management and lean to all customers worldwide.
See also
Mining
Komatsu Limited
References
Technology companies of the United States
Mining equipment companies
Multinational companies headquartered in the United States
Companies based in Tucson, Arizona
American companies established in 1979 | Modular Mining Systems | [
"Engineering"
] | 1,960 | [
"Mining equipment",
"Mining equipment companies"
] |
5,071,294 | https://en.wikipedia.org/wiki/Intrinsic%20viscosity | Intrinsic viscosity is a measure of a solute's contribution to the viscosity of a solution. If is the viscosity in the absence of the solute, is (dynamic or kinematic) viscosity of the solution and is the volume fraction of the solute in the solution, then intrinsic viscosity is defined as the dimensionless number It should not be confused with inherent viscosity, which is the ratio of the natural logarithm of the relative viscosity to the mass concentration of the polymer.
When the solute particles are rigid spheres at infinite dilution, the intrinsic viscosity equals 5/2, as shown first by Albert Einstein.
In practical settings, is usually solute mass concentration (c, g/dL), and the units of intrinsic viscosity are deciliters per gram (dL/g), otherwise known as inverse concentration.
Formulae for rigid spheroids
Generalizing from spheres to spheroids with an axial semiaxis (i.e., the semiaxis of revolution) and equatorial semiaxes , the intrinsic viscosity can be written
where the constants are defined
The coefficients are the Jeffery functions
General ellipsoidal formulae
It is possible to generalize the intrinsic viscosity formula from spheroids to arbitrary ellipsoids with semiaxes , and .
Frequency dependence
The intrinsic viscosity formula may also be generalized to include a frequency dependence.
Applications
The intrinsic viscosity is very sensitive to the axial ratio of spheroids, especially of prolate spheroids. For example, the intrinsic viscosity can provide rough estimates of the number of subunits in a protein fiber composed of a helical array of proteins such as tubulin. More generally, intrinsic viscosity can be used to assay quaternary structure. In polymer chemistry intrinsic viscosity is related to molar mass through the Mark–Houwink equation. A practical method for the determination of intrinsic viscosity is with a Ubbelohde viscometer or with a RheoSense VROC viscometer.
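In practice the limit in the definition is approximated by measuring a short dilution series and extrapolating the reduced viscosity to zero concentration; the Mark–Houwink relation [η] = K·M^a can then be inverted for an approximate molar mass. The sketch below illustrates this workflow; the viscosity readings and the Mark–Houwink constants K and a are arbitrary placeholders, not values for any specific polymer–solvent pair.

# Minimal sketch: estimate intrinsic viscosity by extrapolating the reduced
# viscosity to zero concentration, then invert the Mark-Houwink equation.
# All numbers below are illustrative placeholders, not real measurements.
import numpy as np

eta0 = 0.89e-3                      # solvent viscosity (Pa*s), assumed
c = np.array([0.2, 0.4, 0.6, 0.8])  # polymer concentrations (g/dL), assumed
eta = np.array([1.05e-3, 1.23e-3, 1.43e-3, 1.65e-3])  # solution viscosities (Pa*s)

eta_red = (eta - eta0) / (eta0 * c)         # reduced viscosity (dL/g)
slope, intercept = np.polyfit(c, eta_red, 1)
intrinsic = intercept                        # [eta] = limit of eta_red as c -> 0
print(f"intrinsic viscosity ~ {intrinsic:.3f} dL/g")

# Mark-Houwink: [eta] = K * M**a  ->  M = ([eta]/K)**(1/a)
K, a = 1.0e-4, 0.7                           # hypothetical constants for some polymer/solvent
M = (intrinsic / K) ** (1.0 / a)
print(f"estimated molar mass ~ {M:.3e} g/mol")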
References
Fluid dynamics
Viscosity | Intrinsic viscosity | [
"Physics",
"Chemistry",
"Engineering"
] | 442 | [
"Physical phenomena",
"Physical quantities",
"Chemical engineering",
"Piping",
"Wikipedia categories named after physical quantities",
"Viscosity",
"Physical properties",
"Fluid dynamics"
] |
5,074,413 | https://en.wikipedia.org/wiki/Lattice%20Boltzmann%20methods | The lattice Boltzmann methods (LBM), originated from the lattice gas automata (LGA) method (Hardy-Pomeau-Pazzis and Frisch-Hasslacher-Pomeau models), is a class of computational fluid dynamics (CFD) methods for fluid simulation. Instead of solving the Navier–Stokes equations directly, a fluid density on a lattice is simulated with streaming and collision (relaxation) processes. The method is versatile as the model fluid can straightforwardly be made to mimic common fluid behaviour like vapour/liquid coexistence, and so fluid systems such as liquid droplets can be simulated. Also, fluids in complex environments such as porous media can be straightforwardly simulated, whereas with complex boundaries other CFD methods can be hard to work with.
Algorithm
Unlike CFD methods that solve the conservation equations of macroscopic properties (i.e., mass, momentum, and energy) numerically, LBM models the fluid consisting of fictive particles, and such particles perform consecutive propagation and collision processes over a discrete lattice. Due to its particulate nature and local dynamics, LBM has several advantages over other conventional CFD methods, especially in dealing with complex boundaries, incorporating microscopic interactions, and parallelization of the algorithm. A different interpretation of the lattice Boltzmann equation is that of a discrete-velocity Boltzmann equation. The numerical methods of solution of the system of partial differential equations then give rise to a discrete map, which can be interpreted as the propagation and collision of fictitious particles.
In an algorithm, there are collision and streaming steps. These evolve the density of the fluid, f(x, t), where x is the position and t is the time. As the fluid is on a lattice, the density has a number of components equal to the number of lattice vectors connected to each lattice point. As an example, consider the lattice vectors of a simple lattice used in two-dimensional simulations. This lattice is usually denoted D2Q9, for two dimensions and nine vectors: four vectors along north, east, south and west, plus four vectors to the corners of a unit square, plus a vector with both components zero. Then, for example, the vector pointing due south has no east–west component but a north–south component of −1. So one of the nine components of the total density at a given lattice point is that part of the fluid at that point moving due south, at a speed in lattice units of one.
Then the steps that evolve the fluid in time are:
The collision step
f_i(x, t + δt) = f_i(x, t) + (1/τ_f) [f_i^eq(x, t) − f_i(x, t)],
which is the Bhatnagar, Gross and Krook (BGK) model for relaxation to equilibrium via collisions between the molecules of a fluid. Here f_i^eq is the equilibrium density along direction i at the current density there; it can be expressed in a Taylor approximation (see below, in Mathematical equations for simulations) as
f_i^eq = w_i ρ [1 + (e_i·u)/c_s^2 + (e_i·u)^2/(2 c_s^4) − u^2/(2 c_s^2)],
where w_i are the lattice weights, e_i the lattice vectors, u the flow velocity, ρ the density and c_s the lattice speed of sound.
The model assumes that the fluid locally relaxes to equilibrium over a characteristic timescale τ_f. This timescale determines the kinematic viscosity: the larger it is, the larger the kinematic viscosity.
The streaming step
As f_i(x, t) is, by definition, the fluid density at point x at time t that is moving at a velocity of e_i per time step, at the next time step it will have flowed to the point x + e_i; the post-collision densities are therefore simply shifted along their lattice vectors, f_i(x + e_i, t + δt) = f_i(x, t).
Advantages
The LBM was designed from scratch to run efficiently on massively parallel architectures, ranging from inexpensive embedded FPGAs and DSPs up to GPUs and heterogeneous clusters and supercomputers (even with a slow interconnection network). It enables complex physics and sophisticated algorithms. Efficiency leads to a qualitatively new level of understanding since it allows solving problems that previously could not be approached (or only with insufficient accuracy).
The method originates from a molecular description of a fluid and can directly incorporate physical terms stemming from a knowledge of the interaction between molecules. Hence it is an indispensable instrument in fundamental research, as it keeps the cycle between the elaboration of a theory and the formulation of a corresponding numerical model short.
Automated data pre-processing and lattice generation in a time that accounts for a small fraction of the total simulation.
Parallel data analysis, post-processing and evaluation.
Fully resolved multi-phase flow with small droplets and bubbles.
Fully resolved flow through complex geometries and porous media.
Complex, coupled flow with heat transfer and chemical reactions.
Limitations and development
As with Navier–Stokes based CFD, LBM methods have been successfully coupled with thermal-specific solutions to enable heat transfer (solids-based conduction, convection and radiation) simulation capability. For multiphase/multicomponent models, the interface thickness is usually large and the density ratio across the interface is small when compared with real fluids. Recently this problem has been resolved by Yuan and Schaefer who improved on models by Shan and Chen, Swift, and He, Chen, and Zhang. They were able to reach density ratios of 1000:1 by simply changing the equation of state. It has been proposed to apply Galilean Transformation to overcome the limitation of modelling high-speed fluid flows.
The fast advancement of this method has also allowed successful simulation of microfluidics. However, as of now, LBM is still limited in simulating high-Knudsen-number flows, where Monte Carlo methods are used instead; high-Mach-number flows in aerodynamics are still difficult for LBM, and a consistent thermo-hydrodynamic scheme is absent.
Development from the LGA method
LBM originated from the lattice gas automata (LGA) method, which can be considered as a simplified fictitious molecular dynamics model in which space, time, and particle velocities are all discrete. For example, in the 2-dimensional FHP Model each lattice node is connected to its neighbors by 6 lattice velocities on a triangular lattice; there can be either 0 or 1 particles at a lattice node moving with a given lattice velocity. After a time interval, each particle will move to the neighboring node in its direction; this process is called the propagation or streaming step. When more than one particle arrives at the same node from different directions, they collide and change their velocities according to a set of collision rules. Streaming steps and collision steps alternate. Suitable collision rules should conserve the particle number (mass), momentum, and energy before and after the collision. LGA suffer from several innate defects for use in hydrodynamic simulations: lack of Galilean invariance for fast flows, statistical noise and poor Reynolds number scaling with lattice size. LGA are, however, well suited to simplify and extend the reach of reaction diffusion and molecular dynamics models.
The main motivation for the transition from LGA to LBM was the desire to remove the statistical noise by replacing the Boolean particle number in a lattice direction with its ensemble average, the so-called density distribution function. Accompanying this replacement, the discrete collision rule is also replaced by a continuous function known as the collision operator. In the LBM development, an important simplification is to approximate the collision operator with the Bhatnagar-Gross-Krook (BGK) relaxation term. This lattice BGK (LBGK) model makes simulations more efficient and allows flexibility of the transport coefficients. On the other hand, it has been shown that the LBM scheme can also be considered as a special discretized form of the continuous Boltzmann equation. From Chapman-Enskog theory, one can recover the governing continuity and Navier–Stokes equations from the LBM algorithm.
Lattices and the DnQm classification
Lattice Boltzmann models can be operated on a number of different lattices, both cubic and triangular, and with or without rest particles in the discrete distribution function.
A popular way of classifying the different methods by lattice is the DnQm scheme. Here "Dn" stands for "n dimensions", while "Qm" stands for "m speeds". For example, D3Q15 is a 3-dimensional lattice Boltzmann model on a cubic grid, with rest particles present. Each node has a crystal shape and can deliver particles to 15 nodes: each of the 6 neighboring nodes that share a surface, the 8 neighboring nodes sharing a corner, and itself. (The D3Q15 model does not contain particles moving to the 12 neighboring nodes that share an edge; adding those would create a "D3Q27" model.)
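As a concrete illustration of the DnQm naming, the velocity sets can be generated directly from the neighbour relations described above. The short sketch below builds the D3Q27 and D3Q15 velocity sets this way; it is only an illustration of the classification, not part of any particular solver.

# Illustrative sketch: build DnQm velocity sets from neighbour relations.
# D3Q27: all vectors with components in {-1, 0, 1}.
# D3Q15: rest vector + 6 face neighbours + 8 corner neighbours (no edge neighbours).
from itertools import product

d3q27 = [v for v in product((-1, 0, 1), repeat=3)]
d3q15 = [v for v in d3q27 if sum(abs(c) for c in v) != 2]  # drop the 12 edge neighbours

print(len(d3q27), "velocities in D3Q27")   # 27
print(len(d3q15), "velocities in D3Q15")   # 15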
Real quantities as space and time need to be converted to lattice units prior to simulation. Nondimensional quantities, like the Reynolds number, remain the same.
Lattice units conversion
In most lattice Boltzmann simulations, δx is the basic unit for lattice spacing, so if the domain of length L has N lattice units along its entire length, the space unit is simply defined as δx = L/N. Speeds in lattice Boltzmann simulations are typically given in terms of the speed of sound. The discrete time unit can therefore be given as δt = (c_s/C_s) δx, where the denominator C_s is the physical speed of sound.
For small-scale flows (such as those seen in porous media mechanics), operating with the true speed of sound can lead to unacceptably short time steps. It is therefore common to raise the lattice Mach number to something much larger than the real Mach number, and compensating for this by raising the viscosity as well in order to preserve the Reynolds number.
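As a worked example of such a conversion, the sketch below maps a physical domain, sound speed and kinematic viscosity to lattice units and derives the BGK relaxation time from the common relation ν = c_s²(τ − 1/2) in lattice units; the physical input values are arbitrary placeholders.

# Illustrative unit conversion for a lattice Boltzmann setup.
# Physical inputs are placeholder values (roughly air-like).
import math

L_phys  = 0.1        # domain length (m)
C_s     = 343.0      # physical speed of sound (m/s)
nu_phys = 1.5e-5     # physical kinematic viscosity (m^2/s)

N    = 200                    # lattice nodes across the domain
dx   = L_phys / N             # physical size of one lattice spacing (m)
c_s  = 1.0 / math.sqrt(3.0)   # lattice speed of sound (lattice units per time step)
dt   = c_s * dx / C_s         # physical duration of one time step (s)

# Kinematic viscosity in lattice units, then the BGK relaxation time.
nu_lat = nu_phys * dt / dx**2
tau    = nu_lat / c_s**2 + 0.5
print(f"dx = {dx:.3e} m, dt = {dt:.3e} s, nu_lat = {nu_lat:.3e}, tau = {tau:.4f}")

With these particular numbers τ comes out barely above 0.5, which is precisely the situation in which, as noted above, the lattice Mach number and viscosity are often raised artificially while the Reynolds number is kept fixed.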
Simulation of mixtures
Simulating multiphase/multicomponent flows has always been a challenge to conventional CFD because of the moving and deformable interfaces. More fundamentally, the interfaces between different phases (liquid and vapor) or components (e.g., oil and water) originate from the specific interactions among fluid molecules. Therefore, it is difficult to implement such microscopic interactions into the macroscopic Navier–Stokes equation. However, in LBM, the particulate kinetics provides a relatively easy and consistent way to incorporate the underlying microscopic interactions by modifying the collision operator. Several LBM multiphase/multicomponent models have been developed. Here phase separations are generated automatically from the particle dynamics and no special treatment is needed to manipulate the interfaces as in traditional CFD methods. Successful applications of multiphase/multicomponent LBM models can be found in various complex fluid systems, including interface instability, bubble/droplet dynamics, wetting on solid surfaces, interfacial slip, and droplet electrohydrodynamic deformations.
A lattice Boltzmann model for simulation of gas mixture combustion capable of accommodating significant density variations at low-Mach number regime has been recently proposed.
In this respect, it is worth noting that, since LBM deals with a larger set of fields than conventional CFD, the simulation of reactive gas mixtures presents additional challenges in terms of memory demand where large detailed combustion mechanisms are concerned. Those issues may be addressed, though, by resorting to systematic model reduction techniques.
Thermal lattice-Boltzmann method
Currently (2009), a thermal lattice-Boltzmann method (TLBM) falls into one of three categories: the multi-speed approach, the passive scalar approach, and the thermal energy distribution.
Derivation of Navier–Stokes equation from discrete LBE
Starting with the discrete lattice Boltzmann equation (also referred to as LBGK equation due to the collision operator used). We first do a 2nd-order Taylor series expansion about the left side of the LBE. This is chosen over a simpler 1st-order Taylor expansion as the discrete LBE cannot be recovered. When doing the 2nd-order Taylor series expansion, the zero derivative term and the first term on the right will cancel, leaving only the first and second derivative terms of the Taylor expansion and the collision operator:
For simplicity, write as . The slightly simplified Taylor series expansion is then as follows, where ":" is the colon product between dyads:
By expanding the particle distribution function into equilibrium and non-equilibrium components and using the Chapman-Enskog expansion, where is the Knudsen number, the Taylor-expanded LBE can be decomposed into different magnitudes of order for the Knudsen number in order to obtain the proper continuum equations:
The equilibrium and non-equilibrium distributions satisfy the following relations to their macroscopic variables (these will be used later, once the particle distributions are in the "correct form" in order to scale from the particle to macroscopic level):
The Chapman-Enskog expansion is then:
By substituting the expanded equilibrium and non-equilibrium into the Taylor expansion and separating into different orders of , the continuum equations are nearly derived.
For order :
For order :
Then, the second equation can be simplified with some algebra and the first equation into the following:
Applying the relations between the particle distribution functions and the macroscopic properties from above, the mass and momentum equations are achieved:
The momentum flux tensor has the following form then:
where is shorthand for the square of the sum of all the components of (i. e. ), and the equilibrium particle distribution with second order to be comparable to the Navier–Stokes equation is:
The equilibrium distribution is only valid for small velocities or small Mach numbers. Inserting the equilibrium distribution back into the flux tensor leads to:
Finally, the Navier–Stokes equation is recovered under the assumption that density variation is small:
This derivation follows the work of Chen and Doolen.
Mathematical equations for simulations
The continuous Boltzmann equation is an evolution equation for a single particle probability distribution function and the internal energy density distribution function (He et al.) are each respectively:
where is related to by
is an external force, is a collision integral, and (also labeled by in literature) is the microscopic velocity. The external force is related to temperature external force by the relation below. A typical test for one's model is the Rayleigh–Bénard convection for .
Macroscopic variables such as density , velocity , and temperature can be calculated as the moments of the density distribution function:
The lattice Boltzmann method discretizes this equation by limiting space to a lattice and the velocity space to a discrete set of microscopic velocities (i. e. ). The microscopic velocities in D2Q9, D3Q15, and D3Q19 for example are given as:
The single-phase discretized Boltzmann equation for mass density and internal energy density are:
The collision operator is often approximated by a BGK collision operator under the condition it also satisfies the conservation laws:
In the collision operator is the discrete, equilibrium particle probability distribution function. In D2Q9 and D3Q19, it is shown below for an incompressible flow in continuous and discrete form where D, R, and T are the dimension, universal gas constant, and absolute temperature respectively. The partial derivation for the continuous to discrete form is provided through a simple derivation to second order accuracy.
Letting yields the final result:
As much work has already been done on a single-component flow, the following TLBM will be discussed. The multicomponent/multiphase TLBM is also more intriguing and useful than simply one component. To be in line with current research, define the set of all components of the system (i. e. walls of porous media, multiple fluids/gases, etc.) with elements .
The relaxation parameter, τ, is related to the kinematic viscosity, ν, by the following relationship:
ν = c_s² (τ − 1/2) (in lattice units where δt = 1).
The moments of the give the local conserved quantities. The density is given by
and the weighted average velocity, , and the local momentum are given by
In the above equation for the equilibrium velocity , the term is the interaction force between a component and the other components. It is still the subject of much discussion as it is typically a tuning parameter that determines how fluid-fluid, fluid-gas, etc. interact. Frank et al. list current models for this force term. The commonly used derivations are Gunstensen chromodynamic model, Swift's free energy-based approach for both liquid/vapor systems and binary fluids, He's intermolecular interaction-based model, the Inamuro approach, and the Lee and Lin approach.
The following is the general description for as given by several authors.
is the effective mass and is Green's function representing the interparticle interaction with as the neighboring site. Satisfying and where represents repulsive forces. For D2Q9 and D3Q19, this leads to
The effective mass as proposed by Shan and Chen uses the following effective mass for a single-component, multiphase system. The equation of state is also given under the condition of a single component and multiphase.
So far, it appears that and are free constants to tune but once plugged into the system's equation of state(EOS), they must satisfy the thermodynamic relationships at the critical point such that and . For the EOS, is 3.0 for D2Q9 and D3Q19 while it equals 10.0 for D3Q15.
It was later shown by Yuan and Schaefer that the effective mass density needs to be changed to simulate multiphase flow more accurately. They compared the Shan and Chen (SC), Carnahan-Starling (C–S), van der Waals (vdW), Redlich–Kwong (R–K), Redlich–Kwong Soave (RKS), and Peng–Robinson (P–R) EOS. Their results revealed that the SC EOS was insufficient and that C–S, P–R, R–K, and RKS EOS are all more accurate in modeling multiphase flow of a single component.
For the popular isothermal Lattice Boltzmann methods these are the only conserved quantities. Thermal models also conserve energy and therefore have an additional conserved quantity:
Unstructured grids
Normally, the lattice Boltzmann method is implemented on regular grids. However, the use of unstructured grids can help with solving complex boundaries; such grids are made of triangles or tetrahedra, with variations.
Assuming is a volume made by all barycenters of tetrahedra, faces and edges connected to vertex , the discrete velocity density function:
where are position of a vertex and its neighbors, and:
where the coefficients are the weights of a linear interpolation by the vertices of the triangle or tetrahedron that the point lies within.
Applications
During the last years, the LBM has proven to be a powerful tool for solving problems at different length and time scales.
Some of the applications of LBM include:
Porous Media flows
Biomedical Flows
Earth sciences (Soil filtration).
Energy Sciences (Fuel Cells).
Example Implementation
This is a barebone implementation of LBM on a 100x100 grid, using Python:
#This is a fluid simulator using the lattice Boltzmann method.
#Using D2Q9 and periodic boundaries, and no external library.
#It generates two ripples at 50,50 and 50,40.
#Reference: Erlend Magnus Viggen's Master thesis, "The Lattice Boltzmann Method with Applications in Acoustics".
#For Wikipedia under CC-BY-SA license.
import math

#Define some utilities
def sum(a):
    s=0
    for e in a:
        s=s+e
    return s

#Weights in D2Q9
Weights=[1/36, 1/9, 1/36,
         1/9,  4/9, 1/9,
         1/36, 1/9, 1/36]
#Discrete velocity vectors
DiscreteVelocityVectors=[[-1,1],[0,1],[1,1],
                         [-1,0],[0,0],[1,0],
                         [-1,-1],[0,-1],[1,-1]]

#A Field2D class
class Field2D():
    def __init__(self,res : int):
        self.field=[]
        for b in range(res):
            fm=[]
            for a in range(res):
                fm.append([0,0,0,
                           0,1,0,
                           0,0,0])
            self.field.append(fm[:])
        self.res = res
    #This visualizes the simulation; it can only be used in a terminal
    @staticmethod
    def VisualizeField(a,sc,res):
        stringr=""
        for u in range(res):
            row=""
            for v in range(res):
                n=int(u*a.res/res)
                x=int(v*a.res/res)
                flowmomentem=a.Momentum(n,x)
                col="\033[38;2;{0};{1};{2}m██".format(int(127+sc*flowmomentem[0]),int(127+sc*flowmomentem[1]),0)
                row=row+col
            print(row)
            stringr=stringr+row+"\n"
        return stringr
    #Momentum of the field
    def Momentum(self,x,y):
        return velocityField[y][x][0]*sum(self.field[y][x]),velocityField[y][x][1]*sum(self.field[y][x])

#Resolution of the simulation
res=100
a=Field2D(res)
#The velocity field
velocityField=[]
for DummyVariable in range(res):
    DummyList=[]
    for DummyVariable2 in range(res):
        DummyList.append([0,0])
    velocityField.append(DummyList[:])
#The density field
DensityField=[]
for DummyVariable in range(res):
    DummyList=[]
    for DummyVariable2 in range(res):
        DummyList.append(1)
    DensityField.append(DummyList[:])
#Set initial condition
DensityField[50][50]=2
DensityField[40][50]=2
#Maximum solving steps
MaxSteps = 120
#The speed of sound, specifically 1/sqrt(3) ~ 0.57
SpeedOfSound=1/math.sqrt(3)
#Time relaxation constant
TimeRelaxationConstant=0.5
#Solve
for s in range(MaxSteps):
    #Collision step
    df=Field2D(res)
    for y in range(res):
        for x in range(res):
            for v in range(9):
                Velocity=a.field[y][x][v]
                FirstTerm=Velocity
                #The flow velocity
                FlowVelocity=velocityField[y][x]
                Dotted=FlowVelocity[0]*DiscreteVelocityVectors[v][0]+FlowVelocity[1]*DiscreteVelocityVectors[v][1]
                #The Taylor expansion of the equilibrium term
                taylor=1+((Dotted)/(SpeedOfSound**2))+((Dotted**2)/(2*SpeedOfSound**4))-((FlowVelocity[0]**2+FlowVelocity[1]**2)/(2*SpeedOfSound**2))
                #The current density
                density=DensityField[y][x]
                #The equilibrium
                equilibrium=density*taylor*Weights[v]
                SecondTerm=(equilibrium-Velocity)/TimeRelaxationConstant
                df.field[y][x][v]=FirstTerm+SecondTerm
    #Streaming step
    for y in range(0,res):
        for x in range(0,res):
            for v in range(9):
                #Target, the lattice point this iteration is solving
                TargetY=y+DiscreteVelocityVectors[v][1]
                TargetX=x+DiscreteVelocityVectors[v][0]
                #Periodic boundary
                if TargetY == res and TargetX == res:
                    a.field[TargetY-res][TargetX-res][v]=df.field[y][x][v]
                elif TargetX == res:
                    a.field[TargetY][TargetX-res][v]=df.field[y][x][v]
                elif TargetY == res:
                    a.field[TargetY-res][TargetX][v]=df.field[y][x][v]
                elif TargetY == -1 and TargetX == -1:
                    a.field[TargetY+res][TargetX+res][v]=df.field[y][x][v]
                elif TargetX == -1:
                    a.field[TargetY][TargetX+res][v]=df.field[y][x][v]
                elif TargetY == -1:
                    a.field[TargetY+res][TargetX][v]=df.field[y][x][v]
                else:
                    a.field[TargetY][TargetX][v]=df.field[y][x][v]
    #Calculate macroscopic variables
    for y in range(res):
        for x in range(res):
            #Recompute density field
            DensityField[y][x]=sum(a.field[y][x])
            #Recompute flow velocity
            FlowVelocity=[0,0]
            for DummyVariable in range(9):
                FlowVelocity[0]=FlowVelocity[0]+DiscreteVelocityVectors[DummyVariable][0]*a.field[y][x][DummyVariable]
            for DummyVariable in range(9):
                FlowVelocity[1]=FlowVelocity[1]+DiscreteVelocityVectors[DummyVariable][1]*a.field[y][x][DummyVariable]
            FlowVelocity[0]=FlowVelocity[0]/DensityField[y][x]
            FlowVelocity[1]=FlowVelocity[1]/DensityField[y][x]
            #Insert into the velocity field
            velocityField[y][x]=FlowVelocity
    #Visualize
    Field2D.VisualizeField(a,128,100)
External links
LBM Method
Entropic Lattice Boltzmann Method (ELBM)
dsfd.org: Website of the annual DSFD conference series (1986 -- now) where advances in theory and application of the lattice Boltzmann method are discussed
Website of the annual ICMMES conference on lattice Boltzmann methods and their applications
Further reading
Notes
Computational fluid dynamics
Lattice models | Lattice Boltzmann methods | [
"Physics",
"Chemistry",
"Materials_science"
] | 5,622 | [
"Computational fluid dynamics",
"Lattice models",
"Computational physics",
"Condensed matter physics",
"Statistical mechanics",
"Fluid dynamics"
] |
5,074,851 | https://en.wikipedia.org/wiki/Level%20of%20free%20convection | The level of free convection (LFC) is the altitude in the atmosphere where an air parcel lifted adiabatically until saturation becomes warmer than the environment at the same level, so that positive buoyancy can initiate self-sustained convection.
Finding the LFC
The usual way of finding the LFC is to lift a parcel from a lower level along the dry adiabatic lapse rate until it crosses the saturated mixing ratio line of the parcel: this is the lifted condensation level (LCL). From there on, follow the moist adiabatic lapse rate upward; the level at which the parcel temperature first reaches and then exceeds the environmental (air mass) temperature is the LFC. On further lift the parcel remains warmer than the environment until it again becomes colder at the equilibrium level (EL).
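A minimal numerical sketch of this procedure is given below. It assumes the lifted-parcel temperature profile (dry adiabat up to the LCL, moist adiabat above it) has already been computed, for example with a thermodynamics routine, and it simply searches the sounding for the first level at or above the LCL where the parcel is warmer than the environment. The pressure levels, temperatures and LCL index are made-up placeholder values, not a real sounding.

# Minimal sketch: locate the LFC on a sounding, given environmental and
# lifted-parcel temperatures on the same pressure levels (surface upward).
# All numbers are illustrative placeholders, not real data.
pressure  = [1000, 950, 900, 850, 800, 750, 700, 650, 600]          # hPa
t_env     = [30.0, 26.0, 22.5, 19.5, 16.0, 12.0, 8.0, 4.5, 1.0]     # deg C, environment
t_parcel  = [30.0, 25.5, 21.5, 18.5, 16.5, 13.5, 10.0, 6.0, 1.0]    # deg C, lifted parcel
lcl_index = 2   # index of the LCL level, assumed known from the lifting step

def find_lfc(p, t_environment, t_lifted, start_index):
    """Return the pressure of the first level at or above the LCL where the
    lifted parcel is warmer than the environment, or None if there is none."""
    for i in range(start_index, len(p)):
        if t_lifted[i] > t_environment[i]:
            return p[i]
    return None

lfc = find_lfc(pressure, t_env, t_parcel, lcl_index)
if lfc is None:
    print("No LFC: the lifted parcel never becomes warmer than the environment")
else:
    print("LFC found near", lfc, "hPa")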
Use
Since, by the ideal gas law (PV = nRT), the warmer parcel occupies a larger volume than the surrounding air above the LFC, it is less dense and therefore buoyant; it keeps rising until its temperature again equals that of the surrounding air mass, at the EL. If the air mass has one or more LFCs, it is potentially unstable and may lead to convective clouds such as cumulus and thunderstorms.
From the level of free convection to the point where the ascending parcel again becomes colder than its surroundings, the equilibrium level (EL), an air parcel gains kinetic energy; the amount of energy available is given by its convective available potential energy (CAPE), which indicates the potential for severe weather.
References
See also
Atmospheric convection
Atmospheric thermodynamics
Atmospheric thermodynamics
Meteorological quantities
Severe weather and convection | Level of free convection | [
"Physics",
"Chemistry",
"Mathematics"
] | 329 | [
"Thermodynamics stubs",
"Physical quantities",
"Quantity",
"Meteorological quantities",
"Thermodynamics",
"Physical chemistry stubs"
] |
5,075,270 | https://en.wikipedia.org/wiki/Pseudonormal%20space | In mathematics, in the field of topology, a topological space is said to be pseudonormal if given two disjoint closed sets in it, one of which is countable, there are disjoint open sets containing them. Note the following:
Every normal space is pseudonormal.
Every pseudonormal space is regular.
An example of a pseudonormal Moore space that is not metrizable was given by , in connection with the conjecture that all normal Moore spaces are metrizable.
References
Topology
Properties of topological spaces | Pseudonormal space | [
"Physics",
"Mathematics"
] | 111 | [
"Properties of topological spaces",
"Space (mathematics)",
"Topological spaces",
"Topology stubs",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
5,075,551 | https://en.wikipedia.org/wiki/Perfect%20set | In general topology, a subset of a topological space is perfect if it is closed and has no isolated points. Equivalently: the set is perfect if , where denotes the set of all limit points of , also known as the derived set of . (Some authors do not consider the empty set to be perfect.)
In a perfect set, every point can be approximated arbitrarily well by other points from the set: given any point of S and any neighborhood of the point, there is another point of S that lies within the neighborhood. Furthermore, any point of the space that can be so approximated by points of S belongs to S.
Note that the term perfect space is also used, incompatibly, to refer to other properties of a topological space, such as being a Gδ space. As another possible source of confusion, also note that having the perfect set property is not the same as being a perfect set.
Examples
Examples of perfect subsets of the real line are the empty set, all closed intervals, the real line itself, and the Cantor set. The latter is noteworthy in that it is totally disconnected.
Whether a set is perfect or not (and whether it is closed or not) depends on the surrounding space. For instance, the set is perfect as a subset of the space but not perfect as a subset of the space , since it fails to be closed in the latter.
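For a generic illustration of these definitions in the real line (chosen for illustration only, and not necessarily the example originally intended above), a short LaTeX-style sketch:

% Illustrative example of perfect versus non-perfect closed subsets of the reals.
\[
  S \;=\; \{0\} \cup \left\{ \tfrac{1}{n} : n \in \mathbb{N} \right\} \subseteq \mathbb{R}
\]
% S is closed in \mathbb{R}, but every point 1/n is isolated, so the derived set is
% S' = \{0\} \neq S and S is not perfect. By contrast, the closed interval [0,1]
% satisfies [0,1]' = [0,1] and is therefore a perfect subset of \mathbb{R}.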
Connection with other topological properties
Every topological space can be written in a unique way as the disjoint union of a perfect set and a scattered set.
Cantor proved that every closed subset of the real line can be uniquely written as the disjoint union of a perfect set and a countable set. This is also true more generally for all closed subsets of Polish spaces, in which case the theorem is known as the Cantor–Bendixson theorem.
Cantor also showed that every non-empty perfect subset of the real line has cardinality 2^ℵ0, the cardinality of the continuum. These results are extended in descriptive set theory as follows:
If X is a complete metric space with no isolated points, then the Cantor space 2ω can be continuously embedded into X. Thus X has cardinality at least 2^ℵ0. If X is a separable, complete metric space with no isolated points, the cardinality of X is exactly 2^ℵ0.
If X is a locally compact Hausdorff space with no isolated points, there is an injective function (not necessarily continuous) from Cantor space to X, and so X has cardinality at least 2^ℵ0.
See also
Dense-in-itself
Finite intersection property
Subspace topology
Notes
References
Topology
Properties of topological spaces | Perfect set | [
"Physics",
"Mathematics"
] | 529 | [
"Properties of topological spaces",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
5,076,475 | https://en.wikipedia.org/wiki/Kassel%20kerb | A Kassel kerb is a design of kerb (curb in US English) that features a concave-section that allows for an easier alignment for buses. The kerb was first introduced in the German city of Kassel for the low-floor tram system but has since been adopted for use at traffic stops. Kassel kerbs can be part of a bus stop kerb, designed for low-floor buses that serve an elevated bus stop platform.
History
Low-floor bus development
The invention of special kerbs for low-floor buses is connected with the introduction of low-floor buses and modern low-floor trams in the late 20th century. The German NEOPLAN Bus GmbH had designed the first bus with a "low-entry section" in 1976 but it was not accepted well in the market.
From the 1980s, the Association of German Transport Companies invested in the design of a new standard bus; the second-generation Neoplan N 416 of 1982 found wider acceptance.
Shortly later, MAN's competitor Daimler was designing the Mercedes-Benz O405 in 1984 to fit with the new Standard-Bus requirements, and this model spread quickly in the market in the late 1980s.
Based on the Standard-Bus model, a number of variants were developed by their respective manufacturers – here, it was the Kässbohrer Setra S 300 NC to show the first a low-floor version in 1987 that was sold since 1989.
Daimler began to derive the low-floor version of its successful model in its Mercedes-Benz O 405 N, that was produced since late 1989, and which proved to be of a robust design in the following years, leading into rising production numbers. Consequently, Neoplan again developed low-floor versions of their Standard-Buses, named Neoplan N4014, N4015, N4016 NF with production starting in 1990.
Accessibility concepts
With more low-floor buses being introduced to public transport in Germany in the late 1980s, it sparked ideas to optimize accessibility. The introduction of low-floor buses had reduced the number of steps from two or three to one, but the remaining step was a barrier to wheelchairs. A simple elevation of the bus platform is not enough, as there is often a gap too wide for wheels to traverse.
The parallel introduction of low-floor trams showed, that with proper horizontal alignment, the gap can be small enough to be barrier-free. After the first low floor trams in Geneva of 1987, the city of Bremen asked MAN to develop a low floor tram. The resulting prototypes of ADtranz GT6N were delivered in 1990 and mass production started in 1992 with the first batches entering service in Berlin, Bremen and Munich in the following years. The experiences with that first generation sparked interest to take further advantage of the low floor designs.
The introduction of barrier-free concepts into bus transport systems in the 1990s was successful up to the point, that the German transport companies stopped ordering high-floor designs by 1998 and eventually MAN and Daimler stopped producing high-floor city buses in Europe by 2001—public (city) transport companies no longer wanted such designs.
While the first special bus stop kerbs were using the Kassel Sonderbord, other kerb manufacturers followed the model by offering kerbs that optimize vertical and horizontal alignment for low-floor buses.
Kassel Sonderbord
In 1996, the DIN, the German Institute for Standardization, issued DIN 18024 part 1 ("Barrierefreies Bauen – Teil 1: Straßen, Plätze, Wege, öffentliche Verkehrs- und Grünanlagen sowie Spielplätze; Planungsgrundlagen" / Barrier-Free Design – Part 1: Streets, Places, Roads and Recreational Areas; Planning Basics), updated in 1998. Kassel had been at the forefront, performing tests with low-floor buses as early as 1992. A simple increase in the bus platform height showed problems with wear on the bus tyres, and the planning department of the Kassel public transport company began to assemble ideas for a "special curb" for their bus stops in 1994. A manufacturer was found in Gensungen, south of Kassel, with their patented kerb (EP0544202/1993). After its termination, manufacturing was taken over by a company in Borken, Hesse (also south of Kassel). By 2001 about 16% of bus stops in Kassel had been converted to the "Kasseler Sonderbord".
The kerb guides the tyre of the stopping bus, improving the alignment of the doors with the kerb and slightly raised boarding platform. As the tyre rides up the concave surface, gravity pulls it back down and steers the bus into alignment.
The kerb has become a common part of contemporary bus stop design, and the provisions of DIN 18024-1 were proposed in 2010 to become a section of DIN 18070 („Öffentlicher Verkehrs- und Freiraum“, or Public Transport and Open Spaces).
Dresden Combibord
The "" kerb is a parallel development, derived from the elevated sidewalks used for low-floor trams in Dresden, Germany. Its development started during the introduction of the first low-floor trams (mode Gelenktriebwagen NGT6DD during 1995–1998) and the Combibord patent was granted in July 1997 (DE 19730055). The round section allows buses to align to the tram platform in a similar way as the trams for level entry.
The Dresden public transport company gives the following reference data:
minimum platform height at tram door: 230 mm
minimum platform height at bus door: 180 mm
maximum remaining entry height from platform to tram: 50 mm
maximum remaining entry height from platform to bus: 80 mm
maximum remaining gap between platform and tram/bus: 50 mm
on dedicated tram stop platforms, accessibility from public sidewalks is ensured with a maximum ramp elevation of 30 mm and an incline below 6%
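The reference data above amount to simple pass/fail constraints on stop geometry. The Python sketch below is purely illustrative (the function, constants and example numbers are mine, not from the Dresden operator) and shows the bus-side check for a Combibord-style platform.

```python
# Illustrative sketch (not from the source): checking a platform/vehicle
# combination against the Dresden reference limits quoted above.
# All values are in millimetres; bus-side limits only.
MIN_PLATFORM_HEIGHT_BUS = 180   # minimum platform height at the bus door
MAX_REMAINING_STEP_BUS = 80     # maximum remaining entry height up into the bus
MAX_REMAINING_GAP = 50          # maximum remaining gap between platform and bus

def bus_stop_is_within_limits(platform_height, bus_floor_height, horizontal_gap):
    """Return True if the stop geometry satisfies the quoted reference data."""
    remaining_step = bus_floor_height - platform_height
    return (platform_height >= MIN_PLATFORM_HEIGHT_BUS
            and remaining_step <= MAX_REMAINING_STEP_BUS
            and horizontal_gap <= MAX_REMAINING_GAP)

# Example: a 180 mm platform, a bus floor at 250 mm and a 40 mm gap pass the check.
print(bus_stop_is_within_limits(180, 250, 40))  # True
```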
Sonderbord Plus
The original Kassel kerb had been designed for a height of 16 cm above the street surface, on the assumption that a higher kerb could lead to collisions between the bus and the kerb, and that outward-opening swing doors could be damaged. However, the remaining vertical gap turned out to be an obstacle for wheelchair users, who often had to ask for help or for a ramp to be deployed by the bus driver.
The United Nations Convention on the Rights of Persons with Disabilities came into force in Germany in 2009, which led to a reevaluation of the bus kerbs. The original manufacturer, Profilbeton GmbH, derived the "Sonderbord Plus", designed for a height of 22 cm above the street surface. The Hamburger Verkehrsverbund GmbH built a test station in 2013 with a full length of 39.75 m. It included an 8 m approach section, which reached a 16 cm kerb height after 1.5 m, followed by a 14 m section for the boarding area with a 22 cm kerb. A number of simulations showed that damage is possible but avoidable: no problems occurred when the bus approached within the 16 cm section, or when a steep approach was driven with cautious braking to avoid rolling movements of the bus body; only departing while steering hard to the left could make the rear of the bus touch the kerb. As a result, the Sonderbord Plus came into regular use in Kassel and Hamburg, and Berlin adopted it in 2018 for new bus stops.
Variants
The Erfurt Busbord kerb, deployed since 2007, has a height of 240 mm (the kerb in Kassel has been 180 mm).
The Berlin Combibord kerb is 210 mm above rail (the kerb in Dresden is 240 mm above rail).
Sonderbord Plus, at 220 mm, is replacing older bus kerbs in Kassel, Hamburg and Berlin.
New tram generations such as the Flexity Wien and the second-generation Flexity Berlin lower the entry height to match Combibord kerbs just above 200 mm; for these, the nominal remaining vertical gap is to be kept below 30 mm.
References
External links
Accessible building
Accessible transportation
Assistive technology
Bus terminology
Pedestrian infrastructure
Road infrastructure | Kassel kerb | [
"Physics",
"Engineering"
] | 1,694 | [
"Accessible transportation",
"Accessible building",
"Physical systems",
"Transport",
"Architecture"
] |
35,523,510 | https://en.wikipedia.org/wiki/Distrontium%20ruthenate | Distrontium ruthenate, also known as strontium ruthenate, is an oxide of strontium and ruthenium with the chemical formula Sr2RuO4. It was the first reported perovskite superconductor that did not contain copper. Strontium ruthenate is structurally very similar to the high-temperature cuprate superconductors, and in particular, is almost identical to the lanthanum doped superconductor (La, Sr)2CuO4. However, the transition temperature for the superconducting phase transition is 0.93 K (about 1.5 K for the best sample), which is much lower than the corresponding value for cuprates.
Superconductivity
Superconductivity in SRO was first observed by Yoshiteru Maeno et al. Unlike the cuprate superconductors, SRO displays superconductivity in the absence of doping. The superconducting order parameter in SRO exhibits signatures of time-reversal symmetry breaking, and hence, it can be classified as an unconventional superconductor.
Sr2RuO4 is believed to be a fairly two-dimensional system, with superconductivity occurring primarily on the Ru-O plane. The electronic structure of Sr2RuO4 is characterized by three bands derived from the Ru t2g 4d orbitals, namely, α, β and γ bands, of which the first is hole-like while the other two are electron-like. Among them, the γ band arises mainly from the dxy orbital, while the α and β bands emerge from the hybridization of dxz and dyz orbitals. Due to the two-dimensionality of Sr2RuO4, its Fermi surface consists of three nearly two-dimensional sheets with little dispersion along the crystalline c-axis; the compound is also nearly magnetic.
Early proposals suggested that superconductivity is dominant in the γ band. In particular, the candidate chiral p-wave order parameter in momentum space exhibits a k-dependent phase winding which is characteristic of time-reversal symmetry breaking. This peculiar single-band superconducting order is expected to give rise to appreciable spontaneous supercurrent at the edge of the sample. Such an effect is closely associated with the topology of the Hamiltonian describing Sr2RuO4 in the superconducting state, which is characterized by a nonzero Chern number. However, scanning probes have so far failed to detect the time-reversal symmetry breaking fields expected to be generated by the supercurrent, with measured bounds orders of magnitude below predictions. This has led some to speculate that superconductivity arises dominantly from the α and β bands instead. Such a two-band superconductor, although having a k-dependent phase winding in its order parameters on the two relevant bands, is topologically trivial with the two bands featuring opposite Chern numbers. Therefore, it could possibly give a much reduced if not completely cancelled supercurrent at the edge. However, this naive reasoning was later found not to be entirely correct: the magnitude of the edge current is not directly related to the topological property of the chiral state. In particular, although the non-trivial topology is expected to give rise to protected chiral edge states, due to U(1) symmetry breaking the edge current is not a protected quantity. In fact, it has been shown that the edge current vanishes identically for any higher angular momentum chiral pairing states which feature even larger Chern numbers, such as chiral d-wave, f-wave, etc.
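For orientation, the chiral p-wave state referred to above is conventionally written (standard notation, not quoted from this article) with a d-vector of the form

\mathbf{d}(\mathbf{k}) = \Delta_0\,\hat{\mathbf{z}}\,(k_x \pm i\,k_y), \qquad \Delta(\mathbf{k}) \propto k_x \pm i\,k_y = |\mathbf{k}_{\parallel}|\,e^{\pm i\varphi_{\mathbf{k}}},

so the gap phase winds by ±2π as k encircles the Fermi surface; this winding is the time-reversal-symmetry-breaking feature that gives the state its nonzero Chern number.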
Tc seems to increase under uniaxial compression that pushes the van Hove singularity of the dxy orbital across the Fermi level.
Evidence was reported for a spin-singlet pairing state, as in cuprates and conventional superconductors, instead of the conjectured, more unconventional p-wave triplet state. It has also been suggested that strontium ruthenate superconductivity could be due to a Fulde–Ferrell–Larkin–Ovchinnikov phase.
Strontium ruthenate behaves as a conventional Fermi liquid at temperatures below 25 K.
In 2023, a team of researchers from the University of Illinois Urbana-Champaign confirmed the 67-year-old prediction of Pines' demon excitation in Sr2RuO4.
See also
Uranium ditelluride
Dicalcium ruthenate
References
Further reading
Strontium compounds
Ruthenates
Transition metal oxides
Superconductors
Perovskites | Distrontium ruthenate | [
"Chemistry",
"Materials_science"
] | 945 | [
"Superconductivity",
"Superconductors"
] |
35,525,984 | https://en.wikipedia.org/wiki/Media%20Object%20Server | The Media Object Server (MOS) protocol allows newsroom computer systems (NRCS) to communicate using a standard protocol with video servers, audio servers, still stores, and character generators for broadcast production.
The MOS protocol is based on XML. It enables the exchange of the following types of messages:
Descriptive Data for Media Objects. The MOS "pushes" descriptive information and pointers to the NRCS as objects are created, modified, or deleted in the MOS. This allows the NRCS to be "aware" of the contents of the MOS and enables the NRCS to perform searches on and manipulate the data the MOS has sent.
Playlist Exchange. The NRCS can build and transfer playlist information to the MOS. This allows the NRCS to control the sequence that media objects are played or presented by the MOS.
Status Exchange. The MOS can inform the NRCS of the status of specific clips or the MOS system in general. The NRCS can notify the MOS of the status of specific playlist items or running orders.
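As an illustration of the XML basis of the message types listed above, the following Python sketch builds the skeleton of a playlist-style message using only the standard library. The element names and identifiers are indicative of common MOS usage but are assumptions here; a real implementation should follow the official MOS schema for the protocol version in use.

```python
# Indicative sketch only: element names/IDs follow common MOS usage but are
# not guaranteed to match any particular protocol version or vendor schema.
import xml.etree.ElementTree as ET

mos = ET.Element("mos")
ET.SubElement(mos, "mosID").text = "videoserver.example.mos"   # the media object server
ET.SubElement(mos, "ncsID").text = "newsroom.example.ncs"      # the newsroom computer system
ET.SubElement(mos, "messageID").text = "1001"

ro = ET.SubElement(mos, "roCreate")                            # a running-order (playlist) message
ET.SubElement(ro, "roID").text = "RO-2024-06-01-18H"
ET.SubElement(ro, "roSlug").text = "Evening bulletin"

print(ET.tostring(mos, encoding="unicode"))
```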
MOS was developed to reduce the need for the development of device specific drivers. By allowing developers to embed functionality and handle events, vendors were relieved of the burden of developing device drivers. It was left to the manufacturers to interface newsroom computer systems. This approach affords broadcasters flexibility to purchase equipment from multiple vendors. It also limits the need to have operators in multiple locations throughout the studio as, for example, multiple character generators (CG) can be fired from a single control workstation, without needing an operator at each CG console.
MOS enables journalists to see, use, and control media devices inside Associated Press's ENPS system so that individual pieces of newsroom production technology speak a common XML-based language.
History of MOS
The first meeting of the MOS protocol development group occurred at the Associated Press ENPS developer's conference in Orlando, Florida in 1998. The fundamental concepts of MOS were released to the public domain at that conference.
As an open protocol, the MOS Development Group encourages the participation of broadcast equipment vendors and their customers. More than 100 companies are said to work with AP on MOS-related projects. Compatible hardware and software includes video editing, storage and management; automation; machine control; prompters; character generators; audio editing, store and management; web publishing, interactive TV, field transmission and graphics.
Current development is happening on two tracks: a socket-based version, and a web service version. The current official versions of the MOS protocol, as of January 2011, are 2.8.4 (sockets) and 3.8.4 (web service).
In 2016 proposals began to introduce IP video support in the MOS protocol. This proposal allows representations of live IP video sources such as NDI (Network Device Interface) to be included as MOS objects alongside MOS objects representing files to be played from disk.
There is also a Java-based implementation called jmos that is currently compatible with MOS specification 2.8.2.
An open source TypeScript (dialect of JavaScript) MOS connector and MOS Gateway is being actively developed by the Norwegian state broadcaster NRK, as part of their open-source Sofie broadcast automation software initiative.
An open source Python library and command line tool called mosromgr was developed by the BBC. The mosromgr library provides functionality for classifying MOS file types, processing and inspecting MOS message files, as well as merging a batch of MOS files into a complete running order.
In 2017 the National Academy of Television Arts and Sciences awarded an Emmy to the MOS Group for "Development and Standardization of Media Object Server (MOS) Protocol."
References
Broadcast engineering
Multimedia
Servers (computing)
Television technology
Television terminology
Video storage | Media Object Server | [
"Technology",
"Engineering"
] | 776 | [
"Information and communications technology",
"Broadcast engineering",
"Television technology",
"Electronic engineering",
"Multimedia"
] |
35,527,030 | https://en.wikipedia.org/wiki/Plate%20%28structure%29 | A plate is a structural element which is characterized by a three-dimensional solid whose thickness is very small when compared with other dimensions.
The loads expected to be applied to it generate, in practical terms, only stresses whose resultants are normal to the element's thickness. The mechanics of plates are the main subject of plate theory.
Thin plates are initially flat structural members bounded by two parallel planes, called faces, and a cylindrical surface, called an edge or boundary. The generators of the cylindrical surface are perpendicular to the plane faces. The distance between the plane faces is called the thickness (h) of the plate. It will be assumed that the plate thickness is small compared with other characteristic dimensions of the faces (length, width, diameter, etc.). Geometrically, plates are bounded either by straight or curved boundaries. The static or dynamic loads carried by plates are predominantly perpendicular to the plate faces.
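As a brief pointer to how the thinness assumption enters the analysis, classical Kirchhoff–Love plate theory (a standard textbook result, not specific to this article) characterises the bending response of a thin plate of thickness h by its flexural rigidity

D = \frac{E\,h^{3}}{12\,(1-\nu^{2})},

where E is the Young's modulus and ν the Poisson's ratio of the plate material; the cubic dependence on h is why even modest changes in thickness strongly affect bending stiffness.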
See also
Shell
Bending of plates
References
Stephen Timoshenko & S. Woinowsky-Krieger (1940,59) Theory of Plates and Shells, McGraw-Hill Book Company.
Solid mechanics
Structural system | Plate (structure) | [
"Physics",
"Technology",
"Engineering"
] | 231 | [
"Structural engineering",
"Solid mechanics",
"Building engineering",
"Structural system",
"Civil engineering",
"Mechanics",
"Civil engineering stubs"
] |
35,527,085 | https://en.wikipedia.org/wiki/Shell%20%28structure%29 | A shell is a three-dimensional solid structural element whose thickness is very small compared to its other dimensions. It is characterized in structural terms by mid-plane stress which is both coplanar and normal to the surface. A shell can be derived from a plate in two steps: by initially forming the middle surface as a singly or doubly curved surface, then by applying loads which are coplanar to the plate's plane thus generating significant stresses.
Materials range from concrete (a concrete shell) to fabric (as in fabric structures).
Thin-shell structures (also called plate and shell structures) are lightweight constructions using shell elements. These elements, typically curved, are assembled to make large structures. Typical applications include aircraft fuselages, boat hulls, and the roofs of large buildings.
Definition
A thin shell is defined as a shell with a thickness which is small compared to its other dimensions and in which deformations are not large compared to thickness. A primary difference between a shell structure and a plate structure is that, in the unstressed state, the shell structure has curvature as opposed to the plate structure, which is flat. Membrane action in a shell is primarily caused by in-plane forces (plane stress), but there may be secondary forces resulting from flexural deformations. Where a flat plate acts similarly to a beam, with bending and shear stresses, shells are analogous to a cable, which resists loads through tensile stresses. The ideal thin shell must be capable of developing both tension and compression.
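A standard textbook illustration of membrane action (not taken from this article) is a thin spherical shell of radius r and wall thickness t under internal pressure p, which carries the load entirely through in-plane stress

\sigma = \frac{p\,r}{2\,t},

with negligible bending, provided the shell is free to deform and the thinness assumption t ≪ r holds.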
Types
The most popular types of thin-shell structures are:
Concrete shell structures, often cast as a monolithic dome or stressed ribbon bridge or saddle roof
Lattice shell structures, also called gridshell structures, often in the form of a geodesic dome or a hyperboloid structure
Membrane structures, which include fabric structures and other tensile structures, cable domes, and pneumatic structures.
See also
Monocoque
Diagrid
Stretched grid method
List of thin-shell structures
Persons related:
Félix Candela
Dyckerhoff & Widmann
Wilhelm Flügge
Eugène Freyssinet
Heinz Isler
Pier Luigi Nervi
Plate
Frei Otto
Ernest Edwin Sechler
Vladimir Shukhov
All-Russia Exhibition 1896
Eduardo Torroja
Membrane theory of shells
References
Further reading
External links
Thin-shell structures
Double thin-shells structures
Hypar & Concrete Shells
Past and Future of Grid Shell Structures
Shape optimization of Shell and Spatial structure (PDF)
Lattice Shell for Space Vehicles (PDF)
International Association for Shell and Spatial Structures
Solid mechanics
Structural system | Shell (structure) | [
"Physics",
"Technology",
"Engineering"
] | 513 | [
"Structural engineering",
"Solid mechanics",
"Building engineering",
"Structural system",
"Civil engineering",
"Mechanics",
"Civil engineering stubs"
] |
35,528,199 | https://en.wikipedia.org/wiki/ISCB%20Africa%20ASBCB%20Conference%20on%20Bioinformatics | The ISCB Africa ASBCB Conference on Bioinformatics is a biennial academic conference on the subjects of bioinformatics and computational biology, organized by the African Society for Bioinformatics and Computational Biology (ASBCB). The conference was first held in 2007 as the "ASBCB Conference on the Bioinformatics of African Pathogens, Hosts and Vectors". Since 2009, the conference has been jointly organized with the International Society for Computational Biology (ISCB) and held in different locations within Africa. Although having an evident African focus, the meeting is intended to be a truly international event, encompassing scientists and students from leading institutions in the US, Latin America, Europe and Africa. Holding this event in Africa, ISCB and ASBCB intend to promote local efforts for cooperation and dissemination of leading research techniques to combat major African diseases.
Format of the Meeting
The meeting usually consists of a 3-day conference followed by practical workshops. The main 3-day meeting includes keynote presentations by up to 6 invited speakers from around the world, including Africa. Session Chairs introduce Keynote Speakers with an overview of the session, highlighting the most significant challenges and the current state of the art in the field before the keynote speakers launch their presentations. Highly accomplished researchers, primarily but not exclusively from non-African countries, present during the post-conference tutorial workshops.
Conference Goals
To directly impact existing capacity to develop public health interventions in endemic African countries by driving collaboration, network development and training.
To expose and educate young and established scientists in the latest bioinformatics tools and techniques used in researching treatments and cures relating to African diseases, hosts and vectors.
Scientific publications
Since 2009, the ISCB Africa ASBCB Conference has been partnering with the Genes, Infection and Evolution journal to publish top papers presented at the conference.
List of conferences
References
External links
2011 Conference Website
2009 Conference Website
Bioinformatics
Biology conferences
Computer science conferences
Recurring events established in 2007 | ISCB Africa ASBCB Conference on Bioinformatics | [
"Technology",
"Engineering",
"Biology"
] | 393 | [
"Biological engineering",
"Computer science conferences",
"Computer conference stubs",
"Bioinformatics",
"Computer science",
"Computing stubs"
] |
35,529,156 | https://en.wikipedia.org/wiki/Riccardin%20C | Riccardin C is a macrocyclic bis(bibenzyl). It is a secondary metabolite isolated from the Siberian cowslip subspecies Primula veris subsp. macrocalyx, in Reboulia hemisphaerica and in the Chinese liverwort Plagiochasma intermedium.
In 2005, the compound was prepared by total synthesis together with the strained compound cavicularin.
References
Dihydrostilbenoids
Macrocycles
Cyclophanes
Heterocyclic compounds with 5 rings
Oxygen heterocycles | Riccardin C | [
"Chemistry"
] | 120 | [
"Organic compounds",
"Macrocycles"
] |
35,531,907 | https://en.wikipedia.org/wiki/Flottweg | Flottweg SE is a manufacturer of machines and systems for mechanical liquid-solid separation. The headquarters is located in Vilsbiburg (Bavaria), Germany. The company develops and produces decanter centrifuges, separators and belt presses. Flottweg has subsidiaries and branch offices with service centers in the United States (Flottweg Separation Technology, Inc.), People's Republic of China, Russia, Italy, Poland, France, Australia and Mexico.
History
In 1910 Gustav Otto (son of the inventor of the gasoline engine) founded Otto-Flugzeugwerke (Gustav Otto Aircraft Machine Works), based in Munich, Germany. On March 7, 1916, this company was registered as the Bayerische Flugzeugwerke (Bavarian Aircraft Works), the precursor to Bayerische Motorenwerke ("BMW"). In 1918, Otto opened a new factory in Munich, and in 1920 began to produce motorized bicycles, which were given the brand name "Flottweg", as the German words "flott" and "weg" mean "quick" and "away". In 1932, Dr. Georg Bruckmayer acquired the Flottweg name and founded the engine factory "Flottweg-Motoren-Werke". In 1933, the company began manufacturing and distributing motorcycles and aircraft engines in Munich.
In 1943 World War II forced Flottweg to move to Vilsbiburg. In 1946, the company began to manufacture specialized equipment for the printing industry (until 1984).
In 1953 Flottweg expanded into a new business sector: the production of liquid-solid separation machines. The first product was the Flottweg Decanter, a solid-bowl decanter centrifuge; the first model, the DECANTER Z1, was sold to BASF. Further development led to a higher-speed, more efficient decanter in 1964. In 1966 an adjustable impeller for the decanter was added, which allowed the operator to optimise the pond depth while the decanter was operating; this was useful in the extraction of olive oil. After further development, Flottweg sold a 3-phase decanter for olive oil in Spain in 1971. In 1977 Flottweg began to manufacture belt filter presses.
By 1988 Krauss-Maffei, Munich, was the main shareholder, owning 90% of Flottweg. In 1989, under a license agreement, Flottweg took over the Krauss-Maffei product line of decanter centrifuges. In the same year, Flottweg opened new sales and service offices in France and Russia, and in the next few years sales and service centers were opened in Düsseldorf and Leipzig, Germany.
In 1998, Flottweg acquired Veronesi, an Italian manufacturer of disc stack centrifuges. In the same year, Flottweg began to develop and produce disc stack machines at Vilsbiburg.
In 2007, Flottweg Separation Technology, Inc. was founded in Independence, Kentucky. In 2012, Flottweg was converted to Flottweg SE, legally registered as a Societas Europaea (also called a "Europe AG") in the European Union.
Products
As of 2015, Flottweg produces decanter centrifuges, which are used for continuous separation of solids from liquids, clarification of liquids, and classification of fine pigments. These are used to treat municipal and industrial wastewaters, in the manufacture of plastics, in the extraction and processing of animal and vegetable raw materials, in the clarification of drinks, in the mining and processing industry, and in the processing of biofuels.
Flottweg also produces separators (disc stack centrifuges), which are used in the food and beverage, chemical, pharmaceutical and petroleum industries. The company also makes belt presses, used to produce fruit and vegetable juices, dewater spent grains, create algae and herbal extracts, and process coffee grounds and industrial sludge.
References
External links
web presence of Flottweg SE
"Flottweg GmbH & Co, KG",
Liquid-solid separation
Manufacturing companies of Germany
German companies established in 1932
Companies based in Bavaria | Flottweg | [
"Chemistry"
] | 893 | [
"Separation processes by phases",
"Liquid-solid separation"
] |
24,231,219 | https://en.wikipedia.org/wiki/Distributed-element%20filter | A distributed-element filter is an electronic filter in which capacitance, inductance, and resistance (the elements of the circuit) are not localised in discrete capacitors, inductors, and resistors as they are in conventional filters. Its purpose is to allow a range of signal frequencies to pass, but to block others. Conventional filters are constructed from inductors and capacitors, and the circuits so built are described by the lumped element model, which considers each element to be "lumped together" at one place. That model is conceptually simple, but it becomes increasingly unreliable as the frequency of the signal increases, or equivalently as the wavelength decreases. The distributed-element model applies at all frequencies, and is used in transmission-line theory; many distributed-element components are made of short lengths of transmission line. In the distributed view of circuits, the elements are distributed along the length of conductors and are inextricably mixed together. The filter design is usually concerned only with inductance and capacitance, but because of this mixing of elements they cannot be treated as separate "lumped" capacitors and inductors. There is no precise frequency above which distributed element filters must be used but they are especially associated with the microwave band (wavelength less than one metre).
Distributed-element filters are used in many of the same applications as lumped element filters, such as selectivity of radio channel, bandlimiting of noise and multiplexing of many signals into one channel. Distributed-element filters may be constructed to have any of the bandforms possible with lumped elements (low-pass, band-pass, etc.) with the exception of high-pass, which is usually only approximated. All filter classes used in lumped element designs (Butterworth, Chebyshev, etc.) can be implemented using a distributed-element approach.
There are many component forms used to construct distributed-element filters, but all have the common property of causing a discontinuity on the transmission line. These discontinuities present a reactive impedance to a wavefront travelling down the line, and these reactances can be chosen by design to serve as approximations for lumped inductors, capacitors or resonators, as required by the filter.
The development of distributed-element filters was spurred on by the military need for radar and electronic counter measures during World War II. Lumped element analogue filters had long before been developed but these new military systems operated at microwave frequencies and new filter designs were required. When the war ended, the technology found applications in the microwave links used by telephone companies and other organisations with large fixed-communication networks, such as television broadcasters. Nowadays the technology can be found in several mass-produced consumer items, such as the converters (figure 1 shows an example) used with satellite television dishes.
General comments
The symbol λ is used to mean the wavelength of the signal being transmitted on the line or a section of line of that electrical length.
Distributed-element filters are mostly used at frequencies above the VHF (Very High Frequency) band (30 to 300 MHz). At these frequencies, the physical length of passive components is a significant fraction of the wavelength of the operating frequency, and it becomes difficult to use the conventional lumped element model. The exact point at which distributed-element modelling becomes necessary depends on the particular design under consideration. A common rule of thumb is to apply distributed-element modelling when component dimensions are larger than 0.1λ. The increasing miniaturisation of electronics has meant that circuit designs are becoming ever smaller compared to λ. The frequencies beyond which a distributed-element approach to filter design becomes necessary are becoming ever higher as a result of these advances. On the other hand, antenna structure dimensions are usually comparable to λ in all frequency bands and require the distributed-element model.
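The 0.1λ rule of thumb can be made concrete with a short calculation. The sketch below is illustrative only; the function names, the effective-permittivity parameter and the example numbers are assumptions, not taken from this article.

```python
# Illustrative check of the 0.1*lambda rule of thumb discussed above.
C0 = 299_792_458.0  # speed of light in vacuum, m/s

def guided_wavelength(freq_hz, eps_eff=1.0):
    """Wavelength on a line with effective relative permittivity eps_eff."""
    return C0 / (freq_hz * eps_eff ** 0.5)

def needs_distributed_model(dimension_m, freq_hz, eps_eff=1.0):
    """Apply the rule of thumb: distributed modelling above ~0.1 wavelengths."""
    return dimension_m > 0.1 * guided_wavelength(freq_hz, eps_eff)

# A 10 mm track at 1 GHz with eps_eff = 4: lambda is about 150 mm, so the
# component is under 0.1*lambda and lumped modelling is still reasonable.
print(needs_distributed_model(0.010, 1e9, eps_eff=4.0))  # False
# The same track at 5 GHz: lambda is about 30 mm, so distributed modelling is needed.
print(needs_distributed_model(0.010, 5e9, eps_eff=4.0))  # True
```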
The most noticeable difference in behaviour between a distributed-element filter and its lumped-element approximation is that the former will have multiple passband replicas of the lumped-element prototype passband, because transmission-line transfer characteristics repeat at harmonic intervals. These spurious passbands are undesirable in most cases.
For clarity of presentation, the diagrams in this article are drawn with the components implemented in stripline format. This does not imply an industry preference, although planar transmission line formats (that is, formats where conductors consist of flat strips) are popular because they can be implemented using established printed circuit board manufacturing techniques. The structures shown can also be implemented using microstrip or buried stripline techniques (with suitable adjustments to dimensions) and can be adapted to coaxial cables, twin leads and waveguides, although some structures are more suitable for some implementations than others. The open wire implementations, for instance, of a number of structures are shown in the second column of figure 3 and open wire equivalents can be found for most other stripline structures. Planar transmission lines are also used in integrated circuit designs.
History
Development of distributed-element filters began in the years before World War II. Warren P. Mason founded the field of distributed-element circuits. A major paper on the subject was published by Mason and Sykes in 1937. Mason had filed a patent much earlier, in 1927, and that patent may contain the first published electrical design which moves away from a lumped element analysis. Mason and Sykes' work was focused on the formats of coaxial cable and balanced pairs of wires – the planar technologies were not yet in use. Much development was carried out during the war years driven by the filtering needs of radar and electronic counter-measures. A good deal of this was at the MIT Radiation Laboratory, but other laboratories in the US and the UK were also involved.
Some important advances in network theory were needed before filters could be advanced beyond wartime designs. One of these was the commensurate line theory of Paul Richards. Commensurate lines are networks in which all the elements are the same length (or in some cases multiples of the unit length), although they may differ in other dimensions to give different characteristic impedances. Richards' transformation allows a lumped element design to be taken "as is" and transformed directly into a distributed-element design using a very simple transform equation.
The difficulty with Richards' transformation from the point of view of building practical filters was that the resulting distributed-element design invariably included series connected elements. This was not possible to implement in planar technologies and was often inconvenient in other technologies. This problem was solved by K. Kuroda who used impedance transformers to eliminate the series elements. He published a set of transformations known as Kuroda's identities in 1955, but his work was written in Japanese and it was several years before his ideas were incorporated into the English-language literature.
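In its standard textbook form (quoted here for orientation, not reproduced from this article), Richards' transformation replaces the lumped-element frequency variable with the tangent of the common electrical length of the commensurate lines:

j\Omega = j\tan\theta, \qquad \theta = \beta\ell = \frac{\pi}{2}\,\frac{\omega}{\omega_0},

where ω0 is the frequency at which each line is a quarter-wavelength long. Under this substitution an inductor of value L becomes a short-circuited stub of characteristic impedance L and a capacitor C becomes an open-circuited stub of characteristic admittance C; Kuroda's identities then use unit elements (extra line sections of the same commensurate length) to convert the series-connected stubs this produces into shunt stubs that planar technologies can realise.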
Following the war, one important research avenue was trying to increase the design bandwidth of wide-band filters. The approach used at the time (and still in use today) was to start with a lumped element prototype filter and through various transformations arrive at the desired filter in a distributed-element form. This approach appeared to be stuck at a minimum Q of five (see Band-pass filters below for an explanation of Q). In 1957, Leo Young at Stanford Research Institute published a method for designing filters which started with a distributed-element prototype. This prototype was based on quarter wave impedance transformers and was able to produce designs with bandwidths up to an octave, corresponding to a Q of about 1.3. Some of Young's procedures in that paper were empirical, but later, exact solutions were published. Young's paper specifically addresses directly coupled cavity resonators, but the procedure can equally be applied to other directly coupled resonator types, such as those found in modern planar technologies and illustrated in this article. The capacitive gap filter (figure 8) and the parallel-coupled lines filter (figure 9) are examples of directly coupled resonators.
The advent of printed planar technologies greatly simplified the manufacture of many microwave components including filters, and microwave integrated circuits then became possible. It is not known when planar transmission lines originated, but experiments using them were recorded as early as 1936. The inventor of printed stripline, however, is known; this was Robert M. Barrett, who published the idea in 1951. This caught on rapidly, and Barrett's stripline soon had fierce commercial competition from rival planar formats, especially triplate and microstrip. The generic term stripline in modern usage usually refers to the form then known as triplate.
Early stripline directly coupled resonator filters were end-coupled, but the length was reduced and the compactness successively increased with the introduction of parallel-coupled line filters, interdigital filters, and comb-line filters. Much of this work was published by the group at Stanford led by George Matthaei, and also including Leo Young mentioned above, in a landmark book which still today serves as a reference for circuit designers. The hairpin filter was first described in 1972. By the 1970s, most of the filter topologies in common use today had been described. More recent research has concentrated on new or variant mathematical classes of the filters, such as pseudo-elliptic, while still using the same basic topologies, or with alternative implementation technologies such as suspended stripline and finline.
The initial non-military application of distributed-element filters was in the microwave links used by telecommunications companies to provide the backbone of their networks. These links were also used by other industries with large, fixed networks, notably television broadcasters. Such applications were part of large capital investment programs. However, mass-production manufacturing made the technology cheap enough to incorporate in domestic satellite television systems. An emerging application is in superconducting filters for use in the cellular base stations operated by mobile phone companies.
Basic components
The simplest structure that can be implemented is a step in the characteristic impedance of the line, which introduces a discontinuity in the transmission characteristics. This is done in planar technologies by a change in the width of the transmission line. Figure 4(a) shows a step up in impedance (narrower lines have higher impedance). A step down in impedance would be the mirror image of figure 4(a). The discontinuity can be represented approximately as a series inductor, or more exactly, as a low-pass T circuit as shown in figure 4(a). Multiple discontinuities are often coupled together with impedance transformers to produce a filter of higher order. These impedance transformers can be just a short (often λ/4) length of transmission line. These composite structures can implement any of the filter families (Butterworth, Chebyshev, etc.) by approximating the rational transfer function of the corresponding lumped element filter. This correspondence is not exact since distributed-element circuits cannot be rational and is the root reason for the divergence of lumped element and distributed-element behaviour. Impedance transformers are also used in hybrid mixtures of lumped and distributed-element filters (the so-called semi-lumped structures).
Another very common component of distributed-element filters is the stub. Over a narrow range of frequencies, a stub can be used as a capacitor or an inductor (its impedance is determined by its length) but over a wide band it behaves as a resonator. Short-circuit, nominally quarter-wavelength stubs (figure 3(a)) behave as shunt LC antiresonators, and an open-circuit nominally quarter-wavelength stub (figure 3(b)) behaves as a series LC resonator. Stubs can also be used in conjunction with impedance transformers to build more complex filters and, as would be expected from their resonant nature, are most useful in band-pass applications. While open-circuit stubs are easier to manufacture in planar technologies, they have the drawback that the termination deviates significantly from an ideal open circuit (see figure 4(b)), often leading to a preference for short-circuit stubs (one can always be used in place of the other by adding or subtracting λ/4 to or from the length).
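The resonator-like behaviour described above follows from the standard input-impedance expressions for lossless stubs (textbook results, stated here for orientation):

Z_\mathrm{in}^{\,\text{short-circuit}} = j Z_0 \tan\beta\ell, \qquad Z_\mathrm{in}^{\,\text{open-circuit}} = -\,j Z_0 \cot\beta\ell .

At ℓ = λ/4 (βℓ = π/2) the short-circuited stub therefore presents an effectively infinite, anti-resonant impedance, while the open-circuited stub presents a near-zero, series-resonant impedance, which is the behaviour exploited in figures 3(a) and 3(b).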
A helical resonator is similar to a stub, in that it requires a distributed-element model to represent it, but is actually built using lumped elements. They are built in a non-planar format and consist of a coil of wire, on a former and core, and connected only at one end. The device is usually in a shielded can with a hole in the top for adjusting the core. It will often look physically very similar to the lumped LC resonators used for a similar purpose. They are most useful in the upper VHF and lower UHF bands whereas stubs are more often applied in the higher UHF and SHF bands.
Coupled lines (figures 3(c-e)) can also be used as filter elements; like stubs, they can act as resonators and likewise be terminated short-circuit or open-circuit. Coupled lines tend to be preferred in planar technologies, where they are easy to implement, whereas stubs tend to be preferred elsewhere. Implementing a true open circuit in planar technology is not feasible because of the dielectric effect of the substrate which will always ensure that the equivalent circuit contains a shunt capacitance. Despite this, open circuits are often used in planar formats in preference to short circuits because they are easier to implement. Numerous element types can be classified as coupled lines and a selection of the more common ones is shown in the figures.
Some common structures are shown in figures 3 and 4, along with their lumped-element counterparts. These lumped-element approximations are not to be taken as equivalent circuits but rather as a guide to the behaviour of the distributed elements over a certain frequency range. Figures 3(a) and 3(b) show a short-circuit and open-circuit stub, respectively. When the stub length is λ/4, these behave, respectively, as anti-resonators and resonators and are therefore useful, respectively, as elements in band-pass and band-stop filters. Figure 3(c) shows a short-circuited line coupled to the main line. This also behaves as a resonator, but is commonly used in low-pass filter applications with the resonant frequency well outside the band of interest. Figures 3(d) and 3(e) show coupled line structures which are both useful in band-pass filters. The structures of figures 3(c) and 3(e) have equivalent circuits involving stubs placed in series with the line. Such a topology is straightforward to implement in open-wire circuits but not with a planar technology. These two structures are therefore useful for implementing an equivalent series element.
Low-pass filters
A low-pass filter can be implemented quite directly from a ladder topology lumped-element prototype with the stepped impedance filter shown in figure 5. This is also called a cascaded lines design. The filter consists of alternating sections of high-impedance and low-impedance lines which correspond to the series inductors and shunt capacitors in the lumped-element implementation. Low-pass filters are commonly used to feed direct current (DC) bias to active components. Filters intended for this application are sometimes referred to as chokes. In such cases, each element of the filter is λ/4 in length (where λ is the wavelength of the main-line signal to be blocked from transmission into the DC source) and the high-impedance sections of the line are made as narrow as the manufacturing technology will allow in order to maximise the inductance. Additional sections may be added as required for the performance of the filter just as they would for the lumped-element counterpart. As well as the planar form shown, this structure is particularly well suited for coaxial implementations with alternating discs of metal and insulator being threaded on to the central conductor.
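As a pointer to how such a filter is dimensioned in practice, the sketch below applies the approximate stepped-impedance design equations found in standard microwave-engineering texts; the function, its parameters and the example values are illustrative assumptions and not taken from this article. Each series inductor of the lumped prototype becomes a short high-impedance section of electrical length g·Z0/Zhigh, and each shunt capacitor a low-impedance section of length g·Zlow/Z0, evaluated at the cutoff frequency.

```python
# Approximate stepped-impedance low-pass sizing, as found in standard
# microwave-engineering texts (illustrative; not taken from this article).
# g_values: lumped prototype values, assumed to alternate series-L / shunt-C
# starting with a series inductor.
import math

def section_lengths(g_values, z0, z_high, z_low, freq_cutoff, eps_eff=1.0):
    """Return the physical length in metres of each high/low impedance section."""
    c0 = 299_792_458.0
    wavelength = c0 / (freq_cutoff * math.sqrt(eps_eff))
    lengths = []
    for k, g in enumerate(g_values):
        if k % 2 == 0:            # series inductor -> narrow, high-impedance line
            electrical_len = g * z0 / z_high
        else:                     # shunt capacitor -> wide, low-impedance line
            electrical_len = g * z_low / z0
        lengths.append(electrical_len * wavelength / (2 * math.pi))
    return lengths

# 3rd-order Butterworth prototype (g1=1, g2=2, g3=1) in a 50-ohm system,
# realised with 120-ohm and 20-ohm lines, cut off at 2 GHz with eps_eff = 4.
print(section_lengths([1.0, 2.0, 1.0], 50.0, 120.0, 20.0, 2e9, eps_eff=4.0))
```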
A more complex example of stepped impedance design is presented in figure 6. Again, narrow lines are used to implement inductors and wide lines correspond to capacitors, but in this case, the lumped-element counterpart has resonators connected in shunt across the main line. This topology can be used to design elliptical filters or Chebyshev filters with poles of attenuation in the stopband. However, calculating component values for these structures is an involved process and has led to designers often choosing to implement them as m-derived filters instead, which perform well and are much easier to calculate. The purpose of incorporating resonators is to improve the stopband rejection. However, beyond the resonant frequency of the highest frequency resonator, the stopband rejection starts to deteriorate as the resonators are moving towards open-circuit. For this reason, filters built to this design often have an additional single stepped-impedance capacitor as the final element of the filter. This also ensures good rejection at high frequency.
Another common low-pass design technique is to implement the shunt capacitors as stubs with the resonant frequency set above the operating frequency so that the stub impedance is capacitive in the passband. This implementation has a lumped-element counterpart of a general form similar to the filter of figure 6. Where space allows, the stubs may be set on alternate sides of the main line as shown in figure 7(a). The purpose of this is to prevent coupling between adjacent stubs which detracts from the filter performance by altering the frequency response. However, a structure with all the stubs on the same side is still a valid design. If the stub is required to be a very low impedance line, the stub may be inconveniently wide. In these cases, a possible solution is to connect two narrower stubs in parallel. That is, each stub position has a stub on both sides of the line. A drawback of this topology is that additional transverse resonant modes are possible along the λ/2 length of line formed by the two stubs together. For a choke design, the requirement is simply to make the capacitance as large as possible, for which the maximum stub width of λ/4 may be used with stubs in parallel on both sides of the main line. The resulting filter looks rather similar to the stepped impedance filter of figure 5, but has been designed on completely different principles. A difficulty with using stubs this wide is that the point at which they are connected to the main line is ill-defined. A stub that is narrow in comparison to λ can be taken as being connected on its centre-line and calculations based on that assumption will accurately predict filter response. For a wide stub, however, calculations that assume the side branch is connected at a definite point on the main line leads to inaccuracies as this is no longer a good model of the transmission pattern. One solution to this difficulty is to use radial stubs instead of linear stubs. A pair of radial stubs in parallel (one on either side of the main line) is called a butterfly stub (see figure 7(b)). A group of three radial stubs in parallel, which can be achieved at the end of a line, is called a clover-leaf stub.
Band-pass filters
A band-pass filter can be constructed using any elements that can resonate. Filters using stubs can clearly be made band-pass; numerous other structures are possible and some are presented below.
An important parameter when discussing band-pass filters is the fractional bandwidth. This is defined as the ratio of the bandwidth to the geometric centre frequency. The inverse of this quantity is called the Q-factor, Q. If ω1 and ω2 are the frequencies of the passband edges, then:
bandwidth Δω = ω2 − ω1,
geometric centre frequency ω0 = √(ω1ω2), and
Q = ω0/Δω.
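As a worked example (the band edges are chosen purely for illustration), a passband from 4 GHz to 5 GHz gives

\Delta f = f_2 - f_1 = 1~\text{GHz}, \qquad f_0 = \sqrt{f_1 f_2} \approx 4.47~\text{GHz}, \qquad Q = \frac{f_0}{\Delta f} \approx 4.5,

a fractional bandwidth of roughly 22%, slightly too wide for the minimum Q of about 5 quoted below for the capacitive gap topology.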
Capacitive gap filter
The capacitive gap structure consists of sections of line about λ/2 in length which act as resonators and are coupled "end-on" by gaps in the transmission line. It is particularly suitable for planar formats, is easily implemented with printed circuit technology and has the advantage of taking up no more space than a plain transmission line would. The limitation of this topology is that performance (particularly insertion loss) deteriorates with increasing fractional bandwidth, and acceptable results are not obtained with a Q less than about 5. A further difficulty with producing low-Q designs is that the gap width is required to be smaller for wider fractional bandwidths. The minimum width of gaps, like the minimum width of tracks, is limited by the resolution of the printing technology.
Parallel-coupled lines filter
Parallel-coupled lines is another popular topology for printed boards, for which open-circuit lines are the simplest to implement since the manufacturing consists of nothing more than the printed track. The design consists of a row of parallel λ/2 resonators, but coupling over only λ/4 to each of the neighbouring resonators, so forming a staggered line as shown in figure 9. Wider fractional bandwidths are possible with this filter than with the capacitive gap filter, but a similar problem arises on printed boards as dielectric loss reduces the Q. Lower-Q lines require tighter coupling and smaller gaps between them which is limited by the accuracy of the printing process. One solution to this problem is to print the track on multiple layers with adjacent lines overlapping but not in contact because they are on different layers. In this way, the lines can be coupled across their width, which results in much stronger coupling than when they are edge-to-edge, and a larger gap becomes possible for the same performance.
For other (non-printed) technologies, short-circuit lines may be preferred since the short-circuit provides a mechanical attachment point for the line and Q-reducing dielectric insulators are not required for mechanical support. Other than for mechanical and assembly reasons, there is little preference for open-circuit over short-circuit coupled lines. Both structures can realize the same range of filter implementations with the same electrical performance. Both types of parallel-coupled filters, in theory, do not have spurious passbands at twice the centre frequency as seen in many other filter topologies (e.g., stubs). However, suppression of this spurious passband requires perfect tuning of the coupled lines which is not realized in practice, so there is inevitably some residual spurious passband at this frequency.
The hairpin filter is another structure that uses parallel-coupled lines. In this case, each pair of parallel-coupled lines is connected to the next pair by a short link. The "U" shapes so formed give rise to the name hairpin filter. In some designs the link can be longer, giving a wide hairpin with λ/4 impedance transformer action between sections.
The angled bends seen in figure 10 are common to stripline designs and represent a compromise between a sharp right angle, which produces a large discontinuity, and a smooth bend, which takes up more board area which can be severely limited in some products. Such bends are often seen in long stubs where they could not otherwise be fitted into the space available. The lumped-element equivalent circuit of this kind of discontinuity is similar to a stepped-impedance discontinuity. Examples of such stubs can be seen on the bias inputs to several components in the photograph at the top of the article.
Interdigital filter
Interdigital filters are another form of coupled-line filter. Each section of line is about λ/4 in length and is terminated in a short-circuit at one end only, the other end being left open-circuit. The end which is short-circuited alternates on each line section. This topology is straightforward to implement in planar technologies, but also particularly lends itself to a mechanical assembly of lines fixed inside a metal case. The lines can be either circular rods or rectangular bars, and interfacing to a coaxial format line is easy. As with the parallel-coupled line filter, the advantage of a mechanical arrangement that does not require insulators for support is that dielectric losses are eliminated. The spacing requirement between lines is not as stringent as in the parallel line structure; as such, higher fractional bandwidths can be achieved, and Q values as low as 1.4 are possible.
The comb-line filter is similar to the interdigital filter in that it lends itself to mechanical assembly in a metal case without dielectric support. In the case of the comb-line, all the lines are short-circuited at the same end rather than alternate ends. The other ends are terminated in capacitors to ground, and the design is consequently classified as semi-lumped. The chief advantage of this design is that the upper stopband can be made very wide, that is, free of spurious passbands at all frequencies of interest.
Stub band-pass filters
As mentioned above, stubs lend themselves to band-pass designs. General forms of these are similar to stub low-pass filters except that the main line is no longer a narrow high impedance line. Designers have many different topologies of stub filters to choose from, some of which produce identical responses. An example stub filter is shown in figure 12; it consists of a row of λ/4 short-circuit stubs coupled together by λ/4 impedance transformers.
The stubs in the body of the filter are double paralleled stubs while the stubs on the end sections are only singles, an arrangement that has impedance matching advantages. The impedance transformers have the effect of transforming the row of shunt anti-resonators into a ladder of series resonators and shunt anti-resonators. A filter with similar properties can be constructed with λ/4 open-circuit stubs placed in series with the line and coupled together with λ/4 impedance transformers, although this structure is not possible in planar technologies.
Yet another structure available is λ/2 open-circuit stubs across the line coupled with λ/4 impedance transformers. This topology has both low-pass and band-pass characteristics. Because it will pass DC, it is possible to transmit biasing voltages to active components without the need for blocking capacitors. Also, since short-circuit links are not required, no assembly operations other than the board printing are required when implemented as stripline. The disadvantages are
(i) the filter will take up more board real estate than the corresponding λ/4 stub filter, since the stubs are all twice as long;
(ii) the first spurious passband is at 2ω0, as opposed to 3ω0 for the λ/4 stub filter.
Konishi describes a wideband 12 GHz band-pass filter, which uses 60° butterfly stubs and also has a low-pass response (short-circuit stubs are required to prevent such a response). As is often the case with distributed-element filters, the bandform into which the filter is classified largely depends on which bands are desired and which are considered to be spurious.
High-pass filters
Genuine high-pass filters are difficult, if not impossible, to implement with distributed elements. The usual design approach is to start with a band-pass design, but make the upper stopband occur at a frequency that is so high as to be of no interest. Such filters are described as pseudo-high-pass and the upper stopband is described as a vestigial stopband. Even structures that seem to have an "obvious" high-pass topology, such as the capacitive gap filter of figure 8, turn out to be band-pass when their behaviour for very short wavelengths is considered.
See also
RF and microwave filter
Waveguide filter
Spurline
Power dividers and directional couplers
References
Bibliography
Bahl, I. J. Lumped Elements for RF and Microwave Circuits, Artech House, 2003 .
Barrett, R. M. and Barnes, M. H. "Microwave printed circuits", Radio Telev., vol.46, p. 16, September 1951.
Barrett, R. M. "Etched sheets serve as microwave components", Electronics, vol.25, pp. 114–118, June 1952.
Benoit, Hervé Satellite Television: Techniques of Analogue and Digital Television, Butterworth-Heinemann, 1999 .
Bhat, Bharathi and Koul, Shiban K. Stripline-like Transmission Lines for Microwave Integrated Circuits, New Age International, 1989 .
Carr, Joseph J. The Technician's Radio Receiver Handbook, Newnes, 2001
Cohn, S. B. "Parallel-coupled transmission-line resonator filters", IRE Transactions: Microwave Theory and Techniques, vol.MTT-6, pp. 223–231, April 1958.
Connor, F. R. Wave Transmission, Edward Arnold Ltd., 1972 .
Cristal, E. G. and Frankel, S. "Hairpin line/half-wave parallel-coupled-line filters", IEEE Transactions: Microwave Theory and Techniques, vol.MTT-20, pp. 719–728, November 1972.
Fagen, M. D. and Millman, S. A History of Engineering and Science in the Bell System: Volume 5: Communications Sciences (1925–1980), AT&T Bell Laboratories, 1984.
Farago, P. S. An Introduction to Linear Network Analysis, English Universities Press, 1961.
Ford, Peter John and Saunders, G. A. The Rise of the Superconductors, CRC Press, 2005 .
Golio, John Michael The RF and Microwave Handbook, CRC Press, 2001 .
Hong, Jia-Sheng and Lancaster, M. J. Microstrip Filters for RF/Microwave Applications, John Wiley and Sons, 2001 .
Huurdeman, Anton A. The Worldwide History of Telecommunications, Wiley-IEEE, 2003 .
Jarry, Pierre and Beneat, Jacques Design and Realizations of Miniaturized Fractal Microwave and RF Filters, John Wiley and Sons, 2009 .
Kneppo, Ivan Microwave Integrated Circuits, Springer, 1994 .
Konishi, Yoshihiro Microwave Integrated Circuits, CRC Press, 1991 .
Lee, Thomas H. Planar Microwave Engineering: A Practical Guide to Theory, Measurement, and Circuits, Cambridge University Press, 2004 .
Levy, R. "Theory of direct coupled-cavity filters", IEEE Transactions: Microwave Theory and Techniques, vol.MTT-15, pp. 340–348, June 1967.
Levy, R. Cohn, S.B., "A History of microwave filter research, design, and development", IEEE Transactions: Microwave Theory and Techniques, pp. 1055–1067, vol.32, issue 9, 1984.
Lundström, Lars-Ingemar Understanding Digital Television, Elsevier, 2006 .
Makimoto, Mitsuo and Yamashita, Sadahiko "Microwave resonators and filters for wireless communication: theory, design, and application", Springer, 2001 .
Mason, W. P. and Sykes, R. A. "The use of coaxial and balanced transmission lines in filters and wide band transformers for high radio frequencies", Bell Syst. Tech. J., vol.16, pp. 275–302, 1937.
Matthaei, G. L. "Interdigital band-pass filters", IRE Transactions: Microwave Theory and Techniques, vol.MTT-10, pp. 479–491, November 1962.
Matthaei, G. L. "Comb-line band-pass filters of narrow or moderate bandwidth", Microwave Journal, vol.6, pp. 82–91, August 1963.
Matthaei, George L.; Young, Leo and Jones, E. M. T. Microwave Filters, Impedance-Matching Networks, and Coupling Structures, McGraw-Hill, 1964 (a 1980 edition is also available).
Niehenke, E. C.; Pucel, R. A. and Bahl, I. J. "Microwave and millimeter-wave integrated circuits", IEEE Transactions: Microwave Theory and Techniques, vol.50, Iss. 2002, pp.846–857.
Di Paolo, Franco Networks and Devices using Planar Transmission Lines, CRC Press, 2000 .
Ragan, G. L. (ed.) Microwave transmission circuits, Massachusetts Institute of Technology Radiation Laboratory, Dover Publications, 1965.
Richards, P. I. "Resistor-transmission-line circuits", Proceedings of the IRE, vol.36, pp. 217–220, Feb. 1948.
Rogers, John W. M. and Plett, Calvin Radio Frequency Integrated Circuit Design, Artech House, 2003 .
Sarkar, Tapan K. History of Wireless, John Wiley and Sons, 2006 .
Sevgi, Levent Complex Electromagnetic Problems and Numerical Simulation Approaches, Wiley-IEEE, 2003 .
Thurston, Robert N., "Warren P. Mason: 1900-1986", Journal of the Acoustical Society of America, vol. 81, iss. 2, pp. 570-571, February 1987.
Young, L. "Direct-coupled cavity filters for wide and narrow bandwidths" IEEE Transactions: Microwave Theory and Techniques'', vol.MTT-11''', pp. 162–178, May 1963.
Distributed element circuits
Microwave technology | Distributed-element filter | [
"Engineering"
] | 6,915 | [
"Electronic engineering",
"Distributed element circuits"
] |
24,231,653 | https://en.wikipedia.org/wiki/Shuttle%20valve | A shuttle valve is a type of valve which allows fluid to flow through it from one of two sources. Generally a shuttle valve is used in pneumatic systems, although sometimes it will be found in hydraulic systems.
Structure and function
The basic structure of a shuttle valve is like a tube with three openings; one on each end, and one in the middle. A ball or other blocking valve element (the shuttle) moves freely within the tube. When pressure of a fluid is exerted through the opening at one end it pushes the shuttle towards the opposite end, closing it. This prevents the fluid from passing through that opening, but allows it to flow out through the middle opening. In this way two different sources can provide pressure to a device without the threat of back flow from one source to the other.
In pneumatic logic, a shuttle valve works as an OR gate.
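As a minimal illustration of this OR-gate behaviour (a sketch added here for clarity, not part of the original article; the pressure values and logic threshold are arbitrary assumptions), an ideal shuttle valve simply passes the higher of its two inlet pressures to the outlet:

    def shuttle_valve_outlet(p_left, p_right):
        """Ideal shuttle valve: the shuttle seals the lower-pressure inlet,
        so the outlet sees the higher of the two inlet pressures."""
        return max(p_left, p_right)

    def pneumatic_or(signal_a, signal_b, threshold=1.0):
        """Treat an inlet as logic '1' when its pressure exceeds an
        (assumed, arbitrary) threshold; the valve then behaves as an OR gate."""
        return shuttle_valve_outlet(signal_a, signal_b) > threshold

    # Truth-table check: the output is pressurized if either input is pressurized.
    for a in (0.0, 6.0):          # bar, illustrative values only
        for b in (0.0, 6.0):
            print(a, b, pneumatic_or(a, b))

Running the loop prints True whenever at least one inlet is pressurized, which is exactly the OR truth table.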
Applications
A shuttle valve has several applications including:
Multiple switches on one machine: by using a shuttle valve, more than one switch can operate a single machine, for safety, and each switch can be placed at any suitable location. This arrangement is normally used with heavy industrial machinery.
Winch brake circuit: a shuttle valve provides brake control in pneumatic winch applications. When the compressor is operated the shuttle valves direct air to open the brake shoes. When the control valve is centered, the brake cylinder is vented through the shuttle valve, and the brake shoes are allowed to close.
Air pilot control: converting from air to oil allows the cylinder to be locked in position. Shifting the four-way valve to either extreme position applies air pilot pressure through the shuttle valve, holding the two air-operated valves open and applying oil, under air pressure, to the corresponding side of the cylinder. Returning the manual valve to neutral exhausts the air pilot pressure, closing the two-way valves and trapping oil on both sides of the cylinder to lock it in position.
Standby and emergency systems: compressor systems requiring standby or purge-gas capability are pressure-controlled by a shuttle valve. This is used for instrumentation, pressurized cables, or any system requiring a continuous pneumatic supply. If the compressor fails, the standby tank, regulated to slightly below the compressor supply pressure, shifts the shuttle valve and takes over. When compressor pressure is re-established, the shuttle valve shifts back and seals off the standby system until it is needed again.
References
Control devices
Valves | Shuttle valve | [
"Physics",
"Chemistry",
"Engineering"
] | 491 | [
"Control devices",
"Physical systems",
"Control engineering",
"Valves",
"Hydraulics",
"Piping"
] |
24,237,000 | https://en.wikipedia.org/wiki/Rushton%20turbine | The Rushton turbine or Rushton disc turbine is a radial flow impeller used for many mixing applications (commonly for gas dispersion applications) in process engineering and was invented by John Henry Rushton. The design is based on a flat horizontal disk, with flat, vertically-mounted blades. Recent innovations include the use of concave or semi-circular blades.
References
Fluid dynamics
Pumps | Rushton turbine | [
"Physics",
"Chemistry",
"Engineering"
] | 79 | [
"Pumps",
"Turbomachinery",
"Chemical engineering",
"Physical systems",
"Hydraulics",
"Piping",
"Fluid dynamics"
] |
6,688,378 | https://en.wikipedia.org/wiki/Dewvaporation | Dewvaporation is a novel desalination technology developed at Arizona State University (Tempe) as an energy efficient tool for freshwater procurement and saline waste stream management. The system has relatively low installation costs and low operation and maintenance requirements.
The process uses air as a carrier gas that transfers water vapor from ascending evaporative channels to adjacent, descending dew-forming channels. Heat flowing through the barrier allows the evaporative energy requirement to be fully satisfied by the heat released by condensation on the dew-forming side. A small pressure difference is maintained so that the cooler, condensing air is kept on the cool side of the barrier.
Near atmospheric operation permits corrosion free and scale-resistant polypropylene construction, and also allows the use of low-grade heat to drive the process.
The process is proprietary, developed by Dr. James R. Beckman. Currently, Altela Inc. (Albuquerque, NM) is manufacturing this technology under the AltelaRain trade name.
Detailed process
According to the Bureau of Reclamation, a branch of the US Department of the Interior, the process uses simple corrugated plastic tanks with many "DewVaporation columns" inserted in each tank. Each column is made of corrugated plastic and is divided into two compartments. One side of the dividing wall receives and evaporates seawater into a hot air stream, while the other side condenses freshwater. The cooling from the evaporation helps water condense on the dividing wall, while the energy from the condensing vapor, now turned to droplets, passes back to the evaporation side and is absorbed by the evaporating seawater. In this way much of the energy (as heat) remains within the process and is not removed with the air leaving the DewVaporation column.
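A rough energy balance shows why returning the heat of condensation to the evaporating side matters. The sketch below is an illustration only, not a validated model of the dewvaporation tower; the latent-heat figure is a standard approximation and the heat-recovery fractions are arbitrary assumptions.

    # Illustrative energy balance for an evaporation/condensation process with
    # internal heat recovery (not a validated model of the dewvaporation tower).
    LATENT_HEAT = 2400.0   # kJ per kg of water evaporated (approximate value)

    def external_heat_required(recovery_fraction, water_kg=1.0):
        """Heat that must be supplied from outside when a given fraction of the
        heat of condensation is returned to the evaporating side."""
        return water_kg * LATENT_HEAT * (1.0 - recovery_fraction)

    for f in (0.0, 0.5, 0.9):   # assumed heat-recovery fractions
        print(f, external_heat_required(f), "kJ per kg of freshwater")

With no recovery, every kilogram of freshwater costs the full latent heat; as the assumed recovery fraction rises, the external (for example solar or waste) heat input per kilogram falls proportionally.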
Various improvements have been proposed, among them reusing the output brine and adding external heat in a stacked arrangement, so that the pressure and humidity gap between the two sides of the column is kept optimal and constant.
See also
Heat of evaporation
Heat transfer
References
External links
Solar Desalination using Dewvaporation
Seawater desalination using Dewvaporation technique: theoretical development and design evolution
Seawater desalination using Dewvaporation technique: experimental and enhancement work with economic analysis
Dewvaporation Desalination 5,000-Gallon-Per-Day Pilot Plant
Brackish and seawater desalination using a 20 ft2 dewvaporation tower
Water treatment
Water desalination | Dewvaporation | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 508 | [
"Water desalination",
"Water treatment",
"Water pollution",
"Environmental engineering",
"Water technology"
] |
6,690,048 | https://en.wikipedia.org/wiki/Neanderthal%20genome%20project | The Neanderthal genome project is an effort, founded in July 2006, of a group of scientists to sequence the Neanderthal genome.
It was initiated by 454 Life Sciences, a biotechnology company based in Branford, Connecticut in the United States and is coordinated by the Max Planck Institute for Evolutionary Anthropology in Germany. In May 2010 the project published their initial draft of the Neanderthal genome (Vi33.16, Vi33.25, Vi33.26) based on the analysis of four billion base pairs of Neanderthal DNA. The study determined that some mixture of genes occurred between Neanderthals and anatomically modern humans and presented evidence that elements of their genome remain in modern humans outside Africa.
In December 2013, a high coverage genome of a Neanderthal was reported for the first time. DNA was extracted from a toe fragment from a female Neanderthal researchers have dubbed the "Altai Neandertal". It was found in Denisova Cave in the Altai Mountains of Siberia and is estimated to be 50,000 years old.
Findings
The researchers recovered ancient DNA of Neanderthals by extracting DNA from the femur bones of three 38,000-year-old female Neanderthal specimens from Vindija Cave, Croatia, and from other bones found in Spain, Russia, and Germany. Only about half a gram of bone (21 samples of 50–100 mg each) was required for the sequencing, but the project faced many difficulties, including contamination of the samples by bacteria that had colonized the Neanderthal's body and by the humans who handled the bones at the excavation site and in the laboratory.
In February 2009, the Max Planck Institute's team led by Svante Pääbo announced that they had completed the first draft of the Neanderthal genome. An early analysis of the data found "no significant trace of Neanderthal genes in modern humans" in what was described as "the genome of Neanderthals, a human species driven to extinction". New results suggested that some adult Neanderthals were lactose intolerant. On the question of potentially cloning a Neanderthal, Pääbo commented, "Starting from the DNA extracted from a fossil, it is and will remain impossible."
In May 2010, the project released a draft of its report on the sequenced Neanderthal genome. Contradicting the results obtained from mitochondrial DNA (mtDNA), the report demonstrated a Neanderthal genetic contribution to non-African modern humans of between 1% and 4%. From their Homo sapiens samples in Eurasia (French, Han Chinese and Papuan), the authors stated that interbreeding likely occurred in the Levant before Homo sapiens migrated into Europe. This finding is disputed because of the paucity of archeological evidence supporting it: the fossil evidence does not conclusively place Neanderthals and modern humans in close proximity at this time and place.
According to preliminary sequences from 2010, 99.7% of the nucleotide sequences of the modern human and Neanderthal genomes are identical, compared with the roughly 98.8% of sequences that humans share with the chimpanzee. (For a time, studies of the commonality between chimpanzees and humans revised the figure from about 99% down to only about 94%, suggesting that the genetic gap was far larger than originally thought, but more recent work again puts the difference between humans, chimpanzees, and bonobos at about 1.0–1.2%.)
Additionally, in 2010, the discovery and analysis of mtDNA from the Denisova hominin in Siberia revealed that it differed from that of modern humans by 385 bases (nucleotides) in the mtDNA strand out of approximately 16,500, whereas the difference between modern humans and Neanderthals is around 202 bases. In contrast, the difference between chimpanzees and modern humans is approximately 1,462 mtDNA base pairs. Analysis of the specimen's nuclear DNA was then still under way and expected to clarify whether the find is a distinct species. Even though the Denisova hominin's mtDNA lineage predates the divergence of modern humans and Neanderthals, coalescent theory does not preclude a more recent divergence date for her nuclear DNA.
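For scale, the base differences quoted above can be converted into approximate percentage divergences of the mtDNA strand; the snippet below is a simple back-of-the-envelope calculation using the figures in this article, not a result of the project itself.

    # Approximate mtDNA divergence implied by the base differences quoted above.
    MTDNA_LENGTH = 16500            # approximate length of the mtDNA strand (bases)
    differences = {
        "Denisovan vs modern human": 385,
        "Neanderthal vs modern human": 202,
        "Chimpanzee vs modern human": 1462,
    }
    for pair, n in differences.items():
        print(f"{pair}: {100 * n / MTDNA_LENGTH:.1f}% of positions differ")

This works out to roughly 2.3%, 1.2% and 8.9% respectively, which makes the relative ordering of the three comparisons easy to see.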
A rib fragment from the partial skeleton of a Neanderthal infant found in the Mezmaiskaya cave in the northwestern foothills of the Caucasus Mountains was radiocarbon-dated in 1999 to 29,195±965 B.P., making it one of the latest-surviving Neanderthals. Ancient DNA recovered for a mtDNA sequence showed 3.48% divergence from that of the Feldhofer Neanderthal, some 2,500 km to the west in Germany. In 2011, phylogenetic analysis placed the two in a clade distinct from modern humans, suggesting that their mtDNA types have not contributed to the modern human mtDNA pool.
In 2015, Israel Hershkovitz of Tel Aviv University reported that a skull found in a cave in northern Israel is "probably a woman, who lived and died in the region about 55,000 years ago, placing modern humans there and then for the first time ever", pointing to a potential time and location at which modern humans first interbred with Neanderthals.
In 2016, the project found that Neanderthals bred with modern humans multiple times, and that Neanderthals interbred with Denisovans only once, as evidenced in the genome of modern-day Melanesians.
In 2006, two research teams working on the same Neanderthal sample published their results, Richard Green and his team in Nature, and James Noonan's team in Science. The results were received with some scepticism, mainly surrounding the issue of a possible admixture of Neanderthals into the modern human genome.
In 2006, Richard Green's team had used a then-new sequencing technique developed by 454 Life Sciences that amplifies single molecules for characterization, and obtained over a quarter of a million unique short sequences ("reads"). The technique delivers randomly located reads, so that sequences of interest – genes that differ between modern humans and Neanderthals – show up at random as well. However, this form of direct sequencing destroys the original sample, so to obtain new reads, more sample material must be destructively sequenced.
Noonan's team, led by Edward Rubin, used a different technique, one in which the Neanderthal DNA is inserted into bacteria, which make multiple copies of a single fragment. They demonstrated that Neanderthal genomic sequences can be recovered using a metagenomic library-based approach. All of the DNA in the sample is "immortalized" into metagenomic libraries. A DNA fragment is selected, then propagated in microbes. The resulting Neanderthal DNA sequences can then be sequenced or specific sequences can be studied.
Overall, their results were remarkably similar. One group suggested there was a hint of mixing between human and Neanderthal genomes, while the other found none, but both teams recognized that the data set was not large enough to give a definitive answer.
The publication by Noonan and his team revealed Neanderthal DNA sequences matching chimpanzee DNA, but not modern human DNA, at multiple locations, thus enabling the first accurate calculation of the date of the most recent common ancestor of H. sapiens and H. neanderthalensis. The research team estimates that the most recent common ancestor of their H. neanderthalensis samples and their H. sapiens reference sequence lived 706,000 years ago (divergence time), and estimates the separation of the human and Neanderthal ancestral populations at 370,000 years ago (split time).
Based on the analysis of mitochondrial DNA, the split of the Neanderthal and H. sapiens lineages is estimated to date to between 760,000 and 550,000 years ago (95% CI).
Mutations of the speech-related gene FOXP2 identical to those in modern humans were discovered in Neanderthal DNA from the El Sidrón 1253 and 1351c specimens, suggesting Neanderthals might have shared some basic language capabilities with modern humans.
See also
General:
References
Further reading
External links
"The Neandertal Genome Project Website"
"Genetic algorithm model shows that modern humans out-competed the Neanderthal" Short summary of a peer review paper 2008
"Last of the Neanderthals" National Geographic, October 2008
MSNBC "Neanderthal genome project launches" 22 November 2006.
BBC News Paul Rincon, "Neanderthal DNA secrets unlocked" 15 November 2006
"Welcome to Neanderthal genomics" 17 November 2006
"Neanderthal Genome Sequencing Yields Surprising Results and Opens a New Door to Future Studies" 15 November 2006
Ancient DNA (human)
Ancient human genetic history
Genetics in Germany
Genome projects
Human evolution
Genome | Neanderthal genome project | [
"Biology"
] | 1,840 | [
"Genome projects"
] |
6,692,258 | https://en.wikipedia.org/wiki/Medrad%20Inc. | Medrad was an American company headquartered in Warrendale, Pennsylvania, until it was purchased by Bayer AG in 2006.
It was founded in 1964 by M. Stephen Heilman, a doctor who created the first flow-controlled, angiographic power injector in the kitchen of his home near Pittsburgh, Pennsylvania.
An emergency department physician by day, Heilman saw potential in angiography. Power injecting a contrast agent into the vessels, he reasoned, would enhance the image and make it possible to diagnose the heart disease and stroke patients who regularly came through the emergency department.
Heilman's invention was the first in a list of Medrad innovations. His company later introduced injector technology for computed tomography (CT) and magnetic resonance (MR) imaging.
Medrad has more than 1,700 employees, 1,200 of whom are in the Pittsburgh area. The company has branches around the world including the Netherlands, France, Germany, Italy, China, UK, Brazil, Japan, Norway, Belgium, Sweden, Denmark, Singapore, Egypt, Mexico, Cyprus and Australia.
References
Medical equipment
Health care companies established in 1964
1964 establishments in Pennsylvania
2006 mergers and acquisitions | Medrad Inc. | [
"Biology"
] | 243 | [
"Medical equipment",
"Medical technology"
] |
2,000,736 | https://en.wikipedia.org/wiki/Gauge%20fixing | In the physics of gauge theories, gauge fixing (also called choosing a gauge) denotes a mathematical procedure for coping with redundant degrees of freedom in field variables. By definition, a gauge theory represents each physically distinct configuration of the system as an equivalence class of detailed local field configurations. Any two detailed configurations in the same equivalence class are related by a certain transformation, equivalent to a shear along unphysical axes in configuration space. Most of the quantitative physical predictions of a gauge theory can only be obtained under a coherent prescription for suppressing or ignoring these unphysical degrees of freedom.
Although the unphysical axes in the space of detailed configurations are a fundamental property of the physical model, there is no special set of directions "perpendicular" to them. Hence there is an enormous amount of freedom involved in taking a "cross section" representing each physical configuration by a particular detailed configuration (or even a weighted distribution of them). Judicious gauge fixing can simplify calculations immensely, but becomes progressively harder as the physical model becomes more realistic; its application to quantum field theory is fraught with complications related to renormalization, especially when the computation is continued to higher orders. Historically, the search for logically consistent and computationally tractable gauge fixing procedures, and efforts to demonstrate their equivalence in the face of a bewildering variety of technical difficulties, have been a major driver of mathematical physics from the late nineteenth century to the present.
Gauge freedom
The archetypical gauge theory is the Heaviside–Gibbs formulation of continuum electrodynamics in terms of an electromagnetic four-potential, which is presented here in space/time asymmetric Heaviside notation. The electric field E and magnetic field B of Maxwell's equations contain only "physical" degrees of freedom, in the sense that every mathematical degree of freedom in an electromagnetic field configuration has a separately measurable effect on the motions of test charges in the vicinity. These "field strength" variables can be expressed in terms of the electric scalar potential φ and the magnetic vector potential A through the relations:
E = −∇φ − ∂A/∂t,   B = ∇ × A.
If the transformation
A → A + ∇ψ
is made, then B remains unchanged, since (with the identity ∇ × ∇ψ = 0)
∇ × (A + ∇ψ) = ∇ × A = B.
However, this transformation changes E according to
E → E − ∇(∂ψ/∂t).
If another change
φ → φ − ∂ψ/∂t
is made then E also remains the same. Hence, the E and B fields are unchanged if one takes any function ψ(r, t) and simultaneously transforms A and φ via the two transformations above.
A particular choice of the scalar and vector potentials is a gauge (more precisely, gauge potential) and a scalar function ψ used to change the gauge is called a gauge function. The existence of arbitrary numbers of gauge functions corresponds to the U(1) gauge freedom of this theory. Gauge fixing can be done in many ways, some of which we exhibit below.
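The gauge invariance of E and B described above can be checked symbolically. The sketch below is illustrative only: it uses SymPy, an arbitrary choice of example potentials and gauge function ψ, and the standard relations E = −∇φ − ∂A/∂t and B = ∇ × A.

    import sympy as sp

    x, y, z, t = sp.symbols('x y z t')
    # Arbitrary example potentials and an arbitrary gauge function psi.
    phi = x*y*t
    A = sp.Matrix([y*z, sp.sin(x*t), x**2])
    psi = sp.exp(x)*sp.cos(t) + y*z*t

    def grad(f):
        return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

    def curl(F):
        return sp.Matrix([
            sp.diff(F[2], y) - sp.diff(F[1], z),
            sp.diff(F[0], z) - sp.diff(F[2], x),
            sp.diff(F[1], x) - sp.diff(F[0], y),
        ])

    def fields(phi, A):
        """E and B computed from the potentials."""
        E = -grad(phi) - sp.diff(A, t)
        B = curl(A)
        return E, B

    # Gauge-transformed potentials: A -> A + grad(psi), phi -> phi - d(psi)/dt.
    E1, B1 = fields(phi, A)
    E2, B2 = fields(phi - sp.diff(psi, t), A + grad(psi))

    print(sp.simplify(E1 - E2))   # zero vector: E is unchanged by the transformation
    print(sp.simplify(B1 - B2))   # zero vector: B is unchanged by the transformation

Both differences simplify to the zero vector, confirming that the transformed potentials describe the same physical fields.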
Although classical electromagnetism is now often spoken of as a gauge theory, it was not originally conceived in these terms. The motion of a classical point charge is affected only by the electric and magnetic field strengths at that point, and the potentials can be treated as a mere mathematical device for simplifying some proofs and calculations. Not until the advent of quantum field theory could it be said that the potentials themselves are part of the physical configuration of a system. The earliest consequence to be accurately predicted and experimentally verified was the Aharonov–Bohm effect, which has no classical counterpart. Nevertheless, gauge freedom is still true in these theories. For example, the Aharonov–Bohm effect depends on a line integral of A around a closed loop, and this integral is not changed by the gauge transformation A → A + ∇ψ, because the line integral of the gradient ∇ψ around a closed loop vanishes.
Gauge fixing in non-abelian gauge theories, such as Yang–Mills theory and general relativity, is a rather more complicated topic; for details see Gribov ambiguity, Faddeev–Popov ghost, and frame bundle.
An illustration
As an illustration of gauge fixing, one may look at a cylindrical rod and attempt to tell whether it is twisted. If the rod is perfectly cylindrical, then the circular symmetry of the cross section makes it impossible to tell whether or not it is twisted. However, if there were a straight line drawn along the length of the rod, then one could easily say whether or not there is a twist by looking at the state of the line. Drawing a line is gauge fixing. Drawing the line spoils the gauge symmetry, i.e., the circular symmetry U(1) of the cross section at each point of the rod. The line is the equivalent of a gauge function; it need not be straight. Almost any line is a valid gauge fixing, i.e., there is a large gauge freedom. In summary, to tell whether the rod is twisted, the gauge must be known. Physical quantities, such as the energy of the torsion, do not depend on the gauge, i.e., they are gauge invariant.
Coulomb gauge
The Coulomb gauge (also known as the transverse gauge) is used in quantum chemistry and condensed matter physics and is defined by the gauge condition (more precisely, gauge fixing condition)
∇ · A(r, t) = 0.
It is particularly useful for "semi-classical" calculations in quantum mechanics, in which the vector potential is quantized but the Coulomb interaction is not.
The Coulomb gauge has a number of properties:
Lorenz gauge
The Lorenz gauge is given, in SI units, by:
∇ · A + (1/c²) ∂φ/∂t = 0
and in Gaussian units by:
∇ · A + (1/c) ∂φ/∂t = 0.
This may be rewritten as:
∂μ Aμ = 0,
where Aμ = (φ/c, A) is the electromagnetic four-potential and ∂μ the 4-gradient [using the metric signature (+, −, −, −)].
It is unique among the constraint gauges in retaining manifest Lorentz invariance. Note, however, that this gauge was originally named after the Danish physicist Ludvig Lorenz and not after Hendrik Lorentz; it is often misspelled "Lorentz gauge". (Neither was the first to use it in calculations; it was introduced in 1888 by George Francis FitzGerald.)
The Lorenz gauge leads to the following inhomogeneous wave equations for the potentials (in SI units):
(1/c²) ∂²φ/∂t² − ∇²φ = ρ/ε₀
(1/c²) ∂²A/∂t² − ∇²A = μ₀ J.
It can be seen from these equations that, in the absence of current and charge, the solutions are potentials which propagate at the speed of light.
The Lorenz gauge is incomplete in some sense: there remains a subspace of gauge transformations which can also preserve the constraint. These remaining degrees of freedom correspond to gauge functions ψ which satisfy the wave equation
(1/c²) ∂²ψ/∂t² − ∇²ψ = 0.
These remaining gauge degrees of freedom propagate at the speed of light. To obtain a fully fixed gauge, one must add boundary conditions along the light cone of the experimental region.
Maxwell's equations in the Lorenz gauge simplify to
□ Aμ = μ₀ Jμ,
where □ is the d'Alembert operator and Jμ is the four-current.
Two solutions of these equations for the same current configuration differ by a solution of the vacuum wave equation
□ Aμ = 0.
In this form it is clear that the components of the potential separately satisfy the Klein–Gordon equation, and hence that the Lorenz gauge condition allows transversely, longitudinally, and "time-like" polarized waves in the four-potential. The transverse polarizations correspond to classical radiation, i.e., transversely polarized waves in the field strength. To suppress the "unphysical" longitudinal and time-like polarization states, which are not observed in experiments at classical distance scales, one must also employ auxiliary constraints known as Ward identities. Classically, these identities are equivalent to the continuity equation
∂μ Jμ = 0 (equivalently, ∂ρ/∂t + ∇ · J = 0).
Many of the differences between classical and quantum electrodynamics can be accounted for by the role that the longitudinal and time-like polarizations play in interactions between charged particles at microscopic distances.
Rξ gauges
The Rξ gauges are a generalization of the Lorenz gauge applicable to theories expressed in terms of an action principle with Lagrangian density ℒ. Instead of fixing the gauge by constraining the gauge field a priori, via an auxiliary equation, one adds a gauge breaking term to the "physical" (gauge invariant) Lagrangian of the form
−(∂μ Aμ)² / (2ξ).
The choice of the parameter ξ determines the choice of gauge. The Rξ Landau gauge is classically equivalent to Lorenz gauge: it is obtained in the limit ξ → 0 but postpones taking that limit until after the theory has been quantized. It improves the rigor of certain existence and equivalence proofs. Most quantum field theory computations are simplest in the Feynman–'t Hooft gauge, in which ξ = 1; a few are more tractable in other Rξ gauges, such as the Yennie gauge, ξ = 3.
An equivalent formulation of Rξ gauge uses an auxiliary field, a scalar field B with no independent dynamics, with a gauge-fixing term of the form
(ξ/2) B² + B ∂μ Aμ.
The auxiliary field, sometimes called a Nakanishi–Lautrup field, can be eliminated by "completing the square" to obtain the previous form. From a mathematical perspective the auxiliary field is a variety of Goldstone boson, and its use has advantages when identifying the asymptotic states of the theory, and especially when generalizing beyond QED.
Historically, the use of Rξ gauges was a significant technical advance in extending quantum electrodynamics computations beyond one-loop order. In addition to retaining manifest Lorentz invariance, the Rξ prescription breaks the symmetry under local gauge transformations while preserving the ratio of functional measures of any two physically distinct gauge configurations. This permits a change of variables in which infinitesimal perturbations along "physical" directions in configuration space are entirely uncoupled from those along "unphysical" directions, allowing the latter to be absorbed into the physically meaningless normalization of the functional integral. When ξ is finite, each physical configuration (orbit of the group of gauge transformations) is represented not by a single solution of a constraint equation but by a Gaussian distribution centered on the extremum of the gauge breaking term. In terms of the Feynman rules of the gauge-fixed theory, this appears as a contribution to the photon propagator for internal lines from virtual photons of unphysical polarization.
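For reference, the tree-level photon propagator in a general Rξ gauge takes the standard textbook form (this expression is standard QED and is not quoted from the article itself):
Dμν(k) = −i [gμν − (1 − ξ) kμ kν / k²] / (k² + iε).
The ξ-dependent piece is proportional to kμ kν and cancels in gauge-invariant amplitudes; the Feynman–'t Hooft choice ξ = 1 reduces the propagator to −i gμν / (k² + iε).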
The photon propagator, which is the multiplicative factor corresponding to an internal photon in the Feynman diagram expansion of a QED calculation, contains a factor gμν corresponding to the Minkowski metric. An expansion of this factor as a sum over photon polarizations involves terms containing all four possible polarizations. Transversely polarized radiation can be expressed mathematically as a sum over either a linearly or circularly polarized basis. Similarly, one can combine the longitudinal and time-like gauge polarizations to obtain "forward" and "backward" polarizations; these are a form of light-cone coordinates in which the metric is off-diagonal. An expansion of the gμν factor in terms of circularly polarized (spin ±1) and light-cone coordinates is called a spin sum. Spin sums can be very helpful both in simplifying expressions and in obtaining a physical understanding of the experimental effects associated with different terms in a theoretical calculation.
Richard Feynman used arguments along approximately these lines largely to justify calculation procedures that produced consistent, finite, high precision results for important observable parameters such as the anomalous magnetic moment of the electron. Although his arguments sometimes lacked mathematical rigor even by physicists' standards and glossed over details such as the derivation of Ward–Takahashi identities of the quantum theory, his calculations worked, and Freeman Dyson soon demonstrated that his method was substantially equivalent to those of Julian Schwinger and Sin-Itiro Tomonaga, with whom Feynman shared the 1965 Nobel Prize in Physics.
Forward and backward polarized radiation can be omitted in the asymptotic states of a quantum field theory (see Ward–Takahashi identity). For this reason, and because their appearance in spin sums can be seen as a mere mathematical device in QED (much like the electromagnetic four-potential in classical electrodynamics), they are often spoken of as "unphysical". But unlike the constraint-based gauge fixing procedures above, the Rξ gauge generalizes well to non-abelian gauge groups such as the SU(3) of QCD. The couplings between physical and unphysical perturbation axes do not entirely disappear under the corresponding change of variables; to obtain correct results, one must account for the non-trivial Jacobian of the embedding of gauge freedom axes within the space of detailed configurations. This leads to the explicit appearance of forward and backward polarized gauge bosons in Feynman diagrams, along with Faddeev–Popov ghosts, which are even more "unphysical" in that they violate the spin–statistics theorem. The relationship between these entities, and the reasons why they do not appear as particles in the quantum mechanical sense, becomes more evident in the BRST formalism of quantization.
Maximal abelian gauge
In any non-abelian gauge theory, any maximal abelian gauge is an incomplete gauge which fixes the gauge freedom outside of the maximal abelian subgroup. Examples are
For SU(2) gauge theory in D dimensions, the maximal abelian subgroup is a U(1) subgroup. If this is chosen to be the one generated by the Pauli matrix σ3, then the maximal abelian gauge is that which maximizes the function where
For SU(3) gauge theory in D dimensions, the maximal abelian subgroup is a U(1)×U(1) subgroup. If this is chosen to be the one generated by the Gell-Mann matrices λ3 and λ8, then the maximal abelian gauge is that which maximizes the function where
This construction applies analogously in higher algebras (for groups defined within those algebras), for example the Clifford algebra.
Less commonly used gauges
Various other gauges, which can be beneficial in specific situations have appeared in the literature.
Weyl gauge
The Weyl gauge (also known as the Hamiltonian or temporal gauge) is an incomplete gauge obtained by the choice
φ = 0 (in four-potential language, A⁰ = 0).
It is named after Hermann Weyl. It eliminates the negative-norm ghost, lacks manifest Lorentz invariance, and requires longitudinal photons and a constraint on states.
Multipolar gauge
The gauge condition of the multipolar gauge (also known as the line gauge, point gauge or Poincaré gauge (named after Henri Poincaré)) is:
r · A(r, t) = 0.
This is another gauge in which the potentials can be expressed in a simple way in terms of the instantaneous fields:
A(r, t) = −r × ∫₀¹ u B(u r, t) du,   φ(r, t) = −r · ∫₀¹ E(u r, t) du.
Fock–Schwinger gauge
The gauge condition of the Fock–Schwinger gauge (named after Vladimir Fock and Julian Schwinger; sometimes also called the relativistic Poincaré gauge) is:
xμ Aμ(x) = 0,
where xμ is the position four-vector.
Dirac gauge
The nonlinear Dirac gauge condition (named after Paul Dirac) is:
References
Further reading
Electromagnetism
Quantum field theory
Quantum electrodynamics
Gauge theories
pl:Cechowanie (fizyka)#Wybór cechowania | Gauge fixing | [
"Physics"
] | 3,024 | [
"Quantum field theory",
"Electromagnetism",
"Physical phenomena",
"Quantum mechanics",
"Fundamental interactions"
] |
2,001,549 | https://en.wikipedia.org/wiki/Cytochrome%20f | Cytochrome f is the largest subunit of cytochrome b6f complex (plastoquinol—plastocyanin reductase; ). In its structure and functions, the cytochrome b6f complex bears extensive analogy to the cytochrome bc1 complex of mitochondria and photosynthetic purple bacteria. Cytochrome f (cyt f) plays a role analogous to that of cytochrome c1, in spite of their different structures.
The 3D structure of Brassica rapa (turnip) cyt f has been determined. The lumen-side segment of cyt f includes two structural domains: a small one above a larger one that, in turn, is on top of the attachment to the membrane domain. The large domain consists of an anti-parallel beta-sandwich and a short haem-binding peptide, which form a three-layer structure. The small domain is inserted between beta-strands F and G of the large domain and is an all-beta domain. The haem nestles between two short helices at the N terminus of cyt f. Within the second helix is the sequence motif of the c-type cytochromes, CxxCH (residues 21–25), which covalently attaches the haem through thioether bonds to Cys-21 and Cys-24. His-25 is the fifth haem iron ligand. The sixth haem iron ligand is the alpha-amino group of Tyr-1 in the first helix. Cyt f has an internal network of water molecules that may function as a proton wire. The water chain appears to be a conserved feature of cyt f.
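As a small illustration (added here; the example fragment below is invented for demonstration and should not be taken as the actual Brassica rapa cytochrome f sequence), the c-type haem-attachment motif CxxCH can be located in a protein sequence with a simple pattern search:

    import re

    # CxxCH: two cysteines separated by any two residues, followed by a histidine.
    CXXCH = re.compile(r"C..CH")

    # Illustrative example fragment only, not the real cytochrome f sequence.
    sequence = "YPIFAQQGYENPREATGRIVCANCHLANKPVDIEVPQAVLPDTVFE"

    for match in CXXCH.finditer(sequence):
        start = match.start() + 1          # 1-based residue numbering
        print(f"Motif {match.group()} at residues {start}-{start + 4}")

The two cysteines matched by the pattern correspond to the residues that form the thioether bonds to the haem, and the final histidine to the axial iron ligand described above.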
References
Further reading
External links
Photosynthesis
Cytochromes
Peripheral membrane proteins | Cytochrome f | [
"Chemistry",
"Biology"
] | 366 | [
"Biochemistry",
"Photosynthesis"
] |