| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
14,682,521 | https://en.wikipedia.org/wiki/Somatic%20antigen | A somatic antigen is an antigen located in the cell wall of a gram-positive or gram-negative bacterium.
See also
Lipopolysaccharide
References
Bacterial proteins
Bacteriology | Somatic antigen | [
"Biology"
] | 40 | [
"Bacteria stubs",
"Bacteria"
] |
14,682,596 | https://en.wikipedia.org/wiki/Finite%20topological%20space | In mathematics, a finite topological space is a topological space for which the underlying point set is finite. That is, it is a topological space which has only finitely many elements.
Finite topological spaces are often used to provide examples of interesting phenomena or counterexamples to plausible sounding conjectures. William Thurston has called the study of finite topologies in this sense "an oddball topic that can lend good insight to a variety of questions".
Topologies on a finite set
Let X be a finite set. A topology on X is a subset τ of P(X) (the power set of X) such that
∅ ∈ τ and X ∈ τ.
if U ∈ τ and V ∈ τ, then U ∪ V ∈ τ.
if U ∈ τ and V ∈ τ, then U ∩ V ∈ τ.
In other words, a subset τ of P(X) is a topology if τ contains both ∅ and X and is closed under arbitrary unions and intersections. Elements of τ are called open sets. The general description of topological spaces requires that a topology be closed under arbitrary (finite or infinite) unions of open sets, but only under intersections of finitely many open sets. Here, that distinction is unnecessary: since the power set of a finite set is finite, there can be only finitely many open sets (and only finitely many closed sets).
A topology on a finite set X can also be thought of as a sublattice of (P(X), ⊆) which includes both the bottom element ∅ and the top element X.
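To make the definition concrete, here is a minimal Python sketch (an illustration, not part of the article; the function name is_topology and the example families are our own). Because everything is finite, checking closure under pairwise unions and intersections suffices.

```python
from itertools import combinations

def is_topology(X, tau):
    """Check the three axioms above for a family tau of subsets of X."""
    tau = [frozenset(s) for s in tau]
    X = frozenset(X)
    if frozenset() not in tau or X not in tau:
        return False
    # For a finite family, closure under pairwise unions/intersections
    # implies closure under arbitrary ones.
    for U, V in combinations(tau, 2):
        if U | V not in tau or U & V not in tau:
            return False
    return True

# Three of the four topologies on {a, b} from the "2 points" section below:
X = {"a", "b"}
trivial    = [set(), {"a", "b"}]
sierpinski = [set(), {"a"}, {"a", "b"}]
discrete   = [set(), {"a"}, {"b"}, {"a", "b"}]
assert all(is_topology(X, t) for t in (trivial, sierpinski, discrete))
assert not is_topology(X, [set(), {"a"}])  # missing the whole set X
```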
Examples
0 or 1 points
There is a unique topology on the empty set ∅. The only open set is the empty one. Indeed, this is the only subset of ∅.
Likewise, there is a unique topology on a singleton set {a}. Here the open sets are ∅ and {a}. This topology is both discrete and trivial, although in some ways it is better to think of it as a discrete space since it shares more properties with the family of finite discrete spaces.
For any topological space X there is a unique continuous function from ∅ to X, namely the empty function. There is also a unique continuous function from X to the singleton space {a}, namely the constant function to a. In the language of category theory the empty space serves as an initial object in the category of topological spaces while the singleton space serves as a terminal object.
2 points
Let X = {a,b} be a set with 2 elements. There are four distinct topologies on X:
{∅, {a,b}} (the trivial topology)
{∅, {a}, {a,b}}
{∅, {b}, {a,b}}
{∅, {a}, {b}, {a,b}} (the discrete topology)
The second and third topologies above are easily seen to be homeomorphic. The function from X to itself which swaps a and b is a homeomorphism. A topological space homeomorphic to one of these is called a Sierpiński space. So, in fact, there are only three inequivalent topologies on a two-point set: the trivial one, the discrete one, and the Sierpiński topology.
The specialization preorder on the Sierpiński space {a,b} with {b} open is given by: a ≤ a, b ≤ b, and a ≤ b.
3 points
Let X = {a,b,c} be a set with 3 elements. There are 29 distinct topologies on X but only 9 inequivalent topologies:
{∅, {a,b,c}}
{∅, {c}, {a,b,c}}
{∅, {a,b}, {a,b,c}}
{∅, {c}, {a,b}, {a,b,c}}
{∅, {c}, {b,c}, {a,b,c}} (T0)
{∅, {c}, {a,c}, {b,c}, {a,b,c}} (T0)
{∅, {a}, {b}, {a,b}, {a,b,c}} (T0)
{∅, {b}, {c}, {a,b}, {b,c}, {a,b,c}} (T0)
{∅, {a}, {b}, {c}, {a,b}, {a,c}, {b,c}, {a,b,c}} (T0)
The last 5 of these are all T0. The first one is trivial, while in the second, third, and fourth topologies the points a and b are topologically indistinguishable.
4 points
Let X = {a,b,c,d} be a set with 4 elements. There are 355 distinct topologies on X but only 33 inequivalent topologies:
{∅, {a, b, c, d}}
{∅, {a, b, c}, {a, b, c, d}}
{∅, {a}, {a, b, c, d}}
{∅, {a}, {a, b, c}, {a, b, c, d}}
{∅, {a, b}, {a, b, c, d}}
{∅, {a, b}, {a, b, c}, {a, b, c, d}}
{∅, {a}, {a, b}, {a, b, c, d}}
{∅, {a}, {b}, {a, b}, {a, b, c, d}}
{∅, {a, b, c}, {d}, {a, b, c, d}}
{∅, {a}, {a, b, c}, {a, d}, {a, b, c, d}}
{∅, {a}, {a, b, c}, {d}, {a, d}, {a, b, c, d}}
{∅, {a}, {b, c}, {a, b, c}, {a, d}, {a, b, c, d}}
{∅, {a, b}, {a, b, c}, {a, b, d}, {a, b, c, d}}
{∅, {a, b}, {c}, {a, b, c}, {a, b, c, d}}
{∅, {a, b}, {c}, {a, b, c}, {a, b, d}, {a, b, c, d}}
{∅, {a, b}, {c}, {a, b, c}, {d}, {a, b, d}, {c, d}, {a, b, c, d}}
{∅, {b, c}, {a, d}, {a, b, c, d}}
{∅, {a}, {a, b}, {a, b, c}, {a, b, d}, {a, b, c, d}} (T0)
{∅, {a}, {a, b}, {a, c}, {a, b, c}, {a, b, c, d}} (T0)
{∅, {a}, {b}, {a, b}, {a, c}, {a, b, c}, {a, b, c, d}} (T0)
{∅, {a}, {a, b}, {a, b, c}, {a, b, c, d}} (T0)
{∅, {a}, {b}, {a, b}, {a, b, c}, {a, b, c, d}} (T0)
{∅, {a}, {a, b}, {c}, {a, c}, {a, b, c}, {a, b, d}, {a, b, c, d}} (T0)
{∅, {a}, {a, b}, {a, c}, {a, b, c}, {a, b, d}, {a, b, c, d}} (T0)
{∅, {a}, {b}, {a, b}, {a, b, c}, {a, b, d}, {a, b, c, d}} (T0)
{∅, {a}, {b}, {a, b}, {a, c}, {a, b, c}, {a, b, d}, {a, b, c, d}} (T0)
{∅, {a}, {b}, {a, b}, {b, c}, {a, b, c}, {a, d}, {a, b, d}, {a, b, c, d}} (T0)
{∅, {a}, {a, b}, {a, c}, {a, b, c}, {a, d}, {a, b, d}, {a, c, d}, {a, b, c, d}} (T0)
{∅, {a}, {b}, {a, b}, {a, c}, {a, b, c}, {a, d}, {a, b, d}, {a, c, d}, {a, b, c, d}} (T0)
{∅, {a}, {b}, {a, b}, {c}, {a, c}, {b, c}, {a, b, c}, {a, b, d}, {a, b, c, d}} (T0)
{∅, {a}, {b}, {a, b}, {c}, {a, c}, {b, c}, {a, b, c}, {a, d}, {a, b, d}, {a, c, d}, {a, b, c, d}} (T0)
{∅, {a}, {b}, {a, b}, {c}, {a, c}, {b, c}, {a, b, c}, {a, b, c, d}} (T0)
{∅, {a}, {b}, {a, b}, {c}, {a, c}, {b, c}, {a, b, c}, {d}, {a, d}, {b, d}, {a, b, d}, {c, d}, {a, c, d}, {b, c, d}, {a, b, c, d}} (T0)
The last 16 of these are all T0.
Properties
Specialization preorder
Topologies on a finite set X are in one-to-one correspondence with preorders on X. Recall that a preorder on X is a binary relation on X which is reflexive and transitive.
Given a (not necessarily finite) topological space X we can define a preorder on X by
x ≤ y if and only if x ∈ cl{y}
where cl{y} denotes the closure of the singleton set {y}. This preorder is called the specialization preorder on X. Every open set U of X will be an upper set with respect to ≤ (i.e. if x ∈ U and x ≤ y then y ∈ U). Now if X is finite, the converse is also true: every upper set is open in X. So for finite spaces, the topology on X is uniquely determined by ≤.
Going in the other direction, suppose (X, ≤) is a preordered set. Define a topology τ on X by taking the open sets to be the upper sets with respect to ≤. Then the relation ≤ will be the specialization preorder of (X, τ). The topology defined in this way is called the Alexandrov topology determined by ≤.
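Both directions of this correspondence are easy to compute by brute force. The following sketch (the function names are our own, added for illustration) derives the specialization preorder from a topology and rebuilds the Alexandrov topology from a preorder, round-tripping the Sierpiński space described above.

```python
from itertools import chain, combinations

def specialization_preorder(X, tau):
    """x <= y iff x is in cl{y}, i.e. every open set containing x contains y."""
    opens = [frozenset(U) for U in tau]
    return {(x, y) for x in X for y in X
            if all(y in U for U in opens if x in U)}

def alexandrov_topology(X, leq):
    """Open sets are the upper sets: if x is in U and x <= y, then y is in U."""
    elems = list(X)
    subsets = chain.from_iterable(
        combinations(elems, r) for r in range(len(elems) + 1))
    def is_upper(U):
        return all(y in U for (x, y) in leq if x in U)
    return [set(U) for U in subsets if is_upper(frozenset(U))]

# Round-trip the Sierpinski space {a, b} with {b} open:
X = {"a", "b"}
tau = [set(), {"b"}, {"a", "b"}]
leq = specialization_preorder(X, tau)
assert leq == {("a", "a"), ("b", "b"), ("a", "b")}  # matches the text above
assert sorted(map(sorted, alexandrov_topology(X, leq))) == sorted(map(sorted, tau))
```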
The equivalence between preorders and finite topologies can be interpreted as a version of Birkhoff's representation theorem, an equivalence between finite distributive lattices (the lattice of open sets of the topology) and partial orders (the partial order of equivalence classes of the preorder). This correspondence also works for a larger class of spaces called finitely generated spaces. Finitely generated spaces can be characterized as the spaces in which an arbitrary intersection of open sets is open. Finite topological spaces are a special class of finitely generated spaces.
Compactness and countability
Every finite topological space is compact since any open cover must already be finite. Indeed, compact spaces are often thought of as a generalization of finite spaces since they share many of the same properties.
Every finite topological space is also second-countable (there are only finitely many open sets) and separable (since the space itself is countable).
Separation axioms
If a finite topological space is T1 (in particular, if it is Hausdorff) then it must, in fact, be discrete. This is because the complement of a point is a finite union of closed points and therefore closed. It follows that each point must be open.
Therefore, any finite topological space which is not discrete cannot be T1, Hausdorff, or anything stronger.
However, it is possible for a non-discrete finite space to be T0. In general, two points x and y are topologically indistinguishable if and only if x ≤ y and y ≤ x, where ≤ is the specialization preorder on X. It follows that a space X is T0 if and only if the specialization preorder ≤ on X is a partial order. There are numerous partial orders on a finite set. Each defines a unique T0 topology.
Similarly, a space is R0 if and only if the specialization preorder is an equivalence relation. Given any equivalence relation on a finite set X the associated topology is the partition topology on X. The equivalence classes will be the classes of topologically indistinguishable points. Since the partition topology is pseudometrizable, a finite space is R0 if and only if it is completely regular.
Non-discrete finite spaces can also be normal. The excluded point topology on any finite set is a completely normal T0 space which is non-discrete.
Connectivity
Connectivity in a finite space X is best understood by considering the specialization preorder ≤ on X. We can associate to any preordered set X a directed graph Γ by taking the points of X as vertices and drawing an edge x → y whenever x ≤ y. The connectivity of a finite space X can be understood by considering the connectivity of the associated graph Γ.
In any topological space, if x ≤ y then there is a path from x to y. One can simply take f(0) = x and f(t) = y for t > 0. It is easy to verify that f is continuous. It follows that the path components of a finite topological space are precisely the (weakly) connected components of the associated graph Γ. That is, there is a topological path from x to y if and only if there is an undirected path between the corresponding vertices of Γ.
Every finite space is locally path-connected, since the set
↑x = {y ∈ X : x ≤ y}
is a path-connected open neighborhood of x that is contained in every other neighborhood. In other words, this single set forms a local base at x.
Therefore, a finite space is connected if and only if it is path-connected. The connected components are precisely the path components. Each such component is both closed and open in X.
Finite spaces may have stronger connectivity properties. A finite space X is
hyperconnected if and only if there is a greatest element with respect to the specialization preorder. This is an element whose closure is the whole space X.
ultraconnected if and only if there is a least element with respect to the specialization preorder. This is an element whose only neighborhood is the whole space X.
For example, the particular point topology on a finite space is hyperconnected while the excluded point topology is ultraconnected. The Sierpiński space is both.
Additional structure
A finite topological space is pseudometrizable if and only if it is R0. In this case, one possible pseudometric is given by
d(x, y) = 0 if x ≡ y, and d(x, y) = 1 otherwise,
where x ≡ y means x and y are topologically indistinguishable. A finite topological space is metrizable if and only if it is discrete.
Likewise, a topological space is uniformizable if and only if it is R0. The uniform structure will be the pseudometric uniformity induced by the above pseudometric.
Algebraic topology
Perhaps surprisingly, there are finite topological spaces with nontrivial fundamental groups. A simple example is the pseudocircle, which is a space X with four points, two of which are open and two of which are closed. There is a continuous map from the unit circle S1 to X which is a weak homotopy equivalence (i.e. it induces an isomorphism of homotopy groups). It follows that the fundamental group of the pseudocircle is infinite cyclic.
More generally it has been shown that for any finite abstract simplicial complex K, there is a finite topological space XK and a weak homotopy equivalence f : |K| → XK where |K| is the geometric realization of K. It follows that the homotopy groups of |K| and XK are isomorphic. In fact, the underlying set of XK can be taken to be K itself, with the topology associated to the inclusion partial order.
Number of topologies on a finite set
As discussed above, topologies on a finite set are in one-to-one correspondence with preorders on the set, and T0 topologies are in one-to-one correspondence with partial orders. Therefore, the number of topologies on a finite set is equal to the number of preorders and the number of T0 topologies is equal to the number of partial orders.
The table below lists the number of distinct (T0) topologies on a set with n elements, together with the number of inequivalent (i.e. nonhomeomorphic) topologies.

| n | Distinct topologies | Distinct T0 topologies | Inequivalent topologies | Inequivalent T0 topologies |
|---|---|---|---|---|
| 0 | 1 | 1 | 1 | 1 |
| 1 | 1 | 1 | 1 | 1 |
| 2 | 4 | 3 | 3 | 2 |
| 3 | 29 | 19 | 9 | 5 |
| 4 | 355 | 219 | 33 | 16 |
Let T(n) denote the number of distinct topologies on a set with n points. There is no known simple formula to compute T(n) for arbitrary n. The On-Line Encyclopedia of Integer Sequences presently lists T(n) for n ≤ 18.
The number of distinct T0 topologies on a set with n points, denoted T0(n), is related to T(n) by the formula

T(n) = Σ_{k=0}^{n} S(n,k) T0(k)

where S(n,k) denotes the Stirling number of the second kind.
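As a numerical check (a sketch with our own helper names, using the T0 values from the table above and the standard recurrence for Stirling numbers of the second kind):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling numbers of the second kind S(n, k)."""
    if n == k:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

T0 = [1, 1, 3, 19, 219]  # distinct T0 topologies (labeled partial orders), n = 0..4

def T(n):
    """T(n) = sum over k of S(n,k) * T0(k)."""
    return sum(stirling2(n, k) * T0[k] for k in range(n + 1))

assert [T(n) for n in range(5)] == [1, 1, 4, 29, 355]
```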
See also
Finite geometry
Finite metric space
Topological combinatorics
References
External links
Topological spaces
Combinatorics | Finite topological space | [
"Mathematics"
] | 4,033 | [
"Discrete mathematics",
"Mathematical structures",
"Space (mathematics)",
"Combinatorics",
"Topological spaces",
"Topology"
] |
14,682,695 | https://en.wikipedia.org/wiki/Technology%20of%20television | The technology of television has evolved since its early days using a mechanical system invented by Paul Gottlieb Nipkow in 1884. Every television system works on the scanning principle first implemented in the rotating disk scanner of Nipkow. This turns a two-dimensional image into a time series of signals that represent the brightness and color of each resolvable element of the picture. By repeating a two-dimensional image quickly enough, the impression of motion can be transmitted as well. For the receiving apparatus to reconstruct the image, synchronization information is included in the signal to allow proper placement of each line within the image and to identify when a complete image has been transmitted and a new image is to follow.
While mechanically scanned systems were experimentally used, television as a mass medium was made practical by the development of electronic camera tubes and displays. By the turn of the 21st century, it was technically feasible to replace the analog signals for television broadcasting with digital signals. Many television viewers no longer use an antenna to receive over-the-air broadcasts, relying instead on cable television systems. Increasingly these are integrated with telephone and Internet services.
Elements of a television system
The elements of a simple broadcast television system are:
An image source. This is the electrical signal that represents a visual image, and may be derived from a professional video camera in the case of live television, a video tape recorder for playback of recorded images, or a telecine with a flying spot scanner for the transfer of motion pictures to video.
A sound source. This is an electrical signal from a microphone or from the audio output of a video tape recorder.
A transmitter, which generates radio signals (radio waves) and encodes them with picture and sound information.
A television antenna coupled to the output of the transmitter for broadcasting the encoded signals.
A television antenna to receive the broadcast signals.
A receiver (also called a tuner), which decodes the picture and sound information from the broadcast signals, and whose input is coupled to the antenna of the television set.
A display device, which turns the electrical signals into visual images.
An audio amplifier and loudspeaker, which turns electrical signals into sound waves (speech, music, and other sounds) to accompany the images.
Practical television systems include equipment for selecting different image sources, mixing images from several sources at once, insertion of pre-recorded video signals, synchronizing signals from many sources, and direct image generation by computer for such purposes as station identification. The facility for housing such equipment, as well as providing space for stages, sets, offices, etc., is called a television studio, and may be located many miles from the transmitter. Communication from the studio to the transmitter is accomplished via a dedicated cable or radio system.
Television signals were originally transmitted exclusively via land-based transmitters. The quality of reception varied greatly, dependent in large part on the location and type of receiving antenna. This led to the proliferation of large rooftop antennas to improve reception in the 1960s, replacing set-top dipole or "rabbit ears" antennas, which nevertheless remained popular. Antenna rotators (set-top-controlled servo motors on which the antenna mast is mounted, allowing the antenna to be turned toward the desired transmitter) also became popular.
In most cities today, cable television providers deliver signals over coaxial or fiber-optic cables for a fee. Signals can also be delivered by radio from satellites in geosynchronous orbit and received by parabolic dish antennas, which are comparatively large for analog signals, but much smaller for digital. Like cable providers, satellite television providers also require a fee, often less than cable systems. The affordability and convenience of digital satellite reception has led to the proliferation of small dish antennas outside many houses and apartments.
Digital systems may be inserted anywhere in the chain to provide better image transmission quality, reduction in transmission bandwidth, special effects, or security of transmission from reception by non-subscribers. A home today might have the choice of receiving analog or HDTV over the air, analog or digital cable with HDTV from a cable television company over coaxial cable, or even from the phone company over fiber optic lines. On the road, television can be received by pocket sized televisions, recorded on tape or digital media players, or played back on wireless phones (mobile or "cell" phones) over a high-speed or "broadband" internet connection.
Display technology
There are now several kinds of video displays used in modern TV sets:
CRT (cathode-ray tube): Up until the first decade of the 21st century, the most common screens were direct-view CRTs for up to roughly 100 cm (40 inch) (in 4:3 ratio) and 115 cm (45 inch) (in 16:9 ratio) diagonals. A typical NTSC broadcast signal's visible portion has an equivalent resolution of 449 x 483 rectangular pixels.
Rear-projection (RPTV) displays can be made in large sizes (254 cm (100 inch) and beyond) and use projection technology. Several types of projection systems are used in projection TVs: CRT-based, LCD-based, DLP (reflective micromirror chip)-based, D-ILA-based, and LCOS-based. Projection television has been commercially available since the 1970s, but at that time could not match the image sharpness of the CRT.
A variation is a video projector, using similar technology, which projects onto a screen. This is often referred to as "front projection".
Flat-panel display (LCD or plasma): Flat-panel television sets use active matrix LCD or plasma display technology. Flat-panel LCDs and plasma displays are as little as 25.4 mm thick and can be hung on a wall like a picture or set on a pedestal. Some models can also be used as computer monitors.
LED (light-emitting diode) arrays (not to be confused with the LED backlighting used behind some LCD panels consequently advertised as "LED") became the favored technology for large outdoor video and stadium screens after the advent of very bright LEDs and the matrix driver electronics for them. They make possible ultra-large flat panel video displays that other technologies are currently not able to match in performance.
OLED (organic light-emitting diode) technology is currently (2019) used in high end smartphone screens and televisions. Unlike LCD panels, OLED screens are viewable from extreme angles, are free from pixel lag, and offer a very high contrast ratio comparable to CRT displays, with very deep blacks. They can be extremely thin and lightweight and can, at least in prototype, be made flexible enough to roll up when not in use.
Each has its pros and cons. Front projection and plasma displays have a wide viewing angle (nearly 180 degrees) so they may be best for a home theater with a wide seating arrangement. Rear projection screens do not perform well in daylight or well-lit rooms and so are only suitable for darker viewing areas.
Terminology and specifications
Display resolution is the number of pixels displayed in each dimension of a given screen, often quoted as width × height. Before the year 2000, horizontal lines of resolution was the standard method of measurement for analog video. For example, a VHS VCR might be described as having 250 lines of resolution as measured across a circle inscribed in the center of the screen (approximately 440 pixels edge-to-edge). With analog signals, the bandwidth of the transmitted signal is directly proportional to the number of vertical lines and the frame rate.
A typical resolution of 720×480 or 720×576 means that the television display has 720 pixels across and 480 or 576 pixels on the vertical axis. The higher the resolution on a specified display, the sharper the image. Contrast ratio is a measurement of the range between the lightest and darkest points on the screen.
The higher the contrast ratio, the better the picture looks in terms of richness, depth, and shadow detail. The brightness of a picture measures how vibrant and vivid the colors are. It is measured in candela per square metre (cd/m2).
On the other hand, the so-called brightness and contrast adjustment controls on televisions and monitors are traditionally used to control different aspects of the picture display. The brightness control shifts the black level, affecting the image intensity or brightness, while the contrast control adjusts the contrast range of the image.
Transmission band
There are various bands on which televisions operate depending upon the country. The VHF and UHF signals in bands III to V are generally used. Lower frequencies do not have enough bandwidth available for television.
Countries with 60 Hz power line frequency use frame rates very near 30 per second, while 50 Hz regions use 25 frames per second. These rates were chosen to minimize the distortion of pictures that could be produced in analog receivers. For a given frame rate, an analog signal with 400 lines per frame would use less bandwidth than one with 600 or 800 lines per frame. Higher bandwidth makes receiver design more complicated, requires higher radio frequencies to be used, and may limit the number of channels that can be allocated in a given area; the same radio frequencies useful for television are also in high demand for other services such as aviation, land mobile radio, and mobile telephones.
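As a rough, hedged illustration of the lines-versus-bandwidth tradeoff described above (this is a textbook back-of-the-envelope estimate, not any broadcast standard; the Kell factor of 0.7 and the neglect of blanking intervals are assumptions):

```python
def analog_bandwidth_mhz(lines, fps, aspect=4 / 3, kell=0.7):
    """Crude estimate: ~aspect * lines * kell picture elements per line,
    lines * fps lines per second, two picture elements per cycle."""
    elements_per_second = (aspect * lines * kell) * lines * fps
    return elements_per_second / 2 / 1e6

print(analog_bandwidth_mhz(405, 25))  # ~1.9 MHz
print(analog_bandwidth_mhz(625, 25))  # ~4.6 MHz -- bandwidth grows roughly as lines^2
```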
Although the BBC initially used Band I VHF at 45 MHz, this frequency is (in the UK) no longer in use for this purpose. Band II is used for FM radio transmissions. Higher frequencies behave more like light and do not penetrate buildings or travel around obstructions well enough to be used in a conventional broadcast TV system, so they are generally only used for MMDS and satellite television, which uses frequencies from 2 to 12 GHz. TV systems in most countries relay the video as an AM (amplitude-modulation) signal and the sound as an FM (frequency-modulation) signal. An exception is France, where the sound is AM.
Aspect ratios
Aspect ratio refers to the ratio of the horizontal to vertical measurements of a television's picture. Mechanically scanned television as first demonstrated by John Logie Baird in 1926 used a 7:3 vertical aspect ratio, oriented for the head and shoulders of a single person in close-up.
Most of the early electronic TV systems, from the mid-1930s onward, shared the same aspect ratio of 4:3 which was chosen to match the Academy Ratio used in cinema films at the time. This ratio was also square enough to be conveniently viewed on round cathode-ray tubes (CRTs), which were all that could be produced given the manufacturing technology of the time. (Today's CRT technology allows the manufacture of much wider tubes, and the flat-screen technologies which are becoming steadily more popular have no technical aspect ratio limitations at all.) The BBC's television service used a more squarish 5:4 ratio from 1936 to 3 April 1950, when it too switched to a 4:3 ratio. This did not present significant problems, as most sets at the time used round tubes which were easily adjusted to the 4:3 ratio when the transmissions changed.
In the early 1950s, movie studios moved towards widescreen aspect ratios such as CinemaScope in an effort to distance their product from television. Although this was initially just a gimmick, widescreen is still the format of choice today and 4:3 aspect ratio movies are rare.
Yet the various television systems were not originally designed to be compatible with film at all. Traditional, narrow-screen movies are projected onto a television camera either so that the tops of the frames line up, to show facial features, or, for films with subtitles, the bottoms. This means that shots such as filmed newspaper pages or long captions filling the screen are cut off at each end. Similarly, while the frame rate of sound films is 24 per second, the scanning rate of NTSC is 29.97 frames per second, which requires a complex pulldown schedule. PAL and SECAM scan at 50 fields per second, so films are shortened (and the sound is off-key) by scanning each frame twice at 25 frames per second.
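A quick back-of-the-envelope sketch of the two conversions just described (the arithmetic follows directly from the frame rates; the variable names are our own):

```python
import math

film_fps, pal_fps = 24.0, 25.0

# PAL/SECAM: each film frame is scanned as two fields, so the film runs at 25 fps.
speedup = pal_fps / film_fps                 # ~1.042: about 4% faster
pitch_shift = 12 * math.log2(speedup)        # ~+0.71 semitone (the "off-key" sound)
runtime_100min = 100 * film_fps / pal_fps    # a 100-minute film lasts 96 minutes

# NTSC: 24 fps must fit a ~29.97 fps scan, so frames are held for
# alternating 2 and 3 fields ("3:2 pulldown") -- the complex schedule.
fields_per_4_frames = [2, 3, 2, 3]
assert sum(fields_per_4_frames) / 2 == 5     # 4 film frames -> 5 TV frames

print(f"speedup {speedup:.3f}, pitch +{pitch_shift:.2f} semitones, "
      f"100 min -> {runtime_100min:.0f} min")
```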
The switch to digital television systems was used as an opportunity to change the standard television picture format from the old ratio of 4:3 (1.33:1) to an aspect ratio of 16:9 (approximately 1.78:1). This enables TV to get closer to the aspect ratio of modern widescreen movies, which range from 1.66:1 through 1.85:1 to 2.35:1. There are two methods for transporting widescreen content, the most common of which uses what is called anamorphic widescreen format. This format is very similar to the technique used to fit a widescreen movie frame inside a 1.33:1 35 mm film frame. The image is compressed horizontally when recorded, then expanded again when played back. The anamorphic widescreen 16:9 format was first introduced via European PALplus television broadcasts and then later on "widescreen" Laser Discs and DVDs; the ATSC HDTV system uses straight widescreen format, no horizontal compression or expansion is used.
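A minimal sketch of the anamorphic idea, assuming Rec. 601-style 720×480 storage (as on NTSC DVDs): the same stored frame serves both display ratios, and only the pixel aspect ratio changes on playback.

```python
width, height = 720, 480  # storage dimensions shared by 4:3 and 16:9 DVD video

for display_ratio in (4 / 3, 16 / 9):
    par = display_ratio / (width / height)  # pixel aspect ratio on playback
    print(f"{display_ratio:.2f}:1 -> each stored pixel displayed {par:.3f}x wide")
# 4:3 -> 0.889 (pixels narrowed); 16:9 -> 1.185 (anamorphic pixels stretched)
```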
Recently "widescreen" has spread from television to computing where both desktop and laptop computers are commonly equipped with widescreen displays. There are some complaints about distortions of movie picture ratio due to some DVD playback software not taking account of aspect ratios; but this may subside as the DVD playback software matures. Furthermore, computer and laptop widescreen displays are in the 16:10 aspect ratio both physically in size and in pixel counts, and not in 16:9 of consumer televisions, leading to further complexity. This was a result of widescreen computer display engineers' assumption that people viewing 16:9 content on their computer would prefer that an area of the screen be reserved for playback controls, subtitles or their Taskbar, as opposed to viewing content full-screen.
Aspect ratio incompatibility
The television industry's changing of aspect ratios is not without difficulties, and can present a considerable problem.
Displaying a widescreen aspect (rectangular) image on a conventional aspect (square or 4:3) display can be shown:
in "letterboxed" format, with black horizontal bars at the top and bottom
with part of the image being cropped, usually the extreme left and right of the image being cut off (or in "pan and scan", parts selected by an operator or a viewer)
with the image horizontally compressed
A conventional aspect (square or 4:3) image on a widescreen aspect (rectangular, with a longer horizontal side) display can be shown:
in "pillar box" format, with black vertical bars to the left and right
with upper and lower portions of the image cut off (or in "tilt and scan", parts selected by an operator)
with the image vertically compressed
A common compromise is to shoot or create material at an aspect ratio of 14:9, and to lose some image at each side for 4:3 presentation, and some image at top and bottom for 16:9 presentation. In recent years, the cinematographic process known as Super 35 (championed by James Cameron) has been used to film a number of major movies such as Titanic, Legally Blonde, Austin Powers, and Crouching Tiger, Hidden Dragon. This process results in a camera-negative which can then be used to create both wide-screen theatrical prints, and standard "full screen" releases for television/VHS/DVD which avoid the need for either "letterboxing" or the severe loss of information caused by conventional pan-and-scan cropping.
Sound
Data
The end of analog television broadcasting
NTSC
In North America, the basic signal standards in use since 1941 had remained compatible enough that, as late as 2007, even the oldest monochrome televisions could still receive color broadcasts. However, the United States Congress passed a law that required the cessation of all conventional television broadcast signals by February 2009. After that date, all NTSC-standard televisions with analog-only tuners went dark unless fitted with a digital ATSC tuner. The digital channels occupy the same spectrum as the analog channels. Some of the spectrum previously occupied by the highest-numbered channels was auctioned off by the United States' Federal Communications Commission for other uses.
PAL and SECAM
PAL and SECAM are expected not to be broadcast in Europe and Eurasia by the mid-2020s. PAL-M may have a similar decommissioning timeline.
The EU recommended that Member Countries switch from Analog to digital by January 1, 2012. Luxembourg and the Netherlands already completed their closedowns in 2006, and Finland and Sweden closed down their analog broadcasts in 2007.
Britain started its digital switchover in October 2007. At 2 am on Wednesday 17 October 2007, the BBC2 analog transmitter covering the Whitehaven and Copeland areas (NW England) was switched off. The remaining four analog channels ceased broadcasting shortly after. The original five channels are now available only in digital form, alongside additional free-to-air channels.
New developments
3D television
Ambilight
Broadcast flag
CableCARD
Digital Light Processing (DLP)
Digital rights management (DRM)
Digital television (DTV)
Digital Video Recorders (DVR)
Direct Broadcast Satellite TV (DBS)
DVD and HD DVD standards
Blu-ray Disc
Flat panel display
Flicker-free (100 Hz or 120 Hz, depending on country)
High-definition television (HDTV)
High-Definition Multimedia Interface (HDMI)
IPTV also known as Internet television (IPTV)
Laser TV display technology
Liquid crystal display television (LCD)
Mirror TV
OLED TV - Roll up TV (using organic light-emitting diodes)
P2PTV
Pay-per-view
Personal video recorders (PVR)
Picture-in-picture (PiP)
Pixelplus
Placeshifting
Plasma display
Remote controls
Surface-conduction electron-emitter display (SED) display technology
The Slingbox
Time shifting
Video on-demand (VOD)
Ultra High Definition Television (UHDTV)
Web TV
Exterior designs
In the early days of television, cabinets were made of wood, with the grain often simulated, particularly in the later years; wooden cabinets went out of style in the 1980s. Up until the late 1970s, console TV/hi-fi units were common. These were large (about 6 feet wide by 4 feet high) wooden cabinets containing a television, speakers, a radio and a turntable.
See also
Broadcast safe
Electronic waste
History of display technology
History of television
References
Mass media technology
Television terminology | Technology of television | [
"Technology"
] | 3,754 | [
"Information and communications technology",
"Mass media technology",
"Television technology"
] |
14,682,858 | https://en.wikipedia.org/wiki/RasGEF%20domain | RasGEF domain is domain found in the CDC25 family of guanine nucleotide exchange factors for Ras-like small GTPases.
Ras proteins are membrane-associated molecular switches that bind GTP and GDP and slowly hydrolyze GTP to GDP. The balance between the GTP-bound (active) and GDP-bound (inactive) states is regulated by the opposing actions of proteins that activate the GTPase activity and proteins that promote the loss of bound GDP and the uptake of fresh GTP. The latter proteins are known as guanine-nucleotide dissociation stimulators (GDSs), also called guanine-nucleotide releasing (or exchange) factors (GRFs). Proteins that act as GDSs can be classified into at least two families on the basis of sequence similarities: the CDC24 family and this CDC25 (RasGEF) family.
The proteins of the CDC25 family range in size from 309 residues (LTE1) to 1596 residues (Sos). The sequence similarity shared by all these proteins is limited to a region of about 250 amino acids generally located in their C-terminal section (currently the only exceptions are Sos and RalGDS, where this domain makes up the central part of the protein). This domain has been shown, in CDC25 and SCD25, to be essential for the activity of these proteins.
Human proteins containing this domain
KNDC1; PLCE1; RALGDS; RALGPS1; RALGPS2; RAPGEF1; RAPGEF2; RAPGEF3; RAPGEF4; RAPGEF5; RAPGEF6; RAPGEFL1; RASGEF1A; RASGEF1B; RASGEF1C; RASGRF1; RASGRF2; RASGRP1; RASGRP2; RASGRP3; RASGRP4; RGL1; RGL2; RGL3; RGL4/RGR; SOS1; SOS2.
References
Further reading
Protein domains
Protein families
Peripheral membrane proteins | RasGEF domain | [
"Biology"
] | 450 | [
"Protein families",
"Protein domains",
"Protein classification"
] |
56,133 | https://en.wikipedia.org/wiki/Battle%20Angel%20Alita | Battle Angel Alita, known in Japan as , is a Japanese cyberpunk manga series created by Yukito Kishiro and originally published in Shueisha's Business Jump magazine from 1990 to 1995. The second of the comic's nine volumes was adapted in 1993 into a two-part anime original video animation titled Battle Angel for North American release by ADV Films and the UK and Australian release by Manga Entertainment. Manga Entertainment also dubbed Battle Angel Alita into English. A live-action film adaptation titled Alita: Battle Angel was released on February 14, 2019.
The series is set in the post-apocalyptic future and focuses on Alita ("Gally" in the Japanese version, and several other countries), a female cyborg who has lost all memories and is found in a junkyard by a cybernetics doctor who rebuilds and takes care of her. She discovers that there is one thing she remembers, the legendary cyborg martial art Panzer Kunst, which leads to her becoming a Hunter Warrior, or bounty hunter. The story traces Alita's attempts to rediscover her past and the characters whose lives she impacts on her journey. The manga series is continued in Battle Angel Alita: Last Order and Battle Angel Alita: Mars Chronicle.
Plot
Battle Angel Alita tells the story of Alita, an amnesiac female cyborg. Her intact head and chest, in suspended animation, are found by cyber medic expert Daisuke Ido in the local garbage dump. Ido manages to revive her, and finding she has lost her memory, names her Alita after his recently deceased cat. The rebuilt Alita soon discovers that she instinctively remembers the legendary martial art, Panzer Kunst, although she does not recall anything else. Alita uses her Panzer Kunst to first become a bounty hunter, killing cyborg criminals in the Scrapyard, and then as a star player in the brutal gladiator sport of Motorball. While in combat, Alita awakens memories of her earlier life on Mars. She becomes involved with the floating city of Zalem (Tiphares in some older translations) as one of their agents and is sent to hunt down criminals. Foremost is the mad genius Desty Nova, who has a complex, ever-changing relationship with Alita.
The futuristic dystopian world of Battle Angel Alita revolves around the city of Scrapyard (Kuzutetsu in the Japanese and various other versions), which has grown up around a massive scrap heap that rains down from Zalem. Ground dwellers have no access to Zalem and are forced to make a living in the sprawl below. Many are heavily modified by cybernetics to better cope with their hard life.
Zalem exploits the Scrapyard and surrounding farms, paying bounty hunters (called Hunter-Warriors) to hunt criminals and arranging violent sports to keep the population entertained. Massive tubes connect the Scrapyard to Zalem, and the city uses robots for carrying out errands and providing security on the ground. Occasionally, Zalemites (such as Daisuke Ido and Desty Nova) are exiled and sent to the ground. Aside from the robots and exiles, there is little contact between the two cities.
The story takes place in the former United States. According to a map, printed in the eighth volume, Scrapyard/Zalem is near Kansas City, Missouri, and the Necropolis is Colorado Springs, Colorado. Radio KAOS is at Dallas, Texas. Figure's coastal hometown is Alhambra, California. Desty Nova's Granite Inn is built out of a military base—NORAD at Cheyenne Mountain Complex, Colorado.
Battle Angel Alita is eventually revealed to take place in the 26th century. The sequel Battle Angel Alita: Last Order introduces a calendar era called "Era Sputnik" which has an epoch of AD 1957. The original Battle Angel Alita series begins in ES 577 (AD 2533) and ends in ES 590 (AD 2546), Battle Angel Alita: Last Order is mostly set roughly in ES 591 (AD 2547), and Battle Angel Alita: Mars Chronicle currently alternates between ES 373–374 (AD 2329–2330) and ES 594 (AD 2550).
Characters
Battle Angel Alita features a diverse cast of characters, many of whom shift in and out of focus as the story progresses. Some are never to be seen again following the conclusion of a story arc, while others make recurring appearances. The one character who remains a constant throughout is Alita, the protagonist and title character, a young cyborg with amnesia struggling to uncover her forgotten past through the only thing she remembers from it: by fighting. Early on in the story, Daisuke Ido, a bounty-hunting cybernetic doctor who finds and revives Alita, plays a major role as well, but midway the focus begins to increasingly shift to Desty Nova, an eccentric nanotechnology scientist who has fled from Zalem. Desty Nova is the mastermind behind many of the enemies and trials that Alita faces, but does not make an actual appearance until more than two years into the story, although he is alluded to early on. Finally, Kaos, Desty Nova's son, a frail and troubled radio DJ with psychometric powers, also begins to play a crucial role after he comes in contact with Alita. He broadcasts his popular radio show from the wastelands outside the Scrapyard, staying away from the increasing conflict between Zalem and the rebel army Barjack.
Production
Alita was originally a female cyborg police officer named Gally in an unpublished comic called Rainmaker. Publishers at Shueisha liked her and asked Kishiro to make a new story with her as the main character. After he had come up with the plot for a storyline he was commissioned to make it a long-running series.
Besides renaming Gally to Alita, older North American versions of the manga also changed the city of Zalem (from Biblical Hebrew שָׁלֵם šālēm, "peace") to Tiphares (after Tiferet). Since Kishiro also used the name Jeru (after Jerusalem) for the facility atop Zalem, Jeru was renamed Ketheres in the translation (after Keter). More recent versions reverted the cities' names back to Zalem and Jeru. To further develop the Biblical theme in the original series, Zalem's main computer was named Melchizedek, "the king of Salem" and "priest to the Most High God".
Media
Manga
The manga was first published in Shueisha's Business Jump magazine, serialized from 1990 to 1995 and collected in nine tankōbon volumes. Yukito Kishiro moved from Shueisha to Kodansha in August 2010, and the latter company acquired the license rights to Battle Angel Alita. A 6-volume special edition titled Gunnm: Complete Edition was released in Japan on December 23, 1998. The series was released in B5 format and contains the original story. Also included are rough sketches, a timeline and the first three Battle Angel Alita: Holy Night & Other Stories short stories. From October 5 to November 16, 2016, Kodansha republished Gunnm in B5 format. It was later reprinted in A5 format starting on November 21, 2018.
A spin-off series titled Ashen Victor was published in Ultra Jump in the September 1995 to July 1996 issues. It was released in a single volume on June 24, 1998.
A collection of side stories titled Battle Angel Alita: Holy Night & Other Stories was published in Ultra Jump from January 24, 1997, to December 19, 2006. It was released in a single volume on December 19, 2007. It is composed of four short side stories: "Holy Night", "Sonic Finger", "Hometown" and "Barjack Rhapsody".
In North America, Viz Media originally released the story in a 25-page comic book format, after which it followed the same volume format as its Japanese counterpart. Viz also released the Ashen Victor spin-off series. Along with the rest of the series, Kishiro's original Battle Angel Alita manga has been licensed for North American publication through Kodansha USA, who republished it in a five-volume omnibus format in 2017 and 2018, with the last volume including Ashen Victor. Holy Night & Other Stories has also been licensed by Kodansha USA, who published it digitally on October 30, 2018, and as a hardcover on November 20, 2018. Battle Angel Alita has also been licensed for international release in a number of languages and regions. It was published in Spain by Planeta DeAgostini, in Brazil by Editora JBC, in France and the Netherlands by Glenat, in Poland by JPF, in Germany by Carlsen, in Taiwan by Tong Li Publishing, in Argentina by Editorial Ivrea and in Russia by Xl Media.
OVA
A two-episode OVA was released in 1993, incorporating elements from the second volume of the manga with changes to the characters and storyline. According to Kishiro, only two episodes were originally planned. At the time, he was too busy with the manga "to review the plan coolly" and was not serious about an anime adaptation. It remains the only anime adaptation of Battle Angel Alita to date and there are no plans to revive it.
A 3-minute 3D-CGI rendered movie clip is included in volume 6 of the Japanese Gunnm: Complete Edition (1998–2000). It showcases Alita in a Third League Motorball race with players from two of her races such as "Armor" Togo, Degchalev, and Valdicci, and depicts events from both of those races.
Film
20th Century Fox and Director James Cameron acquired the film rights to Battle Angel. It was originally brought to Cameron's attention by filmmaker Guillermo del Toro. Cameron is said to be a big fan of the manga, and he was waiting until CGI technology was sufficiently advanced to make a live-action 3D film with effects comparable to Avatar. The film would be a live-action adaptation of the first four volumes of the manga series; "What I’m going to do is take the spine story and use elements from the first four books. So, the Motorball from books three and four, and parts of the story of one and two will all be in the movie."
Alita was originally scheduled to be his next production after the TV series Dark Angel, which was influenced by Battle Angel Alita. After Avatar, he stated he would work on Avatar sequels before starting Alita.
Cameron's producer Jon Landau said, "I am sure you will get to see Battle Angel. It is one of my favourite stories, a great story about a young woman's journey to self-discovery. It is a film that asks the question: What does it mean to be human? Are you human if you have a heart, a brain or a soul? I look forward to giving the audience the film." Landau half-jokingly stated that the project may be titled Alita: The Battle Angel, because of Cameron's tradition in naming his films with either an "A" or a "T".
In October 2015, it was reported that Robert Rodriguez would direct the film with Cameron and Landau producing. On April 26, 2016, both The Hollywood Reporter and Variety reported that Maika Monroe, Rosa Salazar, Zendaya and Bella Thorne were in the running for the lead role. Near the end of May 2016, Salazar was cast as Alita, and on February 7, 2017, The Hollywood Reporter reported that Jennifer Connelly would be joining the cast as one of the villains.
On December 8, 2017, the first trailer for Battle Angel was released to the public, and the film, titled Alita: Battle Angel and directed by Robert Rodriguez, came out in 2019.
Novels
A novelization of the manga by Yasuhisa Kawamura was released on April 4, 1997, by Shueisha's JUMP j-BOOKS label.
In November 2018, Titan Books published Alita: Battle Angel—Iron City, a prequel novel for the film. The novel was written by Pat Cadigan, a notable science fiction author.
Video game
Gunnm: Martian Memory is an action RPG video game for the PlayStation by Banpresto. It is an adaptation of the manga, following Alita (Gally) from her discovery in the Zalem dump heap by Daisuke Ido up through and beyond her career as a TUNED agent. The story includes additional elements that Kishiro had conceived when he ended the original manga in 1995, but was unable to implement at the time, which involved Alita going into outer space. He then expanded the story, which formed the basis for the manga Battle Angel Alita: Last Order.
Related works
Ashen Victor, a story set six years before the beginning of Battle Angel Alita. It primarily tells the story of a Motorball player and it sets the evolution of the game into what it becomes in the Battle Angel Alita series.
Last Order, a continuation of Battle Angel Alita, published monthly in Ultra Jump and later in Evening.
Mars Chronicle, a continuation of Last Order, published in Evening.
Reception
During Gunnm's initial run in Business Jump manga magazine between 1990 and 1995, the magazine's circulation reached a record 760,000 monthly sales, the highest in its history. Between 1990 and 1995, Business Jump magazine had a total circulation of over 50 million copies.
Reviews and criticism
The fantasy world created by Yukito Kishiro has received positive reviews from many websites. MangaLife.com reviewer Adam Volk calls the Gunnm universe "complex and stunningly compelling". He writes that after reading the first volume, it becomes clear why the author of the manga is known as a master of the genre. The work combines a large amount of action with believable and independent characters, which the reviewer said is rare in films, comics and TV shows. In the end, the reviewer called the original manga a classic example of a beautiful story about life.
Patrick King, a reviewer for the online anime and manga magazine, Animefringe, praises the "magnificence of Kishiro's creation" and "a living, breathing, frightening, incredibly plausible, perhaps even prophetic look at the future of mankind." He considered that the main themes that Kishiro touches on in his work are human nature and sincerity. King also noted that, unlike Kishiro's other work, Aqua Knight, the style of the original work is more realistic. The violence present in the manga, according to the reviewer, makes the work unsuitable for children, but helps the reader understand what exactly the main character is fighting.
Raphael See of THEM Anime Reviews opined that Battle Angel is "probably the best cyborg anime" he has seen, and that although it does not stand out with anything special, its high quality leaves an overall positive viewing experience. He writes: "A nice feature of this work is the display of cybernetics and technology in the context of the surrounding world, without focusing on the plot itself." The only downside, according to the critic, is the brevity of the series, which gives the impression that the anime is part of something bigger.
Anime News Network critic Theron Martin praises the author's meticulous background work and emphasizes that Kishiro has not lost his artistic skills over time. The reviewer also noted that "the reader will always be able to understand what is happening, even in moments of stunning action". JapanVisitor.com notes the influence on Kishiro of writers such as Philip K. Dick (Do Androids Dream of Electric Sheep?) and Isaac Asimov (I, Robot).
See also
Aelita, also known as Aelita, or The Decline of Mars
Armitage III
Genocyber
Notes
References
Bibliography
External links
Official website at Kodansha
Battle Angel
1990s science fiction novels
Action anime and manga
Biorobotics in fiction
Comics about genetic engineering
Cyberpunk anime and manga
Cyborgs in anime and manga
Dystopian fiction
Exploratory engineering
Fiction about artificial intelligence
Fiction about brain–computer interface
Fiction about consciousness transfer
Fiction about cyborgs
Fiction about immortality
Fiction about megastructures
Fiction about prosthetics
Fiction about robots
Fiction about terraforming
Fiction about the Solar System
Fiction about wormholes
Fictional bounty hunters
Jump J-Books
Manga adapted into films
Nanopunk
Post-apocalyptic anime and manga
Seinen manga
Shueisha franchises
Shueisha manga
Transhumanism in anime and manga
Viz Media manga | Battle Angel Alita | [
"Technology",
"Biology"
] | 3,397 | [
"Exploratory engineering",
"Fiction about cyborgs",
"Cyborgs"
] |
56,135 | https://en.wikipedia.org/wiki/Touchstone%20%28assaying%20tool%29 | A touchstone is a small tablet of dark stone such as slate or lydite, used for assaying precious metal alloys. It has a finely grained surface on which soft metals leave a visible trace.
History
The touchstone was used during the Harappa period of the Indus Valley civilization ca. 2600–1900 BC for testing the purity of soft metals. It was also used in Ancient Greece.
The touchstone allowed anyone to easily and quickly determine the purity of a metal sample. This, in turn, led to the widespread adoption of gold as a standard of exchange. Although mixing gold with less expensive materials was common in coinage, using a touchstone one could easily determine the quantity of gold in the coin, and thereby calculate its intrinsic worth.
Operation
Drawing a line with gold on a touchstone will leave a visible trace. Because different alloys of gold have different colors (see gold), the unknown sample can be compared to samples of known purity. This method has been used since ancient times. In modern times, additional tests can be done. The trace will react in different ways to specific concentrations of nitric acid or aqua regia, thereby identifying the quality of the gold: 24 karat gold is not affected but 14 karat gold will show chemical activity.
See also
Litmus test
Spot analysis
Streak test
References
Materials science
Jewellery
Gold
Lithics
Inventions of the Indus Valley Civilisation
Indian inventions | Touchstone (assaying tool) | [
"Physics",
"Materials_science",
"Engineering"
] | 284 | [
"Materials science stubs",
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
56,139 | https://en.wikipedia.org/wiki/Butanol | Butanol (also called butyl alcohol) is a four-carbon alcohol with a formula of C4H9OH, which occurs in five isomeric structures (four structural isomers), from a straight-chain primary alcohol to a branched-chain tertiary alcohol; all are a butyl or isobutyl group linked to a hydroxyl group (sometimes represented as BuOH, sec-BuOH, i-BuOH, and t''-BuOH). These are 1-butanol, two stereoisomers of sec-butyl alcohol, isobutanol and tert-butyl alcohol. Butanol is primarily used as a solvent and as an intermediate in chemical synthesis, and may be used as a fuel. Biologically produced butanol is called biobutanol, which may be n-butanol or isobutanol.
Isomers
The unmodified term butanol usually refers to the straight chain isomer with the alcohol functional group at the terminal carbon, which is also known as 1-butanol. The straight chain isomer with the alcohol at an internal carbon is sec-butyl alcohol or 2-butanol. The branched isomer with the alcohol at a terminal carbon is isobutanol or 2-methyl-1-propanol, and the branched isomer with the alcohol at the internal carbon is tert-butyl alcohol or 2-methyl-2-propanol.
The butanol isomers have different melting and boiling points. 1-Butanol and isobutanol have limited solubility, sec-butyl alcohol has substantially greater solubility, whereas tert-butyl alcohol is miscible with water. The hydroxyl group makes the molecule polar, promoting solubility in water, while the longer hydrocarbon chain mitigates the polarity and reduces solubility.
Toxicity
Butanol exhibits a low order of toxicity in single dose experiments with laboratory animals and is considered safe enough for use in cosmetics. Brief, repeated overexposure with the skin can result in depression of the central nervous system, as with other short-chain alcohols. Exposure may also cause severe eye irritation and moderate skin irritation. The main dangers are from prolonged exposure to the alcohol's vapors. In extreme cases this includes suppression of the central nervous system and even death. Under most circumstances, butanol is quickly metabolized to carbon dioxide. It has not been shown to damage DNA or cause cancer.
Uses
Primary uses
Butanol is used as a solvent for a wide variety of chemical and textile processes, in organic synthesis, and as a chemical intermediate. It is also used as a paint thinner and a solvent in other coating applications where a relatively slow evaporating latent solvent is preferable, as with lacquers and ambient-cured enamels. It is also used as a component of hydraulic and brake fluids.
A 50% solution of butanol in water has been used since the 20th century to retard the drying of fresh plaster in fresco painting. The solution is usually sprayed on the wet plaster after the plaster has been trowelled smooth and extends the working period during which frescos can be painted up to 18 hours.
Butanol is used in the synthesis of 2-butoxyethanol. A major application for butanol is as a reactant with acrylic acid to produce butyl acrylate, a primary ingredient of water based acrylic paint.
It is also used as a base for perfumes, but on its own has a highly alcoholic aroma.
Salts of butanol are chemical intermediates; for example, alkali metal salts of tert-butanol are tert-butoxides.
Recreational use
2-Methyl-2-butanol is a central nervous system depressant with a similar effect upon ingestion to ethanol. Case reports have been documented demonstrating its potential for abuse.
Biobutanol
Butanol (n-butanol or isobutanol) is a potential biofuel (butanol fuel). Butanol at 85 percent concentration can be used in cars designed for gasoline (petrol) without any change to the engine (unlike 85% ethanol). It contains more energy for a given volume than ethanol and almost as much as gasoline, so a vehicle using butanol would return fuel consumption more comparable to gasoline than to ethanol. Butanol can also be added to diesel fuel to reduce soot emissions. Photoautotrophic microorganisms, such as cyanobacteria, can be engineered to produce 1-butanol indirectly from CO2 and water.
Production
Butanols are normally present in fusel alcohol.
Since the 1950s, most butanol in the United States is produced commercially from fossil fuels. The most common process starts with propene (propylene), which is put through a hydroformylation reaction to form butanal, which is then reduced with hydrogen to 1-butanol and/or 2-butanol. tert-butanol is derived from isobutane as a co-product of propylene oxide production.
Butanol can also be produced by fermentation of biomass by bacteria. Prior to the 1950s, Clostridium acetobutylicum was used in industrial fermentation to produce n-butanol.
See also
A.B.E. process
Algal fuel
Solvent
References
Merck Index, 12th Edition, 1575.
Alkanols
Alcohol solvents
Fatty alcohols
Biotechnology products
Liquid fuels
Oxygenates | Butanol | [
"Biology"
] | 1,159 | [
"Biotechnology products"
] |
56,163 | https://en.wikipedia.org/wiki/Starship | A starship, starcraft, or interstellar spacecraft is a theoretical spacecraft designed for traveling between planetary systems. The term is mostly found in science fiction. Reference to a "star-ship" appears as early as 1882 in Oahspe: A New Bible.
While NASA's Voyager and Pioneer probes have traveled into local interstellar space, the purpose of these uncrewed craft was specifically interplanetary, and they are not predicted to reach another star system; Voyager 1 probe and Gliese 445 will pass one another within 1.6 light years in about 40,000 years. Several preliminary designs for starships have been undertaken through exploratory engineering, using feasibility studies with modern technology or technology thought likely to be available in the near future.
In April 2016, scientists announced Breakthrough Starshot, a Breakthrough Initiatives program, to develop a proof-of-concept fleet of small centimeter-sized light sail spacecraft named StarChip, capable of making the journey to Alpha Centauri, the nearest star system, at speeds of 20% and 15% of the speed of light, taking between 20 and 30 years to reach the star system, respectively, and about 4 years to notify Earth of a successful arrival.
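The quoted travel times follow directly from the distance to Alpha Centauri and the proposed sail speeds. A minimal sketch in Python, assuming a distance of about 4.37 light-years (a figure not stated in the text above):

```python
# A minimal check of the Breakthrough Starshot travel-time figures,
# assuming Alpha Centauri lies about 4.37 light-years from Earth.
DISTANCE_LY = 4.37

for fraction_of_c in (0.20, 0.15):
    # At a constant fraction of light speed, travel time in years is
    # distance in light-years divided by that fraction.
    travel_years = DISTANCE_LY / fraction_of_c
    print(f"At {fraction_of_c:.0%} of c: {travel_years:.1f} years")
# -> At 20% of c: 21.9 years; at 15% of c: 29.1 years

# A radio signal confirming arrival returns at light speed, taking a
# further ~4.4 years to reach Earth.
print(f"Notification delay: {DISTANCE_LY:.2f} years")
```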
Research
To travel between stars in a reasonable time using rocket-like technology requires a jet with very high effective exhaust velocity and enormous energy to power it, such as might be provided by fusion power or antimatter.
There are very few scientific studies that investigate the issues in building a starship. Some examples of this include:
Project Orion (1958–1965), mostly crewed interplanetary spacecraft
Project Daedalus (1973–1978), uncrewed interstellar probe
Project Longshot (1987–1988), uncrewed interstellar probe
Project Icarus (2009–2014), uncrewed interstellar probe
Hundred-Year Starship (2011), crewed interstellar craft
The Bussard ramjet is an idea to use nuclear fusion of interstellar gas to provide propulsion.
Examined in an October 1973 issue of Analog, the Enzmann Starship proposed using a 12,000-ton ball of frozen deuterium to power pulse propulsion units. Twice as long as the Empire State Building is tall, and assembled in orbit, the proposed spacecraft would be part of a larger project preceded by interstellar probes and telescopic observation of target star systems.
The NASA Breakthrough Propulsion Physics Program (1996–2002) was a professional scientific study examining advanced spacecraft propulsion systems.
Types
Sleeper: Starships that place their occupants into cryostasis or temporal stasis during a long trip. This includes cryonics-based systems that freeze passengers for the duration of the journey. This is a common trope in science fiction, with notable examples including "To Sleep in a Sea of Stars" by Christopher Paolini and Edward Bellamy's "Looking Backward".
Generation: Ships in which the destination would be reached by descendants of the original passengers. These ships would necessarily be self-sustaining and self-maintaining for possibly thousands of years. Notable examples of this in fiction are the Godspeed in Beth Revis' "Across the Universe" (and subsequent sequels), as well as the Vanguard from Robert A. Heinlein's "Orphans of the Sky"
Relativistic: Ships that function by taking advantage of time dilation at close-to-light speeds, so that long trips seem much shorter to those on board (but still take the same amount of time for outside observers); a worked example of this effect follows this list.
Frame shift: Ships that take advantage of the fact that certain dimensions are less "folded" than others, to allow shorter travel by shifting one's frame of reference into a higher, more flat dimension to cut down on travel time, such as in science fiction with inter-dimensional hyperspace. Generally this results in speeds close to (but importantly, not greater than) light speed.
Faster-than-light (FTL): A ship that reaches its destination faster than light would. Although faster-than-light travel is impossible according to the special theory of relativity, drives such as the warp drive, or travel through a wormhole (which is similar in principle), have been hypothesized.
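For the relativistic type above, the shortening of the trip experienced on board follows from the Lorentz factor. A minimal sketch in Python, assuming a constant cruise speed and ignoring acceleration and deceleration phases:

```python
import math

def onboard_years(distance_ly: float, speed_fraction_c: float) -> float:
    """Trip duration experienced aboard a ship cruising at a constant
    fraction of the speed of light (acceleration phases ignored)."""
    earth_frame_years = distance_ly / speed_fraction_c
    gamma = 1.0 / math.sqrt(1.0 - speed_fraction_c ** 2)  # Lorentz factor
    return earth_frame_years / gamma

# A 100 light-year trip at 99% of light speed takes just over 101 years
# for outside observers, but only about 14 years for the crew.
print(f"{onboard_years(100.0, 0.99):.1f} years on board")
```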
Theoretical possibilities
The Alcubierre drive is a speculative warp drive conjectured by Mexican physicist Miguel Alcubierre in a 1994 paper which has not been peer-reviewed. The paper suggests that space itself could be topographically warped to create a local region of spacetime wherein the region ahead of the "warp bubble" is compressed, allowed to resume normalcy within the bubble, and then rapidly expanded behind the bubble creating an effect that results in apparent FTL travel, all in a manner consistent with the Einstein field equations of general relativity and without the introduction of wormholes. However, the actual construction of such a drive would face other serious theoretical difficulties.
Fictional examples
There are widely known vessels in various science fiction franchises. The most prominent cultural use and one of the earliest common uses of the term starship was in Star Trek: The Original Series.
Individual ships
(This list is not exhaustive.)
Axiom (WALL-E)
Battlestar Galactica (Battlestar Galactica)
High Charity (Halo Series)
Hyperion (StarCraft)
Jupiter 2 (Lost In Space)
Long Shot (Ringworld)
Moya (Farscape)
NSEA Protector (Galaxy Quest)
SDF-1 Macross (The Super Dimension Fortress Macross)
SSV Normandy (Mass Effect)
UNSC Infinity (Halo Series)
USG Ishimura (Dead Space)
Yamato (Space Battleship Yamato/Star Blazers)
Groups of ships
Spacecraft in Star Trek
USS Defiant (Star Trek: Deep Space Nine)
USS Discovery
USS Enterprise (various)
USS Voyager (Star Trek: Voyager)
Klingon starships
Star Wars starships
Millennium Falcon
Star Destroyers
M/V Seamus (Archer: 1999)
The Reapers (Mass Effect)
USS Callister (USS Callister)
See also
References
External links
Starship Dimensions (to-scale size comparisons)
Starship Size Comparison Chart 1 (Dan Carlson, 13 July 2003)
Starship Size Comparison Chart 2 (Dan Carlson, 30 October 2003)
Starship Names (a Sci-Fi wiki article, outside Wikipedia)
Fictional spacecraft by type
Science fiction themes
Spaceflight
Fiction about transport
Interstellar travel
"Astronomy"
] | 1,293 | [
"Outer space",
"Astronomical hypotheses",
"Spaceflight",
"Interstellar travel"
] |
56,189 | https://en.wikipedia.org/wiki/Interlaced%20video | Interlaced video (also known as interlaced scan) is a technique for doubling the perceived frame rate of a video display without consuming extra bandwidth. The interlaced signal contains two fields of a video frame captured consecutively. This enhances motion perception to the viewer, and reduces flicker by taking advantage of the characteristics of the human visual system.
This effectively doubles the time resolution (also called temporal resolution) as compared to non-interlaced footage (for frame rates equal to field rates). Interlaced signals require a display that is natively capable of showing the individual fields in a sequential order. CRT displays and ALiS plasma displays are made for displaying interlaced signals.
Interlaced scan refers to one of two common methods for "painting" a video image on an electronic display screen (the other being progressive scan) by scanning or displaying each line or row of pixels. This technique uses two fields to create a frame. One field contains all odd-numbered lines in the image; the other contains all even-numbered lines.
Sometimes in interlaced video a field is called a frame, which can lead to confusion.
A Phase Alternating Line (PAL)-based television set display, for example, scans 50 fields every second (25 odd and 25 even). The two sets of 25 fields work together to create a full frame every 1/25 of a second (25 frames per second), and interlacing delivers a new half frame every 1/50 of a second (50 fields per second). To display interlaced video on progressive scan displays, playback applies deinterlacing to the video signal (which adds input lag).
The European Broadcasting Union argued against interlaced video in production and broadcasting. Until the early 2010s, they recommended 720p 50 fps (frames per second) for the current production format—and were working with the industry to introduce 1080p 50 as a future-proof production standard. 1080p 50 offers higher vertical resolution, better quality at lower bitrates, and easier conversion to other formats, such as 720p 50 and 1080i 50. The main argument is that no matter how complex the deinterlacing algorithm may be, the artifacts in the interlaced signal cannot be completely eliminated because some information is lost between frames.
Despite arguments against it, television standards organizations continue to support interlacing. It is still included in digital video transmission formats such as DV, DVB, and ATSC. New video compression standards like High Efficiency Video Coding are optimized for progressive scan video, but sometimes do support interlaced video.
Description
Progressive scan captures, transmits, and displays an image in a path similar to text on a page—line by line, top to bottom.
The interlaced scan pattern in a standard definition CRT display also completes such a scan, but in two passes (two fields). The first pass displays the first and all odd numbered lines, from the top left corner to the bottom right corner. The second pass displays the second and all even numbered lines, filling in the gaps in the first scan.
This scan of alternate lines is called interlacing. A field is an image that contains only half of the lines needed to make a complete picture. In the days of CRT displays, the afterglow of the display's phosphor aided this effect.
Interlacing provides full vertical detail with the same bandwidth that would be required for a full progressive scan, but with twice the perceived frame rate and refresh rate. To prevent flicker, all analog broadcast television systems used interlacing.
Format identifiers like 576i50 and 720p50 specify the frame rate for progressive scan formats, but for interlaced formats they typically specify the field rate (which is twice the frame rate). This can lead to confusion, because industry-standard SMPTE timecode formats always deal with frame rate, not field rate. To avoid confusion, SMPTE and EBU always use frame rate to specify interlaced formats, e.g., 480i60 is 480i/30, 576i50 is 576i/25, and 1080i50 is 1080i/25. This convention assumes that one complete frame in an interlaced signal consists of two fields in sequence.
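The mapping between the two conventions can be made mechanical: for progressive identifiers the trailing number is already the frame rate, while interlaced identifiers written with a field rate must have it halved. A minimal sketch in Python; the function name, and the assumption that interlaced identifiers of this shape always carry field rates, are illustrative rather than any standard API:

```python
import re

def to_frame_rate_form(identifier: str) -> str:
    """Rewrite an identifier such as '480i60' or '720p50' in the
    SMPTE/EBU frame-rate convention, e.g. '480i/30' or '720p/50'.
    Assumes interlaced identifiers written this way carry the field
    rate, which is twice the frame rate (two fields per frame)."""
    match = re.fullmatch(r"(\d+)([ip])(\d+)", identifier)
    if not match:
        raise ValueError(f"unrecognised identifier: {identifier}")
    lines, scan, rate = match[1], match[2], int(match[3])
    frame_rate = rate // 2 if scan == "i" else rate
    return f"{lines}{scan}/{frame_rate}"

print(to_frame_rate_form("480i60"))   # 480i/30
print(to_frame_rate_form("576i50"))   # 576i/25
print(to_frame_rate_form("1080i50"))  # 1080i/25
print(to_frame_rate_form("720p50"))   # 720p/50
```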
Benefits of interlacing
One of the most important factors in analog television is signal bandwidth, measured in megahertz. The greater the bandwidth, the more expensive and complex the entire production and broadcasting chain. This includes cameras, storage systems, broadcast systems—and reception systems: terrestrial, cable, satellite, Internet, and end-user displays (TVs and computer monitors).
For a fixed bandwidth, interlace provides a video signal with twice the display refresh rate for a given line count (versus progressive scan video at a similar frame rate; for instance 1080i at 60 half-frames per second, vs. 1080p at 30 full frames per second). The higher refresh rate improves the appearance of an object in motion, because it updates its position on the display more often, and when an object is stationary, human vision combines information from multiple similar half-frames to produce the same perceived resolution as that provided by a progressive full frame. This technique is only useful, though, if source material is available at higher refresh rates: cinema movies are typically recorded at 24 fps and therefore do not benefit from interlacing. As a solution, interlacing reduced the maximum video bandwidth to 5 MHz without reducing the effective picture scan rate of 60 Hz.
Given a fixed bandwidth and high refresh rate, interlaced video can also provide a higher spatial resolution than progressive scan. For instance, 1920×1080 pixel resolution interlaced HDTV with a 60 Hz field rate (known as 1080i60 or 1080i/30) has a similar bandwidth to 1280×720 pixel progressive scan HDTV with a 60 Hz frame rate (720p60 or 720p/60), but achieves approximately twice the spatial resolution for low-motion scenes.
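The claimed similarity in bandwidth can be sanity-checked by comparing raw, uncompressed pixel rates. A minimal sketch in Python:

```python
# Uncompressed pixel rates for the two formats compared above.
# 1080i60 delivers 60 fields/s, i.e. 30 complete 1920x1080 frames/s.
rate_1080i60 = 1920 * 1080 * 30
# 720p60 delivers 60 complete 1280x720 frames/s.
rate_720p60 = 1280 * 720 * 60

print(f"1080i60: {rate_1080i60 / 1e6:.1f} megapixels/s")  # ~62.2
print(f"720p60:  {rate_720p60 / 1e6:.1f} megapixels/s")   # ~55.3
```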
However, the bandwidth benefits apply only to an analog or uncompressed digital video signal. With digital video compression, as used in all current digital TV standards, interlacing introduces additional inefficiencies. Tests performed by the EBU have shown that the bandwidth savings of interlaced video over progressive video are minimal, even with twice the frame rate: a 1080p50 signal produces roughly the same bit rate as a 1080i50 (1080i/25) signal, and the 1080p50 signal actually requires less bandwidth to be perceived as subjectively better than its 1080i/25 equivalent when encoding a "sports-type" scene.
Interlacing can be exploited to produce 3D TV programming, especially with a CRT display, and most simply with color-filtered glasses, by transmitting the color-keyed picture for each eye in alternating fields. This does not require significant alterations to existing equipment. Shutter glasses can be used as well, with the added requirement that they be synchronised to the field rate. If a progressive scan display is used to view such programming, any attempt to deinterlace the picture will render the effect useless. For color-filtered glasses, the picture has to be either buffered and shown as if it were progressive with alternating color-keyed lines, or each field has to be line-doubled and displayed as discrete frames. The latter procedure is the only way to suit shutter glasses on a progressive display.
Interlacing problems
Interlaced video is designed to be captured, stored, transmitted, and displayed in the same interlaced format. Because each interlaced video frame is two fields captured at different moments in time, interlaced video frames can exhibit motion artifacts known as interlacing effects, or combing, if recorded objects move fast enough to be in different positions when each individual field is captured. These artifacts may be more visible when interlaced video is displayed at a slower speed than it was captured, or in still frames.
While there are simple methods to produce somewhat satisfactory progressive frames from the interlaced image, for example by doubling the lines of one field and omitting the other (halving vertical resolution), or anti-aliasing the image in the vertical axis to hide some of the combing, more sophisticated methods can sometimes produce far superior results. If there is only sideways (X axis) motion between the two fields and this motion is even throughout the full frame, it is possible to align the scanlines and crop the left and right ends that exceed the frame area to produce a visually satisfactory image. Minor Y axis motion can be corrected similarly by aligning the scanlines in a different sequence and cropping the excess at the top and bottom. The middle of the picture is often the area where correct alignment matters most, and whether X axis correction, Y axis correction, or both are applied, most artifacts will occur towards the edges of the picture. However, even these simple procedures require motion tracking between the fields, and a rotating or tilting object, or one that moves in the Z axis (away from or towards the camera), will still produce combing, possibly looking even worse than if the fields were joined by a simpler method.
Some deinterlacing processes can analyze each frame individually and decide the best method. The best and only perfect conversion in these cases is to treat each frame as a separate image, but that may not always be possible. For framerate conversions and zooming it would mostly be ideal to line-double each field to produce a double rate of progressive frames, resample the frames to the desired resolution and then re-scan the stream at the desired rate, either in progressive or interlaced mode.
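The line-doubling approach described above (often called "bob" deinterlacing) is easy to express in code. A minimal sketch in Python with NumPy, assuming an interlaced frame stored as a 2-D luma array whose even rows hold the top field and odd rows the bottom field:

```python
import numpy as np

def bob_deinterlace(frame: np.ndarray) -> tuple:
    """Split an interlaced frame into its two fields and line-double
    each, yielding two progressive frames at twice the frame rate.
    Vertical resolution is halved, as noted above."""
    top = np.repeat(frame[0::2, :], 2, axis=0)     # even lines, doubled
    bottom = np.repeat(frame[1::2, :], 2, axis=0)  # odd lines, doubled
    return top, bottom

# Example: one 480-line interlaced frame becomes two 480-line progressive
# frames, each built from only 240 captured lines.
top, bottom = bob_deinterlace(np.zeros((480, 720), dtype=np.uint8))
print(top.shape, bottom.shape)  # (480, 720) (480, 720)
```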
Interline twitter
Interlace introduces a potential problem called interline twitter, a form of moiré. This aliasing effect only shows up under certain circumstances—when the subject contains vertical detail that approaches the horizontal resolution of the video format. For instance, a finely striped jacket on a news anchor may produce a shimmering effect. This is twittering. Television professionals avoid wearing clothing with fine striped patterns for this reason. Professional video cameras or computer-generated imagery systems apply a low-pass filter to the vertical resolution of the signal to prevent interline twitter.
Interline twitter is the primary reason that interlacing is less suited for computer displays. Each scanline on a high-resolution computer monitor typically displays discrete pixels, each of which does not span the scanline above or below. When the overall interlaced framerate is 60 frames per second, a pixel (or more critically for e.g. windowing systems or underlined text, a horizontal line) that spans only one scanline in height is visible for the 1/60 of a second that would be expected of a 60 Hz progressive display - but is then followed by 1/60 of a second of darkness (whilst the opposite field is scanned), reducing the per-line/per-pixel refresh rate to 30 frames per second with quite obvious flicker.
To avoid this, standard interlaced television sets typically do not display sharp detail. When computer graphics appear on a standard television set, the screen is either treated as if it were half the resolution of what it actually is (or even lower), or rendered at full resolution and then subjected to a low-pass filter in the vertical direction (e.g. a "motion blur" type with a 1-pixel distance, which blends each line 50% with the next, maintaining a degree of the full positional resolution and preventing the obvious "blockiness" of simple line doubling whilst actually reducing flicker to less than what the simpler approach would achieve). If text is displayed, it is large enough so that any horizontal lines are at least two scanlines high. Most fonts for television programming have wide, fat strokes, and do not include fine-detail serifs that would make the twittering more visible; in addition, modern character generators apply a degree of anti-aliasing that has a similar line-spanning effect to the aforementioned full-frame low-pass filter.
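The 1-pixel vertical blend described above can be written directly. A minimal sketch in Python with NumPy:

```python
import numpy as np

def vertical_flicker_filter(image: np.ndarray) -> np.ndarray:
    """Blend each line 50% with the line below it, so that no detail
    is confined to a single scanline (and hence to a single field).
    The bottom line has no successor and is left unchanged."""
    out = image.astype(np.float32)
    out[:-1, :] = 0.5 * out[:-1, :] + 0.5 * out[1:, :]
    return out.astype(image.dtype)
```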
Deinterlacing
ALiS plasma panels and the old CRTs can display interlaced video directly, but modern computer video displays and TV sets are mostly based on LCD technology, which mostly use progressive scanning.
Displaying interlaced video on a progressive scan display requires a process called deinterlacing. This can be an imperfect technique, especially if the frame rate is not doubled in the deinterlaced output. Providing the best picture quality for interlaced video signals without doubling the frame rate requires expensive and complex devices and algorithms, and can cause various artifacts. For television displays, deinterlacing systems are integrated into progressive scan TV sets that accept interlaced signals, such as broadcast SDTV signals.
Most modern computer monitors do not support interlaced video, besides some legacy medium-resolution modes (and possibly 1080i as an adjunct to 1080p), and support for standard-definition video (480/576i or 240/288p) is particularly rare given its much lower line-scanning frequency vs typical "VGA"-or-higher analog computer video modes. Playing back interlaced video from a DVD, digital file or analog capture card on a computer display instead requires some form of deinterlacing in the player software and/or graphics hardware, which often uses very simple methods to deinterlace. This means that interlaced video often has visible artifacts on computer systems. Computer systems may be used to edit interlaced video, but the disparity between computer video display systems and interlaced television signal formats means that the video content being edited cannot be viewed properly without separate video display hardware.
Currently manufactured TV sets employ a system of intelligently extrapolating the extra information that would be present in a progressive signal entirely from an interlaced original. In theory, this should simply be a problem of applying the appropriate algorithms to the interlaced signal, as all information should be present in that signal. In practice, results are currently variable, and depend on the quality of the input signal and the amount of processing power applied to the conversion. The biggest impediment, at present, is artifacts in lower quality interlaced signals (generally broadcast video), as these are not consistent from field to field. On the other hand, high bit rate interlaced signals, such as those from HD camcorders operating in their highest bit rate mode, work well.
Deinterlacing algorithms temporarily store a few frames of interlaced images and then extrapolate extra frame data to make a smooth flicker-free image. This frame storage and processing results in a slight display lag that is visible in business showrooms with a large number of different models on display. Unlike the old unprocessed NTSC signal, the screens do not all follow motion in perfect synchrony. Some models appear to update slightly faster or slower than others. Similarly, the audio can have an echo effect due to different processing delays.
History
When motion picture film was developed, the movie screen had to be illuminated at a high rate to prevent visible flicker. The exact rate necessary varies by brightness — 50 Hz is (barely) acceptable for small, low brightness displays in dimly lit rooms, whilst 80 Hz or more may be necessary for bright displays that extend into peripheral vision. The film solution was to project each frame of film three times using a three-bladed shutter: a movie shot at 16 frames per second illuminated the screen 48 times per second. Later, when sound film became available, the higher projection speed of 24 frames per second enabled a two-bladed shutter to produce 48 times per second illumination—but only in projectors incapable of projecting at the lower speed.
This solution could not be used for television. To store a full video frame and display it twice requires a frame buffer (electronic memory, RAM) sufficient to store a video frame. This method did not become feasible until the late 1980s, with digital technology. In addition, avoiding on-screen interference patterns caused by studio lighting and the limits of vacuum tube technology required that CRTs for TV be scanned at the AC line frequency (60 Hz in the US, 50 Hz in Europe).
Several different interlacing schemes had been proposed in patents since 1914 in the context of still or moving image transmission, but few of them were practicable. In 1926, Ulises Armand Sanabria demonstrated television to 200,000 people attending the Chicago Radio World's Fair. Sanabria's system was mechanically scanned using a "triple interlace" Nipkow disc with three offset spirals, and was thus a 3:1 scheme rather than the usual 2:1. It transmitted 45-line images at 15 frames per second; with a 3:1 interlace, the field rate was 45 fields per second, yielding (for the time) a very steady image. He did not apply for a patent on his interlaced scanning until May 1931.
In 1930, German Telefunken engineer Fritz Schröter first formulated and patented the concept of breaking a single image frame into successive interlaced lines, based on his earlier experiments with phototelegraphy. In the USA, RCA engineer Randall C. Ballard patented the same idea in 1932, initially for the purpose of reformatting sound film to television rather than for the transmission of live images. Commercial implementation began in 1934 as cathode-ray tube screens became brighter, increasing the level of flicker caused by progressive (sequential) scanning.
In 1936, when the UK was setting analog standards, early thermionic valve based CRT drive electronics could only scan at around 200 lines in 1/50 of a second (i.e. approximately a 10 kHz repetition rate for the sawtooth horizontal deflection waveform). Using interlace, a pair of 202.5-line fields could be superimposed to become a sharper 405 line frame (with around 377 used for the actual image, and yet fewer visible within the screen bezel; in modern parlance, the standard would be "377i"). The vertical scan frequency remained 50 Hz, but visible detail was noticeably improved. As a result, this system supplanted John Logie Baird's 240 line mechanical progressive scan system that was also being trialled at the time.
From the 1940s onward, improvements in technology allowed the US and the rest of Europe to adopt systems using increasingly higher line-scan frequencies and more radio signal bandwidth to produce higher line counts at the same frame rate, thus achieving better picture quality. However the fundamentals of interlaced scanning were at the heart of all of these systems. The US adopted the 525 line system, later incorporating the composite color standard known as NTSC, Europe adopted the 625 line system, and the UK switched from its idiosyncratic 405 line system to (the much more US-like) 625 to avoid having to develop a (wholly) unique method of color TV. France switched from its similarly unique 819 line monochrome system to the more European standard of 625. Europe in general, including the UK, then adopted the PAL color encoding standard, which was essentially based on NTSC, but inverted the color carrier phase with each line (and frame) in order to cancel out the hue-distorting phase shifts that dogged NTSC broadcasts. France instead adopted its own unique, twin-FM-carrier based SECAM system, which offered improved quality at the cost of greater electronic complexity, and was also used by some other countries, notably Russia and its satellite states. Though the color standards are often used as synonyms for the underlying video standard - NTSC for 525i/60, PAL/SECAM for 625i/50 - there are several cases of inversions or other modifications; e.g. PAL color is used on otherwise "NTSC" (that is, 525i/60) broadcasts in Brazil, as well as vice versa elsewhere, along with cases of PAL bandwidth being squeezed to 3.58 MHz to fit in the broadcast waveband allocation of NTSC, or NTSC being expanded to take up PAL's 4.43 MHz.
Interlacing was ubiquitous in displays until the 1970s, when the needs of computer monitors resulted in the reintroduction of progressive scan, including on regular TVs or simple monitors based on the same circuitry; most CRT based displays are entirely capable of displaying both progressive and interlace regardless of their original intended use, so long as the horizontal and vertical frequencies match, as the technical difference is simply that of either starting/ending the vertical sync cycle halfway along a scanline every other frame (interlace), or always synchronising right at the start/end of a line (progressive). Interlace is still used for most standard definition TVs, and the 1080i HDTV broadcast standard, but not for LCD, micromirror (DLP), or most plasma displays; these displays do not use a raster scan to create an image (their panels may still be updated in a left-to-right, top-to-bottom scanning fashion, but always in a progressive fashion, and not necessarily at the same rate as the input signal), and so cannot benefit from interlacing (where older LCDs use a "dual scan" system to provide higher resolution with slower-updating technology, the panel is instead divided into two adjacent halves that are updated simultaneously): in practice, they have to be driven with a progressive scan signal. The deinterlacing circuitry to get progressive scan from a normal interlaced broadcast television signal can add to the cost of a television set using such displays. Currently, progressive displays dominate the HDTV market.
Interlace and computers
In the 1970s, computers and home video game systems began using TV sets as display devices. At that point, a 480-line NTSC signal was well beyond the graphics abilities of low cost computers, so these systems used a simplified video signal that made each video field scan directly on top of the previous one, rather than each line between two lines of the previous field, along with relatively low horizontal pixel counts. This marked the return of progressive scanning not seen since the 1920s. Since each field became a complete frame on its own, modern terminology would call this 240p on NTSC sets, and 288p on PAL. While consumer devices were permitted to create such signals, broadcast regulations prohibited TV stations from transmitting video like this. Computer monitor standards such as the TTL-RGB mode available on the CGA and e.g. BBC Micro were further simplifications to NTSC, which improved picture quality by omitting modulation of color, and allowing a more direct connection between the computer's graphics system and the CRT.
By the mid-1980s, computers had outgrown these video systems and needed better displays. Most home and basic office computers suffered from the use of the old scanning method, with the highest display resolution being around 640x200 (or sometimes 640x256 in 625-line/50 Hz regions), resulting in a severely distorted tall narrow pixel shape, making the display of high resolution text alongside realistic proportioned images difficult (logical "square pixel" modes were possible but only at low resolutions of 320x200 or less). Solutions from various companies varied widely. Because PC monitor signals did not need to be broadcast, they could consume far more than the 6, 7 and 8 MHz of bandwidth that NTSC and PAL signals were confined to. IBM's Monochrome Display Adapter and Enhanced Graphics Adapter as well as the Hercules Graphics Card and the original Macintosh computer generated video signals of 342 to 350p, at 50 to 60 Hz, with approximately 16 MHz of bandwidth, some enhanced PC clones such as the AT&T 6300 (aka Olivetti M24) as well as computers made for the Japanese home market managed 400p instead at around 24 MHz, and the Atari ST pushed that to 71 Hz with 32 MHz bandwidth - all of which required dedicated high-frequency (and usually single-mode, i.e. not "video"-compatible) monitors due to their increased line rates. The Commodore Amiga instead created a true interlaced 480i60/576i50 RGB signal at broadcast video rates (and with a 7 or 14 MHz bandwidth), suitable for NTSC/PAL encoding (where it was smoothly decimated to 3.5~4.5 MHz). This ability (plus built-in genlocking) resulted in the Amiga dominating the video production field until the mid-1990s, but the interlaced display mode caused flicker problems for more traditional PC applications where single-pixel detail is required, with "flicker-fixer" scan-doubler peripherals plus high-frequency RGB monitors (or Commodore's own specialist scan-conversion A2024 monitor) being popular, if expensive, purchases amongst power users. 1987 saw the introduction of VGA, on which PCs soon standardized, as well as Apple's Macintosh II range which offered displays of similar, then superior resolution and color depth, with rivalry between the two standards (and later PC quasi-standards such as XGA and SVGA) rapidly pushing up the quality of display available to both professional and home users.
In the late 1980s and early 1990s, monitor and graphics card manufacturers introduced newer high resolution standards that once again included interlace. These monitors ran at higher scanning frequencies, typically allowing a 75 to 90 Hz field rate (i.e. 37.5 to 45 Hz frame rate), and tended to use longer-persistence phosphors in their CRTs, all of which was intended to alleviate flicker and shimmer problems. Such monitors proved generally unpopular, outside of specialist ultra-high-resolution applications such as CAD and DTP, which demanded as many pixels as possible, with interlace being a necessary evil and better than trying to use the progressive-scan equivalents. Whilst flicker was often not immediately obvious on these displays, eyestrain and lack of focus nevertheless became a serious problem, and the trade-off for a longer afterglow was reduced brightness and poor response to moving images, leaving visible and often off-colored trails behind. These colored trails were a minor annoyance for monochrome displays, and the generally slower-updating screens used for design or database-query purposes, but much more troublesome for color displays and the faster motions inherent in the increasingly popular window-based operating systems, as well as the full-screen scrolling in WYSIWYG word-processors, spreadsheets, and of course for high-action games. Additionally, the regular, thin horizontal lines common to early GUIs, combined with low color depth that meant window elements were generally high-contrast (indeed, frequently stark black-and-white), made shimmer even more obvious than with otherwise lower field-rate video applications. Barely a decade after the first ultra-high-resolution interlaced upgrades appeared for the IBM PC, rapid technological advancement made it practical and affordable to provide sufficiently high pixel clocks and horizontal scan rates for high-resolution progressive-scan modes, first in professional and then in consumer-grade displays, and the practice was soon abandoned. For the rest of the 1990s, monitors and graphics cards instead made great play of their highest stated resolutions being "non-interlaced", even where the overall framerate was barely any higher than what it had been for the interlaced modes (e.g. SVGA at 56p versus 43i to 47i), and usually including a top mode technically exceeding the CRT's actual resolution (number of color-phosphor triads), which meant there was no additional image clarity to be gained through interlacing and/or increasing the signal bandwidth still further. This experience is why the PC industry today remains against interlace in HDTV, and lobbied for the 720p standard, and continues to push for the adoption of 1080p (at 60 Hz for NTSC legacy countries, and 50 Hz for PAL); however, 1080i remains the most common HD broadcast resolution, if only for reasons of backward compatibility with older HDTV hardware that cannot support 1080p (and sometimes not even 720p) without the addition of an external scaler, similar to how and why most SD-focussed digital broadcasting still relies on the otherwise obsolete MPEG2 standard embedded into e.g. DVB-T.
See also
1080i: high-definition television (HDTV) digitally broadcast in 16:9 (widescreen) aspect ratio standard
480i: standard-definition interlaced video usually used in traditionally NTSC countries (North and parts of South America, Japan)
576i: standard-definition interlaced video usually used in traditionally PAL and SECAM countries
Deinterlacing: converting an interlaced video signal into a non-interlaced one
Field (video): In interlaced video, one of the many still images displayed sequentially to create the illusion of motion on the screen.
Federal Standard 1037C: defines interlaced scanning
Progressive scan: the opposite of interlacing; the image is displayed line by line.
Progressive segmented frame: a scheme designed to acquire, store, modify, and distribute progressive-scan video using interlaced equipment and media
Telecine: a method for converting film frame rates to television frame rates using interlacing
Screen tearing
Wobulation: a variation of interlacing used in DLP displays
References
External links
Fields: Why Video Is Crucially Different from Graphics – An article that describes field-based, interlaced, digitized video and its relation to frame-based computer graphics with many illustrations
Digital Video and Field Order - An article that explains with diagrams how the field order of PAL and NTSC has arisen, and how PAL and NTSC is digitized
100FPS.COM – Video Interlacing/Deinterlacing
Interlace / Progressive Scanning - Computer vs. Video
Sampling theory and synthesis of interlaced video
Interlaced versus progressive
Television technology
Video formats
1925 introductions
Data compression | Interlaced video | [
"Technology"
] | 6,179 | [
"Information and communications technology",
"Television technology"
] |
56,206 | https://en.wikipedia.org/wiki/Crop%20circle | A crop circle, crop formation, or corn circle is a pattern created by flattening a crop, usually a cereal. The term was first coined in the early 1980s. Crop circles have been described as all falling "within the range of the sort of thing done in hoaxes" by Taner Edis, professor of physics at Truman State University.
Although obscure natural causes or alien origins of crop circles are suggested by fringe theorists, there is no scientific evidence for such explanations, and all crop circles are consistent with human causation. In 1991, two hoaxers, Doug Bower and Dave Chorley, took credit in widely reported interviews for having created over 200 crop circles throughout England. The number of reports of crop circles increased substantially after their interviews. In the United Kingdom, reported circles are not distributed randomly across the landscape, but appear near roads, areas of medium to dense population, and cultural heritage monuments, such as Stonehenge or Avebury. They usually appear overnight. Nearly half of all crop circles found in the UK in 2003 were located within a radius of the Avebury stone circles.
In contrast to crop circles or crop formations, archaeological remains can cause cropmarks in the fields in the shapes of circles and squares, but these do not appear overnight, and are always in the same places every year.
History
Before the 20th century
A 1678 news pamphlet The Mowing-Devil: or, Strange News Out of Hartfordshire describes a crop whose stalks were cut rather than bent. (see folklore section).
In 1686, an English naturalist, Robert Plot, reported on rings or arcs of mushrooms (see fairy rings) in The Natural History of Stafford-Shire, proposing air flows from the sky as a cause. In 1991, meteorologist Terence Meaden linked this report with modern crop circles, a claim that has been compared with those made by Erich von Däniken.
An 1880 letter to the editor of Nature by amateur scientist John Rand Capron describes several circles of flattened crops found in a field, which he thought were possibly caused by "cyclonic wind action", stating "as viewed from a distance, circular spots (...) they all presented much the same character, viz, a few standing stalks as a centre, some prostrate stalks with their heads arranged pretty evenly in a direction forming a circle round the centre, and outside there a circular wall of stalks which had not suffered".
20th century
In 1932, archaeologist E. C. Curwen observed four dark rings in a field at Stoughton Down near Chichester, but could examine only one: "a circle in which the barley was 'lodged' or beaten down, while the interior area was very slightly mounded up."
In Fortean Times, David Wood reported that in 1940 he made crop circles near Gloucestershire using ropes.
In 1963, Patrick Moore described a crater in a potato field in Wiltshire that he considered was probably caused by an unknown meteoric body. In nearby wheat fields, there were several circular and elliptical areas where the wheat had been flattened. There was evidence of "spiral flattening". He thought they could be caused by air currents from the impact, since they led towards the crater. Astronomer Hugh Ernest Butler observed similar craters and said they were likely caused by lightning strikes.
During the 1960s, there were many reports of UFO sightings and circular formations in swamp reeds and sugarcane fields in Tully, Queensland, Australia, and in Canada. For example, on 8 August 1967, three circles were found in a field in Duhamel, Alberta, Canada; Department of National Defence investigators concluded that it was artificial but couldn't say who made them or how. The most famous case is the 1966 Tully "saucer nest", when a farmer said he witnessed a saucer-shaped craft rise from a swamp and then fly away. On investigating he found a nearly circular area long by wide where the grass was flattened in clockwise curves to water level within the circle, and the reeds had been uprooted from the mud. The local police officer, the Royal Australian Air Force, and the University of Queensland concluded that it was most probably caused by natural causes, like a down draught, a willy-willy (dust devil), or a waterspout. In 1973, G.J. Odgers, Director of Public Relations, Department of Defence (Air Office), wrote to a journalist that the "saucer" was probably debris lifted by a willy-willy.
After the 1960s, there was a surge of UFOlogists in Wiltshire, and there were rumours of "saucer nests" appearing in the area, but they were never photographed. There are other pre-1970s reports of circular formations, especially in Australia and Canada, but they were always simple circles, which could have been caused by whirlwinds.
British pranksters Doug Bower and Dave Chorley reported they started creating crop circles in British cornfields in 1978, inspired by the Tully "saucer nest" case.
The first film to depict a geometric crop circle, in this case created by super-intelligent ants, was the 1974 science-fiction film Phase IV. The film has been cited as a possible inspiration or influence on the pranksters who started this phenomenon.
The majority of reports of crop circles have appeared and spread since the late 1970s as many circles began appearing throughout the English countryside. This phenomenon became widely known in the late 1980s, after the media started to report crop circles in Hampshire and Wiltshire. After Bower and Chorley gave interviews in 1991 about how they had made crop circles, circles started appearing all over the world. By 2001, approximately 10,000 crop circles have been reported internationally, from locations such as the former Soviet Union, the United Kingdom, Japan, the U.S., and Canada. Researchers have noted a correlation between crop circles, recent media coverage, and the absence of fencing and/or anti-trespassing legislation.
Although farmers expressed concern at the damage caused to their crops, local response to the appearance of crop circles was often enthusiastic, with locals taking advantage of the increase of tourism and visits from scientists, crop circle researchers, and individuals seeking spiritual experiences. The market for crop circle interest consequently generated bus or helicopter tours of circle sites, walking tours, T-shirts, and book sales.
21st century
Since the start of the 21st century, crop formations have increased in size and complexity, with some featuring as many as 2,000 different shapes and some incorporating complex mathematical and scientific characteristics.
The researcher Jeremy Northcote found that crop circles in the UK in 2002 were not spread randomly across the landscape. They tended to appear near roads, areas of medium-to-dense population, and cultural heritage monuments such as Stonehenge or Avebury. He found that they always appeared in areas that were easy to access. This suggests strongly that these crop circles were more likely to be caused by intentional human action than by paranormal activity. Another strong indication of that theory was that inhabitants of the zone with the most circles had a historical tendency for making large-scale formations, including stone circles such as Stonehenge, earthen mounds such as Silbury Hill, long barrows such as West Kennet Long Barrow, and white horses in chalk hills.
Bower and Chorley
In 1991, two self-professed pranksters, Doug Bower and Dave Chorley, made headlines by saying they had started the crop circle phenomenon in 1978, using simple tools consisting of a plank of wood, rope, and a baseball cap fitted with a loop of wire to help them walk in straight lines. To prove their case they made a circle in front of journalists; a "cereologist" (advocate of paranormal explanations of crop circles), Pat Delgado, examined the circle and declared it authentic before it was revealed that it was a hoax.
Inspired by Australian crop circle accounts from 1966, Bower and Chorley claimed to be responsible for all circles made prior to 1987, and for more than 200 crop circles in 1978–1991 (with 1,000 other circles not being made by them). Writing in Physics World, Richard Taylor of the University of Oregon said that "the pictographs they created inspired a second wave of crop artists. Far from fizzling out, crop circles have evolved into an international phenomenon, with hundreds of sophisticated pictographs now appearing annually around the globe."
Art and business
After reports of simple circles in the 1970s, increasingly complex geometric designs have been created by anonymous artists, in some cases to attract tourists to an area.
Since the early 1990s, the UK arts collective Circlemakers, founded by Rod Dickinson and John Lundberg, and subsequently including Wil Russell and Rob Irving, has been creating crop circles in the UK and around the world as part of its art practice and also for commercial clients.
The Led Zeppelin Boxed Set that was released on 7 September 1990, along with the remasters of the first boxed set, as well as the second boxed set, all feature an image of a crop circle that appeared in East Field in Alton Barnes, Wiltshire.
On the night of 11–12 July 1992, a crop-circle-making competition with a prize of £3,000 (funded in part by the Arthur Koestler Foundation) was held in Berkshire. The winning entry was produced by three Westland Helicopters engineers, using rope, PVC pipe, a plank, string, a telescopic device and two stepladders. According to Rupert Sheldrake, the competition was organised by him and John Michell and "co-sponsored by The Guardian and The Cerealogist". The prize money came from PM, a German magazine. Sheldrake wrote that "The experiment was conclusive. Humans could indeed make all the features of state-of-the-art crop formations at that time. Eleven of the twelve teams made more or less impressive formations that followed the set design."
In 2002, Discovery Channel commissioned five aeronautics and astronautics graduate students from MIT to create crop circles of their own, aiming to duplicate some of the features claimed to distinguish "real" crop circles from the known fakes such as those created by Bower and Chorley. The creation of the circle was recorded and used in the Discovery Channel documentary Crop Circles: Mysteries in the Fields.
In 2009, The Guardian reported that crop circle activity had been waning around Wiltshire, in part because makers preferred creating promotional crop circles for companies that paid well for their efforts.
A video sequence used in connection with the opening of the 2012 Summer Olympics in London showed two crop circles in the shape of the Olympic rings. Another Olympic crop circle was visible to passengers landing at nearby Heathrow Airport before and during the Games.
A crop circle depicting the emblem of the Star Wars Rebel Alliance was created in California in December 2017 by a father and his 11-year-old son as a spaceport for X-wing fighters.
Legal implications
In 1992, Gábor Takács and Róbert Dallos, both then aged 17, were the first people to face legal action after creating a crop circle. Takács and Dallos, of the St. Stephen Agricultural Technicum, a high school in Hungary specializing in agriculture, created a diameter crop circle in a wheat field near Székesfehérvár, southwest of Budapest, on 8 June 1992. In September, the pair appeared on Hungarian TV and exposed the circle as a hoax, showing photos of the field before and after the circle was made. As a result, Aranykalász Co., the owners of the land, sued the teens for 630,000 Ft (~$3,000 USD) in damages. The presiding judge ruled that the students were only responsible for the damage caused in the circle itself, amounting to about 6,000 Ft (~$30 USD), and that 99% of the damage to the crops was caused by the thousands of visitors who flocked to Székesfehérvár following the media's promotion of the circle. The fine was eventually paid by the TV show, as were the students' legal fees.
In 2000, Matthew Williams became the first man in the UK to be arrested for causing criminal damage after making a crop circle near Devizes. In November 2000, he was fined £100 plus £40 in costs. Since then, no one else has been successfully prosecuted in the UK for criminal damage caused by creating crop circles.
Creation
Human origin
The scientific consensus on crop circles is that they are constructed by human beings as hoaxes, advertising, or art. The most widely known method for a person or group to construct a crop formation is to tie one end of a rope to an anchor point and the other end to a board which is used to crush the plants. It is also possible to bend grass without breaking it, if it has recently rained—a method that was used to create crop circles in Hungary in 1992. Skeptics of the paranormal point out that all characteristics of crop circles are fully compatible with their being made by hoaxers.
Bower and Chorley confessed in 1991 to making the first crop circles in southern England. When some people refused to believe them, they deliberately added straight lines and squares to show that they could not have natural causes. In a copycat effect, increasingly complex circles started appearing in many countries around the world, including fractal figures. Physicists have suggested that the most complex formations might be made with the help of GPS and lasers. In 2009, a circle formation was made over the course of three consecutive nights and was apparently left unfinished, with some half-made circles.
The main criticism of alleged non-human creation of crop circles is that while evidence of these origins, besides eyewitness testimonies, is absent, many are definitely known to be the work of human pranksters, and others can be adequately explained as such. There have been cases in which researchers declared crop circles to be "the real thing", only to be confronted with the people who created the circle and documented the fraud, like Bower and Chorley and tabloid Today hoaxing Pat Delgado, the Wessex Sceptics and Channel 4's Equinox hoaxing Terence Meaden, or a friend of a Canadian farmer hoaxing a field researcher of the Canadian Crop Circle Research Network. In his 1995 book The Demon-Haunted World: Science as a Candle in the Dark, Carl Sagan concludes that crop circles were created by Bower and Chorley and their copycats, and speculates that UFOlogists willingly ignore the evidence for hoaxing so they can keep believing in an extraterrestrial origin of the circles. Many others have demonstrated how complex crop circles can be created. Scientific American published an article by Matt Ridley, who started making crop circles in northern England in 1991. He wrote about how easy it is to develop techniques using simple tools that can easily fool later observers. He reported on "expert" sources such as The Wall Street Journal who had been easily fooled, and mused about why people want to believe supernatural explanations for phenomena that are not yet explained. Methods of creating a crop circle are now well documented on the Internet.
Some crop formations are paid for by companies who use them as advertising. Many crop circles show human symbols, like the heart and arrow symbol of love, and stereotyped alien faces.
Hoaxers have been caught in the process of making new circles, such as in 2004 in the Netherlands.
Natural origins
Weather
It has been suggested that crop circles may be the result of extraordinary meteorological phenomena ranging from freak tornadoes to ball lightning, but there is no evidence of any crop circle being created by any of these causes.
In 1880, an amateur scientist, John Rand Capron, wrote a letter to the editor of journal Nature about some circles in crops and blamed them on a recent storm, saying their shape was "suggestive of some cyclonic wind action".
In 1980, Terence Meaden, a meteorologist and physicist, proposed that the circles were caused by whirlwinds whose course was affected by the hills of southern England. As circles became more complex, Meaden had to devise increasingly complex theories, invoking an electromagneto-hydrodynamic "plasma vortex". The meteorological theory became popular, and was even referenced in 1991 by physicist Stephen Hawking, who said that "Corn circles are either hoaxes or formed by vortex movement of air". The weather theory suffered a serious blow in 1991, though Hawking's point about hoaxes was supported, when Bower and Chorley stated that they had been responsible for making all those circles. By the end of 1991 Meaden conceded that the circles with complex designs were made by hoaxers.
Animal activity
In 2009, the attorney general for the island state of Tasmania stated that Australian wallabies had been found creating crop circles in fields of opium poppies, which are grown legally for medicinal use, after consuming some of the opiate-laden poppies and running in circles.
Alternative explanations
In science magazines from the 1980s and 1990s, for example Science Illustrated, one could read reports suggesting that the plants were bent by something such as microwave radiation, rather than broken by physical impact. The magazines also contained serious reports of the absence of human influence and measurements of unusual radiation. Today this is considered pseudoscience, though at the time it was the subject of serious research. At that time it also seemed more plausible that an unknown factor was behind the incidents, not least because GPS was not yet available to the public.
Paranormal
Since becoming the focus of widespread media attention in the 1980s, crop circles have been the subject of speculation by various paranormal, ufological, and anomalistic investigators, ranging from proposals that they were created by bizarre meteorological phenomena to messages from extraterrestrial beings. There has also been speculation that crop circles have a relation to ley lines.
Some paranormal advocates think that crop circles are caused by ball lightning and that the patterns are so complex that they have to be controlled by some entity. Some proposed entities are: Gaia asking to stop global warming and human pollution; God; supernatural beings (for example Indian devas); the collective minds of humanity through a proposed "quantum field"; and extraterrestrial beings.
Responding to local beliefs that "extraterrestrial beings" in UFOs were responsible for crop circles appearing, the Indonesian National Institute of Aeronautics and Space (LAPAN) described crop circles as "man-made". A research professor of astronomy and astrophysics at LAPAN stated, "We have come to agree that this 'thing' cannot be scientifically proven." Among others, paranormal enthusiasts, ufologists, and anomalistic investigators have offered hypothetical explanations that have been criticised as pseudoscientific by sceptical groups and scientists, including the Committee for Skeptical Inquiry. No credible evidence of extraterrestrial origin has been presented.
Changes to crops
A small number of scientists (physicist Eltjo Haselhoff, the late biophysicist William Levengood) have claimed to observe differences between the crops inside the circles and outside them, citing this as evidence they were not man made. Levengood published papers in journal Physiologia Plantarum in 1994 and 1999. In his 1994 paper he found that certain deformities in the grain inside the circles were correlated to the position of the grain inside the circle.
In 1996, Joe Nickell objected that correlation is not causation, raising several objections to Levengood's methods and assumptions, and said, "Until his work is independently replicated by qualified scientists doing 'double-blind' studies and otherwise following stringent scientific protocols, there seems no need to take seriously the many dubious claims that Levengood makes, including his similar ones involving plants at alleged 'cattle mutilation' sites." Nickell also criticised Levengood for using circular logic, stating: "There is, in fact, no satisfactory evidence that a single “genuine” (i.e., vortex-produced) crop-circle exists, so Levengood’s reasoning is circular: Although there are no guaranteed genuine formations on which to conduct research, the research supposedly proves the genuineness of the formations."
Advocates of non-human causes discount on-site evidence of human involvement as attempts to discredit the phenomena. When Ridley wrote negative articles in newspapers, he was accused of spreading "government disinformation" and of working for the UK military intelligence service MI5. Ridley responded by noting that many "cereologists" make good livings from selling books and providing high-priced personal tours through crop fields, and he claimed that they have vested interests in rejecting what is by far the most likely explanation for the circles.
Related art
Patterns similar to crop circles can also be made in snow, by using skis, snow shoes or just walking with ordinary shoes.
Patterns similar to crop circles can also be made in sand.
Images can be made in forests by cutting trees, especially in areas with snow. Celebrating the Olympic Games in Lillehammer, Norway in 1994, a tall stylised image of an Olympic torch runner was made in a forest close to one of the arenas.
Folklore
Researchers of crop circles have linked modern crop circles to old folkloric tales to support the claim that they are not artificially produced. Crop circles are culture-dependent: they appear mostly in developed and secularised Western countries where people are receptive to New Age beliefs, as well as in Japan, but they do not appear at all in other regions, such as Muslim countries.
Fungi can cause circular areas of crop to die, which is probably the origin of tales of "fairy rings". Tales frequently mention balls of light, but never in relation to crop circles.
A 17th-century English woodcut called the Mowing-Devil depicts the devil with a scythe mowing (cutting) a circular design in a field of oats. The pamphlet containing the image states that the farmer, disgusted at the wage demanded by his mower for his work, insisted that he would rather have "the devil himself" perform the task. Crop circle researcher Jim Schnabel does not consider this to be a historical precedent for crop circles because the stalks were cut down, not bent. The circular form indicated to the farmer that it had been caused by the devil.
In the 1948 German story Die zwölf Schwäne (The Twelve Swans), a farmer every morning finds a circular ring of flattened grain in his field. After several attempts, his son sees twelve princesses disguised as swans, who take off their disguises and dance in the field. Crop rings produced by fungi may have inspired such tales, since folklore considers that these rings are created by dancing wolves or fairies.
See also
Arecibo crop circle
Benjamin Creme
Geoglyph
Kosmopoisk
Land art
List of hoaxes
Nazca Lines
Rice paddy art
Explanatory notes
References
Further reading
External links
Website with pictures, since 1994, of crop circles in the UK.
1990s fads and trends
Alleged extraterrestrial visitation
Circles
Crops
Forteana
Land art
Pseudoscience
UFO conspiracy theories
UFO hoaxes
UFO-related phenomena
Vandalism | Crop circle | [
"Mathematics",
"Technology"
] | 4,693 | [
"UFO conspiracy theories",
"Circles",
"Pi",
"Science and technology-related conspiracy theories"
] |
56,217 | https://en.wikipedia.org/wiki/Poaceae | Poaceae ( ), also called Gramineae ( ), is a large and nearly ubiquitous family of monocotyledonous flowering plants commonly known as grasses. It includes the cereal grasses, bamboos, the grasses of natural grassland and species cultivated in lawns and pasture. The latter are commonly referred to collectively as grass.
With around 780 genera and around 12,000 species, the Poaceae is the fifth-largest plant family, following the Asteraceae, Orchidaceae, Fabaceae and Rubiaceae.
The Poaceae are the most economically important plant family, providing staple foods from domesticated cereal crops such as maize, wheat, rice, oats, barley, and millet for people and as feed for meat-producing animals. They provide, through direct human consumption, just over one-half (51%) of all dietary energy; rice provides 20%, wheat supplies 20%, maize (corn) 5.5%, and other grains 6%. Some members of the Poaceae are used as building materials (bamboo, thatch, and straw); others can provide a source of biofuel, primarily via the conversion of maize to ethanol.
Grasses have stems that are hollow except at the nodes and narrow alternate leaves borne in two ranks. The lower part of each leaf encloses the stem, forming a leaf-sheath. The leaf grows from the base of the blade, an adaptation allowing it to cope with frequent grazing.
Grasslands such as savannah and prairie where grasses are dominant are estimated to constitute 40.5% of the land area of the Earth, excluding Greenland and Antarctica. Grasses are also an important part of the vegetation in many other habitats, including wetlands, forests and tundra.
Though they are commonly called "grasses", groups such as the seagrasses, rushes and sedges fall outside this family. The rushes and sedges are related to the Poaceae, being members of the order Poales, but the seagrasses are members of the order Alismatales. However, all of them belong to the monocot group of plants.
Description
Grasses may be annual or perennial herbs, generally with the following characteristics (the image gallery can be used for reference): The stems of grasses, called culms, are usually cylindrical (more rarely flattened, but not 3-angled) and are hollow, plugged at the nodes, where the leaves are attached. Grass leaves are nearly always alternate and distichous (in one plane), and have parallel veins. Each leaf is differentiated into a lower sheath hugging the stem and a blade with entire (i.e., smooth) margins. The leaf blades of many grasses are hardened with silica phytoliths, which discourage grazing animals; some, such as sword grass, are sharp enough to cut human skin. A membranous appendage or fringe of hairs called the ligule lies at the junction between sheath and blade, preventing water or insects from penetrating into the sheath.
Flowers of Poaceae are characteristically arranged in spikelets, each having one or more florets. The spikelets are further grouped into panicles or spikes. The part of the spikelet that bears the florets is called the rachilla. A spikelet consists of two (or sometimes fewer) bracts at the base, called glumes, followed by one or more florets. A floret consists of the flower surrounded by two bracts, one external—the lemma—and one internal—the palea. The flowers are usually hermaphroditic—maize being an important exception—and mainly anemophilous or wind-pollinated, although insects occasionally play a role. The perianth is reduced to two scales, called lodicules, that expand and contract to spread the lemma and palea; these are generally interpreted to be modified sepals. The fruit of grasses is a caryopsis, in which the seed coat is fused to the fruit wall.
A tiller is a leafy shoot other than the first shoot produced from the seed.
Growth and development
Grass blades grow at the base of the blade and not from elongated stem tips. This low growth point evolved in response to grazing animals and allows grasses to be grazed or mown regularly without severe damage to the plant.
Three general classifications of growth habit are present in grasses: bunch-type (also called caespitose), stoloniferous, and rhizomatous.
The success of the grasses lies in part in their morphology and growth processes and in part in their physiological diversity. There are both C3 and C4 grasses, referring to the photosynthetic pathway for carbon fixation. The C4 grasses have a photosynthetic pathway, linked to specialized Kranz leaf anatomy, which allows for increased water use efficiency, rendering them better adapted to hot, arid environments.
The C3 grasses are referred to as "cool-season" grasses, while the C4 plants are considered "warm-season" grasses.
Annual cool-season – wheat, rye, annual bluegrass (annual meadowgrass, Poa annua), and oat
Perennial cool-season – orchardgrass (cocksfoot, Dactylis glomerata), fescue (Festuca spp.), Kentucky bluegrass and perennial ryegrass (Lolium perenne)
Annual warm-season – maize, sudangrass, and pearl millet
Perennial warm-season – big bluestem, Indiangrass, Bermudagrass and switchgrass.
Although the C4 species are all in the PACMAD clade (see diagram below), it seems that various forms of C4 have arisen some twenty or more times, in various subfamilies or genera. In the Aristida genus for example, one species (A. longifolia) is C3 but the approximately 300 other species are C4. As another example, the whole tribe of Andropogoneae, which includes maize, sorghum, sugar cane, "Job's tears", and bluestem grasses, is C4. Around 46 percent of grass species are C4 plants.
Taxonomy
The name Poaceae was given by John Hendley Barnhart in 1895, based on the tribe Poeae described in 1814 by Robert Brown, and the type genus Poa described in 1753 by Carl Linnaeus. The term is derived from the Ancient Greek πόα (póa, "fodder").
Evolutionary history
Grasses include some of the most versatile plant life-forms. They became widespread toward the end of the Cretaceous period, and fossilized dinosaur dung (coprolites) have been found containing phytoliths of a variety that include grasses that are related to modern rice and bamboo. Grasses have adapted to conditions in lush rain forests, dry deserts, cold mountains and even intertidal habitats, and are currently the most widespread plant type; grass is a valuable source of food and energy for all sorts of wildlife.
A cladogram shows subfamilies and approximate species numbers in brackets:
Before 2005, fossil findings indicated that grasses evolved around 55 million years ago. Finds of grass-like phytoliths in dinosaur coprolites from the latest Cretaceous (Maastrichtian) Lameta Formation of India have pushed this date back to 66 million years ago. In 2011, fossils from the same deposit were found to belong to the modern rice tribe Oryzeae, suggesting substantial diversification of major lineages by this time.
In 2018, a study described grass microfossils extracted from the teeth of the hadrosauroid dinosaur Equijubus normani from northern China, dating to the Albian stage of the Early Cretaceous approximately 113–100 million years ago, which were found to belong to primitive lineages within Poaceae, similar in position to the Anomochlooideae. These are currently the oldest known grass fossils.
Fossils of Phragmites have been found in the Late Cretaceous of North America, particularly in the Maastrichtian-aged Laramie Formation. However, slightly older fossils of Phragmites have been found on the eastern coast of the US, dating to the Campanian (such as in the Black Creek Formation).
The relationships among the three subfamilies Bambusoideae, Oryzoideae and Pooideae in the BOP clade have been resolved: Bambusoideae and Pooideae are more closely related to each other than to Oryzoideae. This separation occurred within the relatively short time span of about 4 million years.
According to Lester Charles King, the spread of grasses in the Late Cenozoic changed patterns of hillslope evolution, favouring slopes that are convex upslope, concave downslope, and lacking a free face. King argued that this was the result of more slowly acting surface wash caused by carpets of grass, which in turn would have resulted in relatively more soil creep.
Subdivisions
There are about 12,000 grass species in about 771 genera that are classified into 12 subfamilies. See the full list of Poaceae genera.
Anomochlooideae Pilg. ex Potztal, a small lineage of broad-leaved grasses that includes two genera (Anomochloa, Streptochaeta)
Pharoideae L.G.Clark & Judz., a small lineage of grasses of three genera, including Pharus and Leptaspis
Puelioideae L.G.Clark, M.Kobay., S.Mathews, Spangler & E.A.Kellogg, a small lineage of the African genus Puelia
Pooideae, including wheat, barley, oats, brome-grass (Bromus), reed-grasses (Calamagrostis) and many lawn and pasture grasses such as bluegrass (Poa)
Bambusoideae, including bamboo
Ehrhartoideae, including rice and wild rice
Aristidoideae, including Aristida
Arundinoideae, including giant reed and common reed
Chloridoideae, including the lovegrasses (Eragrostis, about 350 species, including teff), dropseeds (Sporobolus, some 160 species), finger millet (Eleusine coracana (L.) Gaertn.), and the muhly grasses (Muhlenbergia, about 175 species)
Panicoideae, including panic grass, maize, sorghum, sugarcane, most millets, fonio, "Job's tears", and bluestem grasses
Micrairoideae
Danthonioideae, including pampas grass
Distribution
The grass family is one of the most widely distributed and abundant groups of plants on Earth. Grasses are found on every continent, including Antarctica. The Antarctic hair grass, Deschampsia antarctica, is one of only two flowering plant species native to the western Antarctic Peninsula.
Ecology
Grasses are the dominant vegetation in many habitats, including grassland, salt-marsh, reedswamp and steppes. They also occur as a smaller part of the vegetation in almost every other terrestrial habitat.
Grass-dominated biomes are called grasslands. If only large, contiguous areas of grasslands are counted, these biomes cover 31% of the planet's land. Grasslands include pampas, steppes, and prairies.
Grasses provide food to many grazing mammals, as well as to many species of butterflies and moths.
Many types of animals eat grass as their main source of food, and are called graminivores – these include cattle, sheep, horses, rabbits and many invertebrates, such as grasshoppers and the caterpillars of many brown butterflies. Grasses are also eaten by omnivorous or even occasionally by primarily carnivorous animals.
Grasses dominate certain biomes, especially temperate grasslands, because many species are adapted to grazing and fire.
Grasses are unusual in that the meristem is near the bottom of the plant; hence, grasses can quickly recover from cropping at the top.
The evolution of large grazing animals in the Cenozoic contributed to the spread of grasses. Without large grazers, fire-cleared areas are quickly colonized by grasses, and with enough rain, tree seedlings. Trees eventually outcompete most grasses. Trampling grazers kill seedling trees but not grasses.
Sexual reproduction and meiosis
Sexual reproduction and meiosis have been studied in rice, maize, wheat and barley. Meiosis research in these crop species is linked to crop improvement, since meiotic recombination is an important component of plant breeding. Unlike in animals, the specification of both male and female plant germlines occurs late in development during flowering. The transition from the sporophyte phase to the gametophyte state is initiated by meiotic entry.
Uses
Grasses are, in human terms, perhaps the most economically important plant family. Their economic importance stems from several areas, including food production, industry, and lawns. They have been grown as food for domesticated animals for up to 6,000 years and the grains of grasses such as wheat, rice, maize (corn) and barley have been the most important human food crops. Grasses are also used in the manufacture of thatch, paper, fuel, clothing, insulation, timber for fencing, furniture, scaffolding and construction materials, floor matting, sports turf and baskets.
Food production
Of all crops grown, 70% are grasses. Agricultural grasses grown for their edible seeds are called cereals or grains (although the latter term, when used agriculturally, refers to both cereals and similar seeds of other plant species, such as buckwheat and legumes). Three cereals—rice, wheat, and maize (corn)—provide more than half of all calories consumed by humans. Cereals constitute the major source of carbohydrates for humans and perhaps the major source of protein; these include rice (in southern and eastern Asia), maize (in Central and South America), and wheat and barley (in Europe, northern Asia and the Americas).
Sugarcane is the major source of sugar production. Additional food uses of sugarcane include sprouted grain, shoots, and rhizomes, and in drink they include sugarcane juice and plant milk, as well as rum, beer, whisky, and vodka.
Bamboo shoots are used in numerous Asian dishes and broths, and are available in supermarkets in various sliced forms, in both fresh, fermented and canned versions.
Lemongrass is a grass used as a culinary herb for its citrus-like flavor and scent.
Many species of grass are grown as pasture for foraging or as fodder for prescribed livestock feeds, particularly in the case of cattle, horses, and sheep. Such grasses may be cut and stored for later feeding, especially for the winter, in the form of bales of hay or straw, or in silos as silage. Straw (and sometimes hay) may also be used as bedding for animals.
An example of a sod-forming perennial grass used in agriculture is Thinopyrum intermedium.
Industry
Grasses are used as raw material for a multitude of purposes, including construction and in the composition of building materials such as cob, for insulation, in the manufacture of paper and board such as oriented structural straw board. Grass fiber can be used for making paper, biofuel production, nonwoven fabrics, and as replacement for glass fibers used in reinforced plastics. Bamboo scaffolding is able to withstand typhoon-force winds that would break steel scaffolding. Larger bamboos and Arundo donax have stout culms that can be used in a manner similar to timber, Arundo is used to make reeds for woodwind instruments, and bamboo is used for innumerable implements.
Phragmites australis (common reed) is important for thatching and wall construction of homes in Africa. Grasses are used in water treatment systems, in wetland conservation and land reclamation, and used to lessen the erosional impact of urban storm water runoff.
Palaeoecological reconstructions
Pollen morphology, particularly in the Poaceae, is important for reconstructing evolutionary relationships and past environmental change. Grass pollen grains, however, are often morphologically indistinguishable, which limits their use in detailed climate or environmental reconstructions. Grass pollen has a single pore and varies considerably in size, from about 20 to over 100 micrometers; this size variation has been investigated for clues about past habitats, for distinguishing domesticated grasses from wild ones, and as an indicator of biological features such as photosynthetic pathway, breeding system, and genetic complexity. There is, however, ongoing debate about how effective pollen size is for reconstructing past landscapes and climates, since other factors, such as the amount of genetic material, may also affect pollen size. Despite these challenges, new techniques in Fourier-transform infrared spectroscopy (FT-IR) and improved statistical methods are helping to better identify these similar-looking pollen types.
Lawn and ornamental use
Grasses are the primary plants used in lawns, which themselves derive from grazed grasslands in Europe. They also provide an important means of erosion control (e.g., along roadsides), especially on sloping land. Grass lawns are an important covering of playing surfaces in many sports, including football (soccer), American football, tennis, golf, cricket, softball and baseball.
Ornamental grasses, such as perennial bunch grasses, are used in many styles of garden design for their foliage, inflorescences and seed heads. They are often used in natural landscaping, xeriscaping and slope and beach stabilization in contemporary landscaping, wildlife gardening, and native plant gardening. They are used as screens and hedges.
Sports turf
Grass playing fields, courses and pitches are the traditional playing surfaces for many sports, including American football, association football, baseball, cricket, golf, and rugby. Grass surfaces are also sometimes used for horse racing and tennis. Type of maintenance and species of grass used may be important factors for some sports, less critical for others. In some sports facilities, including indoor domes and other places where maintenance of a grass field would be difficult, grass may be replaced with artificial turf, a synthetic grass-like substitute.
Cricket
In cricket, the pitch is the strip of carefully mowed and rolled grass where the bowler bowls. In the days leading up to the match it is repeatedly mowed and rolled to produce a very hard, flat surface for the ball to bounce off.
Golf
Grass on golf courses is kept in three distinct conditions: that of the rough, the fairway, and the putting green. Grass on the fairway is mown short and even, allowing the player to strike the ball cleanly. Playing from the rough is a disadvantage because the long grass may affect the flight of the ball. Grass on the putting green is the shortest and most even, ideally allowing the ball to roll smoothly over the surface. An entire industry revolves around the development and marketing of turf grass varieties.
Tennis
In tennis, grass is grown on very hard-packed soil, and the bounce of a tennis ball may vary depending on the grass's health, how recently it has been mowed, and the wear and tear of recent play. The surface is softer than hard courts and clay (the other main tennis surfaces), so the ball bounces lower and players must reach the ball faster, resulting in a different style of play which may suit some players more than others. Among the world's most prestigious grass tennis courts is Centre Court at Wimbledon, London, which hosts the finals of the annual Wimbledon Championships, one of the four Grand Slam tournaments.
Economically important grasses
A number of grasses are invasive species that damage natural ecosystems, including forms of Phragmites australis, which are native to Eurasia but have spread around the world.
Role in society
Grasses have long had significance in human society. They have been cultivated as feed for people and domesticated animals for thousands of years. The primary ingredient of beer is usually barley or wheat, both of which have been used for this purpose for over 4,000 years.
In some places, particularly in suburban areas, the maintenance of a grass lawn is a sign of a homeowner's responsibility to the overall appearance of their neighborhood. One work credits lawn maintenance to:
In communities with drought problems, watering of lawns may be restricted to certain times of day or days of the week. Many US municipalities and homeowners' associations have rules which require lawns to be maintained to certain specifications, sanctioning those who allow the grass to grow too long.
The smell of freshly cut grass is produced mainly by cis-3-Hexenal.
Some common aphorisms involve grass. For example:
"The grass is always greener on the other side" suggests an alternate state of affairs will always seem preferable to one's own.
"Don't let the grass grow under your feet" tells someone to get moving.
"A snake in the grass" means dangers that are hidden.
"When elephants fight, it is the grass which suffers" tells of bystanders caught in the crossfire.
A folk myth about grass is that it refuses to grow where any violent death has occurred.
Image gallery
See also
Agrostology
Forb
GrassBase
Green track
PACMAD clade
Thinopyrum intermedium
References
External links
Need a Definition of Grass?
Vegetative Key to Grasses
Poaceae at The Plant List
Learn about grasses at The Story of the Poaceae
Gramineae at The Families of Flowering Plants (DELTA)
Poaceae at the Angiosperm Phylogeny Website
Poaceae Classification from the online Catalogue of New World Grasses
Poaceae at the online Guide to the Flora of Mongolia
Poaceae at the online Flora of Taiwan
Poaceae at the online Flora of Pakistan
Poaceae at the online Flora of Zimbabwe
Poaceae at the online Flora of Western Australia
Grasses of Australia (AusGrass2) – AusGrass2 | Grasses of Australia
Gramineae at the online Flora of New Zealand
NZ Grass Key An Interactive Key to New Zealand Grasses at Landcare Research
The Grass Genera of the World at DELTA intkey
RGB Kew - The Online World Grass Flora
GrassWorld
Extant Albian first appearances
Grasslands
Plant life-forms
Plants by habit
Poales families | Poaceae | [
"Biology"
] | 4,562 | [
"Plant life-forms",
"Grasslands",
"Ecosystems",
"Plants"
] |
56,223 | https://en.wikipedia.org/wiki/Piltdown%20Man | The Piltdown Man was a paleoanthropological fraud in which bone fragments were presented as the fossilised remains of a previously unknown early human. Although there were doubts about its authenticity virtually from its announcement in 1912, the remains were still broadly accepted for many years, and the falsity of the hoax was only definitively demonstrated in 1953. An extensive scientific review in 2016 established that amateur archaeologist Charles Dawson was responsible for the fraudulent evidence.
In 1912, Dawson claimed that he had discovered the "missing link" between early apes and man. In February 1912, Dawson contacted Arthur Smith Woodward, Keeper of Geology at the Natural History Museum, stating he had found a section of a human-like skull in Pleistocene gravel beds near Piltdown, East Sussex. That summer, Dawson and Woodward purportedly discovered more bones and artifacts at the site, which they connected to the same individual. These finds included a jawbone, more skull fragments, a set of teeth, and primitive tools.
Woodward reconstructed the skull fragments and hypothesised that they belonged to a human ancestor from 500,000 years ago. The discovery was announced at a Geological Society meeting and was given the Latin name Eoanthropus dawsoni ("Dawson's dawn-man"). The questionable significance of the assemblage remained the subject of considerable controversy until it was conclusively exposed in 1953 as a forgery. It was found to have consisted of the altered mandible and some teeth of an orangutan deliberately combined with the cranium of a fully developed, though small-brained, modern human.
The Piltdown hoax is prominent for two reasons: the attention it generated around the subject of human evolution, and the length of time – 41 years – that elapsed from its alleged initial discovery to its definitive exposure as a composite forgery.
Find
At a meeting of the Geological Society of London on 18 December 1912, Charles Dawson claimed that a workman at the Piltdown gravel pit had given him a fragment of the skull four years earlier. According to Dawson, workmen at the site discovered the skull shortly before his visit and broke it up in the belief that it was a fossilised coconut. Revisiting the site on several occasions, Dawson found further fragments of the skull and took them to Arthur Smith Woodward, keeper of the geological department at the British Museum. Greatly interested by the finds, Woodward accompanied Dawson to the site. Though the two worked together between June and September 1912, Dawson alone recovered more skull fragments and half of the lower jaw. The skull unearthed in 1908 was the only find discovered in situ, with most of the other pieces found in the gravel pit's spoil heaps. French Jesuit paleontologist and geologist Pierre Teilhard de Chardin participated in the uncovering of the Piltdown skull with Woodward.
At the same meeting, Woodward announced that a reconstruction of the fragments indicated that the skull was in many ways similar to that of a modern human, except for the occiput (the part of the skull that sits on the spinal column), and brain size, which was about two-thirds that of a modern human. He went on to indicate that, save for two human-like molar teeth, the jaw bone was indistinguishable from that of a modern, young chimpanzee. From the British Museum's reconstruction of the skull, Woodward proposed that Piltdown Man represented an evolutionary missing link between apes and humans, since the combination of a human-like cranium with an ape-like jaw tended to support the notion then prevailing in England that human evolution began with the brain.
The find was considered legitimate by Otto Schoetensack who had discovered the Heidelberg fossils just a few years earlier; he described it as being the best evidence for an ape-like ancestor of modern humans. Almost from the outset, Woodward's reconstruction of the Piltdown fragments was strongly challenged by some researchers. At the Royal College of Surgeons, copies of the same fragments used by the British Museum in their reconstruction were used to produce an entirely different model, one that in brain size and other features resembled a modern human. This reconstruction, by Arthur Keith, was called Homo piltdownensis in reflection of its more human appearance.
Woodward's reconstruction included ape-like canine teeth, which was itself controversial. In August 1913, Woodward, Dawson and Teilhard de Chardin began a systematic search of the spoil heaps specifically to find the missing canines. Teilhard de Chardin soon found a canine that, according to Woodward, fitted the jaw perfectly. A few days later, Teilhard de Chardin moved to France and took no further part in the discoveries. Noting that the tooth "corresponds exactly with that of an ape", Woodward expected the find to end any dispute over his reconstruction of the skull. However, Keith attacked the find. Keith pointed out that the wear on human molars results from side-to-side movement when chewing. The canine in the Piltdown jaw was impossible, as it would have prevented such side-to-side movement. To explain the wear on the molar teeth, the canine could not have been any higher than the molars. Grafton Elliot Smith, a fellow anthropologist, sided with Woodward, and at the next Royal Society meeting claimed that Keith's opposition was motivated entirely by ambition. Keith later recalled, "Such was the end of our long friendship."
As early as 1913, David Waterston of King's College London published in Nature his conclusion that the sample consisted of an ape mandible and human skull. Likewise, French paleontologist Marcellin Boule concluded the same in 1915. A third opinion from the American zoologist Gerrit Smith Miller Jr. concluded that Piltdown's jaw came from a fossil ape. In 1923, Franz Weidenreich examined the remains and correctly reported that they consisted of a modern human cranium and an orangutan jaw with filed-down teeth.
Sheffield Park find
In 1915, Dawson claimed to have found three fragments of a second skull (Piltdown II) at a new site about away from the original finds. Woodward attempted several times to elicit the location from Dawson, but was unsuccessful. So far as is known, the site was never identified and the finds appear largely undocumented. Woodward did not present the new finds to the Society until five months after Dawson's death in August 1916 and deliberately implied that he knew where they had been found. In 1921, Henry Fairfield Osborn, President of the American Museum of Natural History, examined the Piltdown and Sheffield Park finds and declared that the jaw and skull belonged together "without question" and that the Sheffield Park fragments "were exactly those which we should have selected to confirm the comparison with the original type."
The Sheffield Park finds were taken as proof of the authenticity of the Piltdown Man; it may have been chance that brought an ape's jaw and a human skull together, but the odds of it happening twice were slim. Even Keith conceded to this new evidence, though he still harboured personal doubts.
Memorial
On 23 July 1938, at Barkham Manor, Piltdown, Sir Arthur Keith unveiled a memorial to mark the site where Piltdown Man was discovered by Charles Dawson. Sir Arthur finished his speech saying:
The inscription on the memorial stone reads:
Exposure
Scientific investigation
From the outset, some scientists expressed scepticism about the Piltdown find (see above). Gerrit Smith Miller Jr., for example, observed in 1915 that "deliberate malice could hardly have been more successful than the hazards of deposition in so breaking the fossils as to give free scope to individual judgment in fitting the parts together". In the decades prior to its exposure as a forgery in 1953, scientists increasingly regarded Piltdown as an enigmatic aberration, inconsistent with the path of hominid evolution as demonstrated by fossils found elsewhere.
In November 1953, Time magazine published evidence, gathered variously by Kenneth Page Oakley, Sir Wilfrid Edward Le Gros Clark and Joseph Weiner, proving that Piltdown Man was a forgery and demonstrating that the fossil was a composite of three distinct species. It consisted of a human skull of medieval age, the 500-year-old lower jaw of an orangutan, and chimpanzee fossil teeth. Someone had created the appearance of age by staining the bones with an iron solution and chromic acid. Microscopic examination revealed file-marks on the teeth, and it was deduced from this that someone had modified the teeth to a shape more suited to a human diet.
The Piltdown Man hoax succeeded so well because, at the time of its discovery, the scientific establishment believed that the large modern brain preceded the modern omnivorous diet, and the forgery provided exactly that evidence. Stephen Jay Gould argued that nationalism and cultural prejudice played a role in the ready acceptance of Piltdown Man as genuine, because it satisfied European expectations that the earliest humans would be found in Eurasia, and the British in particular wanted a "first Briton" to set against fossil hominids found elsewhere in Europe.
Identity of the forger
The identity of the Piltdown forger remains unknown, but suspects have included Dawson, Pierre Teilhard de Chardin, Arthur Keith, Martin A. C. Hinton, Horace de Vere Cole and Arthur Conan Doyle.
The focus on Dawson as the main forger is supported by the accumulation of evidence regarding other archaeological hoaxes he perpetrated in the decade or two before the Piltdown discovery. The archaeologist Miles Russell of Bournemouth University analysed Dawson's antiquarian collection, and determined that at least 38 of his specimens were fakes. Among these were the teeth of a multituberculate mammal, Plagiaulax dawsoni, "found" in 1891 (and whose teeth had been filed down in the same way that the teeth of Piltdown Man were to be some 20 years later); the so-called "shadow figures" on the walls of Hastings Castle; a unique hafted stone axe; the Bexhill boat (a hybrid seafaring vessel); the Pevensey bricks (allegedly the latest datable "finds" from Roman Britain); the contents of the Lavant Caves (a fraudulent "flint mine"); the Beauport Park "Roman" statuette (a hybrid iron object); the Bulverhythe Hammer (shaped with an iron knife in the same way as the Piltdown elephant bone implement would later be); a fraudulent "Chinese" bronze vase; the Brighton "Toad in the Hole" (a toad entombed within a flint nodule); the English Channel sea serpent; the Uckfield Horseshoe (another hybrid iron object) and the Lewes Prick Spur. Of his antiquarian publications, most demonstrate evidence of plagiarism or at least naive referencing. Russell wrote: "Piltdown was not a 'one-off' hoax, more the culmination of a life's work." In addition, Harry Morris, an acquaintance of Dawson, had come into possession of one of the flints obtained by Dawson at the Piltdown gravel pit. He suspected that it had been artificially aged – "stained by C. Dawson with intent to defraud". He remained deeply suspicious of Dawson for many years to come, though he never sought to discredit him publicly, possibly because it would have been an argument against the eolith theory, which Morris strongly supported.
Adrian Lister of the UK's Natural History Museum has said that "some people have suggested" that there may also have been a second 'fraudster' seeking to use outrageous fraud in the hope of anonymously exposing the original frauds. This was a theory first proposed by Miles Russell. He has explained that the piece nicknamed the 'cricket bat' (a fossilised elephant bone) was such a crudely forged 'early tool' that it may have been planted to cast doubt upon the other finds, the 'Earliest Englishman' in effect being recovered with the earliest evidence for the game of cricket. This seems to have been part of a wider attempt, by disaffected members of the Sussex archaeological community, to expose Dawson's activities, other examples being the obviously fraudulent 'Maresfield Map', the 'Ashburnham Dial', and the 'Piltdown Palaeolith'. Nevertheless, the 'cricket bat' was accepted at the time, even though it aroused the suspicions of some and ultimately helped lead to the eventual recognition of the fraud decades later.
In 2016, the results of an eight-year review of the forgery were released, identifying Dawson's modus operandi. Multiple specimens demonstrated the same consistent preparation: application of the stain, packing of crevices with local gravel, and fixation of teeth and gravel with dentist's putty. Analysis of shape and trace DNA showed that teeth from both sites belonged to the same orangutan. The consistent method and common source indicated the work of one person on all the specimens, and Dawson was the only one associated with Piltdown II. The authors did not rule out the possibility that someone else provided the false fossils to Dawson but ruled out several other suspects, including Teilhard de Chardin and Doyle, based on the skill and knowledge demonstrated by the forgeries, which closely reflected ideas fashionable in biology at the time. On the other hand, Stephen Jay Gould judged that Pierre Teilhard de Chardin conspired with Dawson in the Piltdown forgery. Teilhard de Chardin had travelled to regions of Africa where one of the anomalous finds originated, and resided in the Wealden area from the date of the earliest finds (although others suggest that he was "without doubt innocent in this matter"). Hinton left a trunk in storage at the Natural History Museum in London that in 1970 was found to contain animal bones and teeth carved and stained in a manner similar to the carving and staining on the Piltdown finds. Phillip Tobias implicated Arthur Keith in helping Dawson by detailing the history of the investigation of the hoax, dismissing other theories, and listing inconsistencies in Keith's statements and actions. Other investigations suggest that the hoax involved accomplices rather than a single forger.
Richard Milner, an American historian of science, argued that Arthur Conan Doyle may have been the perpetrator of the Piltdown Man hoax. Milner noted that Doyle had a plausible motive—namely, revenge on the scientific establishment for debunking one of his favourite psychics—and said that The Lost World appeared to contain several clues referring cryptically to his having been involved in the hoax. Samuel Rosenberg's 1974 book Naked is the Best Disguise purports to explain how, throughout his writings, Doyle had provided overt clues to otherwise hidden or suppressed aspects of his way of thinking that seemed to support the idea that Doyle would be involved in such a hoax. More recent research suggests that Doyle was not involved. In 2016, researchers at the Natural History Museum and Liverpool John Moores University analyzed DNA evidence showing that responsibility for the hoax lay with Dawson, who had originally "found" the remains. Dawson had initially not been considered the likely perpetrator, because the hoax was seen as being too elaborate for him to have devised; however, the DNA evidence showed that a supposedly ancient tooth Dawson had "discovered" in 1915 (at a different site) came from the same jaw as that of the Piltdown Man, suggesting that he had planted them both. That tooth, too, was later proven to have been planted as part of a hoax.
Chris Stringer, an anthropologist from the Natural History Museum, was quoted as saying: "Conan Doyle was known to play golf at the Piltdown site and had even given Dawson a lift in his car to the area, but he was a public man and very busy[,] and it is very unlikely that he would have had the time [to create the hoax]. So there are some coincidences, but I think they are just coincidences. When you look at the fossil evidence[,] you can only associate Dawson with all the finds, and Dawson was known to be personally ambitious. He wanted professional recognition. He wanted to be a member of the Royal Society and he was after an MBE [sic]. He wanted people to stop seeing him as an amateur".
Legacy
Early humans
In 1912, the majority of the scientific community believed the Piltdown Man was the "missing link" between apes and humans. However, over time the Piltdown Man lost its validity, as other discoveries such as the Taung Child and Peking Man were made. R. W. Ehrich and G. M. Henderson note, "To those who are not completely disillusioned by the work of their predecessors, the disqualification of the Piltdown skull changes little in the broad evolutionary pattern. The validity of the specimen has always been questioned". Eventually, during the 1940s and 1950s, more advanced dating technologies, such as the fluorine absorption test, proved scientifically that this skull was actually a fraud.
Influence
The Piltdown Man fraud significantly affected early research on human evolution. Notably, it led scientists down a blind alley in the belief that the human brain expanded in size before the jaw adapted to new types of food. Discoveries of Australopithecine fossils such as the Taung child found by Raymond Dart during the 1920s in South Africa were ignored because of the support for Piltdown Man as "the missing link," and the reconstruction of human evolution was confused for decades. The examination and debate over Piltdown Man caused a vast expenditure of time and effort on the fossil, with an estimated 250+ papers written on the topic.
The book Scientology: A History of Man by L. Ron Hubbard features the Piltdown Man as a phase of biological history capable of leaving a person with subconscious memories of traumatic incidents that can only be resolved by use of Scientology technology. Recovered "memories" of this phase are prompted by one's obsession with biting, hiding the teeth or mouth, and early familial issues. Nominally, this appears to be related to the large jaw of the Piltdown Man specimen. The book was first published in 1952, shortly before the fraud was confirmed, and has since been republished 5 times (most recently in 2007).
Creationists often cite the hoax (along with Nebraska Man) as evidence of an alleged dishonesty of paleontologists who study human evolution, although scientists themselves had exposed the Piltdown hoax (and the Nebraska Man incident was not a deliberate fraud). In November 2003, the Natural History Museum in London held an exhibition to mark the 50th anniversary of the exposure of the fraud.
Biases in the interpretation of the Piltdown Man
The Piltdown case is an example of how race, nationalism, and gender influenced scientific and public opinion. Newspapers explained the seemingly primitive and contradictory features of the skull and jaw by attempting to demonstrate an analogy with non-white races, presumed at the time to be more primitive and less developed than white Europeans. The influence of nationalism resulted in the differing interpretations of the find: whilst the majority of British scientists accepted the discovery as "the earliest Englishman", European and American scientists were considerably more sceptical, and several suggested at the time that the skull and jaw were from two different creatures and had been accidentally mixed up. Although Woodward suggested that the specimen discovered might be female, most scientists and journalists referred to Piltdown as a male. The only notable exception was the coverage by the Daily Express newspaper, which referred to the discovery as a woman, but only to mock the suffragette movement, of which the Express was highly critical.
Timeline
1908: Dawson claims discovery of first Piltdown fragments.
1912 February: Dawson contacts Woodward about first skull fragments.
1912 June: Dawson, Woodward, and Teilhard de Chardin form digging team.
1912 June: Team finds elephant molar, skull fragment.
1912 June: Right parietal skull bones and the jaw bone discovered.
1912 November: News breaks in the popular press.
1912 December: Official presentation of Piltdown Man.
1913: David Waterston concludes that the sample is an ape mandible and a human skull.
1914: Talgai Skull (Australia) found, and considered (at the time) to confirm Piltdown.
1915: Marcellin Boule concludes that the sample is an ape mandible and a human skull. Gerrit Smith Miller concludes the jaw is from a fossil ape.
1916 August: Dawson dies.
1923: Franz Weidenreich reports the remains consist of a modern human cranium and orangutan jaw with filed-down teeth.
1925: Edmonds reports Piltdown geology error. Report ignored.
1943: Fluorine content test is first proposed.
1948: The Earliest Englishman by Woodward is published (posthumously).
1949: Fluorine content test establishes Piltdown Man as relatively recent.
1953: Weiner, Le Gros Clark, and Oakley expose the hoax.
2003: Full extent of Charles Dawson's career in forgeries is exposed.
2016: Study reveals method of Dawson's forgery.
See also
Archaeoraptor
Beringer's Lying Stones
Bone Wars similar rivalry and hoaxes over dinosaur bones in the late 19th century
Calaveras Skull
Cardiff Giant
Cheddar Man a genuine skeleton of an early Briton
Himalayan fossil hoax
References
Further reading
External links
"Charles Dawson Piltdown Faker" BBC News
Project Piltdown at Bournemouth University
Piltdown Man documentary Discovery Channel
Piltdown Man at the Natural History Museum, London
The Piltdown Plot at Clark University
Archæological Forgeries
The Unmasking of Piltdown Man BBC
Fossil fools: Return to Piltdown BBC
The Boldest Hoax (about Piltdown Man case) PBS NOVA
Sarah Lyell, "Piltdown Man Hoaxer: Missing Link is Found", The New York Times, 25 May 1996. The case for Martin A. C. Hinton as the hoaxer.
An annotated bibliography of the Piltdown Man forgery, 1953–2005 by Tom Turrittin.
Web pages about the Piltdown forgery hosted by the British Geological Survey
An annotated select bibliography of the Piltdown forgery by David G Bate
1910s hoaxes
1912 in science
Academic scandals
Archaeological forgeries
History of East Sussex
Hoaxes in the United Kingdom
Hoaxes in science
Nationalism and archaeology
Fletching
Fossil forgeries
Paleontological chimeras
Scientific racism | Piltdown Man | [
"Biology"
] | 4,650 | [
"Biology theories",
"Obsolete biology theories",
"Scientific racism"
] |
56,226 | https://en.wikipedia.org/wiki/Electrum | Electrum is a naturally occurring alloy of gold and silver, with trace amounts of copper and other metals. Its color ranges from pale to bright yellow, depending on the proportions of gold and silver. It has been produced artificially and is also known as "green gold".
Electrum was used as early as the third millennium BC in the Old Kingdom of Egypt, sometimes as an exterior coating to the pyramidions atop ancient Egyptian pyramids and obelisks. It was also used in the making of ancient drinking vessels. The first known metal coins made were of electrum, dating back to the end of the 7th century or the beginning of the 6th century BC.
Etymology
The name electrum is the Latinized form of the Greek word ἤλεκτρον (ḗlektron), mentioned in the Odyssey, referring to a metallic substance consisting of gold alloyed with silver. The same word was also used for the substance amber, likely because of the pale yellow color of certain varieties. (It is from amber’s electrostatic properties that the modern English words electron and electricity are derived.) Electrum was often referred to as "white gold" in ancient times but could be more accurately described as pale gold because it is usually pale yellow or yellowish-white in color. The modern use of the term white gold usually refers to gold alloyed with any one or a combination of nickel, silver, platinum and palladium to produce a silver-colored gold.
Composition
Electrum consists primarily of gold and silver but is sometimes found with traces of platinum, copper and other metals. The name is mostly applied informally to compositions between 20–80% gold and 80–20% silver, but these are strictly called gold or silver depending on the dominant element. Analysis of the composition of electrum in ancient Greek coinage dating from about 600 BC shows that the gold content was about 55.5% in the coinage issued by Phocaea. In the early classical period the gold content of electrum ranged from 46% in Phokaia to 43% in Mytilene. In later coinage from these areas, dating to 326 BC, the gold content averaged 40% to 41%. In the Hellenistic period electrum coins with a regularly decreasing proportion of gold were issued by the Carthaginians. In the later Eastern Roman Empire controlled from Constantinople, the purity of the gold coinage was reduced.
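As a quick illustration of the informal naming convention above, the following Python sketch labels a gold–silver alloy by its gold content; the function name and structure are this example's own, not from any standard source, and the 326 BC entry uses the midpoint of the 40–41% range quoted above.

```python
def classify_alloy(gold_percent: float) -> str:
    """Label a gold-silver alloy by its gold mass percentage, following
    the informal 20-80% convention for electrum described above."""
    if gold_percent > 80:
        return "gold"      # gold-dominated: strictly called gold
    if gold_percent >= 20:
        return "electrum"  # the informal electrum range
    return "silver"        # silver-dominated: strictly called silver

# Gold contents of the ancient coinages cited above:
for coinage, pct in [("Phocaea, c. 600 BC", 55.5),
                     ("Phokaia, early classical", 46.0),
                     ("Mytilene, early classical", 43.0),
                     ("coinage of 326 BC", 40.5)]:
    print(f"{coinage}: {pct}% gold -> {classify_alloy(pct)}")
```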
History
Electrum is mentioned in an account of an expedition sent by Pharaoh Sahure of the Fifth Dynasty of Egypt. It is also discussed by Pliny the Elder in his Naturalis Historia. It is also mentioned in the Bible, in the first chapter of the book of the prophet Ezekiel.
Early coinage
The earliest known electrum coins, Lydian coins and East Greek coins found under the Temple of Artemis at Ephesus, are currently dated to the last quarter of the 7th century BC (625–600 BC). Electrum is believed to have been used in coins c. 600 BC in Lydia during the reign of Alyattes.
Electrum was much better for coinage than gold, mostly because it was harder and more durable, but also because techniques for refining gold were not widespread at the time. The gold content of naturally occurring electrum in modern western Anatolia ranges from 70% to 90%, in contrast to the 45–55% of gold in electrum used in ancient Lydian coinage of the same geographical area. This suggests that the Lydians had already solved the refining technology for silver and were adding refined silver to the local native electrum some decades before introducing pure silver coins.
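The inference about added silver can be checked with a back-of-the-envelope mass balance. The Python sketch below uses illustrative midpoints (75% gold for natural electrum, 50% for coinage) chosen from the ranges quoted above; the function name is this example's own.

```python
def silver_to_add(electrum_mass: float, au_natural: float, au_target: float) -> float:
    """Mass of refined silver to add to natural electrum so that the
    gold mass fraction falls from au_natural to au_target; the gold
    mass itself is conserved."""
    gold_mass = electrum_mass * au_natural
    return gold_mass / au_target - electrum_mass

# Diluting 1 kg of 75%-gold natural electrum down to 50% gold:
print(silver_to_add(1.0, 0.75, 0.50))  # 0.5 -> add half a kilogram of silver
```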
In Lydia, electrum was minted into coins weighing , each valued at stater (meaning "standard"). Three of these coins—with a weight of about —totaled one stater, about one month's pay for a soldier. To complement the stater, fractions were made: the trite (third), the hekte (sixth), and so forth, including of a stater, and even down to and of a stater. The stater was about to . Larger denominations, such as a one stater coin, were minted as well.
Because of variation in the composition of electrum, it was difficult to determine the exact worth of each coin. Widespread trading was hampered by this problem, as the intrinsic value of each electrum coin could not be easily determined. This suggests that one reason for the invention of coinage in that area was to increase the profits from seigniorage by issuing currency with a lower gold content than the commonly circulating metal.
These difficulties were eliminated circa 570 BC when the Croeseids, coins of pure gold and silver, were introduced. However, electrum currency remained common until approximately 350 BC. The simplest reason for this was that, because of the gold content, one 14.1 gram stater was worth as much as ten 14.1 gram silver pieces.
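For a sense of scale, the fractional denominations named above can be turned into weights. This short Python sketch uses the 14.1-gram stater weight cited in this section and shows only the fractions the text names explicitly (trite and hekte); actual coin weights varied.

```python
from fractions import Fraction

STATER_GRAMS = 14.1  # stater weight cited in this section

for name, frac in [("stater", Fraction(1)),
                   ("trite (third)", Fraction(1, 3)),
                   ("hekte (sixth)", Fraction(1, 6))]:
    grams = float(frac) * STATER_GRAMS
    print(f"{name}: {grams:.2f} g")  # 14.10 g, 4.70 g, 2.35 g
```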
See also
Corinthian bronze – a highly prized alloy in antiquity that may have contained electrum
Crown gold - A 22 carat gold alloy highly valued for its use in gold coins from the 16th century onwards
Hepatizon
Orichalcum – another distinct metal or alloy mentioned in texts from classical antiquity, later used to refer to brass
Panchaloha
Shakudō – a Japanese billon of gold and copper with a dark blue-purple patina
Shibuichi – another Japanese alloy known for its patina
Thokcha – an alloy of meteoric iron or "thunderbolt iron" commonly used in Tibet
Tumbaga – a similar material, originating in Pre-Columbian America
References
External links
Electrum lion coins of the ancient Lydians (about 600 BC)
An image of the obverse of a Lydian coin made of electrum
Gold
Coinage metals and alloys
Precious metal alloys
Silver
Copper alloys | Electrum | [
"Chemistry"
] | 1,176 | [
"Precious metal alloys",
"Alloys",
"Copper alloys",
"Coinage metals and alloys"
] |
56,239 | https://en.wikipedia.org/wiki/Acrylamide | Acrylamide (or acrylic amide) is an organic compound with the chemical formula CH2=CHC(O)NH2. It is a white odorless solid, soluble in water and several organic solvents. From the chemistry perspective, acrylamide is a vinyl-substituted primary amide (CONH2). It is produced industrially mainly as a precursor to polyacrylamides, which find many uses as water-soluble thickeners and flocculation agents.
Acrylamide forms in burnt areas of food, particularly starchy foods like potatoes, when cooked with high heat, above . Despite health scares following this discovery in 2002, and its classification as a probable carcinogen, acrylamide from diet is thought unlikely to cause cancer in humans; Cancer Research UK categorized the idea that eating burnt food causes cancer as a "myth".
Production
Acrylamide can be prepared by the hydration of acrylonitrile, which is catalyzed enzymatically:
CH2=CHCN + H2O → CH2=CHC(O)NH2
This reaction also is catalyzed by sulfuric acid as well as various metal salts. Treatment of acrylonitrile with sulfuric acid gives acrylamide sulfate, . This salt can be converted to acrylamide with a base or to methyl acrylate with methanol.
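As a sanity check on the hydration reaction above, the following Python sketch verifies that the molar masses balance, using standard atomic weights; the helper names are this example's own.

```python
# Standard atomic weights, g/mol (rounded)
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(formula: dict) -> float:
    """Molar mass in g/mol from an element -> atom-count mapping."""
    return sum(ATOMIC_MASS[element] * count for element, count in formula.items())

acrylonitrile = {"C": 3, "H": 3, "N": 1}           # CH2=CHCN
water         = {"H": 2, "O": 1}                   # H2O
acrylamide    = {"C": 3, "H": 5, "N": 1, "O": 1}   # CH2=CHC(O)NH2

reactants = molar_mass(acrylonitrile) + molar_mass(water)
product = molar_mass(acrylamide)
print(f"{reactants:.2f} g/mol in, {product:.2f} g/mol out")  # 71.08 both ways
```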
Uses
The majority of acrylamide is used to manufacture various polymers, especially polyacrylamide. This water-soluble polymer, which has very low toxicity, is widely used as a thickener and flocculating agent. These functions are valuable in the purification of drinking water, corrosion inhibition, mineral extraction, and paper making. Polyacrylamide gels are routinely used in medicine and biochemistry for purification and assays.
Toxicity and carcinogenicity
Acrylamide can arise in some cooked foods via a series of steps by the reaction of the amino acid asparagine and glucose. This condensation, one of the Maillard reactions, followed by dehydrogenation produces N-(D-glucos-1-yl)-L-asparagine, which upon pyrolysis generates some acrylamide.
The discovery in 2002 that some cooked foods contain acrylamide attracted significant attention to its possible biological effects. IARC, NTP, and the EPA have classified it as a probable carcinogen, although epidemiological studies (as of 2019) suggest that dietary acrylamide consumption does not significantly increase people's risk of developing cancer.
Europe
According to the EFSA, the main toxicity risks of acrylamide are "Neurotoxicity, adverse effects on male reproduction, developmental toxicity and carcinogenicity". However, according to their research, there is no concern regarding non-neoplastic effects. Furthermore, while a relation between consumption of acrylamide and cancer has been shown in rats and mice, it is still unclear whether acrylamide consumption affects the risk of developing cancer in humans, and existing epidemiological studies in humans are very limited and do not show any relation between acrylamide and cancer in humans. Food industry workers exposed to twice the average level of acrylamide do not exhibit higher cancer rates.
United States
Acrylamide is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities.
Acrylamide is considered a potential occupational carcinogen by U.S. government agencies and classified as a Group 2A carcinogen by the IARC. The Occupational Safety and Health Administration and the National Institute for Occupational Safety and Health have set dermal occupational exposure limits at 0.03 mg/m3 over an eight-hour workday.
Opinions of health organizations
Baking, grilling or broiling food causes significant concentrations of acrylamide. This discovery in 2002 led to international health concerns. Subsequent research has however found that it is not likely that the acrylamides in burnt or well-cooked food cause cancer in humans; Cancer Research UK categorizes the idea that burnt food causes cancer as a "myth".
The American Cancer Society says that laboratory studies have shown that acrylamide is likely to be a carcinogen, but that evidence from epidemiological studies suggests that dietary acrylamide is unlikely to raise the risk of people developing most common types of cancer.
Hazards
Acrylamide is also a skin irritant and may be a tumor initiator in the skin, potentially increasing the risk of skin cancer. Symptoms of acrylamide exposure include dermatitis in the exposed area and peripheral neuropathy.
Laboratory research has found that some phytochemicals may have the potential to be developed into drugs which could alleviate the toxicity of acrylamide.
Mechanism of action
Acrylamide is metabolized to the genotoxic derivative glycidamide. On the other hand, acrylamide and glycidamide can be detoxified via conjugation with glutathione.
Occurrence in food
Acrylamide was discovered in foods, mainly in starchy foods, such as potato chips (UK: potato crisps), French fries (UK: chips), and bread that had been heated higher than . Production of acrylamide in the heating process was shown to be temperature-dependent. It was not found in food that had been boiled, or in foods that were not heated.
Acrylamide has been found in roasted barley tea, called mugicha in Japanese. The barley is roasted so it is dark brown prior to being steeped in hot water. The roasting process produced 200–600 micrograms/kg of acrylamide in mugicha. This is less than the >1000 micrograms/kg found in potato crisps and other fried whole potato snack foods cited in the same study and it is unclear how much of this enters the drink to be ingested. Rice cracker and sweet potato levels were lower than in potatoes. Potatoes cooked whole were found to have significantly lower acrylamide levels than the others, suggesting a link between food preparation method and acrylamide levels.
Acrylamide levels appear to rise as food is heated for longer periods of time. Although researchers are still unsure of the precise mechanisms by which acrylamide forms in foods, many believe it is a byproduct of the Maillard reaction. In fried or baked goods, acrylamide may be produced by the reaction between asparagine and reducing sugars (fructose, glucose, etc.) or reactive carbonyls at temperatures above 120 °C (248 °F).
Later studies have found acrylamide in black olives, dried plums, dried pears, coffee, and peanuts.
The US FDA has analyzed a variety of U.S. food products for levels of acrylamide since 2002.
Occurrence in cigarettes
Cigarette smoking is a major acrylamide source. It has been shown in one study to cause an increase in blood acrylamide levels three-fold greater than any dietary factor.
See also
Acrydite: research on this compound casts light on acrylamide
Acrolein
Alkyl nitrites
Deep-frying
Deep fryer
Vacuum fryer
Substance of very high concern
Heterocyclic amines
Polycyclic aromatic hydrocarbons
References
Further reading
External links
Carboxamides
Hazardous air pollutants
IARC Group 2A carcinogens
Monomers
Reproductive toxicants
Suspected fetotoxicants | Acrylamide | [
"Chemistry",
"Materials_science"
] | 1,627 | [
"Endocrine disruptors",
"Reproductive toxicants",
"Monomers",
"Polymer chemistry"
] |
56,263 | https://en.wikipedia.org/wiki/Dyadic%20rational | In mathematics, a dyadic rational or binary rational is a number that can be expressed as a fraction whose denominator is a power of two. For example, 1/2, 3/2, and 3/8 are dyadic rationals, but 1/3 is not. These numbers are important in computer science because they are the only ones with finite binary representations. Dyadic rationals also have applications in weights and measures, musical time signatures, and early mathematics education. They can accurately approximate any real number.
The sum, difference, or product of any two dyadic rational numbers is another dyadic rational number, given by a simple formula. However, division of one dyadic rational number by another does not always produce a dyadic rational result. Mathematically, this means that the dyadic rational numbers form a ring, lying between the ring of integers and the field of rational numbers. This ring may be denoted $\mathbb{Z}[\tfrac{1}{2}]$.
In advanced mathematics, the dyadic rational numbers are central to the constructions of the dyadic solenoid, Minkowski's question-mark function, Daubechies wavelets, Thompson's group, Prüfer 2-group, surreal numbers, and fusible numbers. These numbers are order-isomorphic to the rational numbers; they form a subsystem of the 2-adic numbers as well as of the reals, and can represent the fractional parts of 2-adic numbers. Functions from natural numbers to dyadic rationals have been used to formalize mathematical analysis in reverse mathematics.
Applications
In measurement
Many traditional systems of weights and measures are based on the idea of repeated halving, which produces dyadic rationals when measuring fractional amounts of units. The inch is customarily subdivided in dyadic rationals rather than using a decimal subdivision. The customary divisions of the gallon into half-gallons, quarts, pints, and cups are also dyadic. The ancient Egyptians used dyadic rationals in measurement, with denominators up to 64. Similarly, systems of weights from the Indus Valley civilisation are for the most part based on repeated halving; anthropologist Heather M.-L. Miller writes that "halving is a relatively simple operation with beam balances, which is likely why so many weight systems of this time period used binary systems".
In computing
Dyadic rationals are central to computer science as a type of fractional number that many computers can manipulate directly. In particular, as a data type used by computers, floating-point numbers are often defined as integers multiplied by positive or negative powers of two. The numbers that can be represented precisely in a floating-point format, such as the IEEE floating-point datatypes, are called its representable numbers. For most floating-point representations, the representable numbers are a subset of the dyadic rationals. The same is true for fixed-point datatypes, which also use powers of two implicitly in the majority of cases. Because of the simplicity of computing with dyadic rationals, they are also used for exact real computing using interval arithmetic, and are central to some theoretical models of computable numbers.
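As a quick illustration (a minimal Python sketch, assuming the typical CPython build where float is an IEEE 754 binary64 value), float.as_integer_ratio exposes the dyadic structure of binary floating-point numbers directly:

```python
# Binary floating-point values are dyadic rationals: an integer
# numerator over a power-of-two denominator.
for x in [0.5, 0.1, 3.75]:
    num, den = x.as_integer_ratio()
    # den is always a power of two for a binary float
    assert den & (den - 1) == 0
    print(f"{x!r} = {num}/{den} = {num}/2**{den.bit_length() - 1}")
```

Note that 0.1 is stored as a dyadic rational with denominator 2**55, which is why it is only an approximation of the decimal value one tenth.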
Generating a random variable from random bits, in a fixed amount of time, is possible only when the variable has finitely many outcomes whose probabilities are all dyadic rational numbers. For random variables whose probabilities are not dyadic, it is necessary either to approximate their probabilities by dyadic rationals, or to use a random generation process whose time is itself random and unbounded.
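A minimal Python sketch of this idea (the function name, outcome labels, and probabilities are illustrative, not standard): a distribution whose probabilities all have denominator 2**k can be sampled with exactly k fair bits.

```python
import random

def sample_dyadic(outcomes, k):
    """Sample from (label, numerator) pairs whose probabilities are
    numerator / 2**k and sum to 1, using exactly k fair random bits."""
    assert sum(n for _, n in outcomes) == 2 ** k
    # Interpret k fair bits as a uniform integer in [0, 2**k).
    r = random.getrandbits(k)
    for label, numerator in outcomes:
        if r < numerator:
            return label
        r -= numerator

# Probabilities 3/8, 4/8, 1/8 use exactly three fair bits per draw.
print(sample_dyadic([("a", 3), ("b", 4), ("c", 1)], k=3))
```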
In music
Time signatures in Western musical notation traditionally are written in a form resembling fractions, although the horizontal line of the musical staff that separates the top and bottom number is usually omitted when writing the signature separately from its staff. As fractions they are generally dyadic, although non-dyadic time signatures have also been used. The numeric value of the signature, interpreted as a fraction, describes the length of a measure as a fraction of a whole note. Its numerator describes the number of beats per measure, and the denominator describes the length of each beat.
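For example, under this reading, a 3/4 signature indicates three beats per measure, each lasting a quarter note, so a full measure spans 3/4 of a whole note.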
In mathematics education
In theories of childhood development of the concept of a fraction based on the work of Jean Piaget, fractional numbers arising from halving and repeated halving are among the earliest forms of fractions to develop. This stage of development of the concept of fractions has been called "algorithmic halving". Addition and subtraction of these numbers can be performed in steps that only involve doubling, halving, adding, and subtracting integers. In contrast, addition and subtraction of more general fractions involves integer multiplication and factorization to reach a common denominator. Therefore, dyadic fractions can be easier for students to calculate with than more general fractions.
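For instance, a sum such as $\tfrac{3}{8}+\tfrac{1}{4}$ requires only doubling the numerator and denominator of $\tfrac{1}{4}$ to reach the common denominator 8, followed by integer addition of the numerators:

$$\frac{3}{8}+\frac{1}{4}=\frac{3}{8}+\frac{2}{8}=\frac{5}{8}.$$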
Definitions and arithmetic
The dyadic numbers are the rational numbers that result from dividing an integer by a power of two. A rational number $p/q$ in simplest terms is a dyadic rational when $q$ is a power of two. Another equivalent way of defining the dyadic rationals is that they are the real numbers that have a terminating binary representation.
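As a minimal illustration of this definition (a Python sketch; the helper name is ours, not standard), a reduced fraction is dyadic exactly when its denominator is a power of two, which can be tested with a bit trick:

```python
from fractions import Fraction

def is_dyadic(x: Fraction) -> bool:
    """True if x can be written as an integer over a power of two."""
    q = x.denominator          # Fraction is always kept in lowest terms
    return q & (q - 1) == 0    # power-of-two test on the denominator

print(is_dyadic(Fraction(3, 8)))   # True
print(is_dyadic(Fraction(1, 3)))   # False
```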
Addition, subtraction, and multiplication of any two dyadic rationals produces another dyadic rational, according to the following formulas:

$$\frac{a}{2^b}+\frac{c}{2^d}=\frac{2^d a+2^b c}{2^{b+d}},\qquad \frac{a}{2^b}-\frac{c}{2^d}=\frac{2^d a-2^b c}{2^{b+d}},\qquad \frac{a}{2^b}\cdot\frac{c}{2^d}=\frac{ac}{2^{b+d}}.$$
However, the result of dividing one dyadic rational by another is not necessarily a dyadic rational. For instance, 1 and 3 are both dyadic rational numbers, but 1/3 is not.
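A brief Python sketch (reusing the is_dyadic test above) illustrating closure under addition, subtraction, and multiplication, and the failure of closure under division:

```python
from fractions import Fraction

def is_dyadic(x: Fraction) -> bool:
    q = x.denominator              # always in lowest terms
    return q & (q - 1) == 0        # power-of-two test

a, b = Fraction(3, 8), Fraction(5, 4)
print(is_dyadic(a + b), is_dyadic(a - b), is_dyadic(a * b))  # True True True
print(is_dyadic(Fraction(1) / Fraction(3)))  # False: 1/3 is not dyadic
```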
Additional properties
Every integer, and every half-integer, is a dyadic rational. They both meet the definition of being an integer divided by a power of two: every integer is an integer divided by one (the zeroth power of two), and every half-integer is an integer divided by two.
Every real number can be arbitrarily closely approximated by dyadic rationals. In particular, for a real number $x$, consider the dyadic rationals of the form $\lfloor 2^i x \rfloor / 2^i$, where $i$ can be any integer and $\lfloor\cdot\rfloor$ denotes the floor function that rounds its argument down to an integer. These numbers approximate $x$ from below to within an error of $1/2^i$, which can be made arbitrarily small by choosing $i$ to be arbitrarily large. For a fractal subset of the real numbers, this error bound is within a constant factor of optimal: for these numbers, there is no approximation with error smaller than a constant times $1/2^i$. The existence of accurate dyadic approximations can be expressed by saying that the set of all dyadic rationals is dense in the real line. More strongly, this set is uniformly dense, in the sense that the dyadic rationals with denominator $2^i$ are uniformly spaced on the real line.
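A short Python sketch of this approximation scheme (the function name is ours):

```python
import math

def dyadic_lower(x: float, i: int) -> tuple[int, int]:
    """Numerator and denominator of floor(2**i * x) / 2**i."""
    return math.floor(x * 2 ** i), 2 ** i

# Successive dyadic lower approximations of pi, with shrinking error.
for i in (4, 10, 20):
    num, den = dyadic_lower(math.pi, i)
    print(f"{num}/{den} = {num / den:.7f}  (error below 1/{den})")
```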
The dyadic rationals are precisely those numbers possessing finite binary expansions. Their binary expansions are not unique; there is one finite and one infinite representation of each dyadic rational other than 0 (ignoring terminal 0s). For example, $0.11_2 = 0.10111\ldots_2$, giving two different representations for 3/4. The dyadic rationals are the only numbers whose binary expansions are not unique.
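For the finite expansion, the bits can be read off the numerator once the denominator is recognised as a power of two (a minimal Python sketch; the helper name is ours):

```python
from fractions import Fraction

def binary_expansion(x: Fraction) -> str:
    """Terminating binary representation of a non-negative dyadic rational."""
    q = x.denominator
    assert q & (q - 1) == 0, "not a dyadic rational"
    k = q.bit_length() - 1             # denominator is 2**k
    whole, frac = divmod(x.numerator, q)
    bits = format(frac, f"0{k}b") if k else ""
    return f"{whole}.{bits}" if bits else str(whole)

print(binary_expansion(Fraction(3, 4)))   # 0.11
print(binary_expansion(Fraction(13, 8)))  # 1.101
```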
In advanced mathematics
Algebraic structure
Because they are closed under addition, subtraction, and multiplication, but not division, the dyadic rationals are a ring but not a field. The ring of dyadic rationals may be denoted $\mathbb{Z}[\tfrac{1}{2}]$, meaning that it can be generated by evaluating polynomials with integer coefficients, at the argument 1/2. As a ring, the dyadic rationals are a subring of the rational numbers, and an overring of the integers. Algebraically, this ring is the localization of the integers with respect to the set of powers of two.
As well as forming a subring of the real numbers, the dyadic rational numbers form a subring of the 2-adic numbers, a system of numbers that can be defined from binary representations that are finite to the right of the binary point but may extend infinitely far to the left. The 2-adic numbers include all rational numbers, not just the dyadic rationals. Embedding the dyadic rationals into the 2-adic numbers does not change the arithmetic of the dyadic rationals, but it gives them a different topological structure than they have as a subring of the real numbers. As they do in the reals, the dyadic rationals form a dense subset of the 2-adic numbers, and are the set of 2-adic numbers with finite binary expansions. Every 2-adic number can be decomposed into the sum of a 2-adic integer and a dyadic rational; in this sense, the dyadic rationals can represent the fractional parts of 2-adic numbers, but this decomposition is not unique.
Addition of dyadic rationals modulo 1 (the quotient group of the dyadic rationals by the integers) forms the Prüfer 2-group.
Dyadic solenoid
Considering only the addition and subtraction operations of the dyadic rationals gives them the structure of an additive abelian group. Pontryagin duality is a method for understanding abelian groups by constructing dual groups, whose elements are characters of the original group, group homomorphisms to the multiplicative group of the complex numbers, with pointwise multiplication as the dual group operation. The dual group of the additive dyadic rationals, constructed in this way, can also be viewed as a topological group. It is called the dyadic solenoid, and is isomorphic to the topological product of the real numbers and 2-adic numbers, quotiented by the diagonal embedding of the dyadic rationals into this product. It is an example of a protorus, a solenoid, and an indecomposable continuum.
Functions with dyadic rationals as distinguished points
Because they are a dense subset of the real numbers, the dyadic rationals, with their numeric ordering, form a dense order. As with any two unbounded countable dense linear orders, by Cantor's isomorphism theorem, the dyadic rationals are order-isomorphic to the rational numbers. In this case, Minkowski's question-mark function provides an order-preserving bijection between the set of all rational numbers and the set of dyadic rationals.
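As an illustration of this bijection, here is a small Python sketch (the function names are ours) of the standard continued-fraction formula for the question-mark function; note how each rational input yields a dyadic rational output:

```python
from fractions import Fraction

def continued_fraction(x: Fraction) -> list[int]:
    """Continued-fraction coefficients [a0; a1, a2, ...] of a rational number."""
    coefficients = []
    p, q = x.numerator, x.denominator
    while q:
        a, r = divmod(p, q)
        coefficients.append(a)
        p, q = q, r
    return coefficients

def question_mark(x: Fraction) -> Fraction:
    """Minkowski's ?-function evaluated at a rational; the result is dyadic."""
    coefficients = continued_fraction(x)
    result = Fraction(coefficients[0])
    exponent, sign = 0, 1
    # ?(x) = a0 + 2 * sum over k of (-1)**(k+1) / 2**(a1 + ... + ak)
    for a in coefficients[1:]:
        exponent += a
        result += sign * Fraction(2, 2 ** exponent)
        sign = -sign
    return result

print(question_mark(Fraction(1, 3)))  # 1/4
print(question_mark(Fraction(2, 3)))  # 3/4
```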
The dyadic rationals play a key role in the analysis of Daubechies wavelets, as the set of points where the scaling function of these wavelets is non-smooth. Similarly, the dyadic rationals parameterize the discontinuities in the boundary between stable and unstable points in the parameter space of the Hénon map.
The set of piecewise linear homeomorphisms from the unit interval to itself that have power-of-2 slopes and dyadic-rational breakpoints forms a group under the operation of function composition. This is Thompson's group, the first known example of an infinite but finitely presented simple group. The same group can also be represented by an action on rooted binary trees, or by an action on the dyadic rationals within the unit interval.
Other related constructions
In reverse mathematics, one way of constructing the real numbers is to represent them as functions from unary numbers to dyadic rationals, where the value of one of these functions for the argument $n$ is a dyadic rational with denominator $2^n$ that approximates the given real number. Defining real numbers in this way allows many of the basic results of mathematical analysis to be proven within a restricted theory of second-order arithmetic called "feasible analysis" (BTFA).
The surreal numbers are generated by an iterated construction principle which starts by generating all finite dyadic rationals, and then goes on to create new and strange kinds of infinite, infinitesimal and other numbers. This number system is foundational to combinatorial game theory, and dyadic rationals arise naturally in this theory as the set of values of certain combinatorial games.
The fusible numbers are a subset of the dyadic rationals: the closure of the set $\{0\}$ under the operation $x, y \mapsto (x+y+1)/2$, restricted to pairs with $|x-y| < 1$. They are well-ordered, with order type equal to the epsilon number $\varepsilon_0$. For each integer $n$ the smallest fusible number that is greater than $n$ has the form $n + 1/2^k$. The existence of $k$ for each $n$ cannot be proven in Peano arithmetic, and $k$ grows so rapidly as a function of $n$ that for $n = 3$ it is already far too large to express in ordinary notation, requiring Knuth's up-arrow notation for large numbers.
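A brief Python sketch (ours) that generates an initial portion of the fusible numbers by repeatedly applying the defining operation; a fixed number of rounds only approximates the true, infinite closure:

```python
from fractions import Fraction

def fusible_numbers(rounds: int) -> set[Fraction]:
    """Close {0} under (x + y + 1)/2 for |x - y| < 1, a fixed number of times."""
    fusible = {Fraction(0)}
    for _ in range(rounds):
        fusible |= {
            (x + y + 1) / 2
            for x in fusible for y in fusible
            if abs(x - y) < 1
        }
    return fusible

print(sorted(fusible_numbers(3))[:6])  # smallest few: 0, 1/2, 3/4, 7/8, 1, 9/8
```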
The usual proof of Urysohn's lemma utilizes the dyadic fractions for constructing the separating function from the lemma.
References
Fractions (mathematics)
Rational numbers
Ring theory
Number theory | Dyadic rational | [
"Mathematics"
] | 2,664 | [
"Fractions (mathematics)",
"Discrete mathematics",
"Mathematical objects",
"Ring theory",
"Fields of abstract algebra",
"Arithmetic",
"Numbers",
"Number theory"
] |
56,265 | https://en.wikipedia.org/wiki/Thymus | The thymus (pl.: thymuses or thymi) is a specialized primary lymphoid organ of the immune system. Within the thymus, thymus cell lymphocytes or T cells mature. T cells are critical to the adaptive immune system, where the body adapts to specific foreign invaders. The thymus is located in the upper front part of the chest, in the anterior superior mediastinum, behind the sternum, and in front of the heart. It is made up of two lobes, each consisting of a central medulla and an outer cortex, surrounded by a capsule.
The thymus is made up of immature T cells called thymocytes, as well as lining cells called epithelial cells which help the thymocytes develop. T cells that successfully develop react appropriately with MHC immune receptors of the body (called positive selection) and not against proteins of the body (called negative selection). The thymus is the largest and most active during the neonatal and pre-adolescent periods. By the early teens, the thymus begins to decrease in size and activity and the tissue of the thymus is gradually replaced by fatty tissue. Nevertheless, some T cell development continues throughout adult life.
Abnormalities of the thymus can result in a decreased number of T cells and autoimmune diseases such as autoimmune polyendocrine syndrome type 1 and myasthenia gravis. These are often associated with cancer of the tissue of the thymus, called thymoma, or tissues arising from immature lymphocytes such as T cells, called lymphoma. Removal of the thymus is called thymectomy. Although the thymus has been identified as a part of the body since the time of the Ancient Greeks, it is only since the 1960s that the function of the thymus in the immune system has become clearer.
Structure
The thymus is an organ that sits behind the sternum in the upper front part of the chest, stretching upwards towards the neck. In children, the thymus is pinkish-gray, soft, and lobulated on its surfaces. At birth, it is about 4–6 cm long, 2.5–5 cm wide, and about 1 cm thick. It increases in size until puberty, when it may weigh about 40–50 g, following which it decreases in size in a process known as involution.
The thymus is located in the anterior mediastinum. It is made up of two lobes that meet in the upper midline, and stretch from below the thyroid in the neck to as low as the cartilage of the fourth rib. The lobes are covered by a capsule. The thymus lies behind the sternum, rests on the pericardium, and is separated from the aortic arch and great vessels by a layer of fascia. The left brachiocephalic vein may even be embedded within the thymus. In the neck, it lies on the front and sides of the trachea, behind the sternohyoid and sternothyroid muscles.
Microanatomy
The thymus consists of two lobes, merged in the middle, surrounded by a capsule that extends with blood vessels into the interior. The lobes consist of an outer cortex rich with cells and an inner less dense medulla. The lobes are divided into smaller lobules 0.5–2 mm in diameter, between which extrude radiating insertions from the capsule along septa.
The cortex is mainly made up of thymocytes and epithelial cells. The thymocytes, immature T cells, are supported by a network of the finely-branched epithelial reticular cells, which is continuous with a similar network in the medulla. This network forms an adventitia to the blood vessels, which enter the cortex via septa near the junction with the medulla. Other cells are also present in the thymus, including macrophages, dendritic cells, and a small amount of B cells, neutrophils and eosinophils.
In the medulla, the network of epithelial cells is coarser than in the cortex, and the lymphoid cells are relatively fewer in number. Concentric, nest-like bodies called Hassall's corpuscles (also called thymic corpuscles) are formed by aggregations of the medullary epithelial cells. These are concentric, layered whorls of epithelial cells that increase in number throughout life. They are the remains of the epithelial tubes, which grow out from the third pharyngeal pouches of the embryo to form the thymus.
Blood and nerve supply
The arteries supplying the thymus are branches of the internal thoracic, and inferior thyroid arteries, with branches from the superior thyroid artery sometimes seen. The branches reach the thymus and travel with the septa of the capsule into the area between the cortex and medulla, where they enter the thymus itself; or alternatively directly enter the capsule.
The veins of the thymus, the thymic veins, end in the left brachiocephalic vein, internal thoracic vein, and in the inferior thyroid veins. Sometimes the veins end directly in the superior vena cava.
Lymphatic vessels travel only away from the thymus, accompanying the arteries and veins. These drain into the brachiocephalic, tracheobronchial and parasternal lymph nodes.
The nerves supplying the thymus arise from the vagus nerve and the cervical sympathetic chain. Branches from the phrenic nerves reach the capsule of the thymus but do not enter the thymus itself.
Variation
The two lobes differ slightly in size, with the left lobe usually higher than the right. Thymic tissue may be found scattered on or around the gland, and occasionally within the thyroid. The thymus in children stretches variably upwards, at times to as high as the thyroid gland.
Development
The thymocytes and the epithelium of the thymus have different developmental origins. The epithelium of the thymus develops first, appearing as two outgrowths, one on either side, of the third pharyngeal pouch. It sometimes also involves the fourth pharyngeal pouch. These extend outward and backward into the surrounding mesoderm and neural crest-derived mesenchyme in front of the ventral aorta. Here the thymocytes and epithelium meet and join with connective tissue. The pharyngeal opening of each diverticulum is soon obliterated, but the neck of the flask persists for some time as a cellular cord. By further proliferation of the cells lining the flask, buds of cells are formed, which become surrounded and isolated by the invading mesoderm.
The epithelium forms fine lobules, and develops into a sponge-like structure. During this stage, hematopoietic bone-marrow precursors migrate into the thymus. Normal development is dependent on the interaction between the epithelium and the hematopoietic thymocytes. Iodine is also necessary for thymus development and activity.
Involution
The thymus continues to grow after birth, reaching its relative maximum size by puberty. It is most active in fetal and neonatal life. It increases to a mass of 20 to 50 grams by puberty. It then begins to decrease in size and activity in a process called thymic involution. After the first year of life the number of T cells produced begins to fall. Fat and connective tissue fill a part of the thymic volume. During involution, the thymus decreases in size and activity. Fat cells are present at birth, but increase in size and number markedly after puberty, invading the gland from the walls between the lobules first, then into the cortex and medulla. This process continues into old age, when the thymus may be difficult to detect either with a microscope or with the naked eye, although it typically weighs 5–15 grams. Additionally, there is an increasing body of evidence showing that age-related thymic involution is found in most, if not all, vertebrate species with a thymus, suggesting that this is an evolutionarily conserved process.
The atrophy is due to the increased circulating level of sex hormones, and chemical or physical castration of an adult results in the thymus increasing in size and activity. Severe illness or human immunodeficiency virus infection may also result in involution.
Function
T cell maturation
The thymus facilitates the maturation of T cells, an important part of the immune system providing cell-mediated immunity. T cells begin as hematopoietic precursors from the bone-marrow, and migrate to the thymus, where they are referred to as thymocytes. In the thymus, they undergo a process of maturation, which involves ensuring the cells react against antigens ("positive selection"), but that they do not react against antigens found on body tissue ("negative selection"). Once mature, T cells emigrate from the thymus to provide vital functions in the immune system.
Each T cell has a distinct T cell receptor, suited to a specific substance, called an antigen. Most T cell receptors bind to the major histocompatibility complex on cells of the body. The MHC presents an antigen to the T cell receptor, which becomes active if this matches the specific T cell receptor. In order to be properly functional, a mature T cell needs to be able to bind to the MHC molecule ("positive selection"), and not to react against antigens that are actually from the tissues of body ("negative selection"). Positive selection occurs in the cortex and negative selection occurs in the medulla of the thymus. After this process T cells that have survived leave the thymus, regulated by sphingosine-1-phosphate. Further maturation occurs in the peripheral circulation. Some of this is because of hormones and cytokines secreted by cells within the thymus, including thymulin, thymopoietin, and thymosins.
Positive selection
T cells have distinct T cell receptors. These distinct receptors are formed by process of V(D)J recombination gene rearrangement stimulated by RAG1 and RAG2 genes. This process is error-prone, and some thymocytes fail to make functional T-cell receptors, whereas other thymocytes make T-cell receptors that are autoreactive. If a functional T cell receptor is formed, the thymocyte will begin to express simultaneously the cell surface proteins CD4 and CD8.
The survival and nature of the T cell then depends on its interaction with surrounding thymic epithelial cells. Here, the T cell receptor interacts with the MHC molecules on the surface of epithelial cells. A T cell with a receptor that doesn't react, or reacts weakly will die by apoptosis. A T cell that does react will survive and proliferate. A mature T cell expresses only CD4 or CD8, but not both. This depends on the strength of binding between the TCR and MHC class 1 or class 2. A T cell receptor that binds mostly to MHC class I tends to produce a mature "cytotoxic" CD8 positive T cell; a T cell receptor that binds mostly to MHC class II tends to produce a CD4 positive T cell.
Negative selection
T cells that attack the body's own proteins are eliminated in the thymus, called "negative selection". Epithelial cells in the medulla and dendritic cells in the thymus express major proteins from elsewhere in the body. The gene that stimulates this is AIRE. Thymocytes that react strongly to self antigens do not survive, and die by apoptosis. Some CD4 positive T cells exposed to self antigens persist as T regulatory cells.
Clinical significance
Immunodeficiency
As the thymus is where T cells develop, congenital problems with the development of the thymus can lead to immunodeficiency, whether because of a problem with the development of the thymus gland or a problem specific to thymocyte development. Immunodeficiency can be profound. Loss of the thymus at an early age through genetic mutation (as in DiGeorge syndrome, CHARGE syndrome, or a very rare "nude" thymus causing absence of hair and of the thymus) results in severe immunodeficiency and subsequent high susceptibility to infection by viruses, protozoa, and fungi. Nude mice, which have the very rare "nude" deficiency as a result of FOXN1 mutation, are a strain of research mice used as a model of T cell deficiency.
The most common congenital cause of thymus-related immune deficiency results from a deletion in the 22nd chromosome, called DiGeorge syndrome. This results in a failure of development of the third and fourth pharyngeal pouches, resulting in failure of development of the thymus and variable other associated problems, such as congenital heart disease, abnormalities of the mouth (such as cleft palate and cleft lip), failure of development of the parathyroid glands, and the presence of a fistula between the trachea and the oesophagus. Very low numbers of circulating T cells are seen. The condition is diagnosed by fluorescent in situ hybridization and treated with thymus transplantation.
Severe combined immunodeficiencies (SCID) are a group of rare congenital genetic diseases that can result in combined T, B, and NK cell deficiencies. These syndromes are caused by mutations that affect the maturation of the hematopoietic progenitor cells, which are the precursors of both B and T cells. A number of genetic defects can cause SCID, including loss of function of the IL-2 receptor gene, and mutation resulting in deficiency of the enzyme adenosine deaminase.
Autoimmune disease
Autoimmune polyendocrine syndrome
Autoimmune polyendocrine syndrome type 1 (APECED) is a rare genetic autoimmune syndrome that results from a genetic defect of the thymus tissues. Specifically, the disease results from defects in the autoimmune regulator (AIRE) gene, which stimulates expression of self antigens in the epithelial cells within the medulla of the thymus. Because of defects in this condition, self antigens are not expressed, resulting in T cells that are not conditioned to tolerate tissues of the body and may treat them as foreign, stimulating an immune response and resulting in autoimmunity. People with APECED develop an autoimmune disease that affects multiple endocrine tissues. Commonly affected organs include the thyroid gland (causing hypothyroidism) and the adrenal glands (causing Addison's disease), along with candida infection of body surfaces, including the inner lining of the mouth and the nails, due to dysfunction of TH17 cells; symptoms often begin in childhood. Many other autoimmune diseases may also occur. Treatment is directed at the affected organs.
Thymoma-associated multiorgan autoimmunity
Thymoma-associated multiorgan autoimmunity can occur in people with thymoma. In this condition, the T cells developed in the thymus are directed against the tissues of the body. This is because the malignant thymus is incapable of appropriately educating developing thymocytes to eliminate self-reactive T cells. The condition is virtually indistinguishable from graft versus host disease.
Myasthenia gravis
Myasthenia gravis is an autoimmune disease most often due to antibodies that block acetylcholine receptors, which are involved in signalling between nerves and muscles. It is often associated with thymic hyperplasia or thymoma, with antibodies produced probably because of T cells that develop abnormally. Myasthenia gravis most often develops between young and middle age, causing easy fatiguing of muscle movements. Investigations include demonstrating antibodies (such as against acetylcholine receptors or muscle-specific kinase) and CT scanning to detect thymoma. With regard to the thymus, removal of the thymus, called thymectomy, may be considered as a treatment, particularly if a thymoma is found. Other treatments include increasing the duration of acetylcholine action at nerve synapses by decreasing its rate of breakdown, using acetylcholinesterase inhibitors such as pyridostigmine.
Cancer
Thymomas
Tumours originating from the thymic epithelial cells are called thymomas. They most often occur in adults older than 40. Tumours are generally detected when they cause symptoms, such as a neck mass or affecting nearby structures such as the superior vena cava; detected because of screening in patients with myasthenia gravis, which has a strong association with thymomas and hyperplasia; and detected as an incidental finding on imaging such as chest X-rays. Hyperplasia and tumours originating from the thymus are associated with other autoimmune diseases – such as hypogammaglobulinemia, Graves disease, pure red cell aplasia, pernicious anaemia and dermatomyositis, likely because of defects in negative selection in proliferating T cells.
Thymomas can be benign; benign but, by virtue of expansion, invading beyond the capsule of the thymus ("invasive thymoma"); or malignant (a carcinoma). This classification is based on the appearance of the cells. A WHO classification also exists but is not used as part of standard clinical practice. Benign tumours confined to the thymus are most common, followed by locally invasive tumours and then by carcinomas. There is variation in reporting, with some sources reporting malignant tumours as more common. Invasive tumours, although not technically malignant, can still spread (metastasise) to other areas of the body. Even though thymomas arise from epithelial cells, they can also contain thymocytes. Treatment of thymomas often requires surgery to remove the entire thymus. This may also result in temporary remission of any associated autoimmune conditions.
Lymphomas
Tumours originating from T cells of the thymus form a subset of acute lymphoblastic leukaemia (ALL). These are similar in symptoms, investigation approach and management to other forms of ALL. Symptoms that develop, like other forms of ALL, relate to deficiency of platelets, resulting in bruising or bleeding; immunosuppression resulting in infections; or infiltration by cells into parts of the body, resulting in an enlarged liver, spleen, lymph nodes or other sites. Blood test might reveal a large amount of white blood cells or lymphoblasts, and deficiency in other cell lines – such as low platelets or anaemia. Immunophenotyping will reveal cells that are CD3, a protein found on T cells, and help further distinguish the maturity of the T cells. Genetic analysis including karyotyping may reveal specific abnormalities that may influence prognosis or treatment, such as the Philadelphia translocation. Management can include multiple courses of chemotherapy, stem cell transplant, and management of associated problems, such as treatment of infections with antibiotics, and blood transfusions. Very high white cell counts may also require cytoreduction with apheresis.
Tumours originating from the small population of B cells present in the thymus lead to primary mediastinal large B-cell lymphomas. These are a rare subtype of non-Hodgkin lymphoma, although by the activity of their genes and occasionally their microscopic shape they unusually also have characteristics of Hodgkin lymphomas. They occur most commonly in the young and middle-aged, and are more prominent in females. Most often, when symptoms occur it is because of compression of structures near the thymus, such as the superior vena cava or the upper respiratory tract; when lymph nodes are affected it is often in the mediastinal and neck groups. Such tumours are often detected with a biopsy that is subject to immunohistochemistry. This will show the presence of clusters of differentiation, cell surface proteins – namely CD30, with CD19, CD20 and CD22, and with the absence of CD15. Other markers may also be used to confirm the diagnosis. Treatment usually includes the typical regimens of CHOP or EPOCH, or other regimens generally including cyclophosphamide, an anthracycline, prednisone, and other chemotherapeutics, and potentially also a stem cell transplant.
Thymic cysts
The thymus may contain cysts, usually less than 4 cm in diameter. Thymic cysts are usually detected incidentally and do not generally cause symptoms. Thymic cysts can occur along the neck or in the chest (mediastinum). Cysts usually just contain fluid and are lined by either many layers of flat cells or column-shaped cells. Despite this, the presence of a cyst can cause problems similar to those of thymomas, by compressing nearby structures, and some may contain internal walls and be difficult to distinguish from tumours. When cysts are found, investigation may include a workup for tumours, which may include CT or MRI scan of the area the cyst is suspected to be in.
Surgical removal
Thymectomy is the surgical removal of the thymus. The usual reason for removal is to gain access to the heart for surgery to correct congenital heart defects in the neonatal period. Other indications for thymectomy include the removal of thymomas and the treatment of myasthenia gravis. In neonates the relative size of the thymus obstructs surgical access to the heart and its surrounding vessels.
Removal of the thymus in infancy results in often fatal immunodeficiency, because functional T cells have not developed. In older children and adults, who have a functioning lymphatic system with mature T cells also situated in other lymphoid organs, the effect is reduced, but includes failure to mount immune responses against new antigens, an increase in cancers, and an increase in all-cause mortality.
Society and culture
When used as food for humans, the thymus of animals is known as one of the kinds of sweetbread.
History
The thymus was known to the ancient Greeks, and its name comes from the Greek word θυμός (thumos), meaning "anger", or in Ancient Greek, "heart, soul, desire, life", possibly because of its location in the chest, near where emotions are subjectively felt; or else the name comes from the herb thyme (also in Greek θύμος or θυμάρι), which became the name for a "warty excrescence", possibly due to its resemblance to a bunch of thyme.
Galen was the first to note that the size of the organ changed over the duration of a person's life.
In the 19th century, a condition was identified as status thymicolymphaticus defined by an increase in lymphoid tissue and an enlarged thymus. It was thought to be a cause of sudden infant death syndrome but is now an obsolete term.
The importance of the thymus in the immune system was discovered in 1961 by Jacques Miller, by surgically removing the thymus from one-day-old mice, and observing the subsequent deficiency in a lymphocyte population, subsequently named T cells after the organ of their origin. Until the discovery of its immunological role, the thymus had been dismissed as an "evolutionary accident", without functional importance. The role the thymus played in ensuring mature T cells tolerated the tissues of the body was uncovered in 1962, with the finding that T cells of a transplanted thymus in mice demonstrated tolerance towards tissues of the donor mouse. B cells and T cells were identified as different types of lymphocytes in 1968, and the fact that T cells required maturation in the thymus was understood. The subtypes of T cells (CD8 and CD4) were identified by 1975. The way that these subclasses of T cells matured – positive selection of cells that functionally bound to MHC receptors – was known by the 1990s. The important role of the AIRE gene, and the role of negative selection in preventing autoreactive T cells from maturing, was understood by 1994.
Recently, advances in immunology have allowed the function of the thymus in T-cell maturation to be more fully understood.
Other animals
The thymus is present in all jawed vertebrates, where it undergoes the same shrinkage with age and plays the same immunological function as in other vertebrates. In 2011, a discrete thymus-like lympho-epithelial structure, termed the thymoid, was discovered in the gills of larval lampreys. Hagfish possess a protothymus associated with the pharyngeal velar muscles, which is responsible for a variety of immune responses.
The thymus is also present in most other vertebrates with similar structure and function as the human thymus. A second thymus in the neck has been reported sometimes to occur in the mouse. As in humans, the guinea pig's thymus naturally atrophies as the animal reaches adulthood, but the athymic hairless guinea pig (which arose from a spontaneous laboratory mutation) possesses no thymic tissue whatsoever, and the organ cavity is replaced with cystic spaces.
Additional images
References
Books
External links
T cell development in the thymus. Video by Janice Yau, describing stromal signaling and tolerance. Department of Immunology and Biomedical Communications, University of Toronto. Master's Research Project, Master of Science in Biomedical Communications. 2011.
Endocrine system anatomy
Immune system
Lymphatic system
Lymphatics of the torso
Lymphoid organ
Mammal anatomy
Organs (anatomy) | Thymus | [
"Biology"
] | 5,443 | [
"Immune system",
"Organ systems"
] |
56,300 | https://en.wikipedia.org/wiki/Nanotech%20%28anthology%29 | Nanotech is a 1998 anthology of science fiction short stories revolving around nanotechnology and its effects. It is edited by American writers Jack Dann and Gardner Dozois.
Contents
"Blood Music" by Greg Bear
"Margin of Error" by Nancy Kress
"Axiomatic" by Greg Egan
"Remember'd Kisses" by Michael F. Flynn
"Recording Angel" by Ian McDonald
"Sunflowers" by Kathleen Ann Goonan
"The Logic Pool" Stephen Baxter
"Any Major Dude" Paul Di Filippo
"We Were Out of Our Minds with Joy" by David Marusek
"Willy in the Nano-lab" by Geoffrey A. Landis
External links
1998 anthologies
Jack Dann and Gardner Dozois Ace anthologies
Ace Books books
Fiction about nanotechnology | Nanotech (anthology) | [
"Materials_science"
] | 156 | [
"Fiction about nanotechnology",
"Nanotechnology"
] |
56,313 | https://en.wikipedia.org/wiki/Zoning | In urban planning, zoning is a method in which a municipality or other tier of government divides land into "zones", each of which has a set of regulations for new development that differs from other zones. Zones may be defined for a single use (e.g. residential, industrial), they may combine several compatible activities by use, or in the case of form-based zoning, the differing regulations may govern the density, size and shape of allowed buildings whatever their use. The planning rules for each zone determine whether planning permission for a given development may be granted. Zoning may specify a variety of outright and conditional uses of land. It may indicate the size and dimensions of lots that land may be subdivided into, or the form and scale of buildings. These guidelines are set in order to guide urban growth and development.
Zoning is the most common regulatory urban planning method used by local governments in developed countries. Exceptions include the United Kingdom and the City of Houston, Texas.
Most zoning systems have a procedure for granting variances (exceptions to the zoning rules), usually because of some perceived hardship caused by the particular nature of the property in question.
History
The origins of zoning districts can be traced back to antiquity. The ancient walled city was the predecessor for classifying and regulating land based on use. Outside the city walls were the undesirable functions, which were usually based on noise and smell. The space between the walls was where unsanitary and dangerous activities occurred, such as butchering, waste disposal, and brick-firing. Within the walls were civic and religious places, and where the majority of people lived.
Beyond distinguishing between urban and non-urban land, most ancient cities further classified land types and uses inside their walls. This was practiced in many regions of the world – for example, in China during the Zhou Dynasty (1046 – 256 BC), in India during the Vedic Era (1500 – 500 BC), and in the military camps that spread throughout the Roman Empire (31 BC – 476 AD).
Throughout the Age of Enlightenment and Industrial Revolution, cultural and socio-economic shifts led to the rapid increase in the enforcement and invention of urban regulations. The shifts were informed by a new scientific rationality, the advent of mass production and complex manufacturing, and the subsequent onset of urbanisation. Industry leaving the home reshaped modern cities. The definition of home was tied to the definition of economy, which caused a much greater mixing of uses within the residential quarters of cities.
Separation between uses is a feature of many planned cities designed before the advent of zoning. A notable example is Adelaide in South Australia, whose city centre, along with the suburb of North Adelaide, is surrounded on all sides by a park, the Adelaide Park Lands. The park was designed by Colonel William Light in 1836 in order to physically separate the city centre from its suburbs. Low density residential areas surround the park, providing a pleasant walk between work in the city within and the family homes outside.
Sir Ebenezer Howard, founder of the garden city movement, cited Adelaide as an example of how green open space could be used to prevent cities from expanding beyond their boundaries and coalescing. His design for an ideal city, published in his 1902 book Garden Cities of To-morrow, envisaged separate concentric rings of public buildings, parks, retail space, residential areas and industrial areas, all surrounded by open space and farmland. All retail activity was to be conducted within a single glass-roofed building, an early concept for the modern shopping centre inspired by the Crystal Palace.
However, these planned or ideal cities were static designs embodied in a single masterplan. What was lacking was a regulatory mechanism to allow the city to develop over time, setting guidelines to developers and private citizens over what could be built where. The first modern zoning systems were applied in the United States with the Los Angeles zoning ordinances of 1904 and the New York City 1916 Zoning Resolution.
Types
There are a great variety of zoning types, some of which focus on regulating building form and the relation of buildings to the street with mixed uses, known as form-based, others on separating land uses, known as use-based, or a combination thereof.
The main approaches include use-based, form-based, performance and incentive zoning. There are also several additional zoning provisions used in combination with the main approaches.
Main approaches to zoning
Use-based zoning
Use-based or functional zoning systems can comprise single-use zones, mixed-use zones—where a compatible group of uses are allowed to co-exist—or a combination of both single- and mixed-use zones in one system.
Single-use zoning
The primary purpose of single-use zoning is to geographically separate uses that are thought to be incompatible. In practice, zoning is also used to prevent new development from interfering with existing uses and/or to preserve the character of a community.
Single-use zoning is where only one kind of use is allowed per zone, or district. It is also known as exclusionary zoning or, in the United States, as Euclidean zoning because of a court case in Euclid, Ohio, Village of Euclid, Ohio v. Ambler Realty Co. (1926), which established its constitutionality. It has been the dominant system of zoning in North America, especially the United States, since its first implementation.
Commonly defined single-use districts include: residential, commercial, and industrial. Each category can have a number of sub-categories, for example, within the commercial category there may be separate districts for small retail, large retail, office use, lodging and others, while industrial may be subdivided into heavy manufacturing, light assembly and warehouse uses. Special districts may also be created for purposes like public facilities, recreational amenities, and green space.
The application of single-use zoning has led to the distinctive form of many cities in the United States, Canada, Australia, and New Zealand, in which a very dense urban core, often containing skyscrapers, is surrounded by low density residential suburbs, characterised by large gardens and leafy streets. Some metropolitan areas such as Minneapolis–Saint Paul and Sydney have several such cores.
Mixed-use zoning
Mixed-use zoning combines residential, commercial, office, and public uses into a single space. Mixed-use zoning can be vertical, within a single building, or horizontal, involving multiple buildings.
Planning and community activist Jane Jacobs wrote extensively on the connections between the separation of uses and the failure of urban renewal projects in New York City. She advocated dense mixed-use developments and walkable streets. In contrast to villages and towns, in which many residents know one another, and low-density outer suburbs that attract few visitors, cities and inner city areas have the problem of maintaining order between strangers. This order is maintained when, throughout the day and evening, there are sufficient people present with eyes on the street. This can be accomplished in successful urban districts that have a great diversity of uses, creating interest and attracting visitors. Jacobs' writings, along with increasing concerns about urban sprawl, are often credited with inspiring the New Urbanism movement.
To accommodate the New Urbanist vision of walkable communities combining cafés, restaurants, offices and residential development in a single area, mixed-use zones have been created within some zoning systems. These still use the basic regulatory mechanisms of zoning, excluding incompatible uses such as heavy industry or sewage farms, while allowing compatible uses such as residential, commercial and retail activities so that people can live, work and socialise within a compact geographic area.
The mixing of land uses is common throughout the world. Mixed-use zoning has particular relevance in the United States, where it is proposed as a remedy to the problems caused by widespread single-use zoning.
Form-based zoning
Form-based or intensity zoning regulates not the type of land use, but the form that land use may take. For instance, form-based zoning in a dense area may insist on low setbacks, high density, and pedestrian accessibility. Form-based codes (FBCs) are designed to directly respond to the physical structure of a community in order to create more walkable and adaptable environments.
Form-based zoning codes have five main elements: a regulating plan, public standards, building standards, administration, and precise definitions of technical terms. Form-based codes recognize the interrelated nature of all components of land-use planning—zoning, subdivision, and public works—and integrate them to define districts based on the community's desired character and intensity of development.
The French planning system is mostly form-based; zones in French cities generally allow many types of uses. The city of Paris has used its zoning system to concentrate high-density office buildings in the district of La Défense rather than allow heritage buildings across the city to be demolished to make way for them, as is often the case in London or New York. The construction of the Montparnasse Tower in 1973 led to an outcry. As a result, two years after its completion the construction of buildings over seven storeys high in the city centre was banned.
Performance zoning
Performance zoning, also known as flexible or impact zoning or effects-based planning, was first advocated by Lane Kendig in 1973. It uses performance-based or goal-oriented criteria to establish review parameters for proposed development projects. Performance zoning may use a menu of compliance options where a property developer can earn points or credits for limiting environmental impacts, including affordable housing units, or providing public amenities. In addition to the menu and points system, there may be additional discretionary criteria included in the review process. Performance zoning may be applied only to a specific type of development, such as housing, and may be combined with a system of use-based districts.
Performance zoning is flexible, logical, and transparent while offering a form of accountability. These qualities are in contrast with the seemingly arbitrary nature of use-based zoning. Performance zoning can also fairly balance a region's environmental and housing needs across local jurisdictions. Performance zoning balances principles of markets and private property rights with environmental protection goals. However, performance zoning can be extremely difficult to implement due to the complexity of preparing an impact study for each project, and can require the supervising authority to exercise a lot of discretion. Performance zoning has not been adopted widely in the US.
Incentive zoning
Incentive zoning allows property developers to develop land more intensively, such as with greater density or taller buildings, in exchange for providing some public benefits, such as environmental amenities or affordable housing units. The public benefits most often incentivised by US cities are "mixed-use development, open space conservation, walkability, affordable housing, and public parks."
Incentive zoning allows for a high degree of flexibility, but may be complex to administer. The more a proposed development takes advantage of incentive criteria, the more closely it has to be reviewed on a discretionary basis. The initial creation of the incentive structure in order to best serve planning priorities also may be challenging and often requires extensive ongoing revision to maintain balance between incentive magnitude and value given to developers. Incentive zoning may be most effective in communities with well-established standards and where demand for both land and for specific amenities is high. However, hidden costs may still offset its benefits. Incentive zoning has also been criticized for increasing traffic, reducing natural light, and offering developers larger rewards than those reaped by the public.
Additional provisions
Additional zoning provisions exist that are not their own distinct types of zoning but seek to improve existing varieties through the incorporation of flexible practices and other elements such as information and communication technologies (ICTs).
Smart zoning
Smart zoning is a broad term that consists of several alternatives to use-based zoning that incorporate information and communication technologies. There are a number of different techniques to accomplish smart zoning. Floating zones, cluster zoning, and planned unit developments (PUDs) are possible—even as the conventional use-based code exists—or the conventional code may be completely replaced by a smart performance or form-based code, as the city of Miami did in 2009. The incorporation of ICTs to measure metrics such as walkability, and the flexibility and adaptability that smart zoning can provide, have been cited as advantages of smart zoning over "non-smart" performance or form-based codes.
Floating zones
Floating zones describe a zoning district's characteristics and codify requirements for its establishment, but its location remains unspecified until conditions exist to implement that type of zoning district. When the criteria for implementation of a floating zone are met, the floating zone ceases "to float" and its location is established by a zoning amendment.
Cluster zoning
Cluster zoning permits residential uses to be clustered more closely together than normally allowed, thereby leaving substantial land area to be devoted to open space. Cluster zoning has been favored for its preservation of open space and reduction in construction and utility costs via consolidation, although existing residents may often disapprove due to a reduction in lot sizes.
Planned unit development (PUD)
The term planned unit development (PUD) can refer either to the regulatory process or to the development itself. A PUD groups multiple compatible land uses within a single unified development. A PUD can be residential, mixed-use, or a larger master-planned community. Rather than being governed by standard zoning ordinances, the developer negotiates terms with the local government. At best, a PUD provides flexibility to create convenient ways for residents to access commercial and other amenities. In the US, residents of a PUD have an ongoing role in management of the development through a homeowner's association.
Pattern zoning
Pattern zoning is a zoning technique in which a municipality provides licensed, pre-approved building designs, typically with an expedited permitting process. Pattern zoning is used to reduce barriers to housing development, create more affordable housing, reduce burdens on permit-review staff, and create quality housing designs within a certain neighborhood or jurisdiction. Pattern zoning may also be used to promote certain building types such as missing middle housing and affordable small-scale commercial properties. In some cases, a municipality purchases design patterns and constructs the properties themselves while in other cases the municipality offers the patterns for private development.
Hybrid zoning
A hybrid zoning code combines two or more approaches, often use-based and form-based zoning. Hybrid zoning can be used to introduce form and design considerations into an existing community's zoning without completely rewriting the zoning ordinance.
Composite zoning is a particular type of hybrid zoning that combines use, form, and site design components:
the use component establishes how land can be used within a district, as in use-based or functional zoning;
the form (also known as architectural) component sets standards for building design, such as height and facades;
the site design component specifies how buildings are situated on the site, such as setbacks and open space.
An advantage of composite zoning is the ability to create flexible zoning districts for smoother transitions between adjacent properties with different uses.
Inclusionary zoning
Inclusionary zoning refers to policies to increase the number of housing units within a development that are affordable to low and middle-income households. These policies can be mandatory as part of performance zoning or based on voluntary incentives, such as allowing greater density of development.
Overlay zoning
An overlay zone is a zoning district that overlaps one or more zoning districts to address a particular concern or feature of that area, such as wetlands, historic buildings or transit-oriented development. Overlay zoning has the advantage of providing targeted regulation to address a specific issue, such as a natural hazard, without having to significantly rewrite an existing zoning ordinance. However, development of overlay zoning regulation often requires significant technical expertise.
Transferable development rights
Transferable development rights, also known as transfer of development credits and transferable development units, are based on the concept that with land ownership comes the right of use of land, or land development. These land-based development rights can, in some jurisdictions, be used, unused, sold, or otherwise transferred by the owner of a parcel. These are typically used to transfer development rights from rural areas (sending sites) to urban areas (receiving sites) with more demand and infrastructure to support development.
Spot zoning
Spot zoning is a controversial practice in which a small part of a larger zoning district is rezoned in a way that is not consistent with the community's broader planning process. While a jurisdiction can rezone even a single parcel of land in some cases, spot zoning is often disallowed when the change would conflict with the policies and objectives of existing land-use plans. Other factors that may be considered in these cases are the size of the parcel, the zoning categories involved, how adjacent properties are zoned and used, and expected benefits and harms to the landowner, neighbors, and community.
Conditional zoning
Conditional zoning is a legislative process in which site-specific standards and conditions become part of the zoning ordinance at the request of the property owner. The conditions may be more or less restrictive than the standard zoning. Conditional zoning can be considered spot zoning and can be challenged on those grounds.
Conditional zoning should not be confused with conditional-use permits (also called special-use permits), a quasi-judicial process that enables land uses that, because of their special nature, may be suitable only in certain locations, or when arranged or operated in a particular manner. Uses which might be disallowed under current zoning, such as a school or a community center, can be permitted via conditional-use permits.
Contract zoning
Contract zoning is a practice in which a property owner and a local government enter into a bilateral agreement to rezone a property in exchange for a commitment from the developer. It typically involves loosening restrictions on how the property can be used. Contract zoning is controversial and sometimes prohibited because it deviates from the broader planning process and has been considered an illegal bargaining away of the government's police power to enforce zoning.
Fiscal zoning
Fiscal zoning is a controversial practice in which local governments use land use regulation, including zoning, to encourage land uses that generate high tax revenue and exclude uses that place a high demand on public services.
Effectiveness and criticism
Environmental activists argue that putting everyday uses out of walking distance of each other leads to an increase in traffic, since people must own cars and drive to meet their basic needs throughout the day. Single-use zoning and urban sprawl have also been criticized as making work–family balance more difficult to achieve, as greater distances need to be covered in order to integrate the different life domains. These issues are especially acute in the United States, with its high level of car usage combined with insufficient or poorly maintained urban rail and metro systems.
Some economists claim that zoning laws work against economic efficiency, reduce responsiveness to consumer demands, and hinder development in a free economy, as poorly designed zoning restrictions prevent the more efficient use of a given area. Even without zoning restrictions, a landfill, for example, would likely gravitate to cheaper land rather than a residential area. Single-use zoning laws can get in the way of creative developments like mixed-use buildings and can even stop harmless activities like yard sales. Houston's example of non-zoning, or private zoning, with no restrictions on particular land uses but with other development codes, shows a combination of private and public planning.
Other critics of zoning argue that zoning laws are a disincentive to provide housing, which results in an increase in housing costs and a decrease in productive economic output. For example, a 2017 study showed that if all states deregulated their zoning laws even halfway to the level of Texas, a state known for low zoning regulation, their GDP would increase by 12 percent due to more productive workers and greater opportunity. Furthermore, critics note that zoning impedes the ability of those who wish to provide charitable housing to do so. For example, in 2022, Gloversville's Free Methodist Church in New York wished to provide 40 beds for the homeless population in −4 degree weather and was prevented from doing so.
Corruption is a challenge for zoning. Some have argued that zoning laws increase economic inequality, and empirical estimates of zoning's effectiveness suggest that some zoning approaches can contribute to housing crises.
Alternatives
In Houston, Texas, the lack of a local zoning ordinance means that property owners make heavy use of deed restrictions to prevent unwanted development. This practice is sometimes known as "private zoning". Non-zoned land regulations can still include requirements like minimum lot size and setbacks.
By country
Australia
The legal framework for land use zoning in Australia is established by the States and Territories, so each State or Territory has different zoning rules. Land use zones are generally defined at the local government level, most often in instruments called Planning Schemes. In practice, however, state governments retain the ability to overrule local decision-making in all cases. There are administrative appeal processes, such as VCAT, for challenging decisions.
Statutory planning, otherwise known as town planning, development control or development management, refers to the part of the planning process that is concerned with the regulation and management of changes to land use and development. Planning and zoning have a significant political dimension, with governments often criticized for favouring developers; NIMBYism is also very prevalent.
Canada
In Canada, land-use control is a provincial responsibility deriving from the constitutional authority over property and civil rights. This authority had been granted to the provinces under the British North America Acts of 1867 and was carried forward in the Constitution Act, 1982. The zoning power relates to real property, or land and the improvements constructed thereon that become part of the land itself (in Québec, immeubles). The provinces empowered the municipalities and regions to control the use of land within their boundaries, letting the municipalities establish their own zoning by-laws. There are provisions for control of land use in unorganized areas of the provinces. Provincial tribunals are the ultimate authority for appeals and reviews.
France
In France, the Code of Urbanism or Code de l’urbanisme (called the Town Planning Code), a national law, guides regional and local planning and outlines procedures for obtaining building permits. Unlike England where planners must use their discretion to allow use or building type changes, private development in France is permitted as long as the developer follows the legally-binding regulations.
Japan
Zoning districts are classified into twelve use zones. Each zone determines a building's shape and permitted uses. A building's shape is controlled by zonal restrictions on allowable floor area ratio and height (in absolute terms and in relation with adjacent buildings and roads). These controls are intended to allow adequate light and ventilation between buildings and on roads. Instead of single-use zoning, zones are defined by the "most intense" use permitted. Uses of lesser intensity are permitted in zones where higher intensity uses are permitted but higher intensity uses are not allowed in lower intensity zones.
New Zealand
New Zealand's planning system is grounded in effects-based performance zoning under the Resource Management Act.
Philippines
Zoning and land use planning in the Philippines is governed by the Department of Human Settlements and Urban Development (DHSUD) and previously by the Housing and Land Use Regulatory Board (HLURB), which lays out national zoning guidelines and regulations, and oversees the preparation and implementation of comprehensive land use plans (CLUPs) and zoning ordinances by city and municipal governments under their mandate in the Local Government Code of 1991 (Republic Act No. 7160).
The present zoning scheme used in the Philippines is detailed in the HLURB's Model Zoning Ordinance published in 2014, which outlines 26 basic zone types based on primary usage and building regulations (as defined in the National Building Code), and also includes public domain and water bodies within the municipality's jurisdiction. Local governments may also add overlays identifying special use zones such as areas prone to natural disasters, ancestral lands of indigenous peoples (IPs), heritage zones, ecotourism areas, transit-oriented developments (TODs), and scenic corridors. Residential and commercial zones are further subdivided into subclasses defined by density, commercial zones also allow for residential uses, and industrial zones are subdivided by their intensity and the environmental impact of the uses allowed. Regulations on residential, commercial, and industrial zones may differ between municipalities, so one municipality may permit 4-storey buildings on medium-density residential zones, while another may only permit 2-storey buildings.
Singapore
The framework for governing land uses in Singapore is administered by the Urban Redevelopment Authority (URA) through the Master Plan. The Master Plan is a statutory document divided into two sections: the plans and the Written Statement. The plans show the land use zoning allowed across Singapore, while the Written Statement provides a written explanation of the zones available and their allowed uses.
South Africa
There are five zoning categories in South Africa: residential, business, industrial, agricultural, and open space zoning. These five categories are further classified into subcategories. The zoning categories are governed by the Spatial Planning and Land Use Management Act enacted in 2016. Changing a land use from one zone to another requires a process of rezoning.
United Kingdom
The United Kingdom does not use zoning as a technique for controlling land use. British land use control began its modern phase after the Town and Country Planning Act of 1947. Rather than dividing municipal maps into land use zones, English planning law places all development under the control of local and regional governments, effectively abolishing the ability to develop land by-right. However, existing development allows land use by-right as long as the use does not constitute a change in the type of land use. A property owner must apply to change land use type of any existing building, and such changes must be consistent with the local and regional land use plans.
Development control or planning control is the element of the United Kingdom's system of town and country planning through which local government regulates land use and new building. There are 421 Local Planning Authorities (LPAs) in the United Kingdom. Generally they are the local borough or district council or a unitary authority. They each use a discretionary "plan-led system" whereby development plans are formed and the public consulted. Subsequent development requires planning permission, which will be granted or refused with reference to the development plan as a material consideration.
The plan does not provide specific guidance on what type of buildings will be allowed in a given location, rather it provides general principles for development and goals for the management of urban change. Because planning committees (made up of directly elected local councillors) or in some cases planning officers themselves (via delegated decisions) have discretion on each application for development or change of use made, the system is considered a 'discretionary' one.
Planning applications can differ greatly in scale, from airports and new towns to minor modifications to individual houses. In order to prevent local authorities from being overwhelmed by high volumes of small-scale applications from individual householders, a separate system of permitted development has been introduced. Permitted development rules are largely form-based but, in the absence of zoning, are applied at the national level. Examples include allowing a two-storey extension up to three metres at the rear of a property, extensions up to 50% of the original width at each side, and certain types of outbuildings in the garden, provided that no more than 50% of the land area is built over. These are appropriately sized for a typical three-bedroom semi-detached property, but must be applied across a wide variety of housing types, from small terraces to larger detached properties and manor houses.
In August 2020, the UK Government published a consultation document called Planning for the Future. The proposals hinted at a move toward zoning, with areas given a Growth, Renewal or Protected designation, and the possibility of "sub-areas within each category", although the document did not elaborate on the details. Nothing was done with these proposals, and following the 2024 general election there are no plans for the UK to adopt zoning within its planning system.
United States
Zoning in the United States rests on the police power that state governments may exercise over private real property. Under this power, special laws and regulations have long been made restricting the places where particular types of business can be carried on. In 1904, Los Angeles established the nation's first land-use restrictions for a portion of the city. New York City adopted the first zoning regulations to apply city-wide in 1916.
The constitutionality of zoning ordinances was upheld by the U.S. Supreme Court in the 1926 case Village of Euclid, Ohio v. Ambler Realty Co. Among large populated cities in the United States, Houston is unique in having no zoning ordinances. The city instead has a proliferation of private deed restrictions and retains government regulations like minimum lot size and setbacks.
Scale
Early zoning practices were subtle and often debated. Some claim the practices started in the 1920s, while others suggest the birth of zoning occurred in New York in 1916. Both of these examples of the start of zoning, however, were urban cases. Zoning became an increasing legal force as it expanded in geographical range through its introduction in other urban centres and its use within larger political and geographical boundaries. Regional zoning was the next step up in the geographical size of areas under zoning laws. A major difference between urban zoning and regional zoning was that "regional areas consequently seldom bear direct relationship to arbitrary political boundaries". This form of zoning also included rural areas, which ran counter to the theory that zoning was a result of population density. Finally, zoning expanded once more, returning to a political boundary, with state zoning.
Types in use in the United States
Use-based zoning, especially single-use zoning, is by far the most common type of zoning in the US, where it is known as Euclidean zoning, after Euclid, Ohio's role in a landmark U.S. Supreme Court case, Village of Euclid v. Ambler Realty Co.
Single-use zoning in the United States
Single-use zoning takes two forms, flat and hierarchical, also known as cumulative or pyramidal. Under flat zoning, each district is strictly designated for one use. In a simple hierarchical zoning system, districts are organized with residential (the most sensitive and least disruptive category) at the top, followed by commercial and industrial. Residential and commercial buildings are allowed in industrial zones and residential buildings are allowed in commercial zones. More complex hierarchical systems account for multiple levels within categories, such as multiple types of residential buildings in multifamily residential districts. Hierarchical zoning generally fell out of favor in the United States in the mid-twentieth century, with flat zoning becoming more popular, although many municipalities still incorporate some degree of hierarchy in their zoning ordinances.
Single-use zoning is used by many municipalities due to its ease of implementation (one set of explicit, prescriptive rules), long-established legal precedent, and familiarity to planners and design professionals. Single-use zoning has been criticized, however, for its lack of flexibility. Separation of uses can contribute to urban sprawl, loss of open space, heavy infrastructure costs, and automobile dependency. In particular, single-family zoning, residential districts where only single-family homes can be built, has been widely criticized as a cause of sprawl and racial segregation.
Social problems in the United States
The United States suffers from greater levels of deurbanization and urban decay than other developed countries, and additional problems, such as urban prairies, that do not occur elsewhere. Jonathan Rothwell has argued that zoning encourages racial segregation. He claims a strong relationship exists between an area's allowance of building housing at higher density and racial integration between blacks and whites in the United States. Rothwell and Massey explain the relationship between segregation and density as restrictive density zoning producing higher housing prices in white areas and limiting opportunities for people with modest incomes to leave segregated areas. Between 1980 and 2000, racial integration occurred faster in areas that did not have strict density regulations than in those that did.
Rothwell and Massey suggest homeowners and business interests are the two key players in density regulations that emerge from a political economy. They propose that in older states where rural jurisdictions are primarily composed of homeowners, it is in the narrow interests of homeowners to block development, because tax rates are lower in rural areas and taxation is more likely to fall on the median homeowner. Business interests are unable to counteract the homeowners' interests in rural areas because business interests are weaker and business ownership is rarely controlled by people living outside the community. This translates into rural communities that tend to resist development by using density regulations to make business opportunities less attractive. Density zoning regulations in the U.S. increase residential segregation in metropolitan areas by reducing the availability of affordable housing in some jurisdictions; other zoning regulations, like school infrastructure regulations and growth controls, are also associated with higher segregation. With more permissive zoning regulations there are lower levels of segregation; desegregation is higher in places with more liberal zoning regulations, allowing residents to integrate racially. Metropolitan areas that allowed higher-density development moved more rapidly toward racial integration than their counterparts with strict density limitations. The greater the allowable density, the lower the level of racial segregation.
Zoning laws that limit the construction of new housing (like single-family zoning) are associated with reduced affordability and are a major factor in residential segregation in the United States by income and race.
See also
Activity centre
Agricultural protection zoning
Context theory
Ekistics
Exclusionary zoning
Fenceline community
Form-based codes
Greenspace (disambiguation)
Open space reserve
Urban open space
Inclusionary zoning
Locally unwanted land use
Mixed use development
New urbanism
NIMBY
Non-conforming use
Planning permission
Police power
Principles of Intelligent Urbanism
Reverse sensitivity
Road
Single-use zoning
Spot zoning
Statutory planning
Subdivision (land)
Traffic
Variance (land use)
YIMBY
Zoning district
Zoning in the United States
References
Further reading
Taylor, George Town Planning for Australia (Studies in International Planning History), Routledge, 2018.
Gurran, N., Gallent, N. and Chiu, R.L.H. Politics, Planning and Housing Supply in Australia, England and Hong Kong (Routledge Research in Planning and Urban Design), Routledge, 2016.
Bassett, E.M. The master plan, with a discussion of the theory of community land planning legislation. New York: Russell Sage foundation, 1938.
Bassett, E. M. Zoning. New York: Russell Sage Foundation, 1940
Hirt, Sonia. Zoned in the USA: The Origins and Implications of American Land-Use Regulation (Cornell University Press, 2014) 245 pp. online review
Stephani, Carl J. and Marilyn C. ZONING 101, originally published in 1993 by the National League of Cities, now available in a Third Edition, 2012.
External links
ZoningPoint – A searchable database of zoning maps and zoning codes for every county and municipality in the United States.
Crenex – Zoning Maps – Links to zoning maps and planning commissions of 50 most populous cities in the US.
New York City Department of City Planning – Zoning History
Michigan State University Extension Planning & Zoning information
Schindler's Land Use Page (Michigan State University Extension Land Use Team)
Zoning Compliance and Zoning Certification - Analysis and Reporting
Land Policy Institute at Michigan State University
Karkkainen, Bradley C. (1994). Zoning: A Reply to the Critics, Journal of Land Use & Environmental Law
Urban planning | Zoning | [
"Engineering"
] | 7,209 | [
"Construction",
"Urban planning",
"Zoning",
"Architecture"
] |
56,353 | https://en.wikipedia.org/wiki/Linear%20span | In mathematics, the linear span (also called the linear hull or just span) of a set S of elements of a vector space V is the smallest linear subspace of V that contains S. It is the set of all finite linear combinations of the elements of S, and the intersection of all linear subspaces that contain S. It is often denoted span(S) or ⟨S⟩.
For example, in geometry, two linearly independent vectors span a plane.
To express that a vector space V is a linear span of a subset S, one commonly uses one of the following phrases: S spans V; S is a spanning set of V; V is spanned or generated by S; S is a generator set or a generating set of V.
Spans can be generalized to many mathematical structures, in which case the smallest substructure containing S is generally called the substructure generated by S.
Definition
Given a vector space V over a field K, the span of a set S of vectors (not necessarily finite) is defined to be the intersection W of all subspaces of V that contain S. It is thus the smallest (for set inclusion) subspace of V containing S. It is referred to as the subspace spanned by S, or by the vectors in S. Conversely, S is called a spanning set of W, and we say that S spans W.
It follows from this definition that the span of S is the set of all finite linear combinations of elements (vectors) of S, and can be defined as such. That is,
span(S) = {λ1v1 + λ2v2 + ⋯ + λkvk : k ∈ N, v1, ..., vk ∈ S, λ1, ..., λk ∈ K}.
When S is empty, the only possibility is W = {0}, and the previous expression for span(S) reduces to the empty sum. The standard convention for the empty sum thus implies span(∅) = {0}, a property that is immediate with the other definitions. However, many introductory textbooks simply include this fact as part of the definition.
When S = {v1, ..., vn} is finite, one has
span(S) = {λ1v1 + λ2v2 + ⋯ + λnvn : λ1, ..., λn ∈ K}.
Examples
The real vector space R3 has {(−1, 0, 0), (0, 1, 0), (0, 0, 1)} as a spanning set. This particular spanning set is also a basis. If (−1, 0, 0) were replaced by (1, 0, 0), it would also form the canonical basis of R3.
Another spanning set for the same space is given by {(1, 2, 3), (0, 1, 2), (−1, 1/2, 3), (1, 1, 1)}, but this set is not a basis, because it is linearly dependent.
The set {(1, 0, 0), (0, 1, 0), (1, 1, 0)} is not a spanning set of R3, since its span is the space of all vectors in R3 whose last component is zero. That space is also spanned by the set {(1, 0, 0), (0, 1, 0)}, as (1, 1, 0) is a linear combination of (1, 0, 0) and (0, 1, 0). Thus, the spanned space is not R3. It can be identified with R2 by removing the third components equal to zero.
The empty set is a spanning set of {(0, 0, 0)}, since the empty set is a subset of all possible vector spaces in R3, and {(0, 0, 0)} is the intersection of all of these vector spaces.
The set of monomials x^n, where n is a non-negative integer, spans the space of polynomials.
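Whether a given finite set spans R3 can also be checked mechanically: form the matrix whose rows are the vectors and compare its rank with the dimension of the space. A minimal sketch of this check using the third-party sympy library (not part of the article; the matrices simply restate two of the examples above):

```python
from sympy import Matrix

# Rows are the candidate vectors from the examples above.
A = Matrix([[-1, 0, 0], [0, 1, 0], [0, 0, 1]])
B = Matrix([[1, 0, 0], [0, 1, 0], [1, 1, 0]])

# A finite set spans R^3 exactly when the rank of the matrix
# having the vectors as rows equals 3.
print(A.rank() == 3)  # True: a spanning set (and a basis)
print(B.rank() == 3)  # False: the span is the plane of vectors with last component zero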
Theorems
Equivalence of definitions
The set of all linear combinations of a subset S of V, a vector space over K, is the smallest linear subspace of V containing S.
Proof. We first prove that span S is a subspace of V. Since S is a subset of V, we only need to prove the existence of a zero vector 0 in span S, that span S is closed under addition, and that span S is closed under scalar multiplication. Letting S = {v1, v2, ..., vn}, it is trivial that the zero vector of V exists in span S, since 0 = 0v1 + 0v2 + ⋯ + 0vn. Adding together two linear combinations of S also produces a linear combination of S: (λ1v1 + ⋯ + λnvn) + (μ1v1 + ⋯ + μnvn) = (λ1 + μ1)v1 + ⋯ + (λn + μn)vn, where all λi, μi ∈ K, and multiplying a linear combination of S by a scalar c ∈ K will produce another linear combination of S: c(λ1v1 + ⋯ + λnvn) = (cλ1)v1 + ⋯ + (cλn)vn. Thus span S is a subspace of V.
Suppose that W is a linear subspace of V containing S. It follows that S ⊆ span S ⊆ W, since every element of S is a linear combination of elements of S (trivially). Since W is closed under addition and scalar multiplication, every linear combination of elements of S must be contained in W. Thus, span S is contained in every subspace of V containing S, and the intersection of all such subspaces, or the smallest such subspace, is equal to the set of all linear combinations of S.
Size of spanning set is at least size of linearly independent set
Every spanning set S of a vector space V must contain at least as many elements as any linearly independent set of vectors from V.
Proof. Let S = {v1, ..., vm} be a spanning set and W = {w1, ..., wn} be a linearly independent set of vectors from V. We want to show that m ≥ n.
Since S spans V, the set S ∪ {w1} must also span V, and w1 must be a linear combination of S. Thus S ∪ {w1} is linearly dependent, and we can remove one vector from S that is a linear combination of the other elements. This vector cannot be any of the wi, since W is linearly independent. The resulting set is {w1, v1, ..., vi−1, vi+1, ..., vm}, which is a spanning set of V. We repeat this step n times, where the resulting set after the pth step is the union of {w1, ..., wp} and m − p vectors of S.
It is ensured until the nth step that there will always be some vi to remove out of S for every adjoined wp, and thus there are at least as many vi's as there are wp's, i.e. m ≥ n. To verify this, we assume by way of contradiction that m < n. Then, at the mth step, we have the set {w1, ..., wm} and we can adjoin another vector wm+1. But, since {w1, ..., wm} is a spanning set of V, wm+1 is a linear combination of w1, ..., wm. This is a contradiction, since W is linearly independent.
Spanning set can be reduced to a basis
Let V be a finite-dimensional vector space. Any set of vectors that spans V can be reduced to a basis for V, by discarding vectors if necessary (i.e. if there are linearly dependent vectors in the set). If the axiom of choice holds, this is true without the assumption that V has finite dimension. This also indicates that a basis is a minimal spanning set when V is finite-dimensional.
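This reduction can be carried out by row reduction: place the spanning vectors as the columns of a matrix and keep the vectors that land in pivot positions. A minimal sketch with sympy, reusing the linearly dependent spanning set from the examples above (assuming the gap in that set is read as 1/2):

```python
from sympy import Matrix, Rational

# The linearly dependent spanning set of R^3 from the examples above.
vectors = [Matrix([1, 2, 3]), Matrix([0, 1, 2]),
           Matrix([-1, Rational(1, 2), 3]), Matrix([1, 1, 1])]
A = Matrix.hstack(*vectors)

# rref() returns the reduced matrix together with the pivot column
# indices; the original vectors in pivot positions form a basis.
_, pivots = A.rref()
basis = [vectors[i] for i in pivots]
print(pivots)   # (0, 1, 2): (1, 1, 1) = (1, 2, 3) - (0, 1, 2) is redundant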
Generalizations
Generalizing the definition of the span of points in space, a subset S of the ground set E of a matroid is called a spanning set if the rank of S equals the rank of the entire ground set E.
The vector space definition can also be generalized to modules. Given an R-module A and a collection of elements a1, ..., an of A, the submodule of A spanned by a1, ..., an is the sum of cyclic modules
Ra1 + ⋯ + Ran = {r1a1 + ⋯ + rnan : r1, ..., rn ∈ R}
consisting of all R-linear combinations of the elements ai. As with the case of vector spaces, the submodule of A spanned by any subset of A is the intersection of all submodules containing that subset.
Closed linear span (functional analysis)
In functional analysis, a closed linear span of a set of vectors is the minimal closed set which contains the linear span of that set.
Suppose that X is a normed vector space and let E be any non-empty subset of X. The closed linear span of E, denoted by Sp(E) or Span(E) (usually written with an overline to distinguish it from the plain linear span), is the intersection of all the closed linear subspaces of X which contain E.
One mathematical formulation of this is
Sp(E) = ∩ {W ⊆ X : W is a closed linear subspace and E ⊆ W}.
The closed linear span of the set of functions x^n on the interval [0, 1], where n is a non-negative integer, depends on the norm used. If the L2 norm is used, then the closed linear span is the Hilbert space of square-integrable functions on the interval. But if the maximum norm is used, the closed linear span will be the space of continuous functions on the interval. In either case, the closed linear span contains functions that are not polynomials, and so are not in the linear span itself. However, the cardinality of the set of functions in the closed linear span is the cardinality of the continuum, which is the same cardinality as for the set of polynomials.
Notes
The linear span of a set is dense in the closed linear span. Moreover, as stated in the lemma below, the closed linear span is indeed the closure of the linear span.
Closed linear spans are important when dealing with closed linear subspaces (which are themselves highly important, see Riesz's lemma).
A useful lemma
Let X be a normed space and let E be any non-empty subset of X. Then the closed linear span of E equals the topological closure of Sp(E), the ordinary linear span of E.
(So the usual way to find the closed linear span is to find the linear span first, and then the closure of that linear span.)
See also
Affine hull
Conical combination
Convex hull
Footnotes
Citations
Sources
Textbooks
Lay, David C. (2021) Linear Algebra and Its Applications (6th Edition). Pearson.
Web
External links
Linear Combinations and Span: Understanding linear combinations and spans of vectors, khanacademy.org.
Abstract algebra
Linear algebra | Linear span | [
"Mathematics"
] | 1,743 | [
"Linear algebra",
"Abstract algebra",
"Algebra"
] |
56,357 | https://en.wikipedia.org/wiki/Linear%20subspace | In mathematics, and more specifically in linear algebra, a linear subspace or vector subspace is a vector space that is a subset of some larger vector space. A linear subspace is usually simply called a subspace when the context serves to distinguish it from other types of subspaces.
Definition
If V is a vector space over a field K, a subset W of V is a linear subspace of V if it is a vector space over K for the operations of V. Equivalently, a linear subspace of V is a nonempty subset W such that, whenever w1, w2 are elements of W and α, β are elements of K, it follows that αw1 + βw2 is in W.
The singleton set consisting of the zero vector alone and the entire vector space itself are linear subspaces that are called the trivial subspaces of the vector space.
Examples
Example I
In the vector space V = R3 (the real coordinate space over the field R of real numbers), take W to be the set of all vectors in V whose last component is 0.
Then W is a subspace of V.
Proof:
Given u and v in W, then they can be expressed as u = (u1, u2, 0) and v = (v1, v2, 0). Then u + v = (u1 + v1, u2 + v2, 0 + 0) = (u1 + v1, u2 + v2, 0). Thus, u + v is an element of W, too.
Given u in W and a scalar c in R, if u = (u1, u2, 0) again, then cu = (cu1, cu2, c0) = (cu1, cu2, 0). Thus, cu is an element of W too.
Example II
Let the field be R again, but now let the vector space V be the Cartesian plane R2.
Take W to be the set of points (x, y) of R2 such that x = y.
Then W is a subspace of R2.
Proof:
Let p = (p1, p2) and q = (q1, q2) be elements of W, that is, points in the plane such that p1 = p2 and q1 = q2. Then p + q = (p1 + q1, p2 + q2); since p1 = p2 and q1 = q2, then p1 + q1 = p2 + q2, so p + q is an element of W.
Let p = (p1, p2) be an element of W, that is, a point in the plane such that p1 = p2, and let c be a scalar in R. Then cp = (cp1, cp2); since p1 = p2, then cp1 = cp2, so cp is an element of W.
In general, any subset of the real coordinate space Rn that is defined by a homogeneous system of linear equations will yield a subspace.
(The equation in example I was z = 0, and the equation in example II was x = y.)
Example III
Again take the field to be R, but now let the vector space V be the set RR of all functions from R to R.
Let C(R) be the subset consisting of continuous functions.
Then C(R) is a subspace of RR.
Proof:
We know from calculus that 0 ∈ C(R) ⊂ RR, since the constant zero function is continuous.
We know from calculus that the sum of continuous functions is continuous.
Again, we know from calculus that the product of a continuous function and a number is continuous.
Example IV
Keep the same field and vector space as before, but now consider the set Diff(R) of all differentiable functions.
The same sort of argument as before shows that this is a subspace too.
Examples that extend these themes are common in functional analysis.
Properties of subspaces
From the definition of vector spaces, it follows that subspaces are nonempty, and are closed under sums and under scalar multiples. Equivalently, subspaces can be characterized by the property of being closed under linear combinations. That is, a nonempty set W is a subspace if and only if every linear combination of finitely many elements of W also belongs to W.
Equivalently, it is sufficient to consider linear combinations of just two elements at a time.
In a topological vector space X, a subspace W need not be topologically closed, but a finite-dimensional subspace is always closed. The same is true for subspaces of finite codimension (i.e., subspaces determined by a finite number of continuous linear functionals).
Descriptions
Descriptions of subspaces include the solution set to a homogeneous system of linear equations, the subset of Euclidean space described by a system of homogeneous linear parametric equations, the span of a collection of vectors, and the null space, column space, and row space of a matrix. Geometrically (especially over the field of real numbers and its subfields), a subspace is a flat in an n-space that passes through the origin.
A natural description of a 1-subspace is the scalar multiplication of one non-zero vector v to all possible scalar values. 1-subspaces specified by two vectors are equal if and only if one vector can be obtained from another with scalar multiplication: span(v) = span(v′) if and only if v′ = cv for some non-zero scalar c.
This idea is generalized for higher dimensions with linear span, but criteria for equality of k-spaces specified by sets of k vectors are not so simple.
A dual description is provided with linear functionals (usually implemented as linear equations). One non-zero linear functional F specifies its kernel subspace F = 0 of codimension 1. Subspaces of codimension 1 specified by two linear functionals are equal if and only if one functional can be obtained from another with scalar multiplication (in the dual space): F = 0 and F′ = 0 specify the same subspace if and only if F′ = cF for some non-zero scalar c.
It is generalized for higher codimensions with a system of equations. The following two subsections will present this latter description in detail, and the remaining four subsections further describe the idea of linear span.
Systems of linear equations
The solution set to any homogeneous system of linear equations with n variables is a subspace in the coordinate space Kn: the set of all vectors (x1, ..., xn) with a11x1 + a12x2 + ⋯ + a1nxn = 0, ..., am1x1 + am2x2 + ⋯ + amnxn = 0.
For example, the set of all vectors (over real or rational numbers) satisfying the equations
is a one-dimensional subspace. More generally, given a set of n independent linear functions, the dimension of the solution subspace in Kk will be the dimension of the null space of A, the composite matrix of the n functions.
Null space of a matrix
In a finite-dimensional space, a homogeneous system of linear equations can be written as a single matrix equation: Ax = 0.
The set of solutions to this equation is known as the null space of the matrix. For example, the subspace described above is the null space of the matrix
Every subspace of Kn can be described as the null space of some matrix (see below for more).
Linear parametric equations
The subset of Kn described by a system of homogeneous linear parametric equations is a subspace: the set of all vectors (x1, ..., xn) with xi = ai1t1 + ai2t2 + ⋯ + aimtm for some parameters t1, ..., tm ∈ K.
For example, the set of all vectors (x, y, z) parameterized by the equations x = 2t1 + 3t2, y = 5t1 − 4t2, and z = −t1 + 2t2
is a two-dimensional subspace of K3, if K is a number field (such as real or rational numbers).
Span of vectors
In linear algebra, the system of parametric equations can be written as a single vector equation: (x, y, z) = t1(2, 5, −1) + t2(3, −4, 2).
The expression on the right is called a linear combination of the vectors
(2, 5, −1) and (3, −4, 2). These two vectors are said to span the resulting subspace.
In general, a linear combination of vectors v1, v2, ..., vk is any vector of the form t1v1 + t2v2 + ⋯ + tkvk.
The set of all possible linear combinations is called the span: span(v1, ..., vk) = {t1v1 + ⋯ + tkvk : t1, ..., tk ∈ K}.
If the vectors v1, ... , vk have n components, then their span is a subspace of Kn. Geometrically, the span is the flat through the origin in n-dimensional space determined by the points v1, ... , vk.
Example
The xz-plane in R3 can be parameterized by the equations x = t1, y = 0, z = t2.
As a subspace, the xz-plane is spanned by the vectors (1, 0, 0) and (0, 0, 1). Every vector in the xz-plane can be written as a linear combination of these two: (t1, 0, t2) = t1(1, 0, 0) + t2(0, 0, 1).
Geometrically, this corresponds to the fact that every point on the xz-plane can be reached from the origin by first moving some distance in the direction of (1, 0, 0) and then moving some distance in the direction of (0, 0, 1).
Column space and row space
A system of linear parametric equations in a finite-dimensional space can also be written as a single matrix equation: x = At, where A is the matrix whose columns are the spanning vectors and t is the vector of parameters.
In this case, the subspace consists of all possible values of the vector x. In linear algebra, this subspace is known as the column space (or image) of the matrix A. It is precisely the subspace of Kn spanned by the column vectors of A.
The row space of a matrix is the subspace spanned by its row vectors. The row space is interesting because it is the orthogonal complement of the null space (see below).
Independence, basis, and dimension
In general, a subspace of Kn determined by k parameters (or spanned by k vectors) has dimension k. However, there are exceptions to this rule. For example, the subspace of K3 spanned by the three vectors (1, 0, 0), (0, 0, 1), and (2, 0, 3) is just the xz-plane, with each point on the plane described by infinitely many different values of t1, t2, t3.
In general, vectors v1, ..., vk are called linearly independent if
t1v1 + t2v2 + ⋯ + tkvk ≠ u1v1 + u2v2 + ⋯ + ukvk
for
(t1, t2, ..., tk) ≠ (u1, u2, ..., uk).
If v1, ..., vk are linearly independent, then the coordinates t1, ..., tk for a vector in the span are uniquely determined.
A basis for a subspace S is a set of linearly independent vectors whose span is S. The number of elements in a basis is always equal to the geometric dimension of the subspace. Any spanning set for a subspace can be changed into a basis by removing redundant vectors (see § Algorithms below for more).
Example
Let S be the subspace of R4 defined by the equations x1 = 2x2 and x3 = 5x4.
Then the vectors (2, 1, 0, 0) and (0, 0, 5, 1) are a basis for S. In particular, every vector that satisfies the above equations can be written uniquely as a linear combination of the two basis vectors: (2t1, t1, 5t2, t2) = t1(2, 1, 0, 0) + t2(0, 0, 5, 1).
The subspace S is two-dimensional. Geometrically, it is the plane in R4 passing through the points (0, 0, 0, 0), (2, 1, 0, 0), and (0, 0, 5, 1).
Operations and relations on subspaces
Inclusion
The set-theoretical inclusion binary relation specifies a partial order on the set of all subspaces (of any dimension).
A subspace cannot lie in any subspace of lesser dimension. If dim U = k, a finite number, and U ⊂ W, then dim W = k if and only if U = W.
Intersection
Given subspaces U and W of a vector space V, then their intersection U ∩ W := {v ∈ V : v is an element of both U and W} is also a subspace of V.
Proof:
Let v and w be elements of U ∩ W. Then v and w belong to both U and W. Because U is a subspace, then v + w belongs to U. Similarly, since W is a subspace, then v + w belongs to W. Thus, v + w belongs to U ∩ W.
Let v belong to U ∩ W, and let c be a scalar. Then v belongs to both U and W. Since U and W are subspaces, cv belongs to both U and W.
Since U and W are vector spaces, then 0 belongs to both sets. Thus, 0 belongs to U ∩ W.
For every vector space V, the set {0} and V itself are subspaces of V.
Sum
If U and W are subspaces, their sum is the subspace
U + W = {u + w : u ∈ U, w ∈ W}.
For example, the sum of two lines is the plane that contains them both. The dimension of the sum satisfies the inequality
max(dim U, dim W) ≤ dim(U + W) ≤ dim U + dim W.
Here, the minimum only occurs if one subspace is contained in the other, while the maximum is the most general case. The dimension of the intersection and the sum are related by the following equation:
dim(U + W) = dim U + dim W − dim(U ∩ W).
A set of subspaces is independent when the only intersection between any pair of subspaces is the trivial subspace. The direct sum is the sum of independent subspaces, written as U ⊕ W. An equivalent restatement is that a direct sum is a subspace sum under the condition that every subspace contributes to the span of the sum.
The dimension of a direct sum U ⊕ W is the same as that of the sum of subspaces, but the formula may be shortened because the dimension of the trivial intersection is zero:
dim(U ⊕ W) = dim U + dim W.
Lattice of subspaces
The operations intersection and sum make the set of all subspaces a bounded modular lattice, where the {0} subspace, the least element, is an identity element of the sum operation, and the identical subspace V, the greatest element, is an identity element of the intersection operation.
Orthogonal complements
If V is an inner product space and N is a subset of V, then the orthogonal complement of N, denoted N⊥, is again a subspace. If V is finite-dimensional and N is a subspace, then the dimensions of N and N⊥ satisfy the complementary relationship dim N + dim N⊥ = dim V. Moreover, no nonzero vector is orthogonal to itself, so N ∩ N⊥ = {0} and V is the direct sum of N and N⊥. Applying orthogonal complements twice returns the original subspace: (N⊥)⊥ = N for every subspace N.
This operation, understood as negation (¬), makes the lattice of subspaces a (possibly infinite) orthocomplemented lattice (although not a distributive lattice).
In spaces with other bilinear forms, some but not all of these results still hold. In pseudo-Euclidean spaces and symplectic vector spaces, for example, orthogonal complements exist. However, these spaces may have null vectors that are orthogonal to themselves, and consequently there exist subspaces N such that N ∩ N⊥ ≠ {0}. As a result, this operation does not turn the lattice of subspaces into a Boolean algebra (nor a Heyting algebra).
Algorithms
Most algorithms for dealing with subspaces involve row reduction. This is the process of applying elementary row operations to a matrix, until it reaches either row echelon form or reduced row echelon form. Row reduction has the following important properties:
The reduced matrix has the same null space as the original.
Row reduction does not change the span of the row vectors, i.e. the reduced matrix has the same row space as the original.
Row reduction does not affect the linear dependence of the column vectors.
Basis for a row space
Input An m × n matrix A.
Output A basis for the row space of A.
Use elementary row operations to put A into row echelon form.
The nonzero rows of the echelon form are a basis for the row space of A.
See the article on row space for an example.
If we instead put the matrix A into reduced row echelon form, then the resulting basis for the row space is uniquely determined. This provides an algorithm for checking whether two row spaces are equal and, by extension, whether two subspaces of Kn are equal.
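A minimal sketch of this procedure with the sympy library (the matrix is an arbitrary illustration): echelon_form() applies elementary row operations only, so the row space is preserved, and its nonzero rows are a basis.

```python
from sympy import Matrix

A = Matrix([[1, 3, 2], [2, 7, 4], [1, 5, 2]])

# Row operations preserve the row space; the nonzero rows of the
# echelon form are a basis for the row space of A.
E = A.echelon_form()
basis = [E.row(i) for i in range(E.rows) if not E.row(i).is_zero_matrix]
print(basis)   # two rows here: A has rank 2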
Subspace membership
Input A basis {b1, b2, ..., bk} for a subspace S of Kn, and a vector v with n components.
Output Determines whether v is an element of S
Create a (k + 1) × n matrix A whose rows are the vectors b1, ... , bk and v.
Use elementary row operations to put A into row echelon form.
If the echelon form has a row of zeroes, then the vectors {b1, ..., bk, v} are linearly dependent, and therefore v ∈ S; otherwise v ∉ S.
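A sketch of this test with sympy; comparing ranks is equivalent to looking for a zero row, since the basis rows are independent (the basis reuses the vectors (2, 1, 0, 0) and (0, 0, 5, 1) from the earlier example):

```python
from sympy import Matrix

B = Matrix([[2, 1, 0, 0], [0, 0, 5, 1]])   # rows: a basis for S
v = Matrix([[4, 2, 5, 1]])                 # candidate vector

# Appending v leaves the rank unchanged exactly when v is a linear
# combination of the basis rows, i.e. when v is an element of S.
print(B.col_join(v).rank() == B.rank())    # True: v = 2*b1 + b2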
Basis for a column space
Input An m × n matrix A
Output A basis for the column space of A
Use elementary row operations to put A into row echelon form.
Determine which columns of the echelon form have pivots. The corresponding columns of the original matrix are a basis for the column space.
See the article on column space for an example.
This produces a basis for the column space that is a subset of the original column vectors. It works because the columns with pivots are a basis for the column space of the echelon form, and row reduction does not change the linear dependence relationships between the columns.
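sympy packages this procedure directly; a minimal sketch on an illustrative matrix:

```python
from sympy import Matrix

A = Matrix([[1, 0, 1], [2, 1, 3], [3, 1, 4]])

# columnspace() row-reduces A and returns the original columns that
# land in pivot positions, exactly as described above.
for c in A.columnspace():
    print(c.T)   # columns 0 and 1; column 2 = column 0 + column 1 has no pivot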
Coordinates for a vector
Input A basis {b1, b2, ..., bk} for a subspace S of Kn, and a vector v with n components
Output Numbers t1, t2, ..., tk such that v = t1b1 + t2b2 + ⋯ + tkbk
Create an augmented matrix A whose columns are b1,...,bk , with the last column being v.
Use elementary row operations to put A into reduced row echelon form.
Express the final column of the reduced echelon form as a linear combination of the first k columns. The coefficients used are the desired numbers t1, t2, ..., tk. (These should be precisely the first k entries in the final column of the reduced echelon form.)
If the final column of the reduced row echelon form contains a pivot, then the input vector v does not lie in S.
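A sketch of the coordinate computation with sympy, again using the basis (2, 1, 0, 0), (0, 0, 5, 1) of the subspace S from the earlier example (the vector v is illustrative):

```python
from sympy import Matrix

b1, b2 = Matrix([2, 1, 0, 0]), Matrix([0, 0, 5, 1])
v = Matrix([6, 3, 10, 2])
A = Matrix.hstack(b1, b2, v)   # augmented matrix, v as the last column

R, pivots = A.rref()
if A.cols - 1 in pivots:
    print("v is not in S")     # a pivot in the final column
else:
    # The first k entries of the final column are the coordinates.
    print(R[:len(pivots), R.cols - 1])   # t1 = 3, t2 = 2 here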
Basis for a null space
Input An m × n matrix A.
Output A basis for the null space of A
Use elementary row operations to put A in reduced row echelon form.
Using the reduced row echelon form, determine which of the variables are free. Write equations for the dependent variables in terms of the free variables.
For each free variable xi, choose a vector in the null space for which xi = 1 and the remaining free variables are zero. The resulting collection of vectors is a basis for the null space of A.
See the article on null space for an example.
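sympy's nullspace() implements exactly this recipe; a minimal sketch on an illustrative matrix that is already in reduced row echelon form:

```python
from sympy import Matrix

A = Matrix([[1, 2, 0, -1], [0, 0, 1, 3]])  # free variables: x2 and x4

# For each free variable, set it to 1 and the other free variables
# to 0, then solve for the pivot variables.
for b in A.nullspace():
    print(b.T)   # expected: (-2, 1, 0, 0) and (1, 0, -3, 1)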
Basis for the sum and intersection of two subspaces
Given two subspaces U and W of V, a basis of the sum U + W and the intersection U ∩ W can be calculated using the Zassenhaus algorithm, as in the sketch below.
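A compact sketch of the Zassenhaus algorithm with sympy: row-reduce the block matrix whose rows are (u, u) for spanning vectors u of U and (w, 0) for spanning vectors w of W; rows with a nonzero left half give a basis of the sum, and the right halves of the remaining nonzero rows give a basis of the intersection. The function name and test subspaces are illustrative:

```python
from sympy import Matrix, zeros

def sum_and_intersection(U, W):
    """U, W: matrices whose rows span two subspaces of K^n."""
    n = U.cols
    block = U.row_join(U).col_join(W.row_join(zeros(W.rows, n)))
    E = block.echelon_form()
    sum_basis, int_basis = [], []
    for i in range(E.rows):
        left, right = E[i, :n], E[i, n:]
        if not left.is_zero_matrix:
            sum_basis.append(left)      # left halves span U + W
        elif not right.is_zero_matrix:
            int_basis.append(right)     # right halves span the intersection
    return sum_basis, int_basis

U = Matrix([[1, 0, 0], [0, 1, 0]])      # the xy-plane
W = Matrix([[0, 1, 0], [0, 0, 1]])      # the yz-plane
s, i = sum_and_intersection(U, W)
print(len(s), len(i))                   # 3 1: sum is K^3, intersection is the y-axis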
Equations for a subspace
Input A basis {b1, b2, ..., bk} for a subspace S of Kn
Output An (n − k) × n matrix whose null space is S.
Create a matrix A whose rows are b1, b2, ..., bk.
Use elementary row operations to put A into reduced row echelon form.
Let c1, ..., cn be the columns of the reduced row echelon form. For each column without a pivot, write an equation expressing the column as a linear combination of the columns with pivots.
This results in a homogeneous system of n − k linear equations involving the variables c1,...,cn. The matrix corresponding to this system is the desired matrix with nullspace S.
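The rows of the desired matrix can equivalently be obtained as a basis of the null space of the matrix whose rows are b1, ..., bk, since that null space is the orthogonal complement of S under the standard bilinear form. A hedged sympy sketch of this shortcut (rather than of the equation-reading step itself), reusing the basis from the earlier example:

```python
from sympy import Matrix

B = Matrix([[2, 1, 0, 0], [0, 0, 5, 1]])   # rows: a basis for S

# nullspace(B) spans the orthogonal complement of S; using those
# vectors as rows gives a matrix M whose null space is S itself.
M = Matrix.vstack(*[v.T for v in B.nullspace()])
print(M)
print(M * Matrix([2, 1, 0, 0]))            # zero vector: b1 satisfies M x = 0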
Example
If the reduced row echelon form of A is
then the column vectors satisfy the equations
It follows that the row vectors of A satisfy the equations
In particular, the row vectors of A are a basis for the null space of the corresponding matrix.
See also
Cyclic subspace
Invariant subspace
Multilinear subspace learning
Quotient space (linear algebra)
Signal subspace
Subspace topology
Notes
Citations
Sources
Textbook
Web
External links
Linear algebra
Articles containing proofs
Operator theory
Functional analysis | Linear subspace | [
"Mathematics"
] | 3,886 | [
"Functions and mappings",
"Functional analysis",
"Mathematical objects",
"Mathematical relations",
"Linear algebra",
"Articles containing proofs",
"Algebra"
] |
56,359 | https://en.wikipedia.org/wiki/Leo%20Szilard | Leo Szilard (; ; born Leó Spitz; February 11, 1898 – May 30, 1964) was a Hungarian-born physicist, biologist and inventor who made numerous important discoveries in nuclear physics and the biological sciences. He conceived the nuclear chain reaction in 1933, and patented the idea in 1936. In late 1939 he wrote the letter for Albert Einstein's signature that resulted in the Manhattan Project that built the atomic bomb, and then in 1944 wrote the Szilard petition asking President Truman to demonstrate the bomb without dropping it on civilians. According to György Marx, he was one of the Hungarian scientists known as The Martians.
Szilard initially attended Palatine Joseph Technical University in Budapest, but his engineering studies were interrupted by service in the Austro-Hungarian Army during World War I. He left Hungary for Germany in 1919, enrolling at Technische Hochschule (Institute of Technology) in Berlin-Charlottenburg (now Technische Universität Berlin), but became bored with engineering and transferred to Friedrich Wilhelm University, where he studied physics. He wrote his doctoral thesis on Maxwell's demon, a long-standing puzzle in the philosophy of thermal and statistical physics. Szilard was the first scientist of note to recognize the connection between thermodynamics and information theory.
Szilard coined and submitted the earliest known patent applications and the first publications for the concept of the electron microscope (1928), the cyclotron (1929), and also contributed to the development of the linear accelerator (1928) in Germany. Between 1926 and 1930, he worked with Einstein on the development of the Einstein refrigerator.
After Adolf Hitler became chancellor of Germany in 1933, Szilard urged his family and friends to flee Europe while they still could. He moved to England, where he helped found the Academic Assistance Council, an organization dedicated to helping refugee scholars find new jobs. While in England, Szilard, alongside Thomas A. Chalmers, discovered a means of isotope separation known as the Szilard–Chalmers effect.
Foreseeing another war in Europe, Szilard moved to the United States in 1938, where he worked with Enrico Fermi and Walter Zinn on means of creating a nuclear chain reaction. He was present when this was achieved within the Chicago Pile-1 on December 2, 1942. He worked for the Manhattan Project's Metallurgical Laboratory at the University of Chicago on aspects of nuclear reactor design, where he was the chief physicist. He drafted the Szilard petition advocating a non-lethal demonstration of the atomic bomb, but the Interim Committee chose direct military use instead.
Together with Enrico Fermi, he applied for a nuclear reactor patent in 1944. He publicly sounded the alarm against the possible development of salted thermonuclear bombs, a new kind of nuclear weapon that might annihilate mankind.
His inventions, discoveries, and contributions related to biological science are also equally important; they include the discovery of feedback inhibition and the invention of the chemostat. According to Theodore Puck and Philip I. Marcus, Szilard gave essential advice which made the earliest cloning of the human cell a reality.
Diagnosed with bladder cancer in 1960, he underwent a cobalt-60 treatment that he had designed. He helped found the Salk Institute for Biological Studies, where he became a resident fellow. Szilard founded Council for a Livable World in 1962 to deliver "the sweet voice of reason" about nuclear weapons to Congress, the White House, and the American public. He died in his sleep of a heart attack in 1964.
Early life
He was born as Leó Spitz in Budapest, Kingdom of Hungary, on February 11, 1898. His middle-class Jewish parents, Lajos (Louis) Spitz, a civil engineer, and Tekla Vidor, raised Leó on the Városligeti Fasor in Pest. He had two younger siblings, a brother, Béla, born in 1900, and a sister, Rózsi, born in 1901. On October 4, 1900, the family changed its surname from the German "Spitz" to the Hungarian "Szilárd", a name that means "solid" in Hungarian. Despite having a religious background, Szilard became an agnostic. From 1908 to 1916 he attended the főreáliskola (high school) in Budapest's 6th District. Showing an early interest in physics and a proficiency in mathematics, in 1916 he won the Eötvös Prize, a national prize for mathematics.
With World War I raging in Europe, Szilard received notice on January 22, 1916, that he had been drafted into the 5th Fortress Regiment, but he was able to continue his studies. He enrolled as an engineering student at the Palatine Joseph Technical University, which he entered in September 1916. The following year he joined the Austro-Hungarian Army's 4th Mountain Artillery Regiment, but immediately was sent to Budapest as an officer candidate. He rejoined his regiment in May 1918 but in September, before being sent to the front, he fell ill with Spanish Influenza and was returned home for hospitalization. Later he was informed that his regiment had been nearly annihilated in battle, so the illness probably saved his life. He was discharged honorably in November 1918, after the Armistice.
In January 1919, Szilard resumed his engineering studies, but Hungary was in a chaotic political situation with the rise of the Hungarian Soviet Republic under Béla Kun. Szilard and his brother Béla founded their own political group, the Hungarian Association of Socialist Students, with a platform based on a scheme of Szilard's for taxation reform. He was convinced that socialism was the answer to Hungary's post-war problems, but not that of Kun's Hungarian Socialist Party, which had close ties to the Soviet Union. When Kun's government tottered, the brothers officially changed their religion from "Israelite" to "Calvinist", but when they attempted to re-enroll in what was now the Budapest University of Technology, they were prevented from doing so by nationalist students because they were Jews.
Time in Berlin
Convinced that there was no future for him in Hungary, Szilard left for Berlin via Austria on December 25, 1919, and enrolled at the Technische Hochschule (Institute of Technology) in Berlin-Charlottenburg. He was soon joined by his brother Béla. Szilard became bored with engineering, and his attention turned to physics. This was not taught at the Technische Hochschule, so he transferred to Friedrich Wilhelm University, where he attended lectures given by Albert Einstein, Max Planck, Walter Nernst, James Franck and Max von Laue. He also met fellow Hungarian students Eugene Wigner, John von Neumann and Dennis Gabor.
Szilard's doctoral dissertation on thermodynamics Über die thermodynamischen Schwankungserscheinungen (On The Manifestation of Thermodynamic Fluctuations), praised by Einstein, won top honors in 1922. It involved a long-standing puzzle in the philosophy of thermal and statistical physics known as Maxwell's demon, a thought experiment originated by the physicist James Clerk Maxwell. The problem was thought to be insoluble, but in tackling it Szilard recognized the connection between thermodynamics and information theory.
Szilard was appointed as assistant to von Laue at the Institute for Theoretical Physics in 1924. In 1927 he finished his habilitation and became a Privatdozent (private lecturer) in physics. For his habilitation lecture, he produced a second paper on Maxwell's demon, Über die Entropieverminderung in einem thermodynamischen System bei Eingriffen intelligenter Wesen (On the reduction of entropy in a thermodynamic system by the intervention of intelligent beings), which had actually been written soon after the first. This introduced the thought experiment now called the Szilard engine and became important in the history of attempts to understand Maxwell's demon. The paper also introduced the first equation linking negative entropy and information. This work established Szilard as a foundational figure in information theory; however, he did not publish it until 1929 and chose not to pursue the topic further. Cybernetics, via the work of Norbert Wiener and Claude E. Shannon, would later develop the concept into a general theory in the 1940s and 1950s, though during the Cybernetics Meetings John von Neumann pointed out, in his review of Wiener's Cybernetics book, that Szilard had been the first to equate information with entropy.
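For context, the quantitative statement usually attributed to this paper (a standard modern rendering, not a quotation from the source) is that acquiring one bit of information about the engine's single molecule is accompanied by an entropy cost of at least:

```latex
% Entropy cost of one bit of information (Szilard, 1929);
% k is the Boltzmann constant. Standard modern statement, not a quotation.
\Delta S \ge k \ln 2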
Throughout his time in Berlin, Szilard worked on numerous technical inventions. In 1928 he submitted a patent application for the linear accelerator, not knowing of Gustav Ising's prior 1924 journal article and Rolf Widerøe's operational device, and in 1929 applied for one for the cyclotron. He was also the first person to conceive the idea of the electron microscope, and submitted the earliest patent for one in 1928. Between 1926 and 1930, he worked with Einstein to develop the Einstein refrigerator, notable because it had no moving parts. He did not build all of these devices, or publish these ideas in scientific journals, and so credit for them often went to others. As a result, Szilard never received the Nobel Prize, but Ernest Lawrence was awarded it for the cyclotron in 1939, and Ernst Ruska for the electron microscope in 1986.
Szilard received German citizenship in 1930, but was already uneasy about the political situation in Europe. When Adolf Hitler became chancellor of Germany on January 30, 1933, Szilard urged his family and friends to flee Europe while they still could. He moved to England, and transferred his savings of £1,595 from his bank in Zürich to one in London. He lived in hotels where lodging and meals cost about £5.5 a week. For those less fortunate, he helped found the Academic Assistance Council, an organization dedicated to helping refugee scholars find new jobs, and persuaded the Royal Society to provide accommodation for it at Burlington House. He enlisted the help of academics such as Harald Bohr, G. H. Hardy, Archibald Hill and Frederick G. Donnan. By the outbreak of World War II in 1939, it had helped to find places for over 2,500 refugee scholars.
Nuclear physics
On the morning of September 12, 1933, Szilard read an article in The Times summarizing a speech given by Lord Rutherford in which Rutherford rejected the feasibility of using atomic energy for practical purposes. The speech remarked specifically on the recent 1932 work of his students, John Cockcroft and Ernest Walton, in "splitting" lithium into alpha particles, by bombardment with protons from a particle accelerator they had constructed. Rutherford went on to say that anyone who looked for a source of power in the transformation of the atoms was "talking moonshine".
Szilard was so annoyed at Rutherford's dismissal that, on the same day, he conceived of the idea of a nuclear chain reaction (analogous to a chemical chain reaction), using recently discovered neutrons. The idea did not use the mechanism of nuclear fission, which was not yet discovered, but Szilard realized that if neutrons could initiate any sort of energy-producing nuclear reaction, such as the one that had occurred in lithium, and could be produced themselves by the same reaction, energy might be obtained with little input, since the reaction would be self-sustaining. He wanted to carry out a systematic survey of all 92 then-known elements in order to find one that could sustain such a chain reaction, at an estimated cost of $8,000, but he could not do so for lack of funds.
Szilard filed for a patent on the concept of the neutron-induced nuclear chain reaction in June 1934, which was granted in March 1936. Under section 30 of the Patents and Designs Act (1907, UK), Szilard was able to assign the patent to the British Admiralty to ensure its secrecy, which he did. Consequently, his patent was not published until 1949, when the relevant parts of the Patents and Designs Act (1907, UK) were repealed by the Patents and Designs Act (July 1949, UK). Richard Rhodes later described Szilard's moment of inspiration in The Making of the Atomic Bomb.
Prior to conceiving the nuclear chain reaction, in 1932 Szilard had read H. G. Wells' The World Set Free, a book describing continuing explosives which Wells termed "atomic bombs"; Szilard wrote in his memoirs the book had made "a very great impression on me." When Szilard assigned his patent to the Admiralty to keep the news from reaching the notice of the wider scientific community, he wrote, "Knowing what this [a chain reaction] would mean—and I knew it because I had read H. G. Wells—I did not want this patent to become public."
In early 1934, Szilard began working at St Bartholomew's Hospital in London. Working with a young physicist on the hospital staff, Thomas A. Chalmers, he began studying radioactive isotopes for medical purposes. It was known that bombarding elements with neutrons could produce either heavier isotopes of an element, or a heavier element, a phenomenon known as the Fermi Effect after its discoverer, the Italian physicist Enrico Fermi. When they bombarded ethyl iodide with neutrons produced by a radon–beryllium source, they found that the heavier radioactive isotopes of iodine separated from the compound. Thus, they had discovered a means of isotope separation. This method became known as the Szilard–Chalmers effect, and was widely used in the preparation of medical isotopes. He also attempted unsuccessfully to create a nuclear chain reaction using beryllium by bombarding it with X-rays.
Manhattan Project
Columbia University
Szilard visited his siblings Béla and Rose, and Rose's husband Roland (Lorand) Detre, in Switzerland in September 1937. After a rainstorm, he and his siblings spent an afternoon in an unsuccessful attempt to build a prototype collapsible umbrella. One reason for the visit was that he had decided to emigrate to the United States, as he believed that another war in Europe was inevitable and imminent. He reached New York by liner on January 2, 1938. Over the next few months, he moved from place to place, conducting research with Maurice Goldhaber at the University of Illinois at Urbana–Champaign, and then the University of Chicago, University of Michigan and the University of Rochester, where he undertook experiments with indium but again failed to initiate a chain reaction.
In early 1938, Szilard settled down at "what would become a haven for much of the rest of his life" when he took a room at the King's Crown Hotel in New York City, near Columbia University, where he now conducted research without a formal title or position. He encountered John R. Dunning, who invited him to speak about his research at an afternoon seminar in January 1939. That same month, Niels Bohr brought news with him to New York that nuclear fission had been observed by chemists Otto Hahn and Fritz Strassmann at the Kaiser Wilhelm Institute for Chemistry in Berlin on December 19, 1938. Hahn and Strassmann had at first misinterpreted their observation; Lise Meitner and Otto Frisch supplied the theoretical explanation, identifying the process as nuclear fission. When Szilard found out about it on a visit to Wigner at Princeton University, he immediately realized that uranium might be the element capable of sustaining a chain reaction.
Unable to convince Fermi that this was the case, Szilard set out on his own. He obtained permission from the head of the physics department at Columbia, George B. Pegram, to use a laboratory for three months. To fund his experiment, he borrowed $2,000 from a fellow inventor, Benjamin Liebowitz. He wired Frederick Lindemann at Oxford and asked him to send a beryllium cylinder. He persuaded Walter Zinn to become his collaborator and hired Semyon Krewer to investigate processes for manufacturing pure uranium and graphite.
Szilard and Zinn conducted a simple experiment on the seventh floor of Pupin Hall at Columbia, using a radium–beryllium source to bombard uranium with neutrons. Initially nothing registered on the oscilloscope, but then Zinn realized that it was not plugged in. Once it was, they observed significant neutron multiplication in natural uranium, proving that a chain reaction might be possible. Szilard later described the event: "We turned the switch and saw the flashes. We watched them for a little while and then we switched everything off and went home." He understood the implications and consequences of this discovery, though. "That night, there was very little doubt in my mind that the world was headed for grief".
While they had demonstrated that the fission of uranium produced more neutrons than it consumed, this was still not a chain reaction. Szilard persuaded Fermi and Herbert L. Anderson to try a larger experiment using a much greater quantity of uranium. To maximize the chance of fission, they needed a neutron moderator to slow the neutrons down. Hydrogen was a known moderator, so they used water. The results were disappointing. It became apparent that hydrogen slowed neutrons down, but also absorbed them, leaving fewer for the chain reaction. Szilard then suggested Fermi use carbon, in the form of graphite. He felt he would need about 50 tons (50.8 metric tons) of graphite and several tons of uranium. As a back-up plan, Szilard also considered where he might find a few tons of heavy water; deuterium would not absorb neutrons like ordinary hydrogen but would have similar value as a moderator. Such quantities of material would require a lot of money.
Szilard drafted a confidential letter to the President, Franklin D. Roosevelt, explaining the possibility of nuclear weapons, warning of the German nuclear weapon project, and encouraging the development of a program that could result in their creation. With the help of Wigner and Edward Teller, he approached his old friend and collaborator Einstein in August 1939, and persuaded him to sign the letter, lending his fame to the proposal. The Einstein–Szilárd letter resulted in the establishment of research into nuclear fission by the US government, and ultimately to the creation of the Manhattan Project. Roosevelt gave the letter to his aide, Brigadier General Edwin M. "Pa" Watson with the instruction: "Pa, this requires action!"
An Advisory Committee on Uranium was formed under Lyman J. Briggs, a scientist and the director of the National Bureau of Standards. Its first meeting on October 21, 1939, was attended by Szilard, Teller, and Wigner, who persuaded the Army and Navy to provide $6,000 for Szilard to purchase supplies for experiments—in particular, more graphite. A 1940 Army intelligence report on Fermi and Szilard, prepared when the United States had not yet entered World War II, expressed reservations about both. While it contained some errors of fact about Szilard, it correctly noted his dire prediction that Germany would win the war.
Fermi and Szilard met with Herbert G. MacPherson and V. C. Hamister of the National Carbon Company, who manufactured graphite, and Szilard made another important discovery. He asked about impurities in graphite and learned from MacPherson that it usually contained boron, a neutron absorber. He then had special boron-free graphite produced. Had he not done so, they might have concluded, as the German nuclear researchers did, that graphite was unsuitable for use as a neutron moderator. Like the German researchers, Fermi and Szilard still believed that enormous quantities of uranium would be required for an atomic bomb, and therefore concentrated on producing a controlled chain reaction. Fermi determined that a fissioning uranium atom produced 1.73 neutrons on average. It was enough, but a careful design was called for to minimize losses. Szilard worked up various designs for a nuclear reactor. "If the uranium project could have been run on ideas alone," Wigner later remarked, "no one but Leo Szilard would have been needed."
Metallurgical Laboratory
At its December 6, 1941, meeting, the National Defense Research Committee resolved to proceed with an all-out effort to produce atomic bombs. This decision was given urgency by the Japanese attack on Pearl Harbor the following day that brought the United States into World War II. It was formally approved by Roosevelt in January 1942. Arthur H. Compton from the University of Chicago was appointed head of research and development. Against Szilard's wishes, Compton concentrated all the groups working on reactors and plutonium at the Metallurgical Laboratory of the University of Chicago. Compton laid out an ambitious plan to achieve a chain reaction by January 1943, start manufacturing plutonium in nuclear reactors by January 1944, and produce an atomic bomb by January 1945.
In January 1942, Szilard joined the Metallurgical Laboratory in Chicago as a research associate, and later the chief physicist. Alvin Weinberg noted that Szilard served as the project "gadfly", asking all the embarrassing questions. Szilard provided important insights. While uranium-238 did not fission readily with slow, moderated neutrons, it might still fission with the fast neutrons produced by fission. This effect was small but crucial. Szilard made suggestions that improved the uranium canning process, and worked with David Gurinsky and Ed Creutz on a method for recovering uranium from its salts.
A vexing question at the time was how a production reactor should be cooled. Taking a conservative view that every possible neutron must be preserved, the majority opinion initially favored cooling with helium, which would absorb very few neutrons. Szilard argued that if this was a concern, then liquid bismuth would be a better choice. He supervised experiments with it, but the practical difficulties turned out to be too great. In the end, Wigner's plan to use ordinary water as a coolant won out. When the coolant issue became too heated, Compton and the director of the Manhattan Project, Brigadier General Leslie R. Groves, Jr., moved to dismiss Szilard, who was still a German citizen, but the Secretary of War, Henry L. Stimson, refused to do so. Szilard was therefore present on December 2, 1942, when the first man-made self-sustaining nuclear chain reaction was achieved in the first nuclear reactor, under the viewing stands of Stagg Field, and shook Fermi's hand.
Szilard had also set about acquiring the high-quality graphite and uranium needed to build a large-scale chain-reaction experiment. The success of this demonstration and technological breakthrough at the University of Chicago owed partly to Szilard's new atomic theories, his uranium lattice design, and the identification and mitigation of a key graphite impurity (boron) through a joint collaboration with graphite suppliers.
Szilard became a naturalized citizen of the United States in March 1943. The Army offered Szilard $25,000 for his inventions before November 1940, when he officially joined the project. He refused. He was the co-holder, with Fermi, of the patent on the nuclear reactor. In the end he sold his patent to the government for reimbursement of his expenses, some $15,416, plus the standard $1 fee. He continued to work with Fermi and Wigner on nuclear reactor design and is credited with coining the term "breeder reactor".
With an enduring passion for the preservation of human life and political freedom, Szilard hoped that the US government would not use nuclear weapons, but that the mere threat of such weapons would force Germany and Japan to surrender. He also worried about the long-term implications of nuclear weapons, predicting that their use by the United States would start a nuclear arms race with the USSR. He drafted the Szilárd petition advocating that the atomic bomb be demonstrated to the enemy, and used only if the enemy did not then surrender. The Interim Committee instead chose to use atomic bombs against cities over the protests of Szilard and other scientists. Afterwards, he lobbied for amendments to the Atomic Energy Act of 1946 that placed nuclear energy under civilian control.
After the war
In 1946, Szilard secured a research professorship at the University of Chicago that allowed him to research in biology and the social sciences. He teamed up with Aaron Novick, a chemist who had worked at the Metallurgical Laboratory during the war. The two men saw biology as a field that had not been explored as much as physics and was ready for scientific breakthroughs. It was a field that Szilard had been working on in 1933 before he had become subsumed in the quest for a nuclear chain reaction. The duo made considerable advances. They invented the chemostat, a device for regulating the growth rate of the microorganisms in a bioreactor, and developed methods for measuring the growth rate of bacteria. They discovered feedback inhibition, an important factor in processes such as growth and metabolism. Szilard gave essential advice to Theodore Puck and Philip I. Marcus for their first cloning of a human cell in 1955.
Personal life
Before his relationship with his later wife Gertrud "Trude" Weiss, Leo Szilard's life partner in the period 1927–1934 was the kindergarten teacher and opera singer Gerda Philipsborn, who also worked as a volunteer in a Berlin asylum organization for refugee children and in 1932 moved to India to continue this work. Szilard married Trude Weiss, a physician, in a civil ceremony in New York on October 13, 1951. They had known each other since 1929 and had frequently corresponded and visited each other ever since. Weiss took up a teaching position at the University of Colorado in April 1950, and Szilard began staying with her in Denver for weeks at a time, though previously they had never been together for more than a few days. Unmarried couples living together were frowned upon in the conservative United States of the time and, after they were discovered by one of her students, Szilard began to worry that she might lose her job. Their relationship remained a long-distance one, and they kept news of their marriage quiet. Many of his friends were shocked, having considered Szilard a born bachelor.
Writings
In 1949 Szilard wrote a short story titled "My Trial as a War Criminal" in which he imagined himself on trial for crimes against humanity after the United States lost a war with the Soviet Union. He publicly sounded the alarm against the possible development of salted thermonuclear bombs, explaining in a University of Chicago Round Table radio program on February 26, 1950, that a sufficiently big thermonuclear bomb rigged with specific but common materials might annihilate mankind. His comments, as well as those of Hans Bethe, Harrison Brown, and Frederick Seitz (the three other scientists who participated in the program), were attacked by the Atomic Energy Commission's former chairman David Lilienthal, and the criticisms, together with a response from Szilard, were published. Time compared Szilard to Chicken Little, and the AEC dismissed his ideas, but scientists debated whether such a weapon was feasible; the Bulletin of the Atomic Scientists commissioned a study by James R. Arnold, who concluded that it was. Physicist W. H. Clark suggested that a 50-megaton cobalt bomb did have the potential to produce sufficient long-lasting radiation to be a doomsday weapon, in theory, but was of the view that, even then, "enough people might find refuge to wait out the radioactivity and emerge to begin again."
In 1961 he proposed the idea of "Mined Cities", an early example of mutually assured destruction.
Szilard published a book of short stories, The Voice of the Dolphins (1961), in which he dealt with the moral and ethical issues raised by the Cold War and his own role in the development of atomic weapons. The title story described an international biology research laboratory in Central Europe. This became reality after a meeting in 1962 with Victor F. Weisskopf, James Watson and John Kendrew. When the European Molecular Biology Laboratory was established, the library was named The Szilard Library and the library stamp features dolphins. Other honors that he received included the Atoms for Peace Award in 1959, and the Humanist of the Year in 1960. A lunar crater on the far side of the Moon was named after him in 1970. The Leo Szilard Lectureship Award, established in 1974, is given in his honor by the American Physical Society.
Cancer diagnosis and treatment
In 1960, Szilard was diagnosed with bladder cancer. He underwent cobalt therapy at New York's Memorial Sloan-Kettering Hospital using a cobalt-60 treatment regimen that his doctors gave him a high degree of control over. A second round of treatment with an increased dose followed in 1962. The higher dose did its job, and his cancer never returned.
Last years
Szilard spent his last years as a fellow of the Salk Institute for Biological Studies in the La Jolla community of San Diego, California, which he had helped create. In 1962 Szilard founded the Council for a Livable World to deliver "the sweet voice of reason" about nuclear weapons to Congress, the White House, and the American public. He was appointed a non-resident fellow of the Salk Institute in July 1963, and became a resident fellow on April 1, 1964, after moving to San Diego in February. With Trude, he lived in a bungalow on the property of the Hotel del Charro. On May 30, 1964, he died there in his sleep of a heart attack; when Trude awoke, she was unable to revive him. His remains were cremated.
His papers are in the library at the University of California, San Diego. In February 2014, the library announced that it received funding from the National Historical Publications and Records Commission to digitize its collection of his papers, ranging from 1938 to 1998.
Patents
—Improvements in or relating to the transmutation of chemical elements—L. Szilard, filed June 28, 1934, issued March 30, 1936
—Neutronic reactor—E. Fermi, L. Szilard, filed December 19, 1944, issued May 17, 1955
—Einstein Refrigerator—co-developed with Albert Einstein, filed in 1926, issued November 11, 1930
Recognition and remembrance
Atoms for Peace Award, 1959
Albert Einstein Award, 1960
American Humanist Association's Humanist of the Year, 1960
Szilard (crater) on the far side of the Moon, named in 1970
Leo Szilard Lectureship Award, since 1974
Asteroid 38442 Szilárd, discovered in 1999
Leószilárdite, a mineral reported in 2016
In media
Szilard was portrayed in the 2023 Christopher Nolan film Oppenheimer by Máté Haumann. Szilard was also the subject of a musical entitled Atomic.
See also
The Martians (scientists)
Szilard Point
Notes
References
Further reading
External links
Leo Szilard Online, an "Internet Historic Site" (first created March 30, 1995) maintained by Gene Dannen
Register of the Leo Szilard Papers, MSS 32, Special Collections & Archives, UC San Diego Library.
Leo Szilard Papers, MSS 32, Special Collections & Archives, UC San Diego Library.
Lanouette/Szilard Papers, MSS 659, Special Collections & Archives, UC San Diego Library.
2014 Interview with William Lanouette, author of "Genius in the Shadows: A Biography of Leo Szilard, the Man Behind the Bomb." Voices of the Manhattan Project
Einstein's Letter to President Roosevelt, 1939
The Szilard Library at the European Molecular Biology Laboratory
Szilard lecture on war
Einstein and Szilard re-enact their meeting for the film Atomic Power (1946)
The Many Worlds of Leo Szilard, an invited session sponsored by the APS Forum on the History of Physics at the APS April Meeting 2014, in which speakers discuss the life and physics of Leó Szilárd. Presentations by William Lanouette (The Many Worlds of Leó Szilárd: Physicist, Peacemaker, Provocateur), Richard Garwin (Leó Szilárd in Physics and Information), and Matthew Meselson (Leó Szilárd: Biologist and Peacemaker)
Leo Szilard on IMDb
Entry at isfdb.org
1898 births
1964 deaths
People from Pest, Hungary
Accelerator physicists
American agnostics
American nuclear physicists
American letter writers
20th-century Hungarian physicists
Atoms for Peace Award recipients
Burials at Kerepesi Cemetery
Jewish agnostics
Hungarian agnostics
Hungarian emigrants to the United States
20th-century Hungarian engineers
20th-century Hungarian inventors
Jewish American physicists
20th-century American Jews
20th-century American physicists
Hungarian Jews
Hungarian nuclear physicists
Manhattan Project people
Jewish emigrants from Nazi Germany to the United Kingdom
Columbia University faculty
Budapest University of Technology and Economics alumni
Technische Universität Berlin alumni
Humboldt University of Berlin alumni
Fellows of the American Physical Society
Salk Institute for Biological Studies people
Statistical physicists
Hungarian biophysicists | Leo Szilard | [
"Physics"
] | 6,807 | [
"Statistical physicists",
"Statistical mechanics"
] |
56,360 | https://en.wikipedia.org/wiki/Sorbitol | Sorbitol (), less commonly known as glucitol (), is a sugar alcohol with a sweet taste which the human body metabolizes slowly. It can be obtained by reduction of glucose, which changes the converted aldehyde group (−CHO) to a primary alcohol group (−CH2OH). Most sorbitol is made from potato starch, but it is also found in nature, for example in apples, pears, peaches, and prunes. It is converted to fructose by sorbitol-6-phosphate 2-dehydrogenase. Sorbitol is an isomer of mannitol, another sugar alcohol; the two differ only in the orientation of the hydroxyl group on carbon2. While similar, the two sugar alcohols have very different sources in nature, melting points, and uses.
As an over-the-counter drug, sorbitol is used as a laxative to treat constipation.
Synthesis
Sorbitol may be synthesised via a glucose reduction reaction in which the aldehyde group is converted into a hydroxyl group. The reaction requires NADH and is catalyzed by aldose reductase. Glucose reduction is the first step of the polyol pathway of glucose metabolism, and is implicated in multiple diabetic complications.
C6H12O6 + NADH + H+ → C6H14O6 + NAD+

The mechanism involves a tyrosine residue in the active site of aldose reductase. The hydrogen atom on NADH is transferred to the electrophilic aldehyde carbon atom; the electrons of the aldehyde carbon–oxygen double bond are transferred to the oxygen, which abstracts the proton on the tyrosine side chain to form the hydroxyl group. The role of the aldose reductase tyrosine phenol group is to serve as a general acid that provides a proton to the reduced aldehyde oxygen on glucose.
Glucose reduction is not the major glucose metabolism pathway in a normal human body, where the glucose level is in the normal range. However, in diabetic patients whose blood glucose level is high, up to a third of their glucose may pass through the glucose reduction pathway. This consumes NADH and eventually leads to cell damage.
Uses
Sweetener
Sorbitol is a sugar substitute, and when used in food it has the INS number and E number 420. Sorbitol is about 60% as sweet as sucrose (table sugar).
Sorbitol is referred to as a nutritive sweetener because it provides some dietary energy. It is partly absorbed from the small intestine and metabolized in the body, and partly fermented in the large intestine. The fermentation produces short-chain fatty acids (acetic, propionic, and butyric acids), which are mostly absorbed and provide energy, but also carbon dioxide, methane, and hydrogen, which do not. Even though the heat of combustion of sorbitol is higher than that of glucose (it has two extra hydrogen atoms), the net energy contribution is between 2.5 and 3.4 kilocalories per gram, versus approximately 4 kilocalories (17 kilojoules) per gram for carbohydrates. It is often used in diet foods (including diet drinks and ice cream), mints, cough syrups, and sugar-free chewing gum. Most bacteria cannot use sorbitol for energy, but it can be slowly fermented in the mouth by Streptococcus mutans, a bacterium that causes tooth decay. In contrast, many other sugar alcohols such as isomalt and xylitol are considered non-acidogenic.
It also occurs naturally in many stone fruits and berries from trees of the genus Sorbus.
Medical applications
Laxative
As is the case with other sugar alcohols, foods containing sorbitol can cause gastrointestinal distress. Sorbitol can be used as a laxative when taken orally or as an enema. Sorbitol works as a laxative by drawing water into the large intestine, stimulating bowel movements. Sorbitol has been determined safe for use by the elderly, although it is not recommended without the advice of a physician.
Sorbitol is commonly used orally as a one-time dose of 70% solution. It may also be used as a one-time rectal enema.
Other medical applications
Sorbitol is used in bacterial culture media to distinguish the pathogenic Escherichia coli O157:H7 from most other strains of E. coli, because, unlike 93% of known E. coli strains, O157:H7 is usually unable to ferment sorbitol.
A treatment for hyperkalaemia (elevated blood potassium) uses sorbitol and the ion-exchange resin sodium polystyrene sulfonate (tradename Kayexalate). The resin exchanges sodium ions for potassium ions in the bowel, while sorbitol helps to eliminate the potassium. In 2010, the U.S. FDA issued a warning of increased risk for gastrointestinal necrosis with this combination.
Sorbitol is also used in the manufacture of softgel capsules to store single doses of liquid medicines.
Health care, food, and cosmetic uses
Sorbitol often is used in modern cosmetics as a humectant and thickener. It is also used in mouthwash and toothpaste. Some transparent gels can be made only with sorbitol, because of its high refractive index.
Sorbitol is used as a cryoprotectant additive (mixed with sucrose and sodium polyphosphates) in the manufacture of surimi, a processed fish paste. It is also used as a humectant in some cigarettes.
Beyond its use as a sugar substitute in reduced-sugar foods, sorbitol is also used as a humectant in cookies and low-moisture foods like peanut butter and fruit preserves. In baking, it is also valuable because it acts as a plasticizer, and slows down the staling process.
Miscellaneous uses
A mixture of sorbitol and potassium nitrate has found some success as an amateur solid rocket fuel. It has similar performance to sucrose-based rocket candy, but is easier to cast, less hygroscopic and does not caramelize.
Sorbitol is identified as a potential key chemical intermediate for production of fuels from biomass resources. Carbohydrate fractions in biomass such as cellulose undergo sequential hydrolysis and hydrogenation in the presence of metal catalysts to produce sorbitol. Complete reduction of sorbitol opens the way to alkanes, such as hexane, which can be used as a biofuel. Hydrogen required for this reaction can be produced by aqueous phase catalytic reforming of sorbitol.
19 C6H14O6 → 13 C6H14 + 36 CO2 + 42 H2O
The above chemical reaction is exothermic, and 1.5 moles of sorbitol generate approximately 1 mole of hexane. When hydrogen is co-fed, no carbon dioxide is produced.
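As a quick sanity check on the equation above, the atom balance and the sorbitol-to-hexane ratio can be verified numerically; the following minimal Python sketch hard-codes the element counts from the molecular formulas (no chemistry library is assumed):

```python
# Verify the atom balance of: 19 C6H14O6 -> 13 C6H14 + 36 CO2 + 42 H2O
# Each formula is a dict of element counts; coefficients from the equation.
sorbitol = {"C": 6, "H": 14, "O": 6}
hexane   = {"C": 6, "H": 14, "O": 0}
co2      = {"C": 1, "H": 0,  "O": 2}
water    = {"C": 0, "H": 2,  "O": 1}

def total(side):
    """Sum atoms per element over (coefficient, formula) pairs."""
    return {e: sum(coef * mol[e] for coef, mol in side) for e in "CHO"}

lhs = total([(19, sorbitol)])
rhs = total([(13, hexane), (36, co2), (42, water)])
assert lhs == rhs, (lhs, rhs)
print(lhs)      # {'C': 114, 'H': 266, 'O': 114} on both sides
print(19 / 13)  # ~1.46 moles of sorbitol per mole of hexane
```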
Sorbitol based polyols are used in the production of polyurethane foam for the construction industry.
It is also added after electroporation of yeasts in transformation protocols, allowing the cells to recover by raising the osmolarity of the medium.
Medical importance
Aldose reductase is the first enzyme in the sorbitol-aldose reductase pathway responsible for the reduction of glucose to sorbitol, as well as the reduction of galactose to galactitol. Too much sorbitol trapped in retinal cells, the cells of the lens, and the Schwann cells that myelinate peripheral nerves, is a frequent result of long-term hyperglycemia that accompanies poorly controlled diabetes. This can damage these cells, leading to retinopathy, cataracts and peripheral neuropathy, respectively.
Sorbitol is fermented in the colon and produces short-chain fatty acids, which are beneficial to overall colon health.
Potential adverse effects
Sorbitol may cause allergic reactions in some people. Common side effects from use as a laxative are stomach cramps, vomiting, diarrhea or rectal bleeding.
Compendial status
Food Chemicals Codex
European Pharmacopoeia 6.1
British Pharmacopoeia 2009
Japanese Pharmacopoeia 17
See also
Sorbitan
Isosorbide
References
External links
Commodity chemicals
E-number additives
Excipients
Laxatives
Osmotic diuretics
Sugar alcohols
Sugar substitutes | Sorbitol | [
"Chemistry"
] | 1,830 | [
"Carbohydrates",
"Sugar alcohols",
"Commodity chemicals",
"Products of chemical industry"
] |
56,369 | https://en.wikipedia.org/wiki/Bell%27s%20theorem | Bell's theorem is a term encompassing a number of closely related results in physics, all of which determine that quantum mechanics is incompatible with local hidden-variable theories, given some basic assumptions about the nature of measurement. "Local" here refers to the principle of locality, the idea that a particle can only be influenced by its immediate surroundings, and that interactions mediated by physical fields cannot propagate faster than the speed of light. "Hidden variables" are supposed properties of quantum particles that are not included in quantum theory but nevertheless affect the outcome of experiments. In the words of physicist John Stewart Bell, for whom this family of results is named, "If [a hidden-variable theory] is local it will not agree with quantum mechanics, and if it agrees with quantum mechanics it will not be local."
The first such result was introduced by Bell in 1964, building upon the Einstein–Podolsky–Rosen paradox, which had called attention to the phenomenon of quantum entanglement. Bell deduced that if measurements are performed independently on the two separated particles of an entangled pair, then the assumption that the outcomes depend upon hidden variables within each half implies a mathematical constraint on how the outcomes on the two measurements are correlated. Such a constraint would later be named a Bell inequality. Bell then showed that quantum physics predicts correlations that violate this inequality. Multiple variations on Bell's theorem were put forward in the following years, using different assumptions and obtaining different Bell (or "Bell-type") inequalities.
The first rudimentary experiment designed to test Bell's theorem was performed in 1972 by John Clauser and Stuart Freedman. More advanced experiments, known collectively as Bell tests, have been performed many times since. Often, these experiments have had the goal of "closing loopholes", that is, ameliorating problems of experimental design or set-up that could in principle affect the validity of the findings of earlier Bell tests. Bell tests have consistently found that physical systems obey quantum mechanics and violate Bell inequalities; which is to say that the results of these experiments are incompatible with local hidden-variable theories.
The exact nature of the assumptions required to prove a Bell-type constraint on correlations has been debated by physicists and by philosophers. While the significance of Bell's theorem is not in doubt, different interpretations of quantum mechanics disagree about what exactly it implies.
Theorem
There are many variations on the basic idea, some employing stronger mathematical assumptions than others. Significantly, Bell-type theorems do not refer to any particular theory of local hidden variables, but instead show that quantum physics violates general assumptions behind classical pictures of nature. The original theorem proved by Bell in 1964 is not the most amenable to experiment, and it is convenient to introduce the genre of Bell-type inequalities with a later example.
Hypothetical characters Alice and Bob stand in widely separated locations. Their colleague Victor prepares a pair of particles and sends one to Alice and the other to Bob. When Alice receives her particle, she chooses to perform one of two possible measurements (perhaps by flipping a coin to decide which). Denote these measurements by $A_0$ and $A_1$. Both $A_0$ and $A_1$ are binary measurements: the result of $A_0$ is either $+1$ or $-1$, and likewise for $A_1$. When Bob receives his particle, he chooses one of two measurements, $B_0$ and $B_1$, which are also both binary.
Suppose that each measurement reveals a property that the particle already possessed. For instance, if Alice chooses to measure $A_0$ and obtains the result $+1$, then the particle she received carried a value of $+1$ for a property $a_0$. Consider the combination
$$a_0 b_0 + a_0 b_1 + a_1 b_0 - a_1 b_1 = (a_0 + a_1) b_0 + (a_0 - a_1) b_1.$$
Because both $a_0$ and $a_1$ take the values $\pm 1$, then either $a_0 = a_1$ or $a_0 = -a_1$. In the former case, the quantity $(a_0 - a_1) b_1$ must equal 0, while in the latter case, $(a_0 + a_1) b_0 = 0$. So, one of the terms on the right-hand side of the above expression will vanish, and the other will equal $\pm 2$. Consequently, if the experiment is repeated over many trials, with Victor preparing new pairs of particles, the absolute value of the average of the combination across all the trials will be less than or equal to 2. No single trial can measure this quantity, because Alice and Bob can only choose one measurement each, but on the assumption that the underlying properties exist, the average value of the sum is just the sum of the averages for each term. Using angle brackets to denote averages,
$$\left| \langle a_0 b_0 \rangle + \langle a_0 b_1 \rangle + \langle a_1 b_0 \rangle - \langle a_1 b_1 \rangle \right| \leq 2.$$
This is a Bell inequality, specifically, the CHSH inequality. Its derivation here depends upon two assumptions: first, that the underlying physical properties and exist independently of being observed or measured (sometimes called the assumption of realism); and second, that Alice's choice of action cannot influence Bob's result or vice versa (often called the assumption of locality).
Quantum mechanics can violate the CHSH inequality, as follows. Victor prepares a pair of qubits which he describes by the Bell state
$$|\psi\rangle = \frac{|0\rangle \otimes |1\rangle - |1\rangle \otimes |0\rangle}{\sqrt{2}},$$
where $|0\rangle$ and $|1\rangle$ are the eigenstates of one of the Pauli matrices,
$$\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$
Victor then passes the first qubit to Alice and the second to Bob. Alice and Bob's choices of possible measurements are also defined in terms of the Pauli matrices. Alice measures either of the two observables $\sigma_z$ and $\sigma_x$:
$$A_0 = \sigma_z, \quad A_1 = \sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},$$
and Bob measures either of the two observables
$$B_0 = -\frac{\sigma_x + \sigma_z}{\sqrt{2}}, \quad B_1 = \frac{\sigma_x - \sigma_z}{\sqrt{2}}.$$
Victor can calculate the quantum expectation values for pairs of these observables using the Born rule:
$$\langle A_0 B_0 \rangle = \langle A_0 B_1 \rangle = \langle A_1 B_0 \rangle = \frac{1}{\sqrt{2}}, \quad \langle A_1 B_1 \rangle = -\frac{1}{\sqrt{2}}.$$
While only one of these four measurements can be made in a single trial of the experiment, the sum
$$\langle A_0 B_0 \rangle + \langle A_0 B_1 \rangle + \langle A_1 B_0 \rangle - \langle A_1 B_1 \rangle = 2\sqrt{2}$$
gives the sum of the average values that Victor expects to find across multiple trials. This value exceeds the classical upper bound of 2 that was deduced from the hypothesis of local hidden variables. The value $2\sqrt{2}$ is in fact the largest that quantum physics permits for this combination of expectation values, making it a Tsirelson bound.
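These expectation values can be checked numerically. The following NumPy sketch (names and structure are illustrative, not from any source) builds the singlet state and the four observables exactly as defined above and evaluates each $\langle A_i B_j \rangle$ by the Born rule:

```python
import numpy as np

# Pauli matrices and the two-qubit singlet (Bell) state
sz = np.array([[1, 0], [0, -1]], dtype=float)
sx = np.array([[0, 1], [1, 0]], dtype=float)
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)

# Alice's and Bob's observables, as in the text
A0, A1 = sz, sx
B0 = -(sx + sz) / np.sqrt(2)
B1 = (sx - sz) / np.sqrt(2)

def expval(A, B):
    """Born-rule expectation value <psi| A (x) B |psi>."""
    return psi @ np.kron(A, B) @ psi

chsh = expval(A0, B0) + expval(A0, B1) + expval(A1, B0) - expval(A1, B1)
print(chsh, 2 * np.sqrt(2))  # both ~2.8284: the Tsirelson bound
```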
The CHSH inequality can also be thought of as a game in which Alice and Bob try to coordinate their actions. Victor prepares two bits, $x$ and $y$, independently and at random. He sends bit $x$ to Alice and bit $y$ to Bob. Alice and Bob win if they return answer bits $a$ and $b$ to Victor, satisfying
$$a \oplus b = x \wedge y.$$
Or, equivalently, Alice and Bob win if the logical AND of $x$ and $y$ is the logical XOR of $a$ and $b$. Alice and Bob can agree upon any strategy they desire before the game, but they cannot communicate once the game begins. In any theory based on local hidden variables, Alice and Bob's probability of winning is no greater than $3/4$, regardless of what strategy they agree upon beforehand. However, if they share an entangled quantum state, their probability of winning can be as large as
$$\cos^2\left(\frac{\pi}{8}\right) = \frac{2 + \sqrt{2}}{4} \approx 0.85.$$
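The classical bound of $3/4$ can be confirmed by brute force: shared randomness is a convex mixture of deterministic strategies, so it suffices to enumerate all 16 deterministic strategies, as in this minimal sketch:

```python
from itertools import product

# A deterministic strategy is a pair of answer tables:
# Alice answers (a0, a1)[x], Bob answers (b0, b1)[y].
best = 0
for a0, a1, b0, b1 in product((0, 1), repeat=4):
    wins = sum(((a0, a1)[x] ^ (b0, b1)[y]) == (x & y)
               for x in (0, 1) for y in (0, 1))
    best = max(best, wins)
print(best / 4)  # 0.75: the classical bound for the CHSH game
```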
Variations and related results
Bell (1964)
Bell's 1964 paper points out that under restricted conditions, local hidden-variable models can reproduce the predictions of quantum mechanics. He then demonstrates that this cannot hold true in general. Bell considers a refinement by David Bohm of the Einstein–Podolsky–Rosen (EPR) thought experiment. In this scenario, a pair of particles are formed together in such a way that they are described by a spin singlet state (which is an example of an entangled state). The particles then move apart in opposite directions. Each particle is measured by a Stern–Gerlach device, a measuring instrument that can be oriented in different directions and that reports one of two possible outcomes, representable by $+1$ and $-1$. The configuration of each measuring instrument is represented by a unit vector, and the quantum-mechanical prediction for the correlation between two detectors with settings $\vec{a}$ and $\vec{b}$ is
$$P(\vec{a}, \vec{b}) = -\vec{a} \cdot \vec{b}.$$
In particular, if the orientation of the two detectors is the same ($\vec{a} = \vec{b}$), then the outcome of one measurement is certain to be the negative of the outcome of the other, giving $P(\vec{a}, \vec{a}) = -1$. And if the orientations of the two detectors are orthogonal ($\vec{a} \cdot \vec{b} = 0$), then the outcomes are uncorrelated, and $P(\vec{a}, \vec{b}) = 0$. Bell proves by example that these special cases can be explained in terms of hidden variables, then proceeds to show that the full range of possibilities involving intermediate angles cannot.
Bell posited that a local hidden-variable model for these correlations would explain them in terms of an integral over the possible values of some hidden parameter $\lambda$:
$$P(\vec{a}, \vec{b}) = \int d\lambda\, \rho(\lambda)\, A(\vec{a}, \lambda)\, B(\vec{b}, \lambda),$$
where $\rho(\lambda)$ is a probability density function. The two functions $A(\vec{a}, \lambda)$ and $B(\vec{b}, \lambda)$ provide the responses of the two detectors given the orientation vectors and the hidden variable:
$$A(\vec{a}, \lambda) = \pm 1, \quad B(\vec{b}, \lambda) = \pm 1.$$
Crucially, the outcome of detector $A$ does not depend upon $\vec{b}$, and likewise the outcome of $B$ does not depend upon $\vec{a}$, because the two detectors are physically separated. Now we suppose that the experimenter has a choice of settings for the second detector: it can be set either to $\vec{b}$ or to $\vec{c}$. Bell proves that the difference in correlation between these two choices of detector setting must satisfy the inequality
$$\left| P(\vec{a}, \vec{b}) - P(\vec{a}, \vec{c}) \right| \leq 1 + P(\vec{b}, \vec{c}).$$
However, it is easy to find situations where quantum mechanics violates the Bell inequality. For example, let the vectors $\vec{a}$ and $\vec{b}$ be orthogonal, and let $\vec{c}$ lie in their plane at a 45° angle from both of them. Then
$$P(\vec{a}, \vec{b}) = 0,$$
while
$$P(\vec{a}, \vec{c}) = P(\vec{b}, \vec{c}) = -\frac{\sqrt{2}}{2},$$
but
$$\left| P(\vec{a}, \vec{b}) - P(\vec{a}, \vec{c}) \right| = \frac{\sqrt{2}}{2} > 1 - \frac{\sqrt{2}}{2} = 1 + P(\vec{b}, \vec{c}).$$
Therefore, there is no local hidden-variable model that can reproduce the predictions of quantum mechanics for all choices of $\vec{a}$, $\vec{b}$, and $\vec{c}$. Experimental results contradict the classical curves and match the curve predicted by quantum mechanics as long as experimental shortcomings are accounted for.
Bell's 1964 theorem requires the possibility of perfect anti-correlations: the ability to make a probability-1 prediction about the result from the second detector, knowing the result from the first. This is related to the "EPR criterion of reality", a concept introduced in the 1935 paper by Einstein, Podolsky, and Rosen. This paper posits: "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity."
GHZ–Mermin (1990)
Daniel Greenberger, Michael A. Horne, and Anton Zeilinger presented a four-particle thought experiment in 1990, which David Mermin then simplified to use only three particles. In this thought experiment, Victor generates a set of three spin-1/2 particles described by the quantum state
$$|\mathrm{GHZ}\rangle = \frac{|000\rangle + |111\rangle}{\sqrt{2}},$$
where as above, $|0\rangle$ and $|1\rangle$ are the eigenvectors of the Pauli matrix $\sigma_z$. Victor then sends a particle each to Alice, Bob, and Charlie, who wait at widely separated locations. Alice measures either $\sigma_x$ or $\sigma_y$ on her particle, and so do Bob and Charlie. The result of each measurement is either $+1$ or $-1$. Applying the Born rule to the three-qubit state $|\mathrm{GHZ}\rangle$, Victor predicts that whenever the three measurements include one $\sigma_x$ and two $\sigma_y$'s, the product of the outcomes will always be $-1$. This follows because $|\mathrm{GHZ}\rangle$ is an eigenvector of $\sigma_x \otimes \sigma_y \otimes \sigma_y$ with eigenvalue $-1$, and likewise for $\sigma_y \otimes \sigma_x \otimes \sigma_y$ and $\sigma_y \otimes \sigma_y \otimes \sigma_x$. Therefore, knowing Alice's result for a $\sigma_x$ measurement and Bob's result for a $\sigma_y$ measurement, Victor can predict with probability 1 what result Charlie will return for a $\sigma_y$ measurement. According to the EPR criterion of reality, there would be an "element of reality" corresponding to the outcome of a $\sigma_y$ measurement upon Charlie's qubit. Indeed, this same logic applies to both measurements and all three qubits. Per the EPR criterion of reality, then, each particle contains an "instruction set" that determines the outcome of a $\sigma_x$ or $\sigma_y$ measurement upon it. The set of all three particles would then be described by the instruction set
$$(a_x, a_y, b_x, b_y, c_x, c_y),$$
with each entry being either $+1$ or $-1$, and each $\sigma_x$ or $\sigma_y$ measurement simply returning the appropriate value.
If Alice, Bob, and Charlie all perform the $\sigma_x$ measurement, then the product of their results would be $a_x b_x c_x$. This value can be deduced from
$$(a_x b_y c_y)(a_y b_x c_y)(a_y b_y c_x) = a_x b_x c_x\, a_y^2 b_y^2 c_y^2 = a_x b_x c_x,$$
because the square of either $+1$ or $-1$ is $1$. Each factor in parentheses equals $-1$, so
$$a_x b_x c_x = -1,$$
and the product of Alice, Bob, and Charlie's results will be $-1$ with probability unity. But this is inconsistent with quantum physics: Victor can predict using the state $|\mathrm{GHZ}\rangle$ that the $\sigma_x \otimes \sigma_x \otimes \sigma_x$ measurement will instead yield $+1$ with probability unity.
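The eigenvalue claims underlying this argument can be verified directly with a few lines of NumPy; this is an illustrative sketch that builds the three-qubit operators by Kronecker products:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

def kron3(a, b, c):
    """Three-fold tensor product of single-qubit operators or states."""
    return np.kron(np.kron(a, b), c)

# |GHZ> = (|000> + |111>) / sqrt(2)
ghz = (kron3(ket0, ket0, ket0) + kron3(ket1, ket1, ket1)) / np.sqrt(2)

# One sigma_x and two sigma_y's: eigenvalue -1
for op in (kron3(sx, sy, sy), kron3(sy, sx, sy), kron3(sy, sy, sx)):
    assert np.allclose(op @ ghz, -ghz)

# Three sigma_x's: eigenvalue +1
assert np.allclose(kron3(sx, sx, sx) @ ghz, ghz)
print("GHZ eigenvalue relations verified")
```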
This thought experiment can also be recast as a traditional Bell inequality or, equivalently, as a nonlocal game in the same spirit as the CHSH game. In it, Alice, Bob, and Charlie receive bits $x, y, z$ from Victor, promised to always have an even number of ones, that is, $x \oplus y \oplus z = 0$, and send him back bits $a, b, c$. They win the game if $a, b, c$ have an odd number of ones for all inputs except $x = y = z = 0$, when they need to have an even number of ones. That is, they win the game if and only if $a \oplus b \oplus c = x \lor y \lor z$. With local hidden variables the highest probability of victory they can have is $3/4$, whereas using the quantum strategy above they win it with certainty. This is an example of quantum pseudo-telepathy.
Kochen–Specker theorem (1967)
In quantum theory, orthonormal bases for a Hilbert space represent measurements that can be performed upon a system having that Hilbert space. Each vector in a basis represents a possible outcome of that measurement. Suppose that a hidden variable $\lambda$ exists, so that knowing the value of $\lambda$ would imply certainty about the outcome of any measurement. Given a value of $\lambda$, each measurement outcome – that is, each vector in the Hilbert space – is either impossible or guaranteed. A Kochen–Specker configuration is a finite set of vectors made of multiple interlocking bases, with the property that a vector in it will always be impossible when considered as belonging to one basis and guaranteed when taken as belonging to another. In other words, a Kochen–Specker configuration is an "uncolorable set" that demonstrates the inconsistency of assuming that a hidden variable could be controlling the measurement outcomes.
Free will theorem
The Kochen–Specker type of argument, using configurations of interlocking bases, can be combined with the idea of measuring entangled pairs that underlies Bell-type inequalities. This was noted beginning in the 1970s by Kochen, Heywood and Redhead, Stairs, and Brown and Svetlichny. As EPR pointed out, obtaining a measurement outcome on one half of an entangled pair implies certainty about the outcome of a corresponding measurement on the other half. The "EPR criterion of reality" posits that because the second half of the pair was not disturbed, that certainty must be due to a physical property belonging to it. In other words, by this criterion, a hidden variable must exist within the second, as-yet unmeasured half of the pair. No contradiction arises if only one measurement on the first half is considered. However, if the observer has a choice of multiple possible measurements, and the vectors defining those measurements form a Kochen–Specker configuration, then some outcome on the second half will be simultaneously impossible and guaranteed.
This type of argument gained attention when an instance of it was advanced by John Conway and Simon Kochen under the name of the free will theorem. The Conway–Kochen theorem uses a pair of entangled qutrits and a Kochen–Specker configuration discovered by Asher Peres.
Quasiclassical entanglement
As Bell pointed out, some predictions of quantum mechanics can be replicated in local hidden-variable models, including special cases of correlations produced from entanglement. This topic has been studied systematically in the years since Bell's theorem. In 1989, Reinhard Werner introduced what are now called Werner states, joint quantum states for a pair of systems that yield EPR-type correlations but also admit a hidden-variable model. Werner states are bipartite quantum states that are invariant under all unitaries of symmetric tensor-product form:
$$\rho = (U \otimes U)\, \rho\, (U^\dagger \otimes U^\dagger).$$
In 2004, Robert Spekkens introduced a toy model that starts with the premise of local, discretized degrees of freedom and then imposes a "knowledge balance principle" that restricts how much an observer can know about those degrees of freedom, thereby making them into hidden variables. The allowed states of knowledge ("epistemic states") about the underlying variables ("ontic states") mimic some features of quantum states. Correlations in the toy model can emulate some aspects of entanglement, like monogamy, but by construction, the toy model can never violate a Bell inequality.
History
Background
The question of whether quantum mechanics can be "completed" by hidden variables dates to the early years of quantum theory. In his 1932 textbook on quantum mechanics, the Hungarian-born polymath John von Neumann presented what he claimed to be a proof that there could be no "hidden parameters". The validity and definitiveness of von Neumann's proof were questioned by Hans Reichenbach, in more detail by Grete Hermann, and possibly in conversation though not in print by Albert Einstein. (Simon Kochen and Ernst Specker rejected von Neumann's key assumption as early as 1961, but did not publish a criticism of it until 1967.)
Einstein argued persistently that quantum mechanics could not be a complete theory. His preferred argument relied on a principle of locality:
Consider a mechanical system constituted of two partial systems A and B which have interaction with each other only during limited time. Let the ψ function before their interaction be given. Then the Schrödinger equation will furnish the ψ function after their interaction has taken place. Let us now determine the physical condition of the partial system A as completely as possible by measurements. Then the quantum mechanics allows us to determine the ψ function of the partial system B from the measurements made, and from the ψ function of the total system. This determination, however, gives a result which depends upon which of the determining magnitudes specifying the condition of A has been measured (for instance coordinates or momenta). Since there can be only one physical condition of B after the interaction and which can reasonably not be considered as dependent on the particular measurement we perform on the system A separated from B it may be concluded that the ψ function is not unambiguously coordinated with the physical condition. This coordination of several ψ functions with the same physical condition of system B shows again that the ψ function cannot be interpreted as a (complete) description of a physical condition of a unit system.
The EPR thought experiment is similar, also considering two separated systems A and B described by a joint wave function. However, the EPR paper adds the idea later known as the EPR criterion of reality, according to which the ability to predict with probability 1 the outcome of a measurement upon B implies the existence of an "element of reality" within B.
In 1951, David Bohm proposed a variant of the EPR thought experiment in which the measurements have discrete ranges of possible outcomes, unlike the position and momentum measurements considered by EPR. The year before, Chien-Shiung Wu and Irving Shaknov had successfully measured polarizations of photons produced in entangled pairs, thereby making the Bohm version of the EPR thought experiment practically feasible.
By the late 1940s, the mathematician George Mackey had grown interested in the foundations of quantum physics, and in 1957 he drew up a list of postulates that he took to be a precise definition of quantum mechanics. Mackey conjectured that one of the postulates was redundant, and shortly thereafter, Andrew M. Gleason proved that it was indeed deducible from the other postulates. Gleason's theorem provided an argument that a broad class of hidden-variable theories are incompatible with quantum mechanics. More specifically, Gleason's theorem rules out hidden-variable models that are "noncontextual". Any hidden-variable model for quantum mechanics must, in order to avoid the implications of Gleason's theorem, involve hidden variables that are not properties belonging to the measured system alone but also dependent upon the external context in which the measurement is made. This type of dependence is often seen as contrived or undesirable; in some settings, it is inconsistent with special relativity. The Kochen–Specker theorem refines this statement by constructing a specific finite subset of rays on which no such probability measure (one assigning each measurement outcome probability 0 or 1, independent of context) can be defined.
Tsung-Dao Lee came close to deriving Bell's theorem in 1960. He considered events where two kaons were produced traveling in opposite directions, and came to the conclusion that hidden variables could not explain the correlations that could be obtained in such situations. However, complications arose due to the fact that kaons decay, and he did not go so far as to deduce a Bell-type inequality.
Bell's publications
Bell chose to publish his theorem in a comparatively obscure journal because it did not require page charges, in fact paying the authors who published there at the time. Because the journal did not provide free reprints of articles for the authors to distribute, however, Bell had to spend the money he received to buy copies that he could send to other physicists. While the articles printed in the journal themselves listed the publication's name simply as Physics, the covers carried the trilingual version Physics Physique Физика to reflect that it would print articles in English, French and Russian.
Prior to proving his 1964 result, Bell also proved a result equivalent to the Kochen–Specker theorem (hence the latter is sometimes also known as the Bell–Kochen–Specker or Bell–KS theorem). However, publication of this theorem was inadvertently delayed until 1966. In that paper, Bell argued that because an explanation of quantum phenomena in terms of hidden variables would require nonlocality, the EPR paradox "is resolved in the way which Einstein would have liked least."
Experiments
In 1967, the unusual title Physics Physique Физика caught the attention of John Clauser, who then discovered Bell's paper and began to consider how to perform a Bell test in the laboratory. Clauser and Stuart Freedman would go on to perform a Bell test in 1972. This was only a limited test, because the choice of detector settings was made before the photons had left the source. In 1982, Alain Aspect and collaborators performed the first Bell test to remove this limitation. This began a trend of progressively more stringent Bell tests. The GHZ thought experiment was implemented in practice, using entangled triplets of photons, in 2000. By 2002, testing the CHSH inequality was feasible in undergraduate laboratory courses.
In Bell tests, there may be problems of experimental design or set-up that affect the validity of the experimental findings. These problems are often referred to as "loopholes". The purpose of the experiment is to test whether nature can be described by local hidden-variable theory, which would contradict the predictions of quantum mechanics.
The most prevalent loopholes in real experiments are the detection and locality loopholes. The detection loophole is opened when a small fraction of the particles (usually photons) are detected in the experiment, making it possible to explain the data with local hidden variables by assuming that the detected particles are an unrepresentative sample. The locality loophole is opened when the detections are not done with a spacelike separation, making it possible for the result of one measurement to influence the other without contradicting relativity. In some experiments there may be additional defects that make local-hidden-variable explanations of Bell test violations possible.
Although both the locality and detection loopholes had been closed in different experiments, a long-standing challenge was to close both simultaneously in the same experiment. This was finally achieved in three experiments in 2015.
Regarding these results, Alain Aspect writes that "no experiment ... can be said to be totally loophole-free," but he says the experiments "remove the last doubts that we should renounce" local hidden variables, and refers to examples of remaining loopholes as being "far fetched" and "foreign to the usual way of reasoning in physics."
These efforts to experimentally validate violations of the Bell inequalities would later result in Clauser, Aspect, and Anton Zeilinger being awarded the 2022 Nobel Prize in Physics.
Interpretations
Reactions to Bell's theorem have been many and varied. Maximilian Schlosshauer, Johannes Kofler, and Zeilinger write that Bell inequalities provide "a wonderful example of how we can have a rigorous theoretical result tested by numerous experiments, and yet disagree about the implications."
The Copenhagen interpretation
Copenhagen-type interpretations generally take the violation of Bell inequalities as grounds to reject the assumption often called counterfactual definiteness or "realism", which is not necessarily the same as abandoning realism in a broader philosophical sense. For example, Roland Omnès argues for the rejection of hidden variables and concludes that "quantum mechanics is probably as realistic as any theory of its scope and maturity ever will be". Likewise, Rudolf Peierls took the message of Bell's theorem to be that, because the premise of locality is physically reasonable, "hidden variables cannot be introduced without abandoning some of the results of quantum mechanics".
This is also the route taken by interpretations that descend from the Copenhagen tradition, such as consistent histories (often advertised as "Copenhagen done right"), as well as QBism.
Many-worlds interpretation of quantum mechanics
The Many-worlds interpretation, also known as the Everett interpretation, is dynamically local, meaning that it does not call for action at a distance, and deterministic, because it consists of the unitary part of quantum mechanics without collapse. It can generate correlations that violate a Bell inequality because it violates an implicit assumption by Bell that measurements have a single outcome. In fact, Bell's theorem can be proven in the Many-Worlds framework from the assumption that a measurement has a single outcome. Therefore, a violation of a Bell inequality can be interpreted as a demonstration that measurements have multiple outcomes.
The explanation it provides for the Bell correlations is that when Alice and Bob make their measurements, they split into local branches. From the point of view of each copy of Alice, there are multiple copies of Bob experiencing different results, so Bob cannot have a definite result, and the same is true from the point of view of each copy of Bob. They will obtain a mutually well-defined result only when their future light cones overlap. At this point we can say that the Bell correlation starts existing, but it was produced by a purely local mechanism. Therefore, the violation of a Bell inequality cannot be interpreted as a proof of non-locality.
Non-local hidden variables
Most advocates of the hidden-variables idea believe that experiments have ruled out local hidden variables. They are ready to give up locality, explaining the violation of Bell's inequality by means of a non-local hidden variable theory, in which the particles exchange information about their states. This is the basis of the Bohm interpretation of quantum mechanics, which requires that all particles in the universe be able to instantaneously exchange information with all others. One challenge for non-local hidden variable theories is to explain why this instantaneous communication can exist at the level of the hidden variables, but it cannot be used to send signals. A 2007 experiment ruled out a large class of non-Bohmian non-local hidden variable theories, though not Bohmian mechanics itself.
The transactional interpretation, which postulates waves traveling both backwards and forwards in time, is likewise non-local.
Superdeterminism
A necessary assumption to derive Bell's theorem is that the hidden variables are not correlated with the measurement settings. This assumption has been justified on the grounds that the experimenter has "free will" to choose the settings, and that it is necessary to do science in the first place. A (hypothetical) theory where the choice of measurement is necessarily correlated with the system being measured is known as superdeterministic.
A few advocates of deterministic models have not given up on local hidden variables. For example, Gerard 't Hooft has argued that superdeterminism cannot be dismissed.
See also
Einstein's thought experiments
Epistemological Letters
Fundamental Fysiks Group
Leggett inequality
Leggett–Garg inequality
Mermin's device
Mott problem
PBR theorem
Quantum contextuality
Quantum nonlocality
Renninger negative-result experiment
Notes
References
Further reading
The following are intended for general audiences.
The following are more technically oriented.
External links
Mermin: Spooky Actions At A Distance? Oppenheimer Lecture.
Quantum information science
Quantum measurement
Theorems in quantum mechanics
Hidden variable theory
Inequalities
1964 introductions
No-go theorems | Bell's theorem | [
"Physics",
"Mathematics"
] | 5,685 | [
"Theorems in quantum mechanics",
"Mathematical theorems",
"No-go theorems",
"Equations of physics",
"Quantum mechanics",
"Binary relations",
"Theorems in mathematical physics",
"Quantum measurement",
"Mathematical relations",
"Inequalities (mathematics)",
"Mathematical problems",
"Physics theo... |
56,398 | https://en.wikipedia.org/wiki/Phase%20diagram | A phase diagram in physical chemistry, engineering, mineralogy, and materials science is a type of chart used to show conditions (pressure, temperature, etc.) at which thermodynamically distinct phases (such as solid, liquid or gaseous states) occur and coexist at equilibrium.
Overview
Common components of a phase diagram are lines of equilibrium or phase boundaries, which refer to lines that mark conditions under which multiple phases can coexist at equilibrium. Phase transitions occur along lines of equilibrium. Metastable phases are not shown in phase diagrams as, despite their common occurrence, they are not equilibrium phases.
Triple points are points on phase diagrams where lines of equilibrium intersect. Triple points mark conditions at which three different phases can coexist. For example, the water phase diagram has a triple point corresponding to the single temperature and pressure at which solid, liquid, and gaseous water can coexist in a stable equilibrium (273.16 K or 0.01 °C, and a partial vapor pressure of 611.657 Pa). The pressure on a pressure-temperature diagram (such as the water phase diagram shown) is the partial pressure of the substance in question.
The solidus is the temperature below which the substance is stable in the solid state. The liquidus is the temperature above which the substance is stable in a liquid state. There may be a gap between the solidus and liquidus; within the gap, the substance consists of a mixture of crystals and liquid (like a "slurry").
Working fluids are often categorized on the basis of the shape of their phase diagram.
Types
2-dimensional diagrams
Pressure vs temperature
The simplest phase diagrams are pressure–temperature diagrams of a single simple substance, such as water. The axes correspond to the pressure and temperature. The phase diagram shows, in pressure–temperature space, the lines of equilibrium or phase boundaries between the three phases of solid, liquid, and gas.
The curves on the phase diagram show the points where the free energy (and other derived properties) becomes non-analytic: their derivatives with respect to the coordinates (temperature and pressure in this example) change discontinuously (abruptly). For example, the heat capacity of a container filled with ice will change abruptly as the container is heated past the melting point. The open spaces, where the free energy is analytic, correspond to single phase regions. Single phase regions are separated by lines of non-analytical behavior, where phase transitions occur, which are called phase boundaries.
In the diagram on the right, the phase boundary between liquid and gas does not continue indefinitely. Instead, it terminates at a point on the phase diagram called the critical point. This reflects the fact that, at extremely high temperatures and pressures, the liquid and gaseous phases become indistinguishable, in what is known as a supercritical fluid. In water, the critical point occurs at around Tc = 647.096 K (373.946 °C), pc = 22.064 MPa (217.75 atm) and ρc = 356 kg/m3.
The existence of the liquid–gas critical point reveals a slight ambiguity in labelling the single phase regions. When going from the liquid to the gaseous phase, one usually crosses the phase boundary, but it is possible to choose a path that never crosses the boundary by going to the right of the critical point. Thus, the liquid and gaseous phases can blend continuously into each other. The solid–liquid phase boundary can only end in a critical point if the solid and liquid phases have the same symmetry group.
For most substances, the solid–liquid phase boundary (or fusion curve) in the phase diagram has a positive slope so that the melting point increases with pressure. This is true whenever the solid phase is denser than the liquid phase. The greater the pressure on a given substance, the closer together the molecules of the substance are brought to each other, which increases the effect of the substance's intermolecular forces. Thus, the substance requires a higher temperature for its molecules to have enough energy to break out of the fixed pattern of the solid phase and enter the liquid phase. A similar concept applies to liquid–gas phase changes.
Water is an exception which has a solid-liquid boundary with negative slope so that the melting point decreases with pressure. This occurs because ice (solid water) is less dense than liquid water, as shown by the fact that ice floats on water. At a molecular level, ice is less dense because it has a more extensive network of hydrogen bonding which requires a greater separation of water molecules. Other exceptions include antimony and bismuth.
At very high pressures above 50 GPa (500 000 atm), liquid nitrogen undergoes a liquid-liquid phase transition to a polymeric form and becomes denser than solid nitrogen at the same pressure. Under these conditions therefore, solid nitrogen also floats in its liquid.
The value of the slope dP/dT is given by the Clausius–Clapeyron equation for fusion (melting):
dP/dT = ΔHfus / (T ΔVfus)
where ΔHfus is the heat of fusion which is always positive, T is the absolute temperature, and ΔVfus is the volume change for fusion. For most substances ΔVfus is positive so that the slope is positive. However, for water and other exceptions, ΔVfus is negative so that the slope is negative.
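As a rough numerical illustration of the negative slope for water, here is a minimal sketch in Python; the densities and heat of fusion below are approximate handbook values assumed for the example, not taken from this article:

# Slope of the ice/water fusion curve from the Clausius–Clapeyron equation
T = 273.15               # melting temperature of ice, K
dH_fus = 333.55e3        # heat of fusion, J/kg (assumed approximate value)
v_ice = 1.0 / 916.7      # specific volume of ice, m^3/kg (density ~916.7 kg/m^3)
v_water = 1.0 / 999.8    # specific volume of liquid water, m^3/kg
dV_fus = v_water - v_ice # negative, since ice is less dense than liquid water

dP_dT = dH_fus / (T * dV_fus)   # Pa/K
print(dP_dT / 1e6)              # about -13.5 MPa/K: higher pressure lowers the melting point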
Other thermodynamic properties
In addition to temperature and pressure, other thermodynamic properties may be graphed in phase diagrams. Examples of such thermodynamic properties include specific volume, specific enthalpy, or specific entropy. For example, single-component graphs of temperature vs. specific entropy (T vs. s) for water/steam or for a refrigerant are commonly used to illustrate thermodynamic cycles such as a Carnot cycle, Rankine cycle, or vapor-compression refrigeration cycle.
Any two thermodynamic quantities may be shown on the horizontal and vertical axes of a two-dimensional diagram. Additional thermodynamic quantities may each be illustrated in increments as a series of lines—curved, straight, or a combination of curved and straight. Each of these iso-lines represents the thermodynamic quantity at a certain constant value.
3-dimensional diagrams
It is possible to envision three-dimensional (3D) graphs showing three thermodynamic quantities. For example, for a single component, a 3D Cartesian coordinate type graph can show temperature (T) on one axis, pressure (p) on a second axis, and specific volume (v) on a third. Such a 3D graph is sometimes called a p–v–T diagram. The equilibrium conditions are shown as curves on a curved surface in 3D with areas for solid, liquid, and vapor phases and areas where solid and liquid, solid and vapor, or liquid and vapor coexist in equilibrium. A line on the surface called a triple line is where solid, liquid and vapor can all coexist in equilibrium. The critical point remains a point on the surface even on a 3D phase diagram.
An orthographic projection of the 3D p–v–T graph showing pressure and temperature as the vertical and horizontal axes collapses the 3D plot into the standard 2D pressure–temperature diagram. When this is done, the solid–vapor, solid–liquid, and liquid–vapor surfaces collapse into three corresponding curved lines meeting at the triple point, which is the collapsed orthographic projection of the triple line.
Binary mixtures
Other much more complex types of phase diagrams can be constructed, particularly when more than one pure component is present. In that case, concentration becomes an important variable. Phase diagrams with more than two dimensions can be constructed that show the effect of more than two variables on the phase of a substance. Phase diagrams can use other variables in addition to or in place of temperature, pressure and composition, for example the strength of an applied electrical or magnetic field, and they can also involve substances that take on more than just three states of matter.
One type of phase diagram plots temperature against the relative concentrations of two substances in a binary mixture called a binary phase diagram, as shown at right. Such a mixture can be either a solid solution, eutectic or peritectic, among others. These types of mixtures result in very different graphs. Another type of binary phase diagram is a boiling-point diagram for a mixture of two components, i.e. chemical compounds. For two particular volatile components at a certain pressure such as atmospheric pressure, a boiling-point diagram shows what vapor (gas) compositions are in equilibrium with given liquid compositions depending on temperature. In a typical binary boiling-point diagram, temperature is plotted on a vertical axis and mixture composition on a horizontal axis.
A two component diagram with components A and B in an "ideal" solution is shown. The construction of a liquid vapor phase diagram assumes an ideal liquid solution obeying Raoult's law and an ideal gas mixture obeying Dalton's law of partial pressure. A tie line from the liquid to the gas at constant pressure would indicate the two compositions of the liquid and gas respectively.
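A minimal sketch of that construction in Python (the pure-component vapor pressures below are hypothetical placeholder values, not data from this article): Raoult's law gives the partial pressures over the liquid, and Dalton's law then gives the total pressure and the equilibrium vapor composition, i.e. the two ends of a tie line.

# Ideal binary liquid: tie-line compositions from Raoult's and Dalton's laws
p_A_pure = 100.0e3   # vapor pressure of pure A at this temperature, Pa (hypothetical)
p_B_pure = 40.0e3    # vapor pressure of pure B at this temperature, Pa (hypothetical)
x_A = 0.6            # liquid mole fraction of A (one end of the tie line)
x_B = 1.0 - x_A

p_A = x_A * p_A_pure            # Raoult's law: partial pressure of A
p_B = x_B * p_B_pure
P = p_A + p_B                   # Dalton's law: total pressure
y_A = p_A / P                   # vapor mole fraction of A (other end of the tie line)
print(P, y_A)                   # the vapor is enriched in the more volatile component A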
A simple example diagram with hypothetical components 1 and 2 in a non-azeotropic mixture is shown at right. The fact that there are two separate curved lines joining the boiling points of the pure components means that the vapor composition is usually not the same as the liquid composition the vapor is in equilibrium with. See Vapor–liquid equilibrium for more information.
In addition to the above-mentioned types of phase diagrams, there are many other possible combinations. Some of the major features of phase diagrams include congruent points, where a solid phase transforms directly into a liquid. There is also the peritectoid, a point where two solid phases combine into one solid phase during cooling. The inverse of this, when one solid phase transforms into two solid phases during cooling, is called the eutectoid.
A complex phase diagram of great technological importance is that of the iron–carbon system for less than 7% carbon (see steel).
The x-axis of such a diagram represents the concentration variable of the mixture. As the mixtures are typically far from dilute and their density as a function of temperature is usually unknown, the preferred concentration measure is mole fraction. A volume-based measure like molarity would be inadvisable.
Ternary phase diagrams
A system with three components is called a ternary system. At constant pressure the maximum number of independent variables is three – the temperature and two concentration values. For a representation of ternary equilibria a three-dimensional phase diagram is required. Often such a diagram is drawn with the composition as a horizontal plane and the temperature on an axis perpendicular to this plane. To represent composition in a ternary system an equilateral triangle is used, called Gibbs triangle (see also Ternary plot).
The temperature scale is plotted on the axis perpendicular to the composition triangle. Thus, the space model of a ternary phase diagram is a right-triangular prism. The prism sides represent corresponding binary systems A-B, B-C, A-C.
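As a small sketch of how a ternary composition is placed on the Gibbs triangle (the vertex placement below, with A at the origin, B to the right and C at the apex, is an assumed convention for illustration):

import math

def gibbs_xy(a, b, c):
    # Map mole fractions (a, b, c) summing to 1 onto an equilateral triangle
    # with A at (0, 0), B at (1, 0) and C at the apex (0.5, sqrt(3)/2).
    s = a + b + c                  # renormalize to guard against rounding
    b, c = b / s, c / s
    x = b + 0.5 * c
    y = (math.sqrt(3) / 2.0) * c
    return x, y

print(gibbs_xy(0.2, 0.3, 0.5))     # a point in the interior of the triangle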
However, the most common methods to present phase equilibria in a ternary system are the following:
1) projections on the concentration triangle ABC of the liquidus, solidus, solvus surfaces;
2) isothermal sections;
3) vertical sections.
Crystals
Polymorphic and polyamorphic substances have multiple crystal or amorphous phases, which can be graphed in a similar fashion to solid, liquid, and gas phases.
Mesophases
Some organic materials pass through intermediate states between solid and liquid; these states are called mesophases. Attention has been directed to mesophases because they enable display devices and have become commercially important through the so-called liquid-crystal technology. Phase diagrams are used to describe the occurrence of mesophases.
See also
CALPHAD (method)
Computational thermodynamics
Congruent melting and incongruent melting
Gibbs phase rule
Glass databases
Hamiltonian mechanics
Phase separation
Saturation dome
Schreinemaker's analysis
Simple phase envelope algorithm
References
External links
Iron-Iron Carbide Phase Diagram Example
How to build a phase diagram
Phase Changes: Phase Diagrams: Part 1
Equilibrium Fe-C phase diagram
Phase diagrams for lead free solders
DoITPoMS Phase Diagram Library
DoITPoMS Teaching and Learning Package – "Phase Diagrams and Solidification"
Phase Diagrams: The Beginning of Wisdom – Open Access Journal Article
Binodal curves, tie-lines, lever rule and invariant points – How to read phase diagrams (Video by SciFox on TIB AV-Portal)
The Alloy Phase Diagram International Commission (APDIC)
Periodic table of phase diagrams of the elements (pdf poster)
Diagram
Equilibrium chemistry
Materials science
Metallurgy
Charts
Diagrams
Gases
Chemical engineering thermodynamics | Phase diagram | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,576 | [
"Matter",
"Phase transitions",
"Physical phenomena",
"Applied and interdisciplinary physics",
"Metallurgy",
"Chemical engineering",
"Phases of matter",
"Critical phenomena",
"Materials science",
"Equilibrium chemistry",
"Chemical engineering thermodynamics",
"nan",
"Statistical mechanics",
... |
56,434 | https://en.wikipedia.org/wiki/Julia%20set | In complex dynamics, the Julia set and the Fatou set are two complementary sets (Julia "laces" and Fatou "dusts") defined from a function. Informally, the Fatou set of the function consists of values with the property that all nearby values behave similarly under repeated iteration of the function, and the Julia set consists of values such that an arbitrarily small perturbation can cause drastic changes in the sequence of iterated function values.
Thus the behavior of the function on the Fatou set is "regular", while on the Julia set its behavior is "chaotic".
The Julia set of a function is commonly denoted and the Fatou set is denoted These sets are named after the French mathematicians Gaston Julia and Pierre Fatou whose work began the study of complex dynamics during the early 20th century.
Formal definition
Let f(z) be a non-constant meromorphic function from the Riemann sphere onto itself. Such functions f(z) are precisely the non-constant complex rational functions, that is, f(z) = p(z) / q(z), where p(z) and q(z) are complex polynomials. Assume that p and q have no common roots, and at least one has degree larger than 1. Then there is a finite number of open sets F1, ..., Fr that are left invariant by f(z) and are such that:
The union of the sets Fi is dense in the plane and
f(z) behaves in a regular and equal way on each of the sets Fi.
The last statement means that the termini of the sequences of iterations generated by the points of are either precisely the same set, which is then a finite cycle, or they are finite cycles of circular or annular shaped sets that are lying concentrically. In the first case the cycle is attracting, in the second case it is neutral.
These sets are the Fatou domains of f(z), and their union is the Fatou set F(f) of f(z). Each of the Fatou domains contains at least one critical point of f(z), that is, a (finite) point z satisfying f′(z) = 0, or z = ∞ if the degree of the numerator p(z) is at least two larger than the degree of the denominator q(z), or if f(z) = 1/g(1/z) + c for some c and a rational function g(z) satisfying this condition.
The complement of F(f) is the Julia set J(f) of f(z). If all the critical points are preperiodic, that is they are not periodic but eventually land on a periodic cycle, then J(f) is all the sphere. Otherwise, J(f) is a nowhere dense set (it is without interior points) and an uncountable set (of the same cardinality as the real numbers). Like F(f), J(f) is left invariant by f(z), and on this set the iteration is repelling, meaning that |f(z) − f(w)| > |z − w| for all w in a neighbourhood of z (within J(f)). This means that f(z) behaves chaotically on the Julia set. Although there are points in the Julia set whose sequence of iterations is finite, there are only a countable number of such points (and they make up an infinitesimal part of the Julia set). The sequences generated by points outside this set behave chaotically, a phenomenon called deterministic chaos.
There has been extensive research on the Fatou set and Julia set of iterated rational functions, known as rational maps. For example, it is known that the Fatou set of a rational map has either 0, 1, 2 or infinitely many components. Each component of the Fatou set of a rational map can be classified into one of four different classes.
Equivalent descriptions of the Julia set
J(f) is the smallest closed set containing at least three points which is completely invariant under f.
J(f) is the closure of the set of repelling periodic points.
For all but at most two points z, the Julia set is the set of limit points of the full backwards orbit of z. (This suggests a simple algorithm for plotting Julia sets, see below.)
If f is an entire function, then J(f) is the boundary of the set of points which converge to infinity under iteration.
If f is a polynomial, then J(f) is the boundary of the filled Julia set; that is, those points whose orbits under iterations of f remain bounded.
Properties of the Julia set and Fatou set
The Julia set and the Fatou set of f are both completely invariant under iterations of the holomorphic function f; that is, f^(−1)(J(f)) = J(f) = f(J(f)) and f^(−1)(F(f)) = F(f) = f(F(f)).
Examples
For f(z) = z^2 the Julia set is the unit circle and on this the iteration is given by doubling of angles (an operation that is chaotic on the points whose argument is not a rational fraction of 2π). There are two Fatou domains: the interior and the exterior of the circle, with iteration towards 0 and ∞, respectively.
For g(z) = z^2 − 2 the Julia set is the line segment between −2 and 2. There is one Fatou domain: the points not on the line segment iterate towards ∞. (Apart from a shift and scaling of the domain, this iteration is equivalent to x → 4x(1 − x) on the unit interval, which is commonly used as an example of chaotic system.)
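That equivalence can be checked directly; the change of variables z = 2 − 4x used in the sketch below is one standard choice of the "shift and scaling" (an assumption made explicit here, not stated above):

# Check that h(x) = 2 - 4x conjugates the logistic map to g(z) = z^2 - 2,
# i.e. that g(h(x)) = h(4x(1 - x)) for x in [0, 1].
g = lambda z: z * z - 2
logistic = lambda x: 4 * x * (1 - x)
h = lambda x: 2 - 4 * x

for x in [0.0, 0.1, 0.37, 0.5, 0.9, 1.0]:
    assert abs(g(h(x)) - h(logistic(x))) < 1e-12   # holds identically
print("conjugacy verified")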
The functions f and g are of the form z^2 + c, where c is a complex number. For such an iteration the Julia set is not in general a simple curve, but is a fractal, and for some values of c it can take surprising shapes. See the pictures below.
For some functions f(z) we can say beforehand that the Julia set is a fractal and not a simple curve. This is because of the following result on the iterations of a rational function: each of the Fatou domains has the same boundary, which consequently is the Julia set.
This means that each point of the Julia set is a point of accumulation for each of the Fatou domains. Therefore, if there are more than two Fatou domains, each point of the Julia set must have points of more than two different open sets infinitely close, and this means that the Julia set cannot be a simple curve. This phenomenon happens, for instance, when f(z) is the Newton iteration for solving the equation z^n = 1:
f(z) = z − (z^n − 1) / (n z^(n−1)).
The image on the right shows the case n = 3.
Quadratic polynomials
A very popular complex dynamical system is given by the family of complex quadratic polynomials, a special case of rational maps. Such quadratic polynomials can be expressed as
fc(z) = z^2 + c,
where c is a complex parameter. Fix some R > 0 large enough that R^2 − R ≥ |c|. (For example, if c is in the Mandelbrot set, then |c| ≤ 2, so we may simply let R = 2.) Then the filled Julia set for this system is the subset of the complex plane given by
K(fc) = { z ∈ C : |fc^n(z)| ≤ R for all n ≥ 0 },
where fc^n(z) is the nth iterate of fc(z). The Julia set of this function is the boundary of K(fc).
The parameter plane of quadratic polynomials – that is, the plane of possible c values – gives rise to the famous Mandelbrot set. Indeed, the Mandelbrot set is defined as the set of all c such that is connected. For parameters outside the Mandelbrot set, the Julia set is a Cantor space: in this case it is sometimes referred to as Fatou dust.
In many cases, the Julia set of c looks like the Mandelbrot set in sufficiently small neighborhoods of c. This is true, in particular, for so-called Misiurewicz parameters, i.e. parameters c for which the critical point is pre-periodic. For instance:
At c = i, the shorter, front toe of the forefoot, the Julia set looks like a branched lightning bolt.
At c = −2, the tip of the long spiky tail, the Julia set is a straight line segment.
In other words, the Julia sets are locally similar around Misiurewicz points.
Generalizations
The definition of Julia and Fatou sets easily carries over to the case of certain maps whose image contains their domain; most notably transcendental meromorphic functions and Adam Epstein's finite-type maps.
Julia sets are also commonly defined in the study of dynamics in several complex variables.
Pseudocode
The below pseudocode implementations hard-code the functions for each fractal. Consider implementing complex number operations to allow for more dynamic and reusable code.
Pseudocode for normal Julia sets
R = escape radius # choose R > 0 such that R**2 - R >= sqrt(cx**2 + cy**2)
for each pixel (x, y) on the screen, do:
{
zx = scaled x coordinate of pixel; # (scale to be between -R and R)
# zx represents the real part of z.
zy = scaled y coordinate of pixel; # (scale to be between -R and R)
# zy represents the imaginary part of z.
iteration = 0;
max_iteration = 1000;
while (zx * zx + zy * zy < R**2 AND iteration < max_iteration)
{
xtemp = zx * zx - zy * zy;
zy = 2 * zx * zy + cy;
zx = xtemp + cx;
iteration = iteration + 1;
}
if (iteration == max_iteration)
return black;
else
return iteration;
}
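For reference, a runnable vectorized counterpart of this pseudocode in Python/NumPy (a minimal sketch; the image size, iteration cap and the particular parameter c are arbitrary choices):

import numpy as np

def julia_counts(cx, cy, width=600, height=600, max_iteration=256):
    R = 2.0                                  # valid whenever R*R - R >= |c|, i.e. |c| <= 2
    re = np.linspace(-R, R, width)
    im = np.linspace(-R, R, height)
    zx, zy = np.meshgrid(re, im)             # real and imaginary parts of each pixel's z
    counts = np.zeros(zx.shape, dtype=int)
    for i in range(max_iteration):
        alive = zx * zx + zy * zy < R * R    # pixels that have not escaped yet
        counts[alive] = i                    # last iteration at which the pixel was inside
        zxa, zya = zx[alive], zy[alive]
        zx[alive] = zxa * zxa - zya * zya + cx
        zy[alive] = 2.0 * zxa * zya + cy
    return counts                            # never-escaping pixels end at max_iteration - 1

counts = julia_counts(-0.8, 0.156)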
Pseudocode for multi-Julia sets
R = escape radius # choose R > 0 such that R**n - R >= sqrt(cx**2 + cy**2)
for each pixel (x, y) on the screen, do:
{
zx = scaled x coordinate of pixel; # (scale to be between -R and R)
zy = scaled y coordinate of pixel; # (scale to be between -R and R)
iteration = 0;
max_iteration = 1000;
while (zx * zx + zy * zy < R**2 AND iteration < max_iteration)
{
xtmp = (zx * zx + zy * zy) ^ (n / 2) * cos(n * atan2(zy, zx)) + cx;
zy = (zx * zx + zy * zy) ^ (n / 2) * sin(n * atan2(zy, zx)) + cy;
zx = xtmp;
iteration = iteration + 1;
}
if (iteration == max_iteration)
return black;
else
return iteration;
}
Another recommended option is to reduce color banding between iterations by using a renormalization formula for the iteration.
Such formula is given to be
ν = k + 1 − log(log|z_k|) / log(n),
where k is the escaping iteration, bounded by some maximum N such that 0 ≤ k ≤ N and |z_k| ≥ R, and |z_k| is the magnitude of the last iterate before escaping.
This can be implemented, very simply, like so:
# simply replace the last 4 lines of code from the last example with these lines of code:
if(iteration == max_iteration)
return black;
else
    abs_z = zx * zx + zy * zy; # the squared magnitude |z|^2 of the last iterate
    return iteration + 1 - log(log(abs_z))/log(n); # using |z|^2 instead of |z| only shifts the result by a constant
The difference is shown below with a Julia set defined by an iteration of the form z^n + c.
The potential function and the real iteration number
The Julia set for f(z) = z^2 is the unit circle, and on the outer Fatou domain, the potential function φ(z) is defined by φ(z) = log|z|. The equipotential lines for this function are concentric circles. As |z_k| = |z|^(2^k) we have
φ(z) = lim_(k→∞) log|z_k| / 2^k,
where z_k is the sequence of iteration generated by z. For the more general iteration f(z) = z^2 + c, it has been proved that if the Julia set is connected (that is, if c belongs to the (usual) Mandelbrot set), then there exist a biholomorphic map ψ between the outer Fatou domain and the outer of the unit circle such that ψ(f(z)) = ψ(z)^2. This means that the potential function on the outer Fatou domain defined by this correspondence is given by:
φ(z) = log|ψ(z)| = lim_(k→∞) log|z_k| / 2^k.
This formula has meaning also if the Julia set is not connected, so that we for all c can define the potential function on the Fatou domain containing ∞ by this formula. For a general rational function f(z) such that ∞ is a critical point and a fixed point, that is, such that the degree m of the numerator is at least two larger than the degree n of the denominator, we define the potential function on the Fatou domain containing ∞ by:
φ(z) = lim_(k→∞) log|z_k| / d^k,
where d = m − n is the degree of the rational function.
If N is a very large number (e.g. 10^100), and if k is the first iteration number such that |z_k| > N, we have that
log|z_k| / d^k = log(N) / d^(ν(z))
for some real number ν(z), which should be regarded as the real iteration number, and we have that:
ν(z) = k − log(log|z_k| / log(N)) / log(d),
where the last number is in the interval [0, 1).
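Specializing to the quadratic family f(z) = z^2 + c (so d = 2), the real iteration number can be computed as in the sketch below, written against the reconstruction of the formula given above (the threshold N and the iteration cap are arbitrary choices):

import math

def real_iteration_number(z, c, N=1e20, kmax=1000):
    k = 0
    while abs(z) <= N and k < kmax:
        z = z * z + c                       # one step of the iteration
        k += 1
    if k == kmax:
        return None                         # never escaped: z is (numerically) in the filled Julia set
    # nu(z) = k - log(log|z_k| / log N) / log d with d = 2; the subtracted term is in [0, 1)
    return k - math.log(math.log(abs(z)) / math.log(N)) / math.log(2)

print(real_iteration_number(1.5 + 0.1j, -0.4 + 0.6j))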
For iteration towards a finite attracting cycle of order r, we have that if z* is a point of the cycle, then f^r(z*) = z* (the r-fold composition), and the number
1/|α|, where α = (f^r)′(z*) is the derivative of the r-fold composition at a point of the cycle,
is the attraction of the cycle. If w is a point very near z* and w′ is w iterated r times, we have that
|w′ − z*| ≈ |α| |w − z*|.
Therefore, the number is almost independent of k. We define the potential function on the Fatou domain by:
If ε is a very small number and k is the first iteration number such that , we have that
for some real number , which should be regarded as the real iteration number, and we have that:
If the attraction is ∞, meaning that the cycle is super-attracting, meaning again that one of the points of the cycle is a critical point, we must replace α by
where w′ is w iterated r times and the formula for φ(z) by:
And now the real iteration number is given by:
For the colouring we must have a cyclic scale of colours (constructed mathematically, for instance) and containing H colours numbered from 0 to H−1 (H = 500, for instance). We multiply the real iteration number ν(z) by a fixed real number determining the density of the colours in the picture, and take the integral part of this number modulo H.
The definition of the potential function and our way of colouring presuppose that the cycle is attracting, that is, not neutral. If the cycle is neutral, we cannot colour the Fatou domain in a natural way. As the terminus of the iteration is a revolving movement, we can, for instance, colour by the minimum distance from the cycle left fixed by the iteration.
Field lines
In each Fatou domain (that is not neutral) there are two systems of lines orthogonal to each other: the equipotential lines (for the potential function or the real iteration number) and the field lines.
If we colour the Fatou domain according to the iteration number (and not the real iteration number , as defined in the previous section), the bands of iteration show the course of the equipotential lines. If the iteration is towards ∞ (as is the case with the outer Fatou domain for the usual iteration ), we can easily show the course of the field lines, namely by altering the colour according as the last point in the sequence of iteration is above or below the x-axis (first picture), but in this case (more precisely: when the Fatou domain is super-attracting) we cannot draw the field lines coherently - at least not by the method we describe here. In this case a field line is also called an external ray.
Let z be a point in the attracting Fatou domain. If we iterate z a large number of times, the terminus of the sequence of iteration is a finite cycle C, and the Fatou domain is (by definition) the set of points whose sequence of iteration converges towards C. The field lines issue from the points of C and from the (infinite number of) points that iterate into a point of C. And they end on the Julia set in points that are non-chaotic (that is, generating a finite cycle). Let r be the order of the cycle C (its number of points) and let z* be a point in C. We have f^r(z*) = z* (the r-fold composition), and we define the complex number α by
α = (d(f^r(z)) / dz) evaluated at z = z*.
If the points of C are z*_1, ..., z*_r, α is the product of the r numbers f′(z*_i). The real number 1/|α| is the attraction of the cycle, and our assumption that the cycle is neither neutral nor super-attracting means that 0 < |α| < 1. The point z* is a fixed point for f^r(z), and near this point the map f^r(z) has (in connection with field lines) character of a rotation with the argument β of α (that is, α = |α| e^(iβ)).
In order to colour the Fatou domain, we have chosen a small number ε and set the sequences of iteration z_k to stop when |z_k − z*| < ε, and we colour the point z according to the number k (or the real iteration number, if we prefer a smooth colouring). If we choose a direction from z* given by an angle θ, the field line issuing from z* in this direction consists of the points z such that the argument ψ of the number z_k − z* satisfies the condition that
ψ − kβ = θ.
For if we pass an iteration band in the direction of the field lines (and away from the cycle), the iteration number k is increased by 1 and the number ψ is increased by β, therefore the number ψ − kβ is constant along the field line.
A colouring of the field lines of the Fatou domain means that we colour the spaces between pairs of field lines: we choose a number of regularly situated directions issuing from , and in each of these directions we choose two directions around this direction. As it can happen that the two field lines of a pair do not end in the same point of the Julia set, our coloured field lines can ramify (endlessly) in their way towards the Julia set. We can colour on the basis of the distance to the center line of the field line, and we can mix this colouring with the usual colouring. Such pictures can be very decorative (second picture).
A coloured field line (the domain between two field lines) is divided up by the iteration bands, and such a part can be put into a one-to-one correspondence with the unit square: the one coordinate is (calculated from) the distance from one of the bounding field lines, the other is (calculated from) the distance from the inner of the bounding iteration bands (this number is the non-integral part of the real iteration number). Therefore, we can put pictures into the field lines (third picture).
Plotting the Julia set
Methods :
Distance Estimation Method for Julia set (DEM/J)
Inverse Iteration Method (IIM)
Using backwards (inverse) iteration (IIM)
As mentioned above, the Julia set can be found as the set of limit points of the set of pre-images of (essentially) any given point. So we can try to plot the Julia set of a given function as follows. Start with any point z we know to be in the Julia set, such as a repelling periodic point, and compute all pre-images of z under some high iterate of f.
Unfortunately, as the number of iterated pre-images grows exponentially, this is not feasible computationally. However, we can adjust this method, in a similar way as the "random game" method for iterated function systems. That is, in each step, we choose at random one of the inverse images of f.
For example, for the quadratic polynomial fc, the backwards iteration is described by
z ↦ ± √(z − c).
At each step, one of the two square roots is selected at random.
Note that certain parts of the Julia set are quite difficult to access with the reverse Julia algorithm. For this reason, one must modify IIM/J (it is then called MIIM/J) or use other methods to produce better images.
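A minimal sketch of the random inverse-iteration game for fc(z) = z^2 + c in Python (plotting is omitted; the starting point, burn-in length and sample count are arbitrary choices):

import cmath, random

def julia_points_iim(c, n_points=20000, burn_in=20):
    z = complex(1.0, 0.0)              # (essentially) any starting point works
    points = []
    for i in range(n_points + burn_in):
        z = cmath.sqrt(z - c)          # one of the two inverse images of z under z^2 + c
        if random.random() < 0.5:
            z = -z                     # pick the other square root half of the time
        if i >= burn_in:               # discard the first few transient points
            points.append(z)
    return points

points = julia_points_iim(-0.4 + 0.6j)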
Using DEM/J
As a Julia set is infinitely thin we cannot draw it effectively by backwards iteration from the pixels. It will appear fragmented because of the impracticality of examining infinitely many startpoints. Since the iteration count changes vigorously near the Julia set, a partial solution is to imply the outline of the set from the nearest color contours, but the set will tend to look muddy.
A better way to draw the Julia set in black and white is to estimate the distance of pixels (DEM) from the set and to color every pixel whose center is close to the set. The formula for the distance estimation is derived from the formula for the potential function φ(z). When the equipotential lines for φ(z) lie close, the number |φ′(z)| is large, and conversely, therefore the equipotential lines for the function δ(z) = φ(z) / |φ′(z)| should lie approximately regularly. It has been proven that the value found by this formula (up to a constant factor) converges towards the true distance for z converging towards the Julia set.
We assume that f(z) is rational, that is, f(z) = p(z) / q(z) where p(z) and q(z) are complex polynomials of degrees m and n, respectively, and we have to find the derivative of the above expressions for φ(z). And as it is only z_k that varies, we must calculate the derivative z′_k of z_k with respect to z. But as z_k is the k-fold composition of f with itself, z′_k is the product of the numbers f′(z_i), and this sequence can be calculated recursively by z′_(k+1) = f′(z_k) z′_k, starting with z′_0 = 1 (before the calculation of the next iteration z_(k+1) = f(z_k)).
For iteration towards ∞ (more precisely when m ≥ n + 2, so that ∞ is a super-attracting fixed point), we have
|φ′(z)| ≈ |z′_k| / (|z_k| d^k)
(with d = m − n) and consequently:
δ(z) = φ(z) / |φ′(z)| ≈ |z_k| log|z_k| / |z′_k|.
For iteration towards a finite attracting cycle (that is not super-attracting) containing the point and having order r, we have
and consequently:
For a super-attracting cycle, the formula is:
We calculate this number when the iteration stops. Note that the distance estimation is independent of the attraction of the cycle. This means that it has meaning for transcendental functions of "degree infinity" (e.g. sin(z) and tan(z)).
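A sketch of the resulting exterior distance estimate for the quadratic family, iterating the derivative z′_k alongside z_k exactly as in the recursion above (the escape radius, iteration cap and test point are arbitrary choices, and the estimate is only claimed up to a constant factor):

import math

def distance_estimate(z, c, R=1e10, kmax=256):
    dz = 1.0 + 0.0j                     # z'_0 = 1
    for _ in range(kmax):
        if abs(z) > R:
            break
        dz = 2.0 * z * dz               # z'_{k+1} = f'(z_k) z'_k, with f'(z) = 2z
        z = z * z + c
    r = abs(z)
    if r <= R:
        return 0.0                      # did not escape: treat as on or inside the set
    return r * math.log(r) / abs(dz)    # |z_k| log|z_k| / |z'_k|, up to a constant factor

print(distance_estimate(2.0 + 0.5j, -0.8 + 0.156j))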
Besides drawing of the boundary, the distance function can be introduced as a 3rd dimension to create a solid fractal landscape.
See also
Douady rabbit
Limit set
Stable and unstable sets
No wandering domain theorem
Chaos theory
Notes
References
Bibliography
External links
– Windows, 370 kB
– one of the applets can render Julia sets, via Iterated Function Systems.
– A visual explanation of Julia Sets.
– Mandelbrot, Burning ship and corresponding Julia set generator.
- A visual explanation.
Fractals
Limit sets
Complex dynamics
Articles containing video clips
Articles with example pseudocode | Julia set | [
"Mathematics"
] | 4,482 | [
"Limit sets",
"Functions and mappings",
"Complex dynamics",
"Mathematical analysis",
"Mathematical objects",
"Fractals",
"Topology",
"Mathematical relations",
"Dynamical systems"
] |
56,437 | https://en.wikipedia.org/wiki/Infrared%20astronomy | Infrared astronomy is a sub-discipline of astronomy which specializes in the observation and analysis of astronomical objects using infrared (IR) radiation. The wavelength of infrared light ranges from 0.75 to 300 micrometers, and falls in between visible radiation, which ranges from 380 to 750 nanometers, and submillimeter waves.
Infrared astronomy began in the 1830s, a few decades after the discovery of infrared light by William Herschel in 1800. Early progress was limited, and it was not until the early 20th century that conclusive detections of astronomical objects other than the Sun and Moon were made in infrared light. After a number of discoveries were made in the 1950s and 1960s in radio astronomy, astronomers realized the information available outside the visible wavelength range, and modern infrared astronomy was established.
Infrared and optical astronomy are often practiced using the same telescopes, as the same mirrors or lenses are usually effective over a wavelength range that includes both visible and infrared light. Both fields also use solid state detectors, though the specific type of solid state photodetectors used are different. Infrared light is absorbed at many wavelengths by water vapor in the Earth's atmosphere, so most infrared telescopes are at high elevations in dry places, above as much of the atmosphere as possible. There have also been infrared observatories in space, including the Spitzer Space Telescope, the Herschel Space Observatory, and more recently the James Webb Space Telescope.
History
The discovery of infrared radiation is attributed to William Herschel, who performed an experiment in 1800 where he placed a thermometer in sunlight of different colors after it passed through a prism. He noticed that the temperature increase induced by sunlight was highest outside the visible spectrum, just beyond the red color. That the temperature increase was highest at infrared wavelengths was due to the spectral response of the prism rather than properties of the Sun, but the fact that there was any temperature increase at all prompted Herschel to deduce that there was invisible radiation from the Sun. He dubbed this radiation "calorific rays", and went on to show that it could be reflected, transmitted, and absorbed just like visible light.
Efforts were made starting in the 1830s and continuing through the 19th century to detect infrared radiation from other astronomical sources. Radiation from the Moon was first detected in 1856 by Charles Piazzi Smyth, the Astronomer Royal for Scotland, during an expedition to Tenerife to test his ideas about mountain top astronomy. Ernest Fox Nichols used a modified Crookes radiometer in an attempt to detect infrared radiation from Arcturus and Vega, but Nichols deemed the results inconclusive. Even so, the ratio of flux he reported for the two stars is consistent with the modern value, so George Rieke gives Nichols credit for the first detection of a star other than our own in the infrared.
The field of infrared astronomy continued to develop slowly in the early 20th century, as Seth Barnes Nicholson and Edison Pettit developed thermopile detectors capable of accurate infrared photometry of a few hundred stars. The field was mostly neglected by traditional astronomers until the 1960s, with most scientists who practiced infrared astronomy having actually been trained as physicists. The success of radio astronomy during the 1950s and 1960s, combined with the improvement of infrared detector technology, prompted more astronomers to take notice, and infrared astronomy became well established as a subfield of astronomy.
Infrared space telescopes then entered service. In 1983, IRAS made an all-sky survey. In 1995, the European Space Agency launched the Infrared Space Observatory. Before this satellite ran out of liquid helium in 1998, it discovered protostars and detected water in many astronomical environments, including the atmospheres of Saturn and Uranus.
On 25 August 2003, NASA launched the Spitzer Space Telescope, previously known as the Space Infrared Telescope Facility. In 2009, the telescope ran out of liquid helium and lost the ability to see far infrared; by then it had discovered stars, the Double Helix Nebula, and light from extrasolar planets, and it continued working in the 3.6 and 4.5 micrometer bands. Since then, other infrared telescopes have helped find newly forming stars, nebulae, and stellar nurseries. Infrared telescopes have opened up a whole new part of the galaxy for us. They are also useful for observing extremely distant objects, such as quasars: quasars move away from Earth, and the resulting large redshift makes them difficult targets for an optical telescope, whereas infrared telescopes give much more information about them.
During May 2008, an international group of infrared astronomers showed that intergalactic dust greatly dims the light of distant galaxies; in actuality, galaxies are almost twice as bright as they look. The dust absorbs much of the visible light and re-emits it as infrared light.
Modern infrared astronomy
Infrared radiation with wavelengths just longer than visible light, known as near-infrared, behaves in a very similar way to visible light, and can be detected using similar solid state devices (because of this, many quasars, stars, and galaxies were discovered). For this reason, the near infrared region of the spectrum is commonly incorporated as part of the "optical" spectrum, along with the near ultraviolet. Many optical telescopes, such as those at Keck Observatory, operate effectively in the near infrared as well as at visible wavelengths. The far-infrared extends to submillimeter wavelengths, which are observed by telescopes such as the James Clerk Maxwell Telescope at Mauna Kea Observatory.
Like all other forms of electromagnetic radiation, infrared is utilized by astronomers to study the universe. Indeed, infrared measurements taken by the 2MASS and WISE astronomical surveys have been particularly effective at unveiling previously undiscovered star clusters. Examples of such embedded star clusters are FSR 1424, FSR 1432, Camargo 394, Camargo 399, Majaess 30, and Majaess 99. Infrared telescopes, which include most major optical telescopes as well as a few dedicated infrared telescopes, need to be chilled with liquid nitrogen and shielded from warm objects. The reason for this is that objects with temperatures of a few hundred kelvins emit most of their thermal energy at infrared wavelengths. If infrared detectors were not kept cold, the radiation from the detector itself would contribute noise that would dwarf the radiation from any celestial source. This is particularly important in the mid-infrared and far-infrared regions of the spectrum.
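Wien's displacement law makes this concrete: the peak wavelength of a thermal emitter is λ_peak = b / T. The short sketch below (with the standard value of Wien's constant) shows that a room-temperature object radiates mainly near 10 μm, squarely in the mid-infrared:

# Wien's displacement law: lambda_peak = b / T
b = 2.8978e-3                             # Wien's displacement constant, m*K

for T in (300.0, 77.0, 5778.0):           # room temperature, liquid nitrogen, the Sun
    peak_um = (b / T) * 1e6               # peak wavelength in micrometres
    print(f"T = {T:7.1f} K -> peak near {peak_um:6.2f} um")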
To achieve higher angular resolution, some infrared telescopes are combined to form astronomical interferometers. The effective resolution of an interferometer is set by the distance between the telescopes, rather than the size of the individual telescopes. When used together with adaptive optics, infrared interferometers, such as two 10 meter telescopes at Keck Observatory or the four 8.2 meter telescopes that make up the Very Large Telescope Interferometer, can achieve high angular resolution.
The principal limitation on infrared sensitivity from ground-based telescopes is the Earth's atmosphere. Water vapor absorbs a significant amount of infrared radiation, and the atmosphere itself emits at infrared wavelengths. For this reason, most infrared telescopes are built in very dry places at high altitude, so that they are above most of the water vapor in the atmosphere. Suitable locations on Earth include Mauna Kea Observatory at 4205 meters above sea level, the Paranal Observatory at 2635 meters in Chile and regions of high altitude ice-desert such as Dome C in Antarctica. Even at high altitudes, the transparency of the Earth's atmosphere is limited except in infrared windows, or wavelengths where the Earth's atmosphere is transparent.
As is the case for visible light telescopes, space is the ideal place for infrared telescopes. Telescopes in space can achieve higher resolution, as they do not suffer from blurring caused by the Earth's atmosphere, and are also free from infrared absorption caused by the Earth's atmosphere. Infrared telescopes that have operated in space include the Herschel Space Observatory, the Spitzer Space Telescope, the Wide-field Infrared Survey Explorer and the James Webb Space Telescope. Since putting telescopes in orbit is expensive, there are also airborne observatories, such as the Stratospheric Observatory for Infrared Astronomy and the Kuiper Airborne Observatory. These observatories fly above most, but not all, of the atmosphere, and water vapor in the atmosphere absorbs some of the infrared light from space.
Infrared technology
One of the most common types of infrared detector array used at research telescopes is the HgCdTe array. These operate well at wavelengths between 0.6 and 5 micrometres. For longer-wavelength observations or higher sensitivity, other detectors may be used, including other narrow-gap semiconductor detectors, low-temperature bolometer arrays or photon-counting superconducting tunnel junction arrays.
Special requirements for infrared astronomy include: very low dark currents to allow long integration times, associated low noise readout circuits and sometimes very high pixel counts.
Low temperature is often achieved by a coolant, which can run out. Space missions have either ended or shifted to "warm" observations when the coolant supply was used up. For example, WISE ran out of coolant in October 2010, about ten months after being launched. (See also NICMOS, Spitzer Space Telescope)
Observatories
Space observatories
Many space telescopes detect electromagnetic radiation in a wavelength range that overlaps at least to some degree with the infrared wavelength range. Therefore it is difficult to define which space telescopes are infrared telescopes. Here the definition of "infrared space telescope" is taken to be a space telescope whose main mission is detecting infrared light.
Nine infrared space telescopes have been operated in space. They are:
Infrared Astronomical Satellite (IRAS), operated 1983 (10 months). A joint mission of US (NASA), UK and the Netherlands.
Infrared Space Observatory (ISO), operated 1995-1998, ESA mission.
Midcourse Space Experiment (MSX), operated 1996-1997, BMDO mission.
Spitzer Space Telescope, operated 2003-2020, NASA mission.
Akari, operated 2006-2011, JAXA mission.
Herschel Space Observatory, operated 2009-2013, ESA mission.
Wide-field Infrared Survey Explorer (WISE), operated 2009-2024, NASA mission.
James Webb Space Telescope (JWST), operated 2022-, NASA mission.
Euclid telescope, operated 2023-, ESA mission.
In addition, SPHEREx is a telescope scheduled for launch in 2025. NASA is also planning to launch the Nancy Grace Roman Space Telescope (NGRST), originally known as the Wide Field InfraRed Space Telescope (WFIRST), in 2027.
Many other smaller space-missions and space-based detectors of infrared radiation have been operated in space. These include the Infrared Telescope (IRT) that flew with the Space Shuttle.
The Submillimeter Wave Astronomy Satellite (SWAS) is sometimes mentioned as an infrared satellite, although it is a submillimeter satellite.
Infrared instruments on space telescopes
For many space telescopes, only some of the instruments are capable of infrared observation. Below are listed some of the most notable of these space observatories and instruments:
Cosmic Background Explorer (COBE) satellite (1989-1993) Diffuse Infrared Background Experiment (DIRBE) instrument
Hubble Space Telescope (1990-) Near Infrared Camera and Multi-Object Spectrometer (NICMOS) instrument (1997-1999, 2002-2008)
Hubble Space Telescope Wide Field Camera 3 (WFC3) camera (2009-) observes infrared.
Airborne observatories
Three airplane-based observatories have been used to study the sky in infrared (other aircraft have also occasionally hosted infrared studies). They are:
Galileo Observatory, a NASA mission. Was active 1965-1973.
Kuiper Airborne Observatory, a NASA mission. Was active 1974-1995.
SOFIA, a NASA-DLR mission. Was active 2010-2022.
Ground-based observatories
Many ground-based infrared telescopes exist around the world. The largest are:
VISTA
UKIRT
IRTF
WIRO
See also
Far-infrared astronomy
Infrared spectroscopy
List of largest infrared telescopes
Radio Galaxy Zoo
References
External links
Cool Cosmos (Caltech/IPAC IR educational resource site)
Infrared Science Archive
Astronomical imaging
Observational astronomy
Infrared imaging | Infrared astronomy | [
"Astronomy"
] | 2,475 | [
"Observational astronomy",
"Astronomical sub-disciplines"
] |
56,458 | https://en.wikipedia.org/wiki/Apraxia | Apraxia is a motor disorder caused by damage to the brain (specifically the posterior parietal cortex or corpus callosum), which causes difficulty with motor planning to perform tasks or movements. The nature of the damage determines the disorder's severity, and the absence of sensory loss or paralysis helps to explain the level of difficulty. Children may be born with apraxia; its cause is unknown, and symptoms are usually noticed in the early stages of development. Apraxia occurring later in life, known as acquired apraxia, is typically caused by traumatic brain injury, stroke, dementia, Alzheimer's disease, brain tumor, or other neurodegenerative disorders. The multiple types of apraxia are categorized by the specific ability and/or body part affected.
The term "apraxia" comes .
Types
The several types of apraxia include:
Apraxia of speech (AOS) is having difficulty planning and coordinating the movements necessary for speech (e.g. potato=totapo, topato). AOS can independently occur without issues in areas such as verbal comprehension, reading comprehension, writing, articulation, or prosody.
Buccofacial or orofacial apraxia, the most common type of apraxia, is the inability to carry out facial movements on demand. For example, an inability to lick one's lips, wink, or whistle when requested to do so. This suggests an inability to carry out volitional movements of the tongue, cheeks, lips, pharynx, or larynx on command.
Constructional apraxia is the inability to draw, construct, or copy simple configurations, such as intersecting shapes. These patients have difficulty copying a simple diagram or drawing basic shapes.
Gait apraxia is the loss of ability to have normal function of the lower limbs such as walking. This is not due to loss of motor or sensory functions.
Ideational/conceptual apraxia is having an inability to conceptualize a task and impaired ability to complete multistep actions. This form of apraxia consists of an inability to select and carry out an appropriate motor program. For example, the patient may complete actions in incorrect orders, such as buttering bread before putting it in the toaster, or putting on shoes before putting on socks. Also, a loss occurs in the ability to voluntarily perform a learned task when given the necessary objects or tools. For instance, if given a screwdriver, these patients may try to write with it as if it were a pen, or try to comb their hair with a toothbrush.
Ideomotor apraxia is having deficits in the ability to plan or complete motor actions that rely on semantic memory. These patients are able to explain how to perform an action, but unable to "imagine" or act out a movement such as "pretend to brush your teeth" or "pucker as though you bit into a sour lemon." When the ability to perform an action automatically when cued remains intact, though, this is known as automatic-voluntary dissociation. For example, they may not be able to pick up a phone when asked to do so, but can perform the action without thinking when the phone rings.
Limb-kinetic apraxia is having the inability to perform precise, voluntary movements of extremities. For example, a person affected by limb apraxia may have difficulty waving hello, tying shoes, or typing on a computer. This type is common in patients who have experienced a stroke, some type of brain trauma, or have Alzheimer's disease.
Oculomotor apraxia is having difficulty moving the eye on command, especially with saccade movements that direct the gaze to targets. This is one of the three major components of Balint's syndrome.
Causes
Apraxia is most often due to a lesion located in the dominant (usually left) hemisphere of the brain, typically in the frontal and parietal lobes. Lesions may be due to stroke, acquired brain injuries, or neurodegenerative diseases such as Alzheimer's disease or other dementias, Parkinson's disease, or Huntington's disease. Also, apraxia possibly may be caused by lesions in other areas of the brain.
Ideomotor apraxia is typically due to a decrease in blood flow to the dominant hemisphere of the brain and particularly the parietal and premotor areas. It is frequently seen in patients with corticobasal degeneration.
Ideational apraxia has been observed in patients with lesions in the dominant hemisphere near areas associated with aphasia, but more research is needed on ideational apraxia due to brain lesions. The localization of lesions in areas of the frontal and temporal lobes would provide explanation for the difficulty in motor planning seen in ideational apraxia, as well as its difficulty to distinguish it from certain aphasias.
Constructional apraxia is often caused by lesions of the inferior nondominant parietal lobe, and can be caused by brain injury, illness, tumor, or other condition that can result in a brain lesion.
Diagnosis
Although qualitative and quantitative studies exist, little consensus exists on the proper method to assess for apraxia. The criticisms of past methods include failure to meet standard psychometric properties and research-specific designs that translate poorly to nonresearch use.
The Test to Measure Upper Limb Apraxia (TULIA) is one method of determining upper limb apraxia through the qualitative and quantitative assessment of gesture production. In contrast to previous publications on apraxic assessment, the reliability and validity of TULIA was thoroughly investigated. The TULIA consists of subtests for the imitation and pantomime of nonsymbolic ("put your index finger on top of your nose"), intransitive ("wave goodbye"), and transitive ("show me how to use a hammer") gestures. Discrimination (differentiating between well- and poorly performed tasks) and recognition (indicating which object corresponds to a pantomimed gesture) tasks are also often tested for a full apraxia evaluation.
However, a strong correlation may not be seen between formal test results and actual performance in everyday functioning or activities of daily living (ADLs). A comprehensive assessment of apraxia should include formal testing, standardized measurements of ADLs, observation of daily routines, self-report questionnaires, and targeted interviews with the patients and their relatives.
As stated above, apraxia should not be confused with aphasia (the inability to understand language); however, they frequently occur together. Apraxia is so often accompanied by aphasia that many believe that if a person displays AOS, it should be assumed that the patient also has some level of aphasia.
Treatment
Treatment for individuals with apraxia includes speech therapy, occupational therapy, and physical therapy. Currently, no medications are indicated for the treatment of apraxia, only therapy treatments. Generally, treatments for apraxia have received little attention for several reasons, including the tendency for the condition to resolve spontaneously in acute cases. Additionally, the very nature of the automatic-voluntary dissociation of motor abilities that defines apraxia means that patients may still be able to automatically perform activities if cued to do so in daily life. Nevertheless, patients experiencing apraxia have less functional independence in their daily lives, and evidence for the treatment of apraxia is scarce. However, a literature review of apraxia treatment to date reveals that although the field is in its early stages of treatment design, certain aspects can be included to treat apraxia.
One method is through rehabilitative treatment, which has been found to positively impact apraxia, as well as ADLs. In this review, rehabilitative treatment consisted of 12 different contextual cues, which were used to teach patients how to produce the same gesture under different contextual situations. Additional studies have also recommended varying forms of gesture therapy, whereby the patient is instructed to make gestures (either using objects or symbolically meaningful and nonmeaningful gestures) with progressively less cuing from the therapist. Patients with apraxia may need to use a form of alternative and augmentative communication depending on the severity of the disorder. In addition to using gestures as mentioned, patients can also use communication boards or more sophisticated electronic devices if needed.
No single type of therapy or approach has been proven as the best way to treat a patient with apraxia, since each patient's case varies. One-on-one sessions usually work the best, though, with the support of family members and friends. Since everyone responds to therapy differently, some patients will make significant improvements, while others will make less progress. The overall goal for treatment of apraxia is to treat the motor plans for speech, not treating at the phoneme (sound) level. Individuals with apraxia of speech should receive treatment that focuses on the repetition of target words and rate of speech. The overall goal for treatment of apraxia should be to improve speech intelligibility, rate of speech, and articulation of targeted words.
See also
Praxis (process)
Ataxia
Aging movement control
Developmental coordination disorder (also known as developmental dyspraxia)
Lists of language disorders
References
Further reading
Kasper, D.L.; Braunwald, E.; Fauci, A.S.; Hauser, S.L.; Longo, D.L.; Jameson, J.L. Harrison's Principles of Internal Medicine. New York: McGraw-Hill, 2005.
Manasco, H. (2014). Introduction to Neurogenic Communication Disorders. Jones & Bartlett Publishers.
External links
Acquired Apraxia of Speech: A Treatment Overview
Apraxia: Symptoms, Causes, Tests, Treatments
ApraxiaKids
GettingTheWordOutOnApraxia.com: A Community for Parents of Children with Apraxia
Complications of stroke
Dementia
Aphasias
Motor control | Apraxia | [
"Biology"
] | 2,064 | [
"Behavior",
"Motor control"
] |
56,478 | https://en.wikipedia.org/wiki/North | North is one of the four compass points or cardinal directions. It is the opposite of south and is perpendicular to east and west. North is a noun, adjective, or adverb indicating direction or geography.
Etymology
The word north is related to the Old High German nord, both descending from the Proto-Indo-European unit *ner-, meaning "left; below" as north is to left when facing the rising sun. Similarly, the other cardinal directions are also related to the sun's position.
The Latin word borealis comes from the Greek boreas "north wind, north" which, according to Ovid, was personified as the wind-god Boreas, the father of Calais and Zetes. Septentrionalis is from septentriones, "the seven plow oxen", a name of Ursa Major. The Greek ἀρκτικός (arktikós) is named for the same constellation, and is the source of the English word Arctic.
Other languages have other derivations. For example, in Lezgian, kefer can mean both "disbelief" and "north", since to the north of the Muslim Lezgian homeland there are areas formerly inhabited by non-Muslim Caucasian and Turkic peoples. In many languages of Mesoamerica, north also means "up".
In Romanian, the old word for north is mĭazănoapte, from Latin mediam noctem, meaning midnight. In Hungarian it is észak, derived from éjszaka ("night"), since between the Tropic of Cancer and the Arctic Circle the Sun never shines from the north.
North is sometimes abbreviated as N.
Mapping and navigation
By convention, the top or upward-facing side of a map is north.
To go north using a compass for navigation, set a bearing or azimuth of 0° or 360°. Traveling directly north traces a meridian line upwards.
North is specifically the direction that, in Western culture, is considered the fundamental direction:
North is used (explicitly or implicitly) to define all other directions.
The (visual) top edges of maps usually correspond to the northern edge of the area represented, unless explicitly stated otherwise or landmarks are considered more useful for that territory than specific directions.
On any rotating astronomical object, north often denotes the side appearing to rotate counterclockwise when viewed from afar along the axis of rotation. However, the International Astronomical Union (IAU) defines the geographic north pole of a planet or any of its satellites in the Solar System as the planetary pole that is in the same celestial hemisphere, relative to the invariable plane of the Solar System, as Earth's north pole. This means some objects, such as Uranus, rotate in the retrograde direction: when seen from the IAU north, the spin is clockwise.
Magnetic north and declination
Magnetic north is of interest because it is the direction indicated as north on a properly functioning (but uncorrected) magnetic compass. The difference between it and true north is called the magnetic declination (or simply the declination where the context is clear). For many purposes and physical circumstances, the error in direction that results from ignoring the distinction is tolerable; in others a mental or instrument compensation, based on assumed knowledge of the applicable declination, can solve all the problems. But simple generalizations on the subject should be treated as unsound, and as likely to reflect popular misconceptions about terrestrial magnetism.
Maps intended for usage in orienteering by compass will clearly indicate the local declination for easy correction to true north. Maps may also indicate grid north, which is a navigational term referring to the direction northwards along the grid lines of a map projection.
Roles of north as prime direction
The visible rotation of the night sky around the visible celestial pole provides a vivid metaphor of that direction corresponding to "up". Thus the choice of the north as corresponding to "up" in the northern hemisphere, or of south in that role in the southern, is, before worldwide communication, anything but an arbitrary one - at least for night-time astronomers. (Note: the southern hemisphere lacks a prominent visible analog to the northern Pole Star.) On the contrary, Chinese and Islamic cultures considered south as the proper "top" end for maps. In the cultures of Polynesia, where navigation played an important role, winds - prevailing local or ancestral - can define cardinal points.
In Western culture:
Maps tend to be drawn for viewing with either true north or magnetic north at the top.
Globes of the earth have the North Pole at the top, or if the Earth's axis is represented as inclined from vertical (normally by the angle it has relative to the axis of the Earth's orbit), in the top half.
Maps are usually labelled to indicate which direction on the map corresponds to a direction on the earth,
usually with a single arrow oriented to the map's representation of true north,
occasionally with a single arrow oriented to the map's representation of magnetic north, or two arrows oriented to true and magnetic north respectively,
occasionally with a compass rose, but if so, usually on a map with north at the top and usually with north decorated more prominently than any other compass point.
"Up" is a metaphor for north. The notion that north should always be "up" and east at the right was established by the Greek astronomer Ptolemy. The historian Daniel Boorstin suggests that perhaps this was because the better-known places in his world were in the northern hemisphere, and on a flat map these were most convenient for study if they were in the upper right-hand corner.
North is quite often associated with colder climates because most of the world's populated land at high latitudes is located in the Northern Hemisphere. The Arctic Circle passes through the Arctic Ocean, Norway, Sweden, Finland, Russia, the United States (Alaska), Canada (Yukon, Northwest Territories and Nunavut), Denmark (Greenland) and Iceland.
Roles of east and west as inherently subsidiary directions
While the choice of north over south as prime direction reflects quite arbitrary historical factors, east and west are not nearly as natural alternatives as first glance might suggest. Their folk definitions are, respectively, "where the sun rises" and "where it sets". Except on the Equator, however, these definitions, taken together, would imply that
east and west would not be 180 degrees apart, but instead would differ from that by up to twice the degrees of latitude of the location in question, and
they would each move slightly from day to day and, in the temperate zones, markedly over the course of the year.
Reasonably accurate folk astronomy, such as is usually attributed to Stone Age peoples or later Celts, would arrive at east and west by noting the directions of rising and setting (preferably more than once each) and choosing as prime direction one of the two mutually opposite directions that lie halfway between those two. The true folk-astronomical definitions of east and west are "the directions, at a right angle from the prime direction, that are closest to the rising and setting, respectively, of the sun (or moon)".
Cultural references
Being the "default" direction on the compass, north is referred to frequently in Western popular culture. Some examples include:
"North of X" is a phrase often used by Americans to mean "more than X" or "greater than X" in relation to the conventional direction of north being upwards, i.e. "The world population is north of 7 billion people" or "north of 40 [years old]".
See also
Nordicity
List of northernmost items
Northing
Northern Light
Septentrional
References
External links
Orientation (geometry) | North | [
"Physics",
"Mathematics"
] | 1,575 | [
"Topology",
"Space",
"Geometry",
"Spacetime",
"Orientation (geometry)"
] |
56,480 | https://en.wikipedia.org/wiki/Q%20fever | Q fever or query fever is a disease caused by infection with Coxiella burnetii, a bacterium that affects humans and other animals. This organism is uncommon, but may be found in cattle, sheep, goats, and other domestic mammals, including cats and dogs. The infection results from inhalation of a spore-like small-cell variant, and from contact with the milk, urine, feces, vaginal mucus, or semen of infected animals. Rarely, the disease is tick-borne. The incubation period can range from . Humans are vulnerable to Q fever, and infection can result from even a few organisms. The bacterium is an obligate intracellular pathogenic parasite.
Signs and symptoms
The incubation period is usually two to three weeks. The most common manifestation is flu-like symptoms: abrupt onset of fever, malaise, profuse perspiration, severe headache, muscle pain, joint pain, loss of appetite, upper respiratory problems, dry cough, pleuritic pain, chills, confusion, and gastrointestinal symptoms, such as nausea, vomiting, and diarrhea. About half of infected individuals exhibit no symptoms.
During its course, the disease can progress to an atypical pneumonia, which can result in a life-threatening acute respiratory distress syndrome, usually occurring during the first four to five days of infection.
Less often, Q fever causes (granulomatous) hepatitis, which may be asymptomatic or become symptomatic with malaise, fever, liver enlargement, and pain in the right upper quadrant of the abdomen. This hepatitis often results in the elevation of transaminase values, although jaundice is uncommon. Q fever can also rarely result in retinal vasculitis.
The chronic form of Q fever is virtually identical to endocarditis (i.e. inflammation of the inner lining of the heart), which can occur months or decades following the infection. It is usually fatal if untreated. However, with appropriate treatment, the mortality falls to around 10%.
A minority of Q fever survivors develop Q fever fatigue syndrome after acute infection, one of the more well-studied post-acute infection syndromes. Q fever fatigue syndrome is characterised by post-exertional malaise and debilitating fatigue. People with Q fever fatigue syndrome frequently meet the diagnostic criteria for myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS). Symptoms often persist years after the initial infection.
Diagnosis
Diagnosis is usually based on serology (looking for an antibody response) rather than looking for the organism itself. Serology allows the detection of chronic infection by the appearance of high levels of the antibody against the virulent form of the bacterium. Molecular detection of bacterial DNA is increasingly used. Unlike most obligate intracellular parasites, Coxiella burnetii can be grown in an axenic culture, but its culture is technically difficult and not routinely available in most microbiology laboratories.
Q fever can cause endocarditis (infection of the heart valves) which may require transoesophageal echocardiography to diagnose. Q fever hepatitis manifests as an elevation of alanine transaminase and aspartate transaminase, but a definitive diagnosis is only possible on liver biopsy, which shows the characteristic fibrin ring granulomas.
Prevention
Research done in the 1960s–1970s by French Canadian-American microbiologist and virologist Paul Fiset was instrumental in the development of the first successful Q fever vaccine.
Protection is offered by Q-Vax, a whole-cell, inactivated vaccine developed by an Australian vaccine manufacturing company, CSL Limited. The intradermal vaccination is composed of killed C. burnetii organisms. Skin and blood tests should be done before vaccination to identify pre-existing immunity because vaccinating people who already have immunity can result in a severe local reaction. After a single dose of vaccine, protective immunity lasts for many years. Revaccination is not generally required. Annual screening is typically recommended.
In 2001, Australia introduced a national Q fever vaccination program for people working in "at-risk" occupations. Vaccinated or previously exposed people may have their status recorded on the Australian Q Fever Register, which may be a condition of employment in the meat processing industry or in veterinary research. An earlier killed vaccine had been developed in the Soviet Union, but its side effects prevented its licensing abroad.
Preliminary results suggest vaccination of animals may be a method of control. Published trials proved that use of a registered phase vaccine (Coxevac) on infected farms is a tool of major interest to manage or prevent early or late abortion, repeat breeding, anoestrus, silent oestrus, metritis, and decreases in milk yield when C. burnetii is the major cause of these problems.
Treatment
Treatment of acute Q fever with antibiotics is very effective. Commonly used antibiotics include doxycycline, tetracycline, chloramphenicol, ciprofloxacin, and ofloxacin; the antimalarial drug hydroxychloroquine is also used. Chronic Q fever is more difficult to treat and can require up to four years of treatment with doxycycline and quinolones or doxycycline with hydroxychloroquine. If a person has chronic Q fever, doxycycline and hydroxychloroquine will be prescribed for at least 18 months. Q fever in pregnancy is especially difficult to treat because doxycycline and ciprofloxacin are contraindicated in pregnancy. The preferred treatment for pregnancy and children under the age of eight is co-trimoxazole.
Epidemiology
Q fever is a globally distributed zoonotic disease caused by a highly persistent and virulent bacterium. The pathogenic agent is found worldwide, with the exception of New Zealand and Antarctica. Understanding the transmission and risk factors of Q fever is crucial for public health due to its potential to cause widespread infection.
Transmission and occupational risks
Transmission primarily occurs through the inhalation of contaminated dust and through contact with contaminated milk, meat, wool, and particularly birthing products. Ticks can transfer the pathogenic agent to other animals. Human-to-human transmission is rare and is usually associated with exposure to birth products, sexual contact, or blood transfusion. Certain occupations pose higher risks for Q fever:
Veterinary personnel
Stockyard workers
Farmers
Sheep shearers
Animal transporters
Laboratory workers handling potentially infected veterinary samples or visiting abattoirs
People who cull and process kangaroos
Hide (tannery) workers
It is important to note that anyone who has contact with animals infected with Q fever bacteria, especially people who work on farms or with animals, is at an increased risk of contracting the disease. Understanding these occupational risks is crucial for public health.
Prevalence and risk factors
Studies indicate a higher prevalence of Q fever in men than in women, potentially linked to occupational exposure rates. Other contributing risk factors include geography, age, and occupational exposure. Diagnosis relies on serological blood testing, with treatment varying for acute and chronic cases. Acute disease often responds to doxycycline, while chronic cases may require a combination of doxycycline and hydroxychloroquine. Q fever became a nationally notifiable disease in the United States in 1999, partly due to its potential as a biowarfare agent.
Q fever exhibits global epidemiological patterns, with higher incidence rates reported in certain countries. In Africa, wild animals in rainforests primarily transmit the disease, making it endemic. Unique patterns are observed in Latin America, but reporting is sporadic and inconsistent between and among countries, making it difficult to track and address.
Recent outbreaks in European countries, including the Netherlands and France, have been linked to urbanized goat farming, raising concerns about the safety of intensive livestock farming practices and the potential risks of zoonotic diseases. Similarly, in the United States, Q fever is more common in livestock farming regions, especially in the West and the Great Plains. California, Texas, and Iowa account for almost 40% of reported cases, with a higher incidence during the spring and early summer when livestock are breeding, regardless of whether the infection is acute or chronic.
These outbreaks have affected a significant number of people, with immunocompromised individuals being more severely impacted. The global nature of Q fever and its association with livestock farming highlight the importance of implementing measures to prevent and control the spread of the disease, particularly in high-risk regions.
Age and occupational exposure
Older men in the West and Great Plains regions, involved in close contact with livestock management, are at a higher risk of contracting chronic Q fever. This risk may be further increased for those with a history of cardiac problems. The disease can manifest years after the initial infection, presenting symptoms such as non-specific fatigue, fever, weight loss, and endocarditis. Additionally, certain populations are more vulnerable to Q fever, including children living in farming communities, who may experience similar symptoms as adults. There have also been reported cases of Q fever among United States military service members, particularly those deployed to Iraq or Afghanistan, which further highlights the importance of understanding and addressing the occupational risks associated with Q fever.
Prevention and public health education
Proper public health education is crucial in reducing the number of Q fever cases. Raising awareness about transmission routes, occupational risks, and preventive measures, such as eliminating unpasteurized milk products from the diet, can help prevent the spread of disease.
Interdisciplinary collaboration between medical personnel and farmers is critical when developing strategies for control and prevention in a community. Awareness campaigns should particularly target occupations that work with livestock, focusing on risk-reduction procedures such as herd monitoring, implementing sanitation practices and personal protective equipment, and vaccinating animals. Locating livestock farms at least 500 meters away from residential areas can also help reduce animal-to-human transmission.
History
Q fever was first described in 1935 by Edward Holbrook Derrick in slaughterhouse workers in Brisbane, Queensland. The "Q" stands for "query" and was applied at a time when the causative agent was unknown; it was chosen over suggestions of abattoir fever and Queensland rickettsial fever, to avoid directing negative connotations at either the cattle industry or the state of Queensland.
The pathogen of Q fever was discovered in 1937, when Frank Macfarlane Burnet and Mavis Freeman isolated the bacterium from one of Derrick's patients. It was originally identified as a species of Rickettsia. H.R. Cox and Gordon Davis elucidated the transmission when they isolated it from ticks found in the US state of Montana in 1938. It is a zoonotic disease whose most common animal reservoirs are cattle, sheep, and goats. Coxiella burnetii – named for Cox and Burnet – is no longer regarded as closely related to the Rickettsiae, but as similar to Legionella and Francisella, and is a Gammaproteobacterium.
Society and culture
An early mention of Q fever was important in one of the early Dr. Kildare films (1939, Calling Dr. Kildare). Kildare's mentor Dr. Gillespie (Lionel Barrymore) tires of his protégé working fruitlessly on "exotic diagnoses" ("I think it's Q fever!") and sends him to work in a neighborhood clinic, instead.
Biological warfare
C. burnetii has been used to develop biological weapons.
The United States investigated it as a potential biological warfare agent in the 1950s, with eventual standardization as agent OU. At Fort Detrick and Dugway Proving Ground, human trials were conducted on Whitecoat volunteers to determine the median infective dose (18 MICLD50/person i.h.) and course of infection. The Deseret Test Center dispensed biological Agent OU with ships and aircraft, during Project 112 and Project SHAD. As a standardized biological, it was manufactured in large quantities at Pine Bluff Arsenal, with 5,098 gallons in the arsenal in bulk at the time of demilitarization in 1970.
C. burnetii is currently ranked as a "category B" bioterrorism agent by the CDC. It can be contagious and is very stable in aerosols in a wide range of temperatures. Q fever microorganisms may survive on surfaces for up to 60 days. It is considered a good agent in part because its ID50 (number of bacilli needed to infect 50% of individuals) is considered to be one, making it the lowest known.
In animals
Q fever can affect many species of domestic and wild animals, including ruminants (cattle, sheep, goats, bison, deer species...), carnivores (dogs, cats, seals...), rodents, reptiles and birds. However, ruminants (cattle, goats, and sheep) are the most frequently affected animals, and can serve as a reservoir for the bacteria.
Clinical signs
In contrast to humans, and although respiratory and cardiac infection can be experimentally reproduced in cattle, the clinical signs in ruminants mainly affect the reproductive system. Q fever in ruminants is, therefore, mainly responsible for abortions, metritis, retained placenta, and infertility.
The clinical signs vary between species. In small ruminants (sheep and goats), it is dominated by abortions, premature births, stillbirths, and the birth of weak lambs or kids. One of the characteristics of abortions in goats is that they are very frequent and clustered in the first year or two after contamination of the farm. This is known as an abortion storm.
In cattle, although abortions also occur, they are less frequent and more sporadic. The clinical picture is rather dominated by nonspecific signs such as placental retentions, metritis, and consequent fertility disorders.
Epidemiology
With the exception of New Zealand, which is currently free of Q fever, the disease is present throughout the world. Numerous epidemiological surveys have been carried out. They have shown that about one in three cattle farms and one in four sheep or goat farms are infected, but wide variations are seen between studies and countries. In China, Iran, Great Britain, Germany, Hungary, the Netherlands, Spain, the US, Belgium, Denmark, Croatia, Slovakia, the Czech Republic, Serbia, Slovenia, and Jordan, for example, more than 50% of cattle herds were infected with Q fever.
Infected animals shed the bacteria by three routes - genital discharge, faeces, and milk. Excretion is greatest at the time of parturition or abortion, and placentas and aborted fetuses are the main sources of bacteria, particularly in goats.
As C. burnetii is small and resistant in the environment, it is easily airborne and can be transmitted from one farm to another, even if several kilometres away.
Control
Biosecurity measures
Based on the epidemiological data, biosecurity measures can be derived:
The spread of manure from infected farms should be avoided in windy conditions
The level of hygiene must be very high during parturition, and fetal membranes and fetuses must be collected and destroyed as soon as possible
Medical measures
A vaccine for cattle, goats, and sheep exists. It reduces clinical expression such as abortions and decreases excretion of the bacteria by the animals leading to control of Q fever in herds.
In addition, vaccination of herds against Q fever has been shown to reduce the risk of human infection.
References
External links
Q fever at the CDC
Coxiella burnetii genomes and related information at PATRIC, a Bioinformatics Resource Center funded by NIAID
Atypical pneumonias
Bacterial diseases
Bacterium-related cutaneous conditions
Biological agents
Bovine diseases
Rare infectious diseases
Rodent-carried diseases
Sheep and goat diseases
Tick-borne diseases
Zoonoses
Zoonotic bacterial diseases | Q fever | [
"Biology",
"Environmental_science"
] | 3,294 | [
"Biological agents",
"Toxicology",
"Biological warfare"
] |
56,484 | https://en.wikipedia.org/wiki/Low-pass%20filter | A low-pass filter is a filter that passes signals with a frequency lower than a selected cutoff frequency and attenuates signals with frequencies higher than the cutoff frequency. The exact frequency response of the filter depends on the filter design. The filter is sometimes called a high-cut filter, or treble-cut filter in audio applications. A low-pass filter is the complement of a high-pass filter.
In optics, high-pass and low-pass may have different meanings, depending on whether referring to the frequency or wavelength of light, since these variables are inversely related. High-pass frequency filters would act as low-pass wavelength filters, and vice versa. For this reason, it is a good practice to refer to wavelength filters as short-pass and long-pass to avoid confusion, which would correspond to high-pass and low-pass frequencies.
Low-pass filters exist in many different forms, including electronic circuits such as a hiss filter used in audio, anti-aliasing filters for conditioning signals before analog-to-digital conversion, digital filters for smoothing sets of data, acoustic barriers, blurring of images, and so on. The moving average operation used in fields such as finance is a particular kind of low-pass filter and can be analyzed with the same signal processing techniques as are used for other low-pass filters. Low-pass filters provide a smoother form of a signal, removing the short-term fluctuations and leaving the longer-term trend.
Filter designers will often use the low-pass form as a prototype filter. That is a filter with unity bandwidth and impedance. The desired filter is obtained from the prototype by scaling for the desired bandwidth and impedance and transforming into the desired bandform (that is, low-pass, high-pass, band-pass or band-stop).
Examples
Examples of low-pass filters occur in acoustics, optics and electronics.
A stiff physical barrier tends to reflect higher sound frequencies, acting as an acoustic low-pass filter for transmitting sound. When music is playing in another room, the low notes are easily heard, while the high notes are attenuated.
An optical filter with the same function can correctly be called a low-pass filter, but conventionally is called a longpass filter (low frequency is long wavelength), to avoid confusion.
In an electronic low-pass RC filter for voltage signals, high frequencies in the input signal are attenuated, but the filter has little attenuation below the cutoff frequency determined by its RC time constant. For current signals, a similar circuit, using a resistor and capacitor in parallel, works in a similar manner. (See current divider discussed in more detail below.)
Electronic low-pass filters are used on inputs to subwoofers and other types of loudspeakers, to block high pitches that they cannot efficiently reproduce. Radio transmitters use low-pass filters to block harmonic emissions that might interfere with other communications. The tone knob on many electric guitars is a low-pass filter used to reduce the amount of treble in the sound. An integrator is another time constant low-pass filter.
Telephone lines fitted with DSL splitters use low-pass filters to separate DSL from POTS signals (and high-pass vice versa), which share the same pair of wires (transmission channel).
Low-pass filters also play a significant role in the sculpting of sound created by analogue and virtual analogue synthesisers. See subtractive synthesis.
A low-pass filter is used as an anti-aliasing filter before sampling and for reconstruction in digital-to-analog conversion.
Ideal and real filters
An ideal low-pass filter completely eliminates all frequencies above the cutoff frequency while passing those below unchanged; its frequency response is a rectangular function and is a brick-wall filter. The transition region present in practical filters does not exist in an ideal filter. An ideal low-pass filter can be realized mathematically (theoretically) by multiplying a signal by the rectangular function in the frequency domain or, equivalently, convolution with its impulse response, a sinc function, in the time domain.
However, the ideal filter is impossible to realize without also having signals of infinite extent in time, and so generally needs to be approximated for real ongoing signals, because the sinc function's support region extends to all past and future times. The filter would therefore need to have infinite delay, or knowledge of the infinite future and past, to perform the convolution. It is effectively realizable for pre-recorded digital signals by assuming extensions of zero into the past and future, or, more typically, by making the signal repetitive and using Fourier analysis.
Real filters for real-time applications approximate the ideal filter by truncating and windowing the infinite impulse response to make a finite impulse response; applying that filter requires delaying the signal for a moderate period of time, allowing the computation to "see" a little bit into the future. This delay is manifested as phase shift. Greater accuracy in approximation requires a longer delay.
Truncating an ideal low-pass filter results in ringing artifacts via the Gibbs phenomenon, which can be reduced or worsened by the choice of windowing function. Design and choice of real filters involves understanding and minimizing these artifacts. For example, simple truncation of the sinc function will create severe ringing artifacts, which can be reduced using window functions that drop off more smoothly at the edges.
The Whittaker–Shannon interpolation formula describes how to use a perfect low-pass filter to reconstruct a continuous signal from a sampled digital signal. Real digital-to-analog converters use real filter approximations.
Time response
The time response of a low-pass filter is found by solving the response to the simple low-pass RC filter.
Using Kirchhoff's laws we arrive at the differential equation

$v_{\text{out}}(t) = v_{\text{in}}(t) - RC \frac{dv_{\text{out}}(t)}{dt}$
Step input response example
If we let $v_{\text{in}}(t)$ be a step function of magnitude $V_i$, then the differential equation has the solution

$v_{\text{out}}(t) = V_i \left(1 - e^{-\omega_0 t}\right),$

where $\omega_0 = \frac{1}{RC}$ is the cutoff frequency of the filter.
Frequency response
The most common way to characterize the frequency response of a circuit is to find its Laplace transform transfer function, $H(s) = \frac{V_{\text{out}}(s)}{V_{\text{in}}(s)}$. Taking the Laplace transform of our differential equation and solving for $H(s)$ we get

$H(s) = \frac{V_{\text{out}}(s)}{V_{\text{in}}(s)} = \frac{\omega_0}{s + \omega_0}$
Difference equation through discrete time sampling
A discrete difference equation is easily obtained by sampling the step input response above at regular intervals of $nT$ where $n = 0, 1, \ldots$ and $T$ is the time between samples. Taking the difference between two consecutive samples we have

$v_{\text{out}}(nT) - v_{\text{out}}((n-1)T) = V_i\left(1 - e^{-\omega_0 nT}\right) - V_i\left(1 - e^{-\omega_0 (n-1)T}\right)$

Solving for $v_{\text{out}}(nT)$ we get

$v_{\text{out}}(nT) = \beta\, v_{\text{out}}((n-1)T) + (1 - \beta) V_i$

where $\beta = e^{-\omega_0 T}$.

Using the notation $V_n = v_{\text{out}}(nT)$ and $v_n = v_{\text{in}}(nT)$, and substituting our sampled value $v_n = V_i$, we get the difference equation

$V_n = \beta V_{n-1} + (1 - \beta) v_n$
Error analysis
Comparing the reconstructed output signal from the difference equation, $V_n$, to the step input response, $v_{\text{out}}(nT)$, we find that there is an exact reconstruction (0% error). This is the reconstructed output for a time-invariant input. However, if the input is time variant, such as $v_{\text{in}}(t) = V_i \sin(\omega t)$, this model approximates the input signal as a series of step functions with duration $T$, producing an error in the reconstructed output signal. The error produced from time-variant inputs is difficult to quantify but decreases as $T \to 0$.
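To make the error analysis concrete, here is a minimal Python sketch (not part of the original article) that checks the exact-reconstruction claim for a step input; the step magnitude, cutoff frequency, and sampling period are arbitrary assumed values.

import math

# Hypothetical parameters chosen for illustration only.
V_i = 1.0                  # step magnitude
w0 = 2 * math.pi * 10.0    # cutoff frequency in rad/s (10 Hz)
T = 1e-3                   # sampling period in seconds
beta = math.exp(-w0 * T)

V = 0.0                    # V_0 = v_out(0) = 0 at the instant of the step
for n in range(1, 1001):
    V = beta * V + (1 - beta) * V_i              # difference equation
    exact = V_i * (1 - math.exp(-w0 * n * T))    # analytic step response
    assert abs(V - exact) < 1e-9                 # exact up to rounding error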
Discrete-time realization
Many digital filters are designed to give low-pass characteristics. Both infinite impulse response and finite impulse response low pass filters, as well as filters using Fourier transforms, are widely used.
Simple infinite impulse response filter
The effect of an infinite impulse response low-pass filter can be simulated on a computer by analyzing an RC filter's behavior in the time domain, and then discretizing the model.
From the circuit diagram to the right, according to Kirchhoff's laws and the definition of capacitance:

$v_{\text{in}}(t) - v_{\text{out}}(t) = R\, i(t)$   (V)
$Q_c(t) = C\, v_{\text{out}}(t)$   (Q)
$i(t) = \frac{dQ_c}{dt}$   (I)

where $Q_c(t)$ is the charge stored in the capacitor at time $t$. Substituting equation (Q) into equation (I) gives $i(t) = C \frac{dv_{\text{out}}}{dt}$, which can be substituted into equation (V) so that

$v_{\text{in}}(t) - v_{\text{out}}(t) = RC \frac{dv_{\text{out}}}{dt}$
This equation can be discretized. For simplicity, assume that samples of the input and output are taken at evenly spaced points in time separated by $\Delta_T$ time. Let the samples of $v_{\text{in}}$ be represented by the sequence $(x_1, x_2, \ldots, x_n)$, and let $v_{\text{out}}$ be represented by the sequence $(y_1, y_2, \ldots, y_n)$, which correspond to the same points in time. Making these substitutions,

$x_i - y_i = RC\, \frac{y_i - y_{i-1}}{\Delta_T}$
Rearranging terms gives the recurrence relation

$y_i = x_i \left(\frac{\Delta_T}{RC + \Delta_T}\right) + y_{i-1} \left(\frac{RC}{RC + \Delta_T}\right)$
That is, this discrete-time implementation of a simple RC low-pass filter is the exponentially weighted moving average

$y_i = \alpha x_i + (1 - \alpha) y_{i-1}, \qquad \text{where } \alpha = \frac{\Delta_T}{RC + \Delta_T}$

By definition, the smoothing factor $\alpha$ is within the range $0 \le \alpha \le 1$. The expression for $\alpha$ yields the equivalent time constant $RC$ in terms of the sampling period $\Delta_T$ and smoothing factor $\alpha$,

$RC = \Delta_T \left(\frac{1 - \alpha}{\alpha}\right)$
Recalling that

$f_c = \frac{1}{2\pi RC}$

so

$RC = \frac{1}{2\pi f_c}$

note $\alpha$ and $f_c$ are related by

$\alpha = \frac{2\pi \Delta_T f_c}{2\pi \Delta_T f_c + 1}$

and

$f_c = \frac{\alpha}{(1 - \alpha)\, 2\pi \Delta_T}$

If $\alpha = 0.5$, then the $RC$ time constant equals the sampling period. If $\alpha \ll 0.5$, then $RC$ is significantly larger than the sampling interval, and $\Delta_T \approx \alpha RC$.
The filter recurrence relation provides a way to determine the output samples in terms of the input samples and the preceding output. The following pseudocode algorithm simulates the effect of a low-pass filter on a series of digital samples:
// Return RC low-pass filter output samples, given input samples,
// time interval dt, and time constant RC
function lowpass(real[1..n] x, real dt, real RC)
var real[1..n] y
var real α := dt / (RC + dt)
y[1] := α * x[1]
for i from 2 to n
y[i] := α * x[i] + (1-α) * y[i-1]
return y
The loop that calculates each of the n outputs can be refactored into the equivalent:
for i from 2 to n
y[i] := y[i-1] + α * (x[i] - y[i-1])
That is, the change from one filter output to the next is proportional to the difference between the previous output and the next input. This exponential smoothing property matches the exponential decay seen in the continuous-time system. As expected, as the time constant RC increases, the discrete-time smoothing parameter decreases, and the output samples respond more slowly to a change in the input samples ; the system has more inertia. This filter is an infinite-impulse-response (IIR) single-pole low-pass filter.
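For readers who prefer runnable code, the pseudocode above translates directly into Python; this is a sketch, with names carried over from the pseudocode rather than taken from any library.

def lowpass(x, dt, RC):
    """Return RC low-pass filter output samples, given input samples x,
    time interval dt, and time constant RC."""
    alpha = dt / (RC + dt)      # smoothing factor, 0 <= alpha <= 1
    y = [alpha * x[0]]          # first output depends only on first input
    for xi in x[1:]:
        # each output moves toward the current input by a fraction alpha
        y.append(y[-1] + alpha * (xi - y[-1]))
    return y

# Example with assumed values: smooth a noisy, roughly constant signal.
samples = [1.0, 0.8, 1.2, 1.1, 0.9, 1.0]
print(lowpass(samples, dt=0.1, RC=0.9))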
Finite impulse response
Finite-impulse-response filters can be built that approximate the sinc function time-domain response of an ideal sharp-cutoff low-pass filter. For minimum distortion, the finite impulse response filter has an unbounded number of coefficients operating on an unbounded signal. In practice, the time-domain response must be time truncated and is often of a simplified shape; in the simplest case, a running average can be used, giving a square time response.
Fourier transform
For non-real-time filtering, to achieve a low-pass filter, the entire signal is usually taken as a looped signal, the Fourier transform is taken, the signal is filtered in the frequency domain, and an inverse Fourier transform is applied. Only O(n log n) operations are required, compared to O(n²) for the time-domain filtering algorithm.
This can also sometimes be done in real time, where the signal is delayed long enough to perform the Fourier transformation on shorter, overlapping blocks.
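As a sketch of the frequency-domain approach (assuming NumPy, an arbitrary test signal, and an illustrative function name that is not a library routine):

import numpy as np

# Minimal FFT-based brick-wall low-pass filter; treats the signal as
# periodic, as the article notes for non-real-time filtering.
def fft_lowpass(x, fs, cutoff_hz):
    X = np.fft.rfft(x)                        # frequency-domain signal
    freqs = np.fft.rfftfreq(len(x), d=1/fs)   # bin frequencies in Hz
    X[freqs > cutoff_hz] = 0.0                # zero all bins above cutoff
    return np.fft.irfft(X, n=len(x))          # back to the time domain

fs = 1000.0                                   # sample rate (assumed)
t = np.arange(0, 1, 1/fs)
x = np.sin(2*np.pi*5*t) + 0.5*np.sin(2*np.pi*150*t)
y = fft_lowpass(x, fs, cutoff_hz=20.0)        # keeps the 5 Hz component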
Continuous-time realization
There are many different types of filter circuits, with different responses to changing frequency. The frequency response of a filter is generally represented using a Bode plot, and the filter is characterized by its cutoff frequency and rate of frequency rolloff. In all cases, at the cutoff frequency, the filter attenuates the input power by half or 3 dB. So the order of the filter determines the amount of additional attenuation for frequencies higher than the cutoff frequency.
A first-order filter, for example, reduces the signal amplitude by half (so power reduces by a factor of 4, or 6 dB) every time the frequency doubles (goes up one octave); more precisely, the power rolloff approaches 20 dB per decade in the limit of high frequency. The magnitude Bode plot for a first-order filter looks like a horizontal line below the cutoff frequency, and a diagonal line above the cutoff frequency. There is also a "knee curve" at the boundary between the two, smoothly transitioning between the two straight-line regions. If the transfer function of a first-order low-pass filter has a zero as well as a pole, the Bode plot flattens out again, at some maximum attenuation of high frequencies; such an effect is caused for example by a little bit of the input leaking around the one-pole filter; this one-pole–one-zero filter is still a first-order low-pass. See Pole–zero plot and RC circuit.
A second-order filter attenuates high frequencies more steeply. The Bode plot for this type of filter resembles that of a first-order filter, except that it falls off more quickly. For example, a second-order Butterworth filter reduces the signal amplitude to one-fourth of its original level every time the frequency doubles (so power decreases by 12 dB per octave, or 40 dB per decade). Other all-pole second-order filters may roll off at different rates initially depending on their Q factor, but approach the same final rate of 12 dB per octave; as with the first-order filters, zeroes in the transfer function can change the high-frequency asymptote. See RLC circuit.
Third- and higher-order filters are defined similarly. In general, the final rate of power rolloff for an order-$n$ all-pole filter is $6n$ dB per octave ($20n$ dB per decade).
On any Butterworth filter, if one extends the horizontal line to the right and the diagonal line to the upper-left (the asymptotes of the function), they intersect at exactly the cutoff frequency, 3 dB below the horizontal line. The various types of filters (Butterworth filter, Chebyshev filter, Bessel filter, etc.) all have different-looking knee curves. Many second-order filters have "peaking" or resonance that puts their frequency response above the horizontal line at this peak.
The meanings of 'low' and 'high'—that is, the cutoff frequency—depend on the characteristics of the filter. The term "low-pass filter" merely refers to the shape of the filter's response; a high-pass filter could be built that cuts off at a lower frequency than any low-pass filter—it is their responses that set them apart. Electronic circuits can be devised for any desired frequency range, right up through microwave frequencies (above 1 GHz) and higher.
Laplace notation
Continuous-time filters can also be described in terms of the Laplace transform of their impulse response, in a way that lets all characteristics of the filter be easily analyzed by considering the pattern of poles and zeros of the Laplace transform in the complex plane. (In discrete time, one can similarly consider the Z-transform of the impulse response.)
For example, a first-order low-pass filter can be described by the continuous-time transfer function, in the Laplace domain, as:

$H(s) = \frac{K}{\tau s + 1}$

where $H$ is the transfer function, $s$ is the Laplace transform variable (complex angular frequency), $\tau$ is the filter time constant, $\omega_c$ is the cutoff frequency, and $K$ is the gain of the filter in the passband. The cutoff frequency is related to the time constant by:

$\omega_c = \frac{1}{\tau}$
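A short numerical check of this transfer function, with assumed values for $K$ and $\tau$, confirms the −3 dB point at $\omega_c = 1/\tau$:

import numpy as np

# Evaluate H(s) = K/(tau*s + 1) at s = j*omega over a frequency sweep.
K, tau = 1.0, 1e-3                 # passband gain and time constant (assumed)
omega_c = 1.0 / tau                # cutoff frequency in rad/s
omega = np.logspace(1, 5, 400)     # sweep from 10 to 1e5 rad/s
H = K / (tau * 1j * omega + 1)     # frequency response
mag_db = 20 * np.log10(np.abs(H))

idx = np.argmin(np.abs(omega - omega_c))
# |H(j*omega_c)| = K/sqrt(2), i.e. about 3.01 dB below the passband gain
print(mag_db[idx])                 # approximately -3 dB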
Electronic low-pass filters
First-order passive
RC filter
One simple low-pass filter circuit consists of a resistor in series with a load, and a capacitor in parallel with the load. The capacitor exhibits reactance, and blocks low-frequency signals, forcing them through the load instead. At higher frequencies, the reactance drops, and the capacitor effectively functions as a short circuit. The combination of resistance and capacitance gives the time constant of the filter $\tau = RC$ (represented by the Greek letter tau). The break frequency, also called the turnover frequency, corner frequency, or cutoff frequency (in hertz), is determined by the time constant:

$f_c = \frac{1}{2\pi \tau} = \frac{1}{2\pi RC}$

or equivalently (in radians per second):

$\omega_c = \frac{1}{\tau} = \frac{1}{RC}$
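As a worked example with assumed component values, a 1 kΩ resistor and a 159 nF capacitor give a cutoff near 1 kHz:

import math

# Illustrative component values only.
R = 1_000.0        # ohms
C = 159e-9         # farads
tau = R * C                       # time constant in seconds
f_c = 1 / (2 * math.pi * tau)     # cutoff frequency in hertz
print(f"tau = {tau:.2e} s, f_c = {f_c:.0f} Hz")   # about 1001 Hz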
This circuit may be understood by considering the time the capacitor needs to charge or discharge through the resistor:
At low frequencies, there is plenty of time for the capacitor to charge up to practically the same voltage as the input voltage.
At high frequencies, the capacitor only has time to charge up a small amount before the input switches direction. The output goes up and down only a small fraction of the amount the input goes up and down. At double the frequency, there's only time for it to charge up half the amount.
Another way to understand this circuit is through the concept of reactance at a particular frequency:
Since direct current (DC) cannot flow through the capacitor, DC input must flow out through the output path (analogous to removing the capacitor).
Since alternating current (AC) flows very well through the capacitor, almost as well as it flows through a solid wire, AC input flows out through the capacitor, effectively short circuiting to the ground (analogous to replacing the capacitor with just a wire).
The capacitor is not an "on/off" object (like the block or pass fluidic explanation above). The capacitor variably acts between these two extremes. It is the Bode plot and frequency response that show this variability.
RL filter
A resistor–inductor circuit or RL filter is an electric circuit composed of resistors and inductors driven by a voltage or current source. A first-order RL circuit is composed of one resistor and one inductor and is the simplest type of RL circuit.
A first-order RL circuit is one of the simplest analogue infinite impulse response electronic filters. It consists of a resistor and an inductor, either in series driven by a voltage source or in parallel driven by a current source.
Second-order passive
RLC filter
An RLC circuit (the letters R, L, and C can be in a different sequence) is an electrical circuit consisting of a resistor, an inductor, and a capacitor, connected in series or in parallel. The RLC part of the name is due to those letters being the usual electrical symbols for resistance, inductance, and capacitance, respectively. The circuit forms a harmonic oscillator for current and will resonate in a similar way as an LC circuit will. The main difference that the presence of the resistor makes is that any oscillation induced in the circuit will die away over time if it is not kept going by a source. This effect of the resistor is called damping. The presence of the resistance also reduces the peak resonant frequency somewhat. Some resistance is unavoidable in real circuits, even if a resistor is not specifically included as a component. An ideal, pure LC circuit is an abstraction for the purpose of theory.
There are many applications for this circuit. They are used in many different types of oscillator circuits. Another important application is for tuning, such as in radio receivers or television sets, where they are used to select a narrow range of frequencies from the ambient radio waves. In this role, the circuit is often called a tuned circuit. An RLC circuit can be used as a band-pass filter, band-stop filter, low-pass filter, or high-pass filter. The RLC filter is described as a second-order circuit, meaning that any voltage or current in the circuit can be described by a second-order differential equation in circuit analysis.
Second-order low-pass filter in standard form
The transfer function of a second-order low-pass filter can be expressed as a function of frequency, as shown in Equation 1, the second-order low-pass filter standard form:

$H(f) = \frac{-K}{\left(\frac{jf}{\mathrm{FSF} \cdot f_c}\right)^2 + \frac{1}{Q}\left(\frac{jf}{\mathrm{FSF} \cdot f_c}\right) + 1} \qquad (1)$

In this equation, $f$ is the frequency variable, $f_c$ is the cutoff frequency, $\mathrm{FSF}$ is the frequency scaling factor, and $Q$ is the quality factor. Equation 1 describes three regions of operation: below cutoff, in the area of cutoff, and above cutoff. For each region, Equation 1 reduces to:

$f \ll \mathrm{FSF} \cdot f_c$: $H(f) \approx -K$. The circuit passes signals multiplied by the gain factor $K$.
$f = \mathrm{FSF} \cdot f_c$: $H(f) = jKQ$. Signals are phase-shifted 90° and modified by the quality factor $Q$.
$f \gg \mathrm{FSF} \cdot f_c$: $H(f) \approx K\left(\frac{\mathrm{FSF} \cdot f_c}{f}\right)^2$. Signals are phase-shifted 180° and attenuated by the square of the frequency ratio. This behavior is detailed by Jim Karki in "Active Low-Pass Filter Design" (Texas Instruments, 2023).

With attenuation at frequencies above $\mathrm{FSF} \cdot f_c$ increasing by a power of two, the last formula describes a second-order low-pass filter. The frequency scaling factor is used to scale the cutoff frequency of the filter so that it follows the definitions given before.
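The three regions can be checked numerically; this sketch uses assumed values ($K = 1$, $Q = 0.707$, $\mathrm{FSF} = 1$, $f_c$ = 1 kHz) and the standard form as reconstructed above:

import numpy as np

# Illustrative parameters for the second-order standard form.
K, Q, FSF, f_c = 1.0, 0.707, 1.0, 1000.0

def H(f):
    x = 1j * f / (FSF * f_c)          # normalized complex frequency
    return -K / (x**2 + x / Q + 1)

print(abs(H(10.0)))       # well below cutoff: about K (passband gain)
print(abs(H(1000.0)))     # at cutoff: about K*Q = 0.707
print(abs(H(100000.0)))   # far above cutoff: about K*(f_c/f)^2 = 1e-4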
Higher order passive filters
Higher-order passive filters can also be constructed (see diagram for a third-order example).
First order active
An active low-pass filter adds an active device to create an active filter that allows for gain in the passband.
In the operational amplifier circuit shown in the figure, the cutoff frequency (in hertz) is defined as:

$f_c = \frac{1}{2\pi R_2 C}$

or equivalently (in radians per second):

$\omega_c = \frac{1}{R_2 C}$
The gain in the passband is −R2/R1, and the stopband drops off at −6 dB per octave (that is −20 dB per decade) as it is a first-order filter.
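A small sketch with assumed component values illustrates these two relations, the passband gain $-R_2/R_1$ and the cutoff set by $R_2$ and $C$:

import math

# Illustrative component values for the inverting first-order active filter.
R1 = 10_000.0     # ohms
R2 = 100_000.0    # ohms
C = 1e-9          # farads

gain = -R2 / R1                       # passband gain: -10 (20 dB, inverted)
f_c = 1 / (2 * math.pi * R2 * C)      # cutoff frequency: about 1.59 kHz
print(gain, f_c)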
See also
Baseband
Smoother (statistics)
References
External links
Low Pass Filter java simulator
ECE 209: Review of Circuits as LTI Systems, a short primer on the mathematical analysis of (electrical) LTI systems.
ECE 209: Sources of Phase Shift, an intuitive explanation of the source of phase shift in a low-pass filter. Also verifies simple passive LPF transfer function by means of trigonometric identity.
C code generator for digital implementation of Butterworth, Bessel, and Chebyshev filters created by the late Dr. Tony Fisher of the University of York (York, England).
Signal processing
Linear filters
Synthesiser modules
Filter frequency response
Acoustics
Sound | Low-pass filter | [
"Physics",
"Technology",
"Engineering"
] | 4,500 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Classical mechanics",
"Acoustics"
] |
56,508 | https://en.wikipedia.org/wiki/Judi%20Bari | Judith Beatrice Bari (November 7, 1949 – March 2, 1997) was an American environmentalist, feminist, and labor leader, primarily active in Northern California after moving to the state in the mid-1970s. In the 1980s and 1990s, she was the principal organizer of Earth First! campaigns against logging in the ancient redwood forests of Mendocino County and related areas. She also organized Industrial Workers of the World Local 1 in an effort to bring together timber workers and environmentalists of Earth First! in common cause.
Bari suffered severe injuries on 24 May 1990 in Oakland, California, when a pipe bomb went off under her seat in her car. She was driving with colleague Darryl Cherney, who had minor injuries. They were arrested by Oakland Police, aided by the FBI, who accused them of transporting a bomb for terrorist purposes. While those charges were dropped, in 1991 the pair filed suit against the Oakland Police Department and FBI for violations of their civil rights during the investigation of the bombing. A jury found in their favor when the case went to trial in 2002, and damages were awarded to Bari's estate and Cherney. Bari had died of cancer in 1997. The bombing has not been solved.
In 1999 a bill was passed to establish the Headwaters Forest Reserve (H.R. 2107, Title V. Sec.501.) under administration by the Bureau of Land Management. This protected of mixed old-growth and previously harvested forest. It was a project that Bari had long supported.
Early life and education
Bari was born on November 7, 1949, and was raised in Silver Spring, Maryland, the daughter of Ruth Aaronson Bari, a recognized mathematician, and Arthur Bari, a diamond setter. Her parents were Jewish and Italian in ancestry, respectively. The elder Baris were both active in left-wing politics; they advocated for civil rights and opposed the Vietnam War. Judi Bari was the second of three daughters; her older sister is Gina Kolata, a science journalist for the New York Times, and her younger sister is Martha Bari, an art historian.
Although Judi Bari attended the University of Maryland for five years, she dropped out without graduating. She said that her college career was most notable for "anti-Vietnam War rioting".
Bari began working as a clerk for a chain grocery store and became a union organizer in its work force. At her next job as a mail handler, she organized a wildcat strike in the United States Postal Service bulk mail facility in Maryland.
Move to California, marriage and family
Bari moved to the Bay Area in Northern California, which was a center of political activism. In 1978 she met her future husband Michael Sweeney at a labor organizers' conference. They shared an interest in radical politics. Sweeney had graduated from Stanford University, and for a time in the early 1970s had been a member of the Maoist group Venceremos, which had mostly Chicano members. He had been married before.
In 1979, Bari and Sweeney married and settled in Santa Rosa, California. They had two daughters together, Lisa (1981) and Jessica (1985). The couple divorced in 1988 and shared custody of their children.
Political and conservation activities
During the early to mid-1980s, Bari devoted herself to Pledge of Resistance, a group that opposed US policies in Central America. She was a self-proclaimed virtuoso on the bullhorn. She edited, wrote, and drew cartoons for political leaflets and publications.
Around 1985, Bari moved north with her husband and two children to the vicinity of Redwood Valley in Mendocino County, California. It was an area of old timber towns, such as Eureka and Fortuna, and a new wave of hippies and young counter-culture adults who migrated here from urban areas.
In 1986, Houston millionaire Charles Hurwitz acquired Pacific Lumber Company, with assets in Northern California, including in redwood forests. He doubled the company's rate of timber harvesting as a means of paying off the acquisition cost. This enraged environmentalists. The federal government also investigated the transaction because of Hurwitz's use of junk bonds. Activist protests against old-growth timber harvesting by Pacific Lumber became the focus of Earth First! in the following years.
On May 8, 1987, a sawmill accident occurred at the Louisiana Pacific mill in Cloverdale, California. Mill worker George Alexander nearly died of injuries suffered when a saw blade struck a spike in a log being milled, generating shrapnel. Adverse publicity resulted.
Earth First!, which at that point still promoted "monkeywrenching" as part of its tactics, was blamed by the company and some workers for the spike because of incidents of equipment sabotage that had taken place in the vicinity where the log was harvested. Responsibility for the spike was never established, although the prime suspect in the case was later confirmed to be not an Earth First! activist but a local "disgruntled" landowner.
The bad publicity from the incident resulted in Earth First! disavowing tree spiking (but not other forms of sabotage).
In 1988, Bari was instrumental in starting Local 1 of the Industrial Workers of the World (IWW), which allied with Earth First! in protests against cutting old growth redwoods. Bari used her labor organizing background to run a workshop on the Industrial Workers of the World at an Earth First! rendezvous in California. Through the formation of EF!–IWW Local 1, she sought to bring together environmentalists and timber workers who were concerned about the harvest rate by the timber industry. She believed they had interests in common.
That year, Bari organized the first forest blockade, to promote expanding the South Fork Eel River Wilderness, managed by the US Bureau of Land Management. Related to her other interests, that year Bari also organized a counter-demonstration to protect a Planned Parenthood clinic in Ukiah.
Many timber workers believed that the environmentalists were threatening their livelihoods. At this time, environmentalists were backing their legal suits against timber overcutting by staging blockades of job sites in the woods and tree sitting. Loggers saw such actions as harassment. Confrontations between loggers and demonstrators were often heated and sometimes violent. Reactions to Bari's involvement in the protests were severe: her car was rammed by a logging truck in 1989, and she received death threats.
In August 1989, environmentalist Mem Hill suffered a broken nose in a protest confrontation with loggers in the woods. She filed a legal suit accusing a logger of assault, and claiming law enforcement did not protect her from attack.
Bari emphasized non-violent action and began to incorporate music into her demonstrations. She played the fiddle and sang original compositions by Darryl Cherney, who played guitar. Sometimes she sang her own songs. Their song titles and lyrics aroused controversy, as many listeners considered them offensive. Cherney's song about tree spiking, "Spike a Tree for Jesus" is one example; "Will This Fetus Be Aborted?", sung as a counter-protest to an anti-abortion rally, was another.
The media portrayed her as an obstructionist saboteur. Some activists and area residents found Bari to be egocentric, humorless, and strident. Her tactics often rankled not only members of the timber industry and political establishment, but fellow activists.
Differences emerged between Bari and her husband over their political paths and diverging lives. He headed a recycling company in the county. They struggled to reconcile political action with the obligations of parenting. In 1988, with a divorce between herself and her husband underway, she met Darryl Cherney. They began a romantic relationship based partly on shared political beliefs, and appeared together at various protests (as noted above).
In 1990, the Sierra Club withdrew its support from legislation amending California Forest Practice Rules and moving forward with a process to establish a Headwaters Forest preserve on Pacific Lumber Company land. They submitted a voter initiative, Proposition 130, dubbed "Forests Forever." The timber industry was strongly opposed to it. In response, environmentalists began organizing Redwood Summer, a campaign of nonviolent protests focused on slowing harvest of redwood forests in Northern California until such forests gained extra protections under Proposition 130. They named their campaign in honor of the 1964 Freedom Summer of the Civil Rights Movement. Bari was instrumental in recruiting demonstrators from college campuses across the United States. But on November 6, 1990, Proposition 130 was defeated by California voters, with 52.13% against. Opponents emphasized the disruptive activities of Redwood Summer, which interfered with timber workers, and the support of Earth First! for Proposition 130. It had been accused of sabotage and violence against workers in the past.
During organizing for Redwood Summer, Bari directed efforts in Mendocino County, and Cherney went on the road to recruit activists. Bari had local connections and a rapport with some lumber industry workers that was developed during her organizing efforts of an IWW local. While recruiting, Cherney was kept at a distance, so that his reputation for advocating sabotage and propensity for hostile outbursts toward timber workers could not damage the campaign.
On April 22, 1990, a group called Earth Night Action Group sabotaged power poles in southern Santa Cruz County, causing power outages. Upon hearing of that incident, Bari reportedly said, "Desperate times call for desperate measures," and "So what if some ice cream melted?" Observers interpreted her statements as approval of sabotage, and thought Earth First! might still be involved in such activities. A provocative flyer was publicized that had been written by Cherney: he called for "Earth Night" actions, and it featured images of a monkey wrench, an earth mover, and figures representing saboteurs in the night. Cherney said the flyer was facetious. The identities of members of the Earth Night Action Group has never been established; their relationship to Earth First! was a matter of speculation.
On May 9, 1990, a failed incendiary pipe bomb was discovered in the Louisiana Pacific sawmill in Cloverdale. A hand-lettered sign, saying "L-P screws millworkers", had been placed outside the mill. Responsibility for the bomb was never established.
On May 22, 1990, Bari met with local loggers to agree on ground rules for nonviolence during the Redwood Summer demonstrations. In the early afternoon of May 23, 1990, Bari started a road trip to Santa Cruz to organize for Redwood Summer and related musical events. She stopped for a press conference in Ukiah and for a meeting at the Seeds of Peace collective house in Berkeley.
That night she stayed overnight in Oakland, at a house near MacArthur and Park boulevards. On May 24 she and Darryl Cherney (as passenger) drove away from the house, and a short time later a bomb exploded beneath her seat. She suffered severe injuries and Cherney suffered lesser ones.
Car bombing attempt on Bari's life
Summary
On May 24, 1990, in Oakland, California, Bari and Darryl Cherney were traveling in her car when it was blown up by a pipe bomb under her seat. Bari was driving and severely injured by the blast. Cherney suffered minor injuries. Bari was arrested for transporting explosives while she was still in critical condition with a fractured pelvis and other major injuries.
FBI bomb investigators reached the scene nearly simultaneously with first responders from the Oakland Police Department. Bari voiced the suspicion that the FBI knew about the bomb beforehand and might have been responsible for it. In Bari's words, it was as if the investigators were "waiting around the corner with their fingers in their ears." It was later revealed that there had been a tip to law enforcement, suspected to be from the person responsible for the bomb, that "some heavies" were carrying a bomb south for sabotage in the Santa Cruz area.
The Federal Bureau of Investigation (FBI) took jurisdiction of the case away from the Bureau of Alcohol, Tobacco, Firearms and Explosives, alleging it was an eco-terrorism case. The Oakland Police Department of Alameda County was the local agency on the case. Bari's wounds disabled her to the extent she had to curtail her activities. As Bari convalesced, other activists carried out Redwood Summer, conducting a series of demonstrations by thousands of environmental activists.
In late July 1990, the Alameda County District Attorney declined to press charges against Bari and Cherney, citing insufficient evidence. In 1991, Bari and Cherney filed a civil rights suit against the FBI and the Oakland police over the arrests and the searches carried out on their properties. The trial was not concluded until 2002; Bari died of breast cancer in 1997. The jury found that their civil rights had been violated, and the court made an award of $4.4 million to Cherney and Bari's estate.
Events of investigation
When the Oakland police and the FBI initially accused Bari and Cherney of knowingly carrying a bomb for use in an act of terrorism, the story made headlines nationwide. By 3:00 p.m. on the day of the bombing, Bari had been arrested for transportation of illegal explosives while still being treated at Highland Hospital.
Because Earth First! had earlier developed a reputation for sabotage, the media reported the police version of events. For example, a KQED news report, entitled "Focus: Logjam", used the term "radical" to describe Earth First!, blamed the group for sabotaging loggers' equipment and spiking trees, and tied Bari's bombing to such actions.
Based on his personal observations of bomb damage to the car, FBI Special Agent Frank Doyle filed a public affidavit stating that the bomb had been carried on the back-seat floorboard of Bari's vehicle. The FBI was granted a search warrant on May 25 at 2:21 a.m., and agents used a helicopter to quickly reach Bari's home and search it. Agents also searched the premises of the "Seeds of Peace" house in Berkeley, where Bari and Cherney had visited the day before the explosion. Members of Seeds of Peace were interviewed repeatedly; they said they consistently told police that Bari and Cherney were committed to nonviolence.
Within a week, supporters of Bari and Cherney were petitioning for an investigation of the FBI's investigative methods. Daniel Hamburg, a former Mendocino County Supervisor, and others complained that the investigation seemed focused on charging the two environmentalists.
On July 6, a new search warrant for Bari's home was granted, as investigators sought typewriting exemplars to compare to the typewritten "Lord's Avenger" letter (see below).
FBI analysis of the explosive device determined it was a pipe bomb with nails wrapped to its surface to create shrapnel. It was equipped with a timer-armed motion trigger, so that it would explode only when the car was driven. The bomb was confirmed to have been placed on the floorboard directly under the driver's seat, not on the floorboard behind the seat, as Agent Doyle had claimed. That evidence suggested that the bomb was an anti-personnel device intended to kill the driver of Bari's car. The FBI investigation remained focused on the theory that the explosion was an accidental detonation of a device knowingly transported by Bari. They attempted to match roofing nails transported in Bari's car to finishing nails used with the bomb. After seven weeks of news stories reporting the police claims that all evidence pointed to Bari and Cherney, the Alameda County District Attorney announced that he would not file any formal charges against the pair due to insufficient evidence against them. Law enforcement agencies never fully investigated evidence that the bombing was an attempt on Bari's life. The crime has remained unsolved.
During her convalescence, Bari issued a directive prohibiting those in her circle from cooperating with investigators. Even after she was no longer considered a suspect, she demanded that her circle remain silent. Bari offered cooperation with investigators in return for legal immunity; but her offer was refused.
Theories
The "Lord's Avenger"
Five days after the bombing, on May 29, while Bari was still in hospital, Mike Geniella of the Santa Rosa Press Democrat received a letter claiming responsibility for both the bomb in Bari's car and a partially detonated one set a week before at the Cloverdale lumber mill. Written in an ornate, biblical style with misogynistic language, the letter was signed "The Lord's Avenger." It said the writer had been outraged by Bari's statements and behavior in December 1988, when she opposed an anti-abortion protest at a Planned Parenthood clinic in Ukiah, California. The letter described the construction of the two bombs in great detail.
Based on the content of the letter, law enforcement investigated Bill Staley, a self-styled preacher, Louisiana Pacific mill worker, and former professional football player who had been prominent at the 1988 anti-abortion demonstration. Staley was eventually cleared of suspicion in the bombing. While the letter's author gave accurate details about the bombs' construction, investigators found the explanation of how the bomb was placed in Bari's car to be implausible. Both supporters and detractors of Bari's publicized theory that the bombing was an FBI/industry plot concluded that the bomb builder sent the letter in an effort to divert attention to Staley.
Darryl Cherney
Investigators looked closely at both Cherney and Bari's ex-husband Sweeney as potential suspects, knowing that women often faced danger and were killed by men close to them, especially after relationships ended. Some of Bari's friends had noted changes in her relationship with Cherney, and thought he might have set the bomb because Bari had replaced him as the leading organizer of Earth First! in northern California. In a related rumor, there was talk that killing Bari would provide a martyr to boost the profile of Redwood Summer. Suspicions that Cherney had written the Lord's Avenger letter, as well as suspicions on more general grounds, fell apart under logical impossibilities.
FBI
The FBI's assertion that the bombing was an accidental detonation was shown to be completely implausible in the face of physical evidence. Bari and her supporters began to suspect the assailant was associated with the FBI. Within the next year, Bari developed the theory that the bomber was an acquaintance whom she had suspected of being an FBI informant. From depositions taken in 1994 for Bari and Cherney's federal civil rights lawsuit, they learned that the May 24 bombing of Bari's car bore a close resemblance to "crime scenes" staged by the FBI in a "bomb school" held in redwood country with the assistance of the Pacific Lumber company earlier that year. Bari and followers believed this supported their idea that the bombing could be attributed to the FBI.
The FBI school was intended to train local and state police officers in how to investigate bomb scenes. The school taught that bomb explosions inside a vehicle often indicated the knowing, criminal transportation of homemade bombs that went off accidentally, and noted that it was difficult to break into a locked car in order to plant a bomb. By 1991, evidence conclusively showed that the bomb had been placed directly beneath Bari's seat, as she had maintained since the day of the bombing.
According to Bari, FBI Special Agent Frank Doyle, one of the agents on her case, had been the instructor at the bomb school. At least four of the law enforcement responders to the bombing had been students of his at the school.
In the weeks before the bombing, Bari had received numerous death threats related to her anti-logging activism, which she reported to local police. After the bombing, her attorney turned over such written threats to the FBI for investigation. As revealed in the 2002 trial evidence, neither the Oakland police nor the FBI ever investigated these.
Bari's ex-husband
In 1991, Stephen Talbot, KQED reporter and documentary producer, and investigative reporter David Helvarg made a documentary titled Who Bombed Judi Bari?. During the production, Talbot discovered circumstantial evidence and heard suspicions expressed by acquaintances of Bari that her ex-husband Mike Sweeney should be considered a suspect. Bari told Talbot in confidence that she also had doubts about her former husband, and that he had abused her during their marriage. She later publicly denied these statements. Talbot named Sweeney and others as possible suspects in the bombing, but in 1991 did not attribute any statements to Bari. After her death, he felt released from his journalist's protection of her as a source and wrote about Sweeney as a suspect more directly in a 2002 article published on Salon.com.
Bari strongly criticized Talbot's 1991 film in her article, "Who bought Steve Talbot?," published in the San Francisco Weekly and the Anderson Valley Advertiser. Talbot also had reported a 1989 letter signed by "Argus" that was sent to the Chief of the Ukiah Police Department, offering to be an informant against Bari regarding marijuana dealing. Bari claimed in her article that the "Argus" letter had to have been written by Irv Sutley, a Peace and Freedom Party activist whom she had met in 1988. Attention had also been focused on two other threatening letters: a "no second warning" death threat letter sent to Bari about a month before the bombing, and what became known as the "Lord's Avenger" letter sent to the Santa Rosa Press Democrat immediately after the bombing.
Through the early 1990s, many activists believed that the bombing was the work of either the FBI or other opponents of Bari's Earth First! activities, with Irv Sutley suspected as the hitman. But Bari's attempts to shape accounts of the bombing alienated supporters and raised suspicions that she was hiding something. Bruce Anderson of the Advertiser was among those put off by her assertions. He knew that the 1988 divorce had been bitter, and while he thought that some of her post-bombing behavior was odd, he continued to support her public position.
As he later recalled:
I still feel guilty about not defending you [Talbot]. I wimped out completely. I knew she'd told you about Sweeney. Lots of people knew she'd told you. I was a complete dupe, a coward and a fool. I convinced myself that her work mobilizing people against the corporate timber companies outweighed unpleasant aspects of her character and the even more unpleasant aspects of her personal behavior.
In reaction to efforts to tie Sutley to the bombing, some former Bari supporters publicly shifted their suspicion toward Sweeney. Ed Gehrman, a teacher and the publisher of Flatland, a small (now defunct) magazine in Fort Bragg, California, had participated in the Redwood Summer protests; in 1995 he became concerned about the controversy over Sutley. Initially suspecting Sutley, Gehrman questioned him directly about it. Sutley denied being involved. In addition, he said that in 1989, Pam Davis, a friend of Bari, had on three separate occasions offered him $5,000 to kill Bari's ex-husband Sweeney. In response, Bari said in a radio broadcast that the apparent solicitation was a joke misunderstood by her friend, who had conveyed the offer to Sutley.
Gehrman believed that someone was lying. He discussed the issues with journalist Alexander Cockburn of CounterPunch, a political magazine. Cockburn offered to pay for polygraph tests of the key players in the controversy. Sutley was the only one who accepted the offer; he took a polygraph test and passed. (Law enforcement does not rely on such polygraph tests.) After that, Gehrman considered Sutley credible. As he considered motives for the attack, he began to suspect Sweeney more strongly.
Gehrman presented his case for exculpating Sutley in Flatland. Anderson reconsidered his support of Bari's position, arousing anger among her supporters. Anderson was incensed by the possibility that Bari had tried to smear an innocent man in order to promote her narrative that the timber industry and/or the FBI were involved in the bombing. Anderson suggested that Bari and Sweeney each had sufficient guilty knowledge to destroy the other, a kind of legal mutual assured destruction.
Meanwhile, Gehrman tried to use the "Argus," "no second warning," and "Lord's Avenger" letters to determine the identity of Bari's assailant. He submitted facsimiles of the three letters and their envelopes, along with exemplars of text written by various suspects, to Don Foster. An English professor at Vassar College, Foster had established expertise in attributional analysis of documents. (He has since been discounted as an expert.) Foster concluded that the three letters were from the same writer and most closely matched exemplars by Sweeney.
Anderson wrote regular columns in the Advertiser accusing the supporters of the late Bari of lying by their continued support of the industry/FBI theory. Gehrman said he was approached in 2005 by Jan Maxwell, a longtime friend of Pam Davis. Maxwell said that Davis had told her that Bari had suggested a murder-for-hire solicitation against Sweeney. This seemed to place the solicitations to Sutley within a larger pattern. Gehrman presented a summary of his knowledge about the case, which he reprinted in the Advertiser in 2008.
Years before, in 2002, at the conclusion of the Bari/Cherney civil rights trial, Stephen Talbot had already publicly reported on Salon.com that Bari had confided in him about her suspicions of Sweeney and the car bombing, as well as her knowledge that he had firebombed the Santa Rosa airport in 1980. She also said that Sweeney had abused her during their marriage.
Aftermath
While the bombing investigation was underway, Earth First! organizers proceeded with training and demonstrations in several timber towns: Fort Bragg (July), Eureka, and Fortuna. Before these got underway, the Mendocino County Board of Supervisors considered legislation to regulate the size of protest signs and standards in order to curb violence by demonstrators. Meanwhile, Redwood Summer organizers debated whether to cancel demonstrations in the woods as too dangerous.
On May 29, representatives of Redwood Summer reached an agreement with part of the industry: they signed accords with small local logging companies to support nonviolent and non-destructive protests of timber harvesting. Activists eventually continued the events of Redwood Summer, demonstrating in some of the timber towns. The demonstrations by environmentalists were generally countered by demonstrations of numerous loggers and their families, who believed that their jobs and lives were jeopardized by proposed restrictions on logging.
Redwood Summer ended with Earth First! claiming success because they had trained so many volunteers in nonviolent resistance. But the numbers of participants in protests were smaller than organizers had hoped for. In addition, by September, the New York Times was reporting that antagonism between environmentalists and timber workers seemed to have increased. State voters defeated Proposition 130, which would have restricted logging, on November 6, 1990. The campaign against it had emphasized its support by Earth First!.
Several years later, the Northern California "Timber Wars" heated up again in 1998. Earth First! members were dissatisfied with the final agreement that established the Headwaters Forest Reserve: a bill passed in 1997 authorized the government to acquire and protect 7,472 acres, rather than the much larger reserve of roughly 44,000 acres that had been proposed for more than a decade. The division between the timber community and Earth First! became sharper than ever. "Anarchists" and other advocates of violence, such as Rodney Coronado, a convicted arsonist and Earth Liberation Front member, gained prominence within Earth First!. Such members threatened both the industrial equipment and facilities of timber companies and individuals at their private residences. After Bari died in 1997, she had the status of a major leader in Earth First! lore, but timber protests moved away from the community-based collaboration that she had tried to develop.
Later events related to bombing investigation
The bombing of Bari and Cherney has never been solved. Following the 2002 trial and award of damages, Cherney and supporters sought access to the remains of the partially intact Cloverdale mill bomb held by the FBI. Investigators believed that similarities between it and the remains of the pipe bomb in the car showed they were constructed by the same maker. Supporters hoped to find DNA evidence that could be analyzed with current technology and reveal a suspect. In 2012, a federal judge ordered the FBI not to destroy the remains of that pipe bomb, as the agency had planned. Ben Rosenfeld, attorney for Cherney, requested DNA analysis by an outside lab; the FBI said it had never performed such testing, and the judge ordered it done.
The case remains under the jurisdiction of the City of Oakland, where it occurred, and the Alameda County District Attorney. The Mendocino County Sheriff's Office has deferred on jurisdictional issues, claiming that there is insufficient evidence that the bomb was planted in Mendocino County.
In 2001, by joint agreement of the Bari advocates and the FBI, DNA evidence was collected from documents including the "Lord's Avenger" letter, which is believed to be strongly tied to Bari's assailant and which had also yielded a fingerprint. The DNA did not match samples obtained from Sutley. Mike Sweeney reportedly had not submitted a DNA sample, and it is not known whether law enforcement requested him to submit one.
Writing and public service career
Bari became a political writer as part of her interests in feminism, class struggle, and ecology. In May 1992, in an article published in Ms. magazine, she claimed to have feminized Earth First!. The radical environmentalist group had been founded by men who, in its early days, pursued sabotage that damaged equipment and threatened the lives of timber workers, a series of actions known as "monkeywrenching". Bari emphasized non-violent actions and public education in an effort to build collaboration in the region.
After stepping back from Earth First! leadership to deal with inoperable cancer, Bari was working by the end of 1996 as a paralegal and hosting a weekly public radio show. Before her death, she organized the Redwood Summer Justice Project, a non-profit organization to coordinate political and financial support for the suit she and Cherney were conducting.
In 1994 Bari was part of a congressional advisory committee, chartered by Congressman Dan Hamburg (D-CA), trying to develop a proposal for a Headwaters Forest Reserve of 44,000 acres. Efforts had been underway to protect this area for more than a decade. Their proposal included a compensation clause for lumber workers who would have been laid off following establishment of this extensive reserve. The bill based on the "large reserve" proposal died in Congress after Hamburg lost his 1994 re-election bid; during a midterm upheaval, he was defeated by the Republican former incumbent of his seat. Instead, a 7,472-acre forest reserve was authorized by a bill passed on November 14, 1997, shortly after Bari's death.
Death and posthumous civil rights trial
On March 2, 1997, Bari died of breast cancer at her home near Willits. A memorial service in her honor was attended by an estimated 1,000 people.
Bari and Cherney had filed a federal civil rights suit in 1991 claiming that the FBI and police officers falsely arrested the pair in relation to the bombing of her car in May 1990. They were accused of carrying the bomb to use for other purposes. Bari and Cherney said that law enforcement was trying to frame them as terrorists so as to discredit their political organizing to protect the redwood forests.
In 1997, Bari and Cherney sued the law enforcement officers named in the civil rights suit for conspiracy to violate the activists' First and Fourth Amendment rights. On October 15 that year, the agents lost their bid for immunity from prosecution.
Also on October 15, federal judge Claudia Wilken dismissed former FBI supervisor SAIC Richard Wallace Held from the case. The court said that as SAIC he had no duty to oversee the daily work of his subordinate agents. The plaintiffs' contention that the FBI was responsible for the bomb was also dismissed, restricting the case's scope to malicious investigative malpractice on the part of the FBI; the allowed damage claim was reduced from $20 million to $4.4 million.
The suit finally went to trial in 2002. After deliberation for two weeks, a jury found in favor of Bari's and Cherney's federal civil lawsuit. They concluded the pair's civil rights had been violated by several named individuals from the FBI and Oakland Police Department.
As part of the jury's verdict, the judge ordered Frank Doyle and two other FBI agents, and three Oakland police officers, to pay a total of $4.4 million to Cherney and to Bari's estate. The award was compensation for the defendants' violation of the plaintiffs' First Amendment rights to freedom of speech and freedom of assembly, and for the defendants' various unlawful acts, including unlawful search and seizure in violation of the plaintiffs' Fourth Amendment rights. At trial the FBI and the Oakland Police had pointed fingers at each other:
Oakland investigators testified that they relied almost exclusively on the F.B.I.'s counter-terrorism unit in San Francisco for advice on how to handle the case. But the F.B.I. agents denied misleading the investigators into believing that Ms. Bari and Mr. Cherney were violence-prone radicals who were probably guilty of transporting the bomb.
While neither agency would admit wrongdoing, the jury held both liable, finding that "[B]oth agencies admitted they had amassed intelligence on the couple before the bombing." This evidence supported the jury's finding that both the FBI and the Oakland police had persecuted Bari and Cherney as potential terrorists rather than conducting a full investigation to find the perpetrators, in an effort to discredit and sabotage Earth First! and the planned Redwood Summer of 1990, thereby violating the plaintiffs' First Amendment rights and justifying the large award.
After the trial's gag order was lifted, a juror revealed to the press that she believed the law enforcement agents had lied:
"Investigators were lying so much it was insulting . ... I'm surprised that they seriously expected anyone would believe them ... They were evasive. They were arrogant. They were defensive," said juror Mary Nunn.
Legacy
On May 20, 2003, the Oakland City Council unanimously passed a resolution establishing Judi Bari Day, stating:
Whereas, Judi Bari was a dedicated activist, who worked for many social and environmental causes, the most prominent being the protection and stewardship of California's ancient redwood forests. ... Now, therefore, be it resolved that the City of Oakland shall designate May 24 as Judi Bari Day ...
Bibliography
Books by Bari
Books and articles about Bari
The Encyclopedia of American Law Enforcement: Facts on File Crime Library. Michael Newton. Infobase Publishing, 2007.
The Last Stand: The War Between Wall Street and Main Street Over California's Ancient Redwoods. David Harris. University of California Press, 1997.
The Symbolic Earth: Discourse and Our Creation of the Environment. Editors: James Gerard Cantrill, Christine Lena. University Press of Kentucky, 1996.
Stories of Globalization: Transnational Corporations, Resistance, and the State. Alessandro Bonanno, Douglas H. Constance. Penn State Press, 2010.
The War Against the Greens: The "Wise-Use" Movement, the New Right, and the Browning of America. David Helvarg. Big Earth Publishing, 2004.
Renewed controversy
A critical biography of Bari titled The Secret Wars of Judi Bari (2005), by investigative journalist Kate Coleman, drew fierce criticism from many of Bari's supporters. But a review in Environmental History said that the author "succeeds in offering a balanced view of her life."
Cherney; the managers of Bari's estate (which held her portion of the FBI settlement award); Bari's ex-husband Michael Sweeney, a suspect in the bombing; and their followers claimed the book had hundreds of factual errors and expressed a bias against Bari and Earth First!. These critics noted that the publisher, Encounter Books, was founded by arch-conservative Peter Collier, and said it was funded primarily by arch-conservative foundations not sympathetic to Bari's causes. Coleman said that such allegations, and the aspersions cast on the publisher, were being used as a smokescreen, and that the book's detractors were dedicated to preserving an incomplete and distorted memory of Bari.
Cherney and some other critics said that Coleman had failed to include more information from their points of view. The author said that they had not responded to her attempts to contact them.
In her book, Coleman outlined a case that Sweeney, Bari's ex-husband, had planted the bomb in order to kill her. This thesis had been suggested by others, namely Stephen Talbot, in his 1991 documentary, and more specifically in his 2002 article on Salon.com, in which he revealed statements that Bari had made to him in 1991. He felt her death lifted his responsibility to protect her confidences.
Mark Hertsgaard wrote a critical review in the Los Angeles Times entitled "Too many rumors, too few facts to examine eco-activism case". He said, "the reporting is thin and sloppy, and the humdrum prose is marred by dubious speculation." Ed Guthmann, in a review in the San Francisco Chronicle, criticized Hertsgaard's review for containing its own errors.
Films
References
Further reading
Steve Ongerth, Redwood Uprising: The Story of Judi Bari and Earth First!-IWW Local #1, also titled Redwood Uprising: From One Big Union to Earth First! and the Bombing of Judi Bari (2010)
External links
"The Attempted Murder of Judi Bari", 1994 interview, Albion Monitor
Friends of Judi Bari, a defense group
Profile, SourceWatch
Obituary for Judi Bari, IWW
"Don't Mourn, Organise! The Judi Bari Story", BBC, December 2004, 30 min. audio (MP3)
Mike Sweeney, editor; Website criticizing Kate Coleman's The Secret Wars of Judi Bari
Bari's writings
Writings by and about Judi Bari
IWW Environmental Unionism Caucus, featuring more writings by Judi Bari, focusing specifically on class-struggle ecology
1949 births
1997 deaths
20th-century American Jews
20th-century American non-fiction writers
20th-century American women writers
Activists from California
American anti-capitalists
American anti-fascists
American communists
American feminists
American people of Italian descent
American women environmentalists
American women non-fiction writers
American women's rights activists
Anti-corporate activists
Anti-fascists
Deaths from breast cancer in California
Ecofeminists
Explosion survivors
Industrial Workers of the World members
Jewish American activists
Jewish American non-fiction writers
Jewish feminists
Jewish women writers
People from Willits, California
University of Maryland, College Park alumni
Writers from California
Jewish women activists | Judi Bari | [
"Chemistry"
] | 7,942 | [
"Explosion survivors",
"Explosions"
] |
56,509 | https://en.wikipedia.org/wiki/Potash | Potash includes various mined and manufactured salts that contain potassium in water-soluble form. The name derives from pot ash, plant ashes or wood ash soaked in water in a pot, the primary means of manufacturing potash before the Industrial Era. The word potassium is derived from potash.
Potash is produced worldwide in amounts exceeding 71.9 million tonnes (~45.4 million tonnes K2O equivalent) per year as of 2021, with Canada being the largest producer, mostly for use in fertilizer. Various kinds of fertilizer-potash constitute the single greatest industrial use of the element potassium in the world. Potassium was first derived in 1807 by electrolysis of caustic potash (potassium hydroxide).
Terminology
Potash refers to potassium compounds and potassium-bearing materials, most commonly potassium carbonate. The word "potash" originates from a Middle Dutch word denoting "pot ashes", attested in 1477.
The old method of making potassium carbonate () was by collecting or producing wood ash (the occupation of ash burners), leaching the ashes, and then evaporating the resulting solution in large iron pots, which left a white residue denominated "pot ash". Approximately 10% by weight of common wood ash can be recovered as potash. Later, "potash" became widely applied to naturally occurring minerals that contained potassium salts and the commercial product derived from them.
The following table lists a number of potassium compounds that have "potash" in their traditional names:
History
Origin of potash ore
Most of the world reserves of potassium (K) were deposited as sea water in ancient inland oceans. After the water evaporated, the potassium salts crystallized into beds of potash ore. These are the locations where potash is being mined today. The deposits are a naturally occurring mixture of potassium chloride (KCl) and sodium chloride (NaCl), more commonly known as table salt. Over time, as the surface of the earth changed, these deposits were covered by thousands of feet of earth.
Bronze Age
Potash (especially potassium carbonate) has been used since the Bronze Age to bleach textiles and to make glass, ceramics, and soap. Potash was principally obtained by leaching the ashes of land plants.
14th–17th century
Potash mining
Beginning in the 14th century potash was mined in Ethiopia. One of the world's largest deposits, 140 to 150 million tons, is located in the Dallol area of the Afar Region.
Wood-derived potash
Potash was one of the most important industrial chemicals. It was refined from the ashes of broadleaved trees and produced primarily in the forested areas of Europe, Russia, and North America. Although methods for producing artificial alkalis were invented in the late 18th century, these did not become economical until the late 19th century and so the dependence on organic sources of potash remained.
Potash became an important international trade commodity in Europe from at least the early 14th century. It is estimated that from the early 17th century, European imports of potash required 6 million or more cubic metres of wood each year. Between 1420 and 1620, the primary exporting cities for wood-derived potash were Gdańsk, Königsberg and Riga. In the late 15th century, London was the lead importer due to its position as the centre of soft soap making, while the Dutch dominated as suppliers and consumers in the 16th century. From the 1640s, geopolitical disruptions (i.e. the Russo-Polish War (1654–1667)) meant that the centres of export moved from the Baltic to Archangelsk, Russia. In 1700, Russian ash was dominant, though Gdańsk remained notable for the quality of its potash.
18th century
Kelp ash
On the Orkney islands, kelp ash provided potash and soda ash, production starting "possibly as early as 1719" and lasting for a century. The products were "eagerly sought after by the glass and soap industries of the time."
North America
By the 18th century, higher quality American potash was increasingly exported to Britain. In the late 18th and early 19th centuries, potash production provided settlers in North America badly needed cash and credit as they cleared wooded land for crops. To make full use of their land, settlers needed to dispose of excess wood. The easiest way to accomplish this was to burn any wood not needed for fuel or construction. Ashes from hardwood trees could then be used to make lye, which could either be used to make soap or boiled down to produce valuable potash. Hardwood could generate ashes at the rate of 60 to 100 bushels per acre (500 to 900 m3/km2). In 1790, the sale of ashes could generate $3.25 to $6.25 per acre ($800 to $1,500/km2) in rural New York State – nearly the same rate as hiring a laborer to clear the same area. Potash making became a major industry in British North America. Great Britain was always the most important market. The American potash industry followed the woodsman's ax across the country.
The first US patent
The first US patent of any kind was issued in 1790 to Samuel Hopkins for an improvement "in the making of Pot ash and Pearl ash by a new Apparatus and Process". Pearl ash was a purer quality made by calcination of potash in a reverberatory furnace or kiln. Potash pits were once used in England to produce potash that was used in making soap for the preparation of wool for yarn production.
19th century
After about 1820, New York replaced New England as the most important source; by 1840 the center was in Ohio. Potash production was always a by-product industry, following from the need to clear land for agriculture.
Canada
From 1767, potash from wood ashes was exported from Canada. By 1811, 70% of the total 19.6 million lbs of potash imports to Britain came from Canada. Exports of potash and pearl ash reached 43,958 barrels in 1865. There were 519 asheries in operation in 1871.
20th century industrialization
The wood-ash industry declined in the late 19th century when large-scale production of potash from mineral salts was established in Germany. In the early 20th century, the potash industry was dominated by a cartel in which Germany had the dominant role. WWI saw a brief resurgence of American asheries, with their product typically consisting of 66% hydroxide, 17% carbonate, 16% sulfate and other impurities. Later in the century, the cartel ended as new potash producers emerged in the USSR and Canada.
In 1943, potash was discovered in Saskatchewan, Canada, during oil drilling. Active exploration began in 1951. In 1958, the Potash Company of America became the first potash producer in Canada with the commissioning of an underground potash mine at Patience Lake. As numerous potash producers in Canada developed, the Saskatchewan government became increasingly involved in the industry, leading to the creation of Canpotex in the 1970s.
In 1964 the Canadian company Kalium Chemicals established the first potash mine using the solution process. The discovery was made during oil reserve exploration. The mine was developed near Regina, Saskatchewan. The mine reached depths greater than 1500 meters. It is now the Mosaic Corporation's Belle Plaine unit.
The USSR's potash production had largely been for domestic use and use in the Council for Mutual Economic Assistance countries. After the dissolution of the USSR, Russian and Belarusian potash producers entered into direct competition with producers elsewhere in the world for the first time.
In the beginning of the 20th century, potash deposits were found in the Dallol Depression in the Musely and Crescent localities near the Ethiopian-Eritrean border. The estimated reserves at Musely and Crescent are 173 and 12 million tonnes respectively; the latter is particularly suitable for surface mining. It was explored in the 1960s, but the works stopped due to flooding in 1967. Attempts to continue mining in the 1990s were halted by the Eritrean–Ethiopian War and have not resumed as of 2009.
Mining
Shaft mining and strip mining
All commercial potash deposits come originally from evaporite deposits and are often buried deep below the earth's surface. Potash ores are typically rich in potassium chloride (KCl), sodium chloride (NaCl) and other salts and clays, and are typically obtained by conventional shaft mining with the extracted ore ground into a powder. Most potash mines today are deep shaft mines as much as 4,400 feet (1,400 m) underground. Others are mined as strip mines, having been laid down in horizontal layers as sedimentary rock. In above-ground processing plants, the KCl is separated from the mixture to produce a high-analysis potassium fertilizer. Other potassium salts can be separated by various procedures, resulting in potassium sulfate and potassium-magnesium sulfate.
Dissolution mining and evaporation methods
Other methods include dissolution mining and evaporation methods from brines. In the evaporation method, hot water is injected into the potash, which is dissolved and then pumped to the surface, where it is concentrated by solar-induced evaporation. Amine reagents are then added to either the mined or evaporated solutions. The amine coats the KCl but not the NaCl. Air bubbles cling to the amine-coated KCl and float it to the surface, while the NaCl and clay sink to the bottom. The surface is skimmed for the amine-coated KCl, which is then dried and packaged for use as a potassium-rich fertilizer; KCl dissolves readily in water and is available quickly for plant nutrition.
Recovery of potassium fertilizer salts from sea water has been studied in India. During extraction of salt from seawater by evaporation, potassium salts get concentrated in bittern, an effluent from the salt industry.
Production
Potash deposits are distributed unevenly throughout the world. Deposits are being mined in Canada, Russia, China, Belarus, Israel, Germany, Chile, the United States, Jordan, Spain, the United Kingdom, Uzbekistan and Brazil, with the most significant deposits present under the great depths of the Prairie Evaporite Formation in Saskatchewan, Canada. Canada and Russia are the countries where the bulk of potash is produced; Belarus is also a major producer.
The Permian Basin deposit extends from the major mines outside Carlsbad, New Mexico, to the world's purest potash deposit in Lea County, New Mexico (near the Carlsbad deposits), which is believed to be roughly 80% pure. (Osceola County, Michigan, has deposits more than 90% pure; the only mine there was converted to salt production, however.) Canada is the largest producer, followed by Russia and Belarus. The most significant reserve of Canada's potash is located in the province of Saskatchewan and is mined by The Mosaic Company, Nutrien and K+S.
In China, most potash deposits are concentrated in the deserts and salt flats of the endorheic basins of its western provinces, particularly Qinghai. Geological expeditions discovered the reserves in the 1950s, but commercial exploitation lagged until Deng Xiaoping's Reform and Opening Up Policy in the 1980s. The 1989 opening of the Qinghai Potash Fertilizer Factory in the remote Qarhan Playa increased China's annual production of potassium chloride sixfold over the earlier output of the plants at Haixi and Tanggu.
In 2013, almost 70% of potash production was controlled by Canpotex, an exporting and marketing firm, and the Belarusian Potash Company. The latter was a joint venture between Belaruskali and Uralkali, but on July 30, 2013, Uralkali announced that it had ended the venture.
Potash is water-soluble, and transporting it requires special transportation infrastructure.
Occupational hazards
Excessive respiratory disease due to environmental hazards, such as radon and asbestos, has been a concern for potash miners throughout history. Potash miners are liable to develop silicosis. Based on a study conducted between 1977 and 1987 of cardiovascular disease among potash workers, the overall mortality rates were low, but a noticeable difference in above-ground workers was documented.
Consumption
Fertilizers
Potassium is the third major plant and crop nutrient after nitrogen and phosphorus. It has been used since antiquity as a soil fertilizer (about 90% of current use). Fertilizer use is the main driver behind potash consumption, especially for its use in fertilizing crops that contribute to high-protein diets. As of at least 2010, more than 95% of potash is mined for use in agricultural purposes.
Elemental potassium does not occur in nature because it reacts violently with water. As part of various compounds, potassium makes up about 2.6% of the Earth's crust by mass and is the seventh most abundant element, similar in abundance to sodium at approximately 1.8% of the crust. Potash is important for agriculture because it improves water retention, yield, nutrient value, taste, color, texture and disease resistance of food crops. It has wide application to fruit and vegetables, rice, wheat and other grains, sugar, corn, soybeans, palm oil and cotton, all of which benefit from the nutrient's quality-enhancing properties.
Demand for food and animal feed has been on the rise since 2000. The United States Department of Agriculture's Economic Research Service (ERS) attributes the trend to average annual population increases of 75 million people around the world. Geographically, economic growth in Asia and Latin America greatly contributed to the increased use of potash-based fertilizer. Rising incomes in developing countries also were a factor in the growing potash and fertilizer use. With more money in the household budget, consumers added more meat and dairy products to their diets. This shift in eating patterns required more acres to be planted, more fertilizer to be applied and more animals to be fed—all requiring more potash.
After years of trending upward, fertilizer use slowed in 2008. The worldwide economic downturn is the primary reason for the declining fertilizer use, dropping prices, and mounting inventories.
The world's largest consumers of potash are China, the United States, Brazil, and India. Brazil imports 90% of the potash it needs. Potash consumption for fertilizers is expected to increase to about 37.8 million tonnes by 2022.
Potash imports and exports are often reported in K2O equivalent, although fertilizer never contains potassium oxide, per se, because potassium oxide is caustic and hygroscopic.
Pricing
At the beginning of 2008, potash prices started a meteoric climb from less than US$200 a tonne to a high of US$875 in February 2009. They subsequently dropped dramatically to an April 2010 low of around US$310, before recovering in 2011–12 and relapsing again in 2013. For reference, prices in November 2011 were about US$470 per tonne, but as of May 2013 they were stable at US$393. After the surprise breakup of the world's largest potash cartel at the end of July 2013, potash prices were poised to drop some 20 percent. At the end of December 2015, potash traded for US$295 a tonne; in April 2016 its price was US$269. In May 2017, prices had stabilised at around US$216 a tonne, down 18% from the previous year. By January 2018, prices had recovered to around US$225 a tonne. World potash demand tends to be price inelastic in the short run and even in the long run.
Other uses
In addition to its use as a fertilizer, potassium chloride is important in many industrialized economies, where it is used in aluminium recycling, by the chloralkali industry to produce potassium hydroxide, in metal electroplating, oil-well drilling fluid, snow and ice melting, steel heat-treating, in medicine as a treatment for hypokalemia, and in water softening. Potassium hydroxide is used for industrial water treatment and is the precursor of potassium carbonate, several forms of potassium phosphate, many other potassic chemicals, and soap. Potassium carbonate is used to produce animal feed supplements, cement, fire extinguishers, food products, photographic chemicals, and textiles. It is also used in brewing beer, in pharmaceutical preparations, and as a catalyst for synthetic rubber manufacturing. It is also combined with silica sand to produce potassium silicate, sometimes known as waterglass, for use in paints and arc welding electrodes. These non-fertilizer uses have accounted for about 15% of annual potash consumption in the United States.
Substitutes
No substitutes exist for potassium as an essential plant nutrient and as an essential nutritional requirement for animals and humans. Manure and glauconite (greensand) are low-potassium-content sources that can be profitably transported only short distances to crop fields.
See also
Bone ash
Saltpeter
Saltwater soap
Sodium hydroxide
References
Further reading
Seaver, Frederick J. (1918) "Historical Sketches of Franklin County And Its Several Towns", J.B Lyons Company, Albany, NY, Section "Making Potash" pp. 27–29
External links
They Burned the Woods and Sold the Ashes
Henry M. Paynter, The First Patent, Invention & Technology, Fall 1990
The First U.S. Patent , issued for a method of potash production
World Agriculture and Fertilizer Markets Map
Russia reaps rich harvest with potash
Agricultural chemicals
Fertilizers
Industrial minerals
Potassium
Salts
Types of ash | Potash | [
"Chemistry"
] | 3,602 | [
"Fertilizers",
"Types of ash",
"Potash",
"Salts",
"Combustion",
"Soil chemistry"
] |
56,511 | https://en.wikipedia.org/wiki/Kidney%20dialysis | Kidney dialysis (from Greek dialysis, 'dissolution'; from dia, 'through', and lysis, 'loosening or splitting') is the process of removing excess water, solutes, and toxins from the blood in people whose kidneys can no longer perform these functions naturally. Along with kidney transplantation, it is a type of renal replacement therapy.
Dialysis may need to be initiated when there is a sudden rapid loss of kidney function, known as acute kidney injury (previously called acute renal failure), or when a gradual decline in kidney function, chronic kidney failure, reaches stage 5. Stage 5 chronic renal failure is reached when the glomerular filtration rate is less than 15% of normal, creatinine clearance is less than 10 mL per minute, and uremia is present.
Dialysis is used as a temporary measure in either acute kidney injury or in those awaiting kidney transplant and as a permanent measure in those for whom a transplant is not indicated or not possible.
In West European countries, Australia, Canada, the United Kingdom, and the United States, dialysis is paid for by the government for those who are eligible. The first successful dialysis was performed in 1943.
Background
The kidneys have an important role in maintaining health. In a healthy person, the kidneys maintain the body's internal equilibrium of water and minerals (sodium, potassium, chloride, calcium, phosphorus, magnesium, sulphate). The acidic end-products of metabolism that the body cannot get rid of via respiration are also excreted through the kidneys. The kidneys also function as a part of the endocrine system, producing erythropoietin, calcitriol and renin. Erythropoietin is involved in the production of red blood cells, and calcitriol plays a role in bone formation. Dialysis is an imperfect treatment to replace kidney function because it does not correct the compromised endocrine functions of the kidney. Dialysis treatments replace some of these functions through diffusion (waste removal) and ultrafiltration (fluid removal). Dialysis uses highly purified (also known as "ultrapure") water.
Principle
Dialysis works on the principles of the diffusion of solutes and ultrafiltration of fluid across a semipermeable membrane. Diffusion is a property of substances in water; substances in water tend to move from an area of high concentration to an area of low concentration. Blood flows by one side of a semipermeable membrane, and a dialysate, or special dialysis fluid, flows by the opposite side. A semipermeable membrane is a thin layer of material that contains holes of various sizes, or pores. Smaller solutes and fluid pass through the membrane, but the membrane blocks the passage of larger substances (for example, red blood cells and large proteins). This replicates the filtering process that takes place in the kidneys when the blood enters the kidneys and the larger substances are separated from the smaller ones in the glomerulus.
The two main types of dialysis, hemodialysis and peritoneal dialysis, remove wastes and excess water from the blood in different ways. Hemodialysis removes wastes and water by circulating blood outside the body through an external filter, called a dialyzer, that contains a semipermeable membrane. The blood flows in one direction and the dialysate flows in the opposite. The counter-current flow of the blood and dialysate maximizes the concentration gradient of solutes between the blood and dialysate, which helps to remove more urea and creatinine from the blood. The concentrations of solutes normally found in the urine (for example potassium, phosphorus and urea) are undesirably high in the blood, but low or absent in the dialysis solution, and constant replacement of the dialysate ensures that the concentration of undesired solutes is kept low on this side of the membrane. The dialysis solution has levels of minerals like potassium and calcium that are similar to their natural concentration in healthy blood. For another solute, bicarbonate, dialysis solution level is set at a slightly higher level than in normal blood, to encourage the diffusion of bicarbonate into the blood, to act as a pH buffer to neutralize the metabolic acidosis that is often present in these patients. The levels of the components of dialysate are typically prescribed by a nephrologist according to the needs of the individual patient.
In peritoneal dialysis, wastes and water are removed from the blood inside the body using the peritoneum as a natural semipermeable membrane. Waste and excess water move from the blood, across the visceral peritoneum due to its large surface area and into a special dialysis solution, called dialysate, in the peritoneal cavity within the abdomen.
Types
There are three primary and two secondary types of dialysis: hemodialysis (primary), peritoneal dialysis (primary), hemofiltration (primary), hemodiafiltration (secondary) and intestinal dialysis (secondary).
Hemodialysis
In hemodialysis, the patient's blood is pumped through the blood compartment of a dialyzer, exposing it to a partially permeable membrane. The dialyzer is composed of thousands of tiny hollow synthetic fibers. The fiber wall acts as the semipermeable membrane. Blood flows through the fibers, dialysis solution flows around the outside of the fibers, and water and wastes move between these two solutions. The cleansed blood is then returned via the circuit back to the body. Ultrafiltration occurs by increasing the hydrostatic pressure across the dialyzer membrane. This usually is done by applying a negative pressure to the dialysate compartment of the dialyzer. This pressure gradient causes water and dissolved solutes to move from blood to dialysate and allows the removal of several litres of excess fluid during a typical 4-hour treatment.
In the United States, hemodialysis treatments are typically given in a dialysis center three times per week (due in the United States to Medicare reimbursement rules); however, as of 2005 over 2,500 people in the United States were dialyzing at home more frequently, for various treatment lengths. Studies have demonstrated the clinical benefits of dialyzing 5 to 7 times a week, for 6 to 8 hours. This type of hemodialysis is usually called nocturnal daily hemodialysis, and one study has shown it provides a significant improvement in both small and large molecular weight clearance and decreases the need for phosphate binders. These frequent long treatments are often done at home while sleeping, but home dialysis is a flexible modality and schedules can be changed day to day, week to week. In general, studies show that both increased treatment length and frequency are clinically beneficial.
Hemodialysis was one of the most common procedures performed in U.S. hospitals in 2011, occurring in 909,000 stays (a rate of 29 stays per 10,000 population).
Peritoneal dialysis
In peritoneal dialysis, a sterile solution containing glucose (called dialysate) is run through a tube into the peritoneal cavity, the abdominal body cavity around the intestine, where the peritoneal membrane acts as a partially permeable membrane.
This exchange is repeated 4–5 times per day; automatic systems can run more frequent exchange cycles overnight. Peritoneal dialysis is less efficient than hemodialysis, but because it is carried out for a longer period of time, the net effect in terms of removal of waste products, salt and water is similar to hemodialysis. Peritoneal dialysis is carried out at home by the patient, often without help. This frees patients from the routine of having to go to a dialysis clinic on a fixed schedule multiple times per week. Peritoneal dialysis can be performed with little to no specialized equipment (other than bags of fresh dialysate).
Hemofiltration
Hemofiltration is a similar treatment to hemodialysis, but it makes use of a different principle. The blood is pumped through a dialyzer or "hemofilter" as in dialysis, but no dialysate is used. A pressure gradient is applied; as a result, water moves across the very permeable membrane rapidly, "dragging" along with it many dissolved substances, including ones with large molecular weights, which are not cleared as well by hemodialysis. Salts and water lost from the blood during this process are replaced with a "substitution fluid" that is infused into the extracorporeal circuit during the treatment.
Hemodiafiltration
Hemodiafiltration is a combination of hemodialysis and hemofiltration. It is used to purify the blood of toxins when the kidneys are not working normally, and it is also used to treat acute kidney injury (AKI).
Intestinal dialysis
In intestinal dialysis, the diet is supplemented with soluble fibres such as acacia fibre, which is digested by bacteria in the colon. This bacterial growth increases the amount of nitrogen that is eliminated in fecal waste. An alternative approach utilizes the ingestion of 1 to 1.5 liters of non-absorbable solutions of polyethylene glycol or mannitol every fourth hour.
Indications
The decision to initiate dialysis or hemofiltration in patients with kidney failure depends on several factors. These can be divided into acute or chronic indications.
Symptoms of depression and of kidney failure can resemble each other. It is important that there is open communication between the dialysis team and the patient, as this allows for a better quality of life. Knowing the patient's needs allows the dialysis team to provide more options, such as changes in dialysis type (for example, home dialysis, which lets patients be more active) or changes in eating habits to avoid unnecessary waste products.
Acute indications
Indications for dialysis in a patient with acute kidney injury are summarized with the vowel mnemonic of "AEIOU":
Acidemia from metabolic acidosis in situations in which correction with sodium bicarbonate is impractical or may result in fluid overload.
Electrolyte abnormality, such as severe hyperkalemia, especially when combined with AKI.
Intoxication, that is, acute poisoning with a dialyzable substance. These substances can be represented by the mnemonic SLIME: salicylic acid, lithium, isopropanol, magnesium-containing laxatives and ethylene glycol.
Overload of fluid not expected to respond to treatment with diuretics
Uremia complications, such as pericarditis, encephalopathy, or gastrointestinal bleeding.
Chronic indications
Chronic dialysis may be indicated when a patient has symptomatic kidney failure and low glomerular filtration rate (GFR < 15 mL/min). Between 1996 and 2008, there was a trend to initiate dialysis at progressively higher estimated GFR (eGFR).
A review of the evidence shows no benefit, and potential harm, from early dialysis initiation, defined as starting dialysis at an estimated GFR greater than 10 mL/min/1.73 m2. Observational data from large registries of dialysis patients suggest that an early start of dialysis may be harmful.
The most recent published guidelines from Canada on when to initiate dialysis recommend an intent to defer dialysis until a patient has definite kidney failure symptoms, which may occur at an estimated GFR of 5–9 mL/min/1.73 m2.
Impact
Effectiveness
Even though it is not a cure for kidney failure, dialysis is a very effective treatment. Survival in kidney failure is generally longer with dialysis than without (with only conservative kidney management). However, from the age of 80, and in elderly patients with comorbidities, there is no difference in survival between the two groups.
Quality of life
Dialysis is an intensive treatment that has a serious impact on those treated with it. Being on dialysis usually leads to a poor quality of life. However, there are strategies that can make it more tolerable. Receiving dialysis at home might improve people's quality of life and autonomy.
Scheduling and adherence
Dialysis is typically on a regular schedule of three times a week.
Given that dialysis patients have little or no capacity to filter solutes and regulate their fluid volume due to kidney dysfunction, missing dialysis is potentially lethal. These patients can become hyperkalaemic, leading to cardiac dysrhythmias and potential cardiac arrest, and can accumulate fluid in the alveoli of their lungs, which can impair breathing.
Some medications can be used in the short term to decrease serum potassium and stabilise the cardiac muscle so as to facilitate stabilisation of acute patients in the setting of missed dialysis. Salbutamol and insulin can decrease serum potassium by up to 1.0 mmol/L each by shifting potassium from the extracellular space into the intracellular spaces within skeletal muscle cells, and calcium gluconate is used to stabilise the myocardium in hyperkalaemic patients, in an attempt to reduce the likelihood of lethal arrhythmias arising from a high serum potassium.
Survival without dialysis
People who decide against dialysis treatment when reaching end-stage chronic kidney disease could survive several years and experience improvements in their mental well-being in addition to sustained physical well-being and overall quality of life until late in their illness course. However, use of acute care services in these cases is common and intensity of end-of-life care is highly variable among people opting out of dialysis.
Cost
The average annual total cost per dialysis patient varies between countries: for example, 19,812 USD in South Korea, 26,479 USD in New Zealand, and 89,958 USD in the Netherlands, according to a 2021 article.
Pediatric dialysis
Over the past 20 years, children have benefited from major improvements in both technology and clinical management of dialysis. Morbidity during dialysis sessions has decreased, with seizures now exceptional and hypotensive episodes rare. Pain and discomfort have been reduced with the use of chronic internal jugular venous catheters and anesthetic creams for fistula puncture. Non-invasive technologies to assess patient target dry weight and access flow can significantly reduce patient morbidity and health care costs. Mortality in paediatric and young adult patients on chronic hemodialysis is associated with multifactorial markers of nutrition, inflammation, anaemia and dialysis dose, which highlights the importance of multimodal intervention strategies besides adequate hemodialysis treatment as determined by Kt/V alone.
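The dialysis dose Kt/V cited above is, conceptually, the dialyzer urea clearance K multiplied by the treatment time t and divided by the patient's urea distribution volume V. As a rough illustration, the sketch below uses the Daugirdas second-generation formula, a widely cited bedside estimate of single-pool Kt/V from pre- and post-dialysis urea levels; the session numbers are invented and this is not a clinical tool.

```python
import math

def single_pool_ktv(pre_urea, post_urea, hours, uf_litres, post_weight_kg):
    """Daugirdas second-generation estimate of single-pool Kt/V:
    Kt/V = -ln(R - 0.008*t) + (4 - 3.5*R) * UF / W,
    where R is the post/pre urea ratio, t the session length in hours,
    UF the ultrafiltration volume (L), and W the post-dialysis weight (kg)."""
    r = post_urea / pre_urea
    return -math.log(r - 0.008 * hours) + (4 - 3.5 * r) * uf_litres / post_weight_kg

# Invented session: urea falls from 30 to 10 mmol/L over 4 hours,
# with 2 L of fluid removed and a post-dialysis weight of 70 kg.
print(round(single_pool_ktv(30, 10, 4, 2, 70), 2))  # about 1.28
```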
Biocompatible synthetic membranes, dialyzers of specifically small size and new low-volume extracorporeal tubing have been developed for young infants. Arterial and venous tubing is kept to a minimum length and diameter: tubing sets with volumes below roughly 80–110 mL are designed for pediatric patients, while sets of roughly 130–224 mL are used for adult patients, regardless of the blood pump segment size, which can be 6.4 mm for normal dialysis or 8.0 mm for high-flux dialysis in all patients. All dialysis machine manufacturers design their machines to support pediatric dialysis. In pediatric patients, the pump speed should be kept on the low side, according to the patient's blood output capacity, and clotting and the heparin dose should be carefully monitored. High-flux dialysis is not recommended for pediatric patients.
In children, hemodialysis must be individualized and viewed as an "integrated therapy" that considers their long-term exposure to chronic renal failure treatment. Dialysis is seen only as a temporary measure for children compared with renal transplantation, because transplantation enables the best chance of rehabilitation in terms of educational and psychosocial functioning. For children on long-term chronic dialysis, however, the highest standards should be applied to preserve their future "cardiovascular life". This might include more dialysis time and online hemodiafiltration (HDF) with synthetic high-flux membranes with a surface area of 0.2 m2 to 0.8 m2 and blood tubing lines of low volume but with a large blood pump segment of 6.4/8.0 mm, if the rather restricted concept of small-solute urea dialysis clearance is to be improved upon.
Dialyzable substances
Characteristics
Dialyzable substances—substances removable with dialysis—have these properties:
Low molecular mass
High water solubility
Low protein binding capacity
Prolonged elimination (long half-life)
Small volume of distribution
Substances
Ethylene glycol
Procainamide
Methanol
Isopropyl alcohol
Barbiturates
Lithium
Bromide
Sotalol
Chloral hydrate
Ethanol
Acetone
Atenolol
Theophylline
Salicylates
Baclofen
Dialysis in different countries
United Kingdom
The National Health Service provides dialysis in the United Kingdom. In 2022, there were more than 30,000 people on dialysis in the UK.
For people who need to travel to dialysis centres, patient transport services are generally provided without charge. In 2018, Cornwall Clinical Commissioning Group proposed to restrict this provision to people with specific medical or financial needs, but changed its mind after a campaign led by Kidney Care UK and decided to fund transport for people requiring dialysis three times a week, or six times a month, for a minimum of three months.
Home dialysis
UK clinical guidelines recommend offering people a choice regarding where they get their dialysis. Research in the UK found that receiving dialysis at home can lead to better quality of life and is less costly than receiving dialysis in hospital. However, many people in the UK prefer to receive dialysis in hospital: in 2022, only 1 in 6 chose to receive it at home.
There are various reasons why people do not choose home dialysis. Among these are preferring hospitals as a way of getting regular social contact, being concerned about necessary changes to their homes and their family members becoming carers. Other reasons include a lack of motivation, doubting abilities for self-managed treatment, and not having suitable housing or support at home. Hospital dialysis is also often presented as the norm by healthcare professionals.
Encouraging people to have dialysis at home could reduce the impact of dialysis on people's social and professional lives. Some ways to help are offering peer support from other people on home dialysis, better education materials, and professionals being more familiar with home dialysis and its impact. Choosing home dialysis is more likely at kidney centers which have better organisational culture, leadership and attitude.
United States
Since 1972, Medicare has covered the cost of dialysis and transplants for all citizens in the United States. By 2014, more than 460,000 Americans were undergoing treatment, the costs of which amount to six percent of the entire Medicare budget. Kidney disease is the ninth leading cause of death, and the U.S. has one of the highest mortality rates for dialysis care in the industrialized world. The rate of patients getting kidney transplants has been lower than expected. These outcomes have been blamed on a new for-profit dialysis industry responding to government payment policies. A 1999 study concluded that "patients treated in for-profit dialysis facilities have higher mortality rates and are less likely to be placed on the waiting list for a renal transplant than are patients who are treated in not-for-profit facilities", possibly because transplantation removes a constant stream of revenue from the facility. The insurance industry has complained about kickbacks and problematic relationships between charities and providers.
China
The Government of China provides the funding for dialysis treatment. There is a challenge to reach everyone who needs dialysis treatment because of the unequal distribution of health care resources and dialysis centers. Each year, 395,121 individuals in China receive hemodialysis or peritoneal dialysis. The percentage of the Chinese population with chronic kidney disease is 10.8%. The Chinese Government is trying to increase the amount of peritoneal dialysis taking place to meet the needs of the nation's individuals with chronic kidney disease.
Australia
Dialysis is provided without cost to all patients through Medicare, with 75% of all dialysis being administered as haemodialysis to patients three times per week in a dialysis facility. The Northern Territory has the highest incidence rate per population of haemodialysis, with Indigenous Australians having higher rates of chronic kidney disease and lower rates of functional kidney transplants than the broader population. The remote Central Australian town of Alice Springs, despite having a population of approximately 25,000, has the largest dialysis unit in the Southern Hemisphere. Many people must move to Alice Springs from remote Indigenous communities to access health services such as haemodialysis, which results in housing shortages, overcrowding, and poor living conditions.
History
In 1913, Leonard Rowntree and John Jacob Abel of Johns Hopkins Hospital developed the first dialysis system which they successfully tested in animals. A Dutch doctor, Willem Johan Kolff, constructed the first working dialyzer in 1943 during the Nazi occupation of the Netherlands. Due to the scarcity of available resources, Kolff had to improvise and build the initial machine using sausage casings, beverage cans, a washing machine and various other items that were available at the time. Over the following two years (1944–1945), Kolff used his machine to treat 16 patients with acute kidney failure, but the results were unsuccessful. Then, in 1945, a 67-year-old comatose woman regained consciousness following 11 hours of hemodialysis with the dialyzer and lived for another seven years before dying from an unrelated condition. She was the first-ever patient successfully treated with dialysis. Gordon Murray of the University of Toronto independently developed a dialysis machine in 1945. Unlike Kolff's rotating drum, Murray's machine used fixed flat plates, more like modern designs. Like Kolff, Murray's initial success was in patients with acute renal failure. Nils Alwall of Lund University in Sweden modified a similar construction to the Kolff dialysis machine by enclosing it inside a stainless steel canister. This allowed the removal of fluids, by applying a negative pressure to the outside canister, thus making it the first truly practical device for hemodialysis. Alwall treated his first patient in acute kidney failure on 3 September 1946.
See also
Thomas Graham (chemist), the founder of dialysis and father of colloid chemistry
Dialysis tubing
List of US dialysis providers
Vitamin and mineral management for dialysis
Nephrology
Hepatorenal syndrome
References
Bibliography
Further reading
External links
"Machine Cleans Blood While You Wait"—1950 article on early use of dialysis machine at Bellevue Hospital New York City—an example of how complex and large early dialysis machines were
Home Dialysis Museum—History and pictures of dialysis machines through time
Introduction to Dialysis Machines—Tutorial describing the main subfunctions of dialysis systems.
"First Nations man conducts own dialysis treatments to avoid move to the city"—CBC News (November 30, 2016)
Biochemical separation processes
Detoxification
Medical mnemonics
Membrane technology | Kidney dialysis | [
"Chemistry",
"Biology"
] | 4,831 | [
"Biochemistry methods",
"Biochemical separation processes",
"Membrane technology",
"Separation processes"
] |
56,517 | https://en.wikipedia.org/wiki/Raymond%20Smullyan | Raymond Merrill Smullyan (; May 25, 1919 – February 6, 2017) was an American mathematician, magician, concert pianist, logician, Taoist, and philosopher.
Born in Far Rockaway, New York, Smullyan's first career choice was in stage magic. He earned a BSc from the University of Chicago in 1955 and his PhD from Princeton University in 1959. Smullyan is one of many logicians to have studied with Alonzo Church.
Life
Smullyan was born on May 25, 1919, in Far Rockaway, Queens, New York, to an Ashkenazi Jewish family. His father was Isidore Smullyan, a Russian-born businessman who emigrated to Belgium when young and graduated from the University of Antwerp, his native language being French. His mother was Rosina Smullyan (née Freeman), a painter and actress born and raised in London. Both parents were musical, his father playing the violin and his mother playing the piano. Smullyan was the youngest of three children. His eldest brother, Emile Benoit Smullyan, later became an economist under the name of Emile Benoit. His sister was Gladys Smullyan, later Gladys Gwynn. His cousin was the philosopher Arthur Francis Smullyan (1912–1998). In Far Rockaway he was a grade school classmate of Richard Feynman.
Smullyan showed musical talent from a young age, playing both violin and piano. He studied with pianist Grace Hofheimer in New York. He had perfect pitch. He became interested in logic at the age of 5. In 1931 he won a gold medal in the piano competition of the New York Music Week Association when he was aged 12 (the previous year he had won the silver medal). After he graduated from grade school, the Depression forced his family to move to Manhattan, and he attended Theodore Roosevelt High School in The Bronx. He played violin in the school orchestra but devoted more time to playing the piano. At high school he fell in love with mathematics when he took a class in geometry. Apart from his classes in geometry, physics, and chemistry, however, he was dissatisfied with his high school, and dropped out. Smullyan studied mathematics on his own, including analytic geometry, calculus, and modern higher algebra – particularly group theory and Galois theory. He sat in on a course taught by Ernest Nagel at Columbia University that was being taken by his cousin, Arthur Smullyan, and independently discovered Boolean rings. He also spent a year at the Cambridge Rindge and Latin School. Smullyan did not graduate with a high school diploma, but he took the College Board exams to get into college. He studied mathematics and music at Pacific University in Oregon for one semester, and at Reed College for less than a semester, before following the pianist Bernhard Abramowitsch to San Francisco. Smullyan audited classes at the University of California, Berkeley, before returning to New York, where he continued his independent study of modern abstract algebra. At this time he composed a number of chess problems which were published many years later; he also learned magic.
At the age of 24, Smullyan enrolled at the University of Wisconsin-Madison for three semesters, because he wanted to study modern algebra with a professor whose book he had read. He later transferred to the University of Chicago and majored in mathematics. After a break in which he worked as a magician in New York and met his first wife, he returned to the University of Chicago, where he also worked as a magician at night and taught piano on the faculty at Roosevelt University. While at Chicago he took three courses with the philosopher Rudolf Carnap, for which he wrote three term papers. Carnap recommended that he send the first term paper to Willard Van Orman Quine, which he did. Quine replied that he should tinker with his idea about what makes quantification theory tick. Of the other two term papers, one, entitled "Languages in which Self-Reference is Possible" (which Carnap showed to Kurt Gödel), was later published in 1957. The other was later published in his 1961 book Theory of Formal Systems. While still a student at the University of Chicago, on the basis of a recommendation from Carnap, he was hired by John G. Kemeny, the chair of the mathematics department at Dartmouth College. Smullyan taught at Dartmouth for two years. During that time he separated from his first wife, from whom he later divorced. He also used to visit his friends Gloria and Marvin Minsky (Gloria Minsky was his cousin) in Cambridge, Massachusetts. The University of Chicago, after a battle between the faculty and administration, agreed to award Smullyan a bachelor of science degree in mathematics in 1955 based partly on courses he had taught at Dartmouth (although he had not taken them at Chicago). Both Carnap and Kemeny helped him to get accepted to the graduate program in mathematics at Princeton University. He received a PhD in mathematics from Princeton University in 1959. He completed his doctoral dissertation, titled "Theory of formal systems", under the supervision of Alonzo Church, which was published in 1961. While a graduate student at Princeton he met his second wife, Blanche, a pianist and teacher, born in Belgium, to whom he was married for 48 years until she died in 2006.
While a PhD student, Smullyan's term paper for Carnap, "Languages in which Self-Reference is Possible", was published in 1957 in the Journal of Symbolic Logic, showing that Gödelian incompleteness held for formal systems considerably more elementary than that of Kurt Gödel's 1931 landmark paper. The contemporary understanding of Gödel's theorem dates from this paper. Smullyan later made a compelling case that much of the fascination with Gödel's theorem should be directed at Tarski's theorem, which is much easier to prove and equally disturbing philosophically.
After getting his PhD from Princeton, he taught at Princeton for two years. He subsequently taught at New York University, at the State University of New York at New Paltz, at Smith College, and at the Belfer Graduate School of Science at Yeshiva University, before becoming professor of mathematics and computer science at Lehman College in the Bronx, where he taught undergraduate students from 1968 to 1984. He was also a professor of philosophy at the CUNY Graduate Center from 1976 to 1984, where he taught graduate students. He was subsequently a professor of philosophy at Indiana University, where he taught both undergraduate and graduate students. He was also an amateur astronomer, using a six-inch reflecting telescope for which he ground the mirror. Fellow mathematician Martin Gardner was a close friend.
Smullyan wrote many books about recreational mathematics and recreational logic, most notably What Is the Name of This Book?. His A Beginner's Further Guide to Mathematical Logic, published in 2017, was his final book.
Logic problems
Many of Smullyan's logic problems are extensions of classic puzzles. Knights and Knaves involves knights (who always tell the truth) and knaves (who always lie). This is based on a story of two doors and two guards, one who lies and one who tells the truth. One door leads to heaven and one to hell, and the puzzle is to find out which door leads to heaven by asking one of the guards a question. One way to do this is to ask, "Which door would the other guard say leads to hell?". Unfortunately, this fails, as the liar can answer, "He would say the door to paradise leads to hell," and the truth-teller would answer, "He would say the door to paradise leads to hell." You must point at one of the doors as well as simply stating a question. For example, as philosopher Richard Turnbull has explained, you could point at either door and ask, "Will the other guard say this is the door to paradise?" The truth-teller will say "No" if it is in fact the door to paradise, as will the liar. So you pick that door. The truth-teller will answer "Yes" if it is the door to Hell, as will the liar, so you pick the other door. Note also that we are not told anything about the goals of either guard: for all we know, the liar may want to help us and the truth-teller not help us, or both may be indifferent, so there is no reason to think either one will phrase an answer so as to be maximally informative. This is why actually pointing at a door while asking the question is crucial. This idea was famously used in the 1986 film Labyrinth.
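Because the puzzle has only four cases, it can be checked exhaustively. Below is a minimal sketch (written for this article, not taken from Smullyan or Turnbull) that enumerates which guard is asked and which door is pointed at, and confirms that the pointed question always leads to the door to paradise:

```python
from itertools import product

def answer(speaker_lies, proposition):
    """A guard's yes/no answer to a proposition: the liar negates the truth."""
    return (not proposition) if speaker_lies else proposition

# Enumerate: asked guard (liar or truth-teller) x pointed door (paradise or not).
for asked_lies, pointed_is_paradise in product([True, False], repeat=2):
    # Truth value of "the other guard will say this door leads to paradise".
    other_lies = not asked_lies
    truth = answer(other_lies, pointed_is_paradise)
    reply = answer(asked_lies, truth)  # the asked guard's actual reply
    # Decision rule from the text: "No" -> take the pointed door; "Yes" -> the other.
    chosen_is_paradise = pointed_is_paradise if not reply else not pointed_is_paradise
    assert chosen_is_paradise

print("The pointed question identifies the paradise door in all four cases.")
```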
In more complex puzzles, he introduces characters who may lie or tell the truth (referred to as "normals"), and furthermore instead of answering "yes" or "no", use words which mean "yes" or "no", but the reader does not know which word means which. The puzzle known as "the hardest logic puzzle ever" is based on these characters and themes. In his Transylvania puzzles, half of the inhabitants are insane, and believe only false things, whereas the other half are sane and believe only true things. In addition, humans always tell the truth, and vampires always lie. For example, an insane vampire will believe a false thing (2 + 2 is not 4) but will then lie about it, and say that it is false. A sane vampire knows 2 + 2 is 4, but will lie and say it is not. And mutatis mutandis for humans. Thus everything said by a sane human or an insane vampire is true, while everything said by an insane human or a sane vampire is false.
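The Transylvania rules amount to two boolean negations, one for insanity (belief flips) and one for vampirism (assertion flips). A short enumeration (a sketch, not Smullyan's own formulation) reproduces the rule stated above:

```python
# Sane inhabitants believe true things; insane ones believe their negations.
# Humans assert what they believe; vampires assert the opposite.
fact = True  # e.g. "2 + 2 = 4"
for is_vampire in (False, True):
    for is_insane in (False, True):
        belief = fact ^ is_insane        # insanity flips belief
        statement = belief ^ is_vampire  # vampirism flips the assertion
        kind = ("human", "vampire")[is_vampire]
        mind = ("sane", "insane")[is_insane]
        print(f"{mind} {kind} asserts '2 + 2 = 4' is {statement}")
# Assertions are true exactly when is_insane == is_vampire: sane humans and
# insane vampires speak the truth; insane humans and sane vampires do not.
```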
His book Forever Undecided popularizes Gödel's incompleteness theorems by phrasing them in terms of reasoners and their beliefs, rather than formal systems and what can be proved in them. For example, if a native of a knight/knave island says to a sufficiently self-aware reasoner, "You will never believe that I am a knight", the reasoner cannot believe either that the native is a knight or that he is a knave without becoming inconsistent (i.e., holding two contradictory beliefs). The equivalent theorem is that for any formal system S, there exists a mathematical statement that can be interpreted as "This statement is not provable in formal system S". If the system S is consistent, neither the statement nor its opposite will be provable in it. See also Doxastic logic.
Inspector Craig is a frequent character in Smullyan's "puzzle-novellas." He is generally called to the scene of a crime whose solution is mathematical in nature. Then, through a series of increasingly difficult challenges, he (and the reader) begin to understand the principles in question. Finally, the novella culminates in Inspector Craig (and the reader) solving the crime, using the mathematical and logical principles learned. Inspector Craig generally does not learn the formal theory in question, and Smullyan usually reserves a few chapters after the Inspector Craig adventure to illuminate the analogy for the reader. Inspector Craig gets his name from William Craig.
His book To Mock a Mockingbird (1985) is a recreational introduction to the subject of combinatory logic.
Apart from writing about and teaching logic, Smullyan released a recording of his favorite baroque keyboard and classical piano pieces by composers such as Bach, Scarlatti, and Schubert. Some recordings are available on the Piano Society website, along with the video "Rambles, Reflections, Music and Readings". He has also written two autobiographical works, one entitled Some Interesting Memories: A Paradoxical Life and a later book entitled Reflections: The Magic, Music and Mathematics of Raymond Smullyan.
In 2001, documentary filmmaker Tao Ruspoli made a film about Smullyan called "This Film Needs No Title: A Portrait of Raymond Smullyan."
Philosophy
Smullyan wrote several books about Taoist philosophy, a philosophy he believed neatly solved most or all traditional philosophical problems as well as integrating mathematics, logic, and philosophy into a cohesive whole. One of Smullyan's discussions of Taoist philosophy centers on the question of free will in an imagined conversation between a mortal human and God.
Bibliography
Books
(1961) Theory of Formal Systems
(1968) First-Order Logic
(1977) The Tao is Silent
(1978) What Is the Name of This Book? The Riddle of Dracula and Other Logical Puzzles – knights, knaves, and other logic puzzles
(1979) The Chess Mysteries of Sherlock Holmes – introducing retrograde analysis in the game of chess
(1980) This Book Needs No Title
(1981) The Chess Mysteries of the Arabian Knights – second book on retrograde analysis chess problems
(1982) Alice in Puzzle-Land
(1982) The Lady or the Tiger? – ladies, tigers, and more logic puzzles
(1983) 5000 B.C. and Other Philosophical Fantasies
(1985) To Mock a Mockingbird – puzzles based on combinatory logic
(1987) Forever Undecided – puzzles based on undecidability in formal systems
(1992) Gödel's Incompleteness Theorems
(1992) Satan, Cantor and Infinity
(1993) Recursion Theory for Metamathematics
(1994) Diagonalization and Self-Reference
(1996) Set Theory and the Continuum Problem
(1997) The Riddle of Scheherazade
(2002) Some Interesting Memories: A Paradoxical Life
(2003) Who Knows?: A Study of Religious Consciousness
(2009) Logical Labyrinths
(2009) Rambles Through My Library, Praxis International
(2010) King Arthur in Search of his Dog
(2013) The Godelian Puzzle Book: Puzzles, Paradoxes and Proofs
(2014) A Beginner's Guide to Mathematical Logic
(2015) The Magic Garden of George B and Other Logic Puzzles
(2015) Reflections: The Magic, Music and Mathematics of Raymond Smullyan
(2016) A Beginner's Further Guide to Mathematical Logic
(2016) A Mixed Bag: Jokes, Riddles, Puzzles and Memorabilia
Articles, columns and miscellanea
Is God a Taoist? by Raymond Smullyan, 1977.
Planet Without Laughter by Raymond Smullyan, 1980.
An Epistemological Nightmare by Raymond Smullyan, 1982.
See also
Alice's Adventures in Wonderland
Coercive logic
Paradox
The Lady, or the Tiger
References
External links
Raymond Smullyan's website at Indiana University.
Raymond Smullyan at the MacTutor History of Mathematics archive.
Raymond Smullyan at the Mathematics Genealogy Project.
Raymond Smullyan at Piano Society
1919 births
2017 deaths
20th-century American mathematicians
20th-century American philosophers
21st-century American mathematicians
21st-century American philosophers
American Taoists
American chess writers
American classical pianists
American male classical pianists
American male pianists
American logicians
American magicians
American male non-fiction writers
American people of Russian-Jewish descent
Chess variant inventors
Discover (magazine) people
Indiana University faculty
Lehman College faculty
Mathematicians from New York (state)
Mathematics popularizers
People from Far Rockaway, Queens
Princeton University alumni
Puzzle designers
Recreational mathematicians
University of Chicago alumni
Writers of Sherlock Holmes pastiches | Raymond Smullyan | [
"Mathematics"
] | 3,126 | [
"Recreational mathematics",
"Recreational mathematicians"
] |
56,520 | https://en.wikipedia.org/wiki/Conservation%20easement | In the United States, a conservation easement (also called conservation covenant, conservation restriction or conservation servitude) is a power invested in a qualified land conservation organization called a "land trust", or a governmental (municipal, county, state or federal) entity to constrain, as to a specified land area, the exercise of rights otherwise held by a landowner so as to achieve certain conservation purposes. It is an interest in real property established by agreement between a landowner and land trust or unit of government. The conservation easement "runs with the land", meaning it is applicable to both present and future owners of the land. The grant of conservation easement, as with any real property interest, is part of the chain of title for the property and is normally recorded in local land records.
The conservation easement's purposes will vary depending on the character of the particular property, the goals of the land trust or government unit, and the needs of the landowners. For example, an easement's purposes (often called "conservation objectives") might include any one or more of the following:
Maintain and improve water quality;
Perpetuate and foster the growth of healthy forest;
Maintain and improve wildlife habitat and migration corridors;
Protect scenic vistas visible from roads and other public areas; or
Ensure that lands are managed so that they are always available for sustainable agriculture and forestry.
The conservation easement's administrative terms for advancing the conservation objectives also vary but typically forbid or substantially constrain subdivision and other real estate development.
The most distinguishing feature of the conservation easement as a conservation tool is that it enables users to achieve specific conservation objectives on the land while keeping the land in the ownership and control of landowners for uses consistent with the conservation objectives.
Unlike land use regulation, a conservation easement is placed on property voluntarily by the owner whose rights are being restricted. The restrictions of the easement, once set in place, are however perpetual (and potentially reduce the market value of the remaining ownership interest in the property). Appraisals of the value of the easement, and financial arrangements between the parties (land owner and land trust), generally are kept private.
The landowner who grants a conservation easement continues to manage and otherwise privately own the land and may receive significant state and federal tax advantages for having donated and/or sold the conservation easement. In granting the conservation easement, the easement holder has a responsibility to monitor future uses of the land to ensure compliance with the terms of the easement and to enforce the terms if a violation occurs.
Although a conservation easement prohibits certain uses by the landowner, such an easement does not make the land public. On the contrary, many conservation easements confer no use of the land either to the easement holder or to the public. Furthermore, many conservation easements reserve to the landowner specific uses which if not reserved would be prohibited. Some conservation easements confer specific uses to the easement holder or to the public. These details are spelled out in the legal document that creates the conservation easement.
Income tax deductions
Landowners in the United States who donate a "qualifying" conservation easement to a "qualified" land protection organization under the regulations set forth in 170(h) of the Internal Revenue Code may be eligible for a federal income tax deduction equal to the value of their donation. The value of the easement donation, as determined by a qualified appraiser, equals the difference between the fair market value of the property before and after the easement takes effect.
To qualify for this income tax deduction, the easement must be: a) perpetual; b) held by a qualified governmental or non-profit organization; and, c) serve a valid "conservation purpose", meaning the property must have an appreciable natural, scenic, historic, scientific, recreational, or open space value. As a result of legislation signed by President George W. Bush on August 17, 2006 (H.R. 4, the Pension Protection Act of 2006), in 2006 and 2007, conservation easement donors were able to deduct the value of their gift at the rate of 50% of their adjusted gross income (AGI) per year. Further, landowners with 50% or more of their income from agriculture were able to deduct the donation at a rate of 100% of their AGI. Any amount of the donation remaining after the first year could be carried forward for fifteen additional years (allowing a maximum of sixteen years within which the deduction may be utilized), or until the amount of the deduction has been used up, whichever comes first. With the passage of the Farm Bill in the summer of 2008 these expanded federal income tax incentives were extended such that they also apply to all conservation easements donated in 2008 and 2009. The provision was renewed annually each year between 2010 and 2014 and was finally incorporated into the tax code without an expiration date in 2015.
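As a worked illustration of the 50%-of-AGI limit and the carry-forward (all figures hypothetical, and this simplified sketch ignores the many further conditions in 170(h)):

```python
def deduction_schedule(easement_value, agi, agi_fraction=0.50, max_years=16):
    """Spread a donated easement's deduction over the donation year plus up
    to 15 carry-forward years, deducting at most agi_fraction of AGI per year."""
    remaining, schedule = easement_value, []
    for _ in range(max_years):
        if remaining <= 0:
            break
        used = min(remaining, agi * agi_fraction)
        schedule.append(used)
        remaining -= used
    return schedule, remaining  # any remainder after year 16 is lost

# Hypothetical: easement worth $600,000 (fair market value before minus
# value after), donor AGI of $100,000 per year.
years, unused = deduction_schedule(600_000, 100_000)
print(len(years), "years at", years[0], "per year; unused:", unused)
# -> 12 years at 50000.0 per year; unused: 0
```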
Income tax credits (states)
Land conservation advocates have long tried to enact additional tax incentives for landowners to donate easements, above the federal charitable deduction (and state tax deduction in states that conform to federal tax process). There has been discussion of creating a federal income tax credit for easement donors since around 1980. However, no federal tax credit has been enacted. States, however, have moved ahead to grant credits that can be used to pay state income tax to donors of qualified conservation easements. In 1983, North Carolina became the first state to establish such a program.
Attorney Philip Tabas of The Nature Conservancy promoted the state tax credit idea widely in the 1990s. In 1999, four state legislatures enacted state tax credit programs (Virginia, Delaware, Colorado, and Connecticut, in that order). South Carolina and California followed in 2000. Several other states have followed since.
For landowners with little income subject to state taxation, a tax credit is of little value and may be insufficient incentive to grant a conservation easement. For this reason, in some states, including Colorado and Virginia, the state tax credit is transferable: the donor/landowner can sell the credit to someone else, and the buyer, who normally purchases the credit at a discount from face value, can use it against their own state income tax. However, caps on the amount of credit an easement can generate, and other restrictions, limit the scope of some state tax credit programs.
In the states where credit for conservation land donations is transferable, free markets have arisen. Brokers assist landowners with excess credit to contact buyers, and the brokers often handle payments and paperwork to protect the principals, and to ensure that transfers are fully reported to the state tax authorities. The federal and state tax treatment of profits from sale and use of transferable tax credit have been the subject of extensive discussion and the issuance of several guidance documents by the Internal Revenue Service.
The New Mexico state income tax credit was originated in 2003. New transferability legislation, effective January 1, 2008, applies retroactively to conservation easements effected from January 1, 2004.
The Virginia transferable credit program is the largest among the States in dollar value of property conserved. By the end of 2010, $2,512,000,000 of property value had been donated as easements in Virginia for which tax credit was claimed. The qualifying easements cover over of Virginia landscape. The Virginia program now (2011) grants about $110 million of new tax credit each year. The credit allowance is 40% of the appraised value of the easement donation, so this equates to $275 million of property value donated per year for protection of wildlife habitat, farmland and woodland, and scenic open space—in perpetuity. The other state tax credit programs are smaller in dollar measurement, but are very significant in the area and the conservation values that they cause to be protected. The concept of state tax credit action (in the absence of a federal tax credit) that Philip Tabas and The Nature Conservancy promoted in the 1990s has borne remarkable fruit, and continues to expand today.
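The Virginia figures above are tied together by the 40% credit rate; a quick check, with a hypothetical credit sale added to illustrate transferability (the 15% discount is invented):

```python
CREDIT_RATE = 0.40                  # Virginia: credit = 40% of easement value

annual_credit = 110_000_000         # about $110M of new credit per year (2011)
print(f"Implied donated value: ${annual_credit / CREDIT_RATE:,.0f}")  # $275,000,000

# Hypothetical transfer: a donor with little Virginia tax liability sells
# $50,000 of credit at a 15% discount from face value.
face, discount = 50_000, 0.15
print(f"Sale proceeds to the donor: ${face * (1 - discount):,.0f}")   # $42,500
```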
Estate tax reductions and exclusions
For landowners who will leave sizable estates upon their death, the most important financial impact of a conservation easement may be a significant reduction in estate taxes. Estate taxes often make it difficult for heirs to keep land intact and in the family because of high estate tax rates and high development value of land. It may be necessary to subdivide or sell land for development in order to pay these taxes which may not be the desire of the landowner or their heirs. A conservation easement can often provide significant help with this problem in three important ways:
Reduction in value of the estate. The deceased's estate will be reduced by the value of the donated conservation easement. As a result, taxes will be lower because heirs will not be required to pay taxes on the extinguished development rights. In other words, heirs will only have to pay estate taxes on preserved farmland values, and not full development values.
Estate exclusion. Section 2031(c) of the tax code provides further estate tax incentives for properties subject to a donated conservation easement. When property has a qualified conservation easement placed upon it, up to an additional 40% of the value of land (subject to a $500,000 cap) may be excluded from the estate when the landowner dies. This exclusion is in addition to the reduction in land value attributable to the easement itself as described above; a simplified calculation follows this list.
After death easement. Heirs may also receive these benefits (but not the income tax deduction) by electing to donate a conservation easement after the landowner's death and prior to filing the estate return (called a "post mortem" election).
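A minimal sketch of the 40%-with-cap exclusion as described in the list above (the real provision has further adjustments, and the input values are invented):

```python
def estate_exclusion(land_value, rate=0.40, cap=500_000):
    """Section 2031(c), as simplified above: exclude up to 40% of the
    land's value from the taxable estate, capped at $500,000."""
    return min(rate * land_value, cap)

print(estate_exclusion(800_000))    # 320000.0 -> 40% is below the cap
print(estate_exclusion(2_000_000))  # 500000.0 -> the cap binds
```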
In Pennsylvania, conservation restrictions on land included in the estate can reduce the inheritance tax owed.
Property tax incentives
Many states offer property tax incentives to conservation easement donors.
Issues to consider
As is the case with any property interest, a conservation easement may be taken by eminent domain (and thereby extinguished) when the public value of the proposed project exceeds that of the conservation interest being protected by the easement.
Conservation easements may result in a significant reduction in the sale price of the land because a builder can no longer develop it. In fact, this difference in value is the basis for the granting of the original tax incentives. Estimates of the resulting reduction in the land's value to the landowner range from 35% to 65%.
Clear boundaries of adjacent properties are not always consistent with each other. Currently, the National Conservation Easement Database (NCED) manages this issue by snapping boundary polygons to a standard parcel layer, which may differ from the original data provided by a landowner.
Against the background of the beneficial effects for nature provided by conservation easements, research suggests also considering in-fee conservation efforts (i.e., direct purchase of land by conservation actors). The cost-effectiveness of either governance approach depends on various aspects, such as economic and local ecological conditions, which therefore need to be closely considered in the decision.
Purchase of conservation easements
Many conservation easements are purchased with funds from federal, state, and local governments, nonprofit organizations, or private donors. In these cases, landowners are paid directly for the purchase of the conservation easement.
The Farm Bill, updated every five or more years, provides an important source of funds for conservation easement purchase. The 2014 Farm Bill created the Agricultural Conservation Easement Program (ACEP) by consolidating the Farm and Ranch Lands Protection Program, the Grassland Reserve Program, and the Wetlands Reserve Program. Under ACEP, the Natural Resources Conservation Service helps tribes, state and local governments, and land trusts protect agriculture from development and other non-agricultural uses. ACEP includes Agricultural Land Easements and Wetland Reserve Easements. Agricultural land easements preserve land for food production and aids in soil and water conservation. Wetland reserve easements aim to restore wetland areas that have been converted into agricultural land. To maximize the benefits, the program targets land that has both a high chance of restoration success and a history of low crop yields or crop failure. The Farm Bill also funds the purchase of conservation easements for forestland. The Forest Legacy Program is a voluntary Federal program in partnership with States which protects privately owned forest lands. Landowners are required to prepare a multiple resource management plan as part of the conservation easement acquisition.
The majority of states have direct funding sources for conservation. Commonly used funding sources include real estate transfer tax, legislative bonds, and lottery proceeds. For instance, in 2014, New Jersey added conservation funding from corporate business taxes through constitutional amendment, approved by 65% of voters. Many states and counties have programs for the purchase of agricultural conservation easements (PACE) to protect productive farmland from non-agricultural development. In 1974, Suffolk County in New York enacted the first PACE (also known as purchase of development rights or PDR) program. King County in Washington and the states of Maryland, Massachusetts, and Connecticut quickly followed suit. As of 2016, the PACE program operates in 32 states through both state and local programs.
National Conservation Easement Database
The National Conservation Easement Database maps conservation easements and provides a resource for understanding what resources conservation easements protect in the U.S. As of 2018, the National Conservation Easement Database included over 130,000 conservation easements on 24.7 million acres.
See also
Easement refuge
Fee tail
Freedom to roam
Land use
Malpai Borderlands
Natural landscape
Prime farmland
Sangre de Cristo Land Grant
William Ginsberg, attorney who litigated pioneering case regarding tax deductibility of conservation easements
References
External links
How a conservation easement works —Private Landowner Network
The Nature of the Conservation Easement and the Document Granting It
The Conservation Easement Handbook: The Trust for Public Land and the Land Trust Alliance
Farmland Information Center
PCC Farmland Trust
Conservation easements at the New York State Department of Environmental Conservation
Special Issue of "Law and Contemporary Problems" on Conservation Easements
Agriculture in the United States
Nature conservation in the United States
Energy law
Legal documents
Real property law
Urban planning | Conservation easement | [
"Engineering"
] | 2,858 | [
"Urban planning",
"Architecture"
] |
56,525 | https://en.wikipedia.org/wiki/Triglyceride | A triglyceride (from tri- and glyceride; also TG, triacylglycerol, TAG, or triacylglyceride) is an ester derived from glycerol and three fatty acids.
Triglycerides are the main constituents of body fat in humans and other vertebrates as well as vegetable fat.
They are also present in the blood to enable the bidirectional transference of adipose fat and blood glucose from the liver and are a major component of human skin oils.
Many types of triglycerides exist. One specific classification focuses on saturated and unsaturated types. Saturated fats have no C=C groups; unsaturated fats feature one or more C=C groups. Unsaturated fats tend to have a lower melting point than saturated analogues; as a result, they are often liquid at room temperature.
Chemical structure
The three fatty acid substituents can be the same, but they are usually different. Many triglycerides are known because many fatty acids are known. The chain lengths of the fatty acid groups vary in naturally occurring triglycerides. Those containing 16, 18, or 20 carbon atoms are defined as long-chain triglycerides, while medium-chain triglycerides contain shorter fatty acids. Animals synthesize even-numbered fatty acids, but bacteria possess the ability to synthesise odd- and branched-chain fatty acids. As a result, ruminant animal fat contains odd-numbered fatty acids, such as those with 15 carbons, due to the action of bacteria in the rumen. Many fatty acids are unsaturated; some are polyunsaturated (e.g., those derived from linoleic acid).
Most natural fats contain a complex mixture of individual triglycerides. Because of their heterogeneity, they melt over a broad range of temperatures. Cocoa butter is unusual in that it is composed of only a few triglycerides, derived from palmitic, oleic, and stearic acids in the 1-, 2-, and 3-positions of glycerol, respectively.
The simplest triglycerides are those where the three fatty acids are identical. Their names indicate the fatty acid: stearin derived from stearic acid, triolein derived from oleic acid, palmitin derived from palmitic acid, etc. These compounds can be obtained in three crystalline forms (polymorphs): α, β, and β′, the three forms differing in their melting points.
A triglyceride containing different fatty acids is known as a mixed triglyceride. These are more common in nature.
If the first and third fatty acids on the glycerol differ, then the mixed triglyceride is chiral.
Physical properties
Triglycerides are colorless, although degraded samples can appear yellowish. Stearin, a simple, saturated, symmetrical triglyceride, is a solid near room temperature, but most examples are oils. Their density is near 0.9 g/cm3.
Biosynthesis
Triglycerides are tri-esters derived from the condensation reaction of glycerol with three fatty acids. Their formation can be summarised by the following overall equation:

3 RCO2H + HOCH2CH(OH)CH2OH → RCO2CH2CH(O2CR)CH2O2CR + 3 H2O
In nature, the formation of triglycerides is not random; rather, specific fatty acids are selectively condensed with the hydroxyl functional groups of glycerol. Animal fats typically have unsaturated fatty acid residues on carbon atoms 1 and 3. Extreme examples of non-random fats are cocoa butter (mentioned above) and lard, which contains about 20% triglyceride with palmitic acid on carbon 2 and oleic acid on carbons 1 and 3. An early step in the biosynthesis is the formation of the glycerol-1-phosphate:
The three oxygen atoms in this phosphate ester are differentiated, setting the stage for regiospecific formation of triglycerides, as the diol reacts selectively with coenzyme-A derivatives of the fatty acids, RC(O)S–CoA:
The phosphate ester linkage is then hydrolysed to make way for the introduction of a third fatty acid ester:
Nomenclature
Common fat names
Fats are often named after their source, e.g., olive oil, cod liver oil, shea butter, tail fat. Some have traditional names of their own, e.g., butter, lard, ghee, and margarine. The composition of these natural fats is somewhat variable; the oleic acid component in olive oil, for example, can vary from 64% to 86%.
Chemical fatty acid names
Triglycerides are commonly named as esters of fatty acids, as in glyceryl 1,2-dioleate 3-palmitate, the name for a brood pheromone of the honey bee. Where the fatty acid residues in a triglyceride are all the same, names like olein (for glyceryl trioleate) and palmitin (for glyceryl tripalmitate) are common.
IUPAC
In the International Union of Pure and Applied Chemistry's (IUPAC's) general chemical nomenclature for organic compounds, any organic structure can be named by starting from its corresponding hydrocarbon and then specifying differences so as to describe its structure completely. For fatty acids, for example, the position and orientation of carbon-carbon double bonds are specified counting from the carboxyl functional group. Thus, oleic acid is formally named (9Z)-octadec-9-enoic acid, which describes that the compound has:
an 18 carbon chain ("octadec-") with the carbon of the carboxyl ("-oic acid") given the number 1
all carbon-carbon bonds are single except for the double bond that joins carbon 9 ("9-en") to carbon 10
the chain connects to each of the carbons of the double bond on the same side (hence cis, or "(9Z)"; the "Z" is an abbreviation for the German word zusammen, meaning "together").
IUPAC nomenclature can also handle branched chains and derivatives where hydrogen atoms are replaced by other chemical groups. Triglycerides take formal IUPAC names according to the rule governing naming of esters. For example, the formal name propane-1,2,3-triyl 1,2-bis((9Z)-octadec-9-enoate) 3-(hexadecanoate) applies to the pheromone informally named as glyceryl 1,2-dioleate-3-palmitate, and also known by other common names including 1,2-dioleoyl-3-palmitoylglycerol, glycerol dioleate palmitate, and 3-palmito-1,2-diolein.
Fatty acid code
A notation specific for fatty acids with unbranched chain, that is as precise as the IUPAC one but easier to parse, is a code of the form "{N}:{D} cis-{CCC} trans-{TTT}", where {N} is the number of carbons (including the carboxyl one), {D} is the number of double bonds, {CCC} is a list of the positions of the cis double bonds, and {TTT} is a list of the positions of the trans bonds. Either or both cis and trans lists and their labels are omitted if there are no multiple bonds with that geometry. For example, the codes for stearic, oleic, elaidic, and vaccenic acids are "18:0", "18:1 cis-9", "18:1 trans-9", and "18:1 trans-11", respectively. Catalpic acid, (9E,11E,13Z)-octadeca-9,11,13-trienoic acid according to IUPAC nomenclature, has the code "18:3 cis-13 trans-9,11".
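A small parser for this code (written here for illustration, not a standard library) makes the notation's structure explicit:

```python
import re

def parse_fa_code(code):
    """Parse a fatty acid code such as '18:3 cis-13 trans-9,11' into
    (carbons, double_bonds, cis_positions, trans_positions)."""
    carbons, bonds = map(int, re.match(r"(\d+):(\d+)", code).groups())
    positions = {"cis": [], "trans": []}
    for geometry, nums in re.findall(r"(cis|trans)-([\d,]+)", code):
        positions[geometry] = [int(n) for n in nums.split(",")]
    assert bonds == len(positions["cis"]) + len(positions["trans"])
    return carbons, bonds, positions["cis"], positions["trans"]

print(parse_fa_code("18:3 cis-13 trans-9,11"))  # catalpic acid -> (18, 3, [13], [9, 11])
print(parse_fa_code("18:1 trans-11"))           # vaccenic acid -> (18, 1, [], [11])
```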
Saturated and unsaturated fats
For human nutrition, an important classification of fats is based on the number and position of double bonds in the constituent fatty acids. Saturated fat has a predominance of saturated fatty acids, without any double bonds, while unsaturated fat has predominantly unsaturated acids with double bonds. (The names refer to the fact that each double bond means two fewer hydrogen atoms in the chemical formula. Thus, a saturated fatty acid, having no double bonds, has the maximum number of hydrogen atoms for a given number of carbon atoms; that is, it is "saturated" with hydrogen atoms.)
Unsaturated fatty acids are further classified into monounsaturated (MUFAs), with a single double bond, and polyunsaturated (PUFAs), with two or more. Natural fats usually contain several different saturated and unsaturated acids, even on the same molecule. For example, in most vegetable oils, the saturated palmitic (C16:0) and stearic (C18:0) acid residues are usually attached to positions 1 and 3 (sn1 and sn3) of the glycerol hub, whereas the middle position (sn2) is usually occupied by an unsaturated one, such as oleic (C18:1, ω–9) or linoleic (C18:2, ω–6).
Saturated fats generally have a higher melting point than unsaturated ones with the same molecular weight, and thus are more likely to be solid at room temperature. For example, the animal fats tallow and lard are high in saturated fatty acid content and are solids. Olive and linseed oils on the other hand are unsaturated and liquid. Unsaturated fats are prone to oxidation by air, which causes them to become rancid and inedible.
The double bonds in unsaturated fats can be converted into single bonds by reaction with hydrogen effected by a catalyst. This process, called hydrogenation, is used to turn vegetable oils into solid or semisolid vegetable fats like margarine, which can substitute for tallow and butter and (unlike unsaturated fats) resist rancidification. Under some conditions, hydrogenation can create some unwanted trans acids from cis acids.
In cellular metabolism, unsaturated fat molecules yield slightly less energy (i.e., fewer calories) than an equivalent amount of saturated fat. The heats of combustion of saturated, mono-, di-, and tri-unsaturated 18-carbon fatty acid esters have been measured as 2859, 2828, 2794, and 2750 kcal/mol, respectively; or, on a weight basis, 10.75, 10.71, 10.66, and 10.58 kcal/g, a decrease of about 0.6% for each additional double bond.
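The quoted per-bond decrease can be re-derived from the weight-basis figures; a quick check of the arithmetic:

```python
# Heats of combustion (kcal/g) for 18-carbon fatty acid esters with
# 0, 1, 2, and 3 double bonds, as quoted above.
heats = [10.75, 10.71, 10.66, 10.58]

for a, b in zip(heats, heats[1:]):
    print(f"{a} -> {b}: {100 * (a - b) / a:.2f}% lower")
# Steps of roughly 0.4-0.8% per additional double bond, consistent with
# the "about 0.6%" figure quoted in the text.
```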
The greater the degree of unsaturation in a fatty acid (i.e., the more double bonds in the fatty acid) the more vulnerable it is to lipid peroxidation (rancidity). Antioxidants can protect unsaturated fat from lipid peroxidation.
Commercial applications
While it is the nutritional aspects of polyunsaturated fatty acids that are generally of greatest interest, these materials also have non-food applications.
Linseed oil and related oils are important components of useful products used in oil paints and related coatings. Linseed oil is rich in di- and tri-unsaturated fatty acid components, which tend to harden in the presence of oxygen. This heat-producing hardening process is peculiar to these so-called drying oils. It is caused by a polymerization process that begins with oxygen molecules attacking the carbon backbone. Aside from linseed oil, other oils exhibit drying properties and are used in more specialized applications, including tung, poppyseed, perilla, and walnut oils. All "polymerize" on exposure to oxygen to form solid films, useful in paints and varnishes.
Triglycerides can also be split into methyl esters of the constituent fatty acids via transesterification with methanol, summarised as:

C3H5(O2CR)3 + 3 CH3OH → 3 RCO2CH3 + C3H5(OH)3
The resulting fatty acid methyl esters can be used as fuel in diesel engines, hence their name biodiesel.
Staining
Staining for fatty acids, triglycerides, lipoproteins, and other lipids is done through the use of lysochromes (fat-soluble dyes). These dyes allow a fat of interest to be identified by staining the material a specific color. Some examples: Sudan IV, Oil Red O, and Sudan Black B.
Interactive pathway map
See also
Diglyceride acyltransferase, an enzyme that produces triglycerides
Glycerol-3-phosphate acyltransferases, enzymes involved in early step in biosynthesis of triglycerides
Phosphatidic acids, playing a role in biosynthesis of triglycerides
Medium-chain triglycerides
Lipid profile
Lipids
Vertical auto profile
Hypertriglyceridemia, the presence of high amounts of triglycerides in the blood.
References
External links
Lowering Triglycerides (EMedicineHealth.com; October 2020)
Lipid disorders
Esters | Triglyceride | [
"Chemistry"
] | 2,833 | [
"Organic compounds",
"Esters",
"Functional groups"
] |
56,552 | https://en.wikipedia.org/wiki/Jet%20lag | Jet lag is a temporary physiological condition that occurs when a person's circadian rhythm is out of sync with the time zone they are in, and is a typical result from travelling rapidly across multiple time zones (east–west or west–east). For example, someone travelling from New York to London, i.e. from west to east, feels as if the time were five hours earlier than local time, and someone travelling from London to New York, i.e. from east to west, feels as if the time were five hours later than local time. The phase shift when travelling from east to west is referred to as phase-delay of the circadian cycle, whereas going west to east is phase-advance of the cycle. Most travellers find that it is harder to adjust time zones when travelling east. Jet lag was previously classified as a circadian rhythm sleep disorder.
The condition may last several days before a traveller becomes fully adjusted to a new time zone; it takes on average one day per time zone crossed to reach circadian reentrainment. Jet lag is especially an issue for airline pilots, aircraft crew, and frequent travellers. Airlines have regulations aimed at combating pilot fatigue caused by jet lag.
The term jet lag came into use after the arrival of jet aircraft, because before then it was uncommon to travel far and fast enough to cause the condition.
Discovery
According to a 1969 study by the Federal Aviation Administration, aviator Wiley Post was the first to write about the effects of flying across time zones in his 1931 co-authored book, Around the World in Eight Days.
Signs and symptoms
The symptoms of jet lag can be quite varied, depending on the amount of time zone alteration, time of day, and individual differences. Sleep disturbance occurs, with poor sleep upon arrival or sleep disruptions such as trouble falling asleep (when flying east), early awakening (when flying west), and trouble remaining asleep. Cognitive effects include poorer performance on mental tasks and concentration. Other symptoms include dizziness, nausea, insomnia, confusion, anxiety, increased fatigue, headaches, and irritability, as well as problems with digestion, such as indigestion, changes in the frequency and consistency of bowel movements, and reduced appetite. The symptoms are caused by a circadian rhythm that is out of sync with the day–night cycle of the destination, as well as the possibility of internal desynchronisation. Jet lag has been measured with simple analogue scales, but a study has shown that these are relatively blunt for assessing all the problems associated with jet lag. The Liverpool Jet Lag Questionnaire was developed to measure all the symptoms of jet lag at several times of day, and has been used to assess jet lag in athletes.
Jet lag may require a change of three time zones or more to occur, though some individuals can be affected by as little as a single time zone or the single-hour shift to or from daylight saving time. Symptoms and consequences of jet lag can be a significant concern for athletes travelling east or west to competitions, as performance is often dependent on a combination of physical and mental characteristics that are affected by jet lag. This is a common concern at international sporting events like the Olympics and FIFA World Cup. However, many athletes arrive at least 2–4 weeks ahead of these events to allow time to adjust.
Travel fatigue
Travel fatigue is general fatigue, disorientation, and headache caused by a disruption in routine, time spent in a cramped space with little chance to move around, a low-oxygen environment, and dehydration caused by dry air and limited food and drink. It does not necessarily involve the shift in circadian rhythms that cause jet lag. Travel fatigue can occur without crossing time zones, and it often disappears after one day accompanied by a night of good quality sleep.
Cause
Jet lag is a chronobiological problem, similar to issues often induced by shift work and circadian rhythm sleep disorders. When travelling across a number of time zones, a person's body clock (circadian rhythm) will be out of synchronisation with the destination time, as it experiences daylight and darkness contrary to the rhythms to which it was accustomed. The body's natural pattern is disturbed, as the rhythms that dictate times for eating, sleeping, hormone regulation, body temperature variation, and other functions no longer correspond to the environment, nor to each other in some cases. To the degree that the body cannot immediately realign these rhythms, it is jet lagged. (Cheng, Maria, "How to avoid the worst of jet lag and maximize your travel time", Associated Press, August 21, 2024)
The speed at which the body adjusts to a new rhythm depends on the individual as well as the direction of travel; some people may require several days to adjust to a new time zone, while others experience little disruption.
Crossing the International Date Line does not in itself contribute to jet lag, as the guide for calculating jet lag is the number of time zones crossed, with a maximum possible time difference of plus or minus 12 hours. If the absolute time difference between two locations is greater than 12 hours, one must subtract 24 from or add 24 to that number. For example, the time zone UTC+14 will be at the same time of day as UTC−10, though the former is one day ahead of the latter.
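As a rough illustration of this arithmetic, the following Python sketch computes the effective circadian shift between two UTC offsets, wrapping any difference greater than 12 hours as described above; the function name and example values are illustrative only.

def effective_shift(origin_utc_offset, destination_utc_offset):
    # Effective circadian shift in hours, in the range -12..+12.
    # Positive: destination clock is ahead (eastward travel, phase advance).
    # Negative: destination clock is behind (westward travel, phase delay).
    diff = destination_utc_offset - origin_utc_offset
    if diff > 12:        # wrap, since crossing the International
        diff -= 24       # Date Line does not itself add to jet lag
    elif diff < -12:
        diff += 24
    return diff

print(effective_shift(-10, 14))  # 0: UTC+14 and UTC-10 share a time of day
print(effective_shift(0, -5))    # -5: e.g. London to New York, a phase delay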
Jet lag is linked only to the distance travelled along the east–west axis. A ten-hour flight between Europe and southern Africa does not cause jet lag, as the direction of travel is primarily north–south. A four-hour flight between Miami, Florida, and Phoenix, Arizona, in the United States may result in jet lag, as the direction of travel is primarily east–west.
Double desynchronisation
There are two separate processes related to biological timing: circadian oscillators and homeostasis. The circadian system is located in the suprachiasmatic nucleus (SCN) in the hypothalamus of the brain. The other process is homeostatic sleep propensity, which is a function of the amount of time elapsed since the last adequate sleep episode.
The human body has a master clock in the SCN and peripheral oscillators in tissues. The SCN's role is to send signals to the peripheral oscillators, which synchronise them for physiological functions. The SCN responds to light information sent from the retina. It is hypothesised that peripheral oscillators respond to internal signals such as hormones, food intake, and "nervous stimuli".
The implication of independent internal clocks may explain some of the symptoms of jet lag. People who travel across several time zones can, within a few days, adapt their sleep–wake cycles with light from the environment. However, their skeletal muscles, liver, lungs, and other organs will adapt at different rates. This internal biological desynchronisation is exacerbated as the body is not in sync with the environment: a "double desynchronisation", which has implications for health and mood.
Delayed sleep phase disorder
Delayed sleep phase disorder is a medical disorder characterised by delayed sleeping time and a proportionately delayed waking time due to a phase delay in the internal biological master clock. Specific genotypes underlie this disorder. If allowed to sleep as dictated by their endogenous clock, these individuals will not have any ill effects as a result of their phase-shifted sleeping time.
Management
Light exposure
Light is the strongest stimulus, or zeitgeber, for realigning a person's circadian cycle, and the key to quick adaptation is therefore timed light exposure based on the traveller's sleep pattern, chronotype, and plans.
Timed light exposure can be effective to help people match their circadian rhythms with the expected cycle at their destination; it requires strict adherence to timing. Light therapy is a popular method used by professional athletes to reduce jet lag. Timed correctly, the light may contribute to an advance or delay of the circadian phase to match the destination.
The US Centers for Disease Control and Prevention (CDC) recommends mobile apps for the correct timing of light exposure and avoidance, when to use caffeine, and when to sleep.
Melatonin administration
In addition to timed light exposure, the right type and dose of melatonin, at the right time, can help travellers shift faster and sleep better as they are transitioning between time zones. There are issues regarding the appropriate timing of melatonin use, in addition to the legality of the substance in certain countries. For athletes, anti-doping agencies may prohibit or limit its use.
Melatonin can be considered to be a darkness signal, with effects on circadian timing that are the opposite of the effects of exposure to light. Melatonin receptors are situated on the suprachiasmatic nucleus, which is the anatomical site of the circadian clock. The results of a few field studies of melatonin administration, monitoring circadian phase, have provided evidence for a correlation between the reduction of jet lag symptoms and the accelerated realignment of the circadian clock.
Short duration trips
In the case of short duration trips, jet lag may be minimised by maintaining a sleep-wake schedule based on the originating time zone after arriving at the destination, but this strategy is often impractical in regard to desired social activities or work obligations. Shifting one's sleep schedule before departure by 1–2 hours to match the destination time zone may also shorten the duration of jet lag. Symptoms can be further reduced through a combination of artificial exposure to light and rescheduling, as these have been shown to augment phase-shifting.
Pharmacotherapy
The short-term use of hypnotic medication has shown efficacy in reducing insomnia related to jet lag. In one study, zolpidem improved sleep quality and reduced awakenings for people travelling across five to nine time zones. The potential adverse effects of hypnotic agents, like amnesia and confusion, have led some doctors to advise patients to test such medications prior to using them for treating jet lag. In several reported cases, triazolam taken to promote sleep during a flight produced dramatic global amnesia.
Mental health implications
Jet lag may affect the mental health of vulnerable individuals. When travelling across time zones, there is a "phase-shift of body temperature, rapid-eye-movement sleep, melatonin production, and other circadian rhythms". A 2002 study found that relapse of bipolar and psychotic disorders occurred more frequently when seven or more time zones had been crossed in the past week than when three or fewer had been crossed. Although significant circadian rhythm disruption has been documented as affecting individuals with bipolar disorder, an Australian team studied suicide statistics from 1971 to 2001 to determine whether the one-hour shifts involved in daylight saving time had an effect. They found increased incidence of male suicide after the commencement of daylight saving time but not after returning to standard time.
See also
Sleep deprivation
Notes
References
Aviation medicine
Circadian rhythm
Sleep disorders
Time zones | Jet lag | [
"Biology"
] | 2,279 | [
"Behavior",
"Sleep",
"Sleep disorders",
"Circadian rhythm"
] |
56,556 | https://en.wikipedia.org/wiki/Ketone%20bodies | Ketone bodies are water-soluble molecules or compounds that contain the ketone groups produced from fatty acids by the liver (ketogenesis). Ketone bodies are readily transported into tissues outside the liver, where they are converted into acetyl-CoA (acetyl-Coenzyme A), which then enters the citric acid cycle (Krebs cycle) and is oxidized for energy. These liver-derived ketone groups include acetoacetic acid (acetoacetate), beta-hydroxybutyrate, and acetone, a spontaneous breakdown product of acetoacetate.
Ketone bodies are produced by the liver during periods of caloric restriction in various scenarios: low food intake (fasting), carbohydrate-restrictive diets, starvation, prolonged intense exercise, alcoholism, or untreated (or inadequately treated) type 1 diabetes mellitus. Ketone bodies are produced in liver cells by the breakdown of fatty acids. They are released into the blood after glycogen stores in the liver have been depleted. (Glycogen stores typically are depleted within the first 24 hours of fasting.) Ketone bodies are also produced in glial cells under periods of food restriction to sustain memory formation.
When two acetyl-CoA molecules lose their -CoAs (or coenzyme A groups), they can form a (covalent) dimer called acetoacetate. β-hydroxybutyrate is a reduced form of acetoacetate, in which the ketone group is converted into an alcohol (or hydroxyl) group. Both are 4-carbon molecules that can readily be converted back into acetyl-CoA by most tissues of the body, with the notable exception of the liver. Acetone is the decarboxylated form of acetoacetate which cannot be converted back into acetyl-CoA except via detoxification in the liver where it is converted into lactic acid, which can, in turn, be oxidized into pyruvic acid, and only then into acetyl-CoA.
Ketone bodies have a characteristic smell, which can easily be detected in the breath of persons in ketosis and ketoacidosis. It is often described as fruity or like nail polish remover (which usually contains acetone or ethyl acetate).
Apart from the three endogenous ketone bodies, other ketone bodies like β-ketopentanoate and β-hydroxypentanoate may be created as a result of the metabolism of synthetic triglycerides, such as triheptanoin.
Production
Fats stored in adipose tissue are released from the fat cells into the blood as free fatty acids and glycerol when insulin levels are low and glucagon and epinephrine levels in the blood are high. This occurs between meals, during fasting, starvation and strenuous exercise, when blood glucose levels are likely to fall. Fatty acids are very high energy fuels and are taken up by all metabolizing cells that have mitochondria. This is because fatty acids can only be metabolized in the mitochondria. Red blood cells do not contain mitochondria and are therefore entirely dependent on anaerobic glycolysis for their energy requirements. In all other tissues, the fatty acids that enter the metabolizing cells are combined with coenzyme A to form acyl-CoA chains. These are transferred into the mitochondria of the cells, where they are broken down into acetyl-CoA units by a sequence of reactions known as β-oxidation.
The acetyl-CoA produced by β-oxidation enters the citric acid cycle in the mitochondrion by combining with oxaloacetate to form citrate. This results in the complete combustion of the acetyl group of acetyl-CoA to CO2 and water. The energy released in this process is captured in the form of 1 GTP and 9 ATP molecules per acetyl group (or acetic acid molecule) oxidized. This is the fate of acetyl-CoA wherever β-oxidation of fatty acids occurs, except under certain circumstances in the liver. In the liver oxaloacetate is wholly or partially diverted into the gluconeogenic pathway during fasting, starvation, a low carbohydrate diet, prolonged strenuous exercise, and in uncontrolled type 1 diabetes mellitus. Under these circumstances oxaloacetate is hydrogenated to malate which is then removed from the mitochondrion to be converted into glucose in the cytoplasm of the liver cells, from where the glucose is released into the blood. In the liver, therefore, oxaloacetate is unavailable for condensation with acetyl-CoA when significant gluconeogenesis has been stimulated by low (or absent) insulin and high glucagon concentrations in the blood. Under these circumstances, acetyl-CoA is diverted to the formation of acetoacetate and beta-hydroxybutyrate. Acetoacetate, beta-hydroxybutyrate, and their spontaneous breakdown product, acetone, are known as ketone bodies. The ketone bodies are released by the liver into the blood. All cells with mitochondria can take ketone bodies up from the blood and reconvert them into acetyl-CoA, which can then be used as fuel in their citric acid cycles, as no other tissue can divert its oxaloacetate into the gluconeogenic pathway in the way that the liver does. Unlike free fatty acids, ketone bodies can cross the blood–brain barrier and are therefore available as fuel for the cells of the central nervous system, acting as a substitute for glucose, on which these cells normally survive. The occurrence of high levels of ketone bodies in the blood during starvation, a low carbohydrate diet and prolonged heavy exercise can lead to ketosis and, in its extreme form in uncontrolled type 1 diabetes mellitus, to ketoacidosis.
Acetoacetate has a highly characteristic smell, detectable in the breath and urine during ketosis by the people who can perceive it. On the other hand, most people can smell acetone, whose "sweet & fruity" odor also characterizes the breath of persons in ketosis or, especially, ketoacidosis.
Fuel utilization across different organs
Ketone bodies can be used as fuel in the heart, brain and muscle, but not the liver. They yield 2 guanosine triphosphate (GTP) and 22 adenosine triphosphate (ATP) molecules per acetoacetate molecule when oxidized in the mitochondria. Ketone bodies are transported from the liver to other tissues, where acetoacetate and β-hydroxybutyrate can be reconverted to acetyl-CoA to produce reducing equivalents (NADH and FADH2), via the citric acid cycle. Though it is the source of ketone bodies, the liver cannot use them for energy because it lacks the enzyme thiophorase (β-ketoacyl-CoA transferase). Acetone is taken up by the liver in low concentrations and undergoes detoxification through the methylglyoxal pathway which ends with lactate. Acetone in high concentrations, as can occur with prolonged fasting or a ketogenic diet, is absorbed by cells outside the liver and metabolized through a different pathway via propylene glycol. Though the pathway follows a different series of steps requiring ATP, propylene glycol can eventually be turned into pyruvate.
Heart
The heart preferentially uses fatty acids as fuel under normal physiologic conditions. However, under ketotic conditions, the heart can effectively use ketone bodies for this purpose.
Brain
For several decades the liver has been considered as the main supplier of ketone bodies to fuel brain energy metabolism. However, recent evidence has demonstrated that glial cells can fuel neurons with locally synthesized ketone bodies to sustain memory formation upon food restriction.
The brain gets a portion of its fuel requirements from ketone bodies when glucose is less available than normal. In the event of low glucose concentration in the blood, most other tissues have alternative fuel sources besides ketone bodies and glucose (such as fatty acids), but studies have indicated that the brain has an obligatory requirement for some glucose. After strict fasting for 3 days, the brain gets 25% of its energy from ketone bodies. After about 24 days, ketone bodies become the major fuel of the brain, making up to two-thirds of brain fuel consumption. Many studies suggest that human brain cells can survive with little or no glucose, but proving the point is ethically questionable. During the initial stages of ketosis, the brain does not burn ketones, since they are an important substrate for lipid synthesis in the brain. Furthermore, ketones produced from omega-3 fatty acids may reduce cognitive deterioration in old age.
Ketogenesis helped fuel the enlargement of the human brain during its evolution. It was previously proposed that ketogenesis is key to the evolution and viability of bigger brains in general. However, the loss of HMGCS2 (and consequently this ability) in three large-brained mammalian lineages (cetaceans, elephants–mastodons, Old World fruit bats) shows otherwise. Out of the three lineages, only fruit bats have the expected sensitivity to starvation; the other two have found alternative ways to fuel the body during starvation.
Ketosis and ketoacidosis
In normal individuals, there is a constant production of ketone bodies by the liver and their utilization by extrahepatic tissues. The concentration of ketone bodies in blood is normally maintained at a low level. Their excretion in urine is very low and undetectable by routine urine tests (Rothera's test).
When the rate of synthesis of ketone bodies exceeds the rate of utilization, their concentration in blood increases; this is known as ketonemia. This is followed by ketonuria – excretion of ketone bodies in urine. The overall picture of ketonemia and ketonuria is commonly referred to as ketosis. The smell of acetoacetate and/or acetone in breath is a common feature in ketosis.
When a type 1 diabetic suffers acute biological stress (infection, heart attack, or physical trauma) or fails to administer enough insulin, they may enter the pathological state of diabetic ketoacidosis. Under these circumstances, the low or absent insulin levels in the blood, combined with the inappropriately high glucagon concentrations, induce the liver to produce glucose at an inappropriately increased rate, causing acetyl-CoA resulting from the beta-oxidation of fatty acids, to be converted into ketone bodies. The resulting very high levels of ketone bodies lower the pH of the blood plasma, which reflexively triggers the kidneys to excrete urine with very high acid levels. The high levels of glucose and ketones in the blood also spill passively into the urine (due to the inability of the renal tubules to reabsorb glucose and ketones from the tubular fluid, being overwhelmed by the high volumes of these substances being filtered into the tubular fluid). The resulting osmotic diuresis of glucose causes the removal of water and electrolytes from the blood resulting in potentially fatal dehydration.
Individuals who follow a low-carbohydrate diet will also develop ketosis. This induced ketosis is sometimes called nutritional ketosis, but the ketone body concentrations involved are far lower than those seen in pathological ketoacidosis.
The process of ketosis has been studied for its effects in improving the cognitive symptoms of neurodegenerative diseases including Alzheimer's disease. Clinical trials have also looked to ketosis in children for Angelman syndrome.
See also
Fatty acid metabolism
References
External links
Diabetic Ketoacidosis
Fat metabolism at unisanet.unisa.edu.au
Antidepressants
Histone deacetylase inhibitors
Lipid metabolism | Ketone bodies | [
"Chemistry"
] | 2,557 | [
"Lipid biochemistry",
"Lipid metabolism",
"Metabolism"
] |
56,557 | https://en.wikipedia.org/wiki/Blood%20glucose%20monitoring | Blood glucose monitoring is the use of a glucose meter for testing the concentration of glucose in the blood (glycemia). Particularly important in diabetes management, a blood glucose test is typically performed by piercing the skin (typically, via fingerstick) to draw blood, then applying the blood to a chemically active disposable 'test-strip'. The other main option is continuous glucose monitoring (CGM). Different manufacturers use different technology, but most systems measure an electrical characteristic and use this to determine the glucose level in the blood. Skin-prick methods measure capillary blood glucose (i.e., the level found in capillary blood), whereas CGM correlates interstitial fluid glucose level to blood glucose level. Measurements may occur after fasting or at random nonfasting intervals (random glucose tests), each of which informs diagnosis or monitoring in different ways.
Healthcare professionals advise patients with diabetes mellitus on the appropriate monitoring regimen for their condition. Most people with type 2 diabetes test at least once per day. The Mayo Clinic generally recommends that diabetics who use insulin (all type 1 diabetics and many type 2 diabetics) test their blood sugar more often (4–8 times per day for type 1 diabetics, 2 or more times per day for type 2 diabetics), both to assess the effectiveness of their prior insulin dose and to help determine their next insulin dose.
Purpose
Blood glucose monitoring reveals individual patterns of blood glucose changes, and helps in the planning of meals, activities, and at what time of day to take medications.
Also, testing allows for a quick response to high blood sugar (hyperglycemia) or low blood sugar (hypoglycemia). This might include diet adjustments, exercise, and insulin (as instructed by the health care provider).
Blood glucose meters
A blood glucose meter is an electronic device for measuring the blood glucose level. A relatively small drop of blood is placed on a disposable test strip which interfaces with a digital meter. Within several seconds, the level of blood glucose will be shown on the digital display. Needing only a small drop of blood for the meter means that the time and effort required for testing are reduced and the compliance of diabetic people to their testing regimens is improved significantly. Blood glucose meters report results in either mg/dL or mmol/L, including estimated average glucose (eAG), and may also estimate A1C levels. These measurements can aid in classifying blood glucose levels as normal, prediabetic, or diabetic, facilitating effective diabetes management for users. While some models offer interpretative features that indicate the health status based on these results, not all meters provide this functionality, focusing instead on providing raw glucose measurements. Users of blood glucose meters without interpretative features can utilize online calculators to determine their blood glucose status based on measured values. The cost of using blood glucose meters is believed to be a cost-benefit relative to the avoided medical costs of the complications of diabetes.
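As a minimal sketch of the unit handling described above, the following Python converts readings between mg/dL and mmol/L using the conventional factor derived from glucose's molar mass (about 180 g/mol), and computes an estimated average glucose from an A1C value using a widely cited linear regression; the function names are illustrative and do not reflect any particular meter's firmware.

MGDL_PER_MMOLL = 18.016  # molar mass ~180.16 g/mol divided by 10 dL per L

def mgdl_to_mmoll(mgdl):
    return mgdl / MGDL_PER_MMOLL

def mmoll_to_mgdl(mmoll):
    return mmoll * MGDL_PER_MMOLL

def eag_mgdl_from_a1c(a1c_percent):
    # Estimated average glucose from A1C, using the commonly cited
    # regression eAG (mg/dL) = 28.7 x A1C - 46.7.
    return 28.7 * a1c_percent - 46.7

print(round(mgdl_to_mmoll(100), 1))   # 5.6 mmol/L
print(round(eag_mgdl_from_a1c(7.0)))  # about 154 mg/dL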
Recent advances include:
alternative site testing, the use of blood drops from places other than the fingertips, usually the palm or forearm. This alternative site testing uses the same test strips and meter, is practically pain-free, and gives the fingertips a needed break if they become sore. The disadvantage of this technique is that there is usually less blood flow to alternative sites, which prevents the reading from being accurate when the blood sugar level is changing.
no coding systems. Older systems required 'coding' of the strips to the meter. This carried a risk of 'miscoding', which can lead to inaccurate results. Two approaches have resulted in systems that no longer require coding. Some systems are 'autocoded', where technology is used to code each strip to the meter. And some are manufactured to a 'single code', thereby avoiding the risk of miscoding.
multi-test systems. Some systems use a cartridge or a disc containing multiple test strips. This has the advantage that the user doesn't have to load individual strips each time, which is convenient and can enable quicker testing.
downloadable meters. Most newer systems come with software that allows the user to download meter results to a computer. This information can then be used, together with health care professional guidance, to enhance and improve diabetes management. The meters usually require a connection cable, unless they are designed to work wirelessly with an insulin pump, are designed to plug directly into the computer, or use a radio (Bluetooth, for example) or infrared connection.
Continuous glucose monitoring
A continuous glucose monitor determines glucose levels on a continuous basis (every few minutes). A typical system consists of:
a disposable glucose sensor placed just under the skin, which is worn for a few days until replacement
a link from the sensor to a non-implanted transmitter which communicates to a radio receiver
an electronic receiver worn like a pager (or insulin pump) that displays glucose levels with nearly continuous updates, as well as monitors rising and falling trends.
Continuous glucose monitors measure the concentration of glucose in a sample of interstitial fluid. Shortcomings of CGM systems due to this fact are:
continuous systems must be calibrated with a traditional blood glucose measurement (using current technology) and therefore require both the CGM system and occasional "fingerstick"
glucose levels in interstitial fluid lag behind blood glucose values
Patients, therefore, require traditional fingerstick measurements for calibration (typically twice per day) and are often advised to use fingerstick measurements to confirm hypo- or hyperglycemia before taking corrective action.
The lag time discussed above has been reported to be about 5 minutes. Anecdotally, some users of the various systems report lag times of up to 10–15 minutes. This lag time is insignificant when blood sugar levels are relatively consistent. However, blood sugar levels, when changing rapidly, may read in the normal range on a CGM system while in reality the patient is already experiencing symptoms of an out-of-range blood glucose value and may require treatment. Patients using CGM are therefore advised to consider both the absolute value of the blood glucose level given by the system as well as any trend in the blood glucose levels. For example, a patient using CGM with a blood glucose of 100 mg/dl on their CGM system might take no action if their blood glucose has been consistent for several readings, while a patient with the same blood glucose level but whose blood glucose has been dropping steeply in a short period of time might be advised to perform a fingerstick test to check for hypoglycemia.
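A decision rule of the kind described above, combining the absolute CGM value with its recent trend, might be sketched in Python as follows; the thresholds and function are purely illustrative assumptions, not clinical guidance or any vendor's algorithm.

def should_confirm_with_fingerstick(readings_mgdl, low=70, high=180,
                                    steep_drop_per_5min=10):
    # Illustrative only: flag when a fingerstick confirmation may be wise.
    # readings_mgdl: recent CGM values, oldest first, ~5 minutes apart.
    current = readings_mgdl[-1]
    trend = readings_mgdl[-1] - readings_mgdl[-2]  # change over last interval
    if current < low or current > high:
        return True                   # out of range on its face
    if trend <= -steep_drop_per_5min:
        return True                   # normal now, but falling steeply
    return False

print(should_confirm_with_fingerstick([130, 115, 100]))  # True: rapid fall
print(should_confirm_with_fingerstick([102, 101, 100]))  # False: stable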
Continuous monitoring allows examination of how the blood glucose level reacts to insulin, exercise, food, and other factors. The additional data can be useful for setting correct insulin dosing ratios for food intake and correction of hyperglycemia. Monitoring during periods when blood glucose levels are not typically checked (e.g. overnight) can help to identify problems in insulin dosing (such as basal levels for insulin pump users or long-acting insulin levels for patients taking injections). Monitors may also be equipped with alarms to alert patients of hyperglycemia or hypoglycemia so that a patient can take corrective action(s) (after fingerstick testing, if necessary) even in cases where they do not feel symptoms of either condition. While the technology has its limitations, studies have demonstrated that patients with continuous sensors experience a smaller number of hyperglycemic and hypoglycemic events, a reduction in their glycated hemoglobin levels and a decrease in glycemic variability. Compared to intermittent testing, it is likely to help reduce hypertensive complications during pregnancy.
Continuous blood glucose monitoring is not automatically covered by health insurance in the United States in the same way that most other diabetic supplies are covered (e.g. standard glucose testing supplies, insulin, and insulin pumps). However, an increasing number of insurance companies do cover continuous glucose monitoring supplies (both the receiver and disposable sensors) on a case-by-case basis if the patient and doctor show a specific need. The lack of insurance coverage is exacerbated by the fact that disposable sensors must be frequently replaced. Some sensors have been U.S. Food and Drug Administration (FDA) approved for 7- and 3-day use, (although some patients wear sensors for longer than the recommended period) and the receiving meters likewise have finite lifetimes (less than 2 years and as little as 6 months). This is one factor in the slow uptake in the use of sensors that have been marketed in the United States.
The principles, history and recent developments of operation of electrochemical glucose biosensors are discussed in a chemical review by Joseph Wang.
Glucose sensing bio-implants
Investigations on the use of test strips have shown that the required self-injury acts as a psychological barrier restraining the patients from sufficient glucose control. As a result, secondary diseases are caused by excessive glucose levels. A significant improvement of diabetes therapy might be achieved with an implantable sensor that would continuously monitor blood sugar levels within the body and transmit the measured data outside. The burden of regular blood testing would be taken from the patient, who would instead follow the course of their glucose levels on an intelligent device like a laptop or a smartphone.
Glucose concentrations do not necessarily have to be measured in blood vessels, but may also be determined in the interstitial fluid, where the same levels prevail – with a time lag of a few minutes – due to its connection with the capillary system. However, the enzymatic glucose detection scheme used in single-use test strips is not directly suitable for implants. One main problem is caused by the varying supply of oxygen, by which glucose is converted to glucono lactone and H2O2 by glucose oxidase. Since the implantation of a sensor into the body is accompanied by growth of encapsulation tissue, the diffusion of oxygen to the reaction zone is continuously diminished. This decreasing oxygen availability causes the sensor reading to drift, requiring frequent re-calibration using finger-sticks and test strips.
One approach to achieving long-term glucose sensing is to measure and compensate for the changing local oxygen concentration. Other approaches replace the troublesome glucose oxidase reaction with a reversible sensing reaction, known as an affinity assay. This scheme was originally put forward by Schultz & Sims in 1978. A number of different affinity assays have been investigated, with fluorescent assays proving most common. MEMS technology has recently allowed for smaller and more convenient alternatives to fluorescent detection, via measurement of viscosity. Investigation of affinity-based sensors has shown that encapsulation by body tissue does not cause a drift of the sensor signal, but only a time lag of the signal compared to the direct measurement in blood. A new implantable continuous glucose monitor based on affinity principles and fluorescence detection is the Eversense device manufactured by Senseonics Inc. This device has been approved by the FDA for 90 day implantation.
Non-invasive technologies
Some new technologies to monitor blood glucose levels will not require access to blood to read the glucose level. Non-invasive technologies include microwave/RF sensing, near IR detection, ultrasound and dielectric spectroscopy. These may free the person with diabetes from finger sticks to supply the drop of blood for blood glucose analysis.
Most of the non-invasive methods under development are continuous glucose monitoring methods and offer the advantage of providing additional information to the subject between the conventional finger stick, blood glucose measurements, and overtime periods where no finger stick measurements are available (i.e. while the subject is sleeping).
Effectiveness
For patients with diabetes mellitus type 2, the importance of monitoring and the optimal frequency of monitoring are not clear. A 2011 study found no evidence that blood glucose monitoring leads to better patient outcomes in actual practice. Randomized controlled trials found that self-monitoring of blood glucose did not improve glycated hemoglobin (HbA1c) among "reasonably well controlled non-insulin treated patients with type 2 diabetes" or lead to significant changes in quality of life. However a recent meta-analysis of 47 randomized controlled trials encompassing 7677 patients showed that self-care management intervention improves glycemic control in diabetics, with an estimated 0.36% (95% CI, 0.21–0.51) reduction in their glycated hemoglobin values. Furthermore, a recent study showed that patients described as being "Uncontrolled Diabetics" (defined in this study by HbA1C levels >8%) showed a statistically significant decrease in the HbA1C levels after a 90-day period of seven-point self-monitoring of blood glucose (SMBG) with a relative risk reduction (RRR) of 0.18% (95% CI, 0.86–2.64%, p<.001). Regardless of lab values or other numerical parameters, the purpose of the clinician is to improve quality of life and patient outcomes in diabetic patients. A recent study included 12 randomized controlled trials and evaluated outcomes in 3259 patients. The authors concluded through a qualitative analysis that SMBG showed no effect on patient satisfaction or the patients' health-related quality of life. Furthermore, the same study identified that patients with type 2 diabetes mellitus diagnosed greater than one year prior to initiation of SMBG, who were not on insulin, experienced a statistically significant reduction in their HbA1C of 0.3% (95% CI, -0.4 – -0.1) at six months follow up, but a statistically insignificant reduction of 0.1% (95% CI, -0.3 – 0.04) at twelve months follow up. Conversely, newly diagnosed patients experienced a statistically significant reduction of 0.5% (95% CI, -0.9 – -0.1) at 12 months follow up. A recent study found that a treatment strategy of intensively lowering blood sugar levels (below 6%) in patients with additional cardiovascular disease risk factors poses more harm than benefit. For type 2 diabetics who are not on insulin, exercise and diet are the best tools. Blood glucose monitoring is, in that case, simply a tool to evaluate the success of diet and exercise. Insulin-dependent type 2 diabetics do not need to monitor their blood sugar as frequently as type 1 diabetics. A recent systematic review with meta-analysis about glycaemia monitoring in critically ill patients who are haemodynamically unstable and require intensive monitoring of glycaemia concluded that monitoring should be undertaken using arterial blood samples and POC blood gas analysers, as this is more reliable and is not affected by the variability of different confounding factors. Determining glycaemia in capillary blood using glucometry may be suitable in stable patients or when close monitoring of glycaemia is not required.
Recommendations
The National Institute for Health and Clinical Excellence (NICE), UK released updated diabetes recommendations on 30 May 2008, which recommend that self-monitoring of plasma glucose levels for people with newly diagnosed type 2 diabetes must be integrated into a structured self-management education process. The recommendations have been updated in August 2015 for children and young adults with type 1 diabetes.
The American Diabetes Association (ADA), which produces guidelines for diabetes care and clinical practice recommendations, recently updated its "Standards of Medical Care" in January 2019 to acknowledge that routine self-monitoring of blood glucose in people who are not using insulin is of limited additional clinical benefit. A randomized controlled trial evaluated once-daily self-monitoring that included tailored patient messaging and did not show that this strategy led to significant changes in A1C after a year.
References
External links
Glucose monitoring
Activity trackers
Diabetes-related tests
Insulin therapies
Medical monitoring | Blood glucose monitoring | [
"Chemistry"
] | 3,383 | [
"Blood tests",
"Chemical pathology"
] |
56,558 | https://en.wikipedia.org/wiki/Blood%20pressure | Blood pressure (BP) is the pressure of circulating blood against the walls of blood vessels. Most of this pressure results from the heart pumping blood through the circulatory system. When used without qualification, the term "blood pressure" refers to the pressure in a brachial artery, where it is most commonly measured. Blood pressure is usually expressed in terms of the systolic pressure (maximum pressure during one heartbeat) over diastolic pressure (minimum pressure between two heartbeats) in the cardiac cycle. It is measured in millimeters of mercury (mmHg) above the surrounding atmospheric pressure, or in kilopascals (kPa). The difference between the systolic and diastolic pressures is known as pulse pressure, while the average pressure during a cardiac cycle is known as mean arterial pressure.
Blood pressure is one of the vital signs—together with respiratory rate, heart rate, oxygen saturation, and body temperature—that healthcare professionals use in evaluating a patient's health. Normal resting blood pressure in an adult is approximately 120 mmHg systolic over 80 mmHg diastolic, denoted as "120/80 mmHg". Globally, the average blood pressure, age standardized, has remained about the same from 1975 to the present, at approximately 127/79 mmHg in men and 122/77 mmHg in women, although these average data mask significantly diverging regional trends.
Traditionally, a health-care worker measured blood pressure non-invasively by auscultation (listening) through a stethoscope for sounds in one arm's artery as the artery is squeezed, closer to the heart, by an aneroid gauge or a mercury-tube sphygmomanometer. Auscultation is still generally considered to be the gold standard of accuracy for non-invasive blood pressure readings in clinic. However, semi-automated methods have become common, largely due to concerns about potential mercury toxicity, although cost, ease of use and applicability to ambulatory blood pressure or home blood pressure measurements have also influenced this trend. Early automated alternatives to mercury-tube sphygmomanometers were often seriously inaccurate, but modern devices validated to international standards achieve an average difference between two standardized reading methods of 5 mm Hg or less, and a standard deviation of less than 8 mm Hg. Most of these semi-automated methods measure blood pressure using oscillometry (measurement by a pressure transducer in the cuff of the device of small oscillations of intra-cuff pressure accompanying heartbeat-induced changes in the volume of each pulse).
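Given paired readings from a device under test and a standardized reference method, the two accuracy criteria quoted above could be checked with a short Python sketch like the one below; this is a simplification for illustration, since real validation protocols specify sample sizes, subject selection, and further requirements.

from statistics import mean, stdev

def meets_accuracy_criteria(device_mmhg, reference_mmhg):
    # Differences between the device and the reference method,
    # reading by reading.
    diffs = [d - r for d, r in zip(device_mmhg, reference_mmhg)]
    # Mean difference of 5 mmHg or less, standard deviation under 8 mmHg.
    return abs(mean(diffs)) <= 5 and stdev(diffs) < 8

device = [118, 124, 131, 140, 122]
reference = [120, 121, 128, 143, 119]
print(meets_accuracy_criteria(device, reference))  # True for this toy data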
Blood pressure is influenced by cardiac output, systemic vascular resistance, blood volume and arterial stiffness, and varies depending on person's situation, emotional state, activity and relative health or disease state. In the short term, blood pressure is regulated by baroreceptors, which act via the brain to influence the nervous and the endocrine systems.
Blood pressure that is too low is called hypotension, pressure that is consistently too high is called hypertension, and normal pressure is called normotension. Both hypertension and hypotension have many causes and may be of sudden onset or of long duration. Long-term hypertension is a risk factor for many diseases, including stroke, heart disease, and kidney failure. Long-term hypertension is more common than long-term hypotension.
Classification, normal and abnormal values
Systemic arterial pressure
Blood pressure measurements can be influenced by circumstances of measurement. Guidelines use different thresholds for office (also known as clinic), home (when the person measures their own blood pressure at home), and ambulatory blood pressure (using an automated device over a 24-hour period).
The risk of cardiovascular disease increases progressively above 90 mmHg, especially among women.
Observational studies demonstrate that people who maintain arterial pressures at the low end of these pressure ranges have much better long-term cardiovascular health. There is an ongoing medical debate over what is the optimal level of blood pressure to target when using drugs to lower blood pressure with hypertension, particularly in older people.
Blood pressure fluctuates from minute to minute and normally shows a circadian rhythm over a 24-hour period, with highest readings in the early morning and evenings and lowest readings at night. Loss of the normal fall in blood pressure at night is associated with a greater future risk of cardiovascular disease and there is evidence that night-time blood pressure is a stronger predictor of cardiovascular events than day-time blood pressure. Blood pressure varies over longer time periods (months to years) and this variability predicts adverse outcomes. Blood pressure also changes in response to temperature, noise, emotional stress, consumption of food or liquid, dietary factors, physical activity, changes in posture (such as standing-up), drugs, and disease. The variability in blood pressure and the better predictive value of ambulatory blood pressure measurements has led some authorities, such as the National Institute for Health and Care Excellence (NICE) in the UK, to advocate for the use of ambulatory blood pressure as the preferred method for diagnosis of hypertension.
Various other factors, such as age and sex, also influence a person's blood pressure. Differences between left-arm and right-arm blood pressure measurements tend to be small. However, occasionally there is a consistent difference greater than 10 mmHg which may need further investigation, e.g. for peripheral arterial disease, obstructive arterial disease or aortic dissection.
There is no accepted diagnostic standard for hypotension, although pressures less than 90/60 are commonly regarded as hypotensive. In practice blood pressure is considered too low only if symptoms are present.
Systemic arterial pressure and age
Fetal blood pressure
In pregnancy, it is the fetal heart and not the mother's heart that builds up the fetal blood pressure to drive blood through the fetal circulation. The blood pressure in the fetal aorta is approximately 30 mmHg at 20 weeks of gestation, and increases to approximately 45 mmHg at 40 weeks of gestation.
The average blood pressure for full-term infants:
Systolic 65–95 mmHg
Diastolic 30–60 mmHg
Childhood
In children the normal ranges for blood pressure are lower than for adults and depend on height. Reference blood pressure values have been developed for children in different countries, based on the distribution of blood pressure in children of these countries.
Aging adults
In adults in most societies, systolic blood pressure tends to rise from early adulthood onward, up to at least age 70; diastolic pressure tends to begin to rise at the same time but start to fall earlier in mid-life, approximately age 55. Mean blood pressure rises from early adulthood, plateauing in mid-life, while pulse pressure rises quite markedly after the age of 40. Consequently, in many older people, systolic blood pressure often exceeds the normal adult range; if the diastolic pressure is in the normal range, this is termed isolated systolic hypertension. The rise in pulse pressure with age is attributed to increased stiffness of the arteries. An age-related rise in blood pressure is not considered healthy and is not observed in some isolated unacculturated communities.
Systemic venous pressure
Blood pressure generally refers to the arterial pressure in the systemic circulation. However, measurement of pressures in the venous system and the pulmonary vessels plays an important role in intensive care medicine but requires invasive measurement of pressure using a catheter.
Venous pressure is the vascular pressure in a vein or in the atria of the heart. It is much lower than arterial pressure, with common values of 5 mmHg in the right atrium and 8 mmHg in the left atrium.
Variants of venous pressure include:
Central venous pressure, which is a good approximation of right atrial pressure, which is a major determinant of right ventricular end diastolic volume. (However, there can be exceptions in some cases.)
The jugular venous pressure (JVP) is the indirectly observed pressure over the venous system. It can be useful in the differentiation of different forms of heart and lung disease.
The portal venous pressure is the blood pressure in the portal vein. It is normally 5–10 mmHg.
Pulmonary pressure
Normally, the pressure in the pulmonary artery is about 15 mmHg at rest.
Increased blood pressure in the capillaries of the lung causes pulmonary hypertension, leading to interstitial edema if the pressure increases to above 20 mmHg, and to pulmonary edema at pressures above 25 mmHg.
Aortic pressure
Aortic pressure, also called central aortic blood pressure, or central blood pressure, is the blood pressure at the root of the aorta. Elevated aortic pressure has been found to be a more accurate predictor of both cardiovascular events and mortality, as well as structural changes in the heart, than has peripheral blood pressure (such as measured through the brachial artery). Traditionally it involved an invasive procedure to measure aortic pressure, but now there are non-invasive methods of measuring it indirectly without a significant margin of error.
Certain researchers have argued for physicians to begin using aortic pressure, as opposed to peripheral blood pressure, as a guide for clinical decisions. The way antihypertensive drugs impact peripheral blood pressure can often be very different from the way they impact central aortic pressure.
Mean systemic pressure
If the heart is stopped, blood pressure falls, but it does not fall to zero. The remaining pressure measured after cessation of the heart beat and redistribution of blood throughout the circulation is termed the mean systemic pressure or mean circulatory filling pressure; typically this is approximately 7 mmHg.
Disorders of blood pressure
Disorders of blood pressure control include high blood pressure, low blood pressure, and blood pressure that shows excessive or maladaptive fluctuation.
High blood pressure
Arterial hypertension can be an indicator of other problems and may have long-term adverse effects. Sometimes it can be an acute problem, such as in a hypertensive emergency when blood pressure is more than 180/120 mmHg.
Levels of arterial pressure put mechanical stress on the arterial walls. Higher pressures increase heart workload and progression of unhealthy tissue growth (atheroma) that develops within the walls of arteries. The higher the pressure, the more stress that is present and the more atheroma tend to progress and the heart muscle tends to thicken, enlarge and become weaker over time.
Persistent hypertension is one of the risk factors for strokes, heart attacks, heart failure, and arterial aneurysms, and is the leading cause of chronic kidney failure. Even moderate elevation of arterial pressure leads to shortened life expectancy. At severely high pressures, mean arterial pressures 50% or more above average, a person can expect to live no more than a few years unless appropriately treated. For people with high blood pressure, higher heart rate variability (HRV) is a risk factor for atrial fibrillation.
Both high systolic pressure and high pulse pressure (the numerical difference between systolic and diastolic pressures) are risk factors. Elevated pulse pressure has been found to be a stronger independent predictor of cardiovascular events, especially in older populations, than has systolic, diastolic, or mean arterial pressure. In some cases, it appears that a decrease in excessive diastolic pressure can actually increase risk, probably due to the increased difference between systolic and diastolic pressures (i.e., widened pulse pressure). If systolic blood pressure is elevated (>140 mmHg) with a normal diastolic blood pressure (<90 mmHg), it is called isolated systolic hypertension and may present a health concern. The 2017 American Heart Association blood pressure guidelines state that a systolic blood pressure of 130–139 mmHg with a diastolic pressure of 80–89 mmHg is "stage one hypertension".
For those with heart valve regurgitation, a change in its severity may be associated with a change in diastolic pressure. In a study of people with heart valve regurgitation that compared measurements two weeks apart for each person, there was an increased severity of aortic and mitral regurgitation when diastolic blood pressure increased, whereas when diastolic blood pressure decreased, there was a decreased severity.
Low blood pressure
Blood pressure that is too low is known as hypotension. This is a medical concern if it causes signs or symptoms, such as dizziness, fainting, or in extreme cases in medical emergencies, circulatory shock. Causes of low arterial pressure include sepsis, hypovolemia, bleeding, cardiogenic shock, reflex syncope, hormonal abnormalities such as Addison's disease, eating disorders – particularly anorexia nervosa and bulimia.
Orthostatic hypotension
A large fall in blood pressure upon standing (typically a systolic/diastolic blood pressure decrease of >20/10 mmHg) is termed orthostatic hypotension (postural hypotension) and represents a failure of the body to compensate for the effect of gravity on the circulation. Standing results in an increased hydrostatic pressure in the blood vessels of the lower limbs. The consequent distension of the veins below the diaphragm (venous pooling) causes ~500 ml of blood to be relocated from the chest and upper body. This results in a rapid decrease in central blood volume and a reduction of ventricular preload which in turn reduces stroke volume, and mean arterial pressure. Normally this is compensated for by multiple mechanisms, including activation of the autonomic nervous system which increases heart rate, myocardial contractility and systemic arterial vasoconstriction to preserve blood pressure and elicits venous vasoconstriction to decrease venous compliance. Decreased venous compliance also results from an intrinsic myogenic increase in venous smooth muscle tone in response to the elevated pressure in the veins of the lower body.
Other compensatory mechanisms include the veno-arteriolar axon reflex, the 'skeletal muscle pump' and 'respiratory pump'. Together these mechanisms normally stabilize blood pressure within a minute or less. If these compensatory mechanisms fail and arterial pressure and blood flow decrease beyond a certain point, the perfusion of the brain becomes critically compromised (i.e., the blood supply is not sufficient), causing lightheadedness, dizziness, weakness or fainting. Usually this failure of compensation is due to disease, or drugs that affect the sympathetic nervous system. A similar effect is observed following the experience of excessive gravitational forces (G-loading), such as routinely experienced by aerobatic or combat pilots 'pulling Gs' where the extreme hydrostatic pressures exceed the ability of the body's compensatory mechanisms.
Variable or fluctuating blood pressure
Some fluctuation or variation in blood pressure is normal. Variation in blood pressure that is significantly greater than the norm is known as labile hypertension and is associated with increased risk of cardiovascular disease, brain small vessel disease, and dementia independent of the average blood pressure level. Recent evidence from clinical trials has also linked variation in blood pressure to mortality, stroke, heart failure, and cardiac changes that may give rise to heart failure. These data have prompted discussion of whether excessive variation in blood pressure should be treated, even among normotensive older adults.
Older individuals and those who had received blood pressure medications are more likely to exhibit larger fluctuations in pressure, and there is some evidence that different antihypertensive agents have different effects on blood pressure variability; whether these differences translate to benefits in outcome is uncertain.
Physiology
During each heartbeat, blood pressure varies between a maximum (systolic) and a minimum (diastolic) pressure. The blood pressure in the circulation is principally due to the pumping action of the heart. However, blood pressure is also regulated by neural regulation from the brain (see Hypertension and the brain), as well as osmotic regulation from the kidney. Differences in mean blood pressure drive the flow of blood around the circulation. The rate of mean blood flow depends on both blood pressure and the resistance to flow presented by the blood vessels. In the absence of hydrostatic effects (e.g. standing), mean blood pressure decreases as the circulating blood moves away from the heart through arteries and capillaries due to viscous losses of energy. Mean blood pressure drops over the whole circulation, although most of the fall occurs along the small arteries and arterioles. Pulsatility also diminishes in the smaller elements of the arterial circulation, although some transmitted pulsatility is observed in capillaries. Gravity affects blood pressure via hydrostatic forces (e.g., during standing), and valves in veins, breathing, and pumping from contraction of skeletal muscles also influence blood pressure, particularly in veins.
Hemodynamics
A simple view of the hemodynamics of systemic arterial pressure is based around mean arterial pressure (MAP) and pulse pressure. Most influences on blood pressure can be understood in terms of their effect on cardiac output, systemic vascular resistance, or arterial stiffness (the inverse of arterial compliance). Cardiac output is the product of stroke volume and heart rate. Stroke volume is influenced by 1) the end-diastolic volume or filling pressure of the ventricle acting via the Frank–Starling mechanism—this is influenced by blood volume; 2) cardiac contractility; and 3) afterload, the impedance to blood flow presented by the circulation. In the short term, the greater the blood volume, the higher the cardiac output. This has been proposed as an explanation of the relationship between high dietary salt intake and increased blood pressure; however, responses to increased dietary sodium intake vary between individuals and are highly dependent on autonomic nervous system responses and the renin–angiotensin system; changes in plasma osmolarity may also be important. In the longer term the relationship between volume and blood pressure is more complex. In simple terms, systemic vascular resistance is mainly determined by the caliber of small arteries and arterioles. The resistance attributable to a blood vessel depends on its radius as described by the Hagen–Poiseuille equation (resistance ∝ 1/radius^4). Hence, the smaller the radius, the higher the resistance. Other physical factors that affect resistance include: vessel length (the longer the vessel, the higher the resistance), blood viscosity (the higher the viscosity, the higher the resistance) and the number of vessels, particularly the smaller, numerous arterioles and capillaries. The presence of a severe arterial stenosis increases resistance to flow, however this increase in resistance rarely increases systemic blood pressure because its contribution to total systemic resistance is small, although it may profoundly decrease downstream flow. Substances called vasoconstrictors reduce the caliber of blood vessels, thereby increasing blood pressure. Vasodilators (such as nitroglycerin) increase the caliber of blood vessels, thereby decreasing arterial pressure. In the longer term a process termed remodeling also contributes to changing the caliber of small blood vessels and influencing resistance and reactivity to vasoactive agents. Reductions in capillary density, termed capillary rarefaction, may also contribute to increased resistance in some circumstances.
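To make the fourth-power radius dependence concrete, this Python sketch compares vessel resistances up to a constant of proportionality, which is all that is needed to see why the small arteries and arterioles dominate systemic vascular resistance; the numbers are illustrative.

def relative_resistance(radius_ratio):
    # Hagen-Poiseuille: resistance is proportional to 1/radius^4
    # for a vessel of fixed length carrying blood of fixed viscosity.
    return 1.0 / radius_ratio ** 4

print(relative_resistance(0.5))  # 16.0: halving the radius gives 16x resistance
print(relative_resistance(2.0))  # 0.0625: doubling the radius cuts it 16-fold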
In practice, each individual's autonomic nervous system and other systems regulating blood pressure, notably the kidney, respond to and regulate all these factors so that, although the above issues are important, they rarely act in isolation and the actual arterial pressure response of a given individual can vary widely in the short and long term.
Pulse pressure
The pulse pressure is the difference between the measured systolic and diastolic pressures:

Pulse pressure = Systolic pressure − Diastolic pressure
The pulse pressure is a consequence of the pulsatile nature of the cardiac output, i.e. the heartbeat. The magnitude of the pulse pressure is usually attributed to the interaction of the stroke volume of the heart, the compliance (ability to expand) of the arterial system—largely attributable to the aorta and large elastic arteries—and the resistance to flow in the arterial tree.
Clinical significance of pulse pressure
A healthy pulse pressure is around 40 mmHg. A pulse pressure that is consistently 60 mmHg or greater is likely to be associated with disease, and a pulse pressure of 50 mmHg or more increases the risk of cardiovascular disease as well as other complications such as eye and kidney disease. Pulse pressure is considered low if it is less than 25% of the systolic. (For example, if the systolic pressure is 120 mmHg, then the pulse pressure would be considered low if it is less than 30 mmHg, since 30 is 25% of 120.) A very low pulse pressure can be a symptom of disorders such as congestive heart failure.
Elevated pulse pressure has been found to be a stronger independent predictor of cardiovascular events, especially in older populations, than has systolic, diastolic, or mean arterial pressure. This increased risk exists for both men and women and even when no other cardiovascular risk factors are present. The increased risk also exists even in cases in which diastolic pressure decreases over time while systolic remains steady.
A meta-analysis in 2000 showed that a 10 mmHg increase in pulse pressure was associated with a 20% increased risk of cardiovascular mortality, and a 13% increase in risk for all coronary end points. The study authors also noted that, while risks of cardiovascular end points do increase with higher systolic pressures, at any given systolic blood pressure the risk of major cardiovascular end points increases, rather than decreases, with lower diastolic levels. This suggests that interventions that lower diastolic pressure without also lowering systolic pressure (and thus lowering pulse pressure) could actually be counterproductive. There are no drugs currently approved to lower pulse pressure, although some antihypertensive drugs may modestly lower pulse pressure, while in some cases a drug that lowers overall blood pressure may actually have the counterproductive side effect of raising pulse pressure.
Pulse pressure can either widen or narrow in people with sepsis, depending on the degree of hemodynamic compromise. A pulse pressure of over 70 mmHg in sepsis is correlated with an increased chance of survival and a more positive response to IV fluids.
Mean arterial pressure
Mean arterial pressure (MAP) is the average of blood pressure over a cardiac cycle and is determined by the cardiac output (CO), systemic vascular resistance (SVR), and central venous pressure (CVP):

MAP = (CO × SVR) + CVP

In practice, the contribution of CVP (which is small) is generally ignored and so

MAP ≈ CO × SVR
MAP is often estimated from measurements of the systolic pressure, SP, and the diastolic pressure, DP, using the equation:

MAP ≈ DP + k(SP − DP)

where k = 0.333, although other values for k have been advocated.
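As a quick numerical check of the two quantities defined above, the following minimal Python sketch (function names are illustrative, not from any clinical library) computes the pulse pressure and the k-weighted MAP estimate:

```python
def pulse_pressure(sp, dp):
    """Pulse pressure (mmHg): systolic minus diastolic pressure."""
    return sp - dp

def map_estimate(sp, dp, k=0.333):
    """Estimate mean arterial pressure as DP + k*(SP - DP).
    k = 0.333 as in the text; other values of k have been advocated."""
    return dp + k * (sp - dp)

print(pulse_pressure(120, 80))          # 40 mmHg, a typical healthy value
print(round(map_estimate(120, 80), 1))  # 93.3 mmHg
```

Note that with k = 0.333 this is the familiar rule of thumb that MAP sits roughly one third of the way up from the diastolic toward the systolic pressure.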
Regulation of blood pressure
The endogenous, homeostatic regulation of arterial pressure is not completely understood, but the following mechanisms of regulating arterial pressure have been well-characterized:
Baroreceptor reflex: Baroreceptors in the high pressure receptor zones detect changes in arterial pressure. These baroreceptors send signals ultimately to the medulla of the brain stem, specifically to the rostral ventrolateral medulla (RVLM). The medulla, by way of the autonomic nervous system, adjusts the mean arterial pressure by altering both the force and speed of the heart's contractions, as well as the systemic vascular resistance. The most important arterial baroreceptors are located in the left and right carotid sinuses and in the aortic arch.
Renin–angiotensin system (RAS): This system is generally known for its long-term adjustment of arterial pressure. This system allows the kidney to compensate for loss in blood volume or drops in arterial pressure by activating an endogenous vasoconstrictor known as angiotensin II.
Aldosterone release: This steroid hormone is released from the adrenal cortex in response to activation of the renin-angiotensin system, high serum potassium levels, or elevated adrenocorticotropic hormone (ACTH). Renin converts angiotensinogen to angiotensin I, which is converted by angiotensin converting enzyme to angiotensin II. Angiotensin II then signals to the adrenal cortex to release aldosterone. Aldosterone stimulates sodium retention and potassium excretion by the kidneys and the consequent salt and water retention increases plasma volume, and indirectly, arterial pressure. Aldosterone may also exert direct pressor effects on vascular smooth muscle and central effects on sympathetic nervous system activity.
Baroreceptors in low pressure receptor zones (mainly in the venae cavae and the pulmonary veins, and in the atria) result in feedback by regulating the secretion of antidiuretic hormone (ADH/vasopressin), renin and aldosterone. The resultant increase in blood volume results in an increased cardiac output by the Frank–Starling law of the heart, in turn increasing arterial blood pressure.
These different mechanisms are not necessarily independent of each other, as indicated by the link between the RAS and aldosterone release. When blood pressure falls many physiological cascades commence in order to return the blood pressure to a more appropriate level.
The blood pressure fall is detected by a decrease in blood flow and thus a decrease in glomerular filtration rate (GFR).
Decrease in GFR is sensed as a decrease in Na+ levels by the macula densa.
The macula densa causes an increase in Na+ reabsorption, which causes water to follow in via osmosis and leads to an ultimate increase in plasma volume. Further, the macula densa releases adenosine which causes constriction of the afferent arterioles.
At the same time, the juxtaglomerular cells sense the decrease in blood pressure and release renin.
Renin converts angiotensinogen (the inactive precursor) to angiotensin I, which itself has little direct activity.
Angiotensin I flows in the bloodstream until it reaches the capillaries of the lungs where angiotensin-converting enzyme (ACE) acts on it to convert it into angiotensin II.
Angiotensin II is a vasoconstrictor that increases venous return to the heart and subsequently the preload, ultimately increasing the cardiac output.
Angiotensin II also causes an increase in the release of aldosterone from the adrenal glands.
Aldosterone further increases the Na+ and H2O reabsorption in the distal convoluted tubule of the nephron.
The RAS is targeted pharmacologically by ACE inhibitors and angiotensin II receptor antagonists (also known as angiotensin receptor blockers; ARBs). The aldosterone system is directly targeted by aldosterone antagonists. Fluid retention may be targeted by diuretics; the antihypertensive effect of diuretics is due to their effect on blood volume. Generally, the baroreceptor reflex is not targeted in hypertension because, if blocked, individuals may experience orthostatic hypotension and fainting.
Measurement
Arterial pressure is most commonly measured via a sphygmomanometer, which uses the height of a column of mercury, or an aneroid gauge, to reflect the blood pressure by auscultation. The most common automated blood pressure measurement technique is based on the oscillometric method. Fully automated oscillometric measurement has been available since 1981. This principle has recently been used to measure blood pressure with a smartphone. Measuring pressure invasively, by penetrating the arterial wall to take the measurement, is much less common and usually restricted to a hospital setting. Novel methods to measure blood pressure without penetrating the arterial wall, and without applying any pressure to the patient's body, are being explored; for example, cuffless measurements that use only optical sensors.
In office blood pressure measurement, terminal digit preference is common. According to one study, approximately 40% of recorded measurements ended with the digit zero, whereas "without bias, 10%–20% of measurements are expected to end in zero".
In animals
Blood pressure levels in non-human mammals may vary depending on the species. Heart rate differs markedly, largely depending on the size of the animal (larger animals have slower heart rates). The giraffe has a distinctly high arterial pressure of about 190 mmHg, enabling blood perfusion through its roughly 2 m-long neck to the head. In other species subject to orthostatic changes in blood pressure, such as arboreal snakes, blood pressure is higher than in non-arboreal snakes. A heart near to the head (short heart-to-head distance) and a long tail with tight integument favor blood perfusion to the head.
As in humans, blood pressure in animals differs by age, sex, time of day, and environmental circumstances: measurements made in laboratories or under anesthesia may not be representative of values under free-living conditions. Rats, mice, dogs and rabbits have been used extensively to study the regulation of blood pressure.
Hypertension in cats and dogs
Hypertension in cats and dogs is generally diagnosed if the blood pressure is greater than 150 mmHg (systolic), although sight hounds have higher blood pressures than most other dog breeds; a systolic pressure greater than 180 mmHg is considered abnormal in these dogs.
See also
History of hypertension
References
External links
Articles containing video clips
Cardiovascular physiology
Mathematics in medicine | Blood pressure | [
"Mathematics"
] | 6,121 | [
"Applied mathematics",
"Mathematics in medicine"
] |
56,561 | https://en.wikipedia.org/wiki/Anesthesia | Anesthesia (American English) or anaesthesia (British English) is a state of controlled, temporary loss of sensation or awareness that is induced for medical or veterinary purposes. It may include some or all of analgesia (relief from or prevention of pain), paralysis (muscle relaxation), amnesia (loss of memory), and unconsciousness. An individual under the effects of anesthetic drugs is referred to as being anesthetized.
Anesthesia enables the painless performance of procedures that would otherwise require physical restraint in a non-anesthetized individual, or would otherwise be technically unfeasible. Three broad categories of anesthesia exist:
General anesthesia suppresses central nervous system activity and results in unconsciousness and total lack of sensation, using either injected or inhaled drugs.
Sedation suppresses the central nervous system to a lesser degree, inhibiting both anxiety and creation of long-term memories without resulting in unconsciousness.
Regional and local anesthesia block transmission of nerve impulses from a specific part of the body. Depending on the situation, this may be used either on its own (in which case the individual remains fully conscious), or in combination with general anesthesia or sedation.
Local anesthesia is simple infiltration by the clinician directly onto the region of interest (e.g. numbing a tooth for dental work).
Peripheral nerve blocks use drugs targeted at peripheral nerves to anesthetize an isolated part of the body, such as an entire limb.
Neuraxial blockade, mainly epidural and spinal anesthesia, can be performed in the region of the central nervous system itself, suppressing all incoming sensation from nerves supplying the area of the block.
In preparing for a medical or veterinary procedure, the clinician chooses one or more drugs to achieve the types and degree of anesthesia characteristics appropriate for the type of procedure and the particular patient. The types of drugs used include general anesthetics, local anesthetics, hypnotics, dissociatives, sedatives, adjuncts, neuromuscular-blocking drugs, narcotics, and analgesics.
The risks of complications during or after anesthesia are often difficult to separate from those of the procedure for which anesthesia is being given, but in the main they are related to three factors: the health of the individual, the complexity and stress of the procedure itself, and the anesthetic technique. Of these factors, the individual's health has the greatest impact. Major perioperative risks can include death, heart attack, and pulmonary embolism, whereas minor risks can include postoperative nausea and vomiting and hospital readmission. Some conditions, like local anesthetic toxicity, airway trauma or malignant hyperthermia, can be more directly attributed to specific anesthetic drugs and techniques.
Medical uses
The purpose of anesthesia can be distilled down to three basic goals or endpoints:
hypnosis (a temporary loss of consciousness and with it a loss of memory. In a pharmacological context, the word hypnosis usually has this technical meaning, in contrast to its more familiar lay or psychological meaning of an altered state of consciousness not necessarily caused by drugs—see hypnosis).
analgesia (lack of sensation which also blunts autonomic reflexes)
muscle relaxation
Different types of anesthesia affect the endpoints differently. Regional anesthesia, for instance, affects analgesia; benzodiazepine-type sedatives (used for sedation, or "twilight anesthesia") favor amnesia; and general anesthetics can affect all of the endpoints. The goal of anesthesia is to achieve the endpoints required for the given surgical procedure with the least risk to the subject.
To achieve the goals of anesthesia, drugs act on different but interconnected parts of the nervous system. Hypnosis, for instance, is generated through actions on the nuclei in the brain and is similar to the activation of sleep. The effect is to make people less aware and less reactive to noxious stimuli.
Loss of memory (amnesia) is created by action of drugs on multiple (but specific) regions of the brain. Memories are created as either declarative or non-declarative memories in several stages (short-term, long-term, long-lasting) the strength of which is determined by the strength of connections between neurons termed synaptic plasticity. Each anesthetic produces amnesia through unique effects on memory formation at variable doses. Inhalational anesthetics will reliably produce amnesia through general suppression of the nuclei at doses below those required for loss of consciousness. Drugs like midazolam produce amnesia through different pathways by blocking the formation of long-term memories.
Nevertheless, a person may dream under anesthesia or be conscious of the procedure despite giving no indication of this during it. An estimated 22% of people dream under general anesthesia, and one or two cases in a thousand involve some consciousness, termed "anesthesia awareness". It is not known whether animals dream while under general anesthesia.
Techniques
Anesthesia is unique in that it is not a direct means of treatment; rather, it allows the clinician to do things that may treat, diagnose, or cure an ailment which would otherwise be painful or complicated. The best anesthetic, therefore, is the one with the lowest risk to the patient that still achieves the endpoints required to complete the procedure. The first stage in anesthesia is the pre-operative risk assessment, consisting of the medical history, physical examination and lab tests. Diagnosing the patient's pre-operative physical status allows the clinician to minimize anesthetic risks. A well-completed medical history will arrive at the correct diagnosis 56% of the time, which increases to 73% with a physical examination. Lab tests help in diagnosis, but only in 3% of cases, underscoring the need for a full history and physical examination prior to anesthesia. Incorrect pre-operative assessments or preparations are the root cause of 11% of all adverse anesthetic events.
Safe anesthesia care depends greatly on well-functioning teams of highly trained healthcare workers. The medical specialty centered on anesthesia is called anesthesiology, and doctors specialized in the field are termed anesthesiologists. Additional healthcare professionals involved in anesthesia provision have varying titles and roles depending on the jurisdiction, and include anesthetic nurses, nurse anesthetists, anesthesiologist assistants, anaesthetic technicians, anaesthesia associates, operating department practitioners and anesthesia technologists. International standards for the safe practice of anesthesia, jointly endorsed by the World Health Organization and the World Federation of Societies of Anaesthesiologists, highly recommend that anesthesia should be provided, overseen or led by anesthesiologists, with the exception of minimal sedation or superficial procedures performed under local anesthesia.
A trained, vigilant anesthesia provider should continually care for the patient; where the provider is not an anesthesiologist, they should be locally directed and supervised by an anesthesiologist, and in countries or settings where this is not feasible, care should be led by the most qualified local individual within a regional or national anesthesiologist-led framework. The same minimum standards for patient safety apply regardless of the provider, including continuous clinical and biometric monitoring of tissue oxygenation, perfusion and blood pressure; confirmation of correct placement of airway management devices by auscultation and carbon dioxide detection; use of the WHO Surgical Safety Checklist; and safe onward transfer of the patient's care following the procedure.
One part of the risk assessment is based on the patient's health. The American Society of Anesthesiologists has developed a six-tier scale that stratifies the patient's pre-operative physical state. It is called the ASA physical status classification. The scale assesses risk as the patient's general health relates to an anesthetic.
The more detailed pre-operative medical history aims to discover genetic disorders (such as malignant hyperthermia or pseudocholinesterase deficiency), habits (tobacco, drug and alcohol use), physical attributes (such as obesity or a difficult airway) and any coexisting diseases (especially cardiac and respiratory diseases) that might impact the anesthetic. The physical examination helps quantify the impact of anything found in the medical history in addition to lab tests.
Aside from the generalities of the patient's health assessment, an evaluation of specific factors as they relate to the surgery also need to be considered for anesthesia. For instance, anesthesia during childbirth must consider not only the mother but the baby. Cancers and tumors that occupy the lungs or throat create special challenges to general anesthesia. After determining the health of the patient undergoing anesthesia and the endpoints that are required to complete the procedure, the type of anesthetic can be selected. Choice of surgical method and anesthetic technique aims to reduce risk of complications, shorten time needed for recovery and minimize the surgical stress response.
General anesthesia
Anesthesia is a combination of the endpoints (discussed above) that are reached by drugs acting on different but overlapping sites in the central nervous system. General anesthesia (as opposed to sedation or regional anesthesia) has three main goals: lack of movement (paralysis), unconsciousness, and blunting of the stress response. In the early days of anesthesia, anesthetics could reliably achieve the first two, allowing surgeons to perform necessary procedures, but many patients died because the extremes of blood pressure and pulse caused by the surgical insult were ultimately harmful. Eventually, the need for blunting of the surgical stress response was identified by Harvey Cushing, who injected local anesthetic prior to hernia repairs. This led to the development of other drugs that could blunt the response, leading to lower surgical mortality rates.
The most common approach to reach the endpoints of general anesthesia is through the use of inhaled general anesthetics. Each anesthetic has its own potency, which is correlated to its solubility in oil. This relationship exists because the drugs bind directly to cavities in proteins of the central nervous system, although several theories of general anesthetic action have been described. Inhalational anesthetics are thought to exert their effects on different parts of the central nervous system. For instance, the immobilizing effect of inhaled anesthetics results from an effect on the spinal cord, whereas sedation, hypnosis and amnesia involve sites in the brain. The potency of an inhalational anesthetic is quantified by its minimum alveolar concentration (MAC). The MAC is the percentage dose of anesthetic that will prevent a response to a painful stimulus in 50% of subjects. The higher the MAC, generally, the less potent the anesthetic.
The ideal anesthetic drug would provide hypnosis, amnesia, analgesia, and muscle relaxation without undesirable changes in blood pressure, pulse or breathing. In the 1930s, physicians started to augment inhaled general anesthetics with intravenous general anesthetics. The drugs used in combination offered a better risk profile to the subject under anesthesia and a quicker recovery. A combination of drugs was later shown to result in lower odds of dying in the first seven days after anesthetic. For instance, propofol (injection) might be used to start the anesthetic, fentanyl (injection) used to blunt the stress response, midazolam (injection) given to ensure amnesia and sevoflurane (inhaled) during the procedure to maintain the effects. More recently, several intravenous drugs have been developed which, if desired, allow inhaled general anesthetics to be avoided completely.
Equipment
The core instrument in an inhalational anesthetic delivery system is an anesthetic machine. It has vaporizers, ventilators, an anesthetic breathing circuit, waste gas scavenging system and pressure gauges. The purpose of the anesthetic machine is to provide anesthetic gas at a constant pressure, oxygen for breathing and to remove carbon dioxide or other waste anesthetic gases. Since inhalational anesthetics are flammable, various checklists have been developed to confirm that the machine is ready for use, that the safety features are active and the electrical hazards are removed. Intravenous anesthetic is delivered either by bolus doses or an infusion pump. There are also many smaller instruments used in airway management and monitoring the patient. The common thread to modern machinery in this field is the use of fail-safe systems that decrease the odds of catastrophic misuse of the machine.
Monitoring
Patients under general anesthesia must undergo continuous physiological monitoring to ensure safety. In the US, the American Society of Anesthesiologists (ASA) has established minimum monitoring guidelines for patients receiving general anesthesia, regional anesthesia, or sedation. These include electrocardiography (ECG), heart rate, blood pressure, inspired and expired gases, oxygen saturation of the blood (pulse oximetry), and temperature. In the UK the Association of Anaesthetists (AAGBI) have set minimum monitoring guidelines for general and regional anesthesia. For minor surgery, this generally includes monitoring of heart rate, oxygen saturation, blood pressure, and inspired and expired concentrations for oxygen, carbon dioxide, and inhalational anesthetic agents. For more invasive surgery, monitoring may also include temperature, urine output, blood pressure, central venous pressure, pulmonary artery pressure and pulmonary artery occlusion pressure, cardiac output, cerebral activity, and neuromuscular function. In addition, the operating room environment must be monitored for ambient temperature and humidity, as well as for accumulation of exhaled inhalational anesthetic agents, which might be deleterious to the health of operating room personnel.
Sedation
Sedation (also referred to as dissociative anesthesia or twilight anesthesia) creates hypnotic, sedative, anxiolytic, amnesic, anticonvulsant, and centrally produced muscle-relaxing properties. From the perspective of the person giving the sedation, the patient appears sleepy, relaxed and forgetful, allowing unpleasant procedures to be more easily completed. Sedatives such as benzodiazepines are usually given with pain relievers (such as narcotics, or local anesthetics or both) because they do not, by themselves, provide significant pain relief.
From the perspective of the subject receiving a sedative, the effect is a feeling of general relaxation, amnesia (loss of memory) and time passing quickly. Many drugs can produce a sedative effect including benzodiazepines, propofol, thiopental, ketamine and inhaled general anesthetics. The advantage of sedation over a general anesthetic is that it generally does not require support of the airway or breathing (no tracheal intubation or mechanical ventilation) and can have less of an effect on the cardiovascular system which may add to a greater margin of safety in some patients.
Regional anesthesia
When pain is blocked from a part of the body using local anesthetics, it is generally referred to as regional anesthesia. There are many types of regional anesthesia either by injecting into the tissue itself, a vein that feeds the area or around a nerve trunk that supplies sensation to the area. The latter are called nerve blocks and are divided into peripheral or central nerve blocks.
The following are the types of regional anesthesia:
Infiltrative anesthesia: a small amount of local anesthetic is injected in a small area to stop any sensation (such as during the closure of a laceration, as a continuous infusion or "freezing" a tooth). The effect is almost immediate.
Peripheral nerve block: local anesthetic is injected near a nerve that provides sensation to particular portion of the body. There is significant variation in the speed of onset and duration of anesthesia depending on the potency of the drug (e.g. Mandibular block, Fascia Iliaca Compartment Block).
Intravenous regional anesthesia (also called a Bier block): dilute local anesthetic is infused to a limb through a vein with a tourniquet placed to prevent the drug from diffusing out of the limb.
Central nerve block: Local anesthetic is injected or infused in or around a portion of the central nervous system (discussed in more detail below in spinal, epidural and caudal anesthesia).
Topical anesthesia: local anesthetics that are specially formulated to diffuse through the mucous membranes or skin to give a thin layer of analgesia to an area (e.g. EMLA patches).
Tumescent anesthesia: a large amount of very dilute local anesthetics are injected into the subcutaneous tissues during liposuction.
Systemic local anesthetics: local anesthetics are given systemically (orally or intravenous) to relieve neuropathic pain.
A 2018 Cochrane review found moderate-quality evidence that regional anesthesia may reduce the frequency of persistent postoperative pain (PPP) from 3 to 18 months following thoracotomy and 3 to 12 months following caesarean section. Low-quality evidence of benefit was found 3 to 12 months following breast cancer surgery. The review acknowledges certain limitations that impact its applicability beyond the surgeries and regional anesthesia techniques reviewed.
Nerve blocks
When local anesthetic is injected around a larger diameter nerve that transmits sensation from an entire region it is referred to as a nerve block or regional nerve blockade. Nerve blocks are commonly used in dentistry, when the mandibular nerve is blocked for procedures on the lower teeth. With larger diameter nerves (such as the interscalene block for upper limbs or psoas compartment block for lower limbs) the nerve and position of the needle is localized with ultrasound or electrical stimulation. Evidence supports the use of ultrasound guidance alone, or in combination with peripheral nerve stimulation, as superior for improved sensory and motor block, a reduction in the need for supplementation and fewer complications. Because of the large amount of local anesthetic required to affect the nerve, the maximum dose of local anesthetic has to be considered. Nerve blocks are also used as a continuous infusion, following major surgery such as knee, hip and shoulder replacement surgery, and may be associated with lower complications. Nerve blocks are also associated with a lower risk of neurologic complications compared to the more central epidural or spinal neuraxial blocks.
Spinal, epidural and caudal anesthesia
Central neuraxial anesthesia is the injection of local anesthetic around the spinal cord to provide analgesia in the abdomen, pelvis or lower extremities. It is divided into either spinal (injection into the subarachnoid space), epidural (injection outside of the subarachnoid space into the epidural space) and caudal (injection into the cauda equina or tail end of the spinal cord). Spinal and epidural are the most commonly used forms of central neuraxial blockade.
Spinal anesthesia is a "one-shot" injection that provides rapid onset and profound sensory anesthesia with lower doses of anesthetic, and is usually associated with neuromuscular blockade (loss of muscle control). Epidural anesthesia uses larger doses of anesthetic infused through an indwelling catheter which allows the anesthetic to be augmented should the effects begin to dissipate. Epidural anesthesia does not typically affect muscle control.
Because central neuraxial blockade causes arterial and venous vasodilation, a drop in blood pressure is common. This drop is largely dictated by the venous side of the circulatory system, which holds 75% of the circulating blood volume. The physiologic effects are much greater when the block is placed above the 5th thoracic vertebra. An ineffective block is most often due to inadequate anxiolysis or sedation rather than a failure of the block itself.
Acute pain management
Nociception (pain sensation) is not hard-wired into the body. Instead, it is a dynamic process wherein persistent painful stimuli can sensitize the system and either make pain management difficult or promote the development of chronic pain. For this reason, preemptive acute pain management may reduce both acute and chronic pain and is tailored to the surgery, the environment in which it is given (in-patient/out-patient) and the individual.
Pain management is classified into either pre-emptive or on-demand. On-demand pain medications typically include either opioid or non-steroidal anti-inflammatory drugs but can also make use of novel approaches such as inhaled nitrous oxide or ketamine. On demand drugs can be administered by a clinician ("as needed drug orders") or by the patient using patient-controlled analgesia (PCA). PCA has been shown to provide slightly better pain control and increased patient satisfaction when compared with conventional methods. Common preemptive approaches include epidural neuraxial blockade or nerve blocks. One review which looked at pain control after abdominal aortic surgery found that epidural blockade provides better pain relief (especially during movement) in the period up to three postoperative days. It reduces the duration of postoperative tracheal intubation by roughly half. The occurrence of prolonged postoperative mechanical ventilation and myocardial infarction is also reduced by epidural analgesia.
Risks and complications
Risks and complications as they relate to anesthesia are classified as either morbidity (a disease or disorder that results from anesthesia) or mortality (death that results from anesthesia). Quantifying how anesthesia contributes to morbidity and mortality can be difficult because the patient's health prior to surgery and the complexity of the surgical procedure can also contribute to the risks.
Prior to the introduction of anesthesia in the early 19th century, the physiologic stress from surgery caused significant complications and many deaths from shock. The faster the surgery was, the lower the rate of complications (leading to reports of very quick amputations). The advent of anesthesia allowed more complicated and life-saving surgery to be completed, decreased the physiologic stress of the surgery, but added an element of risk. It was two years after the introduction of ether anesthetics that the first death directly related to the use of anesthesia was reported.
Morbidity can be major (myocardial infarction, pneumonia, pulmonary embolism, kidney failure/chronic kidney disease, postoperative cognitive dysfunction and allergy) or minor (nausea, vomiting, readmission). There is usually overlap in the contributing factors that lead to morbidity and mortality between the health of the patient, the type of surgery being performed and the anesthetic. To understand the relative risk of each contributing factor, consider that the rate of deaths totally attributed to the patient's health is 1:870. Compare that to the rate of deaths totally attributed to surgical factors (1:2860) or anesthesia alone (1:185,056), illustrating that the single greatest factor in anesthetic mortality is the health of the patient. These statistics can also be compared to the first such study on mortality in anesthesia from 1954, which reported a rate of death from all causes at 1:75 and a rate attributed to anesthesia alone at 1:2680. Direct comparisons between mortality statistics cannot reliably be made over time and across countries because of differences in the stratification of risk factors; however, there is evidence that anesthetics have made a significant improvement in safety, though to what degree is uncertain.
Rather than stating a flat rate of morbidity or mortality, many factors are reported as contributing to the relative risk of the procedure and anesthetic combined. For instance, an operation on a person who is between the ages of 60 and 79 years old places the patient at 2.3 times greater risk than someone less than 60 years old. Having an ASA score of 3, 4 or 5 places the person at 10.7 times greater risk than someone with an ASA score of 1 or 2. Other variables include age greater than 80 (3.3 times risk compared to those under 60), gender (females have a lower risk of 0.8), urgency of the procedure (emergencies have a 4.4 times greater risk), experience of the person completing the procedure (less than 8 years experience and/or less than 600 cases have a 1.1 times greater risk) and the type of anesthetic (regional anesthetics carry lower risk than general anesthetics). Obstetric patients, the very young and the very old are all at greater risk of complications, so extra precautions may need to be taken.
On 14 December 2016, the Food and Drug Administration issued a Public Safety Communication warning that "repeated or lengthy use of general anesthetic and sedation drugs during surgeries or procedures in children younger than 3 years or in pregnant women during their third trimester may affect the development of children's brains." The warning was criticized by the American College of Obstetricians and Gynecologists, which pointed out the absence of direct evidence regarding use in pregnant women and the possibility that "this warning could inappropriately dissuade providers from providing medically indicated care during pregnancy." Patient advocates noted that a randomized clinical trial would be unethical, that the mechanism of injury is well-established in animals, and that studies had shown exposure to multiple uses of anesthetic significantly increased the risk of developing learning disabilities in young children, with a hazard ratio of 2.12 (95% confidence interval, 1.26–3.54).
Recovery
The immediate time after anesthesia is called emergence. Emergence from general anesthesia or sedation requires careful monitoring because there is still a risk of complication. Nausea and vomiting are reported at 9.8%, but will vary with the type of anesthetic and procedure. There is a need for airway support in 6.8% of cases, and there can be urinary retention (more common in those over 50 years of age) and hypotension in 2.7%. Hypothermia, shivering and confusion are also common in the immediate post-operative period because of the lack of muscle movement (and subsequent lack of heat production) during the procedure. A rare manifestation in the post-anesthetic period is functional neurological symptom disorder (FNSD).
Postoperative cognitive dysfunction (also known as POCD and post-anesthetic confusion) is a disturbance in cognition after surgery. It may also be variably used to describe emergence delirium (immediate post-operative confusion) and early cognitive dysfunction (diminished cognitive function in the first post-operative week). Although the three entities (delirium, early POCD and long-term POCD) are separate, the presence of delirium post-operatively predicts the presence of early POCD. There does not appear to be an association between delirium or early POCD and long-term POCD. According to a recent study conducted at the David Geffen School of Medicine at UCLA, the brain navigates its way through a series of activity clusters, or "hubs", on its way back to consciousness. Andrew Hudson, an assistant professor in anesthesiology, states: "Recovery from anesthesia is not simply the result of the anesthetic 'wearing off,' but also of the brain finding its way back through a maze of possible activity states to those that allow conscious experience. Put simply, the brain reboots itself."
Long-term POCD is a subtle deterioration in cognitive function, that can last for weeks, months, or longer. Most commonly, relatives of the person report a lack of attention, memory and loss of interest in activities previously dear to the person (such as crosswords). In a similar way, people in the workforce may report an inability to complete tasks at the same speed they could previously. There is good evidence that POCD occurs after cardiac surgery and the major reason for its occurrence is the formation of microemboli. POCD also appears to occur in non-cardiac surgery. Its causes in non-cardiac surgery are less clear but older age is a risk factor for its occurrence.
History
The first attempts at general anesthesia were probably herbal remedies administered in prehistory. Alcohol is one of the oldest known sedatives and it was used in ancient Mesopotamia thousands of years ago. The Sumerians are said to have cultivated and harvested the opium poppy (Papaver somniferum) in lower Mesopotamia as early as 3400 BCE. The ancient Egyptians had some surgical instruments, as well as crude analgesics and sedatives, including possibly an extract prepared from the mandrake fruit.
In China, Bian Que (Chinese: 扁鹊, Wade–Giles: Pien Ch'iao, ) was a legendary Chinese internist and surgeon who reportedly used general anesthesia for surgical procedures. Despite this, it was the Chinese physician Hua Tuo whom historians considered the first verifiable historical figure to develop a type of mixture of anesthesia, though his recipe has yet to be fully discovered.
Throughout Europe, Asia, and the Americas, a variety of Solanum species containing potent tropane alkaloids was used for anesthesia. In 13th-century Italy, Theodoric Borgognoni used similar mixtures along with opiates to induce unconsciousness, and treatment with the combined alkaloids proved a mainstay of anesthesia until the 19th century. Local anesthetics were used in Inca civilization, where shamans chewed coca leaves and performed operations on the skull while spitting into the wounds they had inflicted in order to anesthetize them. Cocaine was later isolated and became the first effective local anesthetic. It was first used in eye surgery in 1884 by Karl Koller, at the suggestion of Sigmund Freud. German surgeon August Bier (1861–1949) was the first to use cocaine for intrathecal anesthesia in 1898. Romanian surgeon Nicolae Racoviceanu-Piteşti (1860–1942) was the first to use opioids for intrathecal analgesia; he presented his experience in Paris in 1901.
The "soporific sponge" ("sleep sponge") used by Arabic physicians was introduced to Europe by the Salerno school of medicine in the late 12th century and by Ugo Borgognoni (1180–1258) in the 13th century. The sponge was promoted and described by Ugo's son and fellow surgeon, Theodoric Borgognoni (1205–1298). In this anesthetic method, a sponge was soaked in a dissolved solution of opium, mandragora, hemlock juice, and other substances. The sponge was then dried and stored; just before surgery the sponge was moistened and then held under the patient's nose. When all went well, the fumes rendered the individual unconscious.
The most famous anesthetic, ether, may have been synthesized as early as the 8th century, but it took many centuries for its anesthetic importance to be appreciated, even though the 16th century physician and polymath Paracelsus noted that chickens made to breathe it not only fell asleep but also felt no pain. By the early 19th century, ether was being used by humans, but only as a recreational drug.
Meanwhile, in 1772, English scientist Joseph Priestley discovered the gas nitrous oxide. Initially, people thought this gas to be lethal, even in small doses, like some other nitrogen oxides. However, in 1799, British chemist and inventor Humphry Davy decided to find out by experimenting on himself. To his astonishment he found that nitrous oxide made him laugh, so he nicknamed it "laughing gas". In 1800 Davy wrote about the potential anesthetic properties of nitrous oxide in relieving pain during surgery, but nobody at that time pursued the matter any further.
On 14 November 1804, Hanaoka Seishū, a Japanese doctor, became the first person to successfully perform surgery using general anesthesia. Hanaoka learned traditional Japanese medicine as well as Dutch-imported European surgery and Chinese medicine. After years of research and experimentation, he finally developed a formula which he named tsūsensan (also known as mafutsu-san), which combined Korean morning glory and other herbs.
Hanaoka's success in performing this painless operation soon became widely known, and patients began to arrive from all parts of Japan. Hanaoka went on to perform many operations using tsūsensan, including resection of malignant tumors, extraction of bladder stones, and extremity amputations. Before his death in 1835, Hanaoka performed more than 150 operations for breast cancer. However, this work did not benefit the rest of the world until 1854, as the national isolation policy of the Tokugawa shogunate prevented Hanaoka's achievements from being publicized until after the isolation ended. Nearly forty years would pass before Crawford Long, who is credited in the West as the inventor of modern anesthesia, used general anesthesia in Jefferson, Georgia.
Long noticed that his friends felt no pain when they injured themselves while staggering around under the influence of diethyl ether. He immediately thought of its potential in surgery. Conveniently, a participant in one of those "ether frolics", a student named James Venable, had two small tumors he wanted excised. But fearing the pain of surgery, Venable kept putting the operation off. Hence, Long suggested that he have his operation while under the influence of ether. Venable agreed, and on 30 March 1842 he underwent a painless operation. However, Long did not announce his discovery until 1849.
Horace Wells conducted the first public demonstration of an inhalational anesthetic at the Massachusetts General Hospital in Boston in 1845. However, the nitrous oxide was improperly administered and the person cried out in pain. On 16 October 1846, Boston dentist William Thomas Green Morton gave a successful demonstration using diethyl ether to medical students at the same venue. Morton, who was unaware of Long's previous work, was invited to the Massachusetts General Hospital to demonstrate his new technique for painless surgery. After Morton had induced anesthesia, surgeon John Collins Warren removed a tumor from the neck of Edward Gilbert Abbott. This occurred in the surgical amphitheater now called the Ether Dome. The previously skeptical Warren was impressed and stated, "Gentlemen, this is no humbug." In a letter to Morton shortly thereafter, physician and writer Oliver Wendell Holmes Sr. proposed naming the state produced "anesthesia", and the procedure an "anesthetic".
Morton at first attempted to hide the actual nature of his anesthetic substance, referring to it as Letheon. He received a US patent for his substance, but news of the successful anesthetic spread quickly by late 1846. Respected surgeons in Europe including Liston, Dieffenbach, Pirogov, and Syme quickly undertook numerous operations with ether. An American-born physician, Boott, encouraged London dentist James Robinson to perform a dental procedure on a Miss Lonsdale. This was the first case of an operator-anesthetist. On the same day, 19 December 1846, in Dumfries Royal Infirmary, Scotland, a Dr. Scott used ether for a surgical procedure. The first use of anesthesia in the Southern Hemisphere took place in Launceston, Tasmania, that same year. Drawbacks with ether such as excessive vomiting and its explosive flammability led to its replacement in England with chloroform.
Chloroform was discovered in 1831 by the American physician Samuel Guthrie (1782–1848), and independently a few months later by the Frenchman Eugène Soubeiran (1797–1859) and by Justus von Liebig (1803–1873) in Germany; it was named and chemically characterized in 1834 by Jean-Baptiste Dumas (1800–1884). In 1842, Dr Robert Mortimer Glover in London discovered the anesthetic qualities of chloroform on laboratory animals.
In 1847, Scottish obstetrician James Young Simpson was the first to demonstrate the anesthetic properties of chloroform on humans and helped to popularize the drug for use in medicine. The first supply came from local pharmacists, James Duncan and William Flockhart, and its use spread quickly, with 750,000 doses weekly in Britain by 1895. Simpson arranged for Flockhart to supply Florence Nightingale. Chloroform gained royal approval in 1853 when John Snow administered it to Queen Victoria when she was in labor with Prince Leopold. For the experience of childbirth itself, chloroform met all the Queen's expectations; she stated it was "delightful beyond measure". Chloroform was not without fault, though. The first fatality directly attributed to chloroform administration was recorded on 28 January 1848 after the death of Hannah Greener. This was the first of many deaths to follow from the untrained handling of chloroform. Surgeons began to appreciate the need for a trained anesthetist. The need, as Thatcher writes, was for an anesthetist to "(1) Be satisfied with the subordinate role that the work would require, (2) Make anesthesia their one absorbing interest, (3) not look at the situation of anesthetist as one that put them in a position to watch and learn from the surgeons technique (4) accept the comparatively low pay and (5) have the natural aptitude and intelligence to develop a high level of skill in providing the smooth anesthesia and relaxation that the surgeon demanded". These qualities of an anesthetist were often found in submissive medical students and even members of the public. More often, surgeons sought out nurses to provide anesthesia. By the time of the American Civil War, many nurses had been professionally trained with the support of surgeons.
John Snow of London published articles from May 1848 onwards "On Narcotism by the Inhalation of Vapours" in the London Medical Gazette. Snow also involved himself in the production of equipment needed for the administration of inhalational anesthetics, the forerunner of today's anesthesia machines.
Alice Magaw, born in November 1860, is often referred to as "The Mother of Anesthesia". Her renown as the personal anesthesia provider for William and Charles Mayo was solidified by Mayo's own words in his 1905 article, in which he described his satisfaction with and reliance on nurse anesthetists: "The question of anaesthesia is a most important one. We have regular anaesthetists [on] whom we can depend so that I can devote my entire attention to the surgical work." Magaw kept thorough records of her cases and recorded these anesthetics. In her publication reviewing more than 14,000 surgical anesthetics, Magaw indicates she successfully provided anesthesia without an anesthetic-related death. Magaw describes in another article, "We have administered an anesthetic 1,092 times; ether alone 674 times; chloroform 245 times; ether and chloroform combined 173 times. I can report that out of this number, 1,092 cases, we have not had an accident". Magaw's records and outcomes created a legacy defining that the delivery of anesthesia by nurses would serve the surgical community without increasing the risks to patients. In fact, Magaw's outcomes would eclipse those of practitioners today.
The first comprehensive medical textbook on the subject, Anesthesia, was authored in 1914 by anesthesiologist Dr. James Tayloe Gwathmey and the chemist Dr. Charles Baskerville. This book served as the standard reference for the specialty for decades and included details on the history of anesthesia as well as the physiology and techniques of inhalation, rectal, intravenous, and spinal anesthesia.
Of these first famous anesthetics, only nitrous oxide is still widely used today, with chloroform and ether having been replaced by safer but sometimes more expensive general anesthetics, and cocaine by more effective local anesthetics with less abuse potential.
Society and culture
Almost all healthcare providers use anesthetic drugs to some degree, but most health professions have their own field of specialists in the field including medicine, nursing and dentistry.
Doctors specializing in anaesthesiology, including perioperative care, development of an anesthetic plan, and the administration of anesthetics are known in the US as anesthesiologists and in the UK, Canada, Australia, and NZ as anaesthetists or anaesthesiologists. All anesthetics in the UK, Australia, New Zealand, Hong Kong and Japan are administered by doctors. Nurse anesthetists also administer anesthesia in 109 nations. In the US, 35% of anesthetics are provided by physicians in solo practice, about 55% are provided by anesthesia care teams (ACTs) with anesthesiologists medically directing certified registered nurse anesthetists (CRNAs) or anesthesiologist assistants, and about 10% are provided by CRNAs in solo practice. There can also be anesthesiologist assistants (US) or physicians' assistants (anaesthesia) (UK) who assist with anesthesia.
Special populations
There are many circumstances when anesthesia needs to be altered for special circumstances due to the procedure (such as in cardiac surgery, cardiothoracic anesthesiology or neurosurgery), the patient (such as in pediatric anesthesia, geriatric, bariatric or obstetrical anesthesia) or special circumstances (such as in trauma, prehospital care, robotic surgery or extreme environments).
See also
References
External links
NICE Guidelines on pre-operative tests
ASA Physical Status Classification
DMOZ link to anesthesia society sites
A Comprehensive Guide to Anesthetic Drugs and Their Mechanisms of Action
Anesthesiology | Anesthesia | [
"Biology"
] | 8,761 | [
"Medical technology"
] |
56,565 | https://en.wikipedia.org/wiki/Circadian%20rhythm | A circadian rhythm (), or circadian cycle, is a natural oscillation that repeats roughly every 24 hours. Circadian rhythms can refer to any process that originates within an organism (i.e., endogenous) and responds to the environment (is entrained by the environment). Circadian rhythms are regulated by a circadian clock whose primary function is to rhythmically co-ordinate biological processes so they occur at the correct time to maximize the fitness of an individual. Circadian rhythms have been widely observed in animals, plants, fungi and cyanobacteria and there is evidence that they evolved independently in each of these kingdoms of life.
The term circadian comes from the Latin , meaning "around", and , meaning "day". Processes with 24-hour cycles are more generally called diurnal rhythms; diurnal rhythms should not be called circadian rhythms unless they can be confirmed as endogenous, and not environmental.
Although circadian rhythms are endogenous, they are adjusted to the local environment by external cues called zeitgebers (from German (; )), which include light, temperature and redox cycles. In clinical settings, an abnormal circadian rhythm in humans is known as a circadian rhythm sleep disorder.
History
The earliest recorded account of a circadian process is credited to Theophrastus, dating from the 4th century BC, probably provided to him by report of Androsthenes, a ship's captain serving under Alexander the Great. In his book, 'Περὶ φυτῶν ἱστορία', or 'Enquiry into plants', Theophrastus describes a "tree with many leaves like the rose, and that this closes at night, but opens at sunrise, and by noon is completely unfolded; and at evening again it closes by degrees and remains shut at night, and the natives say that it goes to sleep." The tree mentioned by him was much later identified as the tamarind tree by the botanist, H Bretzl, in his book on the botanical findings of the Alexandrian campaigns.
The observation of a circadian or diurnal process in humans is mentioned in Chinese medical texts dated to around the 13th century, including the Noon and Midnight Manual and the Mnemonic Rhyme to Aid in the Selection of Acu-points According to the Diurnal Cycle, the Day of the Month and the Season of the Year.
In 1729, French scientist Jean-Jacques d'Ortous de Mairan conducted the first experiment designed to distinguish an endogenous clock from responses to daily stimuli. He noted that 24-hour patterns in the movement of the leaves of the plant Mimosa pudica persisted, even when the plants were kept in constant darkness.
In 1896, Patrick and Gilbert observed that during a prolonged period of sleep deprivation, sleepiness increases and decreases with a period of approximately 24 hours. In 1918, J.S. Szymanski showed that animals are capable of maintaining 24-hour activity patterns in the absence of external cues such as light and changes in temperature.
In the early 20th century, circadian rhythms were noticed in the rhythmic feeding times of bees. Auguste Forel, Ingeborg Beling, and Oskar Wahl conducted numerous experiments to determine whether this rhythm was attributable to an endogenous clock. The existence of circadian rhythm was independently discovered in fruit flies in 1935 by two German zoologists, Hans Kalmus and Erwin Bünning.
In 1954, an important experiment reported by Colin Pittendrigh demonstrated that eclosion (the process of pupa turning into adult) in Drosophila pseudoobscura was a circadian behaviour. He demonstrated that while temperature played a vital role in eclosion rhythm, the period of eclosion was delayed but not stopped when temperature was decreased.
The term circadian was coined by Franz Halberg in 1959. According to Halberg's original definition:
In 1977, the International Committee on Nomenclature of the International Society for Chronobiology formally adopted the definition:
Ron Konopka and Seymour Benzer identified the first clock mutation in Drosophila in 1971, naming it the "period" (per) gene, the first discovered genetic determinant of behavioral rhythmicity. The per gene was isolated in 1984 by two teams of researchers. Konopka, Jeffrey Hall, Michael Rosbash and their team showed that the per locus is the centre of the circadian rhythm, and that loss of per stops circadian activity. At the same time, Michael W. Young's team reported similar effects of per, and that the gene covers a 7.1-kilobase (kb) interval on the X chromosome and encodes a 4.5-kb poly(A)+ RNA. They went on to discover the key genes and neurones of the Drosophila circadian system, for which Hall, Rosbash and Young received the Nobel Prize in Physiology or Medicine in 2017.
Joseph Takahashi discovered the first mammalian circadian clock mutation (clockΔ19) using mice in 1994. However, recent studies show that deletion of clock does not lead to a behavioral phenotype (the animals still have normal circadian rhythms), which questions its importance in rhythm generation.
The first human clock mutation was identified in an extended Utah family by Chris Jones, and genetically characterized by Ying-Hui Fu and Louis Ptacek. Affected individuals are extreme 'morning larks' with 4-hour advanced sleep and other rhythms. This form of familial advanced sleep phase syndrome is caused by a single amino acid change, S662➔G, in the human PER2 protein.
Criteria
To be called circadian, a biological rhythm must meet these three general criteria:
The rhythm has an endogenous free-running period of approximately 24 hours. The rhythm persists in constant conditions, i.e. constant darkness, with a period of about 24 hours. The period of the rhythm in constant conditions is called the free-running period and is denoted by the Greek letter τ (tau). The rationale for this criterion is to distinguish circadian rhythms from simple responses to daily external cues. A rhythm cannot be said to be endogenous unless it has been tested and persists in conditions without external periodic input. In diurnal animals (active during daylight hours), in general τ is slightly greater than 24 hours, whereas, in nocturnal animals (active at night), in general τ is shorter than 24 hours.
The rhythms are entrainable. The rhythm can be reset by exposure to external stimuli (such as light and heat), a process called entrainment. The external stimulus used to entrain a rhythm is called the zeitgeber, or "time giver". Travel across time zones illustrates the ability of the human biological clock to adjust to the local time; a person will usually experience jet lag before entrainment of their circadian clock has brought it into sync with local time.
The rhythms exhibit temperature compensation. In other words, they maintain circadian periodicity over a range of physiological temperatures. Many organisms live at a broad range of temperatures, and differences in thermal energy will affect the kinetics of all molecular processes in their cells. In order to keep track of time, the organism's circadian clock must maintain roughly a 24-hour periodicity despite the changing kinetics, a property known as temperature compensation. The Q10 temperature coefficient is a measure of this compensating effect. If the Q10 coefficient remains approximately 1 as temperature increases, the rhythm is considered to be temperature-compensated.
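As an illustration, the Q10 of a rhythm can be computed from rates (the reciprocals of the measured free-running periods) at two temperatures. The sketch below uses hypothetical period and temperature values chosen only for illustration:

```python
def q10(rate1: float, temp1: float, rate2: float, temp2: float) -> float:
    """Temperature coefficient: Q10 = (rate2 / rate1) ** (10 / (temp2 - temp1))."""
    return (rate2 / rate1) ** (10.0 / (temp2 - temp1))

# Hypothetical free-running periods: 24.4 h at 20 C and 24.0 h at 30 C.
# The "rate" of a rhythm is the reciprocal of its period (cycles per hour).
q = q10(rate1=1 / 24.4, temp1=20.0, rate2=1 / 24.0, temp2=30.0)
print(f"Q10 = {q:.3f}")  # ~1.017, close to 1: the rhythm is temperature-compensated
```

A Q10 far above 1 would mean the clock runs markedly faster when warm, i.e. it is not compensated.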
Origin
Circadian rhythms allow organisms to anticipate and prepare for precise and regular environmental changes. They thus enable organisms to make better use of environmental resources (e.g. light and food) compared to those that cannot predict such availability. It has therefore been suggested that circadian rhythms put organisms at a selective advantage in evolutionary terms. However, rhythmicity appears to be as important in regulating and coordinating internal metabolic processes, as in coordinating with the environment. This is suggested by the maintenance (heritability) of circadian rhythms in fruit flies after several hundred generations in constant laboratory conditions, as well as in creatures in constant darkness in the wild, and by the experimental elimination of behavioral—but not physiological—circadian rhythms in quail.
What drove circadian rhythms to evolve has been an enigmatic question. Previous hypotheses emphasized that photosensitive proteins and circadian rhythms may have originated together in the earliest cells, with the purpose of protecting replicating DNA from high levels of damaging ultraviolet radiation during the daytime. As a result, replication was relegated to the dark. However, evidence for this is lacking: in fact the simplest organisms with a circadian rhythm, the cyanobacteria, do the opposite of this: they divide more in the daytime. Recent studies instead highlight the importance of co-evolution of redox proteins with circadian oscillators in all three domains of life following the Great Oxidation Event approximately 2.3 billion years ago. The current view is that circadian changes in environmental oxygen levels and the production of reactive oxygen species (ROS) in the presence of daylight are likely to have driven a need to evolve circadian rhythms to preempt, and therefore counteract, damaging redox reactions on a daily basis.
The simplest known circadian clocks are bacterial circadian rhythms, exemplified by the prokaryote cyanobacteria. Recent research has demonstrated that the circadian clock of Synechococcus elongatus can be reconstituted in vitro with just the three proteins (KaiA, KaiB, KaiC) of their central oscillator. This clock has been shown to sustain a 22-hour rhythm over several days upon the addition of ATP. Previous explanations of the prokaryotic circadian timekeeper were dependent upon a DNA transcription/translation feedback mechanism.
A defect in the human homologue of the Drosophila "period" gene was identified as a cause of the sleep disorder FASPS (familial advanced sleep phase syndrome), underscoring the conserved nature of the molecular circadian clock through evolution. Many more genetic components of the biological clock are now known. Their interactions produce an interlocked feedback loop of gene products, resulting in periodic fluctuations that the cells of the body interpret as a specific time of the day.
It is now known that the molecular circadian clock can function within a single cell. That is, it is cell-autonomous. This was shown by Gene Block in isolated mollusk basal retinal neurons (BRNs). At the same time, different cells may communicate with each other resulting in a synchronized output of electrical signaling. These may interface with endocrine glands of the brain to result in periodic release of hormones. The receptors for these hormones may be located far across the body and synchronize the peripheral clocks of various organs. Thus, the information of the time of the day as relayed by the eyes travels to the clock in the brain, and, through that, clocks in the rest of the body may be synchronized. This is how the timing of, for example, sleep/wake, body temperature, thirst, and appetite are coordinately controlled by the biological clock.
Importance in animals
Circadian rhythmicity is present in the sleeping and feeding patterns of animals, including human beings. There are also clear patterns of core body temperature, brain wave activity, hormone production, cell regeneration, and other biological activities. In addition, photoperiodism, the physiological reaction of organisms to the length of day or night, is vital to both plants and animals, and the circadian system plays a role in the measurement and interpretation of day length. Timely prediction of seasonal periods of weather conditions, food availability, or predator activity is crucial for survival of many species. Although not the only parameter, the changing length of the photoperiod (day length) is the most predictive environmental cue for the seasonal timing of physiology and behavior, most notably for timing of migration, hibernation, and reproduction.
Effect of circadian disruption
Mutations or deletions of clock genes in mice have demonstrated the importance of body clocks to ensure the proper timing of cellular/metabolic events; clock-mutant mice are hyperphagic and obese, and have altered glucose metabolism. In mice, deletion of the Rev-ErbA alpha clock gene can result in diet-induced obesity and changes the balance between glucose and lipid utilization, predisposing to diabetes. However, it is not clear whether there is a strong association between clock gene polymorphisms in humans and the susceptibility to develop the metabolic syndrome.
Effect of light–dark cycle
The rhythm is linked to the light–dark cycle. Animals, including humans, kept in total darkness for extended periods eventually function with a free-running rhythm. Their sleep cycle is pushed back or forward each "day", depending on whether their "day", their endogenous period, is shorter or longer than 24 hours. The environmental cues that reset the rhythms each day are called zeitgebers. Totally blind subterranean mammals (e.g., blind mole rat Spalax sp.) are able to maintain their endogenous clocks in the apparent absence of external stimuli. Although they lack image-forming eyes, their photoreceptors (which detect light) are still functional; they do surface periodically as well.
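As a back-of-the-envelope sketch of that drift (the 24.3-hour value below is an assumed example of an endogenous period, not a measured one), the daily shift of a free-running rhythm is simply the difference between the endogenous period and 24 hours:

```python
tau_hours = 24.3  # hypothetical endogenous (free-running) period
drift_min_per_day = (tau_hours - 24.0) * 60
print(f"sleep onset drifts ~{drift_min_per_day:.0f} minutes later per day")
print(f"cumulative shift after 40 days in constant darkness: "
      f"{40 * drift_min_per_day / 60:.0f} h")
```

With this assumed period the rhythm would be a full half-day out of phase with the external clock after about 40 "days" without zeitgebers.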
Free-running organisms that normally have one or two consolidated sleep episodes will still have them when in an environment shielded from external cues, but the rhythm is not entrained to the 24-hour light–dark cycle in nature. The sleep–wake rhythm may, in these circumstances, become out of phase with other circadian or ultradian rhythms such as metabolic, hormonal, CNS electrical, or neurotransmitter rhythms.
Recent research has influenced the design of spacecraft environments, as systems that mimic the light–dark cycle have been found to be highly beneficial to astronauts. Light therapy has been trialed as a treatment for sleep disorders.
Arctic animals
Norwegian researchers at the University of Tromsø have shown that some Arctic animals (e.g., ptarmigan, reindeer) show circadian rhythms only in the parts of the year that have daily sunrises and sunsets. In one study of reindeer, animals at 70 degrees North showed circadian rhythms in the autumn, winter and spring, but not in the summer. Reindeer on Svalbard at 78 degrees North showed such rhythms only in autumn and spring. The researchers suspect that other Arctic animals as well may not show circadian rhythms in the constant light of summer and the constant dark of winter.
A 2006 study in northern Alaska found that day-living ground squirrels and nocturnal porcupines strictly maintain their circadian rhythms through 82 days and nights of sunshine. The researchers speculate that these two rodents notice that the apparent distance between the sun and the horizon is shortest once a day, and thus have a sufficient signal to entrain (adjust) by.
Butterflies and moths
The navigation of the fall migration of the Eastern North American monarch butterfly (Danaus plexippus) to their overwintering grounds in central Mexico uses a time-compensated sun compass that depends upon a circadian clock in their antennae. Circadian rhythm is also known to control mating behavior in certain moth species such as Spodoptera littoralis, where females produce a specific pheromone that attracts and resets the male circadian rhythm to induce mating at night.
In plants
Plant circadian rhythms tell the plant what season it is and when to flower for the best chance of attracting pollinators. Behaviors showing rhythms include leaf movement (Nyctinasty), growth, germination, stomatal/gas exchange, enzyme activity, photosynthetic activity, and fragrance emission, among others. Circadian rhythms occur as a plant entrains to synchronize with the light cycle of its surrounding environment. These rhythms are endogenously generated, self-sustaining and are relatively constant over a range of ambient temperatures. Important features include two interacting transcription-translation feedback loops: proteins containing PAS domains, which facilitate protein-protein interactions; and several photoreceptors that fine-tune the clock to different light conditions. Anticipation of changes in the environment allows appropriate changes in a plant's physiological state, conferring an adaptive advantage. A better understanding of plant circadian rhythms has applications in agriculture, such as helping farmers stagger crop harvests to extend crop availability and securing against massive losses due to weather.
Light is the signal by which plants synchronize their internal clocks to their environment and is sensed by a wide variety of photoreceptors. Red and blue light are absorbed through several phytochromes and cryptochromes. Phytochrome A, phyA, is light labile and allows germination and de-etiolation when light is scarce. Phytochromes B–E are more stable, with phyB being the main phytochrome in seedlings grown in the light. The cryptochrome (cry) gene is also a light-sensitive component of the circadian clock and is thought to be involved both as a photoreceptor and as part of the clock's endogenous pacemaker mechanism. Cryptochromes 1–2 (involved in blue–UVA) help to maintain the period length in the clock through a whole range of light conditions.
The central oscillator generates a self-sustaining rhythm and is driven by two interacting feedback loops that are active at different times of day. The morning loop consists of CCA1 (Circadian Clock Associated 1) and LHY (Late Elongated Hypocotyl), which encode closely related MYB transcription factors that regulate circadian rhythms in Arabidopsis, as well as PRR7 and PRR9 (Pseudo-Response Regulators). The evening loop consists of GI (Gigantea) and ELF4, both involved in regulation of flowering time genes. When CCA1 and LHY are overexpressed (under constant light or dark conditions), plants become arrhythmic, and mRNA signals reduce, contributing to a negative feedback loop. Gene expression of CCA1 and LHY oscillates and peaks in the early morning, whereas TOC1 gene expression oscillates and peaks in the early evening. While it was previously hypothesised that these three genes model a negative feedback loop in which over-expressed CCA1 and LHY repress TOC1 and over-expressed TOC1 is a positive regulator of CCA1 and LHY, it was shown in 2012 by Andrew Millar and others that TOC1 in fact serves as a repressor not only of CCA1, LHY, PRR7 and PRR9 in the morning loop but also of GI and ELF4 in the evening loop. This finding and further computational modeling of TOC1 gene functions and interactions suggest a reframing of the plant circadian clock as a triple negative-component repressilator model rather than the positive/negative-element feedback loop characterizing the clock in mammals.
In 2018, researchers found that the expression of PRR5 and TOC1 hnRNA nascent transcripts follows the same oscillatory pattern as processed mRNA transcripts in A. thaliana. LNKs bind to the 5' region of PRR5 and TOC1 and interact with RNAP II and other transcription factors. Moreover, the RVE8-LNKs interaction enables a permissive histone-methylation pattern (H3K4me3) to be modified, and the histone modification itself parallels the oscillation of clock gene expression.
It has previously been found that matching a plant's circadian rhythm to its external environment's light and dark cycles has the potential to positively affect the plant. Researchers came to this conclusion by performing experiments on three different varieties of Arabidopsis thaliana. One of these varieties had a normal 24-hour circadian cycle. The other two varieties were mutated, one to have a circadian cycle of more than 27 hours, and one to have a shorter than normal circadian cycle of 20 hours.
The Arabidopsis with the 24-hour circadian cycle was grown in three different environments. One of these environments had a 20-hour light and dark cycle (10 hours of light and 10 hours of dark), the other had a 24-hour light and dark cycle (12 hours of light and 12 hours of dark), and the final environment had a 28-hour light and dark cycle (14 hours of light and 14 hours of dark). The two mutated plants were grown in both an environment that had a 20-hour light and dark cycle and in an environment that had a 28-hour light and dark cycle. It was found that the variety of Arabidopsis with a 24-hour circadian rhythm cycle grew best in an environment that also had a 24-hour light and dark cycle. Overall, it was found that all the varieties of Arabidopsis thaliana had greater levels of chlorophyll and increased growth in environments whose light and dark cycles matched their circadian rhythm.
Researchers suggested that a reason for this could be that matching an Arabidopsis circadian rhythm to its environment could allow the plant to be better prepared for dawn and dusk, and thus be able to better synchronize its processes. In this study, it was also found that the genes that help to control chlorophyll peaked a few hours after dawn. This appears to be consistent with the proposed phenomenon known as metabolic dawn.
According to the metabolic dawn hypothesis, sugars produced by photosynthesis have potential to help regulate the circadian rhythm and certain photosynthetic and metabolic pathways. As the sun rises, more light becomes available, which normally allows more photosynthesis to occur. The sugars produced by photosynthesis repress PRR7. This repression of PRR7 then leads to the increased expression of CCA1. On the other hand, decreased photosynthetic sugar levels increase PRR7 expression and decrease CCA1 expression. This feedback loop between CCA1 and PRR7 is what is proposed to cause metabolic dawn.
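The push-pull logic of this proposed loop can be caricatured with a toy steady-state calculation. The sketch below is purely illustrative: the repression strengths and decay rates are invented numbers, not measured plant parameters, and the model is a crude stand-in for the hypothesis, not a published clock model:

```python
def steady_state(sugar: float, repression: float = 5.0, decay: float = 0.5):
    """Steady state of a toy two-gene chain: sugar represses PRR7 synthesis,
    PRR7 represses CCA1 synthesis; both decay at a first-order rate.
    All numbers are invented for illustration only."""
    prr7 = (1.0 / (1.0 + repression * sugar)) / decay
    cca1 = (1.0 / (1.0 + repression * prr7)) / decay
    return prr7, cca1

for label, sugar in (("day (high sugar)", 1.0), ("night (no sugar)", 0.0)):
    prr7, cca1 = steady_state(sugar)
    print(f"{label}: PRR7 = {prr7:.2f}, CCA1 = {cca1:.2f}")
```

Even this crude calculation reproduces the qualitative claim: when photosynthetic sugar is present, PRR7 is low and CCA1 is high, and the pattern inverts in the dark.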
In Drosophila
The molecular mechanisms of circadian rhythm and light perception are best understood in Drosophila. Clock genes were first discovered in Drosophila, and they act together with the clock neurones. There are two unique rhythms, one during the process of hatching (called eclosion) from the pupa, and the other during mating. The clock neurones are located in distinct clusters in the central brain. The best-understood clock neurones are the large and small lateral ventral neurons (l-LNvs and s-LNvs) of the optic lobe. These neurones produce pigment dispersing factor (PDF), a neuropeptide that acts as a circadian neuromodulator between different clock neurones.
Drosophila circadian rhythm operates through a transcription-translation feedback loop. The core clock mechanism consists of two interdependent feedback loops, namely the PER/TIM loop and the CLK/CYC loop. The CLK/CYC loop occurs during the day and initiates the transcription of the per and tim genes, but their protein levels remain low until dusk, because daylight also activates the doubletime (dbt) gene. DBT protein causes phosphorylation and turnover of monomeric PER proteins. TIM is also phosphorylated by shaggy until sunset. After sunset, DBT disappears, so that PER molecules stably bind to TIM. The PER/TIM dimer enters the nucleus at night and binds to CLK/CYC dimers. Bound PER completely stops the transcriptional activity of CLK and CYC.
In the early morning, light activates the cry gene and its protein CRY causes the breakdown of TIM. Thus PER/TIM dimer dissociates, and the unbound PER becomes unstable. PER undergoes progressive phosphorylation and ultimately degradation. Absence of PER and TIM allows activation of clk and cyc genes. Thus, the clock is reset to start the next circadian cycle.
PER-TIM model
This protein model was developed based on the oscillations of the PER and TIM proteins in Drosophila. It builds on its predecessor, the PER model, which explained how the PER gene and its protein influence the biological clock. The model includes the formation of a nuclear PER-TIM complex which influences the transcription of the PER and TIM genes (by providing negative feedback) and the multiple phosphorylation of these two proteins. The circadian oscillations of these two proteins seem to synchronise with the light-dark cycle even if they are not necessarily dependent on it. Both PER and TIM proteins are phosphorylated, and after they form the PER-TIM nuclear complex they return inside the nucleus to stop the expression of PER and TIM mRNA. This inhibition lasts as long as the protein or the mRNA is not degraded; once degradation occurs, the complex releases the inhibition. It is also worth noting that the degradation of the TIM protein is sped up by light.
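A common way to build intuition for such transcription-translation feedback is a Goodwin-type toy oscillator: mRNA makes a protein, the protein enters the nucleus, and the nuclear form represses its own transcription. The sketch below is this generic caricature, not the published PER-TIM equations, and every parameter value is invented:

```python
def simulate(hours: float = 300.0, dt: float = 0.01, n: int = 12, k: float = 0.15):
    """Generic Goodwin-type oscillator: mRNA (m) -> protein (p) -> nuclear
    repressor (r) that inhibits transcription of m. Invented parameters."""
    m = p = r = 0.5
    ms, ts = [], []
    t = 0.0
    while t < hours:
        dm = 1.0 / (1.0 + r ** n) - k * m  # transcription, repressed by r
        dp = k * m - k * p                 # translation and turnover
        dr = k * p - k * r                 # nuclear entry and degradation
        m, p, r = m + dt * dm, p + dt * dp, r + dt * dr
        t += dt
        ms.append(m)
        ts.append(t)
    # interval between the last two local maxima of the mRNA trace
    peaks = [ts[i] for i in range(1, len(ms) - 1) if ms[i - 1] < ms[i] >= ms[i + 1]]
    print(f"emergent free-running period: ~{peaks[-1] - peaks[-2]:.1f} h")

simulate()
```

With a sufficiently steep repression term (the Hill exponent n), the delayed negative feedback sustains oscillations with a period on the order of a day, set entirely by the turnover rates rather than by any external driver.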
In mammals
The primary circadian clock in mammals is located in the suprachiasmatic nucleus (or nuclei) (SCN), a pair of distinct groups of cells located in the hypothalamus. Destruction of the SCN results in the complete absence of a regular sleep–wake rhythm. The SCN receives information about illumination through the eyes. The retina of the eye contains "classical" photoreceptors ("rods" and "cones"), which are used for conventional vision. But the retina also contains specialized ganglion cells that are directly photosensitive, and project directly to the SCN, where they help in the entrainment (synchronization) of this master circadian clock. The proteins involved in the SCN clock are homologous to those found in the fruit fly.
These cells contain the photopigment melanopsin and their signals follow a pathway called the retinohypothalamic tract, leading to the SCN. If cells from the SCN are removed and cultured, they maintain their own rhythm in the absence of external cues.
The SCN takes the information on the lengths of the day and night from the retina, interprets it, and passes it on to the pineal gland, a tiny structure shaped like a pine cone and located on the epithalamus. In response, the pineal secretes the hormone melatonin. Secretion of melatonin peaks at night and ebbs during the day and its presence provides information about night-length.
Several studies have indicated that pineal melatonin feeds back on SCN rhythmicity to modulate circadian patterns of activity and other processes. However, the nature and system-level significance of this feedback are unknown.
The circadian rhythms of humans can be entrained to slightly shorter and longer periods than the Earth's 24 hours. Researchers at Harvard have shown that human subjects can at least be entrained to a 23.5-hour cycle and a 24.65-hour cycle.
Humans
Early research into circadian rhythms suggested that most people preferred a day closer to 25 hours when isolated from external stimuli like daylight and timekeeping. However, this research was faulty because it failed to shield the participants from artificial light. Although subjects were shielded from time cues (like clocks) and daylight, the researchers were not aware of the phase-delaying effects of indoor electric lights. The subjects were allowed to turn on light when they were awake and to turn it off when they wanted to sleep. Electric light in the evening delayed their circadian phase. A more stringent study conducted in 1999 by Harvard University estimated the natural human rhythm to be closer to 24 hours and 11 minutes: much closer to the solar day. Consistent with this research was a more recent study from 2010, which also identified sex differences, with the circadian period for women being slightly shorter (24.09 hours) than for men (24.19 hours). In this study, women tended to wake up earlier than men and exhibit a greater preference for morning activities than men, although the underlying biological mechanisms for these differences are unknown.
Biological markers and effects
The classic phase markers for measuring the timing of a mammal's circadian rhythm are:
melatonin secretion by the pineal gland,
core body temperature minimum, and
plasma level of cortisol.
For temperature studies, subjects must remain awake but calm and semi-reclined in near darkness while their rectal temperatures are taken continuously. Though variation is great among normal chronotypes, the average human adult's temperature reaches its minimum at about 5:00 a.m., about two hours before habitual wake time. Baehr et al. found that, in young adults, the daily body temperature minimum occurred at about 04:00 (4 a.m.) for morning types, but at about 06:00 (6 a.m.) for evening types. This minimum occurred at approximately the middle of the eight-hour sleep period for morning types, but closer to waking in evening types.
Melatonin is absent from the system or undetectably low during daytime. Its onset in dim light, dim-light melatonin onset (DLMO), at roughly 21:00 (9 p.m.) can be measured in the blood or the saliva. Its major metabolite can also be measured in morning urine. Both DLMO and the midpoint (in time) of the presence of the hormone in the blood or saliva have been used as circadian markers. However, newer research indicates that the melatonin offset may be the more reliable marker. Benloucif et al. found that melatonin phase markers were more stable and more highly correlated with the timing of sleep than the core temperature minimum. They found that both sleep offset and melatonin offset are more strongly correlated with phase markers than the onset of sleep. In addition, the declining phase of the melatonin levels is more reliable and stable than the termination of melatonin synthesis.
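For concreteness, the sketch below shows one simple way a phase marker such as DLMO can be extracted from evening samples: linear interpolation of the first crossing of a fixed threshold. The sample values and the 4 pg/mL threshold here are hypothetical illustrations; laboratories use various thresholds and assays:

```python
def dlmo(times_h, melatonin_pg_ml, threshold=4.0):
    """Clock time (h) at which melatonin first rises past the threshold,
    by linear interpolation between successive samples."""
    samples = list(zip(times_h, melatonin_pg_ml))
    for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
        if c0 < threshold <= c1:
            return t0 + (threshold - c0) / (c1 - c0) * (t1 - t0)
    return None  # threshold never crossed

# Hypothetical half-hourly evening saliva samples (clock hour, pg/mL):
times = [19.0, 19.5, 20.0, 20.5, 21.0, 21.5]
conc = [1.0, 1.5, 2.5, 3.5, 6.0, 9.0]
print(f"DLMO at ~{dlmo(times, conc):.2f} h")  # 20.60, i.e. about 20:36
```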
Other physiological changes that occur according to a circadian rhythm include heart rate and many cellular processes "including oxidative stress, cell metabolism, immune and inflammatory responses, epigenetic modification, hypoxia/hyperoxia response pathways, endoplasmic reticular stress, autophagy, and regulation of the stem cell environment." In a study of young men, it was found that the heart rate reaches its lowest average rate during sleep, and its highest average rate shortly after waking.
In contradiction to previous studies, it has been found that there is no effect of body temperature on performance on psychological tests. This is likely due to evolutionary pressures for higher cognitive function compared to the other areas of function examined in previous studies.
Outside the "master clock"
More-or-less independent circadian rhythms are found in many organs and cells in the body outside the suprachiasmatic nuclei (SCN), the "master clock". Indeed, neuroscientist Joseph Takahashi and colleagues stated in a 2013 article that "almost every cell in the body contains a circadian clock". For example, these clocks, called peripheral oscillators, have been found in the adrenal gland, oesophagus, lungs, liver, pancreas, spleen, thymus, and skin. There is also some evidence that the olfactory bulb and prostate may experience oscillations, at least when cultured.
Though oscillators in the skin respond to light, a systemic influence has not been proven. In addition, many oscillators, such as liver cells, for example, have been shown to respond to inputs other than light, such as feeding.
Light and the biological clock
Light resets the biological clock in accordance with the phase response curve (PRC). Depending on the timing, light can advance or delay the circadian rhythm. Both the PRC and the required illuminance vary from species to species, and lower light levels are required to reset the clocks in nocturnal rodents than in humans.
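A minimal sketch of how a PRC is used in modeling: a function maps the circadian time (CT) of a light pulse to a phase advance (positive) or delay (negative). The single-harmonic shape below is invented for illustration and is not a measured human or rodent PRC, which would also include a "dead zone" during the subjective day:

```python
import math

def toy_prc(circadian_time_h: float) -> float:
    """Invented single-harmonic PRC: positive = phase advance (h),
    negative = phase delay (h). Not a measured curve."""
    return -2.0 * math.sin(2 * math.pi * (circadian_time_h - 6.0) / 24.0)

for ct in (3, 9, 15, 21):
    shift = toy_prc(ct)
    kind = "advance" if shift > 0 else "delay" if shift < 0 else "no shift"
    print(f"light pulse at CT {ct:2d}: {shift:+.1f} h ({kind})")
```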
Enforced longer or shorter cycles
Various studies on humans have made use of enforced sleep/wake cycles strongly different from 24 hours, such as those conducted by Nathaniel Kleitman in 1938 (28 hours) and Derk-Jan Dijk and Charles Czeisler in the 1990s (20 hours). Because people with a normal (typical) circadian clock cannot entrain to such abnormal day/night rhythms, this is referred to as a forced desynchrony protocol. Under such a protocol, sleep and wake episodes are uncoupled from the body's endogenous circadian period, which allows researchers to assess the effects of circadian phase (i.e., the relative timing of the circadian cycle) on aspects of sleep and wakefulness, including sleep latency and other physiological, behavioral, and cognitive functions.
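The arithmetic behind such a protocol is easy to sketch: because the imposed day and the endogenous period beat against one another, scheduled bedtimes sweep through every circadian phase. The sketch below assumes a hypothetical 24.2-hour endogenous period together with the 28-hour imposed day mentioned above:

```python
tau = 24.2      # hypothetical endogenous circadian period (h)
day_len = 28.0  # imposed sleep/wake cycle length (h), as in Kleitman's protocol

# Circadian phase (hours into the endogenous cycle) at each scheduled bedtime:
for day in range(6):
    phase = (day * day_len) % tau
    print(f"imposed day {day}: bedtime at circadian phase {phase:4.1f} h")

# Bedtime returns to the same circadian phase after one full beat period:
beat_h = tau * day_len / (day_len - tau)
print(f"phases realign after ~{beat_h / day_len:.1f} imposed days")
```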
Studies also show that Cyclosa turbinata is unique in that its locomotor and web-building activity cause it to have an exceptionally short-period circadian clock, about 19 hours. When C. turbinata spiders are placed into chambers with periods of 19, 24, or 29 hours of evenly split light and dark, none of the spiders exhibited decreased longevity, regardless of the mismatch with their own circadian clock. These findings suggest that C. turbinata do not have the same costs of extreme desynchronization as do other species of animals.
Human health
Foundation of circadian medicine
The leading edge of circadian biology research is the translation of basic body clock mechanisms into clinical tools, and this is especially relevant to the treatment of cardiovascular disease. Timing of medical treatment in coordination with the body clock, chronotherapeutics, may benefit patients with hypertension (high blood pressure) by significantly increasing efficacy and reducing drug toxicity or adverse reactions. "Circadian pharmacology", that is, drugs targeting the circadian clock mechanism, has been shown experimentally in rodent models to significantly reduce the damage due to heart attacks and prevent heart failure. Importantly, for rational translation of the most promising circadian medicine therapies to clinical practice, it is imperative to understand how they help treat disease in both biological sexes.
Causes of disruption to circadian rhythms
Indoor lighting
Lighting requirements for circadian regulation are not simply the same as those for vision; planning of indoor lighting in offices and institutions is beginning to take this into account. Animal studies on the effects of light in laboratory conditions have until recently considered light intensity (irradiance) but not color, which can be shown to "act as an essential regulator of biological timing in more natural settings".
Blue LED lighting suppresses melatonin production five times more than the orange-yellow high-pressure sodium (HPS) light; a metal halide lamp, which is white light, suppresses melatonin at a rate more than three times greater than HPS. Depression symptoms from long-term nighttime light exposure can be undone by returning to a normal cycle.
Airline pilots and cabin crew
Due to the nature of work of airline pilots, who often cross several time zones and regions of sunlight and darkness in one day, and spend many hours awake both day and night, they are often unable to maintain sleep patterns that correspond to the natural human circadian rhythm; this situation can easily lead to fatigue. The NTSB cites this as contributing to many accidents, and has conducted several research studies in order to find methods of combating fatigue in pilots.
Effect of drugs
Studies conducted on both animals and humans show major bidirectional relationships between the circadian system and drugs of abuse. It is indicated that these drugs affect the central circadian pacemaker. Individuals with substance use disorder display disrupted rhythms, and these disrupted rhythms can increase the risk for substance abuse and relapse. It is possible that genetic and/or environmental disturbances to the normal sleep and wake cycle can increase the susceptibility to addiction.
It is difficult to determine if a disturbance in the circadian rhythm is at fault for an increase in prevalence for substance abuse—or if other environmental factors such as stress are to blame.
Changes to the circadian rhythm and sleep occur once an individual begins abusing drugs and alcohol. Once an individual stops using drugs and alcohol, the circadian rhythm continues to be disrupted.
Alcohol consumption disrupts circadian rhythms. Acute intake causes dose-dependent alterations in melatonin and cortisol levels, as well as core body temperature, which normalize the following morning. Chronic alcohol use leads to more severe and persistent disruptions, which are associated with alcohol use disorders (AUD) and withdrawal symptoms.
The stabilization of sleep and the circadian rhythm might possibly help to reduce the vulnerability to addiction and reduce the chances of relapse.
Circadian rhythms and clock genes expressed in brain regions outside the suprachiasmatic nucleus may significantly influence the effects produced by drugs such as cocaine. Moreover, genetic manipulations of clock genes profoundly affect cocaine's actions.
Consequences of disruption to circadian rhythms
Disruption
Disruption to rhythms usually has a negative effect. Many travelers have experienced the condition known as jet lag, with its associated symptoms of fatigue, disorientation and insomnia.
A number of other disorders, such as bipolar disorder, depression, and some sleep disorders such as delayed sleep phase disorder (DSPD), are associated with irregular or pathological functioning of circadian rhythms.
Disruption to rhythms in the longer term is believed to have significant adverse health consequences for peripheral organs outside the brain, in particular in the development or exacerbation of cardiovascular disease.
Studies have shown that maintaining normal sleep and circadian rhythms is important for many aspects of brain function and health. A number of studies have also indicated that a power-nap, a short period of sleep during the day, can reduce stress and may improve productivity without any measurable effect on normal circadian rhythms. Circadian rhythms also play a part in the reticular activating system, which is crucial for maintaining a state of consciousness. A reversal in the sleep–wake cycle may be a sign or complication of uremia, azotemia or acute kidney injury. Studies have also helped elucidate how light has a direct effect on human health through its influence on the circadian biology.
Relationship with cardiovascular disease
One of the first studies to determine how disruption of circadian rhythms causes cardiovascular disease was performed in Tau hamsters, which have a genetic defect in their circadian clock mechanism. When maintained in a 24-hour light-dark cycle that was "out of sync" with their normal 22-hour circadian mechanism, they developed profound cardiovascular and renal disease; however, when the Tau animals were raised for their entire lifespan on a 22-hour daily light-dark cycle, they had a healthy cardiovascular system. The adverse effects of circadian misalignment on human physiology have been studied in the laboratory using a misalignment protocol, and by studying shift workers. Circadian misalignment is associated with many risk factors of cardiovascular disease. High levels of the atherosclerosis biomarker resistin have been reported in shift workers, indicating the link between circadian misalignment and plaque buildup in arteries. Additionally, elevated triacylglyceride levels (molecules used to store excess fatty acids) were observed and contribute to the hardening of arteries, which is associated with cardiovascular diseases including heart attack, stroke and heart disease. Shift work and the resulting circadian misalignment are also associated with hypertension.
Obesity and diabetes
Obesity and diabetes are associated with lifestyle and genetic factors. Among those factors, disruption of the circadian clockwork and/or misalignment of the circadian timing system with the external environment (e.g., light–dark cycle) can play a role in the development of metabolic disorders.
Shift work or chronic jet lag have profound consequences for circadian and metabolic events in the body. Animals that are forced to eat during their resting period show increased body mass and altered expression of clock and metabolic genes. In humans, shift work that favours irregular eating times is associated with altered insulin sensitivity, diabetes and higher body mass.
Cognitive effects
Reduced cognitive function has been associated with circadian misalignment. Chronic shift workers display increased rates of operational error, impaired visual-motor performance and processing efficacy which can lead to both a reduction in performance and potential safety issues. Increased risk of dementia is associated with chronic night shift workers compared to day shift workers, particularly for individuals over 50 years old.
Society and culture
In 2017, Jeffrey C. Hall, Michael W. Young, and Michael Rosbash were awarded the Nobel Prize in Physiology or Medicine "for their discoveries of molecular mechanisms controlling the circadian rhythm".
Circadian rhythms have been taken as an example of scientific knowledge being transferred into the public sphere.
See also
Actigraphy (also known as actimetry)
ARNTL
ARNTL2
Bacterial circadian rhythms
Circadian rhythm sleep disorders, such as
Advanced sleep phase disorder
Delayed sleep phase disorder
Non-24-hour sleep–wake disorder
Chronobiology
Chronodisruption
CLOCK
Circasemidian rhythm
Circaseptan, 7-day biological cycle
Cryptochrome
CRY1 and CRY2: the cryptochrome family genes
Diurnal cycle
Light effects on circadian rhythm
Light in school buildings
PER1, PER2, and PER3: the period family genes
Photosensitive ganglion cell: part of the eye which is involved in regulating circadian rhythm.
Polyphasic sleep
Rev-ErbA alpha
Segmented sleep
Sleep architecture (sleep in humans)
Sleep in non-human animals
Stefania Follini
Ultradian rhythm
References
Further reading
External links
Sleep
Biology of bipolar disorder
Plant intelligence | Circadian rhythm | [
"Biology"
] | 8,613 | [
"Behavior",
"Plants",
"Plant intelligence",
"Circadian rhythm",
"Sleep"
] |
56,566 | https://en.wikipedia.org/wiki/Chronobiology | Chronobiology is a field of biology that examines timing processes, including periodic (cyclic) phenomena in living organisms, such as their adaptation to solar- and lunar-related rhythms. These cycles are known as biological rhythms. Chronobiology comes from the ancient Greek χρόνος (chrónos, meaning "time"), and biology, which pertains to the study, or science, of life. The related terms chronomics and chronome have been used in some cases to describe either the molecular mechanisms involved in chronobiological phenomena or the more quantitative aspects of chronobiology, particularly where comparison of cycles between organisms is required.
Chronobiological studies include but are not limited to comparative anatomy, physiology, genetics, molecular biology and behavior of organisms related to their biological rhythms. Other aspects include epigenetics, development, reproduction, ecology and evolution.
The subject
Chronobiology studies variations of the timing and duration of biological activity in living organisms which occur for many essential biological processes. These occur (a) in animals (eating, sleeping, mating, hibernating, migration, cellular regeneration, etc.), (b) in plants (leaf movements, photosynthetic reactions, etc.), and in microbial organisms such as fungi and protozoa. They have even been found in bacteria, especially among the cyanobacteria (aka blue-green algae, see bacterial circadian rhythms). The best studied rhythm in chronobiology is the circadian rhythm, a roughly 24-hour cycle shown by physiological processes in all these organisms. The term circadian comes from the Latin circa, meaning "around" and dies, "day", meaning "approximately a day." It is regulated by circadian clocks.
The circadian rhythm can further be broken down into routine cycles during the 24-hour day:
Diurnal, which describes organisms active during daytime
Nocturnal, which describes organisms active in the night
Crepuscular, which describes animals primarily active during the dawn and dusk hours (e.g., domestic cats, white-tailed deer, some bats)
While circadian rhythms are defined as regulated by endogenous processes, other biological cycles may be regulated by exogenous signals. In some cases, multi-trophic systems may exhibit rhythms driven by the circadian clock of one of the members (which may also be influenced or reset by external factors). For example, in such a plant-microbe system, the endogenous plant cycles may regulate the activity of the bacterium by controlling the availability of plant-produced photosynthate.
Many other important cycles are also studied, including:
Infradian rhythms, which are cycles longer than a day. Examples include circannual or annual cycles that govern migration or reproduction cycles in many plants and animals, or the human menstrual cycle.
Ultradian rhythms, which are cycles shorter than 24 hours, such as the 90-minute REM cycle, the 4-hour nasal cycle, or the 3-hour cycle of growth hormone production.
Tidal rhythms, commonly observed in marine life, which follow the roughly 12.4-hour transition from high to low tide and back.
Lunar rhythms, which follow the lunar month (29.5 days). They are relevant e.g. for marine life, as the level of the tides is modulated across the lunar cycle.
Gene oscillations – some genes are expressed more during certain hours of the day than during other hours.
Within each cycle, the time period during which the process is more active is called the acrophase. When the process is less active, the cycle is in its bathyphase or trough phase. The particular moment of highest activity is the peak or maximum; the lowest point is the nadir.
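One standard way chronobiologists estimate the acrophase, together with the amplitude and the rhythm-adjusted mean (MESOR), is cosinor analysis: fitting y(t) = M + A·cos(ωt + φ) by least squares, which becomes a linear problem once rewritten as M + β·cos(ωt) + γ·sin(ωt). Below is a minimal sketch on synthetic data; all sample values are invented:

```python
import math
import numpy as np

def cosinor(t_h, y, period_h=24.0):
    """Single-component cosinor: least-squares fit of a cosine with a known
    period. Returns (MESOR, amplitude, acrophase as hours after t = 0)."""
    w = 2 * math.pi / period_h
    X = np.column_stack([np.ones_like(t_h), np.cos(w * t_h), np.sin(w * t_h)])
    mesor, beta, gamma = np.linalg.lstsq(X, y, rcond=None)[0]
    amplitude = math.hypot(beta, gamma)
    acrophase_h = (math.atan2(gamma, beta) / w) % period_h  # time of the peak
    return mesor, amplitude, acrophase_h

# Synthetic body-temperature-like series peaking at hour 17, with noise:
rng = np.random.default_rng(0)
t = np.arange(0.0, 48.0, 0.5)
y = 37.0 + 0.4 * np.cos(2 * math.pi * (t - 17.0) / 24.0) + rng.normal(0, 0.05, t.size)
print("MESOR %.2f, amplitude %.2f, acrophase %.1f h" % cosinor(t, y))
```

The fitted acrophase recovers the built-in peak time of hour 17, and the MESOR and amplitude recover the synthetic mean and swing.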
History
A circadian cycle was first observed in the 18th century in the movement of plant leaves by the French scientist Jean-Jacques d'Ortous de Mairan. In 1751 Swedish botanist and naturalist Carl Linnaeus (Carl von Linné) designed a flower clock using certain species of flowering plants. By arranging the selected species in a circular pattern, he designed a clock that indicated the time of day by the flowers that were open at each given hour. For example, among members of the daisy family, he used the hawk's beard plant which opened its flowers at 6:30 am and the hawkbit which did not open its flowers until 7 am.
The 1960 symposium at Cold Spring Harbor Laboratory laid the groundwork for the field of chronobiology.
It was also in 1960 that Patricia DeCoursey invented the phase response curve, one of the major tools used in the field since.
Franz Halberg of the University of Minnesota, who coined the word circadian, is widely considered the "father of American chronobiology." However, it was Colin Pittendrigh and not Halberg who was elected to lead the Society for Research in Biological Rhythms in the 1970s. Halberg wanted more emphasis on the human and medical issues while Pittendrigh had his background more in evolution and ecology. With Pittendrigh as leader, the Society members did basic research on all types of organisms, plants as well as animals. More recently it has been difficult to get funding for such research on any other organisms than mice, rats, humans and fruit flies.
The role of Retinal Ganglion cells
Melanopsin as a circadian photopigment
In 2002, Hattar and his colleagues showed that melanopsin plays a key role in a variety of photic responses, including pupillary light reflex, and synchronization of the biological clock to daily light-dark cycles. He also described the role of melanopsin in ipRGCs. Using a rat melanopsin gene, a melanopsin-specific antibody, and fluorescent immunocytochemistry, the team concluded that melanopsin is expressed in some RGCs. Using a Beta-galactosidase assay, they found that these RGC axons exit the eyes together with the optic nerve and project to the suprachiasmatic nucleus (SCN), the primary circadian pacemaker in mammals. They also demonstrated that the RGCs containing melanopsin were intrinsically photosensitive. Hattar concluded that melanopsin is the photopigment in a small subset of RGCs that contributes to the intrinsic photosensitivity of these cells and is involved in their non-image forming functions, such as photic entrainment and pupillary light reflex.
Melanopsin cells relay inputs from rods and cones
Hattar, armed with the knowledge that melanopsin was the photopigment responsible for the photosensitivity of ipRGCs, set out to study the exact role of the ipRGC in photoentrainment. In 2008, Hattar and his research team transplanted diphtheria toxin genes into the mouse melanopsin gene locus to create mutant mice that lacked ipRGCs. The research team found that while the mutants had little difficulty identifying visual targets, they could not entrain to light-dark cycles. These results led Hattar and his team to conclude that ipRGCs do not affect image-forming vision, but significantly affect non-image forming functions such as photoentrainment.
Distinct ipRGCs
Further research has shown that ipRGCs project to different brain nuclei to control both non-image forming and image forming functions. These brain regions include the SCN, where input from ipRGCs is necessary to photoentrain circadian rhythms, and the olivary pretectal nucleus (OPN), where input from ipRGCs controls the pupillary light reflex. Hattar and colleagues conducted research that demonstrated that ipRGCs project to hypothalamic, thalamic, striatal, brainstem and limbic structures. Although ipRGCs were initially viewed as a uniform population, further research revealed that there are several subtypes with distinct morphology and physiology. Since 2011, Hattar's laboratory has contributed to these findings and has successfully distinguished subtypes of ipRGCs.
Diversity of ipRGCs
Hattar and colleagues utilized Cre-based strategies for labeling ipRGCs to reveal that there are at least five ipRGC subtypes that project to a number of central targets. Five classes of ipRGCs, M1 through M5, have been characterized to date in rodents. These classes differ in morphology, dendritic localization, melanopsin content, electrophysiological profiles, and projections.
Diversity in M1 cells
Hattar and his co-workers discovered that, even among the subtypes of ipRGCs, there can be designated sets that differentially control circadian versus pupillary behavior. In experiments with M1 ipRGCs, they discovered that the transcription factor Brn3b is expressed by M1 ipRGCs that target the OPN, but not by ones that target the SCN. Using this knowledge, they designed an experiment to cross Melanopsin-Cre mice with mice that conditionally expressed a toxin from the Brn3b locus. This allowed them to selectively ablate only the OPN-projecting M1 ipRGCs, resulting in a loss of pupil reflexes. However, this did not impair circadian photoentrainment. This demonstrated that the M1 ipRGCs consist of molecularly distinct subpopulations that innervate different brain regions and execute specific light-induced functions. This isolation of a 'labeled line' consisting of differing molecular and functional properties in a highly specific ipRGC subtype was an important first for the field. It also underscored the extent to which molecular signatures can be used to distinguish between RGC populations that would otherwise appear the same, which in turn facilitates further investigation into their specific contributions to visual processing.
Psychological impact of light exposure
Previous studies in circadian biology have established that exposure to light during abnormal hours leads to sleep deprivation and disruption of the circadian system, which affect mood and cognitive functioning. While this indirect relationship had been corroborated, not much work had been done to examine whether there was a direct relationship between irregular light exposure, aberrant mood, cognitive function, normal sleep patterns and circadian oscillations. In a study published in 2012, the Hattar Laboratory was able to show that deviant light cycles directly induce depression-like symptoms and lead to impaired learning in mice, independent of sleep and circadian oscillations.
Effect on mood
ipRGCs project to areas of the brain that are important for regulating circadian rhythmicity and sleep, most notably the SCN, subparaventricular nucleus, and the ventrolateral preoptic area. In addition, ipRGCs transmit information to many areas in the limbic system, which is strongly tied to emotion and memory. To examine the relationship between deviant light exposure and behavior, Hattar and his colleagues studied mice exposed to alternating 3.5-hour light and dark periods (T7 mice) and compared them with mice exposed to alternating 12-hour light and dark periods (T24 mice). Compared to a T24 cycle, the T7 mice got the same amount of total sleep and their circadian expression of PER2, an element of the SCN pacemaker, was not disrupted. Through the T7 cycle, the mice were exposed to light at all circadian phases. Light pulses presented at night lead to expression of the transcription factor c-Fos in the amygdala, lateral habenula, and subparaventricular nucleus further implicating light's possible influence on mood and other cognitive functions.
Mice subjected to the T7 cycle exhibited depression-like symptoms, exhibiting decreased preference for sucrose (sucrose anhedonia) and exhibiting more immobility than their T24 counterparts in the forced swim test (FST). Additionally, T7 mice maintained rhythmicity in serum corticosterone, however the levels were elevated compared to the T24 mice, a trend that is associated with depression. Chronic administration of the antidepressant Fluoxetine lowered corticosterone levels in T7 mice and reduced depression-like behavior while leaving their circadian rhythms unaffected.
Effect on learning
The hippocampus is a structure in the limbic system that receives projections from ipRGCs. It is required for the consolidation of short-term memories into long-term memories as well as spatial orientation and navigation. Depression and heightened serum corticosterone levels are linked to impaired hippocampal learning. Hattar and his team analyzed the T7 mice in the Morris water maze (MWM), a spatial learning task that places a mouse in a small pool of water and tests the mouse's ability to locate and remember the location of a rescue platform located just below the waterline. Compared to the T24 mice, the T7 mice took longer to find the platform in subsequent trials and did not exhibit a preference for the quadrant containing the platform. In addition, T7 mice exhibited impaired hippocampal long-term potentiation (LTP) when subjected to theta burst stimulation (TBS). Recognition memory was also affected, with T7 mice failing to show preference for novel objects in the novel object recognition test.
Necessity of ipRGCs
Mice lacking ipRGCs (Opn4aDTA/aDTA mice) are not susceptible to the negative effects of an aberrant light cycle, indicating that light information transmitted through these cells plays an important role in regulation of mood and cognitive functions such as learning and memory.
Research developments
Light and melatonin
More recently, light therapy and melatonin administration have been explored by Alfred J. Lewy (OHSU), Josephine Arendt (University of Surrey, UK) and other researchers as a means to reset animal and human circadian rhythms. Additionally, the presence of low-level light at night accelerates circadian re-entrainment of hamsters of all ages by 50%; this is thought to be related to simulation of moonlight.
In the second half of 20th century, substantial contributions and formalizations have been made by Europeans such as Jürgen Aschoff and Colin Pittendrigh, who pursued different but complementary views on the phenomenon of entrainment of the circadian system by light (parametric, continuous, tonic, gradual vs. nonparametric, discrete, phasic, instantaneous, respectively).
Chronotypes
Humans can have a propensity to be morning people or evening people; these behavioral preferences are called chronotypes for which there are various assessment questionnaires and biological marker correlations.
Mealtimes
There is also a food-entrainable biological clock, which is not confined to the suprachiasmatic nucleus. The location of this clock has been disputed. Working with mice, however, Fuller et al. concluded that the food-entrainable clock seems to be located in the dorsomedial hypothalamus. During restricted feeding, it takes over control of such functions as activity timing, increasing the chances of the animal successfully locating food resources.
Diurnal patterns on the Internet
In 2018, a study published in PLoS ONE showed how 73 psychometric indicators measured on Twitter content follow a diurnal pattern.
A follow-up study published in Chronobiology International in 2021 showed that these patterns were not disrupted by the 2020 UK lockdown.
Modulators of circadian rhythms
In 2021, scientists reported the development of a light-responsive days-lasting modulator of circadian rhythms of tissues via Ck1 inhibition. Such modulators may be useful for chronobiology research and repair of organs that are "out of sync".
Other fields
Chronobiology is an interdisciplinary field of investigation. It interacts with medical and other research fields such as sleep medicine, endocrinology, geriatrics, sports medicine, space medicine, psychiatry and photoperiodism.
See also
Bacterial circadian rhythms
Biological clock (aging)
Circadian rhythm
Circannual cycle
Circaseptan, 7-day biological cycle
Familial sleep traits
Frank A. Brown, Jr.
Hitoshi Okamura
Light effects on circadian rhythm
Photoperiodism
Suprachiasmatic nucleus
Scotobiology
Time perception
Malcolm von Schantz
References
Further reading
Hastings, Michael, "The brain, circadian rhythms, and clock genes". Clinical review" BMJ 1998;317:1704-1707 19 December.
U.S. Congress, Office of Technology Assessment, "Biological Rhythms: Implications for the Worker". U.S. Government Printing Office, September 1991. Washington, DC. OTA-BA-463. NTIS PB92-117589
Ashikari, M., Higuchi, S., Ishikawa, F., and Tsunetsugu, Y., "Interdisciplinary Symposium on 'Human Beings and Environments': Approaches from Biological Anthropology, Social Anthropology and Developmental Psychology". Sunday, 25 August 2002
"Biorhythm experiment management plan", NASA, Ames Research Center. Moffett Field, 1983.
"Biological Rhythms and Human Adaptation to the Environment". US Army Medical Research and Materiel Command (AMRMC), US Army Research Institute of Environmental Medicine.
Ebert, D., K.P. Ebmeier, T. Rechlin, and W.P. Kaschka, "Biological Rhythms and Behavior", Advances in Biological Psychiatry. ISSN 0378-7354
Horne, J.A. (Jim) & Östberg, Olov (1976). A Self-Assessment Questionnaire to determine Morningness-Eveningness in Human Circadian Rhythms. International Journal of Chronobiology, 4, 97–110.
Roenneberg, Till, Cologne (2010). Wie wir ticken – Die Bedeutung der Chronobiologie für unser Leben, Dumont, .
The Linnean Society of London
External links
Halberg Chronobiology Center at the University of Minnesota, founded by Franz Halberg, the "Father of Chronobiology"
The University of Virginia offers an online tutorial on chronobiology.
See the Science Museum of Virginia publication Can plants tell time?
The University of Manchester has an informative Biological Clock Web Site
S Ertel's analysis of Chizhevsky's work
Biological processes
Circadian rhythm
Neuroscience | Chronobiology | [
"Biology"
] | 3,781 | [
"Behavior",
"Neuroscience",
"Circadian rhythm",
"nan",
"Chronobiology",
"Sleep"
] |
56,567 | https://en.wikipedia.org/wiki/Hyperbolic%20functions | In mathematics, hyperbolic functions are analogues of the ordinary trigonometric functions, but defined using the hyperbola rather than the circle. Just as the points form a circle with a unit radius, the points form the right half of the unit hyperbola. Also, similarly to how the derivatives of and are and respectively, the derivatives of and are and respectively.
Hyperbolic functions occur in the calculations of angles and distances in hyperbolic geometry. They also occur in the solutions of many linear differential equations (such as the equation defining a catenary), cubic equations, and Laplace's equation in Cartesian coordinates. Laplace's equations are important in many areas of physics, including electromagnetic theory, heat transfer, fluid dynamics, and special relativity.
The basic hyperbolic functions are:
hyperbolic sine "sinh",
hyperbolic cosine "cosh",
from which are derived:
hyperbolic tangent "tanh",
hyperbolic cotangent "coth",
hyperbolic secant "sech",
hyperbolic cosecant "csch" or "cosech"
corresponding to the derived trigonometric functions.
The inverse hyperbolic functions are:
inverse hyperbolic sine "arsinh" (also denoted "sinh⁻¹", "asinh" or sometimes "arcsinh")
inverse hyperbolic cosine "arcosh" (also denoted "cosh⁻¹", "acosh" or sometimes "arccosh")
inverse hyperbolic tangent "artanh" (also denoted "tanh⁻¹", "atanh" or sometimes "arctanh")
inverse hyperbolic cotangent "arcoth" (also denoted "coth⁻¹", "acoth" or sometimes "arccoth")
inverse hyperbolic secant "arsech" (also denoted "sech⁻¹", "asech" or sometimes "arcsech")
inverse hyperbolic cosecant "arcsch" (also denoted "arcosech", "csch⁻¹", "cosech⁻¹", "acsch", "acosech", or sometimes "arccsch" or "arccosech")
The hyperbolic functions take a real argument called a hyperbolic angle. The magnitude of a hyperbolic angle is the area of the corresponding hyperbolic sector of the hyperbola xy = 1. The hyperbolic functions may be defined in terms of the legs of a right triangle covering this sector.
In complex analysis, the hyperbolic functions arise when applying the ordinary sine and cosine functions to an imaginary angle. The hyperbolic sine and the hyperbolic cosine are entire functions. As a result, the other hyperbolic functions are meromorphic in the whole complex plane.
By the Lindemann–Weierstrass theorem, the hyperbolic functions have a transcendental value for every non-zero algebraic value of the argument.
Hyperbolic functions were introduced in the 1760s independently by Vincenzo Riccati and Johann Heinrich Lambert. Riccati used Sc. and Cc. (sinus/cosinus circulare) to refer to circular functions and Sh. and Ch. (sinus/cosinus hyperbolico) to refer to hyperbolic functions. Lambert adopted the names, but altered the abbreviations to those used today. The abbreviations sh, ch, th, cth are also currently used, depending on personal preference.
Notation
Definitions
There are various equivalent ways to define the hyperbolic functions.
Exponential definitions
In terms of the exponential function:
Hyperbolic sine: the odd part of the exponential function, that is, sinh x = (e^x − e^−x)/2
Hyperbolic cosine: the even part of the exponential function, that is, cosh x = (e^x + e^−x)/2
Hyperbolic tangent: tanh x = sinh x / cosh x = (e^x − e^−x)/(e^x + e^−x)
Hyperbolic cotangent: coth x = cosh x / sinh x = (e^x + e^−x)/(e^x − e^−x), for x ≠ 0
Hyperbolic secant: sech x = 1 / cosh x = 2/(e^x + e^−x)
Hyperbolic cosecant: csch x = 1 / sinh x = 2/(e^x − e^−x), for x ≠ 0
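The following Python sketch (an illustration of the definitions above, not part of the original article; the function names are our own) builds the hyperbolic functions directly from the exponential and checks them against the standard math module:

import math

def sinh(x):
    # odd part of the exponential function: (e^x - e^-x)/2
    return (math.exp(x) - math.exp(-x)) / 2

def cosh(x):
    # even part of the exponential function: (e^x + e^-x)/2
    return (math.exp(x) + math.exp(-x)) / 2

def tanh(x):
    return sinh(x) / cosh(x)

x = 1.5
assert math.isclose(sinh(x), math.sinh(x))
assert math.isclose(cosh(x), math.cosh(x))
assert math.isclose(tanh(x), math.tanh(x))
assert math.isclose(cosh(x)**2 - sinh(x)**2, 1.0)  # cosh^2 x - sinh^2 x = 1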
Differential equation definitions
The hyperbolic functions may be defined as solutions of differential equations: The hyperbolic sine and cosine are the solution (s, c) of the system
c′(x) = s(x) and s′(x) = c(x),
with the initial conditions s(0) = 0, c(0) = 1. The initial conditions make the solution unique; without them any pair of functions of the form (a e^x + b e^−x, a e^x − b e^−x) would be a solution.
sinh x and cosh x are also the unique solutions of the equation f″(x) = f(x),
such that f(0) = 1, f′(0) = 0 for the hyperbolic cosine, and f(0) = 0, f′(0) = 1 for the hyperbolic sine.
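As a numerical illustration (a sketch of our own, assuming SciPy is available), one can integrate the system s′ = c, c′ = s with s(0) = 0, c(0) = 1 and recover sinh and cosh:

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    s, c = y
    return [c, s]  # s' = c, c' = s

sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 1.0], dense_output=True,
                rtol=1e-10, atol=1e-12)
t = np.linspace(0.0, 2.0, 5)
s, c = sol.sol(t)
print(np.max(np.abs(s - np.sinh(t))))  # tiny: the integrated s is sinh
print(np.max(np.abs(c - np.cosh(t))))  # tiny: the integrated c is cosh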
Complex trigonometric definitions
Hyperbolic functions may also be deduced from trigonometric functions with complex arguments:
Hyperbolic sine: sinh x = −i sin(ix)
Hyperbolic cosine: cosh x = cos(ix)
Hyperbolic tangent: tanh x = −i tan(ix)
Hyperbolic cotangent: coth x = i cot(ix)
Hyperbolic secant: sech x = sec(ix)
Hyperbolic cosecant: csch x = i csc(ix)
where i is the imaginary unit with i² = −1.
The above definitions are related to the exponential definitions via Euler's formula (See below).
Characterizing properties
Hyperbolic cosine
It can be shown that the area under the curve of the hyperbolic cosine (over a finite interval) is always equal to the arc length corresponding to that interval: area = ∫ from a to b of cosh x dx = sinh b − sinh a = arc length.
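This equality can be checked numerically; the following sketch (our own illustration, assuming SciPy) computes both quantities over an arbitrary interval:

import numpy as np
from scipy.integrate import quad

a, b = -0.8, 1.3
area, _ = quad(np.cosh, a, b)                               # area under cosh
arc, _ = quad(lambda x: np.sqrt(1 + np.sinh(x)**2), a, b)   # arc length: sqrt(1 + sinh^2) = cosh
print(area, arc, np.sinh(b) - np.sinh(a))                   # all three agree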
Hyperbolic tangent
The hyperbolic tangent is the (unique) solution to the differential equation f′ = 1 − f², with f(0) = 0.
Useful relations
The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. In fact, Osborn's rule states that one can convert any trigonometric identity (up to but not including sinhs or implied sinhs of 4th degree) for , , or and into a hyperbolic identity, by expanding it completely in terms of integral powers of sines and cosines, changing sine to sinh and cosine to cosh, and switching the sign of every term containing a product of two sinhs.
Odd and even functions:
sinh(−x) = −sinh x
cosh(−x) = cosh x
Hence:
tanh(−x) = −tanh x
coth(−x) = −coth x
sech(−x) = sech x
csch(−x) = −csch x
Thus, cosh x and sech x are even functions; the others are odd functions.
Hyperbolic sine and cosine satisfy:
cosh x + sinh x = e^x
cosh x − sinh x = e^−x
cosh²x − sinh²x = 1
the last of which is similar to the Pythagorean trigonometric identity.
One also has
sech²x = 1 − tanh²x
csch²x = coth²x − 1
for the other functions.
Sums of arguments
sinh(x + y) = sinh x cosh y + cosh x sinh y
cosh(x + y) = cosh x cosh y + sinh x sinh y
tanh(x + y) = (tanh x + tanh y) / (1 + tanh x tanh y)
particularly
cosh(2x) = sinh²x + cosh²x = 2 sinh²x + 1 = 2 cosh²x − 1
sinh(2x) = 2 sinh x cosh x
Also:
sinh x + sinh y = 2 sinh((x + y)/2) cosh((x − y)/2)
cosh x + cosh y = 2 cosh((x + y)/2) cosh((x − y)/2)
Subtraction formulas
sinh(x − y) = sinh x cosh y − cosh x sinh y
cosh(x − y) = cosh x cosh y − sinh x sinh y
tanh(x − y) = (tanh x − tanh y) / (1 − tanh x tanh y)
Also:
sinh x − sinh y = 2 cosh((x + y)/2) sinh((x − y)/2)
cosh x − cosh y = 2 sinh((x + y)/2) sinh((x − y)/2)
Half argument formulas
sinh(x/2) = sgn(x) √((cosh x − 1)/2)
cosh(x/2) = √((cosh x + 1)/2)
tanh(x/2) = sinh x / (cosh x + 1)
where sgn is the sign function.
If x ≠ 0, then
tanh(x/2) = (cosh x − 1) / sinh x
Square formulas
sinh²x = (cosh(2x) − 1)/2
cosh²x = (cosh(2x) + 1)/2
Inequalities
The following inequality is useful in statistics: cosh(t) ≤ e^(t²/2).
It can be proved by comparing the Taylor series of the two functions term by term.
Inverse functions as logarithms
arsinh x = ln(x + √(x² + 1))
arcosh x = ln(x + √(x² − 1)), for x ≥ 1
artanh x = ½ ln((1 + x)/(1 − x)), for |x| < 1
arcoth x = ½ ln((x + 1)/(x − 1)), for |x| > 1
arsech x = ln((1 + √(1 − x²))/x), for 0 < x ≤ 1
arcsch x = ln(1/x + √(1/x² + 1)), for x ≠ 0
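These formulas are easy to check in Python (a sketch using only the standard math module; the function names are our own):

import math

def arsinh(x):
    return math.log(x + math.sqrt(x * x + 1))

def arcosh(x):  # defined for x >= 1
    return math.log(x + math.sqrt(x * x - 1))

def artanh(x):  # defined for |x| < 1
    return 0.5 * math.log((1 + x) / (1 - x))

assert math.isclose(arsinh(2.0), math.asinh(2.0))
assert math.isclose(arcosh(2.0), math.acosh(2.0))
assert math.isclose(artanh(0.9), math.atanh(0.9))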
Derivatives
d/dx sinh x = cosh x
d/dx cosh x = sinh x
d/dx tanh x = 1 − tanh²x = sech²x
d/dx coth x = 1 − coth²x = −csch²x, for x ≠ 0
d/dx sech x = −tanh x · sech x
d/dx csch x = −coth x · csch x, for x ≠ 0
d/dx arsinh x = 1/√(x² + 1)
d/dx arcosh x = 1/√(x² − 1), for x > 1
d/dx artanh x = 1/(1 − x²), for |x| < 1
Second derivatives
Each of the functions sinh and cosh is equal to its second derivative, that is:
(d²/dx²) sinh x = sinh x
(d²/dx²) cosh x = cosh x
All functions with this property are linear combinations of sinh and cosh, in particular the exponential functions e^x and e^−x.
Standard integrals
The following integrals can be proved using hyperbolic substitution:
∫ sinh(ax) dx = (1/a) cosh(ax) + C
∫ cosh(ax) dx = (1/a) sinh(ax) + C
∫ tanh(ax) dx = (1/a) ln(cosh(ax)) + C
∫ du/√(a² + u²) = arsinh(u/a) + C
∫ du/√(u² − a²) = arcosh(u/a) + C, for u > a > 0
∫ du/(a² − u²) = (1/a) artanh(u/a) + C, for u² < a²
where C is the constant of integration.
Taylor series expressions
It is possible to express explicitly the Taylor series at zero (or the Laurent series, if the function is not defined at zero) of the above functions.
sinh x = x + x³/3! + x⁵/5! + x⁷/7! + ⋯ (the sum of x^(2n+1)/(2n+1)! over n ≥ 0)
This series is convergent for every complex value of x. Since the function sinh x is odd, only odd exponents for x occur in its Taylor series.
cosh x = 1 + x²/2! + x⁴/4! + x⁶/6! + ⋯ (the sum of x^(2n)/(2n)! over n ≥ 0)
This series is convergent for every complex value of x. Since the function cosh x is even, only even exponents for x occur in its Taylor series.
The sum of the sinh and cosh series is the infinite series expression of the exponential function.
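The series lend themselves to a direct numerical check; this Python sketch (our own, not from the article) sums truncated series and compares them with the library functions:

import math

def sinh_series(x, terms=12):
    # only odd powers appear: sum of x^(2n+1)/(2n+1)!
    return sum(x**(2*n + 1) / math.factorial(2*n + 1) for n in range(terms))

def cosh_series(x, terms=12):
    # only even powers appear: sum of x^(2n)/(2n)!
    return sum(x**(2*n) / math.factorial(2*n) for n in range(terms))

x = 1.2
print(sinh_series(x) - math.sinh(x))                  # ~0
print(cosh_series(x) - math.cosh(x))                  # ~0
print(sinh_series(x) + cosh_series(x) - math.exp(x))  # ~0: e^x = cosh x + sinh x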
The following series are followed by a description of a subset of their domain of convergence, where the series is convergent and its sum equals the function.
tanh x = x − x³/3 + 2x⁵/15 − 17x⁷/315 + ⋯, for |x| < π/2
coth x = 1/x + x/3 − x³/45 + 2x⁵/945 − ⋯, for 0 < |x| < π
sech x = 1 − x²/2 + 5x⁴/24 − 61x⁶/720 + ⋯, for |x| < π/2
csch x = 1/x − x/6 + 7x³/360 − 31x⁵/15120 + ⋯, for 0 < |x| < π
The general coefficients of these series involve:
B_n, the nth Bernoulli number
E_n, the nth Euler number
Infinite products and continued fractions
The following expansions are valid in the whole complex plane:
sinh x = x · product over n ≥ 1 of (1 + x²/(n²π²))
cosh x = product over n ≥ 1 of (1 + x²/((n − 1/2)²π²))
A classical continued fraction (due to Lambert) is
tanh x = x/(1 + x²/(3 + x²/(5 + x²/(7 + ⋯))))
Comparison with circular functions
The hyperbolic functions represent an expansion of trigonometry beyond the circular functions. Both types depend on an argument, either circular angle or hyperbolic angle.
Since the area of a circular sector with radius r and angle u (in radians) is r²u/2, it will be equal to u when r = √2. In the diagram, such a circle is tangent to the hyperbola xy = 1 at (1,1). The yellow sector depicts an area and angle magnitude. Similarly, the yellow and red regions together depict a hyperbolic sector with area corresponding to hyperbolic angle magnitude.
The legs of the two right triangles with hypotenuse on the ray defining the angles are of length √2 times the circular and hyperbolic functions.
The hyperbolic angle is an invariant measure with respect to the squeeze mapping, just as the circular angle is invariant under rotation.
The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic functions that does not involve complex numbers.
The graph of the function a cosh(x/a) is the catenary, the curve formed by a uniform flexible chain hanging freely between two fixed points under uniform gravity.
Relationship to the exponential function
The decomposition of the exponential function in its even and odd parts gives the identities
e^x = cosh x + sinh x
and
e^−x = cosh x − sinh x.
Combined with Euler's formula
e^(ix) = cos x + i sin x,
this gives
e^(x+iy) = (cosh x + sinh x)(cos y + i sin y)
for the general complex exponential function.
Additionally,
e^x = √((1 + tanh x)/(1 − tanh x)) = (1 + tanh(x/2))/(1 − tanh(x/2))
Hyperbolic functions for complex numbers
Since the exponential function can be defined for any complex argument, we can also extend the definitions of the hyperbolic functions to complex arguments. The functions and are then holomorphic.
Relationships to ordinary trigonometric functions are given by Euler's formula for complex numbers:
e^(ix) = cos x + i sin x and e^(−ix) = cos x − i sin x,
so:
cosh(ix) = cos x
sinh(ix) = i sin x
tanh(ix) = i tan x
cosh(x + iy) = cosh x cos y + i sinh x sin y
sinh(x + iy) = sinh x cos y + i cosh x sin y
Thus, hyperbolic functions are periodic with respect to the imaginary component, with period 2πi (πi for hyperbolic tangent and cotangent).
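The imaginary periodicity can be verified with Python's cmath module (a small sketch of our own):

import cmath

z = 0.7 + 0.3j
print(abs(cmath.sinh(z + 2j * cmath.pi) - cmath.sinh(z)))  # ~0: period 2*pi*i
print(abs(cmath.cosh(z + 2j * cmath.pi) - cmath.cosh(z)))  # ~0: period 2*pi*i
print(abs(cmath.tanh(z + 1j * cmath.pi) - cmath.tanh(z)))  # ~0: period pi*i
print(abs(cmath.sinh(1j * 1.1) - 1j * cmath.sin(1.1)))     # ~0: sinh(ix) = i sin x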
See also
e (mathematical constant)
Equal incircles theorem, based on sinh
Hyperbolastic functions
Hyperbolic growth
Inverse hyperbolic functions
List of integrals of hyperbolic functions
Poinsot's spirals
Sigmoid function
Soboleva modified hyperbolic tangent
Trigonometric functions
References
External links
Hyperbolic functions on PlanetMath
GonioLab: Visualization of the unit circle, trigonometric and hyperbolic functions (Java Web Start)
Web-based calculator of hyperbolic functions
Exponentials
Hyperbolic geometry
Analytic functions | Hyperbolic functions | [
"Mathematics"
] | 1,841 | [
"E (mathematical constant)",
"Exponentials"
] |
56,568 | https://en.wikipedia.org/wiki/Pleiades | The Pleiades, also known as the Seven Sisters and Messier 45 (M45), is an asterism and an open star cluster containing young B-type stars in the northwest of the constellation Taurus. At a distance of about 444 light-years, it is among the nearest star clusters to Earth and the nearest Messier object, and is the most obvious star cluster to the naked eye in the night sky. It also houses the reflection nebula NGC 1432, an HII region. Around 2330 BC it marked the vernal point.
The cluster is dominated by hot blue luminous stars that have formed within the last 100 million years. Reflection nebulae around the brightest stars were once thought to be leftover material from their formation, but are now considered likely to be an unrelated dust cloud in the interstellar medium through which the stars are currently passing. This dust cloud is estimated to be moving at a speed of approximately 18 km/s relative to the stars in the cluster.
Computer simulations have shown that the Pleiades were probably formed from a compact configuration that once resembled the Orion Nebula. Astronomers estimate that the cluster will survive for approximately another 250 million years, after which the clustering will be lost due to gravitational interactions with the galactic neighborhood.
Together with the open star cluster of the Hyades, the Pleiades form the Golden Gate of the Ecliptic.
Origin of name
The name, Pleiades, comes from the Ancient Greek Πλειάδες. It probably derives from πλεῖν (plein, 'to sail') because of the cluster's importance in delimiting the sailing season in the Mediterranean Sea: "the season of navigation began with their heliacal rising". In Classical Greek mythology the name was used for seven divine sisters called the Pleiades. In time, the name was said to be derived from that of a mythical mother, Pleione, effectively meaning "daughters of Pleione". In reality, the ancient name of the star cluster, related to sailing, almost certainly came first; the naming of a relationship to the sister deities followed, and a mother, Pleione, eventually appeared in later myths to explain the group's name.
Astronomical role of M45 in antiquity
The M45 group played an important role in ancient times for the establishment of many calendars thanks to the combination of two remarkable elements. The first, which is still valid, is its unique and easily identifiable appearance on the celestial vault near the ecliptic. The second, essential for the ancients, is that in the middle of the third millennium BC, this asterism (a prominent pattern or group of stars that is smaller than a constellation) marked the vernal point. (2330 BC with ecliptic latitude about +3.5° according to Stellarium)
The importance of this asterism is also evident in northern Europe. The Pleiades cluster is displayed on the Nebra sky disc that was found in Germany and is dated to around 1600 BC. On the disk the cluster is represented in a high position between the Sun and the Moon.
This asterism also marks the beginning of several ancient calendars:
In ancient India, it constitutes, in the Atharvaveda, compiled around 1200–1000 BC, the first nakshatra (the Sanskrit name for the lunar stations), which is called Kṛttikā, a revealing name since it literally means 'the Cuttings', i.e. "Those that mark the break of the year". This is so before the classic list lowers this to third place, henceforth giving the first to the star couple β Arietis and γ Arietis, which, notably in Hipparchus, at that time, marks the equinox.
In Mesopotamia, the MUL.APIN compendium, the first known Mesopotamian astronomy treatise, discovered at Nineveh in the library of Assurbanipal and dating from no later than 627 BC, presents a list of deities [holders of stars] who stand on "the path of the Moon", a list which begins with mul.MUL.
In Greece, the Πλειάδες (Pleiades) are a group whose name is probably functional before having a mythological meaning, as André Lebœuffle points out; he prefers the explanation by an Indo-European root expressing the idea of 'multiplicity, crowd, assembly'.
Similarly, the Ancient Arabs begin their old parapegma-type calendar, that of the anwāʾ, with M45 under the name of al-Thurayyā. And this before their classic calendar, that of the manāzil or 'lunar stations', also begins with the star couple β Arietis and γ Arietis, whose name, al-Sharaṭān, literally means "the Two Marks [of entering the equinox]".
Although M45 is no longer at the vernal point, the asterism still remains important, both functionally and symbolically. In addition to the changes in the calendars based on the lunar stations among the Indians and the Arabs, consider the case of an ancient Yemeni calendar in which the months are designated according to an astronomical criterion that caused it to be named the Calendar of the Pleiades: the month literally named 'five' is that during which the Sun and the Pleiades deviate from each other by five movements of the Moon, i.e. five times the path that the Moon travels on average in one day and one night.
Nomenclature and mythology
The Pleiades are a prominent sight in winter in the Northern Hemisphere, and are easily visible from mid-southern latitudes. They have been known since antiquity to cultures all around the world, including the Celts; pre-colonial Filipinos, for whom the cluster indicated the beginning of the year; Hawaiians (who call them Makaliʻi); Māori (who call them Matariki); Indigenous Australians (from several traditions); the Persians of the Achaemenid Empire (who called them Parvin); the Arabs (who call them al-Thurayyā); the Chinese (who called them Mǎo); the Quechua (who call them Qullqa, or the storehouse); the Japanese (who call them Subaru); the Maya; the Aztec; the Sioux; the Kiowa; and the Cherokee. In Hinduism, the Pleiades are known as Kṛttikā and are scripturally associated with the war deity Kartikeya, and are also identified or associated with the Matrikas (the Seven Mothers). Hindus celebrate the first day (new moon) of the month of Kartik as Diwali, a festival of abundance and lamps. The Pleiades are also mentioned three times in the Bible.
The earliest known depiction of the Pleiades is likely a Northern German Bronze Age artifact known as the Nebra sky disk, dated to approximately 1600 BC. The Babylonian star catalogues name the Pleiades MUL.MUL, meaning 'stars' (literally 'star star'), and they head the list of stars along the ecliptic, reflecting the fact that they were close to the point of the vernal equinox around the twenty-third century BC. The Ancient Egyptians may have used the names "Followers" and "Ennead" in the prognosis texts of the Calendar of Lucky and Unlucky Days of papyrus Cairo 86637. Some Greek astronomers considered them to be a distinct constellation, and they are mentioned by Hesiod's Works and Days, Homer's Iliad and Odyssey, and the Geoponica. The Pleiades was the most well-known "star" among pre-Islamic Arabs and so was often referred to simply as "the Star" (al-Najm). Some scholars of Islam suggested that the Pleiades are the "star" mentioned in Sūrat an-Najm ('The Star') in the Quran.
On numerous cylinder seals from the beginning of the first millennium BC, M45 is represented by seven points, while the Seven Gods appear, on low-reliefs of Neo-Assyrian royal palaces, wearing long open robes and large cylindrical headdresses surmounted by short feathers and adorned with three frontal rows of horns and a crown of feathers, while carrying both an ax and a knife, as well as a bow and a quiver.
As noted by scholar Stith Thompson, the constellation was "nearly always imagined" as a group of seven sisters, and their myths explain why there are only six. Some scientists suggest that these may come from observations back when Pleione was farther from Atlas and more visible as a separate star as far back as 100,000 BC.
Subaru
In Japan, the cluster is mentioned under the name Mutsuraboshi ("six stars") in the eighth-century Kojiki. The cluster is now known in Japan as Subaru.
The name was chosen for that of the Subaru Telescope, the flagship telescope of the National Astronomical Observatory of Japan, located at the Mauna Kea Observatory on the island of Hawaii. It had the largest monolithic primary mirror in the world from its commissioning in 1998 until 2005.
It also was chosen as the brand name of Subaru automobiles to reflect the origins of the firm as the joining of five companies, and is depicted in the firm's six-star logo.
Tolkien's Legendarium
In J. R. R. Tolkien's legendarium, in which The Lord of the Rings is set, the Pleiades are referred to as Remmirath, the Netted Stars; several other celestial bodies bear such names as well, for example the constellation Orion, called Menelvagor, the Swordsman of the Sky.
Observational history
Galileo Galilei was the first astronomer to view the Pleiades through a telescope. He thereby discovered that the cluster contains many stars too dim to be seen with the naked eye. He published his observations, including a sketch of the Pleiades showing 36 stars, in his treatise Sidereus Nuncius in March 1610.
The Pleiades have long been known to be a physically related group of stars rather than any chance alignment. John Michell calculated in 1767 that the probability of a chance alignment of so many bright stars was only 1 in 500,000, and so surmised that the Pleiades and many other clusters must consist of physically related stars. When studies were first made of the proper motions of the stars, it was found that they are all moving in the same direction across the sky, at the same rate, further demonstrating that they were related.
Charles Messier measured the position of the cluster and included it as "M45" in his catalogue of comet-like objects, published in 1771. Along with the Orion Nebula and the Praesepe cluster, Messier's inclusion of the Pleiades has been noted as curious, as most of Messier's objects were much fainter and more easily confused with comets—something that seems scarcely possible for the Pleiades. One possibility is that Messier simply wanted to have a larger catalogue than his scientific rival Lacaille, whose 1755 catalogue contained 42 objects, and so he added some bright, well-known objects to boost the number on his list.
Edme-Sébastien Jeaurat then drew in 1782 a map of 64 stars of the Pleiades from his observations in 1779, which he published in 1786.
Distance
The distance to the Pleiades can be used as a key first step to calibrate the cosmic distance ladder. As the cluster is relatively close to the Earth, the distance should be relatively easy to measure and has been estimated by many methods. Accurate knowledge of the distance allows astronomers to plot a Hertzsprung–Russell diagram for the cluster, which, when compared with those plotted for clusters whose distance is not known, allows their distances to be estimated. Other methods may then extend the distance scale from open clusters to galaxies and clusters of galaxies, and a cosmic distance ladder may be constructed. Ultimately astronomers' understanding of the age and future evolution of the universe is influenced by their knowledge of the distance to the Pleiades. Yet some authors argue that the controversy over the distance to the Pleiades discussed below is a red herring, since the cosmic distance ladder can (presently) rely on a suite of other nearby clusters where consensus exists regarding the distances as established by the Hipparcos satellite and independent means (e.g., the Hyades, the Coma Berenices cluster, etc.).
Measurements of the distance have elicited much controversy. Results prior to the launch of the Hipparcos satellite generally found that the Pleiades were approximately 135 parsecs (pc) away from Earth. Data from Hipparcos yielded a surprising result, namely a distance of only 118 pc, by measuring the parallax of stars in the cluster—a technique that should yield the most direct and accurate results. Later work consistently argued that the Hipparcos distance measurement for the Pleiades was erroneous: In particular, distances derived to the cluster via the Hubble Space Telescope and infrared color–magnitude diagram fitting (so-called "spectroscopic parallax") favor a distance between 135 and 140 pc; a dynamical distance from optical interferometric observations of the inner pair of stars within Atlas (a bright triple star in the Pleiades) favors a distance of 133 to 137 pc. However, the author of the 2007–2009 catalog of revised Hipparcos parallaxes reasserted that the distance to the Pleiades is ~120 pc and challenged the dissenting evidence. In 2012, Francis and Anderson proposed that a systematic effect on Hipparcos parallax errors for stars in clusters would bias calculation using the weighted mean; they gave a Hipparcos parallax distance of 126 pc and photometric distance of 132 pc based on stars in the AB Doradus, Tucana-Horologium and Beta Pictoris moving groups, which are all similar in age and composition to the Pleiades. Those authors note that the difference between these results may be attributed to random error.
More recent results using very-long-baseline interferometry (VLBI) (August 2014), and preliminary solutions using Gaia Data Release 1 (September 2016) and Gaia Data Release 2 (August 2018), determine distances of 136.2 ± 1.2 pc, 134 ± 6 pc and 136.2 ± 5.0 pc, respectively. The Gaia Data Release 1 team were cautious about their result, and the VLBI authors assert "that the Hipparcos-measured distance to the Pleiades cluster is in error".
The most recent estimate of the distance to the Pleiades, based on Gaia Data Release 3, is about 136 pc (approximately 444 light-years).
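The arithmetic behind these figures is simply the parallax relation d [pc] = 1 / p [arcsec]. The Python sketch below is our own illustration; the 7.34 mas input is a round number chosen to land near the quoted results, not a measured value:

PC_TO_LY = 3.2616  # approximate light-years per parsec

def parallax_to_pc(parallax_mas):
    # d [parsec] = 1 / p [arcsec] = 1000 / p [milliarcsec]
    return 1000.0 / parallax_mas

d = parallax_to_pc(7.34)
print(f"{d:.1f} pc = {d * PC_TO_LY:.0f} light-years")  # ~136.2 pc, ~444 ly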
Composition
The cluster core radius is approximately 8 light-years and tidal radius is approximately 43 light-years. The cluster contains more than 1,000 statistically confirmed members, not counting the number that would be added if all binary stars could be resolved. Its light is dominated by young, hot blue stars, up to 14 of which may be seen with the naked eye, depending on local observing conditions and visual acuity of the observer. The brightest stars form a shape somewhat similar to that of Ursa Major and Ursa Minor. The total mass contained in the cluster is estimated to be approximately 800 solar masses and is dominated by fainter and redder stars. An estimate of the frequency of binary stars in the Pleiades is approximately 57%.
The cluster contains many brown dwarfs, such as Teide 1. These are objects with less than approximately 8% of the Sun's mass, insufficient for nuclear fusion reactions to start in their cores and become proper stars. They may constitute up to 25% of the total population of the cluster, although they contribute less than 2% of the total mass. Astronomers have made great efforts to find and analyze brown dwarfs in the Pleiades and other young clusters, because they are still relatively bright and observable, while brown dwarfs in older clusters have faded and are much more difficult to study.
Brightest stars
The brightest stars of the cluster are named the Seven Sisters in early Greek mythology: Sterope, Merope, Electra, Maia, Taygeta, Celaeno, and Alcyone. Later, they were assigned parents, Pleione and Atlas. As daughters of Atlas, the Hyades were sisters of the Pleiades.
The following table gives details of the brightest stars in the cluster:
Age and future evolution
Ages for star clusters may be estimated by comparing the Hertzsprung–Russell diagram for the cluster with theoretical models of stellar evolution. Using this technique, ages for the Pleiades of between 75 and 150 million years have been estimated. The wide spread in estimated ages is a result of uncertainties in stellar evolution models, which include factors such as convective overshoot, in which a convective zone within a star penetrates an otherwise non-convective zone, resulting in higher apparent ages.
Another way of estimating the age of the cluster is by looking at the lowest-mass objects. In normal main-sequence stars, lithium is rapidly destroyed in nuclear fusion reactions. Brown dwarfs can retain their lithium, however. Due to lithium's very low ignition temperature of 2.5 × 10⁶ K, the highest-mass brown dwarfs will eventually burn it, and so determining the highest mass of brown dwarfs still containing lithium in the cluster may give an idea of its age. Applying this technique to the Pleiades gives an age of about 115 million years.
The cluster is slowly moving in the direction of the feet of what is currently the constellation of Orion. Like most open clusters, the Pleiades will not stay gravitationally bound forever. Some component stars will be ejected after close encounters with other stars; others will be stripped by tidal gravitational fields. Calculations suggest that the cluster will take approximately 250 million years to disperse, because of gravitational interactions with giant molecular clouds and the spiral arms of our galaxy hastening its demise.
Reflection nebulosity
With larger amateur telescopes, the nebulosity around some of the stars may be easily seen, especially when long-exposure photographs are taken. Under ideal observing conditions, some hint of nebulosity around the cluster may be seen even with small telescopes or average binoculars. It is a reflection nebula, caused by dust reflecting the blue light of the hot, young stars.
It was formerly thought that the dust was left over from the formation of the cluster, but at the age of approximately 100 million years generally accepted for the cluster, almost all the dust originally present would have been dispersed by radiation pressure. Instead, it seems that the cluster is simply passing through a particularly dusty region of the interstellar medium.
Studies show that the dust responsible for the nebulosity is not uniformly distributed, but is concentrated mainly in two layers along the line of sight to the cluster. These layers may have been formed by deceleration due to radiation pressure as the dust has moved toward the stars.
Possible planets
Analyzing deep-infrared images obtained by the Spitzer Space Telescope and Gemini North telescope, astronomers discovered that one of the stars in the cluster, HD 23514, which has a mass and luminosity a bit greater than that of the Sun, is surrounded by an extraordinary number of hot dust particles. This could be evidence for planet formation around HD 23514.
Videos
Gallery
See also
Stozhary
Matrikas
The Seven Sages
References
External links
Information on the Pleiades from SEDS
Astronomical objects known since antiquity
Messier objects
NGC objects
Orion–Cygnus Arm
Taurus (constellation) | Pleiades | [
"Astronomy"
] | 4,000 | [
"Taurus (constellation)",
"Constellations"
] |
56,590 | https://en.wikipedia.org/wiki/New%20General%20Catalogue | The New General Catalogue of Nebulae and Clusters of Stars (abbreviated NGC) is an astronomical catalogue of deep-sky objects compiled by John Louis Emil Dreyer in 1888. The NGC contains 7,840 objects, including galaxies, star clusters and emission nebulae. Dreyer published two supplements to the NGC in 1895 and 1908, known as the Index Catalogues (abbreviated IC), describing a further 5,386 astronomical objects. Thousands of these objects are best known by their NGC or IC numbers, which remain in widespread use.
The NGC expanded and consolidated the cataloguing work of William and Caroline Herschel, and John Herschel's General Catalogue of Nebulae and Clusters of Stars. Objects south of the celestial equator are catalogued somewhat less thoroughly, but many were included based on observation by John Herschel or James Dunlop.
The NGC contained multiple errors, but attempts to eliminate them were made by the Revised New General Catalogue (RNGC) by Jack W. Sulentic and William G. Tifft in 1973, NGC2000.0 by Roger W. Sinnott in 1988, and the NGC/IC Project in 1993. A Revised New General Catalogue and Index Catalogue (abbreviated as RNGC/IC) was compiled in 2009 by Wolfgang Steinicke and updated in 2019 with 13,957 objects.
Original catalogue
The original New General Catalogue was compiled during the 1880s by John Louis Emil Dreyer using observations from William Herschel and his son John, among others. Dreyer had already published a supplement to Herschel's General Catalogue of Nebulae and Clusters (GC), containing about 1,000 new objects. In 1886, he suggested building a second supplement to the General Catalogue, but the Royal Astronomical Society asked Dreyer to compile a new version instead. This led to the publication of the New General Catalogue in the Memoirs of the Royal Astronomical Society in 1888.
Assembling the NGC was a challenge, as Dreyer had to deal with many contradictory and unclear reports made with a variety of telescopes with apertures ranging from 2 to 72 inches. While he did check some himself, the sheer number of objects meant Dreyer had to accept them as published by others for the purpose of his compilation. The catalogue contained several errors, mostly relating to position and descriptions, but Dreyer referenced the catalogue, which allowed later astronomers to review the original references and publish corrections to the original NGC.
Index Catalogue
The first major update to the NGC is the Index Catalogue of Nebulae and Clusters of Stars (abbreviated as IC), published in two parts by Dreyer in 1895 (IC I, containing 1,520 objects) and 1908 (IC II, containing 3,866 objects). It serves as a supplement to the NGC, and contains an additional 5,386 objects, collectively known as the IC objects. It summarizes the discoveries of galaxies, clusters and nebulae between 1888 and 1907, most of them made possible by photography. A list of corrections to the IC was published in 1912.
Revised New General Catalogue
The Revised New General Catalogue of Nonstellar Astronomical Objects (abbreviated as RNGC) was compiled by Sulentic and Tifft in the early 1970s and was published in 1973 as an update to the NGC. The work did not incorporate several previously published corrections to the NGC data (including corrections published by Dreyer himself), and introduced some new errors. For example, the well-known compact galaxy group the Copeland Septet, in the constellation Leo, appears as non-existent in the RNGC.
Nearly 800 objects are listed as "non-existent" in the RNGC. The designation is applied to objects which are duplicate catalogue entries, those which were not detected in subsequent observations, and a number of objects catalogued as star clusters which in subsequent studies were regarded as coincidental groupings. A 1993 monograph considered the 229 star clusters called non-existent in the RNGC. They had been "misidentified or have not been located since their discovery in the 18th and 19th centuries". It found that one of the 229—NGC 1498—was not actually in the sky. Five others were duplicates of other entries, 99 existed "in some form", and the other 124 required additional research to resolve.
As another example, reflection nebula NGC 2163 in Orion was classified "non-existent" due to a transcription error by Dreyer. Dreyer corrected his own mistake in the Index Catalogues, but the RNGC preserved the original error, and additionally reversed the sign of the declination, resulting in NGC 2163 being classified as non-existent.
Revised New General Catalogue and Index Catalogue
The Revised New General Catalogue and Index Catalogue (abbreviated as RNGC/IC) is a compilation made by Wolfgang Steinicke in 2009. It is a comprehensive and authoritative treatment of the NGC and IC catalogues. 301 objects (2.3%) have the status "not found" in this catalogue. The brightest star in this catalogue is NGC 771, with a magnitude of 4.0.
NGC 2000.0
NGC 2000.0 (also known as the Complete New General Catalog and Index Catalog of Nebulae and Star Clusters) is a 1988 compilation of the NGC and IC made by Roger W. Sinnott, using the J2000.0 coordinates. It incorporates several corrections and errata made by astronomers over the years.
NGC/IC Project
The NGC/IC Project was a collaboration among professional and amateur astronomers, formed by Steve Gottlieb in 1990, although Gottlieb had already started to observe and record NGC objects as early as 1979. Other primary team members were Harold G. Corwin Jr., Malcolm Thomson, Robert E. Erdmann and Jeffrey Corder. The project was completed by 2017. This project identified all NGC and IC objects, corrected mistakes, collected images and basic astronomical data, and checked all historical data related to the objects.
See also
Messier object
Catalogue of Nebulae and Clusters of Stars
Astronomical catalog
List of astronomical catalogues
List of NGC objects
References
External links
The Interactive NGC Catalog Online
Adventures in Deep Space: Challenging Observing Projects for Amateur Astronomers.
Revised New General Catalogue
Astronomical catalogues
1888 documents
1888 in science | New General Catalogue | [
"Astronomy"
] | 1,251 | [
"Astronomical catalogues",
"Astronomical objects",
"Works about astronomy"
] |
56,637 | https://en.wikipedia.org/wiki/Ammonium%20perchlorate | Ammonium perchlorate ("AP") is an inorganic compound with the formula . It is a colorless or white solid that is soluble in water. It is a powerful oxidizer. Combined with a fuel, it can be used as a rocket propellant called ammonium perchlorate composite propellant. Its instability has involved it in a number of accidents, such as the PEPCON disaster.
Production
Ammonium perchlorate (AP) is produced by the reaction between ammonia and perchloric acid. This process is the main outlet for the industrial production of perchloric acid. The salt can also be produced by a salt metathesis reaction of ammonium salts with sodium perchlorate. This process exploits the relatively low solubility of NH4ClO4, which is about 10% of that of sodium perchlorate.
AP crystallises as colorless rhombohedra.
Decomposition
Like most ammonium salts, ammonium perchlorate decomposes before melting. Mild heating results in production of hydrogen chloride, nitrogen, oxygen, and water.
4 NH4ClO4 → 4 HCl + 2 N2 + 5 O2 + 6 H2O
The combustion of AP is quite complex and is widely studied. AP crystals decompose before melting, even though a thin liquid layer has been observed on crystal surfaces during high-pressure combustion processes. Strong heating may lead to explosions. Complete reactions leave no residue. Pure crystals cannot sustain a flame below the pressure of 2 MPa.
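The stoichiometry of the decomposition equation above can be checked with a few lines of Python (a sketch of our own using approximate atomic masses, not part of the original article):

M = {"H": 1.008, "N": 14.007, "Cl": 35.45, "O": 15.999}  # g/mol, approximate

m_AP  = M["N"] + 4*M["H"] + M["Cl"] + 4*M["O"]   # NH4ClO4, ~117.5 g/mol
m_HCl = M["H"] + M["Cl"]
m_N2  = 2 * M["N"]
m_O2  = 2 * M["O"]
m_H2O = 2*M["H"] + M["O"]

reactants = 4 * m_AP
products  = 4*m_HCl + 2*m_N2 + 5*m_O2 + 6*m_H2O
print(round(reactants, 2), round(products, 2))   # equal: the equation balances
print(round(1000 * 5*m_O2 / reactants), "g of O2 released per kg of AP")  # ~340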
AP is a Class 4 oxidizer (can undergo an explosive reaction) for particle sizes over 15 micrometres and is classified as an explosive for particle sizes less than 15 micrometres.
Applications
During World War I, England and France used mixtures featuring ammonium perchlorate (such as "balstine") as a substitute high explosive.
The primary use of ammonium perchlorate is in making solid rocket propellants. When AP is mixed with a fuel (like a powdered aluminium and/or with an elastomeric binder), it can generate self-sustained combustion at pressures far below atmospheric pressure. It is an important oxidizer with a decades-long history of use in solid rocket propellants – space launch (including the Space Shuttle Solid Rocket Booster), military, amateur, and hobby high-power rockets, as well as in some fireworks.
Some "breakable" epoxy adhesives contain suspensions of AP. Upon heating to 300°C, the AP degrades the organic adhesive, breaking the cemented joint.
Toxicity
Perchlorate itself confers little acute toxicity. For example, sodium perchlorate has an LD50 of 2–4 g/kg and is eliminated rapidly after ingestion. However, chronic exposure to perchlorates, even in low concentrations, has been shown to cause various thyroid problems, as perchlorate is taken up in place of iodine.
References
Further reading
Ammonium compounds
Perchlorates
Pyrotechnic oxidizers
Rocket oxidizers
Oxidizing agents
Explosive chemicals | Ammonium perchlorate | [
"Chemistry"
] | 625 | [
"Redox",
"Perchlorates",
"Oxidizing agents",
"Salts",
"Rocket oxidizers",
"Ammonium compounds",
"Explosive chemicals"
] |
56,654 | https://en.wikipedia.org/wiki/Perchloric%20acid | Perchloric acid is a mineral acid with the formula HClO4. It is an oxoacid of chlorine. Usually found as an aqueous solution, this colorless compound is a stronger acid than sulfuric acid, nitric acid and hydrochloric acid. It is a powerful oxidizer when hot, but aqueous solutions up to approximately 70% by weight at room temperature are generally safe, only showing strong acid features and no oxidizing properties. Perchloric acid is useful for preparing perchlorate salts, especially ammonium perchlorate, an important rocket fuel component. Perchloric acid is dangerously corrosive and readily forms potentially explosive mixtures.
History
Perchloric acid was first synthesized (together with potassium perchlorate) in the mid-1810s by the Austrian chemist Friedrich von Stadion, who called it "oxygenated chloric acid". The French pharmacist Georges-Simon Serullas introduced the modern designation and discovered its solid monohydrate (which he, however, mistook for an anhydride).
Production
Perchloric acid is produced industrially by two routes. The traditional method exploits the high aqueous solubility of sodium perchlorate (209 g/100 ml of water at room temperature). Treatment of such solutions with hydrochloric acid gives perchloric acid, precipitating solid sodium chloride:
NaClO4 + HCl → NaCl + HClO4
The concentrated acid can be purified by distillation. The alternative route, which is more direct and avoids salts, entails anodic oxidation of aqueous chlorine at a platinum electrode.
Laboratory preparations
It can be distilled from a solution of potassium perchlorate in sulfuric acid. Treatment of barium perchlorate with sulfuric acid precipitates barium sulfate, leaving perchloric acid. It can also be made by mixing nitric acid with ammonium perchlorate and boiling while adding hydrochloric acid. The reaction gives nitrous oxide and perchloric acid, due to a concurrent reaction involving the ammonium ion; the product can be concentrated and purified significantly by boiling off the remaining nitric and hydrochloric acids.
Properties
Anhydrous perchloric acid is an unstable oily liquid at room temperature. It forms at least five hydrates, several of which have been characterized crystallographically. These solids consist of the perchlorate anion linked via hydrogen bonds to H2O and H3O+ centers. An example is hydronium perchlorate. Perchloric acid forms an azeotrope with water, consisting of about 72.5% perchloric acid. This form of the acid is stable indefinitely and is commercially available. Such solutions are hygroscopic. Thus, if left open to the air, concentrated perchloric acid dilutes itself by absorbing water from the air.
Dehydration of perchloric acid gives the anhydride dichlorine heptoxide:
2 HClO4 + P4O10 → Cl2O7 + H2P4O11
Uses
Perchloric acid is mainly produced as a precursor to ammonium perchlorate, which is used in rocket propellant. The growth in rocketry has led to increased production of perchloric acid. Several million kilograms are produced annually. Perchloric acid is one of the most proven materials for etching of liquid crystal displays and critical electronics applications as well as ore extraction and has unique properties in analytical chemistry. Additionally it is a useful component in etching of chrome.
As an acid
Perchloric acid, a superacid, is one of the strongest Brønsted–Lowry acids. That its pKa is lower than −9 is evidenced by the fact that its monohydrate contains discrete hydronium ions and can be isolated as a stable, crystalline solid, formulated as [H3O+][ClO4−]; more recent estimates place its aqueous pKa lower still. It provides strong acidity with minimal interference because perchlorate is weakly nucleophilic (explaining the high acidity of HClO4). Other acids of noncoordinating anions, such as fluoroboric acid and hexafluorophosphoric acid, are susceptible to hydrolysis, whereas perchloric acid is not. Despite hazards associated with the explosiveness of its salts, the acid is often preferred in certain syntheses. For similar reasons, it is a useful eluent in ion-exchange chromatography. It is also used in electropolishing or the etching of aluminium, molybdenum, and other metals.
In geochemistry, perchloric acid aids in the digestion of silicate mineral samples for analysis, and also for complete digestion of organic matter.
Safety
Given its strong oxidizing properties, perchloric acid is subject to extensive regulations as it can react violently with metals and flammable substances such as wood, plastics, and oils. Work conducted with perchloric acid must be conducted in fume hoods with a wash-down capability to prevent accumulation of oxidisers in the ductwork.
On February 20, 1947, in Los Angeles, California, 17 people were killed and 150 injured in the O'Connor Plating Works disaster. A bath, consisting of over 1,000 litres of approximately 75% perchloric acid and 25% acetic anhydride by volume, which was being used to electro-polish aluminium furniture, exploded. Organic compounds were added to the overheating bath when an iron rack was replaced with one coated with cellulose acetobutyrate (Tenit-2 plastic). A few minutes later the bath exploded. The O'Connor Electro-Plating plant, 25 other buildings, and 40 automobiles were destroyed, and 250 nearby homes were damaged.
See also
Chloric acid
Oxidizing acid
Transition metal perchlorate complexes
References
External links
International Chemical Safety Card 1006
Perchlorates
Halogen oxoacids
Mineral acids
Oxidizing acids
Superacids | Perchloric acid | [
"Chemistry"
] | 1,254 | [
"Acids",
"Inorganic compounds",
"Mineral acids",
"Perchlorates",
"Oxidizing agents",
"Salts",
"Superacids",
"Oxidizing acids"
] |
56,778 | https://en.wikipedia.org/wiki/Azeotrope | An azeotrope, or a constant boiling point mixture, is a mixture of two or more liquids whose proportions cannot be changed by simple distillation. This happens because when an azeotrope is boiled, the vapour has the same proportions of constituents as the unboiled mixture. Knowing an azeotrope's behavior is important for distillation.
Each azeotrope has a characteristic boiling point. The boiling point of an azeotrope is either less than the boiling point temperatures of any of its constituents (a positive azeotrope), or greater than the boiling point of any of its constituents (a negative azeotrope). For both positive and negative azeotropes, it is not possible to separate the components by fractional distillation and azeotropic distillation is usually used instead.
For technical applications, the pressure-temperature-composition behavior of a mixture is the most important, but other important thermophysical properties are also strongly influenced by azeotropy, including the surface tension and transport properties.
Etymology
The term azeotrope is derived from the Greek words ζέειν ('to boil') and τρόπος ('turning') with the prefix α- ('no') to give the overall meaning "no change on boiling". The term was coined in 1911 by the English chemists John Wade and Richard William Merriman. Because their composition is unchanged by distillation, azeotropes are also called (especially in older texts) constant boiling point mixtures.
Types
Positive azeotropes
A solution that shows greater positive deviation from Raoult's law forms a minimum boiling azeotrope at a specific composition. In general, a positive azeotrope boils at a lower temperature than any other ratio of its constituents. Positive azeotropes are also called minimum boiling mixtures or pressure maximum azeotropes. A well-known example of a positive azeotrope is an ethanol–water mixture (obtained by fermentation of sugars) consisting of 95.63% ethanol and 4.37% water (by mass), which boils at 78.2 °C. Ethanol boils at 78.4 °C, water boils at 100 °C, but the azeotrope boils at 78.2 °C, which is lower than either of its constituents. Indeed, 78.2 °C is the minimum temperature at which any ethanol/water solution can boil at atmospheric pressure. Once this composition has been achieved, the liquid and vapour have the same composition, and no further separation occurs.
The boiling and recondensation of a mixture of two solvents are changes of chemical state; as such, they are best illustrated with a phase diagram. If the pressure is held constant, the two variable parameters are the temperature and the composition.
The adjacent diagram shows a positive azeotrope of hypothetical constituents, X and Y. The bottom trace illustrates the boiling temperature of various compositions. Below the bottom trace, only the liquid phase is in equilibrium. The top trace illustrates the vapor composition above the liquid at a given temperature. Above the top trace, only the vapor is in equilibrium. Between the two traces, liquid and vapor phases exist simultaneously in equilibrium: for example, heating a 25% X : 75% Y mixture to temperature AB would generate vapor of composition B over liquid of composition A. The azeotrope is the point on the diagram where the two curves touch. The horizontal and vertical steps show the path of repeated distillations. Point A is the boiling point of a nonazeotropic mixture. The vapor that separates at that temperature has composition B. The shape of the curves requires that the vapor at B be richer in constituent X than the liquid at point A. The vapor is physically separated from the VLE (vapor-liquid equilibrium) system and is cooled to point C, where it condenses. The resulting liquid (point C) is now richer in X than it was at point A. If the collected liquid is boiled again, it progresses to point D, and so on. The stepwise progression shows how repeated distillation can never produce a distillate that is richer in constituent X than the azeotrope. Note that starting to the right of the azeotrope point results in the same stepwise process closing in on the azeotrope point from the other direction.
Negative azeotropes
A solution that shows large negative deviation from Raoult's law forms a maximum boiling azeotrope at a specific composition. Nitric acid and water is an example of this class of azeotrope. This azeotrope has an approximate composition of 68% nitric acid and 32% water by mass, with a boiling point of 120.2 °C at 1 atm. In general, a negative azeotrope boils at a higher temperature than any other ratio of its constituents. Negative azeotropes are also called maximum boiling mixtures or pressure minimum azeotropes. An example of a negative azeotrope is hydrochloric acid at a concentration of 20.2% hydrogen chloride and 79.8% water (by mass). Hydrogen chloride boils at −85 °C and water at 100 °C, but the azeotrope boils at 110 °C, which is higher than either of its constituents. The maximum boiling point of any hydrochloric acid solution is 110 °C. Other examples:
hydrofluoric acid (35.6%) / water, boils at 111.35 °C
nitric acid (68%) / water, boils at 120.2 °C at 1 atm
perchloric acid (71.6%) / water, boils at 203 °C
sulfuric acid (98.3%) / water, boils at 338 °C
The adjacent diagram shows a negative azeotrope of ideal constituents, X and Y. Again the bottom trace illustrates the boiling temperature at various compositions, and again, below the bottom trace the mixture must be entirely liquid phase. The top trace again illustrates the condensation temperature of various compositions, and again, above the top trace the mixture must be entirely vapor phase. The point, A, shown here is a boiling point with a composition chosen very near to the azeotrope. The vapor is collected at the same temperature at point B. That vapor is cooled, condensed, and collected at point C. Because this example is a negative azeotrope rather than a positive one, the distillate is farther from the azeotrope than the original liquid mixture at point A was. So the distillate is poorer in constituent X and richer in constituent Y than the original mixture. Because this process has removed a greater fraction of Y from the liquid than it had originally, the residue must be poorer in Y and richer in X after distillation than before.
If the point, A had been chosen to the right of the azeotrope rather than to the left, the distillate at point C would be farther to the right than A, which is to say that the distillate would be richer in X and poorer in Y than the original mixture. So in this case too, the distillate moves away from the azeotrope and the residue moves toward it. This is characteristic of negative azeotropes. No amount of distillation, however, can make either the distillate or the residue arrive on the opposite side of the azeotrope from the original mixture. This is characteristic of all azeotropes.
Double azeotropes
More complex azeotropes also exist, comprising both a minimum-boiling and a maximum-boiling point. Such a system is called a double azeotrope, and will have two azeotropic compositions and boiling points. Examples are water with N-methylethylenediamine, and benzene with hexafluorobenzene.
Complex systems
Some azeotropes fit into neither the positive nor negative categories. The best known of these is the ternary azeotrope formed by 30% acetone, 47% chloroform, and 23% methanol, which boils at 57.5 °C. Each pair of these constituents forms a binary azeotrope, but chloroform/methanol and acetone/methanol both form positive azeotropes while chloroform/acetone forms a negative azeotrope. The resulting ternary azeotrope is neither positive nor negative. Its boiling point falls between the boiling points of acetone and chloroform, so it is neither a maximum nor a minimum boiling point. This type of system is called a saddle azeotrope. Only systems of three or more constituents can form saddle azeotropes.
Miscibility and zeotropy
If the constituents of a mixture are completely miscible in all proportions with each other, the type of azeotrope is called a homogeneous azeotrope. Homogeneous azeotropes can be of the low-boiling or high-boiling azeotropic type. For example, any amount of ethanol can be mixed with any amount of water to form a homogeneous solution.
If the components of a mixture are not completely miscible, an azeotrope can be found inside the miscibility gap. This type of azeotrope is called a heterogeneous azeotrope or heteroazeotrope. A heteroazeotropic distillation will have two liquid phases. Heterogeneous azeotropes are only known in combination with temperature-minimum azeotropic behavior. For example, if equal volumes of chloroform (water solubility 0.8 g/100 ml at 20 °C) and water are shaken together and then left to stand, the liquid will separate into two layers. Analysis of the layers shows that the top layer is mostly water with a small amount of chloroform dissolved in it, and the bottom layer is mostly chloroform with a small amount of water dissolved in it. If the two layers are heated together, the system of layers will boil at 53.3 °C, which is lower than either the boiling point of chloroform (61.2 °C) or the boiling point of water (100 °C). The vapor will consist of 97.0% chloroform and 3.0% water regardless of how much of each liquid layer is present provided both layers are indeed present. If the vapor is re-condensed, the layers will reform in the condensate, and will do so in a fixed ratio, which in this case is 4.4% of the volume in the top layer and 95.6% in the bottom layer.
Combinations of solvents that do not form an azeotrope when mixed in any proportion are said to be zeotropic. Azeotropes are useful in separating zeotropic mixtures. An example is zeotropic acetic acid and water. It is very difficult to separate out pure acetic acid (boiling point: 118.1 °C): progressive distillations produce drier solutions, but each further distillation becomes less effective at removing the remaining water. Distilling the solution to dry acetic acid is therefore economically impractical. But ethyl acetate forms an azeotrope with water that boils at 70.4 °C. By adding ethyl acetate as an entrainer, it is possible to distill away the azeotrope and leave nearly pure acetic acid as the residue.
Number of constituents
Azeotropes consisting of two constituents are called binary azeotropes, such as diethyl ether (33%) / halothane (66%), a mixture once commonly used in anesthesia. Azeotropes consisting of three constituents are called ternary azeotropes, e.g. acetone / methanol / chloroform. Azeotropes of more than three constituents are also known.
Condition of existence
At an azeotrope, the liquid and vapour have the same composition, which yields the condition γ_i = P / P_i^sat for each component i; this condition relates the activity coefficients in the liquid phase to the total pressure and the vapour pressures of the pure components.
Azeotropes can form only when a mixture deviates from Raoult's law, which, together with Dalton's law (the total pressure being equal to the sum of the partial pressures), describes ideal mixtures in vapour–liquid equilibrium; real mixtures can depart from this behavior.
In other words: Raoult's law predicts the vapor pressures of ideal mixtures as a function of composition ratio. More simply: per Raoult's law molecules of the constituents stick to each other to the same degree as they do to themselves. For example, if the constituents are X and Y, then X sticks to Y with roughly equal energy as X does with X and Y does with Y. A positive deviation from Raoult's law results when the constituents have a disaffinity for each other – that is X sticks to X and Y to Y better than X sticks to Y. Because this results in the mixture having less total affinity of the molecules than the pure constituents, they more readily escape from the stuck-together phase, which is to say the liquid phase, and into the vapor phase. When X sticks to Y more aggressively than X does to X and Y does to Y, the result is a negative deviation from Raoult's law. In this case because the molecules in the mixture are sticking together more than in the pure constituents, they are more reluctant to escape the stuck-together liquid phase.
When the deviation is great enough to cause a local maximum or minimum in the vapor pressure versus mole fraction graph (i.e. at some mole fraction of X in the solution), it is a mathematical consequence of the Gibbs–Duhem equation that at that point the vapor above the solution will have the same composition as the liquid, resulting in an azeotrope.
The adjacent diagram illustrates total vapor pressure of three hypothetical mixtures of constituents, X, and Y. The temperature throughout the plot is assumed to be constant. The center trace is a straight line, which is what Raoult's law predicts for an ideal mixture. In general solely mixtures of chemically similar solvents, such as n-hexane with n-heptane, form nearly ideal mixtures that come close to obeying Raoult's law. The top trace illustrates a nonideal mixture that has a positive deviation from Raoult's law, where the total combined vapor pressure of constituents, X and Y, is greater than what is predicted by Raoult's law. The top trace deviates sufficiently that there is a point on the curve where its tangent is horizontal. Whenever a mixture has a positive deviation and has a point at which the tangent is horizontal, the composition at that point is a positive azeotrope. At that point the total vapor pressure is at a maximum. Likewise the bottom trace illustrates a nonideal mixture that has a negative deviation from Raoult's law, and at the composition where tangent to the trace is horizontal there is a negative azeotrope. This is also the point where total vapor pressure is minimum.
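The emergence of an azeotrope from a positive deviation can be made concrete with a toy calculation. The Python sketch below is our own illustration; the pure-component pressures and the Margules parameter A are made-up numbers, not data for any real solvent pair. It uses the one-parameter Margules model, ln γ1 = A·x2² and ln γ2 = A·x1², and locates the pressure maximum:

import numpy as np

P1_SAT, P2_SAT = 60.0, 45.0  # hypothetical pure vapor pressures, kPa
A = 1.5                       # A > 0: positive deviation from Raoult's law

x1 = np.linspace(0.0, 1.0, 1001)
x2 = 1.0 - x1
P = x1 * np.exp(A * x2**2) * P1_SAT + x2 * np.exp(A * x1**2) * P2_SAT

i = np.argmax(P)
print(f"pressure maximum at x1 = {x1[i]:.3f}, P = {P[i]:.1f} kPa")
# An interior maximum (0 < x1 < 1) marks a positive azeotrope; with A = 0
# the curve reduces to Raoult's straight line and no azeotrope appears.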
Separation
If the two solvents can form a negative azeotrope, then distillation of any mixture of those constituents will result in the residue being closer to the composition at the azeotrope than the original mixture.
For example, if a hydrochloric acid solution contains less than 20.2% hydrogen chloride, boiling the mixture will leave behind a solution that is richer in hydrogen chloride than the original. If the solution initially contains more than 20.2% hydrogen chloride, then boiling will leave behind a solution that is poorer in hydrogen chloride than the original. Boiling any hydrochloric acid solution long enough will cause the solution left behind to approach the azeotropic ratio. On the other hand, if two solvents can form a positive azeotrope, then distillation of any mixture of those constituents will result in the residue being farther from the composition of the azeotrope than the original mixture, and the distillate closer to it. For example, if a 50/50 mixture of ethanol and water is distilled once, the distillate will be 80% ethanol and 20% water, which is closer to the azeotropic mixture than the original, which means the solution left behind will be poorer in ethanol. Distilling the 80/20% mixture produces a distillate that is 87% ethanol and 13% water. Further repeated distillations will produce mixtures that are progressively closer to the azeotropic ratio of 95.5/4.5%. No number of distillations will ever result in a distillate that exceeds the azeotropic ratio. Likewise, when distilling a mixture of ethanol and water that is richer in ethanol than the azeotrope, the distillate (contrary to intuition) will be poorer in ethanol than the original but still richer than the azeotrope.
Distillation is one of the primary tools that chemists and chemical engineers use to separate mixtures into their constituents. Because distillation cannot separate the constituents of an azeotrope, the separation of azeotropic mixtures (also called azeotrope breaking) is a topic of considerable interest. Indeed, this difficulty led some early investigators to believe that azeotropes were actually compounds of their constituents. But there are two reasons for believing that this is not the case. One is that the molar ratio of the constituents of an azeotrope is not generally the ratio of small integers. For example, the azeotrope formed by water and acetonitrile contains 2.253 moles (or 9/4 with a relative error of just 2%) of acetonitrile for each mole of water. A more compelling reason for believing that azeotropes are not compounds is, as discussed in the last section, that the composition of an azeotrope can be affected by pressure. Contrast that with a true compound, carbon dioxide for example, which is two moles of oxygen for each mole of carbon no matter what pressure the gas is observed at. That azeotropic composition can be affected by pressure suggests a means by which such a mixture can be separated.
Pressure swing distillation
A hypothetical azeotrope of constituents X and Y is shown in the adjacent diagram, which shows two sets of curves on a phase diagram: one at an arbitrarily chosen low pressure and another at an arbitrarily chosen, but higher, pressure. The composition of the azeotrope is substantially different between the high- and low-pressure plots: higher in X for the high-pressure system. The goal is to separate X in as high a concentration as possible starting from point A. At the low pressure, it is possible by progressive distillation to reach a distillate at the point B, which is on the same side of the azeotrope as A. Successive distillation steps near the azeotropic composition exhibit very little difference in boiling temperature. If this distillate is now exposed to the high pressure, it boils at point C. From C, by progressive distillation it is possible to reach a distillate at the point D, which is on the same side of the high-pressure azeotrope as C. If that distillate is then exposed again to the low pressure, it boils at point E, which is on the opposite side of the low-pressure azeotrope from A. So, by means of the pressure swing, it is possible to cross over the low-pressure azeotrope.
When the solution is boiled at point E, the distillate is poorer in X than the residue, so the residue becomes progressively richer in X. Indeed, progressive distillation can produce a residue as rich in X as is required.
In summary:
Low-pressure rectification (A to B)
High-pressure rectification (C to D)
Low-pressure stripping (E to target purity)
Rectification: the distillate, or "tops", is retained and exhibits an increasingly lower boiling point.
Stripping: the residue, or "bottoms", is retained and exhibits an increasingly higher boiling point.
A mixture of 5% water with 95% tetrahydrofuran is an example of an azeotrope that can be economically separated using a pressure swing: a swing in this case between 1 atm and 8 atm. By contrast, the composition of the water to ethanol azeotrope discussed earlier is not affected enough by pressure to be easily separated using pressure swings; instead, an entrainer may be added that modifies the azeotropic composition and exhibits immiscibility with one of the components, or extractive distillation may be used.
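The stepwise walk from A to E can be made concrete with a small numerical sketch. The azeotrope compositions below (mole fraction of X at each pressure) are invented purely for illustration; real values would come from vapor–liquid equilibrium data for the system in question.

# Illustrative pressure-swing walk for a hypothetical X/Y system.
AZEO_LOW = 0.60    # assumed azeotrope (mole fraction X) at low pressure
AZEO_HIGH = 0.75   # assumed azeotrope at high pressure (richer in X)

def rectify_toward(azeotrope, feed, approach=0.02):
    # Progressive distillation drives the distillate toward the azeotrope
    # at the current pressure, stopping just short of it.
    if feed < azeotrope:
        return azeotrope - approach
    return azeotrope + approach

a = 0.40                           # point A: initial feed
b = rectify_toward(AZEO_LOW, a)    # point B: just below the low-P azeotrope
d = rectify_toward(AZEO_HIGH, b)   # point D: just below the high-P azeotrope
# D (0.73) lies above the low-pressure azeotrope (0.60): the swing has
# crossed it, so low-pressure stripping can now enrich the residue in X.
print(f"A={a:.2f}  B={b:.2f}  D={d:.2f}  low-P azeotrope={AZEO_LOW:.2f}")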
Azeotropic distillation
Other methods of separation involve introducing an additional agent, called an entrainer, that will affect the volatility of one of the azeotrope constituents more than another. When an entrainer is added to a binary azeotrope to form a ternary azeotrope, and the resulting mixture distilled, the method is called azeotropic distillation. The best known example is adding benzene or cyclohexane to the water/ethanol azeotrope. With cyclohexane as the entrainer, the ternary azeotrope is 7% water, 17% ethanol, and 76% cyclohexane, and boils at 62.1 °C. Just enough cyclohexane is added to the water/ethanol azeotrope to entrain all of the water into the ternary azeotrope. When the mixture is then boiled, the ternary azeotrope vaporizes, leaving a residue composed almost entirely of the excess ethanol.
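The amount of entrainer needed follows from a simple mass balance on the ternary azeotrope. A sketch using the compositions quoted above; the 100 kg batch size is an arbitrary assumption:

# Mass balance: cyclohexane needed to carry off the water in 100 kg of
# the ethanol/water azeotrope (95.5% ethanol, 4.5% water by mass).
feed_water = 100 * 0.045          # 4.5 kg of water to remove
# Ternary azeotrope (mass fractions): 7% water, 17% ethanol, 76% cyclohexane.
ternary = feed_water / 0.07       # total ternary azeotrope boiled off
cyclohexane = ternary * 0.76      # entrainer required
ethanol_lost = ternary * 0.17     # ethanol carried off with the vapor
print(f"cyclohexane needed: {cyclohexane:.1f} kg")                      # ~48.9 kg
print(f"ethanol lost to the ternary azeotrope: {ethanol_lost:.1f} kg")  # ~10.9 kg
print(f"ethanol left as residue: {100 * 0.955 - ethanol_lost:.1f} kg")  # ~84.6 kg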
Chemical action separation
Another type of entrainer is one that has a strong chemical affinity for one of the constituents. Using again the example of the water/ethanol azeotrope, the liquid can be shaken with calcium oxide, which reacts strongly with water to form the nonvolatile compound, calcium hydroxide. Nearly all of the calcium hydroxide can be separated by filtration and the filtrate redistilled to obtain 100% pure ethanol. A more extreme example is the azeotrope of 1.2% water with 98.8% diethyl ether. Ether holds the last bit of water so tenaciously that only a very powerful desiccant such as sodium metal added to the liquid phase can result in completely dry ether. Anhydrous calcium chloride is used as a desiccant for drying a wide variety of solvents since it is inexpensive and does not react with most nonaqueous solvents. Chloroform is an example of a solvent that can be effectively dried using calcium chloride.
Distillation using a dissolved salt
When a salt is dissolved in a solvent, it always has the effect of raising the boiling point of that solvent – that is, it decreases the volatility of the solvent. When the salt is readily soluble in one constituent of a mixture but not in another, the volatility of the constituent in which it is soluble is decreased and the other constituent is unaffected. In this way, for example, it is possible to break the water/ethanol azeotrope by dissolving potassium acetate in it and distilling the result.
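The size of the effect can be estimated from the usual ebullioscopic relation ΔTb = i·Kb·b. A minimal sketch, taking the van 't Hoff factor as the idealized value of 2 for fully dissociated potassium acetate and an arbitrary illustrative molality:

# Ebullioscopic estimate of the boiling-point elevation of water
# caused by a dissolved salt: delta_Tb = i * Kb * b
KB_WATER = 0.512   # K·kg/mol, ebullioscopic constant of water
i = 2              # idealized van 't Hoff factor for potassium acetate
molality = 3.0     # mol of salt per kg of water (assumed for illustration)
delta_tb = i * KB_WATER * molality
print(f"boiling point elevation: {delta_tb:.2f} K")  # ~3.07 K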
Extractive distillation
Extractive distillation is similar to azeotropic distillation, except in this case the entrainer is less volatile than any of the azeotrope's constituents. For example, the azeotrope of 20% acetone with 80% chloroform can be broken by adding water and distilling the result. The water forms a separate layer in which the acetone preferentially dissolves. The result is that the distillate is richer in chloroform than the original azeotrope.
Pervaporation and other membrane methods
The pervaporation method uses a membrane that is more permeable to the one constituent than to another to separate the constituents of an azeotrope as it passes from liquid to vapor phase. The membrane is rigged to lie between the liquid and vapor phases. Another membrane method is vapor permeation, where the constituents pass through the membrane entirely in the vapor phase. In all membrane methods, the membrane separates the fluid passing through it into a permeate (that which passes through) and a retentate (that which is left behind). When the membrane is chosen so that it is more permeable to one constituent than another, then the permeate will be richer in that first constituent than the retentate.
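A membrane's preference for one constituent is commonly summarized by a separation factor α; for a binary mixture the permeate composition can then be sketched with the standard relation y = αx / (1 + (α − 1)x). The selectivity below is an assumed value, chosen only for illustration:

# Permeate enrichment across a selective membrane for a binary mixture.
def permeate_fraction(x, alpha=20.0):
    # x: fraction of the favored constituent on the feed side,
    # alpha: membrane separation factor (assumed, for illustration).
    return alpha * x / (1 + (alpha - 1) * x)

x = 0.045  # e.g. the water fraction of the water/ethanol azeotrope
y = permeate_fraction(x)
print(f"permeate fraction: {y:.2f}")  # ~0.49: far richer in water
# Because the membrane, unlike boiling, is indifferent to the azeotrope,
# the permeate composition can cross the azeotropic ratio.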
See also
Azeotrope tables
Bancroft point
Batch distillation
Ebulliometer
Eutectic system
References
External links
Azeotrope defined with a limerick.
Prediction of azeotropic behaviour by the inversion of functions from the plane to the plane
Azeotropes on YouTube
Chemical engineering thermodynamics
Phase transitions
Separation processes | Azeotrope | [
"Physics",
"Chemistry",
"Engineering"
] | 5,253 | [
"Physical phenomena",
"Phase transitions",
"Separation processes",
"Chemical engineering",
"Critical phenomena",
"Phases of matter",
"Chemical engineering thermodynamics",
"nan",
"Statistical mechanics",
"Matter"
] |
56,803 | https://en.wikipedia.org/wiki/Mercury%28II%29%20fulminate | Mercury(II) fulminate, or Hg(CNO)2, is a primary explosive. It is highly sensitive to friction, heat and shock and is mainly used as a trigger for other explosives in percussion caps and detonators. Mercury(II) cyanate, though its chemical formula is identical, has a different atomic arrangement, making the cyanate and fulminate anionic isomers.
First used as a priming composition in small copper caps beginning in the 1820s, mercury fulminate quickly replaced flints as a means to ignite black powder charges in muzzle-loading firearms. Later, during the late 19th century and most of the 20th century, mercury fulminate became widely used in primers for self-contained rifle and pistol ammunition; it was the only practical detonator for firing projectiles until the early 20th century. Mercury fulminate has the distinct advantage over potassium chlorate of being non-corrosive, but it is known to weaken over time as it decomposes into its constituent elements. The reduced mercury that results forms amalgams with cartridge brass, weakening it as well. Today, mercury fulminate has been replaced in primers by more efficient chemical substances. These are non-corrosive, less toxic, and more stable over time; they include lead azide, lead styphnate, and tetrazene derivatives. In addition, none of these compounds requires mercury for manufacture, supplies of which can be unreliable in wartime.
Preparation
Mercury(II) fulminate is prepared by dissolving mercury in nitric acid and adding ethanol to the solution. It was first prepared by Edward Charles Howard in 1800. The crystal structure of this compound was determined only in 2007.
Silver fulminate can be prepared in a similar way, but this salt is even more unstable than mercury fulminate; it can explode even under water and is impossible to accumulate in large amounts because it detonates under its own weight.
Decomposition
The thermal decomposition of mercury(II) fulminate can begin at temperatures as low as 100 °C, though it proceeds at a much higher rate with increasing temperature.
Possible reactions for the decomposition of mercury(II) fulminate yield carbon dioxide gas, nitrogen gas, and various relatively stable mercury compounds:
4 Hg(CNO)2 → 2 CO2 + N2 + HgO + 3 Hg(OCN)CN
Hg(CNO)2 → 2 CO + N2 + Hg
Hg(CNO)2 → Hg(OCN)2 (cyanate and/or isocyanate)
2 Hg(CNO)2 → 2 CO2 + N2 + Hg + Hg(CN)2 (mercury(II) cyanide)
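Each proposed channel can be checked for atom balance with a short sketch; the element counts below are written out by hand rather than parsed from the formulas:

from collections import Counter

# Element counts per formula unit, written out by hand.
FULMINATE = Counter(Hg=1, C=2, N=2, O=2)   # Hg(CNO)2
CO2 = Counter(C=1, O=2)
N2 = Counter(N=2)
HGO = Counter(Hg=1, O=1)
HG_OCN_CN = Counter(Hg=1, O=1, C=2, N=2)   # Hg(OCN)CN

def side(terms):
    # Sum element counts over (coefficient, species) pairs.
    total = Counter()
    for coeff, species in terms:
        for element, n in species.items():
            total[element] += coeff * n
    return total

# 4 Hg(CNO)2 -> 2 CO2 + N2 + HgO + 3 Hg(OCN)CN
lhs = side([(4, FULMINATE)])
rhs = side([(2, CO2), (1, N2), (1, HGO), (3, HG_OCN_CN)])
print(lhs == rhs)  # True: the channel is atom-balanced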
See also
Fulminic acid
Potassium fulminate
References
External links
National Pollutant Inventory - Mercury and compounds Fact Sheet
Mercury(II) compounds
Fulminates
Explosive chemicals | Mercury(II) fulminate | [
"Chemistry"
] | 616 | [
"Explosive chemicals",
"Fulminates"
] |
56,825 | https://en.wikipedia.org/wiki/Eating%20disorder | An eating disorder is a mental disorder defined by abnormal eating behaviors that adversely affect a person's physical or mental health. These behaviors may include eating either too much or too little. Types of eating disorders include binge eating disorder, where the patient keeps eating large amounts in a short period of time typically while not being hungry; anorexia nervosa, where the person has an intense fear of gaining weight and restricts food or overexercises to manage this fear; bulimia nervosa, where individuals eat a large quantity (binging) then try to rid themselves of the food (purging); pica, where the patient eats non-food items; rumination syndrome, where the patient regurgitates undigested or minimally digested food; avoidant/restrictive food intake disorder (ARFID), where people have a reduced or selective food intake due to some psychological reasons; and a group of other specified feeding or eating disorders. Anxiety disorders, depression and substance abuse are common among people with eating disorders. These disorders do not include obesity. People often experience comorbidity between an eating disorder and OCD. It is estimated 20–60% of patients with an ED have a history of OCD.
The causes of eating disorders are not clear, although both biological and environmental factors appear to play a role. Cultural idealization of thinness is believed to contribute to some eating disorders. Individuals who have experienced sexual abuse are also more likely to develop eating disorders. Some disorders such as pica and rumination disorder occur more often in people with intellectual disabilities.
Treatment can be effective for many eating disorders. Treatment varies by disorder and may involve counseling, dietary advice, reducing excessive exercise, and the reduction of efforts to eliminate food. Medications may be used to help with some of the associated symptoms. Hospitalization may be needed in more serious cases. About 70% of people with anorexia and 50% of people with bulimia recover within five years. Only 10% of people with eating disorders receive treatment, and of those, approximately 80% do not receive the proper care. Many are sent home weeks earlier than the recommended stay and are not provided with the necessary treatment. Recovery from binge eating disorder is less clear and estimated at 20% to 60%. Both anorexia and bulimia increase the risk of death. When people experience comorbidity with an eating disorder and OCD, certain aspects of treatment can be negatively impacted. OCD can make it harder to recover from obsession over weight and shape, body dissatisfaction, and body checking. This is in part because ED cognitions serve a similar purpose to OCD obsessions and compulsions (e.g., safety behaviors as temporary relief from anxiety). Research shows OCD does not have an impact on the BMI of patients during treatment.
Estimates of the prevalence of eating disorders vary widely, reflecting differences in gender, age, and culture as well as methods used for diagnosis and measurement.
In the developed world, anorexia affects about 0.4% and bulimia affects about 1.3% of young women in a given year. Binge eating disorder affects about 1.6% of women and 0.8% of men in a given year. According to one analysis, the percent of women who will have anorexia at some point in their lives may be up to 4%, or up to 2% for bulimia and binge eating disorders. Rates of eating disorders appear to be lower in less developed countries. Anorexia and bulimia occur nearly ten times more often in females than males. The typical onset of eating disorders is in late childhood to early adulthood. Rates of other eating disorders are not clear.
Classification
ICD and DSM diagnoses
These eating disorders are specified as mental disorders in standard medical manuals, including the ICD-10 and the DSM-5.
Anorexia nervosa (AN) is the restriction of energy intake relative to requirements, leading to significantly low body weight in the context of age, sex, developmental trajectory, and physical health. It is accompanied by an intense fear of gaining weight or becoming fat, as well as a disturbance in the way one experiences and appraises their body weight or shape. There are two subtypes of AN: the restricting type, and the binge-eating/purging type. The restricting type describes presentations in which weight loss is attained through dieting, fasting, and/or excessive exercise, with an absence of binge/purge behaviors. The binge-eating/purging type describes presentations in which the individual with the condition has engaged in recurrent episodes of binge-eating and purging behavior, such as self-induced vomiting, misuse of laxatives, and diuretics.
Pubertal and post-pubertal females with anorexia often experience amenorrhea, that is the loss of menstrual periods, due to the extreme weight loss these individuals face. Although amenorrhea was a required criterion for a diagnosis of anorexia in the DSM-IV, it was dropped in the DSM-5 due to its exclusive nature, as male, post-menopause women, or individuals who do not menstruate for other reasons would fail to meet this criterion. Females with bulimia may also experience amenorrhea, although the cause is not clear.
Bulimia nervosa (BN) is characterized by recurrent binge eating followed by compensatory behaviors such as purging (self-induced vomiting, eating to the point of vomiting, excessive use of laxatives/diuretics, or excessive exercise). Fasting may also be used as a method of purging following a binge. However, unlike anorexia nervosa, body weight is maintained at or above a minimally normal level. Severity of BN is determined by the number of episodes of inappropriate compensatory behaviors per week.
Binge eating disorder (BED) is characterized by recurrent episodes of binge eating without use of inappropriate compensatory behaviors that are present in BN and AN binge-eating/purging subtype. Binge eating episodes are associated with eating much more rapidly than normal, eating until feeling uncomfortably full, eating large amounts of food when not feeling physically hungry, eating alone because of feeling embarrassed by how much one is eating, and/or feeling disgusted with oneself, depressed or very guilty after eating. For a BED diagnosis to be given, marked distress regarding binge eating must be present, and the binge eating must occur an average of once a week for 3 months. Severity of BED is determined by the number of binge eating episodes per week.
Pica is the persistent eating of nonnutritive, nonfood substances in a way that is not developmentally appropriate or culturally supported. Although substances consumed vary with age and availability, paper, soap, hair, chalk, paint, and clay are among the most commonly consumed in those with a pica diagnosis. There are multiple causes for the onset of pica, including iron-deficiency anemia, malnutrition, and pregnancy, and pica often occurs in tandem with other mental health disorders associated with impaired function, such as intellectual disability, autism spectrum disorder, and schizophrenia. In order for a diagnosis of pica to be warranted, behaviors must last for at least one month.
Rumination disorder encompasses the repeated regurgitation of food, which may be re-chewed, re-swallowed, or spit out. For this diagnosis to be warranted, behaviors must persist for at least one month, and regurgitation of food cannot be attributed to another medical condition. Additionally, rumination disorder is distinct from AN, BN, BED, and ARFID, and thus cannot occur during the course of one of these illnesses.
Avoidant/restrictive food intake disorder (ARFID) is a feeding or eating disturbance, such as a lack of interest in eating food, avoidance based on sensory characteristics of food, or concern about aversive consequences of eating, that prevents one from meeting nutritional energy needs. It is frequently associated with weight loss, nutritional deficiency, or failure to meet growth trajectories. Notably, ARFID is distinguishable from AN and BN in that there is no evidence of a disturbance in the way in which one's body weight or shape is experienced. The disorder is not better explained by lack of available food, cultural practices, a concurrent medical condition, or another mental disorder.
Other Specified Feeding or Eating Disorder (OSFED) is an eating or feeding disorder that does not meet full DSM-5 criteria for AN, BN, or BED. Examples of otherwise-specified eating disorders include individuals with atypical anorexia nervosa, who meet all criteria for AN except being underweight despite substantial weight loss; atypical bulimia nervosa, who meet all criteria for BN except that bulimic behaviors are less frequent or have not been ongoing for long enough; purging disorder; and night eating syndrome.
Unspecified Feeding or Eating Disorder (USFED) describes feeding or eating disturbances that cause marked distress and impairment in important areas of functioning but that do not meet the full criteria for any of the other diagnoses. The specific reason the presentation does not meet criteria for a specified disorder is not given. For example, an USFED diagnosis may be given when there is insufficient information to make a more specific diagnosis, such as in an emergency room setting.
Other
Compulsive overeating, which may include habitual "grazing" of food or episodes of binge eating without feelings of guilt.
Diabulimia, which is characterized by the deliberate manipulation of insulin levels by diabetics in an effort to control their weight.
Drunkorexia, which is commonly characterized by purposely restricting food intake in order to reserve food calories for alcoholic calories, exercising excessively in order to burn calories from drinking, and over-drinking alcohol in order to purge previously consumed food.
Food maintenance, which is characterized by a set of aberrant eating behaviors of children in foster care.
Night eating syndrome, which is characterized by nocturnal hyperphagia (consumption of 25% or more of the total daily calories after the evening meal) with nocturnal ingestions, insomnia, loss of morning appetite and depression.
Nocturnal sleep-related eating disorder, which is a parasomnia characterized by eating, habitually out-of-control, while in a state of NREM sleep, with no memory of this the next morning.
Gourmand syndrome, a rare condition occurring after damage to the frontal lobe. Individuals develop an obsessive focus on fine foods.
Orthorexia nervosa, a term used by Steven Bratman to describe an obsession with a "pure" diet, in which a person develops an obsession with avoiding unhealthy foods to the point where it interferes with the person's life.
Klüver-Bucy syndrome, caused by bilateral lesions of the medial temporal lobe, includes compulsive eating, hypersexuality, hyperorality, visual agnosia, and docility.
Prader-Willi syndrome, a genetic disorder associated with insatiable appetite and morbid obesity.
Pregorexia, which is characterized by extreme dieting and over-exercising in order to control pregnancy weight gain. Prenatal undernutrition is associated with low birth weight, coronary heart disease, type 2 diabetes, stroke, hypertension, cardiovascular disease risk, and depression.
Muscle dysmorphia is characterized by appearance preoccupation that one's own body is too small, too skinny, insufficiently muscular, or insufficiently lean. Muscle dysmorphia affects mostly males.
Purging disorder, characterized by recurrent purging behavior to influence weight or shape in the absence of binge eating. It is more properly a disorder of elimination rather than an eating disorder.
Symptoms and long-term effects
Symptoms and complications vary according to the nature and severity of the eating disorder:
Associated physical symptoms of eating disorders include weakness, fatigue, sensitivity to cold, reduced beard growth in men, reduction in waking erections, reduced libido, weight loss and growth failure.
Frequent vomiting, which may cause acid reflux or entry of acidic gastric material into the laryngoesophageal tract, can lead to unexplained hoarseness. As such, individuals who induce vomiting as part of their eating disorder, such as those with anorexia nervosa, binge eating-purging type or those with purging-type bulimia nervosa, are at risk for acid reflux.
Polycystic ovary syndrome (PCOS) is the most common endocrine disorder to affect women. Though often associated with obesity it can occur in normal weight individuals. PCOS has been associated with binge eating and bulimic behavior.
Other possible manifestations are dry lips, burning tongue, parotid gland swelling, and temporomandibular disorders.
Psychopathology
The psychopathology of eating disorders centers around body image disturbance, such as concerns with weight and shape; self-worth being too dependent on weight and shape; fear of gaining weight even when underweight; denial of how severe the symptoms are and a distortion in the way the body is experienced.
The main psychopathological features of anorexia were outlined in 1982 as problems in body perception, emotion processing and interpersonal relationships. Women with eating disorders have greater body dissatisfaction. This impairment of body perception involves vision, proprioception, interoception and tactile perception. There is an alteration in integration of signals in which body parts are experienced as dissociated from the body as a whole. Bruch once theorized that difficult early relationships were related to the cause of anorexia and how primary caregivers can contribute to the onset of the illness.
A prominent feature of bulimia is dissatisfaction with body shape. However, dissatisfaction with body shape is not of diagnostic significance, as it is sometimes present in individuals with no eating disorder. This highly labile feature can fluctuate depending on changes in shape and weight, the degree of control over eating, and mood. In contrast, a necessary diagnostic feature for anorexia nervosa and bulimia nervosa is having overvalued ideas about shape and weight that are relatively stable and partially related to the patients' low self-esteem.
Pro-ana subculture
Pro-ana refers to the promotion of behaviors related to the eating disorder anorexia nervosa. Several websites promote eating disorders, and can provide a means for individuals to communicate in order to maintain eating disorders. Members of these websites typically feel that their eating disorder is the only aspect of a chaotic life that they can control. These websites are often interactive and have discussion boards where individuals can share strategies, ideas, and experiences, such as diet and exercise plans that achieve extremely low weights. A study comparing the personal web-blogs that were pro-eating disorder with those focused on recovery found that the pro-eating disorder blogs contained language reflecting lower cognitive processing, used a more closed-minded writing style, contained less emotional expression and fewer social references, and focused more on eating-related contents than did the recovery blogs.
Causes
There is no single cause of eating disorders.
Many people with eating disorders also have body image disturbance and a comorbid body dysmorphic disorder (BDD), leading them to an altered perception of their body. Studies have found that a high proportion of individuals diagnosed with body dysmorphic disorder also had some type of eating disorder, with 15% of individuals having either anorexia nervosa or bulimia nervosa. This link between body dysmorphic disorder and anorexia stems from the fact that both BDD and anorexia nervosa are characterized by a preoccupation with physical appearance and a distortion of body image.
There are also many other possibilities such as environmental, social and interpersonal issues that could promote and sustain these illnesses. The media are also often blamed for the rise in the incidence of eating disorders, because media images of the idealized slim physical shape of people such as models and celebrities motivate or even pressure people to attempt to achieve slimness themselves. The media are accused of distorting reality, in the sense that people portrayed in the media are either naturally thin, and thus unrepresentative of normality, or unnaturally thin, forcing their bodies to look like the ideal image by putting excessive pressure on themselves to look a certain way. While past findings have described eating disorders as primarily psychological, environmental, and sociocultural, further studies have uncovered evidence that there is a genetic component.
Genetics
Numerous studies show a genetic predisposition toward eating disorders. Twin studies have found slight instances of genetic variance when considering the different criteria of both anorexia nervosa and bulimia nervosa as endophenotypes contributing to the disorders as a whole. A genetic link has been found on chromosome 1 in multiple family members of an individual with anorexia nervosa. An individual who is a first-degree relative of someone who has had or currently has an eating disorder is seven to twelve times more likely to have an eating disorder themselves. Twin studies also show that at least a portion of the vulnerability to develop eating disorders can be inherited, and there is evidence to show that there is a genetic locus that shows susceptibility for developing anorexia nervosa. About 50% of eating disorder cases are attributable to genetics. Other cases are due to external reasons or developmental problems. There are also other neurobiological factors at play tied to emotional reactivity and impulsivity that could lead to binging and purging behaviors.
Epigenetics mechanisms are means by which environmental effects alter gene expression via methods such as DNA methylation; these are independent of and do not alter the underlying DNA sequence. They are heritable, but also may occur throughout the lifespan, and are potentially reversible. Dysregulation of dopaminergic neurotransmission due to epigenetic mechanisms has been implicated in various eating disorders. Other candidate genes for epigenetic studies in eating disorders include leptin, pro-opiomelanocortin (POMC) and brain-derived neurotrophic factor (BDNF).
A genetic correlation has been found between anorexia nervosa and OCD, suggesting a shared etiology. First- and second-degree relatives of probands with OCD have a greater chance of developing anorexia nervosa as genetic relatedness increases.
Psychological
Eating disorders are classified as Axis I disorders in the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) published by the American Psychiatric Association. There are various other psychological issues that may factor into eating disorders; some fulfill the criteria for a separate Axis I diagnosis or a personality disorder, which is coded Axis II and thus considered comorbid to the diagnosed eating disorder. Axis II disorders are subtyped into 3 "clusters": A, B and C. The causality between personality disorders and eating disorders has yet to be fully established. Some people have a previous disorder which may increase their vulnerability to developing an eating disorder. Some develop them afterwards. The severity and type of eating disorder symptoms have been shown to affect comorbidity. There has been controversy over various editions of the DSM diagnostic criteria, including the latest edition, DSM-5, published in 2013.
Cognitive attentional bias
Attentional bias may have an effect on eating disorders. Attentional bias is the preferential attention toward certain types of information in the environment while simultaneously ignoring others. Individuals with eating disorders can be thought to have schemas, knowledge structures, which are dysfunctional as they may bias judgement, thought, and behaviour in a manner that is self-destructive or maladaptive. They may have developed a disordered schema which focuses on body size and eating. Thus, this information is given the highest level of importance and overvalued among other cognitive structures. Researchers have found that people who have eating disorders tend to pay more attention to stimuli related to food. For people struggling to recover from an eating disorder or addiction, this tendency to pay attention to certain signals while discounting others can make recovery that much more difficult.
Studies have utilized the Stroop task to assess the probable effect of attentional bias on eating disorders. This may involve separating food and eating words from body shape and weight words. Such studies have found that anorexic subjects were slower to colour name food related words than control subjects. Other studies have noted that individuals with eating disorders have significant attentional biases associated with eating and weight stimuli.
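In such studies, the attentional bias is typically quantified as the difference in mean colour-naming latency between word categories. A minimal sketch with invented reaction times, shown purely to illustrate the computation:

from statistics import mean

# Hypothetical colour-naming latencies (ms) for one participant.
food_words = [742, 768, 731, 755, 749]      # food/eating-related words
neutral_words = [688, 674, 701, 690, 683]   # matched neutral words

interference = mean(food_words) - mean(neutral_words)
print(f"Stroop interference: {interference:.0f} ms slower on food words")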
Personality traits
There are various childhood personality traits associated with the development of eating disorders, such as perfectionism and neuroticism. These personality traits are found to link eating disorders and OCD. During adolescence these traits may become intensified due to a variety of physiological and cultural influences such as the hormonal changes associated with puberty, stress related to the approaching demands of maturity, and socio-cultural influences and perceived expectations, especially in areas that concern body image. Eating disorders have been associated with a fragile sense of self and with disordered mentalization. Many personality traits have a genetic component and are highly heritable. Maladaptive levels of certain traits may be acquired as a result of anoxic or traumatic brain injury, neurodegenerative diseases such as Parkinson's disease, neurotoxicity such as lead exposure, bacterial infection such as Lyme disease or parasitic infection such as Toxoplasma gondii, as well as hormonal influences. While studies are still continuing via the use of various imaging techniques such as fMRI, these traits have been shown to originate in various regions of the brain such as the amygdala and the prefrontal cortex. Disorders in the prefrontal cortex and the executive functioning system have been shown to affect eating behavior.
Celiac disease
People with gastrointestinal disorders may be at greater risk of developing disordered eating practices than the general population, principally restrictive eating disturbances. An association of anorexia nervosa with celiac disease has been found. The role that gastrointestinal symptoms play in the development of eating disorders seems rather complex. Some authors report that unresolved symptoms prior to gastrointestinal disease diagnosis may create a food aversion in these persons, causing alterations to their eating patterns. Other authors report that greater symptoms throughout their diagnosis led to greater risk. It has been documented that some people with celiac disease, irritable bowel syndrome or inflammatory bowel disease who are not conscious of the importance of strictly following their diet choose to consume their trigger foods to promote weight loss. On the other hand, individuals with good dietary management may develop anxiety, food aversion and eating disorders because of concerns around cross contamination of their foods. Some authors suggest that medical professionals should evaluate the presence of an unrecognized celiac disease in all people with eating disorders, especially if they present any gastrointestinal symptom (such as decreased appetite, abdominal pain, bloating, distension, vomiting, diarrhea or constipation), weight loss, or growth failure; and also routinely ask celiac patients about weight or body shape concerns, dieting or vomiting for weight control, to evaluate the possible presence of eating disorders, especially in women.
Environmental influences
Child maltreatment
Child abuse, which encompasses physical, psychological, and sexual abuse, as well as neglect, has been shown to approximately triple the risk of an eating disorder. Sexual abuse appears to double the risk of bulimia; however, the association is less clear for anorexia. The risk for individuals developing eating disorders increases if the individual grew up in an invalidating environment where displays of emotions were often punished. Abuse during childhood can also produce intolerable, difficult emotions that cannot be expressed in a healthy manner. Eating disorders can then serve as an escape coping mechanism, a means to control and avoid overwhelming negative emotions and feelings. Those who report physical or sexual maltreatment as a child are at an increased risk of developing an eating disorder.
Social isolation
Social isolation has been shown to have a deleterious effect on an individual's physical and emotional well-being. Those that are socially isolated have a higher mortality rate in general as compared to individuals that have established social relationships. This effect on mortality is markedly increased in those with pre-existing medical or psychiatric conditions, and has been especially noted in cases of coronary heart disease. "The magnitude of risk associated with social isolation is comparable with that of cigarette smoking and other major biomedical and psychosocial risk factors." (Brummett et al.)
Social isolation can be inherently stressful, depressing and anxiety-provoking. In an attempt to ameliorate these distressful feelings an individual may engage in emotional eating in which food serves as a source of comfort. The loneliness of social isolation and the inherent stressors thus associated have been implicated as triggering factors in binge eating as well.
Waller, Kennerley and Ohanian (2007) argued that both bingeing–vomiting and restriction are emotion suppression strategies, but they are just utilized at different times. For example, restriction is used to pre-empt any emotion activation, while bingeing–vomiting is used after an emotion has been activated.
Parental influence
Parental influence has been shown to be an intrinsic component in the development of eating behaviors of children. This influence is manifested and shaped by a variety of diverse factors such as familial genetic predisposition, dietary choices as dictated by cultural or ethnic preferences, the parents' own body shape, how they talk about their own body, and eating patterns, the degree of involvement and expectations of their children's eating behavior as well as the interpersonal relationship of parent and child. It is also influenced by the general psychosocial climate of the home and whether a nurturing stable environment is present. It has been shown that maladaptive parental behavior has an important role in the development of eating disorders. As to the more subtle aspects of parental influence, it has been shown that eating patterns are established in early childhood and that children should be allowed to decide when their appetite is satisfied as early as the age of two. A direct link has been shown between obesity and parental pressure to eat more.
Coercive tactics in regard to diet have not been proven to be efficacious in controlling a child's eating behavior. Affection and attention have been shown to affect the degree of a child's finickiness and their acceptance of a more varied diet.
Adams and Crane (1980), have shown that parents are influenced by stereotypes that influence their perception of their child's body. The conveyance of these negative stereotypes also affects the child's own body image and satisfaction. Hilde Bruch, a pioneer in the field of studying eating disorders, asserts that anorexia nervosa often occurs in girls who are high achievers, obedient, and always trying to please their parents. Their parents have a tendency to be over-controlling and fail to encourage the expression of emotions, inhibiting daughters from accepting their own feelings and desires. Adolescent females in these overbearing families lack the ability to be independent from their families, yet realize the need to, often resulting in rebellion. Controlling their food intake may make them feel better, as it provides them with a sense of control.
Negative parental body-talk, meaning when a parent comments on their own weight, shape or size, is strongly correlated with disordered eating in their children. Children whose parents engage in self-talk about their weight frequently are three times as likely to practice extreme weight control behaviors such as disordered eating, than children who do not overhear negative parental body-talk. Additionally, negative body-talk from mothers is explicitly correlated with disordered eating in adolescent girls.
Peer pressure
In various studies such as one conducted by The McKnight Investigators, peer pressure was shown to be a significant contributor to body image concerns and attitudes toward eating among subjects in their teens and early twenties.
Eleanor Mackey and co-author, Annette M. La Greca of the University of Miami, studied 236 teen girls from public high schools in southeast Florida. "Teen girls' concerns about their own weight, about how they appear to others and their perceptions that their peers want them to be thin are significantly related to weight-control behavior", says psychologist Eleanor Mackey of the Children's National Medical Center in Washington and lead author of the study. "Those are really important."
According to one study, 40% of 9- and 10-year-old girls are already trying to lose weight. Such dieting is reported to be influenced by peer behavior, with many of those individuals on a diet reporting that their friends also were dieting. The number of friends dieting and the number of friends who pressured them to diet also played a significant role in their own choices.
Elite athletes have a significantly higher rate of eating disorders. Female athletes in sports such as gymnastics, ballet, diving, etc. are found to be at the highest risk among all athletes. Women are more likely than men to acquire an eating disorder between the ages of 13 and 25. About 0–15% of those with bulimia and anorexia are men.
Other psychological problems that can contribute to the development of an eating disorder such as anorexia nervosa are depression and low self-esteem. Depression is a state of mind in which emotions are unstable, causing a person's eating habits to change due to sadness and a lack of interest in doing anything. According to PSYCOM, "Studies show that a high percentage of people with an eating disorder will experience depression." Depression can be a state of mind that people struggle to escape, and its effect on eating particularly affects teenagers. Teenagers are especially vulnerable to anorexia because the teenage years bring many changes in body and outlook. According to Life Works, an article about eating disorders, "People of any age can be affected by pressure from their peers, the media and even their families but it is worse when you're a teenager at school." Teenagers may develop eating disorders such as anorexia due to peer pressure, which can lead to depression; many begin by feeling pressure to look a certain way or to fit in, which results in eating less and can ultimately lead to anorexia and serious physical harm.
Cultural pressure
Western perspective
There is a cultural emphasis on thinness which is especially pervasive in western society. A child's perception of external pressure to achieve the ideal body that is represented by the media predicts the child's body image dissatisfaction, body dysmorphic disorder and an eating disorder. "The cultural pressure on men and women to be 'perfect' is an important predisposing factor for the development of eating disorders". Further, when women of all races base their evaluation of their self upon what is considered the culturally ideal body, the incidence of eating disorders increases.
Socioeconomic status (SES) has been viewed as a risk factor for eating disorders, presuming that possessing more resources allows for an individual to actively choose to diet and reduce body weight. Some studies have also shown a relationship between increasing body dissatisfaction with increasing SES. However, once high socioeconomic status has been achieved, this relationship weakens and, in some cases, no longer exists.
The media plays a major role in the way in which people view themselves. Countless magazine ads and commercials depict thin celebrities. Society has taught people that being accepted by others is necessary at all costs. This has led to the belief that in order to fit in one must look a certain way. Televised beauty competitions such as the Miss America Competition contribute to the idea of what it means to be beautiful because competitors are evaluated on the basis of their appearance.
In addition to socioeconomic status being considered a cultural risk factor, so is the world of sports. Athletes and eating disorders tend to go hand in hand, especially in sports where weight is a competitive factor. Gymnastics, horseback riding, wrestling, body building, and dancing are just a few that fall into this category of weight-dependent sports. Eating disorders among individuals that participate in competitive activities, especially women, often involve physical and biological changes related to their weight that often mimic prepubescent stages. Often, as women's bodies change, they lose their competitive edge, which leads them to take extreme measures to maintain their younger body shape. Men often struggle with binge eating followed by excessive exercise while focusing on building muscle rather than losing fat, but this goal of gaining muscle is just as much an eating disorder as obsessing over thinness. The following statistics, taken from Susan Nolen-Hoeksema's book (ab)normal psychology, show the estimated percentage of athletes that struggle with eating disorders based on the category of sport.
Aesthetic sports (dance, figure skating, gymnastics) – 35%
Weight dependent sports (judo, wrestling) – 29%
Endurance sports (cycling, swimming, running) – 20%
Technical sports (golf, high jumping) – 14%
Ball game sports (volleyball, soccer) – 12%
Although most of these athletes develop eating disorders to keep their competitive edge, others use exercise as a way to maintain their weight and figure. This is just as serious as regulating food intake for competition. Even though there is mixed evidence showing at what point athletes are challenged with eating disorders, studies show that regardless of competition level, all athletes are at higher risk for developing eating disorders than non-athletes, especially those that participate in sports where thinness is a factor.
Pressure from society is also seen within the homosexual community. Gay men are at greater risk of eating disorder symptoms than heterosexual men. Within the gay culture, muscularity gives the advantages of both social and sexual desirability and also power. These pressures and ideas that another homosexual male may desire a mate who is thinner or more muscular can possibly lead to eating disorders. The higher the eating disorder symptom score reported, the more concern about how others perceive them and the more frequent and excessive exercise sessions occur. High levels of body dissatisfaction are also linked to external motivation to work out and to older age; however, the drive for a thin and muscular body is more prevalent among younger homosexual males than older ones.
Most of the cross-cultural studies use definitions from the DSM-IV-TR, which has been criticized as reflecting a Western cultural bias. Thus, assessments and questionnaires may not be constructed to detect some of the cultural differences associated with different disorders. Also, when looking at individuals in areas potentially influenced by Western culture, few studies have attempted to measure how much an individual has adopted the mainstream culture or retained the traditional cultural values of the area. Lastly, the majority of the cross-cultural studies on eating disorders and body image disturbances occurred in Western nations and not in the countries or regions being examined.
While there are many influences on how an individual processes their body image, the media does play a major role. Along with the media, parental influence, peer influence, and self-efficacy beliefs also play a large role in an individual's view of themselves. The way the media presents images can have a lasting effect on an individual's perception of their body image. Eating disorders are a worldwide issue, and while women are more likely to be affected by an eating disorder, it still affects both genders (Schwitzer 2012). Because the media influences eating disorders, whether shown in a positive or negative light, it has a responsibility to use caution when promoting images that project an ideal that many turn to eating disorders to attain.
To try to address unhealthy body image in the fashion world, in 2015, France passed a law requiring models to be declared healthy by a doctor to participate in fashion shows. It also requires re-touched images to be marked as such in magazines.
There is a relationship between "thin ideal" social media content and body dissatisfaction and eating disorders among young adult women, especially in the Western hemisphere. New research points to an "internalization" of distorted images online, as well as negative comparisons, among young adult women. Most studies have been based in the U.S., the U.K., and Australia; these are places where the thin ideal is strong among women, as is the striving for the "perfect" body.
In addition to mere media exposure, there is an online "pro-eating disorder" community. Through personal blogs and Twitter, this community promotes eating disorders as a "lifestyle", and continuously posts pictures of emaciated bodies, and tips on how to stay thin. The hashtag "#proana" (pro-anorexia), is a product of this community, as well as images promoting weight loss, tagged with the term "thinspiration". According to social comparison theory, young women have a tendency to compare their appearance to others, which can result in a negative view of their own bodies and altering of eating behaviors, that in turn can develop disordered eating behaviors.
When body parts are isolated and displayed in the media as objects to be looked at, it is called objectification, and women are affected most by this phenomenon. Objectification increases self-objectification, where women judge their own body parts as a means of praise and pleasure for others. There is a significant link between self-objectification, body dissatisfaction, and disordered eating, as the beauty ideal is altered through social media.
Although eating disorders are typically under diagnosed in people of color, they still experience eating disorders in great numbers. It is thought that the stress that those of color face in the United States from being multiply marginalized may contribute to their rates of eating disorders. Eating disorders, for these women, may be a response to environmental stressors such as racism, abuse and poverty.
African perspective
In many African communities, thinness is generally not seen as an ideal body type, and most pressure to attain a slim figure may stem from influence or exposure to Western culture and ideology. Traditional African cultural ideals are reflected in the practice of some health professionals; in Ghana, pharmacists sell appetite stimulants to women who desire to, as Ghanaians put it, "grow fat". Girls are told that if they wish to find a partner and bear children they must gain weight. On the contrary, there are certain taboos surrounding a slim body image, specifically in West Africa. Lack of body fat is linked to poverty and HIV/AIDS.
However, the emergence of Western and European influence, specifically with the introduction of fashion and modelling shows and competitions, is changing certain views on body acceptance, and the prevalence of eating disorders has consequently increased. This acculturation is also related to how South Africa is concurrently undergoing rapid, intense urbanization. Such modern development is leading to cultural changes, and professionals expect rates of eating disorders in this region to increase with urbanization, specifically with changes in identity, body image, and cultural issues. Further, exposure to Western values through private Caucasian schools or caretakers is another possible factor related to acculturation which may be associated with the onset of eating disorders.
Other factors which are cited to be related to the increasing prevalence of eating disorders in African communities can be related to sexual conflicts, such as psychosexual guilt, first sexual intercourse, and pregnancy. Traumatic events which are related to both family (i.e. parental separation) and eating related issues are also cited as possible effectors. Religious fasting, particularly around times of stress, and feelings of self-control are also cited as determinants in the onset of eating disorders.
Asian perspective
The West plays a role in Asia's economic development via foreign investments, advanced technologies joining financial markets, and the arrival of American and European companies in Asia, especially through outsourcing manufacturing operations. This exposure to Western culture, especially the media, imparts Western body ideals to Asian society, termed Westernization. In part, Westernization fosters eating disorders among Asian populations. However, there are also country-specific influences on the occurrence of eating disorders in Asia.
China
In China as well as other Asian countries, Westernization, migration from rural to urban areas, after-effects of sociocultural events, and disruptions of social and emotional support are implicated in the emergence of eating disorders. In particular, risk factors for eating disorders include higher socioeconomic status, preference for a thin body ideal, history of child abuse, high anxiety levels, hostile parental relationships, jealousy towards media idols, and above-average scores on the body dissatisfaction and interoceptive awareness sections of the Eating Disorder Inventory. Similarly to the West, researchers have identified the media as a primary source of pressures relating to physical appearance, which may even predict body change behaviors in males and females.
Fiji
While colonised by the British in 1874, Fiji kept a large degree of linguistic and cultural diversity which characterised the ethnic Fijian population. Though gaining independence in 1970, Fiji has rejected Western, capitalist values which challenged its mutual trusts, bonds, kinships and identity as a nation. Similar to studies conducted on Polynesian groups, ethnic Fijian traditional aesthetic ideals reflected a preference for a robust body shape; thus, the prevailing 'pressure to be slim,' thought to be associated with diet and disordered eating in many Western societies, was absent in traditional Fiji. Additionally, traditional Fijian values would encourage a robust appetite and a widespread vigilance for, and social response to, weight loss. Individual efforts to reshape the body by dieting or exercise were thus traditionally discouraged.
However, studies conducted in 1995 and 1998 both demonstrated a link between the introduction of television in the country, and the emergence of eating disorders in young adolescent ethnic Fijian girls. Through the quantitative data collected in these studies there was found to be a significant increase in the prevalence of two key indicators of disordered eating: self-induced vomiting and high Eating Attitudes Test-26 scores. These results were recorded following prolonged television exposure in the community, and an associated increase in the percentage of households owning television sets. Additionally, qualitative data linked changing attitudes about dieting, weight loss and aesthetic ideals in the peer environment to Western media images. The impact of television was especially profound given the longstanding social and cultural traditions that had previously rejected the notions of dieting, purging and body dissatisfaction in Fiji. Additional studies in 2011 found that social network media exposure, independent of direct media and other cultural exposures, was also associated with eating pathology.
Hong Kong
From the early to mid 1990s, a variant form of anorexia nervosa was identified in Hong Kong. This variant form did not share features of anorexia in the West, notably "fat-phobia" and distorted body image. Patients attributed their restrictive food intake to somatic complaints, such as epigastric bloating, abdominal or stomach pain, or a lack of hunger or appetite. Compared to Western patients, individuals with this variant anorexia demonstrated bulimic symptoms less frequently and tended to have lower pre-morbid body mass index. This variant calls into question the assumption that a "fear of fatness or weight gain" is the defining characteristic of individuals with anorexia nervosa.
India
In the past, the available evidence did not suggest that unhealthy weight loss methods and eating disordered behaviors were common in India, as reflected in stagnant rates of clinically diagnosed eating disorders. However, it appears that rates of eating disorders in urban areas of India are increasing based on surveys from psychiatrists who were asked whether they perceived eating disorders to be a "serious clinical issue" in India. One notable Indian psychiatrist and eating disorder specialist Dr Udipi Gauthamadas is on record saying, "Disturbed eating attitudes and behaviours affect about 25 to 40 percent of adolescent girls and around 20 percent of adolescent boys. While on one hand there is increasing recognition of eating disorders in the country, there is also a persisting belief that this illness is alien to India. This prevents many sufferers from seeking professional help."
23.5% of respondents believed that rates of eating disorders were rising in Bangalore, 26.5% claimed that rates were stagnant, and 42%, the largest percentage, expressed uncertainty. It has been suggested that urbanization and socioeconomic status are associated with increased risk for body weight dissatisfaction. However, due to the physical size of and diversity within India, trends may vary throughout the country.
American perspective
Black and African American
Historically, identifying as African American has been considered a protective factor for body dissatisfaction. Those identifying as African American have been found to have a greater acceptance of larger body image ideals and less internalization of the thin ideal, and African American women have reported the lowest levels of body dissatisfaction among the five major racial/ethnic groups in the US.
However, recent research contradicts these findings, indicating that African American women may exhibit levels of body dissatisfaction comparable to other racial/ethnic minority groups. In this way, just because those who identify as African American may not internalize the thin ideal as strongly as other racial and ethnic groups, it does not mean that they do not hold other appearance ideals that may promote body shape concerns. Similarly, recent research shows that African Americans exhibit rates of disordered eating that are similar to or even higher than their white counterparts.
American Indian and Alaska Native
American Indian and Alaska Native women are more likely than white women to both experience a fear of losing control over their eating and to abuse laxatives and diuretics for weight control purposes. They have comparable rates of binge eating and other disordered weight control behaviors in comparison to other racial groups.
Latinos
Disproportionately high rates of disordered eating and body dissatisfaction have been found in Hispanics in comparison to other racial and ethnic groups. Studies have found significantly more laxative use in those identifying as Hispanic in comparison to non-Hispanic white counterparts. Specifically, those identifying as Hispanic may be at heightened risk of engaging in binge eating and bingeing/purging behaviors.
Food insecurity
Food insecurity is defined as inadequate access to sufficient food, both in terms of quantity and quality, in direct contrast to food security, which is conceptualized as having access to sufficient, safe, and nutritious food to meet dietary needs and preferences. Notably, levels of food security exist on a continuum from reliable access to food to disrupted access to food.
Multiple studies have found food insecurity to be associated with eating pathology. A study conducted on individuals visiting a food bank in Texas found higher food insecurity to be correlated with higher levels of binge eating, overall eating disorder pathology, dietary restraint, compensatory behaviors and weight self-stigma. Findings of a replication study with a larger, more diverse sample mirrored these results, and a study looking at the relationship between food insecurity and bulimia nervosa similarly found greater food insecurity to be associated with elevated levels of eating pathology.
Trauma
One study has found that binge-eating disorder may stem from trauma, with some female patients engaging in disordered eating to numb pain experienced through sexual trauma. Individuals may have experienced various forms of trauma that lead them to cope through an eating disorder. When in pain, individuals may attempt to exert control over this aspect of their lives, perceiving it as their only means of managing their circumstances.
Sexual Orientation and Gender Identity
Sexual orientation, gender identity, and gender norms influence people with eating disorders. Some eating disorder patients have implied that enforced heterosexuality and heterosexism led them to engage in disordered eating to align with norms associated with their gender identity. Families may restrict women's food intake to keep them thin, thus increasing their perceived ability to attain a male romantic partner. Non-heterosexual male adolescents are consistently at higher risk of developing disordered eating than their heterosexual peers across various body image concerns, including worries about weight, shape, muscle tone, and definition. The treatment of eating disorders in trans and non-binary adolescents is complicated in that some eating disorder symptoms may affirm gender identity in transitioning patients. For example, loss of menstruation in birth-assigned females or a slender frame in birth-assigned males may align with their gender identity during transition.
Mechanisms
Biochemical: Eating behavior is a complex process controlled by the neuroendocrine system, of which the hypothalamic–pituitary–adrenal axis (HPA axis) is a major component. Dysregulation of the HPA axis has been associated with eating disorders, such as irregularities in the manufacture, amount, or transmission of certain neurotransmitters, hormones, or neuropeptides, and of amino acids such as homocysteine, elevated levels of which are found in AN and BN as well as depression.
Serotonin: a neurotransmitter involved in depression also has an inhibitory effect on eating behavior.
Norepinephrine is both a neurotransmitter and a hormone; abnormalities in either capacity may affect eating behavior.
Dopamine: in addition to being a precursor of norepinephrine and epinephrine, dopamine is a neurotransmitter that regulates the rewarding property of food.
Neuropeptide Y, also known as NPY, is a hormone that encourages eating and decreases metabolic rate. Blood levels of NPY are elevated in patients with anorexia nervosa, and studies have shown that injection of this hormone into the brain of rats with restricted food intake increases their time spent running on a wheel. Normally the hormone stimulates eating in healthy patients, but under conditions of starvation it increases their activity rate, probably to increase the chance of finding food. The increased levels of NPY in the blood of patients with eating disorders may partly explain the extreme over-exercising found in many anorexia nervosa patients.
Leptin and ghrelin: leptin is a hormone produced primarily by the fat cells in the body; it has an inhibitory effect on appetite by inducing a feeling of satiety. Ghrelin is an appetite inducing hormone produced in the stomach and the upper portion of the small intestine. Circulating levels of both hormones are an important factor in weight control. While often associated with obesity, both hormones and their respective effects have been implicated in the pathophysiology of anorexia nervosa and bulimia nervosa. Leptin can also be used to distinguish between constitutional thinness found in a healthy person with a low BMI and an individual with anorexia nervosa.
Gut bacteria and immune system: studies have shown that a majority of patients with anorexia and bulimia nervosa have elevated levels of autoantibodies that affect hormones and neuropeptides that regulate appetite control and the stress response. There may be a direct correlation between autoantibody levels and associated psychological traits. A later study revealed that autoantibodies reactive with alpha-MSH are, in fact, generated against ClpB, a protein produced by certain gut bacteria, e.g. Escherichia coli. ClpB protein was identified as a conformational antigen-mimetic of alpha-MSH. In patients with eating disorders, plasma levels of anti-ClpB IgG and IgM correlated with patients' psychological traits.
Infection: PANDAS is an abbreviation for the controversial Pediatric Autoimmune Neuropsychiatric Disorders Associated with Streptococcal Infections hypothesis. Children with PANDAS are postulated to "have obsessive-compulsive disorder (OCD) and/or tic disorders such as Tourette syndrome, and in whom symptoms worsen following infections such as strep throat". (NIMH) PANDAS and the broader PANS are hypothesized to be a precipitating factor in the development of anorexia nervosa in some cases, (PANDAS AN).
Lesions: studies have shown that lesions to the right frontal lobe or temporal lobe can cause the pathological symptoms of an eating disorder.
Tumors: tumors in various regions of the brain have been implicated in the development of abnormal eating patterns.
Brain calcification: a study highlights a case in which prior calcification of the right thalamus may have contributed to the development of anorexia nervosa.
Somatosensory homunculus: the representation of the body located in the somatosensory cortex, first described by the renowned neurosurgeon Wilder Penfield. The illustration was originally termed "Penfield's Homunculus", homunculus meaning little man. "In normal development this representation should adapt as the body goes through its pubertal growth spurt. However, in AN it is hypothesized that there is a lack of plasticity in this area, which may result in impairments of sensory processing and distortion of body image". (Bryan Lask, also proposed by VS Ramachandran)
Obstetric complications: Studies have shown that maternal smoking and obstetric and perinatal complications such as maternal anemia, very pre-term birth (less than 32 weeks), being born small for gestational age, neonatal cardiac problems, preeclampsia, placental infarction, and sustaining a cephalhematoma at birth increase the risk of developing either anorexia nervosa or bulimia nervosa. Some of this developmental risk, as in the case of placental infarction, maternal anemia, and cardiac problems, may involve intrauterine hypoxia; umbilical cord occlusion or cord prolapse may cause ischemia, resulting in cerebral injury. The prefrontal cortex in the fetus and neonate is highly susceptible to damage as a result of oxygen deprivation, which has been shown to contribute to executive dysfunction and ADHD, and may affect personality traits associated with both eating disorders and comorbid disorders, such as impulsivity, mental rigidity, and obsessionality. The problem of perinatal brain injury, in terms of the costs to society and to the affected individuals and their families, is extraordinary. (Yafeng Dong, PhD)
Symptom of starvation: Evidence suggests that the symptoms of eating disorders are actually symptoms of the starvation itself, not of a mental disorder. In a study involving thirty-six healthy young men who were subjected to semi-starvation, the men soon began displaying symptoms commonly found in patients with eating disorders. The healthy men ate approximately half of what they had become accustomed to eating and soon developed symptoms and thought patterns (preoccupation with food and eating, ritualistic eating, impaired cognitive ability, and other physiological changes such as decreased body temperature) that are characteristic of anorexia nervosa. The men also developed hoarding and obsessive collecting behaviors, even though they had no use for the items, revealing a possible connection between eating disorders and obsessive–compulsive disorder.
Diagnosis
According to Pritts and Susman, "The medical history is the most powerful tool for diagnosing eating disorders". There are many medical disorders that mimic eating disorders and comorbid psychiatric disorders. Early detection and intervention can ensure a better recovery and can greatly improve the quality of life of these patients. In the past 30 years eating disorders have become increasingly conspicuous, and it is uncertain whether the changes in presentation reflect a true increase. Anorexia nervosa and bulimia nervosa are the most clearly defined subgroups of a wider range of eating disorders. Many patients present with subthreshold expressions of the two main diagnoses; others present with different patterns and symptoms.
As eating disorders, especially anorexia nervosa, are thought of as being associated with young, white females, diagnosis of eating disorders in other races happens more rarely. In one study, when clinicians were presented with identical case studies demonstrating disordered eating symptoms in Black, Hispanic, and white women, 44% noted the white woman's behavior as problematic; 41% identified the Hispanic woman's behavior as problematic, and only 17% of the clinicians noted the Black woman's behavior as problematic (Gordon, Brattole, Wingate, & Joiner, 2006).
Medical
The diagnostic workup typically includes a complete medical and psychosocial history and follows a rational and formulaic approach to the diagnosis. Neuroimaging using fMRI, MRI, PET and SPECT scans has been used to detect cases in which a lesion, tumor or other organic condition has been either the sole causative or a contributory factor in an eating disorder. "Right frontal intracerebral lesions with their close relationship to the limbic system could be causative for eating disorders, we therefore recommend performing a cranial MRI in all patients with suspected eating disorders" (Trummer M et al. 2002); "intracranial pathology should also be considered however certain is the diagnosis of early-onset anorexia nervosa. Second, neuroimaging plays an important part in diagnosing early-onset anorexia nervosa, both from a clinical and a research perspective" (O'Brien et al. 2001).
Psychological
After organic causes have been ruled out and an initial diagnosis of an eating disorder has been made by a medical professional, a trained mental health professional aids in the assessment and treatment of the underlying psychological components of the eating disorder and any comorbid psychological conditions. The clinician conducts a clinical interview and may employ various psychometric tests. Some are general in nature while others were devised specifically for use in the assessment of eating disorders. Some of the general tests that may be used are the Hamilton Depression Rating Scale and the Beck Depression Inventory. Longitudinal research has shown that the chance a young adult female will develop bulimia increases under psychological pressure, and that as the person ages and matures, her emotional problems change or are resolved and the symptoms decline.
Several types of scales are currently used: (a) self-report questionnaires – EDI-3, BSQ, TFEQ, MAC, BULIT-R, QEWP-R, EDE-Q, EAT, NEQ, and others; (b) semi-structured interviews – SCID-I, EDE, and others; (c) unstructured clinical interviews or observer-based rating scales, such as the Morgan–Russell scale. The majority of the scales used were described and validated in adult populations. Of all the scales evaluated and analyzed, only three are described for child populations: the EAT-26 (children above 16 years), EDI-3 (children above 13 years), and ANSOCQ (children above 13 years). It is essential to develop specific scales for people under 18 years of age, given the increasing incidence of eating disorders among children and the need for early detection and appropriate intervention. Moreover, accurate scales and telemedicine-based testing and diagnostic tools became especially important during the COVID-19 pandemic (Leti, Garner & al., 2020).
Differential diagnoses
There are multiple medical conditions which may be misdiagnosed as a primary psychiatric disorder, complicating or delaying treatment. These may have a synergistic effect on conditions which mimic an eating disorder or on a properly diagnosed eating disorder.
Lyme disease is known as the "great imitator", as it may present as a variety of psychiatric or neurological disorders including anorexia nervosa.
Gastrointestinal diseases, such as celiac disease, Crohn's disease, peptic ulcer, eosinophilic esophagitis or non-celiac gluten sensitivity, among others. Celiac disease is also known as the "great imitator", because it may involve several organs and cause an extensive variety of non-gastrointestinal symptoms, such as psychiatric and neurological disorders, including anorexia nervosa.
Addison's disease is a disorder of the adrenal cortex which results in decreased hormonal production. Addison's disease, even in subclinical form may mimic many of the symptoms of anorexia nervosa.
Gastric adenocarcinoma is one of the most common forms of cancer in the world. Complications due to this condition have been misdiagnosed as an eating disorder.
Hypothyroidism, hyperthyroidism, hypoparathyroidism and hyperparathyroidism may mimic some of the symptoms of, can occur concurrently with, be masked by or exacerbate an eating disorder.
Toxoplasma seropositivity: even in the absence of symptomatic toxoplasmosis, toxoplasma gondii exposure has been linked to changes in human behavior and psychiatric disorders including those comorbid with eating disorders such as depression. In reported case studies the response to antidepressant treatment improved only after adequate treatment for toxoplasma.
Neurosyphilis: It is estimated that there may be up to one million cases of untreated syphilis in the US alone. "The disease can present with psychiatric symptoms alone, psychiatric symptoms that can mimic any other psychiatric illness". Many of the manifestations may appear atypical. Up to 1.3% of short-term psychiatric admissions may be attributable to neurosyphilis, with a much higher rate in the general psychiatric population. (Ritchie M, Perdigao J)
Dysautonomia: a wide variety of autonomic nervous system (ANS) disorders may cause a wide variety of psychiatric symptoms including anxiety, panic attacks and depression. Dysautonomia usually involves failure of sympathetic or parasympathetic components of the ANS system but may also include excessive ANS activity. Dysautonomia can occur in conditions such as diabetes and alcoholism.
Psychological disorders which may be confused with an eating disorder, or be co-morbid with one:
Emetophobia is an anxiety disorder characterized by an intense fear of vomiting. A person so impacted may develop rigorous standards of food hygiene, such as not touching food with their hands. They may become socially withdrawn to avoid situations which in their perception may make them vomit. Many who have emetophobia are diagnosed with anorexia or self-starvation. In severe cases of emetophobia they may drastically reduce their food intake.
Phagophobia is an anxiety disorder characterized by a fear of eating, it is usually initiated by an adverse experience while eating such as choking or vomiting. Persons with this disorder may present with complaints of pain while swallowing.
Body dysmorphic disorder (BDD) is listed as an obsessive-compulsive disorder that affects up to 2% of the population. BDD is characterized by excessive rumination over an actual or perceived physical flaw. BDD has been diagnosed equally among men and women. While BDD has been misdiagnosed as anorexia nervosa, it also occurs comorbidly in 39% of eating disorder cases. BDD is a chronic and debilitating condition which may lead to social isolation, major depression, and suicidal ideation and attempts. Neuroimaging studies measuring the response to facial recognition have shown activity predominantly in the left hemisphere, in the left lateral prefrontal cortex, lateral temporal lobe, and left parietal lobe, showing hemispheric imbalance in information processing. There is a reported case of the development of BDD in a 21-year-old male following an inflammatory brain process; neuroimaging showed new atrophy in the frontotemporal region.
Prevention
Prevention aims to promote healthy development before the occurrence of eating disorders. It also intends early identification of an eating disorder before it is too late to treat. Children as young as ages 5–7 are aware of the cultural messages regarding body image and dieting. Prevention involves bringing these issues to light. The following topics can be discussed with young children (as well as teens and young adults).
Emotional Bites: a simple way to discuss emotional eating is to ask children about why they might eat besides being hungry. Talk about more effective ways to cope with emotions, emphasizing the value of sharing feelings with a trusted adult.
Say No to Teasing: another concept is to emphasize that it is wrong to say hurtful things about other people's body sizes.
Intuitive Eating: emphasize the importance of listening to one's body. That is, eat when you are hungry, pay attention to fullness, and choose foods that make you feel good. Children intuitively grasp these concepts. Additionally, parents can reinforce intuitive eating by removing value judgments of food as “good” or “bad” from conversations about food.
Positive Body Talk: family members can help prevent eating disorders by not making negative comments about themselves. When children hear family members complain that they are fat or about the proportions of their bodies, this influences their own body image and is a contributing factor to the development of eating disorders.
Fitness Comes in All Sizes: educate children about the genetics of body size and the normal changes occurring in the body. Discuss their fears and hopes about growing bigger. Focus on fitness and a balanced diet.
Internet and modern technologies provide new opportunities for prevention. Online programs have the potential to increase the use of prevention programs. The development and practice of prevention programs via online sources make it possible to reach a wide range of people at minimal cost. Such an approach can also make prevention programs sustainable.
Parents can do a lot for their children at a young age to help prevent them from ever developing an eating disorder. Parents who are actively engaged in their children's lives often help foster a stronger sense of self-love in them.
Treatment
Treatment varies according to type and severity of eating disorder, and often more than one treatment option is utilized.
Various forms of cognitive behavioral therapy have been developed for eating disorders and found to be useful. If a person is experiencing comorbidity between an eating disorder and OCD, exposure and response prevention, coupled with weight restoration and serotonin reuptake inhibitors, has proven most effective. Other forms of psychotherapy can also be useful.
Family doctors play an important role in early treatment of people with eating disorders by encouraging those who are reluctant to see a psychiatrist. Treatment can take place in a variety of different settings such as community programs, hospitals, day programs, and groups. The American Psychiatric Association (APA) recommends a team approach to treatment of eating disorders. The members of the team are usually a psychiatrist, therapist, and registered dietitian, but other clinicians may be included.
That said, some treatment methods are:
Cognitive behavioral therapy (CBT), which postulates that an individual's feelings and behaviors are caused by their own thoughts instead of external stimuli such as other people, situations or events; the idea is to change how a person thinks and reacts to a situation even if the situation itself does not change. See Cognitive behavioral treatment of eating disorders.
Acceptance and commitment therapy: a type of CBT
Cognitive behavioral therapy enhanced (CBT-E): the most widespread cognitive behavioral psychotherapy specific to eating disorders
Cognitive remediation therapy (CRT), a set of cognitive drills or compensatory interventions designed to enhance cognitive functioning.
Exposure and Response Prevention: a type of CBT; gradual exposure to anxiety-provoking situations in a safe environment, to learn to tolerate the discomfort
The Maudsley anorexia nervosa treatment for adults (MANTRA), which focuses on addressing rigid information processing styles, emotional avoidance, pro-anorectic beliefs, and difficulties with interpersonal relationships. These four targets of treatment are proposed to be core maintenance factors within the Cognitive-Interpersonal Maintenance Model of anorexia nervosa.
Dialectical behavior therapy
Family therapy including "conjoint family therapy" (CFT), "separated family therapy" (SFT) and Maudsley Family Therapy.
Behavioral therapy: focuses on gaining control and changing unwanted behaviors.
Interpersonal psychotherapy (IPT)
Cognitive Emotional Behaviour Therapy (CEBT)
Art therapy
Nutrition counseling and Medical nutrition therapy
Self-help and guided self-help have been shown to be helpful in AN, BN and BED; this includes support groups and self-help groups such as Eating Disorders Anonymous and Overeaters Anonymous. Meaningful relationships often support recovery: having a partner, friend, or someone else close in one's life may lead away from problematic eating, according to professor Cynthia M. Bulik.
Psychoanalytic psychotherapy
Inpatient care
There are few studies on the cost-effectiveness of the various treatments. Treatment can be expensive; due to limitations in health care coverage, people hospitalized with anorexia nervosa may be discharged while still underweight, resulting in relapse and rehospitalization. Research has found comorbidity between an eating disorder (e.g., anorexia nervosa, bulimia nervosa, and binge eating) and OCD does not impact the length of the time patients spend in treatment, but can negatively impact treatment outcomes.
For children with anorexia, the only well-established treatment is family-based treatment. For other eating disorders in children, however, there are no well-established treatments, though family-based treatment has also been used in treating bulimia.
A 2019 Cochrane review examined studies comparing the effectiveness of inpatient versus outpatient models of care for eating disorders. Four trials including 511 participants were studied but the review was unable to draw any definitive conclusions as to the superiority of one model over another.
Barriers to treatment
A variety of barriers to eating disorder treatment have been identified, typically grouped into individual and systemic barriers. Individual barriers include shame, fear of stigma, cultural perceptions, minimizing the seriousness of the problem, unfamiliarity with mental health services, and a lack of trust in mental health professionals. Systemic barriers include language differences, financial limitations, lack of insurance coverage, inaccessible health care facilities, time conflicts, long waits, lack of transportation, and lack of child care. These barriers may be particularly exacerbated for those who identify outside of the skinny, white, affluent girl stereotype that dominates in the field of eating disorders, such that those who do not identify with this stereotype are much less likely to seek treatment.
Conditions during the COVID-19 pandemic may increase the difficulties experienced by those with eating disorders, and the risk that otherwise healthy individuals may develop eating disorders. The pandemic has been a stressful life event for everyone, increasing anxiety and isolation, disrupting normal routines, creating economic strain and food insecurity, and making it more difficult and stressful to obtain needed resources including food and medical treatment.
The COVID-19 pandemic in England exposed a dramatic rise in demand for eating disorder services which the English NHS struggled to meet. The National Institute for Health and Care Excellence and NHS England both advised that services should not impose thresholds using body mass index or duration of illness to determine whether treatment for eating disorders should be offered, but there were continuing reports that these recommendations were not followed.
In terms of access to treatment, therapy sessions have generally switched from in-person to video calls. This may actually help people who previously had difficulty finding a therapist with experience in treating eating disorders, for example, those who live in rural areas.
Studies suggest that virtual (telehealth) CBT can be as effective as face-to-face CBT for bulimia and other mental illnesses. To help patients cope with conditions during the pandemic, therapists may have to particularly emphasize strategies to create structure where little is present, build interpersonal connections, and identify and avoid triggers.
Medication
Orlistat is used in obesity treatment. Olanzapine seems to promote weight gain as well as the ability to ameliorate obsessional behaviors concerning weight gain. Zinc supplements have been shown to be helpful, and cortisol is also being investigated.
Two pharmaceuticals, Prozac and Vyvanse, have been approved by the FDA to treat bulimia nervosa and binge-eating disorder, respectively. Olanzapine has also been used off-label to treat anorexia nervosa. Studies are also underway to explore psychedelic and psychedelic-adjacent medicines such as MDMA, psilocybin and ketamine for anorexia nervosa and binge-eating disorder.
Outcomes
For anorexia nervosa, bulimia nervosa, and binge eating disorder, there is a general agreement that full recovery rates range between 50% and 85%, with larger proportions of people experiencing at least partial remission. It can be a lifelong struggle or it can be overcome within months.
Miscarriages: Pregnant women with a binge eating disorder have been shown to have a greater chance of having a miscarriage compared to pregnant women with any other eating disorder. In one study of pregnant women being evaluated, 46.7% of the pregnancies of women diagnosed with BED ended in miscarriage, compared with 23.0% in the control group. In the same study, 21.4% of women diagnosed with bulimia nervosa had their pregnancies end in miscarriage, compared with only 17.7% of the controls.
Relapse: An individual who is in remission from BN or EDNOS (eating disorder not otherwise specified) is at a high risk of relapse. Factors such as high stress regarding their job, pressures from society, and other occurrences that inflict stress on a person can push a person back to what they feel will ease the pain. A study tracked a group of selected people who had been diagnosed with either BN or EDNOS for 60 months. After the 60 months were complete, the researchers recorded whether or not each person had relapsed. The results found that a person previously diagnosed with EDNOS had a 41% chance of relapsing, while a person with BN had a 47% chance.
Attachment insecurity: People who are showing signs of attachment anxiety will most likely have trouble communicating their emotional status and have trouble seeking effective social support. Signs of this include failing to acknowledge their caregiver or to signal when they are feeling pain. In a clinical sample, it is clear that at the pretreatment stage of a patient's recovery, more severe eating disorder symptoms directly correspond to higher attachment anxiety. The more this symptom increases, the more difficult it is to achieve eating disorder reduction prior to treatment.
Impaired decision making: Studies have found mixed results on the relationship between eating disorders and decision making. Researchers have consistently found that patients with anorexia were less capable of thinking about the long-term consequences of their decisions when completing the Iowa Gambling Task, a test designed to measure a person's decision-making capabilities. Consequently, they were at a higher risk of making hastier, harmful choices.
Anorexia symptoms include an increased risk of osteoporosis. Thinning of the hair, as well as dry hair and skin, is also very common. If the patient goes untreated, the muscles of the heart begin to change, causing an abnormally slow heart rate along with low blood pressure; heart failure becomes a major concern when this begins to occur. Muscles throughout the body begin to lose their strength, causing the individual to feel faint, drowsy, and weak. Along with these symptoms, the body begins to grow a layer of hair called lanugo, a response to the lack of heat and insulation due to the low percentage of body fat.
Bulimia symptoms include heart problems such as an irregular heartbeat that can lead to heart failure and death. These occur because of the electrolyte imbalance that results from the constant binge-and-purge process. The probability of a gastric rupture, a sudden rupture of the stomach lining that can be fatal, also increases. The acids contained in vomit can cause a rupture in the esophagus as well as tooth decay. As a result of laxative abuse, irregular bowel movements and constipation may occur. Sores along the lining of the stomach, called peptic ulcers, begin to appear, and the chance of developing pancreatitis increases.
Binge eating symptoms include high blood pressure, which can cause heart disease if not treated. Many patients experience elevated cholesterol levels. The chance of being diagnosed with gallbladder disease, which affects the digestive tract, also increases.
Risk of death
Eating disorders resulted in about 7,000 deaths a year as of 2010, making them the mental illnesses with the highest mortality rate. Anorexia carries about a fivefold increased risk of death, with 20% of these deaths resulting from suicide. Rates of death in bulimia and other eating disorders are similar, at about a twofold increase.
The mortality rate for those with anorexia is 5.4 per 1000 individuals per year, of which roughly 1.3 deaths per 1000 are due to suicide. Individuals who are or have been treated in an inpatient setting have a rate of 4.6 deaths per 1000. Among individuals with bulimia, about 2 per 1000 persons die per year, and among those with EDNOS about 3.3 per 1000 die per year.
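These per-1000 rates translate directly into expected deaths over a follow-up period. A minimal worked example, where the cohort size and follow-up duration are illustrative assumptions rather than figures from the studies above:

```latex
% Illustrative cohort: 10,000 patients with anorexia followed for 5 years
% (50,000 person-years); the per-1000 rates are those quoted above.
\text{expected deaths} = \frac{5.4}{1000} \times 50{,}000 \ \text{person-years} = 270,
\qquad
\text{expected suicides} \approx \frac{1.3}{1000} \times 50{,}000 = 65.
```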
Epidemiology
It is a common misconception that eating disorders are restricted only to women, and this may have skewed research disproportionately to study female populations. In the developed world, binge eating disorder affects about 1.6% of women and 0.8% of men in a given year. Anorexia affects about 0.4% and bulimia affects about 1.3% of young women in a given year. Up to 4% of women have anorexia, 2% have bulimia, and 2% have binge eating disorder at some point in time. Anorexia and bulimia occur nearly ten times more often in females than males. Typically, they begin in late childhood or early adulthood. Rates of other eating disorders are not clear. Rates of eating disorders appear to be lower in less developed countries.
In the United States, twenty million women and ten million men have an eating disorder at least once in their lifetime.
Anorexia
Rates of anorexia in the general population among women aged 11 to 65 range from 0 to 2.2%, and around 0.3% among men. The incidence of female cases presenting in general medicine or specialized outpatient consultation is low, ranging from 4.2 to 8.3 per 100,000 individuals per year. The incidence of AN overall ranges from 109 to 270 per 100,000 individuals per year. Mortality varies according to the population considered; AN has one of the highest mortality rates among mental illnesses. The rates observed are 6.2 to 10.6 times greater than those observed in the general population for follow-up periods of 10 to 13 years. Standardized mortality ratios for anorexia vary from 1.36% to 20%.
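For context, the standardized mortality ratios quoted above follow the standard epidemiological definition; the formula below is the textbook definition, not something derived from this article's figures:

```latex
\mathrm{SMR} \;=\; \frac{\text{observed deaths in the patient group}}
                        {\text{expected deaths, given age- and sex-specific rates in the general population}}
```

A value above 1 indicates excess mortality relative to the general population.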
Bulimia
Bulimia affects females 9 times more often than males. Approximately one to three percent of women develop bulimia in their lifetime. About 2% to 3% of women are currently affected in the United States. New cases occur in about 12 per 100,000 population per year. The standardized mortality ratio for bulimia is 1% to 3%.
Binge eating disorder
Reported rates vary from 1.3 to 30% among subjects seeking weight-loss treatment. Based on surveys, BED appears to affect about 1–2% of people at some point in their lives, with 0.1–1% of people affected in a given year. BED is more common among females than males. There have been no published studies investigating the effects of BED on mortality, although it is comorbid with disorders that are known to increase mortality risks.
Economics
The number of cost-effectiveness studies regarding eating disorders appears to have been increasing since 2017.
In 2011 United States dollars, annual healthcare costs were $1,869 greater among individuals with eating disorders than in the general population. The added presence of mental health comorbidities was associated with a further, but not statistically significant, cost difference of $1,993.
In 2013 Canadian dollars, the total hospital cost per admission for treatment of anorexia nervosa was $51,349 and the total societal cost was $54,932 based on an average length of stay of 37.9 days. For every unit increase in body mass index, there was also a 15.7% decrease in hospital cost.
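One way to read the 15.7% figure is as a multiplicative reduction per unit of BMI. The sketch below adopts that reading; the functional form is an assumption, since the underlying regression model is not described here, and the baseline cost is the per-admission figure quoted above.

```python
# Illustrative sketch: treat the reported 15.7% reduction per BMI unit as
# multiplicative. The baseline figure comes from the 2013 Canadian study
# cited above; the compounding interpretation is an assumption.

BASE_COST = 51_349            # hospital cost per admission (2013 CAD)
REDUCTION_PER_BMI_UNIT = 0.157

def estimated_cost(bmi_increase: float) -> float:
    """Estimated admission cost after a given rise in BMI from baseline."""
    return BASE_COST * (1 - REDUCTION_PER_BMI_UNIT) ** bmi_increase

for delta in (0, 1, 2, 3):
    print(f"BMI +{delta}: ~${estimated_cost(delta):,.0f}")
# BMI +0: ~$51,349
# BMI +1: ~$43,287
# BMI +2: ~$36,491
# BMI +3: ~$30,762
```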
For Ontario, Canada patients who received specialized inpatient care for an eating disorder both out of country and in province, annual total healthcare costs were about $11 million before 2007 and $6.5 million in the years afterwards. For those treated out of country alone, costs were about $5 million before 2007 and $2 million in the years afterwards.
Evolutionary perspective
Evolutionary psychiatry, an emerging scientific discipline, studies mental disorders from an evolutionary perspective. Whether eating disorders have evolutionary functions or are new, modern "lifestyle" problems is still debated.
See also
Eating disorders in Chinese women
Eating disorder not otherwise specified
Fatphobia
Feeding disorder
References
External links
Behavioral neuroscience
Behavioural syndromes associated with physiological disturbances and physical factors
Wikipedia medicine articles ready to translate
Wikipedia neurology articles ready to translate | Eating disorder | [
"Biology"
] | 16,956 | [
"Behavioural sciences",
"Behavior",
"Behavioral neuroscience"
] |
56,873 | https://en.wikipedia.org/wiki/Lactose%20intolerance | Lactose intolerance is caused by a lessened ability or a complete inability to digest lactose, a sugar found in dairy products. Humans vary in the amount of lactose they can tolerate before symptoms develop. Symptoms may include abdominal pain, bloating, diarrhea, flatulence, and nausea. These symptoms typically start thirty minutes to two hours after eating or drinking something containing lactose, with the severity typically depending on the amount consumed. Lactose intolerance does not cause damage to the gastrointestinal tract.
Lactose intolerance is due to a lack of the enzyme lactase in the small intestine, which breaks lactose down into glucose and galactose. There are four types: primary, secondary, developmental, and congenital. Primary lactose intolerance occurs as the amount of lactase declines as people age. Secondary lactose intolerance is due to injury to the small intestine. Such injury could be the result of infection, celiac disease, inflammatory bowel disease, or other diseases. Developmental lactose intolerance may occur in premature babies and usually improves over a short period of time. Congenital lactose intolerance is an extremely rare genetic disorder in which little or no lactase is made from birth. The reduction of lactase production typically starts in late childhood or early adulthood, and prevalence increases with age.
Diagnosis may be confirmed if symptoms resolve following eliminating lactose from the diet. Other supporting tests include a hydrogen breath test and a stool acidity test. Other conditions that may produce similar symptoms include irritable bowel syndrome, celiac disease, and inflammatory bowel disease. Lactose intolerance is different from a milk allergy. Management is typically by decreasing the amount of lactose in the diet, taking lactase supplements, or treating the underlying disease. People are typically able to drink at least one cup of milk without developing symptoms, with greater amounts tolerated if drunk with a meal or throughout the day.
Worldwide, around 65% of adults are affected by lactose malabsorption. Other mammals usually lose the ability to digest lactose after weaning. Lactose intolerance is the ancestral state of all humans before the recent evolution of lactase persistence in some cultures, which extends lactose tolerance into adulthood. Lactase persistence evolved in several populations independently, probably as an adaptation to the domestication of dairy animals around 10,000 years ago. Today the prevalence of lactose tolerance varies widely between regions and ethnic groups. The ability to digest lactose is most common in people of Northern European descent, and to a lesser extent in some parts of the Middle East and Africa. Lactose intolerance is most common among people of East Asian descent, with 90% lactose intolerance, people of Jewish descent, in many African countries and Arab countries, and among people of Southern European descent (notably amongst Greeks and Italians). Traditional food cultures reflect local variations in tolerance and historically many societies have adapted to low levels of tolerance by making dairy products that contain less lactose than fresh milk. The medicalization of lactose intolerance as a disorder has been attributed to biases in research history, since most early studies were conducted amongst populations which are normally tolerant, as well as the cultural and economic importance and impact of milk in countries such as the United States.
Terminology
Lactose intolerance primarily refers to a syndrome with one or more symptoms upon the consumption of food substances containing lactose sugar. Individuals may be lactose intolerant to varying degrees, depending on the severity of these symptoms.
Hypolactasia is the term specifically for the small intestine producing little or no lactase enzyme. If a person with hypolactasia consumes lactose sugar, it results in lactose malabsorption. The digestive system is unable to process the lactose sugar, and the unprocessed sugars in the gut produce the symptoms of lactose intolerance.
Lactose intolerance is not an allergy, because it is not an immune response, but rather a sensitivity to dairy caused by a deficiency of lactase enzyme. Milk allergy, occurring in about 2% of the population, is a separate condition, with distinct symptoms that occur when the presence of milk proteins trigger an immune reaction.
Signs and symptoms
The principal manifestation of lactose intolerance is an adverse reaction to products containing lactose (primarily milk), including abdominal bloating and cramps, flatulence, diarrhea, nausea, borborygmi, and vomiting (particularly in adolescents). These appear one-half to two hours after consumption. The severity of these signs and symptoms typically increases with the amount of lactose consumed; most lactose-intolerant people can tolerate a certain level of lactose in their diets without ill effects.
Because lactose intolerance is not an allergy, it does not produce allergy symptoms (such as itching, hives, or anaphylaxis).
Causes
Lactose intolerance is a consequence of lactase deficiency, which may be genetic (primary hypolactasia and primary congenital alactasia) or environmentally induced (secondary or acquired hypolactasia). In either case, symptoms are caused by insufficient levels of lactase in the lining of the duodenum. Lactose, a disaccharide molecule found in milk and dairy products, cannot be directly absorbed through the wall of the small intestine into the bloodstream, so, in the absence of lactase, passes intact into the colon. Bacteria in the colon can metabolise lactose, and the resulting fermentation produces copious amounts of gas (a mixture of hydrogen, carbon dioxide, and methane) that causes the various abdominal symptoms. The unabsorbed sugars and fermentation products also raise the osmotic pressure of the colon, causing an increased flow of water into the bowels (diarrhea).
Lactose intolerance in infants (congenital lactase deficiency) is caused by mutations in the LCT gene. The LCT gene provides the instructions for making lactase. Mutations are believed to interfere with the function of lactase, causing affected infants to have a severely impaired ability to digest lactose in breast milk or formula. Lactose intolerance in adulthood is a result of gradually decreasing activity (expression) of the LCT gene after infancy, which occurs in most humans. A specific DNA sequence in the MCM6 gene helps control whether the LCT gene is turned on or off. At least several thousand years ago, some humans developed a mutation in the MCM6 gene that keeps the LCT gene turned on even after breast feeding is stopped. Populations that are lactose intolerant lack this mutation. The LCT and MCM6 genes are both located on the long arm (q) of chromosome 2 in region 21; the locus can be expressed as 2q21. Lactase deficiency may also be linked to certain ancestries, and its prevalence varies widely. A 2016 study of over 60,000 participants from 89 countries found regional prevalence of lactose malabsorption was "64% (54–74) in Asia (except Middle East), 47% (33–61) in eastern Europe, Russia, and former Soviet Republics, 38% (CI 18–57) in Latin America, 70% (57–83) in the Middle East, 66% (45–88) in northern Africa, 42% (13–71) in northern America, 45% (19–71) in Oceania, 63% (54–72) in sub-Saharan Africa, and 28% (19–37) in northern, southern and western Europe." According to Johns Hopkins Medicine, lactose intolerance is more common in Asian Americans, African Americans, Mexican Americans, and Native Americans. Analysis of the DNA of 94 ancient skeletons in Europe and Russia concluded that the mutation for lactose tolerance appeared about 4,300 years ago and spread throughout the European population.
Some human populations have developed lactase persistence, in which lactase production continues into adulthood, probably as a response to the benefits of being able to digest milk from farm animals. Some have argued that this links intolerance to natural selection favoring lactase-persistent individuals, but it is also consistent with a physiological response to decrease lactase production when it is not needed, in cultures in which dairy products are not an available food source. Although populations in Europe, India, Arabia, and Africa were first thought to have high rates of lactase persistence because of a single mutation, lactase persistence has been traced to a number of mutations that occurred independently. Different alleles for lactase persistence have developed at least three times in East African populations, with persistence ranging from 26% in Tanzania to 88% in the Beja pastoralist population in Sudan.
The accumulation of epigenetic factors, primarily DNA methylation, in the extended LCT region, including the gene enhancer located in the MCM6 gene near C/T-13910 SNP, may also contribute to the onset of lactose intolerance in adults. Age-dependent expression of LCT in mice intestinal epithelium has been linked to DNA methylation in the gene enhancer.
Lactose intolerance is classified according to its causes as:
Primary hypolactasia
Primary hypolactasia, or primary lactase deficiency, is genetic, develops in childhood at various ages, and is caused by the absence of a lactase persistence allele. In individuals without the lactase persistence allele, less lactase is produced by the body over time, leading to hypolactasia in adulthood. The frequency of lactase persistence, which allows lactose tolerance, varies enormously worldwide, with the highest prevalence in Northwestern Europe, declines across southern Europe and the Middle East and is low in Asia and most of Africa, although it is common in pastoralist populations from Africa.
Secondary hypolactasia
Secondary hypolactasia or secondary lactase deficiency, also called acquired hypolactasia or acquired lactase deficiency, is caused by an injury to the small intestine. This form of lactose intolerance can occur in both infants and lactase persistent adults and is generally reversible. It may be caused by acute gastroenteritis, coeliac disease, Crohn's disease, ulcerative colitis, chemotherapy, intestinal parasites (such as giardia), or other environmental causes.
Primary congenital alactasia
Primary congenital alactasia, also called congenital lactase deficiency, is an extremely rare, autosomal recessive enzyme defect that prevents lactase expression from birth. People with congenital lactase deficiency cannot digest lactose from birth, so cannot digest breast milk. This genetic defect is characterized by a complete lack of lactase (alactasia). About 40 cases have been reported worldwide, mainly limited to Finland. Before the 20th century, babies born with congenital lactase deficiency often did not survive, but death rates decreased with soybean-derived infant formulas and manufactured lactose-free dairy products.
Diagnosis
In order to assess lactose intolerance, intestinal function is challenged by ingesting more dairy products than can be readily digested. Clinical symptoms typically appear within 30 minutes, but may take up to two hours, depending on other foods and activities. Substantial variability in response (symptoms of nausea, cramping, bloating, diarrhea, and flatulence) is to be expected, as the extent and severity of lactose intolerance varies among individuals.
The next step is to determine whether it is due to primary lactase deficiency or an underlying disease that causes secondary lactase deficiency. Physicians should investigate the presence of undiagnosed coeliac disease, Crohn's disease, or other enteropathies when secondary lactase deficiency is suspected and infectious gastroenteritis has been ruled out.
Lactose intolerance is distinct from milk allergy, an immune response to cow's milk proteins. They may be distinguished in diagnosis by giving lactose-free milk, producing no symptoms in the case of lactose intolerance, but the same reaction as to normal milk in the presence of a milk allergy. A person can have both conditions. If positive confirmation is necessary, four tests are available.
Hydrogen breath test
In a hydrogen breath test, the most accurate lactose intolerance test, after an overnight fast, 25 grams of lactose (in a solution with water) are swallowed. If the lactose cannot be digested, enteric bacteria metabolize it and produce hydrogen, which, along with methane, if produced, can be detected on the patient's breath by a clinical gas chromatograph or compact solid-state detector. The test takes about 2.5 hours to complete. If the hydrogen levels in the patient's breath are high, they may have lactose intolerance. This test is not usually done on babies and very young children, because it can cause severe diarrhea.
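As a rough illustration of how breath readings might be screened, the sketch below flags a rise in breath hydrogen over the fasting baseline. The 20 ppm rise is a commonly cited clinical cutoff but is an assumption here, since the text above does not specify a threshold; real protocols also weigh methane production and test quality.

```python
# Hypothetical sketch of breath-hydrogen interpretation. The >= 20 ppm
# rise over baseline is an assumed cutoff, not taken from this article.

def suggests_malabsorption(readings_ppm: list[float], rise_cutoff: float = 20.0) -> bool:
    """True if any post-baseline reading rises >= rise_cutoff over baseline."""
    baseline = readings_ppm[0]
    return any(r - baseline >= rise_cutoff for r in readings_ppm[1:])

# Readings taken at intervals over ~2.5 hours after the lactose dose:
print(suggests_malabsorption([5, 8, 22, 41, 38]))   # True  (rise of 36 ppm)
print(suggests_malabsorption([5, 7, 9, 12, 10]))    # False (max rise 7 ppm)
```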
Lactose tolerance test
After an overnight fast, blood is drawn and then 50 grams of lactose (in aqueous solution) are swallowed. Blood is then drawn again at the 30-minute, 1-hour, 2-hour, and 3-hour marks. If the lactose cannot be digested, blood glucose levels will rise by less than 20 mg/dl. Measuring the blood glucose level every 10 to 15 minutes after ingestion will show a "flat curve" in individuals with lactose malabsorption, while the lactase persistent will show a significant peak, with a typical elevation of 50% to 100%, within one to two hours. However, due to the need for frequent blood sampling, this approach has been largely replaced by breath testing.
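The 20 mg/dl criterion above lends itself to a simple check. A minimal sketch, using hypothetical sample values:

```python
# Sketch of the blood-glucose criterion described above: lactose
# malabsorption is suggested when glucose rises by less than 20 mg/dl
# after the 50 g lactose dose. Sample values are hypothetical.

def flat_curve(fasting_mg_dl: float, post_dose_mg_dl: list[float]) -> bool:
    """True ('flat curve') if the peak glucose rise stays below 20 mg/dl."""
    return max(post_dose_mg_dl) - fasting_mg_dl < 20

print(flat_curve(85, [92, 96, 99, 95]))      # True: rise of only 14 mg/dl
print(flat_curve(85, [110, 140, 128, 105]))  # False: rise of 55 mg/dl
```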
Stool acidity test
This test can be used to diagnose lactose intolerance in infants, for whom other forms of testing are risky or impractical. The infant is given lactose to drink. If the individual is tolerant, the lactose is digested and absorbed in the small intestine; otherwise, it is not digested and absorbed, and it reaches the colon. The bacteria in the colon, mixed with the lactose, cause acidity in stools. Stools passed after the ingestion of the lactose are tested for level of acidity. If the stools are acidic, the infant is intolerant to lactose.
Stool pH in lactose intolerance is less than 5.5.
Intestinal biopsy
An intestinal biopsy can confirm lactase deficiency following the discovery of elevated hydrogen in the hydrogen breath test. Modern techniques have enabled a bedside test, identifying the presence of lactase enzyme on upper gastrointestinal endoscopy instruments. However, for research applications such as mRNA measurements, a specialist laboratory is required.
Stool sugar chromatography
Chromatography can be used to separate and identify undigested sugars present in faeces. Although lactose may be detected in the faeces of people with lactose intolerance, this test is not considered reliable enough to conclusively diagnose or exclude lactose intolerance.
Genetic diagnostic
Genetic tests may be useful in assessing whether a person has primary lactose intolerance. Lactase activity persistence in adults is associated with two polymorphisms, C/T 13910 and G/A 22018, located in the MCM6 gene. These polymorphisms may be detected by molecular biology techniques applied to DNA extracted from blood or saliva samples; genetic kits specific for this diagnosis are available. The procedure consists of extracting and amplifying DNA from the sample, followed by a hybridization protocol in a strip. Colored bands are obtained as a result, and depending on the combination, it is possible to determine whether the patient is lactose intolerant. This test allows a noninvasive, definitive diagnosis.
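As a rough illustration of the last step, a genotype-to-phenotype lookup for the C/T-13910 polymorphism. The mapping (the T allele being associated with lactase persistence) is best validated in European-origin populations, and the code is a hypothetical sketch rather than the output format of any commercial kit:

```python
# Hypothetical sketch of interpreting a C/T-13910 genotype (MCM6 region).
# Association of the T allele with lactase persistence is assumed here and
# is best validated in European-origin populations.

PHENOTYPE = {
    "CC": "lactase non-persistent (primary lactose intolerance likely)",
    "CT": "lactase persistent (tolerance likely)",
    "TT": "lactase persistent (tolerance likely)",
}

def interpret(genotype: str) -> str:
    """Normalize allele order (e.g. 'tc' -> 'CT') and look up the phenotype."""
    key = "".join(sorted(genotype.upper()))
    return PHENOTYPE.get(key, "unknown genotype")

print(interpret("ct"))  # lactase persistent (tolerance likely)
print(interpret("CC"))  # lactase non-persistent (primary lactose intolerance likely)
```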
Management
When lactose intolerance is due to secondary lactase deficiency, treatment of the underlying disease may allow lactase activity to return to normal levels. In people with celiac disease, lactose intolerance normally reverts or improves several months after starting a gluten-free diet, but temporary dietary restriction of lactose may be needed.
People with primary lactase deficiency cannot modify their body's ability to produce lactase. In societies where lactose intolerance is the norm, it is not considered a condition that requires treatment. However, where dairy is a larger component of the normal diet, a number of efforts may be useful. There are four general principles in dealing with lactose intolerance: avoidance of dietary lactose, substitution to maintain nutrient intake, regulation of calcium intake, and use of enzyme substitute. Regular consumption of dairy food by lactase deficient individuals may also reduce symptoms of intolerance by promoting colonic bacteria adaptation.
Dietary avoidance
The primary way of managing the symptoms of lactose intolerance is to limit the intake of lactose to a level that can be tolerated. Lactase deficient individuals vary in the amount of lactose they can tolerate, and some report that their tolerance varies over time, depending on health status and pregnancy. However, as a rule of thumb, people with primary lactase deficiency and no small intestine injury are usually able to consume at least 12 grams of lactose per sitting without symptoms, or with only mild symptoms, with greater amounts tolerated if consumed with a meal or throughout the day.
Lactose is found primarily in dairy products, which vary in the amount of lactose they contain (a rough per-serving sketch follows the list):
Milk – unprocessed cow's milk is about 4.7% lactose; goat's milk 4.7%; sheep's milk 4.7%; buffalo milk 4.86%; and yak milk 4.93%.
Sour cream and buttermilk – if made in the traditional way, this may be tolerable, but most modern brands add milk solids.
Yogurt – lactobacilli used in the production of yogurt metabolize lactose to varying degrees, depending on the type of yogurt. Some bacteria found in yogurt also produce their own lactase, which facilitates digestion in the intestines of lactose intolerant individuals.
Cheese – The curdling of cheese concentrates most of the lactose from milk into the whey: fresh cottage cheese contains 7% of the lactose found in an equivalent mass of milk. Further fermentation and aging converts the remaining lactose into lactic acid; traditionally made hard cheeses, which have a long ripening period, contain virtually no lactose: cheddar contains less than 1.5% of the lactose found in an equivalent mass of milk. However, manufactured cheeses may be produced using processes that do not have the same lactose-reducing properties.
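Combining the milk percentages above with the rough 12-gram per-sitting tolerance mentioned earlier under "Dietary avoidance" gives a feel for serving sizes. A minimal sketch; the 250 g (roughly one glass) serving size is an illustrative assumption:

```python
# Rough per-serving lactose arithmetic using the percentages listed above.
# The 250 g serving size is an illustrative assumption.

LACTOSE_FRACTION = {
    "cow's milk": 0.047,
    "goat's milk": 0.047,
    "buffalo milk": 0.0486,
    "yak milk": 0.0493,
}

TOLERANCE_G = 12      # rough per-sitting tolerance cited in this section
SERVING_G = 250       # assumed glass of milk, ~250 ml

for food, fraction in LACTOSE_FRACTION.items():
    grams = SERVING_G * fraction
    print(f"{food}: ~{grams:.1f} g lactose per {SERVING_G} g serving "
          f"({grams / TOLERANCE_G:.0%} of a 12 g threshold)")
# cow's milk: ~11.8 g lactose per 250 g serving (98% of a 12 g threshold)
```

On this rough arithmetic, a single glass of milk already sits near the 12-gram threshold, which is consistent with the earlier statement that many people tolerate about one cup of milk, especially spread across the day.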
There has historically been a lack of standardization in how lactose is measured and reported in food. The different molecular weights of anhydrous lactose and lactose monohydrate result in a difference of up to 5%. One source recommends using the "carbohydrates" or "sugars" part of the nutritional label as a surrogate for lactose content, but such "lactose by difference" values are not assured to correspond to real lactose content. The stated dairy content of a product also varies according to manufacturing processes and labelling practices, and commercial terminology varies between languages and regions. As a result, absolute figures for the amount of lactose consumed (by weight) may not be very reliable.
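The up-to-5% figure follows from the molecular weights (standard chemistry values, not taken from this article): lactose (C12H22O11) has a molar mass of about 342.3 g/mol, and the monohydrate form carries one additional water molecule (about 18.0 g/mol):

```latex
\frac{M_{\text{monohydrate}}}{M_{\text{anhydrous}}}
  \;=\; \frac{342.3 + 18.0}{342.3} \;\approx\; 1.053
```

so reporting the monohydrate mass overstates the anhydrous lactose content by roughly 5%.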
Kosher products labeled pareve or fleishig are free of milk. However, if a "D" (for "dairy") is present next to the circled "K", "U", or other hechsher, the food product likely contains milk solids, although it may also simply indicate the product was produced on equipment shared with other products containing milk derivatives.
Lactose is also a commercial food additive used for its texture, flavor, and adhesive qualities. It is found in additives labelled as casein, caseinate, whey, lactoserum, milk solids, modified milk ingredients, etc. As such, lactose is found in foods such as processed meats (sausages/hot dogs, sliced meats, pâtés), gravy stock powder, margarines, sliced breads, breakfast cereals, potato chips, processed foods, medications, prepared meals, meal replacements (powders and bars), protein supplements (powders and bars), and even beers in the milk stout style. Some barbecue sauces and liquid cheeses used in fast-food restaurants may also contain lactose. When dining out, carrying lactose intolerance cards that explain dietary restrictions in the local language can help communicate needs to restaurant staff. Lactose is often used as the primary filler (main ingredient) in most prescription and non-prescription solid pill form medications, though product labeling seldom mentions the presence of "lactose" or "milk", and neither do the product monographs provided to pharmacists. Most pharmacists are unaware of the very wide-scale yet common use of lactose in such medications until they contact the supplier or manufacturer for verification.
Milk substitutes
Plant-based milks and derivatives such as soy milk, rice milk, almond milk, coconut milk, hazelnut milk, oat milk, hemp milk, macadamia nut milk, and peanut milk are inherently lactose-free. Low-lactose and lactose-free versions of foods are often available to replace dairy-based foods for those with lactose intolerance.
Lactase supplements
When lactose avoidance is not possible, or on occasions when a person chooses to consume such items, then enzymatic lactase supplements may be used.
Lactase enzymes similar to those produced in the small intestines of humans are produced industrially by fungi of the genus Aspergillus. The enzyme, β-galactosidase, is available in tablet form in a variety of doses, in many countries without a prescription. It functions well only in high-acid environments, such as that found in the human gut due to the addition of gastric juices from the stomach. Unfortunately, too much acid can denature it, so it should not be taken on an empty stomach. Also, the enzyme is ineffective if it does not reach the small intestine by the time the problematic food does. Lactose-sensitive individuals can experiment with both timing and dosage to fit their particular needs.
While essentially the same process as normal intestinal lactose digestion, direct treatment of milk employs a different variety of industrially produced lactase. This enzyme, produced by yeast from the genus Kluyveromyces, takes much longer to act, must be thoroughly mixed throughout the product, and is destroyed by even mildly acidic environments. Its main use is in producing the lactose-free or lactose-reduced dairy products sold in supermarkets.
Rehabituation to dairy products
Regular consumption of dairy foods containing lactose can promote a colonic bacteria adaptation, enhancing a favorable microbiome, which allows people with primary lactase deficiency to diminish their intolerance and to consume more dairy foods. The way to induce tolerance is based on progressive exposure, consuming smaller amounts frequently, distributed throughout the day. Lactose intolerance can also be managed by ingesting live yogurt cultures containing lactobacilli that are able to digest the lactose in other dairy products.
Epidemiology
Worldwide, about 65% of people experience some form of lactose intolerance as they age past infancy, but there are significant differences between populations and regions. As few as 5% of northern Europeans are lactose intolerant, while as many as 90% of adults in parts of Asia are lactose intolerant.
In northern European countries, early adoption of dairy farming conferred a selective evolutionary advantage on individuals who could tolerate lactose. This led to higher frequencies of lactose tolerance in these countries. For example, almost 100% of Irish people are predicted to be lactose tolerant. Conversely, southern regions such as Africa adopted dairy farming later, and tolerance from milk consumption did not develop there the way it did in northern Europe. Lactose intolerance is common among people of Jewish descent, as well as those from West Africa, the Arab countries, Greece, and Italy. Different populations carry different gene variants depending on the evolutionary and cultural history of their geographical region.
History
Greater lactose tolerance has come about in two ways. Some populations have developed genetic changes to allow the digestion of lactose: lactase persistence. Other populations developed cooking methods like milk fermentation.
Lactase persistence in humans evolved relatively recently (in the last 10,000 years) among some populations. Around 8,000 years ago in modern-day Turkey, humans became reliant on newly domesticated animals that could be milked, such as cows, sheep, and goats. This resulted in a higher frequency of lactase persistence. Lactase persistence became common in regions such as Europe, Scandinavia, the Middle East and Northwestern India. However, most people worldwide remain lactase non-persistent. Populations that raised animals not used for milk tend to have lactose intolerance rates of 90–100 percent. For this reason, lactase persistence is of some interest to the fields of anthropology, human genetics, and archaeology, which typically use the genetically derived persistence/non-persistence terminology.
The rise of dairying and of producing dairy-related products from cow milk varies across different regions of the world, aside from genetic predisposition. The process of turning milk into cheese dates back earlier than 5200 BC.
DNA analysis in February 2012 revealed that Ötzi was lactose intolerant, supporting the theory that lactose intolerance was still common at that time, despite the increasing spread of agriculture and dairying.
Genetic analysis shows lactase persistence has developed several times in different places independently in an example of convergent evolution.
History of research
It was not until relatively recently that medicine recognised the worldwide prevalence of lactose intolerance and its genetic causes. Its symptoms were described as early as Hippocrates (460–370 BC), but until the 1960s, the prevailing assumption was that tolerance was the norm. Intolerance was explained as the result of a milk allergy, intestinal pathogens, or as being psychosomatic – it being recognised that some cultures did not practice dairying, and people from those cultures often reacted badly to consuming milk. Two reasons have been given for this misconception. One was that early research was conducted solely on European-descended populations, which have an unusually low incidence of lactose intolerance and an extensive cultural history of dairying. As a result, researchers wrongly concluded that tolerance was the global norm. Another reason is that lactose intolerance tends to be under-reported: lactose intolerant individuals can tolerate at least some lactose before they show symptoms, and their symptoms differ in severity. The large majority of people are able to digest some quantity of milk, for example in tea or coffee, without developing any adverse effects. Fermented dairy products, such as cheese, also contain significantly less lactose than plain milk. Therefore, in societies where tolerance is the norm, many lactose intolerant people who consume only small amounts of dairy, or have only mild symptoms, may be unaware that they cannot digest lactose.
Eventually, in the 1960s, it was recognised that lactose intolerance was correlated with race in the United States. Subsequent research revealed that lactose intolerance was more common globally than tolerance, and that the variation was due to genetic differences, not an adaptation to cultural practices.
Other animals
Most mammals normally cease to produce lactase and become lactose intolerant after weaning. The downregulation of lactase expression in mice could be attributed to the accumulation of DNA methylation in the Lct gene and the adjacent Mcm6 gene.
See also
References
External links
Digestive system
Milk
Conditions diagnosed by stool test
Food sensitivity
Ötzi | Lactose intolerance | [
"Biology"
] | 5,808 | [
"Digestive system",
"Organ systems"
] |
56,874 | https://en.wikipedia.org/wiki/Horizontal%20line%20test | In mathematics, the horizontal line test is a test used to determine whether a function is injective (i.e., one-to-one).
In calculus
A horizontal line is a straight, flat line that goes from left to right. Given a function f: ℝ → ℝ (i.e. from the real numbers to the real numbers), we can decide if it is injective by looking at horizontal lines that intersect the function's graph. If any horizontal line y = c intersects the graph in more than one point, the function is not injective. To see this, note that the points of intersection have the same y-value (because they lie on the line y = c) but different x-values, which by definition means the function cannot be injective.
Variations of the horizontal line test can be used to determine whether a function is surjective or bijective:
The function f is surjective (i.e., onto) if and only if its graph intersects any horizontal line at least once.
f is bijective if and only if any horizontal line will intersect the graph exactly once.
In set theory
Consider a function f: X → Y with its corresponding graph as a subset of the Cartesian product X × Y. Consider the horizontal lines in X × Y, that is, the subsets of the form X × {y0} for y0 in Y. The function f is injective if and only if each horizontal line intersects the graph at most once. In this case the graph is said to pass the horizontal line test. If any horizontal line intersects the graph more than once, the function fails the horizontal line test and is not injective.
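As a rough computational illustration (not a proof), one can sample a function on a finite grid and check whether any two sampled points share a y-value; this is the horizontal line test restricted to the sample. The function names, grid, and tolerance below are arbitrary illustrative choices.

```python
def fails_horizontal_line_test(f, xs, tol=1e-9):
    """Return True if some horizontal line meets the sampled graph of f
    more than once, i.e. two sample points have (nearly) equal y-values."""
    ys = sorted(f(x) for x in xs)
    return any(abs(a - b) < tol for a, b in zip(ys, ys[1:]))

xs = [i / 10 for i in range(-50, 51)]
print(fails_horizontal_line_test(lambda x: x ** 2, xs))  # True: x^2 is not injective
print(fails_horizontal_line_test(lambda x: x ** 3, xs))  # False on this sample
```

A failing sample is a genuine witness of non-injectivity (up to floating-point tolerance), while a passing sample only suggests, and cannot prove, that the function is injective.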
See also
Vertical line test
Inverse function
Monotonic function
References
Basic concepts in set theory | Horizontal line test | [
"Mathematics"
] | 328 | [
"Basic concepts in set theory"
] |
56,926 | https://en.wikipedia.org/wiki/Baikal%E2%80%93Amur%20Mainline | The Baikal–Amur Mainline (, , , ) is a broad-gauge railway line in Russia. Traversing Eastern Siberia and the Russian Far East, the -long BAM runs about 610 to 770 km (380 to 480 miles) north of and parallel to the Trans-Siberian Railway.
The Soviet Union built the BAM as a strategic alternative route to the Trans–Siberian Railway, seen as vulnerable especially along the sections close to the border with China. The BAM cost $14 billion, and it was built with special, durable tracks since much of it ran over permafrost. Due to the severe terrain, weather, length and cost, Soviet general secretary Leonid Brezhnev described BAM in 1974 as "the construction project of the century".
If the permafrost layer that supports the BAM railway line were to melt, the railway would collapse and sink into peat bog layers that cannot bear its weight. In 2016 and 2018 there were reports about climate change and damage to buildings and infrastructure as a result of thawing permafrost.
Route
The BAM departs from the Trans-Siberian railway at Tayshet, then crosses the Angara River at Bratsk and the Lena River at Ust-Kut, proceeds past Severobaikalsk at the northern tip of Lake Baikal, past Tynda and Khani, crosses the Amur River at Komsomolsk-on-Amur and finally reaches the Pacific Ocean at Sovetskaya Gavan. There are 21 tunnels along the line. There are also more than 4,200 bridges.
Of the whole route, only the western Tayshet–Taksimo sector is electrified. The route is largely single-track, although the reservation is wide enough for double-tracking along its full length in the case of eventual duplication. An unusual feature of the railway is that it is electrified with a 27.5 kV, 50 Hz catenary whose minimum height above the top of the rails is intended to suit double-stacking under the overhead wires on the Russian-gauge tracks; this requires rolling stock to be modified for service on the railway.
At Tynda the route is crossed by the Amur–Yakutsk Mainline, which runs north to Neryungri and Tommot, with an extension to Nizhny Bestyakh opened in 2019. The original section of the AYaM connecting the Trans-Siberian at Bamovskaya with the BAM at Tynda is also referred to as the "Little BAM".
During the winter, passenger trains run from Moscow past Tayshet and Tynda to Neryungri and Tommot, and there are also daily trains from Tynda to Komsomolsk-on-Amur and from Komsomolsk-on-Amur to Sovetskaya Gavan on the Pacific Ocean via Vanino (the "Vladivostok–Sovetskaya Gavan" train No. 351Э). Travel time from Tayshet to Tynda is 48 hours. Travel time from Tynda to Komsomolsk-on-Amur is 36 hours. Travel time from Komsomolsk-on-Amur to Sovetskaya Gavan is 13 hours.
There are ten tunnels along the BAM railway. They include:
Baikalsky tunnel
Severomuysky Tunnel
Kodar Tunnel
Dusse Alin Tunnel
Korshunovsky tunnel
These are among the longest tunnels in Russia.
In addition, the route crosses 11 full-flowing rivers (including the Lena, Amur, Zeya, Vitim, Olyokma, Selemdzha and Bureya). In total, 2230 large and small bridges were built on it.
History
Early plans and start of construction
The route of the present-day BAM first came under consideration in the 1880s as an option for the eastern section of the planned Trans-Siberian railway.
In the 1930s, labor-camp inmates, in particular from the Bamlag camp of the Gulag system, built the section from Tayshet to Bratsk. In a confusing transfer of names, the label BAM applied from 1933 to 1935 to the project to double-track the Trans-Siberian east of Lake Baikal, constructed largely using forced labor.
1945 saw the finalisation of plans for upgrading the BAM for diesel or electric instead of steam traction, and for the heavier axle-loads of eight-axle oil tankers to carry new-found oil from Western Siberia. The upgrading required 25 years and 3,000 surveyors and designers, although much of the redesign work (particularly as regards the central section) took place between 1967 and 1974.
Construction project of the century
In March 1974, Soviet General Secretary Brezhnev proposed that the BAM would be one of the two major projects in the Tenth Five Year Plan (1976–80). He famously stated that "BAM will be constructed with clean hands only!" and firmly rejected the suggestion to again use prison labor. A few weeks later, he challenged the Young Communist League (Komsomol) to join in "the construction project of the century". The 17th Komsomol congress (held in April 1974) announced the BAM as a Komsomol shock construction project, created the central Komsomol headquarters of BAM construction, and appointed Dmitry Filippov the chief of the headquarters.
By the end of 1974, perhaps 50,000 of the 156,000 young people who applied had moved to the BAM service area. In 1975 and 1976, 28 new settlements were inaugurated and 70 new bridges, including the Amur and Lena bridges, were erected. And although track was being laid, the track-laying rate would have needed to nearly triple to meet the 1983 deadline.
In September 1984, a "golden spike" was hammered into place, connecting the eastern and western sections of the BAM. The Western media was not invited to attend this historic event as Soviet officials did not want any comments about the line's operational status. In reality, only one third of the BAM's track was fully operational for civilians, due to military reasons.
The BAM was again declared complete in 1991. By then, the total cost to build the line was US$14 billion (RU₽106 trillion).
Crisis
Beginning in the mid-1980s, the BAM project attracted increasing criticism for having been poorly planned. Infrastructure and basic services like running water were often not in place when workers arrived. At least 60 boomtowns developed along the route, but today many of these places are deserted ghost towns and unemployment in the area is high. The building of the BAM has also been criticised for its complete lack of environmental protection.
When the Soviet Union was dissolved, numerous mining and industrial projects in the region were cancelled and the BAM was greatly underutilized until the late 1990s, running at a large operational deficit.
In 1996, the BAM as a single operational body was dissolved, with the western section from Tayshet to Khani becoming the East Siberian Railway and the rest transferred to the management of the Far Eastern Railway.
During the Russo-Ukrainian War, on November 30, 2023, an explosion occurred in the Severomuysky Tunnel. A second explosion happened soon thereafter on the bypass used as backup for the tunnel. The Security Service of Ukraine claimed responsibility for the explosions.
Current situation and future prospects
A major improvement was the opening of the Severomuysky Tunnel on 5 December 2003. It lies up to 1.5 kilometres (nearly 1 mile) deep, and construction took 27 years to complete. Prior to this, the corresponding route segment was a long bypass with heavy slopes, necessitating the use of auxiliary bank engine locomotives.
With the resources boom of recent years and improving economic conditions in Russia, use of the line is increasing. Plans exist for the development of mining areas such as Udokanskoye and Chineyskoye near Novaya Chara, as well as one of Eurasia's largest coal deposits at Elginskoye (Elga) in the Sakha Republic (Yakutia). In connection with this, a number of branch lines have been built or are under construction.
In January 2012 the Russian mining company Mechel completed the construction of the 320-kilometre-long branch line to Elginskoye, branching from the BAM station Ulak, west of the Zeya River crossing in northwestern Amur Oblast. The branch line connects the Elginskoye coal mine to the Russian railroad network.
Currently under discussion is the construction of a bridge or tunnel under the Strait of Tartary to Sakhalin Island, with the possibility of the further construction of a bridge or tunnel from Sakhalin to Japan. A tunnel from the mainland to Sakhalin was previously begun under Joseph Stalin, but was abandoned after his death. A second attempt in 2003 was also postponed during construction. Current economic conditions make the short-term completion of the tunnel doubtful, although Russian president Dmitry Medvedev announced in November 2008 his support for a revival of this project.
The BAM now also attracts the interest of Western railway enthusiasts, with some tourist activity on the line.
Also proposed are an extension of the BAM itself from Komsomolsk-on-Amur to Magadan (an Okhotsk coastal route), full-length electrification, full-length track doubling, and double-stacking under the overhead wires on the Russian-gauge tracks (with well cars to reach a 6.15 m height).
Along the BAM
Tayshet to Lake Baikal
Lake Baikal to Tynda
Tynda to Komsomolsk
Komsomolsk to Sovetskaya Gavan
This section was completed by prisoners during World War II, except for the section east of Komsomolsk which was completed in 1974.
In April 2008 the state-owned Bamtonnelstroy corporation started work on the new single-track Kuznetsovsky Tunnel to bypass an older tunnel built in 1943–1945. It was opened in December 2012. The old tunnel had difficult gradients; building the new tunnel relieved a bottleneck on the BAM. The 59.8 bn rouble (about $1.93 bn) project also included new track. In 2010, Russian Railways president Vladimir Yakunin said that the stretch between Komsomolsk and Sovetskaya Gavan was the weakest link on the BAM, which, he said, could be carrying 100 million tons of freight a year in 2050.
Branches
575: Khrebtovaya to Ust-Ilimsk: opened in 1970, it runs northeast to serve the Ust-Ilimsk Dam.
1,257: Novy Uoyan: possible start of a line running south along the east side of Lake Baikal.
2,364: Tynda to the Trans-Siberian at Bamovskaya (the 'Little BAM'): this branch was built by prisoners in 1933–37, torn up in 1942 with its rails shipped to the front, and rebuilt in 1972–75.
2,364: Tynda to Yakutsk: see Amur–Yakutsk Mainline.
3,315: Novy Urgal to the Trans-Siberian at Izvestovskaya: in the Bureya River basin, it was built mostly by Japanese POWs. There is a branch north from Novy Urgal to the Chegdomyn coal fields.
3,837: Komsomolsk south to Khabarovsk, on the east side (flood plain) of the Amur; Lake Bolon lies to the south.
51 (line km restart at Komsomolsk): Selikhin to Cherny Mys: north along the Amur. Built 1950–53, it was planned to extend this to a tunnel to Sakhalin Island. There is talk of restarting it.
The BAM road
Running approximately alongside the railway track is the BAM road, a railway service track. It is said to be in a very poor state, with collapsed bridges, dangerous river crossings, severe potholes and "unrelenting energy-sapping bogs". The narrow, dilapidated Vitim River Bridge (also known as the Kuandinsky Bridge), which crosses the Vitim river, has attracted attention since its first appearance on social media in 2009. Crossing the bridge has been forbidden since 2016, but it remains a common route for individuals to reach the town of Kuanda.
The road is passable only by the most extreme off-road vehicles and adventure motorcycles. In 2009, a group of three experienced motorcycle riders took a whole month to travel from Komsomolsk (in the east) to Lake Baikal.
Honors
Main belt asteroid 2031 BAM, discovered in 1969 by Soviet astronomer Lyudmila Chernykh, is named in honor of the builders of the BAM.
Gallery
References
External links
Construction history of the BAM
Private homepage about the BAM (section in English)
BAM: Soviet construction project of a century
BAM Guide on Trailblazer Publications website
NYTimes 2012 travel feature
The Baikal Amur Mainline is a popular adventure motorcycle travel route
Gulag industry
Railway lines in Russia
Rail transport in the Soviet Union
Rail transport in Siberia
Rail transport in the Russian Far East
1520 mm gauge railways in Russia
Chief Directorate of Railroad Construction Camps
Megaprojects | Baikal–Amur Mainline | [
"Engineering"
] | 2,706 | [
"Megaprojects"
] |
56,938 | https://en.wikipedia.org/wiki/Institute%20of%20Electrical%20and%20Electronics%20Engineers | The Institute of Electrical and Electronics Engineers (IEEE) is an American 501(c)(3) professional association for electrical engineering, electronics engineering, and other related disciplines.
The IEEE has a corporate office in New York City and an operations center in Piscataway, New Jersey. The IEEE was formed in 1963 as an amalgamation of the American Institute of Electrical Engineers and the Institute of Radio Engineers.
History
The IEEE traces its founding to 1884 and the American Institute of Electrical Engineers. In 1912, the rival Institute of Radio Engineers was formed. Although the AIEE was initially larger, the IRE attracted more students and was larger by the mid-1950s. The AIEE and IRE merged in 1963.
The IEEE is headquartered in New York City, but most business is done at the IEEE Operations Center in Piscataway, New Jersey, opened in 1975.
The Australian Section of the IEEE existed between 1972 and 1985, after which it split into state- and territory-based sections.
IEEE has over 460,000 members in 190 countries, with more than 66 percent from outside the United States.
Publications
IEEE claims to produce over 30% of the world's literature in the electrical, electronics, and computer engineering fields, publishing approximately 200 peer-reviewed journals and magazines. IEEE publishes more than 1,700 conference proceedings every year.
The published content in these journals as well as the content from several hundred annual conferences sponsored by the IEEE are available in the IEEE Electronic Library (IEL) available through IEEE Xplore platform, for subscription-based access and individual publication purchases.
In addition to journals and conference proceedings, the IEEE also publishes tutorials and standards that are produced by its standardization committees. The organization also has its own IEEE paper format.
Technical societies
IEEE has 39 technical societies, each focused on a certain knowledge area, which provide specialized publications, conferences, business networking and other services.
Other bodies
IEEE Global History Network
In September 2008, the IEEE History Committee founded the IEEE Global History Network, which now redirects to Engineering and Technology History Wiki.
IEEE Foundation
The IEEE Foundation is a charitable foundation established in 1973 to support and promote technology education, innovation, and excellence. It is incorporated separately from the IEEE, although it has a close relationship to it. Members of the Board of Directors of the foundation are required to be active members of IEEE, and one third of them must be current or former members of the IEEE Board of Directors.
Initially, the role of the IEEE Foundation was to accept and administer donations for the IEEE Awards program, but donations increased beyond what was necessary for this purpose, and the scope was broadened. In addition to soliciting and administering unrestricted funds, the foundation also administers donor-designated funds supporting particular educational, humanitarian, historical preservation, and peer recognition programs of the IEEE. As of the end of 2014, the foundation's total assets were nearly $45 million, split equally between unrestricted and donor-designated funds.
Controversies
Huawei ban
In May 2019, IEEE restricted Huawei employees from peer reviewing papers or handling papers as editors due to the "severe legal implications" of U.S. government sanctions against Huawei. As members of its standard-setting body, Huawei employees could continue to exercise their voting rights, attend standards development meetings, submit proposals and comment in public discussions on new standards. The ban sparked outrage among Chinese scientists on social media. Some professors in China decided to cancel their memberships.
On June 3, 2019, IEEE lifted restrictions on Huawei's editorial and peer review activities after receiving clearance from the United States government.
Position on the Russian invasion of Ukraine
On February 26, 2022, the chair of the IEEE Ukraine Section, Ievgen Pichkalov, publicly appealed to the IEEE members to "freeze [IEEE] activities and membership in Russia" and requested "public reaction and strict disapproval of Russia's aggression" from the IEEE and IEEE Region 8. On March 17, 2022, an article in the form of Q&A interview with IEEE Russia (Siberia) senior member Roman Gorbunov titled "A Russian Perspective on the War in Ukraine" was published in IEEE Spectrum to demonstrate "the plurality of views among IEEE members" and the "views that are at odds with international reporting on the war in Ukraine". On March 30, 2022, activist Anna Rohrbach created an open letter to the IEEE in an attempt to have them directly address the article, stating that the article used "common narratives in Russian propaganda" on the 2022 Russian invasion of Ukraine and requesting the IEEE Spectrum to acknowledge "that they have unwittingly published a piece furthering misinformation and Russian propaganda." A few days later a note from the editors was added on April 6 with an apology "for not providing adequate context at the time of publication", though the editors did not revise the original article.
See also
Certified Software Development Professional (CSDP) program of the IEEE Computer Society
Glossary of electrical and electronics engineering
Engineering and Technology History Wiki
Eta Kappa Nu – IEEE HKN Honor society
IEEE Standards Association
Institution of Engineering and Technology (UK)
International Electrotechnical Commission (IEC)
List of IEEE awards
List of IEEE conferences
List of IEEE fellows
Notes
References
External links
IEEE Xplore – Research database and online digital library archive
IEEE History Center
501(c)(3) organizations
1963 establishments in New York (state)
Organizations based in New York City
Piscataway, New Jersey
Professional associations based in the United States
Standards organizations in the United States | Institute of Electrical and Electronics Engineers | [
"Engineering"
] | 1,119 | [
"Electrical engineering organizations",
"Institute of Electrical and Electronics Engineers"
] |
56,990 | https://en.wikipedia.org/wiki/Tower%20of%20Hanoi | The Tower of Hanoi (also called The problem of Benares Temple, Tower of Brahma or Lucas' Tower, and sometimes pluralized as Towers, or simply pyramid puzzle) is a mathematical game or puzzle consisting of three rods and a number of disks of various diameters, which can slide onto any rod. The puzzle begins with the disks stacked on one rod in order of decreasing size, the smallest at the top, thus approximating a conical shape. The objective of the puzzle is to move the entire stack to one of the other rods, obeying the following rules:
Only one disk may be moved at a time.
Each move consists of taking the upper disk from one of the stacks and placing it on top of another stack or on an empty rod.
No disk may be placed on top of a disk that is smaller than it.
With three disks, the puzzle can be solved in seven moves. The minimal number of moves required to solve a Tower of Hanoi puzzle is 2^n − 1, where n is the number of disks.
Origins
The puzzle was invented by the French mathematician Édouard Lucas, first presented in 1883 as a game discovered by "N. Claus (de Siam)" (an anagram of "Lucas d'Amiens"), and later published as a booklet in 1889 and in a posthumously-published volume of Lucas' Récréations mathématiques. Accompanying the game was an instruction booklet, describing the game's purported origins in Tonkin, and claiming that according to legend Brahmins at a temple in Benares have been carrying out the movement of the "Sacred Tower of Brahma", consisting of sixty-four golden disks, according to the same rules as in the game, and that the completion of the tower would lead to the end of the world. Numerous variations on this legend regarding the ancient and mystical nature of the puzzle popped up almost immediately.
If the legend were true, and if the priests were able to move disks at a rate of one per second, using the smallest number of moves, it would take them 2^64 − 1 seconds or roughly 585 billion years to finish, which is about 42 times the estimated current age of the universe.
There are many variations on this legend. For instance, in some back stories, the temple is a monastery, and the priests are monks. The temple or monastery may be in various locales including Hanoi, and may be associated with any religion. In some versions, other elements are introduced, such as the fact that the tower was created at the beginning of the world, or that the priests or monks may make only one move per day.
Solution
The puzzle can be played with any number of disks, although many toy versions have around 7 to 9 of them. The minimal number of moves required to solve a Tower of Hanoi puzzle with n disks is 2^n − 1.
Iterative solution
A simple solution for the toy puzzle is to alternate moves between the smallest piece and a non-smallest piece. When moving the smallest piece, always move it to the next position in the same direction (to the right if the starting number of pieces is even, to the left if the starting number of pieces is odd). If there is no tower position in the chosen direction, move the piece to the opposite end, but then continue to move in the correct direction. For example, if you started with three pieces, you would move the smallest piece to the opposite end, then continue in the left direction after that. When the turn is to move the non-smallest piece, there is only one legal move. Doing this will complete the puzzle in the fewest moves.
Simpler statement of iterative solution
The iterative solution is equivalent to repeated execution of the following sequence of steps until the goal has been achieved:
Move one disk from peg A to peg B or vice versa, whichever move is legal.
Move one disk from peg A to peg C or vice versa, whichever move is legal.
Move one disk from peg B to peg C or vice versa, whichever move is legal.
Following this approach, the stack will end up on peg B if the number of disks is odd and peg C if it is even.
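A minimal Python sketch of this three-step cycle follows; the peg names and the data representation (lists with the top disk last) are illustrative choices, not part of the original description.

```python
def iterative_hanoi(n):
    """Repeat the A-B, A-C, B-C cycle, always making the one legal move."""
    pegs = {"A": list(range(n, 0, -1)), "B": [], "C": []}  # top of stack = end of list
    moves = []
    pairs = [("A", "B"), ("A", "C"), ("B", "C")]
    for i in range(2 ** n - 1):                # the minimal number of moves
        x, y = pairs[i % 3]
        # the legal move between x and y puts the smaller top disk on the other peg
        if pegs[x] and (not pegs[y] or pegs[x][-1] < pegs[y][-1]):
            src, dst = x, y
        else:
            src, dst = y, x
        pegs[dst].append(pegs[src].pop())
        moves.append((src, dst))
    return moves, pegs

moves, pegs = iterative_hanoi(3)
print(len(moves), pegs["B"])   # 7 [3, 2, 1] -- for odd n the stack ends on peg B
```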
Recursive solution
The key to solving a problem recursively is to recognize that it can be broken down into a collection of smaller sub-problems, to each of which that same general solving procedure that we are seeking applies, and the total solution is then found in some simple way from those sub-problems' solutions. Each of these created sub-problems being "smaller" guarantees that the base case(s) will eventually be reached. For the Towers of Hanoi:
label the pegs A, B, C,
let n be the total number of disks, and
number the disks from 1 (smallest, topmost) to n (largest, bottom-most).
Assuming all n disks are distributed in valid arrangements among the pegs; assuming there are m top disks on a source peg, and all the rest of the disks are larger than disk m, so they can be safely ignored; to move m disks from a source peg to a target peg using a spare peg, without violating the rules:
Move m − 1 disks from the source to the spare peg, by the same general solving procedure. Rules are not violated, by assumption. This leaves the disk m as a top disk on the source peg.
Move the disk m from the source to the target peg, which is guaranteed to be a valid move, by the assumptions — a simple step.
Move the m − 1 disks that we have just placed on the spare, from the spare to the target peg by the same general solving procedure, so they are placed on top of the disk m without violating the rules.
The base case is to move 0 disks (in steps 1 and 3), that is, do nothing—which does not violate the rules.
The full Tower of Hanoi solution then moves n disks from the source peg A to the target peg C, using B as the spare peg.
This approach can be given a rigorous mathematical proof with mathematical induction and is often used as an example of recursion when teaching programming.
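A straightforward Python rendering of the three steps above; the peg labels and print format are illustrative.

```python
def hanoi(n, source="A", target="C", spare="B"):
    """Move n disks from source to target using spare, printing each move."""
    if n == 0:
        return                                        # base case: nothing to move
    hanoi(n - 1, source, spare, target)               # step 1: clear the n-1 smaller disks
    print(f"move disk {n} from {source} to {target}") # step 2: move the largest disk
    hanoi(n - 1, spare, target, source)               # step 3: put the n-1 disks back on top

hanoi(3)   # prints the 2**3 - 1 = 7 moves
```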
Logical analysis of the recursive solution
As in many mathematical puzzles, finding a solution is made easier by solving a slightly more general problem: how to move a tower of h (height) disks from a starting peg f = A (from) onto a destination peg t = C (to), B being the remaining third peg and assuming t ≠ f. First, observe that the problem is symmetric for permutations of the names of the pegs (symmetric group S3). If a solution is known moving from peg A to peg C, then, by renaming the pegs, the same solution can be used for every other choice of starting and destination peg. If there is only one disk (or even none at all), the problem is trivial. If h = 1, then move the disk from peg A to peg C. If h > 1, then somewhere along the sequence of moves, the largest disk must be moved from peg A to another peg, preferably to peg C. The only situation that allows this move is when all smaller h − 1 disks are on peg B. Hence, first all h − 1 smaller disks must go from A to B. Then move the largest disk and finally move the h − 1 smaller disks from peg B to peg C. The presence of the largest disk does not impede any move of the h − 1 smaller disks and can be temporarily ignored. Now the problem is reduced to moving h − 1 disks from one peg to another one, first from A to B and subsequently from B to C, but the same method can be used both times by renaming the pegs. The same strategy can be used to reduce the h − 1 problem to h − 2, h − 3, and so on until only one disk is left. This is called recursion. This algorithm can be schematized as follows.
Identify the disks in order of increasing size by the natural numbers from 0 up to but not including h. Hence disk 0 is the smallest one, and disk h − 1 the largest one.
The following is a procedure for moving a tower of h disks from a peg A onto a peg C, with B being the remaining third peg:
If h > 1, then first use this procedure to move the h − 1 smaller disks from peg A to peg B.
Now the largest disk, i.e. disk h can be moved from peg A to peg C.
If h > 1, then again use this procedure to move the h − 1 smaller disks from peg B to peg C.
By mathematical induction, it is easily proven that the above procedure requires the minimum number of moves possible and that the produced solution is the only one with this minimal number of moves. Using recurrence relations, the exact number of moves that this solution requires can be calculated by: T(h) = 2^h − 1, where h is the number of disks. This result is obtained by noting that steps 1 and 3 take T(h − 1) moves each, and step 2 takes one move, giving the recurrence T(h) = 2T(h − 1) + 1 with T(1) = 1.
Non-recursive solution
The list of moves for a tower being carried from one peg onto another one, as produced by the recursive algorithm, has many regularities. When counting the moves starting from 1, the ordinal of the disk to be moved during move m is the number of times m can be divided by 2. Hence every odd move involves the smallest disk. It can also be observed that the smallest disk traverses the pegs f, t, r, f, t, r, etc. for odd height of the tower and traverses the pegs f, r, t, f, r, t, etc. for even height of the tower. This provides the following algorithm, which is easier, carried out by hand, than the recursive algorithm.
In alternate moves:
Move the smallest disk to the peg it has not recently come from.
Move another disk legally (there will be only one possibility).
For the very first move, the smallest disk goes to peg t if h is odd and to peg r if h is even.
Also observe that:
Disks whose ordinals have even parity move in the same sense as the smallest disk.
Disks whose ordinals have odd parity move in opposite sense.
If h is even, the remaining third peg during successive moves is t, r, f, t, r, f, etc.
If h is odd, the remaining third peg during successive moves is r, t, f, r, t, f, etc.
With this knowledge, a set of disks in the middle of an optimal solution can be recovered with no more state information than the positions of each disk:
Call the moves detailed above a disk's "natural" move.
Examine the smallest top disk that is not disk 0, and note what its only (legal) move would be: if there is no such disk, then we are either at the first or last move.
If that move is the disk's "natural" move, then the disk has not been moved since the last disk 0 move, and that move should be taken.
If that move is not the disk's "natural" move, then move disk 0.
Binary solution
Disk positions may be determined more directly from the binary (base-2) representation of the move number (the initial state being move #0, with all digits 0, and the final state being with all digits 1), using the following rules:
There is one binary digit (bit) for each disk.
The most significant (leftmost) bit represents the largest disk. A value of 0 indicates that the largest disk is on the initial peg, while a 1 indicates that it is on the final peg (right peg if number of disks is odd and middle peg otherwise).
The bitstring is read from left to right, and each bit can be used to determine the location of the corresponding disk.
A bit with the same value as the previous one means that the corresponding disk is stacked on top of the previous disk on the same peg.
(That is to say: a straight sequence of 1s or 0s means that the corresponding disks are all on the same peg.)
A bit with a different value to the previous one means that the corresponding disk is one position to the left or right of the previous one. Whether it is left or right is determined by this rule:
Assume that the initial peg is on the left.
Also assume "wrapping"—so the right peg counts as one peg "left" of the left peg, and vice versa.
Let n be the number of greater disks that are located on the same peg as their first greater disk and add 1 if the largest disk is on the left peg. If n is even, the disk is located one peg to the right; if n is odd, the disk is located one peg to the left (in case of an even number of disks, and vice versa otherwise).
For example, in an 8-disk Hanoi:
Move 0 = 00000000.
The largest disk is 0, so it is on the left (initial) peg.
All other disks are 0 as well, so they are stacked on top of it. Hence all disks are on the initial peg.
Move 2^8 − 1 = 11111111.
The largest disk is 1, so it is on the middle (final) peg.
All other disks are 1 as well, so they are stacked on top of it. Hence all disks are on the final peg and the puzzle is complete.
Move 216 = 11011000.
The largest disk is 1, so it is on the middle (final) peg.
Disk two is also 1, so it is stacked on top of it, on the middle peg.
Disk three is 0, so it is on another peg. Since n is odd (n = 1), it is one peg to the left, i.e. on the left peg.
Disk four is 1, so it is on another peg. Since n is odd (n = 1), it is one peg to the left, i.e. on the right peg.
Disk five is also 1, so it is stacked on top of it, on the right peg.
Disk six is 0, so it is on another peg. Since n is even (n = 2), the disk is one peg to the right, i.e. on the left peg.
Disks seven and eight are also 0, so they are stacked on top of it, on the left peg.
The source and destination pegs for the mth move can also be found elegantly from the binary representation of m using bitwise operations. To use the syntax of the C programming language, move m is from peg (m & m - 1) % 3 to peg ((m | m - 1) + 1) % 3, where the disks begin on peg 0 and finish on peg 1 or 2 according as whether the number of disks is even or odd. Another formulation is from peg (m - (m & -m)) % 3 to peg (m + (m & -m)) % 3.
Furthermore, the disk to be moved is determined by the number of times the move count (m) can be divided by 2 (i.e. the number of zero bits at the right), counting the first move as 1 and identifying the disks by the numbers 0, 1, 2, etc. in order of increasing size. This permits a very fast non-recursive computer implementation to find the positions of the disks after m moves without reference to any previous move or distribution of disks.
The count-trailing-zeros (ctz) operation, which counts the number of consecutive zeros at the end of a binary number, gives a simple solution to the problem: the disks are numbered from zero, and at move m, disk number ctz(m) is moved the minimal possible distance to the right (circling back around to the left as needed).
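The bitwise formulas above translate directly into Python (the function name is an arbitrary choice); here `(m & -m).bit_length() - 1` computes the count of trailing zeros of m.

```python
def optimal_move(m):
    """Disk, source peg and destination peg for move m (counting from 1).

    Pegs are numbered 0, 1, 2; disks start on peg 0 and finish on peg 1
    or 2 according to the parity of the number of disks."""
    src = (m & (m - 1)) % 3
    dst = ((m | (m - 1)) + 1) % 3
    disk = (m & -m).bit_length() - 1   # trailing zeros of m = disk to move
    return disk, src, dst

for m in range(1, 2 ** 3):             # the 7 moves of the 3-disk puzzle
    print("move %d: disk %d, peg %d -> peg %d" % (m, *optimal_move(m)))
```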
Gray-code solution
The binary numeral system of Gray codes gives an alternative way of solving the puzzle. In the Gray system, numbers are expressed in a binary combination of 0s and 1s, but rather than being a standard positional numeral system, the Gray code operates on the premise that each value differs from its predecessor by only one bit changed.
If one counts in Gray code of a bit size equal to the number of disks in a particular Tower of Hanoi, begins at zero and counts up, then the bit changed each move corresponds to the disk to move, where the least-significant bit is the smallest disk, and the most-significant bit is the largest.
Counting moves from 1 and identifying the disks by numbers starting from 0 in order of increasing size, the ordinal of the disk to be moved during move m is the number of times m can be divided by 2.
This technique identifies which disk to move, but not where to move it to. For the smallest disk, there are always two possibilities. For the other disks there is always one possibility, except when all disks are on the same peg, but in that case either it is the smallest disk that must be moved or the objective has already been achieved. Luckily, there is a rule that does say where to move the smallest disk to. Let f be the starting peg, t the destination peg, and r the remaining third peg. If the number of disks is odd, the smallest disk cycles along the pegs in the order f → t → r → f → t → r, etc. If the number of disks is even, this must be reversed: f → r → t → f → r → t, etc.
The position of the bit change in the Gray code solution gives the size of the disk moved at each step: 1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1, ... , a sequence also known as the ruler function, or one more than the power of 2 within the move number. In the Wolfram Language, IntegerExponent[Range[2^8 - 1], 2] + 1 gives moves for the 8-disk puzzle.
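A Python sketch equivalent to that one-liner, deriving the ruler sequence from the single bit that changes between successive Gray codes; the output is the sequence above shifted down by one, since disks are numbered from 0 here.

```python
def gray(m):
    return m ^ (m >> 1)          # standard binary-reflected Gray code

def disk_sequence(n):
    """Disk moved at each step of the n-disk puzzle (0 = smallest disk)."""
    return [(gray(m) ^ gray(m - 1)).bit_length() - 1 for m in range(1, 2 ** n)]

print(disk_sequence(3))   # [0, 1, 0, 2, 0, 1, 0]
```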
Graphical representation
The game can be represented by an undirected graph, the nodes representing distributions of disks and the edges representing moves. For one disk, the graph is a triangle:
The graph for two disks is three triangles connected to form the corners of a larger triangle.
A second letter is added to represent the larger disk. Clearly, it cannot initially be moved.
The topmost small triangle now represents the one-move possibilities with two disks:
The nodes at the vertices of the outermost triangle represent distributions with all disks on the same peg.
For h + 1 disks, take the graph of h disks and replace each small triangle with the graph for two disks.
For three disks the graph is:
call the pegs a, b, and c
list disk positions from left to right in order of increasing size
The sides of the outermost triangle represent the shortest ways of moving a tower from one peg to another one. The edge in the middle of the sides of the largest triangle represents a move of the largest disk. The edge in the middle of the sides of each next smaller triangle represents a move of each next smaller disk. The sides of the smallest triangles represent moves of the smallest disk.
In general, for a puzzle with n disks, there are 3^n nodes in the graph; every node has three edges to other nodes, except the three corner nodes, which have two: it is always possible to move the smallest disk to one of the two other pegs, and it is possible to move one disk between those two pegs except in the situation where all disks are stacked on one peg. The corner nodes represent the three cases where all the disks are stacked on one peg. The diagram for n + 1 disks is obtained by taking three copies of the n-disk diagram—each one representing all the states and moves of the smaller disks for one particular position of the new largest disk—and joining them at the corners with three new edges, representing the only three opportunities to move the largest disk. The resulting figure thus has 3^(n+1) nodes and still has three corners remaining with only two edges.
As more disks are added, the graph representation of the game will resemble a fractal figure, the Sierpiński triangle. It is clear that the great majority of positions in the puzzle will never be reached when using the shortest possible solution; indeed, if the priests of the legend are using the longest possible solution (without re-visiting any position), it will take them 3^64 − 1 moves, or more than 10^23 years.
The longest non-repetitive way for three disks can be visualized by erasing the unused edges:
Incidentally, this longest non-repetitive path can be obtained by forbidding all moves from a to c.
The Hamiltonian cycle for three disks is:
The graphs clearly show that:
From every arbitrary distribution of disks, there is exactly one shortest way to move all disks onto one of the three pegs.
Between every pair of arbitrary distributions of disks there are one or two different shortest paths.
From every arbitrary distribution of disks, there are one or two different longest non-self-crossing paths to move all disks to one of the three pegs.
Between every pair of arbitrary distributions of disks there are one or two different longest non-self-crossing paths.
Let Nh be the number of non-self-crossing paths for moving a tower of h disks from one peg to another one. Then:
N1 = 2
Nh+1 = (Nh)^2 + (Nh)^3
This gives Nh to be 2, 12, 1872, 6563711232, ...
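A quick numerical check of this recurrence in Python (variable names are arbitrary):

```python
n_h = 2                        # N1 = 2
for h in range(2, 5):
    n_h = n_h ** 2 + n_h ** 3  # Nh+1 = (Nh)^2 + (Nh)^3
    print(h, n_h)              # 2 12, 3 1872, 4 6563711232
```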
Variations
Linear Hanoi
If all moves must be between adjacent pegs (i.e. given pegs A, B, C, one cannot move directly between pegs A and C), then moving a stack of n disks from peg A to peg C takes 3^n − 1 moves. The solution uses all 3^n valid positions, always taking the unique move that does not undo the previous move. The position with all disks at peg B is reached halfway, i.e. after (3^n − 1) / 2 moves.
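A recursive Python sketch of this adjacent-moves variant (peg names illustrative); it performs exactly 3^n − 1 moves, since the move count satisfies L(n) = 3L(n − 1) + 2 with L(0) = 0.

```python
def linear_hanoi(n, a="A", b="B", c="C"):
    """Move n disks from peg a to peg c when only moves between adjacent
    pegs (a<->b and b<->c) are allowed."""
    if n == 0:
        return
    linear_hanoi(n - 1, a, b, c)     # smaller disks all the way to c
    print(f"disk {n}: {a} -> {b}")
    linear_hanoi(n - 1, c, b, a)     # bring them back to a
    print(f"disk {n}: {b} -> {c}")
    linear_hanoi(n - 1, a, b, c)     # and over to c again

linear_hanoi(2)   # 8 = 3**2 - 1 moves
```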
Cyclic Hanoi
In Cyclic Hanoi, we are given three pegs (A, B, C), which are arranged as a circle with the clockwise and the counterclockwise directions being defined as A – B – C – A and A – C – B – A, respectively. The moving direction of the disk must be clockwise. It suffices to represent the sequence of disks to be moved. The solution can be found using two mutually recursive procedures:
To move n disks counterclockwise to the neighbouring target peg:
move n − 1 disks counterclockwise to the target peg
move disk #n one step clockwise
move n − 1 disks clockwise to the start peg
move disk #n one step clockwise
move n − 1 disks counterclockwise to the target peg
To move n disks clockwise to the neighbouring target peg:
move n − 1 disks counterclockwise to a spare peg
move disk #n one step clockwise
move n − 1 disks counterclockwise to the target peg
Let C(n) and A(n) represent moving n disks clockwise and counterclockwise; then we can write down both formulas: C(n) = A(n−1) n A(n−1) and A(n) = A(n−1) n C(n−1) n A(n−1), where a lone n denotes a single clockwise step of disk n.
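The two mutually recursive procedures can be sketched in Python as follows; the clockwise order A → B → C → A, the function names, and the print format are illustrative choices.

```python
CW = {"A": "B", "B": "C", "C": "A"}            # clockwise neighbour of each peg
CCW = {v: k for k, v in CW.items()}            # counterclockwise neighbour

def ccw_move(n, src):
    """A(n): move n disks from src one peg counterclockwise (clockwise steps only)."""
    if n == 0:
        return
    tgt = CCW[src]
    ccw_move(n - 1, src)                       # n-1 disks counterclockwise to the target
    print(f"disk {n}: {src} -> {CW[src]}")     # disk n one step clockwise
    cw_move(n - 1, tgt)                        # n-1 disks clockwise back to the start peg
    print(f"disk {n}: {CW[src]} -> {tgt}")     # disk n one more step clockwise
    ccw_move(n - 1, src)                       # n-1 disks counterclockwise to the target

def cw_move(n, src):
    """C(n): move n disks from src one peg clockwise."""
    if n == 0:
        return
    ccw_move(n - 1, src)                       # n-1 disks counterclockwise to the spare peg
    print(f"disk {n}: {src} -> {CW[src]}")     # disk n one step clockwise
    ccw_move(n - 1, CCW[src])                  # n-1 disks counterclockwise to the target

cw_move(2, "A")   # transfer two disks clockwise from A to B in 5 single steps
```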
The solution for the Cyclic Hanoi has some interesting properties:
The move-patterns of transferring a tower of disks from a peg to another peg are symmetric with respect to the center points.
The smallest disk is the first and last disk to move.
Groups of the smallest disk moves alternate with single moves of other disks.
The numbers of disk moves specified by C(n) and A(n) are minimal.
With four pegs and beyond
Although the three-peg version has a simple recursive solution that has long been known, the optimal solution for the Tower of Hanoi problem with four pegs (called Reve's puzzle) was not verified until 2014, by Bousch.
However, in the case of four or more pegs, the Frame–Stewart algorithm has been known since 1941, without proof of optimality.
For the formal derivation of the exact number of minimal moves required to solve the problem by applying the Frame–Stewart algorithm (and other equivalent methods), see the following paper.
For other variants of the four-peg Tower of Hanoi problem, see Paul Stockmeyer's survey paper.
The so-called Towers of Bucharest and Towers of Klagenfurt game configurations yield ternary and pentary Gray codes.
Frame–Stewart algorithm
The Frame–Stewart algorithm is described below:
Let n be the number of disks.
Let r be the number of pegs.
Define T(n, r) to be the minimum number of moves required to transfer n disks using r pegs.
The algorithm can be described recursively:
For some k, 1 ≤ k < n, transfer the top k disks to a single peg other than the start or destination pegs, taking T(k, r) moves.
Without disturbing the peg that now contains the top k disks, transfer the remaining n − k disks to the destination peg, using only the remaining r − 1 pegs, taking T(n − k, r − 1) moves.
Finally, transfer the top k disks to the destination peg, taking T(k, r) moves.
The entire process takes 2T(k, r) + T(n − k, r − 1) moves. Therefore, the count k should be picked for which this quantity is minimum. In the 4-peg case, the optimal k equals n − round(√(2n + 1)) + 1, where round is the nearest integer function. For example, in the UPenn CIS 194 course on Haskell, the first assignment page lists the optimal solution for the 15-disk and 4-peg case as 129 steps, which is obtained for the above value of k.
This algorithm is presumed to be optimal for any number of pegs; its number of moves is 2^Θ(n^(1/(r−2))) (for fixed r).
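A memoized Python sketch of the recursion (function name arbitrary, r ≥ 3 assumed); rather than using the closed form for k, it searches all split points, and it reproduces the 129-move figure quoted above.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def frame_stewart(n, r):
    """Moves used by the Frame-Stewart algorithm for n disks and r >= 3 pegs."""
    if n <= 1:
        return n
    if r == 3:
        return 2 ** n - 1                      # classic three-peg count
    return min(2 * frame_stewart(k, r) + frame_stewart(n - k, r - 1)
               for k in range(1, n))           # best split point k

print(frame_stewart(15, 4))   # 129
```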
General shortest paths and the number 466/885
A curious generalization of the original goal of the puzzle is to start from a given configuration of the disks where all disks are not necessarily on the same peg and to arrive in a minimal number of moves at another given configuration. In general, it can be quite difficult to compute a shortest sequence of moves to solve this problem. A solution was proposed by Andreas Hinz and is based on the observation that in a shortest sequence of moves, the largest disk that needs to be moved (obviously one may ignore all of the largest disks that will occupy the same peg in both the initial and final configurations) will move either exactly once or exactly twice.
The mathematics related to this generalized problem becomes even more interesting when one considers the average number of moves in a shortest sequence of moves between two initial and final disk configurations that are chosen at random. Hinz and Chan Tat-Hung independently discovered an exact formula for the average number of moves in an n-disk Tower.
For large enough n, only the first and second terms of that formula do not converge to zero, so we get the asymptotic expression (466/885) · 2^n − 1/3, as n → ∞. Thus intuitively, we could interpret the fraction 466/885 as representing the ratio of the labor one has to perform when going from a randomly chosen configuration to another randomly chosen configuration, relative to the difficulty of having to cross the "most difficult" path of length 2^n − 1, which involves moving all the disks from one peg to another. An alternative explanation for the appearance of the constant 466/885, as well as a new and somewhat improved algorithm for computing the shortest path, was given by Romik.
Magnetic Hanoi
In Magnetic Tower of Hanoi, each disk has two distinct sides North and South (typically colored "red" and "blue").
Disks must not be placed with like poles together—magnets in each disk prevent this illegal move.
Also, each disk must be flipped as it is moved.
Bicolor Towers of Hanoi
This variation of the famous Tower of Hanoi puzzle was offered to grade 3–6 students at 2ème Championnat de France des Jeux Mathématiques et Logiques held in July 1988.
The rules of the puzzle are essentially the same: disks are transferred between pegs one at a time. At no time may a bigger disk be placed on top of a smaller one. The difference is that now for every size there are two disks: one black and one white. Also, there are now two towers of disks of alternating colors. The goal of the puzzle is to make the towers monochrome (same color). The biggest disks at the bottom of the towers are assumed to swap positions.
Tower of Hanoy
A variation of the puzzle has been adapted as a solitaire game with nine playing cards under the name Tower of Hanoy. It is not known whether the altered spelling of the original name is deliberate or accidental.
Applications
The Tower of Hanoi is frequently used in psychological research on problem-solving. There also exists a variant of this task called Tower of London for neuropsychological diagnosis and treatment of disorders of executive function.
Zhang and Norman used several isomorphic (equivalent) representations of the game to study the impact of the representational effect in task design. They demonstrated an impact on user performance by changing the way that the rules of the game are represented, using variations in the physical design of the game components. This knowledge has influenced the development of the TURF framework for the representation of human–computer interaction.
The Tower of Hanoi is also used as a backup rotation scheme when performing computer data backups where multiple tapes/media are involved.
As mentioned above, the Tower of Hanoi is popular for teaching recursive algorithms to beginning programming students. A pictorial version of this puzzle is programmed into the emacs editor, accessed by typing M-x hanoi. There is also a sample algorithm written in Prolog.
The Tower of Hanoi is also used as a test by neuropsychologists trying to evaluate frontal lobe deficits.
In 2010, researchers published the results of an experiment that found that the ant species Linepithema humile were successfully able to solve the 3-disk version of the Tower of Hanoi problem through non-linear dynamics and pheromone signals.
In 2014, scientists synthesized multilayered palladium nanosheets with a Tower of Hanoi-like structure.
In popular culture
In the science fiction story "Now Inhale", by Eric Frank Russell, a human is held prisoner on a planet where the local custom is to make the prisoner play a game until it is won or lost before his execution. The protagonist knows that a rescue ship might take a year or more to arrive, so he chooses to play Towers of Hanoi with 64 disks. This story makes reference to the legend about the Buddhist monks playing the game until the end of the world.
In the 1966 Doctor Who story The Celestial Toymaker, the eponymous villain forces the Doctor to play a ten-piece, 1,023-move Tower of Hanoi game entitled The Trilogic Game with the pieces forming a pyramid shape when stacked.
In 2007, the concept of the Towers Of Hanoi problem was used in Professor Layton and the Diabolical Box in puzzles 6, 83, and 84, but the disks had been changed to pancakes. The puzzle was based around a dilemma where the chef of a restaurant had to move a pile of pancakes from one plate to the other with the basic principles of the original puzzle (i.e. three plates that the pancakes could be moved onto, not being able to put a larger pancake onto a smaller one, etc.)
In the 2011 film Rise of the Planet of the Apes, this puzzle, called in the film the "Lucas Tower", is used as a test to study the intelligence of apes.
The puzzle is featured regularly in adventure and puzzle games. Since it is easy to implement, and easily recognised, it is well suited to use as a puzzle in a larger graphical game (e.g. Star Wars: Knights of the Old Republic and Mass Effect). Some implementations use straight disks, but others disguise the puzzle in some other form. There is an arcade version by Sega.
A 15-disk version of the puzzle appears in the game Sunless Sea as a lock to a tomb. The player has the option to click through each move of the puzzle in order to solve it, but the game notes that it will take 32,767 moves to complete. If an especially dedicated player does click through to the end of the puzzle, it is revealed that completing the puzzle does not unlock the door.
This was first used as a challenge in Survivor Thailand in 2002 but rather than rings, the pieces were made to resemble a temple. Sook Jai threw the challenge to get rid of Jed even though Shii-Ann knew full well how to complete the puzzle.
The problem is featured as part of a reward challenge in a 2011 episode of the American version of the Survivor TV series. Both players (Ozzy Lusth and Benjamin "Coach" Wade) struggled to understand how to solve the puzzle and are aided by their fellow tribe members.
In Genshin Impact, this puzzle is shown in Faruzan's hangout quest, "Early Learning Mechanism", where she mentions seeing it as a mechanism and uses it to make a toy prototype for children. She calls it pagoda stacks.
See also
ABACABA pattern
Backup rotation scheme, a TOH application
Baguenaudier
Recursion (computer science)
"The Nine Billion Names of God", 1953 Arthur C. Clark short story with a similar premise to the game's framing story
Notes
External links
1883 introductions
1889 documents
19th-century inventions
French inventions
Mechanical puzzles
Mathematical puzzles
Articles with example C code
Articles with example Python (programming language) code
Divide-and-conquer algorithms | Tower of Hanoi | [
"Mathematics"
] | 6,847 | [
"Recreational mathematics",
"Mechanical puzzles"
] |
57,027 | https://en.wikipedia.org/wiki/Fitts%27s%20law | Fitts's law (often cited as Fitts' law) is a predictive model of human movement primarily used in human–computer interaction and ergonomics. The law predicts that the time required to rapidly move to a target area is a function of the ratio between the distance to the target and the width of the target. Fitts's law is used to model the act of pointing, either by physically touching an object with a hand or finger, or virtually, by pointing to an object on a computer monitor using a pointing device. It was initially developed by Paul Fitts.
Fitts's law has been shown to apply under a variety of conditions; with many different limbs (hands, feet, the lower lip, head-mounted sights), manipulanda (input devices), physical environments (including underwater), and user populations (young, old, special educational needs, and drugged participants).
Original model formulation
The original 1954 paper by Paul Morris Fitts proposed a metric to quantify the difficulty of a target selection task.
The metric was based on an information analogy, where the distance to the center of the target (D) is like a signal and the tolerance or width of the target (W) is like noise.
The metric is Fitts's index of difficulty (ID, in bits):

ID = log2(2D/W)
Fitts also proposed an index of performance (IP, in bits per second) as a measure of human performance. The metric combines a task's index of difficulty (ID) with the movement time (MT, in seconds) in selecting the target. In Fitts's words,
"The average rate of information generated by a series of movements is the average information per movement divided by the time per movement." Thus,
Today, IP is more commonly called throughput (TP). It is also common to include an adjustment for accuracy in the calculation.
Researchers after Fitts began the practice of building linear regression equations and examining the correlation (r) for goodness of fit. The equation expresses the relationship between
MT and the D and W task parameters:

MT = a + b · ID = a + b · log2(2D/W)
where:
MT is the average time to complete the movement.
a and b are constants that depend on the choice of input device and are usually determined empirically by regression analysis. The parameter a defines the intercept on the y-axis and is often interpreted as a delay, while b is the slope, describing how strongly movement time grows with task difficulty. Together the two parameters express the linear dependency in Fitts's law.
ID is the index of difficulty.
D is the distance from the starting point to the center of the target.
W is the width of the target measured along the axis of motion. W can also be thought of as the allowed error tolerance in the final position, since the final point of the motion must fall within ±W/2 of the target's center.
Since shorter movement times are desirable for a given task, the value of the b parameter can be used as a metric when comparing computer pointing devices against one another. The first human–computer interface application of Fitts's law was by Card, English, and Burr, who used the index of performance (IP), interpreted as 1/b, to compare performance of different input devices, with the mouse coming out on top compared to the joystick or directional movement keys. This early work, according to Stuart Card's biography, "was a major factor leading to the mouse's commercial introduction by Xerox".
Many experiments testing Fitts's law apply the model to a dataset in which either distance or width, but not both, are varied. The model's predictive power deteriorates when both are varied over a significant range. Notice that because the ID term depends only on the ratio of distance to width, the model implies that a target distance and width combination can be re-scaled arbitrarily without affecting movement time, which is impossible. Despite its flaws, this form of the model does possess remarkable predictive power across a range of computer interface modalities and motor tasks, and has provided many insights into user interface design principles.
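A minimal Python sketch of the original model, using illustrative (hypothetical) regression constants a and b, also demonstrates the re-scaling property just described:

```python
import math

def index_of_difficulty(distance, width):
    """Fitts's original index of difficulty in bits: ID = log2(2D/W)."""
    return math.log2(2 * distance / width)

def movement_time(distance, width, a=0.05, b=0.1):
    """Predicted movement time MT = a + b * ID.

    The constants a (seconds) and b (seconds per bit) are hypothetical;
    in practice they are fitted to observed data by linear regression.
    """
    return a + b * index_of_difficulty(distance, width)

# Doubling both distance and width leaves ID, and hence the predicted
# movement time, unchanged -- which cannot hold for real movements:
print(movement_time(160, 20))  # ID = 4 bits
print(movement_time(320, 40))  # ID = 4 bits, same predicted MT
```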
Movement
A movement during a single Fitts's law task can be split into two phases:
Initial movement: a fast but imprecise movement towards the target.
Final movement: a slower but more precise movement in order to acquire the target.
The first phase is defined by the distance to the target. In this phase the distance can be closed quickly while still being imprecise. The second movement tries to perform a slow and controlled precise movement to actually hit the target.
The task duration scales linearly with respect to difficulty. But as different distance–width combinations can produce the same difficulty, it follows that distance has a greater impact on the overall task completion time than target size.
It is often claimed that Fitts's law can be applied to eye tracking, but this is at least controversial, as Drewes showed. During fast saccadic eye movements the user is effectively blind, whereas during a Fitts's law task the user consciously acquires the target and can actually see it, making the two types of interaction not comparable.
Bits per second: model innovations driven by information theory
The formulation of Fitts's index of difficulty most frequently used in the human–computer interaction community is called the Shannon formulation:

ID = log2(D/W + 1)
This form was proposed by Scott MacKenzie, professor at York University, and named for its resemblance to the Shannon–Hartley theorem. It describes the transmission of information using bandwidth, signal strength and noise. In Fitts's law, the distance represents signal strength, while target width is noise.
Using this form of the model, the difficulty of a pointing task was equated to a quantity of information transmitted (in units of bits) by performing the task. This was justified by the assertion that pointing reduces to an information processing task. Although no formal mathematical connection was established between Fitts's law and the Shannon–Hartley theorem that inspired it, the Shannon form of the law has been used extensively, likely due to the appeal of quantifying motor actions using information theory. In 2002, ISO 9241 was published, providing standards for human–computer interface testing, including the use of the Shannon form of Fitts's law. It has been shown that the information transmitted via serial keystrokes on a keyboard and the information implied by the ID for such a task are not consistent: the Shannon entropy results in a different information value than Fitts's law. The authors note, though, that the error is negligible and only has to be accounted for in comparisons of devices with known entropy or measurements of human information processing capabilities.
Adjustment for accuracy: use of the effective target width
An important improvement to Fitts's law was proposed by Crossman in 1956 (see Welford, 1968, pp. 147–148) and used by Fitts in his 1964 paper with Peterson. With the adjustment, target width (W) is replaced by an effective target width (We).
We is computed from the standard deviation in the selection coordinates gathered over a sequence of trials for a particular D-W condition. If the selections are logged as x coordinates along the axis of approach to the target, then

We = 4.133 × SDx

This yields an effective index of difficulty

IDe = log2(D/We + 1)

and hence

IP = IDe / MT
If the selection coordinates are normally distributed, We spans 96% of the distribution. If the observed error rate was 4% in the sequence of trials, then We = W. If the error rate was greater than 4%, We > W, and if the error rate was less than 4%, We < W. By using We, a Fitts' law model more closely reflects what users actually did, rather than what they were asked to do.
The main advantage in computing IP as above is that spatial variability, or accuracy, is included in the measurement. With the adjustment for accuracy, Fitts's law more truly encompasses the speed-accuracy tradeoff. The equations above appear in ISO 9241-9 as the recommended method of computing throughput.
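A Python sketch of the accuracy adjustment, assuming hypothetical trial data; the 4.133 multiplier is the standard constant corresponding to 96% coverage of a normal distribution:

```python
import math
import statistics

def effective_width(selection_xs):
    """We = 4.133 * standard deviation of the selection coordinates,
    spanning about 96% of a normal distribution of endpoints."""
    return 4.133 * statistics.stdev(selection_xs)

def throughput(distance, selection_xs, movement_times):
    """Accuracy-adjusted throughput: IDe = log2(D/We + 1), IP = IDe / MT."""
    we = effective_width(selection_xs)
    ide = math.log2(distance / we + 1)
    return ide / statistics.mean(movement_times)

# Hypothetical data for one D-W condition with D = 256 units:
xs = [250.0, 258.0, 261.0, 253.0, 255.0, 259.0]   # selection x coordinates
mts = [0.61, 0.58, 0.64, 0.60, 0.62, 0.59]        # movement times, seconds
print(throughput(256, xs, mts))                   # bits per second
```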
Welford's model: innovations driven by predictive power
Not long after the original model was proposed, a 2-factor variation was proposed under the intuition that target distance and width have separate effects on movement time. Welford's model, proposed in 1968, separated the influence of target distance and width into separate terms, and provided improved predictive power:

MT = a + b1 · log2(D) − b2 · log2(W)
This model has an additional parameter, so its predictive accuracy cannot be directly compared with 1-factor forms of Fitts's law. However, a variation on Welford's model inspired by the Shannon formulation has been proposed:

MT = a + b · log2((D + W) / W^k)
The additional parameter k allows the introduction of angles into the model, so that the user's position can be accounted for. The influence of the angle can be weighted using the exponent. This addition was introduced by Kopper et al. in 2010.
The formula reduces to the Shannon form when k = 1. Therefore, this model can be directly compared against the Shannon form of Fitts's law using the F-test of nested models. This comparison reveals that not only does the Shannon form of Welford's model better predict movement times, but it is also more robust when control-display gain (the ratio between e.g. hand movement and cursor movement) is varied. Consequently, although the Shannon model is slightly more complex and less intuitive, it is empirically the best model to use for virtual pointing tasks.
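A small Python sketch of the Shannon–Welford form, confirming the reduction to the Shannon form at k = 1 (the constants a and b are hypothetical):

```python
import math

def mt_shannon_welford(distance, width, a, b, k):
    """MT = a + b * log2((D + W) / W**k); with k = 1 this equals the
    Shannon form, since (D + W) / W = D/W + 1."""
    return a + b * math.log2((distance + width) / width ** k)

# With k = 1 the prediction matches the Shannon formulation exactly:
print(mt_shannon_welford(256, 32, a=0.05, b=0.1, k=1.0))
print(0.05 + 0.1 * math.log2(256 / 32 + 1))  # identical value
```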
Extending the model from 1D to 2D and other nuances
Extensions to two or more dimensions
In its original form, Fitts's law is meant to apply only to one-dimensional tasks. However, the original experiments required subjects to move a stylus (in three dimensions) between two metal plates on a table, termed the reciprocal tapping task. The target width perpendicular to the direction of movement was very wide to avoid it having a significant influence on performance. A major application for Fitts's law is 2D virtual pointing tasks on computer screens, in which targets have bounded sizes in both dimensions.
Fitts's law has been extended to two-dimensional tasks in two different ways. For navigating e.g. hierarchical pull-down menus, the user must generate a trajectory with the pointing device that is constrained by the menu geometry; for this application the Accot-Zhai steering law was derived.
For simply pointing to targets in a two-dimensional space, the model generally holds as-is but requires adjustments to capture target geometry and quantify targeting errors in a logically consistent way.
Multiple methods have been used to determine the target size:
Status quo: W equals the horizontal width of the target
Sum model: W equals height + width
Area model: W equals height × width
Smaller-of model: W equals the smaller of height and width
W-model: W is the effective width in the direction of the movement
While the W-model is sometimes considered the state-of-the-art measurement, the truly correct representation for non-circular targets is substantially more complex, as it requires computing the angle-specific convolution between the trajectory of the pointing device and the target.
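The candidate width substitutions can be summarized in a short Python sketch (the model labels are informal, taken from the list above; the W-model is omitted because it additionally requires the movement direction):

```python
def two_dimensional_width(width, height, model):
    """Candidate substitutions for W when applying Fitts's law
    to rectangular 2D targets."""
    if model == "status quo":
        return width                # horizontal extent only
    if model == "sum":
        return width + height
    if model == "area":
        return width * height
    if model == "smaller of":
        return min(width, height)
    # The W-model also needs the approach angle, since it uses the
    # target's extent along the direction of movement.
    raise ValueError("unknown or underspecified model: " + model)

print(two_dimensional_width(40, 20, "smaller of"))  # 20
```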
Characterizing performance
Since the a and b parameters should capture movement times over a potentially wide range of task geometries, they can serve as a performance metric for a given interface. In doing so, it is necessary to separate variation between users from variation between interfaces.
The a parameter is typically positive and close to zero, and sometimes ignored in characterizing average performance, as in Fitts' original experiment. Multiple methods exist for identifying parameters from experimental data, and the choice of method is the subject of heated debate, since method variation can result in parameter differences that overwhelm underlying performance differences.
An additional issue in characterizing performance is incorporating success rate: an aggressive user can achieve shorter movement times at the cost of experimental trials in which the target is missed. If the latter are not incorporated into the model, then average movement times can be artificially decreased.
Temporal targets
Fitts's law deals only with targets defined in space. However, a target can be defined purely on the time axis, which is called a temporal target. A blinking target or a target moving toward a selection area are examples of temporal targets. Similar to space, the distance to the target (i.e., temporal distance Dt) and the width of the target (i.e., temporal width Wt) can be defined for temporal targets as well. The temporal distance is the amount of time a person must wait for a target to appear. The temporal width is a short duration from the moment the target appears until it disappears. For example, for a blinking target, Dt can be thought of as the period of blinking and Wt as the duration of the blinking. As with targets in space, the larger the Dt or the smaller the Wt, the more difficult it becomes to select the target.
The task of selecting the temporal target is called temporal pointing. The model for temporal pointing was first presented to the human–computer interaction field in 2016. The model predicts the error rate, the human performance in temporal pointing, as a function of temporal index of difficulty (IDt):

IDt = log2(Dt / Wt)
Implications for UI design
Multiple design guidelines for GUIs can be derived from the implications of Fitts's law. In its basic form, Fitts's law says that targets a user has to hit should be as big as possible. This is derived from the W parameter. More specifically, the effective size of the button should be as big as possible, meaning that its form has to be optimized for the direction of the user's movement onto the target.
Layouts should also cluster functions that are commonly used with each other. Optimizing for the D parameter in this way allows for smaller travel times.
Placing layout elements on the four edges of the screen allows for infinitely large targets in one dimension and therefore presents an ideal scenario. Since the pointer always stops at the edge, the user can move the mouse with the greatest possible speed and still hit the target; the target area is effectively infinitely long along the movement axis. This guideline is therefore called the "rule of the infinite edges". The use of this rule can be seen, for example, in macOS, which always places the menu bar on the top left edge of the screen instead of in the current program's window frame.
This effect can be exaggerated at the four corners of a screen. At these points two edges collide and form a theoretically infinitely big button. Microsoft Windows (prior to Windows 11) places its "Start" button in the lower left corner and Microsoft Office 2007 uses the upper left corner for its "Office" menu. These four spots are sometimes called "magic corners".
macOS places the close button on the upper left side of the program window, and the menu bar fills out the magic corner with another button.
A UI that allows for pop-up menus rather than fixed drop-down menus reduces travel times for the D parameter. The user can continue interaction right from the current mouse position and doesn't have to move to a different preset area. Many operating systems use this when displaying right-click context menus. As the menu starts right on the pixel which the user clicked, this pixel is referred to as the "magic" or "prime pixel".
James Boritz et al. (1991) compared radial menu designs. In a radial menu all items have the same distance from the prime pixel. The research suggests that in practical implementations the direction in which a user has to move the mouse must also be accounted for: for right-handed users, selecting the left-most menu item was significantly more difficult than the right-most one. No differences were found for transitions from upper to lower functions and vice versa.
See also
Accot–Zhai steering law
Hick's law
Point-and-click
Crossing-based interface
References
Bibliography
External links
An Interactive Visualisation of Fitts's Law with JavaScript and D3 by Simon Wallner
Fitts' Law at CS Dept. NSF-Supported Education Infrastructure Project
Fitts’ Law: Modeling Movement Time in HCI
Bibliography of Fitts’ Law Research compiled by I. Scott MacKenzie
Fitts' Law Software – Free Download by I. Scott MacKenzie
A Quiz Designed to Give You Fitts by Bruce Tognazzini
Human–computer interaction
Motor control | Fitts's law | [
"Engineering",
"Biology"
] | 3,297 | [
"Human–computer interaction",
"Behavior",
"Human–machine interaction",
"Motor control"
] |
57,068 | https://en.wikipedia.org/wiki/Copra | Copra (from ; ; ; ) is the dried, white flesh of the coconut from which coconut oil is extracted. Traditionally, the coconuts are sun-dried, especially for export, before the oil, also known as copra oil, is pressed out. The oil extracted from copra is rich in lauric acid, making it an important commodity in the preparation of lauryl alcohol, soaps, fatty acids, cosmetics, etc. and thus a lucrative product for many coconut-producing countries. The palatable oil cake, known as copra cake, obtained as a residue in the production of copra oil is used in animal feeds. The ground cake is known as coconut or copra meal.
Production
Copra has traditionally been grated and ground, then boiled in water to extract coconut oil. It was used by Pacific island cultures and became a valuable commercial product for merchants in the South Seas and South Asia in the 1860s. Nowadays, coconut oil, which makes up about 70% of copra by weight, is extracted by crushing; the remaining residue (about 30%) is known as copra cake or copra meal.
The coconut cake which remains after the oil is extracted is 18–25% protein, but contains so much dietary fiber it cannot be eaten in large quantities by humans. Instead, it is normally fed to ruminants.
The production of copra – removing the shell, breaking it up, drying – is usually done where the coconut palms grow. Copra can be made by smoke drying, sun drying, or kiln drying. Hybrid solar drying systems can also be used for a continuous drying process: solar energy is utilized during daylight, and energy from burning biomass is used when sunlight is insufficient or during the night. Sun drying requires little more than racks and sufficient sunlight. Halved nuts are drained of water and left with the meat facing the sky; they can be washed to remove mold-creating contaminants. After two days the meat can be removed from the shell with ease, and the drying process is complete after three to five more days (up to seven in total). Sun drying is often combined with kiln drying: eight hours of exposure to sunlight means the time spent in a kiln can be reduced by a day, and the hot air in the kiln then removes the remaining moisture more easily. The process can also be done in reverse order, partially drying the copra in the kiln and finishing the process in the sun. Starting with sun drying requires careful inspection to avoid contamination with mold, while starting with kiln drying can harden the meat and prevent it from drying out completely in the sun.
In India, small but whole coconuts can be dried over the course of eight months to a year, and the meat inside removed and sold as a whole ball. Meat prepared in this fashion is sweet, soft, oily and is cream-coloured instead of being white. Coconut meat can be dried using direct heat and smoke from a fire, using simple racks to suspend the coconut over the fire. The smoke residue can help preserve the half-dried meat but the process overall suffers from unpredictable results and the risk of fires.
While there are some large plantations with integrated operations, copra remains primarily a smallholder crop. In former years copra was collected by traders going from island to island and port to port in the Pacific Ocean but South Pacific production is now much diminished, with the exception of Papua New Guinea, the Solomon Islands and Vanuatu.
Economics
Copra production begins on coconut plantations. Coconut trees are generally spaced apart, allowing a density of 100–160 coconut trees per hectare. A standard tree bears around 50–80 nuts a year, and average earnings in Vanuatu (1999) were US$0.20 per kg (one kg equals 8 nuts)—so a farmer could earn approximately US$120 to US$320 yearly for each planted hectare. Copra has since more than doubled in price, and was quoted at US$540 per ton in the Philippines on a CIF Rotterdam basis (US$0.54 per kg) by the Financial Times on 9 November 2012.
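The arithmetic behind the per-hectare estimate can be checked directly; a short Python sketch using the 1999 Vanuatu figures quoted above:

```python
def annual_income_per_hectare(trees, nuts_per_tree, price_per_kg,
                              nuts_per_kg=8):
    """Yearly copra earnings for one planted hectare."""
    kilograms = trees * nuts_per_tree / nuts_per_kg
    return kilograms * price_per_kg

print(annual_income_per_hectare(100, 50, 0.20))  # 125.0 -> roughly US$120
print(annual_income_per_hectare(160, 80, 0.20))  # 320.0 -> US$320
```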
In 2017, the value of global exports of copra was US$145–146 million. The largest exporter was Papua New Guinea with 35% of the global total, followed by Indonesia (20%), the Solomon Islands (13%) and Vanuatu (12%). The largest importer of copra is the Philippines, which imports US$93.4 million, or 64%, of the global total. A very large number of small farmers and tree owners produce copra, which is a vital part of their income.
Aflatoxin susceptibility
Copra is highly susceptible to the growth of molds and their production of aflatoxins if not dried properly. Aflatoxins can be highly toxic, and are among the most potent known natural carcinogens, particularly affecting the liver. Aflatoxins in copra cake, fed to animals, can be passed on to milk or meat from livestock, leading to human illnesses.
Animal feed
Copra meal is used as fodder for horses and cattle. Its high oil and protein levels are fattening for stock. The protein in copra meal has been heat treated and provides a source of high-quality protein for cattle, sheep and deer, because it does not break down in the rumen.
Coconut oil can be extracted using either mechanical expellers or solvents (hexane). Mechanically expelled copra meal is of higher feeding value, because it typically contains 8–12% oil, whereas the solvent-extracted copra meal contains only 2–4% oil. Premium quality copra meal can also contain 20–22% crude protein, and less than 20 ppb aflatoxin.
High-quality copra meal contains <12% non-structural carbohydrate (NSC), which makes it well suited for feeding to horses that are prone to ulcers, insulin resistance, colic, tying up, and acidosis.
Shipment
Copra has been classed with dangerous goods due to its spontaneously combustive nature. It is identified as a Division 4.2 substance.
References
External links
Is Copra potentially a horse killer? – Horsetalk.co.nz
Copra linked to cancer-causing agent
Not all copra created equal
Making Coconut Oil – The Old Chamorro Way
AFLATOXIN CONTAMINATION IN FOODS AND FEEDS IN THE PHILIPPINES, in Manual on the application of the HACCP System in Mycotoxin prevention and ... , Food and Agriculture Organization of the United Nations.
Food ingredients
Coconuts
Energy crops
Commodity markets in Kerala | Copra | [
"Technology"
] | 1,376 | [
"Food ingredients",
"Components"
] |
57,074 | https://en.wikipedia.org/wiki/Tetryl | 2,4,6-Trinitrophenylmethylnitramine or tetryl (C7H5N5O8) is an explosive compound used to make detonators and explosive booster charges.
Tetryl is a nitramine booster explosive, though its use has been largely superseded by RDX. Tetryl is a sensitive secondary high explosive used as a booster, a small charge placed next to the detonator in order to propagate detonation into the main explosive charge.
Chemical properties
Tetryl is a yellow crystalline solid powder material, practically insoluble in water but soluble in acetone, benzene and other solvents. When tetryl is heated, it first melts, then decomposes and explodes. It burns readily and is more easily detonated than ammonium picrate or TNT, being about as sensitive as picric acid. It is detonated by friction, shock, or spark. It remains stable at all temperatures which may be encountered in storage. It is generally used in the form of pressed pellets, and has been approved as the standard bursting charge for small-caliber projectiles, since it gives much better fragmentation than TNT. It has an explosive velocity of 23,600–23,900 feet per second (7200–7300 m/s). Tetryl is the basis for the service tetryl blasting caps necessary for positive detonation of TNT. A mixture of mercury fulminate and potassium chlorate is included in the cap to ensure detonation of tetryl.
Environmental effect
The most toxic ordnance compounds, tetryl and 1,3,5-TNB, are also the most degradable. Therefore, these chemicals are expected to be short-lived in nature, and environmental impacts would not be expected in areas that are not currently subject to chronic inputs of these chemicals. Tetryl decomposes rapidly in methanol/water solutions, as well as with heat. All aqueous samples expected to contain tetryl should be diluted with acetonitrile prior to filtration and acidified to pH < 3. All samples expected to contain tetryl should not be exposed to temperatures above room temperature. In addition, degradation products of tetryl appear as a shoulder on the 2,4,6-TNT peak. Peak heights rather than peak areas should be used when tetryl is present in concentrations that are significant relative to the concentration of 2,4,6-TNT.
History and synthesis
Tetryl was used mainly during World Wars I and II and later conflicts. Tetryl is usually used on its own, though can sometimes be found in compositions such as tetrytol. Tetryl is no longer manufactured or used in the United States, but can still be found in legacy munitions such as the M14 anti-personnel landmine.
Dutch chemist Karel Hendrik Mertens originally synthesized the compound as part of his doctoral dissertation, published in 1877, by slowly mixing dimethylaniline with concentrated nitric acid in the presence of sulfuric acid; this remains a viable laboratory route. However, in the 1930s a more economical route was commercialized, in which methylamine produced by the Smoleński method (developed after World War I) reacts with dinitrochlorobenzene to make dinitromethylaniline, which is then easily nitrated without byproducts.
Health concerns
Although tetryl is among the most toxic explosive compounds, it is very short-lived. Because of this, and because the health impacts of the compound are largely unstudied, little is known about any health problems it may cause.
Epidemiological data shows that tetryl has most effect on the skin, acting as a strong irritant. Symptoms of skin sensitization such as dermatitis, itch, erythema, etc. may occur. Tetryl can also affect mucous membranes, the upper respiratory tract, and possibly the liver.
See also
Hexanitrobenzene
Trinitrotoluene
RE factor
References
Cooper, Paul W., Explosives Engineering, New York: Wiley-VCH, 1996.
External links
Tetryl, Agency for Toxic Substances and Disease Registry
Occupational Safety and Health Guideline for Tetryl , Occupational Safety & Health Administration
Explosive chemicals
Nitroamines | Tetryl | [
"Chemistry"
] | 902 | [
"Explosive chemicals"
] |
57,076 | https://en.wikipedia.org/wiki/HMX | HMX, also called octogen, is a powerful and relatively insensitive nitroamine high explosive chemically related to RDX. The compound's name is the subject of much speculation, having been variously listed as High Melting Explosive, High-velocity Military Explosive, or High-Molecular-weight RDX.
The molecular structure of HMX consists of an eight-membered ring of alternating carbon and nitrogen atoms, with a nitro group attached to each nitrogen atom. Because of its high mass-specific enthalpy of formation, it is one of the most potent chemical explosives manufactured, although a number of newer ones, including HNIW and ONC, are more powerful.
Synthesis
HMX is more complicated to manufacture than most explosives, and this confines it to specialist applications. It and RDX are both produced by the Bachmann process—nitration of hexamine using a mixture of ammonium nitrate and nitric acid in a mixture of acetic acid and acetic anhydride as solvent—with the major product determined by the specific reaction conditions.
Applications
Also known as cyclotetramethylene-tetranitramine, tetrahexamine tetranitramine, or , HMX was first made in 1930. In 1949 it was discovered that HMX can be prepared by nitrolysis of RDX. Nitrolysis of RDX is performed by dissolving RDX in a 55% HNO3 solution, followed by placing the solution on a steambath for about six hours. HMX is used almost exclusively in military applications, including as the detonator in nuclear weapons, in the form of polymer-bonded explosive, and as a solid-rocket propellant.
HMX is used in melt-castable explosives when mixed with TNT, which as a class are referred to as "octols". Additionally, polymer-bonded explosive compositions containing HMX are used in the manufacture of missile warheads and armor-piercing shaped charges.
HMX is also used in the process of perforating the steel casing in oil and gas wells. The HMX is built into a shaped charge that is detonated within the wellbore to punch a hole through the steel casing and surrounding cement out into the hydrocarbon-bearing formations. The pathway that is created allows formation fluids to flow into the wellbore and onward to the surface.
The Hayabusa2 space probe used HMX to excavate a hole in an asteroid in order to access material that had not been exposed to the solar wind.
Ongoing research aims to reduce its sensitivity and improve some manufacturing properties.
Health and environmental fate
Analytical methods
HMX enters the environment through air, water, and soil because it is widely used in military and civil applications. At present, reverse-phase HPLC and more sensitive LC-MS methods have been developed to accurately quantify the concentration of HMX in a variety of matrices in environmental assessments.
Toxicity
At present, the information needed to determine whether HMX causes cancer is insufficient. Because of this lack of information, the EPA has determined that HMX is not classifiable as to its human carcinogenicity.
The available data on the effects on human health of exposure to HMX are limited. HMX causes CNS effects similar to those of RDX, but at considerably higher doses. In one study, volunteers submitted to patch testing, which produced skin irritation. Another study of a cohort of 93 workers at an ammunition plant found no hematological, hepatic, autoimmune, or renal diseases. However, the study did not quantify the levels of exposure to HMX.
HMX exposure has been investigated in several studies on animals. Overall, the toxicity appears to be quite low. HMX is poorly absorbed by ingestion. When applied to the dermis, it induces mild skin irritation but not delayed contact sensitization. Various acute and subchronic neurobehavioral effects have been reported in rabbits and rodents, including ataxia, sedation, hyperkinesia, and convulsions. The chronic effects of HMX that have been documented through animal studies include decreased hemoglobin, increased serum alkaline phosphatase, and decreased albumin. Pathological changes were also observed in the animals' livers and kidneys.
Gas exchange rate was used as an indicator of chemical stress in Northern bobwhite quail (Colinus virginianus) eggs, and no evidence of alterations in metabolic rates associated with HMX exposure was observed. No data are available concerning the possible reproductive, developmental, or carcinogenic effects of HMX. HMX is considered less toxic than TNT or RDX. Remediating HMX-contaminated water supplies has proven to be successful.
Biodegradation
Both wild and transgenic plants can phytoremediate explosives from soil and water.
See also
2,4,6-Tris(trinitromethyl)-1,3,5-triazine
4,4’-Dinitro-3,3’-diazenofuroxan (DDF)
Heptanitrocubane (HNC)
HHTDD
Octanitrocubane (ONC)
RE factor
Notes
References
Further reading
Explosive chemicals
Nitroamines
Nitrogen heterocycles
Eight-membered rings | HMX | [
"Chemistry"
] | 1,094 | [
"Explosive chemicals"
] |
57,115 | https://en.wikipedia.org/wiki/Pleochroic%20halo | A pleochroic halo, or radiohalo, is a microscopic, spherical shell of discolouration (pleochroism) within minerals such as biotite that occurs in granite and other igneous rocks. The halo is a zone of radiation damage caused by the inclusion of minute radioactive crystals within the host crystal structure. The inclusions are typically zircon, apatite, or titanite which can accommodate uranium or thorium within their crystal structures. One explanation is that the discolouration is caused by alpha particles emitted by the nuclei; the radius of the concentric shells are proportional to the particles' energy.
Production
Uranium-238 decays through a sequence involving isotopes of uranium, thorium, radium, radon, polonium, and lead; the alpha-emitting isotopes in this sequence are the ones that produce the rings. (Because of their continuous energy distribution and greater range, beta particles cannot form distinct rings.)
The final characteristics of a pleochroic halo depend upon the initial isotope, and the size of each ring of a halo depends upon the alpha decay energy. A pleochroic halo formed from U-238 has theoretically eight concentric rings, with five actually distinguishable under a light microscope, while a halo formed from polonium has only one, two, or three rings depending on which isotope the starting material is. In U-238 haloes, the U-234 and Ra-226 rings coincide with the Th-230 ring to form one ring; the Rn-222 and Po-210 rings also coincide to form one ring. These rings are indistinguishable from one another under a petrographic microscope.
References
Further reading
External links
Geology of Gentry's "Tiny Mystery", J. Richard Wakefield, Journal of Geological Education, May 1988.
Polonium Halo FAQs, TalkOrigins Archive
Radioactive gemstones
Radiometric dating
Lead
Polonium
Radon
Radium
Thorium | Pleochroic halo | [
"Chemistry"
] | 391 | [
"Radiometric dating",
"Radioactivity"
] |
57,122 | https://en.wikipedia.org/wiki/Multiplication%20table | In mathematics, a multiplication table (sometimes, less formally, a times table) is a mathematical table used to define a multiplication operation for an algebraic system.
The decimal multiplication table was traditionally taught as an essential part of elementary arithmetic around the world, as it lays the foundation for arithmetic operations with base-ten numbers. Many educators believe it is necessary to memorize the table up to 9 × 9.
History
Pre-modern times
The oldest known multiplication tables were used by the Babylonians about 4000 years ago. However, they used a base of 60. The oldest known tables using a base of 10 are the Chinese decimal multiplication table on bamboo strips dating to about 305 BC, during China's Warring States period.
The multiplication table is sometimes attributed to the ancient Greek mathematician Pythagoras (570–495 BC). It is also called the Table of Pythagoras in many languages (for example French, Italian and Russian), sometimes in English. The Greco-Roman mathematician Nicomachus (60–120 AD), a follower of Neopythagoreanism, included a multiplication table in his Introduction to Arithmetic, whereas the oldest surviving Greek multiplication table is on a wax tablet dated to the 1st century AD and currently housed in the British Museum.
In 493 AD, Victorius of Aquitaine wrote a 98-column multiplication table which gave (in Roman numerals) the product of every number from 2 to 50 times the row entries, which were "a list of numbers starting with one thousand, descending by hundreds to one hundred, then descending by tens to ten, then by ones to one, and then the fractions down to 1/144."
Modern times
In his 1820 book The Philosophy of Arithmetic, mathematician John Leslie published a multiplication table up to 1000 × 1000, which allows numbers to be multiplied in triplets of digits at a time. Leslie also recommended that young pupils memorize the multiplication table up to 50 × 50.
The illustration below shows a table up to 12 × 12, which is a size commonly used nowadays in English-world schools.
Because multiplication of integers is commutative, many schools use a smaller table as below. Some schools even remove the first column since 1 is the multiplicative identity.
The traditional rote learning of multiplication was based on memorization of columns in the table, arranged as follows.
This form of writing the multiplication table in columns with complete number sentences is still used in some countries, such as Bosnia and Herzegovina, instead of the modern grids above.
Patterns in the tables
There is a pattern in the multiplication table that can help people to memorize the table more easily. It uses the figures below:
Figure 1 is used for multiples of 1, 3, 7, and 9. Figure 2 is used for the multiples of 2, 4, 6, and 8. These patterns can be used to memorize the multiples of any number from 0 to 10, except 5. Starting on the number being multiplied, multiplying by 0 stays on 0 (0 lies outside the figures, so the arrows have no effect on it; otherwise 0 serves as a link that creates a perpetual cycle). The pattern also works with multiples of 10: start at 1 and simply append a 0 to get 10, then apply each number in the pattern to the tens digit just as would normally be done with the ones digit.
For example, to recall all the multiples of 7:
Look at the 7 in the first picture and follow the arrow.
The next number in the direction of the arrow is 4. So think of the next number after 7 that ends with 4, which is 14.
The next number in the direction of the arrow is 1. So think of the next number after 14 that ends with 1, which is 21.
After coming to the top of this column, start with the bottom of the next column, and travel in the same direction. The number is 8. So think of the next number after 21 that ends with 8, which is 28.
Proceed in the same way until the last number, 3, corresponding to 63.
Next, use the 0 at the bottom. It corresponds to 70.
Then, start again with the 7. This time it will correspond to 77.
Continue like this.
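The cycles of final digits underlying the figures can be verified with a short Python sketch:

```python
# The arrows in the figures trace the final digits of successive
# multiples; printing them makes the cycles visible:
for n in (1, 2, 3, 4, 6, 7, 8, 9):
    print(n, [(n * i) % 10 for i in range(1, 11)])

# For n = 7 this prints [7, 4, 1, 8, 5, 2, 9, 6, 3, 0], matching
# the walkthrough above (7, 14, 21, 28, ..., 63, 70).
```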
In abstract algebra
Tables can also define binary operations on groups, fields, rings, and other algebraic systems. In such contexts they are called Cayley tables.
For every natural number n, addition and multiplication in Zn, the ring of integers modulo n, is described by an n by n table. (See Modular arithmetic.) For example, the tables for Z5 are:

{|class="wikitable"
|+Addition in Z5
|-
!+!!0!!1!!2!!3!!4
|-
!0
|0||1||2||3||4
|-
!1
|1||2||3||4||0
|-
!2
|2||3||4||0||1
|-
!3
|3||4||0||1||2
|-
!4
|4||0||1||2||3
|}

{|class="wikitable"
|+Multiplication in Z5
|-
!×!!0!!1!!2!!3!!4
|-
!0
|0||0||0||0||0
|-
!1
|0||1||2||3||4
|-
!2
|0||2||4||1||3
|-
!3
|0||3||1||4||2
|-
!4
|0||4||3||2||1
|}
For other examples, see group.
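Such tables are easy to generate; a short Python sketch that reproduces the Z5 tables above:

```python
def cayley_tables(n):
    """Addition and multiplication tables for Zn, the integers modulo n."""
    add = [[(a + b) % n for b in range(n)] for a in range(n)]
    mul = [[(a * b) % n for b in range(n)] for a in range(n)]
    return add, mul

add5, mul5 = cayley_tables(5)
for row in mul5:
    print(row)   # reproduces the Z5 multiplication table shown above
```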
Hypercomplex numbers
Hypercomplex number multiplication tables show the non-commutative results of multiplying two hypercomplex imaginary units. The simplest example is that of the quaternion multiplication table.
{|class="wikitable"
|+Quaternion multiplication table
|-
!width=15 nowrap|↓ × →
!width=15|1
!width=15|i
!width=15|j
!width=15|k
|-
!1
|1
|i
|j
|k
|-
!i
|i
|−1
|k
|−j
|-
!j
|j
|−k
|−1
|i
|-
!k
|k
|j
|−i
|−1
|}
For further examples, see the multiplication tables of other hypercomplex number systems.
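For illustration, the quaternion table above can be reproduced with a short Python sketch; the string encoding of the units is an arbitrary choice made here:

```python
# Products of distinct imaginary units; 1 is the identity and
# each imaginary unit squares to -1.
PRODUCTS = {
    ("i", "j"): "k", ("j", "k"): "i", ("k", "i"): "j",    # cyclic
    ("j", "i"): "-k", ("k", "j"): "-i", ("i", "k"): "-j"  # anti-cyclic
}

def multiply_units(a, b):
    """Product of two quaternion basis units from {1, i, j, k}."""
    if a == "1":
        return b
    if b == "1":
        return a
    if a == b:
        return "-1"
    return PRODUCTS[(a, b)]

for a in ("1", "i", "j", "k"):
    print([multiply_units(a, b) for b in ("1", "i", "j", "k")])
# Prints the four rows of the multiplication table above.
```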
Chinese and Japanese multiplication tables
Mokkan discovered at Heijō Palace suggest that the multiplication table may have been introduced to Japan through Chinese mathematical treatises such as the Sunzi Suanjing, because their expressions of the multiplication table share the character in products less than ten. Chinese and Japanese share a similar system of eighty-one short, easily memorable sentences taught to students to help them learn the multiplication table up to 9 × 9. In current usage, the sentences that express products less than ten include an additional particle in both languages. In the case of modern Chinese, this is (); and in Japanese, this is (). This is useful for those who practice calculation with a suanpan or a soroban, because the sentences remind them to shift one column to the right when inputting a product that does not begin with a tens digit. In particular, the Japanese multiplication table uses non-standard pronunciations for numbers in some specific instances (such as the replacement of san roku with saburoku).
Warring States decimal multiplication bamboo slips
A bundle of 21 bamboo slips dated 305 BC in the Warring States period in the Tsinghua Bamboo Slips (清華簡) collection is the world's earliest known example of a decimal multiplication table.
Standards-based mathematics reform in the US
In 1989, the National Council of Teachers of Mathematics (NCTM) developed new standards which were based on the belief that all students should learn higher-order thinking skills, which recommended reduced emphasis on the teaching of traditional methods that relied on rote memorization, such as multiplication tables. Widely adopted texts such as Investigations in Numbers, Data, and Space (widely known as TERC after its producer, Technical Education Research Centers) omitted aids such as multiplication tables in early editions. NCTM made it clear in their 2006 Focal Points that basic mathematics facts must be learned, though there is no consensus on whether rote memorization is the best method. In recent years, a number of nontraditional methods have been devised to help children learn multiplication facts, including video-game style apps and books that aim to teach times tables through character-based stories.
See also
Vedic square
IBM 1620, an early computer that used tables stored in memory to perform addition and multiplication
References
Multiplication
Mathematics education
Mathematical tables | Multiplication table | [
"Mathematics"
] | 1,556 | [
"Mathematical tables"
] |
57,169 | https://en.wikipedia.org/wiki/Heating%2C%20ventilation%2C%20and%20air%20conditioning | Heating, ventilation, and air conditioning (HVAC) is the use of various technologies to control the temperature, humidity, and purity of the air in an enclosed space. Its goal is to provide thermal comfort and acceptable indoor air quality. HVAC system design is a subdiscipline of mechanical engineering, based on the principles of thermodynamics, fluid mechanics, and heat transfer. "Refrigeration" is sometimes added to the field's abbreviation as HVAC&R or HVACR, or "ventilation" is dropped, as in HACR (as in the designation of HACR-rated circuit breakers).
HVAC is an important part of residential structures such as single family homes, apartment buildings, hotels, and senior living facilities; medium to large industrial and office buildings such as skyscrapers and hospitals; vehicles such as cars, trains, airplanes, ships and submarines; and in marine environments, where safe and healthy building conditions are regulated with respect to temperature and humidity, using fresh air from outdoors.
Ventilating or ventilation (the "V" in HVAC) is the process of exchanging or replacing air in any space to provide high indoor air quality which involves temperature control, oxygen replenishment, and removal of moisture, odors, smoke, heat, dust, airborne bacteria, carbon dioxide, and other gases. Ventilation removes unpleasant smells and excessive moisture, introduces outside air, keeps interior building air circulating, and prevents stagnation of the interior air. Methods for ventilating a building are divided into mechanical/forced and natural types.
Overview
The three major functions of heating, ventilation, and air conditioning are interrelated, especially with the need to provide thermal comfort and acceptable indoor air quality within reasonable installation, operation, and maintenance costs. HVAC systems can be used in both domestic and commercial environments. HVAC systems can provide ventilation, and maintain pressure relationships between spaces. The means of air delivery and removal from spaces is known as room air distribution.
Individual systems
In modern buildings, the design, installation, and control systems of these functions are integrated into one or more HVAC systems. For very small buildings, contractors normally estimate the capacity and type of system needed and then design the system, selecting the appropriate refrigerant and various components needed. For larger buildings, building service designers, mechanical engineers, or building services engineers analyze, design, and specify the HVAC systems. Specialty mechanical contractors and suppliers then fabricate, install and commission the systems. Building permits and code-compliance inspections of the installations are normally required for all sizes of buildings.
District networks
Although HVAC is executed in individual buildings or other enclosed spaces (like NORAD's underground headquarters), the equipment involved is in some cases an extension of a larger district heating (DH) or district cooling (DC) network, or a combined DHC network. In such cases, the operating and maintenance aspects are simplified and metering becomes necessary to bill for the energy that is consumed, and in some cases energy that is returned to the larger system. For example, at a given time one building may be utilizing chilled water for air conditioning and the warm water it returns may be used in another building for heating, or for the overall heating-portion of the DHC network (likely with energy added to boost the temperature).
Basing HVAC on a larger network helps provide an economy of scale that is often not possible for individual buildings, for utilizing renewable energy sources such as solar heat, winter's cold, the cooling potential in some places of lakes or seawater for free cooling, and the enabling function of seasonal thermal energy storage. By utilizing natural sources that can be used for HVAC systems it can make a huge difference for the environment and help expand the knowledge of using different methods.
History
HVAC is based on inventions and discoveries made by Nikolay Lvov, Michael Faraday, Rolla C. Carpenter, Willis Carrier, Edwin Ruud, Reuben Trane, James Joule, William Rankine, Sadi Carnot, Alice Parker and many others.
Multiple inventions within this time frame preceded the beginnings of the first comfort air conditioning system, which was designed in 1902 by Alfred Wolff (Cooper, 2003) for the New York Stock Exchange, while Willis Carrier equipped the Sackett-Wilhelms Printing Company with the process AC unit the same year. Coyne College was the first school to offer HVAC training in 1899. The first residential AC was installed by 1914, and by the 1950s there was "widespread adoption of residential AC".
The invention of the components of HVAC systems went hand-in-hand with the Industrial Revolution, and new methods of modernization, higher efficiency, and system control are constantly being introduced by companies and inventors worldwide.
Heating
Heaters are appliances whose purpose is to generate heat (i.e. warmth) for the building. This can be done via central heating. Such a system contains a boiler, furnace, or heat pump to heat water, steam, or air in a central location such as a furnace room in a home, or a mechanical room in a large building. The heat can be transferred by convection, conduction, or radiation. Space heaters are used to heat single rooms and only consist of a single unit.
Generation
Heaters exist for various types of fuel, including solid fuels, liquids, and gases. Another type of heat source is electricity, normally heating ribbons composed of high resistance wire (see Nichrome). This principle is also used for baseboard heaters and portable heaters. Electrical heaters are often used as backup or supplemental heat for heat pump systems.
The heat pump gained popularity in the 1950s in Japan and the United States. Heat pumps can extract heat from various sources, such as environmental air, exhaust air from a building, or from the ground. Heat pumps transfer heat from outside the structure into the air inside. Initially, heat pump HVAC systems were only used in moderate climates, but with improvements in low temperature operation and reduced loads due to more efficient homes, they are increasing in popularity in cooler climates. They can also operate in reverse to cool an interior.
Distribution
Water/steam
In the case of heated water or steam, piping is used to transport the heat to the rooms. Most modern hot water boiler heating systems have a circulator, which is a pump, to move hot water through the distribution system (as opposed to older gravity-fed systems). The heat can be transferred to the surrounding air using radiators, hot water coils (hydro-air), or other heat exchangers. The radiators may be mounted on walls or installed within the floor to produce floor heat.
The use of water as the heat transfer medium is known as hydronics. The heated water can also supply an auxiliary heat exchanger to supply hot water for bathing and washing.
Air
Warm air systems distribute the heated air through ductwork systems of supply and return air through metal or fiberglass ducts. Many systems use the same ducts to distribute air cooled by an evaporator coil for air conditioning. The air supply is normally filtered through air filters to remove dust and pollen particles.
Dangers
The use of furnaces, space heaters, and boilers as a method of indoor heating could result in incomplete combustion and the emission of carbon monoxide, nitrogen oxides, formaldehyde, volatile organic compounds, and other combustion byproducts. Incomplete combustion occurs when there is insufficient oxygen; the inputs are fuels containing various contaminants and the outputs are harmful byproducts, most dangerously carbon monoxide, which is a tasteless and odorless gas with serious adverse health effects.
Without proper ventilation, carbon monoxide can be lethal at concentrations of 1000 ppm (0.1%). However, at several hundred ppm, carbon monoxide exposure induces headaches, fatigue, nausea, and vomiting. Carbon monoxide binds with hemoglobin in the blood, forming carboxyhemoglobin, reducing the blood's ability to transport oxygen. The primary health concerns associated with carbon monoxide exposure are its cardiovascular and neurobehavioral effects. Carbon monoxide can cause atherosclerosis (the hardening of arteries) and can also trigger heart attacks. Neurologically, carbon monoxide exposure reduces hand to eye coordination, vigilance, and continuous performance. It can also affect time discrimination.
Ventilation
Ventilation is the process of changing or replacing air in any space to control the temperature or remove any combination of moisture, odors, smoke, heat, dust, airborne bacteria, or carbon dioxide, and to replenish oxygen. It plays a critical role in maintaining a healthy indoor environment by preventing the buildup of harmful pollutants and ensuring the circulation of fresh air. Different methods, such as natural ventilation through windows and mechanical ventilation systems, can be used depending on the building design and air quality needs. Ventilation often refers to the intentional delivery of the outside air to the building indoor space. It is one of the most important factors for maintaining acceptable indoor air quality in buildings.
Although ventilation is an integral component of maintaining good indoor air quality, it may not be satisfactory alone. A clear understanding of both indoor and outdoor air quality parameters is needed to improve the performance of ventilation in terms of ... In scenarios where outdoor pollution would deteriorate indoor air quality, other treatment devices such as filtration may also be necessary.
Methods for ventilating a building may be divided into mechanical/forced and natural types.
Mechanical or forced
Mechanical, or forced, ventilation is provided by an air handler (AHU) and used to control indoor air quality. Excess humidity, odors, and contaminants can often be controlled via dilution or replacement with outside air. However, in humid climates more energy is required to remove excess moisture from ventilation air.
Kitchens and bathrooms typically have mechanical exhausts to control odors and sometimes humidity. Factors in the design of such systems include the flow rate (which is a function of the fan speed and exhaust vent size) and noise level. Direct drive fans are available for many applications and can reduce maintenance needs.
In summer, ceiling fans and table/floor fans circulate air within a room for the purpose of reducing the perceived temperature by increasing evaporation of perspiration on the skin of the occupants. Because hot air rises, ceiling fans may be used to keep a room warmer in the winter by circulating the warm stratified air from the ceiling to the floor.
Passive
Natural ventilation is the ventilation of a building with outside air without using fans or other mechanical systems. It can be via operable windows, louvers, or trickle vents when spaces are small and the architecture permits. ASHRAE defined Natural ventilation as the flow of air through open windows, doors, grilles, and other planned building envelope penetrations, and as being driven by natural and/or artificially produced pressure differentials.
Natural ventilation strategies also include cross ventilation, which relies on wind pressure differences on opposite sides of a building. By strategically placing openings, such as windows or vents, on opposing walls, air is channeled through the space to enhance cooling and ventilation. Cross ventilation is most effective when there are clear, unobstructed paths for airflow within the building.
In more complex schemes, warm air is allowed to rise and flow out high building openings to the outside (stack effect), causing cool outside air to be drawn into low building openings. Natural ventilation schemes can use very little energy, but care must be taken to ensure comfort. In warm or humid climates, maintaining thermal comfort solely via natural ventilation might not be possible. Air conditioning systems are used, either as backups or supplements. Air-side economizers also use outside air to condition spaces, but do so using fans, ducts, dampers, and control systems to introduce and distribute cool outdoor air when appropriate.
An important component of natural ventilation is air change rate or air changes per hour: the hourly rate of ventilation divided by the volume of the space. For example, six air changes per hour means an amount of new air, equal to the volume of the space, is added every ten minutes. For human comfort, a minimum of four air changes per hour is typical, though warehouses might have only two. Too high of an air change rate may be uncomfortable, akin to a wind tunnel which has thousands of changes per hour. The highest air change rates are for crowded spaces, bars, night clubs, commercial kitchens at around 30 to 50 air changes per hour.
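The calculation is straightforward; a minimal Python sketch using the example figures above:

```python
def air_changes_per_hour(airflow_m3_per_h, room_volume_m3):
    """ACH = hourly ventilation airflow divided by the volume of the space."""
    return airflow_m3_per_h / room_volume_m3

# A 50 m3 room supplied with 300 m3/h of outdoor air:
print(air_changes_per_hour(300, 50))  # 6.0 -> one full air change every 10 min
```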
Room pressure can be either positive or negative with respect to outside the room. Positive pressure occurs when there is more air being supplied than exhausted, and is common to reduce the infiltration of outside contaminants.
Airborne diseases
Natural ventilation is a key factor in reducing the spread of airborne illnesses such as tuberculosis, the common cold, influenza, meningitis or COVID-19. Opening doors and windows are good ways to maximize natural ventilation, which would make the risk of airborne contagion much lower than with costly and maintenance-requiring mechanical systems. Old-fashioned clinical areas with high ceilings and large windows provide the greatest protection. Natural ventilation costs little and is maintenance free, and is particularly suited to limited-resource settings and tropical climates, where the burden of TB and institutional TB transmission is highest. In settings where respiratory isolation is difficult and climate permits, windows and doors should be opened to reduce the risk of airborne contagion. Natural ventilation requires little maintenance and is inexpensive.
Natural ventilation is not practical in much of the infrastructure because of climate. This means that the facilities need to have effective mechanical ventilation systems and or use Ceiling Level UV or FAR UV ventilation systems.
Ventilation is measured in terms of Air Changes Per Hour (ACH). As of 2023, the CDC recommends that all spaces have a minimum of 5 ACH. For hospital rooms with airborne contagions the CDC recommends a minimum of 12 ACH. The challenges in facility ventilation are public unawareness, ineffective government oversight, poor building codes that are based on comfort levels, poor system operations, poor maintenance, and lack of transparency.
UVC, or ultraviolet germicidal irradiation, is a function used in some modern air conditioners to reduce airborne viruses, bacteria, and fungi. A built-in UV LED irradiates the evaporator; as the cross-flow fan circulates the room air, airborne microorganisms passing through the sterilization module's irradiation range are inactivated.
Air conditioning
An air conditioning system, or a standalone air conditioner, provides cooling and/or humidity control for all or part of a building. Air conditioned buildings often have sealed windows, because open windows would work against the system intended to maintain constant indoor air conditions. Outside, fresh air is generally drawn into the system by a vent into a mix air chamber for mixing with the space return air. Then the mixture air enters an indoor or outdoor heat exchanger section where the air is to be cooled down, then be guided to the space creating positive air pressure. The percentage of return air made up of fresh air can usually be manipulated by adjusting the opening of this vent. Typical fresh air intake is about 10% of the total supply air.
Air conditioning and refrigeration are provided through the removal of heat. Heat can be removed through radiation, convection, or conduction. The heat transfer medium is a refrigeration system, such as water, air, ice, and chemicals are referred to as refrigerants. A refrigerant is employed either in a heat pump system in which a compressor is used to drive thermodynamic refrigeration cycle, or in a free cooling system that uses pumps to circulate a cool refrigerant (typically water or a glycol mix).
The capacity (often quoted in horsepower) of an air conditioning system must be sufficient for the area being cooled; an underpowered system will waste power and run inefficiently.
Refrigeration cycle
The refrigeration cycle uses four essential elements to cool: a compressor, a condenser, a metering device, and an evaporator.
At the inlet of a compressor, the refrigerant inside the system is in a low pressure, low temperature, gaseous state. The compressor pumps the refrigerant gas up to high pressure and temperature.
From there it enters a heat exchanger (sometimes called a condensing coil or condenser) where it loses heat to the outside, cools, and condenses into its liquid phase.
An expansion valve (also called metering device) regulates the refrigerant liquid to flow at the proper rate.
The liquid refrigerant is returned to another heat exchanger where it is allowed to evaporate, hence the heat exchanger is often called an evaporating coil or evaporator. As the liquid refrigerant evaporates it absorbs heat from the inside air, returns to the compressor, and repeats the cycle. In the process, heat is absorbed from indoors and transferred outdoors, resulting in cooling of the building.
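The theoretical efficiency limit of this cycle follows from the evaporator and condenser temperatures. The sketch below applies the textbook Carnot formula with hypothetical operating points; real vapor-compression cycles achieve only a fraction of this bound:

```python
def carnot_cop_cooling(t_evap_c: float, t_cond_c: float) -> float:
    """Ideal (Carnot) cooling COP between the evaporator and condenser
    temperatures; real cycles reach only a fraction of this value."""
    t_cold = t_evap_c + 273.15  # convert to kelvin
    t_hot = t_cond_c + 273.15
    return t_cold / (t_hot - t_cold)

# Hypothetical operating points: 5 C evaporator, 45 C condenser.
print(round(carnot_cop_cooling(5.0, 45.0), 2))  # ~6.95
```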
In variable climates, the system may include a reversing valve that switches from heating in winter to cooling in summer. By reversing the flow of refrigerant, the heat pump refrigeration cycle is changed from cooling to heating or vice versa. This allows a facility to be heated and cooled by a single piece of equipment, using the same hardware.
Free cooling
Free cooling systems can have very high efficiencies, and are sometimes combined with seasonal thermal energy storage so that the cold of winter can be used for summer air conditioning. Common storage mediums are deep aquifers or a natural underground rock mass accessed via a cluster of small-diameter, heat-exchanger-equipped boreholes. Some systems with small storages are hybrids, using free cooling early in the cooling season, and later employing a heat pump to chill the circulation coming from the storage. The heat pump is added-in because the storage acts as a heat sink when the system is in cooling (as opposed to charging) mode, causing the temperature to gradually increase during the cooling season.
Some systems include an "economizer mode", which is sometimes called a "free-cooling mode". When economizing, the control system will open (fully or partially) the outside air damper and close (fully or partially) the return air damper. This will cause fresh, outside air to be supplied to the system. When the outside air is cooler than the demanded cool air, this will allow the demand to be met without using the mechanical supply of cooling (typically chilled water or a direct expansion "DX" unit), thus saving energy. The control system can compare the temperature of the outside air vs. return air, or it can compare the enthalpy of the air, as is frequently done in climates where humidity is more of an issue. In both cases, the outside air must be less energetic than the return air for the system to enter the economizer mode.
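A minimal sketch of the economizer decision described above, assuming a simple dry-bulb comparison; the threshold and signal names are hypothetical, and a humid climate would compare enthalpies instead:

```python
def economizer_enabled(outside_c: float, return_c: float,
                       cooling_demand: bool, low_limit_c: float = 10.0) -> bool:
    """Enable free cooling when cooling is demanded and the outside air is
    cooler than the return air (dry-bulb comparison). The low limit guards
    against over-cooling with very cold outside air."""
    return cooling_demand and low_limit_c <= outside_c < return_c

# When True, the control system would open the outside-air damper and
# close the return-air damper (fully or partially).
print(economizer_enabled(18.0, 24.0, cooling_demand=True))  # True
print(economizer_enabled(30.0, 24.0, cooling_demand=True))  # False
```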
Packaged split system
Central, "all-air" air-conditioning systems (or package systems) with a combined outdoor condenser/evaporator unit are often installed in North American residences, offices, and public buildings, but are difficult to retrofit (install in a building that was not designed to receive it) because of the bulky air ducts required. (Minisplit ductless systems are used in these situations.) Outside of North America, packaged systems are only used in limited applications involving large indoor space such as stadiums, theatres or exhibition halls.
An alternative to packaged systems is the use of separate indoor and outdoor coils in split systems. Split systems are preferred and widely used worldwide except in North America. In North America, split systems are most often seen in residential applications, but they are gaining popularity in small commercial buildings. Split systems are used where ductwork is not feasible or where the space conditioning efficiency is of prime concern. The benefits of ductless air conditioning systems include easy installation, no ductwork, greater zonal control, flexibility of control, and quiet operation. In space conditioning, the duct losses can account for 30% of energy consumption. The use of minisplits can result in energy savings in space conditioning as there are no losses associated with ducting.
With the split system, the evaporator coil is connected to a remote condenser unit using refrigerant piping between an indoor and outdoor unit instead of ducting air directly from the outdoor unit. Indoor units with directional vents mount onto walls, suspended from ceilings, or fit into the ceiling. Other indoor units mount inside the ceiling cavity so that short lengths of duct handle air from the indoor unit to vents or diffusers around the rooms.
Split systems are more efficient, and their footprint is typically smaller than that of package systems. On the other hand, package systems tend to have a slightly lower indoor noise level than split systems, since the fan motor is located outside.
Dehumidification
Dehumidification (air drying) in an air conditioning system is provided by the evaporator. Since the evaporator operates at a temperature below the dew point, moisture in the air condenses on the evaporator coil tubes. This moisture is collected at the bottom of the evaporator in a pan and removed by piping to a central drain or onto the ground outside.
A dehumidifier is an air-conditioner-like device that controls the humidity of a room or building. It is often employed in basements that have a higher relative humidity because of their lower temperature (and propensity for damp floors and walls). In food retailing establishments, large open chiller cabinets are highly effective at dehumidifying the internal air. Conversely, a humidifier increases the humidity of a building.
The HVAC components that dehumidify the ventilation air deserve careful attention because outdoor air constitutes most of the annual humidity load for nearly all buildings.
Humidification
Maintenance
All modern air conditioning systems, even small window package units, are equipped with internal air filters. These are generally of a lightweight gauze-like material, and must be replaced or washed as conditions warrant. For example, a building in a high dust environment, or a home with furry pets, will need to have the filters changed more often than buildings without these dirt loads. Failure to replace these filters as needed will contribute to a lower heat exchange rate, resulting in wasted energy, shortened equipment life, and higher energy bills; low air flow can result in iced-over evaporator coils, which can completely stop airflow. Additionally, very dirty or plugged filters can cause overheating during a heating cycle, which can result in damage to the system or even fire.
Because an air conditioner moves heat between the indoor coil and the outdoor coil, both must be kept clean. This means that, in addition to replacing the air filter at the evaporator coil, it is also necessary to regularly clean the condenser coil. Failure to keep the condenser clean will eventually result in harm to the compressor because the condenser coil is responsible for discharging both the indoor heat (as picked up by the evaporator) and the heat generated by the electric motor driving the compressor.
Energy efficiency
HVAC plays a significant role in the energy efficiency of buildings, as the building sector consumes the largest percentage of global energy. Since the 1980s, manufacturers of HVAC equipment have been making an effort to make the systems they manufacture more efficient. This was originally driven by rising energy costs, and has more recently been driven by increased awareness of environmental issues. Additionally, improvements to HVAC system efficiency can also help increase occupant health and productivity. In the US, the EPA has imposed tighter restrictions over the years. There are several methods for making HVAC systems more efficient.
Heating energy
In the past, water heating was more efficient for heating buildings and was the standard in the United States. Today, forced air systems can double for air conditioning and are more popular.
Some benefits of forced air systems, which are now widely used in churches, schools, and high-end residences, are:
Better air conditioning effects
Energy savings of up to 15–20%
Even conditioning
A drawback is the installation cost, which can be slightly higher than traditional HVAC systems.
Energy efficiency can be improved even more in central heating systems by introducing zoned heating. This allows a more granular application of heat, similar to non-central heating systems. Zones are controlled by multiple thermostats. In water heating systems the thermostats control zone valves, and in forced air systems they control zone dampers inside the vents which selectively block the flow of air. In this case, the control system is very critical to maintaining a proper temperature.
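As an illustrative sketch (the zone names, setpoints, and deadband are hypothetical, not a real control product), per-zone thermostat logic might look like this:

```python
# Hypothetical zones: name -> (setpoint C, measured C). Each zone's
# thermostat opens its zone valve (hydronic) or damper (forced air)
# only while that zone calls for heat.
zones = {"living": (21.0, 19.5), "bedroom": (18.0, 18.4), "office": (20.0, 20.0)}

def calls_for_heat(setpoint: float, measured: float, deadband: float = 0.5) -> bool:
    """A small deadband prevents rapid cycling around the setpoint."""
    return measured < setpoint - deadband

for name, (setpoint, measured) in zones.items():
    state = "open" if calls_for_heat(setpoint, measured) else "closed"
    print(f"{name}: zone valve/damper {state}")
```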
Forecasting is another method of controlling building heating by calculating the demand for heating energy that should be supplied to the building in each time unit.
Ground source heat pump
Ground source, or geothermal, heat pumps are similar to ordinary heat pumps, but instead of transferring heat to or from outside air, they rely on the stable, even temperature of the earth to provide heating and air conditioning. Many regions experience seasonal temperature extremes, which would require large-capacity heating and cooling equipment to heat or cool buildings. For example, a conventional heat pump system used to heat a building through Montana's extreme winter lows, or to cool a building during the highest temperature ever recorded in the US (134 °F or 57 °C, in Death Valley, California, in 1913), would require a large amount of energy due to the extreme difference between inside and outside air temperatures. A few metres below the earth's surface, however, the ground remains at a relatively constant temperature. Utilizing this large source of relatively moderate-temperature earth, a heating or cooling system's capacity can often be significantly reduced. Although ground temperatures vary according to latitude, a few metres underground they stay within a narrow range year-round.
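To quantify the point, a hedged sketch with hypothetical temperatures, again using the ideal Carnot bound (real equipment performs well below it):

```python
def carnot_cop_heating(t_source_c: float, t_indoor_c: float) -> float:
    """Ideal (Carnot) heating COP when pumping heat from a cold source into
    the indoor space; real heat pumps reach only a fraction of this bound."""
    t_cold = t_source_c + 273.15
    t_hot = t_indoor_c + 273.15
    return t_hot / (t_hot - t_cold)

# Heating a 20 C building: the theoretical ceiling is far higher when the
# heat source is 10 C ground rather than -20 C winter air.
print(round(carnot_cop_heating(-20.0, 20.0), 1))  # air source: ~7.3
print(round(carnot_cop_heating(10.0, 20.0), 1))   # ground source: ~29.3
```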
Solar air conditioning
Photovoltaic solar panels offer a new way to potentially decrease the operating cost of air conditioning. Traditional air conditioners run using alternating current, and hence, any direct-current solar power needs to be inverted to be compatible with these units. New variable-speed DC-motor units allow solar power to more easily run them since this conversion is unnecessary, and since the motors are tolerant of voltage fluctuations associated with variance in supplied solar power (e.g., due to cloud cover).
Ventilation energy recovery
Energy recovery systems sometimes utilize heat recovery ventilation or energy recovery ventilation systems that employ heat exchangers or enthalpy wheels to recover sensible or latent heat from exhausted air. This is done by transfer of energy from the stale air inside the home to the incoming fresh air from outside.
Air conditioning energy
The performance of vapor compression refrigeration cycles is limited by thermodynamics. These air conditioning and heat pump devices move heat rather than convert it from one form to another, so thermal efficiencies do not appropriately describe the performance of these devices. The coefficient of performance (COP) measures performance, but this dimensionless measure has not been widely adopted. Instead, the Energy Efficiency Ratio (EER) has traditionally been used to characterize the performance of many HVAC systems. EER is based on a 95 °F (35 °C) outdoor temperature. To more accurately describe the performance of air conditioning equipment over a typical cooling season, a modified version of the EER, the Seasonal Energy Efficiency Ratio (SEER), or in Europe the ESEER, is used. SEER ratings are based on seasonal temperature averages instead of a constant outdoor temperature. The current industry minimum SEER rating is 14 SEER. Engineers have pointed out some areas where the efficiency of existing hardware could be improved. For example, the fan blades used to move the air are usually stamped from sheet metal, an economical method of manufacture, but as a result they are not aerodynamically efficient. A well-designed blade could reduce the electrical power required to move the air by a third.
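A small sketch of the unit relationship (the COP-to-EER conversion factor is standard; the COP and cooling load figures are hypothetical):

```python
BTU_PER_WATT_HOUR = 3.412  # 1 watt-hour is about 3.412 BTU

def eer_from_cop(cop: float) -> float:
    """EER (BTU/h of cooling per watt of electrical input) from the COP."""
    return cop * BTU_PER_WATT_HOUR

def input_power_watts(cooling_btu_per_hour: float, eer: float) -> float:
    """Electrical input required for a given cooling output at a given EER."""
    return cooling_btu_per_hour / eer

# Hypothetical unit: a COP of 4.0 corresponds to an EER of about 13.6,
# so delivering 12,000 BTU/h of cooling draws roughly 880 W.
eer = eer_from_cop(4.0)
print(round(eer, 1))                         # 13.6
print(round(input_power_watts(12000, eer)))  # ~879
```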
Demand-controlled kitchen ventilation
Demand-controlled kitchen ventilation (DCKV) is a building controls approach to controlling the volume of kitchen exhaust and supply air in response to the actual cooking loads in a commercial kitchen. Traditional commercial kitchen ventilation systems operate at 100% fan speed independent of the volume of cooking activity; DCKV technology changes that, providing significant fan energy and conditioned air savings. By deploying smart sensing technology, both the exhaust and supply fans can be controlled to capitalize on the affinity laws for motor energy savings, reduce makeup air heating and cooling energy, increase safety, and reduce ambient kitchen noise levels.
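The affinity laws mentioned above imply that fan power scales with the cube of fan speed; a short sketch with a hypothetical turndown setpoint:

```python
def fan_power_fraction(speed_fraction: float) -> float:
    """Fan affinity law: power scales with the cube of fan speed
    (flow scales linearly and pressure with the square)."""
    return speed_fraction ** 3

# Turning the exhaust fan down to 60% speed during light cooking loads
# cuts fan power to about 22% of full-speed power (hypothetical setpoint).
print(round(fan_power_fraction(0.60), 2))  # 0.22
```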
Air filtration and cleaning
Air cleaning and filtration removes particles, contaminants, vapors and gases from the air. The filtered and cleaned air is then used in heating, ventilation, and air conditioning. Air cleaning and filtration should be taken into account when protecting building environments: contaminants that are not removed or filtered properly can be redistributed by the HVAC system.
Clean air delivery rate (CADR) is the amount of clean air an air cleaner provides to a room or space. When determining CADR, the amount of airflow in a space is taken into account. For example, an air cleaner that moves a given volume of air per minute with a removal efficiency of 50% has a CADR of half that volume per minute. Along with CADR, filtration performance is very important for indoor air quality; it depends on the size of the particle or fiber, the filter packing density and depth, and the airflow rate.
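A minimal sketch of the simplified CADR relationship from the example above (the airflow figure is hypothetical, and units carry through unchanged):

```python
def cadr(airflow: float, single_pass_efficiency: float) -> float:
    """Clean air delivery rate: airflow through the cleaner multiplied by
    the fraction of particles removed in one pass (simplified model)."""
    return airflow * single_pass_efficiency

# A cleaner moving 100 units of air per minute at 50% single-pass
# efficiency delivers 50 units of clean air per minute.
print(cadr(100.0, 0.5))  # 50.0
```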
Circulation of harmful substances
Poorly maintained air conditioners and ventilation systems can harbor mold, bacteria, and other contaminants, which are then circulated throughout indoor spaces, contributing to poor indoor air quality and related health problems.
Industry and standards
The HVAC industry is a worldwide enterprise, with roles including operation and maintenance, system design and construction, equipment manufacturing and sales, and education and research. The HVAC industry was historically regulated by the manufacturers of HVAC equipment, but regulating and standards organizations such as HARDI (Heating, Air-conditioning and Refrigeration Distributors International), ASHRAE, SMACNA, ACCA (Air Conditioning Contractors of America), Uniform Mechanical Code, International Mechanical Code, and AMCA have been established to support the industry and encourage high standards and achievement. (UL as an omnibus agency is not specific to the HVAC industry.)
The starting point in carrying out an estimate both for cooling and heating depends on the exterior climate and interior specified conditions. However, before taking up the heat load calculation, it is necessary to find fresh air requirements for each area in detail, as pressurization is an important consideration.
International
ISO 16813:2006 is one of the ISO building environment standards. It establishes the general principles of building environment design. It takes into account the need to provide a healthy indoor environment for the occupants as well as the need to protect the environment for future generations and promote collaboration among the various parties involved in building environmental design for sustainability. ISO 16813 is applicable to new construction and the retrofit of existing buildings.
The building environmental design standard aims to:
provide the constraints concerning sustainability issues from the initial stage of the design process, with building and plant life cycle to be considered together with owning and operating costs from the beginning of the design process;
assess the proposed design with rational criteria for indoor air quality, thermal comfort, acoustical comfort, visual comfort, energy efficiency, and HVAC system controls at every stage of the design process;
iterate decisions and evaluations of the design throughout the design process.
United States
Licensing
In the United States, federal licensure is generally handled by the EPA, which certifies technicians for the installation and service of HVAC devices.
Many U.S. states have licensing for boiler operation. Some of these are listed as follows:
Arkansas
Georgia
Michigan
Minnesota
Montana
New Jersey
North Dakota
Ohio
Oklahoma
Oregon
Finally, some U.S. cities may have additional labor laws that apply to HVAC professionals.
Societies
Many HVAC engineers are members of the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE). ASHRAE holds two annual technical meetings and publishes recognized standards for HVAC design, which are updated every four years.
Another popular society is AHRI, which provides regular information on new refrigeration technology, and publishes relevant standards and codes.
Codes
American design standards are legislated in the Uniform Mechanical Code or International Mechanical Code. In certain states, counties, or cities, either of these codes may be adopted and amended via various legislative processes. These codes are updated and published by the International Association of Plumbing and Mechanical Officials (IAPMO) or the International Code Council (ICC) respectively, on a 3-year code development cycle. Typically, local building permit departments are charged with enforcement of these standards on private and certain public properties. Codes such as the UMC and IMC do include much detail on installation requirements, however. Other useful reference materials include items from SMACNA, ACGIH, and technical trade journals.
Technicians
An HVAC technician is a tradesman who specializes in heating, ventilation, air conditioning, and refrigeration. HVAC technicians in the US can receive training through formal training institutions, where most earn associate degrees. Training for HVAC technicians includes classroom lectures and hands-on tasks, and can be followed by an apprenticeship wherein the recent graduate works alongside a professional HVAC technician for a temporary period. HVAC techs who have been trained can also be certified in areas such as air conditioning, heat pumps, gas heating, and commercial refrigeration.
United Kingdom
The Chartered Institution of Building Services Engineers is a body that covers the essential services that allow buildings to operate. It includes the electrotechnical, heating, ventilating, air conditioning, refrigeration and plumbing industries. To train as a building services engineer, the academic requirements are GCSEs (A-C) / Standard Grades (1-3) in Maths and Science, which are important in measurements, planning and theory. Employers will often want a degree in a branch of engineering, such as building environment engineering, electrical engineering or mechanical engineering. To become a full member of CIBSE, and so also to be registered by the Engineering Council UK as a chartered engineer, engineers must also attain an Honours Degree and a master's degree in a relevant engineering subject. CIBSE publishes several guides to HVAC design relevant to the UK market, and also the Republic of Ireland, Australia, New Zealand and Hong Kong. These guides include various recommended design criteria and standards, some of which are cited within the UK building regulations, and therefore form a legislative requirement for major building services works. The main guides are:
Guide A: Environmental Design
Guide B: Heating, Ventilating, Air Conditioning and Refrigeration
Guide C: Reference Data
Guide D: Transportation systems in Buildings
Guide E: Fire Safety Engineering
Guide F: Energy Efficiency in Buildings
Guide G: Public Health Engineering
Guide H: Building Control Systems
Guide J: Weather, Solar and Illuminance Data
Guide K: Electricity in Buildings
Guide L: Sustainability
Guide M: Maintenance Engineering and Management
Within the construction sector, it is the job of the building services engineer to design and oversee the installation and maintenance of the essential services such as gas, electricity, water, heating and lighting, as well as many others. These all help to make buildings comfortable and healthy places to live and work in. Building services is part of a sector that comprises over 51,000 businesses and represents 2–3% of GDP.
Australia
The Air Conditioning and Mechanical Contractors Association of Australia (AMCA), the Australian Institute of Refrigeration, Air Conditioning and Heating (AIRAH), the Australian Refrigeration Mechanical Association and CIBSE are the bodies responsible for HVAC standards and guidance in Australia.
Asia
Asian architectural temperature-control methods have different priorities than European methods. For example, Asian heating traditionally focuses on maintaining the temperatures of objects such as the floor or furnishings such as kotatsu tables and directly warming people, as opposed to the modern Western focus on designing air systems.
Philippines
The Philippine Society of Ventilating, Air Conditioning and Refrigerating Engineers (PSVARE) along with Philippine Society of Mechanical Engineers (PSME) govern on the codes and standards for HVAC / MVAC (MVAC means "mechanical ventilation and air conditioning") in the Philippines.
India
The Indian Society of Heating, Refrigerating and Air Conditioning Engineers (ISHRAE) was established to promote the HVAC industry in India. ISHRAE is an associate of ASHRAE. ISHRAE was founded in New Delhi in 1981, and a chapter was started in Bangalore in 1989. Between 1989 and 1993, ISHRAE chapters were formed in all major cities in India.
See also
Air speed (HVAC)
Architectural engineering
ASHRAE Handbook
Auxiliary power unit
Cleanroom
Electric heating
Fan coil unit
Glossary of HVAC terms
Head-end power
Hotel electric power
Mechanical engineering
Outdoor wood-fired boiler
Radiant cooling
Sick building syndrome
Uniform Codes
Uniform Mechanical Code
Ventilation (architecture)
World Refrigeration Day
Wrightsoft
References
Further reading
International Mechanical Code (2012 (Second Printing)) by the International Code Council, Thomson Delmar Learning.
Modern Refrigeration and Air Conditioning (August 2003) by Althouse, Turnquist, and Bracciano, Goodheart-Wilcox Publisher; 18th edition.
The Cost of Cool.
What is LEV?
External links
Building biology
Building engineering
Mechanical engineering
Construction | Heating, ventilation, and air conditioning | [
"Physics",
"Engineering"
] | 7,667 | [
"Applied and interdisciplinary physics",
"Building engineering",
"Construction",
"Civil engineering",
"Mechanical engineering",
"Building biology",
"Architecture"
] |
57,183 | https://en.wikipedia.org/wiki/Middleware%20%28distributed%20applications%29 | Middleware in the context of distributed applications is software that provides services beyond those provided by the operating system to enable the various components of a distributed system to communicate and manage data. Middleware supports and simplifies complex distributed applications. It includes web servers, application servers, messaging and similar tools that support application development and delivery. Middleware is especially integral to modern information technology based on XML, SOAP, Web services, and service-oriented architecture.
Middleware often enables interoperability between applications that run on different operating systems, by supplying services so the application can exchange data in a standards-based way. Middleware sits "in the middle" between application software that may be working on different operating systems. It is similar to the middle layer of a three-tier single system architecture, except that it is stretched across multiple systems or applications. Examples include EAI software, telecommunications software, transaction monitors, and messaging-and-queueing software.
The distinction between operating system and middleware functionality is, to some extent, arbitrary. While core kernel functionality can only be provided by the operating system itself, some functionality previously provided by separately sold middleware is now integrated in operating systems. A typical example is the TCP/IP stack for telecommunications, nowadays included in virtually every operating system.
Definitions
Middleware is defined as software that provides a link between separate software applications. It is sometimes referred to as plumbing because it connects two applications and passes data between them. Middleware allows data contained in one database to be accessed through another. This makes it particularly useful for enterprise application integration and data integration tasks.
In more abstract terms, middleware is "The software layer that lies between the operating system and applications on each side of a distributed computing system in a network."
Origins
Middleware gained popularity in the 1980s as a solution to the problem of how to link newer applications to older legacy systems, although the term had been in use since 1968. It also facilitated distributed processing, the connection of multiple applications to create a larger application, usually over a network.
Use
Middleware services provide a more functional set of application programming interfaces to allow an application to:
Locate transparently across the network, thus providing interaction with another service or application
Filter data to make them usable or publicly shareable, for example via an anonymization process for privacy protection
Be independent from network services
Be reliable and always available
Add complementary attributes like semantics
when compared to the operating system and network services.
Middleware offers some unique technological advantages for business and industry. For example, traditional database systems are usually deployed in closed environments where users access the system only via a restricted network or intranet (e.g., an enterprise’s internal network). With the phenomenal growth of the World Wide Web, users can access virtually any database for which they have proper access rights from anywhere in the world. Middleware addresses the problem of varying levels of interoperability among different database structures. Middleware facilitates transparent access to legacy database management systems (DBMSs) or applications via a web server without regard to database-specific characteristics.
Businesses frequently use middleware applications to link information from departmental databases, such as payroll, sales, and accounting, or databases housed in multiple geographic locations. In the highly competitive healthcare community, laboratories make extensive use of middleware applications for data mining, laboratory information system (LIS) backup, and to combine systems during hospital mergers. Middleware helps bridge the gap between separate LISs in a newly formed healthcare network following a hospital buyout.
Middleware can help software developers avoid having to write application programming interfaces (API) for every control program, by serving as an independent programming interface for their applications.
For Future Internet network operation through traffic monitoring in multi-domain scenarios, mediator tools (middleware) are a powerful aid, since they allow operators, researchers and service providers to supervise quality of service and analyse eventual failures in telecommunication services. The middleware stack comprises several components (CSMS, TV statistics and client applications). It is known as the software brains of OTT platforms, as it controls and interconnects all the components of the solution. The Content and Subscriber Management System (CSMS) is the central part of the solution, commonly referred to as an administration portal. Apart from being the main interface for operator personnel to administer the TV service (subscribers, content, packages, etc.), it also controls the majority of TV services and interacts with streaming, CDN and DRM servers to deliver live, VOD and recorded content to the end users. It also integrates with external systems for billing and provisioning and with EPG and VOD content providers. Client applications authenticate with the CSMS and communicate with it to provide the required TV services to end users on different devices.
Finally, e-commerce uses middleware to assist in handling rapid and secure transactions over many different types of computer environments. In short, middleware has become a critical element across a broad range of industries, thanks to its ability to bring together resources across dissimilar networks or computing platforms.
In 2004 members of the European Broadcasting Union (EBU) carried out a study of Middleware with respect to system integration in broadcast environments. This involved system design engineering experts from 10 major European broadcasters working over a 12-month period to understand the effect of predominantly software-based products to media production and broadcasting system design techniques. The resulting reports Tech 3300 and Tech 3300s were published and are freely available from the EBU web site.
Types
Message-oriented middleware
Message-oriented middleware (MOM) is middleware where transactions or event notifications are delivered between disparate systems or components by way of messages, often via an enterprise messaging system. With MOM, messages sent to the client are collected and stored until they are acted upon, while the client continues with other processing.
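A minimal sketch of the store-and-forward idea using Python's standard-library queue (an in-process stand-in for a real enterprise broker; the message contents are hypothetical):

```python
import queue

# The broker stores messages, so the sender and receiver need not be
# active at the same time.
broker = queue.Queue()

def publish(message: dict) -> None:
    broker.put(message)   # the sender returns immediately

def consume() -> dict:
    return broker.get()   # the receiver processes when it is ready

publish({"event": "order_created", "id": 42})
publish({"event": "order_paid", "id": 42})
while not broker.empty():
    print(consume())
```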
Enterprise messaging
An enterprise messaging system is a type of middleware that facilitates message passing between disparate systems or components in standard formats, often using XML, SOAP or web services. As part of an enterprise messaging system, message broker software may queue, duplicate, translate and deliver messages to disparate systems or components in a messaging system.
Enterprise service bus
Enterprise service bus (ESB) is defined by the Burton Group as "some type of integration middleware product that supports both message-oriented middleware and Web services".
Intelligent middleware
Intelligent Middleware (IMW) provides real-time intelligence and event management through intelligent agents. The IMW manages the real-time processing of high volume sensor signals and turns these signals into intelligent and actionable business information. The actionable information is then delivered in end-user power dashboards to individual users or is pushed to systems within or outside the enterprise. It is able to support various heterogeneous types of hardware and software and provides an API for interfacing with external systems. It should have a highly scalable, distributed architecture which embeds intelligence throughout the network to transform raw data systematically into actionable and relevant knowledge. It can also be packaged with tools to view and manage operations and build advanced network applications most effectively.
Content-centric middleware
Content-centric middleware offers a simple provider-consumer abstraction through which applications can issue requests for uniquely identified content, without worrying about where or how it is obtained. Juno is one example, which allows applications to generate content requests associated with high-level delivery requirements. The middleware then adapts the underlying delivery to access the content from sources that are best suited to matching the requirements. This is therefore similar to Publish/subscribe middleware, as well as the Content-centric networking paradigm.
Remote procedure call
Remote procedure call middleware enables a client to use services running on remote systems. The process can be synchronous or asynchronous.
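A minimal sketch of a synchronous remote procedure call using Python's standard-library XML-RPC modules; the port number and the add function are arbitrary examples:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# The client calls add() as if it were local; the middleware marshals the
# call and its result over the network. Port 8765 is an arbitrary choice.
server = SimpleXMLRPCServer(("localhost", 8765), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

client = ServerProxy("http://localhost:8765")
print(client.add(2, 3))  # 5 -- a synchronous remote procedure call
server.shutdown()
```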
Object request broker
With object request broker middleware, it is possible for applications to send objects and request services in an object-oriented system.
SQL-oriented data access
SQL-oriented Data Access is middleware between applications and database servers.
Embedded middleware
Embedded middleware provides communication services and software/firmware integration interface that operates between embedded applications, the embedded operating system, and external applications.
Other
Other sources include these additional classifications:
Transaction processing monitors: tools and an environment to develop and deploy distributed applications.
Application servers: software installed on a computer to facilitate the serving (running) of other applications.
Integration Levels
Data Integration
Integration of data resources like files and databases
Cloud Integration
Integration between various cloud services
B2B Integration
Integration of data resources and partner interfaces
Application Integration
Integration of applications managed by a company
Vendors
IBM, Red Hat, Oracle Corporation and Microsoft are some of the vendors that provide middleware software. Vendors such as Axway, SAP, TIBCO, Informatica, Objective Interface Systems, Pervasive, ScaleOut Software and webMethods were specifically founded to provide more niche middleware solutions. Groups such as the Apache Software Foundation, OpenSAF, the ObjectWeb Consortium (now OW2) and OASIS' AMQP encourage the development of open source middleware. The Microsoft .NET "Framework" architecture is essentially middleware, with typical middleware functions distributed between the various products, and with most inter-computer interaction carried out via industry standards, open APIs or RAND software licences. Solace provides middleware in purpose-built hardware for implementations that need to operate at scale. StormMQ provides message-oriented middleware as a service.
See also
Comparison of business integration software
Middleware Analysts
Service-oriented architecture
Enterprise Service Bus
Event-driven SOA
ObjectWeb
References
External links
Internet2 Middleware Initiative
SWAMI - Swedish Alliance for Middleware Infrastructure
Open Middleware Infrastructure Institute (OMII-UK)
Middleware Integration Levels
European Broadcasting Union Middleware report.
More detailed supplement to the European Broadcasting Union Middleware report.
ObjectWeb - international community developing open-source middleware
Systems engineering | Middleware (distributed applications) | [
"Technology",
"Engineering"
] | 2,002 | [
"Software engineering",
"Systems engineering",
"Middleware",
"IT infrastructure"
] |
57,260 | https://en.wikipedia.org/wiki/Envelope | An envelope is a common packaging item, usually made of thin, flat material. It is designed to contain a flat object, such as a letter or card.
Traditional envelopes are made from sheets of paper cut to one of three shapes: a rhombus, a short-arm cross or a kite. These shapes allow the envelope structure to be made by folding the sheet sides around a central rectangular area. In this manner, a rectangle-faced enclosure is formed with an arrangement of four flaps on the reverse side.
Overview
A folding sequence such that the last flap closed is on a short side is referred to in commercial envelope manufacture as a pocket – a format frequently employed in the packaging of small quantities of seeds. Although in principle the flaps can be held in place by securing the topmost flap at a single point (for example with a wax seal), generally they are pasted or gummed together at the overlaps. They are most commonly used for enclosing and sending mail (letters) through a prepaid-postage postal system.
Window envelopes have a hole cut in the front side that allows the paper within to be seen. They are generally arranged so that the receiving address printed on the letter is visible, saving duplication of the address on the envelope itself. The window is normally covered with a transparent or translucent film to protect the letter inside, as was first designed by Americus F. Callahan in 1901 and patented the following year. In some cases, shortages of materials or the need to economize resulted in envelopes that had no film covering the window. One innovative process, invented in Europe about 1905, involved using hot oil to saturate the area of the envelope where the address would appear. The treated area became sufficiently translucent for the address to be readable. There is no international standard for window envelopes, but some countries, including Germany and the United Kingdom, have national standards.
An aerogram is related to a letter sheet, both being designed to have writing on the inside to minimize the weight. Any handmade envelope is effectively a letter sheet because prior to the folding stage it offers the opportunity for writing a message on that area of the sheet that after folding becomes the inside of the face of the envelope. For document security, the letter sheet can be sealed with wax. Another secure form of letter sheet is a locked letter, which is formed by cutting and folding the sheet in an elaborate way that prevents the letter from being opened without creating obvious damage to the letter/envelope.
The "envelope" used to launch the Penny Post component of the British postal reforms of 1840 by Sir Rowland Hill and the invention of the postage stamp, was a lozenge-shaped lettersheet known as a Mulready. If desired, a separate letter could be enclosed with postage remaining at one penny provided the combined weight did not exceed half an ounce (14 grams). This was a legacy of the previous system of calculating postage, which partly depended on the number of sheets of paper used.
During the U.S. Civil War those in the Confederate States Army occasionally used envelopes made from wallpaper, due to financial hardship.
A "return envelope" is a pre-addressed, smaller envelope included as the contents of a larger envelope and can be used for courtesy reply mail, metered reply mail, or freepost (business reply mail). Some envelopes are designed to be reused as the return envelope, saving the expense of including a return envelope in the contents of the original envelope. The direct mail industry makes extensive use of return envelopes as a response mechanism.
Up until 1840, all envelopes were handmade, each being individually cut to the appropriate shape out of an individual rectangular sheet. In that year George Wilson in the United Kingdom patented the method of tessellating (tiling) a number of envelope patterns across and down a large sheet, thereby reducing the overall amount of waste produced per envelope when they were cut out. In 1845 Edwin Hill and Warren de la Rue obtained a patent for a steam-driven machine that not only cut out the envelope shapes but creased and folded them as well. (Mechanised gumming had yet to be devised.) The convenience of the sheets ready cut to shape popularized the use of machine-made envelopes, and the economic significance of the factories that had produced handmade envelopes gradually diminished.
As envelopes are made of paper, they are intrinsically amenable to embellishment with additional graphics and text over and above the necessary postal markings. This is a feature that the direct mail industry, and more recently the Mail Art movement, has long taken advantage of. Custom printed envelopes have also become an increasingly popular marketing method for small businesses.
Most of the over 400 billion envelopes of all sizes made worldwide are machine-made.
Sizes
International standard sizes
International standard ISO 269 (withdrawn in 2009 without replacement) defined several standard envelope sizes, which are designed for use with ISO 216 standard paper sizes.
The German standard DIN 678 defines a similar list of envelope formats.
DL comes from the DIN Lang (German: "Long") size envelope which originated in the 1920s.
North American sizes
There are dozens of sizes of envelopes available in the United States.
The designations such as "A2" do not correspond to ISO paper sizes. Sometimes, North American paper jobbers and printers will insert a hyphen to distinguish from ISO sizes, thus: A-2.
The No. 10 envelope is the standard business envelope size in the United States.
PWG 5101.1 also lists several even-inch envelope sizes.
Envelopes accepted by the U.S. Postal Service for mailing at the price of a letter must be:
Rectangular
At least 3½ inches high × 5 inches long × 0.007 inch thick.
No more than 6⅛ inches high × 11½ inches long × ¼ inch thick.
Letters that have a length-to-height aspect ratio of less than 1.3 or more than 2.5 are classified as "non-machinable" by the USPS and may cost more to mail.
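A hedged sketch of these rules as a validation function (the dimension limits are as given above; verify against current USPS standards before relying on them):

```python
def usps_machinable_letter(height_in: float, length_in: float,
                           thickness_in: float) -> bool:
    """Check an envelope against the letter-rate limits above (3.5 x 5 x
    0.007 in minimum, 6.125 x 11.5 x 0.25 in maximum) and the 1.3-2.5
    length-to-height aspect ratio for machinable mail."""
    within_limits = (3.5 <= height_in <= 6.125
                     and 5.0 <= length_in <= 11.5
                     and 0.007 <= thickness_in <= 0.25)
    aspect_ok = 1.3 <= length_in / height_in <= 2.5
    return within_limits and aspect_ok

print(usps_machinable_letter(4.125, 9.5, 0.02))  # True: a No. 10 envelope
print(usps_machinable_letter(5.0, 5.0, 0.02))    # False: aspect ratio 1.0
```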
Chinese sizes
Japanese sizes
Japanese traditional rectangular (角形, kakugata, K) and long (長形, nagagata, N) envelopes open on the short side, while Western-style (洋形, yōgata, Y) envelopes open on the long side.
The Japanese standard JIS S 5502 was first published in 1964. Some traditional sizes were not kept, and others have since been removed; as of the latest edition in 2014, this has left gaps in the numeric sequence of designations.
Manufacture
History of envelopes
The first known envelope was nothing like the paper envelope of today. It can be dated back to around 3500 to 3200 BC in the ancient Middle East. Hollow clay spheres were molded around financial tokens and used in private transactions. The two people who discovered these first envelopes were Jacques de Morgan, in 1901, and Roland de Mecquenem, in 1907.
Paper envelopes were developed in China, where paper was invented by the 2nd century BC. Paper envelopes, known as chih poh, were used to store gifts of money. In the Southern Song dynasty, the Chinese imperial court used paper envelopes to distribute monetary gifts to government officials.
In Western history, from the time flexible writing material became more readily available in the 13th century until the mid-19th century, correspondence was typically secured by a process of folding and sealing the letter itself, sometimes including elaborate letterlocking techniques to indicate tampering or prove authenticity. Some of these letter techniques, which could involve stitching or wax seals, were also employed to secure hand-made envelopes.
Prior to 1840, all envelopes were handmade, including those for commercial use. In 1840 George Wilson of London was granted a patent for an envelope-cutting machine (patent: "an improved paper-cutting machine"); these machine-cut envelopes still needed to be folded by hand. In 1845, Edwin Hill and Warren De La Rue were granted a British patent for the first envelope-folding machine.
The "envelopes" produced by the Hill/De La Rue machine were not like those used today. They were flat diamond, lozenge (or rhombus)-shaped sheets or "blanks" that had been precut to shape before being fed to the machine for creasing and made ready for folding to form a rectangular enclosure. The edges of the overlapping flaps treated with a paste or adhesive and the method of securing the envelope or wrapper was a user choice. The symmetrical flap arrangement meant that it could be held together with a single wax seal at the apex of the topmost flap. (That the flaps of an envelope can be held together by applying a seal at a single point is a classic design feature of an envelope.)
Nearly 50 years passed before a commercially successful machine for producing pre-gummed envelopes, like those in use today, appeared.
The origin of the use of the diamond shape for envelopes is debated. However, as an alternative to simply wrapping a sheet of paper around a folded letter or an invitation and sealing the edges, it is a tidy and ostensibly paper-efficient way of producing a rectangular-faced envelope. The claim of paper efficiency fails because paper manufacturers normally supply paper in rectangular sheets: the largest envelope that can be realised by cutting out a diamond, or any other shape yielding symmetrical flaps, is smaller than the largest that can be made from the same sheet simply by folding.
The folded diamond-shaped sheet (or "blank") was in use at the beginning of the 19th century as a novelty wrapper for invitations and letters among the proportion of the population that had the time to sit and cut them out and were affluent enough not to bother about the waste offcuts. Their use first became widespread in the UK when the British government took monopoly control of postal services and tasked Rowland Hill with its introduction. The new service was launched in May 1840 with a postage-paid machine-printed illustrated (or pictorial) version of the wrapper and the much-celebrated first adhesive postage stamp, the Penny Black, for the production of which the Jacob Perkins printing process was used to deter counterfeiting and forgery. The wrappers were printed and sold as a sheet of 12, with cutting the purchaser's task. Known as Mulready stationery, because the illustration was created by the respected artist William Mulready, the envelopes were withdrawn when the illustration was ridiculed and lampooned. Nevertheless, the public apparently saw the convenience of the wrappers being available ready-shaped, and it must have been obvious that with the stamp available totally plain versions of the wrapper could be produced and postage prepaid by purchasing a stamp and affixing it to the wrapper once folded and secured. In this way although the postage-prepaid printed pictorial version died ignominiously, the diamond-shaped wrapper acquired de facto official status and became readily available to the public notwithstanding the time taken to cut them out and the waste generated. With the issuing of the stamps and the operation and control of the service (which is a communications medium) in government hands the British model spread around the world and the diamond-shaped wrapper went with it.
Hill also installed his brother Edwin as The Controller of Stamps, and it was he with his partner Warren De La Rue who patented the machine for mass-producing the diamond-shaped sheets for conversion to envelopes in 1845. Today, envelope-making machine manufacture is a long- and well-established international industry, and blanks are produced with a short-arm-cross shape and a kite shape as well as diamond shape. (The short-arm-cross style is mostly encountered in "pocket" envelopes i.e. envelopes with the closing flap on a short side. The more common style, with the closing flap on a long side, are sometimes referred to as "standard" or "wallet" style for purposes of differentiation.)
The most famous paper-making machine was the Fourdrinier machine. The process involves taking processed pulp stock and converting it to a continuous web, which is gathered as a reel. Subsequently, the reel is guillotined edge to edge to create a large number of properly rectangular sheets, because ever since the invention of Gutenberg's press, paper has been closely associated with printing.
To this day, all other mechanical printing and duplicating equipment devised in the meantime, including the typewriter (which was used up to the 1990s for addressing envelopes), has been primarily designed to process rectangular sheets. Hence the large sheets are in turn guillotined down to the sizes of rectangular sheet commonly used in the commercial printing industry, and nowadays to the sizes commonly used as feed-stock in office-grade computer printers, copiers and duplicators (mainly ISO A4 and US Letter).
Using any mechanical printing equipment to print on envelopes, which although rectangular, are in fact folded sheets with differing thicknesses across their surfaces, calls for skill and attention on the part of the operator. In commercial printing the task of printing on machine-made envelopes is referred to as "overprinting" and is usually confined to the front of the envelope. If printing is required on all four flaps as well as the front, the process is referred to as "printing on the flat". Eye-catching illustrated envelopes or pictorial envelopes, the origins of which as an artistic genre can be attributed to the Mulready stationery – and which was printed in this way – are used extensively for direct mail. In this respect, direct mail envelopes have a shared history with propaganda envelopes (or "covers") as they are called by philatelists.
Present and future state of envelopes
In 1998, the U.S. Postal Service became the first postal authority to approve a system of printing digital stamps. With this innovative alternative to an adhesive-backed postage stamp, businesses could more easily produce envelopes in-house, address them, and customize them with advertising information on the face.
The fortunes of the commercial envelope manufacturing industry and the postal service go hand in hand, and both link to the printing industry and the mechanized envelope processing industry producing equipment such as franking and addressing machines. Technological developments affecting one ricochet through the others: addressing machines print addresses, postage stamps are a print product, franking machines imprint a frank on an envelope. If fewer envelopes are required; fewer stamps are required; fewer franking machines are required and fewer addressing machines are required. For example, the advent of information-based indicia (IBI) (commonly referred to as digitally-encoded electronic stamps or digital indicia) by the US Postal Service in 1998 caused widespread consternation in the franking machine industry, as their machines were rendered obsolete, and resulted in a flurry of lawsuits involving Pitney Bowes among others. The advent of e-mail in the late 1990s appeared to offer a substantial threat to the postal service. By 2008 letter-post service operators were reporting significantly smaller volumes of letter-post, specifically stamped envelopes, which they attributed mainly to e-mail. Although a corresponding reduction in the volume of envelopes required would have been expected, no such decrease was reported as widely as the reduction in letter-post volumes.
Types of envelopes
Windowed envelopes
A windowed envelope is an envelope with a plastic or glassine window in it. The plastic in these envelopes creates problems in paper recycling.
Security envelopes
Security envelopes have special tamper-resistant and tamper-evident features. They are used for high value products and documents as well as for evidence for legal proceedings.
Some security envelopes have a patterned tint printed on the inside, which makes it difficult to read the contents. Various patterns exist.
Mailers
Some envelopes are available for full-size documents or for other items. Some carriers have large mailing envelopes for their express services. Other similar envelopes are available at stationery supply locations.
These mailers usually have an opening on an end with a flap that can be attached by gummed adhesive, integral pressure-sensitive adhesive, adhesive tape, or security tape.
Construction is usually:
Paperboard
Corrugated fiberboard
Polyethylene, often a coextrusion
Nonwoven fabric
Padded mailers
Shipping envelopes can have padding to provide stiffness and some degree of cushioning. The padding can be ground newsprint, plastic foam sheets, or bubble packing.
Inter-office envelopes
Various U.S. Federal Government offices use Standard Form (SF) 65 Government Messenger Envelopes for inter-office mail delivery. These envelopes are typically light brown in color, unsealed with a string-tied closure, and perforated with an array of holes on both sides so that the contents are somewhat visible. Other colloquial names for this envelope include "Holey Joe" and "Shotgun" envelope, due to the holey nature of the envelope. The addressing method is unique in that these envelopes are re-usable: the previous address is crossed out thoroughly and the new addressee (name, building, room, and mailstop) is written in the next available box. Although still in use, SF-65 is no longer listed on the United States Office of Personnel Management website list of standard forms.
See also
Back-of-the-envelope calculation
✉ Envelope character in the Dingbats section of Unicode
Green envelope, a Malay custom
Red envelope, a Chinese custom
Return address
Secrecy of correspondence
References
Notes
External links
Available via the Smithsonian National Postal Museum
the ISO 216 paper size system and the ideas behind its design.
Methods from the Envelope and Letter Folding Association
papersizes.guide
Chinese inventions
Domestic implements
English inventions
Packaging
Paper products
Postal history
Postal systems
Stationery | Envelope | [
"Technology"
] | 3,673 | [
"Transport systems",
"Postal systems"
] |
57,285 | https://en.wikipedia.org/wiki/Protease | A protease (also called a peptidase, proteinase, or proteolytic enzyme) is an enzyme that catalyzes proteolysis, breaking down proteins into smaller polypeptides or single amino acids, and spurring the formation of new protein products. They do this by cleaving the peptide bonds within proteins by hydrolysis, a reaction where water breaks bonds. Proteases are involved in numerous biological pathways, including digestion of ingested proteins, protein catabolism (breakdown of old proteins), and cell signaling.
Without enzymatic catalysis, proteolysis is very slow, taking hundreds of years. Proteases can be found in all forms of life and in viruses. They have independently evolved multiple times, and different classes of protease can perform the same reaction by completely different catalytic mechanisms.
Classification
Based on catalytic residue
Proteases can be classified into seven broad groups:
Serine proteases - using a serine alcohol
Cysteine proteases - using a cysteine thiol
Threonine proteases - using a threonine secondary alcohol
Aspartic proteases - using an aspartate carboxylic acid
Glutamic proteases - using a glutamate carboxylic acid
Metalloproteases - using a metal, usually zinc
Asparagine peptide lyases - using an asparagine to perform an elimination reaction (not requiring water)
Proteases were first grouped into 84 families according to their evolutionary relationship in 1993, and classified under four catalytic types: serine, cysteine, aspartic, and metallo proteases. The threonine and glutamic proteases were not described until 1995 and 2004 respectively. The mechanism used to cleave a peptide bond involves making a nucleophile of either an amino acid residue (in serine, cysteine, and threonine proteases) or a water molecule (in aspartic, glutamic and metalloproteases) so that it can attack the peptide carbonyl group. One way to make a nucleophile is by a catalytic triad, where a histidine residue is used to activate serine, cysteine, or threonine as a nucleophile. This is not an evolutionary grouping, however, as the nucleophile types have evolved convergently in different superfamilies, and some superfamilies show divergent evolution to multiple different nucleophiles. Metalloproteases, aspartic, and glutamic proteases utilize their active site residues to activate a water molecule, which then attacks the scissile bond.
Peptide lyases
A seventh catalytic type of proteolytic enzymes, asparagine peptide lyase, was described in 2011. Its proteolytic mechanism is unusual since, rather than hydrolysis, it performs an elimination reaction. During this reaction, the catalytic asparagine forms a cyclic chemical structure that cleaves itself at asparagine residues in proteins under the right conditions. Given its fundamentally different mechanism, its inclusion as a peptidase may be debatable.
Based on evolutionary phylogeny
An up-to-date classification of protease evolutionary superfamilies is found in the MEROPS database. In this database, proteases are classified firstly by 'clan' (superfamily) based on structure, mechanism and catalytic residue order (e.g. the PA clan where P indicates a mixture of nucleophile families). Within each 'clan', proteases are classified into families based on sequence similarity (e.g. the S1 and C3 families within the PA clan). Each family may contain many hundreds of related proteases (e.g. trypsin, elastase, thrombin and streptogrisin within the S1 family).
Currently more than 50 clans are known, each indicating an independent evolutionary origin of proteolysis.
Based on optimal pH
Alternatively, proteases may be classified by the optimal pH in which they are active:
Acid proteases
Neutral proteases, which are involved in type 1 hypersensitivity; they are released by mast cells and cause activation of complement and kinins. This group includes the calpains.
Basic proteases (or alkaline proteases)
Enzymatic function and mechanism
Proteases are involved in digesting long protein chains into shorter fragments by splitting the peptide bonds that link amino acid residues. Some detach the terminal amino acids from the protein chain (exopeptidases, such as aminopeptidases, carboxypeptidase A); others attack internal peptide bonds of a protein (endopeptidases, such as trypsin, chymotrypsin, pepsin, papain, elastase).
Catalysis
Catalysis is achieved by one of two mechanisms:
Aspartic, glutamic, and metallo-proteases activate a water molecule, which performs a nucleophilic attack on the peptide bond to hydrolyze it.
Serine, threonine, and cysteine proteases use a nucleophilic residue (usually in a catalytic triad). That residue performs a nucleophilic attack to covalently link the protease to the substrate protein, releasing the first half of the product. This covalent acyl-enzyme intermediate is then hydrolyzed by activated water to complete catalysis by releasing the second half of the product and regenerating the free enzyme.
Specificity
Proteolysis can be highly promiscuous such that a wide range of protein substrates are hydrolyzed. This is the case for digestive enzymes such as trypsin, which have to be able to cleave the array of proteins ingested into smaller peptide fragments. Promiscuous proteases typically bind to a single amino acid on the substrate and so only have specificity for that residue. For example, trypsin is specific for the sequences ...K\... or ...R\... ('\'=cleavage site).
Conversely, some proteases are highly specific and only cleave substrates with a certain sequence. Blood clotting (such as thrombin) and viral polyprotein processing (such as TEV protease) require this level of specificity in order to achieve precise cleavage events. This is achieved by proteases having a long binding cleft or tunnel with several pockets that bind to specified residues. For example, TEV protease is specific for the sequence ...ENLYFQ\S... ('\'=cleavage site).
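To make these two specificity regimes concrete, here is a minimal Python sketch contrasting a trypsin-like rule (cut after every K or R) with a TEV-like rule (cut only after the full ENLYFQ motif when followed by S). The function name, the example sequences, and the simplifications (e.g. ignoring trypsin's reduced cleavage before proline) are illustrative assumptions, not a model of the real enzymes:

    import re

    def cleave(sequence, pattern):
        # Split a protein sequence after every match of a cleavage regex.
        # The regex marks the residue(s) immediately preceding the scissile
        # bond, so we cut right after each match. Illustrative only.
        fragments, start = [], 0
        for m in re.finditer(pattern, sequence):
            fragments.append(sequence[start:m.end()])
            start = m.end()
        fragments.append(sequence[start:])
        return [f for f in fragments if f]

    # Promiscuous, trypsin-like specificity: ...K\... or ...R\...
    print(cleave("MKWVTFISLLFLFSSAYSRGVFRR", r"[KR]"))

    # Highly specific, TEV-like specificity: ...ENLYFQ\S...
    # (hypothetical His-tagged substrate; cleavage between Q and S)
    print(cleave("MHHHHHHENLYFQSGSMKTAYIAK", r"ENLYFQ(?=S)"))

The promiscuous rule fragments the first sequence at every basic residue, while the specific rule cuts the second sequence exactly once, at the motif.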
Degradation and autolysis
Proteases, being themselves proteins, are cleaved by other protease molecules, sometimes of the same variety. This acts as a method of regulation of protease activity. Some proteases are less active after autolysis (e.g. TEV protease) whilst others are more active (e.g. trypsinogen).
Biodiversity of proteases
Proteases occur in all organisms, from prokaryotes to eukaryotes to viruses. These enzymes are involved in a multitude of physiological reactions from simple digestion of food proteins to highly regulated cascades (e.g., the blood-clotting
cascade, the complement system, apoptosis pathways, and the invertebrate prophenoloxidase-activating cascade). Proteases can either break specific peptide bonds (limited proteolysis), depending on the amino acid sequence of a protein, or completely break down a peptide to amino acids (unlimited proteolysis). The activity can be a destructive change (abolishing a protein's function or digesting it to its principal components), it can be an activation of a function, or it can be a signal in a signalling pathway.
Plants
Plant genomes encode hundreds of proteases, largely of unknown function. Those with known function are largely involved in developmental regulation. Plant proteases also play a role in regulation of photosynthesis.
Animals
Proteases are used throughout an organism for various metabolic processes. Acid proteases secreted into the stomach (such as pepsin) and serine proteases present in the duodenum (trypsin and chymotrypsin) enable the digestion of protein in food. Proteases present in blood serum (thrombin, plasmin, Hageman factor, etc.) play an important role in blood-clotting, as well as lysis of the clots, and the correct action of the immune system. Other proteases are present in leukocytes (elastase, cathepsin G) and play several different roles in metabolic control. Some snake venoms are also proteases, such as pit viper haemotoxin, and interfere with the victim's blood clotting cascade. Proteases determine the lifetime of other proteins that play important physiological roles, such as hormones, antibodies, or other enzymes. This is one of the fastest "switching on" and "switching off" regulatory mechanisms in the physiology of an organism.
By a complex cooperative action, proteases can catalyze cascade reactions, which result in rapid and efficient amplification of an organism's response to a physiological signal.
Bacteria
Bacteria secrete proteases to hydrolyse the peptide bonds in proteins and therefore break the proteins down into their constituent amino acids. Bacterial and fungal proteases are particularly important to the global carbon and nitrogen cycles in the recycling of proteins, and such activity tends to be regulated by nutritional signals in these organisms. The net impact of nutritional regulation of protease activity among the thousands of species present in soil can be observed at the overall microbial community level as proteins are broken down in response to carbon, nitrogen, or sulfur limitation.
Bacteria contain proteases responsible for general protein quality control (e.g. the AAA+ proteasome) by degrading unfolded or misfolded proteins.
A secreted bacterial protease may also act as an exotoxin, and be an example of a virulence factor in bacterial pathogenesis (for example, exfoliative toxin). Bacterial exotoxic proteases destroy extracellular structures.
Viruses
The genomes of some viruses encode one massive polyprotein, which needs a protease to cleave it into functional units (e.g. the hepatitis C virus and the picornaviruses). These proteases (e.g. TEV protease) have high specificity and only cleave a very restricted set of substrate sequences. They are therefore a common target for protease inhibitors.
Archaea
Archaea use proteases to regulate various cellular processes including cell signaling, metabolism, secretion, and protein quality control. Only two ATP-dependent proteases are found in archaea: the membrane-associated LonB protease and a soluble 20S proteasome complex.
Uses
The field of protease research is enormous. Since 2004, approximately 8,000 papers related to this field have been published each year. Proteases are used in industry, medicine and as a basic biological research tool.
Digestive proteases are part of many laundry detergents and are also used extensively in the bread industry in bread improver. A variety of proteases are used medically both for their native function (e.g. controlling blood clotting) or for completely artificial functions (e.g. for the targeted degradation of pathogenic proteins). Highly specific proteases such as TEV protease and thrombin are commonly used to cleave fusion proteins and affinity tags in a controlled fashion.
Protease-containing plant solutions called vegetarian rennet have been in use for hundreds of years in Europe and the Middle East for making kosher and halal cheeses. Vegetarian rennet from Withania coagulans has been in use for thousands of years as an Ayurvedic remedy for digestion and diabetes in the Indian subcontinent. It is also used to make paneer.
Inhibitors
The activity of proteases is inhibited by protease inhibitors. One example of protease inhibitors is the serpin superfamily. It includes alpha 1-antitrypsin (which protects the body from excessive effects of its own inflammatory proteases), alpha 1-antichymotrypsin (which does likewise), C1-inhibitor (which protects the body from excessive protease-triggered activation of its own complement system), antithrombin (which protects the body from excessive coagulation), plasminogen activator inhibitor-1 (which protects the body from inadequate coagulation by blocking protease-triggered fibrinolysis), and neuroserpin.
Natural protease inhibitors include the family of lipocalin proteins, which play a role in cell regulation and differentiation. Lipophilic ligands, attached to lipocalin proteins, have been found to possess tumor protease inhibiting properties. The natural protease inhibitors are not to be confused with the protease inhibitors used in antiretroviral therapy. Some viruses, with HIV/AIDS among them, depend on proteases in their reproductive cycle. Thus, protease inhibitors are developed as antiviral therapeutic agents.
Other natural protease inhibitors are used as defense mechanisms. Common examples are the trypsin inhibitors found in the seeds of some plants, most notable for humans being soybeans, a major food crop, where they act to discourage predators. Raw soybeans are toxic to many animals, including humans, until the protease inhibitors they contain have been denatured.
See also
Ligase
Protease
cysteine-
serine-
threonine-
aspartic-
glutamic-
metallo-
PA clan
Convergent evolution
Proteolysis
Catalytic triad
The Proteolysis Map
Proteases in angiogenesis
Intramembrane proteases
Protease inhibitor (pharmacology)
Protease inhibitor (biology)
TopFIND - database of protease specificity, substrates, products and inhibitors
MEROPS - Database of protease evolutionary groups
References
External links
International Proteolysis Society
MEROPS - the peptidase database
List of protease inhibitors
Protease cutting predictor
List of proteases and their specificities
Proteolysis MAP from Center for Proteolytic Pathways
Proteolysis Cut Site database - curated expert annotation from users
Protease cut sites graphical interface
TopFIND protease database covering cut sites, substrates and protein termini
Post-translational modification | Protease | [
"Chemistry"
] | 3,105 | [
"Post-translational modification",
"Gene expression",
"Biochemical reactions"
] |
57,326 | https://en.wikipedia.org/wiki/De%20Moivre%27s%20formula | In mathematics, de Moivre's formula (also known as de Moivre's theorem and de Moivre's identity) states that for any real number x and integer n it is the case that

(cos x + i sin x)^n = cos nx + i sin nx,
where i is the imaginary unit (i^2 = −1). The formula is named after Abraham de Moivre, although he never stated it in his works. The expression cos x + i sin x is sometimes abbreviated to cis x.
The formula is important because it connects complex numbers and trigonometry. By expanding the left hand side and then comparing the real and imaginary parts under the assumption that x is real, it is possible to derive useful expressions for cos nx and sin nx in terms of cos x and sin x.
As written, the formula is not valid for non-integer powers n. However, there are generalizations of this formula valid for other exponents. These can be used to give explicit expressions for the nth roots of unity, that is, complex numbers z such that z^n = 1.
Using the standard extensions of the sine and cosine functions to complex numbers, the formula is valid even when x is an arbitrary complex number.
Example
For x = 30° and n = 2, de Moivre's formula asserts that

(cos 30° + i sin 30°)^2 = cos 60° + i sin 60°,

or equivalently that

(√3/2 + i/2)^2 = 1/2 + i√3/2.

In this example, it is easy to check the validity of the equation by multiplying out the left side.
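As a worked check (using the values x = 30° and n = 2 reconstructed above), the multiplication can be carried out explicitly; in LaTeX form:

    \left(\tfrac{\sqrt{3}}{2} + \tfrac{i}{2}\right)^2
      = \tfrac{3}{4} + 2\cdot\tfrac{\sqrt{3}}{2}\cdot\tfrac{i}{2} + \tfrac{i^2}{4}
      = \tfrac{3}{4} - \tfrac{1}{4} + \tfrac{\sqrt{3}}{2}\,i
      = \tfrac{1}{2} + \tfrac{\sqrt{3}}{2}\,i
      = \cos 60^\circ + i\sin 60^\circ.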
Relation to Euler's formula
De Moivre's formula is a precursor to Euler's formula

e^{ix} = cos x + i sin x,

with x expressed in radians rather than degrees, which establishes the fundamental relationship between the trigonometric functions and the complex exponential function.
One can derive de Moivre's formula using Euler's formula and the exponential law for integer powers

(e^{ix})^n = e^{inx},

since Euler's formula implies that the left side is equal to (cos x + i sin x)^n while the right side is equal to cos nx + i sin nx.
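The identity can also be spot-checked numerically. The following short Python sketch (an illustrative verification, not part of the original article) compares both sides for several integer exponents using the standard cmath module:

    import cmath

    x = 0.7  # an arbitrary real angle, in radians
    for n in range(-3, 4):
        lhs = (cmath.cos(x) + 1j * cmath.sin(x)) ** n
        rhs = cmath.cos(n * x) + 1j * cmath.sin(n * x)
        # Both sides should agree to floating-point precision.
        assert abs(lhs - rhs) < 1e-12, (n, lhs, rhs)
    print("de Moivre's formula holds numerically for n = -3..3")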
Proof by induction
The truth of de Moivre's theorem can be established by using mathematical induction for natural numbers, and extended to all integers from there. For an integer n, call the following statement S(n):

(cos x + i sin x)^n = cos nx + i sin nx.

For n > 0, we proceed by mathematical induction. S(1) is clearly true. For our hypothesis, we assume S(k) is true for some natural k. That is, we assume

(cos x + i sin x)^k = cos kx + i sin kx.

Now, considering S(k + 1):

(cos x + i sin x)^(k+1)
  = (cos x + i sin x)^k (cos x + i sin x)
  = (cos kx + i sin kx)(cos x + i sin x)        [by the induction hypothesis]
  = cos kx cos x − sin kx sin x + i (cos kx sin x + sin kx cos x)
  = cos((k + 1)x) + i sin((k + 1)x).

See angle sum and difference identities.

We deduce that S(k) implies S(k + 1). By the principle of mathematical induction it follows that the result is true for all natural numbers. Now, S(0) is clearly true since (cos x + i sin x)^0 = 1 = cos 0 + i sin 0. Finally, for the negative integer cases, we consider an exponent of −n for natural n:

(cos x + i sin x)^(−n)
  = ((cos x + i sin x)^n)^(−1)
  = (cos nx + i sin nx)^(−1)
  = cos nx − i sin nx        (*)
  = cos(−nx) + i sin(−nx).

The equation (*) is a result of the identity 1/z = z̄/|z|^2, for z = cos nx + i sin nx. Hence, S(n) holds for all integers n.
Formulae for cosine and sine individually
For an equality of complex numbers, one necessarily has equality both of the real parts and of the imaginary parts of both members of the equation. If x, and therefore also cos x and sin x, are real numbers, then the identity of these parts can be written using binomial coefficients. This formula was given by 16th century French mathematician François Viète:

sin nx = Σ_{k=0}^{n} C(n,k) (cos x)^k (sin x)^(n−k) sin((n−k)π/2)
cos nx = Σ_{k=0}^{n} C(n,k) (cos x)^k (sin x)^(n−k) cos((n−k)π/2)

In each of these two equations, the final trigonometric function equals one or minus one or zero, thus removing half the entries in each of the sums. These equations are in fact valid even for complex values of x, because both sides are entire (that is, holomorphic on the whole complex plane) functions of x, and two such functions that coincide on the real axis necessarily coincide everywhere. Here are the concrete instances of these equations for n = 2 and n = 3:

cos 2x = cos^2 x − sin^2 x,   sin 2x = 2 sin x cos x
cos 3x = cos^3 x − 3 cos x sin^2 x,   sin 3x = 3 cos^2 x sin x − sin^3 x

The right-hand side of the formula for cos nx is in fact the value T_n(cos x) of the Chebyshev polynomial T_n at cos x.
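For instance, taking the standard Chebyshev polynomial T_2(t) = 2t^2 − 1, a small illustrative check in LaTeX form:

    T_2(\cos x) = 2\cos^2 x - 1 = \cos^2 x - (1 - \cos^2 x) = \cos^2 x - \sin^2 x = \cos 2x.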
Failure for non-integer powers, and generalization
De Moivre's formula does not hold for non-integer powers. The derivation of de Moivre's formula above involves a complex number raised to the integer power n. If a complex number is raised to a non-integer power, the result is multiple-valued (see failure of power and logarithm identities).
Roots of complex numbers
A modest extension of the version of de Moivre's formula given in this article can be used to find the nth roots of a complex number for a non-zero integer n. (This is equivalent to raising to a power of 1/n).
If z is a complex number, written in polar form as

z = r(cos x + i sin x),

then the nth roots of z are given by

r^(1/n) (cos((x + 2kπ)/n) + i sin((x + 2kπ)/n)),

where k varies over the integer values from 0 to |n| − 1.
This formula is also sometimes known as de Moivre's formula.
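As a brief worked illustration, the cube roots of z = 8 = 8(cos 0 + i sin 0) are, in LaTeX form:

    8^{1/3}\left(\cos\tfrac{2k\pi}{3} + i\sin\tfrac{2k\pi}{3}\right), \quad k = 0, 1, 2,

that is, 2, −1 + i√3, and −1 − i√3; cubing any of these recovers 8.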
Complex numbers raised to an arbitrary power
Generally, if z = r(cos x + i sin x) (in polar form) and w are arbitrary complex numbers, then the set of possible values is

z^w = { e^(w(ln r + i(x + 2kπ))) : k an integer }.

(Note that if w is a rational number that equals p/q in lowest terms then this set will have exactly q distinct values rather than infinitely many. In particular, if w is an integer then the set will have exactly one value, as previously discussed.) In contrast, de Moivre's formula gives

z^w = r^w (cos wx + i sin wx),

which is just the single value from this set corresponding to k = 0.
Analogues in other settings
Hyperbolic trigonometry
Since cosh x + sinh x = e^x, an analog to de Moivre's formula also applies to the hyperbolic trigonometry. For all integers n,

(cosh x + sinh x)^n = cosh nx + sinh nx.

If n is a rational number (but not necessarily an integer), then cosh nx + sinh nx will be one of the values of (cosh x + sinh x)^n.
Extension to complex numbers
For any integer n, the formula holds for any complex number z:

(cos z + i sin z)^n = cos nz + i sin nz,

where

cos z = (e^(iz) + e^(−iz))/2   and   sin z = (e^(iz) − e^(−iz))/(2i).
Quaternions
To find the roots of a quaternion there is an analogous form of de Moivre's formula. A quaternion in the form

q = d + a i + b j + c k

can be represented in the form

q = ρ (cos θ + ε sin θ)   for 0 ≤ θ < 2π.

In this representation,

ρ = √(d^2 + a^2 + b^2 + c^2),

and the trigonometric functions are defined as

cos θ = d/ρ   and   sin θ = ±√(a^2 + b^2 + c^2)/ρ.

In the case that a^2 + b^2 + c^2 ≠ 0,

ε = ±(a i + b j + c k)/√(a^2 + b^2 + c^2),

that is, the unit vector. This leads to the variation of De Moivre's formula:

q^n = ρ^n (cos nθ + ε sin nθ).
Example
To find the cube roots of

Q = 1 + i + j + k,

write the quaternion in the form

Q = 2(cos(π/3) + ε sin(π/3)),   where ε = (i + j + k)/√3.

Then the cube roots are given by:

Q^(1/3) = 2^(1/3) (cos θ + ε sin θ),   for θ = π/9, 7π/9, 13π/9.
2 × 2 matrices
With 2 × 2 matrices,

( cos φ   −sin φ )^n   =   ( cos nφ   −sin nφ )
( sin φ    cos φ )          ( sin nφ    cos nφ )

when n is an integer. This is a direct consequence of the isomorphism between the matrices of type

( a   −b )
( b    a )

and the complex plane.
References
External links
De Moivre's Theorem for Trig Identities by Michael Croucher, Wolfram Demonstrations Project.
Theorems in complex analysis
Articles containing proofs
Abraham de Moivre | De Moivre's formula | [
"Mathematics"
] | 1,196 | [
"Articles containing proofs",
"Theorems in mathematical analysis",
"Theorems in complex analysis"
] |
57,330 | https://en.wikipedia.org/wiki/Circulatory%20system | In vertebrates, the circulatory system is a system of organs that includes the heart, blood vessels, and blood which is circulated throughout the body. It includes the cardiovascular system, or vascular system, that consists of the heart and blood vessels (from Greek kardia meaning heart, and Latin vascula meaning vessels). The circulatory system has two divisions, a systemic circulation or circuit, and a pulmonary circulation or circuit. Some sources use the terms cardiovascular system and vascular system interchangeably with circulatory system.
The network of blood vessels are the great vessels of the heart including large elastic arteries, and large veins; other arteries, smaller arterioles, capillaries that join with venules (small veins), and other veins. The circulatory system is closed in vertebrates, which means that the blood never leaves the network of blood vessels. Many invertebrates such as arthropods have an open circulatory system with a heart that pumps a hemolymph which returns via the body cavity rather than via blood vessels. Diploblasts such as sponges and comb jellies lack a circulatory system.
Blood is a fluid consisting of plasma, red blood cells, white blood cells, and platelets; it is circulated around the body carrying oxygen and nutrients to the tissues and collecting and disposing of waste materials. Circulated nutrients include proteins and minerals and other components include hemoglobin, hormones, and gases such as oxygen and carbon dioxide. These substances provide nourishment, help the immune system to fight diseases, and help maintain homeostasis by stabilizing temperature and natural pH.
In vertebrates, the lymphatic system is complementary to the circulatory system. The lymphatic system carries excess plasma (filtered from the circulatory system capillaries as interstitial fluid between cells) away from the body tissues via accessory routes that return excess fluid back to blood circulation as lymph. The lymphatic system is a subsystem that is essential for the functioning of the blood circulatory system; without it the blood would become depleted of fluid.
The lymphatic system also works with the immune system. The circulation of lymph takes much longer than that of blood and, unlike the closed (blood) circulatory system, the lymphatic system is an open system. Some sources describe it as a secondary circulatory system.
The circulatory system can be affected by many cardiovascular diseases. Cardiologists are medical professionals who specialise in the heart, and cardiothoracic surgeons specialise in operating on the heart and its surrounding areas. Vascular surgeons focus on disorders of the blood vessels and lymphatic vessels.
Structure
The circulatory system includes the heart, blood vessels, and blood. The cardiovascular system in all vertebrates, consists of the heart and blood vessels. The circulatory system is further divided into two major circuits – a pulmonary circulation, and a systemic circulation. The pulmonary circulation is a circuit loop from the right heart taking deoxygenated blood to the lungs where it is oxygenated and returned to the left heart. The systemic circulation is a circuit loop that delivers oxygenated blood from the left heart to the rest of the body, and returns deoxygenated blood back to the right heart via large veins known as the venae cavae. The systemic circulation can also be defined as two parts – a macrocirculation and a microcirculation. An average adult contains five to six quarts (roughly 4.7 to 5.7 liters) of blood, accounting for approximately 7% of their total body weight. Blood consists of plasma, red blood cells, white blood cells, and platelets. The digestive system also works with the circulatory system to provide the nutrients the system needs to keep the heart pumping.
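As a rough illustrative check of the blood-volume figure above (assuming a 70 kg adult and a blood density of about 1.06 kg/L, both assumptions), in LaTeX form:

    0.07 \times 70\ \mathrm{kg} = 4.9\ \mathrm{kg}, \qquad 4.9\ \mathrm{kg} \div 1.06\ \mathrm{kg/L} \approx 4.6\ \mathrm{L},

close to the quoted 4.7–5.7 litre range.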
Further circulatory routes are associated, such as the coronary circulation to the heart itself, the cerebral circulation to the brain, renal circulation to the kidneys, and bronchial circulation to the bronchi in the lungs.
The human circulatory system is closed, meaning that the blood is contained within the vascular network. Nutrients travel through tiny blood vessels of the microcirculation to reach organs. The lymphatic system is an essential subsystem of the circulatory system consisting of a network of lymphatic vessels, lymph nodes, organs, tissues and circulating lymph. This subsystem is an open system. A major function is to carry the lymph, draining and returning interstitial fluid into the lymphatic ducts back to the heart for return to the circulatory system. Another major function is working together with the immune system to provide defense against pathogens.
Heart
The heart pumps blood to all parts of the body providing nutrients and oxygen to every cell, and removing waste products. The left heart pumps oxygenated blood returned from the lungs to the rest of the body in the systemic circulation. The right heart pumps deoxygenated blood to the lungs in the pulmonary circulation. In the human heart there is one atrium and one ventricle for each circulation, and with both a systemic and a pulmonary circulation there are four chambers in total: left atrium, left ventricle, right atrium and right ventricle. The right atrium is the upper chamber of the right side of the heart. The blood that is returned to the right atrium is deoxygenated (poor in oxygen) and passed into the right ventricle to be pumped through the pulmonary artery to the lungs for re-oxygenation and removal of carbon dioxide. The left atrium receives newly oxygenated blood from the lungs via the pulmonary vein, which is passed into the strong left ventricle to be pumped through the aorta to the different organs of the body.
Pulmonary circulation
The pulmonary circulation is the part of the circulatory system in which oxygen-depleted blood is pumped away from the heart, via the pulmonary artery, to the lungs and returned, oxygenated, to the heart via the pulmonary vein.
Oxygen-deprived blood from the superior and inferior vena cava enters the right atrium of the heart and flows through the tricuspid valve (right atrioventricular valve) into the right ventricle, from which it is then pumped through the pulmonary semilunar valve into the pulmonary artery to the lungs. Gas exchange occurs in the lungs, whereby carbon dioxide is released from the blood, and oxygen is absorbed. The pulmonary vein returns the now oxygen-rich blood to the left atrium.
A separate circuit from the systemic circulation, the bronchial circulation supplies blood to the tissue of the larger airways of the lung.
Systemic circulation
The systemic circulation is a circuit loop that delivers oxygenated blood from the left heart to the rest of the body through the aorta. Deoxygenated blood is returned in the systemic circulation to the right heart via two large veins, the inferior vena cava and superior vena cava, where it is pumped from the right atrium into the pulmonary circulation for oxygenation. The systemic circulation can also be defined as having two parts – a macrocirculation and a microcirculation.
Blood vessels
The blood vessels of the circulatory system are the arteries, veins, and capillaries. The large arteries and veins that take blood to, and away from the heart are known as the great vessels.
Arteries
Oxygenated blood enters the systemic circulation when leaving the left ventricle, via the aortic semilunar valve. The first part of the systemic circulation is the aorta, a massive and thick-walled artery. The aorta arches and gives branches supplying the upper part of the body. After passing through the aortic opening of the diaphragm at the level of the tenth thoracic vertebra, it enters the abdomen. Later, it descends and supplies branches to the abdomen, pelvis, perineum and the lower limbs.
The walls of the aorta are elastic. This elasticity helps to maintain the blood pressure throughout the body. When the aorta receives almost five litres of blood from the heart, it recoils and is responsible for pulsating blood pressure. As the aorta branches into smaller arteries, their elasticity goes on decreasing and their compliance goes on increasing.
Capillaries
Arteries branch into small passages called arterioles and then into the capillaries. The capillaries merge to bring blood into the venous system. The total length of muscle capillaries in a 70 kg human is estimated to be between 9,000 and 19,000 km.
Veins
Capillaries merge into venules, which merge into veins. The venous system feeds into the two major veins: the superior vena cava – which mainly drains tissues above the heart – and the inferior vena cava – which mainly drains tissues below the heart. These two large veins empty into the right atrium of the heart.
Portal veins
The general rule is that arteries from the heart branch out into capillaries, which collect into veins leading back to the heart. Portal veins are a slight exception to this. In humans, the only significant example is the hepatic portal vein which combines from capillaries around the gastrointestinal tract where the blood absorbs the various products of digestion; rather than leading directly back to the heart, the hepatic portal vein branches into a second capillary system in the liver.
Coronary circulation
The heart itself is supplied with oxygen and nutrients through a small "loop" of the systemic circulation and derives very little from the blood contained within the four chambers.
The coronary circulation system provides a blood supply to the heart muscle itself. The coronary circulation begins near the origin of the aorta by two coronary arteries: the right coronary artery and the left coronary artery. After nourishing the heart muscle, blood returns through the coronary veins into the coronary sinus and from there into the right atrium. Backflow of blood through its opening during atrial systole is prevented by the Thebesian valve. The smallest cardiac veins drain directly into the heart chambers.
Cerebral circulation
The brain has a dual blood supply, an anterior and a posterior circulation from arteries at its front and back. The anterior circulation arises from the internal carotid arteries to supply the front of the brain. The posterior circulation arises from the vertebral arteries, to supply the back of the brain and brainstem. The circulation from the front and the back join (anastomise) at the circle of Willis. The neurovascular unit, composed of various cells and vasculature channels within the brain, regulates the flow of blood to activated neurons in order to satisfy their high energy demands.
Renal circulation
The renal circulation is the blood supply to the kidneys; it contains many specialized blood vessels and receives around 20% of the cardiac output. It branches from the abdominal aorta and returns blood to the ascending inferior vena cava.
Development
The development of the circulatory system starts with vasculogenesis in the embryo. The human arterial and venous systems develop from different areas in the embryo. The arterial system develops mainly from the aortic arches, six pairs of arches that develop on the upper part of the embryo. The venous system arises from three bilateral veins during weeks 4 – 8 of embryogenesis. Fetal circulation begins within the 8th week of development. Fetal circulation does not include the lungs, which are bypassed via the truncus arteriosus. Before birth the fetus obtains oxygen (and nutrients) from the mother through the placenta and the umbilical cord.
Arteries
The human arterial system originates from the aortic arches and from the dorsal aortae starting from week 4 of embryonic life. The first and second aortic arches regress and form only the maxillary arteries and stapedial arteries respectively. The arterial system itself arises from aortic arches 3, 4 and 6 (aortic arch 5 completely regresses).
The dorsal aortae, present on the dorsal side of the embryo, are initially present on both sides of the embryo. They later fuse to form the basis for the aorta itself. Approximately thirty smaller arteries branch from this at the back and sides. These branches form the intercostal arteries, arteries of the arms and legs, lumbar arteries and the lateral sacral arteries. Branches to the sides of the aorta will form the definitive renal, suprarenal and gonadal arteries. Finally, branches at the front of the aorta consist of the vitelline arteries and umbilical arteries. The vitelline arteries form the celiac, superior and inferior mesenteric arteries of the gastrointestinal tract. After birth, the umbilical arteries will form the internal iliac arteries.
Veins
The human venous system develops mainly from the vitelline veins, the umbilical veins and the cardinal veins, all of which empty into the sinus venosus.
Function
About 98.5% of the oxygen in a sample of arterial blood in a healthy human, breathing air at sea-level pressure, is chemically combined with hemoglobin molecules. About 1.5% is physically dissolved in the other blood liquids and not connected to hemoglobin. The hemoglobin molecule is the primary transporter of oxygen in vertebrates.
Clinical significance
Many diseases affect the circulatory system. These include a number of cardiovascular diseases, affecting the heart and blood vessels; hematologic diseases that affect the blood, such as anemia; and lymphatic diseases affecting the lymphatic system. Cardiologists are medical professionals who specialise in the heart, and cardiothoracic surgeons specialise in operating on the heart and its surrounding areas. Vascular surgeons focus on the blood vessels.
Cardiovascular disease
Diseases affecting the cardiovascular system are called cardiovascular disease.
Many of these diseases are called "lifestyle diseases" because they develop over time and are related to a person's exercise habits, diet, whether they smoke, and other lifestyle choices a person makes. Atherosclerosis is the precursor to many of these diseases. It is where small atheromatous plaques build up in the walls of medium and large arteries. This may eventually grow or rupture to occlude the arteries. It is also a risk factor for acute coronary syndromes, which are diseases that are characterised by a sudden deficit of oxygenated blood to the heart tissue. Atherosclerosis is also associated with problems such as aneurysm formation or splitting ("dissection") of arteries.
Another major cardiovascular disease involves the creation of a clot, called a "thrombus". These can originate in veins or arteries. Deep venous thrombosis, which mostly occurs in the legs, is one cause of clots in the veins of the legs, particularly when a person has been stationary for a long time. These clots may embolise, meaning travel to another location in the body. The results of this may include pulmonary embolus, transient ischaemic attacks, or stroke.
Cardiovascular diseases may also be congenital in nature, such as heart defects or persistent fetal circulation, where the circulatory changes that are supposed to happen after birth do not. Not all congenital changes to the circulatory system are associated with diseases, a large number are anatomical variations.
Investigations
The function and health of the circulatory system and its parts are measured in a variety of manual and automated ways. These include simple methods such as those that are part of the cardiovascular examination, including the taking of a person's pulse as an indicator of a person's heart rate, the taking of blood pressure through a sphygmomanometer or the use of a stethoscope to listen to the heart for murmurs which may indicate problems with the heart's valves. An electrocardiogram can also be used to evaluate the way in which electricity is conducted through the heart.
Other more invasive means can also be used. A cannula or catheter inserted into an artery may be used to measure pulse pressure or pulmonary wedge pressures. Angiography, which involves injecting a dye into an artery to visualise an arterial tree, can be used in the heart (coronary angiography) or brain. At the same time as the arteries are visualised, blockages or narrowings may be fixed through the insertion of stents, and active bleeds may be managed by the insertion of coils. An MRI may be used to image arteries, called an MRI angiogram. For evaluation of the blood supply to the lungs a CT pulmonary angiogram may be used. Vascular ultrasonography may be used to investigate vascular diseases affecting the venous system and the arterial system including the diagnosis of stenosis, thrombosis or venous insufficiency. An intravascular ultrasound using a catheter is also an option.
Surgery
There are a number of surgical procedures performed on the circulatory system:
Coronary artery bypass surgery
Coronary stent used in angioplasty
Vascular surgery
Vein stripping
Cosmetic procedures
Cardiovascular procedures are more likely to be performed in an inpatient setting than in an ambulatory care setting; in the United States, only 28% of cardiovascular surgeries were performed in the ambulatory care setting.
Other animals
While humans, as well as other vertebrates, have a closed blood circulatory system (meaning that the blood never leaves the network of arteries, veins and capillaries), some invertebrate groups have an open circulatory system containing a heart but limited blood vessels. The most primitive, diploblastic animal phyla lack circulatory systems.
An additional transport system, the lymphatic system, which is only found in animals with a closed blood circulation, is an open system providing an accessory route for excess interstitial fluid to be returned to the blood.
The blood vascular system first appeared probably in an ancestor of the triploblasts over 600 million years ago, overcoming the time-distance constraints of diffusion, while endothelium evolved in an ancestral vertebrate some 540–510 million years ago.
Open circulatory system
In arthropods, the open circulatory system is a system in which a fluid in a cavity called the hemocoel or haemocoel bathes the organs directly with oxygen and nutrients, with there being no distinction between blood and interstitial fluid; this combined fluid is called hemolymph or haemolymph. Muscular movements by the animal during locomotion can facilitate hemolymph movement, but diverting flow from one area to another is limited. When the heart relaxes, blood is drawn back toward the heart through open-ended pores (ostia).
Hemolymph fills all of the interior hemocoel of the body and surrounds all cells. Hemolymph is composed of water, inorganic salts (mostly sodium, chloride, potassium, magnesium, and calcium), and organic compounds (mostly carbohydrates, proteins, and lipids). The primary oxygen transporter molecule is hemocyanin.
There are free-floating cells, the hemocytes, within the hemolymph. They play a role in the arthropod immune system.
Closed circulatory system
The circulatory systems of all vertebrates, as well as of annelids (for example, earthworms) and cephalopods (squids, octopuses and relatives) always keep their circulating blood enclosed within heart chambers or blood vessels and are classified as closed, just as in humans. Still, the systems of fish, amphibians, reptiles, and birds show various stages of the evolution of the circulatory system. Closed systems permit blood to be directed to the organs that require it.
In fish, the system has only one circuit, with the blood being pumped through the capillaries of the gills and on to the capillaries of the body tissues. This is known as single cycle circulation. The heart of fish is, therefore, only a single pump (consisting of two chambers).
In amphibians and most reptiles, a double circulatory system is used, but the heart is not always completely separated into two pumps. Amphibians have a three-chambered heart.
In reptiles, the ventricular septum of the heart is incomplete and the pulmonary artery is equipped with a sphincter muscle. This allows a second possible route of blood flow. Instead of blood flowing through the pulmonary artery to the lungs, the sphincter may be contracted to divert this blood flow through the incomplete ventricular septum into the left ventricle and out through the aorta. This means the blood flows from the capillaries to the heart and back to the capillaries instead of to the lungs. This process is useful to ectothermic (cold-blooded) animals in the regulation of their body temperature.
Mammals, birds and crocodilians show complete separation of the heart into two pumps, for a total of four heart chambers; it is thought that the four-chambered heart of birds and crocodilians evolved independently from that of mammals. Double circulatory systems permit blood to be repressurized after returning from the lungs, speeding up delivery of oxygen to tissues.
No circulatory system
Circulatory systems are absent in some animals, including flatworms. Their body cavity has no lining or enclosed fluid. Instead, a muscular pharynx leads to an extensively branched digestive system that facilitates direct diffusion of nutrients to all cells. The flatworm's dorso-ventrally flattened body shape also restricts the distance of any cell from the digestive system or the exterior of the organism. Oxygen can diffuse from the surrounding water into the cells, and carbon dioxide can diffuse out. Consequently, every cell is able to obtain nutrients, water and oxygen without the need of a transport system.
Some animals, such as jellyfish, have more extensive branching from their gastrovascular cavity (which functions as both a place of digestion and a form of circulation), this branching allows for bodily fluids to reach the outer layers, since the digestion begins in the inner layers.
History
The earliest known writings on the circulatory system are found in the Ebers Papyrus (16th century BCE), an ancient Egyptian medical papyrus containing over 700 prescriptions and remedies, both physical and spiritual. The papyrus acknowledges the connection of the heart to the arteries. The Egyptians thought air came in through the mouth and into the lungs and heart. From the heart, the air travelled to every member through the arteries. Although this concept of the circulatory system is only partially correct, it represents one of the earliest accounts of scientific thought.
In the 6th century BCE, the circulation of vital fluids through the body was known to the Ayurvedic physician Sushruta in ancient India. He also seems to have possessed knowledge of the arteries, described as 'channels' by Dwivedi & Dwivedi (2007). The first major ancient Greek research into the circulatory system was completed by Plato in the Timaeus, who argues that blood circulates around the body in accordance with the general rules that govern the motions of the elements in the body; accordingly, he does not place much importance in the heart itself. The valves of the heart were discovered by a physician of the Hippocratic school around the early 3rd century BC. However, their function was not properly understood then. Because blood pools in the veins after death, arteries look empty. Ancient anatomists assumed they were filled with air and that they were for the transport of air.
The Greek physician, Herophilus, distinguished veins from arteries but thought that the pulse was a property of arteries themselves. Greek anatomist Erasistratus observed that arteries that were cut during life bleed. He ascribed the fact to the phenomenon that air escaping from an artery is replaced with blood that enters between veins and arteries by very small vessels. Thus he apparently postulated capillaries but with reversed flow of blood.
In 2nd-century AD Rome, the Greek physician Galen knew that blood vessels carried blood and identified venous (dark red) and arterial (brighter and thinner) blood, each with distinct and separate functions. Growth and energy were derived from venous blood created in the liver from chyle, while arterial blood gave vitality by containing pneuma (air) and originated in the heart. Blood flowed from both creating organs to all parts of the body where it was consumed and there was no return of blood to the heart or liver. The heart did not pump blood around; the heart's motion sucked blood in during diastole, and the blood moved by the pulsation of the arteries themselves. Galen believed that the arterial blood was created by venous blood passing from the right ventricle to the left through 'pores' in the interventricular septum, while air passed from the lungs via the pulmonary artery to the left side of the heart. As the arterial blood was created, 'sooty' vapors were created and passed to the lungs, also via the pulmonary artery, to be exhaled.
In 1025, The Canon of Medicine by the Persian physician, Avicenna, "erroneously accepted the Greek notion regarding the existence of a hole in the ventricular septum by which the blood traveled between the ventricles." Despite this, Avicenna "correctly wrote on the cardiac cycles and valvular function", and "had a vision of blood circulation" in his Treatise on Pulse. While also refining Galen's erroneous theory of the pulse, Avicenna provided the first correct explanation of pulsation: "Every beat of the pulse comprises two movements and two pauses. Thus, expansion : pause : contraction : pause. [...] The pulse is a movement in the heart and arteries ... which takes the form of alternate expansion and contraction."
In 1242, the Arabian physician, Ibn al-Nafis described the process of pulmonary circulation in greater, more accurate detail than his predecessors, though he believed, as they did, in the notion of vital spirit (pneuma), which he believed was formed in the left ventricle. Ibn al-Nafis stated in his Commentary on Anatomy in Avicenna's Canon:
...the blood from the right chamber of the heart must arrive at the left chamber but there is no direct pathway between them. The thick septum of the heart is not perforated and does not have visible pores as some people thought or invisible pores as Galen thought. The blood from the right chamber must flow through the vena arteriosa (pulmonary artery) to the lungs, spread through its substances, be mingled there with air, pass through the arteria venosa (pulmonary vein) to reach the left chamber of the heart and there form the vital spirit...
In addition, Ibn al-Nafis had an insight into what later became a larger theory of the capillary circulation. He stated that "there must be small communications or pores (manafidh in Arabic) between the pulmonary artery and vein," a prediction that preceded the discovery of the capillary system by more than 400 years. Ibn al-Nafis' theory was confined to blood transit in the lungs and did not extend to the entire body.
Michael Servetus was the first European to describe the function of pulmonary circulation, although his achievement was not widely recognized at the time, for a few reasons. He first described it in the "Manuscript of Paris" (c. 1546), but this work was never published. Later, he published this description, but in a theological treatise, Christianismi Restitutio, not in a book on medicine. Only three copies of the book survived, but these remained hidden for decades; the rest were burned shortly after its publication in 1553 because of the persecution of Servetus by religious authorities.
A better known discovery of pulmonary circulation was by Vesalius's successor at Padua, Realdo Colombo, in 1559.
Finally, the English physician William Harvey, a pupil of Hieronymus Fabricius (who had earlier described the valves of the veins without recognizing their function), performed a sequence of experiments and published his Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus in 1628, which "demonstrated that there had to be a direct connection between the venous and arterial systems throughout the body, and not just the lungs. Most importantly, he argued that the beat of the heart produced a continuous circulation of blood through minute connections at the extremities of the body. This is a conceptual leap that was quite different from Ibn al-Nafis' refinement of the anatomy and bloodflow in the heart and lungs." This work, with its essentially correct exposition, slowly convinced the medical world. However, Harvey did not identify the capillary system connecting arteries and veins; this was discovered by Marcello Malpighi in 1661.
See also
References
External links
Circulatory Pathways in Anatomy and Physiology by OpenStax
The Circulatory System
Michael Servetus Research Study on the Manuscript of Paris by Servetus (1546 description of the Pulmonary Circulation)
Exercise physiology
Angiology | Circulatory system | [
"Biology"
] | 6,106 | [
"Organ systems",
"Circulatory system"
] |
57,331 | https://en.wikipedia.org/wiki/Microwave%20auditory%20effect | The microwave auditory effect, also known as the microwave hearing effect or the Frey effect, consists of the human perception of sounds induced by pulsed or modulated radio frequencies. The perceived sounds are generated directly inside the human head without the need of any receiving electronic device. The effect was first reported by persons working in the vicinity of radar transponders during World War II. In 1961, the American neuroscientist Allan H. Frey studied this phenomenon and was the first to publish information on the nature of the microwave auditory effect. The cause is thought to be thermoelastic expansion of portions of the auditory apparatus, although competing theories explain the results of holographic interferometry tests differently.
Research in the U.S.
Allan H. Frey was the first American to publish on the microwave auditory effect (MAE). Frey's "Human auditory system response to modulated electromagnetic energy" appeared in the Journal of Applied Physiology in 1961. In his experiments, the subjects were discovered to be able to hear appropriately pulsed microwave radiation, from a distance of a few inches to hundreds of feet from the transmitter. In Frey's tests, a repetition rate of 50 Hz was used, with pulse width between 10–70 microseconds. The perceived loudness was found to be linked to the peak power density, instead of average power density. At 1.245 GHz, the peak power density for perception was below 80 mW/cm2. According to Frey, the induced sounds were described as "a buzz, clicking, hiss, or knocking, depending on several transmitter parameters, i.e., pulse width and pulse-repetition rate". By changing transmitter parameters, Frey was able to induce the "perception of severe buffeting of the head, without such apparent vestibular symptoms as dizziness or nausea". Other transmitter parameters induced a pins and needles sensation. Frey experimented with nerve-deaf subjects, and speculated that the human detecting mechanism was in the cochlea, but at the time of the experiment the results were inconclusive due to factors such as tinnitus.
Auditory sensations of clicking or buzzing have been reported by some workers at modern-day microwave transmitting sites that emit pulsed microwave radiation. Auditory responses to transmitted frequencies from approximately 200 MHz to at least 3 GHz have been reported. The cause is thought to be thermoelastic expansion of portions of the auditory apparatus, and the generally accepted mechanism is rapid (but minuscule, in the range of 10^−5 °C) heating of the brain by each pulse, and the resulting pressure wave traveling through the skull to the cochlea.
In 1975, an article by neuropsychologist Don Justesen discussing radiation effects on human perception referred to an experiment by Joseph C. Sharp and Mark Grove at the Walter Reed Army Institute of Research, during which Sharp and Grove reportedly were able to recognize nine out of ten words transmitted by "voice modulated microwaves". Although it has been reported that "power levels required for transmitting sound... [would cause] brain damage due to... thermal effects", the pops associated with the microwave auditory effect are not sustained over time; the effect is due to brief, sudden increases in temperature. So while Frey in 1961 reported threshold peak power densities for the microwave auditory effect of 267 mW/cm² at 1.3 GHz and 5,000 mW/cm² at 2.9 GHz, these peaks (which produce the pops) correspond to average (sustained) power densities of only 0.4 mW/cm² and 2 mW/cm² respectively, similar to current cellphones. However, it has been argued that even though the microwave auditory effect at threshold constitutes only a rapid 10^−6 °C rise in temperature on each pulse, a strong peak of around 1,400 kW/cm² (1.4 billion mW/cm²) would certainly be harmful due to the resulting pressure wave.
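The relationship between those peak and average figures follows from the duty cycle of the pulse train (average = peak × pulse width × repetition rate). A minimal Python sketch; the specific pulse width is an assumption chosen from within Frey's reported 10–70 microsecond range:

    # Average power density = peak power density * duty cycle,
    # where duty cycle = pulse width * pulse repetition rate.
    pulse_width_s = 30e-6      # assumed 30 us pulse, within Frey's range
    repetition_rate_hz = 50    # Frey's reported repetition rate
    duty_cycle = pulse_width_s * repetition_rate_hz

    peak_mw_per_cm2 = 267      # reported threshold peak at 1.3 GHz
    average_mw_per_cm2 = peak_mw_per_cm2 * duty_cycle
    print(f"duty cycle = {duty_cycle:.2%}, average = {average_mw_per_cm2:.2f} mW/cm2")
    # -> duty cycle = 0.15%, average = 0.40 mW/cm2, matching the figure above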
Electronic warfare
In 2003–04, WaveBand Corp. had a contract from the U.S. Navy for the design of an MAE system they called MEDUSA (Mob Excess Deterrent Using Silent Audio) that was intended to temporarily incapacitate personnel through remote application. Reportedly, Sierra Nevada Corp. took over the contract from WaveBand. Experts, such as Kenneth Foster, a University of Pennsylvania bioengineering professor who published research on the microwave auditory effect in 1974, have discounted the effectiveness of the proposed device. Foster said that because of human biophysics, the device "would kill you well before you were bothered by the noise". According to former professor at the University of Washington Bill Guy, "There's a misunderstanding by the public and even some scientists about this auditory effect," and "there couldn't possibly be a hazard from the sound, because the heat would get you first".
Microwave effects have been proposed as the cause of otherwise unexplained illnesses of U.S. diplomats in Cuba and China occurring since 2017 and 2018. However, this explanation has been debated. Bioengineer Kenneth R. Foster noted of the health effects observed in the diplomats, "it's crazy, but it's sure as heck not microwaves." As of October 2021, a microwave cause remains one of the major hypotheses.
Conspiracy theories
Numerous individuals suffering from auditory hallucinations, delusional disorders, or other mental illnesses have claimed that government agents use forms of mind control technologies based on microwave signals to transmit sounds and thoughts into their heads as a form of electronic harassment, referring to the alleged technology as "voice to skull" or "V2K".
There are extensive online support networks and numerous websites operated by people fearing mind control. Mental health professionals maintain that many of these websites exhibit evidence of delusional disorders, although they are divided over whether such sites reinforce mental troubles, or act as a form of group social support.
Psychologists have identified many examples of people reporting 'mind control experiences' (MCEs) on self-published web pages that are "highly likely to be influenced by delusional beliefs". Common themes include "Bad Guys" using "psychotronics" and "microwaves", frequent mention of the CIA's MKULTRA project, and frequent citing of Frey's 1962 paper entitled "Human auditory system response to modulated electromagnetic energy".
See also
Cosmic ray visual phenomena
Electroreception
Havana syndrome
Photoacoustic effect
Sound from ultrasound
Specific absorption rate – government standards for measurement of human radio frequency exposures
Notes
References and further reading
R.C. Jones, S.S. Stevens, and M.H. Lurie. J. Acoustic. Soc. Am. 12: 281, 1940.
H. Burr and A. Mauro. Yale J Biol. and Med. 21:455, 1949.
H. von Gierke. Noise Control 2: 37, 1956.
J. Zwislocki. J. Noise Control 4: 42, 1958.
R. Morrow and J. Seipel. J. Wash. Acad. SCI. 50: 1, 1960.
A.H. Frey. Aero Space Med. 32: 1140, 1961.
P.C. Neider and W.D. Neff. Science 133: 1010,1961.
R. Niest, L. Pinneo, R. Baus, J. Fleming, and R. McAfee. Annual Report. USA Rome Air Development Command, TR-61-65, 1961.
A.H. Frey. "Human auditory system response to modulated electromagnetic energy. " J Applied Physiol 17 (4): 689–92, 1962.
A.H. Frey. "Behavioral Biophysics", Psychol Bull 63(5): 322–37, 1965.
F.A. Giori and A.R. Winterberger. "Remote Physiological Monitoring Using a Microwave Interferometer", Biomed Sci Instr 3: 291–307, 1967.
A.H. Frey and R. Messenger. "Human Perception of Illumination with Pulsed Ultrahigh-Frequency Electromagnetic Energy", Science 181: 356–8, 1973.
R. Rodwell. "Army tests new riot weapon", New Scientist September 20, p. 684, 1973.
A.W. Guy, C.K. Chou, J.C. Lin, and D. Christensen. "Microwave induced acoustic effects in mammalian auditory systems and physical materials", Annals of New York Academy of Sciences, 247:194–218, 1975.
D.R. Justesen. "Microwaves and Behavior", Am Psychologist, 392 (Mar): 391–401, 1975.
S.M. Michaelson. "Sensation and Perception of Microwave Energy", In: S.M. Michaelson, M.W. Miller, R. Magin, and E.L. Carstensen (eds.), Fundamental and Applied Aspects of Nonionizing Radiation. Plenum Press, New York, pp. 213–24, 1975.
E.S. Eichert and A.H. Frey. "Human Auditory System Response to Lower Power Density Pulse Modulated Electromagnetic Energy: A Search for Mechanisms", J Microwave Power 11(2): 141, 1976.
W. Bise. "Low power radio-frequency and microwave effects on human electroencephalogram and behavior", Physiol Chem Phys 10(5): 387–98, 1978.
J.C. Lin. Microwave Auditory Effects and Applications, Thomas, Springfield Ill, p. 176, 1978.
P.L. Stocklin and B.F. Stocklin. "Possible Microwave Mechanisms of the Mammalian Nervous System", T-I-T J Life Sci 9: 29–51, 1979.
H. Frolich. "The Biological Effects of Microwaves and Related Questions", Adv Electronics Electron Physics 53: 85–152, 1980.
H. Lai. "Neurological Effects of Radiofrequency Electromagnetic Radiation" In: J.C. Lin (ed.), Advances in Electromagnetic Fields in Living Systems vol 1, Plenum, NY & London, pp. 27–80, 1994.
R.C. Beason and P. Semm. "Responses of neurons to an amplitude modulated microwave stimulus", Neurosci Lett 333: 175–78, 2002.
J.A. Elder and C.K. Chou. "Auditory Responses to Pulsed Radiofrequency Energy", Bioelectromagnetics Suppl 8: S162–73, 2003.
External links
Seaman, Ronald L., "Transmission of microwave-induced intracranial sound to the inner ear is most likely through cranial aqueducts," Mckesson Bioservices Corporation, Wrair United States Army Medical Research Detachment. (PDF)
Lin, J.C., 1980, "The microwave auditory phenomenon," Proceedings of the IEEE, 68:67–73. Navy-NSF-supported research.
Lin, JC., "Microwave auditory effect- a comparison of some possible transduction mechanisms". J Microwave Power. 1976 Mar;11(1):77–81. 1976.
Guy, A.W., C.K. Chou, J.C. Lin and D. Christensen, 1975, Microwave induced acoustic effects in mammalian auditory systems and physical materials, Annals of New York Academy of Sciences, 247:194–218
Fist, Stewart, "Australian exposure standards". Crossroads, The Australian, March 1999.
Microwave auditory effects and applications, James C. Lin; Publisher: Thomas;
United States Department of Defense, Air Force Research Laboratory comprehensive review on RFR-auditory effect in humans
"Auditory Responses to Pulsed Radiofrequency Energy" Bioelectromagnetics Suppl 8: S162-73, 2003.
Espionage
Human physiology
Non-lethal weapons
Cognitive neuroscience
Hearing
Mind control
Radio spectrum
Hallucinations | Microwave auditory effect | [
"Physics"
] | 2,462 | [
"Radio spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
57,399 | https://en.wikipedia.org/wiki/PILOT | Programmed Inquiry, Learning, or Teaching (PILOT) is a simple high-level programming language developed in the 1960s. Like its younger sibling LOGO, it was developed as an early foray into the technology of computer-assisted instruction.
PILOT is an imperative language similar to BASIC and FORTRAN in its basic layout and structure. Its keywords are single characters: T, for "type", prints text, and A, for "accept", inputs values from the user.
History
Starting in 1960, John Amsden Starkweather, a psychology professor at the University of California, San Francisco medical center, developed a simple system for automating the construction of computer question-and-answer tests. By 1962 the system was functional on the IBM 1620 and was given the name "Computest". This proved interesting enough to gain a grant for further development from the U.S. Office of Education in 1965.
Using this funding, Starkweather began development of an expanded version of the system with more functionality. He gave this version the new name PILOT. Early versions were shown in 1966, and the almost-complete version was released in 1968. The next year it was released into the public domain.
PILOT was later adopted by H. Dean Brown at the Stanford Research Institute (SRI) Education Laboratory. Brown popularized PILOT as a language for use directly by children. Brown's efforts changed the language from one intended for use by teachers to write tests and instructional materials to one intended to be used to teach programming.
In 1973, Starkweather brought together a number of people interested in computer aided teaching to develop a machine-independent specification for the language, PILOT-73. A portable subset was also defined as Core PILOT. Core was then ported to the Datapoint 2200, an Intel 8008-powered terminal that would later be known as a personal computer. At $13,000, this was more expensive than many contemporary minicomputers and did not see much use. However, this port proved very useful after the Intel 8080 came to market and spawned many inexpensive microcomputers.
Starting in the late 1970s, Western Washington University began expanding the language into Common PILOT. This formed the basis for a number of later microcomputer variants, including most versions on the MOS 6502. These varied from the Common definition, as well as from each other. PILOT on the Apple II was written in UCSD Pascal. These versions led to a revival of the PILOT language for teaching, and led to an expanded version known as Super PILOT which added device control so programs could play videodisks and similar tasks. Atari PILOT took the language in a different direction by adding turtle graphics adapted from LOGO.
For a time there was an effort to make a single standard for the language as IEEE Standard 1154-1991, but this was abandoned in 2000.
Language syntax
A line of PILOT code contains (from left to right) the following syntax elements:
an optional label
a command letter
an optional Y (for yes) or N (for no)
an optional conditional expression in parentheses
a colon (":")
an operand, or multiple operands delimited by commas.
A label can also be alone in a line, not followed by other code. The syntax for a label is an asterisk followed by an identifier (alphanumeric string with alphabetic initial character).
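Such a line is regular enough to be recognized with a single pattern. The following Python sketch is illustrative only: the regular expressions and the function name are the author's own, not part of any PILOT specification, and it tokenizes a line into the elements listed above.

import re

# A label alone on a line: an asterisk followed by an identifier.
LABEL_RE = re.compile(r"^\s*\*([A-Za-z][A-Za-z0-9]*)\s*$")
# A full statement: optional *label, command letter, optional Y/N,
# optional (condition), a colon, then the operand field.
STMT_RE = re.compile(
    r"^\s*(?:\*(?P<label>[A-Za-z][A-Za-z0-9]*)\s+)?"
    r"(?P<cmd>[A-Za-z])(?P<yn>[YN])?"
    r"(?:\((?P<cond>[^)]*)\))?:(?P<operand>.*)$"
)

def parse_line(line):
    """Return the syntax elements of one PILOT source line as a dict."""
    m = LABEL_RE.match(line)
    if m:
        return {"label": m.group(1), "cmd": None}
    m = STMT_RE.match(line)
    if not m:
        raise SyntaxError("not a PILOT line: " + repr(line))
    return m.groupdict()   # commands such as A: and M: then split the operand on commas

print(parse_line("T(#X>#Y+#Z):Condition met"))
# {'label': None, 'cmd': 'T', 'yn': None, 'cond': '#X>#Y+#Z', 'operand': 'Condition met'}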
Command letters
The following commands are used in "core PILOT". Lines beginning with "R:" indicate a remark (or a comment) explaining the code that follows.
A Accept input into "accept buffer". Examples:
R:Next line of input replaces current contents of accept buffer
A:
R:Next line of input replaces accept buffer, and string variable 'FREE'
A:$FREE
R:Next 3 lines of input assigned to string variables 'X', 'Y' and 'Z'
A:$X,$Y,$Z
R:Numeric input assigned to numeric variable "Q"
A:#Q
C Compute and assign numeric value. Most PILOT implementations have only integer arithmetic, and no arrays. Example:
R:Assign arithmetic mean of #X and #Y to #AM
C:#AM=(#X+#Y)/2
D Dimension an array, on some implementations.
E End (return from) subroutine or (if outside of a subroutine) abort program. Always used without any operand.
J Jump to a label. Example:
J:*RESTART
M Match the accept buffer against string variables or string literals. Example:
R:Search accept buffer for "TRUTH", the value of MEXICO and "YOUTH", in that order
M:TRUTH,$MEXICO,YOUTH
The first match string (if any) that is a substring of the accept buffer is assigned to the special variable $MATCH. The buffer characters left of the first match are assigned to $LEFT, and the characters on the right are assigned to $RIGHT.
The match flag is set to 'yes' or 'no', depending on whether a match is made. Any statement that has a Y following the command letter is processed only if the match flag is set. Statements with N are processed only if the flag is not set.
N Equivalent to TN: (type if last match unsuccessful)
R The operand of R: is a comment, and therefore has no effect.
T 'Type' operand as output. Examples:
R:The next line prints a literal string
T:Thank you for your support.
R:The next line combines a literal string with a variable expression
T:Thank you, $NAME.
U Use (call) a subroutine. A subroutine starts with a label and ends with E:. Example:
R:Call subroutine starting at label *INITIALIZE
U:*INITIALIZE
Y Equivalent to TY: (type if last match successful)
Parentheses If there is a parenthesized expression in a statement, it is a conditional expression, and the statement is processed only if the test has a value of 'true'. Example:
R:Type message if x>y+z
T(#X>#Y+#Z):Condition met
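To make the control flow of these commands concrete, here is a minimal, illustrative interpreter in Python for a small subset of core PILOT (T, Y, N, A, M, J, U, E and R, with labels alone on their own lines; C: arithmetic and parenthesized conditionals are omitted). The function and variable names are the author's own, and real implementations differ in many details.

def run(program):
    """Interpret a tiny subset of core PILOT (sketch, not a full implementation)."""
    lines = [ln.strip() for ln in program.splitlines() if ln.strip()]
    labels = {ln[1:]: i for i, ln in enumerate(lines) if ln.startswith("*")}
    svars, buf, matched, stack, pc = {}, "", False, [], 0

    def subst(text):                    # expand $NAME references
        for name, val in svars.items():
            text = text.replace("$" + name, val)
        return text

    while pc < len(lines):
        line = lines[pc]; pc += 1
        if line.startswith("*"):        # labels have no run-time effect
            continue
        cmd, _, arg = line.partition(":")
        if (cmd.endswith("Y") and not matched) or (cmd.endswith("N") and matched):
            continue                    # Y/N statements depend on the match flag
        c = cmd[0]
        if c in "TYN":                  # T:, plus Y:/N: as conditional type
            print(subst(arg))
        elif c == "A":                  # accept a line of input
            buf = input()
            if arg.startswith("$"):
                svars[arg[1:]] = buf    # single-variable form only, for brevity
        elif c == "M":                  # match buffer against comma-separated items
            hit = next((p for p in (subst(s.strip()) for s in arg.split(","))
                        if p in buf), None)
            matched = hit is not None
            if matched:
                svars["MATCH"] = hit
                svars["LEFT"], _, svars["RIGHT"] = buf.partition(hit)
        elif c == "J":                  # jump to a label
            pc = labels[arg.strip().lstrip("*")]
        elif c == "U":                  # call the subroutine at a label
            stack.append(pc); pc = labels[arg.strip().lstrip("*")]
        elif c == "E":                  # return, or end the program
            if stack:
                pc = stack.pop()
            else:
                return
        # R: remarks, and anything unrecognised, are ignored in this sketch

A short session with it:

run("""
T:What is your favourite colour?
A:$COLOUR
M:RED,BLUE,GREEN
Y:$MATCH is a fine choice.
N:I do not know $COLOUR.
E:
""")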
Derivatives
Extensions to core PILOT include arrays and floating point numbers in Apple PILOT for the Apple II, and LOGO-inspired turtle graphics in Atari PILOT for Atari 8-bit computers.
Between 1979 and 1983 the UK PILOT User Group was run by Alec Wood, a teacher at Wirral Grammar School for Boys, Merseyside, UK. Several machine code versions of a mini PILOT were produced for the microcomputers of the time, and a school in Scotland developed an interactive foreign language tutorial where pupils guided footprints around a town asking and answering questions in German, French, etc. An article in the December 1979 issue of Computer Age covered an early implementation called Tiny Pilot and gave a complete machine code listing.
Versions of PILOT overlaid on the BASIC interpreters of early microcomputers were not unknown in the late 1970s and early 1980s, and Byte Magazine at one point published a non-Turing-complete derivative of PILOT known as Waduzitdo, by Larry Kheriaty, as a way of demonstrating what a computer was capable of. PETPILOT (PILOT for the Commodore PET) was the first non-Commodore language for the PET and was written in the Microsoft BASIC which shipped with the PET, with a little assistance from Bill Gates. It was created in 1979 by Dave Gomberg and could run on a 4K PET (which was never shipped) and ran well on the 8K PETs that Commodore shipped. It was written in Larry Tesler's living room on PET serial number 2.
1983's Vanilla PILOT for the Commodore 64 added turtle graphics, as did Super Turtle PILOT which was published as a type-in listing in the October 1987 issue of COMPUTE! magazine.
In 1991 the Institute of Electrical and Electronics Engineers (IEEE) published a standard for PILOT as IEEE Std 1154-1991. It has since been withdrawn. A reference implementation based on this standard was written by Eric Raymond, who maintained it, reluctantly, for the next 15 years.
In 1990 eSTeem PILOT for Atari ST computers was developed and programmed by Tom Nielsen, EdD. Based on the IEEE standards for PILOT, it includes Atari-specific features such as control of LaserDisc and CD-ROM devices.
A 2018 hobbyist implementation, psPILOT, based in part on the IEEE standard, was implemented using Microsoft's PowerShell scripting language.
References
Further reading
Educational programming languages
IEEE standards | PILOT | [
"Technology"
] | 1,731 | [
"Computer standards",
"IEEE standards"
] |
57,414 | https://en.wikipedia.org/wiki/Evolutionary%20developmental%20biology | Evolutionary developmental biology (informally, evo-devo) is a field of biological research that compares the developmental processes of different organisms to infer how developmental processes evolved.
The field grew from 19th-century beginnings, where embryology faced a mystery: zoologists did not know how embryonic development was controlled at the molecular level. Charles Darwin noted that having similar embryos implied common ancestry, but little progress was made until the 1970s. Then, recombinant DNA technology at last brought embryology together with molecular genetics. A key early discovery was of homeotic genes that regulate development in a wide range of eukaryotes.
The field is composed of multiple core evolutionary concepts. One is deep homology, the finding that dissimilar organs such as the eyes of insects, vertebrates and cephalopod molluscs, long thought to have evolved separately, are controlled by similar genes such as pax-6, from the evo-devo gene toolkit. These genes are ancient, being highly conserved among phyla; they generate the patterns in time and space which shape the embryo, and ultimately form the body plan of the organism. Another is that species do not differ much in their structural genes, such as those coding for enzymes; what does differ is the way that gene expression is regulated by the toolkit genes. These genes are reused, unchanged, many times in different parts of the embryo and at different stages of development, forming a complex cascade of control, switching other regulatory genes as well as structural genes on and off in a precise pattern. This multiple pleiotropic reuse explains why these genes are highly conserved, as any change would have many adverse consequences which natural selection would oppose.
New morphological features and ultimately new species are produced by variations in the toolkit, either when genes are expressed in a new pattern, or when toolkit genes acquire additional functions. Another possibility is the neo-Lamarckian theory that epigenetic changes are later consolidated at gene level, something that may have been important early in the history of multicellular life.
History
Early theories
Philosophers began to think about how animals acquired form in the womb in classical antiquity. Aristotle asserts in his Physics treatise that according to Empedocles, order "spontaneously" appears in the developing embryo. In his The Parts of Animals treatise, he argues that Empedocles' theory was wrong. In Aristotle's account, Empedocles stated that the vertebral column is divided into vertebrae because, as it happens, the embryo twists about and snaps the column into pieces. Aristotle argues instead that the process has a predefined goal: that the "seed" that develops into the embryo began with an inbuilt "potential" to become specific body parts, such as vertebrae. Further, each sort of animal gives rise to animals of its own kind: humans only have human babies.
Recapitulation
A recapitulation theory of evolutionary development was proposed by Étienne Serres in 1824–26, echoing the 1808 ideas of Johann Friedrich Meckel. They argued that the embryos of 'higher' animals went through or recapitulated a series of stages, each of which resembled an animal lower down the great chain of being. For example, the brain of a human embryo looked first like that of a fish, then in turn like that of a reptile, bird, and mammal before becoming clearly human. The embryologist Karl Ernst von Baer opposed this, arguing in 1828 that there was no linear sequence as in the great chain of being, based on a single body plan, but a process of epigenesis in which structures differentiate. Von Baer instead recognized four distinct animal body plans: radiate, like starfish; molluscan, like clams; articulate, like lobsters; and vertebrate, like fish. Zoologists then largely abandoned recapitulation, though Ernst Haeckel revived it in 1866.
Evolutionary morphology
From the early 19th century through most of the 20th century, embryology faced a mystery. Animals were seen to develop into adults of widely differing body plan, often through similar stages, from the egg, but zoologists knew almost nothing about how embryonic development was controlled at the molecular level, and therefore equally little about how developmental processes had evolved. Charles Darwin argued that a shared embryonic structure implied a common ancestor. For example, Darwin cited in his 1859 book On the Origin of Species the shrimp-like larva of the barnacle, whose sessile adults looked nothing like other arthropods; Linnaeus and Cuvier had classified them as molluscs. Darwin also noted Alexander Kowalevsky's finding that the tunicate, too, was not a mollusc, but in its larval stage had a notochord and pharyngeal slits which developed from the same germ layers as the equivalent structures in vertebrates, and should therefore be grouped with them as chordates.
19th century zoology thus converted embryology into an evolutionary science, connecting phylogeny with homologies between the germ layers of embryos. Zoologists including Fritz Müller proposed the use of embryology to discover phylogenetic relationships between taxa. Müller demonstrated that crustaceans shared the Nauplius larva, identifying several parasitic species that had not been recognized as crustaceans. Müller also recognized that natural selection must act on larvae, just as it does on adults, giving the lie to recapitulation, which would require larval forms to be shielded from natural selection. Two of Haeckel's other ideas about the evolution of development have fared better than recapitulation: he argued in the 1870s that changes in the timing (heterochrony) and changes in the positioning within the body (heterotopy) of aspects of embryonic development would drive evolution by changing the shape of a descendant's body compared to an ancestor's. It took a century before these ideas were shown to be correct.
In 1917, D'Arcy Thompson wrote a book on the shapes of animals, showing with simple mathematics how small changes to parameters, such as the angles of a gastropod's spiral shell, can radically alter an animal's form, though he preferred a mechanical to evolutionary explanation. But without molecular evidence, progress stalled.
In 1952, Alan Turing published his paper "The Chemical Basis of Morphogenesis", on the development of patterns in animals' bodies. He suggested that morphogenesis could be explained by a reaction–diffusion system, a system of reacting chemicals able to diffuse through the body. He modelled catalysed chemical reactions using partial differential equations, showing that patterns emerged when the chemical reaction produced both a catalyst (A) and an inhibitor (B) that slowed down production of A. If A and B then diffused at different rates, A dominated in some places, and B in others. The Russian biochemist Boris Belousov had run experiments with similar results, but was unable to publish them because scientists thought at that time that creating visible order violated the second law of thermodynamics.
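Turing's mechanism is easy to demonstrate numerically. The sketch below integrates a one-dimensional Gray–Scott reaction–diffusion system, a standard modern illustration of the idea rather than Turing's own 1952 equations; the parameter values are conventional demonstration values, not anything from Turing's paper, and the pattern that emerges depends on them.

import numpy as np

# One-dimensional Gray-Scott reaction-diffusion: chemical U feeds an
# autocatalytic reaction producing V, V decays, and the two species diffuse
# at different rates. A uniform state seeded with a small perturbation
# typically develops a set of stationary peaks.
n, steps = 256, 10000
Du, Dv, F, k = 0.16, 0.08, 0.040, 0.060   # widely used illustrative parameters
U, V = np.ones(n), np.zeros(n)
mid = n // 2
V[mid-5:mid+5] = 0.5                       # local seed perturbation
U[mid-5:mid+5] = 0.5

def laplacian(a):                          # periodic 1-D Laplacian
    return np.roll(a, 1) - 2 * a + np.roll(a, -1)

for _ in range(steps):
    uvv = U * V * V                        # the autocatalytic term
    U += Du * laplacian(U) - uvv + F * (1.0 - U)
    V += Dv * laplacian(V) + uvv - (F + k) * V

peaks = np.flatnonzero((V > np.roll(V, 1)) & (V > np.roll(V, -1)) & (V > 0.1))
print("pattern peaks at cells:", peaks)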
The modern synthesis of the early 20th century
In the so-called modern synthesis of the early 20th century, between 1918 and 1930 Ronald Fisher brought together Darwin's theory of evolution, with its insistence on natural selection, heredity, and variation, and Gregor Mendel's laws of genetics into a coherent structure for evolutionary biology. Biologists assumed that an organism was a straightforward reflection of its component genes: the genes coded for proteins, which built the organism's body. Biochemical pathways (and, they supposed, new species) evolved through mutations in these genes. It was a simple, clear and nearly comprehensive picture: but it did not explain embryology. Sean B. Carroll has commented that had evo-devo's insights been available, embryology would certainly have played a central role in the synthesis.
The evolutionary embryologist Gavin de Beer anticipated evolutionary developmental biology in his 1930 book Embryos and Ancestors, by showing that evolution could occur by heterochrony, such as in the retention of juvenile features in the adult. This, de Beer argued, could cause apparently sudden changes in the fossil record, since embryos fossilise poorly. As the gaps in the fossil record had been used as an argument against Darwin's gradualist evolution, de Beer's explanation supported the Darwinian position. However, despite de Beer, the modern synthesis largely ignored embryonic development to explain the form of organisms, since population genetics appeared to be an adequate explanation of how forms evolved.
The lac operon
In 1961, Jacques Monod, Jean-Pierre Changeux and François Jacob discovered the lac operon in the bacterium Escherichia coli. It was a cluster of genes, arranged in a feedback control loop so that its products would only be made when "switched on" by an environmental stimulus. One of these products was an enzyme that splits a sugar, lactose; and lactose itself was the stimulus that switched the genes on. This was a revelation, as it showed for the first time that genes, even in organisms as small as a bacterium, are subject to precise control. The implication was that many other genes were also elaborately regulated.
The birth of evo-devo and a second synthesis
In 1977, a revolution in thinking about evolution and developmental biology began, with the arrival of recombinant DNA technology in genetics, the book Ontogeny and Phylogeny by Stephen Jay Gould and the paper "Evolution and Tinkering" by François Jacob. Gould laid to rest Haeckel's interpretation of evolutionary embryology, while Jacob set out an alternative theory.
This led to a second synthesis, at last including embryology as well as molecular genetics, phylogeny, and evolutionary biology to form evo-devo. In 1978, Edward B. Lewis discovered homeotic genes that regulate embryonic development in Drosophila fruit flies, which like all insects are arthropods, one of the major phyla of invertebrate animals.
Bill McGinnis quickly discovered homeotic gene sequences, homeoboxes, in animals in other phyla, in vertebrates such as frogs, birds, and mammals; they were later also found in fungi such as yeasts, and in plants. There were evidently strong similarities in the genes that controlled development across all the eukaryotes.
In 1980, Christiane Nüsslein-Volhard and Eric Wieschaus described gap genes which help to create the segmentation pattern in fruit fly embryos; they and Lewis won a Nobel Prize for their work in 1995.
Later, more specific similarities were discovered: for example, the distal-less gene was found in 1989 to be involved in the development of appendages or limbs in fruit flies, the fins of fish, the wings of chickens, the parapodia of marine annelid worms, the ampullae and siphons of tunicates, and the tube feet of sea urchins. It was evident that the gene must be ancient, dating back to the last common ancestor of bilateral animals (before the Ediacaran Period, which began some 635 million years ago). Evo-devo had started to uncover the ways that all animal bodies were built during development.
The control of body structure
Deep homology
Roughly spherical eggs of different animals give rise to unique morphologies, from jellyfish to lobsters, butterflies to elephants. Many of these organisms share the same structural genes for bodybuilding proteins like collagen and enzymes, but biologists had expected that each group of animals would have its own rules of development. The surprise of evo-devo is that the shaping of bodies is controlled by a rather small percentage of genes, and that these regulatory genes are ancient, shared by all animals. The giraffe does not have a gene for a long neck, any more than the elephant has a gene for a big body. Their bodies are patterned by a system of switching which causes development of different features to begin earlier or later, to occur in this or that part of the embryo, and to continue for more or less time.
The puzzle of how embryonic development was controlled began to be solved using the fruit fly Drosophila melanogaster as a model organism. The step-by-step control of its embryogenesis was visualized by attaching fluorescent dyes of different colours to specific types of protein made by genes expressed in the embryo. A dye such as green fluorescent protein, originally from a jellyfish, was typically attached to an antibody specific to a fruit fly protein, forming a precise indicator of where and when that protein appeared in the living embryo.
Using such a technique, in 1994 Walter Gehring found that the pax-6 gene, vital for forming the eyes of fruit flies, exactly matches an eye-forming gene in mice and humans. The same gene was quickly found in many other groups of animals, such as squid, a cephalopod mollusc. Biologists including Ernst Mayr had believed that eyes had arisen in the animal kingdom at least 40 times, as the anatomy of different types of eye varies widely. For example, the fruit fly's compound eye is made of hundreds of small lensed structures (ommatidia); the human eye has a blind spot where the optic nerve enters the eye, and its nerve fibres run over the surface of the retina, so that light has to pass through a layer of nerve fibres before reaching the detector cells and the structure is effectively "upside-down"; in contrast, the cephalopod eye has the retina, then a layer of nerve fibres, then the wall of the eye, "the right way around". The evidence of pax-6, however, was that the same genes controlled the development of the eyes of all these animals, suggesting that they all evolved from a common ancestor. Ancient genes had been conserved through millions of years of evolution to create dissimilar structures for similar functions, demonstrating deep homology between structures once thought to be purely analogous. This notion was later extended to the evolution of embryogenesis and has caused a radical revision of the meaning of homology in evolutionary biology.
Gene toolkit
A small fraction of the genes in an organism's genome control the organism's development. These genes are called the developmental-genetic toolkit. They are highly conserved among phyla, meaning that they are ancient and very similar in widely separated groups of animals. Differences in deployment of toolkit genes affect the body plan and the number, identity, and pattern of body parts. Most toolkit genes are parts of signalling pathways: they encode transcription factors, cell adhesion proteins, cell surface receptor proteins and signalling ligands that bind to them, and secreted morphogens that diffuse through the embryo. All of these help to define the fate of undifferentiated cells in the embryo. Together, they generate the patterns in time and space which shape the embryo, and ultimately form the body plan of the organism. Among the most important toolkit genes are the Hox genes. These transcription factors contain the homeobox protein-binding DNA motif, also found in other toolkit genes, and create the basic pattern of the body along its front-to-back axis.
Hox genes determine where repeating parts, such as the many vertebrae of snakes, will grow in a developing embryo or larva. Pax-6, already mentioned, is a classic toolkit gene. Although other toolkit genes are involved in establishing the plant bodyplan, homeobox genes are also found in plants, implying they are common to all eukaryotes.
The embryo's regulatory networks
The protein products of the regulatory toolkit are reused not by duplication and modification, but by a complex mosaic of pleiotropy, being applied unchanged in many independent developmental processes, giving pattern to many dissimilar body structures. The loci of these pleiotropic toolkit genes have large, complicated and modular cis-regulatory elements. For example, while a non-pleiotropic rhodopsin gene in the fruit fly has a cis-regulatory element just a few hundred base pairs long, the pleiotropic eyeless cis-regulatory region contains 6 cis-regulatory elements in over 7000 base pairs. The regulatory networks involved are often very large. Each regulatory protein controls "scores to hundreds" of cis-regulatory elements. For instance, 67 fruit fly transcription factors controlled on average 124 target genes each. All this complexity enables genes involved in the development of the embryo to be switched on and off at exactly the right times and in exactly the right places. Some of these genes are structural, directly forming enzymes, tissues and organs of the embryo. But many others are themselves regulatory genes, so what is switched on is often a precisely-timed cascade of switching, involving turning on one developmental process after another in the developing embryo.
Such a cascading regulatory network has been studied in detail in the development of the fruit fly embryo. The young embryo is oval in shape, like a rugby ball. A small number of genes produce messenger RNAs that set up concentration gradients along the long axis of the embryo. In the early embryo, the bicoid and hunchback genes are at high concentration near the anterior end, and give pattern to the future head and thorax; the caudal and nanos genes are at high concentration near the posterior end, and give pattern to the hindmost abdominal segments. The effects of these genes interact; for instance, the Bicoid protein blocks the translation of caudal messenger RNA, so the Caudal protein concentration becomes low at the anterior end. Caudal later switches on genes which create the fly's hindmost segments, but only at the posterior end where it is most concentrated.
The Bicoid, Hunchback and Caudal proteins in turn regulate the transcription of gap genes such as giant, knirps, Krüppel, and tailless in a striped pattern, creating the first level of structures that will become segments. The proteins from these in turn control the pair-rule genes, which in the next stage set up 7 bands across the embryo's long axis. Finally, the segment polarity genes such as engrailed split each of the 7 bands into two, creating 14 future segments.
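The logic of this cascade, graded maternal signals read out through thresholds to yield sharp bands, can be caricatured in a few lines of code. The following toy model is purely illustrative: the gradient shapes, decay rates and thresholds are invented, and only the Bicoid-blocks-caudal interaction and the threshold read-out are taken from the description above.

import numpy as np

# Positions along the embryo's long axis: 0 = anterior (head), 1 = posterior.
x = np.linspace(0.0, 1.0, 101)
bicoid = np.exp(-5 * x)                 # maternal Bicoid gradient, high at the head
caudal_mrna = np.exp(-5 * (1 - x))      # caudal mRNA graded toward the tail
caudal = caudal_mrna * (1 - bicoid)     # Bicoid blocks translation of caudal mRNA

# Threshold read-out: a hypothetical gap gene is on only where both gradients
# stay below an (invented) threshold, turning smooth gradients into a band.
band = (bicoid < 0.4) & (caudal < 0.4)
print("band from x = %.2f to x = %.2f" % (x[band][0], x[band][-1]))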
This process explains the accurate conservation of toolkit gene sequences, which has resulted in deep homology and functional equivalence of toolkit proteins in dissimilar animals (seen, for example, when a mouse protein controls fruit fly development). The interactions of transcription factors and cis-regulatory elements, or of signalling proteins and receptors, become locked in through multiple usages, making almost any mutation deleterious and hence eliminated by natural selection.
The mechanism that sets up every animal's front-back axis is the same, implying a common ancestor. There is a similar mechanism for the back-belly axis for bilaterian animals, but it is reversed between arthropods and vertebrates. Another process, gastrulation of the embryo, is driven by Myosin II molecular motors, which are not conserved across species. The process may have been started by movements of sea water in the environment, later replaced by the evolution of tissue movements in the embryo.
The origins of novelty
Among the more surprising and, perhaps, counterintuitive (from a neo-Darwinian viewpoint) results of recent research in evolutionary developmental biology is that the diversity of body plans and morphology in organisms across many phyla are not necessarily reflected in diversity at the level of the sequences of genes, including those of the developmental genetic toolkit and other genes involved in development. Indeed, as John Gerhart and Marc Kirschner have noted, there is an apparent paradox: "where we most expect to find variation, we find conservation, a lack of change". So, if the observed morphological novelty between different clades does not come from changes in gene sequences (such as by mutation), where does it come from? Novelty may arise by mutation-driven changes in gene regulation.
Variations in the toolkit
Variations in the toolkit may have produced a large part of the morphological evolution of animals. The toolkit can drive evolution in two ways. A toolkit gene can be expressed in a different pattern, as when the beak of Darwin's large ground-finch was enlarged by the BMP gene, or when snakes lost their legs as distal-less became under-expressed or not expressed at all in the places where other reptiles continued to form their limbs. Or, a toolkit gene can acquire a new function, as seen in the many functions of that same gene, distal-less, which controls such diverse structures as the mandible in vertebrates, legs and antennae in the fruit fly, and eyespot pattern in butterfly wings. Given that small changes in toolkit genes can cause significant changes in body structures, they have often enabled the same function convergently or in parallel. distal-less generates wing patterns in the butterflies Heliconius erato and Heliconius melpomene, which are Müllerian mimics. In so-called facilitated variation, their wing patterns arose in different evolutionary events, but are controlled by the same genes. Developmental changes can contribute directly to speciation.
Consolidation of epigenetic changes
Evolutionary innovation may sometimes begin in Lamarckian style with epigenetic alterations of gene regulation or phenotype generation, subsequently consolidated by changes at the gene level. Epigenetic changes include modification of DNA by reversible methylation, as well as nonprogrammed remoulding of the organism by physical and other environmental effects due to the inherent plasticity of developmental mechanisms. The biologists Stuart A. Newman and Gerd B. Müller have suggested that organisms early in the history of multicellular life were more susceptible to this second category of epigenetic determination than are modern organisms, providing a basis for early macroevolutionary changes.
Developmental bias
Development in specific lineages can be biased either positively, towards a given trajectory or phenotype, or negatively, away from producing certain types of change; either may be absolute (the change is always or never produced) or relative. Evidence for any such direction in evolution is however hard to acquire and can also result from developmental constraints that limit diversification. For example, in the gastropods, the snail-type shell is always built as a tube that grows both in length and in diameter; selection has created a wide variety of shell shapes such as flat spirals, cowries and tall turret spirals within these constraints. Among the centipedes, the Lithobiomorpha always have 15 trunk segments as adults, probably the result of a developmental bias towards an odd number of trunk segments. In another centipede order, the Geophilomorpha, the number of segments varies in different species between 27 and 191, but it is always odd, making this an absolute constraint; almost all the odd numbers in that range are occupied by one or another species.
Ecological evolutionary developmental biology
Ecological evolutionary developmental biology integrates research from developmental biology and ecology to examine their relationship with evolutionary theory. Researchers study concepts and mechanisms such as developmental plasticity, epigenetic inheritance, genetic assimilation, niche construction and symbiosis.
See also
Arthropod head problem
Cell signaling
Evolution & Development (journal)
Human evolutionary developmental biology
Just So Stories (as seen by evolutionary developmental biologists)
Plant evolutionary developmental biology
Recapitulation theory
Notes
References
Sources
External links
Evolutionary developmental biology
Subfields of evolutionary biology
Developmental biology
Extended evolutionary synthesis | Evolutionary developmental biology | [
"Biology"
] | 4,839 | [
"Behavior",
"Developmental biology",
"Subfields of evolutionary biology",
"Reproduction"
] |
57,455 | https://en.wikipedia.org/wiki/Sago | Sago () is a starch extracted from the pith, or spongy core tissue, of various tropical palm stems, especially those of Metroxylon sagu. It is a major staple food for the lowland peoples of New Guinea and the Maluku Islands, where it is called saksak, rabia and sagu. The largest supply of sago comes from Southeast Asia, particularly Indonesia and Malaysia. Large quantities of sago are sent to Europe and North America for cooking purposes. It is traditionally cooked and eaten in various forms, such as rolled into balls, mixed with boiling water to form a glue-like paste (papeda), or as a pancake.
Sago is often produced commercially in the form of "pearls" (small rounded starch aggregates, partly gelatinized by heating). Sago pearls can be boiled with water or milk and sugar to make a sweet sago pudding. Sago pearls are similar in appearance to the pearled starches of other origin, e.g. cassava starch (tapioca) and potato starch. They may be used interchangeably in some dishes, and tapioca pearls are often marketed as "sago", since they are much cheaper to produce. Compared to tapioca pearls, real sago pearls are off-white, uneven in size, brittle and cook very quickly.
The name sago is also sometimes used for starch extracted from other sources, especially the sago cycad, Cycas revoluta. The sago cycad is also commonly known as the sago palm, although this is a misnomer as cycads are not palms. Extracting edible starch from the sago cycad requires special care due to the poisonous nature of cycads. Cycad sago is used for many of the same purposes as palm sago.
The fruit of palm trees from which the sago is produced is not allowed to ripen fully, as full ripening completes the life cycle of the tree and exhausts the starch reserves in the trunk to produce the seeds to the point of death, leaving a hollow shell. The palms are cut down when they are about 15 years old, just before or shortly after the inflorescence appears. The tall stems are split out. The starch-containing pith is taken from the stems and ground to powder. The powder is kneaded in water over a cloth or sieve to release the starch. The water with the starch passes into a trough where the starch settles. After a few washings, the starch is ready to be used in cooking. A single palm yields a few hundred kilograms of dry starch.
Historical records
Sago was noted by the Chinese historian Zhao Rukuo (1170–1231) during the Song dynasty. In his Zhu Fan Zhi (1225), a collection of descriptions of foreign countries, he writes that the kingdom of Boni "produces no wheat, but hemp and rice, and they use sha-hu (sago) for grain".
Sources, extraction and preparation
Palm sago
The sago palm, Metroxylon sagu, is found in tropical lowland forest and freshwater swamps across Southeast Asia and New Guinea and is the primary source of sago. It tolerates a wide variety of soils and may reach 30 meters in height (including the leaves). Several other species of the genus Metroxylon, particularly Metroxylon salomonense and Metroxylon amicarum, are also used as sources of sago throughout Melanesia and Micronesia.
Sago palms grow very quickly, in clumps of different ages similar to bananas; one sucker matures, then flowers and dies. It is replaced by another sucker, with up to 1.5 m of vertical stem growth per year. The stems are thick and are either self-supporting or have a moderate climbing habit; the leaves are pinnate. Each palm trunk produces a single inflorescence at its tip at the end of its life. Sago palms are harvested at the age of 7–15 years, just before or shortly after the inflorescence appears and when the stems are full of starch stored for use in reproduction. One palm can yield 150–300 kg of starch.
Sago is extracted from Metroxylon palms by splitting the stem lengthwise and removing the pith which is then crushed and kneaded to release the starch before being washed and strained to extract the starch from the fibrous residue. The raw starch suspension in water is then collected in a settling container.
Cycad sago
The sago cycad, Cycas revoluta, is a slow-growing wild or ornamental plant. Its common names "sago palm" and "king sago palm" are misnomers as cycads are not palms. Processed starch known as sago is made from this and other cycads. It is a less-common food source for some peoples of the Pacific and Indian Oceans. Unlike palms, cycads are highly poisonous: most parts of the plant contain the neurotoxins cycasin and BMAA. Consumption of cycad seeds has been implicated in the outbreak of Parkinson's disease-like neurological disorder in Guam and other locations in the Pacific. Thus, before any part of the plant may safely be eaten the toxins must be removed through extended processing.
Sago is extracted from the sago cycad by cutting the pith from the stem, root and seeds of the cycads and grinding it to a coarse flour, which is then dried, pounded, and soaked. The starch is then washed carefully and repeatedly to leach out the natural toxins. The starchy residue is then dried and cooked, producing a starch similar to palm sago/sabudana.
Cassava sago
In many countries including Australia, Brazil, and India, tapioca pearls made from cassava root are also referred to as sago, sagu, sabudana, etc.
Uses
Nutrition
Sago from Metroxylon palms is nearly pure carbohydrate and has very little protein, vitamins, or minerals. 100 grams of dry sago typically comprises 94 grams of carbohydrate, 0.2 grams of protein, 0.5 grams of dietary fiber, 10 mg of calcium, 1.2 mg of iron and negligible amounts of fat, carotene, thiamine and ascorbic acid; nearly all of its food energy comes from that carbohydrate. Sago palms are typically found in areas unsuited for other forms of agriculture, so sago cultivation is often the most ecologically appropriate form of land-use and the nutritional deficiencies of the food can often be compensated for with other readily available foods.
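As a rough cross-check on these composition figures, generic Atwater factors (about 4 kcal per gram of carbohydrate or protein, 2 kcal per gram of dietary fiber) give an estimate of the food energy of dry sago; the short calculation below is only an estimate under those generic factors, not a measured value.

# Approximate food energy of 100 g of dry sago from the composition above,
# using generic Atwater factors (kcal per gram) -- an estimate only.
carb_g, protein_g, fiber_g = 94.0, 0.2, 0.5
kcal = 4 * (carb_g + protein_g) + 2 * fiber_g
print("approx. %.0f kcal (%.0f kJ) per 100 g" % (kcal, kcal * 4.184))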
Sago starch can be baked (resulting in a product analogous to bread, pancake, or biscuit) or mixed with boiling water to form a paste. It is a main staple of many traditional communities in New Guinea and Maluku in the form of papeda, Borneo, South Sulawesi (most known in Luwu Regency) and Sumatra. In Palembang, sago is one of the ingredients to make pempek. In Brunei, it is used for making the popular local dish called the ambuyat. It is also used commercially in making noodles and white bread. Sago starch can also be used as a thickener for other dishes. It can be made into steamed puddings such as sago plum pudding.
In Malaysia, the traditional food "keropok lekor" (fish cracker) uses sago as one of its main ingredients. In the making of the popular keropok lekor of Losong in Kuala Terengganu, each kilogram of fish meat is mixed with half a kilogram of fine sago, with a little salt added for flavour. Tons of raw sago are imported each year into Malaysia to support the keropok lekor industry.
In 1805, two captured crew members of the shipwrecked schooner Betsey were kept alive until their escape from an undetermined island on a diet of sago.
Any starch can be pearled by heating and stirring small aggregates of moist starch, producing partly gelatinized dry kernels that swell but remain intact on boiling. Pearl sago closely resembles pearl tapioca. Both are typically small (about 2 mm diameter) dry, opaque balls. Both may be white (if very pure) or colored naturally gray, brown or black, or artificially pink, yellow, green, etc. When soaked and cooked, both become much larger, translucent, soft and spongy. Both are widely used in Indian, Bangladeshi and Sri Lankan cuisine in a variety of dishes and around the world, usually in puddings. In India, it is used in a variety of dishes such as desserts boiled with sweetened milk on occasion of religious fasts.
The Penan people of Borneo have sago from Eugeissona palms as their staple carbohydrate.
Textile production
Sago starch is also used to treat fiber in a process called sizing, which makes fibers easier to machine. The process helps to bind the fiber, give it a predictable slip for running on metal, standardize the level of hydration of the fiber and give the textile more body. Most natural-based cloth and clothing has been sized; this leaves a residue which is removed in the first wash.
Other uses
Because many traditional people rely on sago-palm as their main food staple and because supplies are finite, in some areas commercial or industrial harvesting of wild stands of sago-palm can conflict with the food needs of local communities.
Research has also been conducted on using the waste from the sago palm industry as an adsorbent for cleaning up oil spills.
See also
Arenga pinnata
Landang
Sandige
References
Citations
General and cited references
Flach, M. and F. Rumawas, eds. (1996). Plant Resources of South-East Asia (PROSEA) No. 9: Plants Yielding Non-Seed Carbohydrates. Leiden: Blackhuys.
Lie, Goan-Hong. (1980). "The Comparative Nutritional Roles of Sago and Cassava in Indonesia." In: Stanton, W.R. and M. Flach, eds., Sago: The Equatorial Swamp as a Natural Resource. The Hague, Boston, London: Martinus Nijhoff.
McClatchey, W., H.I. Manner, and C.R. Elevitch. (2005). "Metroxylon amicarum, M. paulcoxii, M. sagu, M. salomonense, M. vitiense, and M. warburgii (sago palm), ver. 1.1". In: Elevitch, C.R. (ed.) Species Profiles for Pacific Island Agroforestry. Permanent Agriculture Resources (PAR), Holualoa, Hawaii.
Pickell, D. (2002). Between the Tides: A Fascinating Journey Among the Kamoro of New Guinea. Singapore: Periplus Press.
Stanton, W.R. and M. Flach, eds., Sago: The Equatorial Swamp as a Natural Resource. The Hague, Boston, London: Martinus Nijhoff.
Further reading
External links
Species profile for Metroxylon sagu
http://www.fao.org/ag/agA/AGAP/FRG/AFRIS/Data/416.HTM
Sago Uses
Edible thickening agents
Food ingredients
Indian cuisine
Indonesian cuisine
Malagasy cuisine
Melanesian cuisine
Oceanian cuisine
Papua New Guinean cuisine
Staple foods
Tropical agriculture | Sago | [
"Technology"
] | 2,432 | [
"Food ingredients",
"Components"
] |
57,484 | https://en.wikipedia.org/wiki/Sidereal%20year | A sidereal year (, ; ), also called a sidereal orbital period, is the time that Earth or another planetary body takes to orbit the Sun once with respect to the fixed stars.
Hence, for Earth, it is also the time taken for the Sun to return to the same position relative to Earth with respect to the fixed stars after apparently travelling once around the ecliptic.
It equals 365.256363 days (365 d 6 h 9 min 9.76 s) for the J2000.0 epoch. The sidereal year differs from the solar year, "the period of time required for the ecliptic longitude of the Sun to increase 360 degrees", due to the precession of the equinoxes.
The sidereal year is 20 min 24.5 s longer than the mean tropical year at J2000.0.
At present, the rate of axial precession corresponds to a period of 25,772 years, so the sidereal year is longer than the tropical year by 1,224.5 seconds (20 min 24.5 s, ≈ 365.24219 × 86400 / 25772).
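The 1,224.5-second figure is a direct consequence of the precession period, as this plain re-computation of the quoted formula shows:

# The equinox precesses through 360 degrees in ~25,772 years, so each tropical
# year falls short of a sidereal year by 1/25772 of a full circuit of the Sun.
tropical_year_s = 365.24219 * 86400      # mean tropical year in seconds
excess = tropical_year_s / 25772
print("%.1f s = %d min %.1f s" % (excess, excess // 60, excess % 60))
# 1224.5 s = 20 min 24.5 s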
Before the discovery of the precession of the equinoxes by Hipparchus in the Hellenistic period, the difference between sidereal and tropical year was unknown to the Greeks.
For naked-eye observation, the shift of the constellations relative to the equinoxes only becomes apparent over centuries or "ages", and pre-modern calendars such as Hesiod's Works and Days would give the times of the year for sowing, harvest, and so on by reference to the first visibility of stars, effectively using the sidereal year.
The Indian national calendar, based on the works of the Maga Brahmins, like the calendars of neighbouring countries, is traditionally reckoned by the Sun's entry into the sign of Aries; it is also supposed to align with the spring equinox and thus to have relevance to the harvesting and planting season and to the tropical year.
However, as the Sun's entry into the constellation now occurs about 25 days later, according to the astronomical calculation of the sidereal year, this date marks the South and Southeast Asian solar New Year in other countries and cultures.
See also
Anomalistic year
Gaussian year
Julian year (astronomy)
Orbital period
Precession § Astronomy
Sidereal time
Solar calendar
Tropical year
Mars time
Notes
Works cited
Astronomical coordinate systems
Types of year
Hindu astrology
Technical factors of astrology | Sidereal year | [
"Astronomy",
"Mathematics"
] | 504 | [
"Astronomical coordinate systems",
"Coordinate systems"
] |
57,526 | https://en.wikipedia.org/wiki/P%C3%A9clet%20number | In continuum mechanics, the Péclet number (, after Jean Claude Eugène Péclet) is a class of dimensionless numbers relevant in the study of transport phenomena in a continuum. It is defined to be the ratio of the rate of advection of a physical quantity by the flow to the rate of diffusion of the same quantity driven by an appropriate gradient. In the context of species or mass transfer, the Péclet number is the product of the Reynolds number and the Schmidt number (). In the context of the thermal fluids, the thermal Péclet number is equivalent to the product of the Reynolds number and the Prandtl number ().
The Péclet number is defined as:

$$\mathrm{Pe} = \frac{\text{advective transport rate}}{\text{diffusive transport rate}}$$

For mass transfer, it is defined as:

$$\mathrm{Pe}_L = \frac{L u}{D} = \mathrm{Re}_L \, \mathrm{Sc}$$

Such a ratio can also be rewritten in terms of times, as a ratio between the characteristic temporal intervals of the system:

$$\mathrm{Pe} = \frac{t_{\mathrm{diffusion}}}{t_{\mathrm{advection}}} = \frac{L^2 / D}{L / u} = \frac{L u}{D}$$

For $\mathrm{Pe} \gg 1$, diffusion happens in a much longer time compared to the advection, and therefore the latter of the two phenomena predominates in the mass transport.

For heat transfer, the Péclet number is defined as:

$$\mathrm{Pe}_L = \frac{L u}{\alpha} = \mathrm{Re}_L \, \mathrm{Pr}$$

where $L$ is the characteristic length, $u$ the local flow velocity, $D$ the mass diffusion coefficient, $\mathrm{Re}$ the Reynolds number, $\mathrm{Sc}$ the Schmidt number, $\mathrm{Pr}$ the Prandtl number, and $\alpha$ the thermal diffusivity,

$$\alpha = \frac{k}{\rho c_p}$$

where $k$ is the thermal conductivity, $\rho$ the density, and $c_p$ the specific heat capacity.
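A short numerical illustration, with water-like property values chosen only for the example (the function names are the author's own):

def peclet_mass(L, u, D):
    """Mass-transfer Peclet number, Pe = L*u/D."""
    return L * u / D

def peclet_heat(L, u, k, rho, cp):
    """Thermal Peclet number, Pe = L*u/alpha, with alpha = k/(rho*cp)."""
    return L * u * rho * cp / k

# Water flowing at 0.1 m/s through a 1 cm channel (illustrative values):
print(peclet_mass(L=0.01, u=0.1, D=1e-9))                        # ~1e6: advection dominates
print(peclet_heat(L=0.01, u=0.1, k=0.6, rho=1000.0, cp=4180.0))  # ~7e3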
In engineering applications the Péclet number is often very large. In such situations, the dependency of the flow upon downstream locations is diminished, and variables in the flow tend to become 'one-way' properties. Thus, when modelling certain situations with high Péclet numbers, simpler computational models can be adopted.
A flow will often have different Péclet numbers for heat and mass. This can lead to the phenomenon of double diffusive convection.
In the context of particulate motion the Péclet number has also been called the Brenner number, with symbol Br, in honour of Howard Brenner.
The Péclet number also finds applications beyond transport phenomena, as a general measure of the relative importance of random fluctuations and of the systematic average behavior in mesoscopic systems.
See also
Nusselt number
References
Convection
Dimensionless numbers of fluid mechanics
Dimensionless numbers of thermodynamics
Fluid dynamics
Heat conduction | Péclet number | [
"Physics",
"Chemistry",
"Engineering"
] | 464 | [
"Transport phenomena",
"Thermodynamic properties",
"Physical phenomena",
"Physical quantities",
"Dimensionless numbers of thermodynamics",
"Chemical engineering",
"Convection",
"Thermodynamics",
"Piping",
"Heat conduction",
"Fluid dynamics"
] |
57,539 | https://en.wikipedia.org/wiki/Ulysses%20%28spacecraft%29 | Ulysses ( , ) was a robotic space probe whose primary mission was to orbit the Sun and study it at all latitudes. It was launched in 1990 and made three "fast latitude scans" of the Sun in 1994/1995, 2000/2001, and 2007/2008. In addition, the probe studied several comets. Ulysses was a joint venture of the European Space Agency (ESA) and the United States' National Aeronautics and Space Administration (NASA), under leadership of ESA with participation from Canada's National Research Council. The last day for mission operations on Ulysses was 30 June 2009.
To study the Sun at all latitudes, the probe needed to change its orbital inclination and leave the plane of the Solar System. To change the orbital inclination of a spacecraft to about 80° requires a large change in heliocentric velocity, the energy to achieve which far exceeded the capabilities of any launch vehicle. To reach the desired orbit around the Sun, the mission's planners chose a gravity assist maneuver around Jupiter, but this Jupiter encounter meant that Ulysses could not be powered by solar cells. The probe was powered instead by a General Purpose Heat Source Radioisotope Thermoelectric Generator (GPHS-RTG).
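The scale of the problem is easy to quantify with the textbook formula for a pure plane change at constant speed, Δv = 2v sin(Δi/2). The sketch below applies it at Earth's heliocentric speed as a back-of-the-envelope illustration, not a reconstruction of the actual mission analysis:

import math

def plane_change_dv(v, delta_i_deg):
    # Delta-v for a pure plane rotation of delta_i degrees at constant speed v.
    return 2.0 * v * math.sin(math.radians(delta_i_deg) / 2.0)

v_earth = 29.8                                     # km/s, Earth's mean orbital speed
print("%.1f km/s" % plane_change_dv(v_earth, 80))  # ~38 km/s: far beyond any launcher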
The spacecraft was originally named Odysseus, because of its lengthy and indirect trajectory to study the solar poles. It was renamed Ulysses, the Latin translation of "Odysseus", at ESA's request in honor not only of Homer's mythological hero but also of Dante's character in the Inferno. Ulysses was originally scheduled for launch in May 1986 aboard the Space Shuttle Challenger on STS-61-F. Due to the 28 January 1986 loss of Challenger, the launch of Ulysses was delayed until 6 October 1990 aboard Discovery (mission STS-41).
Spacecraft
The spacecraft was designed by ESA and built by Dornier Systems, a German aircraft manufacturer. The body was roughly box-shaped. The box mounted the dish antenna and the GPHS-RTG radioisotope thermoelectric generator (RTG) power source. The box was divided into noisy and quiet sections. The noisy section abutted the RTG; the quiet section housed the instrument electronics. Particularly "loud" components, such as the preamps for the radio dipole, were mounted outside the structure entirely, and the box acted as a Faraday cage.
Ulysses was spin-stabilised about its z-axis, which roughly coincided with the axis of the dish antenna. The RTG, whip antennas, and instrument boom were placed to stabilize this axis, with the spin rate nominally at 5 rpm. Inside the body was a hydrazine fuel tank. Hydrazine monopropellant was used for course corrections inbound to Jupiter, and later used exclusively to repoint the spin axis (and thus, the antenna) at Earth. The spacecraft was controlled by eight thrusters in two blocks. Thrusters were pulsed in the time domain to perform rotation or translation. Four Sun sensors detected orientation. For fine attitude control, the S-band antenna feed was mounted slightly off-axis. This offset feed combined with the spacecraft spin introduced an apparent oscillation to a radio signal transmitted from Earth when received on board the spacecraft. The amplitude and phase of this oscillation were proportional to the orientation of the spin axis relative to the Earth direction. This method of determining the relative orientation is called conical scanning and was used by early radars for automated tracking of targets and was also very common in early infrared guided missiles.
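Conical scanning is straightforward to simulate. In the Python sketch below, every number is invented for illustration: an off-axis beam spinning at 5 rpm sweeps a cone, and demodulating the received power at the spin frequency recovers the magnitude and direction of the pointing error, the quantities described above.

import numpy as np

spin_hz = 5.0 / 60.0                  # 5 rpm spin rate
offset = 1.0                          # beam offset from the spin axis, degrees
beamwidth = 3.0                       # beam width parameter, degrees
err, err_dir = 0.5, np.deg2rad(30.0)  # pointing error magnitude (deg) and direction

t = np.linspace(0.0, 60.0, 3000, endpoint=False)   # one minute of received signal
phase = 2.0 * np.pi * spin_hz * t
# Squared angular distance between the sweeping beam axis and the signal source:
ang2 = (offset * np.cos(phase) - err * np.cos(err_dir)) ** 2 \
     + (offset * np.sin(phase) - err * np.sin(err_dir)) ** 2
signal = np.exp(-ang2 / beamwidth ** 2)            # Gaussian beam response

# Demodulating at the spin frequency recovers the pointing error:
I = 2.0 * np.mean(signal * np.cos(phase))
Q = 2.0 * np.mean(signal * np.sin(phase))
print("ripple ~ %.3f, direction ~ %.0f degrees"
      % (np.hypot(I, Q), np.degrees(np.arctan2(Q, I))))
# The recovered direction is ~30 degrees; the ripple grows with the error size.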
The spacecraft used S-band for uplinked commands and downlinked telemetry, through dual redundant 5-watt transceivers. The spacecraft used X-band for science return (downlink only), using dual 20-watt TWTAs until the failure of the last remaining TWTA in January 2008. Both bands used the dish antenna with prime-focus feeds, unlike the Cassegrain feeds of most other spacecraft dishes.
Dual tape recorders, each of approximately 45-megabit capacity, stored science data between the nominal eight-hour communications sessions during the prime and extended mission phases.
The spacecraft was designed to withstand both the heat of the inner Solar System and the cold at Jupiter's distance. Extensive blanketing and electric heaters protected the probe against the cold temperatures of the outer Solar System.
Multiple computer systems (CPUs/microprocessors/data processing units) were used in several of the scientific instruments, including several radiation-hardened RCA CDP1802 microprocessors. Documented 1802 usage includes dual-redundant 1802s in the COSPIN, and at least one 1802 each in the GRB, HI-SCALE, SWICS, SWOOPS and URAP instruments, with other possible microprocessors incorporated elsewhere.
Of the total mass at launch, 33.5 kg was hydrazine propellant, used for attitude control and orbit correction.
Instruments
The twelve instruments came from ESA and NASA. The first design was based on two probes, one from NASA and one from ESA, but the NASA probe was defunded and, in the end, instruments from the cancelled probe were mounted on Ulysses.
Radio/Plasma antennas: Two beryllium copper antennas were unreeled outwards from the body, perpendicular to the RTG and spin axis. Together this dipole spanned 72 meters (236.2 ft). A third antenna, of hollow beryllium copper, was deployed from the body, along the spin axis opposite the dish. It was a monopole antenna, 7.5 meters (24.6 ft) long. These measured radio waves generated by plasma releases, or the plasma itself as it passed over the spacecraft. This receiver ensemble was sensitive from DC to 1 MHz.
Experiment Boom: A third type of boom, shorter and much more rigid, extended from the last side of the spacecraft, opposite the RTG. This was a hollow carbon-fiber tube, of 50 mm (2 in.) diameter. It can be seen in the photo as the silver rod stowed alongside the body. It carried four types of instruments: a solid-state X-ray instrument, composed of two silicon detectors, to study X-rays from solar flares and Jupiter's aurorae; the Gamma-Ray Burst experiment, consisting of two CsI scintillator crystals with photomultipliers; two different magnetometers, a helium vector magnetometer and a fluxgate magnetometer; and a two-axis magnetic search coil antenna measured AC magnetic fields.
Body-Mounted Instruments: Detectors for electrons, ions, neutral gas, dust, and cosmic rays were mounted on the spacecraft body around the quiet section.
Lastly, the radio communications link could be used to search for gravitational waves (through Doppler shifts) and to probe the Sun's atmosphere through radio occultation. No gravitational waves were detected.
Total instrument mass was 55 kg.
Magnetometer (MAG): MAG measured the magnetic field in the heliosphere. Measurements of Jupiter's magnetic field were also performed. Two magnetometers performed Ulysses magnetic field measurements, the Vector Helium Magnetometer and the Fluxgate Magnetometer.
Solar Wind Plasma Experiment (SWOOPS): detected the solar wind at all solar distances and latitudes and in three dimensions. It measured positive ions and electrons.
Solar Wind Ion Composition Instrument (SWICS): determined composition, temperature and speed of the atoms and ions that comprise the solar wind.
Unified Radio and Plasma Wave Instrument (URAP): picked up radio waves from the Sun and electromagnetic waves generated in the solar wind close to the spacecraft.
Energetic Particle Instrument (EPAC) and GAS: EPAC investigated the energy, fluxes and distribution of energetic particles in the heliosphere. GAS studied the uncharged gases (helium) of interstellar origin.
Low-Energy Ion and Electron Experiment (HI-SCALE): investigated the energy, fluxes and distribution of energetic particles in the heliosphere.
Cosmic Ray and Solar Particle Instrument (COSPIN): investigated the energy, fluxes and distribution of energetic particles and galactic cosmic rays in the heliosphere.
Solar X-ray and Cosmic Gamma-Ray Burst Instrument (GRB): studied cosmic gamma ray bursts and X-rays from solar flares.
Dust Experiment (DUST): Direct measurements of interplanetary and interstellar dust grains to investigate their properties as functions of the distance from the Sun and solar latitude.
Mission
Planning
Until Ulysses, the Sun had only been observed from low solar latitudes. The Earth's orbit defines the ecliptic plane, which differs from the Sun's equatorial plane by only 7.25°. Even spacecraft directly orbiting the Sun do so in planes close to the ecliptic because a direct launch into a high-inclination solar orbit would require a prohibitively large launch vehicle.
Several spacecraft (Mariner 10, Pioneer 11, and Voyagers 1 and 2) had performed gravity assist maneuvers in the 1970s. Those maneuvers were to reach other planets also orbiting close to the ecliptic, so they were mostly in-plane changes. However, gravity assists are not limited to in-plane maneuvers; a suitable flyby of Jupiter could produce a significant plane change. An Out-Of-The-Ecliptic mission (OOE) was thereby proposed. See article Pioneer H.
Originally, two spacecraft were to be built by NASA and ESA, as the International Solar Polar Mission. One would be sent over Jupiter, then under the Sun. The other would fly under Jupiter, then over the Sun. This would provide simultaneous coverage. Due to cutbacks, the U.S. spacecraft was cancelled in 1981. One spacecraft was designed, and the project recast as Ulysses, due to the indirect and untried flight path. NASA would provide the Radioisotope Thermoelectric Generator (RTG) and launch services; ESA would build the spacecraft, with construction assigned to Astrium GmbH, Friedrichshafen, Germany (formerly Dornier Systems). The instruments would be split into teams from universities and research institutes in Europe and the United States. This process provided the 12 instruments on board.
The changes delayed launch from February 1983 to May 1986 when it was to be deployed by the Space Shuttle Challenger (boosted by the proposed Centaur G Prime upper stage). However, the Challenger disaster forced a two-and-a-half year stand down of the shuttle fleet, mandated the cancellation of the Centaur-G upper stage, and pushed the launch date to October 1990.
Launch
Ulysses was deployed into low Earth orbit from the Space Shuttle Discovery. From there, it was propelled on a trajectory to Jupiter by a combination of solid rocket motors. This upper stage consisted of a two-stage Boeing IUS (Inertial Upper Stage), plus a McDonnell Douglas PAM-S (Payload Assist Module-Special). The IUS was inertially stabilised and actively guided during its burn. The PAM-S was unguided, and it and Ulysses were spun up to 80 rpm for stability at the start of its burn. On burnout of the PAM-S, the motor and spacecraft stack was yo-yo de-spun (weights deployed at the end of cables) to below 8 rpm prior to separation of the spacecraft. On leaving Earth, the spacecraft became the fastest artificially accelerated spacecraft to that date, and held that title until the New Horizons probe was launched.
On its way to Jupiter, the spacecraft was in an elliptical non-Hohmann transfer orbit. At this time, Ulysses had a low orbital inclination to the ecliptic.
Jupiter swing-by
It arrived at Jupiter on 8 February 1992 for a swing-by maneuver that increased its inclination to the ecliptic to 80.2°. The giant planet's gravity bent the spacecraft's flight path southward and away from the ecliptic plane. This put it into a final orbit around the Sun that would take it past the Sun's north and south poles. The size and shape of the orbit were adjusted to a much smaller degree, so that aphelion remained at approximately 5 AU (Jupiter's distance from the Sun) while perihelion was somewhat greater than 1 AU (the Earth's distance from the Sun). The orbital period is approximately six years.
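The quoted six-year period can be checked with Kepler's third law. A minimal sketch (Python); the perihelion of 1.34 AU is an assumed illustrative value, since the text says only "somewhat greater than 1 AU", and the aphelion is taken as 5.4 AU:

```python
# Check Ulysses' ~6-year period with Kepler's third law.
aphelion_au = 5.4    # approximately Jupiter's distance from the Sun
perihelion_au = 1.34 # illustrative assumption

semi_major_axis_au = (aphelion_au + perihelion_au) / 2
# Kepler's third law in solar units: P[years]^2 = a[AU]^3
period_years = semi_major_axis_au ** 1.5
print(f"a = {semi_major_axis_au:.2f} AU, P = {period_years:.1f} years")
# -> a = 3.37 AU, P = 6.2 years, consistent with the ~6-year orbit
```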
Polar regions of the Sun
Between 1994 and 1995 it explored first the southern and then the northern polar regions of the Sun.
Comet C/1996 B2 (Hyakutake)
On 1 May 1996, the spacecraft unexpectedly crossed the ion tail of Comet Hyakutake (C/1996 B2), revealing the tail to be at least 3.8 AU in length.
Comet C/1999 T1 (McNaught–Hartley)
An encounter with a comet tail happened again in 2004, when Ulysses flew through the ion tail of C/1999 T1 (McNaught–Hartley). A coronal mass ejection had carried the cometary material to Ulysses.
Second Jupiter encounter
Ulysses approached aphelion in 2003/2004 and made further distant observations of Jupiter.
Comet C/2006 P1 (McNaught)
In 2007, Ulysses passed through the tail of comet C/2006 P1 (McNaught). The results were surprisingly different from its pass through Hyakutake's tail, with the measured solar wind velocity dropping from approximately 700 kilometers per second (1,566,000 mph) to less than 400 kilometers per second (895,000 mph).
Extended mission
ESA's Science Program Committee approved the fourth extension of the Ulysses mission, to March 2009, thereby allowing it to operate over the Sun's poles for the third time in 2007 and 2008. After it became clear that the power output from the spacecraft's RTG would be insufficient to operate science instruments and keep the attitude control fuel, hydrazine, from freezing, instrument power sharing was initiated. Up until then, the most important instruments had been kept online constantly, whilst others were deactivated. When the probe neared the Sun, its power-hungry heaters were turned off and all instruments were turned on.
On 22 February 2008, 17 years and 4 months after the launch of the spacecraft, ESA and NASA announced that the mission operations for Ulysses would likely cease within a few months. On 12 April 2008, NASA announced that the end date would be 1 July 2008.
The spacecraft operated successfully for over four times its design life. A component within the last remaining working chain of the X-band downlink subsystem failed on 15 January 2008. The other chain in the X-band subsystem had previously failed in 2003.
Downlink to Earth resumed on S-band, but the beamwidth of the high-gain antenna is wider in S-band than in X-band, so the received downlink signal was much weaker, reducing the achievable data rate. As the spacecraft traveled on its outbound trajectory toward the orbit of Jupiter, the downlink signal would eventually have fallen below the receiving capability of even the largest antennas (70 meters (230 feet) in diameter) of the Deep Space Network.
Even before the downlink signal was lost due to distance, the hydrazine attitude control fuel on board the spacecraft was considered likely to freeze, as the radioisotope thermoelectric generator (RTG) no longer produced enough power for the heaters to overcome radiative heat loss into space. Once the hydrazine froze, the spacecraft would no longer be able to maneuver to keep its high-gain antenna pointing towards Earth, and the downlink signal would then be lost in a matter of days. The failure of the X-band communications subsystem hastened this: the coldest part of the fuel pipework was routed over the X-band traveling-wave tube amplifiers, which had generated enough heat during operation to keep the propellant plumbing warm.
The previously announced mission end date of 1 July 2008 came and went, but mission operations continued, albeit in a reduced capacity. Science data gathering was limited to periods when Ulysses was in contact with a ground station, because the deteriorating S-band downlink margin could no longer support simultaneous real-time data and tape recorder playback. When the spacecraft was out of contact with a ground station, the S-band transmitter was switched off and the power was diverted to the internal heaters to help keep the hydrazine warm. On 30 June 2009, ground controllers commanded a switch to the low-gain antennas and shut down the transmitter entirely, ending communications with the spacecraft.
Results
During cruise phases, Ulysses provided unique data. As the only spacecraft out of the ecliptic with a gamma-ray instrument, Ulysses was an important part of the InterPlanetary Network (IPN). The IPN detects gamma-ray bursts (GRBs); since gamma rays cannot be focused with mirrors, it was very difficult to locate GRBs with enough accuracy to study them further. Instead, several spacecraft can locate a burst through multilateration: each spacecraft has a gamma-ray detector, with arrival times recorded to tiny fractions of a second. By comparing the arrival times of gamma showers with the separations of the spacecraft, a location can be determined for follow-up with other telescopes. Because gamma rays travel at the speed of light, wide separations are needed. Typically, a determination came from comparing one of several spacecraft orbiting the Earth, an inner-Solar-System probe (to Mars, Venus, or an asteroid), and Ulysses. When Ulysses crossed the ecliptic plane, which it did twice per orbit, many GRB determinations lost accuracy.
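A minimal sketch of the timing idea behind such multilateration. It is not the IPN's actual pipeline; it only shows how a difference in gamma-ray arrival times between two widely separated detectors confines the burst to an annulus on the sky. The baseline and time delay below are hypothetical illustrative values:

```python
import math

C = 299_792.458  # speed of light, km/s

def annulus_half_angle(baseline_km, delta_t_s):
    """Angle between the baseline vector and the burst direction.

    An arrival-time difference delta_t between two widely separated
    detectors confines the source to a cone (an annulus on the sky)
    with cos(theta) = c * delta_t / baseline.
    """
    cos_theta = C * delta_t_s / baseline_km
    return math.degrees(math.acos(cos_theta))

# Hypothetical numbers: Ulysses ~4 AU from an Earth-orbiting detector,
# gamma rays arriving 1000 s apart.
au_km = 1.495978707e8
theta = annulus_half_angle(4 * au_km, 1000.0)
print(f"burst lies on an annulus {theta:.2f} deg from the baseline")
```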
Additional discoveries:
Data provided by Ulysses led to the discovery that the Sun's magnetic field interacts with the Solar System in a more complex fashion than previously assumed.
Data provided by Ulysses led to the discovery that dust coming into the Solar System from deep space was 30 times more abundant than previously expected.
In 2007–2008 data provided by Ulysses led to the determination that the magnetic field emanating from the Sun's poles is much weaker than previously observed.
Data showed that the solar wind has "grown progressively weaker during the mission and is currently at its weakest since the start of the Space Age".
Fate
Ulysses will most likely continue in heliocentric orbit around the Sun indefinitely. However, there is a chance that, during one of its re-encounters with Jupiter, a close flyby of one of the Jovian moons could alter its course enough to put the probe on a hyperbolic trajectory, causing it to leave the Solar System.
See also
References
External links
ESA Ulysses website
ESA Ulysses mission operations website
ESA Ulysses Home page
NASA/JPL Ulysses website
Ulysses Measuring Mission Profile by NASA's Solar System Exploration
ESA/NASA/JPL: Ulysses subsystems and instrumentation in high detail
Where is Ulysses now!
Max Planck Institute Ulysses website
Interview with Ulysses Mission Operations Manager Nigel Angold on Planetary Radio
Interactive 3D visualisation of Ulysses Jupiter gravity assist and polar orbit around the Sun
European Space Agency space probes
NASA space probes
Missions to the Sun
Missions to Jupiter
Derelict satellites in heliocentric orbit
Missions to comets
Spacecraft launched by the Space Shuttle
Derelict space probes
Spacecraft launched in 1990
Spacecraft decommissioned in 2009
Solar space observatories | Ulysses (spacecraft) | [
"Astronomy"
] | 3,923 | [
"Space telescopes",
"Solar space observatories"
] |
57,547 | https://en.wikipedia.org/wiki/Xenopus | Xenopus () (Gk., ξενος, xenos = strange, πους, pous = foot, commonly known as the clawed frog) is a genus of highly aquatic frogs native to sub-Saharan Africa. Twenty species are currently described within it. The two best-known species of this genus are Xenopus laevis and Xenopus tropicalis, which are commonly studied as model organisms for developmental biology, cell biology, toxicology, neuroscience and for modelling human disease and birth defects.
The genus is also known for its polyploidy, with some species having up to 12 sets of chromosomes.
Characteristics
Xenopus laevis is a rather inactive creature. It is incredibly hardy and can live up to 15 years. At times the ponds that Xenopus laevis is found in dry up, compelling it, in the dry season, to burrow into the mud, leaving a tunnel for air. It may lie dormant for up to a year. If the pond dries up in the rainy season, Xenopus laevis may migrate long distances to another pond, kept hydrated by the rains. It is an adept swimmer, swimming in all directions with ease. It is barely able to hop, but it is able to crawl. It spends most of its time underwater and comes to the surface to breathe. Respiration is predominantly through its well-developed lungs; there is little cutaneous respiration.
Description
All species of Xenopus have flattened, somewhat egg-shaped and streamlined bodies, and very slippery skin (because of a protective mucus covering). The frog's skin is smooth, but with a lateral line sensory organ that has a stitch-like appearance. The frogs are all excellent swimmers and have powerful, fully webbed toes, though the fingers lack webbing. Three of the toes on each foot have conspicuous black claws.
The frog's eyes are on top of the head, looking upwards. The pupils are circular. They have no moveable eyelids, no tongues (the tongue is instead completely attached to the floor of the mouth), and no eardrums (similarly to Pipa pipa, the common Suriname toad).
Unlike most amphibians, they have no haptoglobin in their blood.
Behaviour
Xenopus species are entirely aquatic, though they have been observed migrating on land to nearby bodies of water during times of drought or in heavy rain. They are usually found in lakes, rivers, swamps, potholes in streams, and man-made reservoirs.
Adult frogs are usually both predators and scavengers, and since their tongues are unusable, the frogs use their small fore limbs to aid in the feeding process. Since they also lack vocal sacs, they make clicks (brief pulses of sound) underwater (again similar to Pipa pipa). Males establish a hierarchy of social dominance in which primarily one male has the right to make the advertisement call. The females of many species produce a release call, and Xenopus laevis females produce an additional call when sexually receptive and soon to lay eggs. The Xenopus species are also active during the twilight (or crepuscular) hours.
During breeding season, the males develop ridge-like nuptial pads (black in color) on their fingers to aid in grasping the female. The frogs' mating embrace is inguinal, meaning the male grasps the female around her waist.
Species
Extant species
Xenopus allofraseri
Xenopus amieti (volcano clawed frog)
Xenopus andrei (Andre's clawed frog)
Xenopus borealis (Marsabit clawed frog)
Xenopus boumbaensis (Mawa clawed frog)
Xenopus calcaratus
Xenopus clivii (Eritrea clawed frog)
Xenopus epitropicalis (Cameroon clawed frog)
Xenopus eysoole
Xenopus fischbergi
Xenopus fraseri (Fraser's platanna)
Xenopus gilli (Cape platanna)
Xenopus itombwensis
Xenopus kobeli
Xenopus laevis (African clawed frog or common platanna)
Xenopus largeni (Largen's clawed frog)
Xenopus lenduensis (Lendu Plateau clawed frog)
Xenopus longipes (Lake Oku clawed frog)
Xenopus mellotropicalis
Xenopus muelleri (Müller's platanna)
Xenopus parafraseri
Xenopus petersii (Peters' platanna)
Xenopus poweri
Xenopus pygmaeus (Bouchia clawed frog)
Xenopus ruwenzoriensis (Uganda clawed frog)
Xenopus tropicalis (western clawed frog)
Xenopus vestitus (Kivu clawed frog)
Xenopus victorianus (Lake Victoria clawed frog)
Xenopus wittei (De Witte's clawed frog)
Fossil species
The following fossil species have been described:
†Xenopus arabiensis - Oligocene Yemen Volcanic Group, Yemen
†Xenopus hasaunus
†Xenopus romeri - Itaboraian Itaboraí Formation, Brazil
†Xenopus stromeri
cf. Xenopus sp. - Campanian - Los Alamitos Formation, Argentina
Xenopus (Xenopus) sp. - Late Oligocene Nsungwe Formation, Tanzania
Xenopus sp. - Miocene Morocco
Xenopus sp. - Early Pleistocene Olduvai Formation, Tanzania
Model organism for biological research
Like many other frogs, they are often used in the laboratory as research subjects. Xenopus embryos and eggs are a popular model system for a wide variety of biological studies. This animal is used because of its powerful combination of experimental tractability and close evolutionary relationship with humans, at least compared to many model organisms.
Xenopus has long been an important tool for in vivo studies in molecular, cell, and developmental biology of vertebrate animals. However, the wide breadth of Xenopus research stems from the additional fact that cell-free extracts made from Xenopus are a premier in vitro system for studies of fundamental aspects of cell and molecular biology. Thus, Xenopus is a vertebrate model system that allows for high-throughput in vivo analyses of gene function and high-throughput biochemistry. Furthermore, Xenopus oocytes are a leading system for studies of ion transport and channel physiology. Xenopus is also a unique system for analyses of genome evolution and whole genome duplication in vertebrates, as different Xenopus species form a ploidy series formed by interspecific hybridization.
In 1931, Lancelot Hogben noted that Xenopus laevis females ovulated when injected with the urine of pregnant women. This led to a pregnancy test that was later refined by South African researchers Hillel Abbe Shapiro and Harry Zwarenstein. A female Xenopus frog injected with a woman's urine was put in a jar with a little water. If eggs were in the water a day later it meant the woman was pregnant. Four years after the first Xenopus test, Zwarenstein's colleague, Dr Louis Bosman, reported that the test was accurate in more than 99% of cases. From the 1930s to the 1950s, thousands of frogs were exported across the world for use in these pregnancy tests.
The National Xenopus Resource of the Marine Biological Laboratory is an in vivo repository for transgenic and mutant strains and a training center.
Online Model Organism Database
Xenbase is the Model Organism Database (MOD) for both Xenopus laevis and Xenopus tropicalis.
Investigation of human disease genes
All modes of Xenopus research (embryos, cell-free extracts, and oocytes) are commonly used in direct studies of human disease genes and to study the basic science underlying initiation and progression of cancer. Xenopus embryos for in vivo studies of human disease gene function: Xenopus embryos are large and easily manipulated, and moreover, thousands of embryos can be obtained in a single day. Indeed, Xenopus was the first vertebrate animal for which methods were developed to allow rapid analysis of gene function using misexpression (by mRNA injection). Injection of mRNA into Xenopus oocytes also led to the cloning of interferon. Moreover, the use of morpholino-antisense oligonucleotides for gene knockdowns in vertebrate embryos, which is now widely used, was first developed by Janet Heasman using Xenopus.
In recent years, these approaches have played an important role in studies of human disease genes. The mechanism of action for several genes mutated in human cystic kidney disorders (e.g. nephronophthisis) have been extensively studied in Xenopus embryos, shedding new light on the link between these disorders, ciliogenesis and Wnt signaling. Xenopus embryos have also provided a rapid test bed for validating newly discovered disease genes. For example, studies in Xenopus confirmed and elucidated the role of PYCR1 in cutis laxa with progeroid features.
Transgenic Xenopus for studying transcriptional regulation of human disease genes: Xenopus embryos develop rapidly, so transgenesis in Xenopus is a rapid and effective method for analyzing genomic regulatory sequences. In a recent study, mutations in the SMAD7 locus were revealed to associate with human colorectal cancer. The mutations lay in conserved, but noncoding sequences, suggesting these mutations impacted the patterns of SMAD7 transcription. To test this hypothesis, the authors used Xenopus transgenesis, and revealed this genomic region drove expression of GFP in the hindgut. Moreover, transgenics made with the mutant version of this region displayed substantially less expression in the hindgut.
Xenopus cell-free extracts for biochemical studies of proteins encoded by human disease genes: A unique advantage of the Xenopus system is that cytosolic extracts contain both soluble cytoplasmic and nuclear proteins (including chromatin proteins). This is in contrast to cellular extracts prepared from somatic cells with already distinct cellular compartments. Xenopus egg extracts have provided numerous insights into the basic biology of cells with particular impact on cell division and the DNA transactions associated with it (see below).
Studies in Xenopus egg extracts have also yielded critical insights into the mechanism of action of human disease genes associated with genetic instability and elevated cancer risk, such as ataxia telangiectasia, BRCA1 inherited breast and ovarian cancer, Nbs1 Nijmegen breakage syndrome, RecQL4 Rothmund-Thomson syndrome, c-Myc oncogene and FANC proteins (Fanconi anemia).
Xenopus oocytes for studies of gene expression and channel activity related to human disease: Yet another strength of Xenopus is the ability to rapidly and easily assay the activity of channel and transporter proteins using expression in oocytes. This application has also led to important insights into human disease, including studies related to trypanosome transmission, epilepsy with ataxia and sensorineural deafness, catastrophic cardiac arrhythmia (long-QT syndrome), and megalencephalic leukoencephalopathy.
Gene editing by the CRISPR/CAS system has recently been demonstrated in Xenopus tropicalis and Xenopus laevis. This technique is being used to screen the effects of human disease genes in Xenopus and the system is sufficiently efficient to study the effects within the same embryos that have been manipulated.
Investigation of fundamental biological processes
Signal transduction: Xenopus embryos and cell-free extracts are widely used for basic research in signal transduction. In just the last few years, Xenopus embryos have provided crucial insights into the mechanisms of TGF-beta and Wnt signal transduction. For example, Xenopus embryos were used to identify the enzymes that control ubiquitination of Smad4, and to demonstrate direct links between TGF-beta superfamily signaling pathways and other important networks, such as the MAP kinase pathway and the Wnt pathway. Moreover, new methods using egg extracts revealed novel, important targets of the Wnt/GSK3 destruction complex.
Cell division: Xenopus egg extracts have allowed the study of many complicated cellular events in vitro. Because egg cytosol can support successive cycling between mitosis and interphase in vitro, it has been critical to diverse studies of cell division. For example, the small GTPase Ran was first found to regulate interphase nuclear transport, but Xenopus egg extracts revealed the critical role of Ran GTPase in mitosis independent of its role in interphase nuclear transport. Similarly, the cell-free extracts were used to model nuclear envelope assembly from chromatin, revealing the function of RanGTPase in regulating nuclear envelope reassembly after mitosis. More recently, using Xenopus egg extracts, it was possible to demonstrate the mitosis-specific function of the nuclear lamin B in regulating spindle morphogenesis and to identify new proteins that mediate kinetochore attachment to microtubules. Cell-free systems have recently become practical investigatory tools, and Xenopus oocytes are often the source of the extracts used. This has produced significant results in understanding mitotic oscillation and microtubules.
Embryonic development: Xenopus embryos are widely used in developmental biology. A summary of recent advances made by Xenopus research in recent years would include:
Epigenetics of cell fate specification and epigenome reference maps
microRNA in germ layer patterning and eye development
Link between Wnt signaling and telomerase
Development of the vasculature
Gut morphogenesis
Contact inhibition and neural crest cell migration and the generation of neural crest from pluripotent blastula cells
Role of Notch: Dorsky et al. (1995) elucidated a pattern of expression followed by downregulation
DNA replication: Xenopus cell-free extracts also support the synchronous assembly and the activation of origins of DNA replication. They have been instrumental in characterizing the biochemical function of the prereplicative complex, including MCM proteins.
DNA damage response: Cell-free extracts have been instrumental to unravel the signaling pathways activated in response to DNA double-strand breaks (ATM), replication fork stalling (ATR) or DNA interstrand crosslinks (FA proteins and ATR). Notably, several mechanisms and components of these signal transduction pathways were first identified in Xenopus.
Apoptosis: Xenopus oocytes provide a tractable model for biochemical studies of apoptosis. Recently, oocytes were used to study the biochemical mechanisms of caspase-2 activation; importantly, this mechanism turns out to be conserved in mammals.
Regenerative medicine: In recent years, tremendous interest in developmental biology has been stoked by the promise of regenerative medicine. Xenopus has played a role here as well. For example, expression of seven transcription factors in pluripotent Xenopus cells rendered those cells able to develop into functional eyes when implanted into Xenopus embryos, providing potential insights into the repair of retinal degeneration or damage. In a vastly different study, Xenopus embryos were used to study the effects of tissue tension on morphogenesis, an issue that will be critical for in vitro tissue engineering. Xenopus species are important model organisms for the study of spinal cord regeneration, because while capable of regeneration in their larval stages, Xenopus lose this capacity in early metamorphosis.
Physiology: The directional beating of multiciliated cells is essential to development and homeostasis in the central nervous system, the airway, and the oviduct. The multiciliated cells of the Xenopus epidermis have recently been developed as the first in vivo test-bed for live-cell studies of such ciliated tissues, and these studies have provided important insights into the biomechanical and molecular control of directional beating.
Actin: Another result from cell-free Xenopus oocyte extracts has been improved understanding of actin.
Small molecule screens to develop novel therapies
Because huge amounts of material are easily obtained, all modalities of Xenopus research are now being used for small-molecule based screens.
Chemical genetics of vascular growth in Xenopus tadpoles: Given the important role of neovascularization in cancer progression, Xenopus embryos were recently used to identify new small-molecule inhibitors of blood vessel growth. Notably, compounds identified in Xenopus were effective in mice. Frog embryos also figured prominently in a study that used evolutionary principles to identify a novel vascular disrupting agent that may have chemotherapeutic potential; that work was featured in the New York Times Science Times.
In vivo testing of potential endocrine disruptors in transgenic Xenopus embryos: a high-throughput assay for thyroid disruption has recently been developed using transgenic Xenopus embryos.
Small molecule screens in Xenopus egg extracts: Egg extracts provide ready analysis of molecular biological processes and can be rapidly screened. This approach was used to identify novel inhibitors of proteasome-mediated protein degradation and DNA repair enzymes.
Genetic studies
While Xenopus laevis is the most commonly used species for developmental biology studies, genetic studies, especially forward genetic studies, can be complicated by their pseudotetraploid genome. Xenopus tropicalis provides a simpler model for genetic studies, having a diploid genome.
Gene expression knockdown techniques
The expression of genes can be reduced by a variety of means, for example by using antisense oligonucleotides targeting specific mRNA molecules. DNA oligonucleotides complementary to specific mRNA molecules are often chemically modified to improve their stability in vivo. The chemical modifications used for this purpose include phosphorothioate, 2'-O-methyl, morpholino, MEA phosphoramidate and DEED phosphoramidate.
Morpholino oligonucleotides
Morpholino oligos are used in both X. laevis and X. tropicalis to probe the function of a protein by observing the results of eliminating the protein's activity. For example, a set of X. tropicalis genes has been screened in this fashion.
Morpholino oligos (MOs) are short, antisense oligos made of modified nucleotides. MOs can knock down gene expression by inhibiting mRNA translation, blocking RNA splicing, or inhibiting miRNA activity and maturation. MOs have proven to be effective knockdown tools in developmental biology experiments and RNA-blocking reagents for cells in culture. MOs do not degrade their RNA targets, but instead act via a steric-blocking, RNase H-independent mechanism. They remain stable in cells and do not induce immune responses. Microinjection of MOs in early Xenopus embryos can suppress gene expression in a targeted manner.
Like all antisense approaches, different MOs can have different efficacy, and may cause off-target, non-specific effects. Often, several MOs need to be tested to find an effective target sequence. Rigorous controls are used to demonstrate specificity, including:
Phenocopy of genetic mutation
Verification of reduced protein by western or immunostaining
mRNA rescue by adding back a mRNA immune to the MO
Use of two different MOs (translation blocking and splice blocking)
Injection of control MOs
Xenbase provides a searchable catalog of over 2000 MOs that have been specifically used in Xenopus research. The data is searchable via sequence, gene symbol and various synonyms (as used in different publications). Xenbase maps the MOs to the latest Xenopus genomes in GBrowse, predicts 'off-target' hits, and lists all Xenopus literature in which the morpholino has been published.
References
External links
Xenbase ~ A Xenopus laevis and tropicalis Web Resource
Pipidae
Amphibian genera
Amphibians of Sub-Saharan Africa
Vertebrate developmental biology
Animal models
Taxa named by Johann Andreas Wagner
Extant Oligocene first appearances | Xenopus | [
"Biology"
] | 4,212 | [
"Model organisms",
"Animal models"
] |
57,555 | https://en.wikipedia.org/wiki/Acid%20dissociation%20constant | In chemistry, an acid dissociation constant (also known as acidity constant, or acid-ionization constant; denoted ) is a quantitative measure of the strength of an acid in solution. It is the equilibrium constant for a chemical reaction
HA <=> A^- + H^+
known as dissociation in the context of acid–base reactions. The chemical species HA is an acid that dissociates into A−, called the conjugate base of the acid, and a hydrogen ion, H+. The system is said to be in equilibrium when the concentrations of its components do not change over time, because both forward and backward reactions are occurring at the same rate.
The dissociation constant is defined by

K_\text{a} = \frac{[A^-][H^+]}{[HA]}

or by its logarithmic form

pK_\text{a} = -\log_{10} K_\text{a} = \log_{10}\frac{[HA]}{[A^-][H^+]}

where quantities in square brackets represent the molar concentrations of the species at equilibrium. For example, for a hypothetical weak acid having Ka = 10^−5, the value of log Ka is the exponent (−5), giving pKa = 5. For acetic acid, Ka = 1.8 × 10^−5, so pKa is 4.7. A higher Ka corresponds to a stronger acid (an acid that is more dissociated at equilibrium). The form pKa is often used because it provides a convenient logarithmic scale, where a lower pKa corresponds to a stronger acid.
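A minimal sketch (Python) of the Ka-to-pKa conversion just defined, reusing the two examples from the text:

```python
import math

def pKa_from_Ka(Ka: float) -> float:
    """pKa is the negative base-10 logarithm of Ka."""
    return -math.log10(Ka)

print(pKa_from_Ka(1e-5))    # hypothetical weak acid -> 5.0
print(pKa_from_Ka(1.8e-5))  # acetic acid -> ~4.74
```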
Theoretical background
The acid dissociation constant for an acid is a direct consequence of the underlying thermodynamics of the dissociation reaction; the pKa value is directly proportional to the standard Gibbs free energy change for the reaction. The value of the pKa changes with temperature and can be understood qualitatively based on Le Châtelier's principle: when the reaction is endothermic, Ka increases and pKa decreases with increasing temperature; the opposite is true for exothermic reactions.
The value of pKa also depends on molecular structure of the acid in many ways. For example, Pauling proposed two rules: one for successive pKa of polyprotic acids (see Polyprotic acids below), and one to estimate the pKa of oxyacids based on the number of =O and −OH groups (see Factors that affect pKa values below). Other structural factors that influence the magnitude of the acid dissociation constant include inductive effects, mesomeric effects, and hydrogen bonding. Hammett type equations have frequently been applied to the estimation of pKa.
The quantitative behaviour of acids and bases in solution can be understood only if their pKa values are known. In particular, the pH of a solution can be predicted when the analytical concentration and pKa values of all acids and bases are known; conversely, it is possible to calculate the equilibrium concentration of the acids and bases in solution when the pH is known. These calculations find application in many different areas of chemistry, biology, medicine, and geology. For example, many compounds used for medication are weak acids or bases, and a knowledge of the pKa values, together with the octanol-water partition coefficient, can be used for estimating the extent to which the compound enters the blood stream. Acid dissociation constants are also essential in aquatic chemistry and chemical oceanography, where the acidity of water plays a fundamental role. In living organisms, acid–base homeostasis and enzyme kinetics are dependent on the pKa values of the many acids and bases present in the cell and in the body. In chemistry, a knowledge of pKa values is necessary for the preparation of buffer solutions and is also a prerequisite for a quantitative understanding of the interaction between acids or bases and metal ions to form complexes. Experimentally, pKa values can be determined by potentiometric (pH) titration, but for values of pKa less than about 2 or more than about 11, spectrophotometric or NMR measurements may be required due to practical difficulties with pH measurements.
Definitions
According to Arrhenius's original molecular definition, an acid is a substance that dissociates in aqueous solution, releasing the hydrogen ion (a proton):
HA <=> A- + H+
The equilibrium constant for this dissociation reaction is known as a dissociation constant. The liberated proton combines with a water molecule to give a hydronium (or oxonium) ion (naked protons do not exist in solution), and so Arrhenius later proposed that the dissociation should be written as an acid–base reaction:
HA + H2O <=> A- + H3O+
Brønsted and Lowry generalised this further to a proton exchange reaction:

acid + base <=> conjugate base + conjugate acid
The acid loses a proton, leaving a conjugate base; the proton is transferred to the base, creating a conjugate acid. For aqueous solutions of an acid HA, the base is water; the conjugate base is A− and the conjugate acid is the hydronium ion. The Brønsted–Lowry definition applies to other solvents, such as dimethyl sulfoxide: the solvent S acts as a base, accepting a proton and forming the conjugate acid SH+.
HA + S <=> A- + SH+
In solution chemistry, it is common to use H+ as an abbreviation for the solvated hydrogen ion, regardless of the solvent. In aqueous solution H+ denotes a solvated hydronium ion rather than a proton.
The designation of an acid or base as "conjugate" depends on the context. The conjugate acid of a base B dissociates according to
BH+ + OH- <=> B + H2O
which is the reverse of the equilibrium

H2O + B <=> OH- + BH+
The hydroxide ion , a well known base, is here acting as the conjugate base of the acid water. Acids and bases are thus regarded simply as donors and acceptors of protons respectively.
A broader definition of acid dissociation includes hydrolysis, in which protons are produced by the splitting of water molecules. For example, boric acid (B(OH)3) produces H3O+ as if it were a proton donor, but it has been confirmed by Raman spectroscopy that this is due to the hydrolysis equilibrium:
B(OH)3 + 2 H2O <=> B(OH)4- + H3O+
Similarly, metal ion hydrolysis causes ions such as [Al(H2O)6]3+ to behave as weak acids:
[Al(H2O)6]^3+ + H2O <=> [Al(H2O)5(OH)]^2+ + H3O+
According to Lewis's original definition, an acid is a substance that accepts an electron pair to form a coordinate covalent bond.
Equilibrium constant
An acid dissociation constant is a particular example of an equilibrium constant. The dissociation of a monoprotic acid, HA, in dilute solution can be written as
HA <=> A- + H+
The thermodynamic equilibrium constant can be defined by

K^\ominus = \frac{a_{A^-}\, a_{H^+}}{a_{HA}}

where a_X represents the activity, at equilibrium, of the chemical species X. K⦵ is dimensionless since activity is dimensionless. Activities of the products of dissociation are placed in the numerator, activities of the reactants are placed in the denominator. See activity coefficient for a derivation of this expression.

Since activity is the product of concentration and activity coefficient (γ) the definition could also be written as

K^\ominus = \frac{[A^-][H^+]}{[HA]} \times \frac{\gamma_{A^-}\,\gamma_{H^+}}{\gamma_{HA}} = \frac{[A^-][H^+]}{[HA]} \times \Gamma

where [HA] represents the concentration of HA and Γ is a quotient of activity coefficients.
To avoid the complications involved in using activities, dissociation constants are determined, where possible, in a medium of high ionic strength, that is, under conditions in which can be assumed to be always constant. For example, the medium might be a solution of 0.1 molar (M) sodium nitrate or 3 M potassium perchlorate. With this assumption,
K_\text{a} = \frac{[A^-][H^+]}{[HA]}

is obtained. Note, however, that all published dissociation constant values refer to the specific ionic medium used in their determination and that different values are obtained with different conditions, as shown for acetic acid in the illustration above. When published constants refer to an ionic strength other than the one required for a particular application, they may be adjusted by means of specific ion theory (SIT) and other theories.
Cumulative and stepwise constants
A cumulative equilibrium constant, denoted by β, is equal to the product of the stepwise constants, denoted by K1, K2, .... For a dibasic acid the relationship between stepwise and overall constants is as follows:

H2A <=> A^2- + 2H+

\beta = \frac{[A^{2-}][H^+]^2}{[H_2A]} = K_1 K_2

Note that in the context of metal-ligand complex formation, the equilibrium constants for the formation of metal complexes are usually defined as association constants. In that case, the equilibrium constants for ligand protonation are also defined as association constants. The numbering of association constants is the reverse of the numbering of dissociation constants; in this example the first association constant (protonation of A2− to give HA−) is the reciprocal of the second dissociation constant: K1(assoc.) = 1/K2(diss.).
Association and dissociation constants
When discussing the properties of acids it is usual to specify equilibrium constants as acid dissociation constants, denoted by Ka, with numerical values given the symbol pKa.
On the other hand, association constants are used for bases.
However, general purpose computer programs that are used to derive equilibrium constant values from experimental data use association constants for both acids and bases. Because stability constants for a metal-ligand complex are always specified as association constants, ligand protonation must also be specified as an association reaction. The definitions show that the value of an acid dissociation constant is the reciprocal of the value of the corresponding association constant:

K_\text{dissociation} = \frac{1}{K_\text{association}}, \qquad \log K_\text{dissociation} = -\log K_\text{association}
Notes
For a given acid or base in water, pKa + pKb = pKw, the self-ionization constant of water.
The association constant for the formation of a supramolecular complex may be denoted as Ka; in such cases "a" stands for "association", not "acid".
For polyprotic acids, the numbering of stepwise association constants is the reverse of the numbering of the dissociation constants. For example, for phosphoric acid (details in the polyprotic acids section below):

\log \beta_1 = pK_{a3}, \qquad \log \beta_2 = pK_{a3} + pK_{a2}, \qquad \log \beta_3 = pK_{a3} + pK_{a2} + pK_{a1}
Temperature dependence
All equilibrium constants vary with temperature according to the van 't Hoff equation

\frac{\mathrm{d} \ln K}{\mathrm{d} T} = \frac{\Delta H^\ominus}{RT^2}

where R is the gas constant and T is the absolute temperature. Thus, for exothermic reactions, the standard enthalpy change, ΔH⦵, is negative and K decreases with temperature. For endothermic reactions, ΔH⦵ is positive and K increases with temperature.
The standard enthalpy change for a reaction is itself a function of temperature, according to Kirchhoff's law of thermochemistry:

\left(\frac{\partial \Delta H}{\partial T}\right)_p = \Delta C_p

where ΔCp is the heat capacity change at constant pressure. In practice ΔH may be taken to be constant over a small temperature range.
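Assuming ΔH⦵ is constant over a small range, the van 't Hoff equation integrates to ln(K2/K1) = −(ΔH⦵/R)(1/T2 − 1/T1). A sketch (Python) with a hypothetical endothermic dissociation, illustrating the increase of K with temperature stated above:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def K_at_T2(K1, T1, T2, dH):
    """Integrated van 't Hoff equation, assuming the standard enthalpy
    change dH (J/mol) is constant over [T1, T2] (temperatures in K)."""
    return K1 * math.exp(-dH / R * (1.0 / T2 - 1.0 / T1))

# Hypothetical endothermic dissociation, dH = +10 kJ/mol:
# K increases on warming from 25 C to 40 C, as the text predicts.
print(K_at_T2(1.8e-5, 298.15, 313.15, 10_000))  # -> ~2.2e-5
```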
Dimensionality
In the equation

K_\text{a} = \frac{[A^-][H^+]}{[HA]}

Ka appears to have dimensions of concentration. However, since \Delta G^\ominus = -RT \ln K, the equilibrium constant, K, cannot have a physical dimension. This apparent paradox can be resolved in various ways.

Assume that the quotient of activity coefficients has a numerical value of 1, so that Ka has the same numerical value as the thermodynamic equilibrium constant K⦵.
Express each concentration value as the ratio c/c0, where c0 is the concentration in a [hypothetical] standard state, with a numerical value of 1, by definition.
Express the concentrations on the mole fraction scale. Since mole fraction has no dimension, the quotient of concentrations will, by definition, be a pure number.
The procedures, (1) and (2), give identical numerical values for an equilibrium constant. Furthermore, since a concentration c is simply proportional to mole fraction x and density ρ:

c = \frac{x\,\rho}{M}

and since the molar mass M is a constant in dilute solutions, an equilibrium constant value determined using (3) will be simply proportional to the values obtained with (1) and (2).
It is common practice in biochemistry to quote a value with a dimension as, for example, "Ka = 30 mM" in order to indicate the scale, millimolar (mM) or micromolar (μM) of the concentration values used for its calculation.
Strong acids and bases
An acid is classified as "strong" when the concentration of its undissociated species is too low to be measured. Any aqueous acid with a pKa value of less than 0 is almost completely deprotonated and is considered a strong acid. All such acids transfer their protons to water and form the solvent cation species (H3O+ in aqueous solution) so that they all have essentially the same acidity, a phenomenon known as solvent leveling. They are said to be fully dissociated in aqueous solution because the amount of undissociated acid, in equilibrium with the dissociation products, is below the detection limit. Likewise, any aqueous base with an association constant pKb less than about 0, corresponding to pKa greater than about 14, is leveled to OH− and is considered a strong base.
Nitric acid, with a pK value of around −1.7, behaves as a strong acid in aqueous solutions with a pH greater than 1. At lower pH values it behaves as a weak acid.
pKa values for strong acids have been estimated by theoretical means. For example, the pKa value of aqueous HCl has been estimated as −9.3.
Monoprotic acids
After rearranging the expression defining Ka, and putting pH = −log10[H+], one obtains

pH = pK_\text{a} + \log_{10}\frac{[A^-]}{[HA]}
This is the Henderson–Hasselbalch equation, from which the following conclusions can be drawn.
At half-neutralization the ratio [A−]/[HA] = 1; since log(1) = 0, the pH at half-neutralization is numerically equal to pKa. Conversely, when pH = pKa, the concentration of HA is equal to the concentration of A−.
The buffer region extends over the approximate range pKa ± 2. Buffering is weak outside the range pKa ± 1. At pH ≤ pKa − 2 the substance is said to be fully protonated and at pH ≥ pKa + 2 it is fully dissociated (deprotonated).
If the pH is known, the ratio [A−]/[HA] may be calculated. This ratio is independent of the analytical concentration of the acid.
In water, measurable pKa values range from about −2 for a strong acid to about 12 for a very weak acid (or strong base).
A buffer solution of a desired pH can be prepared as a mixture of a weak acid and its conjugate base. In practice, the mixture can be created by dissolving the acid in water, and adding the requisite amount of strong acid or base. When the pKa and analytical concentration of the acid are known, the extent of dissociation and pH of a solution of a monoprotic acid can be easily calculated using an ICE table.
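A sketch (Python) of the ICE-table calculation just mentioned: for analytical concentration C, x = [H+] satisfies x² + Ka·x − Ka·C = 0 when water self-ionization is neglected. The 0.10 M acetic acid figure is illustrative:

```python
import math

def pH_weak_acid(Ka: float, C: float) -> float:
    """pH of a weak monoprotic acid of analytical concentration C (mol/L).

    From the ICE table, x = [H+] satisfies x^2 + Ka*x - Ka*C = 0
    (water self-ionization neglected, valid when the acid dominates)."""
    x = (-Ka + math.sqrt(Ka * Ka + 4 * Ka * C)) / 2  # positive root
    return -math.log10(x)

print(pH_weak_acid(1.8e-5, 0.10))  # 0.10 M acetic acid -> pH ~2.88
```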
Polyprotic acids
A polyprotic acid is a compound which may lose more than 1 proton. Stepwise dissociation constants are each defined for the loss of a single proton. The constant for dissociation of the first proton may be denoted as Ka1 and the constants for dissociation of successive protons as Ka2, etc. Phosphoric acid, , is an example of a polyprotic acid as it can lose three protons.
{| class="wikitable"
! Equilibrium
! pK definition and value
|-
| H3PO4 <=> H2PO4- + H+
| pKa1 = −log10([H2PO4−][H+] / [H3PO4]) ≈ 2.14
|-
| H2PO4- <=> HPO4^2- + H+
| pKa2 = −log10([HPO42−][H+] / [H2PO4−]) ≈ 7.20
|-
| HPO4^2- <=> PO4^3- + H+
| pKa3 = −log10([PO43−][H+] / [HPO42−]) ≈ 12.37
|}
When the difference between successive pK values is about four or more, as in this example, each species may be considered as an acid in its own right; in fact salts of H2PO4− may be crystallised from solution by adjustment of pH to about 5.5 and salts of HPO42− may be crystallised from solution by adjustment of pH to about 10. The species distribution diagram shows that the concentrations of the two ions are maximum at pH 5.5 and 10.
When the difference between successive pK values is less than about four there is overlap between the pH range of existence of the species in equilibrium. The smaller the difference, the more the overlap. The case of citric acid is shown at the right; solutions of citric acid are buffered over the whole range of pH 2.5 to 7.5.
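The species distribution underlying such diagrams follows directly from the stepwise constants. A sketch (Python) computing the fraction of each protonation state at a given pH; the phosphoric acid pKa values are the commonly quoted ones used in the table above:

```python
def fractions(pH, pKas):
    """Fraction of each protonation state of a polyprotic acid H_nA
    at a given pH, from the stepwise dissociation constants."""
    h = 10.0 ** (-pH)
    Ks = [10.0 ** (-pk) for pk in pKas]
    # Unnormalized term for the species that has lost i protons:
    # proportional to (K1*...*Ki) / [H+]^i
    terms = [1.0]
    for K in Ks:
        terms.append(terms[-1] * K / h)
    total = sum(terms)
    return [t / total for t in terms]

# Phosphoric acid, pKa = 2.14, 7.20, 12.37
for pH in (1, 5.5, 7.2, 10, 13):
    fs = fractions(pH, [2.14, 7.20, 12.37])
    print(pH, [round(f, 2) for f in fs])  # [H3PO4, H2PO4-, HPO4^2-, PO4^3-]
```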
According to Pauling's first rule, successive pK values of a given acid increase. For oxyacids with more than one ionizable hydrogen on the same atom, the pKa values often increase by about 5 units for each proton removed, as in the example of phosphoric acid above.
It can be seen in the table above that the second proton is removed from a negatively charged species. Since the proton carries a positive charge, extra work is needed to remove it, which is why pKa2 is greater than pKa1. pKa3 is greater than pKa2 because there is further charge separation. When an exception to Pauling's rule is found, it indicates that a major change in structure is also occurring. In the case of [VO2(H2O)4]+ (aq), the vanadium is octahedral, 6-coordinate, whereas vanadic acid is tetrahedral, 4-coordinate. This means that four "particles" are released with the first dissociation, but only two "particles" are released with the other dissociations, resulting in a much greater entropy contribution to the standard Gibbs free energy change for the first reaction than for the others.
{| class="wikitable"
! Equilibrium
! pKa
|-
| [VO2(H2O)4]+ <=> H3VO4 + H+ + 2H2O
| pKa1 ≈ 4.2
|-
| H3VO4 <=> H2VO4- + H+
| pKa2 ≈ 2.60
|-
| H2VO4- <=> HVO4^2- + H+
| pKa3 ≈ 7.92
|-
| HVO4^2- <=> VO4^3- + H+
| pKa4 ≈ 13.27
|}
Isoelectric point
For substances in solution, the isoelectric point (pI) is defined as the pH at which the sum, weighted by charge value, of concentrations of positively charged species is equal to the weighted sum of concentrations of negatively charged species. In the case that there is one species of each type, the isoelectric point can be obtained directly from the pK values. Take the example of glycine, defined as AH. There are two dissociation equilibria to consider.
AH2+ <=> AH + H+ \qquad [AH][H+] = \mathit{K}_1 [AH2+]
AH <=> A^- + H+ \qquad [A^-][H+] = \mathit{K}_2 [AH]
Substitute the expression for [AH] from the second equation into the first equation
[A^- ][H+]^2 = \mathit{K}_1 \mathit{K}_2 [AH2+]
At the isoelectric point the concentration of the positively charged species, AH2+, is equal to the concentration of the negatively charged species, A−, so

[H+]^2 = \mathit{K}_1 \mathit{K}_2

Therefore, taking cologarithms, the pH is given by

pI = \frac{pK_1 + pK_2}{2}
pI values for amino acids are listed at proteinogenic amino acid. When more than two charged species are in equilibrium with each other a full speciation calculation may be needed.
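A one-line worked example of the result just derived; the glycine pK values (~2.34 and ~9.60) are commonly quoted literature figures assumed for illustration:

```python
def isoelectric_point(pK1: float, pK2: float) -> float:
    """pI = (pK1 + pK2) / 2 for a molecule with one positive and one
    negative species, as derived above."""
    return 0.5 * (pK1 + pK2)

# Glycine: pK1 ~2.34 (carboxyl), pK2 ~9.60 (amino)
print(isoelectric_point(2.34, 9.60))  # -> 5.97
```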
Bases and basicity
The equilibrium constant Kb for a base is usually defined as the association constant for protonation of the base, B, to form the conjugate acid, HB+.
B + H2O <=> HB+ + OH-
Using similar reasoning to that used before,

K_\text{b} = \frac{[HB^+][OH^-]}{[B]}
Kb is related to Ka for the conjugate acid. In water, the concentration of the hydroxide ion, [OH−], is related to the concentration of the hydrogen ion by Kw = [H+][OH−], therefore

[OH^-] = \frac{K_\text{w}}{[H^+]}

Substitution of the expression for [OH−] into the expression for Kb gives

K_\text{b} = \frac{[HB^+]\, K_\text{w}}{[B][H^+]} = \frac{K_\text{w}}{K_\text{a}}

When Ka, Kb and Kw are determined under the same conditions of temperature and ionic strength, it follows, taking cologarithms, that pKb = pKw − pKa. In aqueous solutions at 25 °C, pKw is 13.9965, so

pK_\text{b} \approx 14 - pK_\text{a}
with sufficient accuracy for most practical purposes. In effect there is no need to define pKb separately from pKa, but it is done here as often only pKb values can be found in the older literature.
For a hydrolyzed metal ion, Kb can also be defined as a stepwise dissociation constant
This is the reciprocal of an association constant for formation of the complex.
Basicity expressed as dissociation constant of conjugate acid
Because the relationship pKb = pKw − pKa holds only in aqueous solutions (though analogous relationships apply for other amphoteric solvents), subdisciplines of chemistry like organic chemistry that usually deal with nonaqueous solutions generally do not use pKb as a measure of basicity. Instead, the pKa of the conjugate acid, denoted by pKaH, is quoted when basicity needs to be quantified. For base B and its conjugate acid BH+ in equilibrium, this is defined as

K_\text{aH} = \frac{[B][H^+]}{[BH^+]}, \qquad pK_\text{aH} = -\log_{10} K_\text{aH}
A higher value for pKaH corresponds to a stronger base. For example, the values pKaH = 10.75 and pKaH = 5.25 indicate that Et3N (triethylamine) is a stronger base than C5H5N (pyridine).
Amphoteric substances
An amphoteric substance is one that can act as an acid or as a base, depending on pH. Water (below) is amphoteric. Another example of an amphoteric molecule is the bicarbonate ion that is the conjugate base of the carbonic acid molecule H2CO3 in the equilibrium
H2CO3 + H2O <=> HCO3- + H3O+
but also the conjugate acid of the carbonate ion in (the reverse of) the equilibrium
HCO3- + OH- <=> CO3^2- + H2O
Carbonic acid equilibria are important for acid–base homeostasis in the human body.
An amino acid is also amphoteric with the added complication that the neutral molecule is subject to an internal acid–base equilibrium in which the basic amino group attracts and binds the proton from the acidic carboxyl group, forming a zwitterion.
NH2CHRCO2H <=> NH3+CHRCO2-
At pH less than about 5 both the carboxylate group and the amino group are protonated. As pH increases the acid dissociates according to
NH3+CHRCO2H <=> NH3+CHRCO2- + H+
At high pH a second dissociation may take place.
NH3+CHRCO2- <=> NH2CHRCO2- + H+
Thus the amino acid molecule is amphoteric because it may either be protonated or deprotonated.
Water self-ionization
The water molecule may either gain or lose a proton. It is said to be amphiprotic. The ionization equilibrium can be written
H2O <=> OH- + H+
where H+ in aqueous solution denotes a solvated proton. Often this is written as the hydronium ion H3O+, but this formula is not exact because in fact there is solvation by more than one water molecule and species such as H5O2+, H7O3+, and H9O4+ are also present.
The equilibrium constant is given by

K_\text{a} = \frac{a_{H^+}\, a_{OH^-}}{a_{H_2O}}

With solutions in which the solute concentrations are not very high, the concentration of H2O can be assumed to be constant, regardless of solute(s); this expression may then be replaced by

K_\text{w} = [H^+][OH^-]
The self-ionization constant of water, Kw, is thus just a special case of an acid dissociation constant. A logarithmic form analogous to pKa may also be defined

pK_\text{w} = -\log_{10} K_\text{w}

These data can be modelled by a parabola with

pK_\text{w} = 14.94 - 0.04209\,T + 0.0001718\,T^2 \qquad (T in °C)

From this equation, pKw = 14 at 24.87 °C. At that temperature both hydrogen and hydroxide ions have a concentration of 10^−7 M.
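A sketch (Python) evaluating the parabola reconstructed above, confirming pKw ≈ 14 at 24.87 °C, where neutral pH is 7.00:

```python
def pKw(T_celsius: float) -> float:
    """Parabolic fit to the self-ionization constant of water
    (coefficients as quoted above; valid near room temperature)."""
    return 14.94 - 0.04209 * T_celsius + 0.0001718 * T_celsius ** 2

print(pKw(25.0))   # ~13.995
print(pKw(24.87))  # ~14.00: neutral pH is 7.00 at this temperature
```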
Acidity in nonaqueous solutions
A solvent will be more likely to promote ionization of a dissolved acidic molecule in the following circumstances:
It is a protic solvent, capable of forming hydrogen bonds.
It has a high donor number, making it a strong Lewis base.
It has a high dielectric constant (relative permittivity), making it a good solvent for ionic species.
pKa values of organic compounds are often obtained using the aprotic solvents dimethyl sulfoxide (DMSO) and acetonitrile (ACN).
DMSO is widely used as an alternative to water because it has a lower dielectric constant than water, and is less polar, so it dissolves non-polar, hydrophobic substances more easily. It has a measurable pKa range of about 1 to 30. Acetonitrile is less basic than DMSO, so, in general, acids are weaker and bases are stronger in this solvent. Some pKa values at 25 °C for acetonitrile (ACN) and dimethyl sulfoxide (DMSO) are shown in the following tables. Values for water are included for comparison.
Ionization of acids is less in an acidic solvent than in water. For example, hydrogen chloride is a weak acid when dissolved in acetic acid. This is because acetic acid is a much weaker base than water.
HCl + CH3CO2H <=> Cl- + CH3C(OH)2+
Compare this reaction with what happens when acetic acid is dissolved in the more acidic solvent pure sulfuric acid:
H2SO4 + CH3CO2H <=> HSO4- + CH3C(OH)2+
The unlikely geminal diol species CH3C(OH)2+ is stable in these environments. For aqueous solutions the pH scale is the most convenient acidity function. Other acidity functions have been proposed for non-aqueous media, the most notable being the Hammett acidity function, H0, for superacid media and its modified version H− for superbasic media.
In aprotic solvents, oligomers, such as the well-known acetic acid dimer, may be formed by hydrogen bonding. An acid may also form hydrogen bonds to its conjugate base. This process, known as homoconjugation, has the effect of enhancing the acidity of acids, lowering their effective pKa values, by stabilizing the conjugate base. Homoconjugation enhances the proton-donating power of toluenesulfonic acid in acetonitrile solution by a factor of nearly 800.
In aqueous solutions, homoconjugation does not occur, because water forms stronger hydrogen bonds to the conjugate base than does the acid.
Mixed solvents
When a compound has limited solubility in water it is common practice (in the pharmaceutical industry, for example) to determine pKa values in a solvent mixture such as water/dioxane or water/methanol, in which the compound is more soluble. In the example shown at the right, the pKa value rises steeply with increasing percentage of dioxane as the dielectric constant of the mixture is decreasing.
A pKa value obtained in a mixed solvent cannot be used directly for aqueous solutions. The reason for this is that when the solvent is in its standard state its activity is defined as one. For example, the standard state of water:dioxane mixture with 9:1 mixing ratio is precisely that solvent mixture, with no added solutes. To obtain the pKa value for use with aqueous solutions it has to be extrapolated to zero co-solvent concentration from values obtained from various co-solvent mixtures.
These facts are obscured by the omission of the solvent from the expression that is normally used to define pKa, but pKa values obtained in a given mixed solvent can be compared to each other, giving relative acid strengths. The same is true of pKa values obtained in a particular non-aqueous solvent such a DMSO.
A universal, solvent-independent, scale for acid dissociation constants has not been developed, since there is no known way to compare the standard states of two different solvents.
Factors that affect pKa values
Pauling's second rule is that the value of the first pKa for acids of the formula XOm(OH)n depends primarily on the number of oxo groups m, and is approximately independent of the number of hydroxy groups n, and also of the central atom X. Approximate values of pKa are 8 for m = 0, 2 for m = 1, −3 for m = 2 and < −10 for m = 3. Alternatively, various numerical formulas have been proposed including pKa = 8 − 5m (known as Bell's rule), pKa = 7 − 5m, or pKa = 9 − 7m. The dependence on m correlates with the oxidation state of the central atom, X: the higher the oxidation state the stronger the oxyacid.
For example, pKa for HClO is 7.2, for HClO2 is 2.0, for HClO3 is −1, and HClO4 is a strong acid (pKa < −10). The increased acidity on adding an oxo group is due to stabilization of the conjugate base by delocalization of its negative charge over an additional oxygen atom. This rule can help assign molecular structure: for example, phosphorous acid, having molecular formula H3PO3, has a pKa near 2, which suggested that the structure is HPO(OH)2, as later confirmed by NMR spectroscopy, and not P(OH)3, which would be expected to have a pKa near 8.
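A sketch (Python) comparing Bell's rule, pKa ≈ 8 − 5m, with the observed chlorine oxyacid values listed above:

```python
def pauling_pKa_estimate(m: int) -> float:
    """Bell's rule estimate for an oxyacid XO_m(OH)_n: pKa ~ 8 - 5m."""
    return 8 - 5 * m

# Chlorine oxyacids (observed values from the text):
for name, m, observed in [("HClO", 0, 7.2), ("HClO2", 1, 2.0),
                          ("HClO3", 2, -1.0)]:
    print(name, "estimate:", pauling_pKa_estimate(m), "observed:", observed)
```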
Inductive effects and mesomeric effects affect the pKa values. A simple example is provided by the effect of replacing the hydrogen atoms in acetic acid by the more electronegative chlorine atom. The electron-withdrawing effect of the substituent makes ionisation easier, so successive pKa values decrease in the series 4.7, 2.8, 1.4, and 0.7 when 0, 1, 2, or 3 chlorine atoms are present. The Hammett equation provides a general expression for the effect of substituents.
log(Ka) = log(K) + ρσ.
Ka is the dissociation constant of a substituted compound, K is the dissociation constant when the substituent is hydrogen, ρ is a property of the unsubstituted compound and σ has a particular value for each substituent. A plot of log(Ka) against σ is a straight line with intercept log(K) and slope ρ. This is an example of a linear free energy relationship as log(Ka) is proportional to the standard free energy change. Hammett originally formulated the relationship with data from benzoic acid with different substituents in the ortho- and para- positions: some numerical values are in Hammett equation. This and other studies allowed substituents to be ordered according to their electron-withdrawing or electron-releasing power, and to distinguish between inductive and mesomeric effects.
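A sketch (Python) of the Hammett relation in pKa form, pKa = pKa(parent) − ρσ. The benzoic acid pKa (4.20), ρ = 1.0, and σpara(NO2) ≈ 0.78 are standard literature values assumed for illustration:

```python
def pKa_substituted(pKa_parent: float, rho: float, sigma: float) -> float:
    """Hammett relation: log Ka = log Ka(parent) + rho * sigma,
    i.e. pKa = pKa(parent) - rho * sigma."""
    return pKa_parent - rho * sigma

# Benzoic acid ionization in water: pKa 4.20, rho = 1.0 by definition.
print(pKa_substituted(4.20, 1.0, 0.78))  # ~3.42; 4-nitrobenzoic acid
```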
Alcohols do not normally behave as acids in water, but the presence of a double bond adjacent to the OH group can substantially decrease the pKa by the mechanism of keto–enol tautomerism. Ascorbic acid is an example of this effect. The diketone 2,4-pentanedione (acetylacetone) is also a weak acid because of the keto–enol equilibrium. In aromatic compounds, such as phenol, which have an OH substituent, conjugation with the aromatic ring as a whole greatly increases the stability of the deprotonated form.
Structural effects can also be important. The difference between fumaric acid and maleic acid is a classic example. Fumaric acid is (E)-1,4-but-2-enedioic acid, a trans isomer, whereas maleic acid is the corresponding cis isomer, i.e. (Z)-1,4-but-2-enedioic acid (see cis-trans isomerism). Fumaric acid has pKa values of approximately 3.0 and 4.5. By contrast, maleic acid has pKa values of approximately 1.5 and 6.5. The reason for this large difference is that when one proton is removed from the cis isomer (maleic acid) a strong intramolecular hydrogen bond is formed with the nearby remaining carboxyl group. This favors the formation of the hydrogen maleate mono-anion, and it opposes the removal of the second proton from that species. In the trans isomer, the two carboxyl groups are always far apart, so hydrogen bonding is not observed.
Proton sponge, 1,8-bis(dimethylamino)naphthalene, has a pKa value of 12.1. It is one of the strongest amine bases known. The high basicity is attributed to the relief of strain upon protonation and strong internal hydrogen bonding.
Effects of the solvent and solvation should be mentioned also in this section. It turns out, these influences are more subtle than that of a dielectric medium mentioned above. For example, the expected (by electronic effects of methyl substituents) and observed in gas phase order of basicity of methylamines, Me3N > Me2NH > MeNH2 > NH3, is changed by water to Me2NH > MeNH2 > Me3N > NH3. Neutral methylamine molecules are hydrogen-bonded to water molecules mainly through one acceptor, N–HOH, interaction and only occasionally just one more donor bond, NH–OH2. Hence, methylamines are stabilized to about the same extent by hydration, regardless of the number of methyl groups. In stark contrast, corresponding methylammonium cations always utilize all the available protons for donor NH–OH2 bonding. Relative stabilization of methylammonium ions thus decreases with the number of methyl groups explaining the order of water basicity of methylamines.
Thermodynamics
An equilibrium constant is related to the standard Gibbs energy change for the reaction, so for an acid dissociation constant
ΔG = −RT ln(Ka) = 2.303 RT pKa.
R is the gas constant and T is the absolute temperature. Note that pKa = −log(Ka) and 2.303 ≈ ln(10). At 25 °C, ΔG in kJ·mol−1 ≈ 5.708 pKa (1 kJ·mol−1 = 1000 joules per mole). Free energy is made up of an enthalpy term and an entropy term.
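As a quick numerical check of the conversion factor quoted above, here is a minimal Python sketch; the acetic acid pKa of 4.756 is a typical literature value used only for illustration:

```python
import math

R = 8.314    # J/(K*mol), gas constant
T = 298.15   # K, i.e. 25 degrees Celsius

def delta_g_standard(pka: float) -> float:
    """Standard Gibbs energy of dissociation in kJ/mol: dG = RT ln(10) pKa."""
    return R * T * math.log(10) * pka / 1000.0

# Factor per pKa unit at 25 C: ~5.708 kJ/mol, as quoted above.
print(delta_g_standard(1.0))
# Acetic acid (pKa ~ 4.756, illustrative value): ~27.1 kJ/mol.
print(delta_g_standard(4.756))
```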
The standard enthalpy change can be determined by calorimetry or by using the van 't Hoff equation, though the calorimetric method is preferable. When both the standard enthalpy change and acid dissociation constant have been determined, the standard entropy change is easily calculated from the equation above. In the following table, the entropy terms are calculated from the experimental values of pKa and ΔH. The data were critically selected and refer to 25 °C and zero ionic strength, in water.
The first point to note is that, when pKa is positive, the standard free energy change for the dissociation reaction is also positive. Second, some reactions are exothermic and some are endothermic, but, when ΔH is negative, −TΔS is the dominant factor, which determines that ΔG is positive. Last, the entropy contribution is always unfavourable (ΔS < 0) in these reactions. Ions in aqueous solution tend to orient the surrounding water molecules, which orders the solution and decreases the entropy. The contribution of an ion to the entropy is the partial molar entropy, which is often negative, especially for small or highly charged ions. The ionization of a neutral acid involves formation of two ions, so the entropy decreases (ΔS < 0). On the second ionization of the same acid, there are now three ions and the anion has a charge, so the entropy again decreases.
Note that the standard free energy change for the reaction is for the changes from the reactants in their standard states to the products in their standard states. The free energy change at equilibrium is zero since the chemical potentials of reactants and products are equal at equilibrium.
Experimental determination
The experimental determination of pKa values is commonly performed by means of titrations, in a medium of high ionic strength and at constant temperature. A typical procedure would be as follows. A solution of the compound in the medium is acidified with a strong acid to the point where the compound is fully protonated. The solution is then titrated with a strong base until all the protons have been removed. At each point in the titration pH is measured using a glass electrode and a pH meter. The equilibrium constants are found by fitting calculated pH values to the observed values, using the method of least squares.
The total volume of added strong base should be small compared to the initial volume of titrand solution in order to keep the ionic strength nearly constant. This will ensure that pKa remains invariant during the titration.
A calculated titration curve for oxalic acid is shown at the right. Oxalic acid has pKa values of 1.27 and 4.27. Therefore, the buffer regions will be centered at about pH 1.3 and pH 4.3. The buffer regions carry the information necessary to get the pKa values as the concentrations of acid and conjugate base change along a buffer region.
Between the two buffer regions there is an end-point, or equivalence point, at about pH 3. This end-point is not sharp and is typical of a diprotic acid whose buffer regions overlap by a small amount: pKa2 − pKa1 is about three in this example. (If the difference in pK values were about two or less, the end-point would not be noticeable.) The second end-point begins at about pH 6.3 and is sharp. This indicates that all the protons have been removed. When this is so, the solution is not buffered and the pH rises steeply on addition of a small amount of strong base. However, the pH does not continue to rise indefinitely. A new buffer region begins at about pH 11 (pKw − 3), which is where self-ionization of water becomes important.
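A curve like the one described can be computed directly from the charge balance. The sketch below is a minimal illustration, not the least-squares fitting procedure used in practice to extract pKa values: it assumes ideal behaviour (unit activity coefficients), neglects dilution by the titrant, uses the oxalic acid pKa values quoted above, and solves the charge-balance equation for pH by bisection at a few points along the titration.

```python
# Titration of a diprotic acid H2A (oxalic acid) with strong base.
# pKa1 = 1.27, pKa2 = 4.27 as quoted above; Kw = 1e-14 at 25 C.
KA1, KA2, KW = 10**-1.27, 10**-4.27, 1e-14

def charge_balance(ph, ca, cb):
    # [H+] + [Na+] - [OH-] - [HA-] - 2[A2-], using speciation fractions.
    h = 10**-ph
    d = h*h + KA1*h + KA1*KA2
    ha = ca * KA1*h / d        # [HA-]
    a2 = ca * KA1*KA2 / d      # [A2-]
    return h + cb - KW/h - ha - 2*a2

def ph_for_base(ca, cb, lo=0.0, hi=14.0, tol=1e-10):
    # Bisection: the charge balance decreases monotonically with pH.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if charge_balance(mid, ca, cb) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

ca = 0.05  # mol/L oxalic acid (illustrative concentration)
for frac in (0.0, 0.5, 1.0, 1.5, 2.0, 2.2):
    print(f"base/acid = {frac:.1f}  pH = {ph_for_base(ca, frac*ca):.2f}")
```

At one equivalent of base the computed pH is close to (pKa1 + pKa2)/2 ≈ 2.8, the indistinct first end-point described above, and the pH jumps sharply near two equivalents.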
It is very difficult to measure pH values of less than two in aqueous solution with a glass electrode, because the Nernst equation breaks down at such low pH values. To determine pK values of less than about 2 or more than about 11 spectrophotometric or NMR measurements may be used instead of, or combined with, pH measurements.
When the glass electrode cannot be employed, as with non-aqueous solutions, spectrophotometric methods are frequently used. These may involve absorbance or fluorescence measurements. In both cases the measured quantity is assumed to be proportional to the sum of contributions from each photo-active species; with absorbance measurements the Beer–Lambert law is assumed to apply.
Isothermal titration calorimetry (ITC) may be used to determine both a pK value and the corresponding standard enthalpy for acid dissociation. Software to perform the calculations is supplied by the instrument manufacturers for simple systems.
Aqueous solutions with normal water cannot be used for 1H NMR measurements but heavy water, D2O, must be used instead. 13C NMR data, however, can be used with normal water and 1H NMR spectra can be used with non-aqueous media. The quantities measured with NMR are time-averaged chemical shifts, as proton exchange is fast on the NMR time-scale. Other chemical shifts, such as those of 31P can be measured.
Micro-constants
For some polyprotic acids, dissociation (or association) occurs at more than one nonequivalent site, and the observed macroscopic equilibrium constant, or macro-constant, is a combination of micro-constants involving distinct species. When one reactant forms two products in parallel, the macro-constant is a sum of two micro-constants, K = KX + KY. This is true for example for the deprotonation of the amino acid cysteine, which exists in solution as a neutral zwitterion HS−CH2−CH(NH3+)−COO−. The two micro-constants represent deprotonation either at sulphur or at nitrogen, and the macro-constant sum here is the acid dissociation constant Ka = Ka(−SH) + Ka(−NH3+).
Similarly, a base such as spermine has more than one site where protonation can occur. For example, mono-protonation can occur at a terminal group or at internal groups. The Kb values for dissociation of spermine protonated at one or other of the sites are examples of micro-constants. They cannot be determined directly by means of pH, absorbance, fluorescence or NMR measurements; a measured Kb value is the sum of the K values for the micro-reactions.
Nevertheless, the site of protonation is very important for biological function, so mathematical methods have been developed for the determination of micro-constants.
When two reactants form a single product in parallel, the macro-constant satisfies 1/K = 1/KX + 1/KY. For example, the abovementioned equilibrium for spermine may be considered in terms of the Ka values of two tautomeric conjugate acids; in this case the macro-constant is given by 1/Ka = 1/Ka,X + 1/Ka,Y. This is equivalent to the preceding expression, since Ka is proportional to 1/Kb.
When a reactant undergoes two reactions in series, the macro-constant for the combined reaction is the product of the micro-constants for the two steps. For example, the abovementioned cysteine zwitterion can lose two protons, one from sulphur and one from nitrogen, and the overall macro-constant for losing two protons is the product of two dissociation constants, K = K1K2. This can also be written in terms of logarithmic constants as pK = pK1 + pK2.
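These combination rules are easy to verify numerically. A minimal Python sketch with hypothetical micro-pK values chosen for illustration (not the cysteine or spermine values):

```python
import math

def pk_from_k(k):
    return -math.log10(k)

# Parallel deprotonation at two nonequivalent sites:
# the macro-constant is the sum of the micro-constants.
pk_site_x, pk_site_y = 8.5, 9.0   # hypothetical micro-pK values
k_macro = 10**-pk_site_x + 10**-pk_site_y
print(f"parallel: macro pKa = {pk_from_k(k_macro):.2f}")  # ~8.38, below either micro-pK

# Two reactions in series: the macro-constant is the product,
# so the logarithmic constants simply add: pK = pK1 + pK2.
pk1, pk2 = 1.7, 10.3              # hypothetical stepwise values
print(f"series: overall pK = {pk1 + pk2:.1f}")
```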
Applications and significance
A knowledge of pKa values is important for the quantitative treatment of systems involving acid–base equilibria in solution. Many applications exist in biochemistry; for example, the pKa values of proteins and amino acid side chains are of major importance for the activity of enzymes and the stability of proteins. Protein pKa values cannot always be measured directly, but may be calculated using theoretical methods. Buffer solutions are used extensively to provide solutions at or near the physiological pH for the study of biochemical reactions; the design of these solutions depends on a knowledge of the pKa values of their components. Important buffer solutions include MOPS, which provides a solution with pH 7.2, and tricine, which is used in gel electrophoresis. Buffering is an essential part of acid–base physiology including acid–base homeostasis, and is key to understanding disorders such as acid–base imbalance. The isoelectric point of a given molecule is a function of its pK values, so different molecules have different isoelectric points. This permits a technique called isoelectric focusing, which is used for separation of proteins by two-dimensional polyacrylamide gel electrophoresis.
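For the simplest case, a diprotic molecule with no other ionizable groups, the isoelectric point is the mean of its two pKa values. A minimal sketch; the glycine values pK1 ≈ 2.34 and pK2 ≈ 9.60 are typical literature figures used only for illustration:

```python
# Isoelectric point of a simple diprotic amino acid: the pH at which the
# average charge is zero, midway between the two pKa values.
def isoelectric_point(pk1: float, pk2: float) -> float:
    return 0.5 * (pk1 + pk2)

print(isoelectric_point(2.34, 9.60))  # glycine: ~5.97
```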
Buffer solutions also play a key role in analytical chemistry. They are used whenever there is a need to fix the pH of a solution at a particular value. Compared with an aqueous solution, the pH of a buffer solution is relatively insensitive to the addition of a small amount of strong acid or strong base. The buffer capacity of a simple buffer solution is largest when pH = pKa. In acid–base extraction, the efficiency of extraction of a compound into an organic phase, such as an ether, can be optimised by adjusting the pH of the aqueous phase using an appropriate buffer. At the optimum pH, the concentration of the electrically neutral species is maximised; such a species is more soluble in organic solvents having a low dielectric constant than it is in water. This technique is used for the purification of weak acids and bases.
A pH indicator is a weak acid or weak base that changes colour in the transition pH range, which is approximately pKa ± 1. The design of a universal indicator requires a mixture of indicators whose adjacent pKa values differ by about two, so that their transition pH ranges just overlap.
In pharmacology, ionization of a compound alters its physical behaviour and macro properties such as solubility and lipophilicity (log p). For example, ionization of any compound will increase the solubility in water, but decrease the lipophilicity. This is exploited in drug development to increase the concentration of a compound in the blood by adjusting the pKa of an ionizable group.
Knowledge of pKa values is important for the understanding of coordination complexes, which are formed by the interaction of a metal ion, Mm+, acting as a Lewis acid, with a ligand, L, acting as a Lewis base. However, the ligand may also undergo protonation reactions, so the formation of a complex in aqueous solution could be represented symbolically by the reaction

Mm+ + LH ⇌ ML(m−1)+ + H+
To determine the equilibrium constant for this reaction, in which the ligand loses a proton, the pKa of the protonated ligand must be known. In practice, the ligand may be polyprotic; for example EDTA4− can accept four protons; in that case, all pKa values must be known. In addition, the metal ion is subject to hydrolysis, that is, it behaves as a weak acid, so the pK values for the hydrolysis reactions must also be known.
Assessing the hazard associated with an acid or base may require a knowledge of pKa values. For example, hydrogen cyanide is a very toxic gas, because the cyanide ion inhibits the iron-containing enzyme cytochrome c oxidase. Hydrogen cyanide is a weak acid in aqueous solution with a pKa of about 9. In strongly alkaline solutions, above pH 11, say, it follows that sodium cyanide is "fully dissociated" so the hazard due to the hydrogen cyanide gas is much reduced. An acidic solution, on the other hand, is very hazardous because all the cyanide is in its acid form. Ingestion of cyanide by mouth is potentially fatal, independently of pH, because of the reaction with cytochrome c oxidase.
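The pH dependence of the hazard can be quantified with the Henderson–Hasselbalch relation. A minimal Python sketch using the approximate pKa of 9 quoted above:

```python
# Fraction of total cyanide present as volatile HCN at a given pH,
# from the Henderson-Hasselbalch relation; pKa ~ 9 as stated above.
PKA_HCN = 9.0

def fraction_hcn(ph: float) -> float:
    return 1.0 / (1.0 + 10**(ph - PKA_HCN))

for ph in (5, 7, 9, 11, 13):
    print(f"pH {ph:2d}: {100*fraction_hcn(ph):6.2f}% HCN")
```

At pH 11 less than 1% of the cyanide remains as HCN, consistent with the reduced hazard of strongly alkaline solutions described above.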
In environmental science acid–base equilibria are important for lakes and rivers; for example, humic acids are important components of natural waters. Another example occurs in chemical oceanography: in order to quantify the solubility of iron(III) in seawater at various salinities, the pKa values for the formation of the iron(III) hydrolysis products FeOH2+, Fe(OH)2+ and Fe(OH)3 were determined, along with the solubility product of iron hydroxide.
Values for common substances
There are multiple techniques to determine the pKa of a chemical, leading to some discrepancies between different sources. Well-measured values are typically within 0.1 units of each other. Data presented here were taken at 25 °C in water. More values can be found in the Thermodynamics section, above. A table of pKa of carbon acids, measured in DMSO, can be found on the page on carbanions.
See also
Acidosis
Acids in wine: tartaric, malic and citric are the principal acids in wine.
Alkalosis
Arterial blood gas
Chemical equilibrium
Conductivity (electrolytic)
Grotthuss mechanism: how protons are transferred between hydronium ions and water molecules, accounting for the exceptionally high ionic mobility of the proton.
Hammett acidity function: a measure of acidity that is used for very concentrated solutions of strong acids, including superacids.
Ion transport number
Ocean acidification: dissolution of atmospheric carbon dioxide affects seawater pH. The reaction depends on total inorganic carbon and on solubility equilibria with solid carbonates such as limestone and dolomite.
Law of dilution
pCO2
pH
Predominance diagram: relates to equilibria involving polyoxyanions. pKa values are needed to construct these diagrams.
Proton affinity: a measure of basicity in the gas phase.
Stability constants of complexes: formation of a complex can often be seen as a competition between proton and metal ion for a ligand, which is the product of dissociation of an acid.
Notes
References
Further reading
(Previous edition published as )
(Non-aqueous solvents)
(translation editor: Mary R. Masson)
Chapter 4: Solvent Effects on the Position of Homogeneous Chemical Equilibria.
External links
Acidity–Basicity Data in Nonaqueous Solvents Extensive bibliography of pKa values in DMSO, acetonitrile, THF, heptane, 1,2-dichloroethane, and in the gas phase
Curtipot All-in-one freeware for pH and acid–base equilibrium calculations and for simulation and analysis of potentiometric titration curves with spreadsheets
SPARC Physical/Chemical property calculator Includes a database with aqueous, non-aqueous, and gaseous phase pKa values that can be searched using SMILES or CAS registry numbers
Aqueous-Equilibrium Constants pKa values for various acid and bases. Includes a table of some solubility products
Free guide to pKa and log p interpretation and measurement Explanations of the relevance of these properties to pharmacology
Free online prediction tool (Marvin) pKa, log p, log d etc. From ChemAxon
Chemicalize.org: List of predicted structure-based properties
pKa Chart by David A. Evans
Equilibrium chemistry
Acids
Bases (chemistry)
Analytical chemistry
Physical chemistry | Acid dissociation constant | [
"Physics",
"Chemistry"
] | 10,292 | [
"Applied and interdisciplinary physics",
"Acids",
"Equilibrium chemistry",
"Bases (chemistry)",
"nan",
"Physical chemistry"
] |
57,559 | https://en.wikipedia.org/wiki/Predation | Predation is a biological interaction in which one organism, the predator, kills and eats another organism, its prey. It is one of a family of common feeding behaviours that includes parasitism and micropredation (which usually do not kill the host) and parasitoidism (which always does, eventually). It is distinct from scavenging on dead prey, though many predators also scavenge; it overlaps with herbivory, as seed predators and destructive frugivores are predators.
Predation behavior varies significantly depending on the organism. Many predators, especially carnivores, have evolved distinct hunting strategies. Pursuit predation involves the active search for and pursuit of prey, whilst ambush predators instead wait for prey to present an opportunity for capture, and often use stealth or aggressive mimicry. Other predators are opportunistic or omnivorous and only practice predation occasionally.
Most obligate carnivores are specialized for hunting. They may have acute senses such as vision, hearing, or smell for prey detection. Many predatory animals have sharp claws or jaws to grip, kill, and cut up their prey. Physical strength is usually necessary for large carnivores such as big cats to kill larger prey. Other adaptations include stealth, endurance, intelligence, social behaviour, and aggressive mimicry that improve hunting efficiency.
Predation has a powerful selective effect on prey, and the prey develops anti-predator adaptations such as warning colouration, alarm calls and other signals, camouflage, mimicry of well-defended species, and defensive spines and chemicals. Sometimes predator and prey find themselves in an evolutionary arms race, a cycle of adaptations and counter-adaptations. Predation has been a major driver of evolution since at least the Cambrian period.
Definition
At the most basic level, predators kill and eat other organisms. However, the concept of predation is broad, defined differently in different contexts, and includes a wide variety of feeding methods; moreover, some relationships that result in the prey's death are not necessarily called predation. A parasitoid, such as an ichneumon wasp, lays its eggs in or on its host; the eggs hatch into larvae, which eat the host, and it inevitably dies. Zoologists generally call this a form of parasitism, though conventionally parasites are thought not to kill their hosts. A predator can be defined to differ from a parasitoid in that it has many prey, captured over its lifetime, whereas a parasitoid's larva has just one, or at least has its food supply provisioned for it on just one occasion.
There are other difficult and borderline cases. Micropredators are small animals that, like predators, feed entirely on other organisms; they include fleas and mosquitoes that consume blood from living animals, and aphids that consume sap from living plants. However, since they typically do not kill their hosts, they are now often thought of as parasites. Animals that graze on phytoplankton or mats of microbes are predators, as they consume and kill their food organisms, while herbivores that browse leaves are not, as their food plants usually survive the assault. When animals eat seeds (seed predation or granivory) or eggs (egg predation), they are consuming entire living organisms, which by definition makes them predators.
Scavengers, organisms that only eat organisms found already dead, are not predators, but many predators such as the jackal and the hyena scavenge when the opportunity arises. Among invertebrates, social wasps such as yellowjackets are both hunters and scavengers of other insects.
Taxonomic range
While examples of predators among mammals and birds are well known, predators can be found in a broad range of taxa including arthropods. They are common among insects, including mantids, dragonflies, lacewings and scorpionflies. In some species such as the alderfly, only the larvae are predatory (the adults do not eat). Spiders are predatory, as well as other terrestrial invertebrates such as scorpions; centipedes; some mites, snails and slugs; nematodes; and planarian worms. In marine environments, most cnidarians (e.g., jellyfish, hydroids), ctenophora (comb jellies), echinoderms (e.g., sea stars, sea urchins, sand dollars, and sea cucumbers) and flatworms are predatory. Among crustaceans, lobsters, crabs, shrimps and barnacles are predators, and in turn crustaceans are preyed on by nearly all cephalopods (including octopuses, squid and cuttlefish).
Seed predation is restricted to mammals, birds, and insects but is found in almost all terrestrial ecosystems. Egg predation includes both specialist egg predators such as some colubrid snakes and generalists such as foxes and badgers that opportunistically take eggs when they find them.
Some plants, like the pitcher plant, the Venus fly trap and the sundew, are carnivorous and consume insects. Methods of predation by plants vary greatly but often involve a food trap, mechanical stimulation, and electrical impulses to eventually catch and consume prey. Some carnivorous fungi catch nematodes using either active traps in the form of constricting rings, or passive traps with adhesive structures.
Many species of protozoa (eukaryotes) and bacteria (prokaryotes) prey on other microorganisms; the feeding mode is evidently ancient, and evolved many times in both groups. Among freshwater and marine zooplankton, whether single-celled or multi-cellular, predatory grazing on phytoplankton and smaller zooplankton is common, and found in many species of nanoflagellates, dinoflagellates, ciliates, rotifers, a diverse range of meroplankton animal larvae, and two groups of crustaceans, namely copepods and cladocerans.
Foraging
To feed, a predator must search for, pursue and kill its prey. These actions form a foraging cycle. The predator must decide where to look for prey based on its geographical distribution; and once it has located prey, it must assess whether to pursue it or to wait for a better choice. If it chooses pursuit, its physical capabilities determine the mode of pursuit (e.g., ambush or chase). Having captured the prey, it may also need to expend energy handling it (e.g., killing it, removing any shell or spines, and ingesting it).
Search
Predators have a choice of search modes ranging from sit-and-wait to active or widely foraging. The sit-and-wait method is most suitable if the prey are dense and mobile, and the predator has low energy requirements. Wide foraging expends more energy, and is used when prey is sedentary or sparsely distributed. There is a continuum of search modes with intervals between periods of movement ranging from seconds to months. Sharks, sunfish, insectivorous birds and shrews are almost always moving while web-building spiders, aquatic invertebrates, praying mantises and kestrels rarely move. In between, plovers and other shorebirds, freshwater fish including crappies, and the larvae of coccinellid beetles (ladybirds), alternate between actively searching and scanning the environment.
Prey distributions are often clumped, and predators respond by looking for patches where prey is dense and then searching within patches. Where food is found in patches, such as rare shoals of fish in a nearly empty ocean, the search stage requires the predator to travel for a substantial time, and to expend a significant amount of energy, to locate each food patch. For example, the black-browed albatross regularly makes foraging flights of hundreds of kilometres, and breeding birds gathering food for their young range farther still. With static prey, some predators can learn suitable patch locations and return to them at intervals to feed. The optimal foraging strategy for search has been modelled using the marginal value theorem.
Search patterns often appear random. One such is the Lévy walk, that tends to involve clusters of short steps with occasional long steps. It is a good fit to the behaviour of a wide variety of organisms including bacteria, honeybees, sharks and human hunter-gatherers.
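A minimal Python sketch of such a two-dimensional Lévy walk, drawing step lengths from a heavy-tailed power law by inverse-transform sampling; the exponent and minimum step length are illustrative choices rather than values fitted to any particular species:

```python
import math, random

def levy_walk(n_steps, mu=2.0, l_min=1.0, seed=42):
    """2-D Levy walk: many short steps, occasional very long ones."""
    rng = random.Random(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        # Step-length pdf P(l) ~ l^(-mu) for l >= l_min;
        # inverse-transform sampling gives l = l_min * u^(1/(1-mu)).
        u = rng.random()
        l = l_min * u ** (1.0 / (1.0 - mu))
        theta = rng.uniform(0.0, 2.0 * math.pi)  # uniform random heading
        x += l * math.cos(theta)
        y += l * math.sin(theta)
        path.append((x, y))
    return path

path = levy_walk(1000)
print("net displacement:", math.hypot(*path[-1]))
```

Because of the heavy tail, the trajectory shows the characteristic clusters of short steps punctuated by occasional long relocations.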
Assessment
Having found prey, a predator must decide whether to pursue it or keep searching. The decision depends on the costs and benefits involved. A bird foraging for insects spends a lot of time searching but capturing and eating them is quick and easy, so the efficient strategy for the bird is to eat every palatable insect it finds. By contrast, a predator such as a lion or falcon finds its prey easily but capturing it requires a lot of effort. In that case, the predator is more selective.
One of the factors to consider is size. Prey that is too small may not be worth the trouble for the amount of energy it provides. Too large, and it may be too difficult to capture. For example, a mantid captures prey with its forelegs and they are optimized for grabbing prey of a certain size. Mantids are reluctant to attack prey that is far from that size. There is a positive correlation between the size of a predator and its prey.
A predator may assess a patch and decide whether to spend time searching for prey in it. This may involve some knowledge of the preferences of the prey; for example, ladybirds can choose a patch of vegetation suitable for their aphid prey.
Capture
To capture prey, predators have a spectrum of pursuit modes that range from overt chase (pursuit predation) to a sudden strike on nearby prey (ambush predation). Another strategy in between ambush and pursuit is ballistic interception, where a predator observes and predicts a prey's motion and then launches its attack accordingly.
Ambush
Ambush or sit-and-wait predators are carnivorous animals that capture prey by stealth or surprise. In animals, ambush predation is characterized by the predator's scanning the environment from a concealed position until a prey is spotted, and then rapidly executing a fixed surprise attack. Vertebrate ambush predators include frogs, fish such as the angel shark, the northern pike and the eastern frogfish. Among the many invertebrate ambush predators are trapdoor spiders and Australian crab spiders on land and mantis shrimps in the sea. Ambush predators often construct a burrow in which to hide, improving concealment at the cost of reducing their field of vision. Some ambush predators also use lures to attract prey within striking range. The capturing movement has to be rapid to trap the prey, given that the attack is not modifiable once launched.
Ballistic interception
Ballistic interception is the strategy where a predator observes the movement of a prey, predicts its motion, works out an interception path, and then attacks the prey on that path. This differs from ambush predation in that the predator adjusts its attack according to how the prey is moving. Ballistic interception involves a brief period for planning, giving the prey an opportunity to escape. Some frogs wait until snakes have begun their strike before jumping, reducing the time available to the snake to recalibrate its attack, and maximising the angular adjustment that the snake would need to make to intercept the frog in real time. Ballistic predators include insects such as dragonflies, and vertebrates such as archerfish (attacking with a jet of water), chameleons (attacking with their tongues), and some colubrid snakes.
Pursuit
In pursuit predation, predators chase fleeing prey. If the prey flees in a straight line, capture depends only on the predator's being faster than the prey. If the prey manoeuvres by turning as it flees, the predator must react in real time to calculate and follow a new intercept path, such as by parallel navigation, as it closes on the prey. Many pursuit predators use camouflage to approach the prey as close as possible unobserved (stalking) before starting the pursuit. Pursuit predators include terrestrial mammals such as humans, African wild dogs, spotted hyenas and wolves; marine predators such as dolphins, orcas and many predatory fishes, such as tuna; predatory birds (raptors) such as falcons; and insects such as dragonflies.
An extreme form of pursuit is endurance or persistence hunting, in which the predator tires out the prey by following it over a long distance, sometimes for hours at a time. The method is used by human hunter-gatherers and by canids such as African wild dogs and domestic hounds. The African wild dog is an extreme persistence predator, tiring out individual prey by following them for many miles at relatively low speed.
A specialised form of pursuit predation is the lunge feeding of baleen whales. These very large marine predators feed on plankton, especially krill, diving and actively swimming into concentrations of plankton, and then taking a huge gulp of water and filtering it through their feathery baleen plates.
Pursuit predators may be social, like the lion and wolf that hunt in groups, or solitary.
Handling
Once the predator has captured the prey, it has to handle it: very carefully if the prey is dangerous to eat, such as if it possesses sharp or poisonous spines, as in many prey fish. Some catfish such as the Ictaluridae have spines on the back (dorsal) and belly (pectoral) which lock in the erect position; as the catfish thrashes about when captured, these could pierce the predator's mouth, possibly fatally. Some fish-eating birds like the osprey avoid the danger of spines by tearing up their prey before eating it.
Solitary versus social predation
In social predation, a group of predators cooperates to kill prey. This makes it possible to kill creatures larger than those they could overpower singly; for example, hyenas and wolves collaborate to catch and kill herbivores as large as buffalo, and lions even hunt elephants. It can also make prey more readily available through strategies like flushing of prey and herding it into a smaller area. For example, when mixed flocks of birds forage, the birds in front flush out insects that are caught by the birds behind. Spinner dolphins form a circle around a school of fish and move inwards, concentrating the fish by a factor of 200. By hunting socially chimpanzees can catch colobus monkeys that would readily escape an individual hunter, while cooperating Harris hawks can trap rabbits.
Predators of different species sometimes cooperate to catch prey. In coral reefs, when fish such as the grouper and coral trout spot prey that is inaccessible to them, they signal to giant moray eels, Napoleon wrasses or octopuses. These predators are able to access small crevices and flush out the prey. Killer whales have been known to help whalers hunt baleen whales.
Social hunting allows predators to tackle a wider range of prey, but at the risk of competition for the captured food. Solitary predators have more chance of eating what they catch, at the price of increased expenditure of energy to catch it, and increased risk that the prey will escape. Ambush predators are often solitary to reduce the risk of becoming prey themselves. Of 245 terrestrial members of the Carnivora (the group that includes the cats, dogs, and bears), 177 are solitary; and 35 of the 37 wild cats are solitary, including the cougar and cheetah. However, the solitary cougar does allow other cougars to share in a kill, and the coyote can be either solitary or social. Other solitary predators include the northern pike, wolf spiders and all the thousands of species of solitary wasps among arthropods, and many microorganisms and zooplankton.
Specialization
Physical adaptations
Under the pressure of natural selection, predators have evolved a variety of physical adaptations for detecting, catching, killing, and digesting prey. These include speed, agility, stealth, sharp senses, claws, teeth, filters, and suitable digestive systems.
For detecting prey, predators have well-developed vision, smell, or hearing. Predators as diverse as owls and jumping spiders have forward-facing eyes, providing accurate binocular vision over a relatively narrow field of view, whereas prey animals often have less acute all-round vision. Animals such as foxes can smell their prey even when it is concealed under snow or earth. Many predators have acute hearing, and some such as echolocating bats hunt exclusively by active or passive use of sound.
Predators including big cats, birds of prey, and ants share powerful jaws, sharp teeth, or claws which they use to seize and kill their prey. Some predators such as snakes and fish-eating birds like herons and cormorants swallow their prey whole; some snakes can unhinge their jaws to allow them to swallow large prey, while fish-eating birds have long spear-like beaks that they use to stab and grip fast-moving and slippery prey. Fish and other predators have developed the ability to crush or open the armoured shells of molluscs.
Many predators are powerfully built and can catch and kill animals larger than themselves; this applies as much to small predators such as ants and shrews as to big and visibly muscular carnivores like the cougar and lion.
Diet and behaviour
Predators are often highly specialized in their diet and hunting behaviour; for example, the Eurasian lynx only hunts small ungulates. Others such as leopards are more opportunistic generalists, preying on at least 100 species. The specialists may be highly adapted to capturing their preferred prey, whereas generalists may be better able to switch to other prey when a preferred target is scarce. When prey have a clumped (uneven) distribution, the optimal strategy for the predator is predicted to be more specialized as the prey are more conspicuous and can be found more quickly; this appears to be correct for predators of immobile prey, but is doubtful with mobile prey.
In size-selective predation, predators select prey of a certain size. Large prey may prove troublesome for a predator, while small prey might prove hard to find and in any case provide less of a reward. This has led to a correlation between the size of predators and their prey. Size may also act as a refuge for large prey. For example, adult elephants are relatively safe from predation by lions, but juveniles are vulnerable.
Camouflage and mimicry
Members of the cat family such as the snow leopard (treeless highlands), tiger (grassy plains, reed swamps), ocelot (forest), fishing cat (waterside thickets), and lion (open plains) are camouflaged with coloration and disruptive patterns suiting their habitats.
In aggressive mimicry, certain predators, including insects and fishes, make use of coloration and behaviour to attract prey. Female Photuris fireflies, for example, copy the light signals of other species, thereby attracting male fireflies, which they capture and eat. Flower mantises are ambush predators; camouflaged as flowers, such as orchids, they attract prey and seize it when it is close enough. Frogfishes are extremely well camouflaged, and actively lure their prey to approach using an esca, a bait on the end of a rod-like appendage on the head, which they wave gently to mimic a small animal, gulping the prey in an extremely rapid movement when it is within range.
Venom
Many smaller predators such as the box jellyfish use venom to subdue their prey, and venom can also aid in digestion (as is the case for rattlesnakes and some spiders). The marbled sea snake, which has adapted to egg predation, has atrophied venom glands, and the gene for its three-finger toxin contains a mutation (the deletion of two nucleotides) that inactivates it. These changes are explained by the fact that its prey does not need to be subdued.
Electric fields
Several groups of predatory fish have the ability to detect, track, and sometimes, as in the electric ray, to incapacitate their prey by sensing and generating electric fields. The electric organ is derived from modified nerve or muscle tissue.
Physiology
Physiological adaptations to predation include the ability of predatory bacteria to digest the complex peptidoglycan polymer from the cell walls of the bacteria that they prey upon. Carnivorous vertebrates of all five major classes (fishes, amphibians, reptiles, birds, and mammals) have lower relative rates of sugar to amino acid transport than either herbivores or omnivores, presumably because they acquire plenty of amino acids from the animal proteins in their diet.
Antipredator adaptations
To counter predation, prey have evolved defences for use at each stage of an attack. They can try to avoid detection, such as by using camouflage and mimicry. They can detect predators and warn others of their presence.
If detected, they can try to avoid being the target of an attack, for example, by signalling that they are toxic or unpalatable, by signalling that a chase would be unprofitable, or by forming groups. If they become a target, they can try to fend off the attack with defences such as armour, quills, unpalatability, or mobbing; and they can often escape an attack in progress by startling the predator, playing dead, shedding body parts such as tails, or simply fleeing.
Coevolution
Predators and prey are natural enemies, and many of their adaptations seem designed to counter each other. For example, bats have sophisticated echolocation systems to detect insects and other prey, and insects have developed a variety of defences including the ability to hear the echolocation calls. Many pursuit predators that run on land, such as wolves, have evolved long limbs in response to the increased speed of their prey. Their adaptations have been characterized as an evolutionary arms race, an example of the coevolution of two species. In a gene centered view of evolution, the genes of predator and prey can be thought of as competing for the prey's body. However, the "life-dinner" principle of Dawkins and Krebs predicts that this arms race is asymmetric: if a predator fails to catch its prey, it loses its dinner, while if it succeeds, the prey loses its life.
The metaphor of an arms race implies ever-escalating advances in attack and defence. However, these adaptations come with a cost; for instance, longer legs have an increased risk of breaking, while the specialized tongue of the chameleon, with its ability to act like a projectile, is useless for lapping water, so the chameleon must drink dew off vegetation.
The "life-dinner" principle has been criticized on multiple grounds. The extent of the asymmetry in natural selection depends in part on the heritability of the adaptive traits. Also, if a predator loses enough dinners, it too will lose its life. On the other hand, the fitness cost of a given lost dinner is unpredictable, as the predator may quickly find better prey. In addition, most predators are generalists, which reduces the impact of a given prey adaption on a predator. Since specialization is caused by predator-prey coevolution, the rarity of specialists may imply that predator-prey arms races are rare.
It is difficult to determine whether given adaptations are truly the result of coevolution, where a prey adaptation gives rise to a predator adaptation that is countered by further adaptation in the prey. An alternative explanation is escalation, where predators are adapting to competitors, their own predators or dangerous prey. Apparent adaptations to predation may also have arisen for other reasons and then been co-opted for attack or defence. In some of the insects preyed on by bats, hearing evolved before bats appeared and was used to hear signals used for territorial defence and mating; it was later refined in response to bat predation, but the only clear example of reciprocal adaptation in bats is stealth echolocation.
A more symmetric arms race may occur when the prey are dangerous, having spines, quills, toxins or venom that can harm the predator. The predator can respond with avoidance, which in turn drives the evolution of mimicry. Avoidance is not necessarily an evolutionary response as it is generally learned from bad experiences with prey. However, when the prey is capable of killing the predator (as can a coral snake with its venom), there is no opportunity for learning and avoidance must be inherited. Predators can also respond to dangerous prey with counter-adaptations. In western North America, the common garter snake has developed a resistance to the toxin in the skin of the rough-skinned newt.
Role in ecosystems
Predators affect their ecosystems not only directly by eating their own prey, but by indirect means such as reducing predation by other species, or altering the foraging behaviour of a herbivore, as with the biodiversity effect of wolves on riverside vegetation or sea otters on kelp forests. This may explain population dynamics effects such as the cycles observed in lynx and snowshoe hares.
Trophic level
One way of classifying predators is by trophic level. Carnivores that feed on herbivores are secondary consumers; their predators are tertiary consumers, and so forth. At the top of this food chain are apex predators such as lions. Many predators however eat from multiple levels of the food chain; a carnivore may eat both secondary and tertiary consumers. This means that many predators must contend with intraguild predation, where other predators kill and eat them. For example, coyotes compete with and sometimes kill gray foxes and bobcats.
Trophic transfer
Trophic transfer within an ecosystem refers to the transport of energy and nutrients as a result of predation. Energy passes from one trophic level to the next as predators consume organic matter from another organism's body. At each transfer some of the energy is used by the consumer, and some is lost.
Marine trophic levels vary depending on locality and the size of the primary producers. There are generally up to six trophic levels in the open ocean, four over continental shelves, and around three in upwelling zones. For example, a marine habitat with five trophic levels could be represented as follows: Herbivores (feed primarily on phytoplankton); Carnivores (feed primarily on other zooplankton/animals); Detritivores (feed primarily on dead organic matter/detritus); Omnivores (feed on a mixed diet of phyto- and zooplankton and detritus); and Mixotrophs, which combine autotrophy (using light energy to grow without intake of any additional organic compounds or nutrients) with heterotrophy (feeding on other plants and animals for energy and nutrients—herbivores, omnivores and carnivores, and detritivores).
Trophic transfer efficiency measures how effectively energy is transferred or passed up through higher trophic levels of the marine food web. As energy moves up the trophic levels, it decreases due to heat, waste, and the natural metabolic processes that occur as predators consume their prey. The result is that only about 10% of the energy at any trophic level is transferred to the next level. This is often referred to as "the 10% rule" which limits the number of trophic levels that an individual ecosystem is capable of supporting.
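The arithmetic behind the "10% rule" is simple enough to sketch directly; the starting figure below is arbitrary:

```python
# The "10% rule" as stated above: energy available at each successive
# trophic level is roughly a tenth of the level below it.
primary_production = 10_000.0  # arbitrary energy units at trophic level 1
for level in range(1, 7):
    print(f"level {level}: {primary_production * 0.1**(level - 1):10.3f}")
```

After six levels less than a hundredth of a percent of the original energy remains, which is why food chains rarely extend further.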
Biodiversity maintained by apex predation
Predators may increase the biodiversity of communities by preventing a single species from becoming dominant. Such predators are known as keystone species and may have a profound influence on the balance of organisms in a particular ecosystem. Introduction or removal of this predator, or changes in its population density, can have drastic cascading effects on the equilibrium of many other populations in the ecosystem. For example, grazers of a grassland may prevent a single dominant species from taking over.
The elimination of wolves from Yellowstone National Park had profound impacts on the trophic pyramid. In that area, wolves are both keystone species and apex predators. Without predation, herbivores began to over-graze many woody browse species, affecting the area's plant populations. In addition, wolves often kept animals from grazing near streams, protecting the beavers' food sources. The removal of wolves had a direct effect on the beaver population, as their habitat became territory for grazing. Increased browsing on willows and conifers along Blacktail Creek due to a lack of predation caused channel incision because the reduced beaver population was no longer able to slow the water down and keep the soil in place. The predators were thus demonstrated to be of vital importance in the ecosystem.
Population dynamics
In the absence of predators, the population of a species can grow exponentially until it approaches the carrying capacity of the environment. Predators limit the growth of prey both by consuming them and by changing their behavior. Increases or decreases in the prey population can also lead to increases or decreases in the number of predators, for example, through an increase in the number of young they bear.
Cyclical fluctuations have been seen in populations of predator and prey, often with offsets between the predator and prey cycles. A well-known example is that of the snowshoe hare and lynx. Over a broad span of boreal forests in Alaska and Canada, the hare populations fluctuate in near synchrony with a 10-year period, and the lynx populations fluctuate in response. This was first seen in historical records of animals caught by fur hunters for the Hudson's Bay Company over more than a century.
A simple model of a system with one species each of predator and prey, the Lotka–Volterra equations, predicts population cycles. However, attempts to reproduce the predictions of this model in the laboratory have often failed; for example, when the protozoan Didinium nasutum is added to a culture containing its prey, Paramecium caudatum, the latter is often driven to extinction.
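A minimal Python sketch of the Lotka–Volterra model, integrated with a fixed-step fourth-order Runge–Kutta scheme; the parameter values and initial populations are arbitrary illustrative choices, not fitted to the hare–lynx data. Run for long enough, prey and predator numbers rise and fall in offset cycles:

```python
# Lotka-Volterra predator-prey model, integrated with classic RK4.
def lotka_volterra(prey, pred, a=1.0, b=0.1, c=1.5, d=0.075):
    dprey = a * prey - b * prey * pred    # prey grow; eaten on encounters
    dpred = -c * pred + d * prey * pred   # predators starve; grow on encounters
    return dprey, dpred

def rk4_step(x, y, dt):
    k1 = lotka_volterra(x, y)
    k2 = lotka_volterra(x + dt/2 * k1[0], y + dt/2 * k1[1])
    k3 = lotka_volterra(x + dt/2 * k2[0], y + dt/2 * k2[1])
    k4 = lotka_volterra(x + dt * k3[0], y + dt * k3[1])
    return (x + dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

x, y, dt = 10.0, 5.0, 0.01
for step in range(5001):
    if step % 1000 == 0:
        print(f"t = {step*dt:5.1f}  prey = {x:7.2f}  predators = {y:6.2f}")
    x, y = rk4_step(x, y, dt)
```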
The Lotka–Volterra equations rely on several simplifying assumptions, and they are structurally unstable, meaning that any change in the equations can stabilize or destabilize the dynamics. For example, one assumption is that predators have a linear functional response to prey: the rate of kills increases in proportion to the rate of encounters. If this rate is limited by time spent handling each catch, then prey populations can reach densities above which predators cannot control them. Another assumption is that all prey individuals are identical. In reality, predators tend to select young, weak, and ill individuals, leaving prey populations able to regrow.
Many factors can stabilize predator and prey populations. One example is the presence of multiple predators, particularly generalists that are attracted to a given prey species if it is abundant and look elsewhere if it is not. As a result, population cycles tend to be found in northern temperate and subarctic ecosystems because the food webs are simpler. The snowshoe hare-lynx system is subarctic, but even this involves other predators, including coyotes, goshawks and great horned owls, and the cycle is reinforced by variations in the food available to the hares.
A range of mathematical models have been developed by relaxing the assumptions made in the Lotka–Volterra model; these variously allow animals to have geographic distributions, or to migrate; to have differences between individuals, such as sexes and an age structure, so that only some individuals reproduce; to live in a varying environment, such as with changing seasons; and analysing the interactions of more than just two species at once. Such models predict widely differing and often chaotic predator-prey population dynamics. The presence of refuge areas, where prey are safe from predators, may enable prey to maintain larger populations but may also destabilize the dynamics.
Evolutionary history
Predation dates from before the rise of commonly recognized carnivores by hundreds of millions (perhaps billions) of years. Predation has evolved repeatedly in different groups of organisms. The rise of eukaryotic cells at around 2.7 Gya, the rise of multicellular organisms at about 2 Gya, and the rise of mobile predators (around 600 Mya–2 Gya, probably around 1 Gya) have all been attributed to early predatory behavior, and many very early remains show evidence of boreholes or other markings attributed to small predator species. It likely triggered major evolutionary transitions including the arrival of cells, eukaryotes, sexual reproduction, multicellularity, increased size, mobility (including insect flight) and armoured shells and exoskeletons.
The earliest predators were microbial organisms, which engulfed or grazed on others. Because the fossil record is poor, these first predators could date back anywhere between 1 and over 2.7 Gya (billion years ago). Predation visibly became important shortly before the Cambrian period—around 550 million years ago—as evidenced by the almost simultaneous development of calcification in animals and algae, and predation-avoiding burrowing. However, predators had been grazing on micro-organisms since at least 1,000 million years ago, with evidence of selective (rather than random) predation from a similar time.
Auroralumina attenboroughii is an Ediacaran crown-group cnidarian (557–562 mya, some 20 million years before the Cambrian explosion) from Charnwood Forest, England. It is thought to be one of the earliest predatory animals, catching small prey with its nematocysts as modern cnidarians do.
The fossil record demonstrates a long history of interactions between predators and their prey from the Cambrian period onwards, showing for example that some predators drilled through the shells of bivalve and gastropod molluscs, while others ate these organisms by breaking their shells.
Among the Cambrian predators were invertebrates like the anomalocaridids with appendages suitable for grabbing prey, large compound eyes and jaws made of a hard material like that in the exoskeleton of an insect.
Some of the first fish to have jaws were the armoured and mainly predatory placoderms of the Silurian to Devonian periods, one of which, the Dunkleosteus, is considered the world's first vertebrate "superpredator", preying upon other predators.
Insects developed the ability to fly in the Early Carboniferous or Late Devonian, enabling them among other things to escape from predators.
Among the largest predators that have ever lived were the theropod dinosaurs such as Tyrannosaurus from the Cretaceous period. They preyed upon herbivorous dinosaurs such as hadrosaurs, ceratopsians and ankylosaurs.
In human society
Practical uses
Humans, as omnivores, are to some extent predatory, using weapons and tools to fish, hunt and trap animals. They also use other predatory species such as dogs, cormorants, and falcons to catch prey for food or for sport.
Two mid-sized predators, dogs and cats, are the animals most often kept as pets in western societies.
Human hunters, including the San of southern Africa, use persistence hunting, a form of pursuit predation where the pursuer may be slower than prey such as a kudu antelope over short distances, but follows it in the midday heat until it is exhausted, a pursuit that can take up to five hours.
In biological pest control, predators (and parasitoids) from a pest's natural range are introduced to control populations, at the risk of causing unforeseen problems. Natural predators, provided they do no harm to non-pest species, are an environmentally friendly and sustainable way of reducing damage to crops and an alternative to the use of chemical agents such as pesticides.
Symbolic uses
In film, the idea of the predator as a dangerous if humanoid enemy is used in the 1987 science fiction horror action film Predator and its three sequels. A terrifying predator, a gigantic man-eating great white shark, is central, too, to Steven Spielberg's 1975 thriller Jaws.
Among poetry on the theme of predation, a predator's consciousness might be explored, such as in Ted Hughes's Pike. The phrase "Nature, red in tooth and claw" from Alfred, Lord Tennyson's 1849 poem "In Memoriam A.H.H." has been interpreted as referring to the struggle between predators and prey.
In mythology and folk fable, predators such as the fox and wolf have mixed reputations. The fox was a symbol of fertility in ancient Greece, but a weather demon in northern Europe, and a creature of the devil in early Christianity; the fox is presented as sly, greedy, and cunning in fables from Aesop onwards. The big bad wolf is known to children in tales such as Little Red Riding Hood, but is a demonic figure in the Icelandic Edda sagas, where the wolf Fenrir appears in the apocalyptic ending of the world. In the Middle Ages, belief spread in werewolves, men transformed into wolves. In ancient Rome, and in ancient Egypt, the wolf was worshipped, the she-wolf appearing in the founding myth of Rome, suckling Romulus and Remus. More recently, in Rudyard Kipling's 1894 The Jungle Book, Mowgli is raised by the wolf pack. Attitudes to large predators in North America, such as wolf, grizzly bear and cougar, have shifted from hostility or ambivalence, accompanied by active persecution, towards positive and protective in the second half of the 20th century.
See also
Ecology of fear
Predation problem
Predator–prey reversal
Prey naiveté
Wa-Tor
Cannibalism
Notes
References
Sources
External links
Ecology
Biological pest control | Predation | [
"Biology"
] | 7,747 | [
"Ecology"
] |
57,560 | https://en.wikipedia.org/wiki/Diastase | A diastase (; from Greek διάστασις, "separation") is any one of a group of enzymes that catalyses the breakdown of starch into maltose. For example, the diastase α-amylase degrades starch to a mixture of the disaccharide maltose; the trisaccharide maltotriose, which contains three α (1-4)-linked glucose residues; and oligosaccharides, known as dextrins, that contain the α (1-6)-linked glucose branches.
Diastase was the first enzyme discovered. It was extracted from malt solution in 1833 by Anselme Payen and Jean-François Persoz, chemists at a French sugar factory. The name "diastase" comes from the Greek word διάστασις (diastasis) (a parting, a separation), because when beer mash is heated, the enzyme causes the starch in the barley seed to transform quickly into soluble sugars and hence the husk to separate from the rest of the seed. Today, "diastase" refers to any α-, β-, or γ-amylase (all of which are hydrolases) that can break down carbohydrates.
The commonly used -ase suffix for naming enzymes was derived from the name diastase.
When used as a pharmaceutical drug, diastase has the ATC code .
Amylases can also be extracted from other sources including plants, saliva and milk.
Clinical significance
Urine diastase is useful in diagnosing uncertain abdominal cases (especially when pancreatitis is suspected), stones in the common bile duct (choledocholithiasis), jaundice and in ruling out post-operative injury to the pancreas; provided that the diastase level is correlated with clinical features of the patient.
Diastase is also used in conjunction with periodic acid–Schiff stain in histology. For example, glycogen is darkly stained by PAS but can be dissolved by diastase. Fungi, on the other hand, stain darkly with PAS even after treatment by diastase.
See also
Takadiastase
Whipple disease
Amylase
References
Payen, A. et J.-F. Persoz (1833) "Mémoire sur la diastase, les principaux produits de ses réactions et leurs applications aux arts industriels" (Memoir on diastase, the principal products of its reactions, and their applications to the industrial arts), Annales de chimie et de physique, 2nd series, 53 : 73–92.
External links
Introduction and Uses of Diastase Enzyme
Carbohydrate metabolism
Hydrolases | Diastase | [
"Chemistry"
] | 586 | [
"Carbohydrate metabolism",
"Carbohydrate chemistry",
"Metabolism"
] |
57,622 | https://en.wikipedia.org/wiki/Common%20sunflower | The common sunflower (Helianthus annuus) is a species of large annual forb of the daisy family Asteraceae. The common sunflower is harvested for its edible oily seeds, which are often eaten as a snack food. They are also used in the production of cooking oil, as food for livestock, as bird food, and as plantings in domestic gardens for aesthetics. Wild plants are known for their multiple flower heads, whereas the domestic sunflower often possesses a single large flower head atop an unbranched stem.
Description
The plant has an erect rough-hairy stem, reaching typical heights of around 3 m (9.8 ft). The tallest sunflower on record achieved 9.17 m (30 ft 1 in). Sunflower leaves are broad, coarsely toothed, rough and mostly alternate; those near the bottom are largest and commonly heart-shaped.
Flower
The plant flowers in summer. What is often called the "flower" of the sunflower is actually a "flower head" (pseudanthium) of numerous small individual five-petaled flowers ("florets"). The outer flowers, which resemble petals, are called ray flowers. Each "petal" consists of a ligule composed of fused petals of an asymmetrical ray flower. They are sexually sterile and may be yellow, red, orange, or other colors. The spirally arranged flowers in the center of the head are called disk flowers. These mature into fruit (sunflower "seeds").
The prairie sunflower (H. petiolaris) is similar in appearance to the wild common sunflower; the scales in its central disk are tipped by white hairs.
Heliotropism
A common misconception is that flowering sunflower heads track the Sun across the sky. Although immature flower buds exhibit this behaviour, the mature flowering heads point in a fixed (and typically easterly) direction throughout the day. This old misconception was disputed in 1597 by the English botanist John Gerard, who grew sunflowers in his famous herbal garden: "[some] have reported it to turn with the Sun, the which I could never observe, although I have endeavored to find out the truth of it." The uniform alignment of sunflower heads in a field might give some people the false impression that the flowers are tracking the Sun.
This alignment results from heliotropism in an earlier development stage, the young flower stage, before full maturity of flower heads (anthesis). Young sunflowers orient themselves in the direction of the sun. At dawn, the head of the flower faces east and moves west throughout the day. When sunflowers reach full maturity, they no longer follow the sun and continuously face east. Young flowers reorient overnight to face east in anticipation of the morning. Their heliotropic motion is a circadian rhythm, synchronized by the sun, which continues if the sun disappears on cloudy days or if plants are moved to constant light. They are able to regulate their circadian rhythm in response to the blue light emitted by a light source. If a sunflower plant in the bud stage is rotated 180°, the bud will be turning away from the sun for a few days, as resynchronization with the sun takes time.
When growth of the flower stalk stops and the flower is mature, the heliotropism also stops and the flower faces east from that moment onward. This eastward orientation allows rapid warming in the morning, and as a result, an increase in pollinator visits. Sunflowers do not have a pulvinus below their inflorescence. A pulvinus is a flexible segment in the leaf stalks (petiole) of some plant species and functions as a 'joint'. It effectuates leaf motion due to reversible changes in turgor pressure which occurs without growth. The sensitive plant's closing leaves are a good example of reversible leaf movement via pulvinuli.
Floret arrangement
Generally, each floret is oriented toward the next by approximately the golden angle, 137.5°, producing a pattern of interconnecting spirals, where the number of left spirals and the number of right spirals are successive Fibonacci numbers. Typically, there are 34 spirals in one direction and 55 in the other; however, in a very large sunflower head there could be 89 in one direction and 144 in the other. This pattern produces the most efficient packing of seeds mathematically possible within the flower head.
A model for the pattern of florets in the head of a sunflower was proposed by H. Vogel in 1979. This is expressed in polar coordinates as

r = c√n,  θ = n × 137.508°,

where θ is the angle, r is the radius or distance from the center, n is the index number of the floret, and c is a constant scaling factor. It is a form of Fermat's spiral. The angle 137.5° is related to the golden ratio (55/144 of a circular angle, where 55 and 144 are Fibonacci numbers) and gives a close packing of florets. This model has been used to produce computer-generated representations of sunflowers.
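As a concrete illustration, the short Python sketch below computes floret positions directly from the two equations above. The function name, the number of florets, and the value of the scaling constant c are arbitrary choices for the example, not part of Vogel's paper.

```python
import math

def vogel_florets(n_florets, c=1.0):
    """Floret positions under Vogel's model: r = c*sqrt(n), theta = n * 137.508 deg."""
    golden_angle = math.radians(137.508)  # the golden angle, in radians
    points = []
    for n in range(1, n_florets + 1):
        r = c * math.sqrt(n)          # distance of floret n from the center
        theta = n * golden_angle      # each floret advances by the golden angle
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Print Cartesian coordinates of the first five florets.
for x, y in vogel_florets(5):
    print(f"({x:+.3f}, {y:+.3f})")
```

Plotting a few hundred such points reproduces the familiar pattern of interlocking left- and right-handed spirals whose counts are successive Fibonacci numbers.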
Genome
The sunflower genome is diploid with a base chromosome number of 17 and an estimated genome size of 2,871–3,189 million base pairs. Some sources claim its true size is around 3.5 billion base pairs (slightly larger than the human genome).
Etymology
In the binomial name Helianthus annuus, the genus name is derived from the Greek ἥλιος : hḗlios 'sun' and ἄνθος : ánthos 'flower'. The species name annuus means 'annual' in Latin.
Distribution and habitat
The plant was first domesticated in the Americas. Sunflower seeds were brought to Europe from the Americas in the 16th century, where, along with sunflower oil, they became a widespread cooking ingredient. With time, the bulk of industrial-scale production has shifted to Eastern Europe, and Russia and Ukraine together produce over half of worldwide seed production.
Sunflowers grow best in fertile, moist, well-drained soil with heavy mulch. They often appear on dry open areas and foothills. Outside of cultivation, the common sunflower is found on moist clay-based soils in areas with climates similar to Texas. In contrast, the related Helianthus debilis and Helianthus petiolaris are found on drier, sandier soils.
The precise native range is difficult to determine. According to Plants of the World Online (POWO), it is native to Arizona, California, and Nevada in the present-day United States and to all parts of Mexico except the Gulf Coast and southeast. Though not giving much detail, the Missouri Botanical Garden Plant Finder also lists it as native to the Western United States and Canada. The information published by the Biota of North America Program (BONAP) largely agrees with this, showing the common sunflower as native to states west of the Mississippi, though also listed as a noxious weed in Iowa, Minnesota, and Texas. Regardless of its original range, it can now be found in almost every part of the world that is not tropical, desert, or tundra.
Ecology
Threats and diseases
One of the major threats that sunflowers face today is Fusarium, a filamentous fungus found largely in soil and plants. It is a pathogen that over the years has caused increasing damage and loss to sunflower crops, in some cases affecting as much as 80% of a crop.
Downy mildew is another disease to which sunflowers are susceptible. Susceptibility is particularly high because of the sunflower's manner of growth and development: sunflower seeds are generally planted only an inch deep, and such shallow planting in moist, soaked soil increases the chances of diseases such as downy mildew.
Another major threat to sunflower crops is broomrape, a family of plants which parasitize the roots of various other plants, including sunflowers. Damage and loss to sunflower crops as a result of broomrape can be as high as 100%.
Cultivation
In commercial planting, seeds are planted about 45 cm (1.5 ft) apart and 2.5 cm (1 in) deep.
History
Common sunflower was one of several plants cultivated by Native Americans in prehistoric North America as part of the Eastern Agricultural Complex, which also included corn, beans, squash, and a variety of other crops. Although it was commonly accepted that the sunflower was first domesticated in what is now the southeastern US, roughly 5,000 years ago, there is evidence that it was first domesticated in Mexico around 2600 BCE. These crops were found in Tabasco, Mexico, at the San Andres dig site. The earliest known examples in the US of a fully domesticated sunflower have been found in Tennessee, and date to around 2300 BCE. Other very early examples come from rockshelter sites in Eastern Kentucky. Many indigenous American peoples used the sunflower as the symbol of their solar deity, including the Aztecs and the Otomi of Mexico and the Incas in South America. In 1510, early Spanish explorers encountered the sunflower in the Americas and carried its seeds back to Europe. Of the four plants known to have been domesticated in eastern North America and to have become important agricultural commodities, the sunflower is currently the most economically important.
Research of phylogeographic relations and population demographic patterns across sunflowers has demonstrated that earlier cultivated sunflowers form a clade derived from wild populations of the Great Plains, which indicates that there was a single domestication event in central North America. Following its origin, the cultivated sunflower may have gone through significant bottlenecks dating back to ~5,000 years ago.
In the 16th century the first crop breeds were brought from America to Europe by explorers. Domestic sunflower seeds have been found in Mexico, dating to 2100 BCE. Native American people grew sunflowers as a crop from Mexico to southern Canada. The plants were then introduced to the Russian Empire, where oilseed cultivation was established and the flowers were developed and grown on an industrial scale. The Russian Empire reintroduced this oilseed cultivation process to North America in the mid-20th century, and North America began its commercial era of sunflower production and breeding. New breeds of Helianthus spp. began to become more prominent in new geographical areas. During the 18th century, the use of sunflower oil became very popular in Russia, particularly with members of the Russian Orthodox Church, because only plant-based fats were allowed during Lent, according to fasting traditions. In the early 19th century, sunflower oil was first commercialized in the village of Alexeyevka in Voronezh Governorate by the merchant Daniil Bokaryov, who developed a technology suitable for its large-scale extraction; the process quickly spread. The town's coat of arms has included an image of a sunflower ever since.
Production
In 2020, world production of sunflower seeds was 50 million tonnes, led by Russia and Ukraine, which together accounted for 53% of the total.
Fertilizer use
Researchers have analyzed the impact of various nitrogen-based fertilizers on the growth of sunflowers. Ammonium nitrate was found to produce better nitrogen absorption than urea, which performed better in low-temperature areas.
Crop rotation
Sunflower cultivation typically uses crop rotation, often with cereals, soybean, or rapeseed. This reduces idle periods and increases total sunflower production and profitability.
Hybrids and cultivars
In today's market, most of the sunflower seeds provided or grown by farmers are hybrids. Hybrids or hybridized sunflowers are produced by cross-breeding different types and species, for example cultivated sunflowers with wild species. By doing so, new genetic recombinations are obtained ultimately leading to the production of new hybrid species. These hybrid species generally have a higher fitness and carry properties or characteristics that farmers look for, such as resistance to pathogens.
The hybrid Helianthus annuus dwarf2 does not contain the hormone gibberellin and does not display heliotropic behavior. Plants treated with an external application of the hormone display a temporary restoration of elongation growth patterns; this growth pattern diminished by 35% 7–14 days after the final treatment.
Hybrid male-sterile and male-fertile flowers that display heterogeneity have a low crossover of honeybee visitation. Sensory cues such as pollen odor, diameter of the seed head, and height may influence visitation by pollinators that display constancy behavior patterns.
Sunflowers are grown as ornamentals in a domestic setting. Being easy to grow and producing spectacular results in any good, moist soil in full sun, they are a favourite subject for children. A large number of cultivars, of varying size and color, are now available to grow from seed; many have gained the Royal Horticultural Society's Award of Garden Merit.
Uses
Sunflower "whole seed" (fruit) are sold as a snack food, raw or after roasting in ovens, with or without salt and/or seasonings added. Sunflower seeds can be processed into a peanut butter alternative, sunflower butter. It is also sold as food for birds and can be used directly in cooking and salads. Native Americans had multiple uses for sunflowers in the past, such as in bread, medical ointments, dyes and body paints.
Sunflower oil, extracted from the seeds, is used for cooking, as a carrier oil and to produce margarine and biodiesel, as it is cheaper than olive oil. A range of sunflower varieties exist with differing fatty acid compositions; some "high-oleic" types contain a higher level of monounsaturated fats in their oil than even olive oil. The oil is also sometimes used in soap. After World War I, during the Russian Civil War, people in Ukraine used sunflower seed oil in lamps as a substitute for kerosene due to shortages. The light from such a lamp has been described as "miserable" and "smoky".
The cake remaining after the seeds have been processed for oil is used as livestock feed. The hulls resulting from the dehulling of the seeds before oil extraction can also be fed to domestic animals. Some recently developed cultivars have drooping heads. These cultivars are less attractive to gardeners growing the flowers as ornamental plants, but appeal to farmers, because they reduce bird damage and losses from some plant diseases. Sunflowers also produce latex, and are the subject of experiments to improve their suitability as an alternative crop for producing hypoallergenic rubber.
Traditionally, several Native American groups planted sunflowers on the north edges of their gardens as a "fourth sister" to the better-known three sisters combination of corn, beans, and squash. Annual species are often planted for their allelopathic properties. It was also used by Native Americans to dress hair. Among the Zuni people, the fresh or dried root is chewed by the medicine man before sucking venom from a snakebite and applying a poultice to the wound. This compound poultice of the root is applied with much ceremony to rattlesnake bites.
However, for commercial farmers growing other commodity crops, the wild sunflower is often considered a weed. Especially in the Midwestern US, wild (perennial) species are often found in corn and soybean fields and can decrease yields. The decrease in yield can be attributed to phenolic compounds, which the common sunflower produces to reduce competition for nutrients in nutrient-poor growing areas.
Phytoremediation
Helianthus annuus can be used in phytoremediation to extract pollutants from soil, such as lead and other heavy metals including cadmium, zinc, cesium, strontium, and uranium. The phytoremediation process begins by absorbing the heavy metal(s) through the roots; the metals gradually accumulate in other areas, such as the shoots and leaves. Helianthus annuus can also be used in rhizofiltration to neutralize radionuclides such as caesium-137 and strontium-90, as was done in a pond after the Chernobyl disaster. A similar campaign was mounted in response to the Fukushima Daiichi nuclear disaster.
In culture
According to Iroquois mythology, the first sunflowers grew out of Earth Woman's legs after she died giving birth to her twin sons, Sapling and Flint.
The Zuni people use the blossoms ceremonially for anthropic worship. Sunflowers were also worshipped by the Incas, who viewed them as a symbol of the Sun.
Stories of Clytie, the nymph who was spurned by her former lover Helios, end with her transformed into what may be translated as a sunflower. However, the plant in Greek mythology may be "partly pale and partly red, and very like a violet". The plant described also exhibits heliotropism, with its face turning towards the sun. This plant may be a species in the heliotrope genus (Heliotropium). Less commonly, however, it is identified as the common marigold (Calendula officinalis).
During the 19th century, it was believed that nearby plants of the species would protect a home from malaria. The flowers are the subject of Vincent van Gogh's Sunflowers series of still-life paintings.
In July 2015, viable seeds were acquired from the field where Malaysia Airlines Flight 17 had crashed a year earlier, and were grown in tribute to the 15 Dutch residents of Hilversum who were killed. Earlier that year, Fairfax chief correspondent Paul McGeough and photographer Kate Geraghty had collected 1.5 kg of sunflower seeds from the wreck site for family and friends of the 38 Australian victims, aiming to give them a poignant symbol of hope.
On 13 May 2021, during the National Costume competition of the Miss Universe 2020 beauty pageant, Miss Dominican Republic Kimberly Jiménez wore a "Goddess of Sunflowers" costume covered in gold and yellow rhinestones that included several real sunflowers sewn onto the fabric.
Symbolism
The sunflower is the national flower of Ukraine. Ukrainians used sunflower oil as a main cooking fat in place of the butter and lard forbidden by the Orthodox Church when observing Lent. Sunflowers were also planted for bioremediation at Chernobyl. In June 1996, U.S., Russian, and Ukrainian officials planted sunflowers at the Pervomaysk missile base where Soviet nuclear weapons were formerly placed. During the Russian invasion of Ukraine, a video widely shared on social media showed a Ukrainian woman confronting a Russian soldier, telling the latter to "take these seeds and put them in your pockets so at least sunflowers will grow when you all lie down here". The sunflower has since become a global symbol of resistance, unity, and hope.
The sunflower is also the state flower of the U.S. state of Kansas and one of the city flowers of Kitakyūshū, Japan.
During the late 19th century, the flower was used as the symbol of the Aesthetic Movement.
The sunflower was chosen as the symbol of the Spiritualist Church, for many reasons, but mostly because of the (false) belief that the flowers turn toward the sun as "Spiritualism turns toward the light of truth". Modern Spiritualists often have art or jewelry with sunflower designs.
The sunflower is often used as a symbol of green ideology. The flower is also the symbol of the Vegan Society.
The sunflower is the symbol behind the Sunflower Movement, a 2014 mass protest in Taiwan.
The Hidden Disabilities Sunflower was first used as a visible symbol (typically worn on a lanyard) in May 2016 at London Gatwick Airport. It has since come into common usage throughout the UK and in the Commonwealth more generally.
References
Sources
Pope, Kevin; Pohl, Mary E. D.; Jones, John G.; Lentz, David L.; von Nagy, Christopher; Vega, Francisco J.; Quitmyer Irvy R. (18 May 2001). "Origin and Environmental Setting of Ancient Agriculture in the Lowlands of Mesoamerica". Science, 292(5520):1370–1373.
Shosteck, Robert (1974) Flowers and Plants: An International Lexicon with Biographical Notes. New York: Quadrangle/The New York Times Book Co. .
External links
National Sunflower Association
Sunflowerseed—USDA Economic Research Service. Summary of sunflower production, trade, and consumption and links to relevant USDA reports.
Sunflower cultivation—New Crop Resource Online Program, Purdue University
annuus
Flora of Mexico
Flora of the United States
Annual plants
Edible nuts and seeds
Energy crops
Garden plants of North America
Phytoremediation plants
Agriculture in Mesoamerica
Crops originating from Pre-Columbian North America
Plants used in Native American cuisine
Plants used in traditional Native American medicine
Pre-Columbian California cuisine
Pre-Columbian Great Plains cuisine
Plants described in 1753
Symbols of Kansas
National symbols of Ukraine
Taxa named by Carl Linnaeus
Oil seeds
Symbols of Tocantins | Common sunflower | [
"Biology"
] | 4,369 | [
"Phytoremediation plants",
"Bioremediation"
] |
57,625 | https://en.wikipedia.org/wiki/Common%20Criteria | The Common Criteria for Information Technology Security Evaluation (referred to as Common Criteria or CC) is an international standard (ISO/IEC 15408) for computer security certification. It is currently in version 3.1 revision 5.
Common Criteria is a framework in which computer system users can specify their security functional and assurance requirements (SFRs and SARs, respectively) in a Security Target (ST), which may be taken from Protection Profiles (PPs). Vendors can then implement or make claims about the security attributes of their products, and testing laboratories can evaluate the products to determine if they actually meet the claims. In other words, Common Criteria provides assurance that the process of specification, implementation and evaluation of a computer security product has been conducted in a rigorous, standard, and repeatable manner at a level that is commensurate with the target environment for use. Common Criteria maintains a list of certified products, including operating systems, access control systems, databases, and key management systems.
Key concepts
Common Criteria evaluations are performed on computer security products and systems.
Target of Evaluation (TOE) – the product or system that is the subject of the evaluation. The evaluation serves to validate claims made about the target. To be of practical use, the evaluation must verify the target's security features. This is done through the following:
Protection Profile (PP) – a document, typically created by a user or user community, which identifies security requirements for a class of security devices (for example, smart cards used to provide digital signatures, or network firewalls) relevant to that user for a particular purpose. Product vendors can choose to implement products that comply with one or more PPs, and have their products evaluated against those PPs. In such a case, a PP may serve as a template for the product's ST (Security Target, as defined below), or the authors of the ST will at least ensure that all requirements in relevant PPs also appear in the target's ST document. Customers looking for particular types of products can focus on those certified against the PP that meets their requirements.
Security Target (ST) – the document that identifies the security properties of the target of evaluation. The ST may claim conformance with one or more PPs. The TOE is evaluated against the SFRs (Security Functional Requirements; see below) established in its ST, no more and no less. This allows vendors to tailor the evaluation to accurately match the intended capabilities of their product. This means that a network firewall does not have to meet the same functional requirements as a database management system, and that different firewalls may in fact be evaluated against completely different lists of requirements. The ST is usually published so that potential customers may determine the specific security features that have been certified by the evaluation.
Security Functional Requirements (SFRs) – specify individual security functions which may be provided by a product. The Common Criteria presents a standard catalogue of such functions. For example, an SFR may state how a user acting in a particular role might be authenticated. The list of SFRs can vary from one evaluation to the next, even if two targets are the same type of product. Although Common Criteria does not prescribe any SFRs to be included in an ST, it identifies dependencies where the correct operation of one function (such as the ability to limit access according to roles) is dependent on another (such as the ability to identify individual roles).
The evaluation process also tries to establish the level of confidence that may be placed in the product's security features through quality assurance processes:
Security Assurance Requirements (SARs) – descriptions of the measures taken during development and evaluation of the product to assure compliance with the claimed security functionality. For example, an evaluation may require that all source code is kept in a change management system, or that full functional testing is performed. The Common Criteria provides a catalogue of these, and the requirements may vary from one evaluation to the next. The requirements for particular targets or types of products are documented in the ST and PP, respectively.
Evaluation Assurance Level (EAL) – the numerical rating describing the depth and rigor of an evaluation. Each EAL corresponds to a package of security assurance requirements (SARs, see above) which covers the complete development of a product, with a given level of strictness. Common Criteria lists seven levels, with EAL 1 being the most basic (and therefore cheapest to implement and evaluate) and EAL 7 being the most stringent (and most expensive). Normally, an ST or PP author will not select assurance requirements individually but choose one of these packages, possibly 'augmenting' requirements in a few areas with requirements from a higher level. Higher EALs do not necessarily imply "better security", they only mean that the claimed security assurance of the TOE has been more extensively verified.
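To make the relationships between these documents concrete, the following minimal Python sketch models a PP and an ST as simple data structures and checks the PP-conformance rule described above: every SFR and SAR required by a claimed PP must also appear in the ST. The component identifiers follow the CC naming style, but the profile and product here are invented for illustration, not drawn from any real evaluation.

```python
from dataclasses import dataclass, field

@dataclass
class ProtectionProfile:
    name: str
    sfrs: frozenset  # required Security Functional Requirements
    sars: frozenset  # required Security Assurance Requirements

@dataclass
class SecurityTarget:
    toe: str   # the Target of Evaluation (the product under evaluation)
    sfrs: set
    sars: set
    claimed_pps: list = field(default_factory=list)

    def conforms_to(self, pp: ProtectionProfile) -> bool:
        # An ST claiming a PP must contain at least the PP's SFRs and SARs.
        return pp.sfrs <= self.sfrs and pp.sars <= self.sars

# Hypothetical firewall PP and a product's ST claiming conformance with it.
firewall_pp = ProtectionProfile(
    name="Network Firewall PP",
    sfrs=frozenset({"FIA_UAU.2", "FDP_IFC.1"}),
    sars=frozenset({"ALC_FLR.1"}),
)
st = SecurityTarget(
    toe="ExampleGate Firewall",  # invented product name
    sfrs={"FIA_UAU.2", "FDP_IFC.1", "FAU_GEN.1"},
    sars={"ALC_FLR.1", "ATE_IND.2"},
    claimed_pps=[firewall_pp],
)
print(st.conforms_to(firewall_pp))  # True: the ST covers all the PP requires
```

In a real evaluation the check is of course performed by human evaluators against prose requirements, but the set-inclusion rule captures why an ST may always add requirements beyond a claimed PP while never omitting any.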
So far, most PPs and most evaluated STs/certified products have been for IT components (e.g., firewalls, operating systems, smart cards).
Common Criteria certification is sometimes specified for IT procurement. Other standards, covering e.g. interoperation, system management, and user training, supplement the CC and other product standards. Examples include ISO/IEC 27002 and the German IT baseline protection.
Details of cryptographic implementation within the TOE are outside the scope of the CC. Instead, national standards, like FIPS 140-2, give the specifications for cryptographic modules, and various standards specify the cryptographic algorithms in use.
More recently, PP authors are including cryptographic requirements for CC evaluations that would typically be covered by FIPS 140-2 evaluations, broadening the bounds of the CC through scheme-specific interpretations.
Some national evaluation schemes are phasing out EAL-based evaluations and only accept products for evaluation that claim strict conformance with an approved PP. The United States currently only allows PP-based evaluations.
History
CC originated out of three standards:
ITSEC – The European standard, developed in the early 1990s by France, Germany, the Netherlands and the UK. It too was a unification of earlier work, such as the two UK approaches (the CESG UK Evaluation Scheme aimed at the defence/intelligence market and the DTI Green Book aimed at commercial use), and was adopted by some other countries, e.g. Australia.
CTCPEC – The Canadian standard followed from the US DoD standard, but avoided several problems and was used jointly by evaluators from both the U.S. and Canada. The CTCPEC standard was first published in May 1993.
TCSEC – The United States Department of Defense DoD 5200.28 Std, called the Orange Book and parts of the Rainbow Series. The Orange Book originated from Computer Security work including the Anderson Report, done by the National Security Agency and the National Bureau of Standards (the NBS eventually became NIST) in the late 1970s and early 1980s. The central thesis of the Orange Book follows from the work done by Dave Bell and Len LaPadula for a set of protection mechanisms.
CC was produced by unifying these pre-existing standards, predominantly so that companies selling computer products for the government market (mainly for Defence or Intelligence use) would only need to have them evaluated against one set of standards. The CC was developed by the governments of Canada, France, Germany, the Netherlands, the UK, and the U.S.
Testing organizations
All testing laboratories must comply with ISO/IEC 17025, and certification bodies will normally be approved against ISO/IEC 17065.
Compliance with ISO/IEC 17025 is typically demonstrated to a national approval authority:
In Canada, the Standards Council of Canada (SCC) under Program for the Accreditation of Laboratories (PALCAN) accredits Common Criteria Evaluation Facilities (CCEF)
In France, the Comité français d'accréditation (COFRAC) accredits Common Criteria evaluation facilities, commonly called Centres d'évaluation de la sécurité des technologies de l'information (CESTI). Evaluations are done according to norms and standards specified by the Agence nationale de la sécurité des systèmes d'information (ANSSI).
In Italy, the OCSI (Organismo di Certificazione della Sicurezza Informatica) accredits Common Criteria evaluation laboratories
In India, the STQC Directorate of the Ministry of Electronics and Information Technology evaluates and certifies IT products at assurance levels EAL 1 through EAL 4.
In the UK, the United Kingdom Accreditation Service (UKAS) used to accredit Commercial Evaluation Facilities (CLEFs); since 2019 the UK has been only a consumer in the CC ecosystem.
In the US, the National Institute of Standards and Technology (NIST) National Voluntary Laboratory Accreditation Program (NVLAP) accredits Common Criteria Testing Laboratories (CCTL)
In Germany, the Bundesamt für Sicherheit in der Informationstechnik (BSI)
In Spain, the National Cryptologic Center (CCN) accredits Common Criteria Testing Laboratories operating in the Spanish Scheme.
In The Netherlands, the Netherlands scheme for Certification in the Area of IT Security (NSCIB) accredits IT Security Evaluation Facilities (ITSEF).
In Sweden, the Swedish Certification Body for IT Security (CSEC) licenses IT Security Evaluation Facilities (ITSEF).
Characteristics of these organizations were examined and presented at ICCC 10.
Mutual recognition arrangement
As well as the Common Criteria standard, there is also a sub-treaty-level Common Criteria MRA (Mutual Recognition Arrangement), whereby each party thereto recognizes evaluations against the Common Criteria standard done by other parties. Originally signed in 1998 by Canada, France, Germany, the United Kingdom and the United States, Australia and New Zealand joined in 1999, followed by Finland, Greece, Israel, Italy, the Netherlands, Norway and Spain in 2000. The Arrangement has since been renamed the Common Criteria Recognition Arrangement (CCRA) and membership continues to expand. Within the CCRA only evaluations up to EAL 2 are mutually recognized (including augmentation with flaw remediation). The European countries within the SOGIS-MRA typically recognize higher EALs as well. Evaluations at EAL5 and above tend to involve the security requirements of the host nation's government.
In September 2012, a majority of members of the CCRA produced a vision statement whereby mutual recognition of CC-evaluated products will be lowered to EAL 2 (including augmentation with flaw remediation). Further, this vision indicates a move away from assurance levels altogether, and evaluations will be confined to conformance with Protection Profiles that have no stated assurance level. This will be achieved through technical working groups developing worldwide PPs; as yet, a transition period has not been fully determined.
On July 2, 2014, a new CCRA was ratified per the goals outlined within the 2012 vision statement. Major changes to the Arrangement include:
Recognition of evaluations against only a collaborative Protection Profile (cPP) or Evaluation Assurance Levels 1 through 2 and ALC_FLR.
The emergence of international Technical Communities (iTC), groups of technical experts charged with the creation of cPPs.
A transition plan from the previous CCRA, including recognition of certificates issued under the previous version of the Arrangement.
Issues
Requirements
Common Criteria is very generic; it does not directly provide a list of product security requirements or features for specific (classes of) products: this follows the approach taken by ITSEC, but has been a source of debate to those used to the more prescriptive approach of other earlier standards such as TCSEC and FIPS 140-2.
Value of certification
Common Criteria certification cannot guarantee security, but it can ensure that claims about the security attributes of the evaluated product were independently verified. In other words, products evaluated against a Common Criteria standard exhibit a clear chain of evidence that the process of specification, implementation, and evaluation has been conducted in a rigorous and standard manner.
Various Microsoft Windows versions, including Windows Server 2003 and Windows XP, have been certified, but security patches to address security vulnerabilities are still getting published by Microsoft for these Windows systems. This is possible because the process of obtaining a Common Criteria certification allows a vendor to restrict the analysis to certain security features and to make certain assumptions about the operating environment and the strength of threats faced by the product in that environment. Additionally, the CC recognizes a need to limit the scope of evaluation in order to provide cost-effective and useful security certifications, such that evaluated products are examined to a level of detail specified by the assurance level or PP. Evaluation activities are therefore only performed to a certain depth and with limited time and resources, and offer reasonable assurance for the intended environment.
In the Microsoft case, the assumptions include A.PEER:
"Any other systems with which the TOE communicates are assumed to be under the same management control and operate under the same security policy constraints. The TOE is applicable to networked or distributed environments only if the entire network operates under the same constraints and resides within a single management domain. There are no security requirements that address the need to trust external systems or the communications links to such systems."
This assumption is contained in the Controlled Access Protection Profile (CAPP) to which their products adhere. Based on this and other assumptions, which may not be realistic for the common use of general-purpose operating systems, the claimed security functions of the Windows products are evaluated. Thus they should only be considered secure in the assumed, specified circumstances, also known as the evaluated configuration.
Whether or not Microsoft Windows is run in the precise evaluated configuration, Microsoft's security patches for the vulnerabilities in Windows should be applied as they continue to appear. If any of these security vulnerabilities are exploitable in the product's evaluated configuration, the vendor should voluntarily withdraw the product's Common Criteria certification. Alternatively, the vendor should re-evaluate the product to include application of patches to fix the security vulnerabilities within the evaluated configuration. Failure by the vendor to take either of these steps would result in involuntary withdrawal of the product's certification by the certification body of the country in which the product was evaluated.
The certified Microsoft Windows versions remain at EAL4+ without including the application of any Microsoft security vulnerability patches in their evaluated configuration. This shows both the limitation and strength of an evaluated configuration.
Criticisms
In August 2007, Government Computing News (GCN) columnist William Jackson critically examined Common Criteria methodology and its US implementation by the Common Criteria Evaluation and Validation Scheme (CCEVS). In the column executives from the security industry, researchers, and representatives from the National Information Assurance Partnership (NIAP) were interviewed. Objections outlined in the article include:
Evaluation is a costly process (often measured in hundreds of thousands of US dollars) – and the vendor's return on that investment is not necessarily a more secure product.
Evaluation focuses primarily on assessing the evaluation documentation, not on the actual security, technical correctness or merits of the product itself. For U.S. evaluations, only at EAL5 and higher do experts from the National Security Agency participate in the analysis; and only at EAL7 is full source code analysis required.
The effort and time necessary to prepare evaluation evidence and other evaluation-related documentation is so cumbersome that by the time the work is completed, the product in evaluation is generally obsolete.
Industry input, including that from organizations such as the Common Criteria Vendor's Forum, generally has little impact on the process as a whole.
In a 2006 research paper, computer specialist David A. Wheeler suggested that the Common Criteria process discriminates against free and open-source software (FOSS)-centric organizations and development models. Common Criteria assurance requirements tend to be inspired by the traditional waterfall software development methodology. In contrast, much FOSS software is produced using modern agile paradigms. Although some have argued that the two paradigms do not align well, others have attempted to reconcile them. Political scientist Jan Kallberg raised concerns over the lack of control over the actual production of the products once they are certified, the absence of a permanently staffed organizational body that monitors compliance, and the idea that the trust in Common Criteria IT-security certifications will be maintained across geopolitical boundaries.
In 2017, the ROCA vulnerability was found in a list of Common Criteria certified smart card products. The vulnerability highlighted several shortcomings of the Common Criteria certification scheme:
The vulnerability resided in a homegrown RSA key generation algorithm that had not been published and analyzed by the cryptanalysis community. However, the testing laboratory TÜV Informationstechnik GmbH (TÜViT) in Germany approved its use, and the certification body BSI in Germany issued Common Criteria certificates for the vulnerable products. The Security Target of the evaluated product claimed that RSA keys are generated according to the standard algorithm. In response to this vulnerability, BSI now plans to improve transparency by requiring that the certification report at least specify whether the implemented proprietary cryptography is not exactly conformant to a recommended standard. BSI does not plan to require the proprietary algorithm to be published in any way.
Even though the certification bodies are now aware that the security claims specified in the Common Criteria certificates no longer hold, neither ANSSI nor BSI has revoked the corresponding certificates. According to BSI, a certificate can only be withdrawn when it was issued under misconception, e.g., when it turns out that wrong evidence was submitted. After a certificate is issued, it must be presumed that the validity of the certificate decreases over time as improved and new attacks are discovered. Certification bodies can issue maintenance reports and even perform a re-certification of the product. These activities, however, have to be initiated and sponsored by the vendor.
While several Common Criteria certified products have been affected by the ROCA flaw, vendors' responses in the context of certification have been different. For some products a maintenance report was issued, which states that only RSA keys with a length of 3072 and 3584 bits have a security level of at least 100 bits, while for some products the maintenance report does not mention that the change to the TOE affects certified cryptographic security functionality, but concludes that the change is at the level of guidance documentation and has no effect on assurance.
According to BSI, the users of the certified end products should have been informed of the ROCA vulnerability by the vendors. This information, however, did not reach the Estonian authorities in a timely manner; they had deployed the vulnerable product on more than 750,000 Estonian identity cards.
Alternative approaches
Throughout the lifetime of CC, it has not been universally adopted even by the creator nations, with, in particular, cryptographic approvals being handled separately, such as by the Canadian / US implementation of FIPS-140, and the CESG Assisted Products Scheme (CAPS) in the UK.
The UK has also produced a number of alternative schemes when the timescales, costs and overheads of mutual recognition have been found to be impeding the operation of the market:
The CESG System Evaluation (SYSn) and Fast Track Approach (FTA) schemes for assurance of government systems rather than generic products and services, which have now been merged into the CESG Tailored Assurance Service (CTAS)
The CESG Claims Tested Mark (CCT Mark), which is aimed at handling less exhaustive assurance requirements for products and services in a cost and time efficient manner.
In early 2011, NSA/CSS published a paper by Chris Salter, which proposed a Protection Profile oriented approach towards evaluation. In this approach, communities of interest form around technology types which in turn develop protection profiles that define the evaluation methodology for the technology type. The objective is a more robust evaluation. There is some concern that this may have a negative impact on mutual recognition.
In September 2012, the Common Criteria published a Vision Statement implementing to a large extent Chris Salter's thoughts from the previous year. Key elements of the Vision included:
Technical Communities will be focused on authoring Protection Profiles (PP) that support their goal of reasonable, comparable, reproducible and cost-effective evaluation results
Evaluations should be done against these PPs if possible; if not, mutual recognition of Security Target evaluations would be limited to EAL2.
See also
Bell–LaPadula model
China Compulsory Certificate
Evaluation Assurance Level
FIPS 140-2
Information Assurance
ISO 9241
ISO/IEC 27001
Usability testing
Verification and validation
References
External links
The official website of the Common Criteria Project
The Common Criteria standard documents
List of Common Criteria evaluated products
List of Licensed Common Criteria Laboratories
Towards Agile Security Assurance
Important Common Criteria Acronyms
Common Criteria Users Forum
Additional Common Criteria Information on Google Knol
OpenCC Project – free Apache license CC docs, templates and tools
Common Criteria Quick Reference Card
Common Criteria process cheatsheet
Common Criteria process timeline
Computer security standards
Evaluation of computers
ISO standards | Common Criteria | [
"Technology",
"Engineering"
] | 4,224 | [
"Cybersecurity engineering",
"Computer security standards",
"Computer standards",
"Evaluation of computers",
"Computers"
] |
57,627 | https://en.wikipedia.org/wiki/Kodak | The Eastman Kodak Company, referred to simply as Kodak, is an American public company that produces various products related to its historic basis in film photography. The company is headquartered in Rochester, New York, and is incorporated in New Jersey. It is best known for photographic film products, which it brought to a mass market for the first time.
Kodak began as a partnership between George Eastman and Henry A. Strong to develop a film roll camera. After the release of the Kodak camera, Eastman Kodak was incorporated on May 23, 1892. Under Eastman's direction, the company became one of the world's largest film and camera manufacturers, and also developed a model of welfare capitalism and a close relationship with the city of Rochester. During most of the 20th century, Kodak held a dominant position in photographic film, and produced a number of technological innovations through heavy investment in research and development at Kodak Research Laboratories. Kodak produced some of the most popular camera models of the 20th century, including the Brownie and Instamatic. The company's ubiquity was such that its "Kodak moment" tagline entered the common lexicon to describe a personal event that deserved to be recorded for posterity.
Kodak began to struggle financially in the late 1990s as a result of increasing competition from Fujifilm. The company also struggled with the transition from film to digital photography, even though Kodak had developed the first self-contained digital camera. Attempts to diversify its chemical operations failed, and as a turnaround strategy in the 2000s, Kodak instead made an aggressive turn to digital photography and digital printing. These strategies failed to improve the company's finances, and in January 2012, Kodak filed for Chapter 11 bankruptcy protection in the United States Bankruptcy Court for the Southern District of New York.
In September 2013, the company emerged from bankruptcy, having shed its large legacy liabilities, restructured, and exited several businesses. Since emerging from bankruptcy, Kodak has continued to provide commercial digital printing products and services, motion picture film, and still film, the last of which is distributed through the spinoff company Kodak Alaris. The company has licensed the Kodak brand to several products produced by other companies, such as the PIXPRO line of digital cameras manufactured by JK Imaging. In response to the COVID-19 pandemic, Kodak announced in late July 2020 that it would begin production of pharmaceutical materials.
History
Name
The letter k was a favorite of George Eastman's; he is quoted as saying, "it seems a strong, incisive sort of letter." He and his mother, Maria, devised the name Kodak using an Anagrams set. Eastman said that there were three principal concepts he used in creating the name: it should be short, easy to pronounce, and not resemble any other name or be associated with anything else. According to a 1920 ad, the name "was simply invented – made up from letters of the alphabet to meet our trade-mark requirements. It was short and euphonious and likely to stick in the public mind." The Kodak name was trademarked by Eastman in 1888. There was also a rumor that the name Kodak came from the sound made by the Kodak camera's shutter.
Founding
Eastman entered a partnership with Henry Strong in 1880 and the Eastman Dry Plate Company was founded on January 1, 1881, with Strong as president and Eastman as treasurer. Initially, the company sold dry plates for cameras, but Eastman's interest turned to replacing glass plates altogether with a new roll film process. On October 1, 1884, the company was re-incorporated as the Eastman Dry Plate and Film Company. In 1885, Eastman patented the first practical film roll holder with William Walker, which would allow dry plate cameras to store multiple exposures in a camera simultaneously. That same year, Eastman patented a form of paper film he called "American film". Eastman would continue experimenting with cameras and hired chemist Henry Reichenbach to improve the film. These experiments would culminate in an 1889 patent for nitrocellulose film. As the company continued to grow, it was re-incorporated several more times. In November 1889, it was renamed the Eastman Company and 10,000 shares of stock were issued for $100. On May 23, 1892, another round of capitalization occurred and it was renamed Eastman Kodak. An Eastman Kodak of New Jersey was established in 1901 and existed simultaneously with the Eastman Kodak of New York until 1936, when the New York corporation was dissolved and its assets were transferred to the New Jersey corporation. Kodak remains incorporated in New Jersey today, although its headquarters is in Rochester.
The Kodak camera
In 1888, the Kodak camera was patented by Eastman. It was a box camera with a fixed-focus lens on the front and no viewfinder; two V-shaped silhouettes at the top aided in aiming in the direction of the subject. At the top it had a rotating key to advance the film, a pull-string to set the shutter, and a button on the side to release it, exposing the celluloid film. Inside, it had a rotating bar to operate the shutter. When the user pressed the button to take a photograph, an inner rope was tightened and the exposure began. Once the photograph had been taken, the user had to rotate the upper key to advance to the next frame on the celluloid tape.
The $25 camera came pre-loaded with a film roll of 100 exposures, and could be mailed to Eastman's headquarters in Rochester with $10 for processing. The camera would be returned with prints, negatives, and a new roll of film. Additional rolls were also sold for $2 to professional photographers who wished to develop their own photographs. By unburdening the photographer from the complicated and expensive process of film development, photography became more accessible than ever before. The camera was an immediate success with the public and launched a fad of amateur photography. Eastman's advertising slogan, "You Press the Button, We Do the Rest", soon entered the public lexicon, and was referenced by Chauncey Depew in a speech and Gilbert and Sullivan in their opera Utopia, Limited.
Expansion
In the 1890s and early 1900s, Kodak grew rapidly and outmaneuvered competitors through a combination of innovation, acquisitions, and exclusive contracts. Eastman recognized that film would return more profit than the cameras that used them, and focused on control of the film market. This razor and blades model of sales would change little for several decades. Larger facilities were soon needed in Rochester, and the construction of Kodak Park began in 1890. Kodak purchased and opened several shops and factories in Europe, particularly in the United Kingdom. The British holdings were initially organized under the Eastman Photographic Materials Company. Beginning in 1898, they were placed under the holding company Kodak Limited. An Australian subsidiary, Australia Kodak Limited, was established in 1908. In 1931, Kodak-Pathé was established in France and Kodak AG was formed in Germany following the acquisition of Nagel. The Brownie camera, marketed to children, was first released in 1900, and further expanded the amateur photography market. One of the largest markets for film became the emerging motion picture industry. When Thomas Edison and other film producers formed the Motion Picture Patents Company in 1908, Eastman negotiated for Kodak to be sole supplier of film to the industry. In 1914, Kodak built its current headquarters on State Street. By 1922, the company was the second-largest purchaser of silver in the United States, behind the U.S. Treasury. Beginning on July 18, 1930, Kodak was included in the Dow Jones Industrial Average.
During World War I, Kodak established a photographic school in Rochester to train pilots for aerial reconnaissance. The war strained supply chains, and Eastman sought out new chemical sources the company could have direct control over. At the war's end in 1920, Kodak purchased a hardwood distillation plant in Tennessee from the federal government and established Eastman Tennessee, which later became the Eastman Chemical Company.
Henry Strong died in 1919, after which Eastman became the company president. Eastman began to wind down his involvement in the daily management of the company in the mid-1920s, and formally retired in 1925, although he remained on the board of directors. William Stuber succeeded him as president, and managed the company along with Frank Lovejoy.
In 1912, Kodak established the Kodak Research Laboratories at Building 3 in Kodak Park, with Kenneth Mees as director. Research primarily focused on film emulsions for color photography and radiography. In 1915, Kodak began selling Kodachrome, a two-color film developed by John Capstaff at the research lab. Another two-color product, duplitized film, was marketed for X-ray photography, as it had a short exposure time and could reduce the dosage of radiation needed to take a photo.
Labor relations
Kodak became closely tied to Rochester, where most of its employees resided, and was at the vanguard of welfare capitalism during the 1910s and 1920s. Eastman implemented a number of worker benefit programs, including a welfare fund to provide workmen's compensation in 1910 and a profit-sharing program for all employees in 1912. In 1919, he sold a large portion of his stock to company employees below market value. The expansion of benefits continued after Eastman; in 1928, the company began offering life insurance, disability benefits, and retirement annuity plans for employees, at the behest of company statistician Marion Folsom. Many other employers in the Rochester area took cues from Kodak and increased their own wages and benefits in order to remain competitive in the labor market.
Eastman believed that offering these benefits served the interests of the company. He feared labor unions and believed that offering better compensation than that received by union workers would deter union organizing and avoid the potential costs of a company strike. Selling his stock to employees would simultaneously make it more appealing to investors, who were wary to purchase shares because of his large stake, and lower the price of the stock, which would keep anti-trust lawyers from investigating the company. Because Kodak was a capital-intensive industry with a low labor-cost ratio, employee benefits contributed less to the company's expenses than they would in other industries.
Employment opportunities were not extended to all Rochesterians. The company almost exclusively hired workers of an Anglo-Saxon background under Eastman, and excluded Catholic immigrants, African-Americans, and Jews. Approximately one-third of employees were female. A system of family hiring, where children of employees would be hired to follow their parents, reinforced the concept of an industrial community that Eastman sought to create. These practices were not seriously challenged until after World War II. As a consequence of this shared background and the robust company benefits, Kodak employees formed a close community that viewed unions as outsiders, and no attempt to organize workers at Kodak succeeded during the 20th century.
Great Depression
Kodak was hard-hit by the Great Depression, although Rochester was spared from its worst effects as banks were able to remain solvent. Seventeen percent of the company's employees were laid off between 1929 and 1933. Company founder George Eastman committed suicide at his home on March 14, 1932, due to his declining health. From 1931 to 1936, Kodak participated in the Rochester Plan, a privately funded unemployment insurance program to assist the jobless and boost consumer spending. The program was created by Marion Folsom, who gained national recognition for his work and would later serve as a company director and cabinet secretary for Dwight D. Eisenhower. Payments were made between 1933 and 1936, when layoffs ended at Kodak. The program led to many statistical improvements at Kodak, but overall had an insignificant effect on the Rochester community, as few companies were willing to join the program.
Research projects led to a number of new Kodak products in the 1930s. At Kodak Research Laboratories, Leopold Godowsky Jr. and Leopold Mannes invented a three-color film which would be commercially viable. In 1935, the product was launched as Kodachrome. The company also produced industrial high-speed cameras and began to diversify its chemical operations by producing vitamin concentrates and plastics. In 1934, Kodak entered a partnership with Edwin Land to supply polarized lenses, after briefly considering an offer to purchase Land's patents. Land would later launch the Polaroid Corporation and invented the first instant camera using emulsions supplied by Kodak.
Frank Lovejoy succeeded William Stuber as company president in 1934, and Thomas J. Hargrave became president in 1941.
World War II
After the American entry into World War II, Kodak ceased its production of amateur film and began supplying the American war effort at the direction of the War Production Board. The company produced film, cameras, microfilm, pontoons, synthetic fibers, RDX, variable-time fuses, and hand grenades for the government.
Kodak's European subsidiaries continued to operate during the war. Kodak AG, the German subsidiary, was transferred to two trustees in 1941 to allow the company to continue operating in the event of war between Germany and the United States. The company produced film, fuses, triggers, detonators, and other material. Slave labor was employed at Kodak AG's Stuttgart and Berlin-Köpenick plants. During the German occupation of France, Kodak-Pathé facilities in Sevran and Vincennes were also used to support the German war effort. Kodak continued to import goods to the United States purchased from Nazi Germany through neutral nations such as Switzerland. This practice was criticized by many American diplomats, but defended by others as more beneficial to the American war effort than detrimental. Kodak received no penalties during or after the war for collaboration.
Manhattan Project
After a 1943 meeting between Kenneth Mees and Leslie Groves, a team of Kodak scientists joined the Manhattan Project and enriched uranium-235 at Oak Ridge.
Kodak's experiments with radiation would continue after the war. In 1945, a batch of X-ray film that the company processed mysteriously became fogged. Julian Webb, who had worked at Oak Ridge, proposed that the film had been exposed to radiation released by nuclear weapons tests. The source of the radiation was eventually traced to strawboard packaging from Vincennes, Indiana, which had been irradiated by fallout that had traveled thousands of miles northeast from the Trinity test site. After this discovery, Kodak officials became concerned that fallout would contaminate more of their film, and began monitoring atmospheric radiation levels with rainwater collection at Kodak Park. In 1951, the United States Atomic Energy Commission (AEC) began providing Kodak with a schedule of nuclear tests in exchange for its silence after the company threatened to sue the federal government for damage caused to film products. Kodak was later contracted to create emulsions for radiation tests of fallout from nuclear tests.
Post-war expansion
Kodak reached its zenith in the post-war era, as the usage of film for amateur, commercial, and government purposes all increased. In 1948, Tennessee Eastman created a working acetate film, which quickly replaced nitrate film in the movie industry because it was non-flammable. In 1958, Kodak began marketing a line of super glue, Eastman 910. Its cameras were used by NASA for space exploration. In 1963, the first Instamatic cameras were sold, which were the company's lowest-cost cameras to date. Annual sales passed $1 billion in 1962 and $2 billion in 1966. Albert K. Chapman succeeded Thomas Hargrave as president in 1952, and was succeeded by William S. Vaughn in 1960. Louis K. Eilers would serve as president and CEO between 1969 and 1972. In the 1970s, Kodak published important research in dye lasers, and patented the Bayer Filter method of RGB arrangement on photosensors.
During the Cold War, Kodak participated in a number of clandestine government projects. Beginning in 1955 they were contracted by the CIA to design cameras and develop film for the U-2 reconnaissance aircraft under the Bridgehead Program. Kodak was also contracted by the National Reconnaissance Office to produce cameras for surveillance satellites such as the KH-7 Gambit and KH-9 Hexagon. Between 1963 and 1970, Kodak engineers worked on the cancelled Manned Orbiting Laboratory program, designing optical sensors for a crewed reconnaissance satellite. The company later performed a study for NASA on the astronomical uses of the equipment developed for MOL.
Kodak doubled its number of employees worldwide between 1936 and 1966. The majority remained employed in Rochester, where Kodak was the employer of choice for most residents. The company continued offering higher wages and more benefits than labor market competitors, including the annual wage dividend, a bonus for all employees which typically amounted to 15% of base salary. Employee loyalty was strong, and the company experienced a turnover rate of only 13% in the 1950s, compared to 50% for American manufacturers as a whole. Journalist Curt Gerling noted that Kodak employees behaved like a separate class from other workers in Rochester, writing: "From the cradle infants are impressed with the fact that 'daddy is a Kodak man'; inferentially this compares with 'our father is a 33rd degree mason'". A 1989 New York Times article compared Rochester to a company town.
Kodak's business model changed little from the 1930s to the 1970s, as the company's dominant position made change unnecessary and it made no mergers or acquisitions which might bring new perspectives. Research and development remained focused on products related to film production and development, which caused the company to fall behind rivals Polaroid and Xerox in the development of instant cameras and photocopiers. Kodak would begin selling its own versions of each in the mid-1970s, but neither became popular. Both product lines would be abandoned in the 1990s.
Rivalry with Fujifilm
Japanese competitor Fujifilm entered the U.S. market with lower-priced film and supplies in the 1980s. Fuji defeated Kodak in a bid to become the official film of the 1984 Los Angeles Olympics, which gave it a permanent foothold in the market. Fuji opened a film plant in the U.S. and its aggressive marketing and price cutting began taking market share from Kodak, rising from a 10% share in the early 1990s to 17% in 1997. Fuji also made headway into the professional market with specialty transparency films such as Velvia and Provia, which competed with Kodak's signature professional product, Kodachrome.
Encouraged by shareholders, the company began cutting benefits and making large layoffs to save money. Despite the competition, Kodak's revenues and profits continued to increase during the 1990s, due to the strategy changes and an overall expansion of the global market. Under CEO George M. C. Fisher, Kodak's annual revenue peaked at $16 billion in 1996 and profits peaked at $2.5 billion in 1999.
In May 1995, Kodak filed a petition with the Office of the United States Trade Representative under Section 301 of the Trade Act of 1974, arguing that its poor performance in the Japanese market was a direct result of unfair practices adopted by Fuji. The complaint was lodged by the United States with the World Trade Organization. On January 30, 1998, the WTO announced a "sweeping rejection of Kodak's complaints" about the film market in Japan.
A price war between the two companies began in 1997, eating into Kodak's profits. Kodak's financial results for 1997 showed that revenues dropped from $15.97 billion in 1996 to $14.36 billion in 1997, a fall of more than 10%; net earnings went from $1.29 billion to just $5 million over the same period. Kodak's market share in the United States declined from 80.1% to 74.7%, a one-year drop of 5.4 percentage points.
Fuji and Kodak recognized the upcoming threat of digital photography, and although both sought to diversify as a mitigation strategy, Fuji was more successful at diversification. Fuji stopped production of motion picture film in 2013, leaving Kodak as the last major producer.
Shift to digital
Despite the common misconception that the industry titan's refusal to invest in digital cameras led to its downfall, Kodak was actively involved in the development and production of digital cameras. In 1972, Roger VanHeyningen, the Director of the Physics Division in Kodak Research Labs (KRL), established a small laboratory where researchers began investigating the basic processes of the metal-oxide-semiconductor technology used to manufacture charge-coupled device (CCD) image sensors. In early 1974, KRL began an effort to develop a one-piece color video camera/recorder (now known as a camcorder) to replace home movie cameras that used 8mm film. While working on this project, Kodak scientist Peter L. P. Dillon invented integral color image sensors and single-sensor color video cameras, which are now ubiquitous in products such as smartphone cameras, digital cameras and camcorders, digital cinema cameras, medical cameras, automobile cameras, and drones. In 1982, Kodak designed and manufactured a color CCD image sensor with 360,000 pixels, the highest-resolution sensor available at the time.
Kodak employee Steven Sasson developed the first handheld digital camera in 1975, using a 10,000-pixel monochrome CCD manufactured by Fairchild. Larry Matteson, another employee, wrote a report in 1979 predicting a complete shift to digital photography by 2010. However, company executives were reluctant to make a strong pivot towards digital technology at the time, since it would require heavy investment for a very limited market and put the company into direct competition with established firms in the computer hardware industry.
Under CEOs Colby Chandler and Kay Whitmore, Kodak instead attempted to diversify its chemical operations. Although these new operations were given large budgets, there was little long-term planning or assistance from outside experts, and most of them resulted in large losses. Another effort to diversify failed when Kodak purchased Sterling Drug in 1988 at a cost of $5.1 billion. The drug company was overvalued and soon lost money. Research and development at Kodak Research Laboratories was directed into digital technology during the 1980s, laying the groundwork for a future digital shift.
Kodak sought to establish a presence in the information systems market, acquiring Unix developer Interactive Systems Corporation in early 1988 to operate as a subsidiary of Kodak's newly established software systems division. Kodak's resources enabled Interactive to expand its business, but this raised questions about Kodak's strategy, since the company had previously acquired a 7% stake, diluted to 4.5% by subsequent share issues, in prominent Unix systems vendor Sun Microsystems. Interactive's role in Kodak's strategy was to deliver Unix-based imaging products and to support Kodak's introduction of Photo CD. Interactive planned to release a distribution of Unix System V Release 4 featuring comprehensive support for imaging peripherals such as scanners and printers, along with facilities for color management and color-space conversion. In 1991, however, Kodak put Interactive up for sale, attracting interest from various technology companies including the SunSoft subsidiary of Sun Microsystems, on whose Solaris operating system Interactive had undertaken development work. SunSoft eventually acquired the systems products division of Interactive in early 1992, leaving the services and technologies division with Kodak until its eventual sale to SHL Systemhouse in 1993. Kodak continued its relationship with Sun as a notable customer, integrating various Sun technologies in its document management products. Kodak would later sue Sun for infringing three software patents acquired by Kodak in 1997 from Wang Laboratories.
In 1993, Whitmore announced the company would restructure, and he was succeeded by George M. C. Fisher, a former Motorola CEO, later that year. Under Fisher, the company abandoned diversification in chemicals and focused on an incremental shift to digital technology. Tennessee Eastman was spun off as Eastman Chemical on January 1, 1994, and Sterling Drug's remaining operations were sold in August 1994. Eastman Chemical later became a Fortune 500 company in its own right. A key component of the incremental strategy was Kodak's line of digital self-service kiosks installed in retail locations, where consumers could upload and edit photos, as a replacement for traditional photo developers. Kodak also began manufacturing digital cameras, such as the Apple QuickTake. Film sales continued to rise during the 1990s, which slowed the company's transition to digital.
In 2001, film sales began to fall. Under Daniel Carp, Fisher's successor as CEO, Kodak made an aggressive move into the digital camera market with its EasyShare family of digital cameras. By 2005, Kodak ranked No. 1 in the U.S. in digital camera sales, which surged 40% to $5.7 billion. The company also began selling digital medical imaging systems after acquiring the Israel-based companies Algotec Systems and OREX Computed Radiography. Despite the initial high growth in sales, digital cameras had low profit margins due to strong competition, and the market rapidly matured. Kodak's digital cameras were soon undercut by Asian competitors that could produce and sell cheaper products, and many were sold at a loss as a result. The film business, where Kodak enjoyed high profit margins, also continued to decline. The combination of these two factors caused a fall in profits. By 2007, Kodak had dropped to No. 4 in U.S. digital camera sales with a 9.6% share, and by 2010 had dropped to a 7% share, in seventh place behind Canon, Sony, Nikon, and others, according to research firm IDC. An ever-smaller share of digital pictures was being taken on dedicated digital cameras, which were gradually displaced by cameras in cellphones, smartphones, and tablets. Digital camera sales peaked in 2007 and declined afterwards.
New strategy
Kodak began another strategy shift after Antonio Pérez became CEO in 2005. While Kodak had previously done all development and manufacturing in-house, Pérez shut down factories and outsourced or eliminated manufacturing divisions. Kodak agreed to divest its digital camera manufacturing operations to Flextronics in August 2006, including assembly, production and testing. The company exited the film camera market altogether and began winding down the production of film products. In total, 13 film plants and 130 photofinishing facilities were closed, and 50,000 employees laid off, between 2004 and 2007. In 2009, Kodak announced that it would cease selling Kodachrome color film, ending 74 years of production, after a dramatic decline in sales.
Pérez invested heavily in digital technologies and new services that capitalized on the company's technological innovation to boost profit margins. He also spent hundreds of millions of dollars to build up a high-margin printer ink business to replace falling film sales, a move which was widely criticized because the amount of competition in the printer market would make expansion difficult. Kodak's ink strategy rejected the razor-and-blades business model used by dominant market leader Hewlett-Packard, instead selling expensive printers with cheaper ink cartridges. In 2011, these new lines of inkjet printers were said to be on the verge of turning a profit, although some analysts were skeptical, as printouts were gradually being replaced by electronic copies on computers, tablets, and smartphones. Inkjet printers continued to be viewed as one of the company's anchors after it entered bankruptcy proceedings. However, in September 2012 declining sales forced Kodak to announce an exit from the consumer inkjet market.
Bankruptcy
Kodak's finances and stock value continued to decline, and in 2009 the company negotiated a $300 million loan from KKR. A number of divisions were sold off to repay debts from previous investments, most notably the Kodak Health Group, one of the company's profitable units, which was sold to Onex in 2007. Kodak used the $2.35 billion from the sale to fully repay its approximately $1.15 billion of secured term debt. Around 8,100 employees from the Kodak Health Group transferred to Onex, which renamed the business Carestream Health. In 2010, Kodak was removed from the S&P 500.
In the face of growing debts and falling revenues, Kodak also turned to patent litigation to generate revenue. In 2010, it received $838 million from patent licensing that included a settlement with LG. Between 2010 and 2012, Kodak and Apple sued each other in multiple patent infringement lawsuits.
By 2011, Kodak was rapidly using up its cash reserves, stoking fears of bankruptcy; it had $957 million in cash in June 2011, down from $1.6 billion in January 2001. Later that year, Kodak reportedly explored selling off or licensing its vast portfolio of patents to stave off bankruptcy. In December 2011, two board members who had been appointed by KKR resigned. By January 2012, analysts suggested that the company could enter bankruptcy followed by an auction of its patents, as it was reported to be in talks with Citigroup to provide debtor-in-possession financing. This was confirmed on January 19, 2012, when the company filed for Chapter 11 bankruptcy protection and obtained a $950 million, 18-month credit facility from Citigroup to enable it to continue operations. Under the terms of its bankruptcy protection, Kodak had a deadline of February 15, 2013, to produce a reorganization plan. In January 2013, the court approved financing for Kodak to emerge from bankruptcy by mid-2013.
During bankruptcy proceedings, Kodak sold many of its patents for approximately $525 million to a consortium organized by Intellectual Ventures and RPX Corporation, whose members included Apple, Google, Facebook, Amazon, Microsoft, Samsung, Adobe Systems, and HTC. Kodak announced that it would end the production of several products, including digital cameras, pocket video cameras, digital picture frames, and inkjet printers. As part of a settlement with the UK-based Kodak Pension Plan, Kodak agreed to sell its photographic film, commercial scanner, and photo kiosk operations, which were reorganized as a spinoff company, Kodak Alaris. The Image Sensor Solutions (ISS) division of Kodak was sold to Platinum Equity and became Truesense Imaging, Inc.
On September 3, 2013, Kodak announced that it had emerged from bankruptcy as a technology company focused on imaging for business. Its main business segments would be Digital Printing & Enterprise and Graphics, Entertainment & Commercial Films.
Kodak's decline and bankruptcy were damaging to the Rochester area. Its jobs were largely replaced with lower-paying ones, contributing to a high poverty rate in the city. Between 2007 and 2018, real GDP losses from Kodak canceled out the growth in all other sectors in Rochester.
Post-bankruptcy
On March 12, 2014, Kodak announced that Jeffrey J. Clarke had been named as chief executive officer and a member of its board of directors. At the end of 2016, Kodak reported its first annual profit since bankruptcy.
In recent years, Kodak has licensed its brand to a number of other companies. The California-based company JK Imaging has manufactured Micro Four Thirds cameras under the Kodak brand since 2013. The Kodak Ektra, a smartphone, was designed by the Bullitt Group and launched in 2016. Digital tablets were announced with Archos in 2017. In 2018, Kodak announced two failed cryptocurrency products: the cryptocurrency KodakCoin, developed by RYDE Holding, Inc., and the Kodak KashMiner, a Bitcoin-mining computer developed by Spotlite.
In 2016, the Kodak spinoff company eApeiron was founded with assets acquired from Kodak and an investment by Alibaba. The company's mission is to eliminate counterfeit goods ("knock-offs") and promote product authenticity.
Despite the pivot to digital technology, film remains a major component of Kodak's business. The company continues to supply film to the motion picture industry after signing new agreements with major studios in 2015 and 2020. In 2022, Kodak announced it would hire new film technicians after film photography experienced a revival among hobbyists.
Current products and services
Kodak is currently arranged in four business reporting segments: Traditional Print, Digital Print, Advanced Material & Chemicals (including Motion Picture), and Brand (licensing of consumer products produced by third parties).
Kodak is the primary provider of film stock to the American motion picture industry, and also provides packaging, functional printing, graphic communications, and professional services for international businesses.
Kodak Alaris, a UK-based company, holds the rights to the still photographic film and Kodak Moments photo kiosk businesses that formed part of the 2012 bankruptcy settlement. It also held the rights to the Photo Paper, Photochemicals, Display and Software (PPDS) businesses, but sold these to China-based Sino Promise in 2020. In 2023 Sino Promise relinquished the photochemical rights, which reverted to Eastman Kodak, who re-licensed them (see Brand).
Advanced Material & Chemicals
Materials
KodaCOLOR Fabric Inks
KodaLUX Fabric coating
Silver Anti-microbial materials and coating
Chemicals
Toll manufacture of specialty chemicals
Industrial films
Kodak Aerocolor IV 125 2460 Color Negative Aerial Film
Kodak ACCUMAX plotter films for printed circuit boards
Kodak ESTAR polyester films
Motion Picture
Since the 2000s, most movies worldwide have been captured and distributed digitally, but some filmmakers still prefer to shoot on film to achieve the desired results.
Motion picture camera films are produced in 8mm, 16mm and 35mm formats. In addition to the camera films listed below, a number of motion picture technical stocks are also produced, such as internegative, duplication, sound, and final print films, together with the associated processing chemicals.
Camera Films
Black & White Negative Stock
Kodak Double X 5222/7222
Black & White Reversal Stock
Kodak Tri-X 7266
Color Negative Stocks
Kodak Vision 3 50D 5203/7203
Kodak Vision 3 250D 5207/7207
Kodak Vision 3 200T 5213/7213
Kodak Vision 3 500T 5219/7219
Color Reversal Stocks
Kodak Ektachrome 100D 7294
Still Film
Kodak-branded still films are manufactured by Eastman Kodak and procured from it by Kodak Alaris, which holds the rights to the sale, marketing and distribution of these products.
Eastman Kodak also undertakes contract coating and/or packaging for other still film brands, including Cinestill (remjet-free versions of color movie films), Lomography color negative films, and Fujifilm, which from 2022 procured production of some color negative films from its former business rival. Due to shortages of still film, 35mm motion picture stock has also been made available to still film consumers by third parties such as Flic Film.
Kodak currently produces several photographic film products in 35mm and 120 film formats. In response to the growing demand for film by hobbyists, Kodak launched a newly formulated version of the discontinued Ektachrome 100 in the 35mm format in September 2018. The following year, the company announced the film stock in 120 and 4×5 formats.
B&W Negative Film
Kodak Tri-X 320
Kodak Tri-X 400
Kodak TMAX 100
Kodak TMAX 400
Kodak TMAX P3200
Color Negative Film (Consumer)
Kodak ColorPlus/Kodacolor 200
Kodak ProImage 100
Kodak Gold 200
Kodak Ultramax 400
Kodak Ultramax 800 (Single use cameras)
Color Negative Film (Professional)
Kodak Ektar 100
Kodak Portra 160
Kodak Portra 400
Kodak Portra 800
Color Reversal Film
Kodak Ektachrome E100
Reversal film or slide film is a type of photographic film that produces a positive image on a transparent base.
Traditional and digital printing
Kodak produces commercial inkjet printers, electrophotographic printing equipment, and related consumables and services. At present, Kodak sells the Prosper, Nexfinity, and Uteco lines of commercial printers, and the Prosper and Versamark imprinting systems. Kodak designs and manufactures products for flexography printing through its Flexcel brand. The company has also sold a line of computer to plate (CTP) devices since 1995.
The company currently has partnerships with touch-panel producers for functional printing, including ones with UniPixel announced on April 16, 2013, and Kingsbury Corp. launched on June 27, 2013.
In 1997, Heidelberg Printing Machines AG and Eastman Kodak Co. created Nexpress Solutions LLC, a joint venture to develop a digital color printing press for the high-end market segment. Heidelberg acquired Eastman Kodak Co.'s Office Imaging black and white digital printing activities in 1999. In March 2004, Heidelberg transferred its Digital Print division to Kodak under mutual agreement.
Brand
The Kodak brand is licensed to several consumer products produced by other companies, such as the PIXPRO line of digital cameras manufactured by JK Imaging.
Batteries
Kodak licenses its brand for alkaline, lithium, hearing aid and button cell batteries.
Professional Photo Chemistry
The brand rights to Kodak professional photo chemistry passed to Kodak Alaris in 2012 as part of the bankruptcy settlement. In 2020 Alaris sold these rights to China-based Sino Promise, a supplier of color chemistry for minilabs. However, in early 2023 Sino Promise decided to exit the business. This enabled US-based Photo Systems Inc., which had manufactured some of the products for Kodak Alaris, to acquire the brand rights directly from Eastman Kodak in September 2023, with the intent to re-introduce the full range of Black & White, C-41, RA-4 and E6 photochemistry.
Former products and services
Photographic film and paper
Kodak continues to produce specialty films and film for newer and more popular consumer formats, but it has discontinued the manufacture of film in most older formats. Among its most famous discontinued film brands was Kodachrome.
Kodak was a leading producer of silver halide paper used for printing from film and digital images. In 2005, Kodak announced it would stop producing black-and-white photo paper. All paper manufacturing operations were transferred to Kodak Alaris in 2013.
Still film cameras
Kodak sold film cameras from the time of its founding until 2007, beginning with the original Kodak camera in 1888. In the 20th century, Kodak's most popular models were the Brownie, sold between 1900 and 1986, and the Instamatic, sold between 1963 and 1988.
Between 1914 and 1932, an autographic feature on Kodak cameras provided a means for recording data on the margin of the negative at the time of exposure.
In 1982, Kodak launched a newly developed line of disc film cameras. The cameras initially sold well due to their compact size, but their poor image quality made them unpopular, and they were discontinued in 1988.
On January 13, 2004, Kodak announced it would stop marketing traditional still film cameras (excluding disposable cameras) in the United States, Canada and Western Europe, but would continue to sell film cameras in India, Latin America, Eastern Europe and China. By the end of 2005, Kodak had ceased manufacturing cameras that used the Advanced Photo System. Kodak licensed the manufacture of Kodak branded cameras to Vivitar in 2006.
Slide projectors
Kodak purchased a concept for a slide projector from Italian-American inventor Louis Misuraca in the early 1960s. The Carousel line of slide projectors was launched in 1962, and a patent was granted to Kodak employee David E. Hansen in 1965. Kodak ended the production of slide projectors in October 2004.
One early Kodak product bridging digital technology with projection techniques was the Kodak Datashow, featuring a translucent liquid crystal display panel that was placed on an overhead projector instead of a conventional transparency, with the panel connected to the display card of a personal computer to accept its video output. This arrangement permitted the computer's display to be projected onto a projection screen or wall, making it suitable for viewing by audiences of more than "a handful of people". Limitations included the monochrome nature of the panel along with a lack of resolution and contrast. However, at just above the "psychologically important £1,000 mark", the product was competitively priced compared with larger color monitors appropriate for group viewing.
Instant cameras
Kodak was the exclusive supplier of negatives for Polaroid cameras from 1963 until 1969, when Polaroid chose to manufacture its own instant film. In 1976, Kodak began selling its own line of EK instant camera models. These were followed by the Colorburst in 1979 and the Kodamatic in 1982. After losing a patent battle with Polaroid Corporation, Kodak left the instant camera business in 1986.
Image sensors
In the early 1970s, Kodak began research into CCD image sensors. Kodak developed the first megapixel sensor in a 2/3-inch format, which was marketed in the Videk Megaplus camera in 1987. In 1991, the KAF-1300, a 1.3-megapixel sensor, was used in Kodak's first commercially sold digital camera, the DCS-100. The company began producing its first CMOS image sensors in 2005.
The Bayer filter, an arrangement of RGB color filters over the pixels of an image sensor, was patented by Kodak scientist Bryce Bayer in 1976. In 2007, company scientists John Compton and John Hamilton created a successor to the Bayer filter for digital cameras, which added panchromatic (white) pixels to the RGB array.
In 2011, Kodak sold its Image Sensor Solutions business to Platinum Equity, which renamed it Truesense Imaging shortly after.
Floppy disks
In 1983, Kodak introduced a non-standard 3.3-million-byte diskette, manufactured by an outside company, DriveTec. Another model was announced in 1984. Kodak's 1985 purchase of Verbatim, a floppy disk manufacturer with over 2,000 employees, expanded its presence in the market. Part of this acquisition was Verbatim's Data Encore unit, which "copies software onto floppy disks in a way that makes it difficult for software 'pirates' to re-copy the material."
In 1990 Kodak exited the diskette business and sold Verbatim to Mitsubishi Kasei, the forerunner of Mitsubishi Chemical Corporation. Kodak held onto Verbatim's optical disk unit.
Digital cameras
The Kodak DCS series of digital single-lens reflex cameras and digital camera backs were released by Kodak in the 1990s and 2000s, and discontinued in 2005. They were based on existing 35mm film SLRs from Nikon and Canon. In 2003, the Kodak EasyShare series was launched. Kodak extensively studied customer behavior, finding that women in particular enjoyed taking digital photos but were frustrated by the difficulty in moving them to their computers. Kodak attempted to fill this niche with a wide range of products which made it easy to share photos via PCs. One of their key innovations was a printer dock which allowed consumers to insert their cameras into a compact device and print photos with the press of a button. In April 2006, Kodak introduced the Kodak EasyShare V610, at that time the world's smallest 10× (38–380 mm) optical zoom camera at less than 2.5 cm (an inch) thick.
Many of Kodak's early compact digital cameras were designed and built by Chinon Industries, a Japanese camera manufacturer. In 2004, Kodak Japan acquired Chinon and many of its engineers and designers joined Kodak Japan. In July 2006, Kodak announced that Flextronics would manufacture and help design its digital cameras.
Kodak ended the production of its digital cameras in 2012.
Digital picture frames
Kodak first entered the digital picture frame market with the Kodak Smart Picture Frame in the fourth quarter of 2000. It was designed by Weave Innovations and licensed to Kodak with an exclusive relationship with Weave's StoryBox online photo network. Smart Frame owners connected to the network via an analog telephone connection built into the frame. The frame could hold 36 images internally and came with a six-month free subscription to the StoryBox network.
Kodak re-entered the digital photo frame market at CES in 2007 with the introduction of four new EasyShare-branded models, some of which included Wi-Fi capability to connect with the Kodak Gallery. Kodak ended the production of digital picture frames in 2012.
Kodak Gallery
In June 2001, Kodak purchased the photo-developing website Ofoto, later renamed Kodak Gallery. The website enabled users to upload their photos into albums, publish them into prints, and create mousepads, calendars, and other products. On March 1, 2012, Kodak announced that it sold Kodak Gallery to Shutterfly for $23.8 million.
Light-emitting diodes
Kodak research into LED technology produced a number of innovations beginning in the 1980s. In 1986, Kodak unveiled the first high-volume LED printer. Kodak chemists Ching Wan Tang and Steven Van Slyke created the first practical organic light-emitting diode (OLED) in 1987. In 1999, Kodak entered a partnership with Sanyo to produce OLED displays. Kodak sold its OLED business unit to LG Electronics in December 2009.
Medical technology
Kodak's first radiographic film was produced in 1896.
In the 1970s, Kodak developed the Ektachem Analyzer, a clinical chemistry analyzer. The devices were first sold in 1980 after receiving FDA approval.
In 2007 Kodak agreed to sell the Kodak Health Group to Onex Corporation for $2.35 billion in cash, plus up to $200 million in additional future payments if Onex achieved specified returns on the acquisition. The sale was completed on May 1, 2007.
Document imaging
Kodak's involvement in document imaging technology began when George Eastman partnered with banks to image checks in the 1920s. Kodak subsidiary Recordak was founded in 1928 to manufacture some of the first microfilm. Kodak acquired the Bowe Bell & Howell scanner division in 2009. The Document Imaging division was transferred to Kodak Alaris in 2013.
Photocopiers and duplicators
Kodak entered the plain paper photocopier market in 1975 with the Kodak Ektaprint 100 Copier-Duplicator. In 1986 it announced the Ektaprint 235 copier-duplicator, capable of producing 5,100 copies per hour, and the Ektaprint 300, capable of producing 6,000 copies per hour. In 1988 IBM exited the photocopier industry, selling its interests to Kodak for an undisclosed sum. On September 10, 1996, Kodak announced it was selling its copier business to Danka for $684 million in cash.
Consumer inkjet printers
Kodak entered the consumer inkjet photo printer market in a joint venture with manufacturer Lexmark, producing the Kodak Personal Picture Maker PM100 and PM200. In February 2007, Kodak re-entered the market with a new product line of All-in-One (AiO) inkjet printers employing several technologies marketed as Kodacolor Technology. Advertising emphasized low prices for ink cartridges rather than for the printers themselves. The printers failed to become profitable and were a major contributor to the company's bankruptcy in 2012. Kodak announced plans to stop selling inkjet printers in 2013 as it focused on commercial printing.
Photo kiosks
Kodak's first self-service kiosk opened in 1988. The Picture Maker line of kiosks was launched in 1994 for digital prints. Kiosks were installed in retail locations to provide a digital equivalent to the company's film processing locations. Over time, additional features for image editing and products were added, such as the ability to remove red-eye effect from portraits. The PYNK Smart Print System, announced in 2010, would allow customers to create collages on-demand. Over 100,000 kiosks were installed worldwide during the 1990s and 2000s. The photo kiosks were transferred to Kodak Alaris as part of the Personalized Imaging division in 2013.
Photography On Demand
After two years in development, Kodak launched an on-demand photography service platform, Kodakit, in early 2016. The launch was formally announced in January 2017 at CES. Kodakit initially targeted consumers for wedding and portrait photography, but soon shifted towards businesses seeking high volume photography – real estate, food photography, and head shots. The platform was criticized for requiring photographers to relinquish copyright of their works. After failing to generate enough traction, the Singapore-based subsidiary announced that it would cease operations in January 2020.
Motion picture and TV production
In addition to the home market-oriented 8mm and Super 8 formats developed by Kodak in the 1930s and 1960s respectively, which are still sold today, Kodak also briefly entered the professional television production videotape market in the mid-1980s under the product portfolio name Eastman Professional Video Tape Products.
Kodak previously owned the visual effects film post-production facilities Cinesite in Los Angeles and London and LaserPacific in Los Angeles. In April 2010, Kodak sold LaserPacific and its subsidiaries Laser-Edit, Inc, and Pacific Video, Inc., for an undisclosed sum to TeleCorps Holdings, Inc. In May 2012, Kodak sold Cinesite to Endless LLP, an independent British private equity house. Kodak also sold Pro-Tek Media Preservation Services, a film storage company in Burbank, California, to LAC Group in October 2013.
Operations
Since 2015, Kodak has had five business divisions: Print Systems, Enterprise Inkjet Systems, Micro 3D Printing and Packaging, Software and Solutions, and Consumer and Film.
Kodak's corporate headquarters are located at Kodak Tower in downtown Rochester. Its primary manufacturing facility in the United States is Eastman Business Park, where film production occurs.
Subsidiaries
Kodak (Australasia) Pty Ltd
Former manufacturing facilities were located in Coburg, a Northern suburb of Melbourne.
Kodak Canada ULC (formerly Canadian Kodak Company)
Former manufacturing facilities were located in Toronto at Kodak Heights.
Kodak (Xiamen) Digital Imaging Products Company
Former manufacturing facility is located in Xiamen, China.
Kodak Graphic Communications
Current manufacturing facility located in Osterode am Harz, Germany.
Kodak Limited (UK)
Former manufacturing facilities were located in Harrow, Morley, Kirkby, and Annesley.
Kodak Teenage Film Awards
Beginning in 1962, Kodak, together with the University Film Association, the Council on International Nontheatrical Events, and the University Film Foundation, presented the annual Teenage Film Awards.
Entry rules stated: "Entrant must be no older than 19; film may be 8mm, super 8, or 16mm, on any subject."
Eric Goldberg's Super-8 film won the 1974 Grand Prize. Jay Sumsion, KBYU-TV's director of broadcast production, won second place in 1971. Charles S. Cohen and Carl Weingarten were awarded Honorable Mentions.
Notable people
Leadership
Board of directors
As of July 2022:
James Continenza, chairman and CEO of Kodak
B. Thomas Golisano, founder and former president of Paychex
Philippe Katz, UECC executive
Katherine B. Lynch, former COO of UBS
Jason New, co-CEO of Onex Credit
Darren L. Richman, co-founder of KLIM investment group
Michael E. Selick Jr., president of SeaAgri Solutions
Scientists
Bryce Bayer, color scientist (1929–2012)
Harry Coover, polymer chemist (1917–2011)
F. J. Duarte, laser physicist and author (left in 2006)
Marion B. Folsom, statistician (1893–1976)
Loyd A. Jones, camouflage physicist (1884–1954)
Maurice Loyal Huggins, polymer scientist (1897–1981)
Rudolf Kingslake, optical designer (1903–2003)
David MacAdam, color scientist (1910–1998)
Kenneth Mees, film scientist and founder of the research laboratories (1882–1960)
Perley G. Nutting, physicist and founder of OSA (1873–1949)
Steven Sasson, electrical engineer
Ludwik Silberstein, physicist (1872–1948)
Steven Van Slyke, OLED scientist (left in 2010)
Warren J. Smith, optical engineer (1922–2008)
Ching W. Tang, OLED scientist (left in 2006)
John Texter, physical chemist and materials scientist (1978–1998)
Arthur Widmer, special effects film pioneer and recipient of an Academy of Motion Picture Arts and Sciences Award of Commendation (1914–2006)
Photographers
Jeannette Klute, research photographer (1918–2009)
Archive donation
In 2005, Kodak Canada donated its entire historic company archives to Ryerson University in Toronto, Ontario, Canada. The Ryerson University Library also acquired an extensive collection of materials on the history of photography from the private collection of Nicholas M. and Marilyn A. Graver of Rochester, New York. The Kodak Archives, begun in 1909, contain the company's Camera Collection, historic photos, files, trade circulars, Kodak magazines, price lists, daily record books, equipment, and other ephemera. It includes the contents of the Kodak Heritage Collection Museum, a museum established in 1999 for Kodak Canada's centennial that Kodak closed in 2005 along with the company's entire "Kodak Heights" manufacturing campus in Mount Dennis, Toronto.
Controversies and lawsuits
Early legal issues
Patent infringement
Kodak encountered a number of challenges from rival patents for film and cameras. These began while Eastman was still developing his first camera, when he was forced to pay inventor David Houston for a license on his pre-existing patents. A major patent infringement lawsuit came from rival film producer Ansco. Inventor Hannibal Goodwin had filed his own patent for nitrocellulose film in 1887, prior to the one owned by Kodak, but it was initially denied by the patent office. In 1898, Goodwin succeeded in convincing the patent office to change its decision and his patent was granted. Ansco purchased the patent in 1900 and sued Kodak for infringement in 1902. The lawsuit spent over a decade in court and was finally settled in 1914 at a cost of $5 million to Kodak.
Anti-trust lawsuit
In 1911, the federal government began an anti-trust investigation into Kodak for exclusive contracts, acquisitions of competitors, and price-fixing. Eastman had cautioned the board of directors against eliminating competition, but believed that many of the company's other monopolistic actions were in the best interest of consumers by allowing the company to produce high-quality products. The investigation resulted in a lawsuit against Kodak in 1913 and a consent decree in 1921, ordering Kodak to stop fixing prices and sell many of its interests.
FIGHT
Prior to the civil rights movement, Kodak hired virtually no African-American employees. In the 1950s, Rochester's African-American population grew rapidly, rising from 7,845 in 1950 to around 40,000 in 1964. Many objected to Kodak's discriminatory hiring practices and organized to end the status quo. The civil rights organization F.I.G.H.T. (Freedom, Integration, God, Honor—Today) was formed in 1965 by Saul Alinsky, and led by Minister Franklin Florence. The organization protested Kodak and successfully negotiated an agreement with the company to hire 600 African-American workers through a job training program in 1967.
Polaroid Corporation v. Eastman Kodak Company
Kodak and Polaroid were partners from the 1930s until the 1960s, with Polaroid purchasing large quantities of film from Kodak for its cameras and for further research and development. Their cooperative partnership came to an end in the late 1960s, when Polaroid pursued independent production of its film and Kodak expressed an interest in developing its own instant camera, bringing them into direct competition. In 1976, Kodak unveiled the EK series of instant cameras. Shortly after the announcement, Polaroid filed a complaint for patent infringement in the U.S. District Court for the District of Massachusetts, beginning a lawsuit which would last a decade. Polaroid Corporation v. Eastman Kodak Company was decided in Polaroid's favor in 1985, and after a short period of appeals, Kodak was forced to exit the instant camera market in 1986. On October 12, 1990, Polaroid was awarded $909 million in damages. After appeals, Kodak agreed to pay Polaroid $925 million in 1991, at the time the largest settlement for patent infringement in US history.
Pollution
In the 1980s, contamination from silver and other chemicals was discovered in the land around Kodak Park and the nearby Genesee River. In 1989, a pipeline at the edge of the park burst, contaminating a nearby school with about 30,000 gallons of methylene chloride. Many families living near the factories chose to relocate as a result. Kodak pledged $100 million to study and clean up pollution. The company received a $2.15 million fine from the state of New York for illegally disposing of silver-contaminated soil and failing to disclose chemical spills in 1990, the largest environmental fine ever issued by the state at the time. Kodak remained a heavy air and water polluter during the 1990s.
In the 1990s, an elementary school was built on the site of the former Kodak factory at Vincennes in France. After three cases of pediatric cancer were reported at the school, an INVS investigation was opened into potential carcinogens left behind from the factory. The school was closed in 2001, but no link between the cancers and factory pollutants was established by medical experts. The school reopened in 2004.
In a 2014 settlement with the EPA, Kodak provided $49 million to clean up pollution it had caused in the Genesee River and at superfund sites in New York and New Jersey.
Departure from Better Business Bureau
In 2006, Kodak notified the BBB of Upstate New York that it would no longer accept or respond to consumer complaints submitted by them. On March 26, 2007, the Council of Better Business Bureaus (CBBB) announced that Eastman Kodak was resigning its national membership in the wake of expulsion proceedings initiated by the CBBB board of directors. Kodak said its customer service and customer privacy teams concluded that 99% of all complaints forwarded by the BBB already were handled directly with the customer, and that the BBB had consistently posted inaccurate information about Kodak.
Pharmaceuticals loan
On July 28, 2020, the Trump administration announced that it planned to give Kodak a $765 million loan for manufacturing ingredients used in pharmaceuticals, to rebuild the national stockpile depleted by the COVID-19 pandemic and reduce dependency on foreign factories. Kodak had not previously manufactured any such products. The funding would come through U.S. International Development Finance Corporation and was arranged by presidential trade advisor Peter Navarro. Within two days, the company's stock price had gained as much as 2,189% from its price at the close of July 27 on the NYSE.
The New York Times reported that one day before the White House announced the loan, Kodak CEO Jim Continenza was given 1.75 million stock options, some of which he was able to execute immediately. The funding was put on hold as the U.S. Securities and Exchange Commission began probing allegations of insider trading by Kodak executives ahead of the deal's announcement, and the funding agency's inspector general announced scrutiny into the loan terms. In November, CEO Jim Continenza stated that Kodak was still committed to pharmaceutical manufacturing without the loan. The DFC concluded no wrongdoing on its part in December. Continenza and other executives were ordered to testify as part of the SEC investigation in June 2021. A class-action lawsuit from Kodak investors was dismissed in 2022.
Xinjiang social media post
In July 2021, Kodak removed a post about Xinjiang from its Instagram page. The photo was taken by the photographer Patrick Wack who describes the region as an "Orwellian dystopia" in a reference to the persecution of Uyghurs in China. In later statements on Instagram and WeChat, Kodak declared its Instagram page was not a "platform for political commentary" and affirmed their "close cooperation with various [Chinese] government departments".
See also
Eastman Business Park, formerly Kodak Park
Kodak Vision Award
Notes
References
Bibliography
Further reading
Binant, Philippe, Au coeur de la projection numérique, Actions, 29, pp. 12–13, Kodak, Paris, 2007.
External links
Kodak Camera Catalog Info at Historic Camera.
Literature published by Kodak at Monroe County Genealogy
Cercle des Conservateurs de l’Image Latente, an archive of Kodak-Pathé materials maintained by former employees
Academy Award for Technical Achievement winners
American companies established in 1888
Battery manufacturers
Companies listed on the New York Stock Exchange
Companies that filed for Chapter 11 bankruptcy in 2012
Computer companies of the United States
Computer hardware companies
Computer printer companies
Consumer battery manufacturers
Electronics companies of the United States
Former components of the Dow Jones Industrial Average
Manufacturing companies based in New Jersey
Manufacturing companies based in Rochester, New York
Multinational companies headquartered in the United States
Organizations awarded an Academy Honorary Award
Photographic film makers
Photography equipment manufacturers of the United States
Technology companies established in 1888
1888 establishments in New York (state)
Recipients of the Scientific and Technical Academy Award of Merit
George Eastman | Kodak | [
"Technology"
] | 12,536 | [
"Computer hardware companies",
"Computers"
] |
57,630 | https://en.wikipedia.org/wiki/Monsoon | A monsoon is traditionally a seasonal reversing wind accompanied by corresponding changes in precipitation, but the term is now used to describe seasonal changes in atmospheric circulation and precipitation associated with the annual latitudinal oscillation of the Intertropical Convergence Zone (ITCZ) between its limits to the north and south of the equator. Usually, the term monsoon is used to refer to the rainy phase of a seasonally changing pattern, although technically there is also a dry phase. The term is also sometimes used to describe locally heavy but short-term rains.
The major monsoon systems of the world consist of the West African, the Asian–Australian, the North American, and the South American monsoons.
The term was first used in English in British India and neighboring countries to refer to the great seasonal winds blowing from the Bay of Bengal and the Arabian Sea in the southwest, bringing heavy rainfall to the area.
Etymology
The etymology of the word monsoon is not wholly certain. The English monsoon came from Portuguese monção, ultimately from Arabic mawsim ("season"), "perhaps partly via early modern Dutch monson".
History
Asian monsoon
Strengthening of the Asian monsoon has been linked to the uplift of the Tibetan Plateau after the collision of the Indian subcontinent and Asia around 50 million years ago. On the basis of studies of records from the Arabian Sea and of the wind-blown dust in the Loess Plateau of China, many geologists believe the monsoon first became strong around 8 million years ago. More recently, studies of plant fossils in China and new long-duration sediment records from the South China Sea led to a timing of the monsoon beginning 15–20 million years ago and linked it to early Tibetan uplift. Testing of this hypothesis awaits deep ocean sampling by the Integrated Ocean Drilling Program. The monsoon has varied significantly in strength since this time, largely linked to global climate change, especially the cycle of the Pleistocene ice ages. A study of Asian monsoonal climate cycles from 123,200 to 121,210 years BP, during the Eemian interglacial, suggests that they had an average duration of around 64 years, with a minimum duration of around 50 years and a maximum of approximately 80 years, similar to today.
A study of marine plankton suggested that the South Asian Monsoon (SAM) strengthened around 5 million years ago. Later, during glacial periods, the sea level fell and the Indonesian Seaway closed. When this happened, cold waters in the Pacific were impeded from flowing into the Indian Ocean. It is believed that the resulting increase in sea surface temperatures in the Indian Ocean increased the intensity of monsoons. In 2018, a study of the SAM's variability over the past million years found that precipitation resulting from the monsoon was significantly reduced during glacial periods compared to interglacial periods like the present day. The Indian Summer Monsoon (ISM) underwent several intensifications during the warming following the Last Glacial Maximum, specifically during the intervals 16,100–14,600 BP, 13,600–13,000 BP, and 12,400–10,400 BP, as indicated by vegetation changes in the Tibetan Plateau showing increases in humidity brought by an intensifying ISM. Though the ISM was relatively weak for much of the Late Holocene, significant glacial accumulation in the Himalayas still occurred due to cold temperatures brought by the westerlies.
During the Middle Miocene, the July ITCZ, the zone of rainfall maximum, migrated northwards, increasing precipitation over southern China during the East Asian Summer Monsoon (EASM) while making Indochina drier. During the Late Miocene Global Cooling (LMCG), from 7.9 to 5.8 million years ago, the East Asian Winter Monsoon (EAWM) became stronger as the subarctic front shifted southwards. An abrupt intensification of the EAWM occurred 5.5 million years ago. The EAWM was still significantly weaker relative to today between 4.3 and 3.8 million years ago but abruptly became more intense around 3.8 million years ago as crustal stretching widened the Tsushima Strait and enabled greater inflow of the warm Tsushima Current into the Sea of Japan. Circa 3.0 million years ago, the EAWM became more stable, having previously been more variable and inconsistent, in addition to being enhanced further amidst a period of global cooling and sea level fall. The EASM was weaker during cold intervals of glacial periods such as the Last Glacial Maximum (LGM) and stronger during interglacials and warm intervals of glacial periods. Another EAWM intensification event occurred 2.6 million years ago, followed by yet another one around 1.0 million years ago. During Dansgaard–Oeschger events, the EASM grew in strength, but it has been suggested to have decreased in strength during Heinrich events. The EASM expanded its influence deeper into the interior of Asia as sea levels rose following the LGM; it also underwent a period of intensification during the Middle Holocene, around 6,000 years ago, due to orbital forcing made more intense by the fact that the Sahara at the time was much more vegetated and emitted less dust. This Middle Holocene interval of maximum EASM was associated with an expansion of temperate deciduous forest steppe and temperate mixed forest steppe in northern China. By around 5,000 to 4,500 BP, the East Asian monsoon's strength began to wane, weakening from that point until the present day. A particularly notable weakening took place ~3,000 BP. The location of the EASM shifted multiple times over the course of the Holocene: first, it moved southward between 12,000 and 8,000 BP, followed by an expansion to the north between approximately 8,000 and 4,000 BP, and most recently retreated southward once more between 4,000 and 0 BP.
Australian monsoon
The January ITCZ migrated further south to its present location during the Middle Miocene, strengthening the summer monsoon of Australia that had previously been weaker.
Five episodes during the Quaternary, at 2.22 Ma (PL-1), 1.83 Ma (PL-2), 0.68 Ma (PL-3), 0.45 Ma (PL-4) and 0.04 Ma (PL-5), have been identified in which the Leeuwin Current (LC) weakened. The weakening of the LC would have affected the sea surface temperature (SST) field in the Indian Ocean, as the Indonesian Throughflow generally warms the Indian Ocean. These five intervals were therefore probably times of considerably lowered SST in the Indian Ocean and would have influenced Indian monsoon intensity. During periods of weak LC, the Indian winter monsoon may have been reduced in intensity and the summer monsoon strengthened, because of changes in the Indian Ocean Dipole caused by the reduction in net heat input to the Indian Ocean through the Indonesian Throughflow. Thus a better understanding of the possible links between El Niño, the Western Pacific Warm Pool, the Indonesian Throughflow, the wind pattern off western Australia, and ice volume expansion and contraction can be obtained by studying the behavior of the LC during the Quaternary at close stratigraphic intervals.
South American monsoon
The South American summer monsoon (SASM) is known to have become weakened during Dansgaard–Oeschger events. The SASM has been suggested to have been enhanced during Heinrich events.
Process
Monsoons were once thought of as a large-scale sea breeze caused by higher temperature over land than over the ocean. This is no longer considered the cause; the monsoon is now regarded as a planetary-scale phenomenon involving the annual migration of the Intertropical Convergence Zone between its northern and southern limits. The limits of the ITCZ vary according to the land–sea heating contrast, and it is thought that the northern extent of the monsoon in South Asia is influenced by the high Tibetan Plateau. These temperature imbalances arise because oceans and land absorb heat in different ways. Over oceans, the air temperature remains relatively stable for two reasons: water has a relatively high heat capacity (3.9 to 4.2 J g⁻¹ K⁻¹), and both conduction and convection equilibrate a hot or cold surface with deeper water (up to 50 metres). In contrast, dirt, sand, and rocks have lower heat capacities (0.19 to 0.35 J g⁻¹ K⁻¹), and they can transmit heat into the earth only by conduction, not convection. Therefore, bodies of water stay at a more even temperature, while land temperatures are more variable.
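As a rough back-of-the-envelope illustration (a sketch using representative midpoints of the heat capacities quoted above, not a figure from the source), the temperature rise for a given absorbed energy is inversely proportional to specific heat capacity:

$$\Delta T = \frac{Q}{m c}, \qquad \frac{\Delta T_{\mathrm{land}}}{\Delta T_{\mathrm{water}}} = \frac{c_{\mathrm{water}}}{c_{\mathrm{land}}} \approx \frac{4.0\ \mathrm{J\,g^{-1}\,K^{-1}}}{0.27\ \mathrm{J\,g^{-1}\,K^{-1}}} \approx 15$$

so for the same energy absorbed per gram, land warms roughly fifteen times more than water, and convective mixing of heat into deeper water layers widens the contrast further.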
During warmer months sunlight heats the surfaces of both land and oceans, but land temperatures rise more quickly. As the land's surface becomes warmer, the air above it expands and an area of low pressure develops. Meanwhile, the ocean remains at a lower temperature than the land, and the air above it retains a higher pressure. This difference in pressure causes sea breezes to blow from the ocean to the land, bringing moist air inland. This moist air rises to a higher altitude over land and then it flows back toward the ocean (thus completing the cycle). However, when the air rises, and while it is still over the land, the air cools. This decreases the air's ability to hold water, and this causes precipitation over the land. This is why summer monsoons cause so much rain over land.
In the colder months, the cycle is reversed. Then the land cools faster than the oceans and the air over the land has higher pressure than air over the ocean. This causes the air over the land to flow to the ocean. When humid air rises over the ocean, it cools, and this causes precipitation over the oceans. (The cool air then flows towards the land to complete the cycle.)
Most summer monsoons have a dominant westerly component and a strong tendency to ascend and produce copious amounts of rain (because of the condensation of water vapor in the rising air). The intensity and duration, however, are not uniform from year to year. Winter monsoons, by contrast, have a dominant easterly component and a strong tendency to diverge, subside and cause drought.
Similar rainfall is caused when moist ocean air is lifted upwards by mountains, surface heating, convergence at the surface, divergence aloft, or from storm-produced outflows at the surface. However the lifting occurs, the air cools due to expansion in lower pressure, and this produces condensation.
Global monsoon
Africa (West African and Southeast African)
The monsoon of western Sub-Saharan Africa is the result of the seasonal shifts of the Intertropical Convergence Zone and the great seasonal temperature and humidity differences between the Sahara and the equatorial Atlantic Ocean. The ITCZ migrates northward from the equatorial Atlantic in February, reaches western Africa on or near June 22, then moves back to the south by October. The dry, northeasterly trade winds, and their more extreme form, the harmattan, are interrupted by the northern shift in the ITCZ and resultant southerly, rain-bearing winds during the summer. The semiarid Sahel and Sudan depend upon this pattern for most of their precipitation.
North America
The North American monsoon (NAM) occurs from late June or early July into September, originating over Mexico and spreading into the southwest United States by mid-July. It affects Mexico along the Sierra Madre Occidental as well as Arizona, New Mexico, Nevada, Utah, Colorado, West Texas and California. It pushes as far west as the Peninsular Ranges and Transverse Ranges of Southern California, but rarely reaches the coastal strip (a wall of desert thunderstorms only a half-hour's drive away is a common summer sight from the sunny skies along the coast during the monsoon). The North American monsoon is known to many as the Summer, Southwest, Mexican or Arizona monsoon. It is also sometimes called the Desert monsoon as a large part of the affected area are the Mojave and Sonoran deserts. However, it is controversial whether the North and South American weather patterns with incomplete wind reversal should be counted as true monsoons.
Asia
The Asian monsoons may be classified into a few sub-systems, such as the Indian Subcontinental Monsoon which affects the Indian subcontinent and surrounding regions including Nepal, and the East Asian Monsoon which affects southern China, Taiwan, Korea and parts of Japan.
South Asian monsoon
Southwest monsoon
The southwestern summer monsoons occur from June through September. The Thar Desert and adjoining areas of the northern and central Indian subcontinent heat up considerably during the hot summers, creating a low pressure area over the northern and central Indian subcontinent. To fill this void, moisture-laden winds from the Indian Ocean rush into the subcontinent. These winds, rich in moisture, are drawn towards the Himalayas. The Himalayas act like a high wall, blocking the winds from passing into Central Asia and forcing them to rise. As the clouds rise, their temperature drops and precipitation occurs. Some areas of the subcontinent receive extremely heavy rainfall annually.
The southwest monsoon is generally expected to begin around the beginning of June and fade away by the end of September. The moisture-laden winds on reaching the southernmost point of the Indian Peninsula, due to its topography, become divided into two parts: the Arabian Sea Branch and the Bay of Bengal Branch.
The Arabian Sea Branch of the Southwest Monsoon first hits the Western Ghats of the coastal state of Kerala, making Kerala the first state in India to receive rain from the Southwest Monsoon. This branch of the monsoon moves northwards along the Western Ghats (Konkan and Goa), with precipitation on coastal areas west of the Western Ghats. The eastern areas of the Western Ghats do not receive much rain from this monsoon, as the wind does not cross the Western Ghats.
The Bay of Bengal Branch of the Southwest Monsoon flows over the Bay of Bengal heading towards north-east India and Bengal, picking up more moisture from the Bay of Bengal. The winds arrive at the Eastern Himalayas with large amounts of rain. Mawsynram, situated on the southern slopes of the Khasi Hills in Meghalaya, India, is one of the wettest places on Earth. After arriving at the Eastern Himalayas, the winds turn towards the west, travelling over the Indo-Gangetic Plain at a rate of roughly 1–2 weeks per state and pouring rain all along the way. June 1 is regarded as the date of onset of the monsoon in India, as indicated by the arrival of the monsoon in the southernmost state of Kerala.
The monsoon accounts for nearly 80% of the rainfall in India. Indian agriculture (which accounts for 25% of the GDP and employs 70% of the population) is heavily dependent on the rains, especially for crops such as cotton, rice, oilseeds and coarse grains. A delay of a few days in the arrival of the monsoon can badly affect the economy, as evidenced by the numerous droughts in India in the 1990s.
The monsoon is widely welcomed and appreciated by city-dwellers as well, for it provides relief from the climax of summer heat in June. However, the roads take a battering every year. Often houses and streets are waterlogged and slums are flooded despite drainage systems. A lack of city infrastructure coupled with changing climate patterns causes severe economic loss including damage to property and loss of lives, as evidenced in the 2005 flooding in Mumbai that brought the city to a standstill. Bangladesh and certain regions of India like Assam and West Bengal, also frequently experience heavy floods during this season. Recently, areas in India that used to receive scanty rainfall throughout the year, like the Thar Desert, have surprisingly ended up receiving floods due to the prolonged monsoon season.
The influence of the Southwest Monsoon is felt as far north as China's Xinjiang. It is estimated that about 70% of all precipitation in the central part of the Tian Shan Mountains falls during the three summer months, when the region is under the monsoon influence; about 70% of that is directly of "cyclonic" (i.e., monsoon-driven) origin (as opposed to "local convection"). The effects also extend westwards to the Mediterranean, where, however, the impact of the monsoon is to induce drought via the Rodwell–Hoskins mechanism.
Northeast monsoon
Around September, with the sun retreating south, the northern landmass of the Indian subcontinent begins to cool off rapidly, and air pressure begins to build over northern India. The Indian Ocean and its surrounding atmosphere still hold their heat, causing cold wind to sweep down from the Himalayas and Indo-Gangetic Plain towards the vast spans of the Indian Ocean south of the Deccan peninsula. This is known as the Northeast Monsoon or Retreating Monsoon.
While travelling towards the Indian Ocean, the cold dry wind picks up some moisture from the Bay of Bengal and pours it over peninsular India and parts of Sri Lanka. Cities like Chennai, which get less rain from the Southwest Monsoon, receive rain from this Monsoon. About 50% to 60% of the rain received by the state of Tamil Nadu is from the Northeast Monsoon. In Southern Asia, the northeastern monsoons take place from October to December when the surface high-pressure system is strongest. The jet stream in this region splits into the southern subtropical jet and the polar jet. The subtropical flow directs northeasterly winds to blow across southern Asia, creating dry air streams which produce clear skies over India. Meanwhile, a low pressure system known as a monsoon trough develops over South-East Asia and Australasia and winds are directed toward Australia. In the Philippines, northeast monsoon is called Amihan.
East Asian monsoon
The East Asian monsoon affects large parts of Indochina, the Philippines, China, Taiwan, Korea, Japan, and Siberia. It is characterised by a warm, rainy summer monsoon and a cold, dry winter monsoon. The rain occurs in a concentrated belt that stretches east–west except in East China where it is tilted east-northeast over Korea and Japan. The seasonal rain is known as Meiyu in China, Jangma in Korea, and Bai-u in Japan, with the latter two resembling frontal rain.
The onset of the summer monsoon is marked by a period of premonsoonal rain over South China and Taiwan in early May. From May through August, the summer monsoon shifts through a series of dry and rainy phases as the rain belt moves northward, beginning over Indochina and the South China Sea (May), to the Yangtze River Basin and Japan (June) and finally to northern China and Korea (July). When the monsoon ends in August, the rain belt moves back to southern China.
Australia
The rainy season occurs from September to February and is a major source of energy for the Hadley circulation during boreal winter. It is associated with the development of the Siberian High and the movement of the heating maxima from the Northern Hemisphere to the Southern Hemisphere. North-easterly winds flow down across Southeast Asia and are turned north-westerly or westerly by Borneo's topography towards Australia. This forms a cyclonic circulation vortex over Borneo which, together with descending cold surges of winter air from higher latitudes, causes significant weather phenomena in the region. Examples are the formation of a rare low-latitude tropical storm in 2001, Tropical Storm Vamei, and the devastating flood of Jakarta in 2007.
The onset of the monsoon over Australia tends to follow the heating maxima down Vietnam and the Malay Peninsula (September), to Sumatra, Borneo and the Philippines (October), to Java, Sulawesi (November), Irian Jaya and northern Australia (December, January). However, the monsoon is not a simple response to heating but a more complex interaction of topography, wind and sea, as demonstrated by its abrupt rather than gradual withdrawal from the region. The Australian monsoon (the "Wet") occurs in the southern summer when the monsoon trough develops over Northern Australia. Over three-quarters of annual rainfall in Northern Australia falls during this time.
Europe
The European Monsoon (more commonly known as the return of the westerlies) is the result of a resurgence of westerly winds from the Atlantic, which arrive laden with moisture, bringing wind and rain. These westerly winds are a common phenomenon during the European winter, but they ease as spring approaches in late March and through April and May. The winds pick up again in June, which is why this phenomenon is also referred to as "the return of the westerlies".
The rain usually arrives in two waves, at the beginning of June and again in mid- to late June. The European monsoon is not a monsoon in the traditional sense, in that it does not meet all the requirements to be classified as such. Instead, the return of the westerlies is better regarded as a conveyor belt that delivers a series of low-pressure centres to Western Europe, where they create unsettled weather. These storms generally feature significantly lower-than-average temperatures, fierce rain or hail, thunder, and strong winds.
The return of the westerlies affects Europe's Northern Atlantic coastline, more precisely Ireland, Great Britain, the Benelux countries, western Germany, northern France and parts of Scandinavia.
See also
Tropical monsoon climate
References
Further reading
Chang, C.P., Wang, Z., Hendon, H., 2006, The Asian Winter Monsoon. The Asian Monsoon, Wang, B. (ed.), Praxis, Berlin, pp. 89–127.
International Committee of the Third Workshop on Monsoons. The Global Monsoon System: Research and Forecast.
External links
National Weather Service: The North American Monsoon
East Asian Monsoon Experiment
Arizona Central monsoon page
Basics of the Arizona Monsoon
Climate of Asia
Climate patterns
Flood
Weather hazards
Wind
Winds
Rain
Tropical meteorology
Articles containing video clips | Monsoon | [
"Physics",
"Environmental_science"
] | 4,411 | [
"Physical phenomena",
"Hydrology",
"Weather hazards",
"Weather",
"Flood"
] |
57,656 | https://en.wikipedia.org/wiki/Jacobson%20radical | In mathematics, more specifically ring theory, the Jacobson radical of a ring R is the ideal consisting of those elements in R that annihilate all simple right R-modules. It happens that substituting "left" in place of "right" in the definition yields the same ideal, and so the notion is left–right symmetric. The Jacobson radical of a ring is frequently denoted by J(R) or rad(R); the former notation will be preferred in this article, because it avoids confusion with other radicals of a ring. The Jacobson radical is named after Nathan Jacobson, who was the first to study it for arbitrary rings in 1945.
The Jacobson radical of a ring has numerous internal characterizations, including a few definitions that successfully extend the notion to non-unital rings. The radical of a module extends the definition of the Jacobson radical to include modules. The Jacobson radical plays a prominent role in many ring- and module-theoretic results, such as Nakayama's lemma.
Definitions
There are multiple equivalent definitions and characterizations of the Jacobson radical, but it is useful to consider the definitions based on if the ring is commutative or not.
Commutative case
In the commutative case, the Jacobson radical of a commutative ring R is defined as the intersection of all maximal ideals \mathfrak{m} \subset R. If we denote the set of all maximal ideals in R by \operatorname{MaxSpec}(R), then

J(R) = \bigcap_{\mathfrak{m} \in \operatorname{MaxSpec}(R)} \mathfrak{m}.

This definition can be used for explicit calculations in a number of simple cases, such as for local rings (R, \mathfrak{p}), which have a unique maximal ideal, Artinian rings, and products thereof. See the examples section for explicit computations.
Noncommutative/general case
For a general ring with unity R, the Jacobson radical J(R) is defined as the ideal of all elements r \in R such that rM = 0 whenever M is a simple R-module. That is,

J(R) = \{ r \in R \mid rM = 0 \text{ for all simple } R\text{-modules } M \}.
This is equivalent to the definition in the commutative case for a commutative ring R because the simple modules over a commutative ring are of the form R/\mathfrak{m} for some maximal ideal \mathfrak{m}, and the annihilators of R/\mathfrak{m} in R are precisely the elements of \mathfrak{m}, i.e. \operatorname{Ann}_R(R/\mathfrak{m}) = \mathfrak{m}.
Motivation
Understanding the Jacobson radical lies in a few different cases: namely its applications and the resulting geometric interpretations, and its algebraic interpretations.
Geometric applications
Although Jacobson originally introduced his radical as a technique for building a theory of radicals for arbitrary rings, one of the motivating reasons for why the Jacobson radical is considered in the commutative case is because of its appearance in Nakayama's lemma. This lemma is a technical tool for studying finitely generated modules over commutative rings that has an easy geometric interpretation: If we have a vector bundle E \to X over a topological space X, and pick a point p \in X, then any basis of E|_p can be extended to a basis of sections of E|_U \to U for some neighborhood U \ni p.
Another application is in the case of finitely generated commutative rings of the form R = k[x_1, \ldots, x_n]/I for some base ring k (such as a field, or the ring of integers). In this case the nilradical and the Jacobson radical coincide. This means we could interpret the Jacobson radical as a measure for how far the ideal I defining the ring R is from defining the ring of functions on an algebraic variety, because of the Hilbert Nullstellensatz theorem. This is because algebraic varieties cannot have a ring of functions with infinitesimals: this is a structure that is only considered in scheme theory.
Equivalent characterizations
The Jacobson radical of a ring has various internal and external characterizations. The following equivalences appear in many standard noncommutative algebra texts.
The following are equivalent characterizations of the Jacobson radical in rings with unity (characterizations for rings without unity are given immediately afterward):
J(R) equals the intersection of all maximal right ideals of the ring. The equivalence comes from the fact that for all maximal right ideals M, R/M is a simple right R-module, and that in fact all simple right R-modules are isomorphic to one of this type via the map from R to S given by r \mapsto xr for any generator x of S. It is also true that J(R) equals the intersection of all maximal left ideals within the ring. These characterizations are internal to the ring, since one only needs to find the maximal right ideals of the ring. For example, if a ring is local, and has a unique maximal right ideal, then this unique maximal right ideal is exactly J(R). Maximal ideals are in a sense easier to look for than annihilators of modules. This characterization is deficient, however, because it does not prove useful when working computationally with J(R). The left-right symmetry of these two definitions is remarkable and has various interesting consequences. This symmetry stands in contrast to the lack of symmetry in the socles of R, for it may happen that soc(R_R) is not equal to soc(_R R). If R is a non-commutative ring, J(R) is not necessarily equal to the intersection of all maximal two-sided ideals of R. For instance, if V is a countable direct sum of copies of a field k and R = \operatorname{End}(V) (the ring of endomorphisms of V as a k-module), then J(R) = 0 because R is known to be von Neumann regular, but there is exactly one maximal double-sided ideal in R consisting of endomorphisms with finite-dimensional image.
J(R) equals the sum of all superfluous right ideals (or symmetrically, the sum of all superfluous left ideals) of R. Comparing this with the previous definition, the sum of superfluous right ideals equals the intersection of maximal right ideals. This phenomenon is reflected dually for the right socle of R; soc(R_R) is both the sum of minimal right ideals and the intersection of essential right ideals. In fact, these two relationships hold for the radicals and socles of modules in general.
As defined in the introduction, J(R) equals the intersection of all annihilators of simple right R-modules, however it is also true that it is the intersection of annihilators of simple left modules. An ideal that is the annihilator of a simple module is known as a primitive ideal, and so a reformulation of this states that the Jacobson radical is the intersection of all primitive ideals. This characterization is useful when studying modules over rings. For instance, if U is a right R-module and V is a maximal submodule of U, then U \cdot J(R) is contained in V, where U \cdot J(R) denotes all products of elements of J(R) (the "scalars") with elements in U, on the right. This follows from the fact that the quotient module U/V is simple and hence annihilated by J(R).
J(R) is the unique right ideal of R maximal with the property that every element is right quasiregular (or equivalently left quasiregular). This characterization of the Jacobson radical is useful both computationally and in aiding intuition. Furthermore, this characterization is useful in studying modules over a ring. Nakayama's lemma is perhaps the most well-known instance of this. Although every element of J(R) is necessarily quasiregular, not every quasiregular element is necessarily a member of J(R).
While not every quasiregular element is in J(R), it can be shown that y is in J(R) if and only if xy is left quasiregular for all x in R.
J(R) is the set of elements x in R such that every element of 1 + RxR is a unit: J(R) = \{ x \in R \mid 1 + RxR \subseteq R^\times \}. In fact, y \in R is in the Jacobson radical if and only if 1 + xy is invertible for any x \in R, if and only if 1 + yx is invertible for any x \in R. This means xy and yx behave similarly to a nilpotent element z with z^{n+1} = 0 and (1 + z)^{-1} = 1 - z + z^2 - \cdots + (-1)^n z^n.
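For a finite ring this last characterization can be checked exhaustively. The Python sketch below is a brute-force illustration (not an efficient algorithm) for the commutative ring Z/nZ, where x lies in the radical exactly when 1 + xy is a unit for every y:

```python
# Brute-force computation of J(Z/nZ) via the unit characterization:
# x is in the radical iff 1 + x*y is a unit (i.e. coprime to n) for all y.
from math import gcd

def jacobson_radical_zmod(n: int) -> list[int]:
    is_unit = lambda a: gcd(a % n, n) == 1
    return [x for x in range(n)
            if all(is_unit(1 + x * y) for y in range(n))]

print(jacobson_radical_zmod(12))  # [0, 6] -- the ideal 6Z/12Z
```

This agrees with the explicit computation of J(Z/12Z) = 6Z/12Z in the examples below.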
For rings without unity it is possible to have R = J(R); however, the equation J(R/J(R)) = \{0\} still holds. The following are equivalent characterizations of J(R) for rings without unity:
The notion of left quasiregularity can be generalized in the following way. Call an element a in R left generalized quasiregular if there exists c in R such that c + a - ca = 0. Then J(R) consists of every element a for which ra is left generalized quasiregular for all r in R. It can be checked that this definition coincides with the previous quasiregular definition for rings with unity.
For a ring without unity, the definition of a left simple module M is amended by adding the condition that R \cdot M \neq 0. With this understanding, J(R) may be defined as the intersection of all annihilators of simple left R-modules, or just R if there are no simple left R-modules. Rings without unity with no simple modules do exist, in which case R = J(R), and the ring is called a radical ring. By using the generalized quasiregular characterization of the radical, it is clear that if one finds a ring with J(R) nonzero, then J(R) is a radical ring when considered as a ring without unity.
Examples
Commutative examples
For the ring of integers Z its Jacobson radical is the zero ideal, so J(Z) = (0), because it is given by the intersection of every ideal generated by a prime number (p). Since (p_1) \cap (p_2) = (p_1 \cdot p_2), and we are taking an infinite intersection with no common elements besides 0 between all maximal ideals, we have the computation.
For a local ring (R, \mathfrak{p}) the Jacobson radical is simply J(R) = \mathfrak{p}. This is an important case because of its use in applying Nakayama's lemma. In particular, it implies that if we have an algebraic vector bundle E \to X over a scheme or algebraic variety X, and we fix a basis of E|_p for some point p \in X, then this basis lifts to a set of generators for all sections \Gamma(U, E) for some neighborhood U of p.
If k is a field and R = k[[x_1, \ldots, x_n]] is a ring of formal power series, then J(R) consists of those power series whose constant term is zero, i.e. the power series in the ideal (x_1, \ldots, x_n).
In the case of an Artinian ring, such as \mathbb{C}[t]/(t^2), the Jacobson radical is (t).
The previous example could be extended to the ring R = \mathbb{C}[t_2, t_3, \ldots]/(t_2^2, t_3^3, \ldots), giving J(R) = (t_2, t_3, \ldots).
The Jacobson radical of the ring Z/12Z is 6Z/12Z, which is the intersection of the maximal ideals 2Z/12Z and 3Z/12Z.
Consider the ring , where the second is the localization of by the prime ideal . Then, the Jacobson radical is trivial because the maximal ideals are generated by an element of the form for .
Noncommutative examples
Rings for which J(R) is \{0\} are called semiprimitive rings, or sometimes "Jacobson semisimple rings". The Jacobson radical of any field, any von Neumann regular ring and any left or right primitive ring is \{0\}. The Jacobson radical of the integers is \{0\}.
If K is a field and R is the ring of all upper triangular n-by-n matrices with entries in K, then J(R) consists of all upper triangular matrices with zeros on the main diagonal (a brute-force check of the smallest case appears after this list).
Start with a finite, acyclic quiver Γ and a field K and consider the quiver algebra KΓ (as described in the article Quiver). The Jacobson radical of this ring is generated by all the paths in Γ of length ≥ 1.
The Jacobson radical of a C*-algebra is \{0\}. This follows from the Gelfand–Naimark theorem and the fact that for a C*-algebra, a topologically irreducible *-representation on a Hilbert space is algebraically irreducible, so that its kernel is a primitive ideal in the purely algebraic sense (see Spectrum of a C*-algebra).
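The upper triangular matrix example above can be verified by brute force in the smallest interesting case. The Python sketch below is purely illustrative: it encodes each 2×2 upper triangular matrix over GF(2) as a triple (a, b, d) standing for [[a, b], [0, d]], and tests membership in J(R) with the unit characterization 1 + RxR ⊆ R^× given earlier:

```python
from itertools import product

MOD = 2  # work over the field GF(2)

def mul(x, y):
    (a1, b1, d1), (a2, b2, d2) = x, y
    # [[a1,b1],[0,d1]] * [[a2,b2],[0,d2]] = [[a1*a2, a1*b2 + b1*d2], [0, d1*d2]]
    return ((a1 * a2) % MOD, (a1 * b2 + b1 * d2) % MOD, (d1 * d2) % MOD)

def add(x, y):
    return tuple((u + v) % MOD for u, v in zip(x, y))

ONE = (1, 0, 1)
RING = list(product(range(MOD), repeat=3))
UNITS = {x for x in RING if any(mul(x, y) == ONE and mul(y, x) == ONE for y in RING)}

def in_radical(x):
    # x is in J(R) iff every element of 1 + R*x*R is a unit
    return all(add(ONE, mul(mul(r, x), s)) in UNITS for r in RING for s in RING)

print([x for x in RING if in_radical(x)])
# [(0, 0, 0), (0, 1, 0)] -- exactly the strictly upper triangular matrices
```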
Properties
If R is unital and is not the trivial ring \{0\}, the Jacobson radical is always distinct from R since rings with unity always have maximal right ideals. However, some important theorems and conjectures in ring theory consider the case when J(R) = R – "If R is a nil ring (that is, each of its elements is nilpotent), is the polynomial ring R[x] equal to its Jacobson radical?" is equivalent to the open Köthe conjecture.
For any ideal I contained in J(R), J(R/I) = J(R)/I.
In particular, the Jacobson radical of the ring R/J(R) is zero. Rings with zero Jacobson radical are called semiprimitive rings.
A ring is semisimple if and only if it is Artinian and its Jacobson radical is zero.
If f : R \to S is a surjective ring homomorphism, then f(J(R)) \subseteq J(S).
If R is a ring with unity and M is a finitely generated left R-module with J(R)M = M, then M = 0 (Nakayama's lemma).
J(R) contains all central nilpotent elements, but contains no idempotent elements except for 0.
J(R) contains every nil ideal of R. If R is left or right Artinian, then J(R) is a nilpotent ideal. This can actually be made stronger: if

0 = T_0 \subseteq T_1 \subseteq \cdots \subseteq T_k = R

is a composition series for the right R-module R (such a series is sure to exist if R is right Artinian, and there is a similar left composition series if R is left Artinian), then (J(R))^k = 0. Note, however, that in general the Jacobson radical need not consist of only the nilpotent elements of the ring.
If R is commutative and finitely generated as an algebra over either a field or Z, then J(R) is equal to the nilradical of R.
The Jacobson radical of a (unital) ring is its largest superfluous right (equivalently, left) ideal.
See also
Frattini subgroup
Nilradical of a ring
Radical of a module
Radical of an ideal
Notes
Citations
References
Bourbaki, N. Éléments de mathématique.
External links
Intuitive Example of a Jacobson Radical
Ideals (ring theory)
Ring theory | Jacobson radical | [
"Mathematics"
] | 2,899 | [
"Fields of abstract algebra",
"Ring theory"
] |
57,684 | https://en.wikipedia.org/wiki/Project%20Pluto | Project Pluto was a United States government program to develop nuclear-powered ramjet engines for use in cruise missiles. Two experimental engines were tested at the Nevada Test Site (NTS) in 1961 and 1964 respectively.
On 1 January 1957, the U.S. Air Force and the U.S. Atomic Energy Commission selected the Lawrence Radiation Laboratory to study the feasibility of applying heat from a nuclear reactor to power a ramjet engine for a Supersonic Low Altitude Missile. This would have many advantages over other contemporary nuclear weapons delivery systems: operating at Mach 3 and flying at very low altitude, it would be invulnerable to interception by contemporary air defenses, carry more nuclear warheads with greater nuclear weapon yield, deliver them with greater accuracy than was possible with intercontinental ballistic missiles (ICBMs) at the time and, unlike them, could be recalled.
This research became known as Project Pluto, and was directed by Theodore Charles (Ted) Merkle, leader of the laboratory's R Division. Originally carried out at Livermore, California, testing was moved to new facilities constructed for $1.2 million at NTS Site 401, also known as Jackass Flats. The test reactors were moved about on a railroad car that could be controlled remotely. The need to maintain supersonic speed at low altitude and in all kinds of weather meant that the reactor had to survive high temperatures and intense radiation. Ceramic nuclear fuel elements were used that contained highly enriched uranium oxide fuel and beryllium oxide neutron moderator.
After a series of preliminary tests to verify the integrity of the components under conditions of strain and vibration, Tory II-A, the world's first nuclear ramjet engine, was run at full power (46 MW) on 14 May 1961. A larger, fully-functional ramjet engine was then developed called Tory II-C. This was run at full power (461 MW) on 20 May 1964, thereby demonstrating the feasibility of a nuclear-powered ramjet engine. Despite these and other successful tests, ICBM technology developed quicker than expected, and this reduced the need for cruise missiles. By the early 1960s, there was greater sensitivity about the dangers of radioactive emissions in the atmosphere, and devising an appropriate test plan for the necessary flight tests was difficult. On 1 July 1964, seven years and six months after it was started, Project Pluto was canceled.
Origins
During the 1950s, the United States Air Force (USAF) considered the use of nuclear powered aircraft and missiles as part of its Aircraft Nuclear Propulsion project, which was coordinated by the Aircraft Nuclear Propulsion Office. Research into missiles was coordinated by the Missile Projects Branch. The concept of using a nuclear reactor to provide a heat source for a ramjet was explored by Frank E. Rom and Eldon W. Sams at the National Advisory Committee for Aeronautics Lewis Research Center in 1954 and 1955.
The principle behind the nuclear ramjet was relatively simple: motion of the vehicle pushed air in through the front of the vehicle (the ram effect). If a nuclear reactor heated the air, the hot air expanded at high speed out through a nozzle at the back, providing thrust. The concept appeared feasible, so in October 1956, the USAF issued a system requirement, SR 149, for the development of a winged supersonic missile.
At the time, the United States Atomic Energy Commission (AEC) was conducting studies of the use of a nuclear rocket as an upper stage of an intercontinental ballistic missile (ICBM) on behalf of the USAF. The AEC farmed this work out to its two rival atomic weapons laboratories, the Los Alamos Scientific Laboratory (LASL) in Los Alamos, New Mexico, and the Lawrence Radiation Laboratory at Livermore, California. By late 1956 improvements in nuclear weapon design had reduced the need for a nuclear upper stage, and the development effort was concentrated at LASL, where it became known as Project Rover.
On 1 January 1957, the USAF and the AEC selected the Livermore Laboratory to study the design of a nuclear reactor to power ramjet engines. This research became known as Project Pluto. It was directed by Theodore C. (Ted) Merkle, leader of the Laboratory's R Division.
Development
The proposed use for nuclear-powered ramjets would be to power a cruise missile, called SLAM, for Supersonic Low Altitude Missile. It would have many advantages over other nuclear weapons delivery systems. It was estimated that the reactor would be light enough to permit a substantial warhead payload. Operating at Mach 3 and flying at very low altitude, it would be invulnerable to interception by contemporary air defenses. It could carry more nuclear warheads than the sixteen aboard a Polaris ballistic missile submarine; they could be larger, with greater nuclear weapon yields, and delivered with greater accuracy. Moreover, unlike an ICBM, it could be recalled.
It was estimated that the unit cost of each missile would be less than $5 million, making them much cheaper than a Boeing B-52 Stratofortress bomber. Operating costs would also be low, as keeping them in readiness would be cheaper than a submarine or bomber, and comparable with a missile silo-based ICBM. Range would not be unlimited, but would be determined by the fuel load. Merkle calculated that a MW-day of energy would burn about one gram of highly enriched uranium. A 490 MW reactor with 50 kilograms of uranium would therefore burn 1 percent of its fuel each day. Assuming that an accumulation of neutron poisons could be avoided, the missile could fly for several days. The success of the project depended upon a series of technological advances in metallurgy and materials science. Pneumatic motors necessary to control the reactor in flight had to operate while red-hot and in the presence of intense ionizing radiation. The need to maintain supersonic speed at low altitude and in all kinds of weather meant that the missile would have to fly through much denser air. In turn, this meant that it would encounter much greater air resistance and have to generate more power to overcome it. The reactor, code-named "Tory", would therefore have to survive high temperatures that would melt the metals used in most jet and rocket engines.
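The burn-rate arithmetic is easy to verify. Below is a minimal Python check restating the figures quoted above (the 1 g per MW-day figure is Merkle's estimate, not a measured value):

```python
power_mw = 490               # reactor power in megawatts
fuel_g = 50 * 1000           # 50 kg of highly enriched uranium, in grams
burn_g_per_day = power_mw    # about 1 g fissioned per MW-day
print(f"fuel burned per day: {burn_g_per_day / fuel_g:.2%}")  # ~0.98%, i.e. about 1 percent
```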
The solution arrived at was to use ceramic fuel elements. The core of the reactor would be made of beryllium oxide (BeO), the only available neutron moderator material that could withstand the high temperatures required. Over 80 percent of the fueled tubes were of a standard length; the rest varied in length so as to achieve the correct column length and arrangement. The tubes consisted of a fine-grained BeO matrix containing a solid solution of urania (UO2), zirconia (ZrO2) and yttria (Y2O3). The Tory II-A reactor used a uranium-beryllia mixture, but by the time Tory II-C was built zirconia and yttria were added in a 1.06:1:1 ratio of urania:zirconia:yttria. The zirconia and yttria stabilized the urania against phase transition to triuranium octoxide (U3O8) at high temperature. The fuel particles of the urania-zirconia-yttria mixture (known as "horseradish") mostly fell within a narrow range of sizes, although some were smaller or larger. The uranium was in the form of oralloy: uranium enriched to 93.2 percent uranium-235.
The tubes had a hexagonal cross-section with a 7.5-millimeter diameter hole in the center. They were closely packed to form a honeycomb pattern. The metal tie rods were made of René 41 and Hastelloy R235 and were cooled so they did not overheat. The ceramic tubes surrounding the tie rods (known as guard tubes) were unfueled and had smaller diameter holes. The core was surrounded by neutron reflectors on all sides. The forward and aft reflectors were both composed of BeO tubes. The side reflector consisted of BeO tubes around which were nickel shims. The reactor was controlled through the movement of hafnium control rods that moved axially within the tie rods. Twelve of the rods, known as shim rods, were located some distance from the central axis of the core, while two were located closer to the reflector; one was a vernier rod and the other a safety rod. Normally the movement of the rods was restricted, but in the event of a scram they could be moved fully in 1.5 seconds. The shim rods were moved by four actuators, each of which handled three shim rods.
The contract to manufacture the fuel elements was awarded to the Coors Porcelain Company. The process of making horseradish involved mixing sinterable BeO powder with oralloy uranyl nitrate, yttrium nitrate and zirconium nitrate to form a slurry which was coprecipitated by adding ammonium nitrate. Because the process involved oralloy, criticality safety required a long, narrow geometry for the mix tanks. The mixture was filtered, dried and calcined. It was then blended with a binding mixture containing polyvinyl alcohol, methyl cellulose and water and extruded through a die to form the tubes. The tubes were dried, the binder was burned out by heating, and they were fired in hydrogen to densify them. The maximum permissible effect on reactivity due to impurities in the tubes was 2 to 3 percent. In practice it was only 0.5 percent.
Test facilities
Tests were conducted at new facilities constructed for $1.2 million on Jackass Flats at the AEC's Nevada Test Site (NTS), known as Site 401. The facilities here were intended for use by Project Rover, but while Rover's reactor was still under development, they were used for Project Pluto. The complex included roads, a critical-assembly building, a control building, assembly and shop buildings, and utilities.
An aggregate mine was purchased to supply the concrete for the thick walls of the disassembly building, Building 2201. Building 2201 was designed to allow radioactive components to be adjusted, disassembled or replaced remotely. Operations in the main disassembly bay could be viewed through lead glass viewing windows. "Hot" cells adjacent to the disassembly bay were used to monitor the control rod actuators. Vaults within each cell were equipped with remote manipulators.
All controls were located in the central control room, which was air conditioned with a positive pressure so air always flowed towards the disassembly bay and the hot cells, and the used air from them was passed through filters. The main disassembly bay and the hot cells were accessible through openings that were normally covered with lead plates. There were showers and a radiation safety room for workers. Building 2201 also contained a maintenance shop, darkroom, offices, and equipment storage rooms. Scientists monitored the tests remotely via a television hook up from a tin shed located at a safe distance that had a fallout shelter stocked with two weeks' supply of food and water in the event of a major catastrophe.
Oil well casing was used to store the huge quantity of compressed air needed to simulate ramjet flight conditions for Pluto. Three giant compressors, borrowed from the Naval Submarine Base New London in Groton, Connecticut, could replenish the farm in five days. A five-minute, full-power test involved vast amounts of air being forced over some 14 million heated steel balls held in four steel tanks.
Because the test reactors were highly radioactive once they were started, they were transported to and from the test site on railroad cars. The "Jackass and Western Railroad", as it was light-heartedly described, was said to be the world's shortest and slowest railroad. There were two locomotives, the remotely controlled electric L-1, and the diesel/electric L-2, which was manually controlled but had radiation shielding around the cab. The former was normally used; the latter was as a backup. The Cold Assembly Bay (Room 101) in Building 2201 was used for storage and assembly of components of the reactor test vehicle. It also contained a maintenance service pit and battery charger for the locomotives.
Tory II-A
In 1957, the Livermore Laboratory began working on a prototype reactor called Tory II-A to test the proposed design. It was initially intended to build two Tory II-A test reactors, which were designated IIA-1 and IIA-2; ultimately only one was built. Its purpose was to test the design under conditions similar to those in a ramjet engine. To save time and money, and to reduce complexity, Tory II-A had a diameter about a third of that required for the engine. To allow it to still reach criticality with reduced fuel, the core was surrounded by a thick nuclear graphite neutron reflector.
The Tory II-A design process was completed by early 1960. During the summer and early fall of that year, the core was assembled at Livermore inside a special fixture in a shielded containment building. It reached criticality on 7 October with the control vanes rotated 90° from the full shutdown position. A test was then carried out with the cooling passages of the core and neutron reflector filled with water. Instead of the predicted increase in reactivity, there was a drop, and the reactor could not go critical at all. The water was replaced with heavy water, but it was barely able to reach criticality. It was therefore concluded that additional fuel would be required to attain the required margin for error when more components were installed.
The reactor was shipped to the Nevada Test Site for a series of dry runs and zero- or low-power tests. Another layer of fuel elements was added. The reactor was mounted on the test vehicle and, with heavy water for coolant, reached criticality during a test run on 9 December, with the control vanes at 65°. It was estimated that without the heavy water, 71° would have been required. Boron rods were then inserted into the six central tie tubes. This lowered the reactivity of the core, and the vanes had to be turned to 132° before criticality was achieved. Oralloy foils were placed in the core tubes, and the reactor was run at 150 W for ten minutes.
The next set of tests involved blowing air through the reactor while it was subcritical to test the integrity of the components under conditions of strain and vibration. On 17 and 18 December, air was blown through for 30 seconds at a time. During what was intended to be the final qualification test on 11 January 1961, with a high air flow rate and core temperature, the clamp holding the exit nozzle to the air duct on the test vehicle broke, and the nozzle flew through the air. Following this mishap, it was decided to conduct a test of radio-controlled disconnection and removal of the reactor from the test vehicle. During this test the electrically controlled coupler between the locomotive and the test vehicle suddenly opened, and the test vehicle careered down the track and violently struck the concrete face of the test pad bunker at the end. The test vehicle was extensively damaged, and had to be stripped down and rebuilt. All the reactor components had to be checked for cracks.
With repairs completed, the Tory II-A was returned to the test pad for another series of tests. It was found that without cooling water, the reactor reached criticality with the control vanes at 75°; with heavy water for coolant it was reached with them at 67°. With hot air flowing through the reactor, the core temperature was raised in successive stages. It was then operated at 10 kW for 60 seconds. A final test was conducted on 3 May with no incidents.
Tory II-A was operated at its design rating on 14 May, when it reached a power output of 46 MW. Three high power test runs were conducted on 28 September, 5 October and 6 October, reaching power levels of 144, 166 and 162 MW respectively. With the tests conducted successfully, the reactor was disassembled between December 1961 and September 1962.
Tory II-C
Tory II-A tested the reactor design and the integrity of the fuel elements under a simulation of operational conditions. Livermore now produced a second reactor, Tory II-C, which would be a fully functional engine for a ramjet missile. Issues that had been ignored in the design of Tory II-A had to be resolved in that of Tory II-C. The new design was complete by August 1962. The Tory II-C reactor was cylindrical in shape. It contained about 293,000 fueled and 16,000 unfueled beryllium oxide tubes, which occupied 55 percent of its volume. The fuel loading varied through the reactor to achieve the right power profile.
The checkout of the test facilities for Tory II-C testing commenced on 17 November 1962. The facilities were incomplete when this testing began, so many of the tests were in support of the construction program. These tests fell into four categories: testing of the air supply system; testing of the other facilities components; qualification of the test vehicle; and operator training. The facilities checkout ended on 5 March 1964, by which time 82 tests had been carried out.
Before attempting a high power reactor test, five major tests were performed. The first test, conducted on 23 March, was a subcritical test of the twelve hand-inserted and six electrically-activated auxiliary shutdown rods. The purpose of the test was to verify that the operational rods could be removed safely so long as the auxiliary rods were in place. This would mean that staff would not have to be removed from the test bunker area during checkout. The test was conducted as if it were a critical one, with all personnel evacuated from the test area and the test managed remotely from the control room. The test verified the predictions made at Livermore; the operational rods could be withdrawn safely. A cold critical test was then conducted the following day to verify that the instrumentation was working correctly.
Hot zero-power tests were conducted on 9 and 23 April. These involved testing the core under air flow conditions approaching those of a full power run. The test plan for the first test called for running heated air through the core for 60 seconds. The test was aborted and the shim rods scrammed (shut down the reactor) when vibration exceeded a pre-set level. It turned out that the vibration of the core was not the problem: it was the transducers used to measure vibration that were not operating properly. Loose connections were repaired, and a second test scheduled. This time it was planned to operate at successively higher flow rates. This was done, and there was no vibration. The test also qualified the thermocouples used to monitor the core's temperature.
The next step was to conduct a low power test with heated air on 7 May. As the air flow was reaching its maximum, shim actuator B2 became noisy and was placed on hold. Then, soon after the maximum was reached, actuator A1 detected a loss of air pressure and scrammed. Actuators A2 and B1 began moving to compensate for the loss of reactivity. A manual scram was then ordered, although in hindsight this was unnecessary. The problem with B2 was traced to a faulty wire, and the problem with A1 to a faulty pressure switch. Since there were no outstanding problems, the decision was taken to proceed with an intermediate power test on 12 May. This test aimed to simulate the conditions of a Mach 2.8 flight at altitude. The reactor was taken to critical and the power increased to 750 kW. Air flow was then increased. The test was concluded after an hour and 45 minutes.
The stage was now set for a full power test on 20 May 1964. This would simulate a Mach 2.8 flight on a hot day at sea level. The reactor was started and power raised to 700 kW. Air was introduced and its temperature then raised. The reactor power was then increased to around 76 MW. All systems were functioning normally, so the airflow was increased and power increased until the core reached its design temperature, at which point the power output was around 461 MW. The reactor was run for five minutes, after which a manual scram was initiated, and the airflow reduced for two minutes. The whole test took about an hour. Inspection of the reactor afterwards was done without disassembly. No blockages or anomalies were detected. The control rods were all in place, and there was no evidence of damage or corrosion.
Termination
Despite the successful tests, the Department of Defense, the sponsor of the Pluto project, had second thoughts. ICBM technology had developed more quickly than expected, reducing the need for such highly capable cruise missiles. The ICBM had several advantages over the SLAM: it required less ground support and maintenance, and could be launched in minutes instead of several hours, and so was less vulnerable to a nuclear first strike. An ICBM also traveled to its target faster and was less vulnerable to interception by Soviet air defenses. The main advantage of the SLAM was its ability to carry a larger payload, but the value of this was diminished by improvements in nuclear weapon design that made them smaller and lighter, and the subsequent development of multiple warhead capability in ICBMs.
The other major problem with the SLAM concept was the environmental damage caused by radioactive emissions during flight, and the disposal of the reactor at the end of the mission. Merkle estimated that about 100 grams of fission products would be produced, of which he expected a few grams to be released and dispersed over a wide area. Atmospheric nuclear testing was still ongoing in the early 1960s, so the radioactive emissions were not considered to be a major problem by comparison. Although small compared to that produced by a nuclear explosion, it was a problem for testing. It was anticipated that numerous test flights would be required. The noise level was estimated to be a deafening 150 decibels. There was also the possibility of the missile going out of control.
The idea of testing it over Nevada was quickly discarded. It was proposed to conduct test flights in the vicinity of Wake Island, flying a figure-eight course. The reactor would then be dumped into the Pacific Ocean where the water was deep. By the early 1960s there was increasing public awareness of the undesirable environmental impacts of radioactive contamination of the atmosphere and the ocean, and the radioactive emissions from the missile were considered unacceptable wherever the tests were conducted.
The AEC requested $8 million in fiscal year 1965 for continued tests of Tory II-C and the development of Tory III, an improved version. In April 1964, the Joint Committee on Atomic Energy recommended that $1.5 million be cut from this request. This provided continued funding for Tory II-C, but not for the development of Tory III. The Department of Defense's Director of Research and Engineering, Harold Brown, favored the continuation of Project Pluto at a low level of funding to progress the technology. This was rejected by the House Appropriations Committee; the technology had been demonstrated by the successful Tory II-C tests, and if there was no longer a military requirement for it, there was no reason to continue funding. It therefore cut another $5.5 million from the funding request, leaving only $1 million for "mothballing" the project. This led to the decision by the Department of Defense and the Department of State to terminate the project.
On 1 July 1964, seven years and six months after it was started, Project Pluto was canceled. Merkle hosted a celebratory dinner at a nearby country club for project participants where SLAM tie tacks and bottles of "Pluto" mineral water were given away as souvenirs. At its peak, Project Pluto had employed around 350 people at Livermore and 100 at Site 401, and the total amount spent had been about $260 million.
Cleanup
The Tory II-C reactor was not disassembled after the high power test, and remained at Jackass Flats until 1976, when it was disassembled at the Engine Maintenance, Assembly, and Disassembly (E-MAD) building there. In 1971 and 1972, Building 2201 was used by the Fuel Repackaging Operations Project. Fuel elements from the Tory II reactors were removed from the hot cells in Building 2201 and taken to Area 6, whence they were shipped to the Idaho National Laboratory. Building 2201 was used in the 1970s and 1980s to house the Hydrogen Content Test Facility. Starting in 1986, the Sandia National Laboratory used it for a series of classified nuclear weapons related projects, and in 1998 an unidentified organization used it for a classified project. Building 2201 was cleaned and decontaminated between 2007 and 2009 to make it safe for future demolition. In September 2013, it was reported that it had been demolished.
Notes
References
External links
Directory of U.S. Military Rockets and Missiles
Vought SLAM pages
Abandoned military projects of the United States
Pluto
Pluto
Military nuclear reactors
Nevada Test Site
Nuclear propulsion
Ramjet engines
Nuclear research reactors | Project Pluto | [
"Engineering"
] | 5,267 | [
"nan"
] |
57,717 | https://en.wikipedia.org/wiki/Direct%20memory%20access | Direct memory access (DMA) is a feature of computer systems that allows certain hardware subsystems to access main system memory independently of the central processing unit (CPU).
Without DMA, when the CPU is using programmed input/output, it is typically fully occupied for the entire duration of the read or write operation, and is thus unavailable to perform other work. With DMA, the CPU first initiates the transfer, then it does other operations while the transfer is in progress, and it finally receives an interrupt from the DMA controller (DMAC) when the operation is done. This feature is useful at any time that the CPU cannot keep up with the rate of data transfer, or when the CPU needs to perform work while waiting for a relatively slow I/O data transfer.
Many hardware systems use DMA, including disk drive controllers, graphics cards, network cards and sound cards. DMA is also used for intra-chip data transfer in some multi-core processors. Computers that have DMA channels can transfer data to and from devices with much less CPU overhead than computers without DMA channels. Similarly, processing circuitry inside a multi-core processor can transfer data to and from its local memory without occupying its processor time, allowing computation and data transfer to proceed in parallel.
DMA can also be used for "memory to memory" copying or moving of data within memory. DMA can offload expensive memory operations, such as large copies or scatter-gather operations, from the CPU to a dedicated DMA engine. An implementation example is the I/O Acceleration Technology. DMA is of interest in network-on-chip and in-memory computing architectures.
Principles
Third-party
Standard DMA, also called third-party DMA, uses a DMA controller. A DMA controller can generate memory addresses and initiate memory read or write cycles. It contains several hardware registers that can be written and read by the CPU. These include a memory address register, a byte count register, and one or more control registers. Depending on what features the DMA controller provides, these control registers might specify some combination of the source, the destination, the direction of the transfer (reading from the I/O device or writing to the I/O device), the size of the transfer unit, and/or the number of bytes to transfer in one burst.
To carry out an input, output or memory-to-memory operation, the host processor initializes the DMA controller with a count of the number of words to transfer, and the memory address to use. The CPU then commands the peripheral device to initiate a data transfer. The DMA controller then provides addresses and read/write control lines to the system memory. Each time a byte of data is ready to be transferred between the peripheral device and memory, the DMA controller increments its internal address register until the full block of data is transferred.
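The sequence described above (the CPU programs the address and count registers, starts the transfer, and the controller then walks through memory on its own) can be sketched as a toy model. The Python below is illustrative only; the register names and the byte-at-a-time loop are simplifications assumed for the example, not any particular controller's programming interface.

```python
class DmaController:
    """Toy model of a third-party DMA controller moving device data into memory."""

    def __init__(self, memory: bytearray):
        self.memory = memory
        self.address = 0  # memory address register
        self.count = 0    # byte count register

    def program(self, address: int, count: int) -> None:
        # Step 1: the host CPU initializes the controller's registers.
        self.address, self.count = address, count

    def run(self, device_read) -> None:
        # Step 2: the controller supplies addresses itself, incrementing its
        # internal address register until the full block has been transferred.
        while self.count > 0:
            self.memory[self.address] = device_read()
            self.address += 1
            self.count -= 1
        # Step 3: signal completion (a real controller would raise an interrupt).
        print("transfer complete: interrupt raised")

memory = bytearray(16)
dma = DmaController(memory)
dma.program(address=4, count=3)
dma.run(device_read=iter(b"abc").__next__)  # the device supplies one byte at a time
print(memory)  # b'\x00\x00\x00\x00abc\x00...'
```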
Some examples of buses using third-party DMA are PATA, USB (before USB4), and SATA; however, their host controllers use bus mastering.
Bus mastering
In a bus mastering system, also known as a first-party DMA system, the CPU and peripherals can each be granted control of the memory bus. Where a peripheral can become a bus master, it can directly write to system memory without the involvement of the CPU, providing memory address and control signals as required. Some measures must be provided to put the processor into a hold condition so that bus contention does not occur.
Modes of operation
Burst mode
In burst mode, an entire block of data is transferred in one contiguous sequence. Once the DMA controller is granted access to the system bus by the CPU, it transfers all bytes of data in the data block before releasing control of the system buses back to the CPU; this renders the CPU inactive for relatively long periods of time. The mode is also called "Block Transfer Mode".
Cycle stealing mode
The cycle stealing mode is used in systems in which the CPU should not be disabled for the length of time needed for burst transfer modes. In the cycle stealing mode, the DMA controller obtains access to the system bus the same way as in burst mode, using BR (Bus Request) and BG (Bus Grant) signals, which are the two signals controlling the interface between the CPU and the DMA controller. However, in cycle stealing mode, after one unit of data transfer, the control of the system bus is deasserted to the CPU via BG. It is then continually requested again via BR, transferring one unit of data per request, until the entire block of data has been transferred. By continually obtaining and releasing the control of the system bus, the DMA controller essentially interleaves instruction and data transfers. The CPU processes an instruction, then the DMA controller transfers one data value, and so on. Data is not transferred as quickly, but the CPU is not idled for as long as in burst mode. Cycle stealing mode is useful for controllers that monitor data in real time.
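A toy timeline makes the difference between the two modes visible. The Python sketch below is a schematic illustration (it does not model real bus arbitration timing): burst mode holds the bus for the whole block, while cycle stealing alternates one DMA word with one CPU instruction.

```python
def timeline(mode: str, words: int = 3, cpu_ops: int = 3) -> str:
    """Return a toy bus-ownership timeline: 'D' = DMA word, 'C' = CPU cycle."""
    if mode == "burst":
        return "D" * words + "C" * cpu_ops  # whole block first, CPU stalled
    if mode == "cycle-stealing":
        slots = []
        for i in range(max(words, cpu_ops)):
            if i < cpu_ops:
                slots.append("C")           # CPU executes an instruction
            if i < words:
                slots.append("D")           # DMA steals one bus cycle
        return "".join(slots)
    raise ValueError(mode)

print("burst:         ", timeline("burst"))           # DDDCCC
print("cycle-stealing:", timeline("cycle-stealing"))  # CDCDCD
```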
Transparent mode
Transparent mode takes the most time to transfer a block of data, yet it is also the most efficient mode in terms of overall system performance. In transparent mode, the DMA controller transfers data only when the CPU is performing operations that do not use the system buses. The primary advantage of transparent mode is that the CPU never stops executing its programs and the DMA transfer is free in terms of time, while the disadvantage is that the hardware needs to determine when the CPU is not using the system buses, which can be complex. This is also called "Hidden DMA data transfer mode".
Cache coherency
DMA can lead to cache coherency problems. Imagine a CPU equipped with a cache and an external memory that can be accessed directly by devices using DMA. When the CPU accesses location X in the memory, the current value will be stored in the cache. Subsequent operations on X will update the cached copy of X, but not the external memory version of X, assuming a write-back cache. If the cache is not flushed to the memory before the next time a device tries to access X, the device will receive a stale value of X.
Similarly, if the cached copy of X is not invalidated when a device writes a new value to the memory, then the CPU will operate on a stale value of X.
This issue can be addressed in one of two ways in system design: Cache-coherent systems implement a method in hardware, called bus snooping, whereby external writes are signaled to the cache controller which then performs a cache invalidation for DMA writes or cache flush for DMA reads. Non-coherent systems leave this to software, where the OS must then ensure that the cache lines are flushed before an outgoing DMA transfer is started and invalidated before a memory range affected by an incoming DMA transfer is accessed. The OS must make sure that the memory range is not accessed by any running threads in the meantime. The latter approach introduces some overhead to the DMA operation, as most hardware requires a loop to invalidate each cache line individually.
Hybrids also exist, where the secondary L2 cache is coherent while the L1 cache (typically on-CPU) is managed by software.
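The stale-read problem and the software remedy can be mimicked with a toy write-back cache. The Python sketch below is illustrative only (real caches and DMA engines do not operate at this level), but it shows why a non-coherent system must invalidate before reading DMA-written memory.

```python
class ToyWriteBackCache:
    """Minimal model of a CPU-side cache in front of DMA-visible memory."""

    def __init__(self, memory: dict):
        self.memory = memory  # backing store that a DMA device writes directly
        self.lines = {}       # cached copies, keyed by address

    def read(self, addr):
        if addr not in self.lines:   # miss: fill the line from memory
            self.lines[addr] = self.memory[addr]
        return self.lines[addr]      # hit: memory is NOT consulted

    def invalidate(self, addr):
        self.lines.pop(addr, None)   # what the OS driver must do before reading

memory = {0x10: "old"}
cache = ToyWriteBackCache(memory)
print(cache.read(0x10))   # "old" -- value is now cached

memory[0x10] = "new"      # an incoming DMA transfer writes memory directly
print(cache.read(0x10))   # still "old": the CPU sees a stale value

cache.invalidate(0x10)    # non-coherent system: software invalidates first
print(cache.read(0x10))   # "new" -- the next read refetches from memory
```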
Examples
ISA
In the original IBM PC (and the follow-up PC/XT), there was only one Intel 8237 DMA controller capable of providing four DMA channels (numbered 0–3). These DMA channels performed 8-bit transfers (as the 8237 was an 8-bit device, ideally matched to the PC's i8088 CPU/bus architecture), could only address the first (i8086/8088-standard) megabyte of RAM, and were limited to addressing single 64 kB segments within that space (although the source and destination channels could address different segments). Additionally, the controller could only be used for transfers to, from or between expansion bus I/O devices, as the 8237 could only perform memory-to-memory transfers using channels 0 & 1, of which channel 0 in the PC (& XT) was dedicated to dynamic memory refresh. This prevented it from being used as a general-purpose "Blitter", and consequently block memory moves in the PC, limited by the general PIO speed of the CPU, were very slow.
With the IBM PC/AT, the enhanced AT bus (more familiarly retronymed as the Industry Standard Architecture (ISA)) added a second 8237 DMA controller to provide three additional, and as highlighted by resource clashes with the XT's additional expandability over the original PC, much-needed channels (5–7; channel 4 is used as a cascade to the first 8237). The page register was also rewired to address the full 16 MB memory address space of the 80286 CPU. This second controller was also integrated in a way capable of performing 16-bit transfers when an I/O device is used as the data source and/or destination (as it actually only processes data itself for memory-to-memory transfers, otherwise simply controlling the data flow between other parts of the 16-bit system, making its own data bus width relatively immaterial), doubling data throughput when the upper three channels are used. For compatibility, the lower four DMA channels were still limited to 8-bit transfers only, and whilst memory-to-memory transfers were now technically possible due to the freeing up of channel 0 from having to handle DRAM refresh, from a practical standpoint they were of limited value because of the controller's consequent low throughput compared to what the CPU could now achieve (i.e., a 16-bit, more optimised 80286 running at a minimum of 6 MHz, vs an 8-bit controller locked at 4.77 MHz). In both cases, the 64 kB segment boundary issue remained, with individual transfers unable to cross segments (instead "wrapping around" to the start of the same segment) even in 16-bit mode, although this was in practice more a problem of programming complexity than performance as the continued need for DRAM refresh (however handled) to monopolise the bus approximately every 15 μs prevented use of large (and fast, but uninterruptible) block transfers.
Due to their lagging performance (1.6 MB/s maximum 8-bit transfer capability at 5 MHz, but no more than 0.9 MB/s in the PC/XT and 1.6 MB/s for 16-bit transfers in the AT due to ISA bus overheads and other interference such as memory refresh interruptions) and the unavailability of any speed grades that would allow installation of direct replacements operating at speeds higher than the original PC's standard 4.77 MHz clock, these devices have been effectively obsolete since the late 1980s. In particular, the advent of the 80386 processor in 1985 and its capacity for 32-bit transfers (although great improvements in the efficiency of address calculation and block memory moves in Intel CPUs after the 80186 meant that PIO transfers even by the 16-bit-bus 286 and 386SX could still easily outstrip the 8237), as well as the development of further evolutions to (EISA) or replacements for (MCA, VLB and PCI) the "ISA" bus with their own much higher-performance DMA subsystems (up to a maximum of 33 MB/s for EISA, 40 MB/s for MCA, and typically 133 MB/s for VLB/PCI), made the original DMA controllers seem more of a performance millstone than a booster. They remained supported only to the extent required by built-in legacy PC hardware on later machines. The pieces of legacy hardware that continued to use ISA DMA after 32-bit expansion buses became common were Sound Blaster cards that needed to maintain full hardware compatibility with the Sound Blaster standard; and Super I/O devices on motherboards that often integrated a built-in floppy disk controller, an IrDA infrared controller when FIR (fast infrared) mode is selected, and an IEEE 1284 parallel port controller when ECP mode is selected. In cases where original 8237s or direct compatibles were still used, transfer to or from these devices may still be limited to the first 16 MB of main RAM regardless of the system's actual address space or amount of installed memory.
Each DMA channel has a 16-bit address register and a 16-bit count register associated with it. To initiate a data transfer the device driver sets up the DMA channel's address and count registers together with the direction of the data transfer, read or write. It then instructs the DMA hardware to begin the transfer. When the transfer is complete, the device interrupts the CPU.
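A minimal sketch of that setup sequence in C for the PC's legacy 8237, using the well-documented I/O ports for channel 2 (the floppy channel); outb is an assumed port-output helper, and this is an illustrative fragment rather than a complete driver:

```c
#include <stdint.h>

extern void outb(uint16_t port, uint8_t value);  /* assumed port-I/O helper */

/* Program legacy DMA channel 2 for one transfer. 'phys' must lie in
   DMA-reachable memory and the transfer must not cross a 64 kB boundary.
   'mode' is the full 8237 mode byte, e.g. 0x46 = single mode, write
   transfer (device to memory), channel 2. */
void isa_dma_setup_ch2(uint32_t phys, uint16_t count, uint8_t mode)
{
    outb(0x0A, 0x04 | 2);                  /* mask (disable) channel 2          */
    outb(0x0C, 0x00);                      /* reset the byte-pointer flip-flop  */
    outb(0x0B, mode);                      /* program the mode register         */
    outb(0x04, phys & 0xFF);               /* address low byte (ch2 = 0x04)     */
    outb(0x04, (phys >> 8) & 0xFF);        /* address high byte                 */
    outb(0x81, (phys >> 16) & 0xFF);       /* page register for channel 2       */
    outb(0x05, (count - 1) & 0xFF);        /* count low (8237 runs count + 1)   */
    outb(0x05, ((count - 1) >> 8) & 0xFF); /* count high                        */
    outb(0x0A, 2);                         /* unmask (enable) channel 2         */
}
```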
Scatter-gather or vectored I/O DMA allows the transfer of data to and from multiple memory areas in a single DMA transaction. It is equivalent to the chaining together of multiple simple DMA requests. The motivation is to off-load multiple input/output interrupt and data copy tasks from the CPU.
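A chained request of this kind is typically described by a descriptor list in memory. The layout below is a hypothetical but representative C rendering; real engines differ in field widths and end-of-chain conventions:

```c
#include <stdint.h>

/* One scatter-gather descriptor: each names a contiguous memory segment.
   The DMA engine walks the chain via 'next' until it reaches 0. */
struct sg_descriptor {
    uint64_t phys_addr;  /* physical address of this segment                    */
    uint32_t length;     /* bytes to transfer for this segment                  */
    uint32_t flags;      /* e.g. an end-of-chain or interrupt-on-completion bit */
    uint64_t next;       /* physical address of the next descriptor, 0 = end    */
};
```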
DRQ stands for Data request; DACK for Data acknowledge. These symbols, seen on hardware schematics of computer systems with DMA functionality, represent electronic signaling lines between the CPU and DMA controller. Each DMA channel has one Request and one Acknowledge line. A device that uses DMA must be configured to use both lines of the assigned DMA channel.
16-bit ISA permitted bus mastering.
Standard ISA DMA assignments:

0 – DRAM refresh (dedicated on the PC/XT; unused for this purpose on later systems)
1 – user hardware (commonly a sound card's 8-bit DMA)
2 – floppy disk controller
3 – hard disk controller (PC/XT), ECP parallel port, or other user hardware
4 – cascade from the second DMA controller to the first (unavailable to devices)
5 – user hardware (commonly a sound card's 16-bit DMA)
6 – user hardware
7 – user hardware
PCI
A PCI architecture has no central DMA controller, unlike ISA. Instead, a PCI device can request control of the bus ("become the bus master") and request to read from and write to system memory. More precisely, a PCI component requests bus ownership from the PCI bus controller (usually the PCI host bridge or a PCI-to-PCI bridge), which will arbitrate if several devices request bus ownership simultaneously, since there can only be one bus master at one time. When the component is granted ownership, it will issue normal read and write commands on the PCI bus, which will be claimed by the PCI bus controller.
As an example, on an Intel Core-based PC, the southbridge will forward the transactions to the memory controller (which is integrated on the CPU die) using DMI, which will in turn convert them to DDR operations and send them out on the memory bus. As a result, there are quite a number of steps involved in a PCI DMA transfer; however, that poses little problem, since the PCI device or PCI bus itself are an order of magnitude slower than the rest of the components (see list of device bandwidths).
A modern x86 CPU may use more than 4 GB of memory, either utilizing the native 64-bit mode of an x86-64 CPU or the Physical Address Extension (PAE), a 36-bit addressing mode. In such a case, a device using DMA with a 32-bit address bus is unable to address memory above the 4 GB line. The Double Address Cycle (DAC) mechanism, if implemented on both the PCI bus and the device itself, enables 64-bit DMA addressing. Otherwise, the operating system needs to work around the problem, either by using costly double buffers (DOS/Windows nomenclature), also known as bounce buffers (FreeBSD/Linux), or by using an IOMMU to provide address translation services, if one is present.
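The bounce-buffer workaround can be sketched as follows in C; every name here (virt_to_phys, dma_alloc_low32, dma_start, dma_wait) is a hypothetical stand-in for OS-specific machinery:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical helpers: illustrative names, not a real API. */
extern uint64_t virt_to_phys(const void *p);
extern void    *dma_alloc_low32(size_t len);   /* memory below the 4 GB line */
extern void     dma_free_low32(void *p, size_t len);
extern void     dma_start(void *p, size_t len);
extern void     dma_wait(void);

/* Outgoing transfer for a device limited to 32-bit DMA addresses. */
void dma_send_32bit_device(void *buf, size_t len)
{
    if (virt_to_phys(buf) + len <= (1ULL << 32)) {
        dma_start(buf, len);                   /* device can reach it directly */
        dma_wait();
    } else {
        void *bounce = dma_alloc_low32(len);   /* the "bounce buffer"          */
        memcpy(bounce, buf, len);              /* the costly extra copy        */
        dma_start(bounce, len);
        dma_wait();
        dma_free_low32(bounce, len);
    }
}
```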
I/OAT
As an example of DMA engine incorporated in a general-purpose CPU, some Intel Xeon chipsets include a DMA engine called I/O Acceleration Technology (I/OAT), which can offload memory copying from the main CPU, freeing it to do other work. In 2006, Intel's Linux kernel developer Andrew Grover performed benchmarks using I/OAT to offload network traffic copies and found no more than 10% improvement in CPU utilization with receiving workloads.
DDIO
Further performance-oriented enhancements to the DMA mechanism have been introduced in Intel Xeon E5 processors with their Data Direct I/O (DDIO) feature, allowing the DMA "windows" to reside within CPU caches instead of system RAM. As a result, CPU caches are used as the primary source and destination for I/O, allowing network interface controllers (NICs) to DMA directly to the Last level cache (L3 cache) of local CPUs and avoid costly fetching of the I/O data from system RAM. As a result, DDIO reduces the overall I/O processing latency, allows processing of the I/O to be performed entirely in-cache, prevents the available RAM bandwidth/latency from becoming a performance bottleneck, and may lower the power consumption by allowing RAM to remain longer in low-powered state.
AHB
In systems-on-a-chip and embedded systems, typical system-bus infrastructure is a complex on-chip bus such as the AMBA High-performance Bus (AHB). AMBA defines two kinds of AHB components: master and slave. A slave interface is similar to programmed I/O, through which software (running on an embedded CPU such as an ARM core) can write/read I/O registers or (less commonly) local memory blocks inside the device. A master interface can be used by the device to perform DMA transactions to/from system memory without heavily loading the CPU.

Therefore, high-bandwidth devices such as network controllers that need to transfer huge amounts of data to/from system memory will have two interface adapters to the AHB: a master and a slave interface. This is because on-chip buses like AHB do not support tri-stating the bus or alternating the direction of any line on the bus. Like PCI, no central DMA controller is required since the DMA is bus-mastering, but an arbiter is required when multiple masters are present on the system.
Internally, a multichannel DMA engine is usually present in the device to perform multiple concurrent scatter-gather operations as programmed by the software.
Cell
As an example usage of DMA in a multiprocessor-system-on-chip, IBM/Sony/Toshiba's Cell processor incorporates a DMA engine for each of its 9 processing elements including one Power processor element (PPE) and eight synergistic processor elements (SPEs). Since the SPE's load/store instructions can read/write only its own local memory, an SPE entirely depends on DMAs to transfer data to and from the main memory and local memories of other SPEs. Thus the DMA acts as a primary means of data transfer among cores inside this CPU (in contrast to cache-coherent CMP architectures such as Intel's cancelled general-purpose GPU, Larrabee).
DMA in Cell is fully cache coherent (note, however, that local stores of SPEs operated upon by DMA do not act as globally coherent cache in the standard sense). In both read ("get") and write ("put"), a DMA command can transfer either a single block area of size up to 16 KB, or a list of 2 to 2048 such blocks. The DMA command is issued by specifying a pair of a local address and a remote address: for example, when an SPE program issues a put DMA command, it specifies an address of its own local memory as the source and a virtual memory address (pointing to either the main memory or the local memory of another SPE) as the target, together with a block size. In one experiment, the effective peak performance of DMA in Cell (at 3 GHz, under uniform traffic) reached 200 GB per second.
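On the SPE side this is exposed through MFC intrinsics. A minimal sketch using the mfc_get/tag-status calls from the Cell SDK's spu_mfcio.h (buffer size, alignment and tag choice are illustrative; sizes and addresses must satisfy the MFC's alignment rules):

```c
#include <stdint.h>
#include <spu_mfcio.h>

/* Local-store buffer; Cell DMA performs best with 128-byte alignment. */
static volatile char buf[16384] __attribute__((aligned(128)));

/* Pull 'size' bytes (up to 16 KB per command) from effective address
   'ea' in main memory into the local store, then block until done. */
void fetch(uint64_t ea, uint32_t size, uint32_t tag)
{
    mfc_get(buf, ea, size, tag, 0, 0);  /* main memory -> local store      */
    mfc_write_tag_mask(1 << tag);       /* select this tag group           */
    mfc_read_tag_status_all();          /* wait for the transfer to finish */
}
```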
DMA controllers
Intel 8257
Am9517
Intel 8237
Z80 DMA
LH0083, compatible to Z80 DMA
μPD71037, capable of addressing 64 KB of memory
μPD71071, capable of addressing 16 MB of memory
Pipelining
Processors with scratchpad memory and DMA (such as digital signal processors and the Cell processor) may benefit from software overlapping DMA memory operations with processing, via double buffering or multibuffering. For example, the on-chip memory is split into two buffers; the processor may be operating on data in one, while the DMA engine is loading and storing data in the other. This allows the system to avoid memory latency and exploit burst transfers, at the expense of needing a predictable memory access pattern.
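A ping-pong (double-buffered) loop can be sketched in C; dma_load, dma_wait_tag, process and CHUNK are illustrative names, not a specific API:

```c
#include <stdint.h>

enum { CHUNK = 4096 };          /* illustrative chunk size */

extern void dma_load(void *dst, uint64_t src, unsigned len, int tag);
extern void dma_wait_tag(int tag);
extern void process(const char *data, unsigned len);

static char buf[2][CHUNK];

/* Overlap computation on one buffer with the DMA fill of the other. */
void pipeline(uint64_t src, int nchunks)
{
    if (nchunks <= 0)
        return;
    int cur = 0;
    dma_load(buf[cur], src, CHUNK, cur);           /* prime the pipe       */
    for (int i = 0; i < nchunks; i++) {
        int nxt = cur ^ 1;
        if (i + 1 < nchunks)                       /* prefetch next chunk  */
            dma_load(buf[nxt], src + (uint64_t)(i + 1) * CHUNK, CHUNK, nxt);
        dma_wait_tag(cur);                         /* current chunk ready  */
        process(buf[cur], CHUNK);                  /* overlaps with DMA    */
        cur = nxt;
    }
}
```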
See also
References
Sources
DMA Fundamentals on Various PC Platforms, from A. F. Harvey and Data Acquisition Division Staff NATIONAL INSTRUMENTS
mmap() and DMA, from Linux Device Drivers, 2nd Edition, Alessandro Rubini & Jonathan Corbet
Memory Mapping and DMA, from Linux Device Drivers, 3rd Edition, Jonathan Corbet, Alessandro Rubini, Greg Kroah-Hartman
DMA and Interrupt Handling
DMA Modes & Bus Mastering
External links
Mastering the DMA and IOMMU APIs, Embedded Linux Conference 2014, San Jose, by Laurent Pinchart
Computer memory
Motherboard
Computer storage buses
Hardware acceleration
Input/output | Direct memory access | [
"Technology"
] | 4,362 | [
"Hardware acceleration",
"Computer systems"
] |
57,759 | https://en.wikipedia.org/wiki/Biophoton | Biophotons (from the Greek βίος meaning "life" and φῶς meaning "light") are photons of light in the ultraviolet and low visible light range that are produced by a biological system. They are non-thermal in origin, and the emission of biophotons is technically a type of bioluminescence, though the term "bioluminescence" is generally reserved for higher luminance systems (typically with emitted light visible to the naked eye, using biochemical means such as luciferin/luciferase). The term biophoton used in this narrow sense should not be confused with the broader field of biophotonics, which studies the general interaction of light with biological systems.
Biological tissues typically produce an observed radiant emittance in the visible and ultraviolet frequencies ranging from 10−17 to 10−23 W/cm2 (approx 1-1000 photons/cm2/second). This low level of light has a much weaker intensity than the visible light produced by bioluminescence, but biophotons are detectable above the background of thermal radiation that is emitted by tissues at their normal temperature.
While detection of biophotons has been reported by several groups, hypotheses that such biophotons indicate the state of biological tissues and facilitate a form of cellular communication are still under investigation. Alexander Gurwitsch, who discovered the existence of biophotons, was awarded the Stalin Prize in 1941 for his work.
Detection and measurement
Biophotons may be detected with photomultipliers or by means of an ultra low noise CCD camera to produce an image, using an exposure time of typically 15 minutes for plant materials. Photomultiplier tubes have been used to measure biophoton emissions from fish eggs, and some applications have measured biophotons from animals and humans. Electron Multiplying CCD (EM-CCD) optimized for the detection of ultraweak light have also been used to detect the bioluminescence produced by yeast cells at the onset of their growth.
The typical observed radiant emittance of biological tissues in the visible and ultraviolet frequencies ranges from 10−17 to 10−23 W/cm2 with a photon count from a few to nearly 1000 photons per cm2 in the range of 200 nm to 800 nm.
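As a rough order-of-magnitude check (assuming, for illustration, photons near the middle of this band at about 500 nm):

$$E_{\text{photon}} = \frac{hc}{\lambda} \approx \frac{(6.63\times10^{-34}\ \mathrm{J\,s})(3.0\times10^{8}\ \mathrm{m/s})}{5.0\times10^{-7}\ \mathrm{m}} \approx 4\times10^{-19}\ \mathrm{J}$$

so a radiant emittance of 10−17 W/cm2 corresponds to roughly 10−17 / (4×10−19) ≈ 25 photons per cm2 per second, consistent with the photon counts quoted above.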
Proposed physical mechanisms
Chemi-excitation via oxidative stress by reactive oxygen species or catalysis by enzymes (i.e., peroxidase, lipoxygenase) is a common event in the biomolecular milieu. Such reactions can lead to the formation of triplet excited species, which release photons upon returning to a lower energy level in a process analogous to phosphorescence. That this process is a contributing factor to spontaneous biophoton emission has been indicated by studies demonstrating that biophoton emission can be increased by depleting assayed tissue of antioxidants or by addition of carbonyl derivatizing agents. Further support is provided by studies indicating that emission can be increased by addition of reactive oxygen species.
Plants
Imaging of biophotons from leaves has been used as a method for assaying R gene responses. These genes and their associated proteins are responsible for pathogen recognition and activation of defense signaling networks leading to the hypersensitive response, which is one of the mechanisms of the resistance of plants to pathogen infection. It involves the generation of reactive oxygen species (ROS), which have crucial roles in signal transduction or as toxic agents leading to cell death.
Biophotons have been also observed in the roots of stressed plants. In healthy cells, the concentration of ROS is minimized by a system of biological antioxidants. However, heat shock and other stresses changes the equilibrium between oxidative stress and antioxidant activity, for example, the rapid rise in temperature induces biophoton emission by ROS.
Hypothesized involvement in cellular communication
In the 1920s, the Russian embryologist Alexander Gurwitsch reported "ultraweak" photon emissions from living tissues in the UV-range of the spectrum. He named them "mitogenetic rays" because his experiments convinced him that they had a stimulating effect on cell division.
In the 1970s Fritz-Albert Popp and his research group at the University of Marburg (Germany) showed that the spectral distribution of the emission fell over a wide range of wavelengths, from 200 to 750 nm. Popp's work on the biophoton emission's statistical properties, namely the claims on its coherence, was criticised for lack of scientific rigour.
One biophoton mechanism focuses on injured cells that are under higher levels of oxidative stress, which is one source of light, and can be deemed to constitute a "distress signal" or background chemical process, but this mechanism is yet to be demonstrated. The difficulty of teasing out the effects of any supposed biophotons amid the other numerous chemical interactions between cells makes it difficult to devise a testable hypothesis. A 2010 review article discusses various published theories on this kind of signaling.
The hypothesis of cellular communication by biophotons was highly criticised for failing to explain how could cells detect photonic signals several orders of magnitude weaker than the natural background illumination.
See also
Chemiluminescence
Luminophore
Phosphorescence
References
Further reading
External links
Bioluminescence
Photons | Biophoton | [
"Chemistry",
"Biology"
] | 1,118 | [
"Biochemistry",
"Luminescence",
"Bioluminescence"
] |
57,761 | https://en.wikipedia.org/wiki/Valproate | Valproate (valproic acid, VPA, sodium valproate, and valproate semisodium forms) are medications primarily used to treat epilepsy and bipolar disorder and prevent migraine headaches. They are useful for the prevention of seizures in those with absence seizures, partial seizures, and generalized seizures. They can be given intravenously or by mouth, and the tablet forms exist in both long- and short-acting formulations.
Common side effects of valproate include nausea, vomiting, somnolence, and dry mouth. Serious side effects can include liver failure, and regular monitoring of liver function tests is therefore recommended. Other serious risks include pancreatitis and an increased suicide risk. Valproate is known to cause serious abnormalities or birth defects in the unborn child if taken during pregnancy, and is contra-indicated for women of childbearing age unless the drug is essential to their medical condition and the person is also prescribed a contraceptive. Reproductive warnings have also been issued for men using the drug. The United States Food and Drug Administration has indicated a black box warning given the frequency and severity of the side effects and teratogenicity. Additionally, there is also a black box warning due to risk of hepatotoxicity and pancreatitis. As of 2022 the drug was still prescribed in the UK to potentially pregnant women, but use declined by 51% from 2018–19 to 2020–21.
Valproate's precise mechanism of action is unclear. Proposed mechanisms include affecting GABA levels, blocking voltage-gated sodium channels, inhibiting histone deacetylases, and increasing LEF1. Valproic acid is a branched short-chain fatty acid (SCFA), a derivative of valeric acid.
Valproate was originally synthesized in 1881 and came into medical use in 1962. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In 2022, it was the 174th most commonly prescribed medication in the United States, with more than 2 million prescriptions.
Medical uses
Valproate or valproic acid is used primarily to treat epilepsy and bipolar disorder and to prevent migraine headaches.
Epilepsy
Valproate has a broad spectrum of anticonvulsant activity, although it is primarily used as a first-line treatment for tonic–clonic seizures, absence seizures and myoclonic seizures and as a second-line treatment for partial seizures and infantile spasms. It has also been successfully given intravenously to treat status epilepticus.
In the US, valproic acid is also prescribed as an anti-epileptic drug indicated for the treatment of manic episodes associated with bipolar disorder; monotherapy and adjunctive therapy of complex partial seizures and simple and complex absence seizures; adjunctive therapy in people with multiple seizure types that include absence seizures.
Mental illness
Valproate products are used to treat manic or mixed episodes of bipolar disorder.
A 2016 systematic review examined the efficacy of valproate as an add-on treatment for people with schizophrenia.
Other neurological indications
Based upon five case reports, valproic acid may have efficacy in controlling the symptoms of the dopamine dysregulation syndrome that arise from the treatment of Parkinson's disease with levodopa.
Valproate is not commonly used to prevent or treat migraine headaches, but it may be prescribed if other medications are ineffective.
Other
The medication has been tested in the treatment of AIDS and cancer, owing to its histone-deacetylase-inhibiting effects. It has cardioprotective, kidney-protective, anti-inflammatory, and antimicrobial effects.
Contraindications
Contraindications include:
Pre-existing acute or chronic liver dysfunction or family history of severe liver inflammation (hepatitis), particularly medicine related.
Pregnancy: an 11% risk of birth defects and a 30–40% risk of neurodevelopmental disabilities, which can be permanent
Known hypersensitivity to valproate or any of the ingredients used in the preparation
Urea cycle disorders
Hepatic porphyria
Hepatotoxicity
Mitochondrial disease
Pancreatitis
Porphyria
Adverse effects
Most common adverse effects include:
Nausea (22%)
Drowsiness (19%)
Dizziness (12%)
Vomiting (12%)
Weakness (10%)
Serious adverse effects include:
Bleeding
Low blood platelets
Encephalopathy
Suicidal behavior and thoughts
Low body temperature
Valproic acid has a black box warning for hepatotoxicity, pancreatitis, and fetal abnormalities.
There is evidence that valproic acid may cause premature growth plate ossification in children and adolescents, resulting in decreased height. Valproic acid can also cause mydriasis, a dilation of the pupils. There is evidence that shows valproic acid may increase the chance of polycystic ovary syndrome (PCOS) in women with epilepsy or bipolar disorder. Studies have shown this risk of PCOS is higher in women with epilepsy compared to those with bipolar disorder. Weight gain is also possible.
Pregnancy
Elderly
Valproate may cause increased somnolence in the elderly. In a trial of valproate in elderly patients with dementia, a significantly higher portion of valproate patients had somnolence compared to placebo. In approximately one-half of such patients, there was associated reduced nutritional intake and weight loss.
Overdose and toxicity
Excessive amounts of valproic acid can result in somnolence, tremor, stupor, respiratory depression, coma, metabolic acidosis, and death. In general, serum or plasma valproic acid concentrations are in a range of 20–100 mg/L during controlled therapy, but may reach 150–1500 mg/L following acute poisoning. Monitoring of the serum level is often accomplished using commercial immunoassay techniques, although some laboratories employ gas or liquid chromatography.
In contrast to other antiepileptic drugs, at present there is little favorable evidence for salivary therapeutic drug monitoring. Salivary levels of valproic acid correlate poorly with serum levels, partly due to valproate's weak acid property (pKa of 4.9).
In severe intoxication, hemoperfusion or hemofiltration can be an effective means of hastening elimination of the drug from the body. Supportive therapy should be given to all patients experiencing an overdose and urine output should be monitored. Supplemental L-carnitine is indicated in patients having an acute overdose and also prophylactically in high risk patients. Acetyl-L-carnitine lowers hyperammonemia less markedly than L-carnitine.
Interactions
Valproate inhibits CYP2C9, glucuronyl transferase, and epoxide hydrolase and is highly protein bound and hence may interact with drugs that are substrates for any of these enzymes or are highly protein bound themselves. It may also potentiate the CNS depressant effects of alcohol. It should not be given in conjunction with other antiepileptics due to the potential for reduced clearance of other antiepileptics (including carbamazepine, lamotrigine, phenytoin and phenobarbitone) and itself. It may also interact with:
Aspirin: may increase valproate concentrations. May also interfere with valproate's metabolism.
Benzodiazepines: may cause CNS depression and there are possible pharmacokinetic interactions.
Carbapenem antibiotics: reduce valproate levels, potentially leading to seizures.
Cimetidine: inhibits valproate's metabolism in the liver, leading to increased valproate concentrations.
Erythromycin: inhibits valproate's metabolism in the liver, leading to increased valproate concentrations.
Ethosuximide: valproate may increase ethosuximide concentrations and lead to toxicity.
Felbamate: may increase plasma concentrations of valproate.
Mefloquine: may increase valproate metabolism combined with the direct epileptogenic effects of mefloquine.
Oral contraceptives: may reduce plasma concentrations of valproate.
Primidone: may accelerate metabolism of valproate, leading to a decline of serum levels and potential breakthrough seizure.
Rifampicin: increases the clearance of valproate, leading to decreased valproate concentrations.
Warfarin: valproate may increase free warfarin concentration and prolong bleeding time.
Zidovudine: valproate may increase zidovudine serum concentration and lead to toxicity.
Pharmacology
Pharmacodynamics
Although the mechanism of action of valproate is not fully understood, traditionally, its anticonvulsant effect has been attributed to the blockade of voltage-gated sodium channels and increased brain levels of the inhibitory synaptic neurotransmitter gamma-aminobutyric acid (GABA). The GABAergic effect is also believed to contribute towards the anti-manic properties of valproate. In animals, sodium valproate raises cerebral and cerebellar levels of GABA, possibly by inhibiting GABA degradative enzymes, such as GABA transaminase, succinate-semialdehyde dehydrogenase and by inhibiting the re-uptake of GABA by neuronal cells.
Prevention of neurotransmitter-induced hyperexcitability of nerve cells via Kv7.2 channel and AKAP5 may also contribute to its mechanism. Valproate has been shown to protect against a seizure-induced reduction in phosphatidylinositol (3,4,5)-trisphosphate (PIP3) as a potential therapeutic mechanism.
Valproate is a histone deacetylase inhibitor. By inhibiting histone deacetylase, it promotes more transcriptionally active chromatin structures; that is, it exerts an epigenetic effect. This has been demonstrated in mice: valproic acid–induced histone hyperacetylation affected brain function in the next generation of mice through changes in sperm DNA methylation. Intermediate molecules include VEGF, BDNF, and GDNF.
Endocrine actions
Valproic acid has been found to be an antagonist of the androgen and progesterone receptors, and hence as a nonsteroidal antiandrogen and antiprogestogen, at concentrations much lower than therapeutic serum levels. In addition, the drug has been identified as a potent aromatase inhibitor, and suppresses estrogen concentrations. These actions are likely to be involved in the reproductive endocrine disturbances seen with valproic acid treatment.
Valproic acid has been found to directly stimulate androgen biosynthesis in the gonads via inhibition of histone deacetylases and has been associated with hyperandrogenism in women and increased 4-androstenedione levels in men. High rates of polycystic ovary syndrome and menstrual disorders have also been observed in women treated with valproic acid.
Pharmacokinetics
Taken by mouth, valproate is rapidly and virtually completely absorbed from the gut. Once in the bloodstream, 80–90% of the substance is bound to plasma proteins, mainly albumin. Protein binding is saturable: it decreases with increasing valproate concentration, low albumin concentrations, the patient's age, additional use of other drugs such as aspirin, as well as liver and kidney impairment. Concentrations in the cerebrospinal fluid and in breast milk are 1 to 10% of blood plasma concentrations.
The vast majority of valproate metabolism occurs in the liver. Valproate is known to be metabolized by the cytochrome P450 enzymes CYP2A6, CYP2B6, CYP2C9, and CYP3A5. It is also known to be metabolized by the UDP-glucuronosyltransferase enzymes UGT1A3, UGT1A4, UGT1A6, UGT1A8, UGT1A9, UGT1A10, UGT2B7, and UGT2B15. Some of the known metabolites of valproate produced by these enzymes and by uncharacterized enzymes include:
via glucuronidation (30–50%): valproic acid β-O-glucuronide
via beta oxidation (>40%): 2E-ene-valproic acid, 2Z-ene-valproic acid, 3-hydroxyvalproic acid, 3-oxovalproic acid
via omega oxidation: 5-hydroxyvalproic acid, 2-propyl-glutaric acid
some others: 3E-ene-valproic acid, 3Z-ene-valproic acid, 4-ene-valproic acid, 4-hydroxyvalproic acid
All in all, over 20 metabolites are known.
In adult patients taking valproate alone, 30–50% of an administered dose is excreted in urine as the glucuronide conjugate. The other major pathway in the metabolism of valproate is mitochondrial beta oxidation, which typically accounts for over 40% of an administered dose. Typically, less than 20% of an administered dose is eliminated by other oxidative mechanisms. Less than 3% of an administered dose of valproate is excreted unchanged (i.e., as valproate) in urine. Only a small amount is excreted via the faeces. Elimination half-life is 16±3 hours and can decrease to 4–9 hours when combined with enzyme inducers.
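As a worked illustration of these figures (assuming simple first-order elimination, which is an idealization): the fraction of drug remaining after time t is (1/2)^(t/t½), so with the 16-hour half-life above,

$$\frac{C(24\ \mathrm{h})}{C_0} = \left(\tfrac{1}{2}\right)^{24/16} = 2^{-1.5} \approx 0.35$$

i.e., roughly a third of the drug remains 24 hours after a dose, and about 12% after 48 hours.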
Chemistry
Valproic acid is a branched short-chain fatty acid and the 2-n-propyl derivative of valeric acid.
History
Valproic acid was first synthesized in 1882 by Beverly S. Burton as an analogue of valeric acid, found naturally in valerian. Valproic acid is a carboxylic acid, a clear liquid at room temperature. For many decades, its only use was in laboratories as a "metabolically inert" solvent for organic compounds. In 1962, the French researcher Pierre Eymard serendipitously discovered the anticonvulsant properties of valproic acid while using it as a vehicle for a number of other compounds that were being screened for antiseizure activity. He found it prevented pentylenetetrazol-induced convulsions in laboratory rats. It was approved as an antiepileptic drug in 1967 in France and has become the most widely prescribed antiepileptic drug worldwide. Valproic acid has also been used for migraine prophylaxis and bipolar disorder.
Society and culture
Valproate is available as a generic medication.
Approval status
Off-label uses
In 2012, pharmaceutical company Abbott paid $1.6 billion in fines to US federal and state governments for illegal promotion of off-label uses for Depakote, including the sedation of elderly nursing home residents.
Some studies have suggested that valproate may reopen the critical period for learning absolute pitch and possibly other skills such as language.
Formulations
Valproate exists in two main molecular variants: sodium valproate and valproic acid without sodium (often implied by simply valproate). A mixture between these two is termed semisodium valproate. It is unclear whether there is any difference in efficacy between these variants, except for the fact that about 10% more mass of sodium valproate is needed than of valproic acid to compensate for the sodium itself.
Terminology
Valproate is a negative ion. The conjugate acid of valproate is valproic acid (VPA). Valproic acid is fully ionized into valproate at the physiologic pH of the human body, and valproate is the active form of the drug. Sodium valproate is the sodium salt of valproic acid. Divalproex sodium is a coordination complex composed of equal parts of valproic acid and sodium valproate.
Brand names of valproic acid
Branded products include:
Absenor (Orion Corporation Finland)
Convulex (G.L. Pharma GmbH Austria)
Depakene (Abbott Laboratories in US and Canada)
Depakin (Sanofi S.R.L. Italy)
Depakine (Sanofi Aventis France)
Depakine (Sanofi Synthelabo Romania)
Depalept (Sanofi Aventis Israel)
Deprakine (Sanofi Aventis Finland)
Encorate (Sun Pharmaceuticals India)
Epilim (Sanofi Synthelabo Australia and South Africa)
Stavzor (Noven Pharmaceuticals Inc.)
Valcote (Abbott Laboratories Argentina)
Valpakine (Sanofi Aventis Brazil)
Orfiril (Desitin Arzneimittel GmbH Norway)
Brand names of sodium valproate
Portugal
Tablets Diplexil-R by Bial.
United States
Intravenous injection Depacon by Abbott Laboratories.
Syrup Depakene by Abbott Laboratories. (Note: Depakene capsules are valproic acid).
Depakote tablets are a mixture of sodium valproate and valproic acid.
Tablets Eliaxim by Bial.
Australia
Epilim Crushable Tablets Sanofi
Epilim Sugar Free Liquid Sanofi
Epilim Syrup Sanofi
Epilim Tablets Sanofi
Sodium Valproate Sandoz Tablets Sanofi
Valpro Tablets Alphapharm
Valproate Winthrop Tablets Sanofi
Valprease tablets Sigma
New Zealand
Epilim by Sanofi-Aventis
All the above formulations are Pharmac-subsidised.
UK
Depakote Tablets (as in USA)
Tablets Orlept by Wockhardt and Epilim by Sanofi
Oral solution Orlept Sugar Free by Wockhardt and Epilim by Sanofi
Syrup Epilim by Sanofi-Aventis
Intravenous injection Epilim Intravenous by Sanofi
Extended release tablets Epilim Chrono by Sanofi is a combination of sodium valproate and valproic acid in a 2.3:1 ratio.
Enteric-coated tablets Epilim EC200 by Sanofi is a 200 mg sodium valproate enteric-coated tablet.
UK only
Capsules Episenta prolonged release by Beacon
Sachets Episenta prolonged release by Beacon
Intravenous solution for injection Episenta solution for injection by Beacon
Germany, Switzerland, Norway, Finland, Sweden
Tablets Orfiril by Desitin Pharmaceuticals
Intravenous injection Orfiril IV by Desitin Pharmaceuticals
South Africa
Syrup Convulex by Byk Madaus
Tablets Epilim by Sanofi-synthelabo
Malaysia
Tablets Epilim (200 ENTERIC COATED) by Sanofi-Aventis
Controlled release tablets Epilim Chrono (500 CONTROLLED RELEASE) by Sanofi-Aventis
Romania
Companies are SANOFI-AVENTIS FRANCE, GEROT PHARMAZEUTIKA GMBH and DESITIN ARZNEIMITTEL GMBH
Types are Syrup, Extended release mini tablets, Gastric resistant coated tablets, Gastric resistant soft capsules, Extended release capsules, Extended release tablets and Extended release coated tablets
Canada
Intravenous injection Epival or Epiject by Abbott Laboratories.
Syrup Depakene by Abbott Laboratories its generic formulations include Apo-Valproic and ratio-Valproic.
Japan
Tablets Depakene by Kyowa Hakko Kirin
Extended release tablets Depakene-R by Kyowa Hakko Kogyo and Selenica-R by Kowa
Syrup Depakene by Kyowa Hakko Kogyo
Europe
In much of Europe, Dépakine and Depakine Chrono (tablets) are equivalent to Epilim and Epilim Chrono above.
Taiwan
Tablets (white round tablet) Depakine () by Sanofi Winthrop Industrie (France)
Iran
Tablets Epival 200 (enteric coated tablet) and Epival 500 (extended release tablet) by Iran Najo
Slow release tablets Depakine Chrono by Sanofi Winthrop Industrie (France)
Israel
Depalept and Depalept Chrono (extended release tablets) are equivalent to Epilim and Epilim Chrono above. Manufactured and distributed by Sanofi-Aventis.
India, Russia and CIS countries
Valparin Chrono by Sanofi India
Valprol CR by Intas Pharmaceutical (India)
Encorate Chrono by Sun Pharmaceutical (India)
Serven Chrono by Leeven APL Biotech (India)
Uruguay
Tablets DI DPA by Megalabs
Brand names of valproate semisodium
Brazil Depakote by Abbott Laboratories and Torval CR by Torrent do Brasil
Canada Epival by Abbott Laboratories
Mexico Epival and Epival ER (extended release) by Abbott Laboratories
United Kingdom Depakote (for psychiatric conditions) and Epilim (for epilepsy) by Sanofi-Aventis and generics
United States Depakote and Depakote ER (extended release) by Abbott Laboratories and generics
India Valance and Valance OD by Abbott Healthcare Pvt Ltd, Divalid ER by Linux laboratories Pvt Ltd, Valex ER by Sigmund Promedica, Dicorate by Sun Pharma
Germany Ergenyl Chrono by Sanofi-Aventis and generics
Chile Valcote and Valcote ER by Abbott Laboratories
France and other European countries Depakote
Peru Divalprax by AC Farma Laboratories
China Diprate OD
Research
A 2023 systematic review of the literature identified only one study in which valproate was evaluated in the treatment of seizures in infants aged 1 to 36 months. In a randomized control trial, valproate alone was found to show poorer outcomes for infants than valproate plus levetiracetam in terms of reduction of seizures, freedom from seizures, daily living ability, quality of life, and cognitive abilities.
References
Anticonvulsants
Antiprogestogens
Aromatase inhibitors
Drugs developed by AbbVie
Carboxylic acids
CYP3A4 inhibitors
Endocrine disruptors
GABA analogues
GABA transaminase inhibitors
Hepatotoxins
Histone deacetylase inhibitors
Mood stabilizers
Nonsteroidal antiandrogens
Teratogens
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Valproate | [
"Chemistry"
] | 4,756 | [
"Endocrine disruptors",
"Teratogens",
"Carboxylic acids",
"Functional groups"
] |
57,762 | https://en.wikipedia.org/wiki/Psychiatric%20medication | A psychiatric or psychotropic medication is a psychoactive drug taken to exert an effect on the chemical makeup of the brain and nervous system. Thus, these medications are used to treat mental illnesses. These medications are typically made of synthetic chemical compounds and are usually prescribed in psychiatric settings, potentially involuntarily during commitment. Since the mid-20th century, such medications have been leading treatments for a broad range of mental disorders and have decreased the need for long-term hospitalization, thereby lowering the cost of mental health care. The recidivism or rehospitalization of the mentally ill is at a high rate in many countries, and the reasons for the relapses are under research.
History
Several significant psychiatric drugs were developed in the mid-20th century. In 1948, lithium was first used as a psychiatric medicine. One of the most important discoveries was chlorpromazine, an antipsychotic that was first given to a patient in 1952. In the same decade, Julius Axelrod carried out research into the interaction of neurotransmitters, which provided a foundation for the development of further drugs. The popularity of these drugs has increased significantly since then, with millions prescribed annually.
The introduction of these drugs brought profound changes to the treatment of mental illness. It meant that more patients could be treated without the need for confinement in a psychiatric hospital. It was one of the key reasons why many countries moved towards deinstitutionalization, closing many of these hospitals so that patients could be treated at home, in general hospitals and smaller facilities. Use of physical restraints such as straitjackets also declined.
As of 2013, the 10 most prescribed psychiatric drugs by number of prescriptions were alprazolam, sertraline, citalopram, fluoxetine, lorazepam, trazodone, escitalopram, duloxetine, bupropion XL, and venlafaxine XR.
Administration
Psychiatric medications are prescription medications, requiring a prescription from a physician, such as a psychiatrist, or a psychiatric mental health nurse practitioner (PMHNP), before they can be obtained. Some U.S. states and territories, following the creation of the prescriptive authority for psychologists movement, have granted prescriptive privileges to clinical psychologists who have undergone additional specialised education and training in medical psychology. In addition to the familiar dosage in pill form, psychiatric medications are evolving into more novel methods of drug delivery. New technologies include transdermal, transmucosal, inhalation, suppository and depot injection forms.
Research
Psychopharmacology studies a wide range of substances with various types of psychoactive properties. The professional and commercial fields of pharmacology and psychopharmacology do not typically focus on psychedelic or recreational drugs, and so the majority of studies are conducted on psychiatric medication. While studies are conducted on all psychoactive drugs by both fields, psychopharmacology focuses on psychoactive and chemical interactions within the brain. Physicians who research psychiatric medications are psychopharmacologists, specialists in the field of psychopharmacology.
Adverse and withdrawal effects
Psychiatric disorders, including depression, psychosis, and bipolar disorder, are common and gaining more acceptance in the United States. The most commonly used classes of medications for these disorders are antidepressants, antipsychotics, and lithium. However, these medications are associated with significant neurotoxicities.
Psychiatric medications carry risk for neurotoxic adverse effects. The occurrence of neurotoxic effects can potentially reduce drug compliance. Some adverse effects can be treated symptomatically by using adjunct medications such as anticholinergics (antimuscarinics). Some rebound or withdrawal adverse effects, such as the possibility of a sudden or severe emergence or re-emergence of psychosis in antipsychotic withdrawal, may appear when the drugs are discontinued, or discontinued too rapidly.
Medicine combinations with clinically untried risks
While clinical trials of psychiatric medications, like other medications, typically test medicines separately, there is a practice in psychiatry (more so than in somatic medicine) of polypharmacy: using combinations of medicines that have never been tested together in clinical trials, although each medicine involved has passed clinical trials separately. It is argued that this presents a risk of adverse effects, especially brain damage, in real-life mixed-medication psychiatry that is not visible in clinical trials of one medicine at a time (similar to mixed drug abuse causing significantly more damage than the additive effects of the brain damage caused by each drug alone). Outside clinical trials, there is evidence for an increase in mortality when psychiatric patients are transferred to polypharmacy with an increased number of medications being mixed.
Types
There are five main groups of psychiatric medications.
Antidepressants, which treat disparate disorders such as clinical depression, dysthymia, anxiety disorders, eating disorders and borderline personality disorder.
Antipsychotics, which treat psychotic disorders such as schizophrenia and psychotic symptoms occurring in the context of other disorders such as mood disorders. They are also used for the treatment of bipolar disorder.
Anxiolytics, which treat anxiety disorders, and include hypnotics and sedatives
Mood stabilizers, which treat bipolar disorder and schizoaffective disorder.
Stimulants, which treat disorders such as attention deficit hyperactivity disorder and narcolepsy.
Antidepressants
Antidepressants are drugs used to treat clinical depression, and they are also often used for anxiety and other disorders. Most antidepressants will hinder the breakdown of serotonin, norepinephrine, and/or dopamine. A commonly used class of antidepressants are called selective serotonin reuptake inhibitors (SSRIs), which act on serotonin transporters in the brain to increase levels of serotonin in the synaptic cleft. Another is the serotonin-norepinephrine reuptake inhibitors (SNRIs), which increase both serotonin and norepinephrine. Antidepressants will often take 3–5 weeks to have a noticeable effect as the regulation of receptors in the brain adapts. There are multiple classes of antidepressants which have different mechanisms of action. Another type of antidepressant is a monoamine oxidase inhibitor (MAOI), which is thought to block the action of monoamine oxidase, an enzyme that breaks down serotonin and norepinephrine. MAOIs are not used as first-line treatment due to the risk of hypertensive crisis related to the consumption of foods containing the amino acid tyramine.
Common antidepressants:
Fluoxetine (Prozac), SSRI
Paroxetine (Paxil, Seroxat), SSRI
Citalopram (Celexa), SSRI
Escitalopram (Lexapro), SSRI
Sertraline (Zoloft), SSRI
Duloxetine (Cymbalta), SNRI
Venlafaxine (Effexor), SNRI
Bupropion (Wellbutrin), NDRI
Mirtazapine (Remeron), NaSSA
Isocarboxazid (Marplan), MAOI
Phenelzine (Nardil), MAOI
Tranylcypromine (Parnate), MAOI
Amitriptyline (Elavil), TCA
Antipsychotics
Antipsychotics are drugs used to treat various symptoms of psychosis, such as those caused by psychotic disorders or schizophrenia. Atypical antipsychotics are also used as mood stabilizers in the treatment of bipolar disorder, and they can augment the action of antidepressants in major depressive disorder.
Antipsychotics are sometimes referred to as neuroleptic drugs and some antipsychotics are branded "major tranquilizers".
There are two categories of antipsychotics: typical antipsychotics and atypical antipsychotics. Most antipsychotics are available only by prescription.
Common antipsychotics:

Haloperidol (Haldol), typical antipsychotic
Chlorpromazine (Thorazine), typical antipsychotic
Risperidone (Risperdal), atypical antipsychotic
Olanzapine (Zyprexa), atypical antipsychotic
Quetiapine (Seroquel), atypical antipsychotic
Aripiprazole (Abilify), atypical antipsychotic
Clozapine (Clozaril), atypical antipsychotic
Anxiolytics and hypnotics
Benzodiazepines are effective as hypnotics, anxiolytics, anticonvulsants, myorelaxants and amnesics. Having less proclivity for overdose and toxicity, they have widely supplanted barbiturates, although barbiturates (such as pentobarbital) are still used for euthanasia.
Developed in the 1950s onward, benzodiazepines were originally thought to be non-addictive at therapeutic doses, but are now known to cause withdrawal symptoms similar to barbiturates and alcohol. Benzodiazepines are generally recommended for short-term use.
Z-drugs are a group of drugs with effects generally similar to benzodiazepines, which are used in the treatment of insomnia.
Common benzodiazepines and z-drugs include:

Alprazolam (Xanax)
Diazepam (Valium)
Lorazepam (Ativan)
Clonazepam (Klonopin)
Temazepam (Restoril)
Zolpidem (Ambien), z-drug
Zopiclone (Imovane), z-drug
Eszopiclone (Lunesta), z-drug
Mood stabilizers
In 1949, the Australian John Cade discovered that lithium salts could control mania, reducing the frequency and severity of manic episodes. This introduced the now popular drug lithium carbonate to the mainstream public; it was also the first mood stabilizer to be approved by the U.S. Food & Drug Administration.
Besides lithium, several anticonvulsants and atypical antipsychotics have mood stabilizing activity. The mechanism of action of mood stabilizers is not well understood.
Common non-antipsychotic mood stabilizers include:
Lithium (Lithobid, Eskalith), the oldest mood stabilizer
Anticonvulsants
Carbamazepine (Tegretol) and the related compound oxcarbazepine (Trileptal)
Valproic acid, and salts (Depakene, Depakote)
Lamotrigine (Lamictal)
Stimulants
A stimulant is a drug that stimulates the central nervous system, increasing arousal, attention and endurance. Stimulants are used in psychiatry to treat attention deficit-hyperactivity disorder. Because the medications can be addictive, patients with a history of drug abuse are typically monitored closely or treated with a non-stimulant.
Common stimulants:
Methylphenidate (Ritalin, Concerta), a norepinephrine-dopamine reuptake inhibitor
Dexmethylphenidate (Focalin), the active dextro-enantiomer of methylphenidate
Serdexmethylphenidate/dexmethylphenidate (Azstarys)
Mixed amphetamine salts (Adderall), a 3:1 mix of dextro/levo-enantiomers of amphetamine
Dextroamphetamine (Dexedrine), the dextro-enantiomer of amphetamine
Lisdexamfetamine (Vyvanse), a prodrug containing the dextro-enantiomer of amphetamine
Methamphetamine (Desoxyn), a potent but infrequently prescribed amphetamine
Controversies
Professionals, such as David Rosenhan, Peter Breggin, Paula Caplan, Thomas Szasz and Stuart A. Kirk maintain that psychiatry engages "in the systematic medicalization of normality". More recently these concerns have come from insiders who have worked for and promoted the APA (e.g., Robert Spitzer, Allen Frances).
Scholars such as Cooper, Foucault, Goffman, Deleuze and Szasz believe that pharmacological "treatment" is only a placebo effect, and that the administration of drugs amounts to a religion in disguise and ritualistic chemistry. Other scholars have argued against psychiatric medication on the grounds that significant aspects of mental illness are related to the psyche or to environmental factors, whereas medication works exclusively on a pharmacological basis.
Antipsychotics have been associated with decreases in brain volume over time, which may indicate a neurotoxic effect. However, untreated psychosis has also been associated with decreases in brain volume, and treatments have been shown to improve cognitive functioning.
See also
List of long term side effects of antipsychotics
Medication
Medicine
Psychopharmacology
References
External links
Children and Psychiatric Medication – a multimodal presentation
Psychiatric Drugs: Antidepressant, Antipsychotic, Antianxiety, Antimanic Agent, Stimulant Prescription Drugs
Psychoactive drugs
Neuropharmacology | Psychiatric medication | [
"Chemistry"
] | 2,608 | [
"Psychoactive drugs",
"Pharmacology",
"Neurochemistry",
"Neuropharmacology"
] |
57,763 | https://en.wikipedia.org/wiki/Aerosol | An aerosol is a suspension of fine solid particles or liquid droplets in air or another gas. Aerosols can be generated from natural or human causes. The term aerosol commonly refers to the mixture of particulates in air, and not to the particulate matter alone. Examples of natural aerosols are fog, mist or dust. Examples of human caused aerosols include particulate air pollutants, mist from the discharge at hydroelectric dams, irrigation mist, perfume from atomizers, smoke, dust, sprayed pesticides, and medical treatments for respiratory illnesses.
Several types of atmospheric aerosol have a significant effect on Earth's climate: volcanic, desert dust, sea-salt, that originating from biogenic sources and human-made. Volcanic aerosol forms in the stratosphere after an eruption as droplets of sulfuric acid that can prevail for up to two years, and reflect sunlight, lowering temperature. Desert dust, mineral particles blown to high altitudes, absorb heat and may be responsible for inhibiting storm cloud formation. Human-made sulfate aerosols, primarily from burning oil and coal, affect the behavior of clouds. When aerosols absorb pollutants, it facilitates the deposition of pollutants to the surface of the earth as well as to bodies of water. This has the potential to be damaging to both the environment and human health.
Ship tracks are clouds that form around the exhaust released by ships into the still ocean air. Water molecules collect around the tiny particles (aerosols) from exhaust to form a cloud seed. More and more water accumulates on the seed until a visible cloud is formed. In the case of ship tracks, the cloud seeds are stretched over a long narrow path where the wind has blown the ship's exhaust, so the resulting clouds resemble long strings over the ocean.
The warming caused by human-produced greenhouse gases has been somewhat offset by the cooling effect of human-produced aerosols. In 2020, regulations on fuel significantly cut sulfur dioxide emissions from international shipping by approximately 80%, leading to an unexpected global geoengineering termination shock.
The liquid or solid particles in an aerosol have diameters typically less than 1 μm. Larger particles with a significant settling speed make the mixture a suspension, but the distinction is not clear. In everyday language, aerosol often refers to a dispensing system that delivers a consumer product from a spray can.
Diseases can spread by means of small droplets in the breath, sometimes called bioaerosols.
Definitions
Aerosol is defined as a suspension system of solid or liquid particles in a gas. An aerosol includes both the particles and the suspending gas, which is usually air. Meteorologists and climatologists often refer to them as particulate matter; classification into size ranges such as PM2.5 or PM10 is useful in the field of atmospheric pollution, as these size ranges play a role in determining the harmful effects on human health. Frederick G. Donnan presumably first used the term aerosol during World War I to describe an aero-solution, clouds of microscopic particles in air. This term developed analogously to the term hydrosol, a colloid system with water as the dispersed medium. Primary aerosols contain particles introduced directly into the gas; secondary aerosols form through gas-to-particle conversion.
Key aerosol groups include sulfates, organic carbon, black carbon, nitrates, mineral dust, and sea salt; these usually clump together to form a complex mixture. Various types of aerosol, classified according to physical form and how they were generated, include dust, fume, mist, smoke and fog.
There are several measures of aerosol concentration. Environmental science and environmental health often use the mass concentration (M), defined as the mass of particulate matter per unit volume, in units such as μg/m3. Also commonly used is the number concentration (N), the number of particles per unit volume, in units such as number per m3 or number per cm3.
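For illustration, the two measures can be related with a back-of-the-envelope conversion (assuming a monodisperse aerosol of unit-density spheres; the numbers are illustrative): a number concentration of N = 1000 particles/cm3 at dp = 1 μm corresponds to a mass concentration of

$$M = N\,\rho_p\,\frac{\pi}{6}d_p^3 = (10^3\ \mathrm{cm^{-3}})(1\ \mathrm{g/cm^3})\,\frac{\pi}{6}(10^{-4}\ \mathrm{cm})^3 \approx 5.2\times10^{-10}\ \mathrm{g/cm^3} \approx 520\ \mathrm{\mu g/m^3}.$$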
Particle size has a major influence on particle properties, and the aerosol particle radius or diameter (dp) is a key property used to characterise aerosols.
Aerosols vary in their dispersity. A monodisperse aerosol, producible in the laboratory, contains particles of uniform size. Most aerosols, however, as polydisperse colloidal systems, exhibit a range of particle sizes. Liquid droplets are almost always nearly spherical, but scientists use an equivalent diameter to characterize the properties of various shapes of solid particles, some very irregular. The equivalent diameter is the diameter of a spherical particle with the same value of some physical property as the irregular particle. The equivalent volume diameter (de) is defined as the diameter of a sphere of the same volume as that of the irregular particle. Also commonly used is the aerodynamic diameter, da.
Generation and applications
People generate aerosols for various purposes, including:
as test aerosols for calibrating instruments, performing research, and testing sampling equipment and air filters;
to deliver deodorants, paints, and other consumer products in sprays;
for dispersal and agricultural application
for medical treatment of respiratory disease; and
in fuel injection systems and other combustion technology.
Some devices for generating aerosols are:
Aerosol spray
Atomizer nozzle or nebulizer
Electrospray
Electronic cigarette
Vibrating orifice aerosol generator (VOAG)
In the atmosphere
Although all hydrometeors, solid and liquid, can be described as aerosols, a distinction is commonly made between such dispersions (i.e. clouds) containing activated drops and crystals, and aerosol particles. The atmosphere of Earth contains aerosols of various types and concentrations, including quantities of:
natural inorganic materials: fine dust, sea salt, or water droplets
natural organic materials: smoke, pollen, spores, or bacteria
anthropogenic products of combustion such as: smoke, ashes or dusts
Aerosols can be found in urban ecosystems in various forms, for example:
Dust
Cigarette smoke
Mist from aerosol spray cans
Soot or fumes in car exhaust
The presence of aerosols in the Earth's atmosphere can influence its climate, as well as human health.
Effects
Volcanic eruptions release large amounts of sulphuric acid, hydrogen sulfide and hydrochloric acid into the atmosphere. These form aerosols and eventually return to earth as acid rain, having a number of adverse effects on the environment and human life.
When aerosols absorb pollutants, it facilitates the deposition of pollutants to the surface of the earth as well as to bodies of water. This has the potential to be damaging to both the environment and human health.
Aerosols interact with the Earth's energy budget in two ways, directly and indirectly.
For example, a direct effect is that aerosols scatter and absorb incoming solar radiation. This mainly leads to a cooling of the surface (solar radiation is scattered back to space) but may also contribute to a warming of the surface (caused by the absorption of incoming solar energy). This is an additional element to the greenhouse effect and therefore contributes to global climate change.
The indirect effects refer to the aerosol interfering with formations that interact directly with radiation. For example, they are able to modify the size of the cloud particles in the lower atmosphere, thereby changing the way clouds reflect and absorb light and therefore modifying the Earth's energy budget.
There is evidence to suggest that anthropogenic aerosols actually offset the effects of greenhouse gases in some areas, which is why the Northern Hemisphere shows slower surface warming than the Southern Hemisphere, although this just means that the Northern Hemisphere will absorb the heat later, through ocean currents bringing warmer waters from the south. On a global scale, however, aerosol cooling decreases greenhouse-gas-induced heating without offsetting it completely.
Aerosols in the 20 μm range show a particularly long persistence time in air conditioned rooms due to their "jet rider" behaviour (move with air jets, gravitationally fall out in slowly moving air); as this aerosol size is most effectively adsorbed in the human nose, the primordial infection site in COVID-19, such aerosols may contribute to the pandemic.
Aerosol particles with an effective diameter smaller than 10 μm can enter the bronchi, while the ones with an effective diameter smaller than 2.5 μm can enter as far as the gas exchange region in the lungs, which can be hazardous to human health.
Size distribution
For a monodisperse aerosol, a single number—the particle diameter—suffices to describe the size of the particles. However, more complicated particle-size distributions describe the sizes of the particles in a polydisperse aerosol. This distribution defines the relative amounts of particles, sorted according to size. One approach to defining the particle size distribution uses a list of the sizes of every particle in a sample, but this proves tedious to ascertain in aerosols with millions of particles and awkward to use. Another approach splits the size range into intervals and finds the number (or proportion) of particles in each interval. These data can be presented in a histogram, with the number of particles in each bin usually divided by the width of the interval, so that the area of each bar is proportional to the number of particles in the size range it represents. If the width of the bins tends to zero, the frequency function is:

$$df = f(d_p)\,d(d_p)$$

where
$d_p$ is the diameter of the particles
$df$ is the fraction of particles having diameters between $d_p$ and $d_p + d(d_p)$
$f(d_p)$ is the frequency function
Therefore, the area under the frequency curve between two sizes a and b represents the total fraction of the particles in that size range:

$$f_{ab} = \int_a^b f(d_p)\,d(d_p)$$
It can also be formulated in terms of the total number density N:

$$dN = N\,f(d_p)\,d(d_p)$$
Assuming spherical aerosol particles, the aerosol surface area per unit volume (S) is given by the second moment:

$$S = \pi N \int_0^\infty d_p^2\,f(d_p)\,d(d_p)$$
And the third moment gives the total volume concentration (V) of the particles:

$$V = \frac{\pi}{6}\, N \int_0^\infty d_p^3\,f(d_p)\,d(d_p)$$
The particle size distribution can be approximated. The normal distribution usually does not suitably describe particle size distributions in aerosols because of the skewness associated with a long tail of larger particles. Also, for a quantity that varies over a large range, as many aerosol sizes do, the width of the distribution implies negative particle sizes, which is not physically realistic. However, the normal distribution can be suitable for some aerosols, such as test aerosols, certain pollen grains and spores.
A more widely chosen log-normal distribution gives the number frequency as:

$$df = \frac{1}{d_p\,\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(\ln d_p - \ln \bar{d_p})^2}{2\sigma^2}\right) d(d_p)$$

where:
$\sigma$ is the standard deviation of the size distribution (the standard deviation of $\ln d_p$) and
$\bar{d_p}$ is the arithmetic mean diameter.
The log-normal distribution has no negative values, can cover a wide range of values, and fits many observed size distributions reasonably well.
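As a concrete illustration, the following minimal Python sketch evaluates the log-normal number frequency given above (with σ taken as the standard deviation of ln d_p) and checks numerically that the fractions integrate to one. The median diameter of 0.2 μm and geometric standard deviation of 1.8 are illustrative assumptions, not values from the text:

import numpy as np

# Log-normal number frequency f(dp) such that df = f(dp) d(dp).
# sigma is the standard deviation of ln(dp); dp_bar is the mean diameter.
def lognormal_frequency(dp, dp_bar, sigma):
    return (1.0 / (dp * sigma * np.sqrt(2.0 * np.pi))
            * np.exp(-(np.log(dp) - np.log(dp_bar)) ** 2 / (2.0 * sigma ** 2)))

dp = np.logspace(-3, 1, 20000)                 # diameters in micrometres (assumed range)
f = lognormal_frequency(dp, 0.2, np.log(1.8))  # assumed: median 0.2 um, GSD 1.8

print(np.trapz(f, dp))                         # ~1.0: the total fraction is one
sel = (dp >= 0.1) & (dp <= 1.0)
print(np.trapz(f[sel], dp[sel]))               # fraction of particles between 0.1 and 1.0 um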
Other distributions sometimes used to characterise particle size include: the Rosin-Rammler distribution, applied to coarsely dispersed dusts and sprays; the Nukiyama–Tanasawa distribution, for sprays of extremely broad size ranges; the power function distribution, occasionally applied to atmospheric aerosols; the exponential distribution, applied to powdered materials; and for cloud droplets, the Khrgian–Mazin distribution.
Physics
Terminal velocity of a particle in a fluid
For low values of the Reynolds number (<1), true for most aerosol motion, Stokes' law describes the force of resistance on a solid spherical particle in a fluid. However, Stokes' law is only valid when the velocity of the gas at the surface of the particle is zero. For small particles (< 1 μm) that characterize aerosols, however, this assumption fails. To account for this failure, one can introduce the Cunningham correction factor, always greater than 1. Including this factor, one finds the relation between the resisting force on a particle and its velocity:

$$F_D = \frac{3\pi \eta V d}{C_c}$$

where
$F_D$ is the resisting force on a spherical particle
$\eta$ is the dynamic viscosity of the gas
$V$ is the particle velocity
$d$ is the particle diameter
$C_c$ is the Cunningham correction factor.
This allows us to calculate the terminal velocity of a particle undergoing gravitational settling in still air. Neglecting buoyancy effects, we find:

$$V_{TS} = \frac{\rho_p d^2 g C_c}{18\eta}$$

where
$V_{TS}$ is the terminal settling velocity of the particle,
$\rho_p$ is the density of the particle, and
$g$ is the acceleration due to gravity.
The terminal velocity can also be derived for other kinds of forces. If Stokes' law holds, then the resistance to motion is directly proportional to speed. The constant of proportionality is the mechanical mobility (B) of a particle:

$$B = \frac{V}{F_D} = \frac{C_c}{3\pi \eta d}$$
A particle traveling at any reasonable initial velocity approaches its terminal velocity exponentially with an e-folding time equal to the relaxation time:

$$\tau = \frac{\rho_p d^2 C_c}{18\eta}, \qquad V(t) = V_f - (V_f - V_0)\,e^{-t/\tau}$$

where:
$V(t)$ is the particle speed at time $t$
$V_f$ is the final particle speed
$V_0$ is the initial particle speed
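A minimal numerical sketch of these relations follows (Python). The slip-correction coefficients (1.257, 0.4, 1.1) are one common parameterization for air, and the air properties and unit particle density are textbook values assumed for the example, not values given in the text:

import math

ETA = 1.81e-5    # dynamic viscosity of air, Pa*s (assumed, ~20 C)
MFP = 0.066e-6   # mean free path in air, m (assumed)
RHO_P = 1000.0   # particle density, kg/m^3 (assumed unit-density sphere)
G = 9.81         # gravitational acceleration, m/s^2

def cunningham(d):
    # One common parameterization of the Cunningham correction factor.
    kn = 2.0 * MFP / d
    return 1.0 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

def settling_velocity(d):
    # V_TS = rho_p d^2 g C_c / (18 eta)
    return RHO_P * d ** 2 * G * cunningham(d) / (18.0 * ETA)

def relaxation_time(d):
    # tau = rho_p d^2 C_c / (18 eta)
    return RHO_P * d ** 2 * cunningham(d) / (18.0 * ETA)

for d in (0.01e-6, 0.1e-6, 1.0e-6, 10.0e-6):
    print(f"d = {d * 1e6:5.2f} um   C_c = {cunningham(d):6.2f}   "
          f"V_TS = {settling_velocity(d):.3e} m/s   tau = {relaxation_time(d):.3e} s")

As expected, the slip correction matters most for the smallest particles, for which the no-slip assumption of Stokes' law fails.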
To account for the effect of the shape of non-spherical particles, a correction factor known as the dynamic shape factor is applied to Stokes' law. It is defined as the ratio of the resistive force of the irregular particle to that of a spherical particle with the same volume and velocity:

$$\chi = \frac{F_D}{3\pi \eta V d_e}$$

where:
$\chi$ is the dynamic shape factor and
$d_e$ is the equivalent volume diameter (the diameter of the sphere with the same volume as the irregular particle)
Aerodynamic diameter
The aerodynamic diameter of an irregular particle is defined as the diameter of the spherical particle with a density of 1000 kg/m3 and the same settling velocity as the irregular particle.
Neglecting the slip correction, the particle settles at the terminal velocity proportional to the square of the aerodynamic diameter, $d_a$:

$$V_{TS} = \frac{\rho_0 d_a^2 g}{18\eta}$$

where
$\rho_0$ = standard particle density (1000 kg/m3).
This equation gives the aerodynamic diameter:

$$d_a = d_e \left(\frac{\rho_p}{\chi\,\rho_0}\right)^{1/2}$$
One can apply the aerodynamic diameter to particulate pollutants or to inhaled drugs to predict where in the respiratory tract such particles deposit. Pharmaceutical companies typically use aerodynamic diameter, not geometric diameter, to characterize particles in inhalable drugs.
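Under the slip-free relation above, converting an equivalent volume diameter to an aerodynamic diameter is a one-line computation. The Python sketch below uses an illustrative density and shape factor that are assumptions for the example:

import math

RHO_0 = 1000.0  # standard particle density, kg/m^3

def aerodynamic_diameter(d_e, rho_p, chi=1.0):
    # d_a = d_e * sqrt(rho_p / (chi * rho_0)), neglecting slip correction.
    # d_e: equivalent volume diameter (m); chi: dynamic shape factor (1 for a sphere).
    return d_e * math.sqrt(rho_p / (chi * RHO_0))

# Assumed example: a 2 um mineral-like particle, rho_p = 2650 kg/m^3, chi = 1.36.
print(aerodynamic_diameter(2.0e-6, 2650.0, 1.36))   # ~2.8e-06 m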
Dynamics
The previous discussion focused on single aerosol particles. In contrast, aerosol dynamics explains the evolution of complete aerosol populations. The concentrations of particles will change over time as a result of many processes. External processes that move particles outside a volume of gas under study include diffusion, gravitational settling, and electric charges and other external forces that cause particle migration. A second set of processes internal to a given volume of gas include particle formation (nucleation), evaporation, chemical reaction, and coagulation.
A differential equation called the Aerosol General Dynamic Equation (GDE) characterizes the evolution of the number density of particles in an aerosol due to these processes.
Change in time = Convective transport + Brownian diffusion + gas-particle interactions + coagulation + migration by external forces:

$$\frac{\partial n_i}{\partial t} = -\nabla \cdot (n_i\,\mathbf{q}) + \nabla \cdot (D_p \nabla n_i) + \left(\frac{\partial n_i}{\partial t}\right)_{\mathrm{growth}} + \left(\frac{\partial n_i}{\partial t}\right)_{\mathrm{coag}} - \nabla \cdot (\mathbf{q}_F\,n_i)$$

where:
$n_i$ is the number density of particles of size category $i$
$\mathbf{q}$ is the particle velocity
$D_p$ is the particle Stokes-Einstein diffusivity
$\mathbf{q}_F$ is the particle velocity associated with an external force
Coagulation
As particles and droplets in an aerosol collide with one another, they may undergo coalescence or aggregation. This process leads to a change in the aerosol particle-size distribution, with the mode increasing in diameter as the total number of particles decreases. On occasion, particles may shatter apart into numerous smaller particles; however, this process occurs primarily in particles too large for consideration as aerosols.
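The simplest quantitative picture of this is Smoluchowski's monodisperse coagulation model, dN/dt = -K N^2, whose solution N(t) = N0 / (1 + N0 K t) shows the number concentration falling as collisions merge particles. The Python sketch below uses a rough, assumed coagulation coefficient; the model itself is a textbook idealization rather than something stated in the text above:

N0 = 1.0e12   # initial number concentration, particles per m^3 (assumed)
K = 5.0e-16   # coagulation coefficient, m^3/s (rough assumed value)

def number_concentration(t):
    # Solution of dN/dt = -K N^2 for an initially monodisperse aerosol.
    return N0 / (1.0 + N0 * K * t)

for t in (0.0, 60.0, 3600.0, 86400.0):   # now, one minute, one hour, one day
    print(f"t = {t:8.0f} s   N = {number_concentration(t):.3e} m^-3")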
Dynamics regimes
The Knudsen number of the particle defines three different dynamical regimes that govern the behaviour of an aerosol:

$$K_n = \frac{2\lambda}{d_p}$$

where $\lambda$ is the mean free path of the suspending gas and $d_p$ is the diameter of the particle. For particles in the free molecular regime, $K_n \gg 1$: particles are small compared to the mean free path of the suspending gas. In this regime, particles interact with the suspending gas through a series of "ballistic" collisions with gas molecules. As such, they behave similarly to gas molecules, tending to follow streamlines and diffusing rapidly through Brownian motion. The mass flux equation in the free molecular regime is:

$$I = \frac{\pi a^2 C_A \alpha (P_\infty - P_A)}{k_B T}$$

where $a$ is the particle radius, $P_\infty$ and $P_A$ are the pressures far from the droplet and at the surface of the droplet respectively, $k_B$ is the Boltzmann constant, $T$ is the temperature, $C_A$ is the mean thermal velocity and $\alpha$ is the mass accommodation coefficient. The derivation of this equation assumes constant pressure and constant diffusion coefficient.
Particles are in the continuum regime when $K_n \ll 1$. In this regime, the particles are big compared to the mean free path of the suspending gas, meaning that the suspending gas acts as a continuous fluid flowing round the particle. The molecular flux in this regime is:

$$I = \frac{4\pi a D_{AB} M_A (P_{A\infty} - P_{AS})}{R T}$$

where $a$ is the radius of the particle $A$, $M_A$ is the molecular mass of the particle $A$, $D_{AB}$ is the diffusion coefficient between particles $A$ and $B$, $R$ is the ideal gas constant, $T$ is the temperature (in absolute units like kelvin), and $P_{A\infty}$ and $P_{AS}$ are the pressures at infinite distance and at the surface respectively.
The transition regime contains all the particles in between the free molecular and continuum regimes, or $K_n \approx 1$. The forces experienced by a particle are a complex combination of interactions with individual gas molecules and macroscopic interactions. The semi-empirical equation describing mass flux is:

$$I = I_{cont} \cdot \frac{1 + K_n}{1 + 1.71\,K_n + 1.33\,K_n^2}$$

where $I_{cont}$ is the mass flux in the continuum regime. This formula is called the Fuchs-Sutugin interpolation formula. These equations do not take into account the heat release effect.
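The Python sketch below evaluates the Knudsen number and the Fuchs-Sutugin interpolation factor, i.e. the ratio I/I_cont implied by the formula above; the mean free path of air is an assumed value:

MFP = 0.066e-6   # mean free path of air, m (assumed, room conditions)

def knudsen(d_p):
    return 2.0 * MFP / d_p

def fuchs_sutugin(kn):
    # I / I_cont = (1 + Kn) / (1 + 1.71 Kn + 1.33 Kn^2)
    return (1.0 + kn) / (1.0 + 1.71 * kn + 1.33 * kn ** 2)

for d_p in (0.01e-6, 0.1e-6, 1.0e-6, 10.0e-6):
    kn = knudsen(d_p)
    print(f"d_p = {d_p * 1e6:5.2f} um   Kn = {kn:7.3f}   I/I_cont = {fuchs_sutugin(kn):.3f}")

In the limits, the factor tends to 1 in the continuum regime (Kn → 0) and strongly suppresses the extrapolated continuum flux deep in the free molecular regime.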
Partitioning
Aerosol partitioning theory governs condensation onto and evaporation from an aerosol surface. Condensation of mass causes the mode of the particle-size distribution of the aerosol to increase; conversely, evaporation causes the mode to decrease. Nucleation is the process of forming aerosol mass from the condensation of a gaseous precursor, specifically a vapor. Net condensation of the vapor requires supersaturation, a partial pressure greater than its vapor pressure. This can happen for three reasons:
Lowering the temperature of the system lowers the vapor pressure.
Chemical reactions may increase the partial pressure of a gas or lower its vapor pressure.
The addition of vapor to the system may lower the equilibrium vapor pressure according to Raoult's law.
There are two types of nucleation processes. Gases preferentially condense onto surfaces of pre-existing aerosol particles, known as heterogeneous nucleation. This process causes the diameter at the mode of particle-size distribution to increase with constant number concentration. With sufficiently high supersaturation and no suitable surfaces, particles may condense in the absence of a pre-existing surface, known as homogeneous nucleation. This results in the addition of very small, rapidly growing particles to the particle-size distribution.
Activation
Water coats particles in aerosols, making them activated, usually in the context of forming a cloud droplet (such as natural cloud seeding by aerosols from trees in a forest). Following the Kelvin equation (based on the curvature of liquid droplets), smaller particles need a higher ambient relative humidity to maintain equilibrium than larger particles do. The following formula gives relative humidity at equilibrium:

$$RH = \frac{p_s}{p_0} \times 100\%$$

where $p_s$ is the saturation vapor pressure above a particle at equilibrium (around a curved liquid droplet), $p_0$ is the saturation vapor pressure (flat surface of the same liquid) and $S = p_s/p_0$ is the saturation ratio.

The Kelvin equation for saturation vapor pressure above a curved surface is:

$$\ln\frac{p_s}{p_0} = \frac{2\sigma M}{\rho R T r_p}$$

where $r_p$ is the droplet radius, $\sigma$ the surface tension of the droplet, $\rho$ the density of the liquid, $M$ the molar mass, $T$ the temperature, and $R$ the molar gas constant.
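For water droplets the Kelvin equation can be evaluated directly. The property values in the Python sketch below are standard figures for water near 20 °C, assumed here for illustration:

import math

SIGMA = 0.0728   # surface tension of water, N/m (assumed, ~20 C)
RHO = 998.0      # density of water, kg/m^3
M = 0.018015     # molar mass of water, kg/mol
R = 8.314        # molar gas constant, J/(mol K)
T = 293.15       # temperature, K

def kelvin_saturation_ratio(r_p):
    # S = p_s / p_0 = exp(2 sigma M / (rho R T r_p))
    return math.exp(2.0 * SIGMA * M / (RHO * R * T * r_p))

for r_p in (1.0e-9, 10.0e-9, 100.0e-9, 1.0e-6):
    print(f"r_p = {r_p * 1e9:7.1f} nm   S = {kelvin_saturation_ratio(r_p):.4f}")

Consistent with the statement above, the required equilibrium saturation ratio grows rapidly as the droplet radius shrinks.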
Solution to the general dynamic equation
There are no general solutions to the general dynamic equation (GDE); common methods used to solve the general dynamic equation include:
Moment method
Modal/sectional method,
Quadrature method of moments/Taylor-series expansion method of moments, and
Monte Carlo method.
Detection
Aerosols can be measured in situ or with remote sensing techniques, either ground-based or airborne.
In situ observations
Some available in situ measurement techniques include:
Aerosol mass spectrometer (AMS)
Differential mobility analyzer (DMA)
Electrical aerosol spectrometer (EAS)
Aerodynamic particle sizer (APS)
Aerodynamic aerosol classifier (AAC)
Wide range particle spectrometer (WPS)
Micro-Orifice Uniform Deposit Impactor (MOUDI)
Condensation particle counter (CPC)
Epiphaniometer
Electrical low pressure impactor (ELPI)
Aerosol particle mass-analyser (APM)
Centrifugal Particle Mass Analyser (CPMA)
Remote sensing approach
Remote sensing approaches include:
Sun photometer
Lidar
Imaging spectroscopy
Size selective sampling
Particles can deposit in the nose, mouth, pharynx and larynx (the head airways region), deeper within the respiratory tract (from the trachea to the terminal bronchioles), or in the alveolar region. The location of deposition of aerosol particles within the respiratory system strongly determines the health effects of exposure to such aerosols. This phenomenon led people to invent aerosol samplers that select a subset of the aerosol particles that reach certain parts of the respiratory system.
Examples of these subsets of the particle-size distribution of an aerosol, important in occupational health, include the inhalable, thoracic, and respirable fractions. The fraction that can enter each part of the respiratory system depends on the deposition of particles in the upper parts of the airway. The inhalable fraction of particles, defined as the proportion of particles originally in the air that can enter the nose or mouth, depends on external wind speed and direction and on the particle-size distribution by aerodynamic diameter. The thoracic fraction is the proportion of the particles in ambient aerosol that can reach the thorax or chest region. The respirable fraction is the proportion of particles in the air that can reach the alveolar region. To measure the respirable fraction of particles in air, a pre-collector is used with a sampling filter. The pre-collector excludes particles as the airways remove particles from inhaled air. The sampling filter collects the particles for measurement. It is common to use cyclonic separation for the pre-collector, but other techniques include impactors, horizontal elutriators, and large pore membrane filters.
Two alternative size-selective criteria, often used in atmospheric monitoring, are PM10 and PM2.5. PM10 is defined by ISO as particles which pass through a size-selective inlet with a 50% efficiency cut-off at 10 μm aerodynamic diameter and PM2.5 as particles which pass through a size-selective inlet with a 50% efficiency cut-off at 2.5 μm aerodynamic diameter. PM10 corresponds to the "thoracic convention" as defined in ISO 7708:1995, Clause 6; PM2.5 corresponds to the "high-risk respirable convention" as defined in ISO 7708:1995, 7.1. The United States Environmental Protection Agency replaced the older standards for particulate matter based on Total Suspended Particulate with another standard based on PM10 in 1987 and then introduced standards for PM2.5 (also known as fine particulate matter) in 1997.
See also
Aerogel
Aeroplankton
Aerosol transmission
Bioaerosol
Deposition (Aerosol physics)
Global dimming
Nebulizer
Monoterpene
Stratospheric aerosol injection
References
Sources
External links
International Aerosol Research Assembly
American Association for Aerosol Research
NIOSH Manual of Analytical Methods (see chapters on aerosol sampling)
Colloidal chemistry
Colloids
Fluid dynamics
Liquids
Physical chemistry
Pollution
Solids | Aerosol | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 5,026 | [
"Colloidal chemistry",
"Applied and interdisciplinary physics",
"Chemical engineering",
"Phases of matter",
"Colloids",
"Surface science",
"Piping",
"Aerosols",
"Chemical mixtures",
"Condensed matter physics",
"nan",
"Solids",
"Fluid dynamics",
"Physical chemistry",
"Matter",
"Liquids"... |
57,766 | https://en.wikipedia.org/wiki/Charlie%20and%20the%20Chocolate%20Factory | Charlie and the Chocolate Factory is a 1964 children's novel by British author Roald Dahl. The story features the adventures of young Charlie Bucket inside the chocolate factory of eccentric chocolatier Willy Wonka.
The story was originally inspired by Roald Dahl's experience of chocolate companies during his schooldays at Repton School in Derbyshire. Cadbury would often send test packages to the schoolchildren in exchange for their opinions on the new products. At that time (around the 1920s), Cadbury and Rowntree's were England's two largest chocolate makers and they each often tried to steal trade secrets by sending spies, posing as employees, into the other's factory—inspiring Dahl's idea for the recipe-thieving spies (such as Wonka's rival Slugworth) depicted in the book. Because of this, both companies became highly protective of their chocolate-making processes. It was a combination of this secrecy and the elaborate, often gigantic, machines in the factory that inspired Dahl to write the story.
Charlie and the Chocolate Factory is frequently ranked among the most popular works in children's literature. In 2012, Charlie Bucket brandishing a Golden Ticket appeared in a Royal Mail first class stamp in the UK. The novel was first published in the U.S. by Alfred A. Knopf, Inc. in 1964 and in the U.K. by George Allen & Unwin 11 months later. The book's sequel, Charlie and the Great Glass Elevator, was written by Dahl in 1971 and published in 1972. Dahl had also planned to write a third book in the series but never finished it.
The book has also been adapted into two major motion pictures: Willy Wonka & the Chocolate Factory in 1971 and Charlie and the Chocolate Factory in 2005. A stand-alone film exploring Willy Wonka's origins titled Wonka was released in 2023. The book has spawned a media franchise with multiple video games, theatrical productions and merchandise.
Plot
Charlie Bucket lives in poverty with his parents and grandparents in a town which is home to a world-famous chocolate factory. One day, Charlie's bedridden Grandpa Joe tells him about Willy Wonka, the factory's eccentric owner, and all of his fantastical candies. Rival chocolatiers sent in spies to steal his recipes, forcing Wonka to close the factory and disappear. He reopened the factory years later, but the gates remain locked, and nobody knows who is providing the factory with its workforce.
The next day, the newspaper announces that Wonka has hidden five Golden Tickets in Wonka Bars; the finders of these tickets will be invited to come and tour the factory. The first four tickets are found by gluttonous Augustus Gloop, spoiled Veruca Salt, compulsive gum-chewer Violet Beauregarde, and television addict Mike Teavee. One day, Charlie buys two Wonka Bars with some money he found in the snow. When he opens the second, Charlie discovers that the second bar he bought contains the fifth and final ticket. Upon hearing the news, Grandpa Joe regains his mobility and volunteers to accompany Charlie to the factory.
On the day of the tour, Wonka welcomes the five children and their parents inside the factory, a wonderland of confectionery creations that defy logic. They also meet the Oompa-Loompas, a race of impish humanoids who help him operate the factory as a thanks for his rescuing them from a land of dangerous monsters. During the tour, the other children besides Charlie give in to their impulses and are ejected from the tour in darkly comical ways: Augustus falls into the Chocolate River and is sucked up a pipe, Violet turns blue and inflates into a giant human blueberry after chewing an experimental stick of three-course dinner gum ending with a blueberry pie flavor, Veruca and her parents fall down a garbage chute after the former tries to capture one of the nut-testing squirrels, and Mike is shrunk down to the size of a chocolate bar after misusing a machine that sends chocolate by television despite Wonka's warnings. The Oompa-Loompas sing about the children's misbehaviour each time disaster strikes.
With only Charlie remaining, Wonka congratulates him for "winning" the factory. Wonka explains that the whole tour was designed to help him find a worthy heir to his business, and Charlie was the only child whose inherent genuineness passed the test. They ride the Great Glass Elevator and watch the other four children leave the factory before flying to Charlie's house, where Wonka invites the entire Bucket family to come and live with him in the factory.
Characters
Publication
Race, editing, and censorship
Dahl's widow said that Charlie was originally written as "a little black boy", though this is thought to be a falsehood. Dahl's biographer said the change to a white character was driven by Dahl's agent, who thought a black Charlie would not appeal to readers.
In the first published edition, the Oompa-Loompas were described as African pygmies, and were drawn this way in the original printed edition. After the announcement of a film adaptation sparked a statement from the NAACP, which expressed concern that the transportation of Oompa-Loompas to Wonka's factory resembled slavery, Dahl found himself sympathising with their concerns and published a revised edition. In this edition, as well as the subsequent sequel, the Oompa-Loompas were drawn as being white and appearing similar to hippies, and the references to Africa were deleted.
In 2023, publisher Puffin made more than eighty additional changes to the original text of the book, such as: removing every occurrence of the word fat (including referring to Augustus Gloop as "enormous" rather than "enormously fat" and greatly changing the words of his song); removing most references to the Oompa-Loompa's diminutive size and physical appearance and omitting descriptions of them living in trees and wearing deerskins and leaves; removing or changing the words mad, crazy, and queer; omitting many references to Mike Teavee's toy guns; and removing references to corporal punishment (such as changing "She needs a really good spanking" to "She needs a really good talking to" and "She wants a good kick in the pants" to "She needs to learn some manners").
Unused chapters
Various unused and draft material from Dahl's early versions of the novel have been found. In the initial, unpublished drafts of Charlie and the Chocolate Factory nine golden tickets were distributed to tour Willy Wonka's secret chocolate factory and the children faced more rooms and more temptations to test their self-control. Some of the names of the children cut from the final work include:
Clarence Crump, Bertie Upside, and Terence Roper (who overindulge in Warming Candies)
Elvira Entwhistle (lost down a rubbish chute, renamed Veruca Salt)
Violet Glockenberry (renamed Strabismus and finally Beauregarde)
Miranda Grope and Augustus Pottle (lost up a chocolate pipe, combined into the character Augustus Gloop)
Miranda Mary Piker (renamed from Miranda Grope, became the subject of Spotty Powder)
Marvin Prune (a conceited boy involved in The Children's-Delight Room)
Wilbur Rice and Tommy Troutbeck, the subjects of The Vanilla Fudge Room
Herpes Trout (renamed Mike Teavee)
"Spotty Powder"
"Spotty Powder" was first published as a short story in 1973. In 1998, it was included in the children's horror anthology Scary! Stories That Will Make You Scream edited by Peter Haining. The brief note before the story described the story as having been left out of Charlie and the Chocolate Factory due to an already brimming number of misbehaving children characters in the tale. In 2005, The Times reprinted "Spotty Powder" as a "lost" chapter, saying that it had been found in Dahl's desk, written backwards in mirror writing (the same way that Leonardo da Vinci wrote in his journals). Spotty Powder looks and tastes like sugar, but causes bright red pox-like spots to appear on faces and necks five seconds after ingestion, so children who eat Spotty Powder do not have to go to school. The spots fade on their own a few hours later. After learning the purpose of Spotty Powder, the humourless, smug Miranda Piker and her equally humourless father (a schoolmaster) are enraged and disappear into the Spotty Powder room to sabotage the machine. Soon after entering, they are heard making what Mrs. Piker interprets as screams. Mr. Wonka assures her (after making a brief joke where he claims that headmasters are one of the occasional ingredients) that it is only laughter. Exactly what happens to them is not revealed in the extract.
In an early draft, sometime after being renamed from Miranda Grope to Miranda Piker, but before "Spotty Powder" was written, she falls down the chocolate waterfall and ends up in the Peanut-Brittle Mixer. This results in the "rude and disobedient little kid" becoming "quite delicious." This early draft poem was slightly rewritten as an Oompa-Loompa song in the lost chapter, which now puts her in the "Spotty-Powder mixer" and instead of being "crunchy and ... good [peanut brittle]" she is now "useful [for truancy] and ... good."
"The Vanilla Fudge Room"
In 2014, The Guardian revealed that Dahl had removed another chapter ("The Vanilla Fudge Room") from an early draft of the book. The Guardian reported the now-eliminated passage was "deemed too wild, subversive and insufficiently moral for the tender minds of British children almost 50 years ago." In what was originally chapter five in that version of the book, Charlie goes to the factory with his mother instead of Grandpa Joe as originally published. At this point, the chocolate factory tour is down to eight kids, including Tommy Troutbeck and Wilbur Rice. After the entire group climbs to the top of the titular fudge mountain, eating vanilla fudge along the way, Troutbeck and Rice decide to take a ride on the wagons carrying away chunks of fudge. The wagons take them directly to the Pounding And Cutting Room, where the fudge is reformed and sliced into small squares for retail sale. Wonka states the machine is equipped with "a large wire strainer ... which is used specially for catching children before they fall into the machine" adding that "It always catches them. At least it always has up to now."
The chapter dates back to an early draft with ten golden tickets, including one each for Miranda Grope and Augustus Pottle, who fell into the chocolate river prior to the events of "Fudge Mountain". Augustus Pottle was routed to the Chocolate Fudge Room, not the Vanilla Fudge Room explored in this chapter, and Miranda Grope ended up in the Fruit and Nuts Room.
"The Warming Candy Room"
Also in 2014, Vanity Fair published a plot summary of "The Warming Candy Room", wherein three boys eat too many "warming candies" and end up "bursting with heat."
The Warming Candy Room is dominated by a boiler, which heats a scarlet liquid. The liquid is dispensed one drop at a time, where it cools and forms a hard shell, storing the heat and "by a magic process ... the hot heat changes into an amazing thing called 'cold heat.'" After eating a single warming candy, one could stand naked in the snow comfortably. This is met with predictable disbelief from Clarence Crump, Bertie Upside, and Terence Roper, who proceed to eat at least 100 warming candies each, resulting in profuse perspiration. The three boys and their families discontinue the tour after they are taken to cool off "in the large refrigerator for a few hours."
"The Children's-Delight Room"
Dahl originally planned for a child called Marvin Prune to be included. He submitted the excised chapter regarding Prune to The Horn Book Review in the early 1970s. Rather than publish the chapter, Horn Book responded with a critical essay by novelist Eleanor Cameron, who called Charlie and the Chocolate Factory “one of the most tasteless books ever written for children”.
Reception
In a 2006 list for the Royal Society of Literature, author J. K. Rowling (author of the Harry Potter books) named Charlie and the Chocolate Factory among her top ten books that every child should read. A fan of the book since childhood, film director Tim Burton wrote: "I responded to Charlie and the Chocolate Factory because it respected the fact that children can be adults."
A 2004 study found that it was a common read-aloud book for fourth-graders in schools in San Diego County, California. A 2012 survey by the University of Worcester determined that it was one of the most common books that UK adults had read as children, after Alice's Adventures in Wonderland, The Lion, the Witch and the Wardrobe, and The Wind in the Willows.
Groups who have praised the book include:
New England Round Table of Children's Librarians Award (US, 1972)
Surrey School Award (UK, 1973)
Read Aloud BILBY Award (Australia, 1992)
Millennium Children's Book Award (UK, 2000)
The Big Read, ranked number 35 in a BBC survey of the British public to identify the "Nation's Best-loved Novel" (UK, 2003)
National Education Association, listed as one of "Teachers' Top 100 Books for Children" based on a poll (US, 2007)
School Library Journal, ranked 61 among all-time children's novels (US, 2012)
In the 2012 survey published by SLJ, a monthly with primarily US audience, Charlie was the second of four books by Dahl among their Top 100 Chapter Books, one more than any other writer. Time magazine in the US included the novel in its list of the 100 Best Young-Adult Books of All Time; it was one of three Dahl novels on the list, more than any other author. In 2016 the novel topped the list of Amazon's best-selling children's books by Dahl in Print and on Kindle. In 2023, the novel was ranked by BBC at no. 18 in their poll of "The 100 greatest children's books of all time".
Although the book has always been popular and considered a children's classic by many literary critics, a number of prominent individuals have spoken unfavourably of the novel over the years. Children's novelist and literary historian John Rowe Townsend has described the book as "fantasy of an almost literally nauseating kind" and accused it of "astonishing insensitivity" regarding the original portrayal of the Oompa-Loompas as African black pygmies, although Dahl did revise this in later editions. Another novelist, Eleanor Cameron, compared the book to the sweets that form its subject matter, commenting that it is "delectable and soothing while we are undergoing the brief sensory pleasure it affords but leaves us poorly nourished with our taste dulled for better fare."
Ursula K. Le Guin wrote in support of this assessment in a letter to The Horn Book Review, saying that her own daughter would turn "quite nasty" upon finishing the book. Dahl responded to Cameron's criticisms by noting that the classics that she had cited would not be well received by contemporary children.
Adaptations
Charlie and the Chocolate Factory has frequently been adapted for other media, including games, radio, the screen, and stage, most often as plays or musicals for children – often titled Willy Wonka or Willy Wonka, Jr. and almost always featuring musical numbers by all the main characters (Wonka, Charlie, Grandpa Joe, Violet, Veruca, etc.); many of the songs are revised versions from the 1971 film.
Film
The book was first made into a feature film as a musical, titled Willy Wonka & the Chocolate Factory (1971), directed by Mel Stuart, produced by David L. Wolper, and starring Gene Wilder as Willy Wonka, character actor Jack Albertson as Grandpa Joe, and Peter Ostrum as Charlie Bucket, with music by Leslie Bricusse and Anthony Newley. Dahl was credited for writing the screenplay, but David Seltzer was brought in by Stuart and Wolper to make changes against Dahl's wishes, leaving his original adaptation, in one critic's opinion, "scarcely detectable". Amongst other things, Dahl was unhappy with the foregrounding of Wonka over Charlie, and disliked the musical score. Because of this, Dahl disowned the film. The film had an estimated budget of $2.9 million but grossed only $4 million and was considered a box-office disappointment, though it received positive reviews from critics. Home video and DVD sales, as well as repeated television airings, resulted in the film subsequently becoming a cult classic. Concurrently with the 1971 film, the Quaker Oats Company introduced a line of candies whose marketing uses the book's characters and imagery.
Warner Bros. and the Dahl estate reached an agreement in 1998 to produce another film version of Charlie and the Chocolate Factory, with the Dahl family receiving total artistic control. The project languished in development hell until Tim Burton signed on to direct in 2003. The film, titled Charlie and the Chocolate Factory, starred Johnny Depp as Willy Wonka. It was released in 2005 to positive reviews and massive box office returns, becoming the eighth-highest-grossing film of the year.
In October 2016, Variety reported that Warner Bros. had acquired the rights to the Willy Wonka character from the Roald Dahl Estate and would be planning a new film centered on the eccentric character with David Heyman producing. In February 2018, Paul King entered final negotiations to direct the film. In May 2021, it was reported that the film would be a musical titled Wonka, with Timothée Chalamet playing a younger version of the titular character in an origin story. King was confirmed as director and co-writer along with comedian Simon Farnaby; the film was released globally in December 2023.
Other adaptations
In 1983, the BBC produced an adaptation for Radio 4. Titled Charlie, it aired in seven episodes between 6 February and 20 March.
Also in 1983, a miniseries titled Kalle och chokladfabriken was aired on Swedish television. The series consisted of highly detailed static illustrations that were accompanied by an unseen narrator reading an adapted translation of the novel, in a manner similar to the BBC television series Jackanory.
In 1985, the Charlie and the Chocolate Factory video game was released for the ZX Spectrum by developer Soft Options and publisher Hill MacGibbon.
A video game, Charlie and the Chocolate Factory, based on Burton's adaptation, was released on 11 July 2005.
On 1 April 2006, the British theme park Alton Towers opened a family attraction themed around the story. The ride featured a boat section, where guests travel around the chocolate factory in bright pink boats on a chocolate river. In the final stage of the ride, guests enter one of two glass elevators, where they join Willy Wonka as they travel around the factory, eventually shooting up and out through the glass roof. Running for nine years, the ride was closed for good at the end of the 2015 season.
The Estate of Roald Dahl sanctioned an operatic adaptation called The Golden Ticket. It was written by American composer Peter Ash and British librettist Donald Sturrock. The Golden Ticket has completely original music and was commissioned by American Lyric Theater, Lawrence Edelson (producing artistic director), and Felicity Dahl. The opera received its world premiere at Opera Theatre of Saint Louis on 13 June 2010, in a co-production with American Lyric Theater and Wexford Festival Opera.
A musical based on the novel, titled Charlie and the Chocolate Factory, premiered at the West End's Theatre Royal, Drury Lane in May 2013 and officially opened on 25 June. The show was directed by Sam Mendes, with new songs by Marc Shaiman and Scott Wittman, and stars Douglas Hodge as Willy Wonka. The production broke records for weekly ticket sales.
In July 2017, an animated film Tom and Jerry: Willy Wonka and the Chocolate Factory was released in which the titular cat and mouse were put into the story of the 1971 film.
On 27 November 2018, Netflix was revealed to be developing an "animated series event" based on Roald Dahl's books, which will include a television series based on Charlie and the Chocolate Factory and the novel's sequel Charlie and the Great Glass Elevator. On 5 March 2020, it was reported that Taika Waititi will write, direct, and executive-produce both the series and a spin-off animated series focused on the Oompa Loompas.
In 2021, Melbourne based comedians Big Big Big released a six part podcast called The Candyman that satirically presents events at the chocolate factory in a true crime genre.
An unlicensed attraction, "Willy's Chocolate Experience", opened on 24 February 2024 in Glasgow and closed within a day. The event was advertised using highly misleading AI-generated artwork, promising features such as "an enchanted garden, an Imagination Lab, a Twilight Tunnel, and captivating entertainment", but instead contained a low-effort mock-up of a chocolate factory in a mostly empty warehouse. The event spawned many internet memes and featured factory tours, offered by several actors playing Willy Wonka, built around a story in which Wonka would defeat an "evil chocolate maker who lives in the walls" called "The Unknown". According to actor Paul Connell, who portrayed Willy Wonka in the tours, his script contained "15 pages of AI-generated gibberish". Despite the high entrance fee and the promised chocolate theme of the event, guests were only given a single jellybean and a cup of lemonade, and the misleading advertisements led to the police being called shortly before the event was shut down.
Animated series
On 27 November 2018, Netflix and The Roald Dahl Story Company jointly announced that Netflix would be producing an animated series based on Dahl's books, including Charlie and the Chocolate Factory, Matilda, The BFG, The Twits, and other titles. Production commenced on the first of the Netflix Dahl animated series in 2019. On 5 March 2020, Variety announced that Taika Waititi was partnering with Netflix on a pair of animated series — one based on the world of Charlie and the Chocolate Factory and another based on the Oompa-Loompa characters. "The shows will retain the quintessential spirit and tone of the original story while building out the world and characters far beyond the pages of the Dahl book for the very first time," Netflix said. On 23 February 2022, Mikros Animation revealed that it would be producing a new collaboration with Netflix, announced as Charlie and the Chocolate Factory. The long-format animated event series is based on the 1964 novel and is written, directed and executive produced by Waititi.
Audiobook
The book has been recorded a number of times:
Roald Dahl himself narrated an abridged version of the book in 1975 for Caedmon Records (CDL 51476).
In 2002, Monty Python member Eric Idle narrated the audiobook version of the American Edition of Charlie and the Chocolate Factory on Harper Childrens Audio ().
In 2004, James Bolam narrated an abridged recording of the story for Puffin Audiobooks ().
Douglas Hodge, who played Willy Wonka in the London production of the stage musical, narrated the UK Edition of the audiobook for Penguin Audio in 2013 (), and the title was later released on Audible.
Editions
Charlie and the Chocolate Factory has undergone numerous editions and been illustrated by numerous artists.
Books
1964, OCLC 9318922 (hardcover, Alfred A. Knopf, Inc., original, first US edition, illustrated by Joseph Schindelman)
1967, (hardcover, George Allen & Unwin, original, first UK edition, illustrated by Faith Jaques)
1973, (hardcover, revised Oompa Loompa edition)
1976, (paperback)
1980, (paperback, illustrated by Joseph Schindelman)
1984, (UK paperback, illustrated by Faith Jaques)
1985, (paperback, illustrated by Michael Foreman)
1987, (hardcover)
1988, (prebound)
1992, (library binding, reprint)
1995 (illustrated by Quentin Blake)
1998, (paperback)
2001, (hardcover)
2001, (illustrated by Quentin Blake)
2002, (audio CD read by Eric Idle)
2003, (library binding)
2004, (paperback)
(hardcover)
2011, (paperback), Penguin Classics Deluxe Edition, cover by Ivan Brunetti
2014, (hardcover, Penguin UK/Modern Classics, 50th anniversary edition)
2014, (hardcover, Penguin UK/Puffin celebratory golden edition, illustrated by Sir Quentin Blake)
2014, (double-cover paperback)
50th anniversary cover controversy
The cover photo of the 50th anniversary edition, published by Penguin Modern Classics for sale in the UK and aimed at the adult market, received widespread commentary and criticism. The cover is a photo of a heavily made-up young girl seated on her mother's knee and wearing a doll-like expression, taken by the photographers Sofia Sanchez and Mauro Mongiello as part of a photo shoot for a 2008 French magazine fashion article titled "Mommie Dearest." In addition to writing that "the image seemingly has little to do with the beloved children's classic", reviewers and commentators in social media (such as posters on the publisher's Facebook page) have said the art evokes Lolita, Valley of the Dolls, and JonBenet Ramsey; looks like a scene from Toddlers & Tiaras; and is "misleading," "creepy," "sexualised," "grotesque," "misjudged on every level," "distasteful and disrespectful to a gifted author and his work," "pretentious," "trashy", "outright inappropriate," "terrifying," "really obnoxious," and "weird & kind of paedophilic."
The publisher explained its objective in a blog post accompanying the announcement about the jacket art: "This new image . . . looks at the children at the center of the story, and highlights the way Roald Dahl's writing manages to embrace both the light and the dark aspects of life." Additionally, Penguin Press's Helen Conford told The Bookseller: "We wanted something that spoke about the other qualities in the book. It's a children's story that also steps outside children's and people aren't used to seeing Dahl in that way." She continued: "[There is] a lot of ill feeling about it, I think because it's such a treasured book and a book which isn't really a 'crossover book'." As she acknowledged: "People want it to remain as a children's book."
The New Yorker's Margaret Talbot described this "strangely but tellingly misbegotten" cover design thus: "The image is a photograph, taken from a French fashion shoot, of a glassy-eyed, heavily made-up little girl. Behind her sits a mother figure, stiff and coiffed, casting an ominous shadow. The girl, with her long, perfectly waved platinum-blond hair and her pink feather boa, looks like a pretty and inert doll." The article continues: "And if the Stepford daughter on the cover is meant to remind us of Veruca Salt or Violet Beauregarde, she doesn't: those badly behaved squirts are bubbling over with rude life." Moreover, writes Talbot, "The Modern Classics cover has not a whiff of this validation of childish imagination; instead, it seems to imply a deviant adult audience."
References
External links
Official Roald Dahl website
The Willy Wonka Candy Company
Deleted chapters
"Fudge Mountain":
"Fudge Mountain":
"Spotty Powder":
"The Warming Candy Room":
1964 British novels
Alfred A. Knopf books
BILBY Award–winning works
British children's novels
British fantasy novels
British novels adapted into films
Fictional food and drink
Fiction about size change
British novels adapted into operas
British children's books
British novels adapted into plays
Novels adapted into radio programs
Novels adapted into video games
Novels about dysfunctional families
1964 children's books
Children's fantasy novels
Comedy novels
Obscenity controversies in literature
Children's books set in factories
Novels set in factories | Charlie and the Chocolate Factory | [
"Physics",
"Mathematics"
] | 5,946 | [
"Fiction about size change",
"Quantity",
"Physical quantities",
"Size"
] |
57,794 | https://en.wikipedia.org/wiki/Fractal%20antenna | A fractal antenna is an antenna that uses a fractal, self-similar design to maximize the effective length, or increase the perimeter (on inside sections or the outer structure), of material that can receive or transmit electromagnetic radiation within a given total surface area or volume.
Such fractal antennas are also referred to as multilevel and space filling curves, but the key aspect lies in their repetition of a motif over two or more scale sizes, or "iterations". For this reason, fractal antennas are very compact, multiband or wideband, and have useful applications in cellular telephone and microwave communications.
A fractal antenna's response differs markedly from traditional antenna designs, in that it is capable of operating with good-to-excellent performance at many different frequencies simultaneously. Normally, standard antennas have to be "cut" for the frequency for which they are to be used—and thus the standard antennas only work well at that frequency.
In addition, the fractal nature of the antenna shrinks its size, without the use of any extra components such as inductors or capacitors.
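To make the "repetition of a motif over two or more scale sizes" concrete, the following Python sketch generates the vertices of a Koch curve, one of the self-similar shapes used for fractal antenna elements. It is a geometric illustration only, not an antenna simulation:

import cmath

def koch(p0, p1, n):
    # Vertices of a Koch curve from p0 to p1 (complex numbers) after n iterations.
    if n == 0:
        return [p0, p1]
    d = (p1 - p0) / 3.0
    a, b = p0 + d, p0 + 2.0 * d
    tip = a + d * cmath.exp(1j * cmath.pi / 3.0)   # apex of the outward bump
    pts = []
    for q0, q1 in ((p0, a), (a, tip), (tip, b), (b, p1)):
        pts.extend(koch(q0, q1, n - 1)[:-1])       # drop duplicated joint points
    pts.append(p1)
    return pts

curve = koch(0.0 + 0.0j, 1.0 + 0.0j, 3)
print(len(curve))        # 4^3 = 64 segments, so 65 vertices
print((4.0 / 3.0) ** 3)  # conductor length grows by (4/3)^n in the same footprint

Each iteration multiplies the conductor length by 4/3 while the overall footprint stays fixed, illustrating how a fractal element packs electrical length into a small area.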
Log-periodic antennas
Log-periodic antennas are arrays invented in 1952 and commonly seen as TV antennas. This was long before Mandelbrot coined the word fractal in 1975. Some authors (for instance Cohen) consider log-periodic antennas to be an early form of fractal antenna due to their infinite self similarity at all scales. However, they have a finite length even in the theoretical limit with an infinite number of elements and therefore do not have a fractal dimension that exceeds their topological dimension – which is one way of defining fractals. More typically, (for instance Pandey) authors treat them as a separate but related class of antenna.
Performance
Antenna elements (as opposed to antenna arrays, which are usually not included as fractal antennas) made from self-similar shapes were first created by Nathan Cohen, then a professor at Boston University, starting in 1988. Cohen's efforts with a variety of fractal antenna designs were first published in 1995, which marked the inaugural scientific publication on fractal antennas.
Many fractal element antennas use the fractal structure as a virtual combination of capacitors and inductors. This makes the antenna so that it has many different resonances, which can be chosen and adjusted by choosing the proper fractal design. This complexity arises because the current on the structure has a complex arrangement caused by the inductance and self capacitance. In general, although their effective electrical length is longer, the fractal element antennas are themselves physically smaller, again due to this reactive loading.
Thus, fractal element antennas are shrunken compared to conventional designs and do not need additional components, assuming the structure happens to have the desired resonant input impedance. In general, the fractal dimension of a fractal antenna is a poor predictor of its performance and application. Not all fractal antennas work well for a given application or set of applications. Computer search methods and antenna simulations are commonly used to identify which fractal antenna designs best meet the needs of the application.
Studies during the 2000s showed advantages of the fractal element technology in real-life applications, such as RFID
and cell phones.
Fractals have been used commercially in antennas since the 2010s.
Their advantages are good multiband performance, wide bandwidth, and small area.
The gain with small size results from constructive interference with multiple current maxima, afforded by the electrically long structure in a small area.
Some researchers have disputed that fractal antennas have superior performance. S.R. Best (2003) observed "that antenna geometry alone, fractal or otherwise, does not uniquely determine the electromagnetic properties of the small antenna".
Hansen & Collin (2011) reviewed many papers on fractal antennas and concluded that they offer no advantage over fat dipoles, loaded dipoles, or simple loops, and that non-fractals are always better.
Balanis (2011) reported on several fractal antennas and found them equivalent in performance to the electrically small antennas they were compared to.
Log periodics, a form of fractal antenna, have their electromagnetic characteristics uniquely determined by geometry, via an opening angle.
Frequency invariance and Maxwell's equations
One different and useful attribute of some fractal element antennas is their self-scaling aspect. In 1957, V.H. Rumsey presented results that angle-defined scaling was one of the underlying requirements to make antennas invariant (have same radiation properties) at a number, or range, of frequencies. Work by Y. Mushiake in Japan starting in 1948 demonstrated similar results of frequency independent antennas having self-complementarity.
It was believed that antennas had to be defined by angles for this to be true, but in 1999 it was discovered that self-similarity was one of the underlying requirements to make antennas frequency and bandwidth invariant. In other words, the self-similar aspect was the underlying requirement, along with origin symmetry, for frequency independence. Angle-defined antennas are self-similar, but other self-similar antennas are frequency independent although not angle-defined.
This analysis, based on Maxwell's equations, showed fractal antennas offer a closed-form and unique insight into a key aspect of electromagnetic phenomena. To wit: the invariance property of Maxwell's equations. This is now known as the Hohlfeld-Cohen-Rumsey (HCR) Principle. Mushiake's earlier work on self complementarity was shown to be limited to impedance smoothness, as expected from Babinet's Principle, but not frequency invariance.
Other uses
In addition to their use as antennas, fractals have also found application in other antenna system components, including loads, counterpoises, and ground planes.
Fractal inductors and fractal tuned circuits (fractal resonators) were also discovered and invented simultaneously with fractal element antennas. An emerging example of such is in metamaterials. A recent invention demonstrates using close-packed fractal resonators to make the first wideband metamaterial invisibility cloak at microwave frequencies.
Fractal filters (a type of tuned circuit) are another example where the superiority of the fractal approach for smaller size and better rejection has been proven.
As fractals can be used as counterpoises, loads, ground planes, and filters, all parts that can be integrated with antennas, they are considered parts of some antenna systems and thus are discussed in the context of fractal antennas.
See also
Waveguide (electromagnetism)
References
External links
How to make a fractal antenna for HDTV or DTV
CPW-fed H-tree fractal antenna for WLAN, WIMAX, RFID, C-band, HiperLAN, and UWB applications
Video of a fractal antenna monopole using fractal metamaterials
Radio frequency antenna types
Antennas (radio)
Fractals | Fractal antenna | [
"Mathematics"
] | 1,451 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical objects",
"Fractals",
"Mathematical relations"
] |
57,819 | https://en.wikipedia.org/wiki/The%20Open%20Group | The Open Group is a global consortium that seeks to "enable the achievement of business objectives" by developing "open, vendor-neutral technology standards and certifications." It has 900+ member organizations and provides a number of services, including strategy, management, innovation and research, standards, certification, and test development. It was established in 1996 when X/Open merged with the Open Software Foundation.
The Open Group is the certifying body for the UNIX trademark, and publishes the Single UNIX Specification technical standard, which extends the POSIX standards. The Open Group also develops and manages the TOGAF standard, which is an industry standard enterprise architecture framework.
Members
The 900+ members include a range of technology vendors and buyers as well as government agencies, including, for example, Capgemini, Fujitsu, Hewlett Packard Enterprise, Orbus Software, IBM, Huawei, the United States Department of Defense and NASA. There is no obligation on product developers or vendors to adopt the standards developed by the association.
Platinum members:
DXC Technology, United States
Fujitsu, Japan
Huawei Technologies, China
IBM, United States
Intel, United States
OpenText, Canada
Shell, Netherlands
History
By the early 1990s, the major UNIX system vendors had begun to realize that the standards rivalries (often called the "Unix wars") were causing all participants more harm than good, leaving the UNIX industry open to emerging competition from Microsoft. The COSE initiative in 1993 can be considered the first unification step, and the merger of the Open Software Foundation (OSF) and X/Open in 1996 the ultimate step, in ending those skirmishes. OSF had previously merged with Unix International in 1994, meaning that the new entity effectively represented all elements of the Unix community of the time.
In January 1997, the responsibility for the X Window System was transferred to The Open Group from the defunct X Consortium. In 1999, X.Org was formed to manage the X Window System, with management services provided by The Open Group. The X.Org members made a number of releases up to and including X11R6.8 while The Open Group provided management services. In 2004, X.Org and The Open Group worked together to establish the newly formed X.Org Foundation which then took control of the x.org domain name, and the stewardship of the X Window System.
Programs
Certification
Key services of The Open Group are certification programs, including certification for products and best practices: POSIX, UNIX, and O-TTPS.
The Open Group offers certifications for technology professionals. In addition to TOGAF certification, which covers tools, services, and people, The Open Group also administers the following experience-based Professional Certifications: Certified Architect (Open CA), Certification Program Accreditation, Certified Data Scientist (Open CDS), Certified Technical Specialist (Open CTS), and Certified Trusted Technology Practitioner (Open CTTP). The Open Group also offers certification for ArchiMate tools and people, as well as people certification for Open FAIR and IT4IT, standards of The Open Group.
Collaboration Services
The Open Group also provides a range of services, from initial setup and ongoing operational support to collaboration, standards and best practices development, and assistance with market impact activities. It assists organizations with setting business objectives, strategy and procurement, and also provides certification and test development. This includes services to government agencies, suppliers, and companies or organizations set up by governments.
Inventions and standards
The ArchiMate Technical standard
The ArchiMate Exchange File Format standard
The Open Trusted Technology Provider Standard (O-TTPS)
The Call Level Interface (the basis for ODBC)
The Common Desktop Environment (CDE)
The Distributed Computing Environment (the basis for DCOM)
The Distributed Relational Database Architecture (DRDA)
The Future Airborne Capability Environment (FACE) Technical standard
The Motif GUI widget toolkit (used in CDE)
The Open Process Automation Standard (O-PAS Standard)
The Open Group Service Integration Maturity Model (OSIMM)
The Open Information Security Maturity Model (O-ISM3)
The Single UNIX Specification (SUS)
The Service-Oriented Architecture (SOA) Source Book
TOGAF (Enterprise Architecture Framework)
The Application Response Measurement (ARM) standard
The Common Manageability Programming Interface (CMPI) standard
The Universal Data Element Framework (UDEF) standard
The XA Specification
See also
Joint Inter-Domain Management
References
External links
POSIX
Standards organizations
Technology consortia
Unix
Unix standards
X Window System | The Open Group | [
"Technology"
] | 911 | [
"Computer standards",
"POSIX",
"Unix standards"
] |
57,821 | https://en.wikipedia.org/wiki/Wireless%20Markup%20Language | Wireless Markup Language (WML), based on XML, is an obsolete markup language intended for devices that implement the Wireless Application Protocol (WAP) specification, such as mobile phones. It provides navigational support, data input, hyperlinks, text and image presentation, and forms, much like HTML (Hypertext Markup Language). It preceded the use of other markup languages used with WAP, such as XHTML and HTML itself, which achieved dominance as processing power in mobile devices increased.
WML history
Building on Openwave's HDML, Nokia's "Tagged Text Markup Language" (TTML) and Ericsson's proprietary markup language for mobile content, the WAP Forum created the WML 1.1 standard in 1998. WML 2.0 was specified in 2001, but has not been widely adopted. It was an attempt at bridging WML and XHTML Basic before the WAP 2.0 spec was finalized. In the end, XHTML Mobile Profile became the markup language used in WAP 2.0. The newest WML version in active use is 1.3.
The first company to launch a public WML site was Dutch mobile phone network operator Telfort, in October 1999; Telfort was also the first company in the world to launch the Nokia 7110. The Telfort WML site was created and developed as a side project to test the device's capabilities by a billing engineer, Christopher Bee, and National Deployment Manager Euan McLeod. The WML site consisted of four pages in both Dutch and English and contained many grammatical errors in Dutch, as neither developer was a native Dutch speaker and the two were unaware that the WML site was configured as the home page on the Nokia 7110.
WML markup
WML documents are XML documents that validate against the WML DTD (Document Type Definition). The W3C Markup Validation service (http://validator.w3.org/) can be used to validate WML documents (they are validated against their declared document type).
For example, the following WML page could be saved as "example.wml":
<?xml version="1.0"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN"
"http://www.wapforum.org/DTD/wml_1.1.xml" >
<wml>
<head>
<meta http-equiv="Content-Type" content="text/vnd.wap.wml; charset=UTF-8" />
</head>
<card id="main" title="First Card">
<p mode="wrap">This is a sample WML page.</p>
</card>
</wml>
A WML document is known as a "deck". Data in the deck is structured into one or more "cards" (pages), each of which represents a single interaction with the user.
WML decks are stored on an ordinary web server configured to serve the text/vnd.wap.wml MIME type in addition to plain HTML and variants. The WML cards when requested by a device are accessed by a bridge (WAP gateway), which sits between mobile devices and the World Wide Web, passing pages from one to the other much like a proxy. The gateways send the WML pages on in a form suitable for mobile device reception (WAP Binary XML). This process is hidden from the phone, so it may access the page in the same way as a browser accesses HTML, using a URL (for example, http://example.com/foo.wml). (Provided the mobile phone operator has not specifically locked the phone to prevent access of user-specified URLs.)
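For example, on the Apache HTTP Server this MIME mapping can be added with mod_mime's AddType directive (a minimal sketch; the .wml and .wmlc extensions are conventional, and where the directive lives varies by installation):
# Serve WML decks with the WAP MIME type (e.g. in httpd.conf or .htaccess)
AddType text/vnd.wap.wml .wml
# Optional: compiled (binary) WML, if decks are pre-compiled for the gateway
AddType application/vnd.wap.wmlc .wmlc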
WML has a scaled-down set of procedural elements, which can be used by the author to control navigation to other cards.
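As an illustration of these navigation elements, the following hypothetical two-card deck (a sketch, not taken from the WAP specification; the card ids and labels are invented) uses the do and go elements to move the user from one card to the next:
<?xml version="1.0"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN"
"http://www.wapforum.org/DTD/wml_1.1.xml" >
<wml>
 <card id="first" title="Welcome">
  <!-- type="accept" binds the handset's primary softkey to this task -->
  <do type="accept" label="Next">
   <go href="#second"/>
  </do>
  <p>Press Next to continue.</p>
 </card>
 <card id="second" title="Done">
  <p>This is the second card of the deck.</p>
 </card>
</wml>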
Mobile devices are moving towards allowing more XHTML and even standard HTML as processing power in handsets increases. These standards are concerned with formatting and presentation. They do not however address cell-phone or mobile device hardware interfacing in the same way as WML.
WML capability in desktop browsers
The Presto layout engine (used by Opera before its switch to Blink) understood WML natively. Mozilla-based browsers (Firefox (before version 57), SeaMonkey, MicroB) could interpret WML through the WMLBrowser add-on. Google Chrome can also interpret WML via two extensions: WML and FireMobileSimulator.
Criticism
See also
WMLScript
Wireless Application Protocol Bitmap Format
Mobile browser
List of document markup languages
Comparison of document markup languages
XHTML Mobile Profile
References
External links
Technical Specifications at the WAP Forum
XHTML-MP Authoring Practices
Open Mobile Alliance
Wireless Application Protocol
Open Mobile Alliance standards
XML markup languages | Wireless Markup Language | [
"Technology"
] | 1,038 | [
"Wireless networking",
"Wireless Application Protocol"
] |
57,824 | https://en.wikipedia.org/wiki/Semi-continuity | In mathematical analysis, semicontinuity (or semi-continuity) is a property of extended real-valued functions that is weaker than continuity. An extended real-valued function $f$ is upper (respectively, lower) semicontinuous at a point $x_0$ if, roughly speaking, the function values for arguments near $x_0$ are not much higher (respectively, lower) than $f(x_0)$. Briefly, a function $f$ on a domain $X$ is lower semi-continuous if its epigraph $\{(x,t) \in X \times \mathbb{R} : t \ge f(x)\}$ is closed in $X \times \mathbb{R}$, and upper semi-continuous if $-f$ is lower semi-continuous.
A function is continuous if and only if it is both upper and lower semicontinuous. If we take a continuous function $f$ and increase its value at a certain point $x_0$ to $f(x_0) + c$ for some $c > 0$, then the result is upper semicontinuous; if we decrease its value to $f(x_0) - c$, then the result is lower semicontinuous.
The notion of upper and lower semicontinuous function was first introduced and studied by René Baire in his thesis in 1899.
Definitions
Assume throughout that $X$ is a topological space and $f : X \to \overline{\mathbb{R}}$ is a function with values in the extended real numbers $\overline{\mathbb{R}} = \mathbb{R} \cup \{-\infty, +\infty\} = [-\infty, +\infty]$.
Upper semicontinuity
A function $f : X \to \overline{\mathbb{R}}$ is called upper semicontinuous at a point $x_0 \in X$ if for every real $y > f(x_0)$ there exists a neighborhood $U$ of $x_0$ such that $f(x) < y$ for all $x \in U$.
Equivalently, $f$ is upper semicontinuous at $x_0$ if and only if
$$\limsup_{x \to x_0} f(x) \le f(x_0),$$
where lim sup is the limit superior of the function $f$ at the point $x_0$.
If $X$ is a metric space with distance function $d$ and $f(x_0) \in \mathbb{R}$, this can also be restated using an $\varepsilon$-$\delta$ formulation, similar to the definition of continuous function. Namely, for each $\varepsilon > 0$ there is a $\delta > 0$ such that $f(x) < f(x_0) + \varepsilon$ whenever $d(x, x_0) < \delta$.
A function $f : X \to \overline{\mathbb{R}}$ is called upper semicontinuous if it satisfies any of the following equivalent conditions:
(1) The function is upper semicontinuous at every point of its domain.
(2) For each $y \in \mathbb{R}$, the set $f^{-1}([-\infty, y)) = \{x \in X : f(x) < y\}$ is open in $X$, where $[-\infty, y) = \{t \in \overline{\mathbb{R}} : t < y\}$.
(3) For each $y \in \mathbb{R}$, the $y$-superlevel set $f^{-1}([y, \infty]) = \{x \in X : f(x) \ge y\}$ is closed in $X$.
(4) The hypograph $\{(x, t) \in X \times \mathbb{R} : t \le f(x)\}$ is closed in $X \times \mathbb{R}$.
(5) The function is continuous when the codomain $\overline{\mathbb{R}}$ is given the left order topology. This is just a restatement of condition (2), since the left order topology is generated by all the intervals $[-\infty, y)$.
Lower semicontinuity
A function $f : X \to \overline{\mathbb{R}}$ is called lower semicontinuous at a point $x_0 \in X$ if for every real $y < f(x_0)$ there exists a neighborhood $U$ of $x_0$ such that $f(x) > y$ for all $x \in U$.
Equivalently, $f$ is lower semicontinuous at $x_0$ if and only if
$$\liminf_{x \to x_0} f(x) \ge f(x_0),$$
where $\liminf$ is the limit inferior of the function $f$ at the point $x_0$.
If $X$ is a metric space with distance function $d$ and $f(x_0) \in \mathbb{R}$, this can also be restated as follows: For each $\varepsilon > 0$ there is a $\delta > 0$ such that $f(x) > f(x_0) - \varepsilon$ whenever $d(x, x_0) < \delta$.
A function $f : X \to \overline{\mathbb{R}}$ is called lower semicontinuous if it satisfies any of the following equivalent conditions:
(1) The function is lower semicontinuous at every point of its domain.
(2) For each $y \in \mathbb{R}$, the set $f^{-1}((y, \infty]) = \{x \in X : f(x) > y\}$ is open in $X$, where $(y, \infty] = \{t \in \overline{\mathbb{R}} : t > y\}$.
(3) For each $y \in \mathbb{R}$, the $y$-sublevel set $f^{-1}([-\infty, y]) = \{x \in X : f(x) \le y\}$ is closed in $X$.
(4) The epigraph $\{(x, t) \in X \times \mathbb{R} : t \ge f(x)\}$ is closed in $X \times \mathbb{R}$.
(5) The function is continuous when the codomain $\overline{\mathbb{R}}$ is given the right order topology. This is just a restatement of condition (2), since the right order topology is generated by all the intervals $(y, \infty]$.
Examples
Consider the function $f$, piecewise defined by:
$$f(x) = \begin{cases} -1 & \text{if } x < 0, \\ 1 & \text{if } x \ge 0. \end{cases}$$
This function is upper semicontinuous at $x_0 = 0$, but not lower semicontinuous.
The floor function which returns the greatest integer less than or equal to a given real number is everywhere upper semicontinuous. Similarly, the ceiling function is lower semicontinuous.
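As a quick check against the lim sup characterization above (a worked verification added here, not part of the original article): at an integer $n$,
$$\limsup_{x \to n} \lfloor x \rfloor = \max\left\{\lim_{x \to n^-} \lfloor x \rfloor,\ \lim_{x \to n^+} \lfloor x \rfloor\right\} = \max\{n - 1,\ n\} = n = \lfloor n \rfloor,$$
so the defining inequality $\limsup_{x \to x_0} f(x) \le f(x_0)$ holds at every integer; at non-integer points the floor function is locally constant, hence continuous.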
Upper and lower semicontinuity bear no relation to continuity from the left or from the right for functions of a real variable. Semicontinuity is defined in terms of an ordering in the range of the functions, not in the domain. For example, the function
$$f(x) = \begin{cases} \sin(1/x) & \text{if } x \neq 0, \\ 1 & \text{if } x = 0, \end{cases}$$
is upper semicontinuous at $x = 0$ while the function limits from the left or right at zero do not even exist.
If $X = \mathbb{R}^n$ is a Euclidean space (or more generally, a metric space) and $\Gamma = C([0,1], X)$ is the space of curves in $X$ (with the supremum distance $d_\Gamma(\alpha, \beta) = \sup_t d(\alpha(t), \beta(t))$), then the length functional $L : \Gamma \to [0, +\infty]$, which assigns to each curve $\alpha$ its length $L(\alpha)$, is lower semicontinuous. As an example, consider approximating the unit square diagonal by a staircase from below. The staircase always has length 2, while the diagonal line has only length $\sqrt{2}$.
Let $(X, \mu)$ be a measure space and let $L^+(X, \mu)$ denote the set of positive measurable functions endowed with the
topology of convergence in measure with respect to $\mu$. Then by Fatou's lemma the integral, seen as an operator from $L^+(X, \mu)$ to $[-\infty, +\infty]$, is lower semicontinuous.
Tonelli's theorem in functional analysis characterizes the weak lower semicontinuity of nonlinear functionals on Lp spaces in terms of the convexity of another function.
Properties
Unless specified otherwise, all functions below are from a topological space $X$ to the extended real numbers $\overline{\mathbb{R}} = [-\infty, \infty]$. Several of the results hold for semicontinuity at a specific point, but for brevity they are only stated for semicontinuity over the whole domain.
A function is continuous if and only if it is both upper and lower semicontinuous.
The characteristic function or indicator function of a set $A \subset X$ (defined by $\mathbf{1}_A(x) = 1$ if $x \in A$ and $0$ if $x \notin A$) is upper semicontinuous if and only if $A$ is a closed set. It is lower semicontinuous if and only if $A$ is an open set.
In the field of convex analysis, the characteristic function of a set $A$ is defined differently, as $\chi_A(x) = 0$ if $x \in A$ and $\chi_A(x) = \infty$ if $x \notin A$. With that definition, the characteristic function of any closed set is lower semicontinuous, and the characteristic function of any open set is upper semicontinuous.
Binary Operations on Semicontinuous Functions
Let $f, g : X \to \overline{\mathbb{R}}$.
If $f$ and $g$ are lower semicontinuous, then the sum $f + g$ is lower semicontinuous (provided the sum is well-defined, i.e., $f(x) + g(x)$ is not the indeterminate form $-\infty + \infty$). The same holds for upper semicontinuous functions.
If $f$ and $g$ are lower semicontinuous and non-negative, then the product function $fg$ is lower semicontinuous. The corresponding result holds for upper semicontinuous functions.
The function $f$ is lower semicontinuous if and only if $-f$ is upper semicontinuous.
If $f : X \to \overline{\mathbb{R}}$ and $g : \overline{\mathbb{R}} \to \overline{\mathbb{R}}$ are upper semicontinuous and $g$ is non-decreasing, then the composition $g \circ f$ is upper semicontinuous. On the other hand, if $g$ is not non-decreasing, then $g \circ f$ may not be upper semicontinuous.
If $f$ and $g$ are lower semicontinuous, their (pointwise) maximum and minimum (defined by $(f \vee g)(x) = \max\{f(x), g(x)\}$ and $(f \wedge g)(x) = \min\{f(x), g(x)\}$) are also lower semicontinuous. Consequently, the set of all lower semicontinuous functions from $X$ to $\overline{\mathbb{R}}$ (or to $\mathbb{R}$) forms a lattice. The corresponding statements also hold for upper semicontinuous functions.
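A one-line justification of the maximum/minimum claim via sublevel sets (a sketch added here, using condition (3) from the lower semicontinuity definition above): for every $y \in \mathbb{R}$,
$$\{x : \max(f, g)(x) \le y\} = \{f \le y\} \cap \{g \le y\}, \qquad \{x : \min(f, g)(x) \le y\} = \{f \le y\} \cup \{g \le y\},$$
and finite intersections and unions of closed sets are closed, so both $\max(f,g)$ and $\min(f,g)$ have closed sublevel sets and are therefore lower semicontinuous.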
Optimization of Semicontinuous Functions
The (pointwise) supremum of an arbitrary family $(f_i)_{i \in I}$ of lower semicontinuous functions $f_i : X \to \overline{\mathbb{R}}$ (defined by $f(x) = \sup_{i \in I} f_i(x)$) is lower semicontinuous.
In particular, the limit of a monotone increasing sequence $f_1 \le f_2 \le f_3 \le \cdots$ of continuous functions is lower semicontinuous. (The Theorem of Baire below provides a partial converse.) The limit function will only be lower semicontinuous in general, not continuous. An example is given by the functions $f_n(x) = 1 - (1 - x)^n$ defined for $x \in [0, 1]$ for $n = 1, 2, \ldots$
Likewise, the infimum of an arbitrary family of upper semicontinuous functions is upper semicontinuous. And the limit of a monotone decreasing sequence of continuous functions is upper semicontinuous.
If $X$ is a compact space (for instance a closed bounded interval $[a, b]$) and $f : X \to \overline{\mathbb{R}}$ is upper semicontinuous, then $f$ attains a maximum on $X$. If $f$ is lower semicontinuous on $X$, it attains a minimum on $X$.
(Proof for the upper semicontinuous case: By condition (5) in the definition, $f$ is continuous when $\overline{\mathbb{R}}$ is given the left order topology. So its image $f(X)$ is compact in that topology. And the compact sets in that topology are exactly the sets with a maximum. For an alternative proof, see the article on the extreme value theorem.)
Other Properties
(Theorem of Baire) Let $X$ be a metric space. Every lower semicontinuous function $f : X \to \overline{\mathbb{R}}$ is the limit of a point-wise increasing sequence of extended real-valued continuous functions on $X$. In particular, there exists a sequence $(f_i)$ of continuous functions $f_i : X \to \overline{\mathbb{R}}$ such that
$$f_i(x) \le f_{i+1}(x) \quad \text{for all } x \in X, \ i = 1, 2, \ldots$$ and
$$\lim_{i \to \infty} f_i(x) = f(x) \quad \text{for all } x \in X.$$
If $f$ does not take the value $-\infty$, the continuous functions can be taken to be real-valued.
Additionally, every upper semicontinuous function $f : X \to \overline{\mathbb{R}}$ is the limit of a monotone decreasing sequence of extended real-valued continuous functions on $X$; if $f$ does not take the value $+\infty$, the continuous functions can be taken to be real-valued.
Any upper semicontinuous function $f : X \to \mathbb{N}$ on an arbitrary topological space $X$ is locally constant on some dense open subset of $X$.
If the topological space $X$ is sequential, then $f : X \to \overline{\mathbb{R}}$ is upper semi-continuous if and only if it is sequentially upper semi-continuous, that is, if for any $x \in X$ and any sequence $(x_n)$ that converges towards $x$, there holds $\limsup_{n \to \infty} f(x_n) \le f(x)$. Equivalently, in a sequential space, $f$ is upper semicontinuous if and only if its superlevel sets $\{x \in X : f(x) \ge y\}$ are sequentially closed for all $y \in \mathbb{R}$. In general, upper semicontinuous functions are sequentially upper semicontinuous, but the converse may be false.
Semicontinuity of Set-valued Functions
For set-valued functions, several concepts of semicontinuity have been defined, namely upper, lower, outer, and inner semicontinuity, as well as upper and lower hemicontinuity.
A set-valued function $F$ from a set $A$ to a set $B$ is written $F : A \rightrightarrows B$. For each $x \in A$, the function $F$ defines a set $F(x) \subset B$.
The preimage of a set $S$ under $F$ is defined as
$$F^{-1}(S) = \{x \in A : F(x) \cap S \neq \varnothing\}.$$
That is, $F^{-1}(S)$ is the set that contains every point $x$ in $A$ such that $F(x)$ is not disjoint from $S$.
Upper and Lower Semicontinuity
A set-valued map $F : \mathbb{R}^m \rightrightarrows \mathbb{R}^n$ is upper semicontinuous at $x \in \mathbb{R}^m$ if for every open set $U \subset \mathbb{R}^n$ such that $F(x) \subset U$, there exists a neighborhood $V$ of $x$ such that $F(V) \subset U$.
A set-valued map $F : \mathbb{R}^m \rightrightarrows \mathbb{R}^n$ is lower semicontinuous at $x \in \mathbb{R}^m$ if for every open set $U \subset \mathbb{R}^n$ such that $F(x) \cap U \neq \varnothing$, there exists a neighborhood $V$ of $x$ such that $F(x') \cap U \neq \varnothing$ for all $x' \in V$.
Upper and lower set-valued semicontinuity are also defined more generally for set-valued maps between topological spaces by replacing $\mathbb{R}^m$ and $\mathbb{R}^n$ in the above definitions with arbitrary topological spaces.
Note that there is not a direct correspondence between single-valued lower and upper semicontinuity and set-valued lower and upper semicontinuity.
An upper semicontinuous single-valued function is not necessarily upper semicontinuous when considered as a set-valued map.
For example, the function $f : \mathbb{R} \to \mathbb{R}$ defined by
$$f(x) = \begin{cases} 0 & \text{if } x < 0, \\ 1 & \text{if } x \ge 0, \end{cases}$$
is upper semicontinuous in the single-valued sense, but the set-valued map $x \mapsto F(x) = \{f(x)\}$ is not upper semicontinuous in the set-valued sense.
Inner and Outer Semicontinuity
A set-valued function $F : \mathbb{R}^m \rightrightarrows \mathbb{R}^n$ is called inner semicontinuous at $x$ if for every $y \in F(x)$ and every convergent sequence $(x_i)$ in $\mathbb{R}^m$ such that $x_i \to x$, there exists
a sequence $(y_i)$ in $\mathbb{R}^n$ such that $y_i \to y$ and $y_i \in F(x_i)$ for all sufficiently large $i \in \mathbb{N}$.
A set-valued function $F : \mathbb{R}^m \rightrightarrows \mathbb{R}^n$ is called outer semicontinuous at $x$ if for every convergent sequence $(x_i)$ in $\mathbb{R}^m$ such that $x_i \to x$ and every convergent sequence $(y_i)$ in $\mathbb{R}^n$ such that $y_i \in F(x_i)$ for each $i$, the sequence $(y_i)$ converges to a point in $F(x)$ (that is, $\lim_{i \to \infty} y_i \in F(x)$).
See also
Notes
References
Bibliography
Theory of continuous functions
Mathematical analysis
Variational analysis | Semi-continuity | [
"Mathematics"
] | 2,226 | [
"Theory of continuous functions",
"Mathematical analysis",
"Topology"
] |
57,829 | https://en.wikipedia.org/wiki/Keystroke%20logging | Keystroke logging, often referred to as keylogging or keyboard capturing, is the action of recording (logging) the keys struck on a keyboard, typically covertly, so that a person using the keyboard is unaware that their actions are being monitored. Data can then be retrieved by the person operating the logging program. A keystroke recorder or keylogger can be either software or hardware.
While the programs themselves are legal, with many designed to allow employers to oversee the use of their computers, keyloggers are most often used for stealing passwords and other confidential information. Keystroke logging can also be utilized to monitor activities of children in schools or at home and by law enforcement officials to investigate malicious usage.
Keylogging can also be used to study keystroke dynamics or human-computer interaction. Numerous keylogging methods exist, ranging from hardware and software-based approaches to acoustic cryptanalysis.
History
In the mid-1970s, the Soviet Union developed and deployed a hardware keylogger targeting typewriters. Termed the "selectric bug", it measured the movements of the print head of IBM Selectric typewriters via subtle influences on the regional magnetic field caused by the rotation and movements of the print head. An early keylogger was written by Perry Kivolowitz and posted to the Usenet newsgroup net.unix-wizards, net.sources on November 17, 1983. The posting seems to be a motivating factor in restricting access to /dev/kmem on Unix systems. The user-mode program operated by locating and dumping character lists (clients) as they were assembled in the Unix kernel.
In the 1970s, spies installed keystroke loggers in the US Embassy and Consulate buildings in Moscow.
They installed the bugs in Selectric II and Selectric III electric typewriters.
Soviet embassies used manual typewriters, rather than electric typewriters, for classified information—apparently because they are immune to such bugs.
As of 2013, Russian special services still use typewriters.
Applications of keyloggers
Software-based keyloggers
A software-based keylogger is a computer program designed to record any input from the keyboard. Keyloggers are used in IT organizations to troubleshoot technical problems with computers and business networks. Families and businesspeople use keyloggers legally to monitor network usage without their users' direct knowledge. Microsoft publicly stated that Windows 10 has a built-in keylogger in its final version "to improve typing and writing services". However, malicious individuals can use keyloggers on public computers to steal passwords or credit card information. Most keyloggers are not stopped by HTTPS encryption because that only protects data in transit between computers; software-based keyloggers run on the affected user's computer, reading keyboard inputs directly as the user types.
From a technical perspective, there are several categories:
Hypervisor-based: The keylogger can theoretically reside in a malware hypervisor running underneath the operating system, which thus remains untouched. It effectively becomes a virtual machine. Blue Pill is a conceptual example.
Kernel-based: A program on the machine obtains root access to hide in the OS and intercepts keystrokes that pass through the kernel. This method is difficult both to write and to combat. Such keyloggers reside at the kernel level, which makes them difficult to detect, especially for user-mode applications that do not have root access. They are frequently implemented as rootkits that subvert the operating system kernel to gain unauthorized access to the hardware. This makes them very powerful. A keylogger using this method can act as a keyboard device driver, for example, and thus gain access to any information typed on the keyboard as it goes to the operating system.
API-based: These keyloggers hook keyboard APIs inside a running application. The keylogger registers keystroke events as if it was a normal piece of the application instead of malware. The keylogger receives an event each time the user presses or releases a key. The keylogger simply records it.
Windows APIs such as GetAsyncKeyState(), GetForegroundWindow(), etc. are used to poll the state of the keyboard or to subscribe to keyboard events. A more recent example simply polls the BIOS for pre-boot authentication PINs that have not been cleared from memory.
Form grabbing based: Form grabbing-based keyloggers log Web form submissions by recording the form data on submit events. This happens when the user completes a form and submits it, usually by clicking a button or pressing enter. This type of keylogger records form data before it is passed over the Internet.
JavaScript-based: A malicious script tag is injected into a targeted web page, and listens for key events such as onKeyUp(). Scripts can be injected via a variety of methods, including cross-site scripting, man-in-the-browser, man-in-the-middle, or a compromise of the remote website.
Memory-injection-based: Memory Injection (MitB)-based keyloggers perform their logging function by altering the memory tables associated with the browser and other system functions. By patching the memory tables or injecting directly into memory, this technique can be used by malware authors to bypass Windows UAC (User Account Control). The Zeus and SpyEye trojans use this method exclusively. Remote-access software keyloggers are local software keyloggers with an added mechanism that allows access to locally recorded data from a remote location. Remote communication may be achieved when one of these methods is used:
Data is uploaded to a website, database or an FTP server.
Data is periodically emailed to a pre-defined email address.
Data is wirelessly transmitted employing an attached hardware system.
The software enables a remote login to the local machine from the Internet or the local network, for data logs stored on the target machine.
Keystroke logging in writing process research
Since 2006, keystroke logging has been an established research method for the study of writing processes. Different programs have been developed to collect online process data of writing activities, including Inputlog, Scriptlog, Translog and GGXLog.
Keystroke logging is used legitimately as a suitable research instrument in several writing contexts. These include studies on cognitive writing processes, which include
descriptions of writing strategies; the writing development of children (with and without writing difficulties),
spelling,
first and second language writing, and
specialist skill areas such as translation and subtitling.
Keystroke logging can be used to research writing, specifically. It can also be integrated into educational domains for second language learning, programming skills, and typing skills.
Related features
Software keyloggers may be augmented with features that capture user information without relying on keyboard key presses as the sole input. Some of these features include:
Clipboard logging. Anything that has been copied to the clipboard can be captured by the program.
Screen logging. Screenshots are taken to capture graphics-based information. Applications with screen logging abilities may take screenshots of the whole screen, of just one application, or even just around the mouse cursor. They may take these screenshots periodically or in response to user behaviors (for example, when a user clicks the mouse). Screen logging can be used to capture data inputted with an on-screen keyboard.
Programmatically capturing the text in a control. The Microsoft Windows API allows programs to request the text 'value' in some controls. This means that some passwords may be captured, even if they are hidden behind password masks (usually asterisks).
The recording of every program/folder/window opened including a screenshot of every website visited.
The recording of search engines queries, instant messenger conversations, FTP downloads and other Internet-based activities (including the bandwidth used).
Hardware-based keyloggers
Hardware-based keyloggers do not depend upon any software being installed as they exist at a hardware level in a computer system.
Firmware-based: BIOS-level firmware that handles keyboard events can be modified to record these events as they are processed. Physical and/or root-level access is required to the machine, and the software loaded into the BIOS needs to be created for the specific hardware that it will be running on.
Keyboard hardware: Hardware keyloggers are used for keystroke logging utilizing a hardware circuit that is attached somewhere in between the computer keyboard and the computer, typically inline with the keyboard's cable connector. There are also USB connector-based hardware keyloggers, as well as ones for laptop computers (the Mini-PCI card plugs into the expansion slot of a laptop). More stealthy implementations can be installed or built into standard keyboards so that no device is visible on the external cable. Both types log all keyboard activity to their internal memory, which can be subsequently accessed, for example, by typing in a secret key sequence. Hardware keyloggers do not require any software to be installed on a target user's computer, therefore not interfering with the computer's operation and less likely to be detected by software running on it. However, its physical presence may be detected if, for example, it is installed outside the case as an inline device between the computer and the keyboard. Some of these implementations can be controlled and monitored remotely using a wireless communication standard.
Wireless keyboard and mouse sniffers: These passive sniffers collect packets of data being transferred from a wireless keyboard and its receiver. As encryption may be used to secure the wireless communications between the two devices, this may need to be cracked beforehand if the transmissions are to be read. In some cases, this enables an attacker to type arbitrary commands into a victim's computer.
Keyboard overlays: Criminals have been known to use keyboard overlays on ATMs to capture people's PINs. Each keypress is registered by the keyboard of the ATM as well as the criminal's keypad that is placed over it. The device is designed to look like an integrated part of the machine so that bank customers are unaware of its presence.
Acoustic keyloggers: Acoustic cryptanalysis can be used to monitor the sound created by someone typing on a computer. Each key on the keyboard makes a subtly different acoustic signature when struck. It is then possible to identify which keystroke signature relates to which keyboard character via statistical methods such as frequency analysis. The repetition frequency of similar acoustic keystroke signatures, the timings between different keyboard strokes and other context information such as the probable language in which the user is writing are used in this analysis to map sounds to letters. A fairly long recording (1000 or more keystrokes) is required so that a large enough sample is collected.
Electromagnetic emissions: It is possible to capture the electromagnetic emissions of a wired keyboard from up to 20 metres (66 ft) away, without being physically wired to it. In 2009, Swiss researchers tested 11 different USB, PS/2 and laptop keyboards in a semi-anechoic chamber and found them all vulnerable, primarily because of the prohibitive cost of adding shielding during manufacture. The researchers used a wide-band receiver to tune into the specific frequency of the emissions radiated from the keyboards.
Optical surveillance: Optical surveillance, while not a keylogger in the classical sense, is nonetheless an approach that can be used to capture passwords or PINs. A strategically placed camera, such as a hidden surveillance camera at an ATM, can allow a criminal to watch a PIN or password being entered.
Physical evidence: For a keypad that is used only to enter a security code, the keys which are in actual use will have evidence of use from many fingerprints. A passcode of four digits, if the four digits in question are known, is reduced from 10,000 possibilities to just 24 possibilities (10^4 versus 4!, the factorial of 4; a short arithmetic sketch follows this list). These could then be used on separate occasions for a manual "brute force attack".
Smartphone sensors: Researchers have demonstrated that it is possible to capture the keystrokes of nearby computer keyboards using only the commodity accelerometer found in smartphones. The attack is made possible by placing a smartphone near a keyboard on the same desk. The smartphone's accelerometer can then detect the vibrations created by typing on the keyboard and then translate this raw accelerometer signal into readable sentences with as much as 80 percent accuracy. The technique involves working through probability by detecting pairs of keystrokes, rather than individual keys. It models "keyboard events" in pairs and then works out whether the pair of keys pressed is on the left or the right side of the keyboard and whether they are close together or far apart on the QWERTY keyboard. Once it has worked this out, it compares the results to a preloaded dictionary where each word has been broken down in the same way. Similar techniques have also been shown to be effective at capturing keystrokes on touchscreen keyboards while in some cases, in combination with gyroscope or with the ambient-light sensor.
Body keyloggers: Body keyloggers track and analyze body movements to determine which keys were pressed. The attacker needs to be familiar with the key layout of the tracked keyboard to correlate body movements with key positions, although with a suitably large sample this can be deduced. Tracking the audible signals of the user interface (e.g., a sound the device produces to inform the user that a keystroke was registered) may reduce the complexity of the body-keylogging algorithms, as it marks the moment at which a key was pressed.
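The keyspace arithmetic mentioned under "Physical evidence" above can be sketched in a few lines of Python (illustrative only; the repeated-digit variant is an added note, not from the original text):
from math import factorial

digits = 4
full_keyspace = 10 ** digits         # any 4-digit PIN: 10,000 possibilities
known_digits = factorial(digits)     # orderings of 4 known distinct digits: 24
print(full_keyspace, known_digits)   # -> 10000 24

# If one known digit (say 1) appears twice in the PIN {1, 1, 2, 3}, there
# are even fewer distinct orderings to try: 4! / 2! = 12 candidate PINs.
print(factorial(4) // factorial(2))  # -> 12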
Cracking
Writing simple software applications for keylogging can be trivial, and like any nefarious computer program, can be distributed as a trojan horse or as part of a virus. What is not trivial for an attacker, however, is installing a covert keystroke logger without getting caught and downloading data that has been logged without being traced. An attacker that manually connects to a host machine to download logged keystrokes risks being traced. A trojan that sends keylogged data to a fixed e-mail address or IP address risks exposing the attacker.
Trojans
Researchers Adam Young and Moti Yung discussed several methods of sending keystroke logging. They presented a deniable password snatching attack in which the keystroke logging trojan is installed using a virus or worm. An attacker who is caught with the virus or worm can claim to be a victim. The cryptotrojan asymmetrically encrypts the pilfered login/password pairs using the public key of the trojan author and covertly broadcasts the resulting ciphertext. They mentioned that the ciphertext can be steganographically encoded and posted to a public bulletin board such as Usenet.
Use by police
In 2000, the FBI used FlashCrest iSpy to obtain the PGP passphrase of Nicodemo Scarfo, Jr., son of mob boss Nicodemo Scarfo.
Also in 2000, the FBI lured two suspected Russian cybercriminals to the US in an elaborate ruse, and captured their usernames and passwords with a keylogger that was covertly installed on a machine that they used to access their computers in Russia. The FBI then used these credentials to gain access to the suspects' computers in Russia to obtain evidence to prosecute them.
Countermeasures
The effectiveness of countermeasures varies because keyloggers use a variety of techniques to capture data and the countermeasure needs to be effective against the particular data capture technique. In the case of Windows 10 keylogging by Microsoft, changing certain privacy settings may disable it. An on-screen keyboard will be effective against hardware keyloggers; transparency will defeat some—but not all—screen loggers. An anti-spyware application that can only disable hook-based keyloggers will be ineffective against kernel-based keyloggers.
Keylogger program authors may be able to update their program's code to adapt to countermeasures that have proven effective against it.
Anti-keyloggers
An anti-keylogger is a piece of software specifically designed to detect keyloggers on a computer, typically comparing all files in the computer against a database of keyloggers, looking for similarities which might indicate the presence of a hidden keylogger. As anti-keyloggers have been designed specifically to detect keyloggers, they have the potential to be more effective than conventional antivirus software; some antivirus software do not consider keyloggers to be malware, as under some circumstances a keylogger can be considered a legitimate piece of software.
Live CD/USB
Rebooting the computer using a Live CD or write-protected Live USB is a possible countermeasure against software keyloggers if the CD is clean of malware and the operating system contained on it is secured and fully patched so that it cannot be infected as soon as it is started. Booting a different operating system does not impact the use of a hardware or BIOS based keylogger.
Anti-spyware / Anti-virus programs
Many anti-spyware applications can detect some software based keyloggers and quarantine, disable, or remove them. However, because many keylogging programs are legitimate pieces of software under some circumstances, anti-spyware often neglects to label keylogging programs as spyware or a virus. These applications can detect software-based keyloggers based on patterns in executable code, heuristics and keylogger behaviors (such as the use of hooks and certain APIs).
No software-based anti-spyware application can be 100% effective against all keyloggers. Software-based anti-spyware cannot defeat non-software keyloggers (for example, hardware keyloggers attached to keyboards will always receive keystrokes before any software-based anti-spyware application).
The particular technique that the anti-spyware application uses will influence its potential effectiveness against software keyloggers. As a general rule, anti-spyware applications with higher privileges will defeat keyloggers with lower privileges. For example, a hook-based anti-spyware application cannot defeat a kernel-based keylogger (as the keylogger will receive the keystroke messages before the anti-spyware application), but it could potentially defeat hook- and API-based keyloggers.
Network monitors
Network monitors (also known as reverse-firewalls) can be used to alert the user whenever an application attempts to make a network connection. This gives the user the chance to prevent the keylogger from "phoning home" with their typed information.
Automatic form filler programs
Automatic form-filling programs may prevent keylogging by removing the requirement for a user to type personal details and passwords using the keyboard. Form fillers are primarily designed for Web browsers to fill in checkout pages and log users into their accounts. Once the user's account and credit card information has been entered into the program, it will be automatically entered into forms without ever using the keyboard or clipboard, thereby reducing the possibility that private data is being recorded. However, someone with physical access to the machine may still be able to install software that can intercept this information elsewhere in the operating system or while in transit on the network. (Transport Layer Security (TLS) reduces the risk that data in transit may be intercepted by network sniffers and proxy tools.)
One-time passwords (OTP)
Using one-time passwords may prevent unauthorized access to an account which has had its login details exposed to an attacker via a keylogger, as each password is invalidated as soon as it is used. This solution may be useful for someone using a public computer. However, an attacker who has remote control over such a computer can simply wait for the victim to enter their credentials before performing unauthorized transactions on their behalf while their session is active.
Another common way to protect access codes from being stolen by keystroke loggers is by asking users to provide a few randomly selected characters from their authentication code. For example, they might be asked to enter the 2nd, 5th, and 8th characters. Even if someone is watching the user or using a keystroke logger, they would only get a few characters from the code without knowing their positions.
Security tokens
Use of smart cards or other security tokens may improve security against replay attacks in the face of a successful keylogging attack, as accessing protected information would require both the (hardware) security token as well as the appropriate password/passphrase. Knowing the keystrokes, mouse actions, display, clipboard, etc. used on one computer will not subsequently help an attacker gain access to the protected resource. Some security tokens work as a type of hardware-assisted one-time password system, and others implement a cryptographic challenge–response authentication, which can improve security in a manner conceptually similar to one time passwords. Smartcard readers and their associated keypads for PIN entry may be vulnerable to keystroke logging through a so-called supply chain attack where an attacker substitutes the card reader/PIN entry hardware for one which records the user's PIN.
On-screen keyboards
Most on-screen keyboards (such as the on-screen keyboard that comes with Windows XP) send normal keyboard event messages to the external target program to type text. Software key loggers can log these typed characters sent from one program to another.
Keystroke interference software
Keystroke interference software is also available.
These programs attempt to trick keyloggers by introducing random keystrokes, although this simply results in the keylogger recording more information than it needs to. An attacker has the task of extracting the keystrokes of interest—the security of this mechanism, specifically how well it stands up to cryptanalysis, is unclear.
Speech recognition
Similar to on-screen keyboards, speech-to-text conversion software can also be used against keyloggers, since there are no typing or mouse movements involved. The weakest point of using voice-recognition software may be how the software sends the recognized text to target software after the user's speech has been processed.
Handwriting recognition and mouse gestures
Many PDAs and lately tablet PCs can already convert pen (also called stylus) movements on their touchscreens to computer understandable text successfully. Mouse gestures use this principle by using mouse movements instead of a stylus. Mouse gesture programs convert these strokes to user-definable actions, such as typing text. Similarly, graphics tablets and light pens can be used to input these gestures, however, these are becoming less common.
The same potential weakness of speech recognition applies to this technique as well.
Macro expanders/recorders
With the help of many programs, a seemingly meaningless text can be expanded to a meaningful text and most of the time context-sensitively, e.g. "en.wikipedia.org" can be expanded when a web browser window has the focus. The biggest weakness of this technique is that these programs send their keystrokes directly to the target program. However, this can be overcome by using the 'alternating' technique described below, i.e. sending mouse clicks to non-responsive areas of the target program, sending meaningless keys, sending another mouse click to the target area (e.g. password field) and switching back-and-forth.
Deceptive typing
Alternating between typing the login credentials and typing characters somewhere else in the focus window can cause a keylogger to record more information than it needs to, but this could be easily filtered out by an attacker. Similarly, a user can move their cursor using the mouse while typing, causing the logged keystrokes to be in the wrong order e.g., by typing a password beginning with the last letter and then using the mouse to move the cursor for each subsequent letter. Lastly, someone can also use context menus to remove, cut, copy, and paste parts of the typed text without using the keyboard. An attacker who can capture only parts of a password will have a larger key space to attack if they choose to execute a brute-force attack.
Another very similar technique uses the fact that any selected text portion is replaced by the next key typed. e.g., if the password is "secret", one could type "s", then some dummy keys "asdf". These dummy characters could then be selected with the mouse, and the next character from the password "e" typed, which replaces the dummy characters "asdf".
These techniques assume incorrectly that keystroke logging software cannot directly monitor the clipboard, the selected text in a form, or take a screenshot every time a keystroke or mouse click occurs. They may, however, be effective against some hardware keyloggers.
See also
Anti-keylogger
Black-bag cryptanalysis
Computer surveillance
Cybercrime
Digital footprint
Hardware keylogger
Reverse connection
Session replay
Spyware
Trojan horse
Virtual keyboard
Web tracking
References
External links
Cryptographic attacks
Spyware
Surveillance
Cybercrime
Security breaches | Keystroke logging | [
"Technology"
] | 5,132 | [
"Cryptographic attacks",
"Computer security exploits"
] |
57,854 | https://en.wikipedia.org/wiki/Ride%20cymbal | The ride cymbal is a cymbal with substantial sustain used to maintain a beat in music. A standard in most drum kits, the ride's function is to maintain a steady pattern, sometimes called a ride pattern, rather than provide the accent of a crash cymbal. It is normally placed on the extreme right (or dominant-hand side) of a drum set, above the floor tom. It is often described as delivering a "shimmering" sound when struck soundly with a drumstick, and a clear ping when struck atop its bell.
The ride can fulfill any function or rhythm the hi-hat cymbal does, with the exception of an open and closed sound.
Types
The term ride may describe either the function or the characteristics of the instrument. Most cymbal makers manufacture specific cymbals for the purpose.
Alternatively, some drummers use a china cymbal, a sizzle cymbal or a specialized tone such as a swish or pang as a ride cymbal. When playing extremely softly, when using brushes, and when recording in a studio, even a thin crash may serve well as a ride cymbal.
When playing extremely loudly, a cymbal designed as a ride may deliver a very loud, long crash, due to its superior sustain after being struck.
Crash/ride
Cymbals designated crash/ride or more rarely ride/crash serve as either a large slow crash or secondary ride, or in very small kits as the only suspended cymbal.
Flat ride
Bell-less ride cymbals, known as flat rides, have a dry crash and clear stick definition. Quieter, they are popular in jazz drumming. Developed by Paiste in the 1960s, flat rides are used by notable drummers Roy Haynes, Jack DeJohnette, Paul Wertico, Carter Beauford, Jo Jones and Charlie Watts.
The highly regarded Paiste 602 Flat Ride was reissued in 2010, but is only available in 20" medium.
Swish and pang
Swish and pang cymbals are exotic ride and crash/ride cymbals similar to china cymbals in tone.
Sizzle cymbal
A sizzle cymbal, thinner and one size larger than the main ride, was common in some styles of early rock music as a secondary ride cymbal, particularly for accompanying guitar lead breaks.
Sound
When struck, a ride cymbal makes a sustained, shimmering sound rather than the shorter, decaying sound of a crash cymbal. The most common diameter for a ride cymbal is about 20 inches (51 cm), but anything from 18 to 22 inches (46 to 56 cm) is standard. Smaller and thinner cymbals tend to be darker with more shimmer, while larger and thicker cymbals tend to respond better in louder volume situations, and conversely. Rides both larger and smaller than this range are readily available and currently manufactured. The very thickest and loudest tend to be about 22 inches, with larger rides restricted to medium and medium-thin thicknesses.
In rock or jazz, the ride cymbal is most often struck regularly in a rhythmic pattern as part of the accompaniment to the song. Often the drummer will vary between the same pattern either on the hi-hat cymbal or the ride cymbal, playing for example the hi-hat in the verses and the ride in the instrumentals and/or choruses.
The sound of a ride cymbal also varies depending on what kind of mallet is used to hit it. In rock and metal, wood and nylon-tipped drum sticks are common; wood creates a smoother, quieter sound, whereas nylon tips create more of a "ping". It creates a low vibration to keep a steady beat, but a low sound volume. The bell, the bulge in the center of the cymbal, creates a brighter, less sustained sound. The bell creates such a brilliant tone compared to the subtle sound of the bow that it is often used as somewhat of another cymbal. Some ride cymbals, seen more often in various forms of metal and harder subgenres of rock, have an unusually large bell. This lessens the accuracy required to repeatedly hit the bell in fast patterns, and produces a louder, brighter tone than in most ride-cymbal bells.
Pattern
Modern use of the ride cymbal was inspired by jazz drummer Baby Dodds's press roll rhythms. According to the Percussive Arts Society, which inducted him into its hall of fame, "Dodds' way of playing press rolls ultimately evolved into the standard jazz ride-cymbal pattern. Whereas many drummers would play very short press rolls on the backbeats, Dodds would start his rolls on the backbeats but extend each one to the following beat, providing a smoother time flow."
The most basic ride pattern in rock and other styles is a steady stream of eighth notes.
In jazz, this pattern would normally be played with a swing feel.
Sources
Cymbals
Drum kit components
Drum patterns
Rhythm and meter | Ride cymbal | [
"Physics"
] | 1,010 | [
"Spacetime",
"Rhythm and meter",
"Physical quantities",
"Time"
] |
57,866 | https://en.wikipedia.org/wiki/Kudzu | Kudzu, also called Japanese arrowroot or Chinese arrowroot, is a group of climbing, coiling, and trailing deciduous perennial vines native to much of East Asia, Southeast Asia, and some Pacific islands. It is invasive in many parts of the world, primarily North America.
The vine densely climbs over other plants and trees and grows so rapidly that it smothers and kills them by blocking most of the sunlight and taking root space. The plants are in the genus Pueraria, in the pea family Fabaceae, subfamily Faboideae. The name is derived from kuzu, the Japanese name for East Asian arrowroot (Pueraria montana var. lobata). Where these plants are naturalized, they can be invasive and are considered noxious weeds. The plant is edible, but often sprayed with herbicides.
Taxonomy
The name kudzu describes one or more species in the genus Pueraria that are closely related, and some of them are considered to be varieties rather than full species. The morphological differences between the subspecies of P. montana are subtle; they can breed with each other, and introduced kudzu populations in the United States apparently have ancestry from more than one of the subspecies. They are:
P. montana
Pueraria montana var. chinensis (Ohwi) Sanjappa & Pradeep (= P. chinensis)
Pueraria montana var. lobata (Willd.) Sanjappa & Pradeep (= P. lobata)
Pueraria montana var. thomsonii (Benth.) Wiersema ex D.B. Ward (= P. thomsonii)
P. edulis
P. phaseoloides – proposed to be moved to Neustanthus
Various other species in Pueraria sensu stricto are also known as "kudzu" with an adjective, but they are not as widely cultivated or introduced.
Ecology
Kudzu has been referred to as "quasi-wild" due to its long history of cultivation, selective breeding into various cultivars, and subsequent return to wild conditions. Some researchers suggest that humans are the main predator of kudzu in its native range, and that human use and cultivation of kudzu both contributes to its success as an invasive species and is a form of biological control for kudzu.
Propagation
Kudzu spreads by vegetative reproduction via stolons (runners) that root at the nodes to form new plants and by rhizomes. Kudzu also spreads by seeds, which are contained in pods and mature in the autumn, although this is rare. One or two viable seeds are produced per cluster of pods. The hard-coated seeds can remain viable for several years, and can successfully germinate only when soil is persistently soggy for 5–7 days, with temperatures above 20 °C (68 °F).
Once germinated, saplings must be kept in a well-drained medium that retains high moisture. During this stage of growth, kudzu must receive as much sunlight as possible. Kudzu saplings are sensitive to mechanical disturbance and are damaged by chemical fertilizers. They do not tolerate long periods of shade or high water tables. Kudzu is able to withstand environments ranging from sunny to shady upon reaching its mature stage; however, forest edges with greater light availability are optimal.
Invasive species
Kudzu's environmental and ecological damage results from its outcompeting other species for a resource. Kudzu competes with native flora for light, and acts to block their access to this vital resource by growing over them and shading them with its leaves. Native plants may then die as a result. When kudzu invades an ecosystem, it makes the leaf litter more labile, thereby lessening the carbon-sequestration ability of the soil and contributing to climate change.
Americas
Kudzu is an infamous weed in the United States, where it can be found in 32 states. It is common along roadsides and other disturbed areas throughout most of the southeast, as far north as rural areas of Pulaski County, Illinois. The vine has become a sore point in Southern US culture. Estimates of its rate of spreading differ wildly; it has been described as spreading at the rate of 150,000 acres (610 km2) annually, although in 2015 the United States Forest Service estimated the rate to be only 2,500 acres (10 km2) per year.
A small patch of kudzu was discovered in 2009 in Leamington, Ontario, the second-warmest growing region of Canada after south coastal British Columbia.
Kudzu was introduced from Japan into the United States at the Japanese pavilion in the 1876 Centennial Exposition in Philadelphia. It was also shown at the Chicago World's Fair. It remained a garden plant until the Dust Bowl era (1930s–1940s), when the vine was marketed as a way for farmers to stop soil erosion. The new Soil Conservation Service grew seventy million kudzu seedlings and paid $8 an acre to anyone who would sow the vine. Road and rail builders planted kudzu to stabilize steep slopes. Farmer and journalist Channing Cope, dubbed "kudzu kid" in a 1949 Time profile, popularised it in the South as a fix for eroded soils. He started the Kudzu Club of America, which, by 1943, had 20,000 members. The club aimed to plant eight million acres across the South. Cultivation peaked at over a million acres by 1945. Once Soil Conservation Service payments ended, much of the kudzu was destroyed as farmers turned the land over to more profitable uses. The Soil Conservation Service stopped promoting kudzu altogether by the 1950s.
Kudzu's ongoing mythos as a mile-a-minute invader is likely due to its visibility coating trees at wooded roadsides, thriving in the sunshine at the forest edge. Despite kudzu's notoriety, Asian privet and invasive roses have each proved to be greater threats in the United States.
Europe
In Europe, kudzu has been included since 2016 on the list of Invasive Alien Species of Union concern (the Union list). This means that this species cannot be imported, cultivated, transported, commercialized, planted, or intentionally released into the environment anywhere in the European Union.
There are only some kudzu populations in certain regions of Italy and Switzerland. In Switzerland it occurs almost exclusively in Ticino, where it has been found in the wild since at least 1956. Most outbreaks are concentrated around Lake Lugano and Lake Maggiore, where the climate (hot summers and mild winters) encourages its growth. However, outbreaks in peripheral areas such as the Onsernone Valley and Lower Leventina are likely due to the illegal disposal of plant waste. A plan is currently in place to reduce and eventually eradicate the kudzu population in Ticino.
Other regions
During World War II, kudzu was introduced to Vanuatu and Fiji by United States Armed Forces to serve as camouflage for equipment and has become a major weed.
In Australia, kudzu is also becoming a problem in Queensland, the Northern Territory, and New South Wales.
In New Zealand, kudzu was declared an "unwanted organism" and was added to the Biosecurity New Zealand register in 2002.
Control
Crown removal
Destroying the full underground system, which can be extremely large and deep, is not necessary for successful long-term control of kudzu. Killing or removing the kudzu root crown and all rooting runners is sufficient. The root crown is a fibrous knob of tissue that sits on top of the roots. Crowns form from multiple vine nodes that root to the ground, and range from pea- to basketball-sized. These crowns and attached tuberous roots can weigh 400 or 500 pounds (180 to 225 kilograms) and extend up to twenty feet (six meters) into the ground. The age of the crowns is correlated to how deep they are in the ground. Nodes and crowns are the source of all kudzu vines, and roots cannot produce vines. If any portion of a root crown remains after attempted removal, the kudzu plant may still grow back.
Mechanical methods of control involve cutting off crowns from roots, usually just below ground level. This immediately kills the plant. Cutting off the above-ground vines is not sufficient for an immediate kill. Destroying all removed crown material is necessary. Buried crowns can regenerate into healthy kudzu. Transporting crowns in soil removed from a kudzu infestation is one common way that kudzu unexpectedly spreads and shows up in new locations.
Close mowing every week, regular heavy grazing for many successive years, or repeated cultivation may be effective, as this serves to deplete root reserves. If done in the spring, cutting off vines must be repeated. Regrowth appears to exhaust the plant's stored carbohydrate reserves. Harvested kudzu can be fed to livestock, burned, or composted.
In the United States, the city of Chattanooga, Tennessee undertook a trial program in 2010 using goats and llamas to graze on the plant. Similar efforts to reduce widespread nuisance kudzu growth have also been undertaken in the cities of Winston-Salem, North Carolina and Tallahassee, Florida.
Prescribed burning is used on old extensive infestations to remove vegetative cover and promote seed germination for removal or treatment. While fire is not an effective way to kill kudzu, equipment, such as a skid loader, can later remove crowns and kill kudzu with minimal disturbance or erosion of soil.
Herbicide
A systemic herbicide, for example, glyphosate, triclopyr, or picloram, can be applied directly on cut stems, which is an effective means of transporting the herbicide into the kudzu's extensive root system. Herbicides can be used after other methods of control, such as mowing, grazing, or burning, which can allow for an easier application of the chemical to the weakened plants. In large-scale forestry infestations, soil-active herbicides have been shown to be highly effective.
After initial herbicidal treatment, follow-up treatments and monitoring are usually necessary, depending on how long the kudzu has been growing in an area. Up to 10 years of supervision may be needed after the initial chemical placement to make sure the plant does not return.
Fungi
Since 1998, the United States' Agricultural Research Service has experimented with using the fungus Myrothecium verrucaria as a biologically based herbicide against kudzu. A diacetylverrucarol spray based on M. verrucaria works under a variety of conditions (including the absence of dew), causes minimal injury to many of the other woody plants in kudzu-infested habitats, and takes effect quickly enough that kudzu treated with it in the morning starts showing evidence of damage by midafternoon. Initial formulations of the herbicide produced toxic levels of other trichothecenes as byproducts, though the ARS discovered that growing M. verrucaria in a fermenter on a liquid diet (instead of a solid) limited or eliminated the problem.
Uses
Soil improvement and preservation
Kudzu has been used as a form of erosion control and to enhance the soil. As a legume, it increases the nitrogen in the soil by a symbiotic relationship with nitrogen-fixing bacteria. Its deep taproots also transfer valuable minerals from the subsoil to the topsoil, thereby improving the topsoil. In the deforested section of the central Amazon Basin in Brazil, it has been used for improving the soil pore-space in clay latosols, thus freeing even more water for plants than in the soil prior to deforestation.
Animal feed
Kudzu can be used by grazing animals, as it is high in quality as a forage and palatable to livestock. It can be grazed until frost and even slightly after. Kudzu had been used in the southern United States specifically to feed goats on land that had limited resources. Kudzu hay typically has a 22–23% crude protein content and over 60% total digestible nutrient value. The quality of the leaves decreases as vine content increases relative to the leaf content. Kudzu also has low forage yields despite its rate of growth, yielding around two to four tons of dry matter per acre annually. It is also difficult to bale due to its vining growth and its slowness in shedding water. This makes it necessary to place kudzu hay under sheltered protection after being baled. Fresh kudzu is readily consumed by all types of grazing animals, but frequent grazing over three to four years can ruin even established stands. Thus, kudzu only serves well as a grazing crop on a temporary basis.
Basketry
Kudzu fiber has long been used for fiber art and basketry. The long runners which propagate the kudzu fields and the larger vines which cover trees make excellent weaving material. Some basketmakers use the material green. Others use it after splitting it in half, allowing it to dry and then rehydrating it using hot water. Both traditional and contemporary basketry artists use kudzu.
Phytochemicals and uses
Kudzu contains isoflavones, including puerarin (about 60% of the total isoflavones), daidzein, daidzin (structurally related to genistein), mirificin, and salvianolic acid, among numerous others identified. In traditional Chinese medicine, where it is known as gé gēn (gegen), kudzu is considered one of the 50 fundamental herbs thought to have therapeutic effects, although there is no high-quality clinical research to indicate it has any activity or therapeutic use in humans. Compounds of icariin, astragalus, and puerarin have been reported to mitigate iron overload in the cerebral cortex of mice with Alzheimer's disease. Adverse effects may occur if kudzu is taken by people with hormone-sensitive cancer or those taking tamoxifen, antidiabetic medications, or methotrexate.
Food
The roots contain starch, which has traditionally been used as a food ingredient in East and Southeast Asia. In Vietnam, the starch, called bột sắn dây, is flavoured with pomelo oil and then used as a drink in the summer. In Korea, the plant root is made into chikcha (칡차; "arrowroot tea"), used in traditional medicine, and processed starch used for culinary purposes such as primary ingredient for naengmyeon (칡냉면). In Japan, the plant is known as kuzu and the starch named kuzuko. Kuzuko is used in dishes including kuzumochi, mizu manjū, and kuzuyu. It also serves as a thickener for sauces, and can substitute for cornstarch.
The flowers are used to make a jelly that tastes similar to grape jelly. Roots, flowers, and leaves of kudzu show antioxidant activity that suggests food uses. Nearby bee colonies may forage on kudzu nectar during droughts as a last resort, producing a low-viscosity red or purple honey that tastes of grape jelly or bubblegum.
Folk medicine
Kudzu has also been used for centuries in East Asia in folk medicine, as herbal teas and tinctures. Kudzu powder is used in Japan to make an herbal tea called kuzuyu. Kakkonto is an herbal drink with its origin in traditional Chinese medicine, intended for people with various mild illnesses, such as headache.
Fiber
Kudzu fiber, known as ko-hemp, is used traditionally to make clothing and paper, and has also been investigated for industrial-scale use. Kudzu fiber is a bast fiber similar to hemp or linen, and has been used for clothing in China for at least 6,000 years and in Japan for at least 1,500 years. In ancient China, kudzu was one of three main clothing and textile materials, with silk and ramie being the other two.
Kudzu fiber is still used in Japan, primarily to weave kuzu-fu (kudzu cloth) for garments worn in the summer.
Other uses
Kudzu may become a valuable asset for the production of cellulosic ethanol. In the Southern United States, kudzu is used to make soaps, lotions, and compost.
See also
Kudzu bug
Kudzu tea
Kudzu powder
Notes
References
This article was based in part on content from public domain web pages from the United States National Park Service and the United States Bureau of Land Management
External links
Artists and Origins, Kudzu-fu. Ginza Motoji
Edible thickening agents
Japanese cuisine
Plants used in traditional Chinese medicine
Fiber plants
Pueraria
Starch
Plant common names
Flora invasive in North America | Kudzu | [
"Biology"
] | 3,464 | [
"Plant common names",
"Common names of organisms",
"Plants"
] |
57,875 | https://en.wikipedia.org/wiki/Soap | Soap is a salt of a fatty acid (sometimes other carboxylic acids) used for cleaning and lubricating products as well as other applications. In a domestic setting, soaps, specifically "toilet soaps", are surfactants usually used for washing, bathing, and other types of housekeeping. In industrial settings, soaps are used as thickeners, components of some lubricants, emulsifiers, and catalysts.
Soaps are often produced by mixing fats and oils with a base. Humans have used soap for millennia; evidence exists for the production of soap-like materials in ancient Babylon around 2800 BC.
Types
Toilet soaps
In a domestic setting, "soap" usually refers to what is technically called a toilet soap, used for household and personal cleaning. Toilet soaps are salts of fatty acids with the general formula (RCO2−)M+, where M is Na (sodium) or K (potassium).
When used for cleaning, soap solubilizes particles and grime, which can then be separated from the article being cleaned. The insoluble oil/fat "dirt" becomes associated inside micelles, tiny spheres formed from soap molecules with polar hydrophilic (water-attracting) groups on the outside, encasing a lipophilic (fat-attracting) pocket that shields the oil/fat molecules from the water and makes them soluble. Anything that is soluble will be washed away with the water. In hand washing, as a surfactant, when lathered with a little water, soap kills microorganisms by disorganizing their membrane lipid bilayer and denaturing their proteins. It also emulsifies oils, enabling them to be carried away by running water.
When used in hard water, soap does not lather well but forms soap scum (related to metallic soaps, see below).
Non-toilet soaps
So-called metallic soaps are key components of most lubricating greases and thickeners. A commercially important example is lithium stearate. Greases are usually emulsions of calcium soap or lithium soap and mineral oil. Many other metallic soaps are also useful, including those of aluminium, sodium, and mixtures thereof. Such soaps are also used as thickeners to increase the viscosity of oils. In ancient times, lubricating greases were made by the addition of lime to olive oil, which would produce calcium soaps. Metal soaps are also included in modern artists' oil paints formulations as a rheology modifier. Metal soaps can be prepared by neutralizing fatty acids with metal oxides:
2 RCO2H + CaO → (RCO2)2Ca + H2O
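A concrete instance is calcium stearate, the thickener behind the calcium greases mentioned above, formed here from stearic acid (the stearate salt is a known compound; its pairing with this equation is given for illustration):
2 C17H35CO2H + CaO → (C17H35CO2)2Ca + H2O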
A cation from an organic base such as ammonium can be used instead of a metal; ammonium nonanoate is an ammonium-based soap that is used as an herbicide.
Another class of non-toilet soaps are resin soaps, which are produced in the paper industry by the action of tree rosin with alkaline reagents used to separate cellulose from raw wood. A major component of such soaps is the sodium salt of abietic acid. Resin soaps are used as emulsifiers.
Soapmaking
The production of toilet soaps usually entails saponification of triglycerides, which are vegetable or animal oils and fats. An alkaline solution (often lye or sodium hydroxide) induces saponification whereby the triglyceride fats first hydrolyze into salts of fatty acids. Glycerol (glycerin) is liberated. The glycerin is sometimes left in the soap product as a softening agent, although it is sometimes separated. Handmade soap can differ from industrially made soap in that an excess of fat or coconut oil beyond that needed to consume the alkali is used (in a cold-pour process, this excess fat is called "superfatting"), and the glycerol left in acts as a moisturizing agent. However, the glycerine also makes the soap softer. The addition of glycerol and processing of this soap produces glycerin soap. Superfatted soap is more skin-friendly than one without extra fat, although it can leave a "greasy" feel. Sometimes, an emollient is added, such as jojoba oil or shea butter. Sand or pumice may be added to produce a scouring soap. The scouring agents serve to remove dead cells from the skin surface being cleaned. This process is called exfoliation.
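As a rough illustration of the alkali calculation behind cold-process soapmaking, the sketch below estimates how much sodium hydroxide a blend of oils needs. It is a minimal example in Python; the saponification (SAP) values, function name, and recipe quantities are illustrative placeholders, not figures from this article.

# Estimate the NaOH needed to saponify a blend of oils.
# SAP values are expressed as mg KOH per g of oil (typical published
# ranges; treat them as illustrative placeholders).
SAP_KOH = {"olive": 190.0, "coconut": 257.0, "tallow": 197.0}
KOH_TO_NAOH = 40.0 / 56.1  # molar-mass ratio of NaOH to KOH

def naoh_grams(oils, superfat=0.05):
    # oils: mapping of oil name -> grams of oil
    # superfat: fraction of fat deliberately left unsaponified
    # ("superfatting" in the text above)
    total = sum(g * SAP_KOH[name] * KOH_TO_NAOH / 1000.0
                for name, g in oils.items())
    return total * (1.0 - superfat)

# Example: 500 g olive oil + 200 g coconut oil with 5% superfat
print(round(naoh_grams({"olive": 500, "coconut": 200}), 1))  # ~99.2 g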
To make antibacterial soap, compounds such as triclosan or triclocarban can be added. There is some concern that use of antibacterial soaps and other products might encourage antimicrobial resistance in microorganisms.
The type of alkali metal used determines the kind of soap product. Sodium soaps, prepared from sodium hydroxide, are firm, whereas potassium soaps, derived from potassium hydroxide, are softer or often liquid. Historically, potassium hydroxide was extracted from the ashes of bracken or other plants. Lithium soaps also tend to be hard. These are used exclusively in greases.
For making toilet soaps, triglycerides (oils and fats) are derived from coconut, olive, or palm oils, as well as tallow. Triglyceride is the chemical name for the triesters of fatty acids and glycerin. Tallow, i.e., rendered fat, is the most available triglyceride from animals. Each species offers quite different fatty acid content, resulting in soaps of distinct feel. The seed oils give softer but milder soaps. Soap made from pure olive oil, sometimes called Castile soap or Marseille soap, is reputed for its particular mildness. The term "Castile" is also sometimes applied to soaps from a mixture of oils with a high percentage of olive oil.
History
Proto-soaps in the Ancient world
Proto-soaps, which mixed fat and alkali and were used for cleansing, are mentioned in Sumerian, Babylonian and Egyptian texts.
The earliest recorded evidence of the production of soap-like materials dates back to around 2800 BC in ancient Babylon. A formula for making a soap-like substance was written on a Sumerian clay tablet around 2500 BC. This was produced by heating a mixture of oil and wood ash, the earliest recorded chemical reaction, and used for washing woolen clothing.
The Ebers papyrus (Egypt, 1550 BC) indicates the ancient Egyptians used a soap-like product as a medicine and created this by combining animal fats or vegetable oils with a soda ash substance called trona. Egyptian documents mention a similar substance was used in the preparation of wool for weaving.
In the reign of Nabonidus (556–539 BC), a recipe for a soap-like substance consisted of uhulu [ashes], cypress [oil] and sesame [seed oil] "for washing the stones for the servant girls".
True soaps in the Ancient world
True soaps, which we might recognise as soaps today, were different from proto-soaps. They foamed, were made deliberately, and could be produced in a hard or soft form because of an understanding of lye sources. It is uncertain who first invented true soap.
Knowledge of how to produce true soap emerged at some point between the early mentions of proto-soaps and the first century AD. Alkali was used to clean textiles such as wool for thousands of years, but soap only forms when there is enough fat, and experiments show that washing wool does not create visible quantities of soap. Experiments by Sally Pointer show that the repeated laundering of materials used in perfume-making leads to noticeable amounts of soap forming. This fits with other evidence from Mesopotamian culture.
Pliny the Elder, whose writings chronicle life in the first century AD, describes soap as "an invention of the Gauls". The word sapo, Latin for soap, has been connected to a mythical Mount Sapo, a hill near the River Tiber where animals were sacrificed. In all likelihood, though, the word was borrowed from an early Germanic language and is cognate with Latin sebum, "tallow". It first appears in Pliny the Elder's account, Historia Naturalis, which discusses the manufacture of soap from tallow and ashes. There he mentions its use in the treatment of scrofulous sores, as well as among the Gauls as a dye to redden hair, which the men in Germania were more likely to use than women. The Romans avoided washing with harsh soaps before encountering the milder soaps used by the Gauls around 58 BC. Aretaeus of Cappadocia, writing in the 2nd century AD, observes among "Celts, which are men called Gauls, those alkaline substances that are made into balls [...] called soap". The Romans' preferred method of cleaning the body was to massage oil into the skin and then scrape away both the oil and any dirt with a strigil, a curved metal blade with a handle.
The 2nd-century AD physician Galen describes soap-making using lye and prescribes washing to carry away impurities from the body and clothes. The use of soap for personal cleanliness became increasingly common in this period. According to Galen, the best soaps were Germanic, and soaps from Gaul were second best. Zosimos of Panopolis, circa 300 AD, describes soap and soapmaking.
In the Southern Levant, the ashes from barilla plants, such as species of Salsola, saltwort (Seidlitzia rosmarinus) and Anabasis, were used to make potash. Traditionally, olive oil rather than animal lard was used throughout the Levant; it was boiled in a copper cauldron for several days. As the boiling progressed, alkali ashes and smaller quantities of quicklime were added and constantly stirred. In the case of lard, constant stirring was required while the mixture was kept lukewarm until it began to trace. Once it began to thicken, the brew was poured into a mold and left to cool and harden for two weeks. After hardening, it was cut into smaller cakes. Aromatic herbs, such as yarrow leaves, lavender, and germander, were often added to the rendered soap to impart their fragrance.
Ancient China
A detergent similar to soap was manufactured in ancient China from the seeds of Gleditsia sinensis. Another traditional detergent is a mixture of pig pancreas and plant ash called zhuyizi. Soap made of animal fat did not appear in China until the modern era. Soap-like detergents were not as popular as ointments and creams.
Islamic Golden Age
Hard toilet soap with a pleasant smell was produced in the Middle East during the Islamic Golden Age, when soap-making became an established industry. Recipes for soap-making are described by Muhammad ibn Zakariya al-Razi (c. 865–925), who also gave a recipe for producing glycerine from olive oil. In the Middle East, soap was produced from the interaction of fatty oils and fats with alkali. In Syria, soap was produced using olive oil together with alkali and lime. Soap was exported from Syria to other parts of the Muslim world and to Europe.
A 12th-century document describes the process of soap production. It mentions the key ingredient, alkali, which later became crucial to modern chemistry, derived from al-qaly or "ashes".
By the 13th century, the manufacture of soap in the Middle East had become a major cottage industry, with sources in Nablus, Fes, Damascus, and Aleppo.
Medieval Europe
Soapmakers in Naples were members of a guild in the late sixth century (then under the control of the Eastern Roman Empire), and in the eighth century, soap-making was well known in Italy and Spain. The Carolingian capitulary De Villis, dating to around 800 and representing the royal will of Charlemagne, mentions soap as one of the products the stewards of royal estates are to tally. Medieval Spain was a leading producer of soap by 800, and soapmaking began in the Kingdom of England about 1200. Soapmaking is mentioned both as "women's work" and as the produce of "good workmen" alongside other necessities, such as the produce of carpenters, blacksmiths, and bakers.
In Europe, soap in the 9th century was produced from animal fats and had an unpleasant smell. This changed when olive oil began to be used in soap formulas instead, after which much of Europe's soap production moved to the Mediterranean olive-growing regions. Hard toilet soap was introduced to Europe by Arabs and gradually spread as a luxury item. It was often perfumed.
By the 15th century, the manufacture of soap in Christendom often took place on an industrial scale, with sources in Antwerp, Castile, Marseille, Naples and Venice.
16th–17th century
In France, by the second half of the 16th century, the semi-industrialized professional manufacture of soap was concentrated in a few centers of Provence—Toulon, Hyères, and Marseille—which supplied the rest of France. In Marseille, by 1525, production was concentrated in at least two factories, and soap production at Marseille tended to eclipse the other Provençal centers.
English manufacture tended to concentrate in London. The demand for high-quality hard soap was significant enough during the Tudor period that barrels of ashes were imported for the manufacture of soap.
Finer soaps were later produced in Europe from the 17th century, using vegetable oils (such as olive oil) as opposed to animal fats. Many of these soaps are still produced, both industrially and by small-scale artisans. Castile soap is a popular example of the vegetable-only soaps derived from the oldest "white soap" of Italy. In 1634 Charles I granted the newly formed Society of Soapmakers a monopoly in soap production who produced certificates from 'foure Countesses, and five Viscountesses, and divers other Ladies and Gentlewomen of great credite and quality, besides common Laundresses and others', testifying that 'the New White Soap washeth whiter and sweeter than the Old Soap'.
During the Restoration era (February 1665 – August 1714) a soap tax was introduced in England, which meant that until the mid-1800s, soap was a luxury, used regularly only by the well-to-do. The soap manufacturing process was closely supervised by revenue officials who made sure that soapmakers' equipment was kept under lock and key when not being supervised. Moreover, soap could not be produced by small makers because of a law that stipulated that soap boilers must manufacture a minimum quantity of one imperial ton at each boiling, which placed the process beyond the reach of the average person. The soap trade was boosted and deregulated when the tax was repealed in 1853.
Modern period
Industrially manufactured bar soaps became available in the late 18th century, as advertising campaigns in Europe and America promoted popular awareness of the relationship between cleanliness and health. In modern times, the use of soap has become commonplace in industrialized nations due to a better understanding of the role of hygiene in reducing the population size of pathogenic microorganisms.
Until the Industrial Revolution, soapmaking was conducted on a small scale and the product was rough. In 1780, James Keir established a chemical works at Tipton, for the manufacture of alkali from the sulfates of potash and soda, to which he afterwards added a soap manufactory. The method of extraction proceeded on a discovery of Keir's. In 1790, Nicolas Leblanc discovered how to make alkali from common salt. Andrew Pears started making a high-quality, transparent soap, Pears soap, in 1807 in London. His son-in-law, Thomas J. Barratt, became the brand manager (the first of its kind) for Pears in 1865. In 1882, Barratt recruited English actress and socialite Lillie Langtry to become the poster-girl for Pears soap, making her the first celebrity to endorse a commercial product.
William Gossage produced low-priced, good-quality soap from the 1850s. Robert Spear Hudson began manufacturing a soap powder in 1837, initially by grinding the soap with a mortar and pestle. American manufacturer Benjamin T. Babbitt introduced marketing innovations that included the sale of bar soap and distribution of product samples. William Hesketh Lever and his brother, James, bought a small soap works in Warrington in 1886 and founded what is still one of the largest soap businesses, formerly called Lever Brothers and now called Unilever. These soap businesses were among the first to employ large-scale advertising campaigns.
Liquid soap
Liquid soap was invented in the nineteenth century; in 1865, William Sheppard patented a liquid version of soap. In 1898, B.J. Johnson developed a soap derived from palm and olive oils; his company, the B.J. Johnson Soap Company, introduced "Palmolive" brand soap that same year. This new brand of soap became popular rapidly, and to such a degree that B.J. Johnson Soap Company changed its name to Palmolive.
In the early 1900s, other companies began to develop their own liquid soaps. Such products as Pine-Sol and Tide appeared on the market, making the process of cleaning things other than skin, such as clothing, floors, and bathrooms, much easier.
Liquid soap also works better for more traditional or non-machine washing methods, such as using a washboard.
See also
Soap-related
Antibiotic misuse
Dishwashing soap
Foam
List of cleaning products
Hand washing
Palm oil
Soap bubble
Soap dish
Soap dispenser
Soap plant
Soap substitute
Soapwort
Shampoo
Shower gel
Toothpaste
Soap made from human corpses
References
Further reading
Donkor, Peter (1986). Small-Scale Soapmaking: A Handbook. Ebook online at SlideShare.
Dunn, Kevin M. (2010). Scientific Soapmaking: The Chemistry of Cold Process. Clavicula Press.
Garzena, Patrizia, and Marina Tadiello (2004). Soap Naturally: Ingredients, methods and recipes for natural handmade soap. Online information and Table of Contents.
Garzena, Patrizia, and Marina Tadiello (2013). The Natural Soapmaking Handbook. Online information and Table of Contents.
Mohr, Merilyn (1979). The Art of Soap Making. A Harrowsmith Contemporary Primer. Firefly Books.
Spencer, Bob; Practical Action (2005). Soapmaking. Ebook online.
Thomssen, E. G., Ph.D. (1922). Soap-Making Manual. Free ebook at Project Gutenberg.
External links
History of Soap making – SoapHistory
Anionic surfactants
Cleaning products
Salts
Skin care
Bathing
Articles containing video clips | Soap | [
"Chemistry"
] | 3,943 | [
"Products of chemical industry",
"Cleaning products",
"Salts"
] |
57,877 | https://en.wikipedia.org/wiki/Sodium%20hydroxide | Sodium hydroxide, also known as lye and caustic soda, is an inorganic compound with the formula NaOH. It is a white solid ionic compound consisting of sodium cations Na+ and hydroxide anions OH−.
Sodium hydroxide is a highly corrosive base and alkali that decomposes lipids and proteins at ambient temperatures and may cause severe chemical burns. It is highly soluble in water, and readily absorbs moisture and carbon dioxide from the air. It forms a series of hydrates NaOH·nH2O. The monohydrate NaOH·H2O crystallizes from water solutions between 12.3 and 61.8 °C. The commercially available "sodium hydroxide" is often this monohydrate, and published data may refer to it instead of the anhydrous compound.
As one of the simplest hydroxides, sodium hydroxide is frequently used alongside neutral water and acidic hydrochloric acid to demonstrate the pH scale to chemistry students.
Sodium hydroxide is used in many industries: in the making of wood pulp and paper, textiles, drinking water, soaps and detergents, and as a drain cleaner. Worldwide production in 2022 was approximately 83 million tons.
Properties
Physical properties
Pure sodium hydroxide is a colorless crystalline solid that melts at 318 °C without decomposition and boils at 1,388 °C. It is highly soluble in water, with a lower solubility in polar solvents such as ethanol and methanol. Sodium hydroxide is insoluble in ether and other non-polar solvents.
Similar to the hydration of sulfuric acid, dissolution of solid sodium hydroxide in water is a highly exothermic reaction where a large amount of heat is liberated, posing a threat to safety through the possibility of splashing. The resulting solution is usually colorless and odorless. As with other alkaline solutions, it feels slippery with skin contact due to the process of saponification that occurs between NaOH and natural skin oils.
Viscosity
Concentrated (50%) aqueous solutions of sodium hydroxide have a characteristic viscosity, 78 mPa·s, that is much greater than that of water (1.0 mPa·s) and near that of olive oil (85 mPa·s) at room temperature. The viscosity of aqueous NaOH, as with any liquid chemical, is inversely related to its temperature, i.e., its viscosity decreases as temperature increases, and vice versa. The viscosity of sodium hydroxide solutions plays a direct role in its application as well as its storage.
Hydrates
Sodium hydroxide can form several hydrates NaOH·nH2O, which result in a complex solubility diagram that was described in detail by Spencer Umfreville Pickering in 1893. The known hydrates and the approximate ranges of temperature and concentration (mass percent of NaOH) of their saturated water solutions are:
Heptahydrate, NaOH·7H2O: from −28 °C (18.8%) to −24 °C (22.2%).
Pentahydrate, NaOH·5H2O: from −24 °C (22.2%) to −17.7 °C (24.8%).
Tetrahydrate, NaOH·4H2O, α form: from −17.7 °C (24.8%) to 5.4 °C (32.5%).
Tetrahydrate, NaOH·4H2O, β form: metastable.
Trihemihydrate, NaOH·3.5H2O: from 5.4 °C (32.5%) to 15.38 °C (38.8%) and then to 5.0 °C (45.7%).
Trihydrate, NaOH·3H2O: metastable.
Dihydrate, NaOH·2H2O: from 5.0 °C (45.7%) to 12.3 °C (51%).
Monohydrate, NaOH·H2O: from 12.3 °C (51%) to 65.10 °C (69%) then to 62.63 °C (73.1%).
Early reports refer to hydrates with n = 0.5 or n = 2/3, but later careful investigations failed to confirm their existence.
The only hydrates with stable melting points are NaOH·H2O (65.10 °C) and NaOH·3.5H2O (15.38 °C). The other hydrates, except the metastable NaOH·3H2O and NaOH·4H2O (β), can be crystallized from solutions of the proper composition, as listed above. However, solutions of NaOH can be easily supercooled by many degrees, which allows the formation of hydrates (including the metastable ones) from solutions with different concentrations.
For example, when a solution of NaOH and water with 1:2 mole ratio (52.6% NaOH by mass) is cooled, the monohydrate normally starts to crystallize (at about 22 °C) before the dihydrate. However, the solution can easily be supercooled down to −15 °C, at which point it may quickly crystallize as the dihydrate. When heated, the solid dihydrate might melt directly into a solution at 13.35 °C; however, once the temperature exceeds 12.58 °C it often decomposes into solid monohydrate and a liquid solution. Even the n = 3.5 hydrate is difficult to crystallize, because the solution supercools so much that other hydrates become more stable.
A hot water solution containing 73.1% (mass) of NaOH is a eutectic that solidifies at about 62.63 °C as an intimate mix of anhydrous NaOH and monohydrate crystals.
A second stable eutectic composition is 45.4% (mass) of NaOH, that solidifies at about 4.9 °C into a mixture of crystals of the dihydrate and of the 3.5-hydrate.
The third stable eutectic has 18.4% (mass) of NaOH. It solidifies at about −28.7 °C as a mixture of water ice and the heptahydrate NaOH·7H2O.
When solutions with less than 18.4% NaOH are cooled, water ice crystallizes first, leaving the NaOH in solution.
The α form of the tetrahydrate has density 1.33 g/cm3. It melts congruently at 7.55 °C into a liquid with 35.7% NaOH and density 1.392 g/cm3, and therefore floats on it like ice on water. However, at about 4.9 °C it may instead melt incongruently into a mixture of solid NaOH·3.5H2O and a liquid solution.
The β form of the tetrahydrate is metastable, and often transforms spontaneously to the α form when cooled below −20 °C. Once initiated, the exothermic transformation is complete in a few minutes, with a 6.5% increase in volume of the solid. The β form can be crystallized from supercooled solutions at −26 °C, and melts partially at −1.83 °C.
The "sodium hydroxide" of commerce is often the monohydrate (density 1.829 g/cm3). Physical data in technical literature may refer to this form, rather than the anhydrous compound.
Crystal structure
NaOH and its monohydrate form orthorhombic crystals with the space groups Cmcm (oS8) and Pbca (oP24), respectively. The monohydrate cell dimensions are a = 1.1825, b = 0.6213, c = 0.6069 nm. The atoms are arranged in a hydrargillite-like layer structure, with each sodium atom surrounded by six oxygen atoms, three from hydroxide ions and three from water molecules. The hydrogen atoms of the hydroxyls form strong bonds with oxygen atoms within each O layer. Adjacent O layers are held together by hydrogen bonds between water molecules.
Chemical properties
Reaction with acids
Sodium hydroxide reacts with protic acids to produce water and the corresponding salts. For example, when sodium hydroxide reacts with hydrochloric acid, sodium chloride is formed:
NaOH + HCl → NaCl + H2O
In general, such neutralization reactions are represented by one simple net ionic equation:
OH− + H+ → H2O
This type of reaction with a strong acid releases heat, and hence is exothermic. Such acid–base reactions can also be used for titrations. However, sodium hydroxide is not used as a primary standard because it is hygroscopic and absorbs carbon dioxide from air.
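As an illustration with made-up numbers: if 25.0 mL of hydrochloric acid of unknown concentration is exactly neutralized by 20.0 mL of 0.100 M sodium hydroxide, the 1:1 stoichiometry above gives c(HCl) = (0.100 mol/L × 0.0200 L) / 0.0250 L = 0.0800 mol/L.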
Reaction with acidic oxides
Sodium hydroxide also reacts with acidic oxides, such as sulfur dioxide. Such reactions are often used to "scrub" harmful acidic gases (like SO2 and H2S) produced in the burning of coal and thus prevent their release into the atmosphere. For example,
2 NaOH + SO2 → Na2SO3 + H2O
Reaction with metals and oxides
Glass reacts slowly with aqueous sodium hydroxide solutions at ambient temperatures to form soluble silicates. Because of this, glass joints and stopcocks exposed to sodium hydroxide have a tendency to "freeze". Flasks and glass-lined chemical reactors are damaged by long exposure to hot sodium hydroxide, which also frosts the glass. Sodium hydroxide does not attack iron at room temperature, since iron does not have amphoteric properties (i.e., it only dissolves in acid, not base).
Nevertheless, at high temperatures (e.g. above 500 °C), iron can react endothermically with sodium hydroxide to form iron(III) oxide, sodium metal, and hydrogen gas. This is due to the lower enthalpy of formation of iron(III) oxide (−824.2 kJ/mol) compared to sodium hydroxide (−500 kJ/mol) and the positive entropy change of the reaction, which implies spontaneity at high temperatures (TΔS > ΔH, ΔG < 0) and non-spontaneity at low temperatures (TΔS < ΔH, ΔG > 0). Consider the following reaction between molten sodium hydroxide and finely divided iron filings:
4 Fe + 6 NaOH → 2 Fe2O3 + 6 Na + 3 H2
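A quick sign check, using only the enthalpies of formation quoted above (the elements Fe, Na and H2 contribute zero):
ΔH°rxn = 2 × (−824.2 kJ/mol) − 6 × (−500 kJ/mol) ≈ +1352 kJ per four moles of iron
The positive value confirms the reaction absorbs heat, so it proceeds only when the TΔS term outweighs ΔH at high temperature.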
A few transition metals, however, may react quite vigorously with sodium hydroxide under milder conditions.
In 1986, an aluminium road tanker in the UK was mistakenly used to transport 25% sodium hydroxide solution, causing pressurization of the contents and damage to the tanker. The pressurization is due to the hydrogen gas produced in the reaction between sodium hydroxide and aluminium:
2 Al + 2 NaOH + 2 H2O → 2 NaAlO2 + 3 H2
Precipitant
Unlike sodium hydroxide, which is soluble, the hydroxides of most transition metals are insoluble, and therefore sodium hydroxide can be used to precipitate transition metal hydroxides. The following colours are observed (a representative equation follows the list):
Copper - blue
Iron(II) - green
Iron(III) - yellow / brown
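As a representative example, the ionic equation for the copper case above:
Cu2+ + 2 OH− → Cu(OH)2↓
The blue solid is copper(II) hydroxide.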
Zinc and lead salts dissolve in excess sodium hydroxide to give a clear solution of Na2ZnO2 or Na2PbO2.
Aluminium hydroxide is used as a gelatinous flocculant to filter out particulate matter in water treatment. Aluminium hydroxide is prepared at the treatment plant from aluminium sulfate by reacting it with sodium hydroxide or bicarbonate.
Saponification
Sodium hydroxide can be used for the base-driven hydrolysis of esters (also called saponification), amides and alkyl halides. However, the limited solubility of sodium hydroxide in organic solvents means that the more soluble potassium hydroxide (KOH) is often preferred. Touching a sodium hydroxide solution with bare hands, while not recommended, produces a slippery feeling. This happens because oils on the skin such as sebum are converted to soap.
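Schematically, for a simple ester (R and R′ standing for arbitrary organic groups):
RCO2R′ + NaOH → RCO2Na + R′OH
With a triglyceride, three such cleavages yield three sodium salts of fatty acids, i.e. soap, plus glycerol.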
Although sodium hydroxide is soluble in propylene glycol, propylene glycol is unlikely to replace water in saponification, because it reacts preferentially with the fat before the sodium hydroxide can.
Production
Sodium hydroxide is industrially produced by variations of the electrolytic chloralkali process, first as a 32% solution that is then evaporated to a 50% solution. Chlorine gas is also produced in this process. Solid sodium hydroxide is obtained from this solution by the evaporation of water. Solid sodium hydroxide is most commonly sold as flakes, prills, and cast blocks.
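The overall chemistry of the chloralkali electrolysis can be summarized as:
2 NaCl + 2 H2O → 2 NaOH + Cl2 + H2
with chlorine evolved at the anode and hydrogen, along with hydroxide, formed at the cathode.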
In 2022, world production was estimated at 83 million dry tonnes of sodium hydroxide, and demand was estimated at 51 million tonnes. In 1998, total world production was around 45 million tonnes. North America and Asia each contributed around 14 million tonnes, while Europe produced around 10 million tonnes. In the United States, the major producer of sodium hydroxide is Olin, which has annual production around 5.7 million tonnes from sites at Freeport, Texas; Plaquemine, Louisiana; St. Gabriel, Louisiana; McIntosh, Alabama; Charleston, Tennessee; Niagara Falls, New York; and Bécancour, Canada. Other major US producers include Oxychem, Westlake, Shintech, and Formosa. All of these companies use the chloralkali process.
Historically, sodium hydroxide was produced by treating sodium carbonate with calcium hydroxide (slaked lime) in a metathesis reaction which takes advantage of the fact that sodium hydroxide is soluble, while calcium carbonate is not. This process was called causticizing.
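Written out, the causticizing reaction is:
Ca(OH)2 + Na2CO3 → 2 NaOH + CaCO3
The insoluble calcium carbonate settles out, leaving a solution of sodium hydroxide.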
The sodium carbonate for this reaction was produced by the Leblanc process in the early 19th century, or the Solvay process in the late 19th century. The conversion of sodium carbonate to sodium hydroxide was superseded entirely by the chloralkali process, which produces sodium hydroxide in a single process.
Sodium hydroxide is also produced by combining pure sodium metal with water. The byproducts are hydrogen gas and heat, often resulting in a flame.
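The reaction is:
2 Na + 2 H2O → 2 NaOH + H2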
This reaction is commonly used for demonstrating the reactivity of alkali metals in academic environments; however, it is not used commercially aside from a reaction within the mercury cell chloralkali process where sodium amalgam is reacted with water.
Uses
Sodium hydroxide is a popular strong base used in industry. Sodium hydroxide is used in the manufacture of sodium salts and detergents, pH regulation, and organic synthesis. In bulk, it is most often handled as an aqueous solution, since solutions are cheaper and easier to handle.
Sodium hydroxide is used in many scenarios where it is desirable to increase the alkalinity of a mixture, or to neutralize acids. For example, in the petroleum industry, sodium hydroxide is used as an additive in drilling mud to increase alkalinity in bentonite mud systems, to increase the mud viscosity, and to neutralize any acid gas (such as hydrogen sulfide and carbon dioxide) which may be encountered in the geological formation as drilling progresses. Another use is in salt spray testing where pH needs to be regulated. Sodium hydroxide is used with hydrochloric acid to balance pH. The resultant salt, NaCl, is the corrosive agent used in the standard neutral pH salt spray test.
Poor quality crude oil can be treated with sodium hydroxide to remove sulfurous impurities in a process known as caustic washing. Sodium hydroxide reacts with weak acids such as hydrogen sulfide and mercaptans to yield non-volatile sodium salts, which can be removed. The waste which is formed is toxic and difficult to deal with, and the process is banned in many countries because of this. In 2006, Trafigura used the process and then dumped the waste in Ivory Coast.
Other common uses of sodium hydroxide include:
for making soaps and detergents. Sodium hydroxide is used for hard bar soap, while potassium hydroxide is used for liquid soaps. Sodium hydroxide is used more often than potassium hydroxide because it is cheaper and a smaller quantity is needed.
as drain cleaners that convert pipe-clogging fats and grease into soap, which dissolves in water
for making artificial textile fibres such as rayon
in the manufacture of paper. Around 56% of sodium hydroxide produced is used by industry, 25% of which is used in the paper industry.
in purifying bauxite ore from which aluminium metal is extracted. This is known as the Bayer process.
de-greasing metals
oil refining
making dyes and bleaches
in water treatment plants for pH regulation
to treat bagels and pretzel dough, giving the distinctive shiny finish
Chemical pulping
Sodium hydroxide is also widely used in pulping of wood for making paper or regenerated fibers. Along with sodium sulfide, sodium hydroxide is a key component of the white liquor solution used to separate lignin from cellulose fibers in the kraft process. It also plays a key role in several later stages of the process of bleaching the brown pulp resulting from the pulping process. These stages include oxygen delignification, oxidative extraction, and simple extraction, all of which require a strong alkaline environment with a pH > 10.5 at the end of the stages.
Tissue digestion
In a similar fashion, sodium hydroxide is used to digest tissues, as in a process that was at one time used to dispose of farm animal carcasses. The carcass was placed in a sealed chamber, to which a mixture of sodium hydroxide and water was added; this breaks the chemical bonds that keep the flesh intact. The body is eventually turned into a liquid with a dark brown color, and the only solids that remain are bone hulls, which can be crushed between one's fingertips.
Sodium hydroxide is frequently used in the process of decomposing roadkill dumped in landfills by animal disposal contractors. Due to its availability and low cost, it has been used by criminals to dispose of corpses. Italian serial killer Leonarda Cianciulli used this chemical to turn dead bodies into soap. In Mexico, a man who worked for drug cartels admitted disposing of over 300 bodies with it.
Sodium hydroxide is a dangerous chemical due to its ability to hydrolyze protein. If a dilute solution is spilled on the skin, burns may result if the area is not washed thoroughly and for several minutes with running water. Splashes in the eye can be more serious and can lead to blindness.
Dissolving amphoteric metals and compounds
Strong bases attack aluminium. Sodium hydroxide reacts with aluminium and water to release hydrogen gas. The aluminium takes an oxygen atom from sodium hydroxide, which in turn takes an oxygen atom from water, and releases two hydrogen atoms. The reaction thus produces hydrogen gas and sodium aluminate. In this reaction, sodium hydroxide acts as an agent to make the solution alkaline, which aluminium can dissolve in.
2 Al + 2 NaOH + 2 H2O → 2 NaAlO2 + 3 H2
Sodium aluminate is an inorganic chemical that is used as an effective source of aluminium hydroxide for many industrial and technical applications. Pure sodium aluminate (anhydrous) is a white crystalline solid having a formula variously given as NaAlO2, NaAl(OH)4 (hydrated), Na2O·Al2O3, or Na2Al2O4. Formation of sodium tetrahydroxoaluminate(III) or hydrated sodium aluminate is given by:
2 Al + 2 NaOH + 6 H2O → 2 Na[Al(OH)4] + 3 H2
This reaction can be useful in etching, removing anodizing, or converting a polished surface to a satin-like finish, but without further passivation such as anodizing or alodining the surface may become degraded, either under normal use or in severe atmospheric conditions.
In the Bayer process, sodium hydroxide is used in the refining of alumina containing ores (bauxite) to produce alumina (aluminium oxide) which is the raw material used to produce aluminium via the electrolytic Hall-Héroult process. Since the alumina is amphoteric, it dissolves in the sodium hydroxide, leaving impurities less soluble at high pH such as iron oxides behind in the form of a highly alkaline red mud.
Other amphoteric metals are zinc and lead which dissolve in concentrated sodium hydroxide solutions to give sodium zincate and sodium plumbate respectively.
Esterification and transesterification reagent
Sodium hydroxide is traditionally used in soap making (cold process soap, saponification). It was preferred in the nineteenth century because it yielded a hard soap rather than a liquid product, which was easier to store and transport.
For the manufacture of biodiesel, sodium hydroxide is used as a catalyst for the transesterification of methanol and triglycerides. This only works with anhydrous sodium hydroxide, because combined with water the fat would turn into soap, which would be tainted with methanol. NaOH is used more often than potassium hydroxide because it is cheaper and a smaller quantity is needed. Due to production costs, NaOH, which is produced using common salt is cheaper than potassium hydroxide.
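Schematically, with NaOH acting as the base catalyst:
triglyceride + 3 CH3OH → 3 fatty acid methyl esters (biodiesel) + glycerol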
Skincare ingredient
Sodium hydroxide is an ingredient used in some skincare and cosmetic products, such as facial cleansers, creams, lotions, and makeup. It is typically used in low concentration as a pH balancer, due to its highly alkaline nature.
Food preparation
Food uses of sodium hydroxide include washing or chemical peeling of fruits and vegetables, chocolate and cocoa processing, caramel coloring production, poultry scalding, soft drink processing, and thickening ice cream. Olives are often soaked in sodium hydroxide for softening; pretzels and German lye rolls are glazed with a sodium hydroxide solution before baking to make them crisp. Owing to the difficulty in obtaining food grade sodium hydroxide in small quantities for home use, sodium carbonate is often used in place of sodium hydroxide. It is known as E number E524.
Specific foods processed with sodium hydroxide include:
German pretzels are poached in a boiling sodium carbonate solution or cold sodium hydroxide solution before baking, which contributes to their unique crust.
Lye water is an essential ingredient in the crust of the traditional baked Chinese moon cakes.
Most yellow coloured Chinese noodles are made with lye water but are commonly mistaken for containing egg.
One variety of zongzi uses lye water to impart a sweet flavor.
Sodium hydroxide causes gelling of egg whites in the production of century eggs.
Some methods of preparing olives involve subjecting them to a lye-based brine.
The Filipino dessert known as kutsinta uses a small quantity of lye water to help give the rice flour batter a jelly-like consistency. A similar process is also used in the kakanin known as pichi-pichi, except that the mixture uses grated cassava instead of rice flour.
The Norwegian dish known as lutefisk, dried whitefish soaked in a lye solution.
Bagels are often boiled in a lye solution before baking, contributing to their shiny crust.
Hominy is dried maize (corn) kernels reconstituted by soaking in lye-water. These expand considerably in size and may be further processed by frying to make corn nuts or by drying and grinding to make grits. Hominy is used to create masa, a popular flour used in Mexican cuisine to make corn tortillas and tamales. Nixtamal is similar, but uses calcium hydroxide instead of sodium hydroxide.
Cleaning agent
Sodium hydroxide is frequently used as an industrial cleaning agent where it is often called "caustic". It is added to water, heated, and then used to clean process equipment, storage tanks, etc. It can dissolve grease, oils, fats and protein-based deposits. It is also used for cleaning waste discharge pipes under sinks and drains in domestic properties. Surfactants can be added to the sodium hydroxide solution in order to stabilize dissolved substances and thus prevent redeposition. A sodium hydroxide soak solution is used as a powerful degreaser on stainless steel and glass bakeware. It is also a common ingredient in oven cleaners.
A common use of sodium hydroxide is in the production of parts washer detergents. Parts washer detergents based on sodium hydroxide are some of the most aggressive parts washer cleaning chemicals. The sodium hydroxide-based detergents include surfactants, rust inhibitors and defoamers. A parts washer heats water and the detergent in a closed cabinet and then sprays the heated sodium hydroxide and hot water at pressure against dirty parts for degreasing applications. Sodium hydroxide used in this manner replaced many solvent-based systems in the early 1990s when trichloroethane was outlawed by the Montreal Protocol. Water and sodium hydroxide detergent-based parts washers are considered to be an environmental improvement over the solvent-based cleaning methods.
Sodium hydroxide is used in the home as a type of drain opener to unblock clogged drains, usually in the form of a dry crystal or as a thick liquid gel. The alkali dissolves greases to produce water-soluble products. It also hydrolyzes proteins, such as those found in hair, which may block water pipes. These reactions are sped up by the heat generated when sodium hydroxide and the other chemical components of the cleaner dissolve in water. Such alkaline drain cleaners and their acidic versions are highly corrosive and should be handled with great caution.
Relaxer
Sodium hydroxide is used in some relaxers to straighten hair. However, because of the high incidence and intensity of chemical burns, manufacturers of chemical relaxers use other alkaline chemicals in preparations available to consumers. Sodium hydroxide relaxers are still available, but they are used mostly by professionals.
Paint stripper
A solution of sodium hydroxide in water was traditionally used as the most common paint stripper on wooden objects. Its use has become less common, because it can damage the wood surface, raising the grain and staining the colour.
Water treatment
Sodium hydroxide is sometimes used during water purification to raise the pH of water supplies. Increased pH makes the water less corrosive to plumbing and reduces the amount of lead, copper and other toxic metals that can dissolve into drinking water.
Historical uses
Sodium hydroxide has been used for detection of carbon monoxide poisoning, with blood samples of such patients turning to a vermilion color upon the addition of a few drops of sodium hydroxide. Today, carbon monoxide poisoning can be detected by CO oximetry.
In cement mixes, mortars, concrete, grouts
Sodium hydroxide is used in some cement mix plasticisers. It helps homogenise cement mixes, prevents segregation of sands and cement, decreases the amount of water required in a mix, and increases the workability of the cement product, be it mortar, render or concrete.
Safety
Like corrosive acids and other alkalis, a few drops of sodium hydroxide solution can readily decompose proteins and lipids in living tissues via amide hydrolysis and ester hydrolysis, which consequently causes chemical burns and may induce permanent blindness upon contact with the eyes. Solid alkali can also express its corrosive nature in the presence of water, such as water vapor. Thus, protective equipment, like rubber gloves, safety clothing and eye protection, should always be used when handling this chemical or its solutions. The standard first aid measure for alkali spills on the skin is, as for other corrosives, irrigation with large quantities of water. Washing is continued for at least ten to fifteen minutes.
Moreover, dissolution of sodium hydroxide is highly exothermic, and the resulting heat may cause heat burns or ignite flammables. It also produces heat when reacted with acids.
Sodium hydroxide is mildly corrosive to glass, which can cause damage to glazing or cause ground glass joints to bind. Sodium hydroxide is corrosive to several metals, like aluminium which reacts with the alkali to produce flammable hydrogen gas on contact.
Storage
Careful storage is needed when handling sodium hydroxide for use, especially bulk volumes. Following proper NaOH storage guidelines and maintaining worker/environment safety is always recommended given the chemical's burn hazard.
Sodium hydroxide is often stored in bottles for small-scale laboratory use, within intermediate bulk containers (medium volume containers) for cargo handling and transport, or within large stationary storage tanks with volumes up to 100,000 gallons for manufacturing or waste water plants with extensive NaOH use. Common materials that are compatible with sodium hydroxide and often utilized for NaOH storage include: polyethylene (HDPE, usual, XLPE, less common), carbon steel, polyvinyl chloride (PVC), stainless steel, and fiberglass reinforced plastic (FRP, with a resistant liner).
Sodium hydroxide must be stored in airtight containers to preserve its normality as it will absorb water and carbon dioxide from the atmosphere.
History
Sodium hydroxide was first prepared by soap makers. A procedure for making sodium hydroxide appeared as part of a recipe for making soap in a late 13th-century Arabic book, Inventions from the Various Industrial Arts, compiled by al-Muzaffar Yusuf ibn 'Umar ibn 'Ali ibn Rasul (d. 1295), a king of Yemen. The recipe called for passing water repeatedly through a mixture of alkali (Arabic: al-qaly, ash from saltwort plants, which are rich in sodium; hence alkali was impure sodium carbonate) and quicklime (calcium oxide, CaO), whereby a solution of sodium hydroxide was obtained. European soap makers also followed this recipe. When in 1791 the French chemist and surgeon Nicolas Leblanc (1742–1806) patented a process for mass-producing sodium carbonate, natural "soda ash" (impure sodium carbonate that was obtained from the ashes of plants that are rich in sodium) was replaced by this artificial version. However, by the 20th century, the electrolysis of sodium chloride had become the primary method for producing sodium hydroxide.
See also
Acid and base
HAZMAT Class 8 Corrosive Substances
List of cleaning agents
References
Bibliography
External links
International Chemical Safety Card 0360
Euro Chlor-How is chlorine made? Chlorine Online
NIOSH Pocket Guide to Chemical Hazards
CDC – Sodium Hydroxide – NIOSH Workplace Safety and Health Topic
Production by brine electrolysis
Data sheets
Technical charts (pages 33–41) for enthalpy, temperature and pressure
Sodium Hydroxide MSDS
Certified Lye MSDS
Hill Brothers MSDS
Titration of acids with sodium hydroxide; freeware for data analysis, simulation of curves and pH calculation
Caustic soda production in continuous causticising plant by lime soda process
Chemical engineering
Cleaning products
Deliquescent materials
Desiccants
Household chemicals
Hydroxides
Inorganic compounds
Photographic chemicals
Sodium compounds
E-number additives
Food acidity regulators | Sodium hydroxide | [
"Physics",
"Chemistry",
"Engineering"
] | 6,213 | [
"Inorganic compounds",
"Products of chemical industry",
"Chemical engineering",
"Hydroxides",
"Cleaning products",
"Desiccants",
"Materials",
"nan",
"Deliquescent materials",
"Bases (chemistry)",
"Matter"
] |