id int64 580 79M | url stringlengths 31 175 | text stringlengths 9 245k | source stringlengths 1 109 | categories stringclasses 160 values | token_count int64 3 51.8k |
|---|---|---|---|---|---|
54,672,540 | https://en.wikipedia.org/wiki/Quantum%20Boltzmann%20equation | The quantum Boltzmann equation, also known as the Uehling–Uhlenbeck equation, is the quantum mechanical modification of the Boltzmann equation, which gives the nonequilibrium time evolution of a gas of quantum-mechanically interacting particles. Typically, the quantum Boltzmann equation is given as only the “collision term” of the full Boltzmann equation, giving the change of the momentum distribution of a locally homogeneous gas, but not the drift and diffusion in space. It was originally formulated by L. W. Nordheim (1928) and by E. A. Uehling and George Uhlenbeck (1933).
In full generality (including the p-space and x-space drift terms, which are often neglected) the equation is represented analogously to the Boltzmann equation.
where one term represents an externally applied potential acting on the gas's p-space distribution and the other is the collision operator, accounting for the interactions between the gas particles. The quantum mechanics enters through the exact form of the collision operator, which depends on the physics of the system to be modeled.
The quantum Boltzmann equation gives irreversible behavior, and therefore an arrow of time; that is, after a long enough time it gives an equilibrium distribution which no longer changes. Although quantum mechanics is microscopically time-reversible, the quantum Boltzmann equation gives irreversible behavior because phase information is discarded; only the average occupation number of the quantum states is kept. The solution of the quantum Boltzmann equation is therefore a good approximation to the exact behavior of the system on time scales short compared to the Poincaré recurrence time, which is usually not a severe limitation, because the Poincaré recurrence time can be many times the age of the universe even in small systems.
The quantum Boltzmann equation has been verified by direct comparison to time-resolved experimental measurements, and in general has found much use in semiconductor optics. For example, the energy distribution of a gas of excitons as a function of time (in picoseconds), measured using a streak camera, has been shown to approach an equilibrium Maxwell-Boltzmann distribution.
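As a toy illustration of this relaxation toward equilibrium, the sketch below evolves a far-from-equilibrium energy distribution under a relaxation-time (BGK-type) caricature of the collision term. This is an assumed simplification chosen for illustration, not the Uehling–Uhlenbeck collision integral itself; all grid sizes and parameters are arbitrary.

```python
import math

# Energy grid and a Maxwell-Boltzmann target distribution (units with k_B = T = 1).
E = [5.0 * i / 199 for i in range(200)]
f_eq = [math.exp(-e) for e in E]

# A far-from-equilibrium initial state, normalized to the same particle number.
f = [1.0 if e < 1.0 else 0.0 for e in E]
norm = sum(f_eq) / sum(f)
f = [x * norm for x in f]

def dist(a, b):
    """L1 distance between two distributions on the grid."""
    return sum(abs(x - y) for x, y in zip(a, b))

# Relaxation-time collision term df/dt = -(f - f_eq)/tau, integrated with Euler steps.
tau, dt = 1.0, 0.01
d0 = dist(f, f_eq)
for _ in range(1000):
    f = [x + dt * (y - x) / tau for x, y in zip(f, f_eq)]
d1 = dist(f, f_eq)
# d1/d0 ~ exp(-t/tau) with t = 10: the distribution "forgets" its initial state,
# mirroring the irreversible approach to equilibrium described above.
```

The exponential decay of the distance to equilibrium is the simplest model of the arrow of time the text describes; the full collision integral produces the same qualitative behavior with a momentum-dependent rate.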
Application to semiconductor physics
A typical model of a semiconductor may be built on the assumptions that:
The electron distribution is spatially homogeneous to a reasonable approximation (so all x-dependence may be suppressed)
The external potential is a function only of position and isotropic in p-space, and so may be set to zero without losing any further generality
The gas is sufficiently dilute that three-body interactions between electrons may be ignored.
Considering the exchange of momentum between electrons with initial momenta and , it is possible to derive the expression
References
Statistical mechanics | Quantum Boltzmann equation | Physics | 555 |
42,227,280 | https://en.wikipedia.org/wiki/Orscholz%20Switch | The Orscholz Switch, or Siegfried Switch, was a military defensive "switch" position and part of the Siegfried Line (Westwall) located in the triangle between the rivers Saar and Moselle. It was built in 1939 and 1940 and incorporated 75 bunkers as well as 10.2 km of tank obstacles in the form of dragon's teeth. This defensive line ran from Trier to Nennig along the Moselle and from Nennig in an easterly direction to Orscholz on the loop in the Saar river at Mettlach.
In 1945, towards the end of the Second World War, the Orscholz Switch was the scene of months of hard fighting.
During Operation Undertone (15–24 March 1945) it lay on the left flank of advancing US Army units.
See also
Besseringen B-Werk
Sources
External links
US Army in World War II - The Last Offensive
Fortresses in Germany
Merzig-Wadern
Siegfried Line | Orscholz Switch | Engineering | 203 |
13,081,775 | https://en.wikipedia.org/wiki/Wood%20Screw%20Pump | The Wood Screw Pump is a low-lift axial-flow drainage pump designed by A. Baldwin Wood in 1913 to cope with the drainage problems of New Orleans. Wood's extremely efficient pumps replaced less efficient pumps in the city's drainage system, prior to which the city had experienced chronic flooding problems, bringing diseases such as malaria and yellow fever along with contamination of drinking water supplies. The pumps are driven by synchronous Allis-Chalmers and General Electric motors, built in the early 1900s. They were designed to lift a large volume of water into outfall canals that flowed into Lake Pontchartrain.
Having proved their operational efficiency in New Orleans, officials around the world wanted Wood to make pumps for them, especially those in the Netherlands. Wood rejected all requests as he refused to leave Louisiana. Until the arrival of Hurricane Katrina, the pumps had kept much of New Orleans from experiencing major inundation for nearly 100 years.
References
Pumps
Hydraulics | Wood Screw Pump | Physics,Chemistry | 211 |
17,770,053 | https://en.wikipedia.org/wiki/Comparison%20of%20cluster%20software | The following tables compare general and technical information for notable computer cluster software. This software can be roughly divided into four categories: job scheduler, node management, node installation, and integrated stack (all of the above).
General information
Table explanation
Software: The name of the application that is described
Technical information
Table Explanation
Software: The name of the application that is described
SMP aware:
basic: hard split into multiple virtual hosts
basic+: hard split into multiple virtual hosts, with some minimal/incomplete communication between virtual hosts on the same computer
dynamic: splits the resources of the computer (CPU/RAM) on demand
See also
List of volunteer computing projects
List of cluster management software
Computer cluster
Grid computing
World Community Grid
Distributed computing
Distributed resource management
High-Throughput Computing
Job Processing Cycle
Batch processing
Fallacies of Distributed Computing
References
Cluster computing
Cluster software
Job scheduling | Comparison of cluster software | Technology | 168 |
11,306,483 | https://en.wikipedia.org/wiki/Neodeightonia%20phoenicum | Neodeightonia phoenicum is a plant pathogen.
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Botryosphaeriales
Fungi described in 1890
Fungus species | Neodeightonia phoenicum | Biology | 41 |
33,926,288 | https://en.wikipedia.org/wiki/Germacranolide | Germacranolides are a group of natural chemical compounds classified as sesquiterpene lactones. They are found in a variety of plant species and are known for their diverse and complex topology, as well as a wide array of pharmacological activities.
References
Sesquiterpene lactones
Oxygen heterocycles | Germacranolide | Chemistry | 72 |
3,150,728 | https://en.wikipedia.org/wiki/RecLOH | RecLOH is a term in genetics that is an abbreviation for "Recombinant Loss of Heterozygosity".
This is a type of mutation that occurs in DNA by recombination. From a pair of equivalent ("homologous") but slightly different (heterozygous) genes, a pair of identical genes results. In this case there is a non-reciprocal exchange of genetic code between the chromosomes, in contrast to chromosomal crossover, because genetic information is lost.
For Y chromosome
In genetic genealogy, the term is used particularly concerning similar seeming events in Y chromosome DNA. This type of mutation happens within one chromosome, and does not involve a reciprocal transfer. Rather, one homologous segment "writes over" the other. The mechanism is presumed to be different from RecLOH events in autosomal chromosomes, since the target is the very same chromosome instead of the homologous one.
During the mutation one of these copies overwrites the other. Thus the differences between the two are lost. Because differences are lost, heterozygosity is lost.
Recombination on the Y chromosome takes place not only during meiosis but at virtually every mitosis when the Y chromosome condenses, because it does not require pairing between chromosomes. The recombination frequency even exceeds the frameshift mutation frequency (slipped-strand mispairing) of averagely fast Y-STRs; however, many recombination products may lead to infertile germ cells and so "daughter out".
Recombination events (RecLOH) can be observed if YSTR databases are searched for twin alleles at 3 or more duplicated markers on the same palindrome (hairpin).
For example, DYS459, DYS464 and DYS724 (CDY) are located on the same palindrome, P1. A high proportion of 9-9, 15-15-17-17, 36-36 combinations and similar twin-allelic patterns will be found. PCR typing technologies have been developed (e.g. DYS464X) that can verify that there really are, most frequently, two alleles of each, so we can be sure that there is no gene deletion. Family genealogies have shown many times that parallel changes at all markers located on the same palindrome are frequently observed, and the results of those changes are always twin alleles. So a 9-10, 15-16-17-17, 36-38 haplotype can change in one recombination event to the one mentioned above, because all three markers (DYS459, DYS464 and DYS724) are affected by one and the same RecLOH event.
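The twin-allele signature of a RecLOH event can be illustrated with a toy model. The marker values below are hypothetical, chosen to match the haplotypes discussed above, and the dictionaries are a deliberate oversimplification of a real palindrome arm:

```python
# Two arms of a Y-chromosome palindrome carrying duplicated markers
# (values hypothetical, matching the 9-10, 15-16-17-17, 36-38 haplotype above).
arm1 = {"DYS459": 9,  "DYS464a": 15, "DYS464b": 17, "CDY": 36}
arm2 = {"DYS459": 10, "DYS464a": 16, "DYS464b": 17, "CDY": 38}

def recloh(donor, recipient):
    """Non-reciprocal exchange: the donor arm overwrites the recipient arm."""
    return donor, dict(donor)

arm1, arm2 = recloh(arm1, arm2)
# A single event yields twin alleles at every marker on the palindrome
# (9-9, 15-15, 17-17, 36-36): heterozygosity between the arms is lost in one step.
```

The point of the sketch is that one copying event changes every duplicated marker on the palindrome at once, which is why genealogical data show parallel changes rather than independent single-marker steps.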
See also
Null allele
Paternal mtDNA transmission
List of genetic genealogy topics
References
External links
RecLOH explained
Genetics
Genetic genealogy | RecLOH | Biology | 603 |
58,482,655 | https://en.wikipedia.org/wiki/Aspergillus%20neoindicus | Aspergillus neoindicus is a species of fungus in the genus Aspergillus, belonging to the section Terrei. The species was first described in 2011. It has been reported to produce citrinin, naphthalic anhydride, and atrovenetins.
Growth and morphology
A. neoindicus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
neoindicus
Fungi described in 2011
Fungus species | Aspergillus neoindicus | Biology | 129 |
11,440,506 | https://en.wikipedia.org/wiki/Nickel-62 | Nickel-62 is an isotope of nickel having 28 protons and 34 neutrons.
It is a stable isotope, with the highest binding energy per nucleon of any known nuclide (8.7945 MeV). It is often stated that 56Fe is the "most stable nucleus", but only because 56Fe has the lowest mass per nucleon (not binding energy per nucleon) of all nuclides. The lower mass per nucleon of 56Fe is possible because 56Fe has 26/56 ≈ 46.43% protons, while 62Ni has only 28/62 ≈ 45.16% protons. Protons are less massive than neutrons, meaning that the larger fraction of protons in 56Fe lowers its mean mass-per-nucleon ratio in a way that has no effect on its binding energy. In other words, even though 56Fe has a lower mass per nucleon, the nucleons in nickel-62 are the most tightly bound of any nuclide.
Properties
The high binding energy of nickel isotopes in general makes nickel an "end product" of many nuclear reactions (including neutron capture reactions) throughout the universe and accounts for the high relative abundance of nickel—although most nickel in space (and thus produced by supernova explosions) is nickel-58 (the most common isotope) and nickel-60 (the second-most), with the other stable isotopes (nickel-61, nickel-62, and nickel-64) being quite rare. This suggests that most nickel is produced in supernovas in the r-process of neutron capture from nickel-56 immediately after the core-collapse, with any nickel-56 that escapes the supernova explosion rapidly decaying to cobalt-56 and then stable iron-56.
Relationship to iron-56
The second and third most tightly bound nuclei are those of 58Fe and 56Fe, with binding energies per nucleon of 8.7922 MeV and 8.7903 MeV, respectively.
As noted above, the isotope 56Fe has the lowest mass per nucleon of any nuclide, 930.412 MeV/c², followed by 62Ni with 930.417 MeV/c² and 60Ni with 930.420 MeV/c². As noted, this does not contradict the binding-energy figures, because 62Ni has a greater proportion of neutrons, which are more massive than protons.
If one looks only at the nuclei, without including the electrons, 56Fe again shows the lowest mass per nucleon (930.175 MeV/c²), followed by 62Ni (930.181 MeV/c²) and 60Ni (930.187 MeV/c²).
The misconception of 56Fe's higher nuclear binding energy probably originated from astrophysics. During nucleosynthesis in stars the competition between photodisintegration and alpha capture causes more 56Ni to be produced than 62Ni (56Fe is produced later, in the star's ejection shell, as 56Ni decays). 56Ni is the natural end product of silicon burning at the end of a supernova's life, being the product of 14 alpha captures in the alpha process, which builds more massive elements in steps of 4 nucleons starting from carbon. The alpha process in supernovae ends here because of the higher energy of zinc-60, which would be produced in the next step after the addition of another "alpha" (more properly termed, a helium nucleus).
Nonetheless, fusing 28 atoms of nickel-62 into 31 atoms of iron-56 releases energy; hence the future of an expanding universe without proton decay includes iron stars rather than "nickel stars".
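The interplay between binding energy per nucleon and mass per nucleon described above can be checked with a few lines of arithmetic, using rounded CODATA particle masses and the binding-energy figures quoted in this article:

```python
# Check that 62Ni can have the highest binding energy per nucleon while
# 56Fe still has the lowest atomic mass per nucleon.
M_P, M_N, M_E = 938.272, 939.565, 0.511   # MeV/c^2, rounded CODATA values

def mass_per_nucleon(Z, N, b_per_a):
    """Atomic mass per nucleon, given the binding energy per nucleon (MeV)."""
    A = Z + N
    total_binding = b_per_a * A
    return (Z * M_P + N * M_N + Z * M_E - total_binding) / A

ni62 = mass_per_nucleon(28, 34, 8.7945)   # 62Ni, B/A from the text
fe56 = mass_per_nucleon(26, 30, 8.7903)   # 56Fe, B/A from the text

# 56Fe comes out lighter per nucleon (~930.412 MeV/c^2 vs ~930.417 for 62Ni)
# despite its lower binding energy per nucleon, because of its larger
# proton fraction: protons are lighter than neutrons.
```

The computed values reproduce the 930.412 and 930.417 MeV/c² figures given in the text, confirming that the two rankings are consistent.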
See also
Isotopes of nickel
References
Isotopes of nickel | Nickel-62 | Chemistry | 747 |
56,434 | https://en.wikipedia.org/wiki/Julia%20set | In complex dynamics, the Julia set and the Fatou set are two complementary sets (Julia "laces" and Fatou "dusts") defined from a function. Informally, the Fatou set of the function consists of values with the property that all nearby values behave similarly under repeated iteration of the function, and the Julia set consists of values such that an arbitrarily small perturbation can cause drastic changes in the sequence of iterated function values.
Thus the behavior of the function on the Fatou set is "regular", while on the Julia set its behavior is "chaotic".
The Julia set of a function f is commonly denoted J(f), and the Fatou set is denoted F(f). These sets are named after the French mathematicians Gaston Julia and Pierre Fatou, whose work began the study of complex dynamics during the early 20th century.
Formal definition
Let f(z) be a non-constant meromorphic function from the Riemann sphere onto itself. Such functions f(z) are precisely the non-constant complex rational functions, that is, f(z) = p(z)/q(z), where p(z) and q(z) are complex polynomials. Assume that p and q have no common roots, and that at least one has degree larger than 1. Then there is a finite number of open sets F1, …, Fr that are left invariant by f(z) and are such that:
The union of the sets F1, …, Fr is dense in the plane, and
f(z) behaves in a regular and equal way on each of the sets Fi.
The last statement means that the termini of the sequences of iterations generated by the points of are either precisely the same set, which is then a finite cycle, or they are finite cycles of circular or annular shaped sets that are lying concentrically. In the first case the cycle is attracting, in the second case it is neutral.
These sets are the Fatou domains of f(z), and their union is the Fatou set F(f) of f(z). Each of the Fatou domains contains at least one critical point of f(z), that is, a (finite) point z satisfying f′(z) = 0, or z = ∞ if the degree of the numerator p(z) is at least two larger than the degree of the denominator q(z), or if f(z) = 1/g(z) + c for some c and some rational function g(z) satisfying this condition.
The complement of F(f) is the Julia set J(f) of f(z). If all the critical points are preperiodic, that is, they are not periodic but eventually land on a periodic cycle, then J(f) is all of the sphere. Otherwise, J(f) is a nowhere dense set (it is without interior points) and an uncountable set (of the same cardinality as the real numbers). Like F(f), J(f) is left invariant by f(z), and on this set the iteration is repelling, meaning that |f(w) − f(z)| > |w − z| for all w in a neighbourhood of z (within J(f)). This means that f(z) behaves chaotically on the Julia set. Although there are points in the Julia set whose sequence of iterations is finite, there are only a countable number of such points (and they make up an infinitesimal part of the Julia set). The sequences generated by points outside this set behave chaotically, a phenomenon called deterministic chaos.
There has been extensive research on the Fatou set and Julia set of iterated rational functions, known as rational maps. For example, it is known that the Fatou set of a rational map has either 0, 1, 2 or infinitely many components. Each component of the Fatou set of a rational map can be classified into one of four different classes.
Equivalent descriptions of the Julia set
J(f) is the smallest closed set containing at least three points which is completely invariant under f.
J(f) is the closure of the set of repelling periodic points.
For all but at most two points z, the Julia set is the set of limit points of the full backwards orbit of z under f. (This suggests a simple algorithm for plotting Julia sets, see below.)
If f is an entire function, then J(f) is the boundary of the set of points which converge to infinity under iteration.
If f is a polynomial, then J(f) is the boundary of the filled Julia set; that is, those points whose orbits under iterations of f remain bounded.
Properties of the Julia set and Fatou set
The Julia set and the Fatou set of f are both completely invariant under iterations of the holomorphic function f: f⁻¹(J(f)) = f(J(f)) = J(f), and likewise for F(f).
Examples
For f(z) = z², the Julia set is the unit circle, and on this the iteration is given by doubling of angles (an operation that is chaotic on the points whose argument is not a rational fraction of 2π). There are two Fatou domains: the interior and the exterior of the circle, with iteration towards 0 and ∞, respectively.
For f(z) = z² − 2, the Julia set is the line segment between −2 and 2. There is one Fatou domain: the points not on the line segment iterate towards ∞. (Apart from a shift and scaling of the domain, this iteration is equivalent to the logistic map x ↦ 4x(1 − x) on the unit interval, which is commonly used as an example of a chaotic system.)
The functions f and g above are of the form z² + c, where c is a complex number. For such an iteration the Julia set is not in general a simple curve, but is a fractal, and for some values of c it can take surprising shapes. See the pictures below.
For some functions f(z) we can say beforehand that the Julia set is a fractal and not a simple curve. This is because of the following result on the iterations of a rational function:
This means that each point of the Julia set is a point of accumulation for each of the Fatou domains. Therefore, if there are more than two Fatou domains, each point of the Julia set must have points of more than two different open sets infinitely close, and this means that the Julia set cannot be a simple curve. This phenomenon happens, for instance, when f(z) is the Newton iteration for solving the equation z^n = 1:
The image on the right shows the case n = 3.
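A minimal sketch of this Newton iteration for z³ = 1 (the case n = 3 shown in the image), classifying starting points by the root they converge to; the step counts and tolerances are arbitrary illustrative choices:

```python
import cmath

# Newton iteration z -> z - (z**3 - 1)/(3*z**2) for the equation z**3 = 1.
ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def newton_basin(z, steps=60, tol=1e-9):
    """Index of the cube root of unity the orbit of z converges to, or -1."""
    for _ in range(steps):
        if abs(z) < tol:              # the derivative 3z^2 vanishes at 0
            return -1
        z = z - (z**3 - 1) / (3 * z**2)
    for i, r in enumerate(ROOTS):
        if abs(z - r) < 1e-6:
            return i
    return -1
```

Colouring each pixel of the plane by newton_basin of its complex coordinate reproduces the three intertwined basins whose common boundary is the fractal Julia set of the iteration.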
Quadratic polynomials
A very popular complex dynamical system is given by the family of complex quadratic polynomials, a special case of rational maps. Such quadratic polynomials can be expressed as f_c(z) = z² + c,
where c is a complex parameter. Fix some R > 0 large enough that R² − R ≥ |c|. (For example, if c is in the Mandelbrot set, then |c| ≤ 2, so we may simply let R = 2.) Then the filled Julia set for this system is the subset of the complex plane given by K(f_c) = {z : |f_cⁿ(z)| ≤ R for all n ≥ 0},
where f_cⁿ(z) is the nth iterate of f_c(z). The Julia set of this function is the boundary of K(f_c).
The parameter plane of quadratic polynomials – that is, the plane of possible c values – gives rise to the famous Mandelbrot set. Indeed, the Mandelbrot set is defined as the set of all c such that is connected. For parameters outside the Mandelbrot set, the Julia set is a Cantor space: in this case it is sometimes referred to as Fatou dust.
In many cases, the Julia set of c looks like the Mandelbrot set in sufficiently small neighborhoods of c. This is true, in particular, for so-called Misiurewicz parameters, i.e. parameters c for which the critical point is pre-periodic. For instance:
At c = i, the shorter, front toe of the forefoot, the Julia set looks like a branched lightning bolt.
At c = −2, the tip of the long spiky tail, the Julia set is a straight line segment.
In other words, the Julia sets are locally similar around Misiurewicz points.
Generalizations
The definition of Julia and Fatou sets easily carries over to the case of certain maps whose image contains their domain; most notably transcendental meromorphic functions and Adam Epstein's finite-type maps.
Julia sets are also commonly defined in the study of dynamics in several complex variables.
Pseudocode
The pseudocode implementations below hard-code the function for each fractal. Consider implementing complex number operations to allow for more dynamic and reusable code.
Pseudocode for normal Julia sets
R = escape radius # choose R > 0 such that R**2 - R >= sqrt(cx**2 + cy**2)
for each pixel (x, y) on the screen, do:
{
zx = scaled x coordinate of pixel; # (scale to be between -R and R)
# zx represents the real part of z.
zy = scaled y coordinate of pixel; # (scale to be between -R and R)
# zy represents the imaginary part of z.
iteration = 0;
max_iteration = 1000;
while (zx * zx + zy * zy < R**2 AND iteration < max_iteration)
{
xtemp = zx * zx - zy * zy;
zy = 2 * zx * zy + cy;
zx = xtemp + cx;
iteration = iteration + 1;
}
if (iteration == max_iteration)
return black;
else
return iteration;
}
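For reference, the escape-time loop above can be translated into a short runnable Python function; it handles one point at a time rather than a whole screen, with names following the pseudocode:

```python
# Runnable translation of the escape-time pseudocode above, for a single
# point z = zx + zy*i and Julia parameter c = cx + cy*i.
def julia_iterations(zx, zy, cx, cy, R=2.0, max_iteration=1000):
    """Iterations until z escapes the circle |z| = R (max_iteration if never)."""
    iteration = 0
    while zx * zx + zy * zy < R * R and iteration < max_iteration:
        # z -> z**2 + c in real coordinates
        zx, zy = zx * zx - zy * zy + cx, 2 * zx * zy + cy
        iteration += 1
    return iteration
```

For c = −1 the orbit of 0 is the bounded cycle 0 → −1 → 0, so the count saturates at max_iteration, while a point such as z = 1.7 escapes after two steps.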
Pseudocode for multi-Julia sets
R = escape radius # choose R > 0 such that R**n - R >= sqrt(cx**2 + cy**2)
for each pixel (x, y) on the screen, do:
{
zx = scaled x coordinate of pixel; # (scale to be between -R and R)
zy = scaled y coordinate of pixel; # (scale to be between -R and R)
iteration = 0;
max_iteration = 1000;
while (zx * zx + zy * zy < R**2 AND iteration < max_iteration)
{
xtmp = (zx * zx + zy * zy) ^ (n / 2) * cos(n * atan2(zy, zx)) + cx;
zy = (zx * zx + zy * zy) ^ (n / 2) * sin(n * atan2(zy, zx)) + cy;
zx = xtmp;
iteration = iteration + 1;
}
if (iteration == max_iteration)
return black;
else
return iteration;
}
Another recommended option is to reduce color banding between iterations by using a renormalization formula for the iteration.
Such a formula is given by
μ = k + 1 − log(log|z_k|) / log n,
where k is the escaping iteration, bounded by the chosen maximum iteration count, n is the power of the iterated map (n = 2 for the ordinary Julia set), and |z_k| is the magnitude of the last iterate before escaping.
This can be implemented, very simply, like so:
# simply replace the last 4 lines of code from the last example with these lines of code:
if (iteration == max_iteration)
return black;
else
abs_z = sqrt(zx * zx + zy * zy); # |z|, not |z|**2, as the formula expects the magnitude
return iteration + 1 - log(log(abs_z)) / log(n);
The difference is shown below with a Julia set defined as where .
The potential function and the real iteration number
The Julia set for is the unit circle, and on the outer Fatou domain, the potential function φ(z) is defined by φ(z) = log|z|. The equipotential lines for this function are concentric circles. As we have
where z_k is the sequence of iteration generated by z. For the more general iteration f(z) = z² + c, it has been proved that if the Julia set is connected (that is, if c belongs to the (usual) Mandelbrot set), then there exists a biholomorphic map ψ between the outer Fatou domain and the exterior of the unit circle such that |ψ(f(z))| = |ψ(z)|². This means that the potential function on the outer Fatou domain defined by this correspondence is given by:
This formula has meaning also if the Julia set is not connected, so that we for all c can define the potential function on the Fatou domain containing ∞ by this formula. For a general rational function f(z) such that ∞ is a critical point and a fixed point, that is, such that the degree m of the numerator is at least two larger than the degree n of the denominator, we define the potential function on the Fatou domain containing ∞ by:
where d = m − n is the degree of the rational function.
If N is a very large number (e.g. 10100), and if k is the first iteration number such that , we have that
for some real number , which should be regarded as the real iteration number, and we have that:
where the last number is in the interval [0, 1).
For iteration towards a finite attracting cycle of order r, we have that if is a point of the cycle, then (the r-fold composition), and the number
is the attraction of the cycle. If w is a point very near and w′ is w iterated r times, we have that
Therefore, the number is almost independent of k. We define the potential function on the Fatou domain by:
If ε is a very small number and k is the first iteration number such that , we have that
for some real number , which should be regarded as the real iteration number, and we have that:
If the attraction is ∞, meaning that the cycle is super-attracting, meaning again that one of the points of the cycle is a critical point, we must replace α by
where w′ is w iterated r times and the formula for φ(z) by:
And now the real iteration number is given by:
For the colouring we must have a cyclic scale of colours (constructed mathematically, for instance) and containing H colours numbered from 0 to H−1 (H = 500, for instance). We multiply the real number by a fixed real number determining the density of the colours in the picture, and take the integral part of this number modulo H.
The definition of the potential function and our way of colouring presuppose that the cycle is attracting, that is, not neutral. If the cycle is neutral, we cannot colour the Fatou domain in a natural way. As the terminus of the iteration is a revolving movement, we can, for instance, colour by the minimum distance from the cycle left fixed by the iteration.
Field lines
In each Fatou domain (that is not neutral) there are two systems of lines orthogonal to each other: the equipotential lines (for the potential function or the real iteration number) and the field lines.
If we colour the Fatou domain according to the iteration number (and not the real iteration number, as defined in the previous section), the bands of iteration show the course of the equipotential lines. If the iteration is towards ∞ (as is the case with the outer Fatou domain for the usual iteration z² + c), we can easily show the course of the field lines, namely by altering the colour according to whether the last point in the sequence of iteration is above or below the x-axis (first picture), but in this case (more precisely: when the Fatou domain is super-attracting) we cannot draw the field lines coherently - at least not by the method we describe here. In this case a field line is also called an external ray.
Let z be a point in the attracting Fatou domain. If we iterate z a large number of times, the terminus of the sequence of iteration is a finite cycle C, and the Fatou domain is (by definition) the set of points whose sequence of iteration converges towards C. The field lines issue from the points of C and from the (infinite number of) points that iterate into a point of C. And they end on the Julia set in points that are non-chaotic (that is, generating a finite cycle). Let r be the order of the cycle C (its number of points) and let be a point in C. We have (the r-fold composition), and we define the complex number α by
If the points of C are , α is the product of the r numbers . The real number 1/|α| is the attraction of the cycle, and our assumption that the cycle is neither neutral nor super-attracting, means that . The point is a fixed point for , and near this point the map has (in connection with field lines) character of a rotation with the argument β of α (that is, ).
In order to colour the Fatou domain, we have chosen a small number ε and set the sequences of iteration to stop when , and we colour the point z according to the number k (or the real iteration number, if we prefer a smooth colouring). If we choose a direction from given by an angle θ, the field line issuing from in this direction consists of the points z such that the argument ψ of the number satisfies the condition that
For if we pass an iteration band in the direction of the field lines (and away from the cycle), the iteration number k is increased by 1 and the number ψ is increased by β, therefore the number is constant along the field line.
A colouring of the field lines of the Fatou domain means that we colour the spaces between pairs of field lines: we choose a number of regularly situated directions issuing from , and in each of these directions we choose two directions around this direction. As it can happen that the two field lines of a pair do not end in the same point of the Julia set, our coloured field lines can ramify (endlessly) in their way towards the Julia set. We can colour on the basis of the distance to the center line of the field line, and we can mix this colouring with the usual colouring. Such pictures can be very decorative (second picture).
A coloured field line (the domain between two field lines) is divided up by the iteration bands, and such a part can be put into a one-to-one correspondence with the unit square: the one coordinate is (calculated from) the distance from one of the bounding field lines, the other is (calculated from) the distance from the inner of the bounding iteration bands (this number is the non-integral part of the real iteration number). Therefore, we can put pictures into the field lines (third picture).
Plotting the Julia set
Methods :
Distance Estimation Method for Julia set (DEM/J)
Inverse Iteration Method (IIM)
Using backwards (inverse) iteration (IIM)
As mentioned above, the Julia set can be found as the set of limit points of the set of pre-images of (essentially) any given point. So we can try to plot the Julia set of a given function as follows. Start with any point z we know to be in the Julia set, such as a repelling periodic point, and compute all pre-images of z under some high iterate of f.
Unfortunately, as the number of iterated pre-images grows exponentially, this is not feasible computationally. However, we can adjust this method, in a similar way as the "random game" method for iterated function systems. That is, in each step, we choose at random one of the inverse images of f.
For example, for the quadratic polynomial fc, the backwards iteration is described by
At each step, one of the two square roots is selected at random.
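A minimal sketch of this random backwards iteration for f(z) = z² + c; the starting point, iteration count, and seed are arbitrary choices:

```python
import cmath
import random

# Backward iteration z -> +/- sqrt(z - c), choosing one of the two square
# roots at random each step; the visited points accumulate on the Julia set.
def iim_points(c, n=2000, seed=1):
    rng = random.Random(seed)
    z = complex(2.0, 0.0)                 # any generic starting point works
    pts = []
    for _ in range(n):
        z = cmath.sqrt(z - c) * rng.choice((1, -1))
        pts.append(z)
    return pts
```

For c = 0 the Julia set is the unit circle, and indeed the magnitudes of the visited points converge rapidly to 1 (|z| maps to sqrt(|z|) at each backward step).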
Note that certain parts of the Julia set are quite difficult to access with the reverse Julia algorithm. For this reason, one must modify the IIM/J (the modification is called MIIM/J) or use other methods to produce better images.
Using DEM/J
As a Julia set is infinitely thin we cannot draw it effectively by backwards iteration from the pixels. It will appear fragmented because of the impracticality of examining infinitely many start points. Since the iteration count changes vigorously near the Julia set, a partial solution is to infer the outline of the set from the nearest color contours, but the set will tend to look muddy.
A better way to draw the Julia set in black and white is to estimate the distance of pixels from the set (DEM) and to color every pixel whose center is close to the set. The formula for the distance estimation is derived from the formula for the potential function φ(z). When the equipotential lines for φ(z) lie close together, the modulus of the gradient of φ(z) is large, and conversely; therefore the equipotential lines for the quotient of φ(z) by its gradient should lie approximately regularly spaced. It has been proven that the value found by this formula (up to a constant factor) converges towards the true distance for z converging towards the Julia set.
We assume that f(z) is rational, that is, where p(z) and q(z) are complex polynomials of degrees m and n, respectively, and we have to find the derivative of the above expressions for φ(z). And as it is only that varies, we must calculate the derivative of with respect to z. But as (the k-fold composition), is the product of the numbers , and this sequence can be calculated recursively by , starting with (before the calculation of the next iteration ).
For iteration towards ∞ (more precisely when m ≥ n + 2, so that ∞ is a super-attracting fixed point), we have
|z_k| → ∞, and consequently the distance is estimated (up to a constant factor) by:
δ(z) = lim_{k→∞} |z_k| log|z_k| / |z′_k|.
For iteration towards a finite attracting cycle (that is not super-attracting) containing the point and having order r, we have
and consequently:
For a super-attracting cycle, the formula is:
We calculate this number when the iteration stops. Note that the distance estimation is independent of the attraction of the cycle. This means that it has meaning for transcendental functions of "degree infinity" (e.g. sin(z) and tan(z)).
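For the special case f(z) = z^2 + c, where the derivative recursion is z′_{k+1} = 2·z_k·z′_k with z′_0 = 1, the distance estimation can be sketched as below. The bailout radius and iteration cap are illustrative choices, and the constant factor of the estimate is dropped, since the formula is only accurate up to a constant factor anyway.

```python
import math

def julia_distance_estimate(z, c, max_iter=100, bailout=1e10):
    """Exterior distance estimate to the Julia set of f(z) = z^2 + c.

    Iterates z_{k+1} = z_k^2 + c together with the derivative
    dz_{k+1} = 2 * z_k * dz_k (starting with dz_0 = 1).  Once the orbit
    escapes past the bailout radius, |z_k| * log|z_k| / |dz_k| is
    proportional to the distance from the starting point to the set.
    """
    dz = 1.0
    for _ in range(max_iter):
        if abs(z) > bailout:
            return abs(z) * math.log(abs(z)) / abs(dz)
        z, dz = z * z + c, 2 * z * dz  # the old z is used in both updates
    return 0.0  # orbit did not escape: point treated as inside or on the set
```

For c = 0 the Julia set is the unit circle: the estimate at z = 2 comes out near the true distance 1, and shrinks as z approaches the circle. A pixel is then drawn black when the estimate falls below roughly half the pixel width.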
Besides drawing of the boundary, the distance function can be introduced as a 3rd dimension to create a solid fractal landscape.
See also
Douady rabbit
Limit set
Stable and unstable sets
No wandering domain theorem
Chaos theory
Notes
References
Bibliography
External links
– Windows, 370 kB
– one of the applets can render Julia sets, via Iterated Function Systems.
– A visual explanation of Julia Sets.
– Mandelbrot, Burning ship and corresponding Julia set generator.
- A visual explanation.
Fractals
Limit sets
Complex dynamics
Articles containing video clips
Articles with example pseudocode | Julia set | Mathematics | 4,482 |
1,503,549 | https://en.wikipedia.org/wiki/Chrispijn%20van%20den%20Broeck | Chrispijn van den Broeck (1523 – c. 1591) was a Flemish painter, draughtsman, print designer and designer of temporary decorations. He was a scion of a family of artists, which had its origins in Mechelen and later moved to Antwerp. He is known for his religious compositions and portraits as well as his extensive output of designs for prints. He was active in Antwerp which he left for some time because of the prosecution of persons adhering to his religious convictions.
Life
Chrispijn van den Broeck was born in Mechelen as the son of Jan van den Broeck, a painter. His family members included artists who were active in Mechelen. His family also used the Latinised name 'Paludanus'. The Latinized name is based on the Latin translation ('palus') of the Dutch word 'broeck' which is part of the family name and means a marsh or swamp land. He was likely a relative of the sculptor and painter Willem van den Broecke and the painter Hendrick van den Broeck. He was probably trained by his father. He moved to Antwerp some time before 1555 since Chrispijn was registered as a master painter of the Guild of St. Luke of Antwerp for the first time in 1555.
Chrispijn was then working in the workshop of the leading history painter Frans Floris. Frans Floris was one of the Romanist painters active in Antwerp. The Romanists were Netherlandish artists who had trained in Italy and upon their return to their home countries painted in a style that assimilated Italian influences into the Northern painting tradition. Van den Broeck remained in Frans Floris' workshop until the master's death in 1570. Together with Frans Pourbus the Elder, he was one of the collaborators of Floris who helped finish Floris' paintings after the master had become incapacitated by the alcoholism into which he had sunk in his later years. According to the Flemish contemporary art historian and artist Karel van Mander, Chrispijn van den Broeck and Frans Pourbus completed an altarpiece for the Grand Prior of Spain left incomplete at the time of Floris' death.
Van den Broeck became a citizen of Antwerp in 1559. He married Barbara de Bruyn. Their daughter Barbara van den Broeck (1560–?) became an engraver who mainly created reproductions after her father's work. Chrispijn may have lived in Italy for some time, but there is no evidence of this.
He received a pupil named Niclaes Ficet in his Antwerp workshop in 1577.
In 1584, van den Broeck resided in Middelburg for a short time to escape the political and religious unrest in Antwerp. His name was last mentioned in the Guild records of 1589 in connection with a payment. His wife is mentioned as a widow on 6 February 1591. Chrispijn van den Broeck must therefore have died sometime between 1589 and 6 February 1591, most likely in Antwerp.
Work
Van Mander stated that Chrispijn van den Broeck was 'a good inventor... clever at large nudes and just as good an architect'. The latter may refer to his involvement in temporary constructions and decorations during festivities in the city, such as the theatre competition called the Landjuweel held in Antwerp in 1561 and the Joyous Entries in Antwerp of 1570 and 1582.
About 23 paintings are attributed to Chrispijn van den Broeck, some of which are signed. Van den Broeck's work is regularly mentioned in 16th- and 17th-century Antwerp inventories, which indicates that his output must have been larger. While there is no evidence that the artist visited Italy, his work shows the influence of the Venetian Jacopo Bassano in the use of large, solid figures placed within a landscape. As a pupil of the Romanist Frans Floris, who did study in Italy, he may have received the Italian influence through his master. He may also have seen prints after Italian artworks. Van den Broeck further adopted Floris' technique of applying a brown preparatory ground underneath the main colours of his paintings. As a result, his works typically display a brown hue. His palette favours pink, brown, grey and yellow tones.
Van den Broeck's painting Two Young Men (Fitzwilliam Museum) is a double portrait of two cheerful young men or adolescent boys. They are wearing fancy clothes in Italian fashion which were likely also worn by fashionable young men in 16th century Flanders. Their embrace and smiling glances show that the relationship between the two men is close. The boy in black seems to be offering his friend an apple while he looks at the viewer with a smile. The other boy looks with a smile at the boy in black. While an apple was often used as a symbol of physical love, it would be wrong to assume the painting depicts two homosexual lovers. The boys' physical likeness indicates that they are more likely brothers. Based on the symbols used throughout the painting, its subject appears to be death. Two dark owl heads peek out over each shoulder of the boy in black while a crow's or raven's head in profile with its sharp beak pointed towards the boy in black juts out from the right side of the head of the boy in red. Both the owl and the raven are traditional symbols of death. The stone panel at the top of the picture bears the artist's initials and recalls funerary sculpture.
A total of 146 drawings have been attributed with certainty to van den Broeck. Of these, 89 are designs for engravings. His earliest drawing is dated 1560. It is possible that his tendency to accentuate the contours of forms made his drawings suited as designs for prints. Whether Van den Broeck himself also etched or engraved is unknown. From 1566 onwards van den Broeck started to create design drawings for publications by Christoffel Plantin, such as for Benito Arias Montano's Humanae salutis monumenta published in 1571. Van den Broeck also worked for print publishers Gerard de Jode, Adriaen Huybrechts, Hans van Luijck, Willem van Haecht the Elder and Plantin's successor Jan Moretus I.
His designs were engraved by engravers such as Abraham de Bruyn, Jan Collaert the Elder and Johannes Wierix. He designed the illustration of the allegory of the Low Countries used in Lodovico Guicciardini's Descrizione di tutti I Paesi Bassi (1567).
References
External links
1523 births
1591 deaths
Flemish Renaissance painters
Flemish portrait painters
Flemish history painters
Painters from Antwerp
Artists from Mechelen | Chrispijn van den Broeck | Engineering | 1,379 |
61,310,341 | https://en.wikipedia.org/wiki/Totient%20summatory%20function | In number theory, the totient summatory function is a summatory function of Euler's totient function defined by:
It is the number of coprime integer pairs (p, q) with 1 ≤ p ≤ q ≤ n.
The first few values are 0, 1, 2, 4, 6, 10, 12, 18, 22, 28, 32 . Values for powers of 10 at .
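The values above can be reproduced with a standard totient sieve; the following Python sketch computes Φ(0), …, Φ(n) in one pass.

```python
def totient_summatory(n):
    """Return [Phi(0), Phi(1), ..., Phi(n)], where Phi(n) = sum_{k<=n} phi(k).

    phi is computed for all k <= n with a sieve: for each prime p, every
    multiple of p has its value multiplied by (1 - 1/p), done in integers.
    """
    phi = list(range(n + 1))          # phi[k] initialised to k
    for p in range(2, n + 1):
        if phi[p] == p:               # p is prime (phi[p] untouched so far)
            for m in range(p, n + 1, p):
                phi[m] -= phi[m] // p # multiply phi[m] by (1 - 1/p)
    out = [0]
    total = 0
    for k in range(1, n + 1):
        total += phi[k]
        out.append(total)
    return out
```

For example, totient_summatory(10) reproduces the sequence 0, 1, 2, 4, 6, 10, 12, 18, 22, 28, 32 given above.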
Properties
Applying Möbius inversion to the totient function, we obtain
Φ(n) = ½ (1 + Σ_{k=1}^{n} μ(k) ⌊n/k⌋²),
where μ is the Möbius function.
Φ(n) has the asymptotic expansion
Φ(n) = n²/(2ζ(2)) + O(n log n) = (3/π²) n² + O(n log n),
where ζ(2) is the Riemann zeta function evaluated at 2, which is
ζ(2) = π²/6 ≈ 1.6449.
Asymptotically, then, the proportion of coprime pairs among all pairs (p, q) with 1 ≤ p ≤ q ≤ n tends to 6/π².
The summatory of reciprocal totient function
The summatory of reciprocal totient function is defined as
S(n) = Σ_{k=1}^{n} 1/φ(k).
Edmund Landau showed in 1900 that this function has the asymptotic behavior
S(n) = A (log n + B) + O((log n)/n),
where γ is the Euler–Mascheroni constant,
A = Σ_{k=1}^{∞} μ(k)²/(k φ(k)),
and
B = γ − Σ_p (log p)/(p² − p + 1).
The constant A is sometimes known as Landau's totient constant. The sum defining A is convergent and equal to:
A = ζ(2) ζ(3) / ζ(6) ≈ 1.9435964.
In this case, the product over the primes on the right side is a constant known as the totient summatory constant, and its value is:
Π_p (1 + 1/(p²(p − 1))).
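Landau's asymptotic behavior can be checked numerically. The sketch below reuses a totient sieve and the closed form A = ζ(2)ζ(3)/ζ(6); the values of ζ(3) and ζ(6) are hard-coded numeric constants. The difference S(n) − A·log n then stays bounded as n grows, as the asymptotic predicts.

```python
import math

def reciprocal_totient_sum(n):
    """Return S(n) = sum_{k=1}^{n} 1/phi(k), computing phi by a sieve."""
    phi = list(range(n + 1))
    for p in range(2, n + 1):
        if phi[p] == p:                   # p is prime (phi[p] untouched so far)
            for m in range(p, n + 1, p):
                phi[m] -= phi[m] // p     # multiply phi[m] by (1 - 1/p)
    return sum(1.0 / phi[k] for k in range(1, n + 1))

# Landau's totient constant A = zeta(2) * zeta(3) / zeta(6)
ZETA3 = 1.2020569031595943
ZETA6 = 1.0173430619844491
A = (math.pi ** 2 / 6) * ZETA3 / ZETA6    # about 1.9436
```

Evaluating S(n) for growing n shows S(n) − A·log n settling near the constant A·B.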
See also
Arithmetic function
References
External links
OEIS Totient summatory function
Decimal expansion of totient constant product(1 + 1/(p^2*(p-1))), p prime >= 2)
Arithmetic functions | Totient summatory function | Mathematics | 279 |
24,156,703 | https://en.wikipedia.org/wiki/List%20of%20Shuowen%20Jiezi%20radicals | The Shuowen Jiezi dictionary created by Xu Shen uses 540 radicals to index its characters.
List
Seal script - regular script comparison
Vol. 2
Vol. 3
Vol. 4
Vol. 5
Vol. 6
Vol. 7
Vol. 8
Vol. 9
Vol. 10
Vol. 11
Vol. 12
Vol. 13
Vol. 14
Vol. 15
See also
List of Kangxi radicals - a system of 214 components used by the Kangxi dictionary (1716), made under the leadership of the Kangxi Emperor
List of Unicode radicals - CJK radicals included in the Unicode Standard.
List of Xinhua Zidian radicals
Chinese characters description languages - computer and SVG based description of CJK characters
CJK characters
References
Sources
《說文解字》, electronic edition - Donald Sturgeon
《说文解字注》 全文检索 - 许慎撰 段玉裁注, facsimile edition
Shuowenjiezi.com, by the CRLAO research institute, Paris, France.
Chinese lexicography
Chinese calligraphy
Chinese character lists
Chinese character components | List of Shuowen Jiezi radicals | Technology | 218 |
48,430,955 | https://en.wikipedia.org/wiki/Cortinarius%20sanguineus | Cortinarius sanguineus, commonly known as the blood red webcap or blood red cortinarius, is a species of fungus in the genus Cortinarius.
Taxonomy
Austrian naturalist Franz Xaver von Wulfen described the species as Agaricus sanguineus in 1781, reporting that it appeared in the fir tree forests around Klagenfurt and Ebenthal in October. He noted that it was very pretty but not edible. The specific epithet is the Latin word sanguineus, meaning "bloody". Samuel Frederick Gray established Cortinarius as a genus in the first volume of his 1821 work A Natural Arrangement of British Plants, recording the species as Cortinaria sanguinea "the bloody curtain-stool".
Friedrich Otto Wünsche described it as Dermocybe sanguinea in 1877. Most mycologists retain Dermocybe as merely a subgenus of Cortinarius as genetically all the species lie within the latter genus.
It is closely related to Cortinarius puniceus, which grows under oak and beech from England and France.
Description
The dark blood-red cap is convex, and later flattens, measuring 2–5 cm (0.8–2 in) across, its surface covered in silky fibres radiating from the centre. The stipe is usually the same colour as the cap or paler. Long, slim, and cylindrical, it is 3–6 cm high by 0.3–0.8 cm wide. The veil (cortina) and its remnants are red. The gills are adnate. They are initially blood-red, but turn brown upon aging as the spores mature. The purple-red flesh has a pleasant smell. The spore print is rust-coloured, while the oval spores themselves measure 7 to 9 μm by 4 to 6 μm, and are rough.
Cortinarius sanguineus grows in conifer woodlands in autumn. It is inedible. Its pigment can be used as a dye for wool, rendering it shades of pink, purple or red.
The major pigments in C. sanguineus are emodin, dermocybin and .
See also
List of Cortinarius species
References
External links
sanguineus
Fungi of Europe
Fungi described in 1781
Inedible fungi
Taxa named by Franz Xaver von Wulfen
Fungus species | Cortinarius sanguineus | Biology | 484 |
12,213,526 | https://en.wikipedia.org/wiki/Metababy | Metababy was a wiki created and coded by Greg Knauss and designed by Leslie Harpold, that allowed raw HTML, JavaScript and CSS in its pages. The wiki ran from November 1998 to May 2003.
Metababy would display the contents of every email sent to the address metababy@metababy.com.
It was nominated for a Webby Award in 2000.
The wiki died out due to an increasing amount of spam and offensive posts. It was later resurrected, but was then shut down again. Since its shutdown, the site has displayed the static text "I think we could all probably use a rest."
References
External links
Metababy
Wiki communities | Metababy | Technology | 145 |
27,075,999 | https://en.wikipedia.org/wiki/World%20glyph%20set | The world glyph sets are character repertoires comprising a subset of Unicode characters. Their purpose is to provide an implementation guideline for producers of fonts for the representation of natural languages. Unlike Windows Glyph List 4 (WGL) it is specified by font foundries and not by operating system manufacturers. It is, however, very similar in glyph coverage to WGL4, but neither contains all the characters of the other.
Digital fonts for the European and American market have traditionally often been sold in a standard ("Std") package for western languages, with additional separate files covering central ("CE"), eastern ("Baltic") and southern ("Turk") European languages written in the Roman script, and sometimes also packages supporting the Greek (monotonic) and Cyrillic scripts. With the advent of the OpenType format, which supports Unicode, all characters could be included in a single font file. Some font foundries continue to sell packages with differing glyph coverage, where the basic one hardly covers more than Windows-1252. They introduced variants such as Professional ("Pro"), which supports all major languages written with Latin letters; Commercial ("Com") for international communication in office use, such as those covering the Linotype Extended European Characterset (LEEC); or, adding Greek and Cyrillic, WGL or world glyph sets.
The set is used in several font families by Linotype and Monotype such as Neue Frutiger W1G.
Character table
See also
Adobe Glyph List
DIN 91379 Unicode subset for Europe
References
Digital typography
Character encoding | World glyph set | Technology | 331 |
5,144,571 | https://en.wikipedia.org/wiki/Sarcoglycan | The sarcoglycans are a family of transmembrane proteins (α, β, γ, δ or ε) involved in the protein complex responsible for connecting the muscle fibre cytoskeleton to the extracellular matrix, preventing damage to the muscle fibre sarcolemma through shearing forces.
The dystrophin glycoprotein complex (DGC) is a membrane-spanning complex that links the interior cytoskeleton to the extracellular matrix in muscle. The sarcoglycan complex is a subcomplex within the DGC and is composed of six muscle-specific, transmembrane proteins (alpha-, beta-, gamma-, delta-, epsilon-, and zeta-sarcoglycan). The sarcoglycans are asparagine-linked glycosylated proteins with single transmembrane domains.
The disorders caused by the mutations of the sarcoglycans are called sarcoglycanopathies. Mutations in the α, β, γ or δ genes (not ε) encoding these proteins can lead to the associated limb-girdle muscular dystrophy.
Genes
SGCA
SGCB
SGCD
SGCE
SGCG
SGCZ
References
Protein families | Sarcoglycan | Chemistry,Biology | 254 |
13,737,596 | https://en.wikipedia.org/wiki/SN%202005ap | SN 2005ap was an extremely energetic type Ic supernova in the galaxy SDSS J130115.12+274327.5. With a peak absolute magnitude of around −22.7, it is the second-brightest superluminous supernova yet recorded, twice as bright as the previous record holder, SN 2006gy, though SN 2005ap was eventually surpassed by ASASSN-15lh. It was initially classified as type II-L, but later revised to type Ic. It was discovered on 3 March 2005, on unfiltered optical images taken with the 0.45 m ROTSE-IIIb (Robotic Optical Transient Search Experiment) telescope, which is located at the McDonald Observatory in West Texas, by Robert Quimby, as part of the Texas Supernova Search that also discovered SN 2006gy. Although it was discovered before SN 2006gy, it was not recognized as being brighter until October 2007. As it occurred 4.7 billion light years from Earth, it was not visible to the naked eye.
Although SN 2005ap was twice as bright at its peak as SN 2006gy, it was not as energetic overall, as the former brightened and dimmed over a typical period of a few days whereas the latter remained very bright for many months. SN 2005ap was about 300 times brighter than normal for a type II supernova. It has been speculated that this hypernova involved the formation of a quark star. Quimby has suggested that the hypernova is of a new type distinct from the standard type II supernova, and his research group have identified five other supernovae similar to SN 2005ap and SCP 06F6, all of which were extremely bright and lacking in hydrogen.
References
Further reading
External links
Light curves and spectra on the Open Supernova Catalog
Coma Berenices
Supernovae
Hypernovae
20050303 | SN 2005ap | Chemistry,Astronomy | 390 |
77,048,203 | https://en.wikipedia.org/wiki/Boon%20Ooi | Boon S. Ooi is a Malaysian–American academic researcher and a Professor of Electrical and Computer Engineering at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia. He was faculty member at Nanyang Technological University (Singapore) from 1996 to 2000 and at Lehigh University (Pennsylvania, USA) from 2003 to 2009. He served as Director of KACST-Technology Innovation Center at KAUST from 2012 to 2020.
Education
Ooi received his early education in Kedah and Penang, Malaysia, before earning his BEng and PhD degrees in electronics and electrical engineering from the University of Glasgow in Scotland, UK, in 1992 and 1995, respectively.
Career
Research
His research lab focuses on the development of high-speed optoelectronics for optical fiber and optical wireless communications. He has made significant contributions to broadband emitters and underwater photonics. Ooi has published more than 400 peer-reviewed journal papers and holds more than 40 issued US patents. His research interests include high-speed optoelectronics, optical wireless communications, and distributed fiber optic sensors.
Professional Engagements
Ooi has served as an Associate Editor of Optics Express and a Senior Editor of the IEEE Photonics Journal. Since 2022, he has been Editor-in-Chief of IEEE Photonics Technology Letters. He has also served on the IEEE Fellow Committee and the SPIE Fellow Selection Committee.
Recognition
Ooi is the recipient of the Optica/OSA Sang Soo Lee Award, the Khalifa International Award, and several IEEE and OSA paper awards.
Ooi is a Fellow of the U.S. National Academy of Inventors (NAI), the Institute of Electrical and Electronics Engineers (IEEE), Optica, the International Society for Optics and Photonics (SPIE), and the Institute of Physics.
References
Year of birth missing (living people)
Living people
Electrical engineers
Electrical engineering academics
Academic staff of King Abdullah University of Science and Technology
Fellows of the IEEE | Boon Ooi | Engineering | 395 |
8,789,505 | https://en.wikipedia.org/wiki/Hydroskimming | Hydroskimming is one of the simplest types of refinery used in the petroleum industry and still represents a large proportion of refining facilities, particularly in developing countries. A hydroskimming refinery is defined as a refinery equipped with atmospheric distillation, naphtha reforming and necessary treating processes. A hydroskimming refinery is therefore more complex than a topping refinery (which just separates the crude into its constituent petroleum products by distillation, known as atmospheric distillation, and produces naphtha but no gasoline) and it produces gasoline. The addition of catalytic reformer enables a hydroskimming refinery to generate higher octane reformate; benzene, toluene, and xylene; and hydrogen for hydrotreating units. However, a hydroskimming refinery produces a surplus of fuel oil with a relatively unattractive price and demand.
Most refineries, therefore, add vacuum distillation and catalytic cracking, which add a further level of complexity by converting surplus fuel oil into light and middle distillates. A coking refinery adds further complexity to the cracking refinery by high conversion of fuel oil into distillates and petroleum coke.
Catalytic cracking, coking and other such conversion units are referred to as secondary processing units. The Nelson Complexity Index captures the proportion of secondary conversion unit capacity relative to the primary distillation or topping capacity. The Nelson Complexity Index typically varies from about 2 for hydroskimming refineries, to about 5 for cracking refineries, and over 9 for coking refineries.
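As a sketch, the index is a capacity-weighted sum: each unit's capacity, expressed as a fraction of the atmospheric (crude) distillation capacity, is multiplied by that unit type's complexity factor and the products are summed, with atmospheric distillation itself assigned a factor of 1. The unit capacities and factor values below are illustrative placeholders, not Nelson's published figures.

```python
def nelson_complexity_index(units, crude_capacity):
    """Compute a Nelson-style complexity index.

    `units` maps unit name -> (capacity, complexity_factor); capacities are
    in the same units as `crude_capacity` (e.g. barrels per day).  The index
    is the sum over units of (capacity / crude_capacity) * factor.
    """
    return sum(cap / crude_capacity * factor for cap, factor in units.values())

# Illustrative hydroskimming refinery: distillation, reforming, hydrotreating.
hydroskimming = {
    "atmospheric_distillation": (100_000, 1.0),  # factor 1 by definition
    "catalytic_reformer":       (20_000, 5.0),   # illustrative factor
    "naphtha_hydrotreater":     (30_000, 2.0),   # illustrative factor
}
nci = nelson_complexity_index(hydroskimming, crude_capacity=100_000)
```

With these illustrative numbers the index comes out in the low single digits, consistent with the "about 2" figure quoted for hydroskimming refineries; adding conversion units such as a catalytic cracker raises it.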
Notes and references
Oil refineries | Hydroskimming | Chemistry | 326 |
52,439,884 | https://en.wikipedia.org/wiki/MethBase | MethBase is a database of DNA methylation data derived from next-generation sequencing data. MethBase provides a visualization of publicly available bisulfite sequencing and reduced representation bisulfite sequencing experiments through the UCSC Genome Browser. MethBase contents include single-CpG site resolution methylation levels for each CpG site in the genome of interest, annotation of regions of hypomethylation often associated with gene promoters, and annotation of allele-specific methylation associated with genomic imprinting.
See also
DNA methylation
MethDB
NGSmethDB
References
External links
http://smithlabresearch.org/software/methbase
Genetics databases
Epigenetics
DNA
DNA sequencing | MethBase | Chemistry,Biology | 151 |
18,284,100 | https://en.wikipedia.org/wiki/IYOUIT | IYOUIT is a mobile alpha service to share personal experiences with others while on the go. It was released in June 2008 by NTT Docomo Euro-Labs and discontinued in August 2011.
IYOUIT allows for an instant automated sharing of personal experiences within communities online. It offers contextual tagging for use in everyday life. By hooking a mobile phone up to the Web 2.0 services Flickr and Twitter, sharing can be instant, by posting single data items to such services, or through the aggregation of context information in online blogs.
IYOUIT provides users with access to the whereabouts of their friends, informs them about weather conditions and uploads photos taken and sounds recorded. If the user comes across a book (or other products), they can take a picture of the ISBN code or the product ID with the phone's camera, and IYOUIT will fill in the blanks for instant exchange with friends. IYOUIT records scanned Bluetooth or WLAN beacons and aggregates all data into context information that can be shared with others worldwide on the Web and on the mobile phone.
Software
IYOUIT has been developed by NTT Docomo Euro-Labs in Munich together with the Dutch Telematica Instituut in a joint research project on platform support for Context Awareness in mobile services and applications.
The application is available from the IYOUIT portal at http://www.iyouit.eu. It is written in Python and runs on the Nokia Series-S60 platform (see PyS60).
IYOUIT is based on its own framework of software components to host services and data sources. Framework components, for instance, track the positions of users via GPS and cellular information and identify places of interest over time by learning from their past behavior. Each component offers an API, which allows programmers to integrate their own third-party software components.
See also
Jaiku
References
External links
IYOUIT
DOCOMO Communications Laboratories Europe GmbH
Telematica Instituut
Jaiku (bought by Google in 10/2007)
Plazes (bought by Nokia in 6/2008)
Shozu
Quiro by Deutsche Telekom
smart2go
PlaceEngine by Sony
Mobile social software
Blog hosting services
Nippon Telegraph and Telephone
NTT Docomo | IYOUIT | Technology | 461 |
2,341 | https://en.wikipedia.org/wiki/Alkaloid | Alkaloids are a broad class of naturally occurring organic compounds that contain at least one nitrogen atom. Some synthetic compounds of similar structure may also be termed alkaloids.
Alkaloids are produced by a large variety of organisms including bacteria, fungi, plants, and animals. They can be purified from crude extracts of these organisms by acid-base extraction, or solvent extractions followed by silica-gel column chromatography. Alkaloids have a wide range of pharmacological activities including antimalarial (e.g. quinine), antiasthma (e.g. ephedrine), anticancer (e.g. homoharringtonine), cholinomimetic (e.g. galantamine), vasodilatory (e.g. vincamine), antiarrhythmic (e.g. quinidine), analgesic (e.g. morphine), antibacterial (e.g. chelerythrine), and antihyperglycemic activities (e.g. berberine). Many have found use in traditional or modern medicine, or as starting points for drug discovery. Other alkaloids possess psychotropic (e.g. psilocin) and stimulant activities (e.g. cocaine, caffeine, nicotine, theobromine), and have been used in entheogenic rituals or as recreational drugs. Alkaloids can be toxic too (e.g. atropine, tubocurarine). Although alkaloids act on a diversity of metabolic systems in humans and other animals, they almost uniformly evoke a bitter taste.
The boundary between alkaloids and other nitrogen-containing natural compounds is not clear-cut. Most alkaloids are basic, although some have neutral and even weakly acidic properties. In addition to carbon, hydrogen and nitrogen, alkaloids may also contain oxygen or sulfur. Rarer still, they may contain elements such as phosphorus, chlorine, and bromine. Compounds like amino acid peptides, proteins, nucleotides, nucleic acid, amines, and antibiotics are usually not called alkaloids. Natural compounds containing nitrogen in the exocyclic position (mescaline, serotonin, dopamine, etc.) are usually classified as amines rather than as alkaloids. Some authors, however, consider alkaloids a special case of amines.
Naming
The name "alkaloids" was introduced in 1819 by the German chemist Carl Friedrich Wilhelm Meissner, and is derived from the late Latin root alkali and the Greek-language suffix -oid ('like'). However, the term came into wide use only after the publication of a review article by Oscar Jacobsen in the chemical dictionary of Albert Ladenburg in the 1880s.
There is no unique method for naming alkaloids. Many individual names are formed by adding the suffix "ine" to the species or genus name. For example, atropine is isolated from the plant Atropa belladonna; strychnine is obtained from the seed of the Strychnine tree (Strychnos nux-vomica L.). Where several alkaloids are extracted from one plant their names are often distinguished by variations in the suffix: "idine", "anine", "aline", "inine" etc. There are also at least 86 alkaloids whose names contain the root "vin" because they are extracted from vinca plants such as Vinca rosea (Catharanthus roseus); these are called vinca alkaloids.
History
Alkaloid-containing plants have been used by humans since ancient times for therapeutic and recreational purposes. For example, medicinal plants have been known in Mesopotamia from about 2000 BC. The Odyssey of Homer referred to a gift given to Helen by the Egyptian queen, a drug bringing oblivion. It is believed that the gift was an opium-containing drug. A Chinese book on houseplants written in 1st–3rd centuries BC mentioned a medical use of ephedra and opium poppies. Also, coca leaves have been used by Indigenous South Americans since ancient times.
Extracts from plants containing toxic alkaloids, such as aconitine and tubocurarine, were used since antiquity for poisoning arrows.
Studies of alkaloids began in the 19th century. In 1804, the German chemist Friedrich Sertürner isolated from opium a "soporific principle" (), which he called "morphium", referring to Morpheus, the Greek god of dreams; in German and some other Central-European languages, this is still the name of the drug. The term "morphine", used in English and French, was given by the French physicist Joseph Louis Gay-Lussac.
A significant contribution to the chemistry of alkaloids in the early years of its development was made by the French researchers Pierre Joseph Pelletier and Joseph Bienaimé Caventou, who discovered quinine (1820) and strychnine (1818). Several other alkaloids were discovered around that time, including xanthine (1817), atropine (1819), caffeine (1820), coniine (1827), nicotine (1828), colchicine (1833), sparteine (1851), and cocaine (1860). The development of the chemistry of alkaloids was accelerated by the emergence of spectroscopic and chromatographic methods in the 20th century, so that by 2008 more than 12,000 alkaloids had been identified.
The first complete synthesis of an alkaloid was achieved in 1886 by the German chemist Albert Ladenburg. He produced coniine by reacting 2-methylpyridine with acetaldehyde and reducing the resulting 2-propenyl pyridine with sodium.
Classifications
Compared with most other classes of natural compounds, alkaloids are characterized by a great structural diversity. There is no uniform classification. Initially, when knowledge of chemical structures was lacking, botanical classification of the source plants was relied on. This classification is now considered obsolete.
More recent classifications are based on similarity of the carbon skeleton (e.g., indole-, isoquinoline-, and pyridine-like) or biochemical precursor (ornithine, lysine, tyrosine, tryptophan, etc.). However, they require compromises in borderline cases; for example, nicotine contains a pyridine fragment from nicotinamide and a pyrrolidine part from ornithine and therefore can be assigned to both classes.
Alkaloids are often divided into the following major groups:
"True alkaloids" contain nitrogen in the heterocycle and originate from amino acids. Their characteristic examples are atropine, nicotine, and morphine. This group also includes some alkaloids that besides the nitrogen heterocycle contain terpene (e.g., evonine) or peptide fragments (e.g. ergotamine). The piperidine alkaloids coniine and coniceine may be regarded as true alkaloids (rather than pseudoalkaloids: see below) although they do not originate from amino acids.
"Protoalkaloids", which contain nitrogen (but not the nitrogen heterocycle) and also originate from amino acids. Examples include mescaline, adrenaline and ephedrine.
Polyamine alkaloids – derivatives of putrescine, spermidine, and spermine.
Peptide and cyclopeptide alkaloids.
Pseudoalkaloids – alkaloid-like compounds that do not originate from amino acids. This group includes terpene-like and steroid-like alkaloids, as well as purine-like alkaloids such as caffeine, theobromine, theacrine and theophylline. Some authors classify ephedrine and cathinone as pseudoalkaloids. Those originate from the amino acid phenylalanine, but acquire their nitrogen atom not from the amino acid but through transamination.
Some alkaloids do not have the carbon skeleton characteristic of their group. So, galanthamine and homoaporphines do not contain isoquinoline fragment, but are, in general, attributed to isoquinoline alkaloids.
Main classes of monomeric alkaloids are listed in the table below:
Properties
Most alkaloids contain oxygen in their molecular structure; those compounds are usually colorless crystals at ambient conditions. Oxygen-free alkaloids, such as nicotine or coniine, are typically volatile, colorless, oily liquids. Some alkaloids are colored, like berberine (yellow) and sanguinarine (orange).
Most alkaloids are weak bases, but some, such as theobromine and theophylline, are amphoteric. Many alkaloids dissolve poorly in water but readily dissolve in organic solvents, such as diethyl ether, chloroform or 1,2-dichloroethane. Caffeine, cocaine, codeine and nicotine are slightly soluble in water (with a solubility of ≥1g/L), whereas others, including morphine and yohimbine are very slightly water-soluble (0.1–1 g/L). Alkaloids and acids form salts of various strengths. These salts are usually freely soluble in water and ethanol and poorly soluble in most organic solvents. Exceptions include scopolamine hydrobromide, which is soluble in organic solvents, and the water-soluble quinine sulfate.
Most alkaloids have a bitter taste or are poisonous when ingested. Alkaloid production in plants appears to have evolved in response to feeding by herbivorous animals; however, some animals have evolved the ability to detoxify alkaloids. Some alkaloids can produce developmental defects in the offspring of animals that consume but cannot detoxify the alkaloids. One example is the alkaloid cyclopamine, produced in the leaves of corn lily. During the 1950s, up to 25% of lambs born to sheep that had grazed on corn lily had serious facial deformations. These ranged from deformed jaws to cyclopia. After decades of research, in the 1980s, the compound responsible for these deformities was identified as the alkaloid 11-deoxyjervine, later renamed to cyclopamine.
Distribution in nature
Alkaloids are generated by various living organisms, especially by higher plants – about 10 to 25% of those contain alkaloids. Therefore, in the past the term "alkaloid" was associated with plants.
The alkaloid content of plants is usually within a few percent and is inhomogeneous over the plant tissues. Depending on the type of plant, the maximum concentration is observed in the leaves (for example, black henbane), fruits or seeds (Strychnine tree), root (Rauvolfia serpentina) or bark (cinchona). Furthermore, different tissues of the same plant may contain different alkaloids.
Besides plants, alkaloids are found in certain types of fungi, such as psilocybin in the fruiting bodies of the genus Psilocybe, and in animals, such as bufotenin in the skin of some toads and a number of insects, notably ants. Many marine organisms also contain alkaloids. Some amines, such as adrenaline and serotonin, which play an important role in higher animals, are similar to alkaloids in their structure and biosynthesis and are sometimes called alkaloids.
Extraction
Because of the structural diversity of alkaloids, there is no single method of their extraction from natural raw materials. Most methods exploit the property of most alkaloids to be soluble in organic solvents but not in water, and the opposite tendency of their salts.
Most plants contain several alkaloids. Their mixture is extracted first and then individual alkaloids are separated. Plants are thoroughly ground before extraction. Most alkaloids are present in the raw plants in the form of salts of organic acids. The extracted alkaloids may remain salts or change into bases. Base extraction is achieved by processing the raw material with alkaline solutions and extracting the alkaloid bases with organic solvents, such as 1,2-dichloroethane, chloroform, diethyl ether or benzene. Then, the impurities are dissolved by weak acids; this converts alkaloid bases into salts that are washed away with water. If necessary, an aqueous solution of alkaloid salts is again made alkaline and treated with an organic solvent. The process is repeated until the desired purity is achieved.
In the acidic extraction, the raw plant material is processed by a weak acidic solution (e.g., acetic acid in water, ethanol, or methanol). A base is then added to convert alkaloids to basic forms that are extracted with organic solvent (if the extraction was performed with alcohol, it is removed first, and the remainder is dissolved in water). The solution is purified as described above.
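The alternation between acid and base steps described above works because the fraction of an alkaloid present as the organic-soluble free base depends on pH. A minimal sketch using the Henderson–Hasselbalch relation (the pKa value below is a hypothetical, illustrative figure, not a constant for any particular alkaloid):

```python
def fraction_free_base(pH, pKa):
    """Fraction of a basic alkaloid present as the neutral (organic-
    soluble) free base at a given pH, from Henderson-Hasselbalch:
    [B] / ([B] + [BH+]) = 1 / (1 + 10**(pKa - pH))."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

# Illustrative pKa of ~8 for a generic alkaloid (assumed value):
pKa = 8.0
for pH in (2.0, 8.0, 12.0):
    print(f"pH {pH:4.1f}: fraction free base = {fraction_free_base(pH, pKa):.4f}")
```

At strongly acidic pH essentially all of the alkaloid is the water-soluble salt (and is washed away with water), while making the solution alkaline shifts it to the free base that partitions into the organic solvent, which is why the procedures above alternate acidic and alkaline treatments.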
Alkaloids are separated from their mixture using their different solubility in certain solvents and different reactivity with certain reagents or by distillation.
A number of alkaloids are identified from insects, among which the fire ant venom alkaloids known as solenopsins have received greater attention from researchers. These insect alkaloids can be efficiently extracted by solvent immersion of live fire ants or by centrifugation of live ants followed by silica-gel chromatography purification. Tracking and dosing the extracted solenopsin ant alkaloids has been described as possible based on their absorbance peak around 232 nanometers.
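Dosing by an absorbance peak, as described for the solenopsins above, typically relies on the Beer–Lambert law, A = εlc. A minimal sketch; the molar absorptivity and absorbance values here are assumptions for illustration, not measured constants for solenopsin:

```python
def concentration_from_absorbance(A, epsilon, path_cm=1.0):
    """Beer-Lambert law: A = epsilon * l * c, so c = A / (epsilon * l)."""
    return A / (epsilon * path_cm)

# Hypothetical molar absorptivity, for illustration only:
epsilon = 5000.0   # L mol^-1 cm^-1 (assumed value)
A_232 = 0.25       # absorbance read at 232 nm
c = concentration_from_absorbance(A_232, epsilon)
print(f"{c * 1e6:.1f} uM")   # 0.25 / 5000 = 5e-5 M = 50.0 uM
```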
Biosynthesis
Biological precursors of most alkaloids are amino acids, such as ornithine, lysine, phenylalanine, tyrosine, tryptophan, histidine, aspartic acid, and anthranilic acid. Nicotinic acid can be synthesized from tryptophan or aspartic acid. The pathways of alkaloid biosynthesis are too numerous to be easily classified. However, there are a few typical reactions involved in the biosynthesis of various classes of alkaloids, including synthesis of Schiff bases and the Mannich reaction.
Synthesis of Schiff bases
Schiff bases can be obtained by reacting amines with ketones or aldehydes. These reactions are a common method of producing C=N bonds.
In the biosynthesis of alkaloids, such reactions may take place within a molecule, such as in the synthesis of piperidine:
Mannich reaction
An integral component of the Mannich reaction, in addition to an amine and a carbonyl compound, is a carbanion, which plays the role of the nucleophile in the nucleophilic addition to the ion formed by the reaction of the amine and the carbonyl.
The Mannich reaction can proceed both intermolecularly and intramolecularly:
Dimer alkaloids
In addition to the monomeric alkaloids described above, there are also dimeric, and even trimeric and tetrameric alkaloids formed upon condensation of two, three, and four monomeric alkaloids. Dimeric alkaloids are usually formed from monomers of the same type through the following mechanisms:
Mannich reaction, resulting in, e.g., voacamine
Michael reaction (villalstonine)
Condensation of aldehydes with amines (toxiferine)
Oxidative addition of phenols (dauricine, tubocurarine)
Lactonization (carpaine).
There are also dimeric alkaloids formed from two distinct monomers, such as the vinca alkaloids vinblastine and vincristine, which are formed from the coupling of catharanthine and vindoline. The newer semi-synthetic chemotherapeutic agent vinorelbine is used in the treatment of non-small-cell lung cancer. It is another derivative dimer of vindoline and catharanthine and is synthesised from anhydrovinblastine, starting either from leurosine or the monomers themselves.
Biological role
Alkaloids are among the most important and best-known secondary metabolites, i.e. biogenic substances not directly involved in the normal growth, development, or reproduction of the organism. Instead, they generally mediate ecological interactions, which may produce a selective advantage for the organism by increasing its survivability or fecundity. In some cases their function, if any, remains unclear. An early hypothesis, that alkaloids are the final products of nitrogen metabolism in plants, as urea and uric acid are in mammals, was refuted by the finding that their concentration fluctuates rather than steadily increases.
Most of the known functions of alkaloids are related to protection. For example, the aporphine alkaloid liriodenine produced by the tulip tree protects it from parasitic fungi. In addition, the presence of alkaloids in the plant prevents insects and chordate animals from eating it. However, some animals are adapted to alkaloids and even use them in their own metabolism. Such alkaloid-related substances as serotonin, dopamine and histamine are important neurotransmitters in animals. Alkaloids are also known to regulate plant growth. One example of an organism that uses alkaloids for protection is the Utetheisa ornatrix, more commonly known as the ornate moth. Pyrrolizidine alkaloids render these larvae and adult moths unpalatable to many of their natural enemies like coccinellid beetles, green lacewings, insectivorous hemiptera and insectivorous bats. Another example of alkaloids being utilized occurs in the poison hemlock moth (Agonopterix alstroemeriana). This moth feeds on its highly toxic and alkaloid-rich host plant poison hemlock (Conium maculatum) during its larval stage. A. alstroemeriana may benefit twofold from the toxicity of the naturally-occurring alkaloids, both through the unpalatability of the species to predators and through the ability of A. alstroemeriana to recognize Conium maculatum as the correct location for oviposition. A fire ant venom alkaloid known as solenopsin has been demonstrated to protect queens of invasive fire ants during the foundation of new nests, thus playing a central role in the spread of this pest ant species around the world.
Applications
In medicine
Medical use of alkaloid-containing plants has a long history, and, thus, when the first alkaloids were isolated in the 19th century, they immediately found application in clinical practice. Many alkaloids are still used in medicine, usually in the form of salts; those widely used include the following:
Many synthetic and semisynthetic drugs are structural modifications of the alkaloids, which were designed to enhance or change the primary effect of the drug and reduce unwanted side-effects. For example, naloxone, an opioid receptor antagonist, is a derivative of thebaine that is present in opium.
In agriculture
Prior to the development of a wide range of relatively low-toxic synthetic pesticides, some alkaloids, such as salts of nicotine and anabasine, were used as insecticides. Their use was limited by their high toxicity to humans.
Use as psychoactive drugs
Preparations of plants and fungi containing alkaloids and their extracts, and later pure alkaloids, have long been used as psychoactive substances. Cocaine, caffeine, and cathinone are stimulants of the central nervous system. Mescaline and many indole alkaloids (such as psilocybin, dimethyltryptamine and ibogaine) have hallucinogenic effects. Morphine and codeine are strong narcotic pain killers.
There are alkaloids that do not have strong psychoactive effects themselves, but are precursors for semi-synthetic psychoactive drugs. For example, ephedrine and pseudoephedrine are used to produce methcathinone and methamphetamine. Thebaine is used in the synthesis of many painkillers such as oxycodone.
See also
Amine
Base (chemistry)
List of poisonous plants
Mayer's reagent
Natural products
Palau'amine
Secondary metabolite
Explanatory notes
Citations
General and cited references
External links
Bio-electrospraying is a technology that enables the deposition of living cells on various targets with a resolution that depends on cell size and not on the jetting phenomenon. It is envisioned that "unhealthy cells would draw a different charge at the needle from healthy ones, and could be identified by the mass spectrometer", with tremendous implications in the health care industry.
The early versions of bio-electrosprays were employed in several areas of research, most notably the self-assembly of carbon nanotubes. Although the self-assembly mechanism is not yet clear, this work established electrosprays as a competing nanofabrication route for forming self-assemblies with a wide range of nanomaterials at the nanoscale, for top-down based bottom-up assembly of structures. Future research may reveal important interactions between migrating cells and self-assembled nanostructures. Such nano-assemblies formed by this top-down approach could be explored as a bottom-up methodology for encouraging cell migration to those architectures, for example to form cell patterns for nano-electronics.
After initial exploration with a single protein, increasingly complex systems were studied by bio-electrosprays. These include, but are not limited to, neuronal cells, stem cells, and even whole embryos. The potential of the method was demonstrated by investigating cytogenetic and physiological changes of human lymphocyte cells as well as conducting comprehensive genetic, genomic and physiological state studies of human cells and cells of the model yeast Saccharomyces cerevisiae.
See also
Electrospray ionization
References
Electric and magnetic fields in matter
Biological techniques and tools
Equipment
Twin towers are a concept in architecture in which two similar-looking towers are built in close proximity to each other. They have been an architectural motif in human civilization for millennia.
Early examples include the use of twin gate towers in urban and palatial architecture in Chinese cities from the Warring States period, when they were viewed as "signifiers of the celestial realm". In the medieval period, examples include the Seljuk Kharraqan towers, twin towers wrought in decorative brickwork that represent a prominent work in the art and architecture of Islamic Iran.
In the contemporary era, the Petronas Twin Towers in Kuala Lumpur, Malaysia are a particularly celebrated example of twin tower architecture and, from 1998 to 2003, were the tallest buildings in the world. The twin towers that were part of the original World Trade Center in New York City until 2001 are also iconic, although infamous due to the September 11 attacks. Twin towers also recur in Chinese construction, where, although such structures remain relatively rare, architects and engineers have developed novel interlinking twinned structures such as the CCTV Headquarters.
In contemporary architecture, structurally connected twin towers with unequal heights have found particular favor among architects for their earthquake-resistant properties, due to such couplings yielding two differing vibration frequencies, enabling the twinned towers to support their counterparts at their more vulnerable frequencies.
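The frequency argument above can be illustrated by idealizing each tower as a single mass-spring oscillator with first-mode natural frequency f = √(k/m)/(2π); the stiffness and mass figures below are purely hypothetical, and real towers require full modal analysis:

```python
import math

def natural_frequency_hz(stiffness, mass):
    """First-mode natural frequency of a tower idealized as a single
    mass-spring oscillator: f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(stiffness / mass) / (2 * math.pi)

# Hypothetical values for two linked towers of unequal height
# (illustrative only):
f_tall  = natural_frequency_hz(stiffness=2.0e8, mass=5.0e7)  # taller, heavier
f_short = natural_frequency_hz(stiffness=2.0e8, mass=2.0e7)  # shorter, lighter
print(f_tall < f_short)  # True: unequal heights give distinct frequencies
```

Because the two natural frequencies differ, ground motion that resonates with one tower leaves the other comparatively unexcited, so a structural link lets each tower brace the other at its more vulnerable frequency.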
References
Towers
Behavioral neuroscience, also known as biological psychology, biopsychology, or psychobiology, is part of the broad, interdisciplinary field of neuroscience, with its primary focus being on the biological and neural substrates underlying human experiences and behaviors, as in our psychology. Derived from an earlier field known as physiological psychology, behavioral neuroscience applies the principles of biology to study the physiological, genetic, and developmental mechanisms of behavior in humans and other animals. Behavioral neuroscientists examine the biological bases of behavior through research that involves neuroanatomical substrates, environmental and genetic factors, effects of lesions and electrical stimulation, developmental processes, recording electrical activity, neurotransmitters, hormonal influences, chemical components, and the effects of drugs. Important topics of consideration for neuroscientific research in behavior include learning and memory, sensory processes, motivation and emotion, as well as genetic and molecular substrates concerning the biological bases of behavior. Subdivisions of behavioral neuroscience include the field of cognitive neuroscience, which emphasizes the biological processes underlying human cognition. Behavioral and cognitive neuroscience are both concerned with the neuronal and biological bases of psychology, with a particular emphasis on either cognition or behavior depending on the field.
History
Behavioral neuroscience as a scientific discipline emerged from a variety of scientific and philosophical traditions in the 18th and 19th centuries. René Descartes proposed physical models to explain animal as well as human behavior. Descartes suggested that the pineal gland, a midline unpaired structure in the brain of many organisms, was the point of contact between mind and body. Descartes also elaborated on a theory in which the pneumatics of bodily fluids could explain reflexes and other motor behavior. This theory was inspired by moving statues in a garden in Paris.
Other philosophers also helped give birth to psychology. One of the earliest textbooks in the new field, The Principles of Psychology by William James, argues that the scientific study of psychology should be grounded in an understanding of biology.
The emergence of psychology and behavioral neuroscience as legitimate sciences can be traced from the emergence of physiology from anatomy, particularly neuroanatomy. Physiologists conducted experiments on living organisms, a practice that was distrusted by the dominant anatomists of the 18th and 19th centuries. The influential work of Claude Bernard, Charles Bell, and William Harvey helped to convince the scientific community that reliable data could be obtained from living subjects.
Even before the 18th and 19th centuries, behavioral neuroscience was beginning to take form as far back as 1700 B.C. The question that seems to continually arise is: what is the connection between the mind and body? The debate is formally referred to as the mind-body problem. There are two major schools of thought that attempt to resolve the mind–body problem; monism and dualism. Plato and Aristotle are two of several philosophers who participated in this debate. Plato believed that the brain was where all mental thought and processes happened. In contrast, Aristotle believed the brain served the purpose of cooling down the emotions derived from the heart. The mind-body problem was a stepping stone toward attempting to understand the connection between the mind and body.
Another debate arose about localization of function, or functional specialization, versus equipotentiality, which played a significant role in the development of behavioral neuroscience. As a result of localization-of-function research, many prominent figures within psychology have come to differing conclusions. Wilder Penfield, working with Rasmussen, was able to develop a map of the cerebral cortex through studying epileptic patients. Research on localization of function has led behavioral neuroscientists to a better understanding of which parts of the brain control behavior. This is best exemplified through the case study of Phineas Gage.
The term "psychobiology" has been used in a variety of contexts that emphasize the importance of biology, the discipline that studies organic, neural, and cellular modifications of behavior, plasticity in neuroscience, and biological diseases in all aspects, and that analyzes behavior from a scientific point of view. In this context, psychology serves as a complementary but important discipline in the neurobiological sciences, acting as a social tool that supports the main biological science. The term "psychobiology" was first used in its modern sense by Knight Dunlap in his book An Outline of Psychobiology (1914). Dunlap also was the founder and editor-in-chief of the journal Psychobiology. In the announcement of that journal, Dunlap writes that the journal will publish research "...bearing on the interconnection of mental and physiological functions", which describes the field of behavioral neuroscience even in its modern sense.
Neuroscience is considered a relatively new discipline, with the first conference for the Society of Neuroscience occurring in 1971. The meeting was held to merge different fields focused on studying the nervous system (e.g., neuroanatomy, neurochemistry, physiological psychology, neuroendocrinology, clinical neurology, neurophysiology, neuropharmacology, etc.) by creating one interdisciplinary field. In 1983, the Journal of Comparative and Physiological Psychology, published by the American Psychological Association, was split into two separate journals: Behavioral Neuroscience and the Journal of Comparative Psychology. The editor of the journal at the time gave reasons for this separation, one being that behavioral neuroscience is the broader contemporary advancement of physiological psychology. Furthermore, in all animals, the nervous system is the organ of behavior. Therefore, every biological and behavioral variable that influences behavior must go through the nervous system to do so. Present-day research in behavioral neuroscience studies all biological variables which act through the nervous system and relate to behavior.
Relationship to other fields of psychology and biology
In many cases, humans may serve as experimental subjects in behavioral neuroscience experiments; however, a great deal of the experimental literature in behavioral neuroscience comes from the study of non-human species, most frequently rats, mice, and monkeys. As a result, a critical assumption in behavioral neuroscience is that organisms share biological and behavioral similarities, enough to permit extrapolations across species. This allies behavioral neuroscience closely with comparative psychology, ethology, evolutionary biology, and neurobiology. Behavioral neuroscience also has paradigmatic and methodological similarities to neuropsychology, which relies heavily on the study of the behavior of humans with nervous system dysfunction (i.e., a non-experimentally based biological manipulation). Synonyms for behavioral neuroscience include biopsychology, biological psychology, and psychobiology. Physiological psychology is a subfield of behavioral neuroscience, with an appropriately narrower definition.
Research methods
The distinguishing characteristic of a behavioral neuroscience experiment is that either the independent variable of the experiment is biological, or some dependent variable is biological. In other words, the nervous system of the organism under study is permanently or temporarily altered, or some aspect of the nervous system is measured (usually to be related to a behavioral variable).
Disabling or decreasing neural function
Lesions – A classic method in which a brain region of interest is naturally or intentionally destroyed to observe any resulting changes, such as degraded or enhanced performance on some behavioral measure. Lesions can be placed with relatively high accuracy thanks to a variety of brain "atlases", which provide maps of brain regions in 3-dimensional stereotactic coordinates.
Surgical lesions – Neural tissue is destroyed by removing it surgically.
Electrolytic lesions – Neural tissue is destroyed through the application of electrical shock trauma.
Chemical lesions – Neural tissue is destroyed by the infusion of a neurotoxin.
Temporary lesions – Neural tissue is temporarily disabled by cooling or by the use of anesthetics such as tetrodotoxin.
Transcranial magnetic stimulation – A new technique usually used with human subjects in which a magnetic coil applied to the scalp causes unsystematic electrical activity in nearby cortical neurons which can be experimentally analyzed as a functional lesion.
Synthetic ligand injection – A receptor activated solely by a synthetic ligand (RASSL) or Designer Receptor Exclusively Activated by Designer Drugs (DREADD) permits spatial and temporal control of G protein signaling in vivo. These systems utilize G protein-coupled receptors (GPCRs) engineered to respond exclusively to synthetic small-molecule ligands, such as clozapine N-oxide (CNO), and not to their natural ligand(s). RASSLs represent a GPCR-based chemogenetic tool. Upon activation, these synthetic ligands can decrease neural function through G-protein signaling, for example via potassium conductances that attenuate neural activity.
Optogenetic inhibition – A light-activated inhibitory protein is expressed in cells of interest. Powerful millisecond-timescale neuronal inhibition is instigated upon stimulation by the appropriate frequency of light delivered via fiber optics or implanted LEDs in the case of vertebrates, or via external illumination for small, sufficiently translucent invertebrates. Bacterial halorhodopsins and proton pumps are the two classes of proteins used for inhibitory optogenetics, achieving inhibition by increasing cytoplasmic levels of halides (e.g., Cl−) or decreasing the cytoplasmic concentration of protons, respectively.
Enhancing neural function
Electrical stimulation – A classic method in which neural activity is enhanced by application of a small electric current (too small to cause significant cell death).
Psychopharmacological manipulations – A chemical receptor antagonist induces neural activity by interfering with neurotransmission. Antagonists can be delivered systemically (such as by intravenous injection) or locally (intracerebrally) during a surgical procedure into the ventricles or into specific brain structures. For example, NMDA antagonist AP5 has been shown to inhibit the initiation of long term potentiation of excitatory synaptic transmission (in rodent fear conditioning) which is believed to be a vital mechanism in learning and memory.
Synthetic ligand injection – Likewise, Gq-DREADDs can be used to modulate cellular function in brain regions such as the hippocampus. This activation results in the amplification of γ-rhythms, which increases motor activity.
Transcranial magnetic stimulation – In some cases (for example, studies of motor cortex), this technique can be analyzed as having a stimulatory effect (rather than as a functional lesion).
Optogenetic excitation – A light activated excitatory protein is expressed in select cells. Channelrhodopsin-2 (ChR2), a light activated cation channel, was the first bacterial opsin shown to excite neurons in response to light, though a number of new excitatory optogenetic tools have now been generated by improving and imparting novel properties to ChR2.
Measuring neural activity
Optical techniques – Optical methods for recording neuronal activity rely on methods that modify the optical properties of neurons in response to the cellular events associated with action potentials or neurotransmitter release.
Voltage sensitive dyes (VSDs) were among the earliest method for optically detecting neuronal activity. VSDs commonly changed their fluorescent properties in response to a voltage change across the neuron's membrane, rendering membrane sub-threshold and supra-threshold (action potentials) electrical activity detectable. Genetically encoded voltage sensitive fluorescent proteins have also been developed.
Calcium imaging relies on dyes or genetically encoded proteins that fluoresce upon binding to the calcium that is transiently present during an action potential.
Synapto-pHluorin is a technique that relies on a fusion protein that combines a synaptic vesicle membrane protein and a pH sensitive fluorescent protein. Upon synaptic vesicle release, the chimeric protein is exposed to the higher pH of the synaptic cleft, causing a measurable change in fluorescence.
Single-unit recording – A method whereby an electrode is introduced into the brain of a living animal to detect electrical activity that is generated by the neurons adjacent to the electrode tip. Normally this is performed with sedated animals but sometimes it is performed on awake animals engaged in a behavioral event, such as a thirsty rat whisking a particular sandpaper grade previously paired with water in order to measure the corresponding patterns of neuronal firing at the decision point.
Multielectrode recording – The use of a bundle of fine electrodes to record the simultaneous activity of up to hundreds of neurons.
Functional magnetic resonance imaging – fMRI, a technique most frequently applied to human subjects, in which changes in cerebral blood flow can be detected in an MRI apparatus and are taken to indicate relative activity of larger-scale brain regions (i.e., on the order of hundreds of thousands of neurons).
Positron emission tomography – PET detects photons in a 3-D nuclear medicine examination. These photons are emitted following injection of radioisotopes such as fluorine-18. PET imaging reveals pathological processes that predict anatomic changes, making it important for detecting, diagnosing and characterising many pathologies.
Electroencephalography – EEG, and the derivative technique of event-related potentials, in which scalp electrodes monitor the average activity of neurons in the cortex (again, used most frequently with human subjects). This technique uses different types of electrodes for recording systems such as needle electrodes and saline-based electrodes. EEG allows for the investigation of mental disorders, sleep disorders and physiology. It can monitor brain development and cognitive engagement.
Functional neuroanatomy – A more complex counterpart of phrenology. The expression of some anatomical marker is taken to reflect neural activity. For example, the expression of immediate early genes is thought to be caused by vigorous neural activity. Likewise, the injection of 2-deoxyglucose prior to some behavioral task can be followed by anatomical localization of that chemical; it is taken up by neurons that are electrically active.
Magnetoencephalography – MEG shows the functioning of the human brain through the measurement of electromagnetic activity. Measuring the magnetic fields created by the electric current flowing within the neurons identifies brain activity associated with various human functions in real time, with millimeter spatial accuracy. Clinicians can noninvasively obtain data to help them assess neurological disorders and plan surgical treatments.
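The calcium imaging and related fluorescence signals described above are commonly quantified as a relative fluorescence change, ΔF/F = (F − F0)/F0, where F0 is the resting fluorescence. A minimal sketch; the baseline-window length and the toy trace are illustrative assumptions:

```python
import statistics

def delta_f_over_f(trace, baseline_frames=50):
    """Normalize a raw fluorescence trace to dF/F, using the median of
    the first `baseline_frames` samples as the resting fluorescence F0."""
    f0 = statistics.median(trace[:baseline_frames])
    return [(f - f0) / f0 for f in trace]

# Toy trace: flat baseline of 100 with a calcium transient rising to 150
trace = [100.0] * 50 + [150.0, 140.0, 120.0, 105.0, 100.0]
dff = delta_f_over_f(trace)
print(max(dff))   # peak dF/F = (150 - 100) / 100 = 0.5
```

Using a median (rather than a mean) for F0 makes the baseline estimate robust to the occasional transient that strays into the baseline window.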
Genetic techniques
QTL mapping – The influence of a gene in some behavior can be statistically inferred by studying inbred strains of some species, most commonly mice. The recent sequencing of the genome of many species, most notably mice, has facilitated this technique.
Selective breeding – Organisms, often mice, may be bred selectively among inbred strains to create a recombinant congenic strain. This might be done to isolate an experimentally interesting stretch of DNA derived from one strain on the background genome of another strain to allow stronger inferences about the role of that stretch of DNA.
Genetic engineering – The genome may also be experimentally-manipulated; for example, knockout mice can be engineered to lack a particular gene, or a gene may be expressed in a strain which does not normally do so (the 'transgenic'). Advanced techniques may also permit the expression or suppression of a gene to occur by injection of some regulating chemical.
Quantifying behavior
Markerless pose estimation – The advancement of computer vision techniques in recent years have allowed for precise quantifications of animal movements without needing to fit physical markers onto the subject. On high-speed video captured in a behavioral assay, keypoints from the subject can be extracted frame-by-frame, which is often useful to analyze in tandem with neural recordings/manipulations. Analyses can be conducted on how keypoints (i.e. parts of the animal) move within different phases of a particular behavior (on a short timescale), or throughout an animal's behavioral repertoire (longer timescale). These keypoint changes can be compared with corresponding changes in neural activity. A machine learning approach can also be used to identify specific behaviors (e.g. forward walking, turning, grooming, courtship, etc.), and quantify the dynamics of transitions between behaviors.
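As a rough illustration of the keypoint analysis described above, the sketch below computes frame-to-frame keypoint speeds and applies a crude two-state behavior labeling. The frame rate, threshold, and coordinates are hypothetical values; real pipelines use trained pose-estimation models and richer machine-learning classifiers:

```python
import math

def keypoint_speeds(xs, ys, fps=100.0):
    """Frame-to-frame speed of one tracked keypoint (pixels per second)."""
    return [math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i]) * fps
            for i in range(len(xs) - 1)]

def label_frames(speeds, moving_thresh=50.0):
    """Crude two-state behavior labeling: 'moving' vs 'still'.
    The threshold is an arbitrary illustrative value."""
    return ["moving" if s > moving_thresh else "still" for s in speeds]

xs = [0.0, 0.0, 0.0, 1.0, 2.0, 3.0]   # keypoint x-coordinate per frame
ys = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]   # keypoint y-coordinate per frame
labels = label_frames(keypoint_speeds(xs, ys))
print(labels)   # ['still', 'still', 'moving', 'moving', 'moving']
```

Per-frame labels like these can then be aligned with simultaneously recorded neural activity, or fed into transition analyses across an animal's behavioral repertoire.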
Other research methods
Computational models – Using a computer to formulate real-world problems and develop solutions. Although this method is rooted in computer science, it has begun to spread to other areas of study, psychology being one of them. Computational models allow researchers in psychology to enhance their understanding of the functions and development of nervous systems. Example methods include the modelling of neurons, networks and brain systems, and theoretical analysis. Computational methods have a wide variety of roles, including clarifying experiments, testing hypotheses and generating new insights, and they play an increasing role in the advancement of biological psychology.
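As one concrete example of the neuron modelling mentioned above, the following is a minimal leaky integrate-and-fire simulation using Euler integration; all parameter values are illustrative assumptions, not drawn from any particular study:

```python
def simulate_lif(I=1.5, dt=0.1, t_max=100.0, tau=10.0,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: tau * dV/dt = -(V - v_rest) + I.
    Forward-Euler integration; returns spike times (arbitrary units)."""
    v, t, spikes = v_rest, 0.0, []
    while t < t_max:
        v += (dt / tau) * (-(v - v_rest) + I)
        if v >= v_thresh:          # threshold crossing: record spike, reset
            spikes.append(t)
            v = v_reset
        t += dt
    return spikes

spikes = simulate_lif()
print(len(spikes))   # constant suprathreshold input -> regular firing
```

With the input current below threshold (e.g. I=0.5 here), the membrane potential settles short of v_thresh and the model never spikes, illustrating how even this simple abstraction captures a threshold nonlinearity of real neurons.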
Limitations and advantages
Different manipulations have advantages and limitations. Neural tissue destroyed as a primary consequence of a surgery, electric shock or neurotoxin can confound the results so that the physical trauma masks changes in the fundamental neurophysiological processes of interest.
For example, when using an electrolytic probe to create a purposeful lesion in a distinct region of the rat brain, surrounding tissue can be affected: so, a change in behavior exhibited by the experimental group post-surgery is to some degree a result of damage to surrounding neural tissue, rather than of the lesion of the distinct brain region itself. Most genetic manipulation techniques are also considered permanent. Temporary lesions can be achieved with advances in genetic manipulation; for example, certain genes can now be switched on and off with diet. Pharmacological manipulations also allow certain neurotransmitters to be blocked only temporarily, as function returns to its previous state once the drug has been metabolized.
Topic areas
In general, behavioral neuroscientists study various neuronal and biological processes underlying behavior, though limited by the need to use nonhuman animals. As a result, the bulk of literature in behavioral neuroscience deals with experiences and mental processes that are shared across different animal models such as:
Sensation and perception
Motivated behavior (hunger, thirst, sex)
Control of movement
Learning and memory
Sleep and biological rhythms
Emotion
However, with increasing technical sophistication and with the development of more precise noninvasive methods that can be applied to human subjects, behavioral neuroscientists are beginning to contribute to other classical topic areas of psychology, philosophy, and linguistics, such as:
Language
Reasoning and decision making
Consciousness
Behavioral neuroscience has also had a strong history of contributing to the understanding of medical disorders, including those that fall under the purview of clinical psychology and biological psychopathology (also known as abnormal psychology). Although animal models do not exist for all mental illnesses, the field has contributed important therapeutic data on a variety of conditions, including:
Parkinson's disease, a degenerative disorder of the central nervous system that often impairs motor skills and speech.
Huntington's disease, a rare inherited neurological disorder whose most obvious symptoms are abnormal body movements and a lack of coordination. It also affects a number of mental abilities and some aspects of personality.
Alzheimer's disease, a neurodegenerative disease that, in its most common form, is found in people over the age of 65 and is characterized by progressive cognitive deterioration, together with declining activities of daily living and by neuropsychiatric symptoms or behavioral changes.
Clinical depression, a common psychiatric disorder, characterized by a persistent lowering of mood, loss of interest in usual activities and diminished ability to experience pleasure.
Schizophrenia, a psychiatric diagnosis that describes a mental illness characterized by impairments in the perception or expression of reality, most commonly manifesting as auditory hallucinations, paranoid or bizarre delusions or disorganized speech and thinking in the context of significant social or occupational dysfunction.
Autism, a brain development disorder that impairs social interaction and communication, and causes restricted and repetitive behavior, all starting before a child is three years old.
Anxiety, a physiological state characterized by cognitive, somatic, emotional, and behavioral components. These components combine to create the feelings that are typically recognized as fear, apprehension, or worry.
Drug abuse, including alcoholism.
Research on topic areas
Cognition
Behavioral neuroscientists conduct research on various cognitive processes through the use of different neuroimaging techniques. Examples of cognitive research might involve examination of neural correlates during emotional information processing, such as one study that analyzed the relationship between subjective affect and neural reactivity during sustained processing of positive (savoring) and negative (rumination) emotion. The aim of the study was to analyze whether repetitive positive thinking (seen as being beneficial) and repetitive negative thinking (significantly related to worse mental health) have similar underlying neural mechanisms. Researchers found that the individuals who had a more intense positive affect during savoring were also the individuals who had a more intense negative affect during rumination. fMRI data showed similar activations in brain regions during both rumination and savoring, suggesting shared neural mechanisms between the two types of repetitive thinking. The results of the study suggest there are similarities, both subjective and mechanistic, in repetitive thinking about positive and negative emotions, and that sustained emotional processing of positive and negative information relies on shared neural mechanisms.
Stress
Research within the field of behavioral neuroscience involves looking at the complex neuroanatomy underlying different emotional processes, such as stress. Godoy et al. (2018) did so by providing an in-depth analysis of the neurobiological underpinnings of the stress response. The article opens with an overview of the historical development of stress research and its importance, leading up to research on both physical and psychological stressors today. The authors explored various indicators of stress and their corresponding neuroanatomical processing, along with the temporal dynamics of both acute and chronic stress and their effects on the brain. Overall, the article provides a comprehensive scientific overview of stress through a neurobiological lens, highlighting the importance of current knowledge in stress-related research.
Awards
Nobel Laureates
The following Nobel Prize winners could reasonably be considered behavioral neuroscientists or neurobiologists. (This list omits winners who were almost exclusively neuroanatomists or neurophysiologists; i.e., those that did not measure behavioral or neurobiological variables.)
Kavli Prize in Neuroscience
Ann Graybiel (1942)
Cornelia Bargmann (1961)
Winfried Denk (1957)
See also
References
External links
Biological Psychology Links
Theory of Biological Psychology (Documents No. 9 and 10 in English)
IBRO (International Brain Research Organization)
Neuropsychology
Psychoneuroimmunology
CIMOSA, standing for "Computer Integrated Manufacturing Open System Architecture", is an enterprise modeling framework, which aims to support the enterprise integration of machines, computers and people. The framework is based on the system life cycle concept, and offers a modelling language, methodology and supporting technology to support these goals.
It was developed in the 1990s by the AMICE Consortium, in an EU project. A non-profit organization CIMOSA Association was later established to keep ownership of the CIMOSA specification, to promote it and to support its further evolution.
Overview
The original aim of CIMOSA (1992) was "to elaborate an open system architecture for CIM and to define a set of concepts and rules to facilitate the building of future CIM systems". One of the main ideas of CIMOSA is the categorization of manufacturing operations in:
Generic functions: generic parts of every enterprise, independent of organisation structure or business area.
Specific (partial and particular) functions: specific for individual enterprises.
The development of CIMOSA has ultimately resulted in two key items:
Modeling Framework: This framework supports "all phases of the CIM system life-cycle from requirements definition, through design specification, implementation description and execution of the daily enterprise operation".
Integrating Infrastructure: This infrastructure provides "specific information technology services for the execution of the Particular Implementation Model", which has proven to be vendor independent and portable.
The framework furthermore offers an "event-driven, process-based modeling approach with the goal to cover essential enterprise aspects in one integrated model. The main aspects are the functional, behavioral, resource, information and organizational aspect".
CIMOSA can be applied in process simulation and analysis. Standardized CIMOSA models "can also be used on line in the manufacturing enterprise for scheduling, dispatching, monitoring and providing process information". One of the standards based on CIMOSA is the Generalised Enterprise Reference Architecture and Methodology (GERAM).
Building blocks
The main focus of CIMOSA has been to construct:
a framework for enterprise modelling, a reference architecture
an enterprise modelling language
an integrating infrastructure for model enactment supported by
a common terminology
A close liaison with European and international standardization organisations was established to stimulate the standardization process for enterprise integration.
CIMOSA aims at integrating enterprise operations by means of efficient information exchange within the enterprise. CIMOSA models enterprises using four perspectives:
the function view describes the functional structure required to satisfy the objectives of an enterprise and related control structures;
the information view describes the information required by each function;
the resource view describes the resources and their relations to functional and control structures; and
the organization view describes the responsibilities assigned to individuals for functional and control structures.
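As an illustration only (the class and field names below are ad hoc sketches, not the normative CIMOSA constructs), the four views can be represented as a data structure that ties each enterprise function to its required information, executing resources, and responsible organization:

```python
from dataclasses import dataclass, field

# Illustration only: ad-hoc sketch of CIMOSA's four modelling views,
# not the normative CIMOSA constructs.

@dataclass
class EnterpriseFunction:
    """Function view: what is done, with event-driven control (triggers)."""
    name: str
    triggers: list = field(default_factory=list)

@dataclass
class EnterpriseModel:
    functions: dict = field(default_factory=dict)
    information: dict = field(default_factory=dict)   # information view
    resources: dict = field(default_factory=dict)     # resource view
    organization: dict = field(default_factory=dict)  # organization view

    def add_function(self, fn, needs, performed_by, owned_by):
        """Register one function together with its three related views."""
        self.functions[fn.name] = fn
        self.information[fn.name] = needs
        self.resources[fn.name] = performed_by
        self.organization[fn.name] = owned_by

model = EnterpriseModel()
model.add_function(
    EnterpriseFunction("assemble_order", triggers=["order_received"]),
    needs=["bill_of_materials"],
    performed_by=["assembly_cell_3"],
    owned_by="production_manager",
)
print(model.organization["assemble_order"])
```

The point of the sketch is the separation of concerns: the same function appears in all four views, each view answering a different question about it.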
AMICE Consortium
AMICE Consortium was a European organization of major companies concerned with computer-integrated manufacturing (CIM). It was initiated in 1985 and dissolved in 1995, and eventually included users, vendors, consulting companies, and academia. Among the participating companies were IBM, Hewlett-Packard, Digital Equipment Corporation (DEC), Siemens, Fiat, and Daimler-Benz.
The AMICE Consortium was initiated as a European Strategic Program on Research in Information Technology (ESPRIT) project to bring together stakeholders in the development of CIM for the development of new standards for CIM systems. This led to the development of CIMOSA, which defined "a comprehensive set of constructs sufficient to describe all aspects of manufacturing systems." It also established the CIMOSA Association.
Publications
The AMICE Consortium has published several books and papers. A selection:
1989. Open System Architecture for CIM, Research Report of ESPRIT Project 688, Vol. 1, Springer-Verlag.
1991. Open System Architecture, CIMOSA, AD 1.0, Architecture Description, ESPRIT Consortium AMICE, Brussels, Belgium.
1992. ESPRIT Project 5288, Milestone M-2, AD2.0, 2, Architecture description, document RO443/1. Consortium AMICE, Brussels, Belgium.
1993. CIMOSA: open system architecture for CIM, Springer, 1993.
CIMOSA Association
At the start of the 1990s the CIMOSA Association (COA) was founded as a non-profit organisation by the AMICE Consortium, aiming to promote enterprise engineering and integration (EE&I) based on CIMOSA. It has extended its goals in the new millennium towards "upcoming new enterprise paradigms of extended, virtual and agile enterprises, which cause new requirements on organisational concepts and supporting technologies. Enhanced decision support and operation monitoring and control are some of the needs of today and tomorrow. Capturing knowledge and using it across organisational boundaries will be a major challenge in the new types of businesses. This real-time knowledge needed to support the establishment, deployment and discontinuation of the inter and intra organisational relations".
From the start CIMOSA has been an active supporter for national, European and international standardization of Enterprise Integration.
In 2010 the CIMOSA Association closed due to "loss of membership according to people retirements."
See also
Architecture of Integrated Information Systems (ARIS)
Computer Integrated Manufacturing
Generalised Enterprise Reference Architecture and Methodology (GERAM)
ISO 19439 Framework for enterprise modelling
References
Further reading
AMICE (1993) CIMOSA: Open System Architecture for CIM, 2nd edition, Springer-Verlag, Berlin
Kosanke, Kurt. "CIMOSA—overview and status." Computers in industry 27.2 (1995): 101–109.
Kosanke, Kurt, F. Vernadat, and Martin Zelm. "CIMOSA: enterprise engineering and integration." Computers in industry 40.2 (1999): 83–97.
Kosanke, Kurt, and Martin Zelm. "CIMOSA modelling processes." Computers in Industry 40.2 (1999): 141–153.
François Vernadat (1996) Enterprise Modeling and Integration: Principles and Applications, Chapman & Hall, London,
Klittich, M. (1988). CIM-OSA: the implementation viewpoint. Puente, E. and MacCnaill, P., Ed. Computer Integrated Manufacturing: Proceedings of the 4th CIM Europe Conference, Madrid, May 18–20, 1988. pp. 251–264. Bedford, UK, IFS Publications.
W. N. Hou, M. Klittich, R van Gerwen: The Knowledge Base Oriented Machine Front End Services for the Integrating Infrastructure of CIM Open System Architecture(CIMOSA), Proc. Of the 8th CIM Europe Annual Conference 27–29 May 1992, Birmingham UK.
Klittich, M.: CIMOSA Part 3: CIMOSA Integrating Infrastructure - The Operational Basis for Integrated Manufacturing Systems, Int. J. Computer Integrated Manufacturing, Vol. 3, Nos. 3 and 4, pp. 168–180 (1990)
Klittich, M. (1989). CIM-OSA and its relationship to MAP. Halatsis, C. and Torres, J., Ed. Computer Integrated Manufacturing: Proceedings of the 5th CIM Europe Conference. pp. 131–142. Bedford, UK, IFS Publications.
Milena Didic, Frank Neuscheler, Leszek Bogdanowicz, Manfred Klittich: McCIM: Execution of CIMOSA Models, Proceedings of the ninth CIM-Europe annual conference on realising CIM's industrial potential, pp. 223–232, IOS Press, Amsterdam, the Netherlands, 1993.
M. Klittich, F. Neuscheler: Ist die Zeit reif für CIMOSA? CIM Management 10 (1994) 6, Seite 17–21.
External links
CIMOSA homepage
CIMOSA bibliography
Enterprise modelling
Enterprise architecture frameworks
Industrial computing
In the theory of probability, the Glivenko–Cantelli theorem (sometimes referred to as the Fundamental Theorem of Statistics), named after Valery Ivanovich Glivenko and Francesco Paolo Cantelli, describes the asymptotic behaviour of the empirical distribution function as the number of independent and identically distributed observations grows. Specifically, the empirical distribution function converges uniformly to the true distribution function almost surely.
The uniform convergence of more general empirical measures becomes an important property of the Glivenko–Cantelli classes of functions or sets. The Glivenko–Cantelli classes arise in Vapnik–Chervonenkis theory, with applications to machine learning. Applications can be found in econometrics making use of M-estimators.
Statement
Assume that $X_1, X_2, \dots$ are independent and identically distributed random variables in $\mathbb{R}$ with common cumulative distribution function $F(x)$. The empirical distribution function for $X_1, \dots, X_n$ is defined by

$F_n(x) = \frac{1}{n} \sum_{i=1}^{n} I_{[X_i, \infty)}(x) = \frac{1}{n} \bigl| \{ 1 \le i \le n : X_i \le x \} \bigr|,$

where $I_C$ is the indicator function of the set $C$. For every (fixed) $x$, $F_n(x)$ is a sequence of random variables which converge to $F(x)$ almost surely by the strong law of large numbers. Glivenko and Cantelli strengthened this result by proving uniform convergence of $F_n$ to $F$.
Theorem
$\|F_n - F\|_\infty = \sup_{x \in \mathbb{R}} |F_n(x) - F(x)| \longrightarrow 0$ almost surely.
This theorem originates with Valery Glivenko and Francesco Cantelli, in 1933.
Remarks
If $X_n$ is a stationary ergodic process, then $F_n(x)$ converges almost surely to $F(x) = \operatorname{E}[1_{X_1 \le x}]$. The Glivenko–Cantelli theorem gives a stronger mode of convergence than this in the iid case.
An even stronger uniform convergence result for the empirical distribution function is available in the form of an extended type of law of the iterated logarithm. See asymptotic properties of the empirical distribution function for this and related results.
Proof
For simplicity, consider a case of continuous random variable $X$. Fix $-\infty = x_0 < x_1 < \cdots < x_{m-1} < x_m = \infty$ such that $F(x_j) - F(x_{j-1}) = \frac{1}{m}$ for $j = 1, \dots, m$. Now for all $x \in \mathbb{R}$ there exists $j \in \{1, \dots, m\}$ such that $x \in (x_{j-1}, x_j]$.

Therefore,

$F_n(x) - F(x) \le F_n(x_j) - F(x_{j-1}) = F_n(x_j) - F(x_j) + \frac{1}{m},$
$F_n(x) - F(x) \ge F_n(x_{j-1}) - F(x_j) = F_n(x_{j-1}) - F(x_{j-1}) - \frac{1}{m}.$

Hence,

$\|F_n - F\|_\infty = \sup_{x \in \mathbb{R}} |F_n(x) - F(x)| \le \max_{j \in \{1, \dots, m\}} |F_n(x_j) - F(x_j)| + \frac{1}{m}.$

Since $\max_{j} |F_n(x_j) - F(x_j)| \to 0$ almost surely by the strong law of large numbers, we can guarantee that for any positive $\varepsilon$ and any integer $m$ such that $1/m < \varepsilon$, we can find $N$ such that for all $n \ge N$, we have $\max_{j} |F_n(x_j) - F(x_j)| \le \varepsilon - 1/m$. Combined with the above result, this further implies that $\|F_n - F\|_\infty \le \varepsilon$, which is the definition of almost sure convergence.
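As a numerical illustration (not part of the theorem or its proof; the function name is ours), the statistic $\|F_n - F\|_\infty$ can be computed exactly from a sorted sample. For Uniform(0,1) data, where $F(x) = x$, it visibly shrinks as $n$ grows:

```python
import random

# Numerical illustration only: the exact Kolmogorov-Smirnov statistic
# sup_x |F_n(x) - F(x)| against the Uniform(0,1) CDF, F(x) = x.
def ecdf_sup_distance(sample):
    """Exact sup-distance between the empirical CDF of `sample` and F(x) = x."""
    xs = sorted(sample)
    n = len(xs)
    # The supremum is attained at a sample point, just before or after a jump.
    return max(max(i / n - x, x - (i - 1) / n)
               for i, x in enumerate(xs, start=1))

random.seed(0)
for n in (100, 1_000, 10_000):
    sample = [random.random() for _ in range(n)]
    print(n, round(ecdf_sup_distance(sample), 4))
```

The observed decay (roughly of order $n^{-1/2}$) matches the rate quantified by the Dvoretzky–Kiefer–Wolfowitz inequality.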
Empirical measures
One can generalize the empirical distribution function by replacing the set $(-\infty, x]$ by an arbitrary set $C$ from a class of sets $\mathcal{C}$ to obtain an empirical measure indexed by sets $C \in \mathcal{C}$:

$P_n(C) = \frac{1}{n} \sum_{i=1}^{n} I_C(X_i), \quad C \in \mathcal{C},$

where $I_C$ is the indicator function of each set $C$.

A further generalization is the map induced by $P_n$ on measurable real-valued functions $f$, which is given by

$f \mapsto P_n f = \int_S f \, dP_n = \frac{1}{n} \sum_{i=1}^{n} f(X_i), \quad f \in \mathcal{F}.$

Then it becomes an important property of these classes whether the strong law of large numbers holds uniformly on $\mathcal{F}$ or $\mathcal{C}$.
Glivenko–Cantelli class
Consider a set $S$ with a sigma algebra of Borel subsets $A$ and a probability measure $P$. For a class of subsets,

$\mathcal{C} \subset \{ C : C \text{ is a measurable subset of } S \},$

and a class of functions

$\mathcal{F} \subset \{ f : S \to \mathbb{R},\ f \text{ measurable} \},$

define random variables

$\|P_n - P\|_{\mathcal{C}} = \sup_{C \in \mathcal{C}} |P_n(C) - P(C)|,$
$\|P_n - P\|_{\mathcal{F}} = \sup_{f \in \mathcal{F}} |P_n f - \operatorname{E} f|,$

where $P_n(C)$ is the empirical measure, $P_n f$ is the corresponding map, and

$\operatorname{E} f = \int_S f \, dP,$

assuming that it exists.
Definitions
A class $\mathcal{C}$ is called a Glivenko–Cantelli class (or GC class, or sometimes strong GC class) with respect to a probability measure $P$ if

$\|P_n - P\|_{\mathcal{C}} \to 0$ almost surely as $n \to \infty.$

A class $\mathcal{C}$ is a weak Glivenko–Cantelli class with respect to $P$ if it instead satisfies the weaker condition

$\|P_n - P\|_{\mathcal{C}} \to 0$ in probability as $n \to \infty.$

A class is called a universal Glivenko–Cantelli class if it is a GC class with respect to any probability measure $P$ on $(S, A)$.

A class is a weak uniform Glivenko–Cantelli class if the convergence occurs uniformly over all probability measures $P$ on $(S, A)$: for every $\varepsilon > 0$,

$\sup_{P \in \mathcal{P}(S, A)} \Pr\bigl( \|P_n - P\|_{\mathcal{C}} > \varepsilon \bigr) \to 0$ as $n \to \infty.$

A class is a (strong) uniform Glivenko–Cantelli class if it satisfies the stronger condition that for every $\varepsilon > 0$,

$\sup_{P \in \mathcal{P}(S, A)} \Pr\bigl( \sup_{m \ge n} \|P_m - P\|_{\mathcal{C}} > \varepsilon \bigr) \to 0$ as $n \to \infty.$

Glivenko–Cantelli classes of functions (as well as their uniform and universal forms) are defined similarly, replacing all instances of $\mathcal{C}$ with $\mathcal{F}$.
The weak and strong versions of the various Glivenko-Cantelli properties often coincide under certain regularity conditions. The following definition commonly appears in such regularity conditions:
A class of functions $\mathcal{F}$ is image-admissible Suslin if there exists a Suslin space $\Omega$ and a surjection $T : \Omega \to \mathcal{F}$ such that the map $(x, \omega) \mapsto [T(\omega)](x)$ is jointly measurable.
A class of measurable sets $\mathcal{C}$ is image-admissible Suslin if the class of functions $\{ \mathbf{1}_C : C \in \mathcal{C} \}$ is image-admissible Suslin, where $\mathbf{1}_C$ denotes the indicator function for the set $C$.
Theorems
The following two theorems give sufficient conditions for the weak and strong versions of the Glivenko-Cantelli property to be equivalent.
Theorem (Talagrand, 1987)
Let $\mathcal{F}$ be a class of functions that is integrable with respect to $P$, and define $\mathcal{F}_0 = \{ f - P f : f \in \mathcal{F} \}$. Then the following are equivalent:
$\mathcal{F}$ is a weak Glivenko–Cantelli class and $\mathcal{F}_0$ is dominated by an integrable function;
$\mathcal{F}$ is a Glivenko–Cantelli class.
Theorem (Dudley, Giné, and Zinn, 1991)
Suppose that a function class $\mathcal{F}$ is bounded. Also suppose that the set $\mathcal{F}_0 = \{ f - \inf f : f \in \mathcal{F} \}$ is image-admissible Suslin. Then $\mathcal{F}$ is a weak uniform Glivenko–Cantelli class if and only if it is a strong uniform Glivenko–Cantelli class.
The following theorem is central to statistical learning of binary classification tasks.
Theorem (Vapnik and Chervonenkis, 1968)
Under certain consistency conditions, a universally measurable class of sets $\mathcal{C}$ is a uniform Glivenko–Cantelli class if and only if it is a Vapnik–Chervonenkis class.
A variety of consistency conditions exist for the equivalence of uniform Glivenko–Cantelli and Vapnik–Chervonenkis classes. In particular, either of the following conditions for a class $\mathcal{C}$ suffices:
$\mathcal{C}$ is image-admissible Suslin.
$\mathcal{C}$ is universally separable: there exists a countable subset $\mathcal{C}_0$ of $\mathcal{C}$ such that each set $C \in \mathcal{C}$ can be written as the pointwise limit of sets in $\mathcal{C}_0$.
Examples
Let $S = \mathbb{R}$ and $\mathcal{C} = \{ (-\infty, t] : t \in \mathbb{R} \}$. The classical Glivenko–Cantelli theorem implies that this class is a universal GC class. Furthermore, by Kolmogorov's theorem,

$\sup_{P \in \mathcal{P}(S, A)} \|P_n - P\|_{\mathcal{C}} \sim n^{-1/2}$, that is, $\mathcal{C}$ is a uniformly Glivenko–Cantelli class.

Let $P$ be a nonatomic probability measure on $S$ and $\mathcal{C}$ be the class of all finite subsets of $S$. Because $A_n = \{X_1, \dots, X_n\} \in \mathcal{C}$, $P(A_n) = 0$, and $P_n(A_n) = 1$, we have that $\|P_n - P\|_{\mathcal{C}} = 1$ and so $\mathcal{C}$ is not a GC class with respect to $P$.
See also
Donsker's theorem
Dvoretzky–Kiefer–Wolfowitz inequality – strengthens the Glivenko–Cantelli theorem by quantifying the rate of convergence.
References
Further reading
Empirical process
Asymptotic theory (statistics)
Probability theorems
Theorems in statistics
Classical fluids are systems of particles which retain a definite volume, and are at sufficiently high temperatures (compared to their Fermi energy) that quantum effects can be neglected. A system of hard spheres, interacting only by hard collisions (e.g., billiards, marbles), is a model classical fluid. Such a system is well described by the Percus–Yevick equation. Common liquids, e.g., liquid air, gasoline etc., are essentially mixtures of classical fluids. Electrolytes, molten salts, salts dissolved in water, are classical charged fluids. A classical fluid when cooled undergoes a freezing transition. On heating it undergoes an evaporation transition and becomes a classical gas that obeys Boltzmann statistics.
A system of charged classical particles moving in a uniform positive neutralizing background is known as a one-component plasma (OCP). This is well described by the hypernetted-chain equation (see classical-map hypernetted-chain method or CHNC). A very accurate way of determining the properties of classical fluids is provided by the method of molecular dynamics. An electron gas confined in a metal is not a classical fluid, whereas a very high-temperature plasma of electrons could behave as a classical fluid. Such non-classical Fermi systems, i.e., quantum fluids, can be studied using quantum Monte Carlo methods, the Feynman path integral formulation, and approximately via CHNC integral-equation methods.
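The molecular dynamics method mentioned above can be sketched minimally: the velocity-Verlet scheme below integrates two Lennard-Jones particles in one dimension (reduced units; all names and parameter values are illustrative, a toy sketch rather than a production fluid simulation) and conserves total energy to high accuracy:

```python
# Velocity-Verlet molecular dynamics for two Lennard-Jones particles in 1D,
# in reduced units (epsilon = sigma = m = 1). A toy sketch of the MD method,
# not a production fluid simulation; names and values are illustrative.

def lj_force(r):
    """Force on the right-hand particle at separation r > 0."""
    inv6 = r ** -6
    return 24.0 * (2.0 * inv6 * inv6 - inv6) / r

def lj_energy(x1, x2, v1, v2):
    """Total energy: kinetic plus Lennard-Jones pair potential."""
    r = x2 - x1
    inv6 = r ** -6
    return 0.5 * (v1 * v1 + v2 * v2) + 4.0 * (inv6 * inv6 - inv6)

def step(x1, x2, v1, v2, dt=5e-4):
    f = lj_force(x2 - x1)
    v1 -= 0.5 * dt * f; v2 += 0.5 * dt * f   # half kick (Newton's third law)
    x1 += dt * v1;      x2 += dt * v2        # drift
    f = lj_force(x2 - x1)
    v1 -= 0.5 * dt * f; v2 += 0.5 * dt * f   # second half kick
    return x1, x2, v1, v2

state = (0.0, 1.5, 0.0, 0.0)   # start beyond the potential minimum: bound pair
e0 = lj_energy(*state)
for _ in range(10_000):
    state = step(*state)
drift = abs(lj_energy(*state) - e0)
print(drift < 1e-3)
```

Scaling the same loop up to thousands of particles in three dimensions, with periodic boundaries and a thermostat, is essentially what production MD codes for classical fluids do.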
See also
Bose–Einstein condensate
Fermi liquid
Many-body theory
Quantum fluid
References
Concepts in physics
Nocturnal clitoral tumescence (NCT), colloquially known as morning bean, is a spontaneous swelling of the clitoris during sleep or when waking up. Similar to the analogous process in males, nocturnal penile tumescence, females experience clitoral tumescence and engorgement of the vagina, mainly during the REM sleep phase.
According to Fisher et al., the increase in vaginal blood flow associated with NCT during REM sleep is similar to the process in men in frequency, i.e. 95% of REM phases. It does occur a bit more often in non-REM sleep, and each episode appears shorter in duration. In terms of size, NCT is similar to that induced by erotic stimulation when awake. The erection may be associated with erotic dreams and even, occasionally, sleep orgasms. The phenomenon was first documented in 1970 by Karacan et al., with a single aforementioned follow-up study in 1983 by Fisher et al. More recent research includes a 2023 study by Gören et al., who found that their subjects displayed vaginal pH changes at different periods throughout the night. The study did not measure specific sleep phases. Increases in vaginal pH are associated with sexual arousal and clitoral erection. Bartholin's glands secrete alkaline fluid to lubricate the vagina during arousal, increasing vaginal pH.
See also
Rapid eye movement sleep
References
Sleep
Clitoris
Sexual arousal
Water is an inorganic compound with the chemical formula H₂O. It is a transparent, tasteless, odorless, and nearly colorless chemical substance. It is the main constituent of Earth's hydrosphere and the fluids of all known living organisms (in which it acts as a solvent). It is vital for all known forms of life, despite not providing food energy or organic micronutrients. Its chemical formula, H₂O, indicates that each of its molecules contains one oxygen and two hydrogen atoms, connected by covalent bonds. The hydrogen atoms are attached to the oxygen atom at an angle of 104.45°. In liquid form, H₂O is also called "water" at standard temperature and pressure.
Because Earth's environment is relatively close to water's triple point, water exists on Earth as a solid, a liquid, and a gas. It forms precipitation in the form of rain and aerosols in the form of fog. Clouds consist of suspended droplets of water and ice, its solid state. When finely divided, crystalline ice may precipitate in the form of snow. The gaseous state of water is steam or water vapor.
Water covers about 71% of the Earth's surface, with seas and oceans making up most of the water volume (about 96.5%). Small portions of water occur as groundwater (1.7%), in the glaciers and the ice caps of Antarctica and Greenland (1.7%), and in the air as vapor, clouds (consisting of ice and liquid water suspended in air), and precipitation (0.001%). Water moves continually through the water cycle of evaporation, transpiration (evapotranspiration), condensation, precipitation, and runoff, usually reaching the sea.
Water plays an important role in the world economy. Approximately 70% of the fresh water used by humans goes to agriculture. Fishing in salt and fresh water bodies has been, and continues to be, a major source of food for many parts of the world, providing 6.5% of global protein. Much of the long-distance trade of commodities (such as oil, natural gas, and manufactured products) is transported by boats through seas, rivers, lakes, and canals. Large quantities of water, ice, and steam are used for cooling and heating in industry and homes. Water is an excellent solvent for a wide variety of substances, both mineral and organic; as such, it is widely used in industrial processes and in cooking and washing. Water, ice, and snow are also central to many sports and other forms of entertainment, such as swimming, pleasure boating, boat racing, surfing, sport fishing, diving, ice skating, snowboarding, and skiing.
Etymology
The word water comes from Old English wæter, from Proto-Germanic *watar (source also of Old Saxon watar, Old Frisian wetir, Dutch water, Old High German wazzar, German Wasser, Old Norse vatn, Gothic 𐍅𐌰𐍄𐍉 (wato)), from Proto-Indo-European *wod-or, suffixed form of root *wed- ("water"; "wet"). Also cognate, through the Indo-European root, with Greek ύδωρ (ýdor; from Ancient Greek ὕδωρ (hýdōr), whence English "hydro-"), Russian вода (voda), Irish uisce, and Albanian ujë.
History
On Earth
Properties
Water (H₂O) is a polar inorganic compound. At room temperature it is a tasteless and odorless liquid, nearly colorless with a hint of blue. The simplest hydrogen chalcogenide, it is by far the most studied chemical compound and is sometimes described as the "universal solvent" for its ability to dissolve more substances than any other liquid, though it is poor at dissolving nonpolar substances. This allows it to be the "solvent of life": indeed, water as found in nature almost always includes various dissolved substances, and special steps are required to obtain chemically pure water. Water is the only common substance to exist as a solid, liquid, and gas in normal terrestrial conditions.
States
Along with oxidane, water is one of the two official names for the chemical compound H₂O; it is also the liquid phase of H₂O. The other two common states of matter of water are the solid phase, ice, and the gaseous phase, water vapor or steam. The addition or removal of heat can cause phase transitions: freezing (water to ice), melting (ice to water), vaporization (water to vapor), condensation (vapor to water), sublimation (ice to vapor) and deposition (vapor to ice).
Density
Water is one of only a few common naturally occurring substances which, for some temperature ranges, become less dense as they cool, and the only known naturally occurring substance which does so while liquid. In addition it is unusual as it becomes significantly less dense as it freezes, though it is not unique in that respect.
At 1 atm pressure, it reaches its maximum density of about 1,000 kg/m³ (0.999972 g/cm³) at 3.98 °C (39.16 °F).
Below that temperature, but above the freezing point of 0 °C, it expands, becoming less dense until it reaches the freezing point, reaching a density in its liquid phase of about 999.8 kg/m³ (0.9998 g/cm³).
Once it freezes and becomes ice, it expands by about 9%, with a density of about 917 kg/m³ (0.917 g/cm³). This expansion can exert enormous pressure, bursting pipes and cracking rocks. As a solid, it displays the usual behavior of contracting and becoming more dense as it cools. These unusual thermal properties have important consequences for life on Earth.
In a lake or ocean, water at about 4 °C sinks to the bottom, and ice forms on the surface, floating on the liquid water. This ice insulates the water below, preventing it from freezing solid. Without this protection, most aquatic organisms residing in lakes would perish during the winter. In addition, this anomalous behavior is an important part of the thermohaline circulation which distributes heat around the planet's oceans.
Magnetism
Water is a diamagnetic material. Though interaction is weak, with superconducting magnets it can attain a notable interaction.
Phase transitions
At a pressure of one atmosphere (atm), ice melts or water freezes (solidifies) at 0 °C (32 °F) and water boils or vapor condenses at 100 °C (212 °F). However, even below the boiling point, water can change to vapor at its surface by evaporation (vaporization throughout the liquid is known as boiling). Sublimation and deposition also occur on surfaces. For example, frost is deposited on cold surfaces while snowflakes form by deposition on an aerosol particle or ice nucleus. In the process of freeze-drying, a food is frozen and then stored at low pressure so the ice on its surface sublimates.
The melting and boiling points depend on pressure. A good approximation for the rate of change of the melting temperature with pressure is given by the Clausius–Clapeyron relation:

$\frac{dT_m}{dP} = \frac{T_m \left( V_m^{\text{liquid}} - V_m^{\text{solid}} \right)}{L_f},$

where $V_m^{\text{liquid}}$ and $V_m^{\text{solid}}$ are the molar volumes of the liquid and solid phases, and $L_f$ is the molar latent heat of melting. In most substances, the volume increases when melting occurs, so the melting temperature increases with pressure. However, because ice is less dense than water, the melting temperature decreases. In glaciers, pressure melting can occur under sufficiently thick volumes of ice, resulting in subglacial lakes.
The Clausius–Clapeyron relation also applies to the boiling point, but with the liquid/gas transition the vapor phase has a much lower density than the liquid phase, so the boiling point increases with pressure. Water can remain in a liquid state at high temperatures in the deep ocean or underground. For example, temperatures exceed 205 °F (96 °C) in Old Faithful, a geyser in Yellowstone National Park. In hydrothermal vents, the temperature can exceed 400 °C (752 °F).
At sea level, the boiling point of water is 100 °C (212 °F). As atmospheric pressure decreases with altitude, the boiling point decreases by 1 °C every 274 meters. High-altitude cooking takes longer than sea-level cooking. For example, at 1,524 metres (5,000 ft), cooking time must be increased by a fourth to achieve the desired result. Conversely, a pressure cooker can be used to decrease cooking times by raising the boiling temperature. In a vacuum, water will boil at room temperature.
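Using the sea-level boiling point and the quoted lapse of about 1 °C per 274 m, a rough linear estimate of the boiling point at altitude can be sketched (the function name and sample altitudes are illustrative; the approximation only holds for the first few kilometres):

```python
# Linear estimate from the rule quoted above: the boiling point of water
# drops about 1 degree C per 274 m of altitude (valid only at modest heights).
def boiling_point_celsius(altitude_m, sea_level_bp=100.0, metres_per_degree=274.0):
    return sea_level_bp - altitude_m / metres_per_degree

for alt_m in (0, 1524, 4810):   # sea level, ~5,000 ft, summit of Mont Blanc
    print(alt_m, round(boiling_point_celsius(alt_m), 1))
```

At roughly 1,500 m the estimate already sits several degrees below 100 °C, which is why high-altitude cooking times must be lengthened.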
Triple and critical points
On a pressure/temperature phase diagram (see figure), there are curves separating solid from vapor, vapor from liquid, and liquid from solid. These meet at a single point called the triple point, where all three phases can coexist. The triple point is at a temperature of 273.16 K (0.01 °C) and a pressure of 611.657 pascals (0.00604 atm); it is the lowest pressure at which liquid water can exist. Until 2019, the triple point was used to define the Kelvin temperature scale.
The water/vapor phase curve terminates at 647.096 K (373.946 °C) and 22.064 MPa (217.75 atm). This is known as the critical point. At higher temperatures and pressures the liquid and vapor phases form a continuous phase called a supercritical fluid. It can be gradually compressed or expanded between gas-like and liquid-like densities; its properties (which are quite different from those of ambient water) are sensitive to density. For example, for suitable pressures and temperatures it can mix freely with nonpolar compounds, including most organic compounds. This makes it useful in a variety of applications including high-temperature electrochemistry and as an ecologically benign solvent or catalyst in chemical reactions involving organic compounds. In Earth's mantle, it acts as a solvent during mineral formation, dissolution and deposition.
Phases of ice and water
The normal form of ice on the surface of Earth is ice Ih, a phase that forms crystals with hexagonal symmetry. Another form, ice Ic, with cubic crystalline symmetry, can occur in the upper atmosphere. As the pressure increases, ice forms other crystal structures. As of 2024, twenty have been experimentally confirmed and several more are predicted theoretically. The eighteenth form of ice, ice XVIII, a face-centred-cubic, superionic ice phase, was discovered when a droplet of water was subjected to a shock wave that raised the water's pressure to millions of atmospheres and its temperature to thousands of degrees, resulting in a structure of rigid oxygen atoms in which hydrogen atoms flowed freely. When sandwiched between layers of graphene, ice forms a square lattice.
The details of the chemical nature of liquid water are not well understood; some theories suggest that its unusual behavior is due to the existence of two liquid states.
Taste and odor
Pure water is usually described as tasteless and odorless, although humans have specific receptors that can sense the presence of water in their mouths, and frogs are known to be able to smell it. However, water from ordinary sources (including mineral water) usually has many dissolved substances that may give it varying tastes and odors. Humans and other animals have developed senses that enable them to evaluate the potability of water in order to avoid water that is too salty or putrid.
Color and appearance
Pure water is visibly blue due to absorption of light in the region c. 600–800 nm. The color can be easily observed in a glass of tap water placed against a pure white background, in daylight. The principal absorption bands responsible for the color are overtones of the O–H stretching vibrations. The apparent intensity of the color increases with the depth of the water column, following Beer's law. This also applies to, for example, a swimming pool when the light source is sunlight reflected from the pool's white tiles.
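Beer's law, mentioned above, is easy to illustrate numerically. In the sketch below, the 0.6 m⁻¹ absorption coefficient for red light in pure water is an assumed round figure, not a value from the text:

```python
import math

def transmitted_fraction(alpha_per_m, depth_m):
    """Beer-Lambert law: I/I0 = exp(-alpha * z)."""
    return math.exp(-alpha_per_m * depth_m)

# Illustrative absorption coefficient for red light in pure water;
# the 0.6 per-metre figure is an assumed round value, not from the text.
alpha_red = 0.6
for depth in (0.1, 1.0, 3.0):
    print(depth, "m ->", round(transmitted_fraction(alpha_red, depth), 3))
```

Because red light is absorbed exponentially while blue light survives, deeper water columns look progressively bluer.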
In nature, the color may also be modified from blue to green due to the presence of suspended solids or algae.
In industry, near-infrared spectroscopy is used with aqueous solutions, as the greater intensity of the lower overtones of water means that glass cuvettes with short path-length may be employed. To observe the fundamental stretching absorption spectrum of water or of an aqueous solution in the region around 3,500 cm⁻¹ (2.85 μm), a path length of about 25 μm is needed. Also, the cuvette must be both transparent around 3,500 cm⁻¹ and insoluble in water; calcium fluoride is one material that is in common use for the cuvette windows with aqueous solutions.
The Raman-active fundamental vibrations may be observed with, for example, a 1 cm sample cell.
Aquatic plants, algae, and other photosynthetic organisms can live in water up to hundreds of meters deep, because sunlight can reach them.
Practically no sunlight reaches the parts of the oceans below of depth.
The refractive index of liquid water (1.333 at ) is much higher than that of air (1.0), similar to those of alkanes and ethanol, but lower than those of glycerol (1.473), benzene (1.501), carbon disulfide (1.627), and common types of glass (1.4 to 1.6). The refraction index of ice (1.31) is lower than that of liquid water.
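The refractive indices above determine how strongly light bends at an air–water interface via Snell's law. A minimal sketch, using the 1.333 index from the text and an assumed 45° incidence angle:

```python
import math

def refraction_angle_deg(incidence_deg, n1, n2):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2)."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    return math.degrees(math.asin(s))

# Light entering water (n = 1.333, from the text) from air (n = 1.0)
# at an assumed 45-degree incidence bends toward the normal.
print(round(refraction_angle_deg(45.0, 1.0, 1.333), 1))  # about 32.0
```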
Molecular polarity
In a water molecule, the hydrogen atoms form a 104.5° angle with the oxygen atom. The hydrogen atoms are close to two corners of a tetrahedron centered on the oxygen. At the other two corners are lone pairs of valence electrons that do not participate in the bonding. In a perfect tetrahedron, the atoms would form a 109.5° angle, but the repulsion between the lone pairs is greater than the repulsion between the hydrogen atoms. The O–H bond length is about 0.096 nm.
Other substances have a tetrahedral molecular structure, for example methane () and hydrogen sulfide (). However, oxygen is more electronegative than most other elements, so the oxygen atom has a negative partial charge while the hydrogen atoms are partially positively charged. Along with the bent structure, this gives the molecule an electrical dipole moment and it is classified as a polar molecule.
Water is a good polar solvent, dissolving many salts and hydrophilic organic molecules such as sugars and simple alcohols such as ethanol. Water also dissolves many gases, such as oxygen and carbon dioxide—the latter giving the fizz of carbonated beverages, sparkling wines and beers. In addition, many substances in living organisms, such as proteins, DNA and polysaccharides, are dissolved in water. The interactions between water and the subunits of these biomacromolecules shape protein folding, DNA base pairing, and other phenomena crucial to life (hydrophobic effect).
Many organic substances (such as fats and oils and alkanes) are hydrophobic, that is, insoluble in water. Many inorganic substances are insoluble too, including most metal oxides, sulfides, and silicates.
Hydrogen bonding
Because of its polarity, a molecule of water in the liquid or solid state can form up to four hydrogen bonds with neighboring molecules. Hydrogen bonds are about ten times as strong as the van der Waals forces that attract molecules to each other in most liquids. This is why the melting and boiling points of water are much higher than those of other analogous compounds like hydrogen sulfide. They also explain its exceptionally high specific heat capacity (about 4.2 J/(g·K)), heat of fusion (about 333 J/g), heat of vaporization (), and thermal conductivity (between 0.561 and 0.679 W/(m·K)). These properties make water more effective at moderating Earth's climate, by storing heat and transporting it between the oceans and the atmosphere. The hydrogen bonds of water have an energy of around 23 kJ/mol (compared to a covalent O–H bond at 492 kJ/mol). Of this, it is estimated that 90% is attributable to electrostatics, while the remaining 10% is partially covalent.
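The specific heat and heat of fusion quoted above can be combined in a small worked example (Python; both values are taken from the text, and the 0 °C to 100 °C temperature span is an illustrative choice):

```python
def melt_and_heat_joules(mass_g, delta_t_k,
                         heat_of_fusion=333.0,  # J/g, from the text
                         specific_heat=4.2):    # J/(g*K), from the text
    """Energy to melt ice at 0 C and then warm the resulting water."""
    return mass_g * (heat_of_fusion + specific_heat * delta_t_k)

# 1 kg of ice melted and warmed from 0 C to 100 C:
print(round(melt_and_heat_joules(1000, 100) / 1000), "kJ")  # 753 kJ
```

The large answer for such a modest amount of water is exactly the moderating effect on climate that the paragraph describes.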
These bonds are the cause of water's high surface tension and capillary forces. The capillary action refers to the tendency of water to move up a narrow tube against the force of gravity. This property is relied upon by all vascular plants, such as trees.
Self-ionization
Water is a weak solution of hydronium hydroxide: there is an equilibrium 2 H₂O ⇌ H₃O⁺ + OH⁻, in combination with solvation of the resulting hydronium and hydroxide ions.
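The self-ionization equilibrium fixes the ion concentrations in pure water. A minimal sketch, using the standard ion product Kw = 10⁻¹⁴ at 25 °C (a textbook value, not from the text):

```python
import math

# Ion product of water at 25 C; 1e-14 is a standard textbook figure.
Kw = 1e-14

h_plus = math.sqrt(Kw)  # in pure water, [H3O+] equals [OH-]
ph = -math.log10(h_plus)
print(h_plus)  # about 1e-07 mol/L
print(ph)      # pH 7: pure water is neutral
```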
Electrical conductivity and electrolysis
Pure water has a low electrical conductivity, which increases with the dissolution of a small amount of ionic material such as common salt.
Liquid water can be split into the elements hydrogen and oxygen by passing an electric current through it—a process called electrolysis. The decomposition requires more energy input than the heat released by the inverse process (285.8 kJ/mol, or 15.9 MJ/kg).
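The per-mole and per-kilogram figures quoted above are consistent, as a quick unit-conversion check shows (Python; the molar mass of water, 18.015 g/mol, is a standard value, not from the text):

```python
MOLAR_MASS_WATER_G_PER_MOL = 18.015  # standard value, not from the text
ENTHALPY_KJ_PER_MOL = 285.8          # from the text

# kJ per mol divided by g per mol gives kJ per g; times 1000 gives kJ per kg.
kj_per_kg = ENTHALPY_KJ_PER_MOL / MOLAR_MASS_WATER_G_PER_MOL * 1000
print(round(kj_per_kg / 1000, 1), "MJ/kg")  # about 15.9 MJ/kg, matching the text
```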
Mechanical properties
Liquid water can be assumed to be incompressible for most purposes: its compressibility ranges from 4.4 to in ordinary conditions. Even in oceans at 4 km depth, where the pressure is 400 atm, water suffers only a 1.8% decrease in volume.
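The 1.8% figure can be reproduced from a representative compressibility. In this sketch, the 4.5 × 10⁻¹⁰ Pa⁻¹ value is an assumed mid-range figure, not taken from the text:

```python
ATM_IN_PA = 101325.0  # standard atmosphere

# Assumed representative isothermal compressibility of water,
# about 4.5e-10 per pascal; not a value from the text.
kappa = 4.5e-10

delta_p = 400 * ATM_IN_PA  # pressure at roughly 4 km ocean depth
fractional_decrease = kappa * delta_p
print(round(100 * fractional_decrease, 1), "%")  # about 1.8 %
```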
The viscosity of water is about 10⁻³ Pa·s or 0.01 poise at , and the speed of sound in liquid water ranges between depending on temperature. Sound travels long distances in water with little attenuation, especially at low frequencies (roughly 0.03 dB/km for 1 kHz), a property that is exploited by cetaceans and humans for communication and environment sensing (sonar).
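The quoted attenuation figure translates into remarkable acoustic ranges, as a decibel calculation shows (Python; the 0.03 dB/km value is from the text, while the 100 km distance is an illustrative choice):

```python
def remaining_intensity_fraction(attenuation_db_per_km, distance_km):
    """Fraction of acoustic intensity left after the given distance."""
    loss_db = attenuation_db_per_km * distance_km
    return 10 ** (-loss_db / 10)

# At 1 kHz (about 0.03 dB/km, from the text), sound crossing an assumed
# 100 km of water loses only 3 dB, i.e. about half its intensity.
print(round(remaining_intensity_fraction(0.03, 100), 2))  # about 0.5
```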
Reactivity
Metallic elements which are more electropositive than hydrogen, particularly the alkali metals and alkaline earth metals such as lithium, sodium, calcium, potassium, and cesium, displace hydrogen from water, forming hydroxides and releasing hydrogen gas. At high temperatures, carbon reacts with steam to form carbon monoxide and hydrogen.
On Earth
Hydrology is the study of the movement, distribution, and quality of water throughout the Earth. The study of the distribution of water is hydrography. The study of the distribution and movement of groundwater is hydrogeology, of glaciers is glaciology, of inland waters is limnology, and of the distribution of oceans is oceanography. Ecological processes involving hydrology are the focus of ecohydrology.
The collective mass of water found on, under, and over the surface of a planet is called the hydrosphere. Earth's approximate water volume (the total water supply of the world) is .
Liquid water is found in bodies of water, such as an ocean, sea, lake, river, stream, canal, pond, or puddle. The majority of water on Earth is seawater. Water is also present in the atmosphere in solid, liquid, and vapor states. It also exists as groundwater in aquifers.
Water is important in many geological processes. Groundwater is present in most rocks, and the pressure of this groundwater affects patterns of faulting. Water in the mantle is responsible for the melt that produces volcanoes at subduction zones. On the surface of the Earth, water is important in both chemical and physical weathering processes. Water, and to a lesser but still significant extent, ice, are also responsible for a large amount of sediment transport that occurs on the surface of the earth. Deposition of transported sediment forms many types of sedimentary rocks, which make up the geologic record of Earth history.
Water cycle
The water cycle (known scientifically as the hydrologic cycle) is the continuous exchange of water within the hydrosphere, between the atmosphere, soil water, surface water, groundwater, and plants.
Water moves perpetually through each of these regions in the water cycle consisting of the following transfer processes:
evaporation from oceans and other water bodies into the air and transpiration from land plants and animals into the air.
precipitation, from water vapor condensing from the air and falling to the earth or ocean.
runoff from the land usually reaching the sea.
Most water vapor evaporated from the ocean returns to it, but winds carry water vapor over land at the same rate as runoff into the sea, about 47 Tt per year, while evaporation and transpiration over land masses contribute another 72 Tt per year. Precipitation, at a rate of 119 Tt per year over land, has several forms: most commonly rain, snow, and hail, with some contribution from fog and dew. Dew consists of small drops of water that condense when a high density of water vapor meets a cool surface. Dew usually forms in the morning when the temperature is lowest, just before sunrise and before the temperature of the earth's surface starts to increase. Condensed water in the air may also refract sunlight to produce rainbows.
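The annual fluxes quoted above balance exactly, which a few assertions make explicit (Python; all figures are from the text):

```python
# Annual water-cycle fluxes in teratonnes (Tt), figures from the text.
vapor_sea_to_land = 47       # net water vapor carried by winds from ocean to land
land_evapotranspiration = 72
land_precipitation = 119
runoff_to_sea = 47

# Precipitation over land balances local evapotranspiration
# plus the vapor imported from the ocean ...
assert land_precipitation == land_evapotranspiration + vapor_sea_to_land
# ... and the imported vapor eventually returns to the sea as runoff.
assert runoff_to_sea == vapor_sea_to_land
print("fluxes balance")
```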
Water runoff often collects over watersheds flowing into rivers. Through erosion, runoff shapes the environment creating river valleys and deltas which provide rich soil and level ground for the establishment of population centers. A flood occurs when an area of land, usually low-lying, is covered with water which occurs when a river overflows its banks or a storm surge happens. On the other hand, drought is an extended period of months or years when a region notes a deficiency in its water supply. This occurs when a region receives consistently below average precipitation either due to its topography or due to its location in terms of latitude.
Water resources
Water resources are natural resources of water that are potentially useful for humans, for example as a source of drinking water supply or irrigation water. Water occurs as both "stocks" and "flows". Water can be stored as lakes, water vapor, groundwater or aquifers, and ice and snow. Of the total volume of global freshwater, an estimated 69 percent is stored in glaciers and permanent snow cover; 30 percent is in groundwater; and the remaining 1 percent in lakes, rivers, the atmosphere, and biota. The length of time water remains in storage is highly variable: some aquifers consist of water stored over thousands of years but lake volumes may fluctuate on a seasonal basis, decreasing during dry periods and increasing during wet ones. A substantial fraction of the water supply for some regions consists of water extracted from water stored in stocks, and when withdrawals exceed recharge, stocks decrease. By some estimates, as much as 30 percent of total water used for irrigation comes from unsustainable withdrawals of groundwater, causing groundwater depletion.
Seawater and tides
Seawater contains about 3.5% sodium chloride on average, plus smaller amounts of other substances. The physical properties of seawater differ from fresh water in some important respects. It freezes at a lower temperature (about ) and its density increases with decreasing temperature to the freezing point, instead of reaching maximum density at a temperature above freezing. The salinity of water in major seas varies from about 0.7% in the Baltic Sea to 4.0% in the Red Sea. (The Dead Sea, known for its ultra-high salinity levels of between 30 and 40%, is really a salt lake.)
Tides are the cyclic rising and falling of local sea levels caused by the tidal forces of the Moon and the Sun acting on the oceans. Tides cause changes in the depth of the marine and estuarine water bodies and produce oscillating currents known as tidal streams. The changing tide produced at a given location is the result of the changing positions of the Moon and Sun relative to the Earth coupled with the effects of Earth rotation and the local bathymetry. The strip of seashore that is submerged at high tide and exposed at low tide, the intertidal zone, is an important ecological product of ocean tides.
Effects on life
From a biological standpoint, water has many distinct properties that are critical for the proliferation of life. It carries out this role by allowing organic compounds to react in ways that ultimately allow replication. All known forms of life depend on water. Water is vital both as a solvent in which many of the body's solutes dissolve and as an essential part of many metabolic processes within the body. Metabolism is the sum total of anabolism and catabolism. In anabolism, water is removed from molecules (through energy requiring enzymatic chemical reactions) in order to grow larger molecules (e.g., starches, triglycerides, and proteins for storage of fuels and information). In catabolism, water is used to break bonds in order to generate smaller molecules (e.g., glucose, fatty acids, and amino acids to be used for fuels for energy use or other purposes). Without water, these particular metabolic processes could not exist.
Water is fundamental to both photosynthesis and respiration. Photosynthetic cells use the sun's energy to split off water's hydrogen from oxygen. In the presence of sunlight, hydrogen is combined with carbon dioxide (absorbed from air or water) to form glucose and release oxygen. All living cells use such fuels and oxidize the hydrogen and carbon to capture the sun's energy and reform water and carbon dioxide in the process (cellular respiration).
Water is also central to acid-base neutrality and enzyme function. An acid, a hydrogen ion (, that is, a proton) donor, can be neutralized by a base, a proton acceptor such as a hydroxide ion () to form water. Water is considered to be neutral, with a pH (the negative log of the hydrogen ion concentration) of 7 in an ideal state. Acids have pH values less than 7 while bases have values greater than 7.
Aquatic life forms
Earth's surface waters are filled with life. The earliest life forms appeared in water; nearly all fish live exclusively in water, and there are many types of marine mammals, such as dolphins and whales. Some kinds of animals, such as amphibians, spend portions of their lives in water and portions on land. Plants such as kelp and algae grow in the water and are the basis for some underwater ecosystems. Plankton is generally the foundation of the ocean food chain.
Aquatic vertebrates must obtain oxygen to survive, and they do so in various ways. Fish have gills instead of lungs, although some species of fish, such as the lungfish, have both. Marine mammals, such as dolphins, whales, otters, and seals, need to surface periodically to breathe air. Some amphibians are able to absorb oxygen through their skin. Invertebrates exhibit a wide range of modifications to survive in poorly oxygenated waters, including breathing tubes (see insect and mollusc siphons) and gills (Carcinus). However, as invertebrate life evolved in an aquatic habitat, most have little or no specialization for respiration in water.
Effects on human civilization
Civilization has historically flourished around rivers and major waterways; Mesopotamia, one of the so-called cradles of civilization, was situated between the major rivers Tigris and Euphrates; the ancient society of the Egyptians depended entirely upon the Nile. The early Indus Valley civilization () developed along the Indus River and tributaries that flowed out of the Himalayas. Rome was also founded on the banks of the Italian river Tiber. Large metropolises like Rotterdam, London, Montreal, Paris, New York City, Buenos Aires, Shanghai, Tokyo, Chicago, and Hong Kong owe their success in part to their easy accessibility via water and the resultant expansion of trade. Islands with safe water ports, like Singapore, have flourished for the same reason. In places such as North Africa and the Middle East, where water is more scarce, access to clean drinking water was and is a major factor in human development.
Health and pollution
Water fit for human consumption is called drinking water or potable water. Water that is not potable may be made potable by filtration or distillation, or by a range of other methods. More than 660 million people do not have access to safe drinking water.
Water that is not fit for drinking but is not harmful to humans when used for swimming or bathing is called by various names other than potable or drinking water, and is sometimes called safe water, or "safe for bathing". Chlorine is a skin and mucous membrane irritant that is used to make water safe for bathing or drinking. Its use is highly technical and is usually monitored by government regulations (typically 1 part per million (ppm) for drinking water, and 1–2 ppm of chlorine not yet reacted with impurities for bathing water). Water for bathing may be maintained in satisfactory microbiological condition using chemical disinfectants such as chlorine or ozone or by the use of ultraviolet light.
Water reclamation is the process of converting wastewater (most commonly sewage, also called municipal wastewater) into water that can be reused for other purposes. There are 2.3 billion people who reside in nations with water scarcities, which means that each individual receives less than of water annually. of municipal wastewater are produced globally each year.
Freshwater is a renewable resource, recirculated by the natural hydrologic cycle, but pressures over access to it result from the naturally uneven distribution in space and time, growing economic demands by agriculture and industry, and rising populations. Currently, nearly a billion people around the world lack access to safe, affordable water. In 2000, the United Nations established the Millennium Development Goals for water to halve by 2015 the proportion of people worldwide without access to safe water and sanitation. Progress toward that goal was uneven, and in 2015 the UN committed to the Sustainable Development Goals of achieving universal access to safe and affordable water and sanitation by 2030. Poor water quality and bad sanitation are deadly; some five million deaths a year are caused by water-related diseases. The World Health Organization estimates that safe water could prevent 1.4 million child deaths from diarrhea each year.
In developing countries, 90% of all municipal wastewater still goes untreated into local rivers and streams. Some 50 countries, with roughly a third of the world's population, also suffer from medium or high water scarcity and 17 of these extract more water annually than is recharged through their natural water cycles. The strain not only affects surface freshwater bodies like rivers and lakes, but it also degrades groundwater resources.
Human uses
Agriculture
The most substantial human use of water is for agriculture, including irrigated agriculture, which accounts for as much as 80 to 90 percent of total human water consumption. In the United States, 42% of freshwater withdrawn for use is for irrigation, but the vast majority of water "consumed" (used and not returned to the environment) goes to agriculture.
Access to fresh water is often taken for granted, especially in developed countries that have built sophisticated water systems for collecting, purifying, and delivering water, and removing wastewater. But growing economic, demographic, and climatic pressures are increasing concerns about water issues, leading to increasing competition for fixed water resources, giving rise to the concept of peak water. As populations and economies continue to grow, consumption of water-thirsty meat expands, and new demands rise for biofuels or new water-intensive industries, new water challenges are likely.
An assessment of water management in agriculture was conducted in 2007 by the International Water Management Institute in Sri Lanka to see if the world had sufficient water to provide food for its growing population. It assessed the current availability of water for agriculture on a global scale and mapped out locations suffering from water scarcity. It found that a fifth of the world's people, more than 1.2 billion, live in areas of physical water scarcity, where there is not enough water to meet all demands. A further 1.6 billion people live in areas experiencing economic water scarcity, where the lack of investment in water or insufficient human capacity make it impossible for authorities to satisfy the demand for water. The report found that it would be possible to produce the food required in the future, but that continuation of today's food production and environmental trends would lead to crises in many parts of the world. To avoid a global water crisis, farmers will have to strive to increase productivity to meet growing demands for food, while industries and cities find ways to use water more efficiently.
Water scarcity is also caused by the production of water-intensive products. For example, cotton: 1 kg of cotton—the equivalent of a pair of jeans—requires water to produce. While cotton accounts for 2.4% of world water use, the water is consumed in regions that are already at risk of water shortage. Significant environmental damage has been caused: for example, the diversion of water by the former Soviet Union from the Amu Darya and Syr Darya rivers to produce cotton was largely responsible for the disappearance of the Aral Sea.
As a scientific standard
On 7 April 1795, the gram was defined in France to be equal to "the absolute weight of a volume of pure water equal to a cube of one-hundredth of a meter, and at the temperature of melting ice". For practical purposes though, a metallic reference standard was required, one thousand times more massive, the kilogram. Work was therefore commissioned to determine precisely the mass of one liter of water. In spite of the fact that the decreed definition of the gram specified water at —a highly reproducible temperature—the scientists chose to redefine the standard and to perform their measurements at the temperature of highest water density, which was measured at the time as .
The Kelvin temperature scale of the SI system was based on the triple point of water, defined as exactly , but as of May 2019 is based on the Boltzmann constant instead. The scale is an absolute temperature scale with the same increment as the Celsius temperature scale, which was originally defined according to the boiling point (set to ) and melting point (set to ) of water.
Natural water consists mainly of the isotopes hydrogen-1 and oxygen-16, but there is also a small quantity of heavier isotopes oxygen-18, oxygen-17, and hydrogen-2 (deuterium). The percentage of the heavier isotopes is very small, but it still affects the properties of water. Water from rivers and lakes tends to contain less heavy isotopes than seawater. Therefore, standard water is defined in the Vienna Standard Mean Ocean Water specification.
For drinking
The human body contains from 55% to 78% water, depending on body size. To function properly, the body requires between of water per day to avoid dehydration; the precise amount depends on the level of activity, temperature, humidity, and other factors. Most of this is ingested through foods or beverages other than drinking straight water. It is not clear how much water intake is needed by healthy people, though the British Dietetic Association advises that 2.5 liters of total water daily is the minimum to maintain proper hydration, including 1.8 liters (6 to 7 glasses) obtained directly from beverages. Medical literature favors a lower consumption, typically 1 liter of water for an average male, excluding extra requirements due to fluid loss from exercise or warm weather.
Healthy kidneys can excrete 0.8 to 1 liter of water per hour, but stress such as exercise can reduce this amount. People can drink far more water than necessary while exercising, putting them at risk of water intoxication (hyperhydration), which can be fatal. The popular claim that "a person should consume eight glasses of water per day" seems to have no real basis in science. Studies have shown that extra water intake, especially up to at mealtime, was associated with weight loss. Adequate fluid intake is helpful in preventing constipation.
An original recommendation for water intake in 1945 by the Food and Nutrition Board of the U.S. National Research Council read: "An ordinary standard for diverse persons is 1 milliliter for each calorie of food. Most of this quantity is contained in prepared foods." The latest dietary reference intake report by the U.S. National Research Council in general recommended, based on the median total water intake from US survey data (including food sources): for men and of water total for women, noting that water contained in food provided approximately 19% of total water intake in the survey.
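The 1945 rule of thumb is simple to apply (Python; the 1 mL-per-calorie rule is from the text, while the 2,000-kcal diet is an illustrative assumption):

```python
def daily_water_ml(daily_kcal, ml_per_kcal=1.0):
    """1945 rule of thumb: about 1 mL of water per calorie of food."""
    return daily_kcal * ml_per_kcal

# An (assumed) 2,000-kcal diet implies roughly 2 L of total water per day,
# much of it already contained in prepared foods.
print(daily_water_ml(2000) / 1000, "L")  # 2.0 L
```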
Specifically, pregnant and breastfeeding women need additional fluids to stay hydrated. The US Institute of Medicine recommends that, on average, men consume and women ; pregnant women should increase intake to and breastfeeding women should get 3 liters (12 cups), since an especially large amount of fluid is lost during nursing. Also noted is that normally, about 20% of water intake comes from food, while the rest comes from drinking water and beverages (caffeinated included). Water is excreted from the body in multiple forms; through urine and feces, through sweating, and by exhalation of water vapor in the breath. With physical exertion and heat exposure, water loss will increase and daily fluid needs may increase as well.
Humans require water with few impurities. Common impurities include metal salts and oxides, including copper, iron, calcium and lead, and harmful bacteria, such as Vibrio. Some solutes are acceptable and even desirable for taste enhancement and to provide needed electrolytes.
The single largest (by volume) freshwater resource suitable for drinking is Lake Baikal in Siberia.
Washing
Transportation
Chemical uses
Water is widely used in chemical reactions as a solvent or reactant and less commonly as a solute or catalyst. In inorganic reactions, water is a common solvent, dissolving many ionic compounds, as well as other polar compounds such as ammonia and compounds closely related to water. In organic reactions, it is not usually used as a reaction solvent, because it does not dissolve the reactants well and is amphoteric (acidic and basic) and nucleophilic. Nevertheless, these properties are sometimes desirable. Also, acceleration of Diels-Alder reactions by water has been observed. Supercritical water has recently been a topic of research. Oxygen-saturated supercritical water combusts organic pollutants efficiently.
Heat exchange
Water and steam are common fluids for heat exchange, both for cooling and heating, due to water's availability and high heat capacity. Cool water may even be naturally available from a lake or the sea. Water is especially effective at transporting heat through vaporization and condensation because of its large latent heat of vaporization. A disadvantage is that metals commonly found in industry, such as steel and copper, are oxidized faster by untreated water and steam. In almost all thermal power stations, water is used as the working fluid (used in a closed loop between boiler, steam turbine, and condenser) and the coolant (used to exchange the waste heat to a water body or carry it away by evaporation in a cooling tower). In the United States, cooling power plants is the largest use of water.
In the nuclear power industry, water can also be used as a neutron moderator. In most nuclear reactors, water is both a coolant and a moderator. This provides something of a passive safety measure, as removing the water from the reactor also slows the nuclear reaction down. However other methods are favored for stopping a reaction and it is preferred to keep the nuclear core covered with water so as to ensure adequate cooling.
Fire considerations
Water has a high heat of vaporization and is relatively inert, which makes it a good fire extinguishing fluid. The evaporation of water carries heat away from the fire. It is dangerous to use water on fires involving oils and organic solvents because many organic materials float on water and the water tends to spread the burning liquid.
Use of water in fire fighting should also take into account the hazards of a steam explosion, which may occur when water is used on very hot fires in confined spaces, and of a hydrogen explosion, when substances which react with water, such as certain metals or hot carbon in the form of coal, charcoal, coke, or graphite, decompose the water, producing water gas.
The power of such explosions was seen in the Chernobyl disaster, although the water involved in this case did not come from fire-fighting but from the reactor's own water cooling system. A steam explosion occurred when the extreme overheating of the core caused water to flash into steam. A hydrogen explosion may have occurred as a result of a reaction between steam and hot zirconium.
Some metallic oxides, most notably those of alkali metals and alkaline earth metals, produce so much heat in reaction with water that a fire hazard can develop. The alkaline earth oxide quicklime, also known as calcium oxide, is a mass-produced substance that is often transported in paper bags. If these are soaked through, they may ignite as their contents react with water.
Recreation
Humans use water for many recreational purposes, as well as for exercising and for sports. Some of these include swimming, waterskiing, boating, surfing and diving. In addition, some sports, like ice hockey and ice skating, are played on ice. Lakesides, beaches and water parks are popular places for people to go to relax and enjoy recreation. Many find the sound and appearance of flowing water to be calming, and fountains and other flowing water structures are popular decorations. Some keep fish and other flora and fauna inside aquariums or ponds for show, fun, and companionship. Humans also use water for snow sports such as skiing, sledding, snowmobiling or snowboarding, which require the water to be at a low temperature either as ice or crystallized into snow.
Water industry
The water industry provides drinking water and wastewater services (including sewage treatment) to households and industry. Water supply facilities include water wells, cisterns for rainwater harvesting, water supply networks, water purification facilities, water tanks, water towers, and water pipes, including old aqueducts. Atmospheric water generators are in development.
Drinking water is often collected at springs, extracted from artificial borings (wells) in the ground, or pumped from lakes and rivers. Building more wells in adequate places is thus a possible way to produce more water, assuming the aquifers can supply an adequate flow. Other water sources include rainwater collection. Water may require purification for human consumption. This may involve the removal of undissolved substances, dissolved substances and harmful microbes. Popular methods include filtering with sand, which removes only undissolved material, and chlorination and boiling, which kill harmful microbes. Distillation performs all three functions. More advanced techniques exist, such as reverse osmosis. Desalination of abundant seawater is a more expensive solution used in coastal arid climates.
The distribution of drinking water is done through municipal water systems, tanker delivery or as bottled water. Governments in many countries have programs to distribute water to the needy at no charge.
Reducing usage by using drinking (potable) water only for human consumption is another option. In some cities such as Hong Kong, seawater is extensively used for flushing toilets citywide in order to conserve freshwater resources.
Polluting water may be the biggest single misuse of water; to the extent that a pollutant limits other uses of the water, it becomes a waste of the resource, regardless of benefits to the polluter. Like other types of pollution, this does not enter standard accounting of market costs, being conceived as externalities for which the market cannot account. Thus other people pay the price of water pollution, while the private firms' profits are not redistributed to the local population, victims of this pollution. Pharmaceuticals consumed by humans often end up in the waterways and can have detrimental effects on aquatic life if they bioaccumulate and if they are not biodegradable.
Municipal and industrial wastewater are typically treated at wastewater treatment plants. Mitigation of polluted surface runoff is addressed through a variety of prevention and treatment techniques.
Industrial applications
Many industrial processes rely on reactions using chemicals dissolved in water, suspension of solids in water slurries or using water to dissolve and extract substances, or to wash products or process equipment. Processes such as mining, chemical pulping, pulp bleaching, paper manufacturing, textile production, dyeing, printing, and cooling of power plants use large amounts of water, requiring a dedicated water source, and often cause significant water pollution.
Water is used in power generation. Hydroelectricity is electricity obtained from hydropower. Hydroelectric power comes from water driving a water turbine connected to a generator. Hydroelectricity is a low-cost, non-polluting, renewable energy source. The energy is supplied by the motion of water. Typically a dam is constructed on a river, creating an artificial lake behind it. Water flowing out of the lake is forced through turbines that turn generators.
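The relationship described above can be put in numbers: a hydroelectric plant's output is the product of water density, gravitational acceleration, flow rate, head (the height the water falls), and turbine-generator efficiency. A minimal sketch follows; the flow, head, and efficiency figures are illustrative assumptions, not data for any particular dam:

```python
# Hydroelectric output follows P = eta * rho * g * Q * H, where Q is the
# flow rate (m^3/s), H the head (m), and eta the turbine-generator efficiency.

RHO = 1000.0  # kg/m^3, density of water
G = 9.81      # m/s^2, gravitational acceleration

def hydro_power_mw(flow_m3s: float, head_m: float, efficiency: float = 0.9) -> float:
    """Electrical power in megawatts from water falling through a turbine."""
    return efficiency * RHO * G * flow_m3s * head_m / 1e6

# Example: 200 m^3/s falling through a 100 m head at 90% efficiency.
print(round(hydro_power_mw(200, 100), 1))  # ~176.6 MW
```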
Pressurized water is used in water blasting and water jet cutters. High-pressure water guns are used for precise cutting; the technique works well, is relatively safe, and is not harmful to the environment. Pressurized water is also used to cool machinery and to prevent saw blades from overheating.
Water is also used in many industrial processes and machines, such as the steam turbine and heat exchanger, in addition to its use as a chemical solvent. Discharge of untreated water from industrial uses is pollution. Pollution includes discharged solutes (chemical pollution) and discharged coolant water (thermal pollution). Industry requires pure water for many applications and uses a variety of purification techniques both in water supply and discharge.
Food processing
Boiling, steaming, and simmering are popular cooking methods that often require immersing food in water or its gaseous state, steam. Water is also used for dishwashing. Water also plays many critical roles within the field of food science.
Solutes such as salts and sugars found in water affect the physical properties of water. The boiling and freezing points of water are affected by solutes, as well as by air pressure, which is in turn affected by altitude. Water boils at lower temperatures with the lower air pressure that occurs at higher elevations. One mole of sucrose (sugar) per kilogram of water raises the boiling point of water by 0.51 °C, and one mole of salt per kg raises the boiling point by 1.02 °C; similarly, increasing the number of dissolved particles lowers water's freezing point.
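The colligative effect described above follows the relation ΔTb = i · Kb · m, where Kb ≈ 0.512 °C·kg/mol is the ebullioscopic constant of water (a standard handbook value) and the van 't Hoff factor i counts the dissolved particles produced per formula unit. A short sketch:

```python
# Boiling-point elevation via the colligative relation dTb = i * Kb * m.
# Kb for water (~0.512 degC*kg/mol) is a standard handbook value; the
# van 't Hoff factor i counts dissolved particles per formula unit.

KB_WATER = 0.512  # degC * kg / mol

def boiling_point_elevation(molality: float, van_t_hoff: float = 1.0) -> float:
    """Return the boiling-point rise (degC) for a dilute aqueous solution."""
    return van_t_hoff * KB_WATER * molality

# 1 mol sucrose per kg of water: i = 1 (no dissociation)
print(round(boiling_point_elevation(1.0, 1.0), 2))  # ~0.51 degC
# 1 mol NaCl per kg of water: i ~ 2 (Na+ and Cl- ions)
print(round(boiling_point_elevation(1.0, 2.0), 2))  # ~1.02 degC
```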
Solutes in water also affect water activity that affects many chemical reactions and the growth of microbes in food. Water activity can be described as a ratio of the vapor pressure of water in a solution to the vapor pressure of pure water. Solutes in water lower water activity—this is important to know because most bacterial growth ceases at low levels of water activity. Not only does microbial growth affect the safety of food, but also the preservation and shelf life of food.
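The vapor-pressure ratio described above can be estimated for an ideal dilute solution using Raoult's law, under which water activity is approximately the mole fraction of water. This is only a sketch under the ideal-solution assumption; real food systems deviate from it:

```python
# Water activity a_w = p / p0; for an ideal dilute solution Raoult's law
# gives a_w ~ x_water, the mole fraction of water. Ideality is an assumption.

M_WATER = 18.015  # g/mol, molar mass of water

def water_activity_ideal(grams_water: float, mol_solute_particles: float) -> float:
    """Approximate water activity as the mole fraction of water."""
    mol_water = grams_water / M_WATER
    return mol_water / (mol_water + mol_solute_particles)

# 1 kg of water with 1 mol of dissolved particles:
a_w = water_activity_ideal(1000.0, 1.0)
print(round(a_w, 3))  # ~0.982 -> adding solute lowers water activity below 1
```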
Water hardness is also a critical factor in food processing and may be altered or treated by using a chemical ion exchange system. It can dramatically affect the quality of a product, and it plays a role in sanitation. Water hardness is classified based on the concentration of calcium carbonate the water contains. Water is classified as soft if it contains less than 100 mg/L (UK) or less than 60 mg/L (US).
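The soft-water thresholds given above can be expressed as a small classifier. This is only a sketch: the "soft" cutoffs are the ones stated in the text, while the simple two-way soft/hard split (rather than finer bands) is an assumption for illustration:

```python
def hardness_class(mg_caco3_per_l: float, convention: str = "US") -> str:
    """Classify water as soft or hard by CaCO3 concentration (mg/L).

    Soft below 60 mg/L under the US convention, below 100 mg/L under the
    UK convention; everything at or above the cutoff is classed as hard here.
    """
    soft_limit = 60.0 if convention == "US" else 100.0
    return "soft" if mg_caco3_per_l < soft_limit else "hard"

print(hardness_class(45, "US"))  # soft
print(hardness_class(80, "US"))  # hard
print(hardness_class(80, "UK"))  # soft under the higher UK cutoff
```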
According to a report published by the Water Footprint organization in 2010, a single kilogram of beef requires about 15,400 litres of water; however, the authors also make clear that this is a global average and circumstantial factors determine the amount of water used in beef production.
Medical use
Water for injection is on the World Health Organization's list of essential medicines.
Distribution in nature
In the universe
Much of the universe's water is produced as a byproduct of star formation. The formation of stars is accompanied by a strong outward wind of gas and dust. When this outflow of material eventually impacts the surrounding gas, the shock waves that are created compress and heat the gas. The water observed is quickly produced in this warm dense gas.
On 22 July 2011, a report described the discovery of a gigantic cloud of water vapor containing "140 trillion times more water than all of Earth's oceans combined" around a quasar located 12 billion light years from Earth. According to the researchers, the "discovery shows that water has been prevalent in the universe for nearly its entire existence".
Water has been detected in interstellar clouds within the Milky Way. Water probably exists in abundance in other galaxies, too, because its components, hydrogen and oxygen, are among the most abundant elements in the universe. Based on models of the formation and evolution of the Solar System and that of other star systems, most other planetary systems are likely to have similar ingredients.
Water vapor
Water is present as vapor in:
Atmosphere of the Sun: in detectable trace amounts
Atmosphere of Mercury: 3.4%, and large amounts of water in Mercury's exosphere
Atmosphere of Venus: 0.002%
Earth's atmosphere: ≈0.40% over full atmosphere, typically 1–4% at surface; as well as that of the Moon in trace amounts
Atmosphere of Mars: 0.03%
Atmosphere of Ceres
Atmosphere of Jupiter: 0.0004% – in ices only; and that of its moon Europa
Atmosphere of Saturn – in ices only; Enceladus: 91% and Dione (exosphere)
Atmosphere of Uranus – in trace amounts below 50 bar
Atmosphere of Neptune – found in the deeper layers
Extrasolar planet atmospheres: including those of HD 189733 b and HD 209458 b, Tau Boötis b, HAT-P-11b, XO-1b, WASP-12b, WASP-17b, and WASP-19b.
Stellar atmospheres: not limited to cooler stars and even detected in giant hot stars such as Betelgeuse, Mu Cephei, Antares and Arcturus.
Circumstellar disks: including those of more than half of T Tauri stars such as AA Tauri as well as TW Hydrae, IRC +10216 and APM 08279+5255, VY Canis Majoris and S Persei.
Liquid water
Liquid water is present on Earth, covering 71% of its surface. Liquid water is also occasionally present in small amounts on Mars. Scientists believe liquid water is present in the Saturnian moons of Enceladus, as a 10-kilometre-thick ocean approximately 30–40 kilometres below Enceladus' south polar surface, and Titan, as a subsurface layer, possibly mixed with ammonia. Jupiter's moon Europa has surface characteristics which suggest a subsurface liquid water ocean. Liquid water may also exist on Jupiter's moon Ganymede as a layer sandwiched between high pressure ice and rock.
Water ice
Water is present as ice on:
Mars: under the regolith and at the poles.
Earth–Moon system: mainly as ice sheets on Earth, and in lunar craters and volcanic rocks. NASA reported the detection of water molecules by its Moon Mineralogy Mapper aboard the Indian Space Research Organization's Chandrayaan-1 spacecraft in September 2009.
Ceres
Jupiter's moons: Europa's surface and also that of Ganymede and Callisto
Saturn: in the planet's ring system and on the surface and mantle of Titan and Enceladus
Pluto–Charon system
Comets and other related Kuiper belt and Oort cloud objects
And is also likely present on:
Mercury's poles
Tethys
Exotic forms
Water and other volatiles probably comprise much of the internal structures of Uranus and Neptune. The water in the deeper layers may be in the form of ionic water, in which the molecules break down into a soup of hydrogen and oxygen ions, and, deeper still, superionic water, in which the oxygen crystallizes but the hydrogen ions float about freely within the oxygen lattice.
Water and planetary habitability
The existence of liquid water, and to a lesser extent its gaseous and solid forms, on Earth is vital to the existence of life on Earth as we know it. The Earth is located in the habitable zone of the Solar System; if it were slightly closer to or farther from the Sun (about 5%, or about 8 million kilometers), the conditions which allow the three forms to be present simultaneously would be far less likely to exist.
Earth's gravity allows it to hold an atmosphere. Water vapor and carbon dioxide in the atmosphere provide a temperature buffer (greenhouse effect) which helps maintain a relatively steady surface temperature. If Earth were smaller, a thinner atmosphere would allow temperature extremes, thus preventing the accumulation of water except in polar ice caps (as on Mars).
The surface temperature of Earth has been relatively constant through geologic time despite varying levels of incoming solar radiation (insolation), indicating that a dynamic process governs Earth's temperature via a combination of greenhouse gases and surface or atmospheric albedo. This proposal is known as the Gaia hypothesis.
The state of water on a planet depends on ambient pressure, which is determined by the planet's gravity. If a planet is sufficiently massive, the water on it may be solid even at high temperatures, because of the high pressure caused by gravity, as it was observed on exoplanets Gliese 436 b and GJ 1214 b.
Law, politics, and crisis
Water politics is politics affected by water and water resources. Water, particularly fresh water, is a strategic resource across the world and an important element in many political conflicts. Water scarcity causes health impacts and damage to biodiversity.
Access to safe drinking water has improved over the last decades in almost every part of the world, but approximately one billion people still lack access to safe water and over 2.5 billion lack access to adequate sanitation. However, some observers have estimated that by 2025 more than half of the world population will be facing water-based vulnerability. A report, issued in November 2009, suggests that by 2030, in some developing regions of the world, water demand will exceed supply by 50%.
1.6 billion people have gained access to a safe water source since 1990. The proportion of people in developing countries with access to safe water is calculated to have improved from 30% in 1970 to 71% in 1990, 79% in 2000, and 84% in 2004.
A 2006 United Nations report stated that "there is enough water for everyone", but that access to it is hampered by mismanagement and corruption. In addition, global initiatives to improve the efficiency of aid delivery, such as the Paris Declaration on Aid Effectiveness, have not been taken up by water sector donors as effectively as they have in education and health, potentially leaving multiple donors working on overlapping projects and recipient governments without empowerment to act.
The authors of the 2007 Comprehensive Assessment of Water Management in Agriculture cited poor governance as one reason for some forms of water scarcity. Water governance is the set of formal and informal processes through which decisions related to water management are made. Good water governance is primarily about knowing what processes work best in a particular physical and socioeconomic context. Mistakes have sometimes been made by trying to apply 'blueprints' that work in the developed world to developing world locations and contexts. The Mekong river is one example; a review by the International Water Management Institute of policies in six countries that rely on the Mekong river for water found that thorough and transparent cost-benefit analyses and environmental impact assessments were rarely undertaken. They also discovered that Cambodia's draft water law was much more complex than it needed to be.
In 2004, the UK charity WaterAid reported that a child dies every 15 seconds from easily preventable water-related diseases, which are often tied to a lack of adequate sanitation.
Since 2003, the UN World Water Development Report, produced by the UNESCO World Water Assessment Programme, has provided decision-makers with tools for developing sustainable water policies. The 2023 report states that two billion people (26% of the population) do not have access to drinking water and 3.6 billion (46%) lack access to safely managed sanitation. People in urban areas (2.4 billion) will face water scarcity by 2050. Water scarcity has been described as endemic, due to overconsumption and pollution. The report states that 10% of the world's population lives in countries with high or critical water stress. Yet over the past 40 years, water consumption has increased by around 1% per year, and is expected to grow at the same rate until 2050. Since 2000, flooding in the tropics has quadrupled, while flooding in northern mid-latitudes has increased by a factor of 2.5. The cost of these floods between 2000 and 2019 was 100,000 deaths and $650 million.
Organizations concerned with water protection include the International Water Association (IWA), WaterAid, Water 1st, and the American Water Resources Association. The International Water Management Institute undertakes projects with the aim of using effective water management to reduce poverty. Water related conventions are United Nations Convention to Combat Desertification (UNCCD), International Convention for the Prevention of Pollution from Ships, United Nations Convention on the Law of the Sea and Ramsar Convention. World Day for Water takes place on 22 March and World Oceans Day on 8 June.
In culture
Religion
Water is considered a purifier in most religions. Faiths that incorporate ritual washing (ablution) include Christianity, Hinduism, Islam, Judaism, the Rastafari movement, Shinto, Taoism, and Wicca. Immersion (or aspersion or affusion) of a person in water is a central Sacrament of Christianity (where it is called baptism); it is also a part of the practice of other religions, including Islam (Ghusl), Judaism (mikvah) and Sikhism (Amrit Sanskar). In addition, a ritual bath in pure water is performed for the dead in many religions including Islam and Judaism. In Islam, the five daily prayers can be done in most cases after washing certain parts of the body using clean water (wudu), unless water is unavailable (see Tayammum). In Shinto, water is used in almost all rituals to cleanse a person or an area (e.g., in the ritual of misogi).
In Christianity, holy water is water that has been sanctified by a priest for the purpose of baptism, the blessing of persons, places, and objects, or as a means of repelling evil.
In Zoroastrianism, water (āb) is respected as the source of life.
Philosophy
The Ancient Greek philosopher Empedocles saw water as one of the four classical elements (along with fire, earth, and air), and regarded it as an ylem, or basic substance of the universe. Thales, whom Aristotle portrayed as an astronomer and an engineer, theorized that the earth, which is denser than water, emerged from the water. Thales, a monist, believed further that all things are made from water. Plato believed that the shape of water is an icosahedron – flowing easily compared to the cube-shaped earth.
The theory of the four bodily humors associated water with phlegm, as being cold and moist. The classical element of water was also one of the five elements in traditional Chinese philosophy (along with earth, fire, wood, and metal).
Some traditional and popular Asian philosophical systems take water as a role-model. James Legge's 1891 translation of the Dao De Jing states, "The highest excellence is like (that of) water. The excellence of water appears in its benefiting all things, and in its occupying, without striving (to the contrary), the low place which all men dislike. Hence (its way) is near to (that of) the Tao" and "There is nothing in the world more soft and weak than water, and yet for attacking things that are firm and strong there is nothing that can take precedence of it—for there is nothing (so effectual) for which it can be changed." Guanzi in the "Shui di" 水地 chapter further elaborates on the symbolism of water, proclaiming that "man is water" and attributing natural qualities of the people of different Chinese regions to the character of local water resources.
Folklore
"Living water" features in Germanic and Slavic folktales as a means of bringing the dead back to life. Note the Grimm fairy-tale ("The Water of Life") and the Russian dichotomy of living and dead water. The Fountain of Youth represents a related concept of magical waters allegedly preventing aging.
Art and activism
In the significant modernist novel Ulysses (1922) by Irish writer James Joyce, the chapter "Ithaca" takes the form of a catechism of 309 questions and answers, one of which is known as the "water hymn". According to Richard E. Madtes, the hymn is not merely a "monotonous string of facts", rather, its phrases, like their subject, "ebb and flow, heave and swell, gather and break, until they subside into the calm quiescence of the concluding 'pestilential fens, faded flowerwater, stagnant pools in the waning moon.'" The hymn is considered one of the most remarkable passages in Ithaca, and according to literary critic Hugh Kenner, achieves "the improbable feat of raising to poetry all the clutter of footling information that has accumulated in schoolbooks." The literary motif of water represents the novel's theme of "everlasting, everchanging life," and the hymn represents the culmination of the motif in the novel. The following is the hymn quoted in full.
Painter and activist Fredericka Foster curated The Value of Water, at the Cathedral of St. John the Divine in New York City, which anchored a year-long initiative by the Cathedral on our dependence on water. The largest exhibition to ever appear at the Cathedral, it featured over forty artists, including Jenny Holzer, Robert Longo, Mark Rothko, William Kentridge, April Gornik, Kiki Smith, Pat Steir, Alice Dalton Brown, Teresita Fernandez and Bill Viola. Foster created Think About Water, an ecological collective of artists who use water as their subject or medium. Members include Basia Irland, Aviva Rahmani, Betsy Damon, Diane Burko, Leila Daw, Stacy Levy, Charlotte Coté, Meridel Rubenstein, and Anna Macleod.
To mark the 10th anniversary of access to water and sanitation being declared a human right by the UN, the charity WaterAid commissioned ten visual artists to show the impact of clean water on people's lives.
Dihydrogen monoxide parody
'Dihydrogen monoxide' is a technically correct but rarely used chemical name for water. This name has been used in a series of hoaxes and pranks that mock scientific illiteracy. This began in 1983, when an April Fools' Day article appeared in a newspaper in Durand, Michigan. The false story reported on safety concerns about the substance.
Music
The word "water" has been used by many Florida-based rappers as a sort of catchphrase or adlib, including BLP Kosher and Ski Mask the Slump God. Some rappers have gone further and made whole songs dedicated to the water in Florida, such as the 2023 Danny Towers song "Florida Water". Others have made songs dedicated to water as a whole, such as XXXTentacion and Ski Mask the Slump God with their hit song "H2O".
See also
Water (data page) is a collection of the chemical and physical properties of water.
Notes
References
Works cited
Further reading
Debenedetti, PG., and HE Stanley, "Supercooled and Glassy Water", Physics Today 56 (6), pp. 40–46 (2003). Downloadable PDF (1.9 MB)
Gleick, PH., (editor), The World's Water: The Biennial Report on Freshwater Resources. Island Press, Washington, D.C. (published every two years, beginning in 1998.) The World's Water, Island Press
Journal of Contemporary Water Research & Education
Postel, S., Last Oasis: Facing Water Scarcity. W.W. Norton and Company, New York. 1992
Reisner, M., Cadillac Desert: The American West and Its Disappearing Water. Penguin Books, New York. 1986.
United Nations World Water Development Report . Produced every three years.
St. Fleur, Nicholas. The Water in Your Glass Might Be Older Than the Sun . "The water you drink is older than the planet you're standing on." The New York Times (15 April 2016)
External links
The World's Water Data Page
FAO Comprehensive Water Database, AQUASTAT
The Water Conflict Chronology: Water Conflict Database
Water science school (USGS)
Portal to The World Bank's strategy, work and associated publications on water resources
America Water Resources Association
Water on the web
Water structure and science
"Why water is one of the weirdest things in the universe", Ideas, BBC, Video, 3:16 minutes, 2019
The chemistry of water (NSF special report)
The International Association for the Properties of Water and Steam
H2O: The Molecule That Made Us, a 2020 PBS documentary
Articles containing video clips
Hydrogen compounds
Triatomic molecules
Inorganic solvents
Liquids
Materials that expand upon freezing
Nuclear reactor coolants
Oxides
Oxygen compounds | Water | Physics,Chemistry,Environmental_science | 13,269 |
479,159 | https://en.wikipedia.org/wiki/Central%20Valley%20Project | The Central Valley Project (CVP) is a federal power and water management project in the U.S. state of California under the supervision of the United States Bureau of Reclamation (USBR). It was devised in 1933 in order to provide irrigation and municipal water to much of California's Central Valley—by regulating and storing water in reservoirs in the northern half of the state (once considered water-rich but suffering water-scarce conditions more than half the year in most years), and transporting it to the water-poor San Joaquin Valley and its surroundings by means of a series of canals, aqueducts and pump plants, some shared with the California State Water Project (SWP). Many CVP water users are represented by the Central Valley Project Water Association.
In addition to water storage and regulation, the system has a hydroelectric capacity of over 2,000 megawatts, and provides recreation and flood control with its twenty dams and reservoirs. It has allowed major cities to grow along Valley rivers which previously would flood each spring, and transformed the semi-arid desert environment of the San Joaquin Valley into productive farmland. Freshwater stored in Sacramento River reservoirs and released downriver during dry periods prevents salt water from intruding into the Sacramento-San Joaquin Delta during high tide. There are eight divisions of the project and ten corresponding units, many of which operate in conjunction, while others are independent of the rest of the network. California agriculture and related industries now directly account for 7% of the gross state product, and the CVP supplies water for about half of that agriculture.
Many CVP operations have had considerable environmental consequences, including a decline in the salmon population of four major California rivers in the northern state, and the reduction of riparian zones and wetlands. Many historical sites and Native American tribal lands have been flooded by CVP reservoirs. In addition, runoff from intensive irrigation has polluted rivers and groundwater. The Central Valley Project Improvement Act, passed in 1992, intends to alleviate some of the problems associated with the CVP with programs like the Refuge Water Supply Program.
In recent years, a combination of drought and regulatory decisions passed based on the Endangered Species Act of 1973 have forced Reclamation to turn off much of the water for the west side of the San Joaquin Valley in order to protect the fragile ecosystem in the Sacramento-San Joaquin Delta and keep alive the dwindling fish populations of Northern and Central California rivers. In 2017 the Klamath and Trinity rivers witnessed the worst fall run Chinook salmon return in recorded history, leading to a disaster declaration in California and Oregon due to the loss of the commercial fisheries. The recreational fall Chinook salmon fishery in both the ocean and the Trinity and Klamath rivers was also closed in 2017. Only 1,123 adult winter Chinook salmon returned to the Sacramento Valley in 2017, according to a report sent to the Pacific Fishery Management Council (PFMC) by the California Department of Fish and Wildlife (CDFW). This is the second lowest number of returning adult winter run salmon since modern counting techniques were implemented in 2003. By comparison, over 117,000 winter Chinooks returned to spawn in 1969.
Overview
Operations
The CVP stores about of water in 20 reservoirs in the foothills of the Sierra Nevada, the Klamath Mountains and the California Coast Ranges, and passes about of water annually through its canals. Of the water transported, about goes to irrigate of farmland, supplies municipal uses, and is released into rivers and wetlands in order to comply with state and federal ecological standards.
Two large reservoirs, Shasta Lake and Trinity Lake, are formed by a pair of dams in the mountains north of the Sacramento Valley. Water from Shasta Lake flows into the Sacramento River, which flows to the Sacramento-San Joaquin Delta, and water from Trinity Lake flows into the Trinity River, which leads to the Pacific Ocean. Both lakes release water at controlled rates. There, before it can flow on to San Francisco Bay and the Pacific Ocean, some of the water is intercepted by a diversion channel and transported to the Delta-Mendota Canal, which conveys water southwards through the San Joaquin Valley, supplying San Luis Reservoir (a facility shared with the SWP) and the San Joaquin River at Mendota Pool in the process, eventually reaching canals that irrigate farms in the valley. Friant Dam crosses the San Joaquin River upstream of Mendota Pool, diverting its water southwards into canals that travel into the Tulare Lake area of the San Joaquin Valley, as far south as the Kern River. Finally, New Melones Lake, a separate facility, stores the flow of a San Joaquin River tributary, the Stanislaus River, for use during dry periods. Other smaller, independent facilities exist to provide water to local irrigation districts.
Background
The Central Valley Project was the world's largest water and power project when undertaken during Franklin D. Roosevelt's New Deal public works agenda. The Project was the culmination of eighty years of political fighting over the state's most important natural resource: water. The Central Valley of California lies to the west of the Sierra Nevada Mountains, with its annual run-off draining into the Pacific Ocean through the Sacramento–San Joaquin River Delta. It is a large receding geological floodplain moderated by its Mediterranean climate of dry summers and wet winters, which includes regular major drought cycles. At the time of its construction, the project was at the center of a political and cultural battle over the state's future. It intersected with the state's ongoing war over land use, access to water rights, impacts on indigenous communities, large vs. small farmers, the state's irrigation districts, and public vs. private power. Its proponents ignored environmental concerns over its impacts, so long as the outcome did not damage the major stakeholders of the time.
The Central Valley of California has gone through two distinct culturally driven land use eras. The first was the indigenous tribal period, which lasted for thousands of years. Then came the arrival of Europeans: first the Spanish colonial model of Catholic missions and ranchos (1772–1846), followed by the current United States era. Due to its Mediterranean climate, the first cultural period was hunter-gatherer based. The Spanish missions' ranching and tanning business was based on the forced labor of Las Californias tribes. Spain's model of land use, with the grazing of livestock for meat, wool and leather, started along Alta California's coast and eventually spread inland. The U.S. era evolved from primarily ranching to large-scale plantations, more commonly known today as corporate farming, that turned the Central Valley into the breadbasket of the U.S.
Following the 1848 California Gold Rush, large numbers of U.S. citizens came into the region and made attempts to practice rainfed agriculture, but most of the Central Valley land was taken up by large cattle ranchers like Henry Miller who eventually controlled 22,000 square miles of land. The large-scale levee construction by Chinese workers along the Delta was where limited irrigation for orchards first started.
Following the arrival of the Transcontinental railroad, immigration from Asia and the rest of the U.S. led to growing numbers of settlers in the region. Drawn by the rich soils and favorable weather of the Central Valley but unfamiliar with its seasonal patterns of rainfall and flooding, immigrants to the valley began to take up irrigation practices. Farmers soon found themselves troubled by frequent floods in the Sacramento Valley and a general lack of water in the San Joaquin Valley. The Sacramento River, which drains the northern part, receives between 60 and 75% of the Valley's precipitation, even though the Sacramento Valley covers less area than the much larger San Joaquin Valley, drained by the San Joaquin River, which receives only about 25% of the rainfall. Furthermore, cities drawing water from the Sacramento-San Joaquin Delta faced problems in dry summer and autumn months when the inflowing water was low. In order to sustain the valley's economy, systems were needed to regulate flows in the rivers and distribute water equitably between the north and south parts of the valley.
History
In 1873, Barton S. Alexander completed a report for the U.S. Army Corps of Engineers that was the first attempt at creating a Central Valley Project. In 1904, the Bureau of Reclamation (then the Reclamation Service) first became interested in creating such a water project, but did not get far involved until a series of droughts and related disasters occurred in the early 1920s. The State of California passed the Central Valley Project Act in 1933, which authorized Reclamation to sell revenue bonds in order to raise about $170 million for the project. Unfortunately, because of insufficient money in the state's treasury and the coincidence with the Great Depression, California turned to the national government for funding to build the project. This resulted in several transfers of the project between California and the federal government, and between Reclamation and the Army Corps of Engineers. The first dams and canals of the project started going up in the late 1930s, and the last facilities were completed in the early 1970s. Other features of the project were never constructed, some lie partly finished, or are still awaiting authorization.
Timeline
pre-western arrival – Tribal culture - seasonal migratory locations between the Tulares and Sierra foothills
1493 – The Papal Bull known as the Discovery Doctrine, in Latin titled the "Inter Caetera", gave Spain the right to take land and convert the indigenous occupants to Christianity in areas west of the Inter Caetera's line of demarcation, which divided the Western Hemisphere
1769–76 – The arrival of Spain and its Spanish missions in California - Indians promised sovereign control
1822 - Republic of Mexico formed - breaks with Spain's sovereign promise to California Mission Indians
1823 - The Papal Bulls of 1455 and 1493 that made up the Discovery Doctrine become part of U.S. law
1833 - Secularization Act closes California Missions and sells off church properties
The act initiated the transfer of 64 million acres of tribal lands via land grants, or ranchos, to former Spanish citizens known as Californios
1846 Bear Flag Rebellion - as part of the Mexican–American War and U.S. invasion of California - Republic of California
1846 - John Fremont kills original owner of the largest North American mercury mine at New Almaden after failing to buy it
1846 - Yerba Buena land grant takes its name from local Catholic mission and becomes San Francisco
1848 - Treaty of Guadalupe Hidalgo promises Mexican-Americans ownership of their Ranchos (ranches) and water rights
1849 - Influx of 80,000 immigrants during Gold Rush includes riots and land theft by squatter movement
Sept-Oct - Californians meet in Monterey for the first California Constitutional Convention
1850 - California admitted to Union
1850s - Hydraulic mining in gold region contaminates Sacramento Delta with silt and mercury
1850 - California Indian Protection Act removes Indians' civil rights and enables their enslavement, contributing to the mass genocide of many of the state's 300 tribes
1850 - California adopts British Common Law causing 70 years of disputes over water rights
1850 - Squatters cut down the world's largest Blossom Rock Redwoods and clearcut the groves on Peralta and Moraga's ranchos in Oakland hills
1851-90 California Lands Commission - Mexican-American Ranchos lost to whites
1851 - Catholic Church attempts to get land for Mission tribes from California Lands Commission but fails
1851-1890 - Indians populations decimated
1851 - Tribes rounded up by U.S. Army and forced to sign 18 treaties
1850 Swamp Land Act - enables Henry Miller to eventually own over 1 million acres of land in the Central Valley
1853 - Americans cut Mother of the Forest causing international uproar
The history of Clearcutting in the Sierra Nevada Mountains resulted in expanded flooding and environmental degradation
1853 - First documented American irrigation ditch constructed in Visalia, Tulare County
1855 - Van Ness ordinance - State adopts illegal grab of San Francisco lands
1860 - San Francisco beats U.S. Government in Supreme Court over land claims
1861 - Chinese build Sacramento Delta levees
1862 - Sacramento and its levees destroyed by flooding; levees rebuilt
1862 - Lincoln passes Transcontinental Railroad Act giving away 140 million acres to railroad barons
1862 - Homestead Act allows adults who never took up arms against the government the right to claim 160 acres
May 14 - California legislation permits the formation of canal construction companies
1866 - San Francisco wins Supreme Court case and all illegal land takings
1866 - The University of California formed as Land-grant university with the right to take public lands
1866 - Mining Act included the right of miners to take public lands for ditches and dams
1869 - Transcontinental railroad completed - new immigration rush
1869 - First systematic irrigation begins in Anaheim and Riverside
1872 - Desert Land Act allows claims to arid lands in the west in exchange for irrigating them
1872 - California Irrigation Act passed by the state legislature allowing for cooperative water irrigation development.
1872 - US Mining Act
1873 - Congress sets up the Alexander Commission to design an irrigation system for the Central Valley.
1874 - Alexander Commission report sent to congress in March
1878 - Workingmen's Party takes control of state government on an anti-railroad campaign
1878 - William Hammond Hall obtains $100,000 to produce statewide irrigation plan - project collapses
1879 - New Constitution for state passed by workingmen bans Southern Pacific R.R. lobbying
1880 - Formation of California Railroad Commission known today as the California Public Utilities Commission
1884 - Hydraulic gold mining in the Sierras, having damaged the Sacramento watershed, is forced to stop
1886 - Miller-Lux Cattle Ranch v. Tevis-Haggins water wars: Riparian v. Prior-appropriation water rights
1887 - State of California establishes the Modesto Irrigation District
Mar 7 - California Wright Act okays the formation of irrigation districts. It was renamed as the California Irrigation District Act in 1917.
1890 - Canal Act reserved federal authority for right of way to canals on public lands
1898 - San Francisco passes Charter that calls for public ownership of transit, telephones, water and power
1899 - Elwood Mead is appointed by the U.S. Department of Agriculture to carry out irrigation surveys
1901 - Right to Construct Reservoirs by corporations on public lands
1902 - National Reclamation Act passed that created the United States Bureau of Reclamation (USBR)
1902 - Tulare Irrigation District v. Shepard irrigation district legal dispute
1905 - Owens valley water plan okayed by Los Angeles Water Commission
1905 - $40 million statewide irrigation plan fails due to lack of financing support
1905 - Salton Sea created by irrigation diversion of Colorado River
1907 - City of San Francisco votes to construct a water and power supply known as Hetch Hetchy, located in Yosemite
1911 - Constitutional Act - California Railroad Commission takes over regulatory role of cities for electric rates
1913 - Water Commission Act attempts to sort out the state's water rights
1913 - The Raker Act passes, permitting San Francisco to build a public water and power system at Hetch Hetchy
1915 - State Water Problems Conference set up, holding hearings the following year; it concludes that riparian rights are the problem
1915 - California Irrigation Act declared unconstitutional by state supreme court
1917 - California Hawson Bill provides relief to water appropriator claims from Riparian Rights lawsuits
1918-20 - State suffers severe drought
1919 - Chief Hydrographer of the USGS Robert Bradford Marshall sends report titled the "Irrigation of Twelve Million Acres in the Valley of California" to Governor William Stephens. Marshall is considered the father of the Central Valley Project
Jan 14 - The city of Oroville Ca. moves ahead with plan to purchase PG&E gas and power operations
Feb 3 - U.S. presidential candidate Senator Hiram Johnson is in favor of public ownership of electric utilities
Feb 18 - Glenn County Ca. considers formation of an Irrigation district to take advantage of planned Iron Mountain dam
1920 Jan 4 - Sacramento Valley Irrigation Association calls for water congress at the Capital
Jan 10 - The U.S. Corps of Engineers proposes 3 dams and a series of locks on the Sacramento River
Jan 14 - Western States request $250 million for irrigation projects
Jan 23 - The Yuba-Nevada-Sutter Water and Power Association established for 80,000 acre water and power project
Jan 23 - Santa Barbara plan to add powerhouse as way to pay for the city's Gibraltar dam project
Jan 24 - Eureka Chamber of Commerce opposes proposed dam across Eel River others opposed due to fishing impacts
Jan 29 - PG&E which relies heavily on hydro-electricity prepares emergency power plans due to lack of rainfall
Feb 8 - Interior Secretary Franklin Lane requests $12.8 million for annual western irrigation funding
Feb 8 - The Sacramento Union asks public to "Pray for rain" on the front page of its newspaper
Feb 11 - Nevada County farmers protest PG&E's attempt to divert their water supplies at California Railroad Commission
Feb 24 - Miller & Lux legal fight against the Madera Irrigation District to take water from the San Joaquin River
Feb 25 - Major Water and Power rationing announced due to Northern California drought
Feb 26 - The Sacramento Valley water and irrigation congress asks governor to call a special legislative session on drought
Mar 13 - Proposal to build three powerhouses and divert American River water for irrigation in Placer County
Mar 25 - Ninety California power companies meet and agree to let state power administrator manage power during crisis
April 21 - PG&E announces plans to spend $15 Million in next two years on new power development
April 30 - Sacramento politicians call for takeover of PG&E's electric and transit system
May - The National Electric Light Association releases its National Water power report
May 1 - PG&E announces $10 million plan to construct hydro-electric dams on Pit River
May 11 - The California Railroad Commission (CPUC) emergency plan opposed by the Association of Irrigation Districts of Northern California
May 13 - PG&E acknowledged during hearings that it used ratepayer money for political campaigns
May 17 - Yolo County announces plan to create 100,000 acre irrigation district
May 18 - Proposal to construct dam across the Carquinez Strait to stop saltwater incursions
May 27 - Impacts of Clearcutting Sierra Nevada's Forests and flooding Central Valley made public
June 7 - Water wars between Northern California irrigation districts and Contra Costa and Delta farmers over salt water incursions
June 10 - 1920 Federal Water Power Act Signed into law that allows for expediting nationwide development of hydro-electric projects on U.S. rivers
June 20 - PG&E applies to state railroad commission for rule changes to protect itself during power and water shortage
July 4 - U.S. War Department begins investigation of building 4 dams and mobile locks on Sacramento River
July 10 - PG&E curtails afternoon water pumping in five irrigation districts
July 13 - City of Antioch starts lawsuit against rice farmers that threatens Water supply
July 24 - The Madera Irrigation District starts the Madera dam project on San Joaquin River which later becomes Friant Dam
July 27 - California representative protests Nevada's plan to take Lake Tahoe water
July 28 - 800,000 acres of Miller-Lux land and water rights to be subdivided and sold to small farmers
July 31 - The Glenn-Colusa irrigation district announce plan for a 1 million acre reservoir in Shasta county
Aug 5 - Irrigation companies organize their own plan for water development
Aug 15 - USGS Colonel Robert B. Marshall's plan introduced at the Sacramento Valley Development Assoc.
Aug 24 - War Department's plan for four dams dragged into lawsuit between Antioch and California rice farmers
Sept 26 - Major support for state Marshall Plan announced
Oct 7 - Carquinez Strait dam declared not feasible
Oct 11 - Court case between Rice farmers and Antioch continues
Oct 17 - Marshall Plan will ask state legislature for $500,000 survey
Oct 30 - The California State Irrigation Association expands its operations and support for statewide Marshall water plan
Nov 10 - California League of Municipalities to cooperate in legislation on public power and water
Nov 11 - Valley Cities urged to develop public power
Nov 20 - Klamath Chamber of Commerce opens hearings on public vs. private power and water development
Nov 21 - Locals opposed to California-Oregon Power Company's Klamath River power monopoly
Dec 21 - Giant Boulder Dam plan on Colorado River by Southern California Edison announced
1920 PUC report on SVWCo
1921 - The Municipal Utility District Act (MUD Act) passed by the California Legislature
Jan 5 - Marshall Plan proposes Shasta dam to be located at Kennett rather than Iron Mountain
Jan 7 - State Senator M.B. Johnson introduces California Water and Power senate bill
Jan 7 – 13 years of bloodshed and litigation end with PG&E winning water rights
Jan 11 - The California State Irrigation Association and Sacramento Union promotes Marshall Plan review
Jan 21 - $500,000 for Marshall water plan study introduced at state legislature
Jan 29 - League of California Municipalities develop plan for public power legislation
Jan 29 - Sacramento City Attorney attacks California Railroad commission for bias towards PG&E
Jan 30 - Marshall Plan endorsement by League of California Municipalities
Feb 23 - Marshall Plan endorsed by Southern California municipalities
Mar 10 - The California State Irrigation Association sends Col. Marshall's list of 346 reservoir candidates to the League of California Municipalities
Mar 14 - Details of the Marshall Plan promoted by the California State Irrigation Association
Mar 15 - Municipal Utility District law results in heavy debate
Mar 20 - State, federal and global impacts on the passage of the 1920 Water and Power Act
Apr 2 - San Francisco Commonwealth Club opposes Marshall Plan during legislative hearings in Sacramento
Apr 2 - Attempt by electric company supporter to kill Johnson's Water and Power Bill fails in Senate
Apr 21 - Growing concern in San Joaquin Valley over Southern California power companies taking hydro-electric sites
Apr 22 - Marshall Plan for Sacramento River irrigation survey given $200,000 by legislature
Apr 26 - Johnson Power & Water Bill 397 loses by 4 votes in assembly committee
Apr 28 - Municipal Utility District Act passed by state senate
Apr 30 - Sacramento Union editorial calls for statewide vote after electric company lobby kills Johnson Power Bill
May 4 - Sacramento City Commission resolution calls for emergency meeting of League of California Municipalities (248 cities) over Johnson bill
May 9 - Sacramento City Attorney says public ownership could reduce electric rates from 8 cents to .8 cent
May 17 - Sacramento City Commission report on building its own hydro-electric site on Silver Creek
May 20 - Plan set up for statewide public power initiative at emergency meeting of League of California Municipalities
May 20 - California State Irrigation Association endorses Marshall plan and Municipal League's statewide vote
May 24 - Governor signs Municipal Utility District Act into law
June 4 - $200,000 survey fund for Marshall Plan signed by governor
July 1 - Miller and Lux loses its lawsuit to stop the Madera Irrigation District from using water from the San Joaquin river
July 22 - Summary of the proposed Water and Power Act, which is modeled on the Ontario, Canada hydro-electric system
July 27 - State Water & Power Act initiative petition drive announced
July 28 - Sacramento City Attorney Robert Shinn comes out against statewide Water and Power initiative
Aug 4 - Riverside Chamber of Commerce circulates claim of "City Against Country" over Los Angeles public power
Aug 29 - Committee redraft of initiative accepted by Shinn with petition gathering for 120,000 signatures to begin
Sept 14 - $500 million public Water and Power plan will be on the 1922 ballot
Sept 29 - California state control of water and power urged by former U.S. Chief Forester Gifford Pinchot
Nov 15 - state funded Marshall survey of water resources begins
Nov 22 - Water and Power initiative attacked by state senator
Dec 29 - Herbert Hoover placed in charge of Colorado River Commission that is reviewing plan for Boulder dam
Dec 29 - State railroad commission okays PG&E plan for $5 million to expand its Pit River hydro-electric developments
Dec 31 - Water & Power Initiative qualifies for November 1922 statewide ballot
1922 Jan 1 - World's highest dam proposed at Boulder Canyon
Jan 6 - The Water and Power Initiative qualifies for the November 1922 ballot
Jan 22 - PG&E front group "Greater California League" attacks water and power act
Feb 23 - Antioch decides to build reservoirs to store water to counter summer salt-water incursions
Feb 24 - PG&E president attacks water and power act initiative at Modesto Progressive men's Business club
Mar 7 - California State Irrigation Association comes out against water and power initiative
Mar 17 - Boulder (Hoover) Dam okayed
Apr 1 - Summary of the Water and Power Act debate held by the Commonwealth Club of California
Apr 2 - Application for major Shasta water diversion by engineers from San Joaquin Light & Power company
Apr 16 - Full page attack against Water and Power act published by S.F. Chronicle
Apr 30 - San Francisco Chronicle claims water and Power act is an attempt to "foist communism on people"
May 4 - Supreme Court to rule on PG&E ratebase inclusion of $52 million decision by state railroad commission
Jun 11 - Robert Marshall comes out against the Water and power act (he later reverses himself)
Sep 28 - Water and Power Act leader, Rudolph Spreckels blames power companies for his ouster at bank
Sep 30 - First phase in PG&E's $100 million Pit River hydro-electric project turned on
Oct 2 - Riverside Daily Press prints false story about Rudolph Spreckels and the Water and Power Act's history
Proposition 19 - Water and Power Initiative Summary and full wording
Nov 9 - Proposition 19 (Water and Power Act) loses (243,604 to 597,453)
Nov - 1922 Water and power Act initiative fails due to $3 million dollar electric industry PR campaign
Water & Power Act electric company fraud investigated in 1934 by FTC; testimony placed expenditures at over $1 million against the initiative
Dec 1 - Water Power Act supporters plan for a new initiative attempt for 1924
1923 Feb - California media fails to expose a $14,000 bribe, uncovered during a senate investigation, paid to the California State Irrigation Association by an electric-industry front group for reversing its support of the water and power initiative
Feb 12 - State Senate investigation exposes opponents spent $234,000 to stop the Water and Power initiative
Feb 13 - San Francisco Civic League of Improvement given $4,000 to distribute 200,000 flyers against Water and Power initiative
Feb 13 - Former SF Mayor and labor leader given $10,000 to oppose initiative while unions were all for it
Feb 13 - Southern California newspaper reports $393,000 spent against water and power initiative
Feb 16 - New PG&E filings with senate investigation place total spent against water and power initiative at over $500,000
Feb 24 - P.H. McCarthy forced to resign from San Francisco Trades Council due to his role in water and power initiative
Feb - Senate hearings summary; see also the Dec 12, 1934 Federal Trade Commission investigation, pp. 268-273, covering the 1922 initiative
July 23 - Sacramento County voters form the Sacramento Municipal Utility District.
1924 Proposition 16 Water and Power Summary and full text
Sept 3 - Col. Robert Marshall comes out in favor of power and water initiative
Sept 6 - Arguments for and against Prop 16, the water and power act with Robert Marshall making the for statement
Oct 28 – Robert Marshall speaks in favor of Water and Power Act
Nov – California Municipalities League attempt at Water & Power fails again
1925 June 20 - San Francisco board of supervisors illegally sells Hetch Hetchy power to PG&E
1926 Proposition 18 Water and Power summary and full text
1926 - California Water & Power Initiative fails for 3rd time
1927 - Cal Bulletin #18 Cal irrigation District Laws
1929 - $390,000 authorized to investigate state's water resources
1930 - Federal-State Water Resources Commission report proposes federal project
1931 - state water plan legislature report proposing new CVP plan
Jan 30 - The Hoover-Young Commission report estimates that the state water plan will cost $374 million
1933 Mar. 4 - Franklin D. Roosevelt sworn in as president; his program includes major public works projects
July 8 - Bureau of Reclamation (USBR) okays funding for the Central Valley Project (CVP)
Jul 15 - Details of CVP legislation announced with plan to cooperate with USBR
Jul 20 - CVP bill stalls in legislature when rules committee blocks it
Jul 22 - CVP legislation revived in state senate after federal support promised
July 27 - California Legislature passes the CVP Act: Assembly 58-9, Senate 23-15
Aug - PG&E funds petition drive for referendum that was run by a company lawyer named Aherne
Aug 5 - Governor signs $170 million CVP Act into law
Dec 15 - Local state representative urges a yes vote on CVP; a large PG&E ad opposing it appears to the right of the article
Dec 15 - SF Chamber of Commerce openly opposes CVP Act
Dec 17 - CVP special election debate pros and cons along with map of project
Dec 19 - Voter Information Guide for Proposition One - CVP special election
Dec 19 - CVP referendum to go ahead wins 459,712 for to 426,109
Dec 21 - Great Water Project vote increases CVP vote status
CVP victory due to dead Catalina cow with Slovenian community vote over fisherman's felony conviction
1933 - SF Labor Council obtains PG&E political expenditures report to state
1933 - PG&E spent $275,737.18 on political and other donations according to State Railroad Commission
1934 Nov 6 - Sacramento, CA votes to form Sacramento Municipal Utility District (SMUD) and purchase PG&E properties with $12 million in bonds
1935 Jan 2 - PG&E files suit to try to overturn the formation of SMUD and its buyout of PG&E
Aug 30 - Rivers and Harbors Act authorizes $12 million funding by Army Corps of Engineers for CVP - never happens
Dec 2 - USBR takes over CVP, loans $4.2 million; new cost estimate rises to $228 million (source: 1942 CVP Writers' Project)
Dec 2 - USBR regulations stipulate that water only be given out to farmers with 160 acres of land or less - see 4-7-1944
1936 June 22 - Sacramento and San Joaquin Flood Control Studies okayed by Rivers and Harbors Act 1936
Sept 12 - Ceremonies at Kennett for Shasta Dam
Oct 19 - Contra Costa Canal Work begins
Oct 22 - Governor hears $477 million CVP plan
1938 Mar 2 - State water authority commissioner opposed to agreement between PG&E and SMUD
Jul 6 - $35.9 million contract for Shasta reservoir awarded
Sept 8 - Shasta Construction work starts
1939 - Fortune Magazine Map of PG&E territory
Nov 5 - Construction of $8.7 million Friant Dam begins
Nov 27 - Pacific Gas & Electric Co. proposes to buy and distribute all of Shasta Dam power
1940 - US v. San Francisco Interior Sec. Ickes wins case to force San Francisco via the Raker Act to stop its sale of Hetch Hetchy water to PG&E
Jan 7 - California legislature blocks Governor Olson's proposal to unfreeze $170 million in CVP bonds
Jan 19 - Central Valley association spokesperson opposed to $50 million CVP bonds is actually a PG&E lobbyist
Jan 22 - Interior Sec. Ickes advises state to set up Public utility market for Shasta at half PG&E prices
Jan 24 - The Water Project Authority of California votes to delay Olson $50 million bond proposal until new study is done
Jan 27 - Governor Olson opens legislative session with request for CVP Power bonds
Jan 30 - Madera Irrigation District calls for vote about governor Olson's $50 million CVP bond proposal
Feb 14 - Governor Olson and CVP senate supporters fail to get $50 million funding out of committee
Feb 28 - State Water Project Authority creates four new jobs along with survey money from legislature allotment
Mar 12 - U.S. Senate approves $5 million for CVP
May 3 - Federal request for $191 million, including over $25 million to California for flood control following wet winter
July 8 - First concrete poured at Shasta Dam
Jul 22 - Sacramento and San Joaquin rivers diverted as work on CVP dams get underway
Aug 20 - CVP Contra Costa canal delivers first water to city of Pittsburg
Sep 25 - CVP will irrigate 3 million acres and allow for increased Central Valley population
Oct 5 - Madera Tribune posts photo of USBR's Friant Dam construction
Oct 19 - President Roosevelt signs rivers and harbors authorization bill (HR 9972) with funds for CVP but includes limitation
Nov 27 - Governor Olson goes to Washington to propose federal takeover of CVP due to state funding opposition
Dec 6 - Another CVP dam proposed south of Shasta dam near Iron Mountain
Dec 19 - Governor Olson obtains support for his CVP plan after meeting with president Roosevelt
Dec 21 - State water commission requests a federal delay on PG&E's request for hydro work near Shasta dam
1941 Jan 8 - state senate proposal to expand the size of the CVP project to include Sacramento Valley
Jan 20 - Congressional oversight of $446 million CVP project based on TVA model is ready
Feb 14 - CVP contracts have helped companies in 40 different U.S. states
Feb 21 - $50 million CVP federal funding in exchange for PG&E Feather River power
Mar 20 - The state water authority budgets $200,000 for CVP work, including cooperative federal projects
Apr 17 - Interior Secretary Ickes prepares legislation for federal oversight of the CVP
Apr 30 - Congress approves a $34.7 million budget for CVP
May 22 - State legislature agrees to include funding for CVP electricity
Jul 28 - The CVP is made a national defense priority, with Keswick Dam contracts sped up to start in August
July 30 - Central Valley Indian Lands Acquisition Act promised to pay for all Wintu lands covered by Shasta dam
Jul 31 - FDR signs CVP legislation that takes tribal lands that will be submerged by Shasta and Friant dams
Aug 12 - First major contract for the $12.5 million Keswick dam awarded
Sep 17 - CVP statistical report says 1.7 million acre feet of water being diverted from Sacramento River
Oct 22 - $319,802 contract for 6 miles of Contra Costa Canal awarded
Dec 30 - Charles E. Carey, regional director of the USBR, is selected by Ickes to develop the market search for CVP power customers
1942 Jan 8 - CVP's Shasta and Friant are the world's 2nd and 4th largest dams and are being rapidly completed for the war
Feb 26 - CVP's chief engineer gives detailed status report on CVP to Madera citizens
Mar 20 - PG&E offers to buy all CVP power during House Appropriation Committee hearings
Mar 25 - House committee deletes $15 million for transmission lines and CVP steam plant
Mar 26 - Rep. Voorhis exposes the main reason PG&E is blocking CVP power lines: Sacramento wants to break away from PG&E and buy power at a cheaper rate
Mar 26 - PG&E gets permission from Federal Power Commission to build steam plant to block USBR's Antioch facility
Aug 20 - The Madera Tribune congratulates Bertrand W. Gearhart on his role in promoting the CVP
Nov 13 - Shasta dam nearly ready - construction work photo
Nov 21 - Major segments of the CVP halted by the War Production Board, including transmission lines and Friant Dam; PG&E allowed to take over CVP power at Shasta
Nov 27 - state railroad commission sets price of PG&E electric property in Sacramento at $11.6 million
Dec 22 - Ag Association spokesperson threatens city over city's push to buy power from CVP
1943 Jun 9 - $30.9 million funds sought for CVP as war power needs expanding
Jun 19 - War Powers Board okays CVP Friant-Kern Canal funding
Jul 20 - CVP Shasta to Oroville power line bids opened
Sep 2 - Interior Secretary Ickes' order to build CVP transmission line attacked by Rep. Carter who represents Tulare county but lives in Oakland
Sep 8 - San Francisco sends resolution to War Production Board calling for urgent completion of Friant-Kern Canal
Sep 24 - CVP coordinator announces operational schedules including Friant dam diversion to start in 1944
Sept 28 - Ickes announces PG&E contract to buy all Shasta dam power agreed to
Dec 29 - War Production Board refuses to fund the CVP's Friant-Kern Canal
1944 Jan 14 - 90 year dream - Shasta reservoir is filling up
Apr 7 - CVP coordinator will follow federal law and block big farms from obtaining CVP water
Apr 14 - Madera Tribune calls Interior Secretary Ickes "Little Harold" over CVP following federal water use rules
May 2 - Madera Tribune attacks "Oakies" and Interior Secretary "Little Harold" Ickes as a Czarist for retaining 160 acre water limit
May 12 - President Roosevelt supports 40 year old 160 acre federal rule that CVP water will only go to small farmers
Jun 8 - State Senate committee wants 160 acre limit lifted
June 26 - Shasta dam starts producing Power from two generators
Jul 20 - Quarter page PG&E Ad promotes its takeover of CVP power
Jul 24 - Hearings begin on the federal 160 acre water limit campaign by wealthy farmers
Jul 25 - PG&E starts taking Shasta dam power for resale
July 26 - Sacramento phase of hearings ends; federal authorities say federal laws will not be broken for wealthy interests
Jul 30 - Week long CVP hearings in Bakersfield held by Senate subcommittee on irrigation - 160 acre water limit attacked
Oct 11 - War Production Board reverses itself and delays work on Friant-Kern Canal
Elliott Amendment to the Rivers and Harbors Act, which attempts to remove the 160 acre water limit of the 1902 Reclamation Act, fails
1945 Jan 2 - USBR proposes spending $600 million for CVP
Mar 22 - Rural congressional representatives want more control over CVP but don't want to pay for the system
Apr 12 - USBR proposes spending $836 million on CVP
Jun 4 - The state Chamber of Commerce promotes the takeover of the Central valley project when completed
Jun 8 - Chairman of the Central Valley Project Congress advocates cheap power development for San Joaquin Valley farmers
Jul 18 - State water authority funded to evaluate the possibility of purchasing the $340 million CVP
Sep 6 - New 300 page CVP report calls for dramatic $527 million increase to project for total of $735 million (map)
Sep 27 - The wartime ban on construction will end in October with $15 million available to start on Friant Dam
Oct 30 - Attack on federal limits on CVP water for farms; the 160 acre limit is actually 320 acres, leaving out only giant operations
Nov 24 - USBR introduces CVP plan to Congress with 38 proposed dams
Nov 26 - CVP funding ends up in hostile subcommittee that cuts all transmission and power funding
Nov 27 - U.S. House appropriations committee cuts budget for CVP transmission lines
Nov 28 - SF Chronicle fails to mention $5 million cut on transmission line budget, only mentions $780,000 left
Nov 29 - Chamber of Commerce hears claim that federal control over the CVP is totalitarian
Nov 30 - SF Chronicle promotes the opinion of a Mendota 42,000 acre "family farmer" who employs 400 regular and 1,000 Mexican migratory workers
Dec 7 - Two day statewide water conference begins with fighting over 160 acre ban
Dec 8 - The first statewide water conference in 18 years is moderated by Governor Warren - the war of big vs. small farmers
Dec 26 - Madera Tribune's attempt to be neutral about the 160 acre fight
1946 Apr 5 - small town newspaper uses front group to call Dept. of Public Works communistic for funding CVP project
Apr 9 - 96,000 acre feet of Friant dam water released in March 1946 for irrigation of valley
May 3 - President Truman announces plan to expand scope of CVP
Jun 18 - CVP obtains $20 million funding for most of its projects
Jun 22 - Sacramento Municipal Utility District voters approve $10.5 million in bonds to purchase PG&E properties
Jun 26 - U.S. Senate funding for CVP reduced from $225 million to $12.5 million
Sep 24 - PG&E announces $160 million budget to expand power output
Nov 30 - Interior Sec. Krug says need for water and power from CVP being held up by "one or two large corporations"
1947 Jan 6 - Republican control of state legislature results in funding for only a CVP study
Jan 6 - Democrats push investigation of monopolist takeover of CVP
Feb 14 - President Truman requests $30 million including $5 million for CVP transmission lines for the next fiscal year
Feb 19 - If the 160 acre law is repealed, 20 giant Central Valley companies will get a water monopoly
Feb 20 - Small farmers and labor oppose repeal of CVP 160 acre water limit
Feb 27 - 61% of $384 million CVP costs will be paid by electric sales
Mar 17 - Senator introduces bill to exempt CVP from USBR's 160 acre ban
Jun 3 - Sixteen day 160 acre ban hearing by Senate ends, no action taken
Jul 28 - $29 million CVP budget split between Army Corps and U.S.B.R. with $1.5 million for transmission lines
Sep 18 - CVP funding and pace to be increased, with hopes of completing the entire project by 1950
Dec 3 - Governor Warren seeks emergency CVP funding
Dec 23 - $11.4 million emergency funds for CVP project granted as senator tries to get CVP head fired over 160 acre ban
1948 Jan 12 - President Truman submits a $42 Million CVP budget for next year
Jan 15 - Proposal to expand CVP to American River
Jan 22 - San Joaquin Valley farmers sign 19 contracts for 320,000 acre feet of water
Feb 25 - With another drought, the State Water Project Authority requests $55.6 million for CVP
Mar 5 - USBR will seek Truman veto if California Republicans try to overturn 160 acre ban
Mar 18 - Two farm groups on opposite sides of the 160 acre debate
Jun 5 - Governor Warren supports CVP transmission system - see confusion headline
Jul 6 - CVP budget for 1948-49 year set at $68.5 million
Jul 19 - New CVP work to include expansion of Shasta dam power, Klamath River, and Santa Barbara projects
Aug 6 - $50 million fund sought to buy up large farms and resell them to small farmers
Oct 7 - Chamber of Commerce threatens legal fights over CVP's reclamation laws
Oct 13 - Interior Secretary Krug warns farmers that California electric companies are blocking CVP project
Nov 30 - State Water Project Authority urges 160 acre law removal
1949 - Map of Central Valley Cotton producers
Mar 30 - Major Congressional victory as subcommittee okays transmission lines as part of CVP $53.5 million budget
Jul 2 - Cal. Assembly funds study to buy CVP
Jul 9 - 15,000 attend Governor Warren's release of Friant dam water into San Joaquin valley
Jul 11 - Media says 100 years in the making as 20,000 people attend opening of $58 million Friant-Kern Canal
Jul 13 - US Senate boosts CVP annual funding to $60.8 million
Jul 21 - Senator Downey (R-CA) demands investigation of USBR and its continued 160 acre ban
Aug 2 - Congress tentatively agrees to fund two more CVP canals for $20–40 million
Aug 25 - Madera Tribune writes highly manipulative article suggesting Public Power advocates had increased funding yet story details how Senator Knowland (R-Ca) amendment stripped transmission funding
Aug 30 - President Truman proposes $1 billion CVP expansion for 38 dams and 25 power facilities
Sep 27 - Friant dam is fourth largest dam in world - details of history and construction
Sep 27 - U.S. Senate okays CVP addition of $110 million for American River development
Nov 14 - USBR plans to begin moving water from Sacramento Valley into the San Joaquin Valley in 1951
Dec 2 - CVP deal contract with Madera Irrigation District almost settled
1950 Feb 3 - Gov Warren supports $69 million CVP budget for 1951
Mar 16 - California house members cut $4 million of power project out of CVP budget
Apr 14 - The Agricultural Council of California calls the USBR's public power operations socialist
May 8 - Warning that government should withdraw from CVP if 160 acre ban on water rights removed
Jun 17 - PG&E attacked by Governor Warren for blocking CVP projects during Shasta Dam dedication
Sep 19 - Detailed overview of how CVP works and impacts to Madera Irrigation District
1951 Jan 3 - CVP and state agree to keep grasslands flooded to protect migratory birds
Apr 20 - $18.3 million of the $33.8 million CVP annual budget earmarked for Friant-Kern Canal
May 13 - Friant-Kern Canal completed
Jul 5 - The California legislature passes legislation to build the Oroville dam and power facilities as part of the CVP system
Aug 1 - Shasta Dam starts sending water into CVP canals
Aug 8 - Friant dam ceremony exposes new rift as state court orders excess water released as tactic to flood aquifer
Sep 13 - PG&E advertisement claim that 55% of all Central Valley water comes from aquifers by electric pumps
Sep 25 - Madera Tribune does extended coverage of CVP as major milestone in project is completed with historic map
Sep 25 - History of the Reclamation Act as part of Madera Tribune celebration issue
Sep 25 - Unnamed (big) farmers take Madera Irrigation District water contract with USBR to court
1952 Feb 23 - USBR proposes CVP Power plan that would takeover local PG&E project and spark major growth in Fresno
Mar 1 - USBR reports 1951 income of $8 million from water sales
Mar 21 - $34.9 million budget okayed by congress for construction activities
May 2 - Sixteen large farmers representing 14,000 acres agree to take CVP water and eventually abide by 160 acre rule
Dec 13 - SMUD makes contract to buy CVP power from USBR
California legislature appropriates $10 million for an investigation into state purchase of CVP
1953 Jan 9 - President Truman asks for $83 million for CVP construction
Jan 10 – 110 foot coffer dam at CVP's $58 million Folsom dam breached - no deaths from flooding
Jan 24 - Madera Tribune enraged that USBR signs a long term contract to sell 17% of CVP excess power to the Sacramento Municipal Utility District
Jan 28 - Lawsuit to stop all major water diversions a threat to the CVP
Apr 23 - House Committee headed by Ca. representatives cuts $7 million from $19 million CVP budget, all from power projects
May 20 - USBR request to senate that it reinstates $7 million pulled from CVP's power and transmission budget
May 28 - State legislature tries to block irrigation district contracts with USBR
Sep 26 - Full details of the size and cost of Friant dam - the 4th largest concrete dam in world
Dec 28 - Republicans, corporate farms and state Chamber of Commerce push for state to buy CVP from Interior Dept.
1954 USBR report: Four dams, five canals and other systems have been completed at a cost of $435.4 million
Jan 21 - President Eisenhower asks for $70.4 million CVP budget
May 4 PG&E offers to buy CVP power and facilities for $130 million cash
Aug 27 - Central Valley Project Act Reauthorization
Sep 10 - Proposal for $230 million San Luis segment of the CVP announced includes map
1955 Feb 21 - PG&E makes proposal to buy CVP power from Trinity dam for $3.5 million a year
Apr 14 - USBR ignores PG&E's proposal to take over the electric system of the $219 million Trinity dam
Jul 14 - Urgent need for more water results in Trinity project moving ahead as San Luis project not ready
Jul 16 - CVP $15 million budget for 1956 will be used to complete Folsom Dam and begin work on Trinity Dam
1956 May 21 - Congress appropriates $83 million for irrigation with $20 million going to Central Valley projects including a Tulare Lake dam
Jul 19 - USBR announces plans to construct the Glen Canyon Dam and $42 million for five CVP projects for 1957
1957 - Fear based 28 minute video pushing to expand state expansion of water project
Feb 20 - PG&E attacks republican senators opposition to PG&E's proposal for joint construction of Trinity Dam project
Jun 13 - $88 million granted for California but all funding for transmission systems excluded
Oct 14 - U.S. Supreme Court agrees to hear the USBR's 160-acre ban on big water users
Oct 29 - 5 million acre feet a year being extracted from Central Valley's aquifer
Nov 1 - The state's Feather River project considered world's largest engineering project
1958 Jan 23 - PG&E agrees to renegotiate rates it charges for CVP power after report discloses company's rate manipulation
Feb 5 - Interior Secretary Fred A. Seaton recommends that PG&E be allowed to takeover Trinity Dam power
Mar 5 - CVP Plan to add 2 million acre feet of water in San Joaquin Valley endorsed
May 26 - Proposal for San Luis Canal project and 500,000 acres of land in western Merced, Fresno and Kings counties
Jun 9 - Congress okays $42 million budget for coming CVP's next fiscal year
Jun 23 - U.S. Supreme Court reverses state supreme court in upholding the 160-acre ban on USBR water to large users
Oct 15 - Total of 444,000 Kilowatts of CVP power being transferred to PG&E
1959 Feb 13 - PG&E plan to build "cream skimmer" transmission lines between Bonneville and CVP attacked
Mar 18 - Representative James B. Utt introduces legislation to turn all Trinity Dam power over to PG&E
Apr 27 - Two more dams proposed for CVP project
May 12 - Governor Brown releases breakdown of where $1.75 billion in funding for the State Water Project will go
Jun 3 - Congress okays $103 million, with $43 million to USBR and $59.8 million to Corps of Engineers for state irrigation and flooding
Jul 9 - Governor Brown signs $1.75 billion state water bond law that includes 735 foot high Oroville Dam
Sep 30 - Interior Department signs two new contracts with PG&E for 629,000 Kilowatts of CVP electricity from four dams
Sep 30 - Madera Irrigation District opposes Fresno plan to take San Joaquin River surplus water
Sep 30 - Interior Department extends PG&E contracts for CVP Power up to April 1971
Oct 21 - California Grange opposed to state takeover of Oroville Dam and giving PG&E control of Trinity Dam electricity
1960 State and USBR cooperation Agreement
Jul 1 - Congress okays $61 million CVP budget
1961 Feb 2 - State takes first step in $400 State Water Project
Aug 10 - History of EBMUD and the November 1959 $1.7 billion state water project vote
1962 - May 17 - $27 million joint CVP funding project proposed
1963 - Corps of Engineers dredges the Sacramento Deep Water Ship Channel to the port of Sacramento.
Jan 18 - Congress to propose $106 million annual CVP Budget
Mar 2 - Governor Brown Announced $325 Million plan to fund state water project
May 24 - State Senate votes against Governor Brown's proposal to fund state plan with bonds
June 11 - Attempts by Republicans to kill the sale of $325 million in bonds for state water project fails
Dec 15 - Extended summary of all the state's new water plans laid out in series of articles by agency
1964 Jan 13 - SMUD, EBMud and growing construction of dams background story on state water expansion
Jan 21 - Utility Districts across the state will benefit from expansion of the state water project (map of state plan)
Jan 22 - $112 million annual CVP budget proposed to congress with state to include $42 million for San Luis
1965 - Inter-agency Delta Committee recommendation for Peripheral Canal and Delta facilities
Jan 14 - City of Santa Clara asked LBJ for direct access to CVP vs. PG&E power
July 23 - $5 billion San Luis Reservoir segment of the CVP begins construction
Aug 4 - PG&E Hydro-electric project connects 3 rivers near Shasta
Aug 6 - Auburn-Folsom Project goes before congress for funding
Sept 16 - Governor Brown requests $188 million for CVP funding
1966 Jan 25 - President Johnson asks Congress for $100 million CVP annual budget
Mar 11 - 21st Century water shortage predicted if system not expanded
Apr 3 - State water project good until 1990 but won't handle predicted 54 million population expected by 2020
Apr 26 - State seeks $164 million from feds for CVP's 1967 fiscal year
1967 Jan 13 - CVP produces record 5.3 billion kilowatt-hours of electricity in 1966
Jan 25 - President Johnson withholds $34 million for CVP's San Luis project
Oct 6 - State Water Project's Oroville Dam and Reservoir are completed
Oct 18 - State Assemblyman seeks $600 million in Bonds for the state's water project
1968 Feb 8 - State budgets $425 million for state's water project
Apr 19 - CVP's San Luis Reservoir dedicated
May 16 - $468 million cut proposed to CVP's Auburn Dam project
Dec 28 - Interior Dept. okays new CVP plan along east side of valley
1969 - State Water Project obtains emergency loan from state treasury as inflation rates have dried up funding from bond sales
1969 - The Harvey O. Banks Delta Pumping Plant and John E. Skinner Fish Facility are completed by DWR
1970 Mar 15 - Army Corps of Engineers announces construction of 625 foot high New Melones Dam
Apr 30 - Governor Reagan promotes $209 million 43 mile long, 400 foot wide Peripheral Canal plan
1971 Jan 29 - Nixon administration proposes $150 million for state water projects
Feb 15 - NCPA files Writ with CPUC to stop PG&E power contract with SMUD for Rancho Seco surplus power
Mar 18 - Sierra Club files lawsuit to shut down the CVP
Jul 23 - California State Water Resources Control Board sets CVP water quality standards.
Jul 30 - California Water Resources Association attacks passage of Wild and Scenic Rivers legislation
Oct 8 - New association of state agencies formed to promote water projects
1972 Jan 20 - Labor Leader says 45 corporations with 3.7 million acres gets illegal USBR water subsidies
May 25 - Proposition 9 ban on nuclear development will endanger CVP says California Water Resources Association
Aug 10 - $4.9 million CVP contract awarded for 25 miles of the 188 mile long San Luis drain
Dec 7 - GAO study says big landowners received $1.5 billion CVP water subsidy
1973 - legislation funds new Delta levees
Feb 9 - Nixon administration blocks $2 million in CVP funds okayed by Congress
1974 Feb 14 - History of Peripheral canal plan dates to 1964
Jul 11 - 29,000-acre Giffen Inc. broken up and sold to comply with 160-acre USBR rules
Sept 25 - Environmental review for 43 mile long Peripheral canal released
1975 Sept 4 - Healdsburg joins 10 other NCPA cities to obtain its own electricity
1976 Jan 28 - USBR says there will be enough water for the year as drought continues
Mar 24 - 59 farmers file $33 million lawsuit against CVP and SWP for 1974 flood damages
Apr 22 - Eight mile Pacheco tunnel from San Luis reservoir to Santa Clara started
1977 - Department of Water Resources supports Peripheral Canal as best way to move water to the Delta
Feb 8 - USBR announces plan to cut CVP water by up to 75% due to drought
Feb 25 - Westland's Land Dynamics Inc. pleads guilty and fined $10,000 for conspiracy to violate land sale rules
Apr 17 - President Carter stops 15 water projects including review of CVP
Apr 21 - Salyer Land and J.G. Boswell Cos. (cotton growers) propose buying $45 million Pine Flat Dam to bypass 160-acre rule
Sept 15 - Assembly votes 56-22 in favor of SB 346 Peripheral Canal legislation
Sept 16 - Senate votes down Governor Brown's $4.2 billion Peripheral Canal proposal
Oct 6 - USBR lost $74 million between 1971 and April 1976 for underpricing electricity sold to PG&E
Nov 5 - 529 page federal report says USBR has failed to breakup corporate ownership in Westlands over 160 acre limit on water subsidies
Nov 5 - Government task force report documents $2.7 billion water subsidy to CVP farmers at taxpayers expense
Nov 5 - Report documents how the USBR's 197 mile long San Luis drain (Kesterson) in the Westlands went from $7 million to $542 million
Nov 30 - Roberts Farm Inc's 8,100 acre operations in Kern county goes bankrupt and sold for $21.5 million
Dec 11 - The Chandler family's L.A. Times caught in conflict of Interest over newspaper's attack on 160-acre limit as family owns major investments in Tejon Ranch and J.G. Boswell Company
Dec 19 - California v. U.S. Supreme Court case over control of discharge rights
1978 - California State Water Resources Control Board releases Water Rights Decision 1485 (D-1485) requiring Delta water quality
Jan 6 - Call for one year moratorium over 160-acre ban ruling and Interior Dept decision
Jan 26 - CVP water rates too cheap as study shows project will be $8.8 billion in debt by 2037
Feb 8 - PG&E making 800% profit on CVP power it buys
Feb 20 - Federal Land Bank of Sacramento ignores 160-acre CVP limit rule when issuing loans to large farmers
Mar 18 - Sec. of Interior urges cooperative operations - state charges $22 vs. CVP charging $3.50 per acre foot of water
July 4 - US Supreme Court rules in favor of state over right to enforce environmental regulations
Sep 20 - Lobbyists for Salyer Land and J.G. Boswell Cos. who own 150,000-acres of cotton lands paid $165,000 to fight 160-acre limit
Nov 8 - Fish and Wildlife Improvement Act of 1978 (16 U.S.C. 742l; 92 Stat. 3110) -- Public Law 95-616 updates CVP Act
Nov 21 - Westlands Irrigation District legal Budget for 1979 set at $549,000 to fight the federal government
1979 Jan 3 - Dept. of Interior agrees to abide by state's environmental quality rules
Jan 16 - Bill to allocate $50 Million for state water project including money for Peripheral canal introduced
Feb 25 - J.G. Boswell investigated for secret contract by Grand Jury with Cotton Inc. (lobby firm) $113 million 10 year budget
Mar 8 - US Dept. of Agriculture expands probe of Boswell-Cotton Inc. $60,000 annual contract for Cotton Board research and promotion
Mar 11 - Westlands Irrigation District hires Washington lawfirm of Williams & Connolly to represent their 160-acre legal fight
Mar 22 - Senate hearings open on the Reclamation Reform Act of 1979 - to replace the 160-acre limit for USBR water
Mar 23 - Western water war erupts over hundreds of millions of acres of subsidized lands with call to change 160-acre limitation
Apr 13 - Support for study calling for 200 foot increase of Shasta Dam
Oct 11 - Regional battle between farmers and environmentalists hold up dams and Peripheral Canal plans
1980 Mar 13 - State legislature passes SB200 Peripheral Canal act opposed by ecologists
Oct 18 - Santa Clara power users sue agency for $18 million over rates
1981 Oct 21 - CVP proposal to sell power to city of Healdsburg announced
1982 - Voters defeat the Peripheral Canal initiative - Proposition 9
Apr 29 - Santa Cruz to do study on takeover of PG&E power grid
Apr 30 - Healdsburg to start buying CVP power from Western Area Power Administration
May 4 - Healdsburg breaks from PG&E power
August 4 - PG&E claims Healdsburg owes them $62,000 as city goes for public power
1983 Oct 2 - Republicans move away from conservation on Central Valley water
1984 May 5 - National Wildlife Federation says USBR under collected water fees by $10 billion
Nov 16 - Federal plan to dump Central Valley waste water into Pacific attacked
1985 Mar 30 - Interior Dept plan to stop dumping Central Valley toxics into Kesterson
Aug 21 - CVP has made $1.5 billion in illegal subsidies to giant ag farms
Sep 10 - House passes cooperative agreement between CVP and SWP
1986 - DWR-USBR Coordinated Operation Agreement, agreed to by Congress.
Nov 27 - Ceremony held in Sacramento on agreement between CVP and SWP
1987 - State Water Board starts revision of D-1485 after U.S. EPA calls plan inadequate.
1988 - Suisun Marsh salinity control gates start up.
May 28 - 2nd dry year starting to impact CVP water supply
1989 - EPA lists Sacramento River Chinook salmon as threatened
Feb 16 - USBR announces 25-50% reduction in water availability due to 3 year drought
May 3 - USBR investigation of expanding Tehama-Colusa Canal
June 23 - PG&E loses court case over its refusal to transmit power to public agencies
1990 Feb 16 - 4th year of drought expected to cause cutbacks in water to users
Jul 15 - $150 million environmental CVP legislation angers farmers and PG&E
1991 - State Water Board produces Bay-Delta salinity control plan but partially rejected by the EPA
Construction completed on four south Delta pumping facilities
Jan 30 - 800 attend statewide meeting on water crisis solutions
Feb 13 - Water rights issues grow as 5th year of drought calls for 50% farm water cutbacks
Feb 15 - Water crisis worst since 1945; CVP to drain all reservoirs, with up to 75% restrictions in use
Mar 16 - Recent storms reduce water crisis but orders for reduced use to hold
1992 - The Central Valley Project Improvement Act mandated the balancing of water, pricing and distribution policies
Jan 1 - U.S. Corps of Engineers releases environmental plan for 3,400 acre Yolo County wildlife refuge
Feb 13 - Bush administration submits $906 million USBR budget for 1993 including CVP
Oct 30 - Reclamation Projects Authorization and Adjustment Act of 1992—Public Law 102-575
Nov 18 - New federal legislation will give Yolo and Solano County CVP water
1993 - A documented indicator species, the Delta smelt is listed as threatened (goes to endangered in 2009)
1993 - Save San Francisco Bay Association's Barry Nelson calls the CVP "the biggest single environmental disaster ever to strike California."
Feb 18 - USBR open new office to oversee 1992 CVP Improvement Act
Dec 17 - Governor Wilson attacks federal plans to withhold water for environment
1994 Feb 16 - Drought response results in 2/3rd cut in farm waters
Apr 10 - Judge blocks attempt to sell CVP water to mining company
Sep 19 - Pajaro Valley loses 19,000 acre feet of CVP water due to legal technicality
1995 Jul 18 - Folsom Dam gate breaks, releasing half a million acre feet of water
1996 Oct 12 - Pajaro Valley water agency decides to buy $5.6 million in CVP water rights
Dec 21 - Kern County plan to sell 22 billion gallons of water to L.A. starts water war
1997 - $80 million temperature controlled fish protection support added to Shasta dam
Sept 13 - Cadillac Desert author supports more subsidies to farmers
Dec 14 - Proposal to sell Friant dam water to L.A. reduced to just excess flow years
1998 May 29 - Measure D, the Pajaro Valley alternative to the CVP plan, attacked over its reliance on conservation and small dams
Jun 3 - Measure D passes, effectively ending plan to import CVP water into Pajaro Valley
2000 - Westlands Water District sues the USBR over drainage promises and wins $2.6 billion agreement
Jun 9 - $450 million water plan proposed by Governor Davis includes raising Shasta dam height
2002 Feb 13 - Appeal of court ruling taking CVP water from fish and environment
Jul 17 - Westlands wants feds to buy contaminated land for $500 million
2004 - CalFed budget zeroed out for fifth year in a row as attempts to find common ground fail
Apr 22 - Editorial: death of 34,000 fish on Klamath impacts Hupa tribe
Jul 14 - Court order allows for protection of fish in Trinity River
2005 Mar 16 - CVP water resold by users as 200,000 acres in Westlands too toxic for growing
2006 - San Joaquin water flows restored to protect fish
2007 May 25 - Federal court overturns U.S. Fish and Wildlife's 2005 opinion that increased CVP water take would not endanger Smelt
Oct 25 - "Racanelli Decision" - Judge decides in favor of the Aug. 1978 decision (D-1485) compelling USBR and DWR to adhere to the State Water Resources Control Board's water quality standards
2008 - Central Valley Project Improvement Act's fisheries program conducts "Listen to the River" independent peer review
Apr 9 - CVP's Lewiston dam predicted to have normal reservoir levels for the year
Aug 9 - The Kern County Water Agency buys state water for as little as $28 and sells it for up to $200 an acre foot
2009 - A documented indicator species, the Delta smelt is listed under the ESA as endangered (listed as threatened in 1993)
Mar 11 - Drought fears recede after recent rains bring CVP's Lewiston dam up to 59% of normal
May 24 - How the Ca. Dept. of Water Resources lost control of the Kern County Water Bank
Jun 5 - National Oceanic and Atmospheric Administration releases 4 year study on fish impacts
Oct 7 - Trinity County protests USBR's petition to extend state water rights to 2030
2010 Jun 3 - Environmental groups file a lawsuit seeking to block a secret backroom deal – known as the "Monterey Amendments"
Dec 15 - The release of the Bay Delta Conservation Plan, or the reincarnation of peripheral canal is immediately opposed by environmental groups
2012 Mar 2 - Court of Appeals ends thirteen year legal battle between Westlands and Interior Dept in government's favor
2014 May 14 - 10% of all California water goes to almond production
Nov 4 - After 5 years of reworking, the public okays $510 million in state water funding
2015 Jan 27 - Harvard University has bought 10,000 acres of California land for wine production and water speculation
Apr 21 - California Almond production is using over 1 trillion gallons of agricultural water
Sep 11 - USBR announces agreement with Westlands water contract and drainage controversy
2017 Jan 3 - HR 23, Central Valley Project Water Reliability, introduced and passed by the House but fails in the Senate; it would have stripped all CVP environmental protections
Feb 17 - Oroville Dam (SWP) spillway crisis forces 180,000 people to evacuate
Mar 17 - House republicans invoke the "God Squad" option of the Endangered Species Act Amendments of 1978 to overturn water limits caused by the endangered Smelt
Jun 10 - Trump admin proposes selling off all grid assets of the Power Marketing Administration
2018 - Congress sets aside $20 million to raise Shasta dam by 18.5 feet for an additional 636,000 acre feet of water a year
2019 Aug 1 - Meeting to start new Delta Tunnel by state agencies held
Aug 21 - Trump admin suppresses report on dangers to Steelhead Salmon
Sep 8 - Westlands Irrigation District appeals court decision to block raising height of CVP's Shasta dam
Oct 23 - Dept. of Interior changes water rules in favor of farmers
2020 - Jan 1 - No Smelt indicator species found in the Sacramento Delta for last 2 years
Feb 20 - President Trump signs Record of Decision on federal biological opinions
Feb 29 - Seventy-five project customers, including the large Westlands Water District, receive permanent federal water contracts
Facilities in the Sacramento Valley
Sacramento River
Shasta Division consists of a pair of large dams on the Sacramento River north of the city of Redding. The Shasta Dam is the primary water storage and power generating facility of the CVP. It impounds the Sacramento River to form Shasta Lake, which can store over of water, and can generate 680 MW of power. Shasta Dam functions to regulate the flow of the Sacramento River so that downstream diversion dams and canals can capture the flow of the river more efficiently, and to prevent flooding in the Sacramento-San Joaquin Delta where many water pump facilities for San Joaquin Valley aqueducts are located. The Keswick Dam functions as an afterbay (regulating reservoir) for the Shasta Dam, also generating power.
The Sacramento Canals Division of the CVP takes water from the Sacramento River much farther downstream of the Shasta and Keswick Dams. Diversion dams, pumping plants, and aqueducts provide municipal water supply as well as irrigation of about . The Red Bluff Diversion Dam diverts part of the Sacramento River into the Tehama-Colusa Canal, the Corning Canal and a small reservoir formed by Funks Dam. Six pump plants take water from the canal and feed it to the Colusa County water distribution grid.
Trinity River
Water diversions from northern rivers in the state remain controversial due to environmental damage. The Trinity River Division is the second largest CVP division serving the northern Sacramento Valley. The primary purpose of the division is to divert water from the Trinity River into the Sacramento River drainage downstream of Shasta Dam in order to provide more flow in the Sacramento River, generating peaking power in the process. Trinity Dam forms Trinity Lake, the second largest CVP water-storage reservoir, with just over half the capacity of Shasta and a generating capacity of 140 MW. Lewiston Dam, downstream of Trinity Dam, diverts water into the Clear Creek Tunnel, which empties into a third reservoir, Whiskeytown Lake on Clear Creek, a tributary of the Sacramento River, generating 154 MW of power in the process. Whiskeytown Lake (formed by the Clair A. Hill Whiskeytown Dam) in turn provides water to the Spring Creek Tunnel, which travels into the lowermost reach of Spring Creek, a stream that flows into Keswick Reservoir, generating another 180 MW of electricity. From there the water from the Trinity River empties into Keswick Reservoir and the Sacramento River. In 1963, the Spring Creek Debris Dam was constructed just upstream of the outlet of the Spring Creek Tunnel, to prevent acid mine drainage from the Iron Mountain Mine from continuing downstream and contaminating the river.
American River
The American River Division is located in north-central California, on the east side of the Great Central Valley. Its structures use the water of the American River, which drains off the Sierra Nevada and flows into the Sacramento River. The division is further divided into three units: the Folsom, Sly Park and Auburn-Folsom South. The American River Division stores water in the American River watershed, to both provide water supply for local settlements, and supply it to the rest of the system. The dams also are an important flood control measure. Hydroelectricity is generated at Folsom and Nimbus dams, and marketed to the Western Area Power Administration.
The Folsom Unit consists of Folsom Dam, its primary water storage component, and Nimbus Dam, which serves as its downstream forebay. The Folsom Dam is located on the American River, and stores of water in its reservoir, Folsom Lake. Folsom Lake covers and is located inside the Folsom Lake State Recreational Area. Eight additional earth fill saddle dams are required to keep the reservoir from overflowing. The dam also generates 200 MW from three generators. About downstream of Folsom Dam is the Nimbus Dam, forming Lake Natoma. The dam generates 7.7 MW from two Kaplan turbines on the north side of the river. The Nimbus Fish Hatchery is located downstream of Nimbus Dam, to compensate for the two dams' destruction of American River spawning grounds.
The Sly Park Unit includes Sly Park Dam, Jenkinson Lake, the Camp Creek Diversion Dam, and two diversion tunnels. The Sly Park Dam and its similarly-sized auxiliary dam form Jenkinson Lake, which covers . Jenkinson Lake feeds the Camino Conduit, a aqueduct. The Camp Creek Diversion Dam diverts some water from Camp Creek into Jenkinson Lake.
The third unit is the Auburn-Folsom South Unit, consisting of several dams on American River tributaries. These include Sugar Pine Dam and Pipeline (supplying water to Foresthill), and the uncompleted Folsom South Canal. The primary component of the unit, concrete thin-arch Auburn Dam, was to be located on the North Fork of the American, but was never built because of the significant risk of earthquakes in the area, and general public opposition to the project. However, the high Foresthill Bridge, built as part of the preliminary work for Auburn Dam, still stands. County Line Dam, about south of Folsom Dam, was also never built.
Facilities in the San Joaquin Valley
Delta and canal system
One of the most important parts of the CVP's San Joaquin Valley water system is the series of aqueducts and pumping plants that take water from the Sacramento-San Joaquin Delta and send it southwards to supply farms and cities. The Delta Cross Channel intercepts Sacramento River water as it travels westwards towards Suisun Bay and diverts it south through a series of man-made channels, the Mokelumne River, and other natural sloughs, marshes and distributaries. From there, the water travels to the C.W. Bill Jones Pumping Plant, which raises water into the Delta-Mendota Canal, which in turn travels southwards to Mendota Pool on the San Joaquin River, supplying water to other CVP reservoirs about midway. A facility exists at the entrance of the pump plant in order to catch fish that would otherwise end up in the Delta-Mendota Canal. A second canal, the Contra Costa Canal, captures freshwater near the central part of the delta, taking it southwards, distributing water to the Clayton and Ygnacio Canals in the process, and supplying water to Contra Loma Dam, eventually terminating at Martinez Reservoir.
San Joaquin River
The CVP also has several dams on the San Joaquin River—which has far less average flow than the Sacramento—in order to divert its water to southern Central Valley aqueducts. The Friant Dam, completed in 1942, is the largest component of the Friant Division of the CVP. The dam crosses the San Joaquin River where it spills out of the Sierra Nevada, forming Millerton Lake, which provides water storage for San Joaquin Valley irrigators as well as providing a diversion point for a pair of canals, the Friant-Kern Canal and the Madera Canal. The Friant-Kern Canal sends water southwards through the Tulare Lake area to its terminus at Bakersfield on the Kern River, supplying irrigation water to Tulare, Fresno, and Kern counties. The Madera Canal takes water northwards to Madera County, emptying into the Chowchilla River. The project also includes some 500 miles of canals; water deliveries to city dwellers and power sales from the generation of electricity help pay the project's costs.
Stanislaus River
On the Stanislaus River, a major tributary of the San Joaquin, lies the relatively independent East Side Division and New Melones Unit of the CVP. The sole component of the division/unit is New Melones Dam, forming New Melones Lake, which, when filled to capacity, holds nearly of water, about equal to the storage capacity of Trinity Lake. The dam functions to store water during dry periods and release it downstream into the northern San Joaquin Valley according to water demand. The dam generates 279 MW of power with a peaking capacity of 300 MW.
Offstream storage and aqueducts
The CVP has a significant number of facilities for storing and transporting water on the west side of the San Joaquin Valley, in the foothills of the California Coast Ranges. The West San Joaquin Division and San Luis Unit consist of several major facilities that are shared with the state-run California State Water Project (SWP). San Luis Dam (or B.F. Sisk Dam) is the largest storage facility, holding of water.
Although called an offstream storage reservoir by USBR, the reservoir floods part of the San Luis Creek valley. San Luis Creek, however, is not the primary water source for the reservoir. Downstream of San Luis Reservoir is O'Neill Forebay, which is intersected by the Delta-Mendota Canal, a separate CVP facility. Water is pumped from the canal into the Forebay and uphill into San Luis Reservoir, which functions as an additional water source during dry periods.
Water released from San Luis and O'Neill reservoirs feeds into the San Luis Canal, the federally built section of the California Aqueduct, which carries both CVP and SWP water. The San Luis Canal terminates at Kettleman City, where it connects with the state-built section of the California Aqueduct. With a capacity of , it is one of the largest irrigation canals in the United States. The Coalinga or Pleasant Valley Canal branches off the San Luis Canal towards the Coalinga area. A pair of separate dams, Los Baños Detention Dam and Little Panoche Detention Dam, provide flood control in the Los Baños area. The San Luis Drain was a separate project by USBR in an attempt to keep contaminated irrigation drainage water out of the San Joaquin River, emptying into Kesterson Reservoir where the water would evaporate or seep into the ground. Because of environmental concerns, the system was never completed.
The CVP also operates a San Felipe Division to supply water to of land in the Santa Clara Valley west of the Coast Ranges. San Justo Dam stores water diverted from San Luis Reservoir through the Pacheco Tunnel and Hollister Conduit, which travel through the Diablo Range. A separate canal, the Santa Clara Tunnel and Conduit, carries water to the Santa Clara Valley.
Environmental impacts
Once, profuse runs of anadromous fish—salmon, steelhead, and others—migrated up the Sacramento and San Joaquin Rivers to spawn in great numbers. The construction of CVP dams on the two rivers and many of their major tributaries—namely Friant Dam and Shasta Dam—mostly ended the once-bountiful Central Valley salmon run. From north to south, the Sacramento upriver of Shasta Dam, the American upriver of Folsom Dam, the Stanislaus upriver of New Melones Dam, and the San Joaquin upriver of Mendota—have become inaccessible to migrating salmon. In three of these cases, it is because the dams are too high and their reservoirs too large for fish to bypass via fish ladders. The San Joaquin River, however, had a different fate. Almost of the river is dry because of diversions from Friant Dam and Millerton Lake. Even downstream of Mendota, where the Delta-Mendota Canal gives the river a new surge of water from the Sacramento-San Joaquin Delta, irrigation runoff water, contaminated with pesticides and fertilizer, has caused the river to become heavily polluted. To make matters worse, efforts by the California Department of Fish and Game to route the San Joaquin salmon run into the Merced River in the 1950s failed, because the salmon did not recognize the Merced as their "home stream".
Not only on the San Joaquin River have CVP facilities wreaked environmental havoc. On the Sacramento River, Red Bluff Diversion Dam in Tehama County, while not as large or as damaging as Friant Dam, was once a barrier to the migration of anadromous fish. The original fish passage facilities of the dam continually experienced problems from the beginning of operation in 1966, and introduced species that prey on young smolt often gathered at the base of the dam, reducing the population of juvenile salmon migrating out to the Pacific. The Red Bluff Diversion Dam has since been replaced with a fish screen and pumping plant, thus allowing unimpaired passage through Red Bluff. Further upstream, Keswick and Shasta Dams form total barriers to fish migration. Even out of the Central Valley watershed, the CVP's diversion of water from the Trinity River at Lewiston Dam into Whiskeytown Lake has significantly hurt the Klamath River tributary's salmon run. Over three-quarters of the river's flow is diverted through the Clear Creek Tunnel and away from the Trinity River, causing the river below the dam to become warm, silty, shallow and slow-flowing, attributes that hurt young salmon. Furthermore, the Trinity Dam forms a blockade that prevents salmon from reaching about of upriver spawning grounds. In the early years of the 21st century, the Bureau of Reclamation finally began to steadily increase the water flow downstream from Lewiston Dam. While providing less water for the CVP altogether, the new flow regime allows operations to meet the line drawn by Reclamation itself in 1952 stating that at least 48% of the river's natural flow must be left untouched in order for Trinity River salmon to survive. The lack of flow in the Trinity up to then was also a violation of the authorization that Congress made over the operation of the dam. The "...legislation required that enough be left in the Trinity for in-basin needs, including preservation of the salmon fishery."
In the early years of the 21st century, the Bureau of Reclamation studied the feasibility of raising Shasta Dam. One of the proposed heights was greater than its current size, thus increasing the storage capacity of Shasta Lake by . The agency also proposed a smaller raise of that would add . Previously, a raise of the dam, increasing storage to , was considered, but deemed uneconomical. When Shasta Dam was first built, it was actually planned to be two hundred feet higher than it is now, but Reclamation stopped construction at its present height because of a shortage of materials and workers during World War II. The raising of the dam would further regulate and store more Sacramento River water for dry periods, thus benefiting the entire operations of the CVP, and also generating additional power. However, the proposed height increase was fought over for many reasons. Raising the dam would cost several hundred million dollars and raise the price of irrigation water from Shasta Lake. It would drown most of the remaining land belonging to the Winnemem Wintu tribe—90 percent of whose land already lies beneath the surface of the lake—and flood several miles of the McCloud River, protected under National Wild and Scenic River status. Buildings, bridges, roads and other structures would have to be relocated. The added capacity of the reservoir would change flow fluctuations in the lower Sacramento River, and native fish populations, especially salmon, would suffer with the subsequent changes to the ecology of the river.
New Melones Dam has come under even greater controversy than Shasta Dam, mainly because of the project's conflicts with federal and state limits and its impact on the watershed of the Stanislaus River. The original Melones Dam, submerged underneath New Melones Lake (hence the name New Melones Dam) is the source of one of these problems. The disused Melones Dam blocks cold water at the bottom of the lake from reaching the river, especially in dry years when the surface of the lake is closer to the crest of the old dam. This results in the river below the dam attaining a much higher temperature than usual, hurting native fish and wildlife. To solve this problem, Reclamation shuts off operations of the dam's hydroelectric power plant when water levels are drastically low, but this results in power shortages. Originally, after the dam was constructed, the State of California put filling the reservoir on hold because of enormous public opposition to what was being inundated: the limestone canyon behind the dam, the deepest of its kind in the United States, contained hundreds of archaeological and historic sites and one of California's best and most popular whitewater rafting runs. Thus the reservoir extended only to Parrot's Ferry Bridge, below its maximum upriver limit, until the El Niño event of 1982–1983, which filled it to capacity within weeks and even forced Reclamation to open the emergency spillways, prompting the state and federal governments to repeal the limits they had imposed on the reservoir. Furthermore, the project allows a far smaller sustainable water yield than originally expected, and Reclamation calls the dam "a case study of all that can go wrong with a project".
In response to these environmental problems, Congress passed in 1992 the Central Valley Project Improvement Act (CVPIA), Title 34 of Public Law 102-575, to change water management practices in the CVP in order to lessen the ecological impact on the San Joaquin and Sacramento Rivers. Actions mandated included the release of more water to supply rivers and wetlands, funding for habitat restoration work (especially for anadromous fish spawning gravels), water temperature control, water conservation, fish passage, increasing the service area of the CVP's canals, and other items. Despite the preservation of river programs, the state legislature continued to have the power to construct dams.
CVP Government Library
1902-1966 US Bureau of Reclamation Annual Appropriations
1923-1949 US Bureau of Reclamation - Reclamation Era Bulletins - includes monthly reports on projects and highlights
1948 US Bureau of Reclamation Project Reports
1949 CVP Comprehensive Report
1950 CVP Annual Report
1952 US Bureau of Reclamation 50th Anniversary
1955 CVP Annual Report & Highlights
1956 CVP Annual Report & Highlights
1957 CVP Annual Report & Highlights
1958 CVP Annual Report & Highlights
1959 CVP Annual Report & Highlights
1960 CVP Annual Report & Highlights
1961 CVP Annual Report & Highlights
1962 CVP Annual Report & Highlights
1963 CVP Annual Report & Highlights
1964 CVP Annual Report & Highlights
1965 CVP Annual Report & Highlights
1966 CVP Annual Report & Highlights
1967 CVP Annual Report & Highlights
1968 CVP Annual Report & Highlights
1969 CVP Annual Report & Highlights
1970 CVP Annual Report & Highlights
1971 CVP Annual Report & Highlights
1971 US Bureau of Reclamation Annual Report
1972 CVP Annual Report
1950 United States v. Gerlach Live Stock Co., 339 U.S. 725 (1950) Riparian Rights
1958 Ivanhoe Irrig. Dist. v. McCracken, 357 U.S. 275 (1958) 160-acre limitation
1960 Ivanhoe Irrig. Dist. v. All Parties, 53 Cal.2d 692 (1960) irrigation districts contracts
1963 Dugan v. Rank, 372 U.S. 609 (1963) Friant Dam Water Rights
1963 City of Fresno v. State of California, 372 U.S. 627 eminent domain and water rights
1973 Environmental Defense v. Armstrong, 487 F.2d 814 (9th Cir. 1973) New Melones Dam environmental impacts
1976 National Land for the People, Inc. v. Bureau of Reclamation, 417 F. Supp. 449 (D.D.C. 1976). Injunction against DOI land sales
1977 Trinity County v. Andrus, 438 F. Supp. 1368 (E.D. Cal. 1977) drought impacts
1978 California v. United States, 438 U.S. 645 (1978) water distribution and rights
1981 California v. Sierra Club, 451 U.S. 287 (1981) Delta Water quality
1982 United States v. State Water Resources Control Board, 694 F. 2d 1171 (9th Cir. 1982) New Melones water permits
1982 United States v. State of California, 529 F.Supp. 303 (E.D. Cal. 1982) Delta Water Quality Control Plan
1982 Morici Corp. v. United States, 681 F.2d 645 (9th Cir. 1982) Federal immunity claim over crop damages
1983 Westlands Water District v. United States, 700 F.2d 561 (9th Cir. 1983) Environmental impacts and legal intervention
1985 South Delta Water Agency v. United States, 767 F.2d 531 (9th Cir. 1985) South Delta's water rights
1985 SWRCB Water Quality Order No. WQ 85-1 Kesterson Reservoir mitigation
1986 United States v. State Water Resources Control Board, 182 Cal. App. 3d 82 (1986) ("Racanelli Decision") State Water Resources Control Board's Delta water quality plan and Water Rights
1990 Peterson v. United States Dept. of Interior, 899 F.2d 799 (9th Cir. 1990) environmental impacts and water rights
1993 Madera Irr. Dist. v. Hancock, 985 F.2d 1397 (9th Cir. 1993) water contracts
1993 Barcellos and Wolfsen, Inc. v. Westlands Water District, 899 F.2d 814 (9th Cir.1990) subsidized water contracts
1993 Sumner Peck Ranch, Inc. v. Bureau of Reclamation, 823 F.Supp. 715 (E.D. Cal. 1993) environmental impacts
1994 Westlands Water Dist. v. NRDC, 43 F.3d 457 (9th Cir. 1994) environmental impacts
1995 O'Neill v. United States, 50 F.3d 677 (9th Cir. 1995) water contracts
1995 California Trout v. Schaefer, 58 F.3d 469 (9th Cir. 1995) environmental impacts and water contracts
1996 Westlands Water Dist. v. United States, 100 F.3d 94 (9th Cir. 1996) Water contracts
1997 County of San Joaquin v. State Water Resources Control Board, 54 Cal.App.4th 1144 (1997) New Melones water allocations
1998 Natural Resources Defense Council v. Houston, 146 F.3d 1118 (9th Cir. 1998) Endangered Species Act enforcement
1999 Central Green Co. v. United States, 531 U.S. 425 (1999) Friant dam flood liability
2000 Firebaugh Canal Co. et al., v. United States, 203 F.3d 568 (9th Cir. 2000) Kesterson drain
2001 State of California v. United States, 271 F.3d 1377 (Fed. Cir. 2001) Kesterson impacts
2002 Central Delta Water Agency v. United States, 306 F.3d 938 (9th Cir. 2002) New Melones Reservoir intervenor legal standings
2003 Westlands Water District v. United States, 337 F.3d 1092 (9th Cir. 2003) water contracts
2003 Laub v. U.S. Department of the Interior (9th Circuit, 2003) Environmental Impacts
2004 Bay Inst. of San Francisco v. United States (9th Cir., unpublished, 87 Fed. Appx. 637, January 23, 2004) water rights and 1992 CVPIA
2004 Westlands Water District v. U.S. Department of Interior, 376 F. 3d 853 (9th Cir. 2004) Environmental impacts
2005 Orff v. United States, 545 U.S. 596 (2005) Water contracts
2005 Hoopa Valley Indian Tribe v. Ryan, 415 F.3d 986 (9th Cir. 2005) Water contracts
2006 State Water Resources Control Board Cases, 136 Cal.App. 4th 674 (2006) Water rights
2006 Central Delta Water Agency v. Bureau of Reclamation, 452 F.3d 1021 (9th Cir. 2006) water salinity
2007 Stockton East Water District v. United States, 76 Fed. Cl. 321 (2007), amended by 76 Fed. Cl. 470 New Melones Reservoir water contracts
2007 Pacific Coast Federation of Fishermen's Associations v. Gutierrez, U.S. District Court for the Eastern District of California, Case No. 1:06-CV-00245 OWW environmental impacts on salmon
2007 Laub v. Davis, California Supreme Court Case No. S138974; CALFED environmental impacts
2009 NRDC v. Kempthorne, 627 F. Supp. 2d 1212 - Delta Smelt impacts
2010 Consolidated Delta Smelt Cases, 717 F. Supp. 2d 1021 (E.D. Cal. 2010) District Court, E.D. California
2010 San Luis & Delta-Mendota Water Auth. v. Salazar, 760 F. Supp. 2d 855 (E.D. Cal. 2010) water contracts environment
2018 Hoopa Valley Tribe v. National Marine Fisheries, et al. and Yurok Tribe, et al. v. United States Bureau of Reclamation fishing rights
1955 7-6 Report on USBR, for the Fiscal Years Ended June 30, 1952 and 1953
1958 11-18 Report on Acquisition, Leasing, and Disposal of Reclamation Lands, Bureau of Reclamation
1957 12-11 Audit of CVP for the Fiscal Year Ended June 30, 1956
1962 4-26 Revenue-Producing Water Resources Development Projects, USBR and Corps of Engineers, Fiscal Year 1960
1968 10-18 Negotiation of Contracts for Water From the CVP
Congress Should Reevaluate the 160-Acre Limitation on Land Eligible To Receive Federal Water
1973 11-19 CVP's Proposed Power Rate Increase
1974 1-21 Comments on Proposed Power Rate Increase by the USBR's CVP
1974 8-1 Financial Position of the CVP
1977 4-14 Allegations Concerning Westlands Water District
1977 9-2 More and Better Uses Could Be Made of Billions of Gallons of Water by Improving Irrigation Delivery Systems
1977 11-21 Rationale for Power Rates Charged by the CVP to Pacific Gas and Electric Company
1979 3-22 Cotton Production by California Farmers Who Receive Irrigation Water
1981 4-21 Information on the Resale of Water Provided Under Contract by the Federal Government in California
1982 7-18 Obligation of Funds for CVP's for Fiscal Year 1978
1983 6-18 Proposed Pricing of Irrigation Water From CVP's New Melones Reservoir
1983 10-5 Archeological Studies at New Melones Dam in California
1984 1-4 USBR Rates for Electric Power Sales by the CVP
1982 1-18 Information On California Delta Water Quality Standards
1984 5-21 Query Concerning Repayment of O&M Costs Under CVP
1985 9-9 Bureau of Reclamation's CVP Repayment Arrangements
1987 7-17 Kesterson Wildlife Management: National Refuge Contamination Is Difficult To Confirm and Clean Up
1989 10-12 Basic Changes Needed to Avoid Abuse of the 960-Acre Limit
1991 10-21 Changes Needed Before Water Service Contracts Are Renewed
1994 4-18 Impact of Higher Irrigation Rates on CVP Farmers
1994 8-15 Federal Actions to Protect Sacramento River Salmon
2001 5-4 Water Marketing Activities and Costs at the CVP
2007 12-18 Reimbursement of CVP Construction Costs by San Luis Unit Irrigation Water Districts
2014 9-8 USBR: Availability of Information on Repayment of Water Project Construction Costs
2015 6-4 Financial Information for Three California Water Programs
2018 8-16 SF Bay Delta Watershed: Wide Range of Restoration Efforts Need Updated
CVP resources
The U.S. Dept. of Interior's US Bureau of Reclamation is the federal agency that manages the CVP: annual reports, 1995 to present
The U.S. Dept. of Energy's Western Area Power Administration oversees distribution of the CVP's federally produced electricity
The U.S. Army Corps of Engineers manages 17 of the Central Valley Project dams including its dam safety alert system
Licensed Hydroelectric Projects at the Federal Energy Regulatory Commission
The National Oceanic and Atmospheric Administration Central Valley Regional Office monitors the CVP's Endangered Species Act Operations
U.S. Department of Justice - Central Valley project Environment and Natural Resources Division
U.S. Geological Survey's California Central Valley Water Science Center
USGS California Central Valley Groundwater Study Tool
USGS Groundwater Data for California
Central Valley Hydrologic Model: Texture Model
USGS Goose Population Dynamics in the California Central Valley and Pacific Flyway
Central Valley Watershed Monitoring Directory
Findlaw California Water Code Search Engine
The California State Water Project (SWP) is managed by the California Department of Water Resources
Central Valley Flood Protection Plan
Association of California Water Agencies
Directory - Association of California Water Agencies
Sacramento Valley Water Quality Coalition (SVWQC)
Overview of Projected Climate Change in the California Central Valley | California Climate Commons
Regulated Water Utilities in California | California Water Association
The California Reclamation Districts are the legal districts that manage the Central Valley's levees
California Water Districts
Ca. Dept. of Water Resources: Central Valley History
Chronology of Major Litigation Involving the CVP and SWP
The California Sportfishing Protection Alliance's Listen to the River peer review summary
The California Water Plan is the state's official water policy with the latest version completed in 2013
Water in California summarizes the history and details of the state's water policy issues.
California's irrigation districts: 92 public self-governing subdivisions of the State that purchase water from the CVP
Central Valley Ag - CVA
MAVEN'S NOTEBOOK | California Water news
UC Davis: California Water Primer
Mid-Pacific Water Users' Conference
Water Education Foundation
Library of Congress - Central Valley Project
CVP annual construction costs 1935-1959
1945 U.S. Bureau of Reclamation 160-acre Legal analysis
US Bureau of Reclamation Documents - Hathi Trust Digital Library
"The Central valley project" by Federal Writers' Project (U.S.) California, 1942
1956 Congressional Library on authorizing Documents Central Valley Project - Includes detailed timeline
1,600 page investigation of USBR that includes the Reclamation Reform Act of 1979: Hearing Before the Subcommittee on Energy
1984 Information Bulletin #2 U.S. BUREAU of RECLAMATION - KESTERSON RESERVOIR - AND WATERFOWL - Impacts
1986 - The Agreement between the United States of America and the State of California for coordinated operation of the Central Valley Project and the State Water Project
The Grapes of Wrath Movie & book
Cadillac Desert documentary & book
Farmworker movements in California, from the Grange, the IWW and the Wheatland hop riot, and the Braceros to the United Farm Workers
Bitter Harvest, a History of California Farmworkers, 1870-1941 by Cletus E. Daniel
Dorothea Lange Central Valley - PBS Biography
The Great Central Valley Project by Stephen Johnson, Robert Dawson and author Gerald Haslam
The Southern Pacific Railroad, later absorbed into the Union Pacific Railroad, was the Central Valley's largest landowner and played a major role in its evolution, from the Mussel Slough Tragedy to the California Development Company's Salton Sea and its land grabs
California's version of pork barrel politics started with the Owens Valley land and water takings by the city of Los Angeles, chronicled in a PBS documentary series (Part 1) and the movie Chinatown (1974 film)
The Central Valley is also home to one of the country's oldest and largest oil and gas industries, which includes the environmentally controversial use of fracking.
Gallery
See also
CALFED Bay-Delta Program
Cadillac Desert -about the book- and Cadillac Desert (film)
California Department of Water Resources
California Reclamation Districts
California State Water Project
California Water Wars
Droughts in California
Environment of California
Environmental issues in Fresno, California
Rivers and Harbors Act
Sacramento–San Joaquin River Delta
San Joaquin River
Water in California
References
External links
Central Valley Project Operations Office
http://www.sacmetronews.com/2018/02/tribes-fishermen-slam-trump-plan-to.html
Central Valley Project Summary
Central Valley Project Historic Photos
The Central Valley Project: Informational page and slideshow of project facilities, Mavens Notebook
USBR Glossary of Terms
"Food for 70,000,000 – How Engineering Will Aid Nature in California's Central Valley". Popular Science, March 1944, pp. 95–98.
1933 establishments in California
Agriculture in California
Energy infrastructure in California
History of California
Interbasin transfer
Irrigation in the United States
San Joaquin Valley
United States Bureau of Reclamation
The human skeleton is the internal framework of the human body. It is composed of around 270 bones at birth – this total decreases to around 206 bones by adulthood after some bones get fused together. The bone mass in the skeleton makes up about 14% of the total body weight (ca. 10–11 kg for an average person) and reaches maximum mass between the ages of 25 and 30. The human skeleton can be divided into the axial skeleton and the appendicular skeleton. The axial skeleton is formed by the vertebral column, the rib cage, the skull and other associated bones. The appendicular skeleton, which is attached to the axial skeleton, is formed by the shoulder girdle, the pelvic girdle and the bones of the upper and lower limbs.
The human skeleton performs six major functions: support, movement, protection, production of blood cells, storage of minerals, and endocrine regulation.
The human skeleton is not as sexually dimorphic as that of many other primate species, but subtle differences between sexes in the morphology of the skull, dentition, long bones, and pelvis exist. In general, female skeletal elements tend to be smaller and less robust than corresponding male elements within a given population. The human female pelvis is also different from that of males in order to facilitate childbirth. Unlike most primates, human males do not have penile bones.
Divisions
Axial
The axial skeleton (80 bones) is formed by the vertebral column (32–34 bones; the number of the vertebrae differs from human to human as the lower 2 parts, sacral and coccygeal bone may vary in length), a part of the rib cage (12 pairs of ribs and the sternum), and the skull (22 bones and 7 associated bones).
The upright posture of humans is maintained by the axial skeleton, which transmits the weight from the head, the trunk, and the upper extremities down to the lower extremities at the hip joints. The bones of the spine are supported by many ligaments. The erector spinae muscles are also supporting and are useful for balance.
Appendicular
The appendicular skeleton (126 bones) is formed by the pectoral girdles, the upper limbs, the pelvic girdle or pelvis, and the lower limbs. Their functions are to make locomotion possible and to protect the major organs of digestion, excretion and reproduction.
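The axial (80) and appendicular (126) totals can be tallied as a quick arithmetic check. The subdivision counts in the sketch below are standard anatomical figures, not taken from this article; note that the conventional 80-bone axial total counts the fused adult vertebral column as 26 bones, whereas the unfused count of 32–34 cited above is higher.

```python
# Illustrative tally of the conventional adult bone counts.
# Subdivision figures are standard anatomical conventions, given here
# only to show how the totals of 80, 126, and 206 are reached.
axial = {
    "skull": 22,
    "associated bones (hyoid + 6 auditory ossicles)": 7,
    "vertebral column (fused adult count)": 26,
    "ribs": 24,
    "sternum": 1,
}
appendicular = {
    "pectoral girdles (2 clavicles + 2 scapulae)": 4,
    "upper limbs": 60,
    "pelvic girdle (2 hip bones)": 2,
    "lower limbs": 60,
}

axial_total = sum(axial.values())
appendicular_total = sum(appendicular.values())
print(axial_total)                       # 80
print(appendicular_total)                # 126
print(axial_total + appendicular_total)  # 206
```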
Functions
The skeleton serves six major functions: support, movement, protection, production of blood cells, storage of minerals and endocrine regulation.
Support
The skeleton provides the framework which supports the body and maintains its shape. The pelvis, associated ligaments and muscles provide a floor for the pelvic structures. Without the rib cages, costal cartilages, and intercostal muscles, the lungs would collapse.
Movement
The joints between bones allow movement, some allowing a wider range of movement than others, e.g. the ball and socket joint allows a greater range of movement than the pivot joint at the neck. Movement is powered by skeletal muscles, which are attached to the skeleton at various sites on bones. Muscles, bones, and joints provide the principal mechanics for movement, all coordinated by the nervous system.
It is believed that the reduction of human bone density in prehistoric times reduced the agility and dexterity of human movement. Shifting from hunting to agriculture has caused human bone density to reduce significantly.
Protection
The skeleton helps to protect many vital internal organs from being damaged.
The skull protects the brain
The vertebrae protect the spinal cord.
The rib cage, spine, and sternum protect the lungs, heart and major blood vessels.
Blood cell production
The skeleton is the site of haematopoiesis, the development of blood cells that takes place in the bone marrow. In children, haematopoiesis occurs primarily in the marrow of the long bones such as the femur and tibia. In adults, it occurs mainly in the pelvis, cranium, vertebrae, and sternum.
Storage
The bone matrix can store calcium and is involved in calcium metabolism, and bone marrow can store iron in ferritin and is involved in iron metabolism. However, bones are not entirely made of calcium, but a mixture of chondroitin sulfate and hydroxyapatite, the latter making up 70% of a bone. Hydroxyapatite is in turn composed of 39.8% of calcium, 41.4% of oxygen, 18.5% of phosphorus, and 0.2% of hydrogen by mass. Chondroitin sulfate is a sugar made up primarily of oxygen and carbon.
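The two percentages quoted above imply a rough calcium mass fraction for bone as a whole: if hydroxyapatite makes up about 70% of bone and is itself 39.8% calcium, the mineral alone contributes roughly 28% of bone mass as calcium. A minimal sketch of that arithmetic:

```python
# Rough estimate of bone's calcium mass fraction, using only the
# figures quoted above (mineral contribution only; the non-mineral
# 30% of bone is assumed to contribute no calcium).
hydroxyapatite_fraction_of_bone = 0.70
calcium_fraction_of_hydroxyapatite = 0.398

calcium_fraction_of_bone = (
    hydroxyapatite_fraction_of_bone * calcium_fraction_of_hydroxyapatite
)
print(f"{calcium_fraction_of_bone:.1%}")  # 27.9%
```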
Endocrine regulation
Bone cells release a hormone called osteocalcin, which contributes to the regulation of blood sugar (glucose) and fat deposition. Osteocalcin increases both insulin secretion and sensitivity, in addition to boosting the number of insulin-producing cells and reducing stores of fat.
Sex differences
Anatomical differences between human males and females are highly pronounced in some soft tissue areas, but tend to be limited in the skeleton. The human skeleton is not as sexually dimorphic as that of many other primate species, but subtle differences between sexes in the morphology of the skull, dentition, long bones, and pelvis are exhibited across human populations. In general, female skeletal elements tend to be smaller and less robust than corresponding male elements within a given population. It is not known whether or to what extent those differences are genetic or environmental.
Skull
A variety of gross morphological traits of the human skull demonstrate sexual dimorphism, such as the median nuchal line, mastoid processes, supraorbital margin, supraorbital ridge, and the chin.
Dentition
Human inter-sex dental dimorphism centers on the canine teeth, but it is not nearly as pronounced as in the other great apes.
Long bones
Long bones are generally larger in males than in females within a given population. Muscle attachment sites on long bones are often more robust in males than in females, reflecting a difference in overall muscle mass and development between sexes. Sexual dimorphism in the long bones is commonly characterized by morphometric or gross morphological analyses.
Pelvis
The human pelvis exhibits greater sexual dimorphism than other bones, specifically in the size and shape of the pelvic cavity, ilia, greater sciatic notches, and the sub-pubic angle. The Phenice method is commonly used to determine the sex of an unidentified human skeleton by anthropologists with 96% to 100% accuracy in some populations.
Women's pelvises are wider at the pelvic inlet and throughout the pelvis to allow for childbirth. The sacrum in the female pelvis is curved inwards, creating a "funnel" that assists the child's passage from the uterus into the birth canal.
Clinical significance
There are many classified skeletal disorders. One of the most common is osteoporosis. Also common is scoliosis, a side-to-side curve in the back or spine, often creating a pronounced "C" or "S" shape when viewed on an x-ray of the spine. This condition is most apparent during adolescence, and is most common with females.
Arthritis
Arthritis is a disorder of the joints. It involves inflammation of one or more joints. When affected by arthritis, the joint or joints affected may be painful to move, may move in unusual directions or may be immobile completely. The symptoms of arthritis will vary differently between types of arthritis. The most common form of arthritis, osteoarthritis, can affect both the larger and smaller joints of the human skeleton. The cartilage in the affected joints will degrade, soften and wear away. This decreases the mobility of the joints and decreases the space between bones where cartilage should be.
Osteoporosis
Osteoporosis is a disease of bone where there is reduced bone mineral density, increasing the likelihood of fractures. Osteoporosis is defined by the World Health Organization in women as a bone mineral density 2.5 standard deviations below peak bone mass, relative to the age and sex-matched average, as measured by dual energy X-ray absorptiometry, with the term "established osteoporosis" including the presence of a fragility fracture. Osteoporosis is most common in women after menopause, when it is called "postmenopausal osteoporosis", but may develop in men and premenopausal women in the presence of particular hormonal disorders and other chronic diseases or as a result of smoking and medications, specifically glucocorticoids. Osteoporosis usually has no symptoms until a fracture occurs. For this reason, DEXA scans are often done in people with one or more risk factors who may have developed osteoporosis and be at risk of fracture.
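The WHO criterion described above — a bone mineral density 2.5 or more standard deviations below the young-adult reference mean — is usually expressed as a T-score. The sketch below computes and classifies a T-score; the reference mean and standard deviation used are hypothetical illustration values, not published reference data.

```python
# Sketch of the WHO T-score criterion: osteoporosis is a measured bone
# mineral density (BMD) at least 2.5 standard deviations below the
# young-adult reference mean. The cut-offs are the standard WHO
# categories; the reference values below are hypothetical.

def t_score(bmd: float, young_adult_mean: float, young_adult_sd: float) -> float:
    """Standard deviations of a measured BMD from the young-adult mean."""
    return (bmd - young_adult_mean) / young_adult_sd

def classify(t: float) -> str:
    if t <= -2.5:
        return "osteoporosis"
    if t < -1.0:
        return "osteopenia (low bone mass)"
    return "normal"

# Hypothetical femoral-neck reference: mean 0.94 g/cm^2, SD 0.12 g/cm^2.
t = t_score(bmd=0.60, young_adult_mean=0.94, young_adult_sd=0.12)
print(round(t, 2), classify(t))  # -2.83 osteoporosis
```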
Osteoporosis treatment includes advice to stop smoking, decrease alcohol consumption, exercise regularly, and have a healthy diet. Calcium supplements may also be advised, as may vitamin D. When medication is used, it may include bisphosphonates, strontium ranelate, and osteoporosis may be one factor considered when commencing hormone replacement therapy.
History
India
The Sushruta Samhita, composed between the 6th century BCE and the 5th century CE, speaks of 360 bones. Books on Salya-Shastra (surgical science) know of only 300. The text lists the total of 300 as follows: 120 in the extremities (e.g. hands, legs), 117 in the pelvic area, sides, back, abdomen and breast, and 63 in the neck and upwards. The text then explains how these subtotals were empirically verified. The discussion shows that the Indian tradition nurtured diversity of thought, with the Sushruta school reaching its own conclusions and differing from the Atreya-Caraka tradition. The difference in the count of bones between the two schools is partly because the Charaka Samhita includes 32 tooth sockets in its count, and partly because of their differing opinions on how and when to count cartilage as bone (which both sometimes do, unlike modern anatomy).
Hellenistic world
The study of bones in ancient Greece started under the Ptolemaic kings due to their link to Egypt. Herophilos, through his work studying dissected human corpses in Alexandria, is credited as the pioneer of the field. His works are lost but are often cited by notable persons in the field such as Galen and Rufus of Ephesus. Galen himself, though, did little dissection and relied on the work of others like Marinus of Alexandria, as well as his own observations of gladiator cadavers and animals. According to Katherine Park, in medieval Europe dissection continued to be practiced, contrary to the popular understanding that such practices were taboo and thus completely banned. The practice of holy autopsy, as in the case of Clare of Montefalco, further supports the claim. Alexandria continued as a center of anatomy under Islamic rule, with Ibn Zuhr a notable figure. Chinese understandings are divergent, as the closest corresponding concept in the medicinal system seems to be the meridians, although given that Hua Tuo regularly performed surgery, there may be some distance between medical theory and actual understanding.
Renaissance
Leonardo da Vinci made studies of the skeleton, albeit unpublished in his time. Many artists, Antonio del Pollaiuolo being the first, performed dissections for better understanding of the body, although they concentrated mostly on the muscles. Vesalius, regarded as the founder of modern anatomy, authored the book De humani corporis fabrica, which contained many illustrations of the skeleton and other body parts, correcting some theories dating from Galen, such as the lower jaw being a single bone instead of two. Various other figures like Alessandro Achillini also contributed to the further understanding of the skeleton.
18th century
As early as 1797, the death goddess or folk saint known as Santa Muerte has been represented as a skeleton.
See also
List of bones of the human skeleton
Distraction osteogenesis
References
Bibliography
Further reading
Endocrine system
Human anatomy | Human skeleton | Biology | 2,518 |
28,878,050 | https://en.wikipedia.org/wiki/Isoelastic%20function | In mathematical economics, an isoelastic function, sometimes called a constant elasticity function, is a function that exhibits a constant elasticity, i.e. has a constant elasticity coefficient. The elasticity is the ratio of the percentage change in the dependent variable to the causative percentage change in the independent variable, in the limit as the changes approach zero in magnitude.
For an elasticity coefficient r (which can take on any real value), the function's general form is given by
f(x) = k x^r

where k and r are constants. The elasticity is by definition

(x / f(x)) · f′(x) = d(ln f(x)) / d(ln x),
which for this function simply equals r.
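As a quick numerical check, the sketch below estimates the point elasticity of f(x) = k·x^r by a central finite difference and confirms that it is constant in x. The values of k and r are illustrative assumptions, not taken from the text.

```python
def f(x, k=2.0, r=-1.5):
    """Isoelastic function f(x) = k * x**r; k and r are illustrative values."""
    return k * x ** r

def elasticity(func, x, h=1e-6):
    """Point elasticity (x / f(x)) * f'(x), with f' by central difference."""
    deriv = (func(x + h) - func(x - h)) / (2 * h)
    return x * deriv / func(x)

# The estimated elasticity matches r at every point, as the text states.
for x in (0.5, 1.0, 10.0, 250.0):
    print(round(elasticity(f, x), 4))   # each ≈ -1.5
```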
Derivation
Elasticity of demand is indicated by
r = (dQ/dP) · (P/Q),
where r is the elasticity, Q is quantity, and P is price.
Rearranging gives:

dQ/Q = r · (dP/P)

Then integrating both sides:

∫ dQ/Q = r ∫ dP/P
ln Q = r ln P + C

Simplifying, with k = e^C:

Q = k P^r
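The derivation can be checked numerically: stepping the differential equation dQ/dP = r·Q/P forward from a starting point should reproduce the closed-form solution Q = k·P^r. The values of r, k, and the price grid below are illustrative assumptions.

```python
r = -2.0            # elasticity of demand (illustrative)
k = 100.0           # constant of integration, k = e**C (illustrative)
p0, p1, n = 1.0, 4.0, 100_000

# Forward-Euler integration of dQ/dP = r * Q / P from (p0, k * p0**r).
dp = (p1 - p0) / n
p, q = p0, k * p0 ** r
for _ in range(n):
    q += r * q / p * dp
    p += dp

closed_form = k * p1 ** r
print(q, closed_form)   # both ≈ 6.25
```

The numerically integrated quantity agrees with the closed form to within the Euler step error, confirming the algebra above.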
Examples
Demand functions
An example in microeconomics is the constant elasticity demand function, in which p is the price of a product and D(p) is the resulting quantity demanded by consumers. For most goods the elasticity r (the responsiveness of quantity demanded to price) is negative, so it can be convenient to write the constant elasticity demand function with a negative sign on the exponent, in order for the coefficient to take on a positive value:

D(p) = a p^(−r)

where a is a positive constant and r is now interpreted as the unsigned magnitude of the responsiveness.
An analogous function exists for the supply curve.
Utility functions in the presence of risk
The constant elasticity function is also used in the theory of choice under risk aversion, which usually assumes that risk-averse decision-makers maximize the expected value of a concave von Neumann-Morgenstern utility function. In this context, with a constant elasticity of utility with respect to, say, wealth, optimal decisions on such things as shares of stocks in a portfolio are independent of the scale of the decision-maker's wealth. The constant elasticity utility function in this context is generally written as
u(x) = (x^(1−η) − 1) / (1 − η)

where x is wealth and η is the elasticity, with η > 0, η ≠ 1 referred to as the constant coefficient of relative risk aversion (with risk aversion approaching infinity as η → ∞); the limiting case η = 1 corresponds to logarithmic utility, u(x) = ln x.
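A small sketch can verify that for this utility function the coefficient of relative risk aversion, −x·u″(x)/u′(x), is indeed constant across wealth levels. The choice η = 3.0 and the wealth levels tested are illustrative assumptions.

```python
import math

def u(x, eta=3.0):
    """Isoelastic (CRRA) utility; eta = 3.0 is an illustrative choice."""
    if eta == 1.0:
        return math.log(x)       # limiting case as eta -> 1
    return (x ** (1.0 - eta) - 1.0) / (1.0 - eta)

def relative_risk_aversion(x, eta=3.0):
    """Estimate -x * u''(x) / u'(x) with central differences."""
    h = 1e-4 * x                 # step scaled to the wealth level
    u1 = (u(x + h, eta) - u(x - h, eta)) / (2 * h)
    u2 = (u(x + h, eta) - 2 * u(x, eta) + u(x - h, eta)) / h ** 2
    return -x * u2 / u1

# Relative risk aversion is the same at every wealth level, which is why
# optimal portfolio shares are independent of the scale of wealth.
for wealth in (1.0, 10.0, 100.0):
    print(round(relative_risk_aversion(wealth), 3))   # each ≈ 3.0
```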
See also
Constant elasticity of substitution
Power function
References
External links
Constant Elasticity Demand and Supply Curves
Mathematical economics | Isoelastic function | Mathematics | 442 |
65,279,162 | https://en.wikipedia.org/wiki/Console%20war | In the video game industry, a console war describes the competition between two or more video game console manufacturers trying to achieve better consumer sales through more advanced console technology, an improved selection of video games, and general marketing around their consoles. While console manufacturers are generally always trying to out-perform other manufacturers in sales, console wars involve more direct tactics, comparing offerings head-to-head against competitors' or disparaging the competition in contrast to one's own, and thus the marketing efforts have tended to escalate in back-and-forth pushes.
While there have been many console wars to date, the term became popular between Sega and Nintendo during the late 1980s and early 1990s as Sega attempted to break into the United States video game market with its Sega Genesis console. Through a novel marketing approach and improved hardware, Sega had been able to gain a majority of the video game console market by 1991, three years after the Genesis’ launch. This caused back and forth competition between the two companies throughout the early 1990s. However, Nintendo eventually regained its market share and Sega stopped making home console hardware by 2001.
Background and etymology
The video game console market started in 1972 with the release of the first home console, the Magnavox Odyssey. As more manufacturers entered the market and technology improved, the market began to coalesce around releases of more advanced hardware every few years on a predictable cycle, which are typically grouped into generations. Since 1972, there have been nine console generations, with two to three dominant manufacturers controlling the marketplace.
As with most industries without a single dominant leader, console manufacturers have marketed their products to highlight them more favorably than their competitors', or to focus on features that their competitors lack, often aggressively. For example, console manufacturers in the 1980s and 1990s heavily relied on the word size of the central processing unit, emphasizing that games had better capabilities with 16-bit processors over 8-bit ones. This type of aggressive marketing led video game journalists to call the competitive marketing a "war" or "battle" as early as August 1988. As each new console generation emerged with new marketing approaches, journalists and consumers continued to use variations of the "war" language, including "system wars" and "console wars". By the early 2000s, the term "console war" was most commonly used to describe heated competition between console manufacturers within any generation.
Nintendo versus Sega
While not the only console war, the rivalry between Sega and Nintendo for dominance of the North American video game market in the late 1980s and early 1990s is generally the most visible example of a console war. It established the use of aggressive marketing and advertising tactics by each company to try to gain control of the marketplace, and ended around 1995 when a new player, Sony, entered and disrupted the console space.
Background
The United States video game industry suffered a severe market crash in 1983 from numerous factors which led to a larger market recession and increasing popularity of personal computers as a video game platform. A key contributing factor to the crash was the loss of publishing control for console games. Early success by some of the first third-party developers like Activision for the Atari VCS console led to venture capitalists bringing in teams of inexperienced programmers to try to capture the same success, but only managed to flood the market with poor quality games, which made it difficult for good quality games to sell. The video game crash impacted other factors in the industry that were already in decline, such as video game arcades.
In Japan, Nintendo had released its Famicom (Family Computer) console in 1983, one of the first consoles of the third generation. Japan did not have a similar third-party development system in place, and Nintendo maintained control on the manufacturing of game cartridges for the Famicom using a licensing model to limit which third-party games were published on it. Nintendo looked to release the unit in the United States, but recognized that the market was still struggling from the 1983 crash. Nintendo took several steps to redesign the Famicom prior to a United States launch. It was made to look like a VCR unit rather than a console, and was given the name the "Nintendo Entertainment System" to distance it from being a video game console. Further, Nintendo added a special 10NES lockout system that worked as a lock-and-key system with game cartridges to further prevent unauthorized games from being published for the system and avoid the loss of publishing control that had caused the 1983 crash. The NES revitalized the U.S. video game industry and established Nintendo as the dominant name in video game consoles over Atari. In lifetime sales, the NES had sold nearly 62 million units worldwide, with 34 million in North America.
At the same time, Sega was looking to get into the video game console industry as well, having been a successful arcade game manufacturer, but due to the downturn in arcade game business, looked to use that expertise for the home market. They released the SG-1000 console in Japan the same day as the Famicom in 1983, but sold only 160,000 units of the SG-1000 in its first year.
Sega redesigned the SG-1000 twice to try to build a system to challenge Nintendo's dominance; the SG-1000 Mark II remained compatible with the SG-1000 but failed to gain any further sales. The next iteration, the Sega Mark III, was released in 1985, using Sega's arcade hardware for its internals to provide more refined graphics. The console was slightly more powerful than the Famicom, and Sega's marketing attempted to push on the more advanced graphics their system offered over the Famicom. Sega attempted to follow Nintendo with a worldwide release of the Mark III, rebranded as the Master System. The Master System was released in the United States in 1986, but Nintendo of America developed a licensing plan in the U.S. to keep developers exclusive to the NES, limiting the library of games that Sega could offer and to also ensure that another gaming crash didn't begin. Further, Sega's third-party distributor, the toy company Tonka, opted against localizing several of the Japanese games Sega had created, further capping the game library Sega could offer in the U.S. Only a total estimated two million systems were sold.
Entering the United States' market
The fourth generation of video game consoles was started by the launch of NEC's PC Engine in 1987 in Japan. While the PC Engine used an 8-bit CPU, it included 16-bit graphic rendering components, and NEC marketed this heavily as a 16-bit game console to distinguish it from the Famicom and Mark III; when NEC brought the PC Engine worldwide, it was rebranded as the "TurboGrafx-16" to emphasize this. After the release of the TurboGrafx-16, use of the bit designation caught on, which led manufacturers to focus their advertising heavily on the number of bits in a console system for the next two console generations.
NEC was one of many competitors to Sega and Nintendo. Following a similar path as it had with the Mark III, Sega took its arcade game technology, now using 16-bit processor boards, and adapted it into a home console, released in Japan in October 1988 as the Mega Drive. The Mega Drive was designed to look more mature and less like a toy than the Famicom, to appeal to an older demographic of gamers, and "16-bit" was emblazoned on the console's case to emphasize this feature. While the system was positively received by gaming magazines like Famitsu, it was overshadowed by the release a week prior of Super Mario Bros. 3 for the Famicom.
As with the Master System, Sega also planned for a major push of the Mega Drive into the United States to challenge Nintendo's dominance among other markets, with the unit rebranded as the Sega Genesis. Sega was dissatisfied with Tonka's handling of the Master System and so sought a new partner through the Atari Corporation led by Jack Tramiel. Tramiel was bullish on the Genesis due to its cost, and turned down the offer, instead focusing more on the company's computer offerings. Sega instead used its dormant Sega of America branch to run a limited launch of the console in August 1989 in test markets of New York City and Los Angeles, with its launch system being bundled with the port of the arcade game Altered Beast.
In October 1989, the company named former Atari Entertainment Electronics Division president Michael Katz as CEO of Sega of America to implement a marketing strategy for a nation-wide push of the Genesis with a target of one million consoles. Katz used a two-prong strategy to challenge Nintendo. The first was to stress the arcade-like capabilities of the Genesis with the capabilities of games like Altered Beast compared to the simpler 8-bit graphics of the NES, and devising slogans such as "Genesis does what Nintendon't." Katz also observed that Nintendo still held most of the rights to arcade game ports for the NES, so the second part of his strategy was to work with the Japanese headquarters of Sega to pay celebrities for their naming rights for games like Pat Riley Basketball, Arnold Palmer Golf, Joe Montana Football, and Michael Jackson's Moonwalker.
Most of these games were developed by Sega's Japanese programmers, though notably, Joe Montana Football had originally been developed by Mediagenic, the new name for Activision after it had become more involved in publishing and business application development alongside games. Mediagenic had started a football game which Katz wanted to brand under Joe Montana's name, but unknown to Katz at the time, the game was only partially finished due to internal strife at Mediagenic. After the deal had been completed and Katz learned of this, he took the game to Electronic Arts. Electronic Arts had already made itself a significant force in the industry as they had been able to reverse engineer the cartridge format for both the NES and the Genesis, though Electronic Arts' CEO Trip Hawkins felt it was better for the company to develop for the Genesis. Electronic Arts used their reverse engineering knowledge as part of their negotiations with Sega to secure a freer licensing contract to develop openly on the Genesis, which proved beneficial for both companies. At the time Katz had secured Mediagenic's Joe Montana football, Electronic Arts was working on its John Madden Football series for personal computers. Electronic Arts was able to help bring Joe Montana Football, more as an arcade title compared to the strategic John Madden Football, to reality, as well as bringing John Madden Football over as a Genesis title.
The second push in 1991
The Genesis still struggled in the United States against Nintendo, and only sold about 500,000 units by mid-1990. Nintendo had released Super Mario Bros. 3 in February 1990 which further drove sales away from Sega's system. Nintendo themselves did not seem to be affected by either Sega's or NEC's entry into the console market. Sega's president Hayao Nakayama wanted the company to develop an iconic mascot character and build a game around it as one means to challenge Nintendo's own Mario mascot. Company artist Naoto Ohshima came up with the concept of Sonic the Hedgehog, a fast anthropomorphic character with an "attitude" that would appeal to teenagers and incorporating the blue color of Sega's logo, and Yuji Naka helped to develop the game Sonic the Hedgehog to showcase the character as well as the graphics and processing speed of the Genesis. The game was ready by early 1991 and launched in North America in June 1991.
Separately, Sega fired Katz and replaced him with Tom Kalinske as Sega of America's new CEO in mid-1990. Kalinske had been president of Mattel and did not have much experience in video games, but recognized the razor and blades model, and developed a new strategy for Sega's push to challenge Nintendo's dominance in America with four key decisions, which included cutting the price of the Genesis from to , and continuing the same aggressive marketing campaigns to make the Genesis look "cool" next to the NES and Nintendo's upcoming Super Nintendo Entertainment System (SNES). Further, Kalinske pushed hard for American developers like Electronic Arts to create games on the Genesis that would better fit American preferences, particularly sports simulation games, which the console had gained a reputation for. Finally, Kalinske insisted on making Sonic the Hedgehog the bundled game on the system following its release in June 1991, replacing Altered Beast and even offering those who had purchased a Genesis with Altered Beast a trade-in replacement for Sonic.
Under Kalinske, Sega also revamped their advertising approach, aiming for more of a young adult audience, as Nintendo still was positioning the SNES as a child-friendly console. Advertising focused on Sonic, the edgier games in the Genesis library, and its larger library of sports games which appealed to this group. Television ads for the Genesis and its games ended with the "Sega Scream" – a character shouting the name "Sega" to the camera in the final shot – which also caught on quickly.
These changes, all predating the SNES's planned North American release in September 1991, gave Sega its first gain on Nintendo in the U.S. market. Further, the price cut to made the Genesis a cheaper option than the planned price for the SNES, leading many families to purchase the Genesis instead of waiting for the SNES. The Genesis had a larger library of games for the U.S., with over 150 titles by the time the SNES launched alongside eight games, and Sega continued to push out titles that drew continuous press throughout the year, whereas the SNES's library was generally carried by flagship Mario and Zelda games that only came out about once a year, which further made the Genesis a more desirable option.
Up until 1991, Nintendo had been passive towards Sega's approach in North America, but as the SNES launch approached, the company recognized that it was losing ground. The company shifted its advertising in North America to focus on the more advanced features of the SNES that were not present in the Genesis, such as its Mode 7 graphics mode for creating simulated 3D perspective effects. When the SNES launched, this was most prominently seen in the release of F-Zero, where Mode 7 made the game look more complex compared to earlier third-person racing games on home consoles, while Pilotwings used Mode 7 to better simulate landings after players completed the other objectives in a level. The initial shipment of one million SNES units sold out quickly and a total of 3.4 million SNES units were sold by the end of 1991, a record for a new console launch, but the Genesis maintained strong sales against the SNES. The Genesis's resilience against the SNES led several of Nintendo's third-party developers to break their exclusive development agreements with Nintendo and seek out licenses to also develop for the Genesis. These developers included Acclaim, Konami, Tecmo, Taito, and Capcom, the last of which arranged a special licensing mechanism with Sega that allowed it to publish select titles exclusively for the Genesis.
During this period, the push for marketing by both Nintendo and Sega led to the growth of video game magazines. Nintendo had already established Nintendo Power in 1988 in part to serve as a help guide for players on its popular titles, and was able to use this further to advertise the SNES and upcoming games. Numerous other titles grew in the late 1980s and early 1990s, giving Sega the opportunity to market its games heavily in these publications.
The war escalates in 1992 and 1993
Nintendo publicly acknowledged that it was no longer in the dominant position in the console market by 1992. A year into the SNES's release, the SNES's price was lowered to to match the Genesis, to which Sega responded by reducing the Genesis to shortly after. The SNES was helped by Capcom's decision to keep the home port of its popular brawler arcade game Street Fighter II: The World Warrior exclusive to the SNES when it was released in June 1992. Nintendo also experimented with including processing chips within game cartridges to augment the power of the SNES, with the Super FX chip bringing real-time 3D rendering, first used in Star Fox. While the SNES outsold the Genesis in the U.S. in 1992, the Genesis still had a larger install base.
The success of Street Fighter II both as an arcade game and as a home console title led to the growth of the fighting game genre, and numerous variations from other developers followed. Of significant interest was Midway's Mortal Kombat, released to arcades in 1992. Compared to most other fighting games at the time, Mortal Kombat was much more violent. The game showed combatants’ blood splatter during combat and allowed players to end matches in graphically intense "fatalities.” Because of its controversial style and gameplay, the game proved extremely popular in arcades.
By 1993, both Nintendo and Sega recognized the need to have Mortal Kombat on their consoles. However, Nintendo, fearing issues with the game's violence, licensed a "clean" version of the game from Acclaim for the SNES, which replaced the blood splatter with sweat and removed the aforementioned fatalities. Sega also licensed a censored version of the game for the Genesis; however, players could enter a cheat code that reverted the game to its original arcade version. Both home versions were released in September, and approximately 6.5 million units were sold over the game's lifetime, but the Genesis version was far more popular, with three to five times the sales of its SNES counterpart.
The popularity of the home console version of Mortal Kombat, coupled with other moral panics in the early 1990s, led to concerns from parents, activists and lawmakers in the United States, leading up to the 1993 congressional hearings on video games, first held in December. Led by Senators Joe Lieberman and Herb Kohl, the Senate Committees on Governmental Affairs and the Judiciary brought in several of the video game industry leaders, including Howard Lincoln, vice president of Nintendo of America, and Bill White, vice president of Sega of America, to discuss the way they marketed games like Mortal Kombat and Night Trap on consoles to children. Lincoln and White accused each other's companies of creating the issue at hand. Lincoln stated that Nintendo had taken a curated approach to selecting games for its consoles, and that violent games had no place in the market. White responded that Sega purposely targeted an older audience than Nintendo, and had created a ratings system for its games that it had been trying to encourage the rest of the industry to use; further, despite Nintendo's oversight, White pointed out that there were still many Nintendo titles that incorporated violence. With neither Lincoln nor White giving much ground, Lieberman concluded the first hearing with a warning that the industry needed to come together with some means to regulate video games or else Congress would pass laws to do it for them.
By the time of the second hearing in March 1994, the industry had come together to form the Interactive Digital Software Association (today the Entertainment Software Association) and were working to establish the Entertainment Software Rating Board (ESRB), a ratings panel, which ultimately was introduced by September 1994. Despite Sega offering its ratings system as a starting point, Nintendo refused to work with that as they still saw Sega as their rival, requiring a wholly new system to be created. The ESRB eventually established a form modelled off the Motion Picture Association of America (MPAA)'s rating system for film, and the committee was satisfied with the proposed system and allowed the video game industry to continue without further regulations.
The arrival of Sony and the end of the war
In 1994 and 1995, there was a contraction in the video game industry, with the NPD Group reporting a 17% and 19% year-to-year drop in revenue. While Sega had been outperforming Nintendo in 1993, it still carried corporate debt, while Nintendo remained debt-free from having a more dominant position in the worldwide market, having beaten Sega in the North American market and won the 16-bit console war. To continue to fight Nintendo, Sega's next console was the Sega Saturn, first released in November 1994 in Japan. It brought in the 3D polygonal graphics technology used by Sega's arcade games, and launch titles featured home versions of these arcade games, including Virtua Fighter. While Virtua Fighter was not a pack-in game, sales of the title were nearly 1:1 with the console in Japan. Sega, recognizing that it had numerous consoles with disparate games it was now trying to support, decided to put most of its attention onto the Saturn line going forward, dropping support for the Genesis despite its sales still being strong in the United States at the time.
At the same time, a new competitor in the console marketplace emerged, Sony Computer Entertainment, with the introduction of the PlayStation in December 1994. The PlayStation moved away from cartridges and took advantage of nascent CD-ROM technology for game distribution, allowing much more data to be stored on each disc and reducing the costs for reproduction. Nintendo had worked with Sony on a prototype add-on for the SNES, the Super NES CD-ROM, that would allow it to read CD-ROMs, but the project was terminated by 1992 after Nintendo revealed it opted to start working with Philips and its own optical disc technology, while Sony used their development towards the PlayStation. Sega, aware of Sony's potential competition in Japan, made sure to have enough Saturns ready for sale on the day the PlayStation first shipped as to overwhelm Sony's offering.
Both Sega and Sony turned to move these units to the North American market. With the formation of the ISDA, a new North American tradeshow, the Electronic Entertainment Expo (E3) was created in 1995 to focus on video games, to distinguish it from the Consumer Electronics Show (CES), which covered all home electronics. Nintendo, Sega and Sony gave their full support to E3 in 1995. Sega believed they had the stronger position going into E3 over Sony, as gaming publications, comparing the Saturn to the PlayStation, rated the Saturn as the better system. At the first E3 in May 1995, Sega's Kalinske premiered the North American version of the Saturn, announced its various features and its selling price of , and said that while it would officially launch that same day, they had already sent a number of systems to selected vendors for sale. Sony's Olaf Olafsson of Sony Electronic Publishing began to cover the PlayStation features, then invited Steve Race, president of Sony Computer Entertainment America to the stage. Race stated the launch price of the PlayStation, "", and then left to "thunderous applause". The surprise price cut caught Sega off-guard, and, in addition to several stores pulling Sega from their lineup due to being shunned from early Saturn sales, the higher price point made it more difficult for them to sell the system. As a result of this strategy by Sony, future E3s became a battleground for other console wars, with journalists judging the various hardware manufacturers' presentations to determine which one had the most successful pitches.
When the PlayStation officially launched in the United States in September 1995, its sales over the first two days exceeded what the Saturn had sold over the prior five months. Because Sega had invested heavily in the Saturn's future, Sony's competition drastically hurt the company's finances.
Nintendo, for its part, bypassed 32-bit CPUs; its next offering was instead the Nintendo 64, a 64-bit console first released in June 1996. While this gave it powerful capabilities such as 3D graphics to keep up with and surpass those on the Saturn and PlayStation, it was still a cartridge-based system, limiting how much information could be stored for each game. This decision ultimately cost Nintendo Squaresoft, which moved its popular Final Fantasy series over to the PlayStation line to take advantage of the larger space on optical media. The first PlayStation game in the series, Final Fantasy VII, drove sales of the PlayStation, further weakening Nintendo's position and driving Sega further out of the market.
By this point, the console war between Nintendo and Sega had evaporated, with both companies now facing Sony as their rival. Sega made one more console, the Dreamcast, which had a number of innovative features including a built-in modem for online connectivity, but the console's lifespan was short-lived in part due to the success of Sony's next product, the PlayStation 2, currently being the best-selling home console of all time. Sega left the home console hardware business in 2001 to focus on software development and licensing. Nintendo remains a key player in the home console business, but more recently has taken a "blue ocean strategy" approach to avoid competing directly with Sony or Microsoft on a feature-for-feature basis with consoles like the Wii, Nintendo DS, and Nintendo Switch.
Legacy
The Sega/Nintendo console war is the subject of the non-fiction book Console Wars by Blake Harris in 2014, as well as a 2020 film adaptation/documentary of the book.
Sega and Nintendo have since collaborated on various software titles. Sega has developed a biennial Mario & Sonic at the Olympics series of sports games based on the Summer and Winter Olympics since 2008 featuring characters from both the Super Mario and Sonic series, while Nintendo has developed the Super Smash Bros. crossover fighter series for numerous Nintendo properties that has included Sonic as a playable character along with other Sonic characters in supporting roles since Super Smash Bros. Brawl.
Sony versus Microsoft
Background
Since the sixth generation, both Sony and Microsoft have been direct competitors for home consoles. Since 2000, both companies have released a new console model within a year of each other with roughly comparable specifications. While Nintendo has also remained a significant competitor to both companies, its development and marketing strategy, built on the "blue ocean" approach, is considered so fundamentally different from Sony's or Microsoft's that it is usually not regarded as a major participant in the console war.
Initial challenge from Microsoft
Microsoft entered the console market with the Xbox in 2001 specifically because it saw Sony's PlayStation 2 as a potential competitor to the home computer for the role of ubiquitous living-room device. Whereas the PlayStation 2 was developed from mostly custom components, Microsoft approached the Xbox as a highly refined personal computer based on Microsoft Windows and DirectX technology. The original Xbox did not compete well against the PlayStation 2, selling only about 24 million units worldwide against the PlayStation 2's 155 million, with Microsoft reportedly failing to profit on the console hardware. Nonetheless, Microsoft, satisfied with the Xbox's overall performance, reaffirmed its commitment to the console marketplace with the reveal of the Xbox 360 in 2005.
Xbox 360 vs PlayStation 3
Microsoft was able to take lessons learned from the first Xbox into its second model, the Xbox 360, released in 2005, ahead of Sony's release of the PlayStation 3 in 2006. Besides the earlier release and improved design, Microsoft had secured more first-party developers in its Microsoft Game Studios, mimicking Sony's own first-party developers and other third-party developers for several console exclusives. The PlayStation 3, on the other hand, had fewer exclusives at launch and was hampered by a higher launch price, giving the Xbox 360 an edge in the first years of release. Both consoles aimed to include multimedia features such as high-definition movie playback. One miscue by Microsoft was backing the HD DVD standard for movie playback over the Blu-ray standard that Sony had selected; shortly after the Xbox 360's release, the movie industry standardized on Blu-ray. The Xbox 360 also suffered from the "Red Ring of Death", a hardware fault on a large fraction of retail models that cost Microsoft a substantial sum in repairs over the console's lifetime.
Both consoles were challenged by Nintendo's Wii and specifically its novel Wiimote motion-sensing device. To compete, both Microsoft and Sony released their motion-sensing systems, the Kinect and PlayStation Move, respectively, for their consoles. The companies also released console refreshes mid-generation. Microsoft released a low-cost Xbox 360 S, which shipped with less internal storage space, as well as a high-end Xbox 360 E, which shipped with more storage space and the Kinect sensor. Sony released two different Slim models of the PlayStation 3 that reduced the system size and subsequent retail price which helped improve sales. Ultimately, the Xbox 360 sold an estimated 84 million units, based on industry estimates as Microsoft stopped reporting its sales, while the PlayStation 3 sold 87 million units; the Wii comparatively sold over 101 million units.
Xbox One vs PlayStation 4
Sony and Microsoft both released their next consoles, the PlayStation 4 and the Xbox One, in 2013. Sony considered the difficulties developers had with the custom instruction set of the PlayStation 3's Cell processor and restructured the PlayStation 4 around the more standard x86 instruction set used by most personal computers, helping to bring console development into convergence with computer systems. Microsoft initially wanted to position the Xbox One as a replacement for the cable box in the living room, a single source of entertainment with features aimed at television viewing in addition to gaming. To achieve this, the Xbox One was to ship with Kinect and was to use an always-on Internet connection to enable numerous features, such as the ability to share games with other family members. However, when these features were first promoted, there was a heavy backlash from journalists and consumers, who considered them unnecessary and privacy-invading. Microsoft had to pull many of these features from the Xbox One before launch, eliminating the always-connected requirement and the need to use Kinect. Sony took the opportunity in its PlayStation 4 marketing to play off Microsoft's missteps, demonstrating the simplicity of game sharing by simply passing the physical media to another person, and emphasizing its lower price point. While Microsoft was able to course-correct the Xbox One after launch, Sony had gained enough ground with the capabilities of the PlayStation 4 along with a strong library of console-exclusive titles, and the PlayStation 4 outsold the Xbox One, 117 million units to 52 million units.
The Xbox One was ultimately the more expensive of the two; even so, both consoles' prices were high compared to the historical console market, setting a trend for ever more expensive consoles.
Xbox Series X|S vs PlayStation 5
Both companies released their next consoles in 2020: the PlayStation 5 and the Xbox Series X and Series S. Both console families represent technology improvements with similar target specifications, including high resolutions and framerates, high-speed internal storage, and backward compatibility with earlier systems. More recently, Microsoft has expanded its game offerings beyond consoles, with services such as Xbox Game Pass and the xCloud game streaming service, so as to move away from a console war mentality. Phil Spencer, head of Xbox for Microsoft, stated that they see Xbox in competition with Netflix and other online streaming services vying for entertainment options, rather than with Sony. Similarly, Sony has placed a stronger focus on streaming services; for example, in 2020 it launched a "media remote", advertising "effortless control of a wide range of blockbuster entertainment on the PS5".
With Microsoft's acquisition of ZeniMax Media in 2021 and of Activision Blizzard in 2023, the potential for escalation in the Sony/Microsoft console war grew, as Microsoft could potentially make Bethesda Softworks' and Activision Blizzard's games exclusive to the Xbox line. Microsoft's potential ownership of the Call of Duty series became a focus of Sony's concerns about the acquisition. While Microsoft gave Sony a written commitment to keep the Call of Duty series on PlayStation consoles for several years, Sony expressed concern that this was not adequate and that Microsoft would make the series Xbox-exclusive after that period. As regulatory agencies considered these positions, Microsoft stated that it had been losing the console war against Sony, having always been in a weaker sales position against the PlayStation 5.
Other console wars
Atari versus Intellivision
Following the release of the Atari 2600 in 1977, Mattel sought to enter the console market, and released the Intellivision in 1979. The console was designed for improved graphics and other features compared to the Atari 2600, a factor that dominated the marketing campaign for Intellivision. Intellivision's launch included a number of sports games, using licenses from the major sports leagues, and included an advertising campaign with sports writer George Plimpton. Mattel also focused on hardware accessories for the console, like a keyboard for programming. While the Atari 2600 sold an estimated 30 million consoles, the Intellivision sold around 5 million units and was considered the primary competitor to Atari in the second generation of video game consoles.
In the following years, Mattel sought to expand the Intellivision line, releasing the Intellivision II in 1983 and with development of an Intellivision III starting in 1982. Atari released its successor to the 2600, the Atari 5200, in 1982 to compete with the Intellivision. The console war between Atari and Intellivision was shaken up by the arrival of Coleco's ColecoVision in 1982, which was a further technological improvement over both Atari and Intellivision. Both Atari and Mattel suffered significant financial losses in the video game crash of 1983. Atari would scale back its video game efforts in the years that followed, while Mattel sold off the Intellivision brand in 1984.
After decades in which the intellectual property of both Atari and Mattel shifted across different owners, Atari SA acquired the Intellivision brand and the rights to over 200 games from its systems in May 2024, which Atari SA jokingly stated had put an end to the decades-long console war.
1990s handheld consoles
A number of major handheld consoles were released on the market within about a year of each other: Nintendo's Game Boy, Sega's Game Gear, and the Atari Lynx. While the Game Boy used a monochromatic display, both the Game Gear and the Lynx had colour displays. As these handheld releases coincided with the Sega v. Nintendo console war, they were also subject to heavy marketing and advertising to try to draw consumers. However, the Game Boy ultimately won out in this battle, selling over 118 million units over its lifetime (including its later revisions) compared to 10 million for the Game Gear and 3 million for the Lynx. The Game Boy initially sold for considerably less than its competitors, and had a larger library of games, including what is considered the handheld's killer app, Tetris, which drew non-gamers to purchase the handheld to play it.
Modern handheld consoles
Nintendo's DS, 3DS, and Switch each faced competitors in the form of Sony's PlayStation Portable (PSP) and PS Vita, and the Steam Deck from Valve. In each instance, Nintendo managed to get higher sales despite having the weaker hardware.
In video games
The Hyperdimension Neptunia series of video games started as a parody of the console wars, incorporating personified consoles, developers, consumers, and other such figures within the gaming industry.
See also
Browser wars
Format war
Smartphone patent wars
References
History of video games
Business rivalries
List of number fields with class number one

This is an incomplete list of number fields with class number 1.
It is believed that there are infinitely many such number fields, but this has not been proven.
Definition
The class number of a number field is by definition the order of the ideal class group of its ring of integers.
Thus, a number field has class number 1 if and only if its ring of integers is a principal ideal domain (and thus a unique factorization domain). By the fundamental theorem of arithmetic, the ring of integers Z of the rational field Q is a unique factorization domain, so Q has class number 1.
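For contrast, a standard example of a field with class number greater than 1 (a well-known fact, included here only as an illustration) is Q(√−5): in its ring of integers Z[√−5], the number 6 factors in two essentially different ways,

```latex
6 \;=\; 2 \cdot 3 \;=\; \bigl(1+\sqrt{-5}\bigr)\bigl(1-\sqrt{-5}\bigr),
```

and none of the four factors can be split further, since the norm N(a + b√−5) = a² + 5b² never takes the values 2 or 3. Hence Z[√−5] is not a unique factorization domain, and Q(√−5) has class number greater than 1 (in fact, 2).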
Quadratic number fields
These are of the form K = Q(√d), for a square-free integer d.
Real quadratic fields
K is called real quadratic if d > 0. K has class number 1 for the following values of d:
2*, 3, 5*, 6, 7, 11, 13*, 14, 17*, 19, 21, 22, 23, 29*, 31, 33, 37*, 38, 41*, 43, 46, 47, 53*, 57, 59, 61*, 62, 67, 69, 71, 73*, 77, 83, 86, 89*, 93, 94, 97*, ...
(complete until d = 100)
*: The narrow class number is also 1 (see related sequence A003655 in OEIS).
Despite what would appear to be the case for these small values, not all prime numbers that are congruent to 1 modulo 4 appear on this list; notably, the fields Q(√d) for d = 229 and d = 257 both have class number greater than 1 (in fact equal to 3 in both cases). The density of such primes for which Q(√d) does have class number 1 is conjectured to be nonzero, and in fact close to 76%; however, it is not even known whether there are infinitely many real quadratic fields with class number 1.
Imaginary quadratic fields
K has class number 1 exactly for the 9 following negative values of d:
−1, −2, −3, −7, −11, −19, −43, −67, −163.
(By definition, these also all have narrow class number 1.)
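This finiteness statement (the Baker–Heegner–Stark theorem) is easy to check numerically for any particular discriminant: for a negative discriminant D, the class number equals the number of reduced, primitive binary quadratic forms (a, b, c) with b² − 4ac = D. A minimal counting sketch in Python (the function name is our own; the nine values of d above correspond to the field discriminants −3, −4, −7, −8, −11, −19, −43, −67, −163):

```python
from math import gcd, isqrt

def class_number(D):
    """Class number h(D) for a negative discriminant D (D ≡ 0 or 1 mod 4),
    counted as the number of reduced primitive forms a*x^2 + b*x*y + c*y^2."""
    assert D < 0 and D % 4 in (0, 1)
    h = 0
    # A reduced form satisfies |b| <= a <= c, which forces b^2 <= |D| / 3.
    for b in range(abs(D) % 2, isqrt(-D // 3) + 1, 2):
        ac = (b * b - D) // 4            # a * c is determined by b and D
        for a in range(max(b, 1), isqrt(ac) + 1):
            if ac % a:
                continue
            c = ac // a
            if gcd(gcd(a, b), c) == 1:   # primitive form (a, b, c)
                h += 1
                if 0 < b < a < c:        # (a, -b, c) is a distinct reduced form
                    h += 1
    return h

fields = [-3, -4, -7, -8, -11, -19, -43, -67, -163]
print([class_number(D) for D in fields])   # all 1
print(class_number(-23))                   # 3
```

Such a count verifies h = 1 for the nine discriminants above; proving that no further negative discriminant works is, of course, the hard content of the Baker–Heegner–Stark theorem.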
Cubic fields
Totally real cubic field
The first 60 totally real cubic fields (ordered by discriminant) have class number one. In other words, all cubic fields of discriminant between 0 and 1944 (inclusively) have class number one. The next totally real cubic field (of discriminant 1957) has class number two. The polynomials defining the totally real cubic fields that have discriminants less than 500 with class number one are:
Complex cubic field
All complex cubic fields with discriminant greater than −500 have class number one, except the fields with discriminants −283, −331 and −491 which have class number 2. The real root of the polynomial for −23 is the reciprocal of the plastic ratio (negated), while that for −31 is the reciprocal of the supergolden ratio. The polynomials defining the complex cubic fields that have class number one and discriminant greater than −500 are:
Cyclotomic fields
The following is a complete list of thirty n for which the field Q(ζn) has class number 1:
1, 3, 4, 5, 7, 8, 9, 11, 12, 13, 15, 16, 17, 19, 20, 21, 24, 25, 27, 28, 32, 33, 35, 36, 40, 44, 45, 48, 60, 84.
(Note that values of n congruent to 2 modulo 4 are redundant since Q(ζ2n) = Q(ζn) when n is odd.)
On the other hand, the maximal real subfields Q(cos(2π/2^n)) of the 2-power cyclotomic fields Q(ζ_{2^n}) (where n is a positive integer) are known to have class number 1 for n ≤ 8, and it is conjectured that they have class number 1 for all n. Weber showed that these fields have odd class number. In 2009, Fukuda and Komatsu showed that the class numbers of these fields have no prime factor less than 10^7, and later improved this bound to 10^9. These fields are the n-th layers of the cyclotomic Z_2-extension of Q. Also in 2009, Morisawa showed that the class numbers of the layers of the cyclotomic Z_3-extension of Q have no prime factor less than 10^4. Coates has raised the question of whether, for all primes p, every layer of the cyclotomic Z_p-extension of Q has class number 1.
CM fields
Simultaneously generalizing the case of imaginary quadratic fields and cyclotomic fields is the case of a CM field K, i.e. a totally imaginary quadratic extension of a totally real field. In 1974, Harold Stark conjectured that there are finitely many CM fields of class number 1. He showed that there are finitely many of a fixed degree. Shortly thereafter, Andrew Odlyzko showed that there are only finitely many Galois CM fields of class number 1. In 2001, V. Kumar Murty showed that of all CM fields whose Galois closure has solvable Galois group, only finitely many have class number 1.
A complete list of the 172 abelian CM fields of class number 1 was determined in the early 1990s by Ken Yamamura and is available on pages 915–919 of his article on the subject. Combining this list with the work of Stéphane Louboutin and Ryotaro Okazaki provides a full list of quartic CM fields of class number 1.
See also
Class number problem
Class number formula
Brauer–Siegel theorem
Notes
References
Algebraic number theory
Field (mathematics)
Paper clip

A paper clip (or paperclip) is a tool used to hold sheets of paper together, usually made of steel wire bent to a looped shape (though some are covered in plastic). Most paper clips are variations of the Gem type introduced in the 1890s or earlier, characterized by the one and a half loops made by the wire. Common to paper clips proper is their utilization of torsion and elasticity in the wire, and friction between wire and paper. When a moderate number of sheets are inserted between the two "tongues" of the clip, the tongues will be forced apart and cause torsion in the bend of the wire to grip the sheets together. They are usually used to bind papers together for productivity and portability.
The paper clip's widespread use in various settings, from offices to educational institutions, underscores its functional design and adaptability. While primarily designed for binding papers, its versatility has led to a range of applications, both practical and creative.
Shape and composition
Paper clips usually have an oblong shape with straight sides, but may also be triangular or circular, or have more elaborate shapes. The most common material is steel or some other metal, but molded plastic is also used. Some other kinds of paper clips use a two-piece clamping system. Recent innovations include multi-colored plastic-coated paper clips and spring-fastened binder clips. Regular metal paper clips weigh about a gram.
History
According to the Early Office Museum, the first patent for a bent wire paper clip was awarded in the United States to Samuel B. Fay in 1867. This clip was originally intended primarily for attaching tickets to fabric, although the patent recognized that it could be used to attach papers together. Fay received U.S. patent 64,088 on April 23, 1867. Although functional and practical, Fay's design along with the 50 other designs patented prior to 1899 are not considered reminiscent of the modern paperclip design known today. Another notable paper clip design was also patented in the United States by Erlman J. Wright on July 24, 1877, patent #193,389. This clip was advertised at that time for use in fastening together loose leaves of papers, documents, periodicals, newspapers etc.
The most common type of wire paper clip still in use, the Gem paper clip, was never patented, but it was most likely in production in Britain in the early 1870s by "The Gem Manufacturing Company", according to the American expert on technological innovations, Professor Henry J. Petroski. He refers to an 1883 article about "Gem Paper-Fasteners", praising them for being "better than ordinary pins" for "binding together papers on the same subject, a bundle of letters, or pages of a manuscript". Since the 1883 article had no illustration of this early "Gem", it may have been different from modern paper clips of that name.
The earliest illustration of its current form is in an 1893 advertisement for the "Gem Paper Clip". In 1904 Cushman & Denison registered a trademark for the "Gem" name in connection with paper clips. The announcement stated that it had been used since March 1, 1892, which may have been the time of its introduction in the United States. Paper clips are still sometimes called "Gem clips", and in Swedish the word for any paper clip is "gem" (but pronounced similar to English game).
Definite proof that the modern type of paper clip was well known in 1899 at the latest, is the patent granted to William Middlebrook of Waterbury, Connecticut on April 27 of that year for a "Machine for making wire paper clips." The drawing clearly shows that the product is a perfect clip of the Gem type. The fact that Middlebrook did not mention it by name, suggests that it was already well known at the time. Since then countless variations on the same theme have been patented. Some have pointed instead of rounded ends, some have the end of one loop bent slightly to make it easier to insert sheets of paper, and some have wires with undulations or barbs to get a better grip. In addition, purely aesthetic variants have been patented, clips with triangular, star, or round shapes. But the original Gem type has for more than a hundred years proved to be the most practical, and consequently by far the most popular. Its qualities—ease of use, gripping without tearing, and storing without tangling—have been difficult to improve upon. In the United States, National Paperclip Day is celebrated on May 29.
The Gem-type paperclip has become a symbol of inventive design, as shown – albeit on false premises – by its celebration as a Norwegian invention in 1899. More convincing is its appropriation as the logo of the Year of Design in Barcelona 2003, depicted on posters, T-shirts and other merchandise.
Unsupported claims
It has been claimed that the paper clip was invented by English intellectual Herbert Spencer (1820–1903). Spencer registered a "binding-pin" on 2 September 1846, which was made and sold by Adolphus Ackermann for over a year, advertised as "for holding loose manuscripts, sermons, weekly papers, and all unstitched publications". Spencer's design, approximately unfolded, looked more like a modern cotter pin than a modern paper clip.
Norwegian claim
Norwegian Johan Vaaler (1866–1910) has erroneously been identified as the inventor of the paper clip. He was granted patents in Germany and in the United States (1901) for a paper clip of similar design, but one less functional and practical because it was more complicated to insert the paper into. Vaaler probably did not know that a better product was already on the market, although not yet in Norway. His version was never manufactured and never marketed, because the superior Gem was already available.
Long after Vaaler's death, his countrymen created a national myth based on the false assumption that the paper clip was invented by an unrecognized Norwegian genius. Norwegian dictionaries since the 1950s have mentioned Vaaler as the inventor of the paper clip, and that myth later found its way into international dictionaries and much of the international literature on paper clips.
Vaaler probably succeeded in having his design patented abroad, despite the previous existence of more useful paper clips, because patent authorities at that time were quite liberal and rewarded any marginal modification of existing inventions. Johan Vaaler began working for Alfred J. Bryns Patentkontor in Kristiania in 1892 and was later promoted to office manager, a position he held until his death. As the employee of a patent office, he could easily have obtained a patent in Norway. His reasons for applying abroad are not known; it is possible that he wanted to secure the commercial rights internationally. Also, he may have been aware that a Norwegian manufacturer would find it difficult to introduce a new invention abroad, starting from the small home market.
Vaaler's patents expired quietly, while the "Gem" was used worldwide, including his own country. The failure of his design was its impracticality. Without the two full loops of the fully developed paper clip, it was difficult to insert sheets of paper into his clip. One could manipulate the end of the inner wire so that it could receive the sheet, but the outer wire was a dead end because it could not exploit the torsion principle. The clip would instead stand out like a keel, perpendicular to the sheet of paper. The impracticality of Vaaler's design may easily be demonstrated by cutting off the last outer loop and one long side from a regular Gem clip.
National symbol
The originator of the Norwegian paper clip myth was an engineer of the Norwegian national patent agency who visited Germany in the 1920s to register Norwegian patents in that country. He came across Vaaler's patent but failed to detect that it was not the same as the then-common Gem-type clip. In the report of the first fifty years of the patent agency, he wrote an article in which he proclaimed Vaaler to be the inventor of the common paper clip. This piece of information found its way into some Norwegian encyclopedias after World War II.
Events of that war contributed greatly to the mythical status of the paper clip. Patriots wore them in their lapels as a symbol of resistance to the German occupiers and local Nazi authorities when other signs of resistance, such as flag pins or pins showing the cipher of the exiled King Haakon VII of Norway, were forbidden. Those wearing them did not yet see them as national symbols, as the myth of their Norwegian origin was not commonly known at the time.
The clips were meant to denote solidarity and unity ("we are bound together"). The wearing of paper clips was soon prohibited, and people wearing them could risk severe punishment.
The leading Norwegian encyclopedia mentioned the role of the paper clip as a symbol of resistance in a supplementary volume in 1952 but did not yet proclaim it a Norwegian invention. That information was added in later editions. According to the 1974 edition, the idea of using the paper clip to denote resistance originated in France. A clip worn on a lapel or front pocket could be seen as "deux gaules" (two posts or poles) and be interpreted as a reference to the leader of the French Resistance, General Charles de Gaulle.
The post-war years saw a widespread consolidation of the paper clip as a national symbol. Authors of books and articles on the history of Norwegian technology eagerly seized it to make a thin story more substantial. They chose to overlook the fact that Vaaler's clip was not the same as the fully developed Gem-type clip. In 1989, a giant paper clip was erected on the campus of a commercial college near Oslo in honor of Vaaler, ninety years after his invention was patented. But this monument shows a Gem-type clip, not the one patented by Vaaler. The celebration of the alleged Norwegian origin of the paper clip culminated in 1999, one hundred years after Vaaler submitted his application for a German patent. A commemorative stamp was issued that year, the first in a series to draw attention to Norwegian inventiveness. The background shows a facsimile of the German "Patentschrift". However, the figure in the foreground is not the paper clip depicted on that document, but the much better known "Gem". In 2005, the national biographical encyclopedia of Norway (Norsk biografisk leksikon) published the biography of Johan Vaaler, stating he was the inventor of the paper clip.
Other uses
Wire is versatile in its nature. Thus a paper clip is a useful accessory in many kinds of mechanical work, including computer work: the metal wire can be unfolded with a little force. Several devices call for a very thin rod to push a recessed button which the user might only rarely need. This is seen on most CD-ROM drives as an "emergency eject" should the power fail; also on early floppy disk drives (including the early Macintosh). Various smartphones require the use of a long, thin object such as a paper clip to eject the SIM card, and some Palm PDAs advise the use of a paper clip to reset the device. The trackball can be removed from early Logitech pointing devices using a paper clip as the key to the bezel. A paper clip bent into a "U" can be used to start an ATX power supply without connecting it to a motherboard, by bridging the green PS_ON pin to a black ground pin on the main ATX connector. One or more paper clips can make a loopback device for an RS-232 interface (or indeed many other interfaces). A paper clip could be installed in a Commodore 1541 disk drive as a flexible head-stop. The steel wire from a paper clip can be used in dentistry to form a dental post.
Pipe smokers, including cannabis smokers, use straightened-out paper clips to unclog their pipe or bong bowls.
Another creative use of paper clips is in "paperclip art", where enthusiasts bend and twist paper clips into intricate designs and figures, ranging from simple shapes to detailed sculptures. This form of art showcases the flexibility and adaptability of the paper clip beyond its traditional use.
Additionally, paper clips can serve as temporary bookmarks in books or documents. Their slim profile and easy placement make them useful for marking a specific page or section without causing damage or adding bulk.
Paper clips can be bent into a crude but sometimes effective lock picking device. Some types of handcuffs can be unfastened using paper clips. There are two approaches. The first one is to unfold the clip in a line and then twist the end in a right angle, trying to imitate a key and using it to lift the lock fixator. The second approach, which is more feasible but needs some practice, is to use the semi-unfolded clip kink for lifting when the clip is inserted through the hole where the handcuffs are closed.
A paper clip image is the standard image for an attachment in an email client.
Trade
In 1994, the United States imposed anti-dumping tariffs against China on paper clips.
Other fastening devices
Binder clip
Brass fastener
Bulldog clip
Staple
Treasury tag
See also
Clippy – an anthropomorphic paper clip assistant in Microsoft Office
Universal Paperclips – a game based on a thought experiment in which the user plays the role of an AI programmed to produce paperclips
Operation Paperclip
Paper Clips Project – project where a small town American school wished to understand the grand scale of 6,000,000 Jews murdered during the Holocaust by collecting 6,000,000 (and more) physical objects, deciding to collect paperclips because of their small size and easy availability
Notes
Further reading
External links
History of the Paper Clip
Patents
—Paper clip—E. P. Bugge
American inventions
Fasteners
Office equipment
Products introduced in 1867
Stationery
Pi Virginis

Pi Virginis (π Vir, π Virginis) is a binary star in the zodiac constellation of Virgo. It is visible to the naked eye with an apparent visual magnitude of 4.64. The distance to this star, based upon parallax measurements, is roughly 380 light years.
This is a spectroscopic binary system with a stellar classification of A5V. They have an orbital period of 283 days with an eccentricity of 0.27. The mass ratio of the two stars is about 0.47, with the primary having an estimated mass of around 2.2 times that of the Sun. The primary is a cool metallic-lined Am star.
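As a rough back-of-the-envelope illustration (our own estimate, not a figure from the literature), Kepler's third law in solar units relates the quoted period and masses to the scale of the orbit; the secondary mass below is inferred from the stated mass ratio, which we assume means secondary/primary:

```python
# Orbit-scale estimate for the Pi Virginis pair via Kepler's third law:
# a^3 = (M1 + M2) * P^2, with a in AU, masses in solar masses, P in years.
P_years = 283 / 365.25   # orbital period of 283 days, in years
M1 = 2.2                 # estimated primary mass (solar masses)
M2 = 0.47 * M1           # secondary mass, assuming the ratio is q = M2/M1 ~ 0.47

a_au = ((M1 + M2) * P_years ** 2) ** (1 / 3)
print(f"approximate semi-major axis: {a_au:.2f} AU")  # about 1.25 AU
```

This places the two stars roughly an astronomical unit apart on average, consistent with the system being resolvable only spectroscopically rather than visually at its distance.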
References
Virginis, Pi
Virgo (constellation)
Spectroscopic binaries
A-type main-sequence stars
Virginis, 008
104321
058490
4589
Durchmusterung objects
11C-UCB-J

11C-UCB-J is a PET tracer for imaging the synaptic vesicle glycoprotein 2A in the human brain.
It is used to study the brain changes associated with several diseases including Alzheimer's disease, schizophrenia, and depression.
References
PET radiotracers
Pyridines
Pyrrolidones
Hall

In architecture, a hall is a relatively large space enclosed by a roof and walls. In the Iron Age and early Middle Ages in northern Europe, a mead hall was where a lord and his retainers ate and also slept. Later in the Middle Ages, the great hall was the largest room in castles and large houses, and where the servants usually slept. As more complex house plans developed, the hall remained a large room for dancing and large feasts, often still with servants sleeping there. It was usually immediately inside the main door. In modern British houses, an entrance hall next to the front door remains an indispensable feature, even if it is essentially merely a corridor.
Today, the (entrance) hall of a house is the space next to the front door or vestibule leading to the rooms directly and/or indirectly. Where the hall inside the front door of a house is elongated, it may be called a passage, corridor (from Spanish corredor used in El Escorial and 100 years later in Castle Howard), or hallway.
History
In warmer climates, the houses of the wealthy were often built around a courtyard, but in northern areas manors were built around a great hall. The hall was home to the hearth and was where all the residents of the house would eat, work, and sleep. One common example of this form is the longhouse. Only particularly messy tasks would be done in separate rooms on the periphery of the hall. Still today the term hall is often used to designate a country house such as a hall house, or specifically a Wealden hall house, and manor houses.
In later medieval Europe, the main room of a castle or manor house was the great hall. In a medieval building, the hall was where the fire was kept. As heating technology improved and a desire for privacy grew, tasks moved from the hall to other rooms. First, the master of the house withdrew to private bedrooms and eating areas. Over time servants and children also moved to their own areas, while work projects were also given their own chambers leaving the hall for special functions. With time, its functions as dormitory, kitchen, parlour, and so on were divided into separate rooms or, in the case of the kitchen, a separate building.
Until the early modern era the majority of the population lived in houses with a single room. In the 17th century, even lower classes began to have a second room, with the main chamber being the hall and the secondary room the parlor. The hall and parlor house was found in England and was a fundamental, historical floor plan in parts of the United States from 1620 to 1860.
In Europe, as the wealthy embraced multiple rooms, the common form was initially the enfilade, with rooms directly connecting to each other. In 1597, John Thorpe became the first recorded architect to replace multiple connected rooms with rooms along a corridor, each accessed by a separate door.
Other uses
Collegiate halls
Many institutions and buildings at colleges and universities are formally titled "___ Hall", typically being named after the person who endowed it, for example, King's Hall, Cambridge. Others, such as Lady Margaret Hall, Oxford, commemorate respected people. Between these in age, Nassau Hall at Princeton University began as the single building of the then college. In medieval origin, these were the halls in which the members of the university lived together during term time. In many cases, some aspect of this community remains.
Some of these institutions are titled "Hall" instead of "College" because at the time of their foundation they were not recognised as colleges (in some cases because their foundation predated the existence of colleges) and did not have the appropriate Royal Charter. Examples at the University of Oxford are:
St Edmund Hall
Hart Hall (now Hertford College)
Lady Margaret Hall
The (currently six) Permanent private halls.
In colleges of the universities of Oxford and Cambridge, the term "Hall" is also used for the dining hall for students, with High Table at one end for fellows. Typically, at "Formal Hall", gowns are worn for dinner during the evening, whereas for "informal Hall" they are not. The medieval collegiate dining hall, with a dais for the high table at the upper end and a screen passage at the lower end, is a modified or assimilated form of the Great hall.
Meeting hall
A hall is also a building consisting largely of a principal room, that is rented out for meetings and social affairs. It may be privately or government-owned, such as a function hall owned by one company used for weddings and cotillions (organized and run by the same company on a contractual basis) or a community hall available for rent to anyone, such as a British village hall.
Religious halls
In religious architecture, as in Islamic architecture, the prayer hall is a large room dedicated to the practice of worship. (example: the prayer hall of the Great Mosque of Kairouan in Tunisia). A hall church is a church with a nave and side aisles of approximately equal height. Many churches have an associated church hall used for meetings and other events.
Public buildings
Following a line of similar development, in office buildings and larger buildings (theatres, cinemas etc.), the entrance hall is generally known as the foyer (the French for fireplace). The atrium, a name sometimes used in public buildings for the entrance hall, was the central courtyard of a Roman house.
Types
In architecture, the term "double-loaded" describes corridors that connect to rooms on both sides. Conversely, a single-loaded corridor only has rooms on one side (and possibly windows on the other). A blind corridor does not lead anywhere.
Billiard hall
City hall, town hall or village hall
Concert hall
Concourse (at a large transportation station)
Convention center (exhibition hall)
Dance hall
Dining hall
Firehall
Great room or great hall
Moot hall
Prayer hall, such as the sanctuary of a synagogue
Reading room
Residence hall
Trades hall (also called union hall, labour hall, etc.)
Waiting room (in large transportation stations)
See also
Hall of fame
References
External links
Rooms | Hall | Engineering | 1,237 |
7,988 | https://en.wikipedia.org/wiki/Dual%20space | In mathematics, any vector space has a corresponding dual vector space (or just dual space for short) consisting of all linear forms on together with the vector space structure of pointwise addition and scalar multiplication by constants.
The dual space as defined above is defined for all vector spaces, and to avoid ambiguity may also be called the algebraic dual space.
When defined for a topological vector space, there is a subspace of the dual space, corresponding to continuous linear functionals, called the continuous dual space.
Dual vector spaces find application in many branches of mathematics that use vector spaces, such as in tensor analysis with finite-dimensional vector spaces.
When applied to vector spaces of functions (which are typically infinite-dimensional), dual spaces are used to describe measures, distributions, and Hilbert spaces. Consequently, the dual space is an important concept in functional analysis.
Early terms for dual include polarer Raum [Hahn 1927], espace conjugué, adjoint space [Alaoglu 1940], and transponierter Raum [Schauder 1930] and [Banach 1932]. The term dual is due to Bourbaki 1938.
Algebraic dual space
Given any vector space V over a field F, the (algebraic) dual space V∗ (alternatively denoted by V∨ or V′) is defined as the set of all linear maps φ : V → F (linear functionals). Since linear maps are vector space homomorphisms, the dual space may be denoted Hom(V, F).
The dual space V∗ itself becomes a vector space over F when equipped with an addition and scalar multiplication satisfying:
(φ + ψ)(x) = φ(x) + ψ(x) and (a φ)(x) = a (φ(x))
for all φ, ψ ∈ V∗, x ∈ V, and a ∈ F.
Elements of the algebraic dual space are sometimes called covectors, one-forms, or linear forms.
The pairing of a functional φ in the dual space V∗ and an element x of V is sometimes denoted by a bracket:
φ(x) = [x, φ] or φ(x) = ⟨x, φ⟩. This pairing defines a nondegenerate bilinear mapping ⟨·,·⟩ : V × V∗ → F called the natural pairing.
Finite-dimensional case
If V is finite-dimensional, then V∗ has the same dimension as V. Given a basis {e1, ..., en} in V, it is possible to construct a specific basis in V∗, called the dual basis. This dual basis is a set {e^1, ..., e^n} of linear functionals on V, defined by the relation
e^i(c1 e1 + ⋯ + cn en) = ci, for i = 1, ..., n,
for any choice of coefficients ci. In particular, letting in turn each one of those coefficients be equal to one and the other coefficients zero gives the system of equations
e^i(ej) = δ^i_j,
where δ^i_j is the Kronecker delta symbol. This property is referred to as the bi-orthogonality property.
Consider the basis {e1, ..., en} of V. Let the functionals e^1, ..., e^n be defined by
e^i(ej) = δ^i_j.
These are a basis of V∗ because:
The e^i are linear functionals by construction.
To see linear independence, suppose λ1 e^1 + ⋯ + λn e^n = 0. Applying this functional to the basis vectors of V successively leads us to λ1 = λ2 = ⋯ = λn = 0 (the functional applied to ej results in λj). Therefore, {e^1, ..., e^n} is linearly independent in V∗.
Lastly, consider any φ ∈ V∗. For x = c1 e1 + ⋯ + cn en,
φ(x) = c1 φ(e1) + ⋯ + cn φ(en) = φ(e1) e^1(x) + ⋯ + φ(en) e^n(x),
so φ = φ(e1) e^1 + ⋯ + φ(en) e^n, and {e^1, ..., e^n} generates V∗. Hence, it is a basis of V∗.
For example, if is , let its basis be chosen as . The basis vectors are not orthogonal to each other. Then, and are one-forms (functions that map a vector to a scalar) such that , , , and . (Note: The superscript here is the index, not an exponent.) This system of equations can be expressed using matrix notation as
Solving for the unknown values in the first matrix shows the dual basis to be . Because and are functionals, they can be rewritten as and .
In general, when is , if is a matrix whose columns are the basis vectors and is a matrix whose columns are the dual basis vectors, then
where is the identity matrix of order . The biorthogonality property of these two basis sets allows any point to be represented as
even when the basis vectors are not orthogonal to each other. Strictly speaking, the above statement only makes sense once the inner product and the corresponding duality pairing are introduced, as described below.
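As an illustrative numerical sketch (the basis below is chosen arbitrarily and is not from the article), the dual basis can be computed as the rows of the inverse of the matrix whose columns are the primal basis vectors; biorthogonality and the representation formula then hold even though the basis is not orthogonal:

```python
import numpy as np

# Columns of E form an arbitrary (non-orthogonal) basis of R^2.
E = np.array([[2.0, 1.0],
              [0.0, 1.0]])

# Row i of inv(E) is the i-th dual basis functional: inv(E) @ E = I
# is exactly the biorthogonality property e^i(e_j) = delta_ij.
F_rows = np.linalg.inv(E)
assert np.allclose(F_rows @ E, np.eye(2))

# Any point x is recovered as sum_i e^i(x) e_i, without orthogonality.
x = np.array([3.0, 4.0])
coords = F_rows @ x            # the numbers e^i(x)
assert np.allclose(E @ coords, x)
```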
In particular, if Rn is interpreted as the space of columns of n real numbers, its dual space is typically written as the space of rows of n real numbers. Such a row acts on Rn as a linear functional by ordinary matrix multiplication. This is because a functional maps every n-vector x into a real number y. Then, seeing this functional as a matrix M, and x as an n×1 matrix and y as a 1×1 matrix (trivially, a real number) respectively, if Mx = y then, by dimension reasons, M must be a 1×n matrix; that is, M must be a row vector.
If consists of the space of geometrical vectors in the plane, then the level curves of an element of form a family of parallel lines in , because the range is 1-dimensional, so that every point in the range is a multiple of any one nonzero element.
So an element of can be intuitively thought of as a particular family of parallel lines covering the plane. To compute the value of a functional on a given vector, it suffices to determine which of the lines the vector lies on. Informally, this "counts" how many lines the vector crosses.
More generally, if is a vector space of any dimension, then the level sets of a linear functional in are parallel hyperplanes in , and the action of a linear functional on a vector can be visualized in terms of these hyperplanes.
Infinite-dimensional case
If is not finite-dimensional but has a basis indexed by an infinite set , then the same construction as in the finite-dimensional case yields linearly independent elements () of the dual space, but they will not form a basis.
For instance, consider the space R∞, whose elements are those sequences of real numbers that contain only finitely many non-zero entries, which has a basis indexed by the natural numbers N. For i ∈ N, ei is the sequence consisting of all zeroes except in the i-th position, which is 1.
The dual space of R∞ is (isomorphic to) RN, the space of all sequences of real numbers: each real sequence (an) defines a function where the element (xn) of R∞ is sent to the number
Σn an xn,
which is a finite sum because there are only finitely many nonzero xn. The dimension of R∞ is countably infinite, whereas RN does not have a countable basis.
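A small executable sketch of this pairing (the helper names are ours, not the article's): a finitely supported sequence is stored as a dict, an element of the dual is an arbitrary function on the naturals, and the pairing is always a finite sum:

```python
def pair(x, a):
    """Pair x (a finitely supported sequence, stored as {index: value})
    with the functional given by an arbitrary sequence n -> a(n).
    The sum is finite by construction."""
    return sum(v * a(n) for n, v in x.items())

x = {0: 2.0, 5: -1.0}      # nonzero only in positions 0 and 5

# Even an unbounded sequence defines a perfectly good functional here.
a = lambda n: n + 1        # the sequence 1, 2, 3, ...
assert pair(x, a) == 2.0 * 1 - 1.0 * 6   # = -4.0
```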
This observation generalizes to any infinite-dimensional vector space over any field : a choice of basis identifies with the space of functions such that is nonzero for only finitely many , where such a function is identified with the vector
in (the sum is finite by the assumption on , and any may be written uniquely in this way by the definition of the basis).
The dual space of may then be identified with the space of all functions from to : a linear functional on is uniquely determined by the values it takes on the basis of , and any function (with ) defines a linear functional on by
Again, the sum is finite because is nonzero for only finitely many .
The set may be identified (essentially by definition) with the direct sum of infinitely many copies of (viewed as a 1-dimensional vector space over itself) indexed by , i.e. there are linear isomorphisms
On the other hand, is (again by definition), the direct product of infinitely many copies of indexed by , and so the identification
is a special case of a general result relating direct sums (of modules) to direct products.
If a vector space is not finite-dimensional, then its (algebraic) dual space is always of larger dimension (as a cardinal number) than the original vector space. This is in contrast to the case of the continuous dual space, discussed below, which may be isomorphic to the original vector space even if the latter is infinite-dimensional.
The proof of this inequality between dimensions results from the following.
If V is an infinite-dimensional F-vector space, the arithmetical properties of cardinal numbers imply that
dim(V) < |F|^dim(V) = |V∗| = dim(V∗),
where cardinalities are denoted as absolute values. For proving that dim(V) < dim(V∗) it suffices to prove that |F|^|B| > |B| for an infinite basis B, which can be done with an argument similar to Cantor's diagonal argument. The exact dimension of the dual is given by the Erdős–Kaplansky theorem.
Bilinear products and dual spaces
If V is finite-dimensional, then V is isomorphic to V∗. But there is in general no natural isomorphism between these two spaces. Any bilinear form on V gives a mapping of V into its dual space via
where the right hand side is defined as the functional on V taking each to . In other words, the bilinear form determines a linear mapping
defined by
If the bilinear form is nondegenerate, then this is an isomorphism onto a subspace of V∗.
If V is finite-dimensional, then this is an isomorphism onto all of V∗. Conversely, any isomorphism from V to a subspace of V∗ (resp., all of V∗ if V is finite dimensional) defines a unique nondegenerate bilinear form on V by
Thus there is a one-to-one correspondence between isomorphisms of V to a subspace of (resp., all of) V∗ and nondegenerate bilinear forms on V.
If the vector space V is over the complex field, then sometimes it is more natural to consider sesquilinear forms instead of bilinear forms.
In that case, a given sesquilinear form determines an isomorphism of V with the complex conjugate of the dual space
The conjugate of the dual space can be identified with the set of all additive complex-valued functionals such that
Injection into the double-dual
There is a natural homomorphism Ψ from V into the double dual V∗∗, defined by (Ψ(v))(φ) = φ(v) for all v ∈ V and φ ∈ V∗. In other words, if evv : V∗ → F is the evaluation map defined by φ ↦ φ(v), then Ψ : V → V∗∗ is defined as the map v ↦ evv. This map Ψ is always injective; and it is always an isomorphism if V is finite-dimensional.
Indeed, the isomorphism of a finite-dimensional vector space with its double dual is an archetypal example of a natural isomorphism.
Infinite-dimensional Hilbert spaces are not isomorphic to their algebraic double duals, but instead to their continuous double duals.
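The evaluation map can be written down directly; this toy sketch (our own notation, not the article's) represents functionals as plain functions and shows Ψ(x) acting by evaluation at x:

```python
def Psi(x):
    """Natural injection into the double dual: Psi(x) is the functional
    on V* that evaluates each phi at x."""
    return lambda phi: phi(x)

# A functional on R^2, represented as an ordinary function.
phi = lambda v: 3 * v[0] + 2 * v[1]
x = (1, 4)
assert Psi(x)(phi) == phi(x) == 11
```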
Transpose of a linear map
If f : V → W is a linear map, then the transpose (or dual) f∗ : W∗ → V∗ is defined by
f∗(φ) = φ ∘ f
for every φ ∈ W∗. The resulting functional f∗(φ) in V∗ is called the pullback of φ along f.
The following identity holds for all and :
where the bracket [·,·] on the left is the natural pairing of V with its dual space, and that on the right is the natural pairing of W with its dual. This identity characterizes the transpose, and is formally similar to the definition of the adjoint.
The assignment produces an injective linear map between the space of linear operators from V to W and the space of linear operators from W to V; this homomorphism is an isomorphism if and only if W is finite-dimensional.
If then the space of linear maps is actually an algebra under composition of maps, and the assignment is then an antihomomorphism of algebras, meaning that .
In the language of category theory, taking the dual of vector spaces and the transpose of linear maps is therefore a contravariant functor from the category of vector spaces over F to itself.
It is possible to identify (f) with f using the natural injection into the double dual.
If the linear map f is represented by the matrix A with respect to two bases of V and W, then f is represented by the transpose matrix AT with respect to the dual bases of W and V, hence the name.
Alternatively, as f is represented by A acting on the left on column vectors, f is represented by the same matrix acting on the right on row vectors.
These points of view are related by the canonical inner product on Rn, which identifies the space of column vectors with the dual space of row vectors.
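In coordinates, the defining identity of the transpose reduces to associativity of matrix multiplication; a quick numerical check (with an arbitrary matrix, not taken from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))   # f : R^2 -> R^3 in some pair of bases
x = rng.standard_normal(2)        # a vector in V
w = rng.standard_normal(3)        # a functional on W, as a row vector

# [w, f(x)] = [f*(w), x]: the transpose matrix acts on row vectors.
assert np.isclose(w @ (A @ x), (A.T @ w) @ x)
```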
Quotient spaces and annihilators
Let S be a subset of V.
The annihilator of S in V∗, denoted here S⁰, is the collection of linear functionals f ∈ V∗ such that [f, s] = 0 for all s ∈ S.
That is, S⁰ consists of all linear functionals f : V → F such that the restriction to S vanishes: f|S = 0.
Within finite dimensional vector spaces, the annihilator is dual to (isomorphic to) the orthogonal complement.
The annihilator of a subset is itself a vector space.
The annihilator of the zero vector is the whole dual space: {0}⁰ = V∗, and the annihilator of the whole space is just the zero covector: V⁰ = {0} ⊆ V∗.
Furthermore, the assignment of an annihilator to a subset of reverses inclusions, so that if , then
If and are two subsets of then
If is any family of subsets of indexed by belonging to some index set , then
In particular if and are subspaces of then
and
If V is finite-dimensional and W is a vector subspace, then
W⁰⁰ = W
after identifying W with its image in the second dual space under the double duality isomorphism V ≈ V∗∗. In particular, forming the annihilator is a Galois connection on the lattice of subsets of a finite-dimensional vector space.
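Numerically, the annihilator of the column span of a matrix M is the left null space of M; a sketch via the SVD (dimensions chosen arbitrarily) also confirms that the dimensions of W and W⁰ add up to the dimension of V:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 2
M = rng.standard_normal((n, k))    # columns span a subspace W of R^5

# Rows act as functionals; those vanishing on W form the null space of M^T.
_, s, Vt = np.linalg.svd(M.T)
rank = int(np.sum(s > 1e-10))
W0 = Vt[rank:]                     # basis for the annihilator W^0

assert np.allclose(W0 @ M, 0)      # every row kills W
assert W0.shape[0] + rank == n     # dim W^0 + dim W = dim V
```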
If is a subspace of then the quotient space is a vector space in its own right, and so has a dual. By the first isomorphism theorem, a functional factors through if and only if is in the kernel of . There is thus an isomorphism
As a particular consequence, if is a direct sum of two subspaces and , then is a direct sum of and .
Dimensional analysis
The dual space is analogous to a "negative"-dimensional space. Most simply, since a vector can be paired with a covector by the natural pairing
to obtain a scalar, a covector can "cancel" the dimension of a vector, similar to reducing a fraction. Thus while the direct sum is a -dimensional space (if is -dimensional), behaves as an -dimensional space, in the sense that its dimensions can be canceled against the dimensions of . This is formalized by tensor contraction.
This arises in physics via dimensional analysis, where the dual space has inverse units. Under the natural pairing, these units cancel, and the resulting scalar value is dimensionless, as expected. For example, in (continuous) Fourier analysis, or more broadly time–frequency analysis: given a one-dimensional vector space with a unit of time , the dual space has units of frequency: occurrences per unit of time (units of ). For example, if time is measured in seconds, the corresponding dual unit is the inverse second: over the course of 3 seconds, an event that occurs 2 times per second occurs a total of 6 times, corresponding to . Similarly, if the primal space measures length, the dual space measures inverse length.
Continuous dual space
When dealing with topological vector spaces, the continuous linear functionals from the space into the base field (or ) are particularly important.
This gives rise to the notion of the "continuous dual space" or "topological dual" which is a linear subspace of the algebraic dual space , denoted by .
For any finite-dimensional normed vector space or topological vector space, such as Euclidean n-space, the continuous dual and the algebraic dual coincide.
This is however false for any infinite-dimensional normed space, as shown by the example of discontinuous linear maps.
Nevertheless, in the theory of topological vector spaces the terms "continuous dual space" and "topological dual space" are often replaced by "dual space".
For a topological vector space its continuous dual space, or topological dual space, or just dual space (in the sense of the theory of topological vector spaces) is defined as the space of all continuous linear functionals .
Important examples for continuous dual spaces are the space of compactly supported test functions and its dual the space of arbitrary distributions (generalized functions); the space of arbitrary test functions and its dual the space of compactly supported distributions; and the space of rapidly decreasing test functions the Schwartz space, and its dual the space of tempered distributions (slowly growing distributions) in the theory of generalized functions.
Properties
If is a Hausdorff topological vector space (TVS), then the continuous dual space of is identical to the continuous dual space of the completion of .
Topologies on the dual
There is a standard construction for introducing a topology on the continuous dual of a topological vector space . Fix a collection of bounded subsets of .
This gives the topology on of uniform convergence on sets from or what is the same thing, the topology generated by seminorms of the form
where is a continuous linear functional on , and runs over the class
This means that a net of functionals tends to a functional in if and only if
Usually (but not necessarily) the class is supposed to satisfy the following conditions:
Each point of belongs to some set :
Each two sets and are contained in some set :
is closed under the operation of multiplication by scalars:
If these requirements are fulfilled then the corresponding topology on is Hausdorff and the sets
form its local base.
Here are the three most important special cases.
The strong topology on is the topology of uniform convergence on bounded subsets in (so here can be chosen as the class of all bounded subsets in ).
If is a normed vector space (for example, a Banach space or a Hilbert space) then the strong topology on is normed (in fact a Banach space if the field of scalars is complete), with the norm
The stereotype topology on is the topology of uniform convergence on totally bounded sets in (so here can be chosen as the class of all totally bounded subsets in ).
The weak topology on is the topology of uniform convergence on finite subsets in (so here can be chosen as the class of all finite subsets in ).
Each of these three choices of topology on leads to a variant of reflexivity property for topological vector spaces:
If is endowed with the strong topology, then the corresponding notion of reflexivity is the standard one: the spaces reflexive in this sense are just called reflexive.
If is endowed with the stereotype dual topology, then the corresponding reflexivity is presented in the theory of stereotype spaces: the spaces reflexive in this sense are called stereotype.
If is endowed with the weak topology, then the corresponding reflexivity is presented in the theory of dual pairs: the spaces reflexive in this sense are arbitrary (Hausdorff) locally convex spaces with the weak topology.
Examples
Let 1 < p < ∞ be a real number and consider the Banach space ℓp of all sequences a = (an) for which
Σn |an|^p < ∞.
Define the number q by 1/p + 1/q = 1. Then the continuous dual of ℓp is naturally identified with ℓq: given a continuous linear functional φ on ℓp, the corresponding element of ℓq is the sequence (φ(en)), where en denotes the sequence whose n-th term is 1 and all others are zero. Conversely, given an element a = (an) ∈ ℓq, the corresponding continuous linear functional φ on ℓp is defined by
φ(b) = Σn an bn
for all b = (bn) ∈ ℓp (see Hölder's inequality).
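The boundedness of this functional is exactly Hölder's inequality; for (truncated) sequences it is easy to spot-check numerically (an illustrative sketch, not a proof):

```python
import numpy as np

p, q = 3.0, 1.5                    # conjugate exponents: 1/p + 1/q = 1
assert np.isclose(1 / p + 1 / q, 1.0)

rng = np.random.default_rng(2)
x = rng.standard_normal(100)       # stands in for an element of l^p
a = rng.standard_normal(100)       # stands in for an element of l^q

norm_p = np.sum(np.abs(x) ** p) ** (1 / p)
norm_q = np.sum(np.abs(a) ** q) ** (1 / q)

# Hoelder: |sum a_n x_n| <= ||a||_q * ||x||_p, so a defines a bounded
# (hence continuous) linear functional on l^p.
assert abs(np.sum(a * x)) <= norm_q * norm_p + 1e-12
```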
In a similar manner, the continuous dual of is naturally identified with (the space of bounded sequences).
Furthermore, the continuous duals of the Banach spaces c (consisting of all convergent sequences, with the supremum norm) and c0 (the sequences converging to zero) are both naturally identified with .
By the Riesz representation theorem, the continuous dual of a Hilbert space is again a Hilbert space which is anti-isomorphic to the original space.
This gives rise to the bra–ket notation used by physicists in the mathematical formulation of quantum mechanics.
By the Riesz–Markov–Kakutani representation theorem, the continuous dual of certain spaces of continuous functions can be described using measures.
Transpose of a continuous linear map
If is a continuous linear map between two topological vector spaces, then the (continuous) transpose is defined by the same formula as before:
The resulting functional is in . The assignment produces a linear map between the space of continuous linear maps from V to W and the space of linear maps from to .
When T and U are composable continuous linear maps, then
When V and W are normed spaces, the norm of the transpose in is equal to that of T in .
Several properties of transposition depend upon the Hahn–Banach theorem.
For example, the bounded linear map T has dense range if and only if the transpose is injective.
When T is a compact linear map between two Banach spaces V and W, then the transpose is compact.
This can be proved using the Arzelà–Ascoli theorem.
When V is a Hilbert space, there is an antilinear isomorphism iV from V onto its continuous dual .
For every bounded linear map T on V, the transpose and the adjoint operators are linked by
When T is a continuous linear map between two topological vector spaces V and W, then the transpose is continuous when and are equipped with "compatible" topologies: for example, when for and , both duals have the strong topology of uniform convergence on bounded sets of X, or both have the weak-∗ topology of pointwise convergence on X.
The transpose is continuous from to , or from to .
Annihilators
Assume that W is a closed linear subspace of a normed space V, and consider the annihilator of W in V′,
W⊥ = {φ ∈ V′ : φ(w) = 0 for all w ∈ W}.
Then, the dual of the quotient can be identified with W⊥, and the dual of W can be identified with the quotient .
Indeed, let P denote the canonical surjection from V onto the quotient ; then, the transpose is an isometric isomorphism from into , with range equal to W⊥.
If j denotes the injection map from W into V, then the kernel of the transpose is the annihilator of W:
and it follows from the Hahn–Banach theorem that induces an isometric isomorphism
.
Further properties
If the dual of a normed space is separable, then so is the space itself.
The converse is not true: for example, the space ℓ1 is separable, but its dual ℓ∞ is not.
Double dual
In analogy with the case of the algebraic double dual, there is always a naturally defined continuous linear operator Ψ : V → V′′ from a normed space V into its continuous double dual V′′, defined by
Ψ(x)(φ) = φ(x), for x ∈ V and φ ∈ V′.
As a consequence of the Hahn–Banach theorem, this map is in fact an isometry, meaning ‖Ψ(x)‖ = ‖x‖ for all x ∈ V.
Normed spaces for which the map Ψ is a bijection are called reflexive.
When V is a topological vector space then Ψ(x) can still be defined by the same formula, for every , however several difficulties arise.
First, when V is not locally convex, the continuous dual may be equal to { 0 } and the map Ψ trivial.
However, if V is Hausdorff and locally convex, the map Ψ is injective from V to the algebraic dual of the continuous dual, again as a consequence of the Hahn–Banach theorem.
Second, even in the locally convex setting, several natural vector space topologies can be defined on the continuous dual , so that the continuous double dual is not uniquely defined as a set. Saying that Ψ maps from V to , or in other words, that Ψ(x) is continuous on for every , is a reasonable minimal requirement on the topology of , namely that the evaluation mappings
be continuous for the chosen topology on . Further, there is still a choice of a topology on , and continuity of Ψ depends upon this choice.
As a consequence, defining reflexivity in this framework is more involved than in the normed case.
See also
Covariance and contravariance of vectors
Dual module
Dual norm
Duality (mathematics)
Duality (projective geometry)
Pontryagin duality
Reciprocal lattice – dual space basis, in crystallography
Notes
References
Bibliography
.
External links
Functional analysis
Linear algebra
Space
Linear functionals | Dual space | Mathematics | 4,785 |
73,731,688 | https://en.wikipedia.org/wiki/Google%20Silicon%20Initiative | The Google Open Silicon Initiative is an initiative launched by the Google Hardware Toolchains team to democratize access to custom silicon design. Google has partnered with SkyWater Technology and GlobalFoundries to open-source their process design kits (PDKs) for the 180 nm, 130 nm and 90 nm processes. This initiative provides free software tools for chip designers to create, verify and test virtual chip circuit designs before they are physically produced in factories. The aim of the initiative is to reduce the cost of chip design and production, which will benefit DIY enthusiasts, researchers, universities, and chip startups. The program has gained more partners, including the US Department of Defense, which injected $15 million in funding into SkyWater, one of the manufacturers supporting the program.
References
External links
Google Open Silicon (official site)
Google Git repositories of FOSS EDA Tools
SkyWater Technology Foundry FOSS 130nm Production PDK- Github
GlobalFoundries GF180MCU FOSS 180nm Production PDK - GitHub
Google hardware
Integrated circuits | Google Silicon Initiative | Technology,Engineering | 219 |
5,645,755 | https://en.wikipedia.org/wiki/Bone%20morphogenetic%20protein%206 | Bone morphogenetic protein 6 is a protein that in humans is encoded by the BMP6 gene.
The protein encoded by this gene is a member of the TGFβ superfamily. Bone morphogenetic proteins are known for their ability to induce the growth of bone and cartilage. BMP6 is able to induce all osteogenic markers in mesenchymal stem cells.
The bone morphogenetic proteins (BMPs) are a family of secreted signaling molecules that can induce ectopic bone growth. BMPs are part of the transforming growth factor-beta (TGFB) superfamily. BMPs were originally identified by an ability of demineralized bone extract to induce endochondral osteogenesis in vivo in an extraskeletal site. Based on its expression early in embryogenesis, the BMP encoded by this gene has a proposed role in early development. In addition, the fact that this BMP is closely related to BMP5 and BMP7 has led to speculation of possible bone inductive activity.
As of April 2009, an additional function of BMP6 had been identified, as described in Nature Genetics 41(4):386–388 (April 2009): BMP6 is the key regulator of hepcidin, the small peptide secreted by the liver which is the major regulator of iron metabolism in mammals.
References
Further reading
External links
Bone morphogenetic protein
Developmental genes and proteins
TGFβ domain | Bone morphogenetic protein 6 | Biology | 302 |
72,484,350 | https://en.wikipedia.org/wiki/Coleosporium%20asterum | Coleosporium asterum is a species of rust fungus in the family Coleosporiaceae. It infects species in the family Asteraceae, such as those in the genera Aster and Solidago, as well as the needles of the pines Pinus contorta and P. banksiana. It has been recorded on the aster-family species Canadanthus modestus, Eurybia conspicua, Solidago missouriensis, Symphyotrichum ciliolatum, S. laeve, and numerous others.
The basionym of Coleosporium asterum is Stichopsora asterum, and the fungus originally was found in 1898 on leaves of the Asteraceae species Callistephus chinensis, Aster scaber (now Doellingeria scabra), and Aster tataricus on the island of Honshu, Japan.
Citations
References
Fungal plant pathogens and diseases
Pucciniales
Fungi described in 1900
Fungus species | Coleosporium asterum | Biology | 194 |
39,789 | https://en.wikipedia.org/wiki/Rotation | Rotation or rotational motion is the circular movement of an object around a central line, known as an axis of rotation. A plane figure can rotate in either a clockwise or counterclockwise sense around a perpendicular axis intersecting anywhere inside or outside the figure at a center of rotation. A solid figure has an infinite number of possible axes and angles of rotation, including chaotic rotation (between arbitrary orientations), in contrast to rotation around a fixed axis.
The special case of a rotation with an internal axis passing through the body's own center of mass is known as a spin (or autorotation). In that case, the surface intersection of the internal spin axis can be called a pole; for example, Earth's rotation defines the geographical poles.
A rotation around an axis completely external to the moving body is called a revolution (or orbit), e.g. Earth's orbit around the Sun. The ends of the external axis of revolution can be called the orbital poles.
Either type of rotation is involved in a corresponding type of angular velocity (spin angular velocity and orbital angular velocity) and angular momentum (spin angular momentum and orbital angular momentum).
Mathematics
Mathematically, a rotation is a rigid body movement which, unlike a translation, keeps at least one point fixed. This definition applies to rotations in two dimensions (in a plane), in which exactly one point is kept fixed; and also in three dimensions (in space), in which additional points may be kept fixed (as in rotation around a fixed axis, an infinite line).
All rigid body movements are rotations, translations, or combinations of the two.
A rotation is simply a progressive radial orientation to a common point. That common point lies within the axis of that motion. The axis is perpendicular to the plane of the motion.
If a rotation around a point or axis is followed by a second rotation around the same point/axis, a third rotation results. The reverse (inverse) of a rotation is also a rotation. Thus, the rotations around a point/axis form a group. However, a rotation around a point or axis and a rotation around a different point/axis may result in something other than a rotation, e.g. a translation.
Rotations around the x, y and z axes are called principal rotations. Rotation around any axis can be performed by taking a rotation around the x axis, followed by a rotation around the y axis, and followed by a rotation around the z axis. That is to say, any spatial rotation can be decomposed into a combination of principal rotations.
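As an illustrative sketch of this decomposition (assuming Python with NumPy; the function names are ours, not standard), the product of three principal rotations is itself a proper rotation:

```python
import numpy as np

def rot_x(a):
    # Rotation about the x axis by angle a (radians)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    # Rotation about the y axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    # Rotation about the z axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Any spatial rotation can be written as a product of principal rotations.
R = rot_z(0.3) @ rot_y(-0.8) @ rot_x(1.2)

# The composite is still a proper rotation: orthogonal with determinant +1.
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
```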
Fixed axis vs. fixed point
The combination of any sequence of rotations of an object in three dimensions about a fixed point is always equivalent to a rotation about an axis (which may be considered to be a rotation in the plane that is perpendicular to that axis). Similarly, the rotation rate of an object in three dimensions at any instant is about some axis, although this axis may be changing over time.
In other than three dimensions, it does not make sense to describe a rotation as being around an axis, since more than one axis through the object may be kept fixed; instead, simple rotations are described as being in a plane. In four or more dimensions, a combination of two or more rotations about a plane is not in general a rotation in a single plane.
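This plane-based picture can be illustrated numerically. The following sketch (assuming Python with NumPy; `plane_rot` is a hypothetical helper, not a library function) builds a 4D double rotation from two single-plane rotations and checks that, unlike a single-plane rotation, it fixes no direction:

```python
import numpy as np

def plane_rot(n, i, j, a):
    """Rotation by angle a in the (i, j) coordinate plane of R^n."""
    R = np.eye(n)
    c, s = np.cos(a), np.sin(a)
    R[i, i] = R[j, j] = c
    R[i, j], R[j, i] = -s, s
    return R

# In 4D, combine rotations in the (0,1) and (2,3) planes.
R = plane_rot(4, 0, 1, 0.5) @ plane_rot(4, 2, 3, 1.3)

# A single-plane rotation in 4D would fix a whole 2D subspace
# (eigenvalue 1 with multiplicity 2); this double rotation fixes
# only the origin: no eigenvalue equals 1.
assert np.allclose(R @ R.T, np.eye(4)) and np.isclose(np.linalg.det(R), 1)
assert not np.any(np.isclose(np.linalg.eigvals(R), 1))
```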
Axis of 2-dimensional rotations
2-dimensional rotations, unlike the 3-dimensional ones, possess no axis of rotation, only a point about which the rotation occurs. This is equivalent, for linear transformations, to saying that there is no direction in the plane which is kept unchanged by a 2-dimensional rotation, except, of course, the identity.
The question of the existence of such a direction is the question of existence of an eigenvector for the matrix A representing the rotation. Every 2D rotation around the origin through an angle θ in the counterclockwise direction can be quite simply represented by the following matrix:

A = [[cos θ, −sin θ],
     [sin θ,  cos θ]]

A standard eigenvalue determination leads to the characteristic equation

λ² − 2λ cos θ + 1 = 0,

which has

λ = cos θ ± i sin θ = e^(±iθ)

as its eigenvalues. Therefore, there is no real eigenvalue whenever sin θ ≠ 0 (i.e., whenever θ is not an integer multiple of π), meaning that no real vector in the plane is kept unchanged by A.
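This eigenvalue analysis can be checked numerically; a minimal sketch, assuming Python with NumPy:

```python
import numpy as np

theta = 0.7  # any angle that is not a multiple of pi
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Characteristic equation: lambda^2 - 2*lambda*cos(theta) + 1 = 0,
# so the eigenvalues are cos(theta) +/- i*sin(theta).
eigvals = np.linalg.eigvals(A)
assert np.allclose(sorted(eigvals, key=lambda z: z.imag),
                   [np.cos(theta) - 1j * np.sin(theta),
                    np.cos(theta) + 1j * np.sin(theta)])

# No real eigenvalue: no direction in the plane is kept fixed.
assert all(abs(v.imag) > 0 for v in eigvals)
```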
Rotation angle and axis in 3 dimensions
Knowing that the trace is an invariant, the rotation angle θ for a proper orthogonal 3×3 rotation matrix A is found from

cos θ = (tr(A) − 1) / 2.

Using the principal arc-cosine, this formula gives a rotation angle satisfying 0° ≤ θ ≤ 180°. The corresponding rotation axis must be defined to point in a direction that limits the rotation angle to not exceed 180 degrees. (This can always be done because any rotation of more than 180 degrees about an axis n can always be written as a rotation satisfying 0° ≤ θ ≤ 180° if the axis is replaced with −n.)
Every proper rotation A in 3D space has an axis of rotation, which is defined such that any vector v that is aligned with the rotation axis will not be affected by rotation. Accordingly, A v = v, and the rotation axis therefore corresponds to an eigenvector of the rotation matrix associated with an eigenvalue of 1. As long as the rotation angle θ is nonzero (i.e., the rotation is not the identity tensor), there is one and only one such direction. Because A has only real components, there is at least one real eigenvalue, and the remaining two eigenvalues must be complex conjugates of each other (see Eigenvalues and eigenvectors#Eigenvalues and the characteristic polynomial). Knowing that 1 is an eigenvalue, it follows that the remaining two eigenvalues are complex conjugates of each other, but this does not imply that they are complex; they could be real with double multiplicity. In the degenerate case of a rotation angle θ = 180°, the remaining two eigenvalues are both equal to −1. In the degenerate case of a zero rotation angle, the rotation matrix is the identity, and all three eigenvalues are 1 (which is the only case for which the rotation axis is arbitrary).
A spectral analysis is not required to find the rotation axis. If n denotes the unit eigenvector aligned with the rotation axis, and if θ denotes the rotation angle, then it can be shown that 2 sin(θ) n = (A₃₂ − A₂₃, A₁₃ − A₃₁, A₂₁ − A₁₂). Consequently, the expense of an eigenvalue analysis can be avoided by simply normalizing this vector if it has a nonzero magnitude. On the other hand, if this vector has a zero magnitude, it means that sin θ = 0. In other words, this vector will be zero if and only if the rotation angle is 0° or 180°, and the rotation axis may be assigned in this case by normalizing any column of A + I that has a nonzero magnitude.
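The procedure just described (angle from the trace, axis from the antisymmetric part, with a fallback through A + I for the 0°/180° cases) can be sketched as follows, assuming Python with NumPy; `angle_axis` is our own illustrative name:

```python
import numpy as np

def angle_axis(A):
    """Recover rotation angle and axis from a proper orthogonal 3x3 matrix A."""
    # Angle from the trace invariant: cos(theta) = (tr A - 1) / 2.
    theta = np.arccos(np.clip((np.trace(A) - 1) / 2, -1.0, 1.0))
    # Axis (times 2 sin theta) from the antisymmetric part of A.
    v = np.array([A[2, 1] - A[1, 2], A[0, 2] - A[2, 0], A[1, 0] - A[0, 1]])
    if np.linalg.norm(v) > 1e-12:
        return theta, v / np.linalg.norm(v)
    # theta is 0 or 180 degrees; for 180, any nonzero column of A + I
    # lies along the axis.
    B = A + np.eye(3)
    col = B[:, np.argmax(np.linalg.norm(B, axis=0))]
    return theta, col / np.linalg.norm(col)

# Check against a rotation of 1.1 rad about the z axis.
c, s = np.cos(1.1), np.sin(1.1)
A = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
theta, n = angle_axis(A)
assert np.isclose(theta, 1.1) and np.allclose(n, [0, 0, 1])
```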
This discussion applies to a proper rotation, and hence det A = 1. Any improper orthogonal 3×3 matrix B may be written as B = −A, in which A is proper orthogonal. That is, any improper orthogonal 3×3 matrix may be decomposed as a proper rotation (from which an axis of rotation can be found as described above) followed by an inversion (multiplication by −1). It follows that the rotation axis of A is also the eigenvector of B corresponding to an eigenvalue of −1.
Rotation plane
Just as every three-dimensional rotation has a rotation axis, every three-dimensional rotation also has a plane, which is perpendicular to the rotation axis and which is left invariant by the rotation. The rotation, restricted to this plane, is an ordinary 2D rotation.
The proof proceeds similarly to the above discussion. First, suppose that all eigenvalues of the 3D rotation matrix A are real. This means that there is an orthogonal basis, made by the corresponding eigenvectors (which are necessarily orthogonal), over which the effect of the rotation matrix is just stretching it. If we write A in this basis, it is diagonal; but a diagonal orthogonal matrix is made of just +1s and −1s in the diagonal entries. Therefore, we do not have a proper rotation, but either the identity or the result of a sequence of reflections.
It follows, then, that a proper rotation has some complex eigenvalue. Let v be a corresponding eigenvector. Then, as we showed in the previous topic, the complex conjugate v̄ is also an eigenvector, and the real vectors p = v + v̄ and q = i(v − v̄) are such that their scalar product vanishes:

p · q = i(v + v̄) · (v − v̄) = i(v · v − v̄ · v̄) = 0,

because v · v = 0 (since A is orthogonal, (Av) · (Av) equals both v · v and λ²(v · v), and λ² ≠ 1 for a complex eigenvalue), and v̄ · v̄, being its complex conjugate, vanishes as well.

This means p and q are orthogonal vectors. Also, they are both real vectors by construction. These vectors span the same subspace as v and v̄, which is an invariant subspace under the application of A. Therefore, they span an invariant plane.
This plane is orthogonal to the invariant axis, which corresponds to the remaining eigenvector of A, with eigenvalue 1, because of the orthogonality of the eigenvectors of A.
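A quick numerical check of this construction, assuming Python with NumPy: the real and imaginary parts of a complex eigenvector give two real vectors that are orthogonal to each other and to the rotation axis, so they span the invariant plane:

```python
import numpy as np

# Rotation by 0.9 rad about the axis (0, 0, 1).
c, s = np.cos(0.9), np.sin(0.9)
A = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.imag)      # pick a complex eigenvalue
v = vecs[:, k]
p, q = v.real, v.imag         # two real vectors built from the eigenvector

# p and q are orthogonal and span the invariant plane, which is
# perpendicular to the rotation axis (the eigenvector for eigenvalue 1).
axis = np.array([0, 0, 1])
assert np.isclose(p @ q, 0, atol=1e-10)
assert np.isclose(p @ axis, 0, atol=1e-10)
assert np.isclose(q @ axis, 0, atol=1e-10)
```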
Rotation of vectors
A vector is said to be rotating if it changes its orientation. This occurs precisely when its rate-of-change vector has a nonzero component perpendicular to the vector itself. This can be shown by considering a vector a parameterized by some variable t, for which

d(a · a)/dt = 2 a · (da/dt).

Taking a to be a unit vector û, so that a · a = 1 is constant, this gives û · (dû/dt) = 0, showing that the rate-of-change vector dû/dt is perpendicular to û.

From

da/dt = (da/dt)∥ + (da/dt)⊥,

since the first term is parallel to a and the second perpendicular to it, we can conclude in general that the parallel and perpendicular components of the rate of change of a vector independently influence only the magnitude or the orientation of the vector, respectively. Hence, a rotating vector always has a nonzero perpendicular component of its rate-of-change vector against the vector itself.
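The decomposition into parallel and perpendicular components can be sketched numerically (assuming Python with NumPy; the vectors are arbitrary illustrative values):

```python
import numpy as np

a = np.array([2.0, 0.0, 0.0])    # the vector itself
da = np.array([0.5, 1.0, 0.0])   # its rate of change

a_hat = a / np.linalg.norm(a)
par = (da @ a_hat) * a_hat        # parallel part: changes only |a|
perp = da - par                   # perpendicular part: changes only direction

assert np.allclose(par + perp, da)   # the two parts recompose da
assert np.isclose(perp @ a, 0)       # perpendicular part is orthogonal to a
```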
In higher dimensions
As dimensions increase, the number of independent rotations increases. In a four-dimensional space (a hypervolume), rotations occur along the x, y, z, and w axes. An object rotated on a w axis intersects various volumes, where each intersection is equal to a self-contained volume at an angle. This gives way to a new axis of rotation in a 4D hypervolume, where a 3D object can be rotated perpendicular to the z axis.
Physics
The speed of rotation is given by the angular frequency (rad/s) or frequency (turns per time), or period (seconds, days, etc.). The time-rate of change of angular frequency is angular acceleration (rad/s²), caused by torque. The ratio of torque τ to the angular acceleration α is given by the moment of inertia: I = τ / α.
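As a small worked illustration of the torque/angular-acceleration ratio (the body and values are chosen arbitrarily):

```python
# tau = I * alpha: a solid disk (I = m r^2 / 2) under constant torque.
m, r = 4.0, 0.5          # mass (kg) and radius (m), illustrative values
I = 0.5 * m * r**2       # moment of inertia: 0.5 kg*m^2
tau = 2.0                # applied torque, N*m
alpha = tau / I          # resulting angular acceleration, rad/s^2

assert alpha == 4.0
```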
The angular velocity vector (an axial vector) also describes the direction of the axis of rotation. Similarly, the torque is an axial vector.
The physics of the rotation around a fixed axis is mathematically described with the axis–angle representation of rotations. According to the right-hand rule, the direction away from the observer is associated with clockwise rotation and the direction towards the observer with counterclockwise rotation, like a screw.
Circular motion
It is possible for objects to have periodic circular trajectories without changing their orientation. These types of motion are treated under circular motion instead of rotation, more specifically as a curvilinear translation. Since translation involves displacement of rigid bodies while preserving the orientation of the body, in the case of curvilinear translation, all the points have the same instantaneous velocity whereas relative motion can only be observed in motions involving rotation.
In rotation, the orientation of the object changes and the change in orientation is independent of the observers whose frames of reference have constant relative orientation over time. By Euler's theorem, any change in orientation can be described by rotation about an axis through a chosen reference point. Hence, the distinction between rotation and circular motion can be made by requiring an instantaneous axis for rotation, a line passing through instantaneous center of circle and perpendicular to the plane of motion. In the example depicting curvilinear translation, the center of circles for the motion lie on a straight line but it is parallel to the plane of motion and hence does not resolve to an axis of rotation. In contrast, a rotating body will always have its instantaneous axis of zero velocity, perpendicular to the plane of motion.
More generally, due to Chasles' theorem, any motion of rigid bodies can be treated as a composition of rotation and translation, called general plane motion. A simple example of pure rotation is considered in rotation around a fixed axis.
Cosmological principle
The laws of physics are currently believed to be invariant under any fixed rotation. (Although they do appear to change when viewed from a rotating viewpoint: see rotating frame of reference.)
In modern physical cosmology, the cosmological principle is the notion that the distribution of matter in the universe is homogeneous and isotropic when viewed on a large enough scale, since the forces are expected to act uniformly throughout the universe and have no preferred direction, and should, therefore, produce no observable irregularities in the large scale structuring over the course of evolution of the matter field that was initially laid down by the Big Bang.
In particular, for a system which behaves the same regardless of how it is oriented in space, its Lagrangian is rotationally invariant. According to Noether's theorem, if the action (the integral over time of its Lagrangian) of a physical system is invariant under rotation, then angular momentum is conserved.
Euler rotations
Euler rotations provide an alternative description of a rotation. It is a composition of three rotations defined as the movement obtained by changing one of the Euler angles while leaving the other two constant. Euler rotations are never expressed in terms of the external frame, or in terms of the co-moving rotated body frame, but in a mixture. They constitute a mixed axes of rotation system, where the first angle moves the line of nodes around the external axis z, the second rotates around the line of nodes and the third one is an intrinsic rotation around an axis fixed in the body that moves.
These rotations are called precession, nutation, and intrinsic rotation.
Astronomy
In astronomy, rotation is a commonly observed phenomenon; it includes both spin (auto-rotation) and orbital revolution.
Spin
Stars, planets and similar bodies may spin around on their axes. The rotation rate of planets in the solar system was first measured by tracking visual features. Stellar rotation is measured through Doppler shift or by tracking active surface features. An example is sunspots, which rotate around the Sun at the same velocity as the outer gases that make up the Sun.
Under some circumstances orbiting bodies may lock their spin rotation to their orbital rotation around a larger body. This effect is called tidal locking; the Moon is tidally locked to the Earth.
This rotation induces a centrifugal acceleration in the reference frame of the Earth which slightly counteracts the effect of gravitation, more so the closer one is to the equator. Earth's gravity combines both effects such that an object weighs slightly less at the equator than at the poles. Another effect is that over time the Earth is slightly deformed into an oblate spheroid; a similar equatorial bulge develops for other planets.
Further consequences of the rotation of a planet are the phenomena of precession and nutation. Like a gyroscope, the overall effect is a slight "wobble" in the movement of the axis of a planet. Currently the tilt of the Earth's axis to its orbital plane (obliquity of the ecliptic) is 23.44 degrees, but this angle changes slowly (over thousands of years). (See also Precession of the equinoxes and Pole Star.)
Revolution
While revolution is often used as a synonym for rotation, in many fields, particularly astronomy and related fields, revolution, often referred to as orbital revolution for clarity, is used when one body moves around another while rotation is used to mean the movement around an axis. Moons revolve around their planets, planets revolve about their stars (such as the Earth around the Sun); and stars slowly revolve about their galaxial centers. The motion of the components of galaxies is complex, but it usually includes a rotation component.
Retrograde rotation
Most planets in the Solar System, including Earth, spin in the same direction as they orbit the Sun. The exceptions are Venus and Uranus. Venus may be thought of as rotating slowly backward (or being "upside down"). Uranus rotates nearly on its side relative to its orbit. Current speculation is that Uranus started off with a typical prograde orientation and was knocked on its side by a large impact early in its history. The dwarf planet Pluto (formerly considered a planet) is anomalous in several ways, including that it also rotates on its side.
Flight dynamics
In flight dynamics, the principal rotations described with Euler angles above are known as pitch, roll and yaw. The term rotation is also used in aviation to refer to the upward pitch (nose moves up) of an aircraft, particularly when starting the climb after takeoff.
Principal rotations have the advantage of modelling a number of physical systems such as gimbals, and joysticks, so are easily visualised, and are a very compact way of storing a rotation. But they are difficult to use in calculations as even simple operations like combining rotations are expensive to do, and suffer from a form of gimbal lock where the angles cannot be uniquely calculated for certain rotations.
Amusement rides
Many amusement rides provide rotation. A Ferris wheel has a horizontal central axis, and parallel axes for each gondola, where the rotation is opposite, by gravity or mechanically. As a result, at any time the orientation of the gondola is upright (not rotated), just translated. The tip of the translation vector describes a circle. A carousel provides rotation about a vertical axis. Many rides provide a combination of rotations about several axes. In Chair-O-Planes the rotation about the vertical axis is provided mechanically, while the rotation about the horizontal axis is due to the centripetal force. In roller coaster inversions the rotation about the horizontal axis is one or more full cycles, where inertia keeps people in their seats.
Sports
Rotation of a ball or other object, usually called spin, plays a role in many sports, including topspin and backspin in tennis, English, follow and draw in billiards and pool, curve balls in baseball, spin bowling in cricket, flying disc sports, etc. Table tennis paddles are manufactured with different surface characteristics to allow the player to impart a greater or lesser amount of spin to the ball.
Rotation of a player one or more times around a vertical axis may be called spin in figure skating, twirling (of the baton or the performer) in baton twirling, or 360, 540, 720, etc. in snowboarding, etc. Rotation of a player or performer one or more times around a horizontal axis may be called a flip, roll, somersault, heli, etc. in gymnastics, waterskiing, or many other sports, or a one-and-a-half, two-and-a-half, gainer (starting facing away from the water), etc. in diving, etc. A combination of vertical and horizontal rotation (back flip with 360°) is called a möbius in waterskiing freestyle jumping.
Rotation of a player around a vertical axis, generally between 180 and 360 degrees, may be called a spin move and is used as a deceptive or avoidance manoeuvre, or in an attempt to play, pass, or receive a ball or puck, etc., or to afford a player a view of the goal or other players. It is often seen in hockey, basketball, football of various codes, tennis, etc.
See also
Circular motion
Cyclone – large scale rotating air mass
Instant centre of rotation – instantaneously fixed point on an arbitrarily moving rigid body
Mach's principle – speculative hypothesis that a physical law relates the motion of the distant stars to the local inertial frame
Orientation (geometry)
Point reflection
Rolling – motion of two objects in contact with each-other without sliding
Rotation (quantity) – a unitless scalar representing the number of rotations
Rotation around a fixed axis
Rotation formalisms in three dimensions
Rotating locomotion in living systems
Top – spinning toy
Euler angle
References
External links
Product of Rotations at cut-the-knot.
When a Triangle is Equilateral at cut-the-knot.
Rotate Points Using Polar Coordinates, howtoproperly.com
Rotation in Two Dimensions by Sergio Hannibal Mejia after work by Roger Germundsson and Understanding 3D Rotation by Roger Germundsson, Wolfram Demonstrations Project. demonstrations.wolfram.com
Euclidean geometry
Classical mechanics
Orientation (geometry)
Kinematics | Rotation | Physics,Mathematics,Technology | 4,259 |
34,146,907 | https://en.wikipedia.org/wiki/Address%20%28programming%20language%29 | The Address programming language is one of the world's first high-level programming languages. It was created in 1955 by Kateryna Yushchenko. In particular, the Address programming language made possible indirect addressing and addresses of the highest rank, analogous to pointers.
Unlike Fortran and ALGOL 60, APL (the Address Programming Language) supported indirect addressing and addressing of higher ranks. Indirect addressing is a mechanism that appeared in other programming languages much later (in 1964, in PL/I).
The Address language was implemented on all the computers of the first and second generation produced in the Soviet Union. The Address language influenced the architecture of the Kyiv, M-20, Dnipro, Ural, Promin and Minsk computers. The Address programming language was used to solve problems in economics, aviation, space exploration, machine building, and the military complex; in particular, it was used to calculate the trajectories of ballistic missiles in flight in the 1950s–60s. Implementations of the Address programming language were used for nearly 20 years. A book about APL was published in Ukraine in 1963 and it was translated and published in France in 1974.
The Address language affected not only the economic development of the Soviet Union and other socialist countries, but information technology and programming worldwide. Ideas and tools proposed and implemented in APL can be found in many programming-related fields, such as abstract data types, object-oriented programming, functional programming, logic programming, databases and artificial intelligence.
Books
Glushkov V.M., & Yushchenko E.L., D 1966, The Kiev Computer; a Mathematical Description, USA, Ohio, Translation Division, Foreign Technology Div., Wright-Patterson AFB, 234p., ASIN: B0007G3QGC.
Gnedenko B.V., Koroliouk V. S. & Iouchtchenko E.L., D 1969, Eléments de programmation sur ordinateurs, Paris, Dunod, 362p., ASIN: B0014UQTU0, viewed 24 October 2021, URL: https://files.infoua.net/yushchenko/Elements-de-programmation-sur-ordinateurs_BGnedenko-VKoroliouk-EIouchtchenko_1969_France_OCR.pdf.
Gnedenko B.V., Koroljuk V.S. & Justschenko E.L., D 1964, Elemente der Programmierung, DDR, Leipzig, Verlag: B. G. Teubner, 327 oldal.
Gnedenko B.V., Korolyuk V.S. & Juscsenko E.L. D 1964, Bevezetѐs a progamozásba, – I, II. – Magyarország, Budapest, Uj technica.
Glushkov V.M. & Yushchenko E.L., The "Kiev" Computer: a Mathematical Description, Kyiv: Tekhn. lit., 1962, 183 p. (in Russian).
Kulinkovich A.E. & Yushchenko E.L., "On a Basic Algorithmic Language", Kibernetika, Kyiv, No. 2, 1965, pp. 3–9 (in Russian), URL: https://files.infoua.net/yushchenko/O-bazovom-algoritmicheskov-yazyke_AKulinkovich_EYushchenko_1965.pdf
Yushchenko E.L., Address Programming, Kyiv: Tekhn. lit., 1963, 286 p. (in Russian). https://files.infoua.net/yushchenko/Adresnoe-programmirovanie_EYushchenko_1963.pdf
Yushchenko E.L. & Grinchenko T.A., A Programming Program with an Address Input Language for the Ural-1 Machine, Kyiv: Naukova Dumka, 1964, 107 p. (in Russian).
Yushchenko E.L., "The Address Language (Topic 5)", in Cybernetics in Transport: A Correspondence Seminar, Kyiv House of Scientific-Technical Propaganda, Kyiv, 1962, 32 p. (in Russian), URL: Kibernetika-na-transporte_Adresnyy-yazyk_KYushchenko_1962.pdf (infoua.net)
Yushchenko E.L., Malinovsky B.N., Polishchuk G.A., Yadrenko E.K. & Nikitin A.I., The General-Purpose Control Machine "Dnipro" and its Programming Program, Kyiv: Naukova Dumka, 1964, 280 p. (in Russian).
References
Programming languages
Soviet inventions
Computing in the Soviet Union
Programming languages created by women
Ukrainian inventions | Address (programming language) | Technology | 1,386 |
47,301,692 | https://en.wikipedia.org/wiki/Cray%20Urika-XA | The Cray Urika-XA extreme analytics platform, manufactured by supercomputer maker Cray Inc., was an appliance that analyzes the massive amounts of data—usually called big data—that supercomputers collect. It was introduced in 2015 and discontinued in 2017. Organizations that use supercomputers have traditionally used multiple smaller off-the-shelf systems for data analysis. But as organizations see a dramatic increase in the amount of data they collect—everything from research data to retail transactions—they need data analytics systems that can make sense of it and help them use it strategically. In a nod to organizations that lean toward open-source software, the Urika-XA comes pre-installed with Cloudera Enterprise Hadoop and Apache Spark.
References
Further reading
Nicole Hemsoth (15 Oct 2014) "Cray Launches Hadoop into HPC Airspace." HPCWire.
"The Evolution of Data Analytics." Infographic.
Eileen McNulty (22 May 2014). "Understanding Big Data: The Seven V's." Dataconomy.
Andy Patrizio (30 Jun 2017). "Cray adds big data software to its supercomputers." NETWORKWORLD.
Cray products | Cray Urika-XA | Technology | 254 |
16,765,973 | https://en.wikipedia.org/wiki/WASP-11b/HAT-P-10b | WASP-11b/HAT-P-10b or WASP-11Ab/HAT-P-10Ab is an extrasolar planet discovered in 2008. The discovery was announced (under the designation WASP-11b) by press release by the SuperWASP project in April 2008 along with planets WASP-6b through to WASP-15b, however at this stage more data was needed to confirm the parameters of the planets and the coordinates were not given. On 26 September 2008, the HATNet Project's paper describing the planet which they designated HAT-P-10b appeared on the arXiv preprint server. The SuperWASP team's paper appeared as a preprint on the Extrasolar Planets Encyclopaedia on the same day, confirming that the two objects (WASP-11b and HAT-P-10b) were in fact the same, and the teams agreed to use the combined designation.
The planet had the third lowest insolation of the known transiting planets at the time of the discovery (prior to this, Gliese 436 b and HD 17156 b were known to have lower insolation). The temperature implies it falls into the pL class of hot Jupiters: planets which lack significant quantities of titanium(II) oxide and vanadium(II) oxide in their atmospheres and do not have temperature inversions. An alternative classification system for hot Jupiters is based on the equilibrium temperature and the planet's Safronov number. In this scheme, for a given temperature, class I planets have high Safronov numbers and tend to be in orbit around cooler host stars, while class II planets have lower Safronov numbers. In the case of WASP-11b/HAT-P-10b, the equilibrium temperature is 1030 K and the Safronov number is 0.047±0.003, which means it is located close to the dividing line between the class I and class II planets.
The planet is in a binary star system, the second star is WASP-11 B, with a mass 0.34 ± 0.05 of the Sun and a temperature of 3483 ± 43 K.
Notes
See also
OGLE-TR-111b
References
External links
Exoplanets discovered by WASP
Exoplanets discovered in 2008
Giant planets
Hot Jupiters
Transiting exoplanets
Perseus (constellation)
Exoplanets discovered by HATNet | WASP-11b/HAT-P-10b | Astronomy | 513
440,014 | https://en.wikipedia.org/wiki/Organ%20culture | Organ culture is the cultivation of either whole organs or parts of organs in vitro. It is a development from tissue culture methods of research, as the use of the actual in vitro organ itself allows for more accurate modelling of the functions of an organ in various states and conditions.
A key objective of organ culture is to maintain the architecture of the tissue and direct it towards normal development. In this technique, it is essential that the tissue is never disrupted or damaged. It thus requires careful handling. The media used for a growing organ culture are generally the same as those used for tissue culture. The techniques for organ culture can be classified into (i) those employing a solid medium and (ii) those employing liquid medium.
Organ culture technology has contributed to advances in embryology, inflammation, cancer, and stem cell biology research.
Current progress
In April 2006, scientists reported a successful trial of seven bladders grown in-vitro and given to humans. A bladder has been cultured by Anthony Atala of the Wake Forest Institute for Regenerative Medicine in Winston-Salem, North Carolina. A jawbone has been cultured at Columbia University, a lung has been cultured at Yale. A beating rat heart has been cultured by Doris Taylor at the University of Minnesota. An artificial kidney has been cultured by H. David Humes at the University of Michigan.
Silk cut from silkworm cocoons has been successfully used as growth scaffolding for heart tissue production. Heart tissue does not regenerate if damaged, so producing replacement patches is of great interest. The experiment used rat heart cells and produced functional heart tissue. In order to further test applications to humans as a cure, a way to transform human stem cells into heart tissue would have to be found.
In 2015, Harald Ott was able to grow a rat forelimb. He now works at Ott Lab which focuses on the creation of bioartificial hearts, lungs, tracheas and kidneys.
In 2016, another test was done in which human cells were used to assemble intricately structured hearts. The hearts ultimately proved immature, but the work showed a further step toward making a heart from stem cells.
In January 2017, scientists from Salk Institute for Biological Studies managed to create a pig embryo that had part of its DNA, critical for the growth of organs, edited out. They then introduced human stem cells inside the pig embryo to have the human DNA fill in the gaps.
In March 2022, research was published that demonstrated the tentative success of a corneal implant called BPCDX that was shown to have significant tissue attachment and host cell migration once implanted.
Methodology
In vitro culture
Embryonic organ culture is an easier alternative to normal organ culture derived from adult animals. The following are four techniques employed for embryonic organ culture.
Plasma clot method
The following are general steps in organ culture on plasma clots.
Prepare a plasma clot by mixing 15 drops of plasma with five drops of embryo extract in a watch glass.
Place a watch glass on a pad of cotton wool in a petri dish; cotton wool is kept moist to prevent excessive evaporation from the dish.
Place a small, carefully dissected piece of tissue on top of the plasma clots in watch glass.
The technique has since been modified: a raft of lens paper or rayon net is used, on which the tissue is placed, so that the tissue can then be transferred easily by means of the raft. Excess fluid is removed, and the net with the tissue is placed again on a fresh pool of medium.
Agar gel method
Media solidified with agar are also used for organ culture and these media consist of 7 parts 1% agar in BSS, 3 parts chick embryo extract and 3 parts of horse serum. Defined media with or without serum are also used with agar. The medium with agar provides the mechanical support for organ culture. It does not liquefy. Embryonic organs generally grow well on agar, but adult organ culture will not survive on this medium.
The culture of adult organs or parts from adult animals is more difficult due to their greater requirement of oxygen. A variety of adult organs (e.g. the liver) have been cultured using special media with special apparatus (Trowell's type II culture chamber). Since serum was found to be toxic, serum-free media were used, and the special apparatus permitted the use of 95% oxygen.
Raft Methods
In this approach the explant is placed onto a raft of lens paper or rayon acetate, which is floated on serum in a watch glass. Rayon acetate rafts are made to float on the serum by treating their 4 corners with silicone.
Similarly, floatability of lens paper is enhanced by treating it with silicone. On each raft, 4 or more explants are usually placed.
In a combination of raft and clot techniques, the explants are first placed on a suitable raft, which is then kept on a plasma clot. This modification makes media changes easy, and prevents the sinking of explants into liquefied plasma.
Grid Method
Initially devised by Trowell in 1954, the grid method utilizes 25 mm x 25 mm pieces of a suitable wire mesh or perforated stainless steel sheet whose edges are bent to form 4 legs of about 4 mm height.
Skeletal tissues are generally placed directly on the grid but softer tissues like glands or skin are first placed on rafts, which are then kept on the grids.
The grids themselves are placed in a culture chamber filled with fluid medium up to the grid; the chamber is supplied with a mixture of O2 and CO2 to meet the high O2 requirements of adult mammalian organs. A modification of the original grid method is widely used to study the growth and differentiation of adult and embryonic tissues.
Uses
Cultured organs can be an alternative for organs from other (living or deceased) people.
This is useful as the availability of transplantable organs (derived from other people) is declining in developed countries.
Another advantage is that cultured organs, created using the patients own stem cell, allows for organ transplants where the patient would no longer require immunosuppressive drugs.
Limitations
Results from in vitro organ cultures are often not comparable to those from in vivo studies (e.g. studies on drug action) since the drugs are metabolized in vivo but not in vitro.
See also
Cell culture
Tissue culture
3D bioprinting
References
External links
Fetal Thymus Organ Culture
Histology
Laboratory techniques | Organ culture | Chemistry | 1,311 |
9,176,452 | https://en.wikipedia.org/wiki/Supramolecular%20polymer | Supramolecular polymers are a subset of polymers where the monomeric units are connected by reversible and highly directional secondary interactions–that is, non-covalent bonds. These non-covalent interactions include van der Waals interactions, hydrogen bonding, Coulomb or ionic interactions, π-π stacking, metal coordination, halogen bonding, chalcogen bonding, and host–guest interaction. Their behavior can be described by the theories of polymer physics in dilute and concentrated solution, as well as in the bulk.
Additionally, some supramolecular polymers have distinctive characteristics, such as the ability to self-heal. Covalent polymers can be difficult to recycle, but supramolecular polymers may address this problem.
History
The prehistory of the field of supramolecular polymers lies in dye aggregates and host–guest complexes. In the early 20th century, it was noticed that dyes aggregate via "a special kind of polymerization". In 1988, Takuzo Aida, a Japanese polymer chemist, reported the concept of cofacial assembly, wherein amphiphilic porphyrin monomers are connected via van der Waals interactions to form one-dimensional architectures in solution, which can be considered a prototype of supramolecular polymers. Soon thereafter, one-dimensional aggregates based on hydrogen-bonding interactions were described in the crystalline state. With a different strategy using hydrogen bonds, Jean M. J. Fréchet showed in 1989 that mesogenic molecules with carboxylic acid and pyridyl motifs, upon mixing in bulk, heterotropically dimerize to form a stable liquid crystalline structure. In 1990, Jean-Marie Lehn showed that this strategy can be expanded to form a new category of polymers, which he called "liquid crystalline supramolecular polymers", using complementary triple hydrogen bonding motifs in bulk. In 1993, M. Reza Ghadiri reported a nanotubular supramolecular polymer in which a β-sheet-forming macrocyclic peptide monomer assembled via multiple hydrogen bonds between adjacent macrocycles. In 1994, Anselm C. Griffin showed an amorphous supramolecular material using a single hydrogen bond between homotropic molecules having carboxylic acid and pyridine termini. The idea of making mechanically strong polymeric materials by 1D supramolecular association of small molecules requires a high association constant between the repeating building blocks. In 1997, E.W. "Bert" Meijer reported a telechelic monomer with ureidopyrimidinone termini as a "self-complementary" quadruple hydrogen bonding motif and demonstrated that the resulting supramolecular polymer in chloroform shows temperature-dependent viscoelastic properties in solution. 
This is the first demonstration that supramolecular polymers, when sufficiently mechanically robust, are physically entangled in solution.
Formation mechanisms
Monomers undergoing supramolecular polymerization are considered to be in equilibrium with the growing polymers, and thermodynamic factors therefore dominate the system. However, when the constituent monomers are connected via strong and multivalent interactions, a "metastable" kinetic state can dominate the polymerization. Externally supplied energy, in the form of heat in most cases, can transform the "metastable" state into a thermodynamically stable polymer. A clear understanding of how multiple pathways operate in supramolecular polymerization is still under debate; however, the concept of "pathway complexity", introduced by E.W. "Bert" Meijer, shed light on the kinetic behavior of supramolecular polymerization. Many scientists have since expanded the scope of "pathway complexity" because it can produce a variety of interesting assembled structures from the same monomeric units. Along this line of kinetically controlled processes, supramolecular polymers with "stimuli-responsive" and "thermally bisignate" characteristics are also possible.
In conventional covalent polymerization, two models based on step-growth and chain-growth mechanisms are operative. Nowadays, a similar subdivision is accepted for supramolecular polymerization: the isodesmic or equal-K model (step-growth mechanism) and the cooperative or nucleation–elongation model (chain-growth mechanism). A third category is seeded supramolecular polymerization, which can be considered a special case of the chain-growth mechanism.
Step-growth polymerization
The supramolecular equivalent of the step-growth mechanism is commonly known as the isodesmic or equal-K model (K represents the total binding interaction between two neighboring monomers). In isodesmic supramolecular polymerization, no critical temperature or monomer concentration is required for polymerization to occur, and the association constant between polymer and monomer is independent of the polymer chain length. Instead, the length of the supramolecular polymer chains rises as the concentration of monomers in the solution increases, or as the temperature decreases. In conventional polycondensation, the association constant is usually large, which leads to a high degree of polymerization; however, a byproduct is observed. In isodesmic supramolecular polymerization, due to non-covalent bonding, the association between monomeric units is weak, and the degree of polymerization strongly depends on the strength of interaction, i.e. multivalent interaction between monomeric units. For instance, supramolecular polymers consisting of bifunctional monomers with a single hydrogen bonding donor/acceptor at their termini usually end up with a low degree of polymerization, whereas those with quadruple hydrogen bonding, as in the case of ureidopyrimidinone motifs, result in a high degree of polymerization. In ureidopyrimidinone-based supramolecular polymers, the experimentally observed molecular weight at semi-dilute concentrations is on the order of 10⁶ Dalton, and the molecular weight of the polymer can be controlled by adding monofunctional chain cappers.
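The dependence of chain length on K and concentration can be made concrete with the standard closed-form result for the equal-K model. A minimal numerical sketch (the K values below are illustrative order-of-magnitude assumptions, not measurements from any specific system):

```python
import math

def isodesmic_dp(K, c_tot):
    """Number-averaged degree of polymerization <DP_N> for the isodesmic
    (equal-K) model, where every association step shares one constant K
    (in M^-1) and c_tot (in M) is the total monomer concentration:
        DP_N = (1 + sqrt(1 + 4*K*c_tot)) / 2
    For K*c_tot >> 1 this reduces to DP_N ~ sqrt(K*c_tot)."""
    return 0.5 * (1.0 + math.sqrt(1.0 + 4.0 * K * c_tot))

# Illustrative comparison at 10 mM total monomer:
weak = isodesmic_dp(1e2, 0.01)    # single-H-bond-like K: short oligomers (~1.6)
strong = isodesmic_dp(1e7, 0.01)  # quadruple-H-bond-like K: long chains (~320)
```

Because DP_N grows only as the square root of K·c_tot, doubling the concentration lengthens chains by roughly √2, which is why raising K through multivalent motifs such as ureidopyrimidinone is far more effective than raising concentration.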
Chain-growth polymerization
Conventional chain-growth polymerization involves at least two phases, initiation and propagation, and in some cases termination and chain transfer phases also occur. Chain-growth supramolecular polymerization in a broad sense involves two distinct phases: a less favored nucleation and a favored propagation. In this mechanism, after the formation of a nucleus of a certain size, the association constant increases, further monomer addition becomes more favored, and polymer growth is initiated. Long polymer chains will form only above a minimum monomer concentration and below a certain temperature. However, to realize a covalent analogue of chain-growth supramolecular polymerization, a challenging prerequisite is the design of appropriate monomers that can polymerize only by the action of initiators. Recently, an example of chain-growth supramolecular polymerization with "living" characteristics was demonstrated. In this case, a bowl-shaped monomer with amide-appended side chains forms a kinetically favored intramolecular hydrogen bonding network and does not spontaneously undergo supramolecular polymerization at ambient temperatures. However, an N-methylated version of the monomer serves as an initiator by opening the intramolecular hydrogen bonding network for supramolecular polymerization, much like ring-opening covalent polymerization. The chain end in this case remains active for further extension of the supramolecular polymer, and hence the chain-growth mechanism allows for precise control of supramolecular polymer materials.
Seeded polymerization
This is a special category of chain-growth supramolecular polymerization, in which the monomer nucleates only in an early stage of polymerization to generate "seeds" and becomes active for polymer chain elongation upon further addition of a new batch of monomer. Secondary nucleation is suppressed in most cases, making it possible to achieve a narrow polydispersity of the resulting supramolecular polymer. In 2007, Ian Manners and Mitchell A. Winnik introduced this concept using a polyferrocenyldimethylsilane–polyisoprene diblock copolymer as the monomer, which assembles into cylindrical micelles. When a fresh feed of the monomer is added to the micellar "seeds" obtained by sonication, the polymerization starts in a living polymerization manner. They named this method crystallization-driven self-assembly (CDSA), and it is applicable to constructing micron-scale supramolecular anisotropic structures in 1D–3D. A conceptually different seeded supramolecular polymerization was shown by Kazunori Sugiyasu with a porphyrin-based monomer bearing amide-appended long alkyl chains. This monomer preferentially forms spherical J-aggregates at low temperature and fibrous H-aggregates at higher temperature. By adding a sonicated mixture of the J-aggregates ("seeds") into a concentrated solution of the J-aggregate particles, long fibers can be prepared via living seeded supramolecular polymerization. Frank Würthner achieved similar seeded supramolecular polymerization with an amide-functionalized perylene bisimide as monomer. Importantly, seeded supramolecular polymerization is also applicable to preparing supramolecular block copolymers.
Examples
Hydrogen bonding interaction
Monomers capable of forming single, double, triple or quadruple hydrogen bonds have been utilized for making supramolecular polymers, and stronger association of monomers is naturally possible when the monomers have a maximum number of hydrogen-bonding donor/acceptor motifs. For instance, a ureidopyrimidinone-based monomer with self-complementary quadruple hydrogen bonding termini polymerized in solution in accordance with the theory of conventional polymers and displayed a distinct viscoelastic nature at ambient temperatures.
π-π stacking
Monomers with aromatic motifs such as bis(merocyanine), oligo(para-phenylenevinylene) (OPV), perylene bisimide (PBI) dye, cyanine dye, corannulene and nano-graphene derivatives have been employed to prepare supramolecular polymers. In some cases, hydrogen bonding side chains appended onto the core aromatic motif help to hold the monomer strongly in the supramolecular polymer. A notable system in this category is a nanotubular supramolecular polymer formed by the supramolecular polymerization of amphiphilic hexa-peri-hexabenzocoronene (HBC) derivatives. Generally, nanotubes are categorized morphologically as 1D objects; however, their walls adopt a 2D geometry and therefore require a different design strategy. HBC amphiphiles in polar solvents solvophobically assemble into a 2D bilayer membrane, which rolls up into a helical tape or a nanotubular polymer. Conceptually similar amphiphilic designs based on a cyanine dye and a zinc chlorin dye also polymerize in water, resulting in nanotubular supramolecular polymers.
Host-guest interaction
A variety of supramolecular polymers can be synthesized by using monomers with host–guest complementary binding motifs, such as crown ethers/ammonium ions, cucurbiturils/viologens, calixarenes/viologens, cyclodextrins/adamantane derivatives, and pillararene/imidazolium derivatives. When the monomers are "heteroditopic", supramolecular copolymers result, provided the monomers do not homopolymerize. Akira Harada was one of the first to recognize the importance of combining polymers and cyclodextrins. Feihe Huang showed an example of a supramolecular alternating copolymer from two heteroditopic monomers carrying both crown ether and ammonium ion termini. Takeharu Haino demonstrated an extreme example of sequence control in a supramolecular copolymer, where three heteroditopic monomers are arranged in an ABC sequence along the copolymer chain. The design strategy, utilizing three distinct binding interactions: ball-and-socket (calix[5]arene/C60), donor–acceptor (bisporphyrin/trinitrofluorenone), and Hamilton's hydrogen-bonding interactions, is the key to attaining the high orthogonality needed to form an ABC supramolecular terpolymer.
Chirality
Stereochemical information in a chiral monomer can be expressed in a supramolecular polymer. Helical supramolecular polymers with P- and M-conformations are widely seen, especially those composed of disc-shaped monomers. When the monomers are achiral, both P- and M-helices are formed in equal amounts. When the monomers are chiral, typically due to the presence of one or more stereocenters in the side chains, the diastereomeric relationship between P- and M-helices leads to the preference of one conformation over the other. A typical example is a C3-symmetric disc-shaped chiral monomer that forms helical supramolecular polymers via the "majority rule": a slight excess of one enantiomer of the chiral monomer results in a strong bias toward either the right-handed or left-handed helical geometry at the supramolecular polymer level. In this case, a characteristic nonlinear dependence of the anisotropy factor, g, on the enantiomeric excess of the chiral monomer is generally observed. As in small-molecule chiral systems, the chirality of a supramolecular polymer is also affected by chiral solvents. Applications such as catalysts for asymmetric synthesis and circularly polarized luminescence have also been observed in chiral supramolecular polymers.
Copolymers
A copolymer is formed from more than one monomeric species. Advanced polymerization techniques have been established for the preparation of covalent copolymers, but supramolecular copolymers are still in their infancy and progressing slowly. In recent years, all plausible categories of supramolecular copolymers, such as random, alternating, block, blocky, and periodic, have been demonstrated in a broad sense.
Properties
Supramolecular polymers are the subject of research in academia and industry.
Reversibility and dynamicity
The stability of a supramolecular polymer can be described using the association constant, Kass. When Kass ≤ 10⁴ M⁻¹, the polymeric aggregates are typically small in size and do not show any interesting properties, and when Kass ≥ 10¹⁰ M⁻¹, the supramolecular polymer behaves just like a covalent polymer due to the lack of dynamics. So, an optimum Kass = 10⁴–10¹⁰ M⁻¹ needs to be attained for producing functional supramolecular polymers. The dynamics and stability of supramolecular polymers are often affected by additives (e.g. a co-solvent or chain capper). When a good solvent, for instance chloroform, is added to a supramolecular polymer in a poor solvent, for instance heptane, the polymer disassembles. However, in some cases, cosolvents contribute to the stabilization/destabilization of the supramolecular polymer. For instance, supramolecular polymerization of a hydrogen-bonding porphyrin-based monomer in a hydrocarbon solvent containing a minute amount of a hydrogen-bond-scavenging alcohol shows distinct pathways, i.e. polymerization favored both by cooling and by heating, and is known as "thermally bisignate supramolecular polymerization". In another example, minute amounts of molecularly dissolved water in apolar solvents, such as methylcyclohexane, become part of the supramolecular polymer at lower temperatures, due to specific hydrogen bonding interactions between the monomer and water.
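The Kass window quoted above can be translated into binding free energies via the standard thermodynamic relation ΔG° = −RT ln Kass. A quick sketch (298.15 K and ideal-solution behavior are assumed here for illustration):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def binding_free_energy(K_ass, T=298.15):
    """Standard binding free energy in kJ/mol from an association
    constant K_ass (M^-1): dG = -R*T*ln(K_ass)."""
    return -R * T * math.log(K_ass) / 1000.0

# The "functional" window Kass = 1e4 to 1e10 M^-1 maps to roughly
# -23 to -57 kJ/mol at room temperature:
dg_weak = binding_free_energy(1e4)    # ~ -22.8 kJ/mol
dg_strong = binding_free_energy(1e10) # ~ -57.1 kJ/mol
```

This makes the design window tangible: each factor of ten in Kass is worth only about 5.7 kJ/mol at room temperature, so small structural changes to the binding motif can move a system across the whole functional range.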
Self-healing
Supramolecular polymers may be relevant to self-healing materials. A supramolecular rubber based on vitrimers can self-heal simply by pressing the two broken edges of the material together. High mechanical strength and self-healing ability are generally mutually exclusive, so a glassy material that can self-heal at room temperature remained a challenge until recently. A supramolecular polymer based on ether-thiourea is mechanically robust (E = 1.4 GPa) yet can self-heal at room temperature upon compression of the fractured surfaces. The invention of self-healable polymer glass overturned the preconception that only soft rubbery materials can heal.
Another strategy uses bivalent poly(isobutylene)s (PIBs) functionalized with barbituric acid at the head and tail. Multiple hydrogen bonds between the carbonyl and amide groups of barbituric acid enable it to form a supramolecular network. In this case, snipped small PIB-based disks can recover from mechanical damage after several hours of contact at room temperature.
Interactions between catechol and ferric ions yield pH-controlled self-healing supramolecular polymers. The formation of mono-, bis- and triscatechol-Fe3+ complexes can be manipulated by pH, of which the bis- and triscatechol-Fe3+ complexes show elastic moduli as well as self-healing capacity. For example, the triscatechol-Fe3+ complex can restore its cohesiveness and shape after being torn. Chain-folding polyimides and pyrenyl-end-capped chains also give rise to supramolecular networks.
Optoelectronic
By incorporating electron donors and electron acceptors into the supramolecular polymers, features of artificial photosynthesis can be replicated.
Biocompatible
DNA is a major example of a supramolecular polymer, and much effort has been devoted to related but synthetic materials. At the same time, their reversible and dynamic nature makes supramolecular polymers biodegradable, which surmounts the hard-to-degrade issue of covalent polymers and makes supramolecular polymers a promising platform for biomedical applications. Being able to degrade in a biological environment greatly lowers the potential toxicity of polymers and therefore enhances the biocompatibility of supramolecular polymers.
Biomedical applications
With their excellent biodegradability and biocompatibility, supramolecular polymers show great potential in the development of drug delivery, gene transfection and other biomedical applications.
Drug delivery: Multiple cellular stimuli can induce responses in supramolecular polymers. The dynamic molecular skeletons of supramolecular polymers can be depolymerized when exposed to external stimuli such as pH in vivo. On the basis of this property, supramolecular polymers are capable of serving as drug carriers; for example, hydrogen bonding between nucleobases can induce self-assembly into pH-sensitive spherical micelles.
Gene transfection: Effective and low-toxicity nonviral cationic vectors are highly desired in the field of gene therapy. On account of their dynamic and stimuli-responsive properties, supramolecular polymers offer a cogent platform for constructing vectors for gene transfection. By combining a ferrocene dimer with a β-cyclodextrin dimer, a redox-controlled supramolecular polymer system has been proposed as a vector. In COS-7 cells, this supramolecular polymeric vector can release the enclosed DNA upon exposure to hydrogen peroxide and achieve gene transfection.
Adjustable mechanical properties
Basic principle: Noncovalent interactions between polymer molecules significantly affect the mechanical properties of supramolecular polymers. The association and dissociation rates of the interacting groups determine the intermolecular interaction strength. For supramolecular polymers, the dissociation kinetics of the dynamic network plays a critical role in the material design and mechanical properties of supramolecular polymer networks (SPNs). By changing the dissociation rate of the dynamic polymer crosslinks, supramolecular polymers acquire adjustable mechanical properties: with a slow dissociation rate, glass-like mechanical properties dominate; with a fast dissociation rate, rubber-like mechanical properties dominate. These properties can be obtained by changing the molecular structure of the crosslinking part of the molecule.
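The slow-versus-fast dissociation crossover can be sketched with a single-mode Maxwell model, in which the crosslink dissociation rate kd sets the network relaxation time τ = 1/kd. This is an illustrative toy model with made-up parameter values, not the constitutive model used in any particular SPN study:

```python
def maxwell_moduli(G0, k_d, omega):
    """Storage and loss moduli (G', G'') of a single-mode Maxwell element
    with plateau modulus G0, crosslink dissociation rate k_d (1/s), and
    oscillation frequency omega (rad/s), with tau = 1/k_d:
        G'  = G0 * (omega*tau)**2 / (1 + (omega*tau)**2)
        G'' = G0 *  omega*tau     / (1 + (omega*tau)**2)
    Slow dissociation (omega*tau >> 1): G' dominates -> solid/glass-like.
    Fast dissociation (omega*tau << 1): G'' dominates -> network flows,
    rubber/liquid-like response."""
    wt = omega / k_d  # omega * tau
    g_storage = G0 * wt * wt / (1.0 + wt * wt)
    g_loss = G0 * wt / (1.0 + wt * wt)
    return g_storage, g_loss
```

At a fixed observation frequency, lowering kd (stronger, slower crosslinks) pushes the network into the elastic, glass-like regime, while raising kd lets stress relax faster than the deformation, giving the softer, flow-dominated response described above.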
Experimental examples: One study controlled the molecular design of cucurbit[8]uril, CB[8]. The hydrophobic structure of the second guest of the CB-mediated host–guest interaction can tune the dissociative kinetics of the dynamic crosslinks. To slow the dissociation rate (kd), a stronger enthalpic driving force is needed for the second-guest association (ka) to release more of the conformationally restricted water from the CB[8] cavity. In other words, the most hydrophobic second guest exhibits the highest Keq and lowest kd values. Therefore, by polymerizing different concentrations of polymer subgroups, different dynamics of the intermolecular network can be designed. For example, mechanical properties such as compressive strain can be tuned by this process. When polymerized with different hydrophobic subgroups in CB[8], the compressive strength was found to increase across the series in correlation with a decrease in kd, and could be tuned between 10–100 MPa. NVI, the most hydrophobic subgroup of the monomer, contains two benzene rings, whereas BVI, the control, is the least hydrophobic subgroup. In addition, at varying concentrations of hydrophobic subgroups in CB[8], the polymerized molecules show different compressive properties: polymers with the highest concentration of hydrophobic subgroups show the highest compressive strain, and vice versa.
Biomaterials
Supramolecular polymers can simultaneously meet the requirements of aqueous compatibility, biodegradability, biocompatibility, stimuli-responsiveness and other strict criteria. Consequently, supramolecular polymers are applicable to biomedical fields.
The reversible nature of supramolecular polymers can produce biomaterials that can sense and respond to physiological cues, or that mimic the structural and functional aspects of biological signaling.
Protein delivery, bio-imaging, diagnosis, and tissue engineering are also well developed.
Further reading
References
Supramolecular chemistry
Polymers | Supramolecular polymer | Chemistry,Materials_science | 4,811 |
63,593,401 | https://en.wikipedia.org/wiki/Japanese%20amber | Japanese amber is a type of amber that can be found in Japan.
The largest sources of this substance are located in Honshu. It is similar to Baltic amber and has similar general use. However, Japanese amber is softer and much more difficult to treat than the Baltic type. Its treatment requires special care and precision because stones can be easily damaged. Its color range varies from many shades of orange to brown. It is characterized by dark spots that can be found on its surface. The opacity of Japanese amber varies from clear to opaque pieces.
Location
Sources of Japanese amber can be found in many different locations all over Japan, spanning about 2800 km from Hokkaido in the north to Kyūshū in the south. The only mine still open is the Fuji mine, where amber has been recovered since the 6th century AD. In 1938, up to 13 tons of amber were recovered there. Two pieces recovered in the mine survive: one in a private collection (mass: 19 kg, size 40x40x2 cm, recovered in 1927) and one in an exhibition at the National Museum of Nature and Science in Tokyo (mass: 16 kg, size 40x23x23 cm, recovered in 1941).
Use
Due to its soft, easily damaged surface, Japanese amber is not widely used. It can be found in jewellery as a decorative gemstone or as decoration on clothes and utility items. A 6th-century decorative pillow adorned with Japanese amber was part of an exhibition in Kaliningrad. Modern artists prefer to use Baltic amber, as it is easier to work with and has similar aesthetic value.
References
Amber | Japanese amber | Physics | 333 |
45,079,890 | https://en.wikipedia.org/wiki/Penicillium%20brunneoconidiatum | Penicillium brunneoconidiatum is a fungal species of the genus Penicillium.
See also
List of Penicillium species
References
Fungi described in 2014
brunneoconidiatum
Fungus species | Penicillium brunneoconidiatum | Biology | 47 |
4,477,256 | https://en.wikipedia.org/wiki/Losing%20stream | A losing stream, disappearing stream, influent stream or sinking river is a stream or river that loses water as it flows downstream. The water infiltrates into the ground recharging the local groundwater, because the water table is below the bottom of the stream channel. This is the opposite of a more common gaining stream (or effluent stream) which increases in water volume farther downstream as it gains water from the local aquifer.
Losing streams are common in arid areas, where the climate causes large amounts of water to evaporate from the river, generally toward its mouth. Losing streams are also common in regions of karst topography, where the streamwater may be completely captured by a cavern system, becoming a subterranean river.
Examples
There are many natural examples of subterranean rivers including:
Bosnia and Herzegovina
Unac; Mušnica-Trebišnjica-Krupa/Ombla (Trebišnjica is considered to be one of the largest sinking rivers in the world; one of its effluents, Ombla, springs out of huge cave near Dubrovnik, Croatia and after about 30 metres empties into Adriatic Sea's ria called Rijeka Dubrovačka); Zalomka-Buna/Bunica/Bregava; Vrljika-Trebižat; Lištica-Jasenica; Šuica-Ričina
Germany
The Danube River disappears in the Danube Sinkhole between Immendingen and Möhringen in an area of karst.
New Zealand
The Selwyn River / Waikirikiri normally disappears below ground as it flows down the Canterbury Plains due to overlaying a deep and porous aquifer, re-emerging about 15 kilometres away from its output at Lake Ellesmere / Te Waihora.
United States
There are two rivers in Idaho, the Big Lost River and the Little Lost River, which both flow into the same depression and become subterranean, feeding the Snake River Plain Aquifer. Via the aquifer and numerous springs, they are tributaries of the Snake River.
The Lost River in Indiana rises in Vernon Township, Washington County, Indiana, and discharges into the East Fork of the White River. The Lost River is about long and its name is derived from the fact that at least of the primary course of the river flows completely underground. The river disappears into a series of sink holes of the type that are abundant in the karstland of southern Indiana.
The Lost River of New Hampshire is a stream in the White Mountains of New Hampshire. It is part of the Pemigewasset River watershed. The Lost River begins in Kinsman Notch, one of the major passes through the White Mountains. As it flows through the notch, it passes through Lost River Gorge, an area where enormous boulders falling off the flanking walls of the notch at the close of the last Ice Age have covered the river, creating a network of boulder caves.
The Lost River of West Virginia is located in the Appalachian Mountains of Hardy County in the Eastern Panhandle region of the state. It flows into an underground channel northeast of Baker along West Virginia Route 259 at "the Sinks" and reappears near Wardensville as the Cacapon River.
See also
Ponor
Groundwater
Spring (hydrology)
Subterranean river
References
Tom Aley, Karst Groundwater, Missouri Conservationist Online, Mar. 2000 – Vol. 61 No. 3
Hydrology
Dinaric karst formations
Karst formations
Karst
Geomorphology | Losing stream | Chemistry,Engineering,Environmental_science | 711 |
67,745,009 | https://en.wikipedia.org/wiki/Jin-Quan%20Yu | Jin-Quan Yu () is a Chinese-born American chemist. He is the Frank and Bertha Hupp Professor of Chemistry at Scripps Research, where he also holds the Bristol Myers Squibb Endowed Chair in Chemistry. He is a 2016 recipient of the MacArthur Fellowship, and is a member of the American Academy of Arts and Sciences, American Association for the Advancement of Science, and the Royal Society of Chemistry. Yu is a leader in the development of C–H bond activation reactions in organic chemistry, and has reported many C–H activation reactions that could be applicable towards the synthesis of drug molecules and other biologically active compounds. He also co-founded Vividion Therapeutics in 2016 with fellow Scripps chemists Benjamin Cravatt and Phil Baran, and is a member of the scientific advisory board of Chemveda Life Sciences.
Early life and education
Yu was born on January 10, 1966, in Zhejiang, China. He received his B.Sc. in chemistry at East China Normal University in 1987. Yu then went on to the Guangzhou Institute of Chemistry, Chinese Academy of Sciences where he worked on heterogeneous reactions of terpenes with zeolite materials with Prof. Shu-De Xiao, obtaining his M.Sc. in 1990. He remained at the Guangzhou Institute of Chemistry for four years as a research associate.
In 1994, Yu moved to the United Kingdom to pursue graduate studies at the University of Cambridge with Prof. Jonathan B. Spencer. At Cambridge, he studied biosynthesis and the mechanistic details of the hydrometallation step in asymmetric hydrogenation reactions with heterogeneous and homogeneous catalysts, among the twenty-one papers he co-authored with Spencer. Yu graduated with his Ph.D. in 1999.
Between 1999 and 2001, Yu worked as a Junior Research Fellow of St John's College, Cambridge. From 2001-2002, Yu worked as a postdoctoral fellow at Harvard University in the laboratory of Prof. E. J. Corey on selective palladium-catalyzed allylic oxidation reactions. Yu returned to Cambridge in 2002 and continued in his position as a Junior Research Fellow.
Independent career
Yu was awarded a Royal Society University Research Fellowship in 2003, which allowed him to start his independent research towards the development of asymmetric C–H activation reactions. In 2004, he moved to Brandeis University as an assistant professor of chemistry. He moved to The Scripps Research Institute as an associate professor in 2007 and was promoted to full professor in 2010. In 2012, he was appointed the Frank and Bertha Hupp Professor of Chemistry.
Research
Yu is a synthetic organic chemist who develops new methods for functionalizing carbon-hydrogen (C–H) bonds, a field known as C–H activation. A longstanding goal in organic synthesis, C–H activation would allow inert, unreactive C–H bonds to be replaced with bonds to functional groups that can alter a molecule's reactivity and properties. One strategy to achieve selective C–H activation under mild conditions is to use metal-based catalysts that are guided to the targeted C–H bond by nearby directing functional groups. These directing groups often must be removed once the new functional group has been appended to the molecule. This style of C–H activation methodology could greatly simplify the synthesis of pharmaceutical drug molecules, agrochemicals, and natural products.
Yu has contributed to palladium-catalyzed C–H bond activation promoted by "weak coordination", that is, by directing-group effects. Other areas of interest include the development of remote C–H bond activation, for example at the position meta to a directing group. Since many drugs and natural products are chiral, Yu has also developed important asymmetric C–H bond activation reactions, including those templated by modified amino acids that can act as transient, chiral directing groups.
Awards and memberships
Yu is the recipient of numerous awards and honors for his work in organic chemistry reaction development, including a MacArthur Fellowship (also known as a "Genius Grant") in 2016. He was elected a member of the American Academy of Arts and Sciences in 2019, a fellow of the American Association for the Advancement of Science and the Royal Society of Chemistry in 2012. In 2013, he received the Raymond and Beverly Sackler Prize in the Physical Sciences.
Yu received the Pedler Award from the Royal Society of Chemistry in 2016, and the Elias J. Corey Award for Outstanding Original Contribution in Organic Synthesis by a Young Investigator from the American Chemical Society. In 2012, he was awarded the Mukaiyama Award from the Japanese Society of Organic Synthesis, the ACS Cope Scholar Award, and the Bristol-Myers Squibb Award. His honors also include the Novartis Early Career Award in Organic Chemistry (2011), Eli Lilly Grantee Award (2008), Amgen Young Investigator's Award (2008), and Sloan Research Fellowship (2008).
Personal life
Yu has a son, Tony.
References
Living people
1966 births
Chinese organic chemists
American organic chemists
Scripps Research faculty
21st-century Chinese chemists
Fellows of St John's College, Cambridge
21st-century American chemists | Jin-Quan Yu | Chemistry | 1,053 |
1,128,934 | https://en.wikipedia.org/wiki/Patrick%20Matthew | Patrick Matthew (20 October 1790 – 8 June 1874) was a Scottish grain merchant, fruit farmer, forester, and landowner, who contributed to the understanding of horticulture, silviculture, and agriculture in general, with a focus on maintaining the British navy and feeding new colonies. He published the basic concept of natural selection as a mechanism in evolutionary adaptation and speciation (directional selection) and species constancy or stasis (stabilizing selection) in 1831 in a book called Naval Timber and Arboriculture in which he uses the phrase "the natural process of selection". He did not further publicly develop his ideas until after Darwin and Wallace published their theories of evolution by natural selection in 1859. It has been suggested that Darwin and/or Wallace had encountered Matthew's earlier work, but there is no evidence of this. After the publication of On the Origin of Species, Darwin became aware of Matthew's 1831 book and subsequent editions of The Origin include an acknowledgment that Matthew "gives precisely the same view on the origin of species as that" given in the "present volume".
Biography
Patrick Matthew was born 20 October 1790 at Rome, a farm held by his father John Matthew near Scone Palace, in Perthshire. His mother was Agnes Duncan, a relative of Adam Duncan, 1st Viscount Duncan. In 1807, Matthew inherited Gourdiehill from Adam Duncan.
Matthew was educated at Perth Academy and the University of Edinburgh, but did not graduate due to the death of his father. Matthew had to take over the responsibilities of managing and running the affairs of a property estate at Gourdiehill. Over the years he successfully nurtured, cultivated, and transformed much of the estate's farmland and pastures into several large orchards of apple and pear trees, numbering over 10,000. During this time, Matthew became an avid researcher of both silviculture and horticulture. His research and experience at the modest estate framed a strong base of reference to form his own opinions and theories.
Matthew periodically traveled to Europe between 1807 and 1831 either on business or for his scientific studies. A trip to Paris in 1815 had to be cut short when Napoleon returned from Elba. Between 1840 and 1850, Matthew traveled extensively in what is now northern Germany. Recognizing the commercial potential of Hamburg, he bought two farms in Schleswig-Holstein.
Matthew married his maternal first cousin, Christian Nicol in 1817, and they had eight children: John (born 1818), Robert (1820), Alexander (1821), Charles (1824), Euphemia (1826), Agnes (1828), James Edward (1830), and Helen Amelia (1833). Robert farmed Gourdiehill in Patrick's old age, Alexander took over the German interests; the other three sons emigrated, initially to the United States. Matthew became interested in the colonization of New Zealand and was instrumental in setting up a "Scottish New Zealand Land Company". At his urging, James and Charles Matthew emigrated to New Zealand, where they set up one of the earliest commercial orchards in Australasia using seed and seedlings from Gourdiehill. John Matthew remained in America, sending botanical tree specimens back to his father; these included the first seedlings known to have been planted in Europe of both the Giant Redwood and the Coastal Redwood. A group of trees of these species still thriving near Inchture in Perthshire comes from these seedlings. Matthew gave many more seedlings to friends, relatives and neighbors, and redwoods can be found throughout the Carse of Gowrie; these as well as some elsewhere in Scotland (e.g. at Gillies Hill near Stirling Castle) are thought to have been grown from the seedlings. His reputation as a local celebrity faded in the twentieth century, when he was remembered as a "character" who at the end of his life became convinced that "someone very dear to his heart" had become a bird, and "that was the rizzen he wouldna allow the blackies to be shot in his orchard for fear they would shute her, ye ken, although the blackies were sair on the fruit".
Matthew's house, Gourdiehill, fell into disrepair in the 1970s and 1980s, and was demolished in 1990 when the grounds became a small housing estate; some of the salvaged stone was incorporated in a rock garden.
Work
In managing his orchards, Patrick Matthew became familiar with the problems related to the principles of husbandry in horticulture for food production (and hence, by extension silviculture). In 1831, Matthew published On Naval Timber and Arboriculture to mixed reception. Notably, the book contains an addendum that discusses natural selection 28 years before Charles Darwin's publication of On the Origin of Species.
Charles Darwin and natural selection
In 1860, Matthew read in the Gardeners' Chronicle for 3 March a review (by Huxley), republished from The Times, of Charles Darwin's On the Origin of Species, which said Darwin "professes to have discovered the existence and the modus operandi of natural selection, and described its principles". A letter by Matthew, published in the Gardeners' Chronicle on 7 April 1860, said that this was what he had "published very fully and brought to apply practically to forestry" in Naval Timber and Arboriculture in 1831, as publicised in reviews. He quoted extracts from his book, firstly the opening words of Note B from pages 364–365 of the Appendix, stopping before his discussion of hereditary nobility and entail.
He then quoted in its entirety a section from pages 381 to 388 of the Appendix. This lacked a heading, but in the Contents appeared as "Accommodation of organized life to circumstance, by diverging ramifications". In it, he commented on the difficulty of distinguishing "between species and variety". The change of the fossil record between geological eras implied living organisms having "a power of change, under a change of circumstances", in the same way as the "derangements and changes in organised existence, induced by a change of circumstance from the interference of man" gave "proof of the plastic quality of superior life" which he called "a circumstance-suiting power". Following past deluges, "an unoccupied field would be formed for new diverging ramifications of life" in "the course of time, moulding and accommodating their being anew to the change of circumstances". He proposed that "the progeny of the same parents, under great difference of circumstance, might, in several generations, even become distinct species, incapable of co-reproduction."
He described this as a "circumstance-adaptive law, operating upon the slight but continued natural disposition to sport in the progeny". Matthew then quoted the opening three paragraphs from Part III of his book, Miscellaneous Matter Connected with Naval Timber: Nurseries, pages 106 to 108, on "the luxuriance and size of timber depending upon the particular variety of the species" and the need to select seed from the best individuals when growing trees.
On reading this, Darwin commented in a letter to Charles Lyell dated 10 April:
Darwin then wrote a letter of his own to the Gardener's Chronicle, stating,
As promised, Darwin included a statement in the third (1861) and subsequent editions of On the Origin of Species, acknowledging that Matthew had anticipated "precisely the same view on the origin of species" and "clearly saw...the full force of the principle of natural selection". The statement referred to the correspondence, and quoted from a response by Matthew published in the Gardener's Chronicle.
In June 1864, after visiting his son who was farming in Schleswig-Holstein, Matthew wrote to Darwin about his pamphlet publishing five of his letters. The title page of this political pamphlet by Matthew stated his claim to be "solver of the problem of species". In a letter to Hooker (22 and 28 October 1865), Darwin commented that William Charles Wells, in an essay "read in 1813 to Royal Soc. but not printed", had applied "most distinctly the principle of N. Selection to the races of man.— So poor old Patrick Matthew, is not the first, & he cannot or ought not any longer put on his Title pages 'Discoverer of the principle of Natural Selection'!."
Matthew's legacy in evolutionary studies
Matthew, Darwin and Wallace are the only three people considered to have independently discovered the principle of natural selection as a mechanism for speciation (macroevolution). Others prior to Matthew had proposed natural selection as a mechanism for the generation of varieties or races within a species: James Hutton suggested the mechanism in 1794 as leading to improvement of varieties, and an 1813 paper by William Charles Wells proposed that it would form new varieties.
Modern claims for Matthew's priority
Although Darwin insisted he had been unaware of Matthew's work, some modern commentators have held that he and Wallace were likely to have known of it, or could have been influenced indirectly by other naturalists who read and cited Matthew's book.
Ronald W. Clark, in his 1984 biography of Darwin, commented that "Only the transparent honesty of Darwin's character... makes it possible to believe that by the 1850s he had no recollection of Matthew's work." This begs the question, for it assumes he did read Matthew's book. Clark continues by suggesting: "If Darwin had any previous knowledge of Arboriculture, it had slipped down into the unconscious."
In 2014, Nottingham Trent University criminologist Mike Sutton published, in non-peer-reviewed proceedings (i.e. not reviewed by experts in the field), a research paper that he presented to a British Society of Criminology conference, proposing that both Darwin and Wallace had "more likely than not committed the world's greatest science fraud by apparently plagiarising the entire theory of natural selection from a book written by Patrick Matthew and then claiming to have no prior knowledge of it." On 28 May 2014 The Daily Telegraph science correspondent reported Sutton's views, and also the opinion of Darwin biographer James Moore that this was a non-issue (below). Sutton published a 2014 non-peer-reviewed e-book, Nullius in Verba: Darwin's Greatest Secret, reiterating his argument and alleging that "the orthodox Darwinist account" is wrong as "Darwin/Wallace corresponded with, were editorially assisted by, admitted to being influenced by and met with other naturalists who - it is newly discovered - had read and cited Matthew's book long before 1858". Sutton included as one of these naturalists the publisher Robert Chambers, and said it was significant that the book by Matthew had been cited in the weekly magazine Chambers's Edinburgh Journal on 24 March 1832, and that in 1844 Chambers had anonymously published the best-selling Vestiges of the Natural History of Creation which, according to Sutton, had influenced Darwin and Wallace. In 2015, Sutton further repeated his assertion of "knowledge contamination" in the Polish journal Filozoficzne Aspekty Genezy (F.A.G.) (Philosophical Aspects of Genesis), which Sutton asserts is peer-reviewed, and about which one of the journal's editors responded, "As to Sutton, he cannot justifiably claim much credibility for his ideas just because these are published in such a journal like ours, i.e. one adopting Feyerabendian pluralism. If he thinks otherwise, it is only his problem. Any reasonable person should know better."
In addition to his papers and e-book, Sutton disseminates his claims against Charles Darwin and Alfred Russel Wallace via several blog sites and Twitter accounts, and public lectures: to the Ethical Society, at the Conway Hall, on 27 July 2014; to the Teesside Skeptics in the Pub, at O'Connells Pub in Middlehaven, a ward of Middlesbrough, on 2 October 2014; and to the Carse of Gowrie Sustainability Group, at the James Hutton Institute, at Craigiebuckler, Aberdeen, on 17 March 2016.
However, there is no direct evidence that Darwin had read the book, and his letter to Charles Lyell stating that he had ordered the book clearly indicates that he did not have a copy in his extensive library or access to it elsewhere. The particular claim that Robert Chambers had read and transmitted Matthew's ideas relevant to natural selection is also not supported by the facts. The article in the Chambers's Edinburgh Journal (1832, vol. 1, no. 8, 24 March, p. 63) is not a review but only an abridged excerpt from pp. 8–14 of On Naval Timber that amounts to no more than a recipe for pruning and contains nothing of relevance to natural selection. It is headed "ON THE TRAINING OF PLANK TIMBER" and ends with ".— Matthew on Naval Timber." Even if it had been penned by Robert Chambers, this does not mean that he had read or understood, let alone transmitted, the other passages of Matthew's book that do contain material relevant to natural selection. Further, the Vestiges of the Natural History of Creation contains nothing of relevance about natural selection. Combining these facts, Robert Chambers had probably not read or received the message about natural selection in Matthew's book, likely did not promulgate it in the Vestiges, and probably did not do so in conversations either.
Rebuttal of claims
Challenges to Matthew's claim to priority, or to claims made on his behalf since he died, have essentially made reference to the same issues: that his description of natural selection was not accessible and that it lacked lengthier development. Other criticisms have focussed on the differences between Darwin's and Matthew's versions of natural selection, and sometimes Wallace's too (e.g., Weale 2015). If Matthew's ideas had made the claimed impact on subsequent evolutionary thinking, the signals ought to be there, either during Matthew's lifetime or Darwin's. Yet modern claims for Matthew's priority have been unable to provide evidence for this that has withstood fact-checking.
Accessibility and development
Historian of science, Peter Bowler succinctly summarised some of those main reasons given for why Matthew does not deserve priority for natural selection over Darwin and Wallace,
Ernst Mayr's opinion was even more clear-cut:
Richard Dawkins also grants that Matthew had grasped the general concept of natural selection, but failed to appreciate its significance or to develop it further,
In response to Sutton's e-book, Darwin biographer James Moore said many people came towards a similar perception during the 19th century, but Darwin was the only one who fully developed the idea:
In response to Sutton (2015) Darwin and Wallace scholar, John van Wyhe commented,
To coincide with Sutton's presentation to the Carse of Gowrie Sustainability Group, Darwin author, Julian F. Derry sent an open letter, saying,
Analysis of comparative speciation concepts
Sutton's claim that Darwin and Wallace plagiarised evolution by natural selection from Matthew also has been refuted by Joachim Dagg,
[Wallace's] concept of lineage-adaptation as a sequence of extinctions of less fit and survival of fitter varieties and his gradualism put him closer to Darwin than to Matthew. But he emphasized environmental changes for differential extinction and some form of isolation for lineage-splitting and speciation, whereas Darwin's mature theory saw competition as a sufficient cause of divergence, differential extinction, lineage-adaptation and lineage-splitting. This is not to say that Darwin was right in this view and Wallace wrong. By current standards, they were both right and wrong in different respects (competitive vs. environmental selection, sympatric vs. allopatric speciation).
The perspective emerging from this comparison shows at least four unique theories (Matthew, early Darwin, mature Darwin and Wallace), each interesting in its own right. Each theory integrated change in conditions, variability, competition and natural selection in ways that allowed for species transformation somehow. Apart from this similarity, the theories differ significantly from each other in the mechanisms underlying transformation. However, this difference does not lie in the struggle for survival and survival of the fittest, but in the way in which natural selection is integrated with variability, competition and environmental conditions. Transmutation is a convergent result of structurally different mechanisms.
The similarity of Matthew's scheme to the theory of punctuated equilibria is equally superficial. Eldredge & Gould (1972) took Mayr's model of allopatric speciation and combined it with Wright's model of genetic drift in order to explain gaps in the fossil record as results of relatively swift evolutionary change in small and isolated populations. Although catastrophes can produce such populations they are not required, and the mechanism underlying the punctuated record is the drift within small and isolated populations, not the absence of competing species that would prevent species transmutation. Therefore, viewing Matthew (1831) as an anticipator of the theory of punctuated equilibria (e.g. Rampino, 2011) is as wrong as claiming his scheme identical to Darwin's or Wallace's.
Darwin's contemporaries
While completing a doctoral thesis on Disputes of Plagiarism in Darwin's Theory of Evolution at the University of Zielona Gora, where the journal Filozoficzne Aspekty Genezy (F.A.G.) (Philosophical Aspects of Genesis) is based, Grzegorz Malec published a critical review of Sutton (2015), in which the main difficulty of valid identification of communication pathways was discussed, along with observations on Sutton's alternative approach,
Natural theology
Writing to Darwin in 1871, Matthew enclosed an article he had written for The Scotsman and, as well as wishing that he had time to write a critique of The Descent of Man, and Selection in Relation to Sex, expressed the belief that there is evidence of design and benevolence in nature, and that beauty cannot be accounted for by natural selection. Such a belief is mainstream natural theology, and reveals how far Matthew was from Darwin in realising the potential of evolutionary explanations: for him as well as others, man was the sticking-point.
There is little or no evidence that Matthew held these views as a younger man: there is no discussion of a religious nature in Arboriculture.
Socio-political views
Matthew's ideas on society were radical for their times. Although he was a landowner, he was involved with the Chartist movement, and argued that institutions of hereditary nobility were detrimental to society. It has been suggested that these views worked against acceptance of his theory of natural selection, being politically incorrect at the time (see Barker, 2001). The more likely reason is that the obscurity of the location hid the ideas from many who would have been interested. Only after Darwin's Origin did Matthew come forward in a popular journal, the Gardeners' Chronicle. Matthew also published a book in 1839, Emigration Fields (Black, Edinburgh), suggesting that overpopulation, as predicted by Malthus, could be solved by mass migration to North America and the Dominions.
Matthew supported the invasion of Schleswig-Holstein by Bismarck in 1864: his pamphlet on the event was denounced by the Dundee Advertiser. He also supported the Germans against the French in the Franco-Prussian War (1870–71), a war which marked the final unification of the German Empire and the end of the Second French Empire.
In 1870 Matthew became aware of the terrible housing conditions of the workers in Dundee. In a letter to the Dundee Advertiser he told readers that the death rate of children under five in the town was 40%, and outlined a blueprint for the redevelopment of the city.
The Tay bridge
When the Edinburgh and Northern Railway (E&N) and the Dundee and Perth Railway (D&P) were seeking Parliamentary approval in 1845, it was proposed by their engineers that from Perth both should share a line running along the south bank of the Tay as far as Newburgh, where the D&P would cross to the north bank, and the E&N leave the Tay and head south to a ferry crossing of the Forth. Matthew had been in a very small minority supporting this, and the D&P as built crossed the Tay at Perth. In 1864, when a bridge crossing the Tay at Dundee was proposed, Matthew urged that a bridge at Newburgh was preferable to a bridge at Dundee, a Newburgh bridge giving much the same reduction in the rail distance between Dundee and the Forth ferry-ports from which passengers could cross to Edinburgh as a bridge at Dundee but doing so by a shorter (and therefore cheaper) crossing of the Tay. He argued the costs of a Dundee bridge were being grossly under-estimated: "To erect a substantial bridge, not a flimsy spectral thing, which might or not vanish as a phantom the first storm, or break down under the vibration caused by a heavy, rapid, moving train, would, in my opinion cost nearly double, and probably much more than double, the sum the Engineer states; upon this I stake my judgement against that of the Engineer", noting in passing, "from the geological indices, I would expect the foundation to be more regular at Newburgh than at Dundee, consequently better".
The financial crisis of 1866 put an end to the 1864 Tay Bridge proposal, but it was revived in 1869. Matthew responded with a series of letters to the Dundee papers arguing for a Newburgh bridge, and advancing all manner of additional arguments against a Dundee bridge; it would have a deleterious effect on silting and tidal scour in the Firth; it would prevent navigation upstream of it; it would be torn apart by the centrifugal force from heavy trains rapidly descending the curve at its northern end; it was vulnerable to earthquake, a ship colliding with a pier, or to high wind.
Matthew's objections were not heeded, and were not persisted in once Parliament had passed the Bill authorising construction of the Tay Bridge. During construction of the bridge some of Matthew's criticisms were borne out: it became apparent that bedrock could not be found at a depth allowing the use of brick piers; the design had to be modified to use lattice-work iron piers of reduced width, and there was considerable cost overrun. The bridge opened in June 1878 and was destroyed in a storm in December 1879: the lattice work piers supporting the centre section of the bridge (the high girders) failed catastrophically as a train was crossing the bridge. The high girders and the train fell into the Tay and about seventy-five lives were lost. Whilst it was recalled in the immediate aftermath of the disaster that Matthew had predicted collapse in a high wind as one of the horrible ends to which a bridge at Dundee could come, the disaster is generally ascribed to defects in the design and manufacture of the lattice work piers introduced into the design well after Matthew's campaign against the bridge.
See also
Evolution
History of evolutionary thought
Natural selection
Tay Bridge disaster
William Charles Wells
Notes
Citations
References
Barker, J.E. (2001). Patrick Matthew—Forest Geneticist (1790–1874) , Forest History Today.
Dempster, W.J. (1996). Natural selection and Patrick Matthew: evolutionary concepts in the nineteenth century. The Pentland Press, Edinburgh.
Sutton, M. (2014). The hi-tech detection of Darwin's and Wallace's possible science fraud: Big data criminology re-writes the history of contested discovery, Papers from the British Criminology Conference. Vol. 14: 49-64 Panel Paper. The British Society of criminology. Accessed July 2015. But see Dagg (2018).
Weale, M. E. (2015), Patrick Matthew's law of natural selection., Biological Journal of the Linnean Society. doi:10.1111/bij.12524 Accessed April 2015
External links
Patrick Matthew Biography – UC Berkeley
The Patrick Matthew Project – Links to Matthew's writings
Natural Selection as a Creative Force by Stephen Jay Gould
Patrick Matthew.com
Critique of "Nullius in Verba: Darwin's Greatest Secret"
Article exploring the question of whether Patrick Matthew independently discovered natural selection
1790 births
1874 deaths
Catastrophism
Charles Darwin
Chartists
Scottish farmers
Proto-evolutionary biologists
Pre-Darwinian publications in evolutionary biology
19th-century Scottish people
People educated at Perth Academy
Alumni of the University of Edinburgh
Scottish science writers
People from Perth and Kinross
Scottish letter writers
19th-century Scottish landowners
Scottish agriculturalists
19th-century Scottish businesspeople
Oregrounds iron was a grade of iron that was regarded as the best grade available in 18th century England. The term was derived from the small Swedish city of Öregrund, the port from which the bar iron was shipped. It was produced using the Walloon process.
Oregrounds iron is the equivalent of the Swedish vallonjärn, which literally translates as Walloon iron. The Swedish name derives from the iron being produced by the Walloon version of the finery forge process, as opposed to the German method, which was more common in Sweden. In practice, the term is more specialised: all the Swedish Walloon forges made iron from ore ultimately derived from the Dannemora mine. It was made in about 20 forges, mainly in Uppland.
Many of the ironworks were founded by Louis de Geer and other Dutch entrepreneurs who set up ironworks in Sweden in the 1610s and 1620s, with blast furnaces and finery forges. Most of the early forgemen were also from Wallonia.
Origins in Wallonia
The technique was developed in Wallonia in present-day Belgium during the Middle Ages. The Walloon method consisted of making pig iron in a blast furnace, followed by refining it in a finery forge. The process was devised in the Liège region, and spread into France and thence from the Pays de Bray to England before the end of the 15th century. Louis de Geer took it to Roslagen in Sweden in the early 17th century, where he employed Walloon ironmakers. Iron made there by this method was known in England as oregrounds iron.
Quality, uses and marketing
Swedish law required bars of iron to have the forge's mark stamped into it for quality control reasons. In Britain, the iron was known by these 'marks', and the quality of each brand was well-known to the buyers in London, Sheffield, Birmingham and elsewhere. It was divided into two grades:
'First oregrounds' came from Österby ('double bullet'), Leufsta (now Lövsta - hoop L), and Åkerby (PL crown). Later Gimo joined them.
'Second oregrounds' came from the other forges, including Forsmark, Harg, Vattholma, and Ullfors.
Its special property was its purity. The manganese content of the Dannemora ore caused impurities, which would otherwise have remained in the iron, to react preferentially with the manganese and to be carried off into the slag. This level of purity meant that the iron was particularly suitable for conversion to steel by being re-carburized using the cementation process, and oregrounds iron was consequently an indispensable raw material for metal manufactures, particularly the Sheffield cutlery industry. Substantial quantities were also (until about 1808) bought for use by the British Navy.
This and other uses absorbed substantially the whole output of the industry. The trade in oregrounds iron was controlled from the 1730s to the 1850s by a cartel of merchants, of whom the longest enduring members were the Sykes family of Hull. Other participants were resident in (or controlling imports through) London and Bristol. These merchants advanced money to Swedish exporting houses, which in turn advanced it to the ironmasters, thus buying up the output of the forges several years in advance.
References
K. C. Barraclough, Steelmaking before Bessemer: I Blister Steel (Metals Society, London, 1985).
K. C. Barraclough, 'Swedish iron and Sheffield steel' History of Technology 12 (1990), 1-39 - originally published in Swedish in A Attman et al., Forsmark och vallonjärnet [Forsmark and Walloon iron] (Sweden 1987)
P. W. King, 'The Cartel in Oregrounds Iron' Journal of Industrial History 6(1) (2003), 25-48.
K-G. Hildebrand, Swedish iron in the seventeenth and eighteenth centuries: export industry before industrialization (Stockholm 1992).
Notes
Metallurgy
Ferrous alloys
Uppland
Goods manufactured in Sweden
Economic history of Sweden
Iron
Jon Agee (born 1960) is a children's book writer and illustrator whose work centers around wordplay. Since 1981, he has published more than 31 books.
Early life and education
Agee was born in Nyack, New York in 1960. He attended Cooper Union School of Art and graduated with a BFA degree.
Career
Agee's art style is known for its "trademark blocky ink-and-watercolor illustrations," according to The New York Times.
In the 1990s, he wrote two musicals for children for the Tada! theater company, one of which was titled B.O.T.C.H, short for Bureau of Turmoil, Chaos and Headaches, a fictional New York City agency in charge of disrupting city functioning.
He has written cartoons for The New Yorker.
Agee has published several books of palindromes and word play such as anagrams and oxymorons. He became interested in them after a friend started writing them. "I liked the way absurdity and logic were intertwined," Agee said. In its review of Agee's book of 60 illustrated oxymorons called Who Ordered the Jumbo Shrimp? The New York Times wrote that "it would be a near miss, if not a minor catastrophe, not to take the calculated risk of treating the whole family to this instant classic."
His books include the 1996 picture book Dmitri the Astronaut, Smart Feller Fart Smeller, and many more.
At the first annual Symmys palindrome awards, he won in the short palindrome category for "An igloo costs a lot, Ed! Amen. One made to last! So cool, Gina!". He also won in 2021.
Personal life
Agee lives in San Francisco with his wife, Audrey. He enjoys crossword puzzles. In 2003, New York Times puzzle editor Will Shortz wrote that Agee had thanked him for including his name in a Friday crossword and joked that "he would not be satisfied until his name appeared in a Monday puzzle, the easiest of the week, where every answer is supposed to be familiar to most solvers. Only then would he know that he had truly arrived."
List of works
Picture books
If Snow Falls (1982)
Ellsworth (1983)
Ludlow Laughs (1985)
The Incredible Painting of Felix Clousseau (1988)
The Return of Freddy LeGrand (1992)
Flapstick (1993)
Dmitri the Astronaut (1996)
The Return of Freddy Legrand (1999)
Milo's Hat Trick (2001)
When Z Goes Home (2003)
Terrific (2005)
Why Did the Chicken Cross the Road? (2006)
Nothing (2007)
The Retired Kid (2008)
My Rhinoceros (2011)
The Other Side of Town (2012)
Little Santa (2013)
It's Only Stanley (2015)
Lion Lessons (2016)
Life on Mars (2017)
The Wall in the Middle of the Book (2018)
I Want a Dog (2019)
My Dad Is a Tree (2023)
Collections of word play
Go Hang a Salami! I'm a Lasagna Hog!: And Other Palindromes (1991)
So Many Dynamos!: And Other Palindromes (1994)
Who Ordered the Jumbo Shrimp?: And Other Oxymorons (1998)
Sit on a Potato Pan, Otis!: More Palindromes (1999)
Elvis Lives!: And Other Anagrams (2000)
Palindromania! (2002)
Smart Feller Fart Smeller: And Other Spoonerisms (2006)
Orangutan Tongs: Poems to Tangle Your Tongue (2009)
Mr. Putney's Quacking Dog (2010)
Otto: A Palindrama (2021)
As illustrator
Natalie Babbitt and others, The Big Book for Peace (1990)
Dee Lillegard, Sitting in My Box (1989)
Tor Seidler, Mean Margaret (1997)
Erica Silverman, The Halloween House (1998)
William Steig, Potch & Polly (2002)
References
American children's book illustrators
American children's writers
Writers who illustrated their own writing
20th-century American illustrators
21st-century American illustrators
American cartoonists
The New Yorker cartoonists
American humorists
Anagrammatists
Palindromists
Writers from San Francisco
Cooper Union alumni
21st-century American male writers
20th-century American male writers
1960 births
Living people
Hyraceum is the petrified and rock-like excrement composed of both urine and feces of the rock hyrax (Procavia capensis) and closely related species.
The rock hyrax defecates in the same location over generations, which may be sheltered in caves. These locations form middens that are composed of hyraceum and hyrax pellets, which can be petrified and preserved for over 50,000 years. These middens form a record of past climate and vegetation.
It is also a sought-after material that has been used in both traditional South African medicine and perfumery.
Hyraceum in perfumery
The material hardens and ages until it becomes a fairly sterile, rock-like material (also referred to as "Africa Stone") that contains compounds giving it an animalic, deeply complex fermented scent that combines the elements of musk, castoreum, civet, tobacco and agarwood. The material is harvested without disturbing the animals by digging strata of the brittle, resinous, irregular, blackish-brown stone; because animals are not harmed in its harvesting, it is often an ethical substitute for deer musk and civet, which require killing or inflicting pain on the animal.
Hyraceum accumulates extremely slowly, making it essentially a non-renewable resource. Considering that hyraceum – accumulating in the form of rock hyrax middens – is in many cases the only available source for information regarding climate and environmental change in arid regions of Africa and Arabia, its collection for commercial sale has been criticized in scientific circles as the destruction of a critical resource that could help to understand the impact of climate change in sensitive regions.
Hyraceum in traditional South African medicine
After it has fossilized, hyraceum has been used as a traditional folk medicine in South Africa for treating epilepsy.
One clinical study of 14 samples of the material collected at various geographical locations in South Africa tested the material for its affinity for the GABA-benzodiazepine receptor, a neurological receptor site targeted in the treatment of seizures by benzodiazepines such as diazepam and lorazepam. Four of the hyraceum samples assayed positive for affinity for the receptor sites; however, extracts in water were inactive.
See also
Tinnunculite
References
Animal waste products
Hyraxes
Perfume ingredients
Science and technology in South Africa
Traditional African medicine | Hyraceum | Biology | 519 |
14,818,578 | https://en.wikipedia.org/wiki/RNF4 | RING finger protein 4 is a protein that in humans is encoded by the RNF4 gene.
The protein encoded by this gene contains a RING finger domain and acts as a transcription factor. This protein has been shown to interact with, and inhibit the activity of, TRPS1, a transcription suppressor of GATA-mediated transcription. Transcription repressor ZNF278/PATZ1 is found to interact with this protein, and thus reduce the enhancement of androgen receptor-dependent transcription mediated by this protein. Studies of the mouse and rat counterparts suggested a role of this protein in spermatogenesis.
Interactions
RNF4 has been shown to interact with TCF20, PATZ1 and Androgen receptor. RNF4 has been shown to be responsible for the degradation of the Werner syndrome helicase in MSI-H cells after WRN inhibition.
See also
RING finger domain
References
Further reading
External links
RING finger proteins | RNF4 | Chemistry | 190 |
976,730 | https://en.wikipedia.org/wiki/Safe%20operating%20area | For power semiconductor devices (such as BJT, MOSFET, thyristor or IGBT), the safe operating area (SOA) is defined as the voltage and current conditions over which the device can be expected to operate without self-damage.
SOA is usually presented in transistor datasheets as a graph with VCE (collector-emitter voltage) on the abscissa and ICE (collector-emitter current) on the ordinate; the safe 'area' referring to the area under the curve. The SOA specification combines the various limitations of the device — maximum voltage, current, power, junction temperature, secondary breakdown — into one curve, allowing simplified design of protection circuitry.
Often, in addition to the continuous rating, separate SOA curves are also plotted for short duration pulse conditions (1 ms pulse, 10 ms pulse, etc.).
The safe operating area curve is a graphical representation of the power handling capability of the device under various conditions. The SOA curve takes into account the wire bond current carrying capability, transistor junction temperature, internal power dissipation and secondary breakdown limitations.
Limits of the safe operating area
Where both current and voltage are plotted on logarithmic scales, the borders of the SOA are straight lines:
1. IC = ICmax — current limit
2. VCE = VCEmax — voltage limit
3. IC·VCE = Pmax — dissipation limit (thermal breakdown)
4. IC·VCE^α = const — secondary-breakdown limit (bipolar junction transistors only)
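As an illustration of how these limit lines combine, the sketch below evaluates the composite SOA boundary as the pointwise minimum of the individual limits. All ratings used here (i_max, v_max, p_max, alpha, k_sb) are invented for the example, not taken from any datasheet:

```python
def soa_current_limit(vce, i_max=10.0, v_max=100.0,
                      p_max=50.0, alpha=1.5, k_sb=200.0):
    """Maximum permissible collector current (A) at collector-emitter
    voltage vce (V): the pointwise minimum of the four limit lines.
    All ratings here are hypothetical example values.
    """
    if vce <= 0.0 or vce > v_max:       # 2. voltage limit
        return 0.0
    return min(
        i_max,                          # 1. current limit
        p_max / vce,                    # 3. dissipation limit
        k_sb / vce ** alpha,            # 4. secondary breakdown (BJTs only)
    )

# At low V_CE the current limit binds; at moderate V_CE the dissipation
# hyperbola takes over; at high V_CE the steeper secondary-breakdown
# line becomes the binding constraint.
for v in (1.0, 10.0, 50.0):
    print(f"V_CE = {v:5.1f} V  ->  I_C <= {soa_current_limit(v):.3f} A")
```

On logarithmic axes these three expressions plot as straight lines of slope 0, −1 and −α respectively, which is why datasheet SOA boundaries appear as straight segments.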
SOA specifications are useful to the design engineer working on power circuits such as amplifiers and power supplies as they allow quick assessment of the limits of device performance, the design of appropriate protection circuitry, or selection of a more capable device. SOA curves are also important in the design of foldback circuits.
Secondary breakdown
For a device that makes use of the secondary breakdown effect, see Avalanche transistor.
Secondary breakdown is a failure mode in bipolar power transistors. In a power transistor with a large junction area, under certain conditions of current and voltage, the current concentrates in a small spot of the base-emitter junction. This causes local heating, progressing into a short between collector and emitter. This often leads to the destruction of the transistor. Secondary breakdown can occur both with forward and reverse base drive. Except at low collector-emitter voltages, the secondary breakdown limit restricts the collector current more than the steady-state power dissipation of the device. Older power MOSFETs did not exhibit secondary breakdown, with their safe operating area being limited only by maximum current (the capacity of the bonding wires), maximum power dissipation and maximum voltage. This has changed in more recent devices as detailed in the next section. However, power MOSFETs have parasitic PN and BJT elements within the structure, which can cause more complex localized failure modes resembling secondary breakdown.
MOSFET thermal runaway in linear mode
In their early history, MOSFETs became known for their absence of secondary breakdown. This benefit was due to the fact that ON-resistance increases with increasing temperature, so that part of the MOSFET which is running hotter (e.g. due to irregularities in the die-attachment, etc.) will carry a lower current density, tending to even out any temperature variation and prevent hot spots. Recently, MOSFETs with very high transconductance, optimised for switching operation, have become available. When operated in linear mode, especially at high drain-source voltages and low drain currents, the gate-source voltage tends to be very close to the threshold voltage. Unfortunately the threshold voltage decreases as temperature increases, so that if there are any slight temperature variations across the chip, then the hotter regions will tend to carry more current than the cooler regions when Vgs is very close to Vth. This can lead to thermal runaway and the destruction of the MOSFET even when it is operating within its Vds, Id and Pd ratings. Some (usually expensive) MOSFETs are specified for operation in the linear region and include DC SOA diagrams, e.g. IXYS IXTK8N150L.
Reverse bias safe operating area
Transistors require some time to turn off, due to effects such as minority carrier storage time and capacitance. While turning off, they may be damaged depending on how the load responds (especially with poorly snubbed inductive loads). The reverse bias safe operating area (or RBSOA) is the SOA during the brief time before turning the device into the off state—during the short time when the base current bias is reversed. As long as the collector voltage and collector current stay within the RBSOA during the entire turnoff, the transistor will be undamaged. Typically the RBSOA will be specified for a variety of turn-off conditions, such as shorting the base to the emitter, but also faster turn-off protocols where the base-emitter voltage bias is reversed.
The RBSOA shows distinct dependencies compared to the normal SOA. For example in IGBTs the high-current, high-voltage corner of the RBSOA is cut out when the collector voltage increases too quickly. Since the RBSOA is associated with a very brief turn-off process, it is not constrained by the continuous power dissipation limit.
The ordinary safe operating area (when the device is in the on state) may be referred to as the Forward bias safe operating area (or FBSOA) when it is possible to confuse it with the RBSOA.
Protection
The most common form of SOA protection used with bipolar junction transistors senses the collector-emitter current with a low-value series resistor. The voltage across this resistor is applied to a small auxiliary transistor that progressively 'steals' base current from the power device as it passes excess collector current.
Another style of protection is to measure the temperature of the outside of the transistor, as an estimate of junction temperature, and reduce drive to the device or switch it off if the temperature is too high. If multiple transistors are used in parallel, only a few need to be monitored for case temperature to protect all parallel devices.
This approach is effective but not bullet-proof. In practice, it is very difficult to design a protection circuit that will work under all conditions, and it is left up to the design engineer to weigh the likely fault conditions against the complexity and cost of the protection.
See also
Derating
References
H. A. Schafft, J. C. French, Secondary Breakdown in Transistors, IRE Trans. Electron Devices ED-9, 129-136 (1962). online
Michaël Bairanzade, Understanding Power Transistors Breakdown Parameters, OnSemi application node AN1628/D online
Apex technical document on operating power opamps within SOA
Power electronics
Electronic engineering | Safe operating area | Technology,Engineering | 1,487 |
13,143,577 | https://en.wikipedia.org/wiki/Girneys | Girneys are soft vocalizations used by species of Old World monkeys to ease affiliative social interactions between unrelated members of the same species. The vocalizations are most commonly used by adult females around birthing season; the female will direct the call towards an unrelated mother and her offspring as an attempt to initiate friendly contact. However, mothers themselves will never direct girneys towards their own offspring as girneys do not increase affiliative interactions between relatives. Monkeys will also produce call when interacting with a dominant member of the same species, and when avoiding further conflict after becoming victim of an agonistic interaction. In all contexts, the vocalization is beneficial as it allows the signaler to inform potential aggressor that they are nonthreatening, thereby reducing the chance of attack and increasing fitness. Girneys are often accompanied by lip-smacking and a hesitant approach towards the dominant monkey. If the vocalization successfully reduces tension, it may be followed by allogrooming, alloparenting, and/or a rocking embrace.
Old World monkeys
Multiple species of Old World monkeys produce girneys. The actual sound of the vocalization varies slightly by species but its purpose is consistent – to reduce tension between unrelated members of the same species. Monkeys have not been observed directing girneys towards other monkey species. Monkeys who use the call include Japanese macaques, rhesus macaques, mandrills, and baboons. However, they have been most extensively studied in species of macaques. The calls are commonly observed in adult Old World monkeys, but rarely in juveniles. This is likely because juveniles are already groomed and protected by their mother and would not benefit from producing an affiliative call.
Morphology
Girneys resemble a moaning and purring sound with a song-like quality. The call stays within a low frequency range, but is very morphologically variable as it does not maintain a consistent temporal pattern. Instead, the vocalizations are uttered in rapid succession, through different patterns each time. The vocalizations are produced in conjunction with lip movement and teeth chattering. Dario Maestripieri, professor of comparative human development at the University of Chicago, says the sounds are "made with their mouths almost closed, sort of nasal and relatively soft", and has suggested that girneys are similar to human baby talk. In the context of a mother-offspring dyad approach, the morphology of girneys can be divided into two distinct vocalizations: atonal girneys and tonal girneys. Both atonal and tonal girneys are submissive and accompanied by a hesitant approach.
Atonal girneys
Vocalizations modified by a process of lip movement and teeth scraping are characteristic of atonal girneys. An adult female will produce this distinct call when approaching a cluster of females and infants late in the birth season, particularly when orientated toward the infant of the group.
Tonal girneys
Tonal girneys are more morphologically variable than atonal girneys as the characteristic tongue and lip movements are superimposed. The call is produced by an adult female who intends to participate in grooming with an unrelated dominant female. The subordinate directs the call while the dominant is separate from the cluster during birth season.
Function and context
General function
Girneys are used in a variety of contexts but consistently function to ease affiliative social interactions between unrelated members of the same species and are generally directed from the subordinate monkey to the dominant monkey.
Establishing friendly contact with unrelated mother-offspring dyad
Male macaques do not often participate in infant care, so mothers tend to be highly protective of their infants and will display highly aggressive behavior, even physically attacking monkeys who come within close proximity. In an attempt to establish friendly contact with the mother and minimize the chance of attack, the subordinate adult female will produce girneys. The call can also benefit the adult female in that it may increase the probability of affiliative physical contact such as grooming, which reduces stress. Monkeys who do not produce the call upon approaching a mother-offspring dyad are less likely to attain access to an affiliative interaction. Additionally, females without infants may be allowed to handle an unrelated mother's infant after initiating friendly contact with girneys. In this case, they often continue to make the call during handling.
Acknowledging social hierarchy
Japanese macaques form despotic societies in which some members of the group are ranked higher than others. The higher-ranking macaques are considered dominant, and macaques of lower rank will produce girneys when in close proximity to dominant members, to signal appeasement and acknowledge inferiority. Low-ranking females without infants are the least likely to receive girneys, while high-ranking females with infants are the most likely to receive girneys within a troop. This is consistent with the proposition that girneys function to reduce tension, as low-ranking females without infants are more likely to be victims of aggression than any other member of the troop.
Post-conflict affiliation
Opponents in a conflict are attracted to each other for a short period of time following the conflict. During this time, the victim will produce girneys in an attempt to restore friendly relations with the former opponent. The victim is more likely to use the vocalization when the opponent is less familiar, indicating that the production of girneys after conflict depends on how predictable the winner is to the victim. Despite the reconciliatory intent of girneys, victims make themselves vulnerable to further aggression when initiating post-conflict affiliation, so it is unclear whether the vocalizations are effective in this context.
References
Further reading
Ethology
Animal sounds | Girneys | Biology | 1,147 |
20,052,808 | https://en.wikipedia.org/wiki/Blind%20arcade | A blind arcade or blank arcade is an arcade (a series of arches) that has no actual openings and that is applied to the surface of a wall as a decorative element: i.e., the arches are not windows or openings but are part of the masonry face. It is designed as an ornamental architectural element and has no load-bearing function.
Similar structures
Whereas a blind arch is usually a single arch or a series of joined arches as a frieze (sometimes called Lombard band), a blind arcade is composed of a series of arches that have well-defined columns in between its arches.
A blind arcade may resemble several blind windows (false/blank windows or sealed-up windows) or blind niches that are side by side.
Examples
Blind arcades are a common decorative feature on the facades of Romanesque and Gothic buildings throughout Western Europe, and are also a common feature in Byzantine Orthodox churches in Eastern Europe and in Armenian churches.
See also
Dwarf gallery
Flying buttress
References
External links
Dictionary of French Architecture from the 11th to 16th century/Volume 1/Blind Arcade
The Monastery of Marmashen
Arcades (architecture) | Blind arcade | Engineering | 230 |
1,959,529 | https://en.wikipedia.org/wiki/Battle%20of%20M%C5%82awa | The Battle of Mława, otherwise known as the Defence of the Mława position, took place to the north of the town of Mława in northern Poland between 1 and 3 September 1939. It was one of the opening battles of the Invasion of Poland and World War II in general. It was fought between the forces of the Polish Modlin Army under General Krukowicz-Przedrzymirski and the German 3rd Army under General Georg von Küchler.
History
Eve of the Battle
As a result of the Treaty of Versailles, the new German-Polish border was located only some 120 km north of Warsaw, the Polish capital city. In 1939 the Polish Modlin Army, led by Brigadier General Emil Krukowicz-Przedrzymirski, was thought of as the main defensive force guarding Polish borders from the north. It was located along the border with East Prussia and was to stop the enemy forces advancing towards Warsaw, the Modlin Fortress. Shortly before the war, a decision was made to strengthen the Polish defences by construction of a line of field fortifications and concrete bunkers to the north of Mława, in the centre of the army's positions.
The main line of defence of the army was located along the line of Narew and Vistula rivers. There were a number of 19th-century fortifications in the area, but the plains to the north of it were almost defenseless. To ease the delaying actions in case of a war with Germany, the Polish General Staff decided that the Modlin Army should be transported to the border with East Prussia and should defend the line for as long as possible. Afterwards, the units under command of General Przedrzymirski-Krukowicz were to withdraw to the south and defend the line of Narew and Vistula rivers, together with the forces of Narew Independent Operational Group.
After the Polish secret mobilization had started in March 1939, the 20th Infantry Division was assigned to the Modlin Army and transported to the area of Mława. In addition, the army commander was assigned a number of trainloads of concrete and other construction materials and several combat engineering battalions. It was decided that a line of fortifications should be constructed in the area held by that division. On 19 June that year the project was ready and was finally approved by Marshal of Poland Edward Rydz-Śmigły on 3 July.
The line of trenches and concrete bunkers, shielded by anti-tank trenches and obstacles, was to be constructed along a low glacial hill overlooking the valley of the Mławka river, to the north of the town. The river itself could be blocked by a dam to enhance the defensive capability of the area. In the center, the swampy terrain of the Niemyje Marshes was located, which was virtually impassable to enemy armored vehicles. This swamp divided the area into two separate flanks. The western section was to be reinforced with 68 concrete bunkers while the eastern, much shorter, with 25.
In peacetime the 20th Division was located in Baranowicze. In case of a war with the USSR, it was planned as the first-line unit to defend a line of German World War I fortifications built there in 1915. Because of that, most of its soldiers had experience in defending fortified positions.
The construction of bunkers in the western section of the front, near the town of Mława, was started on 14 July. It was carried out mostly by the soldiers themselves, under the command of the head of the 20th engineering battalion, Maj. Juliusz Levittoux. The construction of the eastern flank bunkers near the village of Rzęgnowo started on 12 August. Soon the soldiers were joined by a number of civilian volunteers, helping to dig the trenches. However, the positions were not finished until the outbreak of World War II and many of the bunkers were not completed.
Battle
At noon on 1 September 1939 the Polish line of defence manned by the 20th Infantry Division was attacked by the 1st Army Corps under General Walter Petzel. Georg von Küchler, the commander of German Third Army, ordered his units to launch probing attacks across the Polish front. Troops of the 11th Infantry Division were repulsed by Polish 78th Infantry Regiment, while Waffen-SS troops of SS-Standarte Deutschland, part of Panzer Division Kempf, were halted by the Polish defenses at the village of Uniszki Zawadzkie and called for armored support. Panzer Regiment 7 arrived at 15:00 and mounted an assault. Although the attacking forces were equipped with tanks and supported by warplanes, the initial assault was repelled by Polish-made 37mm Armata ppanc. wz. 36 anti-tank guns after the advance of the German tanks was blocked by a 6 meter wide anti-tank ditch. Seven tanks belonging to Panzer Regiment 7 were destroyed in the action, and by 19:00 the German units fell back to their original positions. Only on the right flank of the Polish line did the Germans find any success, with the 12th Infantry Division and 1st Cavalry Brigade successfully dislodging the Mazowiecka Cavalry Brigade from their positions near the town of Chorzele. Late in the day, cavalry from both sides skirmished near the village of Krznowłoga Mała.
The following afternoon the German units started heavy artillery bombardment of the Polish positions, in coordination with an aerial attack by Ju 87 Stukas of Sturzkampfgeschwader 1. After two hours of artillery fire, the 11th and 61st Infantry Divisions launched an attack but were repulsed by the 80th and 78th Infantry Regiments respectively. At Rzęgnowo on the Polish right flank, however, the 1st Infantry Division successfully outflanked the Polish 79th Infantry Regiment, and the Polish troops retreated towards Mława around 16:00. Sensing weakness on the Polish right flank, von Küchler ordered Panzer Division Kempf to move towards Rzęgnowo to prepare for an assault the following day. Krukowicz-Przedrzymirski meanwhile ordered the 20th Division to extend further eastwards and prepare the defence of its right flank between the villages of Dębsk and Nosarzewo. At the same time the 8th Infantry Division, until then held in reserve near Ciechanów, was ordered to prepare a counterattack.
The 8th Division arrived in the area in the early hours of 3 September. As the Mazowiecka Cavalry Brigade operating further eastwards was also endangered by German armoured troops, the army commander ordered the division to split its forces and attack in two directions: towards Grudusk east of Mława and towards Przasnysz. However, conflicting orders and German saboteurs operating in the rear disrupted both attacks and led to chaos in the Polish ranks. Communication broke down and friendly fire incidents occurred between the 13th and 32nd Infantry Regiments during the night, resulting in the retreat of the latter. By 22:00 the division was mostly destroyed and only the 21st Infantry Regiment of Colonel (later General) Stanisław Sosabowski managed to withdraw from the fights towards the Modlin Fortress. Despite this, the German attacks towards both flanks of the 20th Infantry Division were unsuccessful.
On 3 September the German engineers finally managed to cut through Polish antitank barriers. The Germans used the local civilians as human shields, which allowed them to finally capture several bunkers on the left flank of the Polish forces, but were unable to push forwards. On the right flank, in the Rzegnów section of the front to the east of the swamps, the attacks were more successful and in the late evening elements of German Wodrig Corps finally broke through the lines of the 79th Infantry Regiment to the rear of the Poles. This widened the front gap in the area of Grudusk.
At 09:00 on September 4, General Emil Krukowicz-Przedrzymirski, facing the risk of his forces being outflanked and surrounded, ordered the 20th division and the remnants of the 8th to withdraw towards Warsaw and Modlin, finally abandoning the fortified positions.
Aftermath
The withdrawal was started in the early morning of 4 September. Although the German mechanized units suffered heavy losses and were unable to maintain pursuit, the area to the south of Mława was very lightly forested and the Polish forces were constantly bombarded and strafed by the German Luftwaffe, suffering heavy losses both in troops and equipment.
Although the position was abandoned, the German forces suffered substantial losses and it was not until 13 September that they finally managed to reach the Modlin Fortress, located less than 100 kilometres to the south.
Opposing forces
Poland
Germany
See also
List of World War II military equipment of Poland
List of German military equipment of World War II
References
Sources
External links
Information on Polish tanks and armored vehicles
Battles of the Invasion of Poland
World War II defensive lines
Warsaw Voivodeship (1919–1939)
September 1939 | Battle of Mława | Engineering | 1,807 |
962,130 | https://en.wikipedia.org/wiki/Winnecke%204 | Winnecke 4 (also known as Messier 40 or WNC 4) is an optical double star consisting of two unrelated stars in a northerly zone of the sky, Ursa Major.
The pair were discovered by Charles Messier in 1764 while he was searching for a nebula that had been reported in the area by Johannes Hevelius. Not seeing any nebulae, Messier catalogued this apparent pair instead. The pair were rediscovered by Friedrich August Theodor Winnecke in 1863, and included in the Winnecke Catalogue of Double Stars as number 4. Burnham calls M40 "one of the few real mistakes in the Messier catalog," faulting Messier for including it when all he saw was a double star, not a nebula of any sort.
In 1991 the separation between the components was measured at 51.7″, an increase since 1764. Data gathered by astronomers Brian Skiff (2001) and Richard L. Nugent (2002) strongly suggested the subject was merely an optical double star rather than a physically connected (binary) system. The A star that seems the brighter is over twice as far as B. Parallax measurements from the Gaia satellite show the two stars, HD 238107 and HD 238108, are at distances of and respectively. HD 238108 is itself a genuine binary star, with an 18th magnitude white dwarf companion 5 arcseconds away and a parallax distance of .
See also
List of Messier objects
References
External links
SEDS: Messier Object 40
Messier 40 CCD LRGB image with 2 hrs total exposure
Double stars
Winnecke 4
Ursa Major
4
Orion–Cygnus Arm
17641024
238107/8
G-type main-sequence stars
K-type giants
Discoveries by Charles Messier | Winnecke 4 | Astronomy | 365 |
39,403,188 | https://en.wikipedia.org/wiki/Professional%20Lighting%20and%20Sound%20Association | The Professional Lighting and Sound Association (PLASA) is a trade association headquartered in Eastbourne, United Kingdom. Its membership is made up of companies involved with the events and entertainments technology sector.
History
PLASA was originally known as the British Association of Discothèque Equipment Manufacturers (BADEM), a name used between 1976 and 1983.
In 2010 PLASA merged with the Entertainment Services and Technology Association (ESTA), and demerged in 2015. John Simpson, the PLASA Governing Body Chair at the time, said "This has been a difficult period for PLASA but it is also an opportunity for us to refocus. PLASA has a chance to reassess its role in this industry, its relationships and communications with its members, and the future directions of its commercial activities." Also during this time the PLASA Show was relocated to Earls Court and CEO Matthew Griffiths left his post.
Peter Heath took the role of CEO in April 2016. In the same year the PLASA Show moved back to the West London venue London Olympia. Since then, PLASA Show has steadily regained popularity, with the 2018 edition of the show being the “busiest and most vibrant show in recent history”.
Activities
PLASA's activities include lobbying, organising trade show events (including the PLASA Show), publishing both technical and industry news products (such as Lighting & Sound International and Lighting & Sound America), developing industry standards and developing industry certification schemes.
PLASA lobbied Ofcom and other British Government entities in the late 2000s when users of radio microphones and similar devices complained that their equipment would be rendered unusable by proposed plans to auction, as part of the digital television switchover, the radio spectrum utilised by many such devices.
After merging with ESTA, PLASA took on the role of maintaining the industry standards for DMX512 and RDM. PLASA have also been responsible for the development of a UK National Rigging Certificate, which launched in 2007 for individuals working in the entertainments rigging industry.
Each year, PLASA hands out the PLASA Awards for Innovation and a Sustainability Award. The PLASA Awards for Innovation aim to emphasise a focus on true innovation. The procedure ensures that all nominated products are vetted to show that they offer something new to the industry.
PLASA has been a part of the European Ecodesign Coalition which includes prominent industry bodies from across Europe. The purpose of the coalition has been to campaign against Ecodesign lighting regulations and propose exemptions for stage lighting.
In 2018 PLASA collaborated with Hamish Dumbreck of JESE Ltd, Peter Willis of Howard Eaton Lighting and Wayne Howell of Artistic Licence to present Plugfest, a three-day residential event in Gatwick, UK for lighting technicians and developers to test the interoperability of their products. This event returns in 2019 taking place in Lille, France.
See also
PLASA Show
Lighting & Sound International
Lighting & Sound America
Association of British Theatre Technicians
Remote Device Management (RDM)
DMX 512
References
External links
plasa.org
esta.org
plasashow.com
estafoundation.org
1976 establishments in the United Kingdom
Audio engineering
Eastbourne
Lighting
Organisations based in East Sussex
Organizations established in 1976
Trade associations based in the United Kingdom | Professional Lighting and Sound Association | Engineering | 657 |
481,662 | https://en.wikipedia.org/wiki/IMT-2000 | IMT-2000 (International Mobile Telecommunications-2000) is the global standard for third generation (3G) wireless communications as defined by the International Telecommunication Union.
In 1999 ITU approved five radio interfaces for IMT-2000 as a part of the ITU-R M.1457 Recommendation. The five standards are:
IMT-2000 CDMA Direct Spread
also known as W-CDMA, used in UMTS, the successor to GSM
IMT-2000 CDMA Multi-Carrier
also known as CDMA2000, the successor to 2G CDMA (IS-95)
IMT-2000 CDMA TDD
also known as TD-SCDMA
IMT-2000 TDMA Single Carrier
also known as EDGE, an intermediate 2.5G technology
IMT-2000 FDMA/TDMA
also known as DECT
To meet the IMT-2000 standards, a system must provide peak data rates of 384 kbit/s for mobile stations and 2 Mbit/s for fixed stations.
References
External links
ITU-R Recommendation M.1457: Detailed specifications of the terrestrial radio interfaces of International Mobile Telecommunications-2000 (IMT-2000).
ITU IMT-2000 Network Aspects
Mobile telecommunications standards
ITU-R recommendations | IMT-2000 | Technology | 259 |
2,396,818 | https://en.wikipedia.org/wiki/Gramine | Gramine (also called donaxine) is a naturally occurring indole alkaloid present in several plant species. Gramine may play a defensive role in these plants, since it is toxic to many organisms.
Occurrence
Gramine has been found in the giant reed (Arundo donax), Acer saccharinum (silver maple), Hordeum (a grass genus that includes barley), and Phalaris (another grass genus).
Effects and toxicity
Gramine has been found to act as an agonist of the adiponectin receptor 1 (AdipoR1).
The LD50 of gramine is 44.6 mg/kg (iv) in mice and 62.9 mg/kg (iv) in rats.
Numerous studies have been done on the toxicity of gramine to insects harmful to crops in order to assess its potential use as an insecticide.
References
Adiponectin receptor agonists
Indole alkaloids
Dimethylamino compounds
Plant toxins | Gramine | Chemistry | 198 |
55,807,360 | https://en.wikipedia.org/wiki/Sutorius%20magnificus | Sutorius magnificus, known until 2014 as Boletus magnificus, is a species of bolete fungus in the family Boletaceae native to Yunnan province in China. It was transferred to the new genus Neoboletus in 2014, and then Sutorius in 2016.
References
External links
Fungi described in 1948
Fungi of China
Fungus species
Boletaceae | Sutorius magnificus | Biology | 79 |
265,000 | https://en.wikipedia.org/wiki/Hilbert%20matrix | In linear algebra, a Hilbert matrix, introduced by , is a square matrix with entries being the unit fractions
For example, this is the 5 × 5 Hilbert matrix:
The entries can also be defined by the integral
that is, as a Gramian matrix for powers of x. It arises in the least squares approximation of arbitrary functions by polynomials.
The Hilbert matrices are canonical examples of ill-conditioned matrices, being notoriously difficult to use in numerical computation. For example, the 2-norm condition number of the matrix above is about 4.8 × 10^5.
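To see the ill-conditioning concretely, here is a sketch in Python (the helper names `hilbert` and `solve` are illustrative, not from any library) that uses exact rational arithmetic to solve H x = b for the 5 × 5 case, then perturbs a single entry of b by 10^−8; the exact solution moves by several orders of magnitude more than the perturbation.

```python
from fractions import Fraction

def hilbert(n):
    # H[i][j] = 1/(i + j + 1) with 0-based indices (1/(i + j - 1) in 1-based form)
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def solve(a, b):
    """Solve a x = b by Gauss-Jordan elimination in exact Fraction arithmetic."""
    n = len(a)
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if m[r][col] != 0)
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

n = 5
h = hilbert(n)
b = [sum(row) for row in h]            # chosen so the exact solution is all ones
assert solve(h, b) == [Fraction(1)] * n

b[n - 1] += Fraction(1, 10**8)         # perturb one entry of b by 1e-8
x = solve(h, b)
amplification = max(abs(xi - 1) for xi in x) / Fraction(1, 10**8)
print(amplification)                   # tens of thousands: tiny data errors explode
```

Because every step is done over the rationals, the blow-up shown here is a property of the matrix itself, not of floating-point round-off.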
Historical note
Hilbert introduced the Hilbert matrix to study the following question in approximation theory: "Assume that I = [a, b], with b > a, is a real interval. Is it then possible to find a non-zero polynomial P with integer coefficients, such that the integral

∫_a^b P(x)² dx

is smaller than any given bound ε > 0, taken arbitrarily small?" To answer this question, Hilbert derives an exact formula for the determinant of the Hilbert matrices and investigates their asymptotics. He concludes that the answer to his question is positive if the length of the interval is smaller than 4.
Properties
The Hilbert matrix is symmetric and positive definite. The Hilbert matrix is also totally positive (meaning that the determinant of every submatrix is positive).
The Hilbert matrix is an example of a Hankel matrix. It is also a specific example of a Cauchy matrix.
The determinant can be expressed in closed form, as a special case of the Cauchy determinant. The determinant of the n × n Hilbert matrix is

det(H) = c_n^4 / c_{2n},

where

c_n = ∏_{i=1}^{n−1} i^(n−i) = ∏_{i=1}^{n−1} i!.
Hilbert already mentioned the curious fact that the determinant of the Hilbert matrix is the reciprocal of an integer (see sequence in the OEIS), which also follows from the identity

1 / det(H) = ∏_{i=1}^{n−1} (2i + 1) · C(2i, i)².
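These facts are easy to verify computationally. The sketch below (Python, exact rational arithmetic; the helper names are illustrative) computes det(H) by fraction-preserving elimination, checks it against the closed form c_n^4 / c_{2n} with c_n = 1!·2!···(n−1)!, and confirms that 1/det(H) is an integer for small n.

```python
from fractions import Fraction
from math import factorial, prod

def hilbert(n):
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def det(a):
    """Exact determinant via Gaussian elimination over the rationals."""
    a = [row[:] for row in a]
    n = len(a)
    d = Fraction(1)
    for col in range(n):
        piv = next(r for r in range(col, n) if a[r][col] != 0)
        if piv != col:
            a[col], a[piv] = a[piv], a[col]
            d = -d                      # a row swap flips the sign
        d *= a[col][col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return d

def c(n):
    return prod(factorial(i) for i in range(1, n))   # c_n = 1! 2! ... (n-1)!

for n in range(1, 7):
    d = det(hilbert(n))
    assert d == Fraction(c(n) ** 4, c(2 * n))        # closed-form determinant
    assert (1 / d).denominator == 1                  # reciprocal of an integer
print(1 / det(hilbert(5)))   # → 266716800000
```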
Using Stirling's approximation of the factorial, one can establish the following asymptotic result:

det(H) = a_n · n^(−1/4) · (2π)^n · 4^(−n²),

where a_n converges to a constant, approximately 0.6450, expressible in terms of the Glaisher–Kinkelin constant A, as n → ∞.
The inverse of the Hilbert matrix can be expressed in closed form using binomial coefficients; its entries are

(H⁻¹)_ij = (−1)^(i+j) (i + j − 1) C(n+i−1, n−j) C(n+j−1, n−i) C(i+j−2, i−1)²,

where n is the order of the matrix. It follows that the entries of the inverse matrix are all integers, and that the signs form a checkerboard pattern, being positive on the principal diagonal. For example, the inverse of the 5 × 5 Hilbert matrix is

   25    -300     1050    -1400     630
 -300    4800   -18900    26880  -12600
 1050  -18900    79380  -117600   56700
-1400   26880  -117600   179200  -88200
  630  -12600    56700   -88200   44100
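The binomial-coefficient formula for the inverse can be checked directly; the sketch below (illustrative helper names) builds H⁻¹ entrywise with `math.comb` and verifies H · H⁻¹ = I exactly in rational arithmetic.

```python
from fractions import Fraction
from math import comb

def hilbert(n):
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def hilbert_inverse(n):
    # closed-form entries; i and j are 1-based in the formula
    return [[(-1) ** (i + j) * (i + j - 1)
             * comb(n + i - 1, n - j) * comb(n + j - 1, n - i)
             * comb(i + j - 2, i - 1) ** 2
             for j in range(1, n + 1)]
            for i in range(1, n + 1)]

n = 5
h, hinv = hilbert(n), hilbert_inverse(n)

# entries are integers by construction, and H * H^-1 is exactly the identity
product = [[sum(h[i][k] * hinv[k][j] for k in range(n)) for j in range(n)]
           for i in range(n)]
assert product == [[int(i == j) for j in range(n)] for i in range(n)]
print(max(max(row) for row in hinv))   # largest entry of the 5x5 inverse
```

Note that the inverse is built from integer binomial coefficients only, which is why its entries are integers even though H itself has rapidly shrinking fractional entries.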
The condition number of the n × n Hilbert matrix grows as O((1 + √2)^(4n) / √n).
Applications
The method of moments applied to polynomial distributions results in a Hankel matrix, which in the special case of approximating a probability distribution on the interval [0, 1] results in a Hilbert matrix. This matrix needs to be inverted to obtain the weight parameters of the polynomial distribution approximation.
References
Further reading
. Reprinted in
Numerical linear algebra
Approximation theory
Matrices
Determinants | Hilbert matrix | Mathematics | 547 |
29,815,061 | https://en.wikipedia.org/wiki/Ronald%20Campbell%20Macfie | Ronald Campbell Macfie (1867–1931) was a Scottish medical doctor, poet and science writer specialising in eugenics and evolution. Macfie was a critic of Darwinism and developed his own non-Darwinian evolution theory which was a form of neovitalism. He believed that chance played no role in evolution and that evolution was directed. Macfie was also a panpsychist as he believed mind was to be found in all matter.
Biography
He was a Scottish physician and writer. He had qualified in medicine in Aberdeen in 1897 and specialised in the treatment of tuberculosis.
He was also a Liberal Member of the British Parliament, and is mentioned in The Bookman Treasury of Living Poets (4th edition, 1931) as a contributor to such works as Fairy Tales for Old and Young (1909) and The Golden Treasury of Scottish Poetry (1940). Among his works are "Man's Record in the Rocks" (My Magazine, May 1921), The Art of Keeping Well (Cassell & Co., 1918; The Vegetarian Society), and Evolutionary Consequences of War (cited below).
Campbell Macfie suggested that male war deaths (during World War I) would create a surplus of fertile women, thus reducing the overall birthrate, while the surviving men would select partners from a wide range of 'surplus' females according to eugenically (sexually) attractive characteristics.
Books published
The Romance of Medicine (1907)
Air and Health (1909)
Science, Matter, and Immortality (1909)
The Titanic: (An Ode of Immortality) (1912)
Heredity, Evolution, and Vitalism (1912)
The Romance of the Human Body (1919)
War: an Ode and Other Poems (1920)
Sunshine and Health (1927)
The Faiths and Heresies of a Poet and Scientist (1932)
The Theology of Evolution (1933)
See also
Lady Margaret Sackville
Baby Boom
Flora Thompson
References
1867 births
1931 deaths
19th-century Scottish medical doctors
20th-century Scottish medical doctors
Non-Darwinian evolution
Panpsychism
Scottish science writers
Vitalists | Ronald Campbell Macfie | Biology | 410 |
52,182,319 | https://en.wikipedia.org/wiki/QIIME | QIIME ( ) is a bioinformatics data science platform, originally developed for analysis of high-throughput microbiome marker gene (e.g., 16S or 18S rRNA genes) amplicon sequencing data. There have been two major versions of the QIIME platform, QIIME 1 and QIIME 2.
While microbiome marker gene analysis continues to be a major focus in QIIME 2, the developers describe it as a microbiome multi-omics platform, and support exists or is being added for analysis of shotgun metagenomics and metatranscriptomics data, as well as metabolomics mass spectrometry data.
Development of QIIME 1 was initiated in the Knight Lab at the University of Colorado at Boulder, and the first version of QIIME 1 was released on 26 January 2010. Beginning in August 2011, QIIME 1 development was led as a collaboration between the Caporaso Lab at Northern Arizona University and the Knight Lab. QIIME 2 development is led by the Caporaso Lab, but the project remains a community effort, with developers dispersed around the world. In January 2018, QIIME 2 succeeded QIIME 1; official support for the QIIME 2 community is provided through the QIIME 2 forum.
"QIIME" was originally coined as an acronym for Quantitative Insights Into Microbial Ecology, but since the development of QIIME 2 this acronym has not been used.
See also
QIIME 2 website
Microbial ecology
Microbiome
References
Bioinformatics software
Metagenomics | QIIME | Biology | 312 |
24,271,731 | https://en.wikipedia.org/wiki/Chemical%20WorkBench | Chemical WorkBench is a proprietary simulation software tool aimed at the reactor scale kinetic modeling of homogeneous gas-phase and heterogeneous processes and kinetic mechanism development. It can be effectively used for the modeling, optimization, and design of a wide range of industrially and environmentally important chemistry-loaded processes. Chemical WorkBench is a modeling environment based on advanced scientific approaches, complementary databases, and accurate solution methods. Chemical WorkBench is developed and distributed by Kintech Lab.
Chemical WorkBench models
Chemical WorkBench has an extensive library of physicochemical models:
Thermodynamic Models
Gas-Phase Kinetic Models
Flame model
Heterogeneous Kinetic Models
Non-Equilibrium Plasma Models
Detonation and Aerodynamic Models
Membrane Separation Models
Mechanism Analysis and Reduction
Fields of application
Chemical WorkBench can be used by researchers and engineers working in the following fields:
General chemical kinetics and thermodynamics
Kinetic mechanisms development
Thin films growth for microelectronics
Nanotechnology
Catalysis and chemical engineering
Combustion, detonation and pollution control
Waste treatment and recovering
Plasma light sources and plasma chemistry
High-temperature chemistry
Education
Combustion and detonation, clean power-generation technologies, safety analysis, CVD, heterogeneous and catalytic reactions and processes, and processes in non-equilibrium plasmas are the main areas of interest.
External links
Chemical WorkBench web page
Video Review of Chemical Workbench-Tool for Modeling Reactive Flows and Developing Chemical Mechanisms
See also
Chemical kinetics
Autochem
Cantera
CHEMKIN
Kinetic PreProcessor (KPP)
Laboratory information management system
References
https://web.archive.org/web/20090108090305/http://www.softscout.com/software/Science-and-Laboratory/Scientific-Modeling-and-Simulation/Chemical-Workbench.html
Chemical engineering software
Chemical kinetics
Combustion
Computational chemistry software
Molecular modelling software | Chemical WorkBench | Chemistry,Engineering | 398 |
26,752,503 | https://en.wikipedia.org/wiki/Project%2011780 | Project 11780 Kherson was an unrealized 1980s Soviet LHD program derived from the design comparable to the US . The ship would have been about 25,000 tons displacement, with steam turbine power plants and carried about 12 helicopters and four Ondatra-class landing craft or two Tsaplya-class LCACs.
Development
The development of the Project 11780 began when Admiral of the Fleet of the USSR Sergey Gorshkov ordered the development of a fully-fledged universal landing ship. The design and purpose of the ship evolved throughout the development. Initially, the ship was intended solely for landing operations. Then the General Staff proposed turning the Project 11780 ships into universal aircraft-carrying ships by equipping them with a bow ski-jump ramp, allowing the deployment of helicopters to bolster the air support for the landing troops.
It was planned to build two ships: "Kherson" and "Kremenchuk". Each would have had a standard displacement of 25,000 tons, meaning they could only be built at the Chernomorsky (Black Sea) Shipyard. At that time, the slipways of the Black Sea Shipyard were scheduled for the construction of Project 1143.5 aircraft carriers, which triggered a "struggle for the slipway". The General Staff, placing great importance on the construction of the LHDs, proposed building them instead of the aircraft carriers.
Ending
This proposal was opposed by the Navy Commander-in-Chief, who understood that, given the lack of the required shipbuilding capacity, construction of the LHDs would likely lead to the abandonment of the Project 1143.5 aircraft carriers. And so a cunning trick was employed: by the Commander-in-Chief’s order, an AK-130 artillery mount was placed on the bow, directly in front of the flight deck. The Naval Research Institute was tasked with providing a "scientific" justification for the presence of such armament and its placement. As a result, the General Staff lost interest in the project, and the construction was postponed.
At the request of the Minister of Defense, Marshal Dmitry Ustinov, the tasks of Project 11780 were expanded to include peacetime tracking of enemy submarines in the seas. Ultimately, all these changes led to the Project 11780 ships never being laid down.
See also
Tarawa class amphibious assault ship
List of ships of the Soviet Navy
List of ships of Russia by project number
References
External links
http://www.globalsecurity.org/military/world/russia/ship-soviet-2.htm
https://www.navalnews.com/naval-news/2019/12/russia-to-begin-construction-of-lhd-in-2020-part-1/
Helicopter carrier classes
Amphibious warfare vessel classes
Cold War aircraft carriers of the Soviet Union
Amphibious warfare vessels of the Soviet Union
Proposed aircraft carriers
Abandoned military projects of the Soviet Union | Project 11780 | Engineering | 592 |
23,315,184 | https://en.wikipedia.org/wiki/OSDN | OSDN (formerly SourceForge.JP) is a web-based collaborative development environment for open-source software projects. It provides source code repositories and web hosting services. With features similar to SourceForge, it acts as a centralized location for open-source software developers.
The OSDN repository hosts more than 5,000 projects and more than 50,000 registered users. Registered software used to be mostly specialized for Japanese use, such as input method systems, fonts, and so on, but also included applications like Cabos, TeraTerm, and Shiira. Also, since the renewal of the brand name to OSDN, some projects that used to be developed on SourceForge moved to OSDN, such as MinGW, TortoiseSVN, Android-x86, and Clonezilla.
History
SourceForge.JP was started by VA Linux Systems (latterly SourceForge, Inc.) and its subsidiary VA Linux Systems Japan on April 18, 2002. OSDN K.K. spun off of VA Linux Systems Japan in August 2007. As of June 2009, OSDN K.K. was operating the SourceForge.JP.
On May 11, 2015, the site was renamed from "SourceForge.JP" to "OSDN". In the same month, SourceForge caused two controversies: DevShare adware and project hijacking. In contrast, OSDN entirely rejects adware bundling and project hijacking. For that reason, the renaming to OSDN is widely perceived as a response to the criticism and adverse reactions provoked by SourceForge's monetization practices.
On February 26, 2020, it was announced on the site that OSDN was being transferred to Appirits, Inc., a Japanese software company.
Open Source China (OSChina) announced on 24 July 2023 that they had acquired OSDN in 2022. The site had reliability problems almost immediately after this announcement, and there was an effort by SourceForge (the original, American-based site) to recruit projects that might choose to leave OSDN; especially those using SVN, which would be unsupported on GitHub. Many projects did leave OSDN, including Vim and TeraTerm.
ITmedia NEWS reported on January 22, 2024 that OSDN had announced they would shut down the associated Slashdot Japan-successor site (Surado) at the end of January 2024. However, articles in ITmedia NEWS and Surado at the end of January reported that the closure of both sites had been cancelled and OSChina now hoped to keep them in operation while seeking acquirers to take them over.
Features
OSDN provides revision control systems such as CVS, SVN, Git, Mercurial, and every feature in SourceForge. What makes OSDN different from SourceForge is the bug tracking system and the wiki system. On OSDN, these are very Trac-like systems.
See also
Comparison of source code hosting facilities
References
External links
Free software websites
Geeknet
Internet properties established in 2002
Open-source software hosting facilities | OSDN | Technology | 669 |
35,406,032 | https://en.wikipedia.org/wiki/Margaret%20Morse%20Nice%20Medal | The Margaret Morse Nice Medal is an ornithological award made annually by the Wilson Ornithological Society (WOS). It was established in 1997 and named in honour of ornithologist Margaret Morse Nice (1883–1974). The medal recipient is expected to give the plenary lecture at the WOS annual general meeting.
Recipients
Source: Wilson Ornithological Society
1997 – Elsie Collias and Nick Collias (University of California, Los Angeles), "Seeking to understand the living bird"
1998 – Ellen Ketterson and Val Nolan (Indiana University), "Studying birds: one species at a time"
1999 – Frances C. James
2000 – Susan M. Smith
2001 – Glen E. Woolfenden
2002 – Richard T. Holmes
2003 – Robert E. Ricklefs
2004 – Stephen T. Emlen
2005 – Bridget J. M. Stutchbury and Eugene S. Morton
2006 – Gary Stiles
2007 – Patricia L. Schwagmeyer and Douglas Mock
2008 – Jerome Jackson
2009 – Sidney A. Gauthreaux (Clemson University), "Bird movements in the atmosphere: discoveries from radar and visual studies"
2010 – Robert B. Payne and Laura Payne (University of Michigan), "Brood parasitism in cuckoos, cowbirds, and African finches"
2011 – Richard N. Conner (USDA-Forest Service (retired)), "The ecology of the Red-cockaded Woodpecker, by necessity a multidiscipline study"
2012 – Peter R. Grant & B. Rosemary Grant (Princeton University), "A long-term study of Darwin's Finches"
2013 – Edward Burtt, Jr. (Ohio Wesleyan University), "From passion to science to the evolution of avian color"
2014 – Don Kroodsma (University of Massachusetts-Amherst), "Birdsong: the hour before dawn"
2015 – Erica H. Dunn (Environment Canada), "Bird observatories: Diversity and opportunity"
2016 – John C. Wingfield (Department of Neurobiology, Physiology and Behavior University of California), "Nomads, pioneers and fugitives: on the move in a capricious world"
2017 – Frank R. Moore (University of Southern Mississippi), "Stopover biology of migratory songbirds: challenges, consequences and connections"
2018 – Reed Bowman (Archbold Biological Station), "Change on the long-term study of the Florida Scrub-Jay: A fifty-year perspective"
2019 – Robert L. Curry (Villanova University), "Transformation of familiar birds into model organisms: what chickadees can teach us"
2020 – Bette A. Loiselle (University of Florida) "Three decades of studying Neotropical birds: lessons learned along the way"
2021 – Ellen Ketterson (Indiana University) "Long term research on an ordinary extraordinary songbird: the dark-eyed junco"
2022 – Chris Rimmer (Executive Director, Vermont Center for Ecostudies) "Bicknell’s Thrush: Scientific surprises and conservation connections across the hemisphere"
See also
List of ornithology awards
References
Ornithology awards
Awards established in 1997 | Margaret Morse Nice Medal | Technology | 654 |
29,609,875 | https://en.wikipedia.org/wiki/Mercer%203 | Mercer 3, also known as GLIMPSE-C02, is a heavily obscured globular cluster embedded in the disk of the Milky Way galaxy. It was discovered in 2008 in the data obtained by 2MASS and GLIMPSE infrared surveys, and independently characterized by two groups. The cluster is located in the Scutum constellation. It had avoided detection for such a long time due to the extremely strong foreground extinction in its direction reaching 24 magnitudes in the visible light. Mercer 3 is probably situated at the distance from 4 to 8 kpc from the Sun and has a half-light radius of 0.7–1.5 pc.
Mercer 3 is an old globular cluster, with an age of about 12 billion years. The mass of the cluster is estimated at 200,000–300,000 solar masses. It is among the most metal-rich galactic globular clusters known.
References
Globular clusters
Scutum (constellation) | Mercer 3 | Astronomy | 190 |
28,828,826 | https://en.wikipedia.org/wiki/CI-1017 | CI-1017 is a muscarinic acetylcholine receptor agonist which is selective for and is approximately equipotent at the M1 and M4 receptors, with 20-30-fold lower affinity for the M2, M3, and M5 subtypes It is the (R)-enantiomer of the racemic compound PD-142,505.
In animals CI-1017 improves learning and memory and increases the electrical activity of the hippocampus through activation of the M1 receptor, while minimally producing parasympathetic side effects and only at very high doses. It also inhibits production of amyloidogenic A beta peptide and increases secretion of soluble amyloid precursor protein via stimulation of the M1 receptor as well. Based on these data, it was hypothesized that CI-1017 could not only treat the symptoms of Alzheimer's disease, but could also potentially slow its progression. It was tested in clinical trials for this purpose in the early 2000s but was abandoned due to lack of efficacy.
References
Muscarinic agonists
Ketoximes
Abandoned drugs | CI-1017 | Chemistry | 231 |
23,988,882 | https://en.wikipedia.org/wiki/Washaway | A washaway is a particular kind of landslide that can affect man-made structures such as cuttings, embankments and bridges. They are thus a hazard to railways and road traffic.
The biggest danger with washaways is that they may be difficult to spot in time to stop short of the point where one falls over the edge and/or into the water where one may drown.
Repairs
An embankment that is washed away can be repaired or restored by replacing the washed-away earth, the volume of which is necessarily large because embankments have a gentle slope.
A quicker method is to replace the washed-out earth with a criss-cross structure of timber sleepers called a pigsty, which is only slightly wider than the track itself. The pigsty has alternating transverse and longitudinal layers of these sleepers and so contains a lot of air, which saves weight. Steel and concrete sleepers are not necessarily suitable for this purpose, as they are either not square or too fragile.
The sleepers in the pigsty can be reused when the washaway is fully repaired. Rails can substitute for the sleepers. The hollow space inside the pigsty should be able to act as a culvert.
Warning devices
A mechanical railway signal that is normally "green" can be put to "red" if a link in the pulling wire is disengaged by a slump of the earth beneath.
An electrical railway signal that is normally green can be put to red if a contact is open-circuited by a slump of the earth beneath. One side of the contact might be attached to the sleepers, while the other side is buried in the ballast beneath. To protect against a false feed keeping the warning signal green, the circuit should be double cut, so that a false feed connects positive to negative and blows a fuse, forcing the warning signal to red. A similar setup might be used to protect bridges likely to be hit by ship collisions, as with the 1993 Big Bayou Canot train wreck.
Accidents
Railway accidents involving bridge washaways include:
27 September 1923 – near Glenrock, Wyoming - a bridge over Coal Creek was washed away and a passenger train derailed, killing 30 of the train's 66 passengers.
24 December 1953 - Tangiwai disaster - lahar caused bridge washaway; train thrown into river; 151 killed.
1974 - Crystal Brook, South Australia - train thrown into river after washaway collapses bridge.
1993 - 114 perished when a passenger train plunged into a river after floods washed away a bridge at Ngai Ndethya.
29 October 2005 - Veligonda train disaster - 114 killed
November 2011 - Feroleto-Marcellinara, Italy
Cause unclear
Peruman railway accident 1988 - 105 killed
See also
List of rail accidents
Washout
References
Landslide types
Traffic collisions
Railway accidents and incidents | Washaway | Technology | 559 |
1,726,600 | https://en.wikipedia.org/wiki/Tepidarium | The tepidarium was the warm (tepidus) bathroom of the Roman baths heated by a hypocaust or underfloor heating system. The speciality of a tepidarium is the pleasant feeling of constant radiant heat, which directly affects the human body from the walls and floor.
There is an interesting example at Pompeii; this was covered with a semicircular barrel vault, decorated with reliefs in stucco, and round the room a series of square recesses or niches divided from one another by telamones. The tepidarium was the great central hall, around which all the other halls were grouped, and which gave the key to the plans of the thermae. It was probably the hall where the bathers first assembled prior to passing through the various hot baths (caldarium) or taking the cold bath (frigidarium). The tepidarium was decorated with the richest marbles and mosaics; it received its light through clerestory windows on the sides, the front, and the rear, and would seem to have been the hall in which the finest treasures of art were placed.
In the Baths of Caracalla, the Farnese Hercules and the Farnese Bull (now in the National Archaeological Museum, Naples), the two gladiators, the sarcophagi of green basalt, and numerous other treasures were found during the excavations by Pope Paul III.
See also
Ancient Roman bathing
References
Rooms
Ancient Roman baths | Tepidarium | Engineering | 302 |
32,008,118 | https://en.wikipedia.org/wiki/TAZ%20zinc%20finger | In molecular biology, TAZ zinc finger (Transcription Adaptor putative Zinc finger) domains are zinc-containing domains found in the homologous transcriptional co-activators CREB-binding protein (CBP) and the P300. CBP and P300 are histone acetyltransferases (EC) that catalyse the reversible acetylation of all four histones in nucleosomes, acting to regulate transcription via chromatin remodelling. These large nuclear proteins interact with numerous transcription factors and viral oncoproteins, including p53 tumour suppressor protein, E1A oncoprotein, MyoD, and GATA-1, and are involved in cell growth, differentiation and apoptosis. Both CBP and P300 have two copies of the TAZ domain, one in the N-terminal region, the other in the C-terminal region. The TAZ1 domain of CBP and P300 forms a complex with CITED2 (CBP/P300-interacting transactivator with ED-rich tail), inhibiting the activity of the hypoxia inducible factor (HIF-1alpha) and thereby attenuating the cellular response to low tissue oxygen concentration. Adaptation to hypoxia is mediated by transactivation of hypoxia-responsive genes by hypoxia-inducible factor-1 (HIF-1) in complex with the CBP and p300 transcriptional coactivators.
The TAZ domain adopts an all-alpha fold with zinc-binding sites in the loops connecting the helices. The TAZ1 domain in P300 and the TAZ2 (CH3) domain in CBP have each been shown to have four amphipathic helices, organised by three zinc-binding clusters with HCCC-type coordination.
References
Protein families | TAZ zinc finger | Biology | 399 |
1,059,993 | https://en.wikipedia.org/wiki/Sun%20Zhihong | Sun Zhihong (, born October 16, 1965) is a Chinese mathematician, working primarily on number theory, combinatorics, and graph theory.
Sun and his twin brother Sun Zhiwei proved a theorem about what are now known as the Wall–Sun–Sun primes that guided the search for counterexamples to Fermat's Last Theorem.
External links
Zhi-Hong Sun's homepage
1965 births
Living people
Mathematicians from Jiangsu
20th-century Chinese mathematicians
21st-century Chinese mathematicians
Number theorists
Academic staff of Huaiyin Normal University
Scientists from Huai'an
Educators from Huai'an
Chinese twins | Sun Zhihong | Mathematics | 129 |
2,885,379 | https://en.wikipedia.org/wiki/Walter%20Trump | Walter Trump (born 1953 ) is a German mathematician and retired high school teacher. He is known for his work in recreational mathematics.
He has made contributions to both the square packing problem and the magic tile problem. In 1979 he discovered the best known packing of 11 equal squares in a larger square, and in 2003, along with Christian Boyer, he developed the first known magic cube of order 5. In 2012, Trump et al. described a model for the retention of liquid on random surfaces.
In 2014, he and Francis Gaspalou were able to calculate all 8 × 8 bimagic squares.
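For readers unfamiliar with the term: a bimagic square is a magic square that remains magic when each of its entries is squared. A minimal checker, sketched in Python with illustrative helper names and demonstrated on the classical 3 × 3 Lo Shu square (which is magic, but not bimagic):

```python
def line_sums(sq):
    """Sums of all rows, columns, and both main diagonals of a square array."""
    n = len(sq)
    rows = [sum(row) for row in sq]
    cols = [sum(sq[i][j] for i in range(n)) for j in range(n)]
    diags = [sum(sq[i][i] for i in range(n)),
             sum(sq[i][n - 1 - i] for i in range(n))]
    return rows + cols + diags

def is_magic(sq):
    return len(set(line_sums(sq))) == 1      # every line has the same sum

def is_bimagic(sq):
    # magic, and still magic after squaring every entry
    return is_magic(sq) and is_magic([[x * x for x in row] for row in sq])

lo_shu = [[4, 9, 2],
          [3, 5, 7],
          [8, 1, 6]]
print(is_magic(lo_shu), is_bimagic(lo_shu))   # → True False
```

The 3 × 3 case fails the bimagic test (row sums of the squared entries are 101, 83, 101), which is consistent with bimagic squares only appearing at larger orders such as the 8 × 8 squares mentioned above.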
Until he retired in 2016, Trump worked as a teacher for mathematics and physics at the Gymnasium in Stein, Bavaria.
References
External links
Walter Trump's pages on magic series
Walter Trump's listings on the OEIS
Walter Trump's solutions for one of Martin Gardner's puzzles
Scientists from Bavaria
20th-century German mathematicians
Recreational mathematicians
Living people
Year of birth missing (living people)
21st-century German mathematicians
People from Fürth (district) | Walter Trump | Mathematics | 209 |
2,417,703 | https://en.wikipedia.org/wiki/Aspergillus%20oryzae | Aspergillus oryzae is a mold used in East Asia to saccharify rice, sweet potato, and barley in the making of alcoholic beverages such as sake and shōchū, and also to ferment soybeans for making soy sauce and miso. It is one of the different koji molds used for food fermentation.
However, in the production of fermented foods of soybeans such as soy sauce and miso, Aspergillus sojae is sometimes used instead of A. oryzae. A. oryzae is also used for the production of rice vinegars. Barley kōji (麦麹) or rice kōji (米麹) are made by fermenting the grains with A. oryzae hyphae.
Genomic analysis has led some scholars to believe that the Japanese domesticated the Aspergillus flavus that had mutated and ceased to produce toxic aflatoxins, giving rise to A. oryzae. While the two fungi share the same cluster of genes that encode for aflatoxin synthesis, this gene cluster is non-functional in A. oryzae. Eiji Ichishima of Tohoku University called the kōji fungus a "national fungus" (kokkin) in the journal of the Brewing Society of Japan, because of its importance not only for making the kōji for sake brewing, but also for making the kōji for miso, soy sauce, and a range of other traditional Japanese foods. His proposal was approved at the society's annual meeting in 2006.
The Japanese word kōji (麹) is used in several meanings, and in some cases it specifically refers to A. oryzae and A. sojae, while in other cases it refers to all molds used in fermented foods, including Monascus purpureus and other molds, so care should be taken to avoid confusion.
Properties desirable in sake brewing and testing
The following properties of A. oryzae strains are important in rice saccharification for sake brewing:
Growth: rapid mycelial growth on and into the rice kernels
Enzymes: strong secretion of amylases (α-amylase and glucoamylase); some carboxypeptidase; low tyrosinase
Aesthetics: pleasant fragrance; accumulation of flavoring compounds
Color: low production of deferriferrichrome (a siderophore), flavins, and other colored substances
Two of the key enzyme groups secreted by A. oryzae are pectinases and peptidases. Pectinase breaks down the pectin in the cell walls of plant materials such as soybeans in the production of miso and soy sauce, while peptidases such as leucine aminopeptidase cleave amino acids from proteins and polypeptides, releasing glutamic acid, an amino acid that contributes to the characteristic umami flavor of these fermented soybean products.
A. oryzae secretes a number of salt-tolerant alkaline proteases which makes it particularly stable in the high-sodium conditions required for the production of miso and soy sauce. The strain A. oryzae RIB40, for example, appears to have specific salt tolerance genes that regulate K+ transport.
Varieties used for shōchū making
Three varieties of kōji mold are used for making shōchū, each with distinct characteristics.
Genichirō Kawachi (1883–1948), who is said to be the father of modern shōchū, and Tamaki Inui (1873–1946), a lecturer at the University of Tokyo, succeeded in the first isolation and culturing of Aspergillus species such as A. kawachii and A. awamori, as well as a variety of subtaxa of A. oryzae, which led to great progress in producing shōchū in Japan. Since then, the aspergillus developed by Kawachi has also been used for soju and makgeolli in Korea.
Yellow kōji (A. oryzae) is used to produce sake, and at one time all honkaku shōchū. However, yellow kōji is extremely sensitive to temperature; its moromi can easily sour during fermentation. This makes it difficult to use in warmer regions such as Kyūshū, and gradually black and white kōji became more common in production of shōchū. Its strength is that it gives rise to a rich, fruity, refreshing taste, so despite the difficulties and great skill required, it is still used by some manufacturers. It is popular amongst young people who previously had no interest in typically strong potato shōchū, playing a role in its recent revival. Thus, white and black kōji are mainly used in the production of shōchū, but only yellow kōji (A. oryzae) is usually used in the production of sake.
White kōji (A. kawachii) was discovered by Genichirō Kawachi in 1918 as a mutation of black kōji. Kawachi studied the mutation and succeeded in culturing white kōji as an independent strain. White kōji is easy to cultivate and its enzymes promote rapid saccharification; as a result, it is used to produce most shōchū today. It gives rise to a drink with a refreshing, mild, sweet taste.
Black kōji (A. luchuensis) is mainly used to produce shōchū and awamori. In 1901, Tamaki Inui, a lecturer at the University of Tokyo, succeeded in first isolating and culturing it. In 1910, Genichirō Kawachi succeeded for the first time in culturing var. kawachi, a subtaxon of A. awamori. This improved the efficiency of shōchū production. Black kōji produces plenty of citric acid, which helps to prevent the souring of the moromi. Of all three kōji, it most effectively extracts the taste and character of the base ingredients, giving its shōchū a rich aroma with a slightly sweet, mellow taste. Its spores disperse easily, covering production facilities and workers' clothes in a layer of black. Such issues led to it falling out of favour, but with the development of new kuro-kōji (NK-kōji) in the mid-1980s, interest in black kōji resurged amongst honkaku shōchū makers because of the depth and quality of the taste it produced. Several popular brands now explicitly state that they use black kōji on their labels.
Genome
Initially kept secret, the A. oryzae genome was released by a consortium of Japanese biotechnology companies in late 2005. The eight chromosomes together comprise 37 million base pairs and 12 thousand predicted genes. The genome of A. oryzae is thus one-third larger than that of two related Aspergillus species, the genetics model organism A. nidulans and the potentially dangerous A. fumigatus. Many of the extra genes present in A. oryzae are predicted to be involved in secondary metabolism. The sequenced strain isolated in 1950 is called RIB40 or ATCC 42149; its morphology, growth, and enzyme production are typical of strains used for sake brewing.
Many of the additional genes in Aspergillus oryzae encode proteins involved in cellular processes such as hydrolysis, transport, and metabolism. This extensive array of secretory hydrolases and transporters allows the mold to break down and secrete various compounds effectively. Typically, when A. oryzae is exposed to high concentrations of foods such as rice, soybeans, or wheat during fermentation, its growth may be negatively affected. Over time, however, these environmental conditions may allow the kōji to gain new transporters.
Although A. oryzae is closely related to A. flavus and A. parasiticus, which are known to secrete toxins called aflatoxins that cause severe food poisoning, the kōji mold has not been found to produce those toxins. Furthermore, no carcinogenic substances have been discovered in the mold. One study showed that even when A. oryzae was put under conditions favorable to the expression and secretion of aflatoxin, the aflatoxin genes in A. oryzae were not expressed.
Use in biotechnology
Trans-resveratrol can be efficiently cleaved from its glucoside piceid through the process of fermentation by A. oryzae. "Flavourzyme", a protease blend derived from A. oryzae, is used to produce enzyme-hydrolyzed vegetable protein.
A. oryzae is hard to study because conventional genetic manipulation is difficult: its cell walls are hard to break down, which complicates gene insertion and editing. Recently, however, scientists have begun applying CRISPR/Cas9 to A. oryzae. This has increased mutation rates in the genome in a way that was not previously possible, since the mold reproduces only asexually.
Secondary metabolites
A. oryzae is a good choice as a secondary metabolite factory because of its relatively few endogenous secondary metabolites. Transformed types can produce: polyketide synthase-derived 1,3,6,8-tetrahydroxynaphthalene, alternapyrone, and 3-methylorcinaldehyde; citrinin; terrequinone A; tennelin, pyripyropene, aphidicolin, terretonin, and andrastin A by plasmid insertion; paxilline and aflatrem by co-transformation; and aspyridone, originally from A. nidulans, by Gateway cloning.
History of 麹 in a broad sense
麹 (Chinese qū, Japanese kōji) which means mold used in fermented foods, was first mentioned in the Zhouli (Rites of the Zhou dynasty) in China in 300 BCE. Its development is a milestone in Chinese food technology, for it provides the conceptual framework for three major fermented soy foods: soy sauce, jiang/miso, and douchi, not to mention grain-based wines (including Japanese sake and Chinese huangjiu) and li (the Chinese forerunner of Japanese amazake).
Gallery
See also
References
External links
Making Rice Koji from Koji Spores
Sake World's description of koji
Aspergillus oryzae genome from the Database of Genomes Analysed at NITE
Global Aspergillus oryzae Market Report 2020 - Market Size, Share, Price, Trend and Forecast
(DOGAN)
oryzae
Rice wine
Molds used in food production
Fungi of Japan
Japanese cuisine
Fungus species | Aspergillus oryzae | Biology | 2,256 |
11,806,561 | https://en.wikipedia.org/wiki/Superior%20ligament%20of%20epididymis | The superior ligament of the epididymis is a strand of fibrous tissue which is covered by a reflection of the tunica vaginalis and connects the upper aspect of the epididymis with the testis.
Sexual anatomy
Ligaments | Superior ligament of epididymis | Biology | 52 |
66,494,896 | https://en.wikipedia.org/wiki/Leccinum%20melaneum | Leccinum melaneum is a species of fungus belonging to the family Boletaceae.
It is native to Europe and North America.
References
melaneum
Fungus species | Leccinum melaneum | Biology | 36 |
49,924,075 | https://en.wikipedia.org/wiki/Western%20North%20American%20Naturalist | Western North American Naturalist, formerly The Great Basin Naturalist, is a peer-reviewed scientific journal focusing on biodiversity and conservation of western North America. The journal's geographic coverage includes "from northernmost Canada and Alaska to southern Mexico, and from the Mississippi River to the Pacific Ocean." Established in 1939, it is published by the Monte L. Bean Life Science Museum (Brigham Young University). The journal is published quarterly, with monographs published irregularly in Monographs of the Western North American Naturalist.
History
Vasco M. Tanner founded the journal after a term as editor of Proceedings magazine. He hoped it would cover a wide range of biology-related topics while also giving him a place to publish his own research. From 1939 through 1966, the journal limited publication to one or two issues a year, due in part to World War II. Franklin Harris encouraged the journal to continue publication, and it was one of the first journals to be used "for exchange purposes" by university libraries. From 1967 on, the journal published quarterly issues. Tanner served as editor of the Great Basin Naturalist until 1970. Steven Wood, Tanner's successor as editor, established an editorial board for the journal, which allowed it to use an improved peer review process.
In 1975, the journal moved its editorial offices to the Monte L. Bean Life Science Museum. In 1976, articles too long for publication in the journal started being published in The Great Basin Naturalist Memoirs series. In 1990, Jim Barnes succeeded Steven Wood as editor. The journal's editor changed again in 1994 to Richard Baumann. In 1999, the publication of The Great Basin Naturalist ended. The journal's title changed to Western North American Naturalist, which started publishing in 2000. In 2006, Mark C. Belk became the journal's new editor. Belk was still the editor in 2017.
Impact
According to Journal Citation Reports, Western North American Naturalist had an impact factor of 0.311 and ranked 147 of 153 in Ecology category in 2016.
References
External links
The Great Basin Naturalist archive at the Biodiversity Heritage Library
Ecology journals
Academic journals established in 1939
English-language journals
Delayed open access journals
Quarterly journals
Academic journals published by museums
1939 establishments in the United States | Western North American Naturalist | Environmental_science | 453 |
17,217,452 | https://en.wikipedia.org/wiki/Psilocybe%20guilartensis | Psilocybe guilartensis is a psilocybin mushroom which has psilocybin and psilocin as main active compounds. It is common in Puerto Rico.
First reported in the literature in 1997, Gastón Guzmán placed P. guilartensis in Psilocybe section Brunneocystidiatae due to its blue staining reaction, small thick-walled subrhomboid spores, and pigmented cystidia.
Other mushrooms in the section Brunneocystidiatae include Psilocybe banderillensis, Psilocybe banderillensis var. paulensis, Psilocybe brunneocystidia, Psilocybe heimii, Psilocybe inconspicua, Psilocybe pleurocystidiosa, Psilocybe rzedowski, Psilocybe singeri, Psilocybe uxpanapensis, Psilocybe verae-crucis and Psilocybe weldenii.
Description
Cap: 1 – 3 cm in diameter, initially subconical to campanulate (bell-shaped), expanding to plano-convex with an umbo. Cap surface dark violet brown in color, translucent-striate near the margin, hygrophanous, fading to tan as it dries. Staining blue-green to black where bruised.
Gills: Cream color when young, violet brown or chocolate brown in age, with adnexed attachment.
Spores: Dark violet brown, subrhomboid in face view, subellipsoid in side view, thick walled, 6 x 5 μm.
Stipe: 3 – 8 cm long, 1 – 2 mm thick, central, equal with subbulbous base, hollow and cylindric, color whitish to brown, ornamented with small flattened scales towards the base. The base is covered in tiny yellow fibers which help distinguish this from similar species. Staining blue-green to black where bruised.
Taste: Farinaceous, sometimes with a slight mustard taste.
Odor: Farinaceous, sometimes with a slight mustard odor.
Microscopic features: Pigmented cheilocystidia and pleurocystidia present. Basidia four-spored. Clamp connections common.
Distribution and habitat
Psilocybe guilartensis is found growing gregariously, often on disturbed bare clay or moss. It is found along hiking trails, in coffee plantations, tropical and subtropical forests, especially in landslide areas. The mushroom is known to grow in Puerto Rico and Dominican Republic.
References
External links
Mushroom Observer - Psilocybe guilartensis
Psilocybe guilartensis photograph
Entheogens
Psychoactive fungi
guilartensis
Psychedelic tryptamine carriers
Taxa named by Gastón Guzmán
Fungus species | Psilocybe guilartensis | Biology | 582 |
34,620,850 | https://en.wikipedia.org/wiki/Absorption%20%28logic%29 | Absorption is a valid argument form and rule of inference of propositional logic. The rule states that if implies , then implies and . The rule makes it possible to introduce conjunctions to proofs. It is called the law of absorption because the term is "absorbed" by the term in the consequent. The rule can be stated:
where the rule is that wherever an instance of "" appears on a line of a proof, "" can be placed on a subsequent line.
Formal notation
The absorption rule may be expressed as a sequent:

P → Q ⊢ P → (P ∧ Q)

where ⊢ is a metalogical symbol meaning that P → (P ∧ Q) is a syntactic consequence of P → Q in some logical system;

and expressed as a truth-functional tautology or theorem of propositional logic. The principle was stated as a theorem of propositional logic by Russell and Whitehead in Principia Mathematica as:

(P → Q) → (P → (P ∧ Q))

where P and Q are propositions expressed in some formal system.
Examples
If it will rain, then I will wear my coat.
Therefore, if it will rain then it will rain and I will wear my coat.
Proof by truth table
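The truth-table argument can be checked mechanically. The sketch below (the helper name `implies` is my own, standing for material implication) enumerates all four assignments of P and Q and verifies that the conditional (P → Q) → (P → (P ∧ Q)) is true in every row:

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b is false only when a is true and b is false."""
    return (not a) or b

# Enumerate all four truth assignments of P and Q and check that
# (P -> Q) -> (P -> (P and Q)) evaluates to true in every row.
rows = []
for p, q in product([False, True], repeat=2):
    premise = implies(p, q)            # P -> Q
    conclusion = implies(p, p and q)   # P -> (P ∧ Q)
    rows.append(implies(premise, conclusion))

assert all(rows)
print("Absorption is a tautology:", all(rows))
```

Because the outer conditional holds in every row, the argument form is valid.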
Formal proof
See also
Absorption law
References
Rules of inference
Theorems in propositional logic | Absorption (logic) | Mathematics | 235 |
75,219,908 | https://en.wikipedia.org/wiki/Xeligekimab | Xeligekimab (GR1501) is a monoclonal antibody that neutralizes interleukin-17A; it is being developed for plaque psoriasis, axial spondyloarthritis, and lupus nephritis. It is in a Phase III trial in 2023.
References
Monoclonal antibodies
Disease-modifying antirheumatic drugs | Xeligekimab | Chemistry | 80 |
61,125,567 | https://en.wikipedia.org/wiki/Estradiol%20hexahydrobenzoate/hydroxyprogesterone%20caproate/testosterone%20hexahydrobenzoate | Estradiol hexahydrobenzoate/hydroxyprogesterone caproate/testosterone hexahydrobenzoate (EHHB/OHPC/THHB), sold under the brand name Trinestril AP, is an injectable combination medication of estradiol hexahydrobenzoate (EHHB), an estrogen, hydroxyprogesterone caproate (OHPC), a progestogen, and testosterone hexahydrobenzoate (THHB), an androgen/anabolic steroid. It contained 3 mg EHHB, 75 mg OHPC, and 100 mg THHB and was administered by intramuscular injection once per month. The medication was marketed by 1957.
See also
List of combined sex-hormonal preparations § Estrogens, progestogens, and androgens
References
Abandoned drugs
Combined estrogen–progestogen–androgen formulations | Estradiol hexahydrobenzoate/hydroxyprogesterone caproate/testosterone hexahydrobenzoate | Chemistry | 206 |
30,208,106 | https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Gallai%20theorem | The Erdős–Gallai theorem is a result in graph theory, a branch of combinatorial mathematics. It provides one of two known approaches to solving the graph realization problem, i.e. it gives a necessary and sufficient condition for a finite sequence of natural numbers to be the degree sequence of a simple graph. A sequence obeying these conditions is called "graphic". The theorem was published in 1960 by Paul Erdős and Tibor Gallai, after whom it is named.
Statement
A sequence of non-negative integers d₁ ≥ d₂ ≥ ⋯ ≥ dₙ can be represented as the degree sequence of a finite simple graph on n vertices if and only if d₁ + d₂ + ⋯ + dₙ is even and

d₁ + d₂ + ⋯ + d_k ≤ k(k − 1) + Σ_{i=k+1}^{n} min(d_i, k)

holds for every k in 1 ≤ k ≤ n.
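The statement translates directly into a test for graphic sequences. The sketch below (the function name is my own) checks the even-sum condition and each of the n inequalities:

```python
def is_graphic(seq):
    """Erdős–Gallai test: True iff seq is the degree sequence of a simple graph."""
    d = sorted(seq, reverse=True)       # the theorem assumes d1 >= d2 >= ... >= dn
    n = len(d)
    if any(x < 0 for x in d) or sum(d) % 2 != 0:
        return False                    # degrees must be non-negative with even sum
    for k in range(1, n + 1):
        lhs = sum(d[:k])                                   # sum of the k largest degrees
        rhs = k * (k - 1) + sum(min(x, k) for x in d[k:])  # Erdős–Gallai bound
        if lhs > rhs:
            return False
    return True

assert is_graphic([3, 3, 3, 3])       # K4: every vertex has degree 3
assert not is_graphic([3, 3, 1])      # odd degree sum (handshaking lemma fails)
assert not is_graphic([3, 3, 3, 1])   # even sum, but the k = 2 inequality fails: 6 > 2 + 2 + 1
```

The Havel–Hakimi algorithm listed under "See also" gives an alternative, constructive test for the same property.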
Proofs
It is not difficult to show that the conditions of the Erdős–Gallai theorem are necessary for a sequence of numbers to be graphic. The requirement that the sum of the degrees be even is the handshaking lemma, already used by Euler in his 1736 paper on the bridges of Königsberg. The inequality between the sum of the largest degrees and the sum of the remaining degrees can be established by double counting: the left side gives the numbers of edge-vertex adjacencies among the highest-degree vertices, each such adjacency must either be on an edge with one or two high-degree endpoints, the term on the right gives the maximum possible number of edge-vertex adjacencies in which both endpoints have high degree, and the remaining term on the right upper bounds the number of edges that have exactly one high degree endpoint. Thus, the more difficult part of the proof is to show that, for any sequence of numbers obeying these conditions, there exists a graph for which it is the degree sequence.
The original proof by Erdős and Gallai was long and involved. Choudum cites a shorter proof by Claude Berge, based on ideas of network flow, and instead provides a proof by mathematical induction on the sum of the degrees: he lets t be the first index of a number in the sequence for which d_t > d_{t+1} (or the penultimate number if all are equal), uses a case analysis to show that the sequence formed by subtracting one from d_t and from the last number in the sequence (and removing the last number if this subtraction causes it to become zero) is again graphic, and forms a graph representing the original sequence by adding an edge between the two positions from which one was subtracted.
Another proof considers a sequence of "subrealizations", graphs whose degrees are upper bounded by the given degree sequence. It shows that, if G is a subrealization, and i is the smallest index of a vertex in G whose degree is not equal to d_i, then G may be modified in a way that produces another subrealization, increasing the degree of vertex i without changing the degrees of the earlier vertices in the sequence. Repeated steps of this kind must eventually reach a realization of the given sequence, proving the theorem.
Relation to integer partitions
Aigner and Triesch describe close connections between the Erdős–Gallai theorem and the theory of integer partitions.
Let m = Σ d_i; then the sorted integer sequences summing to m may be interpreted as the partitions of m. Under majorization of their prefix sums, the partitions form a lattice, in which the minimal change between an individual partition and another partition lower in the partition order is to subtract one from one of the numbers d_i and add it to a number d_j that is smaller by at least two (d_j could be zero). As Aigner and Triesch show, this operation preserves the property of being graphic, so to prove the Erdős–Gallai theorem it suffices to characterize the graphic sequences that are maximal in this majorization order. They provide such a characterization, in terms of the Ferrers diagrams of the corresponding partitions, and show that it is equivalent to the Erdős–Gallai theorem.
Graphic sequences for other types of graph
Similar theorems describe the degree sequences of simple directed graphs, simple directed graphs with loops, and simple bipartite graphs . The first problem is characterized by the Fulkerson–Chen–Anstee theorem. The latter two cases, which are equivalent, are characterized by the Gale–Ryser theorem.
Stronger version
It has been proved that it suffices to consider only the kth inequalities for the indices k with d_k > d_{k+1}, together with k = n. Later work restricts the set of inequalities for graphs in an opposite direction: if an even-summed positive sequence has no repeated entries other than the maximum and the minimum (and its length exceeds the largest entry), then it suffices to check only a single one of the inequalities.
Generalization
A finite sequence of nonnegative integers d₁ ≥ ⋯ ≥ dₙ is graphic if its sum is even and there exists a sequence that is graphic and majorizes it. This result was stated early on and was later reinvented with a more direct proof.
See also
Havel–Hakimi algorithm
References
Gallai theorem
Theorems in graph theory | Erdős–Gallai theorem | Mathematics | 988 |
4,796,040 | https://en.wikipedia.org/wiki/Chemostat | A chemostat (from chemical environment is static) is a bioreactor to which fresh medium is continuously added, while culture liquid containing left over nutrients, metabolic end products and microorganisms is continuously removed at the same rate to keep the culture volume constant. By changing the rate with which medium is added to the bioreactor the specific growth rate of the microorganism can be easily controlled within limits.
Operation
Steady state
One of the most important features of chemostats is that microorganisms can be grown in a physiological steady state under constant environmental conditions. In this steady state, growth occurs at a constant specific growth rate and all culture parameters remain constant (culture volume, dissolved oxygen concentration, nutrient and product concentrations, pH, cell density, etc.). In addition, environmental conditions can be controlled by the experimenter. Microorganisms growing in chemostats usually reach a steady state because of a negative feedback between growth rate and nutrient consumption: if a low number of cells are present in the bioreactor, the cells can grow at growth rates higher than the dilution rate as they consume little nutrient so growth is less limited by the addition of limiting nutrient with the inflowing fresh medium. The limiting nutrient is a nutrient essential for growth, present in the medium at a limiting concentration (all other nutrients are usually supplied in surplus). However, the higher the number of cells becomes, the more nutrient is consumed, lowering the concentration of the limiting nutrient. In turn, this will reduce the specific growth rate of the cells, which will lead to a decline in the number of cells as they keep being removed from the system with the outflow. This results in a steady state. Due to self-regulation, the steady state is stable. This enables the experimenter to control the specific growth rate of the microorganisms by changing the speed of the pump feeding fresh medium into the vessel.
Well-mixed
Another important feature of chemostats and other continuous culture systems is that they are well-mixed so that environmental conditions are homogenous or uniform and microorganisms are randomly dispersed and encounter each other randomly. Therefore, competition and other interactions in the chemostat are global, in contrast to biofilms.
Dilution rate
The rate of nutrient exchange is expressed as the dilution rate D. At steady state, the specific growth rate μ of the micro-organism is equal to the dilution rate D. The dilution rate is defined as the flow of medium per unit of time, F, divided by the volume V of culture in the bioreactor:

D = F / V
Maximal growth rate and critical dilution rate
Specific growth rate μ is inversely related to the time it takes the biomass to double, called the doubling time td, by:

μ = ln 2 / td

Therefore, the doubling time td becomes a function of the dilution rate D in steady state:

td = ln 2 / D
Each microorganism growing on a particular substrate has a maximal specific growth rate μmax (the rate of growth observed if growth is limited by internal constraints rather than external nutrients). If a dilution rate is chosen that is higher than μmax, the cells cannot grow at a rate as fast as the rate with which they are being removed so the culture will not be able to sustain itself in the bioreactor, and will wash out.
However, since the concentration of the limiting nutrient in the chemostat cannot exceed the concentration in the feed, the specific growth rate that the cells can reach in the chemostat is usually slightly lower than the maximal specific growth rate, because specific growth rate usually increases with nutrient concentration as described by the kinetics of the Monod equation. The highest specific growth rate (μmax) cells can attain is equal to the critical dilution rate (Dc):

Dc = μmax · S / (KS + S)

where S is the substrate or nutrient concentration in the chemostat and KS is the half-saturation constant (this equation assumes Monod kinetics).
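The self-regulating steady state described above can be illustrated numerically. The sketch below is a toy forward-Euler integration of the standard chemostat mass balances under Monod kinetics; all parameter values are illustrative assumptions, not measurements.

```python
# Illustrative parameters (assumed values for this sketch)
MU_MAX = 1.0   # maximal specific growth rate (1/h)
K_S = 0.2      # half-saturation constant (g/L)
Y = 0.5        # yield: biomass formed per unit substrate consumed
S_IN = 10.0    # limiting-substrate concentration in the feed (g/L)
D = 0.5        # dilution rate F/V (1/h), chosen below the critical rate

def mu(s):
    """Monod kinetics: specific growth rate as a function of substrate concentration."""
    return MU_MAX * s / (K_S + s)

# Mass balances for biomass x and substrate s:
#   dx/dt = (mu(s) - D) * x
#   ds/dt = D * (S_IN - s) - mu(s) * x / Y
x, s, dt = 0.1, S_IN, 0.005
for _ in range(int(300 / dt)):
    dx = (mu(s) - D) * x
    ds = D * (S_IN - s) - mu(s) * x / Y
    x += dx * dt
    s = max(s + ds * dt, 0.0)

# At steady state mu self-adjusts to equal D, so the residual substrate is
# s* = D*K_S/(MU_MAX - D), independent of the feed concentration S_IN.
s_star = D * K_S / (MU_MAX - D)
print(f"mu = {mu(s):.3f} (should match D = {D}), s = {s:.3f}, predicted s* = {s_star:.3f}")
```

Raising D toward μmax in this sketch drives the residual substrate up; once D exceeds the critical dilution rate, the biomass term decays to zero, reproducing washout.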
Applications
Research
Chemostats in research are used for investigations in cell biology, as a source for large volumes of uniform cells or protein. The chemostat is often used to gather steady state data about an organism in order to generate a mathematical model relating to its metabolic processes. Chemostats are also used as microcosms in ecology and evolutionary biology. In the one case, mutation/selection is a nuisance, in the other case, it is the desired process under study. Chemostats can also be used to enrich for specific types of bacterial mutants in culture such as auxotrophs or those that are resistant to antibiotics or bacteriophages for further scientific study. Variations in the dilution rate permit the study of the metabolic strategies pursued by the organisms at different growth rates.
Competition for single and multiple resources, the evolution of resource acquisition and utilization pathways, cross-feeding/symbiosis, antagonism, predation, and competition among predators have all been studied in ecology and evolutionary biology using chemostats.
Industry
Chemostats are frequently used in the industrial manufacturing of ethanol. In this case, several chemostats are used in series, each maintained at decreasing sugar concentrations. The chemostat also serves as an experimental model of continuous cell cultures in the biotechnological industry.
Technical concerns
Foaming results in overflow, so the volume of liquid is not exactly constant.
Some very fragile cells are ruptured during agitation and aeration.
Cells may grow on the walls or adhere to other surfaces, which may be overcome by treating the glass walls of the vessel with a silane to render them hydrophobic. However, cells will be selected for attachment to the walls since those that do will not be removed from the system. Those bacteria that stick firmly to the walls forming a biofilm are difficult to study under chemostat conditions.
Mixing may not truly be uniform, upsetting the "static" property of the chemostat.
Dripping the media into the chamber actually results in small pulses of nutrients and thus oscillations in concentrations, again upsetting the "static" property of the chemostat.
Bacteria travel upstream quite easily. They will reach the reservoir of sterile medium quickly unless the liquid path is interrupted by an air break in which the medium falls in drops through air.
Continuous efforts to remedy each defect lead to variations on the basic chemostat quite regularly. Examples in the literature are numerous.
Antifoaming agents are used to suppress foaming.
Agitation and aeration can be done gently.
Many approaches have been taken to reduce wall growth
Various applications use paddles, bubbling, or other mechanisms for mixing
Dripping can be made less drastic with smaller droplets and larger vessel volumes
Many improvements target the threat of contamination
Experimental design considerations
Parameter choice and setup
The steady state concentration of the limiting substrate in the chemostat is independent of the influx concentration. The influx concentration will affect the cell concentration and thus the steady state OD.
Even though the limiting substrate concentration in the chemostat is usually very low, and is maintained by discrete highly concentrated influx pulses, in practice the temporal variation in the concentration within the chemostat is small (a few percent or less) and can thus be viewed as quasi-steady state.
The time it takes for the cell density (OD) to converge to a steady-state value (overshoot/undershoot) will often be long (multiple chemostat turnovers), especially when the initial inoculum is large. But, the time can be minimized with proper parameter choice.
Steady state growth
A chemostat might appear to be in steady state, but mutant strain takeovers can occur continuously, even though they are not detectable by monitoring macro scale parameters like OD or product concentrations.
The limiting substrate is usually at such low concentrations that it is undetectable. As a result, the concentration of the limiting substrate can vary greatly over time (percentage-wise) as different strains takeover the population, even if resulting changes in OD are too small to detect.
A “pulsed” chemostat (with very large influx pulses) has a substantially lower selective capacity than a standard quasi-continuous chemostat, for a mutant strain with increased fitness in limiting conditions.
By abruptly lowering the influx limiting substrate concentration it is possible to temporarily subject the cells to relatively harsher conditions, until the chemostat stabilizes back to the steady state (on the time order of the dilution rate D).
Mutation
Some types of mutant strains will appear rapidly:
If there is a SNP that can increase fitness it should appear in the population after only few chemostat doublings, for characteristically large chemostats (e.g. 10^11 E. coli cells).
A strain that requires two specific SNPs where only their combination gives a fitness advantage (whereas each one separately is neutral), is likely to appear only if the target size (the number of different SNP locations that give rise to an advantageous mutation) for each SNP is very large.
Other types of mutant strains (e.g. two SNPs with a small target size, more SNPs or in smaller chemostats) are highly unlikely to appear.
These other mutations are expected only through successive sweeps of mutants with a fitness advantage. One can only expect multiple mutants to arise if each mutation is independently beneficial, and not in cases where the mutations are individually neutral but together advantageous. Successive takeovers are the only reliable way for evolution to proceed in a chemostat.
The seemingly extreme scenario where we require every possible single SNP to co-exist at least once in the chemostat is actually quite likely. A large chemostat is very likely to reach this state.
For a large chemostat the expected time until an advantageous mutation occurs to be on the order of the chemostat turnover time. Note, this is usually substantially shorter than the time for an advantageous strain to take over the chemostat population. This is not necessarily so in a small chemostat.
The above points are expected to be the same across different asexually reproductive species (E. coli, S. cerevisiae, etc.).
Furthermore, the time until mutation appearance is independent of genome size, but dependent on per-BP mutation rate.
For characteristically large chemostats, a hyper-mutating strain does not give enough of an advantage to warrant use. Also, it does not have enough of a selective advantage to be expected to always appear through random mutation and take over the chemostat.
Single takeover
The takeover time is predictable given the relevant strain parameters.
Different dilution rates selectively favor different mutant strains to take over the chemostat population, if such a strain exists. For example:
A fast dilution rate creates a selection pressure for a mutant strain with a raised maximal growth rate;
A mid-range dilution rate creates a selection pressure for a mutant strain with a higher affinity to the limiting substrate;
A slow dilution rate creates a selection pressure for a mutant strain which can grow in media with no limiting substrate (presumably by consuming a different substrate present in the media);
The time for takeover of a superior mutant will be quite constant across a range of operation parameters. For characteristic operation values the take over time is on the order of days to weeks.
Successive takeovers
When the conditions are right (a large enough population, and multiple targets in the genome for simple advantageous mutations) multiple strains are expected to successively takeover the population, and to do so in a relatively timed and paced manner. The timing depends on the type of mutations.
In a takeover succession, even if the selective improvement of each of the strains stays constant (e.g. each new strain is better than the previous strain by a constant factor) – the takeover rate does not stay constant, but rather diminishes from strain to strain.
There are cases where successive takeovers occur so rapidly that it is very difficult to differentiate between strains, even when examining allele frequency. Thus, a lineage of multiple takeovers of consecutive strains might appear as the takeover of a single strain with a cohort of mutations.
Variations
Fermentation setups closely related to the chemostats are the turbidostat, the auxostat and the retentostat. In retentostats, culture liquid is also removed from the bioreactor, but a filter retains the biomass. In this case, the biomass concentration increases until the nutrient requirement for biomass maintenance has become equal to the amount of limiting nutrient that can be consumed.
See also
Bacterial growth
Biochemical engineering
Changestat
Continuous stirred-tank reactor (CSTR)
E. coli long-term evolution experiment
Fed-batch
References
External links
http://www.pererikstrandberg.se/examensarbete/chemostat.pdf
https://web.archive.org/web/20060504172359/http://www.rpi.edu/dept/chem-eng/Biotech-Environ/Contin/chemosta.htm
A final thesis including mathematical models of the chemostat and other bioreactors
A page about one laboratory chemostat design
Comprehensive chemostat manual (Dunham lab). Procedures and principles are general.
Bioreactors | Chemostat | Chemistry,Engineering,Biology | 2,703 |
61,330,792 | https://en.wikipedia.org/wiki/King%20Salman%20Award%20for%20Disability%20Research | The King Salman Award for Disability Research is an internationally recognized prize that is awarded to notable individuals who contributed to knowledge and scientific research in the field of disability. The prize was established by King Salman Center for Disability Research.
Areas and Nominations
There are three main areas of the award in which nominations are accepted: Health and Medical Sciences, Pedagogical and Educational Sciences, and Rehabilitative and Social Sciences. Nominations are accepted from local and international research and scientific organizations, as well as academic departments and universities.
Award Value
The laureates are given the following rewards:
A certificate with their names and works.
An honorary medal.
500,000 Saudi Riyals (US$133,450).
List of laureates
References
Saudi Arabian awards
Medicine awards | King Salman Award for Disability Research | Technology | 154 |
15,181,834 | https://en.wikipedia.org/wiki/Mitochondrial%20ribosomal%20protein%20L19 | 39S ribosomal protein L19, mitochondrial is a protein that in humans is encoded by the MRPL19 gene.
Mammalian mitochondrial ribosomal proteins are encoded by nuclear genes and help in protein synthesis within the mitochondrion. Mitochondrial ribosomes (mitoribosomes) consist of a small 28S subunit and a large 39S subunit. They have an estimated 75% protein to rRNA composition compared to prokaryotic ribosomes, where this ratio is reversed. Another difference between mammalian mitoribosomes and prokaryotic ribosomes is that the latter contain a 5S rRNA. Among different species, the proteins comprising the mitoribosome differ greatly in sequence, and sometimes in biochemical properties, which prevents easy recognition by sequence homology. This gene encodes a 39S subunit protein.
References
Further reading
Ribosomal proteins | Mitochondrial ribosomal protein L19 | Chemistry | 177 |
2,448,418 | https://en.wikipedia.org/wiki/Brian%20LaMacchia | Brian A. LaMacchia is a computer security specialist.
LaMacchia is currently the Executive Director of the MPC Alliance. LaMacchia was previously a Distinguished Engineer at Microsoft and headed the Security and Cryptography team within Microsoft Research (MSR). His team’s main project was the development of quantum-resistant public-key cryptographic algorithms and protocols. Brian was also a founding member of the Microsoft Cryptography Review Board and consulted on security and cryptography architectures, protocols and implementations across the company; previously he was the Director of Security and Cryptography in the Microsoft Extreme Computing Group. He played a leading role in the design of XKMS, the security architecture for .NET and Palladium. He designed and led the development team for the .NET security architecture. He was a security architect on Palladium. LaMacchia was originally well known for his work at the Massachusetts Institute of Technology establishing the MIT PGP Key Server, the first key centric PKI implementation to see wide-scale use. LaMacchia wrote the first Web interface for a PGP Key Server. He is a submitter of the Frodo post-quantum proposal to the NIST Post-Quantum Cryptography Standardization project.
His leadership has also been recognized by his membership in the Computing Community Consortium (CCC) Council.
He has played a leading role in the design of the W3C XMLDsig and XKMS standards. In particular, he is an author of XMLDsig versions 1.0, 1.1 and 2.0, and a contributor to XKMS. He is a coauthor of the OASIS standard WS-Security.
LaMacchia earned S.B., S.M., and Ph.D. degrees from MIT in 1990, 1991, and 1996, respectively.
As of 2024, LaMacchia is serving his third three-year term as Treasurer of the International Association for Cryptologic Research. He first joined the IACR Board of Directors in 2015 as General Chair of CRYPTO 2016. LaMacchia also serves as a member of the Board of Directors of Seattle Opera. He previously served for ten years as member of the board of directors of the Seattle International Film Festival, including the 2015-2016 term as president of SIFF.
References
External links
Brian LaMacchia's home page
Brian LaMacchia's Microsoft page
Net Framework
The >25 patents for inventions by LaMacchia
Computer security specialists
Cypherpunks
Living people
Year of birth missing (living people)
Massachusetts Institute of Technology | Brian LaMacchia | Technology | 519 |
14,474,795 | https://en.wikipedia.org/wiki/Community%20Broadband%20Bill | The Community Broadband Act was a bill (proposed law) that was never enacted into legislation by the U.S. Senate,110th Congress The act was intended to promote affordable broadband access by allowing municipal governments to provide telecommunications capability and services.
Supporters of the bill believed it would have encouraged widespread broadband development in the United States by overturning existing state bans on public broadband deployments and eliminating existing barriers to broadband development.
Acquiring municipal broadband for some communities is problematic because the laws of certain states prohibit local municipalities from installing their own broadband networks, and private sector companies are unable to provide the electric services needed for broadband. As a result, many rural and remote communities are left without broadband services. Some municipalities may find broadband service, but it may be limited to already available commercial options, which may fall short of community needs.
Bill
Specific provisions of the bill:
Prevent State governments from enforcing or adopting laws that would prohibit municipalities from providing broadband services
Encourage the development of public-private partnerships to spread the use of broadband services
Initiate notice requirements about broadband deployment to ensure the public has adequate information available to evaluate options
Give private providers the opportunity to provide alternative broadband services
Ensure public and private providers of broadband services are treated equally with respect to the laws, guidelines and policies that apply to all providers of broadband services
Economic impact
The onset of free or low-cost municipal broadband access to citizens in competition with commercial broadband services would have economic implications.
On October 30, 2007, the Senate Committee on Commerce, Science and Transportation estimated that enacting the community broadband bill would have no significant impact on the federal budget. Because the act would preempt laws in 15 states that presently ban the provision of broadband services by public entities, including municipalities, it would, however, impose mandates on some state and local governments. In accordance with the act, public providers would be required to publish notice of their intent to offer broadband services. Public providers would also be required to provide details about the types of broadband services they intend to offer, in addition to allowing private bids for those services.
Since the preemption laws and private bidding requirements would be considered intergovernmental mandates as defined by the UMRA, the Senate Committee on Commerce, Science and Transportation determined that the cost of the mandates would not exceed the threshold established by UMRA, which is adjusted yearly for inflation. In 2007 the UMRA threshold was $66 million.
Background
Bush Administration
Although broadband access is a national problem, it must be addressed at the local level. Acknowledging the importance of broadband in the increasingly competitive global economy, President Bush in June 2004 set a goal of universal and affordable broadband access for every American by 2007. However, the United States remained far from reaching this goal. A study by the Organisation for Economic Co-operation and Development ranked the U.S. 12th worldwide in the percentage of people with broadband connections. The majority of the top-ranked nations had successfully combined public-private partnerships to provide broadband access for citizens and businesses alike.
Proposal
On July 27, 2007, Senator Frank Lautenberg introduced the legislation, noting the importance of broadband services and how they are essential to providing important educational and economic opportunities, especially for rural areas. By providing public-private broadband service partnerships, the bill would make it easier for municipalities, cities, and towns across the nation to offer broadband access to their residents.
Telecommunications Industry
Despite the advocacy for public-private broadband partnerships and service, many telecommunications firms sought to bar the enactment of a community broadband act, arguing that government-backed networks would compete unfairly with private companies and would require heavy taxpayer subsidization that would minimize net benefits to local residents. Douglas Boone, chief executive of Premier Communications, speaking with the U.S. Telecom Association, said "setting up a government-owned network is like having city hall opening a chain of grocery stores or gas stations." Government-backed broadband, they argued, might also stifle innovation and inhibit technological advancement.
Other telecommunications companies, such as EarthLink, were supportive of universal community broadband deployments, believing that partnerships between governments and private companies can provide low-cost, high-speed citywide service that offers many advantages to residents, visitors and taxpayers. EarthLink Municipal Networks, a subsidiary created to design and implement wireless broadband services, held the country's biggest municipal ISP contracts, covering Philadelphia and Anaheim, California. The partnership with EarthLink is a prime example of how affordable broadband service was being provided to low-income neighborhoods that would otherwise be passed over by private companies.
Community Broadband Coalition
On September 21, 2007, the Community Broadband Coalition, formed by trade associations, public interest organizations, and private companies with an interest in enhancing the availability of broadband services throughout the country, submitted a congressional letter in support of the bill. The letter urged other senators to cosponsor the bipartisan bill, pointing out the benefits of adopting community broadband networks:
Increased economic development and jobs, enhancing market competition
Improved and accelerated delivery of e-government services
Universal, affordable Internet access for all Americans.
Major organizations included in the Community Broadband Coalition letter in support of The Community Broadband Act:
ACUTA
American Association of Law Libraries
American Library Association
American Public Power Association
Association of Research Libraries
EDUCAUSE
Free Press
Google
Intel
Media Access Project
National Association of Counties
National Association of Telecommunications Officers and Advisors (NATOA)
Tropos Networks
Utah Telecommunication Open Infrastructure Agency (UTOPIA)
XO Communications
Cosponsors
United States senators who cosponsored the bill in the 110th United States Congress
Gordon H. Smith
John Kerry
John McCain
Claire McCaskill
Olympia Snowe
Ted Stevens
Daniel Inouye
Russell Feingold
See also
Telecommunications Act of 2005
municipal broadband
References
Proposed legislation of the 110th United States Congress
Telecommunications law
Computer law
Broadband | Community Broadband Bill | Technology | 1,140 |
54,327,031 | https://en.wikipedia.org/wiki/Peter%20Bossaerts | Peter L. Bossaerts (10 January 1960 in Antwerp, Belgium) is a Belgian-American economist. He is considered one of the pioneers and leading researchers in neuroeconomics and experimental finance.
He is Professor of Neuroeconomics at the University of Cambridge.
Life
Bossaerts grew up in Belgium and studied at the Universitaire Faculteiten Sint-Ignatius Antwerpen (today University of Antwerp) from 1977 to 1982, where he obtained a Licenciate (Bachelor) and Doctorandus (Master) in applied economics. After coursework towards a PhD in statistics at the Vrije Universiteit Brussel, he earned a Ph.D. in financial economics at the University of California under the supervision of Richard Roll.
He began his academic career as a research associate at Carnegie Mellon University, then worked as an assistant professor in finance from 1986 to 1990. He joined the California Institute of Technology (Caltech) in 1990 as an assistant professor and became an associate professor in 1994, full professor in 1998, William D. Hacker Professor of Economics and Management in 2003, before being appointed as Dean ("Division Chair") of the Humanities and Social Sciences. From 2007 to 2009, he was Swiss Finance Institute professor at the Swiss Federal Institute of Technology, Lausanne (EPFL). In 2013, he moved from Caltech to the Eccles School of Business of the University of Utah, and in 2016 on to the University of Melbourne (Australia), where he was professor in experimental finance and decision neuroscience, and was awarded a Redmond Barry Distinguished Professorship. He was co-head of the Brain, Mind and Markets Laboratory and was Honorary Professor at the Florey Institute of Neuroscience and Mental Health. In 2022, he moved to the University of Cambridge, UK, where he is now the Leverhulme International Professor of Neuroeconomics at the Faculty of Economics.
Bossaerts is an elected Fellow of the Econometric Society, the Society for the Advancement of Economic Theory, and the Academy of the Social Sciences in Australia. He was president of the Society for Neuroeconomics and the Society for Experimental Finance.
He has published numerous scientific articles in well-known field journals such as Econometrica, Journal of Political Economy, Journal of Finance, Review of Financial Studies, Neuron, Journal of Neuroscience, Econometric Theory, Mathematical Finance, as well as general science journals such as Science and Proceedings of the National Academy of Sciences. He summarised his earlier research on asset pricing and experimental finance in the 2002 book "The Paradox of Asset Pricing".
He is the father of two children and lives in Eltham, Victoria (Australia).
Research
Bossaerts is one of the pioneers of experimental finance, the use of controlled experiments to test theories in finance and designs for better allocation of risks and/or aggregation of information. He advanced the approach to test the core dynamic model used in finance, macro-economics and central banking to understand the link between asset prices, aggregate income, aggregate consumption, and business cycles (the "Lucas" model). This allowed him to test some of the major models of asset pricing (CAPM, the Lucas model, DSGE) that are used widely throughout academia, industry and government, in teaching, in the analysis of historical data from the field, and in setting policy and regulation. It also allowed him to try novel market designs, such as combinatorial double auctions, to improve the allocation of risk, and to initiate a unique program on research and teaching of algorithmic (automated; robot) trading.
Bossaerts pioneered neuroeconomics, in which decision theory and game theory are brought to bear on interpreting computational signals in the brain. This has led to the emergence of the new fields of decision neuroscience and computational neuropsychiatry.
In the past, his work has focused on decision making under uncertainty, where uncertainty is understood as in probability theory. Recently, he has been studying uncertainty that is generated by computational complexity.
References
External links
Resume (PDF; 202 kB) on the website of the California Institute of Technology
Resume on the website of the University of Melbourne
1960 births
Living people
Belgian economists
21st-century American economists
20th-century American economists
Fellows of the Econometric Society
American financial economists
Corporate finance theorists
Behavioral finance
Experimental economics | Peter Bossaerts | Biology | 877 |
15,844,313 | https://en.wikipedia.org/wiki/In-gel%20digestion | The in-gel digestion step is a part of the sample preparation for the mass spectrometric identification of proteins in course of proteomic analysis. The method was introduced in 1992 by Rosenfeld. Innumerable modifications and improvements in the basic elements of the procedure remain.
The in-gel digestion step primarily comprises four steps: destaining, reduction and alkylation (R&A) of the cysteines in the protein, proteolytic cleavage of the protein, and extraction of the generated peptides.
Destaining
Proteins which were separated by 1D or 2D PAGE are usually visualised by staining with dyes like Coomassie brilliant blue (CBB) or silver. Although the sensitivity of the method is significantly lower, the use of Coomassie is more common for samples destined for mass spectrometry since the silver staining impairs the analysis. After excision of the protein band of interest from the gel most protocols require a destaining of the proteins before proceeding.
The destaining solution for CBB usually contains the buffer salt ammonium bicarbonate (NH4HCO3) and a fraction of 30%-50% organic solvent (mostly acetonitrile). The hydrophobic interactions between protein and CBB are reduced by the organic fraction of the solution. At the same time, the ionic part of the solution diminishes the electrostatic bonds between the dye and the positively charged amino acids of the protein. The effectiveness of destaining is thereby increased compared with a mixture of water and organic solvent alone. An increase in temperature promotes the destaining process. To a certain degree (< 10%), the destaining procedure is accompanied by a loss of protein. Furthermore, the removal of CBB does not affect the yield of peptides in the mass spectrometric measurement.
In the case of silver stained protein bands the destaining is accomplished by oxidation of the metallic silver attached to the protein by potassium ferricyanide or hydrogen peroxide (H2O2). The released silver ions are complexed subsequently by sodium thiosulfate.
Reduction and alkylation (R & A)
The staining and destaining of gels is often followed by the reduction and alkylation (R&A) of the cystines or cysteines in the proteins. Hereby, the disulfide bonds of the proteins are irreversibly broken up and the optimal unfolding of the tertiary structure is obtained. The reduction to the thiol is accomplished by reaction with chemicals containing sulfhydryl or phosphine groups, such as dithiothreitol (DTT) or tris-2-carboxyethylphosphine hydrochloride (TCEP). In the course of the subsequent irreversible alkylation of the SH groups with iodoacetamide, the cysteines are transformed to the stable S-carboxyamidomethylcysteine (CAM; adduct: -CH2-CONH2). The molecular weight of the cysteine amino-acid residue is thereby increased from 103.01 Da to 160.03 Da.
Reduction and alkylation of cysteine residues improves peptide yield and sequence coverage, as well as the identification of proteins with a high number of disulfide bonds. Because the amino acid cysteine is rare in most proteins, the R&A step does not improve the mass spectrometric analysis for them. For the quantitative and homogeneous alkylation of cysteines, the position of the modification step in the sample-preparation process is crucial. With denaturing electrophoresis it is strongly recommended to perform the reaction before the electrophoresis, since free acrylamide monomers in the gel are able to modify cysteine residues irreversibly. The resulting acrylamide adducts have a molecular weight of 174.05 Da.
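The mass shifts quoted in the two paragraphs above can be checked with a few lines of arithmetic. A sketch; the helper name is made up for illustration:

```python
# Cysteine modification mass arithmetic from the text:
# carbamidomethylation raises the residue from 103.01 to 160.03 Da,
# an acrylamide (propionamide) adduct raises it to 174.05 Da.

CYS_RESIDUE = 103.01                      # unmodified cysteine residue (Da)
CAM_SHIFT = 160.03 - CYS_RESIDUE          # carbamidomethyl: +57.02 Da
ACRYLAMIDE_SHIFT = 174.05 - CYS_RESIDUE   # propionamide:    +71.04 Da

def alkylated_peptide_mass(peptide_mass, n_cys, shift=CAM_SHIFT):
    """Mass of a peptide after modifying n_cys cysteines by `shift` Da."""
    return peptide_mass + n_cys * shift
```

A peptide of 1000.00 Da carrying two carbamidomethylated cysteines would thus be observed at about 1114.04 Da.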
In-gel digestion
Afterwards, the eponymous step of the method is performed: the in-gel digestion of the proteins. By this procedure, the protein is cut enzymatically into a limited number of shorter fragments. These fragments are called peptides and allow for the identification of the protein by their characteristic mass and pattern. The serine protease trypsin is the most common enzyme used in protein analytics. Trypsin cuts the peptide bond specifically at the carboxyl end of the basic amino acids arginine and lysine. If there is an acidic amino acid such as aspartic acid or glutamic acid directly adjacent to the cutting site, the rate of hydrolysis is diminished; a proline C-terminal to the cutting site inhibits the hydrolysis completely.
An undesirable side effect of the use of proteolytic enzymes is the self-digestion of the protease. To avoid this, Ca2+ ions were added to the digestion buffer in the past. Nowadays most suppliers offer modified trypsin in which selective methylation of the lysines limits the autolytic activity to the arginine cutting sites. Unmodified trypsin has its highest activity between 35 °C and 45 °C. After the modification, the optimal temperature shifts to the range of 50 °C to 55 °C. Other enzymes used for in-gel digestion are the endoproteases Lys-C, Glu-C, Asp-N and Lys-N. These proteases cut specifically at only one amino acid, e.g. Asp-N cuts N-terminal of aspartic acid. Therefore, a lower number of longer peptides is obtained.
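The trypsin cleavage rule stated above (cut C-terminal to lysine or arginine, except when the next residue is proline) is easy to express as a toy in-silico digest. This is an illustration only, not part of the wet-lab protocol, and it ignores missed cleavages and the slower hydrolysis next to acidic residues:

```python
# Toy in-silico tryptic digest: cleave after K or R unless the following
# residue is proline; no missed cleavages are modeled.

def trypsin_digest(sequence):
    """Return the fully cleaved tryptic peptides of a protein sequence."""
    peptides, start = [], 0
    for i, aa in enumerate(sequence):
        next_aa = sequence[i + 1] if i + 1 < len(sequence) else ""
        if aa in "KR" and next_aa != "P":
            peptides.append(sequence[start:i + 1])
            start = i + 1
    if start < len(sequence):         # C-terminal leftover peptide
        peptides.append(sequence[start:])
    return peptides

# "K" before "W" is cut; "R" before "P" is left uncut:
print(trypsin_digest("MKWVTFRPLLK"))  # ['MK', 'WVTFRPLLK']
```

Such in-silico digests are the basis of peptide-mass-fingerprint searches, where the predicted fragment masses are matched against the measured spectrum.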
The analysis of the complete primary sequence of a protein using only one protease is usually not possible. In those cases the digestion of the target protein in several approaches with different enzymes is recommended. The resulting overlapping peptides permit the assembly of the complete sequence of the protein.
For the digestion, the proteins fixed in the matrix of the gel have to be made accessible to the protease. Permeation of the enzyme into the gel is believed to be facilitated by dehydrating the gel pieces with acetonitrile and subsequently swelling them in the digestion buffer containing the protease. This procedure relies on the presumption that the protease permeates into the gel during the swelling. However, studies of the penetration of enzymes into the gel showed the process to be driven almost completely by diffusion; drying the gel does not seem to support it. Improvement of the in-gel digestion therefore has to be achieved by shortening the enzyme's path to its substrate, e.g. by cutting the gel into pieces as small as possible.
Usually, the in-gel digestion is run as an overnight process. With trypsin as protease and a temperature of 37 °C, the incubation time found in most protocols is 12-15 h. However, experiments on the duration of the digestion process showed that after 3 h there is enough material for successful mass spectrometric analysis. Furthermore, optimising the conditions for the protease in temperature and pH allows the digestion of a sample to be completed in 30 min.
Surfactants (detergents) can aid in the solubilization and denaturing of proteins in the gel and thereby shorten digestion times and increase protein cleavage and the number and amount of extracted peptides, especially for lipophilic proteins such as membrane proteins. Cleavable detergents are detergents that are cleaved after digestion, often under acidic conditions. This makes the addition of detergents compatible with mass spectrometry.
Extraction
After the digestion is finished, the peptides generated in this process have to be extracted from the gel matrix. This is accomplished by one or several extraction steps: the gel particles are incubated with an extraction solution and the supernatant is collected. In the first extraction, almost all of the peptide is recovered; repeating the extraction step increases the yield of the whole process by only 5-10%. To meet the requirements of peptides with different physical and chemical properties, an iterative extraction with basic or acidic solutions is performed. For the extraction of acidic peptides, a solution similar in concentration and composition to the digestion buffer is used; basic peptides are extracted, depending on the intended mass spectrometric method, with a low-concentration acidic solution of formic acid for ESI or trifluoroacetic acid for MALDI. Studies on model proteins showed a recovery of approximately 70–80% of the expected peptide yield by extraction from the gel.
Many protocols contain an additional fraction of acetonitrile to the extraction solution which, in concentrations above 30% (v/v), is effective in reducing the adsorption of peptides to the surface of reaction tubes and pipette tips. The liquid of the pooled extracts is evaporated in a centrifugal evaporator. If the volatile salt ammonium bicarbonate was used for the basic extraction, it is partially removed in the drying process. The dried peptides can be stored at -20 °C for at least six months.
Critical considerations and actual trends
Some major drawbacks of the common protocols for the in-gel digestion are the extended time needed and the multiple processing steps, making the method error-prone with respect to contaminations (especially keratin). These disadvantages were largely removed by the development of optimised protocols and specialised reaction tubes.
More severe than the difficulties with handling are losses of material while processing the samples. The mass spectrometric protein analysis is often performed at the limit of detection, so even small losses can dictate success or failure of the whole analysis. These losses are due to washout during different processing steps, adsorption to the surface of reaction tubes and pipette tips, incomplete extraction of peptides from the gel and/or bad ionisation of single peptides in the mass spectrometer. Depending on the physicochemical properties of the peptides, losses can vary between 15 and 50%. Due to the inherent heterogeneity of the peptides, up to now, a universally valid solution for this major drawback of the method has not been found.
Commercial implementations
The commercial implementations of in-gel digestion have to be divided into products for high and for low throughput laboratories.
High-throughput
Due to the highly time-consuming and work-intensive standard procedure, the method of in-gel digestion was limited to a relatively small number of protein spots to be processed at a time. Therefore, it has been found to be the ideal object for automation ambitions to overcome these limitations for industrial and service laboratories. Today, in laboratories where in-gel digestion is performed in high-throughput quantities, the procedure is usually automated. The degree of automation varies from simple pipetting robots to highly sophisticated all-in-one solutions, offering an automated workflow from gel to mass spectrometry. The systems usually consist of a spot picker, a digestion robot, and a spotter.
The advantages of the automation other than the larger number of spots to be processed at a time are the reduced manual work and the improved standardisation. Due to the many handling steps of the method, the results of the manual process could vary depending on the dexterity of the user and the risk of contamination is high. Therefore, the quality of the results is described to be one main advantage of the automated process.
Drawbacks of automated solutions are the costs for robots, maintenance and consumables as well as the complicated setup of the process. Since the automated picking needs digitised information of the spot location, the analysis of the gel image for relevant spots has to be done by software requiring standardised imaging methods and special scanners. This lengthy procedure prevents the researcher from spontaneous identifications of a few interesting spots from a single gel as well as the need to operate the systems at full capacity. The resulting amount of data from the subsequent automated MS analysis is another problem of high throughput systems as their quality is often questionable and the evaluation of these data takes significantly longer than the collection.
Low-throughput
The mentioned drawbacks limit the reasonable use of automated in-gel digestion systems to the routine laboratory whereas the research laboratory with a demand to make a flexible use of the instruments of protein identification more often stays with the manual, low-throughput methods for in-gel digestion and MS analysis. This group of customers is targeted by the industry with several kit systems for in-gel digestion.
Most of the kit systems are mere collections of the chemicals and enzymes needed for the in-gel digestion whereas the underlying protocol remains unchanged from the manual standard procedure described above. The advantage of these products for the inexperienced customer lies in the guaranteed functioning of the diverse solutions in combination with a ready-made protocol for the process.
A few companies have tried to improve the handling process of in-gel digestion to allow even with manual sample preparation an easier and more standardised workflow. The Montage In-Gel Digest Kit from Millipore is based on the standard protocol, but enables processing of a large number of parallel samples by transferring the handling of the gel pieces to a modified 96 well microplate. The solutions for the diverse steps of in-gel digestion are pipetted into the wells of this plate whereas the removal of liquids is performed through the bottom of the wells by a vacuum pump. This system simplifies the handling of the multiple pipetting steps by the use of multichannel pipettes and even pipetting robots. Actually, some manufacturers of high-throughput systems have adopted the system to work with their robots. This illustrates the orientation of this kit solution to laboratories with a larger number of samples.
See also
Zymography, an unrelated technique in molecular biology which also involves the digestion of proteins in an electrophoretic gel
References
External links
Flash film illustrating the experimental procedure of the optimised in-gel digestion as described in Granvogl et al.
Proteins
Mass spectrometry | In-gel digestion | Physics,Chemistry | 2,938 |
48,457,138 | https://en.wikipedia.org/wiki/Laetiporus%20cremeiporus | Laetiporus cremeiporus is a species of polypore fungus in the family Fomitopsidaceae. It is found in cooler temperate areas of China and Japan, where it grows on logs and stumps of hardwood trees, especially oak. The fruit body of the fungus comprises large masses of overlapping reddish-orange caps with a cream-colored pore surface on the underside.
Taxonomy
The fungus was described as new to science in 2010 by Japanese mycologists Yuko Ota and Tsutomu Hattori. The type collection was made on Mount Kurikoma, in Miyagi Prefecture, Japan, where the fungus was found fruiting on a trunk of oak.
Molecular analysis of DNA sequences confirmed that the taxon is a unique species within the genus Laetiporus. The specific epithet cremeiporus refers to the cream-colored pores on the cap underside.
Description
The fruit body of the fungus comprises overlapping light orange to reddish-orange fan-shaped plates that individually measure up to wide by long. Collectively, the entire fruit body can reach a size of or more. The color of the caps fades to pale brown in age. The pore surface on the cap underside is yellowish-white to cream colored initially, sometimes becoming pinkish in age. Pores are small, numbering two to four per millimeter; they are circular at first but become more angular as the fruit body matures. The flesh has a mild taste and an unpleasant odor that the authors liken to "garbage".
Spores are egg-shaped to ellipsoid, measuring 15–20 by 5–8 μm. Basidia (spore-bearing cells) are club-shaped with two to four sterigmata, and measure 15–20 by 5–8 μm.
Habitat and distribution
Laetiporus cremeiporus is found in cool and temperate areas of China and Japan, where it grows on stumps and logs of hardwood trees, usually oak.
Research on chemical constituents
Laetiporus cremeiporus has been used in traditional medicines, and has been researched for its pharmacological activity. Phytochemicals include bioactive compounds such as the phenolic compound inaoside A, nicotinamide, adenosine, and 5′-S-methyl-5′-thioadenosine. Inaoside A exhibited DPPH radical scavenging activity as a monophenolic compound with an IC50 of 79.9 μM. The Trolox equivalent antioxidant capacity value was 0.36.
Other compounds were isolated, including multiple sterols and acids: ergosterol peroxide, fomefficinic acid A, ergosta‐7,22-dien‐3β‐ol, cerevisterol, sulphurenic acid, 4E,8E‐N‐D‐2′‐hydroxypalmitoyl‐1‐O‐β‐D‐glycopyranosyl‐9‐methyl‐4,8‐sphingadienine, ergosterol, N‐2′‐hydroxytetracosyl‐1,3,4‐trihydroxy‐2‐amino‐octadecane, nicotinic acid, and eburicoic acid.
References
Fungi described in 2010
Fungi of China
Fungi of Japan
Fungal plant pathogens and diseases
cremeiporus
Fungus species | Laetiporus cremeiporus | Biology | 700 |
327,612 | https://en.wikipedia.org/wiki/RadioShack | RadioShack (formerly written as Radio Shack) is an American electronics retailer that was established in 1921 as an amateur radio mail-order business. Its original parent company, Radio Shack Corporation, was purchased by Tandy Corporation in 1962, shifting its focus from radio equipment to hobbyist electronic components sold in retail stores. At its peak in 1999, Tandy operated over 8,000 RadioShack stores in the United States, Mexico, and Canada, and under the Tandy name in The Netherlands, Belgium, Germany, France, the United Kingdom, and Australia.
The 21st century proved to be a period of gradual decline. In February 2015, after years of management crises, poor worker relations, diminished revenue, and 11 consecutive quarterly losses, RadioShack was delisted from the New York Stock Exchange and subsequently filed for Chapter 11 bankruptcy. In May 2015, the company's assets, including the RadioShack brand name and related intellectual property, were purchased by General Wireless, a subsidiary of Standard General, for US$26.2 million.
In March 2017, General Wireless and subsidiaries filed for bankruptcy, claiming that a store-within-a-store partnership with Sprint was not as profitable as expected. As a result, RadioShack shuttered several company-owned stores and announced plans to shift its business primarily online.
RadioShack was acquired by Retail Ecommerce Ventures, a holding company owned by Alex Mehr and self-help influencer Tai Lopez, in November 2020. RadioShack operated primarily as an e-commerce website with a network of independently owned and franchised RadioShack stores, as well as a supplier of parts for HobbyTown USA.
In May 2023, Unicomer Group acquired control of the worldwide RadioShack franchise. Unicomer is based in El Salvador and is one of the largest franchisors of RadioShack, with stores in Central America, South America, and the Caribbean. It had purchased its first RadioShack franchise (in El Salvador) in January 1998.
History
The first 40 years
The company was started as Radio Shack in 1921 by two brothers, Theodore and Milton Deutschmann, who wanted to provide equipment for the new field of amateur radio (also known as ham radio). The brothers opened a one-store retail and mail-order operation in the heart of downtown Boston at 46 Brattle Street. They chose the name "Radio Shack", which was the term for a small, wooden structure that housed a ship's radio equipment. The Deutschmanns thought the name was appropriate for a store that would supply the needs of radio officers aboard ships, as well as hams (amateur radio operators). The idea for the name came from an employee, Bill Halligan, who went on to form the Hallicrafters company. The term was already in use — and is to this day — by hams when referring to the location of their stations.
The company issued its first catalog in 1939 as it entered the high-fidelity music market. In 1954, Radio Shack began selling its own private-label products under the brand name Realist, changing the brand name to Realistic after being sued by Stereo Realist.
During the period the chain was based in Boston, it was commonly referred to disparagingly by its customers as "Nagasaki Hardware", as much of the merchandise was sourced from Japan, then perceived as a source of low-quality, inexpensive parts.
In 1959, the store moved its headquarters to 730 Commonwealth Avenue in Boston (across the street from Boston University's Marsh Chapel), with ambitious plans for further expansion. After expanding to nine stores plus an extensive mail-order business, the company fell on hard times in the early 1960s.
Tandy Corporation
Tandy Corporation, a leather goods corporation, was looking for other hobbyist-related businesses into which it could expand. Charles D. Tandy saw the potential of Radio Shack and retail consumer electronics, purchasing the company in 1962 for US$300,000.
At the time of the 1962 Tandy Radio Shack & Leather acquisition, the Radio Shack chain was nearly bankrupt.
Tandy's strategy was to appeal to hobbyists. It created small stores that were staffed by people who knew electronics, and sold mainly private brands. Tandy closed Radio Shack's unprofitable mail-order business, ended credit purchases and eliminated many top management positions, keeping the salespeople, merchandisers and advertisers. The number of items carried was cut from 40,000 to 2,500, as Tandy sought to "identify the 20% that represents 80% of the sales" and replace Radio Shack's handful of large stores with many "little holes in the wall", large numbers of rented locations which were easier to close and re-open elsewhere if one location didn't work out. Private-label brands from lower-cost manufacturers displaced name brands to raise Radio Shack profit margins; non-electronic lines from go-carts to musical instruments were abandoned entirely.
Customer data from the former RadioShack mail-order business determined where Tandy would locate new stores. As an incentive for them to work long hours and remain profitable, store managers were required to take an ownership stake in their stores. In markets too small to support a company-owned Radio Shack store, the chain relied on independent dealers who carried the products as a sideline. Charles D. Tandy said "We’re not looking for the guy who wants to spend his entire paycheck on a sound system", instead seeking customers "looking to save money by buying cheaper goods and improving them through modifications and accessorizing", making the chain popular among "nerds" and "kids aiming to excel at their science fairs".
Charles D. Tandy, who had guided the firm through a period of growth in the 1960s and 1970s, died of a heart attack at age 60 in November 1978.
In 1982, the breakup of the Bell System encouraged subscribers to own their own telephones instead of renting them from local phone companies; Radio Shack offered twenty models of home phones.
Much of the Radio Shack line was manufactured in the company's own factories. By 1990/1991, Tandy was the world's biggest manufacturer of personal computers; its OEM manufacturing capacity was building hardware for Digital Equipment Corporation, GRiD, Olivetti, AST Computer, Panasonic, and others. The company manufactured everything from store fixtures to computer software to wire and cable, TV antennas, audio and videotape. At one point, Radio Shack was the world's largest electronics chain.
In June 1991, Tandy closed or restructured its 200 Radio Shack Computer Centers, acquired Computer City, and attempted to shift its emphasis away from components and cables, toward mainstream consumer electronics. Tandy sold its computer manufacturing to AST Research in 1993, including the laptop computer Grid Systems Corporation which it had purchased in 1988. It sold the Memorex consumer recording trademarks to a Hong Kong firm, and divested most of its manufacturing divisions. House-brand products, which Radio Shack had long marked up heavily, were replaced with third-party brands already readily available from competitors. This reduced profit margins.
In 1992, Tandy attempted to launch big-box electronics retailer Incredible Universe; most of the seventeen stores never turned a profit. Its six profitable stores were sold to Fry's Electronics in 1996; the others were closed. Other rebranding attempts included the launch or acquisition of chains including McDuff, Video Concepts and the Edge in Electronics; these were larger stores which carried TVs, appliances and other lines.
Tandy closed the McDuff stores and abandoned Incredible Universe in 1996, but continued to add new RadioShack stores. By 1996, industrial parts suppliers were deploying e-commerce to sell a wide range of components online; it would be another decade before RadioShack would sell parts from its website, with a selection so limited that it was no rival to established industrial vendors with million-item specialized, centralized inventories.
In 1994, the company introduced a service known as "The Repair Shop at Radio Shack", through which it provided inexpensive out-of-warranty repairs for more than 45 different brands of electronic equipment. The company already had over one million parts in its extensive parts warehouses and 128 service centers throughout the US and Canada; it hoped to leverage these to build customer relationships and increase store traffic. Len Roberts, president of the Radio Shack division since 1993, estimated that the new repair business could generate $500 million per year by 1999.
"America's technology store" was abandoned for the "you've got questions, we've got answers" slogan in 1994. In early summer 1995, the company changed its logo; "Radio Shack" was spelled in camel case as "RadioShack". In 1996, RadioShack successfully petitioned the US Federal Communications Commission to allocate frequencies for the Family Radio Service, a short-range walkie-talkie system that proved popular.
Battery of the Month
From the 1960s until the early 1990s, Radio Shack promoted a "battery of the month" club; a free wallet-sized cardboard card offered one free Enercell per month in-store. Like the free vacuum tube testing offered in-store in the early 1970s, this small loss leader drew foot traffic. The cards also served as generic business cards for the salespeople.
Allied Radio
In 1970, Tandy Corporation bought Allied Radio Corporation (both retail and industrial divisions), merging the brands into Allied Radio Shack and closing duplicate locations. After a 1973 federal government review, the company sold off the few remaining Allied retail stores and resumed using the Radio Shack name. Allied Electronics, the firm's industrial component operation, continued as a Tandy division until it was sold to Spartan Manufacturing in 1981.
Flavoradio
The longest-running product for Radio Shack was the AM-only Realistic Flavoradio, sold from 1972 to 2000: 28 years across three designs, the longest production run in radio history. It was originally released in five colors in the 1972 catalog: vanilla, chocolate, strawberry, avocado, and plum. For 1973, vanilla and chocolate were dropped (and are thus rare today) and replaced by lemon and orange. At some point two-tone models with white backs were offered but never appeared in catalogs; these are extremely rare today.
The original design had five transistors (model 166). A sixth was added in 1980 (model 166a). The case was redesigned for 1987, making it taller and thinner, and it came in red, blue, and black. The final model, the 201a, came in 1996 and was designed around an integrated circuit. They were first made in Korea, then Hong Kong, and finally the Philippines. The Flavoradio carried the Realistic name until about 1996, when it switched to "Radio Shack", then finally "Optimus". When the Flavoradio was dropped from the catalog in 2001, it was the last AM-only radio on the market.
CB radio
The chain profited from the mass popularity of citizens band radio in the mid-1970s which, at its peak, represented nearly 30% of the chain's revenue.
Home computers
In 1977, two years after the MITS Altair 8800, Radio Shack introduced the TRS-80, one of the first mass-produced personal computers. This was a complete pre-assembled system at a time when many microcomputers were built from kits, backed by a nationwide retail chain when computer stores were in their infancy. Sales of the initial, primitive US$600 TRS-80 exceeded all expectations despite its limited capabilities and high price. This was followed by the TRS-80 Color Computer in 1980, designed to attach to a television. Tandy also inspired the Tandy Computer Whiz Kids (1982–1991), a comic-book duo of teen calculator enthusiasts who teamed up with the likes of Archie and Superman. Radio Shack's computer stores offered lessons to pre-teens as "Radio Shack Computer Camp" in the early 1980s.
By September 1982, the company had more than 4,300 stores, and more than 2,000 independent franchises in towns not large enough for a company-owned store. The latter also sold third-party hardware and software for Tandy computers, but company-owned stores did not sell or even acknowledge the existence of non-Tandy products. In the mid-1980s, Radio Shack began a transition from its proprietary 8-bit computers to its proprietary IBM PC compatible Tandy computers, removing the "Radio Shack" name from the product in an attempt to shake off the long-running nicknames "Radio Scrap" and "Trash 80" to make the product appeal to business users. Poor compatibility, shrinking margins and a lack of economies of scale led Radio Shack to exit the computer-manufacturing market in the 1990s after losing much of the desktop PC market to newer, price-competitive rivals like Dell. Tandy acquired the Computer City chain in 1991, and sold the stores to CompUSA in 1998.
In 1994, RadioShack began selling IBM's Aptiva line of home computers. This partnership would last until 1998, when RadioShack partnered with Compaq and created 'The Creative Learning Center' as a store-within-a-store to promote desktop PCs. Similar promotions were tried with 'The Sprint Store at RadioShack' (mobile telephones), 'RCA Digital Entertainment Center' (home audio and video products), and 'PowerZone' (RadioShack's line of battery products, power supplies, and surge protectors).
RadioShack Corporation
In the mid-1990s, the company attempted to move out of small components and into more mainstream consumer markets, focusing on marketing wireless phones. This placed the chain, long accustomed to charging wide margins on specialized products not readily available from other local retailers, into direct competition against vendors such as Best Buy and Walmart.
In May 2000, the company dropped the Tandy name altogether, becoming RadioShack Corporation. The leather operating assets were sold to The Leather Factory on November 30, 2000; that business remains profitable.
House brands Realistic and Optimus were discontinued. In 1999, the company agreed to carry RCA products in a five-year agreement for a "RCA Digital Entertainment Center" store-within-a-store. When the RCA contract ended, RadioShack introduced its own Presidian and Accurian brands, reviving the Optimus brand in 2005 for some low-end products. Enercell, a house brand for dry cell batteries, remained in use until approximately 2014.
Most of the RadioShack house brands had been dropped when Tandy divested its manufacturing facilities in the early 1990s; the original list included: Realistic (stereo, hi-fi and radio), Archer (antenna rotors and boosters), Micronta (test equipment), Tandy (computers), TRS-80 (proprietary computer), ScienceFair (kits), DuoFone (landline telephony), Concertmate (music synthesizer), Enercell (cells and batteries), Road Patrol (radar detectors, bicycle radios), Patrolman (Realistic radio scanner), Deskmate (software), KitchenMate, Stereo Shack, Supertape (recording tape), Mach One, Optimus (speakers and turntables), Flavoradio (pocket AM radios in various colours), Weatheradio, Portavision (small televisions) and Minimus (speakers).
In 2000, RadioShack was one of multiple backers of the CueCat barcode reader, which soon turned out to be a marketing failure. The company had invested US$35 million in the concept, including printing the barcodes throughout its catalogs, and distributing CueCat devices to customers at no charge.
The last annual RadioShack printed catalogs were distributed to the public in 2003.
Until 2004, RadioShack routinely asked for the name and address of purchasers so they could be added to mailing lists. Name and mailing address were requested for special orders (RadioShack Unlimited parts and accessories, Direc2U items not stocked locally), returns, check payments, RadioShack Answers Plus credit card applications, service plan purchases and carrier activations of cellular telephones.
On December 20, 2005, RadioShack announced the sale of its newly built riverfront Fort Worth, Texas headquarters building to German-based KanAm Grund; the property was leased back to RadioShack for 20 years. In 2008, RadioShack assigned this lease to the Tarrant County College District (TCC), remaining in a portion of the space as its headquarters.
In 2005, RadioShack parted with Verizon in favor of a 10-year agreement with Cingular (later AT&T) and renegotiated its 11-year agreement with Sprint. In July 2011, RadioShack ended its wireless partnership with T-Mobile, replacing it with the "Verizon Wireless Store" within a store. Under the leadership of Jim Hamilton, 2005 marked a banner year for wireless: RadioShack sold more mobile phones than Walmart, Circuit City and Best Buy combined.
RadioShack had not made products under the Realistic name since the early 1990s. Support for many of Radio Shack's traditional product lines, including amateur radio, had ended by 2006. A handful of small-town franchise dealers used their ability to carry non-RadioShack merchandise to bring in parts from outside sources, but these represented a minority.
PointMobl and "The Shack"
In mid-December 2008, RadioShack opened three concept stores under the name "PointMobl" to sell wireless phones and service, netbooks, iPod and GPS navigation devices. The three Texas stores (Dallas, Highland Village and Allen) were furnished with white fixtures like those in the remodelled wireless departments of individual RadioShack stores, but there was no communicated relationship to RadioShack itself. Had the test proved successful, RadioShack could have moved to convert existing RadioShack locations into PointMobl stores in certain markets.
While some PointMobl products, such as car power adapters and phone cases, were carried as store-brand products in RadioShack stores, the stand-alone PointMobl stores were closed and the concept abandoned in March 2011.
In August 2009, RadioShack rebranded itself as "The Shack". The campaign increased sales of mobile products, but at the expense of its core components business.
RadioShack aggressively promoted Dish Network subscriptions.
In November 2012, RadioShack introduced Amazon Locker parcel pick-up services at its stores, only to dump the program in September 2013. In 2013, the chain made token attempts to regain the do it yourself market, including a new "Do It Together" slogan.
Long-time staff observed a slow and gradual shift away from electronic parts and customer service and toward promotion of wireless sales and add-ons; the pressure to sell gradually increased, while the focus on training and product knowledge decreased. Morale was abysmal; longtime employees who were paid bonus and retirement in stock options saw the value of these instruments fade away.
Financial decline
In 1998, RadioShack called itself the single largest seller of consumer telecommunications products in the world; its stock reached its peak a year later.
InterTAN, a former Tandy subsidiary, sold the Tandy UK stores in 1999 and the Australian stores in 2001. InterTAN was sold (with its Canadian stores) to rival Circuit City in 2004. The RadioShack brand remained in use in the United States, but the 21st century proved a period of long decline for the chain, which was slow to respond to key trends such as e-commerce, the entry of competitors like Best Buy and Amazon.com, and the growth of the maker movement.
By 2011, smartphone sales, rather than general electronics, accounted for half of the chain's revenue. The traditional RadioShack clientele of do-it-yourself tinkerers were increasingly sidelined. Electronic parts formerly stocked in stores were now mostly only available through on-line special order. Store employees concentrated efforts selling profitable mobile contracts, while other customers seeking assistance were neglected and left the stores in frustration.
Demand for consumer electronics was also increasingly being weakened by consumers buying the items online.
2004: "Fix 1500" initiative
In early 2004, RadioShack introduced Fix 1500, a sweeping program to "correct" inventory and profitability issues company-wide. The program put the 1,500 lowest-graded of the company's more than 5,000 store managers on notice to improve. Managers were graded not on tangible store and personnel data but on one-on-one interviews with district management.
Typically, a 90-day period was given for the manager to improve (thus causing another manager to then be selected for Fix 1500). A total of 1,734 store managers were reassigned as sales associates or terminated in a 6-month period. Also, during this period, RadioShack cancelled the employee stock purchase plan. By the first quarter of 2005, the metrics of skill assessment used during Fix 1500 had already been discarded, and the corporate officer who created the program had resigned.
In 2004, RadioShack was the target of a class-action lawsuit in which more than 3,300 current or former RadioShack managers alleged the company required them to work long hours without overtime pay. In an attempt to suppress the news, the company launched a successful strategic lawsuit against public participation against Bradley D. Jones, the webmaster of RadioShackSucks.com and a former RadioShack dealer for 17 years.
2006: Management problems
On February 20, 2006, CEO David Edmondson admitted to "misstatements" on his curriculum vitae and resigned after the Fort Worth Star-Telegram debunked his claim to degrees in theology and psychology from Heartland Baptist Bible College.
Chief operating officer Claire Babrowski briefly took over as CEO and president. A 31-year veteran of McDonald's Corporation, where she had been vice president and Chief Restaurant Operations Officer, Babrowski had joined RadioShack several months prior. She left the company in August 2006, later becoming CEO and Executive Vice President of Toys "R" Us.
RadioShack's board of directors appointed Julian C. Day as chairman and chief executive officer on July 7, 2006. Day had financial experience and had played a key role in revitalizing such companies as Safeway, Sears and Kmart but lacked any practical front-line sales experience needed to run a retail company. The Consumerist named him one of the "10 Crappiest CEOs" of 2009 (among consumer-facing companies, according to their own employees). He resigned in May 2011.
RadioShack Chief Financial Officer James Gooch succeeded Day as CEO in 2011, but "agreed to step down" 16 months later following a 73% plunge in the price of the stock. On February 11, 2013, RadioShack Corp. hired Joseph C. Magnacca from Walgreens, because he had experience in retail.
2006: Corporate layoffs and new strategy
In the spring of 2006, RadioShack announced a strategy to increase average unit volume, lower overhead costs, and grow profitable square footage. In early to mid-2006, RadioShack closed nearly 500 locations. It was determined that some stores were too close to each other, causing them to compete with one another for the same customers. Most of the stores closed in 2006 brought in less than US$350,000 in revenue each year.
Despite these actions, stock prices plummeted within what was otherwise a booming market. On August 10, 2006, RadioShack announced plans to eliminate a fifth of its company headquarters workforce to reduce overhead expense, improving its long-term competitive position while supporting a significantly smaller number of stores. On Tuesday, August 29, the affected workers received an e-mail: "The work force reduction notification is currently in progress. Unfortunately your position is one that has been eliminated." Four hundred and three workers were given 30 minutes to collect their personal effects, say their goodbyes to co-workers and then attend a meeting with their senior supervisors. Instead of issuing severance payments immediately, the company withheld them to ensure that company-issued BlackBerrys, laptops and cellphones were returned. This move drew immediate widespread public criticism for its lack of sensitivity.
2009: Customer relations problems
RadioShack and the Better Business Bureau of Fort Worth, Texas, met on April 23, 2009, to discuss unanswered and unresolved complaints. The company implemented a plan of action to address existing and future customer service issues. Stores were directed to post a sign with the district manager's name, the question "How Are We Doing?" and a direct toll-free number to the individual district office for their area. RadioShackHelp.com was created as another portal for customers to resolve their issues through the Internet. The BBB subsequently upgraded RadioShack from an "F" to an "A" rating; this was changed to "no rating" after the 2015 bankruptcy filing.
According to an experience ratings report published by Temkin Group, an independent research firm, RadioShack was ranked as the retailer with the worst overall customer experience; it maintained this position for six consecutive years.
2012–2014: Financial distress
From 2000 to 2011, RadioShack spent US$2.6 billion repurchasing its own stock in an attempt to prop up a share price which fell from US$24.33 to US$2.53; the buyback and the stock dividend were suspended in 2012 to conserve cash and reduce debt as the company continued to lose money. Company stock had declined 81 percent since 2010 and was trading well below book value. The stock reached an all-time low on April 14, 2012. In September 2012, RadioShack's head office laid off 130 workers after a US$21 million quarterly loss. Layoffs continued in August 2013; headquarters employment dropped from more than 2,000 before the 2006 layoffs to slightly fewer than 1,000 in late 2013. At the end of 2013, the chain owned 4,297 US stores.
The company had received a cash infusion in 2013 from Salus Capital Partners and Cerberus Capital Management. This debt carried onerous conditions, preventing RadioShack from gaining control over costs by limiting store closures to 200 per year and restricting the company's refinancing efforts. With too many underperforming stores remaining open, the chain continued to spiral toward bankruptcy.
On March 4, 2014, the company announced a net trading loss for 2013 of US$400.2 million, well above the 2012 loss of US$139.4 million, and proposed a restructuring which would close 1,100 lower-performing stores, almost 20% of its US locations. On May 9, 2014, the company reported that creditors had prevented it from carrying out those closures, with one lender presuming fewer stores would mean fewer assets to secure the loan and reduce any recovery it would get in a bankruptcy reorganization.
On June 10, 2014, RadioShack said that it had enough cash to last 12 months, but that lasting a year depended on sales growing. Sales had fallen for nine straight quarters, and by year's end the company realized a loss in "each of its 10 latest quarters". On June 20, 2014, RadioShack's stock price fell below US$1, triggering a July 25 warning from the New York Stock Exchange that it could be delisted for failure to maintain a stock price above $1. On July 28, 2014, Mergermarket's Debtwire reported RadioShack was discussing Chapter 11 bankruptcy protection as an option.
On September 11, 2014, RadioShack admitted it might have to file for bankruptcy, and would be unable to finance its operations "beyond the very near term" unless the company was sold, restructured, or received a major cash infusion. On September 15, 2014, RadioShack replaced its CFO with a bankruptcy specialist. On October 3, RadioShack announced an out-of-court restructuring, a 4:1 dilution of shares, and a rights issue priced at 40 cents a share. RadioShack's stock was halted on the New York exchange for the entire day. Despite the debt restructuring proposal, in December Salus and Cerberus informed RadioShack that it was in default of the financing they had provided as a cash infusion in 2013.
At the end of October 2014, quarterly figures indicated RadioShack was losing US$1.1 million per day. A November 2014 attempt to keep the stores open from 8AM to midnight on Thanksgiving Day drew a sharp backlash from employees and a few resignations; comparable store sales for the three days (Thursday-Saturday) were 1% lower than the prior year, when the stores were open for two of the days. The company's problems maintaining inventories of big-ticket items, such as Apple's iPhone 6, further cut into sales.
By December 2014, RadioShack was being sued by former employees for having encouraged them to invest 401(k) retirement savings in company stock, alleging a breach of fiduciary duties to "prudently" handle the retirement fund which caused "devastating losses" in the retirement plans as the stock dropped from US$13 in 2011 to 38 cents at the end of 2014. These claims were dismissed by the Fifth U.S. Circuit Court of Appeals in 2018.
2015 bankruptcy
On January 15, 2015, The Wall Street Journal reported RadioShack had delayed rent payments to some commercial landlords and was preparing a bankruptcy filing that could come as early as February. Officials of the company declined to comment on the report. A separate report by Bloomberg claimed the company might sell leases to as many as half its stores to Sprint.
On February 2, 2015, the company was delisted from the New York Stock Exchange after its average market capitalization remained below US$50 million for longer than thirty consecutive days. That same day, Bloomberg News reported RadioShack was in talks to sell half of its stores to Sprint and close the rest, which would effectively render RadioShack no longer a stand-alone retailer. Amazon.com and Brookstone were also mentioned as potential bidders, the former at the time seeking to establish a brick-and-mortar presence. On February 3, RadioShack defaulted on its loan from Salus Capital.
In the days following these reports, some employees were instructed to reduce prices and transfer inventory out of stores designated for closing to those that would remain open during the presumed upcoming bankruptcy proceedings, while the rest remained "in the dark" as to the company's future. Many stores had already closed abruptly on Sunday, February 1, 2015, the first day of the company's fiscal year, with employees given only a few hours advance notice. Some had been open with a skeleton crew, little inventory and reduced hours only because the Salus Capital loan terms limited the chain to 200 store closures a year. A creditor group alleged the chain had remained on life support instead of shutting down earlier and cutting its losses merely so that Standard General could avoid paying on credit default swaps which expired on December 20, 2014.
On February 5, 2015, RadioShack announced that it had filed for Chapter 11 bankruptcy protection. Using bankruptcy to end contractual restrictions that had required it to keep unprofitable stores open, the company immediately published a list of 1,784 stores which it intended to close, a process it wished to complete by the month's end to avoid an estimated US$7 million in March rent.
Customers had initially been given until March 6, 2015, to return merchandise or redeem unused gift cards. However, after legal pressure from the Attorneys General of several states, RadioShack ultimately agreed to reimburse customers for the value of unused gift cards.
RadioShack was criticized for including the personally identifying information of 67 million of its customers as part of its assets for sale during the proceedings, despite its long-standing policy and a promise to customers that data would never be sold for any reason at any time. The Federal Trade Commission and the Attorneys General of 38 states fought against this proposal. The sale of this data was ultimately approved, albeit greatly reduced from what was initially proposed.
General Wireless Operations, Inc.
On March 31, 2015, the bankruptcy court approved a US$160 million offer by the Standard General affiliate General Wireless Operations, Inc., gaining ownership of 1,743 RadioShack locations. As part of the deal, the company entered into a partnership with Sprint, in which the company would become a co-tenant at 1,435 RadioShack locations and establish store within a store areas devoted to selling its wireless brands, including Sprint, Boost Mobile and Virgin Mobile. The stores would collect commissions on the sale of Sprint products, and Sprint would assist in promotion. Sprint stated that this arrangement would more than double the company's retail footprint; the company previously had around 1,100 company-owned retail outlets, in comparison to the over 2,000 run by AT&T Mobility. Although the companies would be treated as co-tenants, a mockup showed Sprint branding being more prominent in promotion and exterior signage than that of RadioShack. The acquisition did not include rights to RadioShack's intellectual property (such as its trademarks), rights to RadioShack's franchised locations, or customer records, which were to be sold separately.
Re-branded stores soft launched on April 10, 2015, with a preliminary conversion of the stores' existing wireless departments to exclusively house Sprint brands, with all stores eventually to be renovated in waves to allocate larger spaces for Sprint. In May 2015, the acquisition of the "RadioShack" name and its assets by General Wireless for US$26.2 million was finalized. Chief marketing officer Michael Tatelman emphasized that the company that emerged from the 2015 proceedings is an entirely new company, and went on to affirm that the old RadioShack did not re-emerge from bankruptcy, calling it "defunct".
Less than one year after the bankruptcy events of 2015, Ron Garriques and Marty Amschler stepped down from their respective chief executive officer and chief financial officer positions; Garriques had held his position for nine months.
2017 bankruptcy
It was speculated on March 2, 2017, that General Wireless was preparing to take RadioShack through its second bankruptcy in two years. Evidence included the layoff of dozens of corporate office employees, plans to shutter two hundred stores, and the appearance of "all sales final" banners on the RadioShack website for in-store purchases at all locations.
RadioShack's Chapter 11 bankruptcy was formally filed on March 8, 2017. Of the then 1,300 remaining stores, several hundred were converted into Sprint-only locations.
Despite declaring Chapter 11 bankruptcy (typically reserved for reorganization of debt) instead of Chapter 7 (liquidation), the company engaged in liquidation of all inventory, supplies, and store fixtures, as well as auctioning off old memorabilia. On May 26, RadioShack announced plans to close all but 70 corporate stores and shift its business primarily to online. These stores closed after Memorial Day Weekend of 2017. Of the remaining stores, 50 more closed by the end of June 2017.
One particular store closing in April 2017 garnered widespread media attention when a Facebook account, calling itself "RadioShack - Reynoldsburg, Ohio", began posting aggressive messages alluding to the bankruptcy, such as "We closed. Fuck you all." RadioShack addressed these posts on their official Facebook page denying any involvement.
On June 29, 2017, RadioShack's creditors sued Sprint, claiming that it sabotaged its co-branded locations with newly built Sprint retail stores—which were constructed near well-performing RadioShack locations as determined by confidential sales information. The suit argued that Sprint's actions "destroyed nearly 6,000 RadioShack jobs".
General Wireless announced plans on June 12, 2017, to auction off the RadioShack name and IP, with bidding to begin on July 18. Bidding concluded on July 19, 2017, when one of RadioShack's creditors, Kensington Capital Holdings, obtained the RadioShack brand and other intellectual properties for US$15 million. Kensington was the sole bidder.
In October 2017, General Wireless officially exited bankruptcy and was allowed to retain the company's warehouse, e-commerce site, dealer network operations, and up to 28 stores.
Post-bankruptcy
RadioShack began shrinking its U.S. headquarters operation in 2017. By September of that year, it had a staff of 50 and had moved to RadioShack's distribution center on Terminal Road just north of the Fort Worth Stockyards.
In late July 2018, RadioShack partnered with HobbyTown USA to open around 100 RadioShack "Express" stores. HobbyTown owners select which RadioShack products to carry.
RadioShack dealerships had re-opened around 500 stores by October 2018. By November 2018, it had signed 77 of HobbyTown's 137 franchise stores.
Retail Ecommerce Ventures (REV)
In November 2020, RadioShack's intellectual property and its remaining operations—about 400 independent authorized dealers, about 80 Hobbytown USA affiliate stores, and its online sales operation—were purchased by Retail Ecommerce Ventures (REV), a Florida-based company that had previously purchased defunct retailers Pier 1 Imports, Dress Barn, Modell's Sporting Goods, and Linens 'n Things, along with The Franklin Mint.
In December 2021, REV announced it would use part of the brand name on a cryptocurrency platform called RadioShack DeFi (an abbreviation of decentralized finance). The platform would allow customers to freely swap existing cryptocurrency tokens for a token called $RADIO.
The Twitter account for RadioShack gained notoriety in June 2022 when it began posting tweets with not safe for work content in an effort to attract attention towards its cryptocurrency platform, then renamed RadioShack Swap. The strategy, directed by chief marketing officer Ábel Czupor, received a mixed reaction among dealers; HobbyTown USA subsequently terminated its relationship with RadioShack in response to customer confusion surrounding the posts.
Corporate headquarters
In the 1970s RadioShack had a new headquarters "Tandy Towers" built in downtown Fort Worth on Throckmorton Street. In 2001, RadioShack bought the former Ripley Arnold public housing complex in Downtown Fort Worth along the Trinity River for US$20 million. The company razed the complex and had a corporate headquarters campus built, after the City of Fort Worth approved a 30-year economic agreement to ensure that the company stayed in Fort Worth. RadioShack moved into the campus in 2005.
In 2009, with two years left on a rent-free lease of the building, the Fort Worth Star-Telegram reported that the company was considering a new site for its headquarters. The Tampa Bay Business Journal reported rumors among Tampa Bay Area real estate brokers and developers that RadioShack might select Tampa as the site of its headquarters.
In 2010, however, RadioShack announced efforts to remain at its current site. The headquarters was ultimately reduced to a small group after the second bankruptcy filing. In September 2017, what was left of RadioShack (about 50 people) left the downtown location, moving to a warehouse on Terminal Road just north of "The Stockyards".
Non-US operations
InterTAN Inc.
In 1986, Tandy Corp. announced it would create a spinoff of its international retail operations, called InterTAN Inc. The new company would take over operations of over 2,000 international company-owned and franchised stores, while Tandy retained its 7,253 domestic outlets and 30 of its manufacturing facilities. InterTAN had two main units, Tandy Electronics Ltd., which operated in Canada, the UK, France, Belgium, West Germany, and the Netherlands; and Tandy Australia Ltd., which operated in Australia.
At the end of 1989, there were 1,417 stores operated by InterTAN under the Tandy or Radio Shack names. InterTAN operated Tandy or Radio Shack stores in the UK until 1999 and Australia until 2001. RadioShack branded merchandise accounted for 9.5% of InterTAN's inventory purchases in its 2002–2003 fiscal year, the last complete year before the Circuit City acquisition, and later disappeared from stores entirely.
Canada
Following the creation of InterTAN, Tandy Electronics operated 873 stores in Canada, and owned the rights to the RadioShack name. In 2004, Circuit City purchased InterTAN, which held the rights to use the RadioShack name in Canada until 2010. Radio Shack Corp., which operated Radio Shack stores in the US, sued InterTAN in an attempt to end the contract for the company name early. On March 24, 2005, a US district court judge ruled in favor of RadioShack, requiring InterTAN to stop using the brand name in products, packaging, or advertising by June 30, 2005. The Canadian stores were rebranded under the name "The Source by Circuit City". Radio Shack briefly re-entered the Canadian market, but eventually closed all stores to refocus attention on its core US business.
The Source was acquired by BCE Inc. in 2009. In January 2024, Bell announced a brand licensing agreement with its competitor Best Buy, which will see its locations rebranded as Best Buy Express and integrated into Best Buy's retail network, but remain under the ownership of BCE.
Asia
In March 2012, Malaysian company Berjaya Retail Berhad entered into a franchising agreement with Radio Shack. Later that year, the company announced a second franchising deal with the Chinese company Cybermart.
Berjaya had six stores in Malaysia before it quietly ceased operations in 2017.
Mexico
In 1986, Grupo Gigante signed a deal with Tandy Corporation to operate Radio Shack branded stores in Mexico. After growing its electronics chain within Mexico to 24 stores, Grupo Gigante signed a new deal with Tandy in 1992 to form a new joint venture called Radio Shack de México, in which both companies had an equal share. As part of the deal, Grupo Gigante transferred its electronics stores to Radio Shack de México.
In 2008, Grupo Gigante separated from Radio Shack (by then renamed Radio Shack Corporation) and sold its share of the joint venture to Radio Shack Corp. for $42.3 million.
In June 2015, Grupo Gigante repurchased 100 percent of RadioShack de Mexico, including stores, warehouses, and all related brand names and intellectual properties for use within Mexico, from the US Bankruptcy Court in Delaware for US$31.5 million. The chain had 247 stores in Mexico at the time of the sale. Following the sale, all Radio Shack stores, warehouses, brands, assets, and related trademarks in Mexico are currently owned by RadioShack de México S.A. de C.V., a subsidiary of Grupo Gigante.
A major Mexican news magazine had reported in March 2015 that Grupo Gigante had actually purchased 100% of the stock in RadioShack de México from RadioShack Corporation for US$31.8 million two months prior to the bankruptcy filing, but only had to hand over US$11.8 million to RadioShack Corp. after also assuming approximately US$20 million in debt liabilities.
While Radio Shack was facing a second bankruptcy in the United States, Grupo Gigante announced in October 2017 that it planned to expand the Radio Shack brand within Mexico by opening eight more stores.
Latin America & the Caribbean
When Radio Shack Corporation filed for bankruptcy the first time in 2015, the Unicomer Group (Grupo Unicomer) purchased the Radio Shack brand from the bankruptcy court for its exclusive use in Latin America and the Caribbean, except Mexico. Unicomer, through its corporate parent Regal Forest Holding Co. Ltd., paid $5 million for the brand.
The company's relationship with Radio Shack dated back to 1998, when Unicomer opened its first Radio Shack franchise store in El Salvador. It later expanded into Honduras, Guatemala, and Nicaragua. By January 2015, Unicomer had 57 Radio Shack stores distributed throughout four countries within Central America.
In April 2015, Unicomer began receiving franchise payments from franchises in several countries where Unicomer had not previously had a business presence. It expanded into Trinidad in 2016, and Jamaica, Barbados, and Guyana in 2017.
By the end of 2017, Unicomer had company-owned stores in Barbados, El Salvador, Guatemala, Guyana, Honduras, Jamaica, Nicaragua, and Trinidad, while receiving franchise payments from independent franchised stores in Antigua, Aruba, Costa Rica, Paraguay, and Peru, countries in which Unicomer did not have a direct business presence. Since 2014, the independent company Coolbox has been an authorized dealer for RadioShack products in Peru.
In April 2018, the RadioShack brand returned to Bolivia when franchisee Cosworld Trading opened two franchised stores for Unicomer in the capital city of La Paz. The previous RadioShack stores had closed in 2015 as a result of RadioShack's first bankruptcy filing.
Middle East
When Radio Shack filed for bankruptcy the first time in 2015, the Egypt-based Delta RS for Trading purchased the Radio Shack brand from the bankruptcy court for its exclusive use in the Middle East and North Africa for US$5 million.
Delta RS for Trading, as Radio Shack Egypt, had opened its first Radio Shack franchised store in 1998 in Nasr City. By March 2003, Radio Shack Egypt had 65 company-operated stores plus 15 sub-franchised stores. In 2017, the Egyptian government accused Radio Shack Egypt and its parent Delta RS of aiding the Muslim Brotherhood.
Other operations
Corporate citizenship
In 2006, RadioShack supported the National Center for Missing & Exploited Children by providing store presence for the StreetSentz program, a child identification and educational kit offered to families without charge. RadioShack supported United Way of America Charities to assist their Oklahoma and Texas relief efforts after the 2013 Moore tornado. RadioShack's green initiative promotes the Rechargeable Battery Recycling Corporation, which accepts end-of-life rechargeable batteries and wireless phones dropped off in-store to be safely recycled.
Other retailer partnerships
In August 2001, RadioShack opened kiosk-style stores inside Blockbuster outlets, only to abandon the project in February 2002; CEO Len Roberts announced that the stores did not meet expectations.
RadioShack operated wireless kiosks within 417 Sam's Club discount warehouses from 2004 to 2011. The kiosk operations, purchased from Arizona-based Wireless Retail Inc, operated as a subsidiary, SC Kiosks Inc., with employees contracted through RadioShack Corporation. No RadioShack-branded merchandise was sold. The kiosks closed in 2011, costing RadioShack an estimated US$10–15 million in 2011 operating income.
RadioShack then attempted a joint venture with Target to deploy mobile telephone kiosks in 1,490 Target stores by April 2011. In April 2013, RadioShack's partnership with Target ended and the Target Mobile in-store kiosks were turned over to a new partnership with Brightstar and MarketSource.
No-contract wireless
On September 5, 2012, RadioShack, in a partnership with Cricket Wireless, began offering its own branded no-contract wireless services using Cricket and Sprint's nationwide networks. The service was discontinued on August 7, 2014; clients who had already purchased the service from RadioShack continued to receive service from Cricket Wireless.
Cycling team sponsorship
In 2009, the company became the main sponsor of a new cycling team, Team RadioShack, with Lance Armstrong and Johan Bruyneel. RadioShack featured Armstrong in a number of television commercials and advertising campaigns. RadioShack came under fire for having Armstrong as a spokesperson in 2011, when allegations that the cyclist had used performance-enhancing drugs surfaced.
Lawsuits
In September 1999, AutoZone, Inc., sued Tandy Corp., then the owner of RadioShack, in a federal district court in Tennessee for infringing the AutoZone trademark by using the name "PowerZone" for a section in RadioShack's retail stores. In November 2001, the district court granted Tandy's motion for summary judgment to dismiss the case, finding that AutoZone failed to prove that the use of "PowerZone" infringed the "AutoZone" trademark. AutoZone appealed that decision. In June 2004, the federal court of appeals affirmed the district court's dismissal of the case.
In June 2011, a customer sued Sprint and RadioShack after finding pornography on their newly purchased cell phones.
In 2012, a Denver jury awarded $674,938 to David Nelson, a 25-year RadioShack employee who had been fired in retaliation for complaining about age discrimination.
In 2013, a federal jury awarded over $1 million in an age discrimination suit to a longtime RadioShack store manager who was fired in 2010 from the San Francisco store he had managed since 1998.
In July 2014, in Verderame v. RadioShack Corp., the U.S. District Court for the Eastern District of Pennsylvania found that RadioShack owed its store managers in Pennsylvania a possible US$5.8 million for unpaid overtime.
In popular culture
In the 1980 film Used Cars, an electronics engineer needs equipment to do some last-minute repairs to a bootleg microwave transmitter, and says to his partner, "RadioShack closes in half an hour."
A "Radio Shock" store (owned by the "Tandy Corporation") appeared in the original 1991 release of Space Quest IV, displaced by "Hz. So Good" in later editions because of threats of legal action by Tandy.
RadioShack is featured prominently in Short Circuit 2, in which a store serves as a "clinic" for Johnny 5 while he repairs himself after being assaulted by thieves.
RadioShack is mentioned and briefly featured on the pilot episode of Young Sheldon. Visits to RadioShack are a frequent plot point in the Young Sheldon series, building off allusions to childhood visits made by the character Sheldon Cooper in its parent series, The Big Bang Theory. The family returns to the RadioShack store in a later episode, where his mother purchases him a Tandy 1000.
RadioShack appears in the second season of the Netflix series Stranger Things as the workplace of Bob Newby. In one scene, an Armatron (a product actually sold at RadioShack during that period) can be seen on a shelf above his head.
In the 2001 re-make of the 1960 movie Ocean's Eleven, after Livingston asks an FBI agent to not touch his equipment by asking, "Do you see me grabbing the gun out of your holster and waving it around?", the agent retorts with "Hey 'RadioShack', relax".
American sportswriter and YouTuber Jon Bois worked at RadioShack sometime in the early to mid 2000s, later publishing multiple articles detailing his personal experiences as an employee.
References
Notes
Further reading
Hayden, Andrew, "Radio Shack: A Humble Beginning for an Electronics Giant", antiqueradio.com, February 2007
External links
Radio Shack Records in Fort Worth Library Archives
Radioshackcatalogs.com, an 80-year archive of RadioShack catalogs, plus other corporate publications and historic photos
1921 establishments in Massachusetts
Companies based in Fort Worth, Texas
Companies formerly listed on the New York Stock Exchange
Companies that filed for Chapter 11 bankruptcy in 2015
Companies that filed for Chapter 11 bankruptcy in 2017
Consumer electronics retailers of the United States
Electronic kit manufacturers
Home computer hardware companies
Loudspeaker manufacturers
American companies established in 1921
Retail companies established in 1921
2015 mergers and acquisitions
Radio manufacturers