| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
58,630,686 | https://en.wikipedia.org/wiki/Aspergillus%20savannensis | Aspergillus savannensis is a species of fungus in the genus Aspergillus. It is from the Nidulantes section. The species was first described in 2016. It has been isolated from soil in Namibia.
Growth and morphology
A. savannensis has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
savannensis
Fungi described in 2016
Fungus species | Aspergillus savannensis | Biology | 115 |
20,020,658 | https://en.wikipedia.org/wiki/Ideal%20city | In urban design, an ideal city is the concept of a plan for a city that has been conceived in accordance with a particular rational or moral objective.
Concept
The "ideal" nature of such a city may encompass the moral, spiritual and juridical qualities of citizenship as well as the ways in which these are realised through urban structures including buildings, street layout, etc. The ground plans of ideal cities are often based on grids (in imitation of Roman town planning) or other geometrical patterns. The ideal city is often an attempt to deploy Utopian ideals at the local level of urban configuration and living space and amenity rather than at the culture- or civilisation-wide level of the classical Utopias such as St Thomas More's Utopia.
History
Several attempts to develop ideal city plans are known from the Renaissance, and appear from the second half of the fifteenth century. The concept dates at least from the period of Plato, whose Republic is a philosophical exploration of the notion of the 'ideal city'. The nobility of the Renaissance, seeking to imitate the qualities of Classical civilisation, sometimes sought to construct such ideal cities either in reality or notionally through a reformation of manners and culture.
Leon Battista Alberti
The Renaissance concept of an ideal town developed by the Italian polymath Leon Battista Alberti (1404–1472), author of De re aedificatoria, a ten-book treatise on architecture written about 1450 with additions made until his death in 1472, concerned the planning and building of an entire town as opposed to individual edifices for private patrons or ecclesiastical purposes.
Alberti insisted on choosing the location of the town first, followed by careful setting up of the size and direction of streets, then location of bridges and gates, and finally a building pattern ruled by perfect symmetry. One of the more prominent examples of a town modelled on this theory was Zamość founded in the 16th century by the chancellor Jan Zamoyski. At present, it is a World Heritage Site in Poland.
Examples
Examples of ideal cities include Filarete's "Sforzinda", a description of which was included in his Trattato di architettura (c. 1465). The city of Sforzinda was laid out within an eight-pointed star inscribed within a circular moat. Further examples may have been intended to be read into the so-called "Urbino" and "Baltimore" panels (second half of the fifteenth century), which show classically influenced architecture disposed in logically planned piazzas.
The cities of Palmanova and Nicosia, whose Venetian fortresses were built in the 1590s by the Venetian Republic, are considered to be practical examples of the concept of the ideal city. Another notable example of the concept is Zamość in eastern Poland, founded in the late 16th century and modelled by the Italian architect Bernardo Morando.
James Oglethorpe synthesized Classical and Renaissance concepts of the ideal city with new Enlightenment ideals of scientific planning, harmony in design, and social equality in his plan for the Province of Georgia. The physical design component of the famous Oglethorpe Plan remains preserved in the Savannah Historic District.
Late nineteenth-century examples of the ideal city include the Garden city movement of Sir Ebenezer Howard, realised at Letchworth Garden City and Welwyn Garden City in England. Poundbury, Charles III's architectural vision established in Dorset, is among the most recent examples of ideal city planning.
Built in the 1950s in Communist Poland, Nowa Huta, now part of Kraków, serves as an unfinished example of a utopian ideal city, and is still one of the largest planned socialist realist settlements or districts ever built and "one of the most renowned examples of deliberate social engineering" in the entire world. Its street hierarchy, layout and a certain grandeur of its buildings often resemble those of Paris or London. The abundance of parks and green areas makes Nowa Huta the greenest part of Kraków.
See also
Oglethorpe Plan
Utopia
References
Urban planning
Utopian theory | Ideal city | Engineering | 825 |
15,462,488 | https://en.wikipedia.org/wiki/Fermi%E2%80%93Ulam%20model | The Fermi–Ulam model (FUM) is a dynamical system that was introduced by Polish mathematician Stanislaw Ulam in 1961.
FUM is a variant of Enrico Fermi's primary work on the acceleration of cosmic rays, namely Fermi acceleration. The system consists of a particle that bounces elastically between a fixed wall and a moving one, each of infinite mass. The walls represent the magnetic mirrors with which the cosmic particles collide.
A. J. Lichtenberg and M. A. Lieberman provided a simplified version of FUM (SFUM) that derives from the Poincaré surface of section and reads

$$u_{n+1} = \left| u_n + v(\varphi_n) \right|, \qquad \varphi_{n+1} = \varphi_n + \frac{2\pi M}{u_{n+1}} \pmod{2\pi},$$

where $u_n$ is the velocity of the particle after the $n$-th collision with the fixed wall, $\varphi_n$ is the corresponding phase of the moving wall, $v(\varphi)$ is the velocity law of the moving wall and $M$ is the stochasticity parameter of the system.
If the velocity law of the moving wall is differentiable enough then, according to the KAM theorem, invariant curves exist in the phase space. These invariant curves act as barriers that do not allow a particle to accelerate further, and the average velocity of a population of particles saturates after finite iterations of the map. For instance, for a sinusoidal velocity law of the moving wall such curves exist, while they do not for a sawtooth velocity law, which is discontinuous. Consequently, in the first case particles cannot accelerate indefinitely, in contrast to what happens in the latter.
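The following is a minimal numerical sketch of the simplified map as reconstructed above, comparing a smooth (sinusoidal) and a discontinuous (sawtooth) velocity law; the parameter values, ensemble size and variable names are illustrative assumptions, not values from the literature.

```python
import numpy as np

def sfum(u0, phi0, M=10.0, n_steps=20_000, law=np.sin):
    """Iterate a simplified Fermi-Ulam map of the form
    u_{n+1} = |u_n + v(phi_n)|,  phi_{n+1} = phi_n + 2*pi*M/u_{n+1}  (mod 2*pi),
    returning the history of velocities."""
    u, phi = u0, phi0
    history = np.empty(n_steps)
    for n in range(n_steps):
        u = abs(u + law(phi))
        u = max(u, 1e-12)                      # guard against division by zero
        phi = (phi + 2.0 * np.pi * M / u) % (2.0 * np.pi)
        history[n] = u
    return history

def sawtooth(phi):
    """A discontinuous velocity law ranging over [-1, 1)."""
    return phi / np.pi - 1.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for name, law in [("sinusoidal", np.sin), ("sawtooth", sawtooth)]:
        finals = [sfum(1.0, rng.uniform(0, 2 * np.pi), law=law)[-1]
                  for _ in range(50)]
        print(f"{name:10s}: mean velocity after 2e4 collisions ~ {np.mean(finals):.1f}")
```

With the smooth law the ensemble velocity is expected to saturate, consistent with the KAM barriers described above, whereas the discontinuous sawtooth law is expected to show continued growth.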
Over the years FUM became a prototype model for studying non-linear dynamics and coupled mappings.
The rigorous solution of the Fermi–Ulam problem (showing that the velocity and energy of the particle remain bounded) was first given by L. D. Pustyl'nikov (see also the references therein).
In spite of these negative results, if one considers the Fermi–Ulam model in the framework of the special theory of relativity, then under some general conditions the energy of the particle tends to infinity for an open set of initial data.
2D generalization
Though the 1D Fermi–Ulam model does not lead to acceleration for smooth oscillations, unbounded energy growth has been observed in 2D billiards with oscillating boundaries. The growth rate of energy in chaotic billiards is found to be much larger than that in billiards that are integrable in the static limit.
A strongly chaotic billiard with an oscillating boundary can serve as a paradigm for driven chaotic systems. In the experimental arena this topic arises in the theory of nuclear friction, and more recently in studies of cold atoms trapped in optical billiards. The driving induces diffusion in energy, and consequently the absorption coefficient is determined by the Kubo formula.
References
External links
Regular and Chaotic Dynamics: A widely acknowledged scientific book that treats FUM, written by A. J. Lichtenberg and M. A. Lieberman (Appl. Math. Sci. vol 38) (New York: Springer).
Dynamical systems | Fermi–Ulam model | Physics,Mathematics | 613 |
55,890,020 | https://en.wikipedia.org/wiki/Niles%20Firebrick | Niles Firebrick was manufactured by the Niles Fire brick Company since it was created in 1872 by John Rhys Thomas until the company was sold in 1953 and completely shutdown in 1960. Capital to establish the company was provided by Lizzie B. Ward to construct a small plant across from the Old Ward Mill which was run by her husband James Ward. Thomas immigrated in 1868 from Carmarthenshire in Wales with his wife and son W. Aubrey Thomas who served as secretary of the company until he was appointed as representative to the U. S. Congress in 1904. The company was managed by another son, Thomas E. Thomas, after J.R. Thomas died unexpectedly in 1898. The Thomases returned the favor of their original capitalization by purchasing an iron blast furnace from James Ward when he went bankrupt in 1879. Using their knowledge of firebrick they were able to make this small furnace profitable. Later they used it to showcase the value of adding hot blast to a furnace using 3 ovens packed full of firebrick. The furnace was managed by another son, John Morgan Thomas.
Firebrick was first invented in 1822 by William Weston Young in the Vale of Neath in Wales, in the county just east of the Llanelli area where the Thomas family lived before emigrating to Niles. It is recorded that firebrick was made in the Llanelli area in 1870, but the market was highly cyclical and it was difficult to make a living at it.
From 1937 to 1941 the company worked to prevent the United Brick Workers Union (CIO) from organizing the workers, preferring instead an independent union favored by management. The CIO union prevailed. In spite of this episode the company had good relations with its employees and tried to keep them employed during economic downturns. The "Clingans" mentioned in the referenced interview were Margaret Thomas Clingan, a daughter, and John Rhys Thomas Clingan, a grandson, who took over management of the company when T.E. Thomas died in 1920.
Patrick J. Sheehan worked various jobs at the Niles Fire Brick Company from age 13 until 1897, when he was appointed superintendent of the plant. When Sheehan started with the company it occupied a plant covering a floor space of 3,600 square feet with two kilns, and the output was 640,000 bricks per year. The plant was moved to Langley Street eighteen months afterward, and the output increased to 1,200,000. The Langley Street works was added to each year until the output reached 6 million, and in 1905 the company built the "Falcon" plant on the site formerly occupied by the Langley Street plant, which doubled production to 12 million per year. By 1955 the output was 25 million. The work of molding and firing brick was highly labor-intensive. Immigrants from Southern states and European countries, especially Italy, were sought to perform labor under difficult working conditions.
An article in the March–April issue of "The Niles Register" of the Niles Historical Society discusses the history of the company headquarters at 216 Langley Street, with a pattern shop in the back where skilled workers created the molds for custom bricks ordered by the mills in the 1902–1912 period. After that the pattern shop was used by the Sons of Italy and later by the Bagnoli-Irpino Club, a result of the large percentage of immigrants from the Bagnoli-Irpino area in Italy. One of the founders of the club was Lawrence Pallante, an early immigrant from that area and presumably an ancestor of the author of the referenced article. Immigration from that area began in 1880 and extended to about 1960.
References
Refractory materials
Silicates
Bricks
Niles, Ohio | Niles Firebrick | Physics | 746 |
59,897,030 | https://en.wikipedia.org/wiki/Shepard%20tables | Shepard tables (also known as the Shepard tabletop illusion) are an optical illusion first published in 1990 as "Turning the Tables," by Stanford psychologist Roger N. Shepard in his book Mind Sights, a collection of illusions that he had created. It is one of the most powerful optical illusions, typically creating length miscalculations of 20–25%.
To quote A Dictionary of Psychology, the Shepard table illusion makes "a pair of identical parallelograms representing the tops of two tables appear radically different" because our eyes decode them according to rules for three-dimensional objects.
This illusion is based on a drawing of two parallelograms, identical aside from a rotation of 90 degrees. When the parallelograms are presented as tabletops, however, we see them as objects in three-dimensional space. One "table" seems long and narrow, with its longer dimension receding into the distance. The other "table" looks almost square, because we interpret its shorter dimension as foreshortening. The MIT Encyclopedia of the Cognitive Sciences explains the illusion as an effect of "size and shape constancy [which] subjectively expand[s] the near-far dimension along the line of sight." It classifies Shepard tables as an example of a geometrical illusion, in the category of an "illusion of size."
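A small numerical sketch of the geometric fact underlying the illusion: the two tabletop outlines are congruent parallelograms, one simply rotated by 90 degrees. The corner coordinates below are illustrative, not the proportions of Shepard's original drawing.

```python
import numpy as np

# An illustrative "long and narrow" parallelogram given by its four corners.
table_a = np.array([[0.0, 0.0], [1.0, 0.4], [1.6, 2.4], [0.6, 2.0]])

# Rotating the same shape by 90 degrees gives the second "tabletop".
rot90 = np.array([[0.0, -1.0],
                  [1.0,  0.0]])
table_b = table_a @ rot90.T

def side_lengths(poly):
    """Lengths of the polygon's edges, taken in order."""
    return np.linalg.norm(np.roll(poly, -1, axis=0) - poly, axis=1)

print("sides of tabletop A:", np.round(side_lengths(table_a), 3))
print("sides of tabletop B:", np.round(side_lengths(table_b), 3))
# Identical side lengths: the two parallelograms are congruent, yet once they
# are drawn as 3D tabletops they are perceived as very differently shaped.
```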
According to Shepard, "any knowledge or understanding of the illusion we may gain at the intellectual level remains virtually powerless to diminish the magnitude of the illusion." Children diagnosed with autism spectrum disorder are less susceptible to the Shepard table illusion than typically developing children but are equally susceptible to the Ebbinghaus illusion.
Shepard had described an earlier, less-powerful version of the illusion in 1981 as the "parallelogram illusion" (Perceptual Organization, pp. 297–9). The illusion can also be constructed using identical trapezoids rather than identical parallelograms.
A variant of the Shepard tabletop illusion was named "Best Illusion of the Year" for 2009.
Christopher W. Tyler, among others, has done scholarly research on the illusion.
References
External links
Animation of the illusion. Opticalillusion.net.
More optical illusions by Roger Shepard
Optical illusions | Shepard tables | Physics | 451 |
45,259,926 | https://en.wikipedia.org/wiki/Orbicella%20faveolata | Orbicella faveolata, commonly known as mountainous star coral, is a colonial stony coral in the family Merulinidae. Orbicella faveolata is native to the coral coast of the Caribbean Sea and the Gulf of Mexico and is listed as "endangered" by the International Union for Conservation of Nature. O. faveolata was formerly known as Montastraea faveolata.
Description
Colonies of this coral are solid and very large, forming a mound with a skirt. The surface is smooth and undulating, with small lumps, bulges or lobes. The corallites, the stony cups in which the polyps sit, are about in diameter and cover the entire surface of the coral. The colour is usually a pale brown, yellowish green and grey but may be deep brown, with fluorescent green highlights. This coral is part of a species complex including the closely related Orbicella annularis and Orbicella franksi, but the former has more distinct nodules or small columns and the latter has a more irregular, lumpy surface.
Distribution and habitat
Orbicella faveolata occurs in shallow waters in the Caribbean Sea and the Gulf of Mexico, its range including Florida (United States), the Bahamas, Venezuela and possibly Bermuda. It is found on both the back reef and fore reef slopes of fringing reefs at depths of up to . It is often the most abundant coral species on fore reef slopes between .
Biology
Like other corals, Orbicella faveolata has a symbiotic relationship with dinoflagellates in the genus Symbiodinium. These symbionts are commonly known as zooxanthellae and large numbers are present in the coral's living tissue. Several different species of Symbiodinium associate with the coral, depending on the degree of light intensity reaching the part of the surface where they reside. When artificial shading was applied by researchers to corals for some weeks, the Symbiodinium died out in the shaded portion. When the light was restored, zooxanthellae became reinstated, but in many instances, the original species was replaced by a different species of Symbiodinium.
The surface of the coral can be considered a microbiome, an ecological community of micro-organisms. The zooxanthellae, bacteria and archaea present vary with the time of year and in the spring (but not the autumn) their composition is also affected by the health of the coral and whether it is suffering from yellow-band disease.
Orbicella faveolata is related to the lobed star coral (Orbicella annularis) and the boulder star coral (Orbicella franksi), which both live in the Caribbean Sea and the Gulf of Mexico in areas such as the Bahamas and Bermuda. The species Paramontastraea saleborsa and Astrea curta have similarly sized corallites. P. saleborsa was also formerly placed in the coral genus Montastraea.
Status
Orbicella faveolata is a slow-growing species and the rate at which new colonies are formed is less than the rate at which mature colonies die. It is susceptible to bleaching and to several coral diseases including yellow-band disease, black band disease and plague. Numbers of individuals are believed to have declined by over 50% in the last thirty years and the International Union for Conservation of Nature lists its conservation status as being "endangered".
References
Merulinidae
Coral reefs
Corals described in 1786
ESA threatened species | Orbicella faveolata | Biology | 733 |
75,253,555 | https://en.wikipedia.org/wiki/Soluble%20guanylate%20cyclase%20stimulator | Soluble guanylate cyclase (sGC) stimulators are a class of drugs developed to treat heart failure, pulmonary hypertension, and other diseases. The first-in-class medication was riociguat, approved in 2013 for pulmonary hypertension. They have also been investigated for hypertension, systemic sclerosis, and sickle cell disease.
Background
In 1998, the Nobel Prize in Physiology or Medicine was awarded for discoveries concerning the role of nitric oxide (NO) as a signalling molecule in the cardiovascular system. Although NO donors are still used to treat angina, their side effects, potential for tolerance, short duration of action, and narrow therapeutic index limit their therapeutic use. PDE5 inhibitors amplify NO signalling (by slowing the breakdown of cGMP, its second messenger) and are approved for erectile dysfunction, pulmonary arterial hypertension (PAH), and benign prostatic hyperplasia, but they are less effective in patients in whom NO production is suppressed, such as people with diabetes or obesity. Soluble guanylate cyclase is one of the downstream targets of NO, but sGC stimulators act on it independently of NO. sGC activators, another experimental class of drugs, may be more effective than stimulators when oxidative stress is high.
The drugs are also considered to have potential for treating kidney disease, lung fibrosis, scleroderma, and sickle cell disease.
List of drugs
FDA approved
Riociguat, approved in 2013 for pulmonary hypertension
Vericiguat, approved in 2021 for heart failure
Investigational
Praliciguat was tried in a phase II trial for heart failure with preserved ejection fraction
Olinciguat was developed for sickle cell disease but its development was discontinued in 2020.
References
Soluble guanylate cyclase stimulators
Drugs by mechanism of action | Soluble guanylate cyclase stimulator | Chemistry | 361 |
53,193,155 | https://en.wikipedia.org/wiki/Interchange%20instability | The interchange instability, also known as the Kruskal–Schwarzschild instability or flute instability, is a type of plasma instability seen in magnetic fusion energy that is driven by the gradients in the magnetic pressure in areas where the confining magnetic field is curved.
The name of the instability refers to the action of the plasma changing position with the magnetic field lines (i.e. an interchange of the lines of force in space) without significant disturbance to the geometry of the external field. The instability causes flute-like structures to appear on the surface of the plasma, hence it is also referred to as the flute instability. The interchange instability is a key issue in the field of fusion energy, where magnetic fields are used to confine a plasma in a volume surrounded by the field.
The basic concept was first noted in a 1954 paper by Martin David Kruskal and Martin Schwarzschild, who demonstrated that a situation similar to the Rayleigh–Taylor instability in classic fluids existed in magnetically confined plasmas. The problem can occur anywhere where the magnetic field is concave with the plasma on the inside of the curve. Edward Teller gave a talk on the issue at a meeting later that year, pointing out that it appeared to be an issue in most of the fusion devices being studied at that time. He used the analogy of rubber bands on the outside of a blob of jelly; there is a natural tendency for the bands to snap together and eject the jelly from the center.
Most machines of that era suffered from other instabilities that were far more powerful, and whether or not the interchange instability was taking place could not be confirmed. This was finally demonstrated beyond doubt by a Soviet magnetic mirror machine during an international meeting in 1961. When the US delegation stated they were not seeing this problem in their mirrors, it was pointed out they were making an error in the use of their instrumentation. When that was considered, it was clear the US experiments were also being affected by the same problem. This led to a series of new mirror designs, as well as modifications to other designs like the stellarator to add negative curvature. These had cusp-shaped fields so that the plasma was contained within convex fields, the so-called "magnetic well" configuration.
In modern designs, the interchange instability is suppressed by the complex shaping of the fields. In the tokamak design there are still areas of "bad curvature", but particles within the plasma spend only a short time in those areas before being circulated to an area of "good curvature". Modern stellarators use similar configurations, differing from tokamaks largely in how that shaping is created.
Basic concept
Magnetic confinement systems attempt to hold the plasma within a vacuum chamber using magnetic fields. The plasma particles are electrically charged, and thus see a transverse force from the field due to the Lorentz force. When the particle's original linear motion is superimposed on this transverse force, its resulting path through space is a helix, or corkscrew shape. Such a field will thus trap the plasma by forcing it to flow along the lines.
One can produce a linear field using an electromagnet in the form of a solenoid wrapped around a tubular vacuum chamber. In this case, the plasma will orbit the lines running down the center of the chamber and be prevented from moving outward towards the walls. This does not confine the plasma along the length of the tube, and it will rapidly flow out the ends. Designs that prevented this from occurring appeared in the early 1950s and experiments began in earnest in 1953. However, all of these devices proved to leak plasma at rates far higher than expected.
In May 1954, Martin David Kruskal and Martin Schwarzschild published a paper demonstrating two effects that meant plasmas in magnetic fields were inherently unstable. One of the two effects, which became known as the kink instability, was already being seen in early z-pinch experiments and occurred slowly enough to be captured on movie film. The topic of stability immediately gained significance in the field.
The other instability noted in the paper considered an infinite sheet of plasma held up against gravity by a magnetic field. It suggested there would be behaviour similar to that in classical physics when one heavy fluid is supported by a lighter one, which leads to the Rayleigh–Taylor instability. Any small vertical disturbance in an initially uniform field would result in the field pulling on the charges laterally and causing the initial disturbance to be further magnified. As large sheets of plasma were not common in existing devices, the outcome of this effect was not immediately obvious. It was not long before a corollary became obvious; the initial disturbance resulted in a curved interface between the plasma and the external field, and this was inherent to any design that had a convex area in the field.
In October 1954 a meeting of the still-secret Project Sherwood researchers was held at Princeton University's Gun Club building. Edward Teller brought up the topic of this instability and noted that two of the major designs being considered, the stellarator and the magnetic mirror, both had large areas of such curvature and thus should be expected to be inherently unstable. He further illustrated it by comparing the situation to jello being held together with rubber bands; while such a setup might be created, any slight disturbance would cause the rubber bands to contract and eject the jello. This exchange of position appeared to be identical to the mirror case in particular, where the plasma naturally wanted to expand while the magnetic fields had an internal tension.
No such behaviour had been seen in experimental devices, but as the situation was considered further, it became clear it would be more obvious in areas of greater curvature, and existing devices used relatively weak and relatively flat magnetic fields. This nevertheless presented a significant problem; a key measure of the attractiveness of a reactor design was its beta, the ratio of the plasma pressure to the magnetic pressure; higher beta meant more plasma confined for the same magnet, which was a significant factor in cost. However, higher beta also implied more curvature in these devices, which would make them increasingly unstable. This might force reactors to operate at low beta and be doomed to be economically unattractive.
As the magnitude of the problem became clear, the meeting turned to the question of whether or not there was any arrangement that was naturally stable. Jim Tuck was able to provide a solution; the picket fence reactor concept had been developed as a solution to another problem, bremsstrahlung losses, but he pointed out that its field arrangement would be naturally stable under the conditions shown in the Kruskal/Schwarzschild paper. Nevertheless, as Amasa Bishop noted;
The correctness of the simplified model was then called into question and led to further study. The answer appeared at a follow-up meeting at Berkeley in February 1955, where Harold Grad of New York University, Conrad Longmire of Los Alamos and Edward A. Frieman of Princeton presented independent developments that all proved the effect to be real and, worse, that it should be expected at any beta, not just high beta. Further work at Los Alamos demonstrated that the effect should be seen in both the mirror and the stellarator.
The effect is most obvious in the magnetic mirror device. The mirror has a field that runs along the open center of the cylinder and bundles together at the ends. In the center of the chamber the particles follow the lines and flow towards either end of the device. There, the increasing magnetic density causes them to "reflect", reversing direction and flowing back into the center again. Ideally, this will keep the plasma confined indefinitely, but even in theory there is a critical angle between the particle trajectory and the axis of the mirror below which particles can escape. Initial calculations showed that the loss rate through this process would be small enough not to be a concern.
In practice, all mirror machines demonstrated a loss rate far higher than these calculations suggested. The interchange instability was one of the major reasons for these losses. The mirror field has a cigar shape, with increasing curvature at the ends. When the plasma is located in its design location, the electrons and ions are roughly mixed. However, if the plasma is displaced, the non-uniform nature of the field means the ions' larger orbital radius takes them outside the confinement area while the electrons remain inside. It is possible the ion will hit the wall of the container, removing it from the plasma. If this occurs, the outer edge of the plasma is now net negatively charged, attracting more of the positively charged ions, which then escape as well.
This effect allows even a tiny displacement to drive the entire plasma mass to the walls of the container. The same effect occurs in any reactor design where the plasma is within a field of sufficient curvature, which includes the outside curve of toroidal machines like the tokamak and stellarator. As this process is highly non-linear, it tends to occur in isolated areas, giving rise to the flute-like expansions as opposed to mass movement of the plasma as a whole.
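As a rough numerical illustration of the ion/electron asymmetry described above, the gyro (Larmor) radius r = m·v_perp/(q·B) is much larger for ions than for electrons of the same temperature. The field strength and temperature used here are arbitrary illustrative values, not parameters of any particular mirror machine.

```python
import math

def larmor_radius(mass_kg, charge_c, temperature_ev, b_tesla):
    """Thermal gyroradius r = m*v_perp/(q*B), with v_perp taken as the
    thermal speed sqrt(2*T/m) and T expressed in electronvolts."""
    v_perp = math.sqrt(2.0 * temperature_ev * 1.602e-19 / mass_kg)
    return mass_kg * v_perp / (charge_c * b_tesla)

B = 1.0      # tesla, illustrative
T = 100.0    # eV, illustrative, same for both species
m_e, m_p, q = 9.109e-31, 1.673e-27, 1.602e-19

r_e = larmor_radius(m_e, q, T, B)
r_i = larmor_radius(m_p, q, T, B)
print(f"electron gyroradius ~ {r_e * 1e3:.3f} mm")
print(f"proton gyroradius   ~ {r_i * 1e3:.3f} mm")
print(f"ratio r_i / r_e     ~ {r_i / r_e:.0f}")   # about sqrt(m_p/m_e) ~ 43
```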
History
In the 1950s, the field of theoretical plasma physics emerged. The confidential research of the war became declassified and allowed the publication and spread of very influential papers. The world rushed to take advantage of the recent revelations on nuclear energy. Although never fully realized, the idea of controlled thermonuclear fusion motivated many to explore and research novel configurations in plasma physics. Instabilities plagued early designs of artificial plasma confinement devices and were quickly studied partly as a means to inhibit the effects. The analytical equations for interchange instabilities were first studied by Kruskal and Schwarzschild in 1954. They investigated several simple systems including the system in which an ideal fluid is supported against gravity by a magnetic field (the initial model described in the last section).
In 1958, Bernstein derived an energy principle that rigorously proved that the change in potential must be greater than zero for a system to be stable. This energy principle has been essential in establishing a stability condition for the possible instabilities of a specific configuration.
In 1959, Thomas Gold attempted to use the concept of interchange motion to explain the circulation of plasma around the Earth, using data from Pioneer III published by James Van Allen. Gold also coined the term “magnetosphere” to describe “the region above the ionosphere in which the magnetic field of the Earth has a dominant control over the motions of gas and fast charged particles.” Marshall Rosenthal and Conrad Longmire described in their 1957 paper how a flux tube in a planetary magnetic field accumulates charge because of the opposing motion of ions and electrons in the background plasma. Gradient, curvature and centrifugal drifts all send ions in the same direction along the planetary rotation, meaning that positive charge builds up on one side of the flux tube and negative charge on the other. The separation of charges sets up an electric field across the flux tube and therefore adds an E × B motion, sending the flux tube toward the planet. This mechanism supports the interchange-instability picture, resulting in the injection of less dense gas radially inward. Since Kruskal and Schwarzschild's papers, a tremendous amount of theoretical work has been done handling multi-dimensional configurations, varying boundary conditions and complicated geometries.
Studies of planetary magnetospheres with space probes have helped the development of interchange instability theories, especially the comprehensive understanding of interchange motions in the magnetospheres of Jupiter and Saturn.
Instability in a plasma system
The single most important property of a plasma is its stability. MHD and its derived equilibrium equations offer a wide variety of plasma configurations, but the stability of those configurations has not been challenged. More specifically, the system must satisfy the simple condition

$$\delta W \ge 0,$$

where $\delta W$ is the change in potential energy for the available degrees of freedom. Failure to meet this condition indicates that there is a more energetically preferable state. The system will evolve and either shift into a different state or never reach a steady state. These instabilities pose great challenges to those aiming to make stable plasma configurations in the lab. However, they have also granted us an informative tool on the behavior of plasma, especially in the examination of planetary magnetospheres.
This process injects hotter, lower density plasma into a colder, higher density region. It is the MHD analog of the well-known Rayleigh-Taylor instability. The Rayleigh-Taylor instability occurs at an interface in which a lower density liquid pushes against a higher density liquid in a gravitational field. In a similar model with a gravitational field, the interchange instability acts in the same way. However, in planetary magnetospheres co-rotational forces are dominant and change the picture slightly.
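As a rough quantitative sketch of the Rayleigh-Taylor analogy, the linear growth rate of the classic two-fluid Rayleigh-Taylor instability is gamma = sqrt(A·g·k), with A the Atwood number (rho_heavy − rho_light)/(rho_heavy + rho_light), g the effective gravity and k the perturbation wavenumber; the densities, gravity and wavelength below are arbitrary illustrative values, and the interchange case is obtained by replacing g with an effective gravity arising from field-line curvature or rotation.

```python
import math

def rayleigh_taylor_growth_rate(rho_heavy, rho_light, g, wavelength):
    """Linear growth rate gamma = sqrt(A*g*k) of the classic Rayleigh-Taylor
    instability, the hydrodynamic analog of the interchange mode (SI units)."""
    atwood = (rho_heavy - rho_light) / (rho_heavy + rho_light)
    k = 2.0 * math.pi / wavelength
    return math.sqrt(atwood * g * k)

# Illustrative numbers only: a dense fluid supported above a light one.
gamma = rayleigh_taylor_growth_rate(rho_heavy=1000.0, rho_light=1.2,
                                    g=9.81, wavelength=0.05)
print(f"growth rate ~ {gamma:.1f} 1/s  (e-folding time ~ {1e3 / gamma:.0f} ms)")
```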
Simple models
Let's first consider the simple model of a plasma supported by a magnetic field B in a uniform gravitational field g. To simplify matters, assume that the internal energy of the system is zero such that static equilibrium may be obtained from the balance of the gravitational force and the magnetic field pressure on the boundary of the plasma. The change in potential energy on perturbing the boundary is then the sum of a magnetic and a gravitational contribution. If two adjacent flux tubes lying opposite along the boundary (one fluid tube and one magnetic flux tube) are interchanged, the volume element doesn't change and the field lines remain straight. Therefore, the magnetic potential doesn't change, but the gravitational potential does, since the fluid tube has been moved down along the z axis; its change is $\delta U \approx \rho g V \, \delta z$ for a fluid tube of density $\rho$ and volume $V$ displaced by $\delta z$. Since the change in $z$ is negative, the potential is decreasing.
A decreasing potential indicates a more energetically favorable state and consequently an instability. The origin of this instability is in the J × B forces that occur at the boundary between the plasma and the magnetic field. At this boundary there are slight ripple-like perturbations in which the low points must carry a larger current than the high points, since at the low points more plasma is being supported against gravity. The difference in current allows negative and positive charge to build up along the opposite sides of each valley. The charge build-up produces an E field between the hill and the valley. The accompanying E × B drifts are in the same direction as the ripple, amplifying the effect. This is what is physically meant by the "interchange" motion.
These interchange motions also occur in plasmas that are in a system with a large centrifugal force. In a cylindrically symmetric plasma device, radial electric fields cause the plasma to rotate rapidly in a column around the axis. Acting opposite to the gravity in the simple model, the centrifugal force moves the plasma outward where the ripple-like perturbations (sometimes called “flute” instabilities) occur on the boundary. This is important for the study of the magnetosphere in which the co-rotational forces are stronger than the opposing gravity of the planet. Effectively, the less dense “bubbles” inject radially inward in this configuration.
Without gravity or an inertial force, interchange instabilities can still occur if the plasma sits in a curved magnetic field. If the potential energy is assumed to be purely magnetic, then interchanging two flux tubes changes the magnetic energy stored in them; if the fluid is incompressible the expression simplifies further, and, because pressure balance must be maintained across the boundary, it shows that the configuration is unstable when the field lines curve toward the region of higher plasma pressure. Physically, this means that if the field lines are concave toward the region of higher plasma density then the system is susceptible to interchange motions. To derive a more rigorous stability condition, the perturbations that cause an instability must be generalized. The momentum equation of resistive MHD is linearized and then manipulated into a linear force operator. For purely mathematical reasons it is then possible to split the analysis into two approaches: the normal mode method and the energy method. The normal mode method essentially looks for the eigenmodes and eigenfrequencies and sums the solutions to form the general solution. The energy method is similar to the simpler approach outlined above: the change in potential energy $\delta W$ is evaluated for an arbitrary perturbation, and stability requires the condition $\delta W \ge 0$. These two methods are not exclusive and can be used together to establish a reliable diagnosis of stability.
Observations in space
The strongest evidence for interchange transport of plasma in any magnetosphere is the observation of injection events. The recording of these events in the magnetospheres of Earth, Jupiter and Saturn are the main tool for the interpretation and analysis of interchange motion.
Earth
Although spacecraft have travelled many times through the inner and outer regions of Earth's magnetosphere since the 1960s, the spacecraft was the first major plasma experiment performed that could reliably determine the existence of radial injections driven by interchange motions. The analysis revealed that a hot plasma cloud is frequently injected inward during substorms in the outer layers of the magnetosphere. The injections occur predominantly in the night-time hemisphere, being associated with the dipolarization of the neutral sheet configuration in the tail regions of the magnetosphere. This paper thus implies that Earth's magnetotail region is a major means by which the magnetosphere stores and releases energy through the interchange mechanism. The interchange instability has also been found to be a limiting factor on the night-side plasmapause thickness [Wolf et al. 1990]. In that paper, the plasmapause is found to lie near the geosynchronous orbit, where the centrifugal and gravitational potentials exactly cancel. The sharp change in plasma pressure associated with the plasmapause enables this instability. A mathematical treatment comparing the growth rate of the instability with the thickness of the plasmapause boundary revealed that the interchange instability limits the thickness of that boundary.
Jupiter
Interchange instability plays a major role in the radial transport of plasma in the Io plasma torus at Jupiter. The first evidence of this behavior was published by Thorne et al., who discovered "anomalous plasma signatures" in the Io torus of Jupiter's magnetosphere. Using data from the energetic particle detector (EPD) on the Galileo spacecraft, the study looked at one specific event. Thorne et al. concluded that these events had a density differential of at least a factor of 2, a spatial scale of km and an inward velocity of about km/s. These results support the theoretical arguments for interchange transport.
Later, more injection events were discovered and analyzed from Galileo data. Mauk et al. used over 100 Jovian injections to study how these events were dispersed in energy and time. Similar to injections at Earth, the events were often clustered in time. The authors concluded that this indicated the injection events were triggered by solar wind activity against the Jovian magnetosphere. This is very similar to the relationship between magnetic storms and injection events at Earth. However, it was found that Jovian injections can occur at all local time positions and therefore cannot be directly related to the situation in Earth's magnetosphere. Although the Jovian injections are not a direct analog of Earth's injections, the similarities indicate that this process plays a vital role in the storage and release of energy. The difference may lie in the presence of Io in the Jovian system. Io is a large producer of plasma mass because of its volcanic activity, which explains why the bulk of interchange motions are seen in a small radial range near Io.
Saturn
Recent evidence from the Cassini spacecraft has confirmed that the same interchange process is prominent at Saturn. Unlike at Jupiter, the events happen much more frequently and are seen more clearly. The difference lies in the configuration of the magnetosphere. Since Saturn's gravity is much weaker, the gradient/curvature drift for a given particle energy and L value is about 25 times faster. Saturn's magnetosphere provides a much better environment for the study of the interchange instability under these conditions, even though the process is essential at both Jupiter and Saturn. In a case study of one injection event, the Cassini Plasma Spectrometer (CAPS) produced characteristic radial profiles of plasma densities and temperatures of the plasma particles, which also allowed the calculation of the origin of the injection and the radial propagation velocity. The electron density inside the event was lowered by a factor of about 3, the electron temperature was higher by an order of magnitude than the background, and there was a slight increase in the magnetic field. The study also used a model of pitch angle distributions to estimate that the event originated between and had a radial speed of about 260 +60/−70 km/s. These results are similar to the Galileo results discussed earlier, and the similarities imply that the Saturn and Jupiter processes are the same.
See also
Plasma stability
Magnetic mirror
Fusion power
References
Plasma instabilities | Interchange instability | Physics | 4,144 |
41,835,970 | https://en.wikipedia.org/wiki/Yuba%E2%80%93Bear%20Hydroelectric%20Project | The Yuba–Bear Hydroelectric Project is a complex hydroelectric scheme in the northern Sierra Nevada in California, tapping the upper Yuba River and Bear River drainage basins. The project area encompasses approximately in Nevada, Placer, and Sierra Counties. Owned by the Nevada Irrigation District, it consists of 16 storage dams plus numerous diversion and regulating dams, and four generating stations producing 425 million kilowatt hours of electricity each year. The Yuba–Bear Hydroelectric Project consists of the Bowman development, Dutch Flat No. 2 development, Chicago Park development, and Rollins development.
History
The Yuba–Bear project was conceived in 1962 when NID voters approved a bond issue to construct the system. Construction began in 1963 and was completed in 1966, at a cost of $65 million. The Rollins Dam and Bowman power plants were added in the 1980s. The Yuba–Bear Project added canals, reservoirs, generating capacity, and 145,000 acre-feet of water storage to be utilized by residents of the district.
Geomorphology of the region
Geology and soils
The Yuba–Bear Hydroelectric Project is located within the Sierra Nevada mountain range, which experienced uplift beginning 3 to 5 million years ago and contains faults resulting from tectonic collisions during the late Paleozoic and Mesozoic eras.
This uplift and tilting of the Sierra Nevada created drainage patterns and channel incisions that shaped the landscape, including the Yuba and Bear rivers. Incision of the modern Yuba River began 5 million years ago, compounded by glacier erosion in the Quaternary period. The bedrock underneath the Yuba–Bear has a strong effect on the soils in this region. The soils include Mollisols, Inceptisols, Entisols, Alfisols, Andisols, and Ultisols.
Yuba River watershed
The Yuba River creates an incision through metamorphic bedrocks, including Mesozoic igneous rocks (granodiorite), Paleozoic phyllite, and slate from the Shoo Fly and Calaveras Formations. Over time, these channels were filled with Tertiary deposits of gravel, large boulders, and sands that were rich in gold.
The Yuba River has been heavily influenced by gold mining activity, with lingering effects such as abandoned mines, residual mercury sequestered in sediment, erosion, and alteration of sediment transport through the river system, with resulting consequences to channel structure. In addition to gold, major minerals of the area include copper, chromium, tungsten, and manganese. In 1994, mining of gravel and sand surpassed gold, with the Feather, Yuba, American, and Bear rivers providing a large amount of alluvial deposits for aggregate mining. The Yuba River has a high level of sediment supply, with a bed-load composed primarily of mining gravel as a result of intense levels of hydraulic mining that occurred in the area.
Bear River watershed
The Bear River displays characteristics of an "underfit" stream, indicating that it was formed by a larger river that had higher flows. The Bear River is located in a deep V-shaped canyon that suggests not only the work of a larger river, but also glacial advances that carved the topography and created this watershed. The Bear River has been severely impacted by hydraulic mining, and struggles with mercury contamination left over from the gold rush. The Bear River originates in the Tahoe National Forest, twenty miles west of the Sierra Nevada crest, and comprises three distinct sections termed the Upper, Middle, and Lower Bear River. The largest water body in the Bear River watershed is Camp Far West Reservoir, which the Bear River feeds before joining the Feather River south of Yuba City. The Bear River supports popular brown and rainbow trout fisheries, and is popular among fly fishing clubs.
Project features
The main water sources for the project are the Middle Yuba River, and Canyon Creek (a tributary of the South Yuba River). Jackson Meadows Dam stores water from the Middle Yuba, which is diverted southward through the Milton-Bowman Diversion Conduit into Bowman Lake, an impoundment of Canyon Creek. In addition to Jackson Meadows and Bowman reservoirs, the Yuba–Bear project derives water from fourteen smaller high elevation Sierra lakes, which have been dammed to increase their size.
After passing through Bowman Powerhouse, the water continues south via the Bowman–Spaulding Conduit to Lake Spaulding, which is part of the heavily interconnected Drum-Spaulding Hydroelectric Project owned by PG&E. Lake Spaulding is an impoundment of the South Yuba River, and is an important hub of the system as nearly all the water used by both projects passes through it.
Below Lake Spaulding water passes through Drum-Spaulding Project canals through Emigrant Gap into the upper Bear River, where it powers six hydroelectric plants on its long descent to Rollins Reservoir, the lowermost major reservoir of the Yuba–Bear project. The Yuba–Bear project operates two of these plants (Dutch Flat No. 2 and Chicago Park), in addition to a smaller powerhouse below Rollins Dam.
Together with the Drum-Spaulding Project, the Yuba–Bear project is considered by the Federal Energy Regulatory Commission to be "the most physically and operationally complex hydroelectric project in the United States".
Yuba–Bear Hydroelectric Project
NID was granted the Federal Energy Regulatory Commission (FERC) license for the Yuba–Bear project in June 1963. That license was set to expire and come up for renewal in April 2013. The four developments discussed in detail below compose the Yuba–Bear Project, with a total of 13 main dams and 207,865 acre-feet of gross combined storage capacity. Accompanying these are 4 powerhouses, 4 water conduits, a 9-mile-long transmission line, and appurtenant and recreational facilities.
Bowman development
The first installment of the Yuba–Bear project is the Bowman development. It begins with the Jackson Meadows Dam, which is located 45.6 miles up on the Middle Yuba River from where it joins with the North Yuba River. This includes the Jackson Meadows Dam Spillway and Reservoir. The Jackson Meadows Reservoir is man-made, with a surface area of 1,054 acres, and storage capacity of 69,205 ac-ft. Jackson Meadows Reservoir Recreation Area offers a combined 282 camping sites spread out over eight different campgrounds. Downstream of Jackson Meadows, located 42.2 miles upstream of where the Middle Yuba meets the North Yuba, is the Milton Main Diversion Dam, Milton South Diversion Dam, the Milton Diversion Dam Spillway, and the Milton Diversion Impoundment. The Milton Reservoir has a surface area of 103 acres, and 295 ac-ft of storage capacity. Not to be confused with Jackson Meadows, is Jackson Dam. This is located on Jackson Creek, 2.9 miles upstream from Bowman Lake. Jackson Dam is an earth-filled dam accompanied by the Jackson Dam Spillway and Jackson Dam Lake, which has 58 acres of surface area and 1,330 ac-ft of storage capacity. Jackson Creek Campground has 13 available camping sites.
The remaining impoundments contributing to the Bowman Development are on Canyon Creek. The first of these is French Dam, a rockfill dam, followed by French Dam Spillway and French Lake. French Lake has 356 acres of surface area and 13,940 ac-ft storage capacity. Next is Faucherie Dam, Spillway, and Lake, with 143 acres surface area and 3,980 ac-ft storage. Faucherie Lake Recreation Area contains 25 campsites and a day-use area. Sawmill Dam is another rockfill dam, and the last impoundment of Canyon Creek before reaching Bowman Lake. Sawmill Dam Spillway and Lake allow for 3,030 ac-ft storage capacity, and create 79.4 acres of surface area.
Bowman Lake is formed on Canyon Creek by Bowman North Dam, and Bowman South Dam and Spillway. Bowman Lake has a surface area of 820 acres, and 68,510 ac-ft of storage capacity. The Bowman Penstock, Powerhouse, Switchyard, and Transmission Line originate here, with the Transmission Line connecting the Bowman Development to the Drum-Spaulding Project run by PG&E. Bowman Campground has 10 camping sites.
Dutch Flat No. 2 development
Water reaches the Dutch Flat Development by way of the Bowman-Spaulding Conduit. This diverts flow from Canyon Creek below Bowman Lake through 40,501 feet of flumes and canals, and through 16,192 feet of tunnels. Texas Creek Diversion Dam and Fall Creek Diversion Dam also redirect portions of Canyon Creek flow into the Bowman-Spaulding Conduit. The Dutch Flat No. 2 Conduit is a combination of flume, tunnel, siphon, and canal that takes water from PG&E's Drum-Spaulding Project (at Drum Afterbay), and channels it into Dutch Flat No. 2 Forebay. The Dutch Flat Forebay Dam is an off-stream earthfilled embankment dam adjacent to the Bear River. The Dutch Flat Forebay Dam Spillway leads to Dutch Flat Forebay, a reservoir adjacent to the Bear River with a surface area of 8 acres and 185 ac-ft storage capacity. The remaining components of the Dutch Flat Development are Dutch Flat No. 2 Powerhouse Penstock, Dutch Flat No. 2 Powerhouse, and Dutch Flat No. 2 Powerhouse Switchyard.
Chicago Park development
Six miles upstream from where the Bear River meets Rollins Reservoir, is the Dutch Flat Afterbay Dam. The Dutch Flat Afterbay Dam Spillway discharges uncontrolled into the Bear River, and contributes to Dutch Flat Afterbay reservoir located on the Bear River. This reservoir has a surface area of 140 acres, and 2,037 ac-ft storage capacity. From here, the Chicago Park Conduit utilizes 21,700 feet of flumes and ditches to transport water to the Chicago Park Forebay Dam, an earthfill dam adjacent to the Bear River. The Chicago Park Forebay Dam Spillway allows water into Chicago Park Forebay, another reservoir adjacent to the Bear River with 7 acres of surface area and 117 ac-ft of storage capacity. The remaining components of this Development are the Chicago Park Powerhouse Penstock, Chicago Park Powerhouse, and the Chicago Park Switchyard.
Rollins development
This Development consists of an embankment dam on the Bear River known as Rollins Dam, Rollins Dam Spillway, and Rollins Reservoir, which has a surface area of 825 acres and 65,989 ac-ft of storage capacity. Rollins Reservoir also offers 332 camping sites spread over 4 campgrounds. The Rollins Powerhouse Penstock, Rollins Powerhouse, and Rollins Switchyard complete this project.
Drum–Spaulding Project
The Drum-Spaulding Project is heavily intertwined with the Yuba–Bear project, and is run by PG&E. The Drum-Spaulding Project is composed of the following:
Upper Drum–Spaulding
Drum No. 1 and No. 2 Development, with generation of 105.9 MW. Spaulding No. 1 and No. 2 Development, generating 11.4 MW. Spaulding No. 3 Development with 5.8 MW, Dutch Flat No. 1 Development with generation of 22 MW, and Alta No. 1 Development that is proposed to retire.
Lower Drum
Halsey Development generating 11 MW, Wise Development with 14 MW and Wise No. 2 Development generating 3.2 MW, and Newcastle Development with 11.5 MW.
Deer Creek
Deer Creek development, generating 5.7 MW.
Yuba River Development Project
The Yuba River Development Project is run by the Yuba County Water Agency, and is connected to the Yuba–Bear Hydroelectric Project. This Development Project includes the New Bullards Bar Reservoir on the North Yuba River, two diversion dams (Our House Diversion Dam on the Middle Yuba, and Log Cabin Diversion on Oregon Creek), three powerhouses (New Colgate, Fish Release, and Narrows No. 2), along with various recreation facilities. The installed capacity of this Project is 361.9 megawatts.
Uses
At maximum capacity, the project can generate 79.32 megawatts. Power is distributed under contract with Pacific Gas and Electric (PG&E). The project's reservoirs have a gross storage capacity of .
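A back-of-the-envelope check relating two figures given in this article (the 425 million kWh of annual generation cited in the lead and the 79.32 MW maximum capacity); this is illustrative arithmetic only and ignores outages, spill and seasonal variation.

```python
# Implied capacity factor of the Yuba-Bear project from the article's figures.
annual_generation_kwh = 425e6      # 425 million kWh per year (article lead)
max_capacity_kw = 79.32e3          # 79.32 MW maximum capacity
hours_per_year = 8760

max_possible_kwh = max_capacity_kw * hours_per_year
capacity_factor = annual_generation_kwh / max_possible_kwh
print(f"maximum possible output : {max_possible_kwh / 1e6:.0f} million kWh/year")
print(f"implied capacity factor : {capacity_factor:.0%}")   # roughly 61%
```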
In addition to hydroelectric power, the project significantly increases the water flow in the Bear River, the main source of NID's irrigation water supply.
References
Yuba River
Energy infrastructure in California
Hydroelectric power plants in California
Interbasin transfer
Water in California | Yuba–Bear Hydroelectric Project | Environmental_science | 2,572 |
5,411,925 | https://en.wikipedia.org/wiki/Agrin | Agrin is a large proteoglycan whose best-characterised role is in the development of the neuromuscular junction during embryogenesis. Agrin is named based on its involvement in the aggregation of acetylcholine receptors during synaptogenesis. In humans, this protein is encoded by the AGRN gene.
This protein has nine domains homologous to protease inhibitors. It may also have functions in other tissues and during other stages of development. It is a major proteoglycan component in the glomerular basement membrane and may play a role in the renal filtration and cell-matrix interactions.
Agrin functions by activating the MuSK protein (for Muscle-Specific Kinase), which is a receptor tyrosine kinase required for the formation and maintenance of the neuromuscular junction. Agrin is required to activate MuSK. Agrin is also required for neuromuscular junction formation.
Discovery
Agrin was first identified by the U.J. McMahan laboratory, Stanford University.
Mechanism of action
During development in humans, the growing end of motor neuron axons secrete a protein called agrin. When secreted, agrin binds to several receptors on the surface of skeletal muscle. The receptor which appears to be required for the formation of the neuromuscular junction (NMJ) is called the MuSK receptor (Muscle specific kinase). MuSK is a receptor tyrosine kinase - meaning that it induces cellular signaling by causing the addition of phosphate molecules to particular tyrosines on itself and on proteins that bind the cytoplasmic domain of the receptor.
In addition to MuSK, agrin binds several other proteins on the surface of muscle, including dystroglycan and laminin. It is seen that these additional binding steps are required to stabilize the NMJ.
The requirement for Agrin and MuSK in the formation of the NMJ was demonstrated primarily by knockout mouse studies. In mice that are deficient for either protein, the neuromuscular junction does not form. Many other proteins also make up the NMJ and are required to maintain its integrity. For example, MuSK also binds a protein called "dishevelled" (Dvl), which is in the Wnt signalling pathway. Dvl is additionally required for MuSK-mediated clustering of AChRs, since inhibition of Dvl blocks clustering.
Signaling
The nerve secretes agrin, resulting in phosphorylation of the MuSK receptor.
It seems that the MuSK receptor recruits casein kinase 2, which is required for clustering.
A protein called rapsyn is then recruited to the primary MuSK scaffold, to induce the additional clustering of acetylcholine receptors (AChR). This is thought of as the secondary scaffold. A protein called Dok-7 has shown to be additionally required for the formation of the secondary scaffold; it is apparently recruited after MuSK phosphorylation and before acetylcholine receptors are clustered.
Structure
There are three potential heparan sulfate (HS) attachment sites within the primary structure of agrin, but it is thought that only two of these actually carry HS chains when the protein is expressed.
In fact, one study concluded that at least two attachment sites are necessary by inducing synthetic agents. Since agrin fragments induce acetylcholine receptor aggregation as well as phosphorylation of the MuSK receptor, researchers spliced them and found that the variant did not trigger phosphorylation. It has also been shown that the G3 domain of agrin is very plastic, meaning it can discriminate between binding partners for a better fit.
Heparan sulfate glycosaminoglycans covalently linked to the agrin protein have been shown to play a role in the clustering of AChRs. Interference with the correct formation of heparan sulfate, through the addition of chlorate to skeletal muscle cell culture, results in a decrease in the frequency of spontaneous acetylcholine receptor (AChR) clustering. Rather than binding solely to the agrin protein core, a number of components of the secondary scaffold may also interact with its heparan sulfate side-chains.
A role in the retention of anionic macromolecules within the vasculature has also been suggested for agrin-linked HS at the glomerular or alveolar basement membrane.
Functions
Agrin may play an important role in the basement membrane of the microvasculature as well as in synaptic plasticity. Also, agrin may be involved in blood–brain barrier (BBB) formation and/or function and it influences Aβ homeostasis.
Research
Agrin is being investigated in relation to osteoarthritis. In addition, through its ability to activate the Hippo signaling pathway, agrin is emerging as a key proteoglycan in the tumor microenvironment.
Clinical significance
Mutations in the AGRN gene lead to congenital myasthenic syndromes and myasthenia gravis.
A recent genome-wide association study (GWAS) has found that genetic variations in AGRN are associated with late-onset sporadic Alzheimer’s disease (LOAD). These genetic variations alter β-amyloid homeostasis contributing to its accumulation and plaque formation.
References
Further reading
External links
Developmental neuroscience
Molecular neuroscience
Extracellular matrix proteins
Proteoglycans | Agrin | Chemistry | 1,133 |
13,269,420 | https://en.wikipedia.org/wiki/Unistochastic%20matrix | In mathematics, a unistochastic matrix (also called unitary-stochastic) is a doubly stochastic matrix whose entries are the squares of the absolute values of the entries of some unitary matrix.
A square matrix B of size n is doubly stochastic (or bistochastic) if all its entries are non-negative real numbers and each of its rows and columns sums to 1. It is unistochastic if there exists a unitary matrix U such that

$$B_{ij} = |U_{ij}|^{2} \quad \text{for } i, j = 1, \dots, n.$$
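As a minimal numerical sketch of this definition (not part of the original article; Python with NumPy is assumed), one can draw a random unitary matrix, square the moduli of its entries, and confirm that the result is doubly stochastic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Build a random unitary U via QR decomposition of a complex Gaussian matrix.
z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
q, r = np.linalg.qr(z)
u = q * (np.diag(r) / np.abs(np.diag(r)))  # rescale columns by unit-modulus phases

# The associated unistochastic matrix has entries B_ij = |U_ij|^2.
b = np.abs(u) ** 2

print(np.allclose(u.conj().T @ u, np.eye(n)))  # True: U is unitary
print(np.allclose(b.sum(axis=0), 1.0))         # True: every column sums to 1
print(np.allclose(b.sum(axis=1), 1.0))         # True: every row sums to 1
```

By construction, every matrix produced this way is unistochastic; the converse question, deciding whether a given doubly stochastic matrix admits such a unitary, is the hard part discussed below.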
This definition is analogous to that of an orthostochastic matrix, which is a doubly stochastic matrix whose entries are the squares of the entries in some orthogonal matrix. Since all orthogonal matrices are necessarily unitary matrices, all orthostochastic matrices are also unistochastic. The converse, however, is not true. First, all 2-by-2 doubly stochastic matrices are both unistochastic and orthostochastic, but for larger n this is not the case. For example, take n = 3 and consider the following doubly stochastic matrix:

$$B = \frac{1}{2} \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{pmatrix}.$$
This matrix is not unistochastic, since any two vectors with moduli equal to the square roots of the entries of two columns (or rows) of B cannot be made orthogonal by a suitable choice of phases. For n ≥ 3, the set of orthostochastic matrices is a proper subset of the set of unistochastic matrices.
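To make the non-orthogonality argument concrete, here is a short worked check (added for illustration, based on the standard 3-by-3 counterexample reconstructed above). If B were unistochastic, the first two columns of the corresponding unitary matrix would have to have moduli

$$\left(\tfrac{1}{\sqrt{2}},\ 0,\ \tfrac{1}{\sqrt{2}}\right) \quad\text{and}\quad \left(\tfrac{1}{\sqrt{2}},\ \tfrac{1}{\sqrt{2}},\ 0\right),$$

so their Hermitian inner product would reduce to a single nonzero term of modulus $\tfrac{1}{\sqrt{2}}\cdot\tfrac{1}{\sqrt{2}} = \tfrac{1}{2}$ regardless of the choice of phases, and could therefore never vanish; the columns cannot be made orthogonal, so no such unitary matrix exists.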
the set of unistochastic matrices contains all permutation matrices and its convex hull is the Birkhoff polytope of all doubly stochastic matrices
for n ≥ 3 this set is not convex
for n = 3 the triangle inequality on the moduli of the rows is a sufficient and necessary condition for unistochasticity
for n = 3 the set of unistochastic matrices is star-shaped and unistochasticity of any bistochastic matrix B is implied by a non-negative value of its Jarlskog invariant
for n = 3 the relative volume of the set of unistochastic matrices with respect to the Birkhoff polytope of doubly stochastic matrices is approximately 75%
for n ≥ 4 explicit conditions for unistochasticity are not known yet, but there exists a numerical method to verify unistochasticity based on the algorithm by Haagerup
The Schur–Horn theorem is equivalent to the following "weak convexity" property of the set of unistochastic matrices: for any vector $v$, the set $\{Bv : B \text{ is a unistochastic matrix of order } n\}$ is the convex hull of the set of vectors obtained by all permutations of the entries of the vector $v$ (the permutation polytope generated by $v$).
The set of unistochastic matrices of order n has a nonempty interior. The unistochastic matrix corresponding to the unitary Fourier matrix with entries $U_{jk} = n^{-1/2} e^{2\pi i jk/n}$, where $j, k = 1, \dots, n$ and $i$ denotes the imaginary unit, is an interior point of the set of unistochastic matrices.
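A quick worked check of this interior-point example (added for illustration, using the Fourier-matrix form given above): every entry of that unitary matrix has modulus $n^{-1/2}$, so the corresponding unistochastic matrix is

$$B_{jk} = \left|n^{-1/2} e^{2\pi i jk/n}\right|^{2} = \frac{1}{n} \quad\text{for all } j, k,$$

i.e. the flat doubly stochastic matrix with every entry equal to $1/n$, which lies at the centre of the Birkhoff polytope.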
References
Matrices | Unistochastic matrix | Mathematics | 582 |
18,224,909 | https://en.wikipedia.org/wiki/Department%20of%20Defense%20Serum%20Repository | The Department of Defense Serum Repository (also referred to as the DoD Serum Repository or simply DoDSR) is a biological repository operated by the United States Department of Defense containing over 50,000,000 human serum specimens, collected primarily from applicants to and members of the United States Uniformed Services.
The DoDSR is located in Silver Spring, Maryland and is operated by the Armed Forces Health Surveillance Center (AFHSC), a subordinate of the United States Army Center for Health Promotion and Preventive Medicine (USACHPPM), which itself evolved from a laboratory established at the Johns Hopkins School of Hygiene and Public Health. The DoDSR traces its origins to 1985 and the beginnings of the United States Armed Forces HIV screening program (originally referred to as the HTLV-III screening program), when serum remaining after periodic laboratory testing of service members was retained first by the Walter Reed Army Institute of Research (WRAIR), then later systematically archived in the Army/Navy Serum Repository, the precursor to the DoDSR.
Today the DoDSR is among the largest serum repositories in the world, in terms of numbers of individuals represented, number of longitudinal specimens stored per individual, and total quantity of serum. The majority of specimens are linked to detailed medical and personnel data, creating a valuable resource for retrospective research and public health surveillance. The DoDSR's longitudinal serum, collected systematically from a large population, has enabled major contributions to understanding the etiology of many health conditions not otherwise amenable to prospective study, including multiple sclerosis, schizophrenia, autoimmune diseases and cancer.
History
The earliest serum housed in the DoDSR was collected through the Armed Forces' HTLV-III screening program, implemented in 1985 in response to the emergence of a new human virus, subsequently known as Human Immunodeficiency Virus (HIV). Early laboratory testing was performed via contracted private laboratories. Screening soon expanded to all civilian applicants processed at Military Entrance Processing Stations. A condition of some early laboratory testing contracts specified that remnant serum was to remain in frozen storage. In 1989, the Army's Walter Reed Army Institute of Research (WRAIR) awarded a contract to McKesson to consolidate and store accumulated residual serum specimens at a single facility, established in proximity to WRAIR in Rockville, Maryland. The HIV Research Program (established by Congressional direction in 1986), under the WRAIR Division of Retrovirology, established the Walter Reed Army Serum Repository, which would evolve to become the Army/Navy Serum Repository in 1989. In 2001, the repository inventory was moved to its current location, a facility in Silver Spring, Maryland. In recent years, the DoDSR has grown by approximately 1.9–2.3 million specimens annually. By 2007, the DoDSR inventory had grown to over 44 million specimens, and by the end of 2009, to over 50,000,000 specimens.
Growth of the DoDSR Inventory
HIV Seronegative Specimens
The DoDSR, along with its precursor repositories, was designated as the ultimate storage facility for all serologically negative HIV specimens obtained through military HIV testing programs. Growing initially through the routine screening of all civilian applicants, and then through the continued screening of retained military personnel (at approximately two-year intervals), by 1990 the DoDSR inventory had grown to contain over six million serum specimens, and by 1996 over 17 million specimens. Standardized processes in place at the contracted military HIV testing laboratories ensured efficient management of the growing inventory, permitting the DoDSR to enforce standards in specimen labelling, configuration, and shipment which facilitated the physical integration of specimens into the DoDSR inventory. Contracts for HIV testing, negotiated by the individual military services, covered all specimens shipped from Military Treatment Facilities for HIV testing within the United States; for this reason, unless specifically removed, serum from military beneficiaries (i.e. spouses and children) would also find its way into the DoDSR inventory.
Addition of Pre- and Post-Deployment Specimens
Prompted by experiences in the aftermath of the Persian Gulf War, including claims by many service members of adverse health outcomes, the December, 1995 deployment of U.S. service members to Bosnia was accompanied by increased emphasis on health surveillance. A 1996 Assistant Secretary of Defense for Health Affairs memorandum mandated the collection of pre- and post-deployment serum specimens from deploying service members, and their integration into the DoDSR. The policy also directed that specimens collected for HIV surveillance could suffice. Although a small number of specimens were collected directly for health surveillance outside of existing HIV testing channels, specimens collected in this manner suffered from lack of standardization. By 1999, the Assistant Secretary of Defense issued modified instructions, which directed that the requirement for pre- and post-deployment specimens be satisfied by HIV testing. Initially, an HIV specimen was required to be collected prior to deployment if none had been collected in the year prior.
Concerns over the adequacy of specimens collected for this purpose, including findings of the Rhode Island Gulf War Commission, contributed to public concern over the adequacy of existing serum collection practices. The FY2005 Defense Authorization Act called on the Department of Defense to perform
[a]n assessment of whether there is a need for changes to regulations and standards for drawing blood samples for effective tracking and health surveillance of the medical conditions of personnel before deployment, upon the end of a deployment, and for a followup period of appropriate length.
Additionally, this legislation required DoD to change its policies to require collection of an HIV specimen within 120 days pre-deployment and 30 days post-deployment. This change was later rescinded in conjunction with the later recommendations of the Armed Forces Epidemiological Board, to permit a sample collected within the year prior to deployment to meet requirements. Despite the many changes in policies, the large numbers of service members deploying in support of Operations Iraqi Freedom and Enduring Freedom have led to a moderate increase in the rate of specimen acquisition and growth of the DoDSR inventory.
Evolution of the DoDSR Mission and Custody
The DoDSR has evolved from a research-affiliated repository limited to storing HIV seronegative specimens, to a repository serving a broad health surveillance mission for which it was not originally intended.
The first officially articulated purpose of the DoDSR is found in a 1991 WRAIR solicitation for the management of the precursor to the DoDSR:
Sera repository operations are required for retrospective studies in support of current and future retroviral research efforts ... Analysis of these sera will be very important.
The WRAIR solicitation anticipated as-needed specimen retrieval of up to 5,000 specimens per year.
In 1995, responsibility and custody of the DoDSR inventory and its associated database was transferred from WRAIR to a newly formed subordinate command of the United States Army Medical Command, the United States Army Center for Health Promotion and Preventive Medicine (USACHPPM). USACHPPM, or simply CHPPM, itself evolved from the U.S. Army Industrial Hygiene Laboratory, which was initially established in 1942 at the beginning of World War II at the Johns Hopkins School of Hygiene, now the Johns Hopkins Bloomberg School of Public Health. The change in custody was accompanied by an increased emphasis on the epidemiologic, public health and health surveillance utility of DoDSR specimens.
A DoD Instruction issued in 1997, since rescinded, described the purpose of the DoDSR as being for
medical surveillance for clinical diagnosis and epidemiological studies. The repository shall be used exclusively for the identification, prevention, and control of diseases associated with operational deployments of military personnel.
A subsequent DoD Directive, DoDD 6490.02E, expanded authorized uses of the DoDSR slightly:
There shall be a Department of Defense Serum Repository for medical surveillance for clinical diagnosis and epidemiological studies. The repository shall be used for the identification, prevention, and control of diseases associated with military service.
Rationale for Current Practices
Responding to concerns outlined in the FY2005 Defense Authorization Act, the Assistant Secretary of Defense for Health Affairs requested the Armed Forces Epidemiological Board (AFEB) address three questions related to the mission and operation of the DoDSR:
Is there a sound basis for the continued routine collection of sera pre- and post-deployment for clinical care reasons, public health surveillance or research purposes in order to examine the effects of deployment on health?
Should any other biological specimens be collected for clinical care reasons, public health surveillance, or research purposes?
Are there any valid reasons to change the time frames of specimens of collected biological specimens either pre- or post-deployment for clinical care reasons, public health surveillance, or research purposes?
The AFEB study determined that there was a sound basis for the continued collection of serum, but recommended the additional collection of white blood cells. The AFEB study also recommended that the DoD establish an oversight panel to govern access to the specimens. Neither recommendation has yet been acted on.
Present DoDSR
Location
The DoDSR facility occupies leased commercial space in a building at 11800 Tech Road, Silver Spring, Maryland. The leased space was acquired through a ten-year lease managed by the General Services Administration (GSA) which expires October 1, 2010.
The commercial facility is shared with two other major tenants: Holy Cross Hospital, and Comcast, whose continued occupancy precludes contiguous expansion of the DoDSR inventory. Due to space constraints at the existing facility, relocation of the DoDSR inventory to another location in the Baltimore - National Capital region (including Ft. Meade, Maryland) was considered as early as 2005.
Considerations under BRAC
Although AFHSC maintains technical and computing facilities supporting the DoDSR at the Walter Reed Army Medical Center (WRAMC), Washington, D.C., and is subject to realignment under the recommendations of the Base Realignment and Closure (BRAC) Commission, published BRAC recommendations do not specify a location to which the facilities must relocate. Relocation of the WRAMC AFHSC facilities is required by September 15, 2011.
Contracted Operation
The DoDSR is operated by Thermo Fisher Scientific under a no-bid or "sole-source" contract awarded in 2006. An earlier no-bid contract was awarded to Cryonix in 2005, although Cryonix was later incorporated under Thermo Electron Corporation's Biorepository Services division; Thermo Electron subsequently merged with Fisher Scientific in 2006. Thermo Fisher's Fisher BioServices business currently holds the contract.
Freezer Equipment
The DoDSR consists of 15 large walk-in freezers, each approximately 30 feet long and 10 feet high, whose interiors are maintained at -30 °C by pairs of compressors.
Serum Storage
The majority of serum specimens are stored inside the walk-in freezers in cardboard boxes, each containing 308 specimens of approximately 2.5 mL of frozen serum. The cardboard boxes are sequentially numbered and labeled, and stored on metal shelving units within the walk-in freezers for ready accessibility and retrieval.
Due to storage constraints, approximately 5.5 million specimens from two walk-in freezers were placed into "high-density" configuration in 2006, and additional reconfiguration may be required. The current operations contract calls for the contractor to
"adjust the storage configurations of specimens in one or more freezers to accommodate high-density, boxed specimen storage" as required.
Transportation of Specimens to the DoDSR
The majority of specimens are received quarterly in frozen form, following completion of all HIV testing. Shipments arrive in pallets transported in a freezer truck from the major contracted testing laboratory, ViroMed, which is located in Minnetonka, Minnesota.
In 2008, the DoDSR procured a specialized freezer truck to transport specimens to the DoDSR, and solicited a bid for more frequent transport of specimens to the repository.
Data Linkages
DoDSR inventory data and related information are stored in an Oracle database referred to as the Defense Medical Surveillance System (DMSS), which serves as the "sole link" to the DoDSR inventory. Serum specimens are identified by a unique specimen identification number, which for the majority of specimens are linked to the Social Security Number of the donor, and the date the specimen was obtained.
In addition to inventory data, DMSS also integrates select medical outcomes data available through the Military Health System (MHS), including International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) diagnosis codes, Current Procedural Terminology (CPT) codes, and other pertinent administrative data from inpatient and outpatient encounters provided directly by the MHS or through Tricare managed care services.
Active duty component service members (unlike service members in the Reserve components), are entitled to free (or nearly free) health care for the duration of their military service, the details of which are captured electronically in DMSS. The active duty component thus constitutes a cohort where health events can be assessed longitudinally with minimal ascertainment bias. Over half of the specimens in the DoDSR are traceable to service members who have been on active duty, and 75% of active duty service members have provided three or more longitudinal specimens.
Limited additional health and personnel data linked to DoDSR specimens include records of immunizations, overseas deployments, military assignment data, and records from pre- and post-deployment health assessments.
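As an illustration of the kind of record linkage described above (the actual DMSS schema and field names are not public, so the table layouts, column names, and the pandas-based approach below are all hypothetical), a specimen-to-encounter join might look like this:

```python
import pandas as pd

# Hypothetical, simplified stand-ins for DMSS tables; real field names differ.
specimens = pd.DataFrame({
    "specimen_id": ["S001", "S002", "S003"],
    "person_id":   ["P1",   "P1",   "P2"],   # stand-in for the donor identifier
    "draw_date":   pd.to_datetime(["2003-05-01", "2005-06-15", "2004-02-10"]),
})

encounters = pd.DataFrame({
    "person_id": ["P1", "P2"],
    "icd9_code": ["340", "295.30"],          # e.g. multiple sclerosis, schizophrenia
    "dx_date":   pd.to_datetime(["2006-01-20", "2004-09-01"]),
})

# Link each diagnosis to all specimens drawn from the same person before diagnosis,
# which is the basic pattern behind pre-diagnostic serum studies.
linked = specimens.merge(encounters, on="person_id")
pre_diagnostic = linked[linked["draw_date"] < linked["dx_date"]]
print(pre_diagnostic[["specimen_id", "icd9_code", "draw_date", "dx_date"]])
```

Real analyses of DoDSR data involve de-identification, approval workflows, and far richer schemas; the sketch only illustrates the person-and-date linkage pattern.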
Links to Other Available Data
Significant additional MHS administrative and clinical data exist which are not integrated into DMSS. These include:
Records of pharmaceuticals dispensed at MHS outpatient pharmacies and through the outsourced civilian retail and mail-order pharmacy, available through the DoD Pharmacoeconomics Center's (PEC) Pharmacy Data Transaction Service (PDTS).
Records of Health Level 7 (HL7) coded results of microbiology, chemistry, and hematology laboratory tests, available through the MHS.
Family history and risk factor data, available through the MHS Armed Forces Health Longitudinal Technology Application (AHLTA) electronic medical record system.
Information on confirmed cancer diagnoses available in the Automated Central Tumor Registry (ACTUR).
Recent AFHSC solicitations have requested additional staff to address these data shortfalls.
Linkages to Other DoD Biological Repositories
The Department of Defense, through the Armed Forces Institute of Pathology operates the AFIP Tissue Repository, which contains approximately 3 million case files and associated paraffin blocks, microscopic glass slides, and formalin-fixed tissue specimens from pathologic examinations occurring throughout the Military Health System. Thousands of cases are added to the repository each year. With the disestablishment of the AFIP under Base Realignment and Closure, management of the Tissue Repository was to have been transferred to the Uniformed Services University of the Health Sciences. However, Public Law 110-181 Section 722 directed the President to establish a Joint Pathology Center, which would subsume responsibility for the AFIP Tissue Repository. A Joint Pathology Center Working Group Concept of Operations stated that:
The JPC ... will provide maintenance/modernization of the Tissue Repository in support of the mission of the DoD and other federal agencies.
In its review of the JPC Working Group Concept of Operations, the Defense Health Board emphasized that:
Every effort must be pursued to guarantee that the Tissue Repository is preserved, implements world-class modernization, and is utilized appropriately. A recent independent report by Asterand (Detroit, MI) submitted to Uniformed Services University of the Health Sciences found the repository to have a commercial value of $3.0-$3.6 Billion ...
Despite the utility of linking AFIP Tissue Repository specimens to longitudinal pre-diagnostic serum available in the DoDSR, no formal linkage of the AFIP Tissue Repository inventory has yet been made to DMSS or to the DoDSR. No estimate is yet available on the potential commercial value of such a formal linkage.
Permitted Uses of DoDSR Specimens
Requests for access to DoDSR specimens are governed by guidelines developed by the Armed Forces Health Surveillance Center. According to guidelines, " [t]he Director of the repository is solely responsible for authorizing releases of specimens from the repository."
Research
DoDSR specimens may only be released to principal investigators outside the Department of Defense for purposes of medical research if the proposed study has "a coinvestigator who is assigned to the Department of Defense and is knowledgeable, responsible, and accountable for all aspects of the study's design and execution (including data management, analysis, interpretation, and reporting of results)."
Clinical Care
Serum from the DoDSR may be requested by clinicians within the Military Health System to aid diagnosis and guide clinical management. Serum may also be released to clinicians outside the Military Health System provided a physician in the Military Health System in the same specialty as the requestor validates the clinical relevance of the requested use prior to the release of any serum.
Criminal Investigations
Serum specimens from the DoDSR may be used for criminal investigations and prosecutions if directed by the Assistant Secretary of Defense for Health Affairs.
Other Issues
Informed Consent
DoDSR specimens are collected without informed consent, and specimen donors are not informed of the use of their specimens in subsequent studies. Specimens retrieved by the DoDSR for use in external research studies are, with rare exceptions, deidentified prior to being sent to outside investigators.
Genetic Testing
A 1996 memorandum specifically stated that DoDSR specimens collected for pre- and post-deployment health surveillance "will not be used for any genetics related testing".
Civilian and Beneficiary Serum
As a result of clinically indicated HIV testing performed on civilians and family beneficiaries at Military Treatment Facilities (eligible for health care within the Military Health System), approximately 900,000 serum specimens from individuals not directly affiliated with the Uniformed Services through application or service are also stored in the DoDSR. In Privacy Act documentation, DoD acknowledges that the AFHSC maintains "...specimen collections (remaining serum from blood samples) from which serologic tests can be performed..." from categories of individuals which include "Department of Defense military personnel (active and reserve) and their family members...".
Destruction of Specimens
DoDSR guidelines are silent as to whether a mechanism exists for specimen donors to request destruction or removal of their specimens from the repository. In Privacy Act documentation, DoD states that "[r]ecords are destroyed when no longer needed for reference and for conducting business", but no formal mechanism is articulated for the destruction of specimens. This is in contrast to the Armed Forces Institute of Pathology DNA Repository (also known as the Repository of Specimen Samples for the Identification of Remains), which articulates a mechanism for donors to request the destruction of their specimens following separation from service.
RAND Study on the Role of the DoDSR in Pandemic Influenza Preparedness
On May 1, 2009, during the early stages of the 2009 H1N1 flu outbreak, an unpublished RAND study, originally commissioned in 2006 by USACHPPM, was published in its entirety on WikiLeaks. The leaked documents included a justification for the $500,000 contract cost, directly authorized by former Surgeon General of the United States Army Kevin C. Kiley on August 4, 2006, which stated that the study and its 12-month timetable for delivery were necessary
... to describe the current and future capabilities of the Department of Defense Serum Repository to assist with the early identification and response to an influenza pandemic. Adequate resources are not available in-house to perform these analyses in sufficient time to prepare for a pandemic ...
Despite the leaked study draft's publication date of May 2008, at the time of the leak and outbreak in May 2009, RAND listed the study as a "current project", noting in its description that "the threat of an emerging human pandemic [has] highlighted the importance of a comprehensive U.S. Armed Forces health surveillance architecture". Around the time of the leaked documents' appearance on WikiLeaks, the lead author of the unpublished RAND study published an op-ed piece in The Baltimore Sun describing the control of the outbreak as a concern of "national security", and highlighting the need to "marshal the best ... institutional strengths ... to prevent, detect and respond effectively to this latest infectious disease".
Notes
References
External links
Armed Forces Health Surveillance Center (AFHSC)
United States Department of Defense
Military-related organizations
Biological databases
Hematology organizations | Department of Defense Serum Repository | Biology | 4,156 |
646,125 | https://en.wikipedia.org/wiki/Heterosis | Heterosis, hybrid vigor, or outbreeding enhancement is the improved or increased function of any biological quality in a hybrid offspring. An offspring is heterotic if its traits are enhanced as a result of mixing the genetic contributions of its parents. The heterotic offspring often has traits that are more than the simple addition of the parents' traits, and can be explained by Mendelian or non-Mendelian inheritance. Typical heterotic/hybrid traits of interest in agriculture are higher yield, quicker maturity, stability, drought tolerance etc.
Definitions
In proposing the term heterosis to replace the older term heterozygosis, G.H. Shull aimed to avoid limiting the term to the effects that can be explained by heterozygosity in Mendelian inheritance.
Heterosis is often discussed as the opposite of inbreeding depression, although differences in these two concepts can be seen in evolutionary considerations such as the role of genetic variation or the effects of genetic drift in small populations on these concepts. Inbreeding depression occurs when related parents have offspring with traits that negatively influence their fitness, largely due to homozygosity. In such instances, outcrossing should result in heterosis.
Not all outcrosses result in heterosis. For example, when a hybrid inherits traits from its parents that are not fully compatible, fitness can be reduced. This is a form of outbreeding depression, the effects of which are similar to inbreeding depression.
Genetic and epigenetic bases
Since the early 1900s, two competing genetic hypotheses, not necessarily mutually exclusive, have been developed to explain hybrid vigor. More recently, an epigenetic component of hybrid vigor has also been established.
Dominance and overdominance
When a population is small or inbred, it tends to lose genetic diversity. Inbreeding depression is the loss of fitness due to loss of genetic diversity. Inbred strains tend to be homozygous for recessive alleles that are mildly harmful (or produce a trait that is undesirable from the standpoint of the breeder). Heterosis or hybrid vigor, on the other hand, is the tendency of outbred strains to exceed both inbred parents in fitness.
Selective breeding of plants and animals, including hybridization, began long before there was an understanding of underlying scientific principles. In the early 20th century, after Mendel's laws came to be understood and accepted, geneticists undertook to explain the superior vigor of many plant hybrids. Two competing hypotheses, which are not mutually exclusive, were developed:
Dominance hypothesis. The dominance hypothesis attributes the superiority of hybrids to the suppression of undesirable recessive alleles from one parent by dominant alleles from the other. It attributes the poor performance of inbred strains to loss of genetic diversity, with the strains becoming purely homozygous at many loci. The dominance hypothesis was first expressed in 1908 by the geneticist Charles Davenport. Under the dominance hypothesis, deleterious alleles are expected to be maintained in a random-mating population at a selection–mutation balance that would depend on the rate of mutation, the effect of the alleles and the degree to which alleles are expressed in heterozygotes.
Overdominance hypothesis. Certain combinations of alleles that can be obtained by crossing two inbred strains are advantageous in the heterozygote. The overdominance hypothesis attributes the heterozygote advantage to the survival of many alleles that are recessive and harmful in homozygotes. It attributes the poor performance of inbred strains to a high percentage of these harmful recessives. The overdominance hypothesis was developed independently by Edward M. East (1908) and George Shull (1908). Genetic variation at an overdominant locus is expected to be maintained by balancing selection. The high fitness of heterozygous genotypes favours the persistence of an allelic polymorphism in the population. This hypothesis is commonly invoked to explain the persistence of some alleles (most famously the Sickle cell trait allele) that are harmful in homozygotes. In normal circumstances, such harmful alleles would be removed from a population through the process of natural selection. Like the dominance hypothesis, it attributes the poor performance of inbred strains to expression of such harmful recessive alleles.
Dominance and overdominance have different consequences for the gene expression profile of the individuals. If overdominance is the main cause for the fitness advantages of heterosis, then there should be an over-expression of certain genes in the heterozygous offspring compared to the homozygous parents. On the other hand, if dominance is the cause, fewer genes should be under-expressed in the heterozygous offspring compared to the parents. Furthermore, for any given gene, the expression should be comparable to the one observed in the fitter of the two parents. In any case, outcross matings provide the benefit of masking deleterious recessive alleles in progeny. This benefit has been proposed to be a major factor in the maintenance of sexual reproduction among eukaryotes, as summarized in the article Evolution of sexual reproduction.
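As an illustrative toy model of the dominance hypothesis (added here for illustration only; the locus count, allele frequencies, and per-locus fitness cost are arbitrary assumptions, and Python with NumPy is assumed), two inbred lines fixed for deleterious recessives at partly different loci produce an F1 in which each defect is masked whenever the other parent contributes a functional allele:

```python
import numpy as np

rng = np.random.default_rng(0)

n_loci = 100   # number of loci that may carry a deleterious recessive (arbitrary)
s = 0.02       # fitness cost per homozygous deleterious locus (arbitrary)

# Each inbred line is homozygous; mark which loci carry the deleterious recessive.
line_a = rng.random(n_loci) < 0.3   # True = homozygous deleterious in line A
line_b = rng.random(n_loci) < 0.3   # True = homozygous deleterious in line B

def fitness(homozygous_deleterious):
    """Multiplicative fitness: each homozygous deleterious locus costs a factor (1 - s)."""
    return (1 - s) ** homozygous_deleterious.sum()

# The F1 hybrid is homozygous deleterious only where BOTH parents carry the recessive,
# because a dominant functional allele from either parent masks the defect.
f1 = line_a & line_b

print("fitness of inbred line A:", round(fitness(line_a), 3))
print("fitness of inbred line B:", round(fitness(line_b), 3))
print("fitness of F1 hybrid:    ", round(fitness(f1), 3))
```

With these arbitrary numbers the F1 is homozygous deleterious only at the loci shared by both lines, so its computed fitness exceeds that of either inbred parent, which is the qualitative pattern the dominance hypothesis predicts.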
Historical retrospective
Which of the two mechanisms is the "main" reason for heterosis has been a scientific controversy in the field of genetics. Population geneticist James Crow (1916–2012) believed, in his younger days, that overdominance was a major contributor to hybrid vigor. In 1998 he published a retrospective review of the developing science. According to Crow, the demonstration of several cases of heterozygote advantage in Drosophila and other organisms first caused great enthusiasm for the overdominance theory among scientists studying plant hybridization. But overdominance implies that yields of an inbred strain should decrease as inbred strains are selected for the performance of their hybrid crosses, as the proportion of harmful recessives in the inbred population rises. Over the years, experimentation in plant genetics has proven that the reverse occurs, that yields increase in both the inbred strains and the hybrids, suggesting that dominance alone may be adequate to explain the superior yield of hybrids. Only a few conclusive cases of overdominance have been reported in all of genetics. Since the 1980s, as experimental evidence has mounted, the dominance theory has made a comeback.
Crow wrote:
The current view ... is that the dominance hypothesis is the major explanation of inbreeding decline and [of] the high yield of hybrids. There is little statistical evidence for contributions from overdominance and epistasis. But whether the best hybrids are getting an extra boost from overdominance or favorable epistatic contributions remains an open question.
Epigenetics
An epigenetic contribution to heterosis has been established in plants, and it has also been reported in animals. MicroRNAs (miRNAs), discovered in 1993, are a class of non-coding small RNAs which repress the translation of messenger RNAs (mRNAs) or cause degradation of mRNAs. In hybrid plants, most miRNAs have non-additive expression (it might be higher or lower than the levels in the parents). This suggests that the small RNAs are involved in the growth, vigor and adaptation of hybrids.
'Heterosis without hybridity' effects on plant size have been demonstrated in genetically isogenic F1 triploid (autopolyploid) plants, where paternal genome excess F1 triploids display positive heterosis, whereas maternal genome excess F1s display negative heterosis effects. Such findings demonstrate that heterosis effects, with a genome dosage-dependent epigenetic basis, can be generated in F1 offspring that are genetically isogenic (i.e. harbour no heterozygosity). It has been shown that hybrid vigor in an allopolyploid hybrid of two Arabidopsis species was due to epigenetic control in the upstream regions of two genes, which caused major downstream alteration in chlorophyll and starch accumulation. The mechanism involves acetylation or methylation of specific amino acids in histone H3, a protein closely associated with DNA, which can either activate or repress associated genes.
Specific mechanisms
Major histocompatibility complex in animals
One example of where particular genes may be important in vertebrate animals for heterosis is the major histocompatibility complex (MHC). Vertebrates inherit several copies of both MHC class I and MHC class II from each parent, which are used in antigen presentation as part of the adaptive immune system. Each different copy of the genes is able to bind and present a different set of potential peptides to T-lymphocytes. These genes are highly polymorphic throughout populations, but are more similar in smaller, more closely related populations. Breeding between more genetically distant individuals decreases the chance of inheriting two alleles that are the same or similar, allowing a more diverse range of peptides to be presented. This, therefore, increases the chance that any particular pathogen will be recognised, and means that more antigenic proteins on any pathogen are likely to be recognised, giving a greater range of T-cell activation, so a greater response. This also means that the immunity acquired to the pathogen is against a greater range of antigens, meaning that the pathogen must mutate more before immunity is lost. Thus, hybrids are less likely to succumb to pathogenic disease and are more capable of fighting off infection. This may, however, also be a cause of autoimmune diseases.
Plants
Crosses between inbreds from different heterotic groups result in vigorous F1 hybrids with significantly more heterosis than F1 hybrids from inbreds within the same heterotic group or pattern. Heterotic groups are created by plant breeders to classify inbred lines, and can be progressively improved by reciprocal recurrent selection.
Heterosis is used to increase yields, uniformity, and vigor. Hybrid breeding methods are used in maize, sorghum, rice, sugar beet, onion, spinach, sunflowers, broccoli and to create a more psychoactive cannabis.
Corn (maize)
Nearly all field corn (maize) grown in most developed nations exhibits heterosis. Modern corn hybrids substantially outyield conventional cultivars and respond better to fertilizer.
Corn heterosis was famously demonstrated in the early 20th century by George H. Shull and Edward M. East after hybrid corn was invented by Dr. William James Beal of Michigan State University based on work begun in 1879 at the urging of Charles Darwin. Dr. Beal's work led to the first published account of a field experiment demonstrating hybrid vigor in corn, by Eugene Davenport and Perry Holden, 1881. These various pioneers of botany and related fields showed that crosses of inbred lines made from a Southern dent and a Northern flint, respectively, showed substantial heterosis and outyielded conventional cultivars of that era. However, at that time such hybrids could not be economically made on a large scale for use by farmers. Donald F. Jones at the Connecticut Agricultural Experiment Station, New Haven invented the first practical method of producing a high-yielding hybrid maize in 1914–1917. Jones' method produced a double-cross hybrid, which requires two crossing steps working from four distinct original inbred lines. Later work by corn breeders produced inbred lines with sufficient vigor for practical production of a commercial hybrid in a single step, the single-cross hybrids. Single-cross hybrids are made from just two original parent inbreds. They are generally more vigorous and also more uniform than the earlier double-cross hybrids. The process of creating these hybrids often involves detasseling.
Temperate maize hybrids are derived from two main heterotic groups: 'Iowa Stiff Stalk Synthetic', and nonstiff stalk.
Rice (Oryza sativa)
Hybrid rice is cultivated in many countries, including China, India, Vietnam, and the Philippines. Compared to inbred lines, hybrids produce approximately 20% greater yield, and comprise 45% of the rice planting area in China. Rice production has risen enormously in China due to the heavy use of hybrid rice. In China, efforts have generated a super hybrid rice strain ('LYP9') with a production capability of around 15 tons per hectare. In India, several varieties have also shown high vigor, including 'RH-10' and 'Suruchi 5401'.
Since rice is a self-pollinating species, it requires the use of male-sterile lines to generate hybrids from separate lineages. The most common way of achieving this is using lines with genetic male-sterility, as manual emasculation is not optimal for large-scale hybridization. The first generation of hybrid rice was developed in the 1970s. It relies on three lines: a cytoplasmic male sterile (CMS) line, a maintainer line, and a restorer line. The second generation was widely adopted in the 1990s. Instead of a CMS line, it uses an environment-sensitive genic male sterile line (EGMS), which can have its sterility reversed based on light or temperature. This removes the need for a maintainer, making the hybridization and breeding process more efficient (albeit still high-maintenance). Second generation lines show a yield increase of 5-10% over first generation lines. The third and current generation uses a nuclear male sterile line (NMS). Third generation lines have a recessive sterility gene, and their cultivation is more lenient towards maintainer lines and environmental conditions. Additionally, transgenes are only present in the maintainer, so hybrid plants can benefit from hybrid vigor without requiring special oversight.
Animals
Hybrid livestock
The concept of heterosis is also applied in the production of commercial livestock. In cattle, crosses between Black Angus and Hereford produce a cross known as a "Black Baldy". In swine, "blue butts" are produced by the cross of Hampshire and Yorkshire. Other, more exotic hybrids (two different species, so genetically more dissimilar), such as "beefalo" which are hybrids of cattle and bison, are also used for specialty markets.
Poultry
Within poultry, sex-linked genes have been used to create hybrids in which males and females can be sorted at one day old by color. Specific genes used for this are genes for barring and wing feather growth. Crosses of this sort create what are sold as Black Sex-links, Red Sex-links, and various other crosses that are known by trade names.
Commercial broilers are produced by crossing different strains of White Rocks and White Cornish, the Cornish providing a large frame and the Rocks providing the fast rate of gain. The hybrid vigor produced allows the production of uniform birds at a marketable carcass weight at 6–9 weeks of age.
Likewise, hybrids between different strains of White Leghorn are used to produce laying flocks that provide the majority of white eggs for sale in the United States.
Dogs
In 2013, a study found that mixed breeds live on average 1.2 years longer than pure breeds.
John Scott and John L. Fuller performed a detailed study of purebred Cocker Spaniels, purebred Basenjis, and hybrids between them.
They found that hybrids ran faster than either parent, perhaps due to heterosis. Other characteristics, such as basal heart rate, did not show any heterosis—the dog's basal heart rate was close to the average of its parents—perhaps due to the additive effects of multiple genes.
Sometimes people working on a dog-breeding program find no useful heterosis.
All this said, studies do not provide definitive proof of hybrid vigor in dogs. This is largely due to the unknown heritage of most mixed breed dogs used. Results vary wildly, with some studies showing benefit and others finding the mixed breed dogs to be more prone to genetic conditions.
Birds
In 2014, a study undertaken by the Centre for Integrative Ecology at Deakin University in Geelong, Victoria, concluded that hybrids between the subspecies Platycercus elegans flaveolus and P. e. elegans of the crimson rosella (P. elegans) were more likely to fight off diseases than their pure counterparts.
Humans
Human beings are all extremely genetically similar to one another. Michael Mingroni has proposed heterosis, in the form of hybrid vigor associated with historical reductions of the levels of inbreeding, as an explanation of the Flynn effect, the steady rise in IQ test scores around the world during the 20th century, though a review of nine studies found that there is no evidence to suggest inbreeding has an effect on IQ.
Controversy
The term heterosis often causes confusion and even controversy, particularly in the selective breeding of domestic animals, because it is sometimes (incorrectly) claimed that all crossbred plants and animals are "genetically superior" to their parents due to heterosis, but two problems exist with this claim:
according to an article published in the journal Genome Biology, "genetic superiority" is an ill-defined term and not generally accepted terminology within the scientific field of genetics. A related term fitness is well defined, but it can rarely be directly measured. Instead, scientists use objective, measurable quantities, such as the number of seeds a plant produces, the germination rate of a seed, or the percentage of organisms that survive to reproductive age. From this perspective, crossbred plants and animals exhibiting heterosis may have "superior" traits, but this does not necessarily equate to any evidence of outright "genetic superiority". Use of the term "superiority" is commonplace for example in crop breeding, where it is well understood to mean a better-yielding, more robust plant for agriculture. Such a plant may yield better on a farm, but would likely struggle to survive in the wild, making this use open to misinterpretation. In human genetics any question of "genetic superiority" is even more problematic due to the historical and political implications of any such claim. Some may even go as far as to describe it as a questionable value judgement in the realm of politics, not science.
not all hybrids exhibit heterosis (see outbreeding depression).
An example of the ambiguous value judgements imposed on hybrids and hybrid vigor is the mule. While mules are almost always infertile, they are valued for a combination of hardiness and temperament that is different from either of their horse or donkey parents. While these qualities may make them "superior" for particular uses by humans, the infertility issue implies that these animals would most likely become extinct without the intervention of humans through animal husbandry, making them "inferior" in terms of natural selection.
See also
F1 hybrid
Genetic admixture
Heterozygote advantage
Outbreeding depression
References
Further reading
NOAA Tech Memo NMFS NWFSC-30: Genetic Effects of Straying of Non-Native Hatchery Fish into Natural Populations: Inbreeding Depression and Outbreeding Depression
"Hybrids & Heirlooms"—an article from University of Illinois Extension's Home Hort Hints
Roybal, J. (July 1, 1998). "Ranchstar". Beef (beefmagazine.com).
"Sex-Links"—regarding poultry; at FeatherSite
Breeding
Classical genetics
Plant sexuality | Heterosis | Biology | 3,994 |
70,806,721 | https://en.wikipedia.org/wiki/Arzew%20Gas%20Terminal | The Arzew Gas Terminal is a large and historically important gas terminal on the coast of Algeria.
The plant supplied the first natural gas imported into the UK, beginning in 1964. The natural gas industry is highly important to the economy of Algeria. The plant was the first of its kind, and is now one of the largest.
Background
Natural gas reserves in Algeria in the 1960s were thought to be so large that the country's reserves could supply the whole of Europe for fifty years. The plan was developed by Sir David Milne-Watson of the Gas Council.
Construction
The plant was opened by Ben Bella on Sunday 27 September 1964, with Sir Harry Jones, the chairman of the Gas Council. The plant cost £31m, with a 280-mile pipeline.
A 28-minute industrial film about the project, entitled Saharan Venture, was made in April 1965 by World Wide Pictures (UK).
Additional plants opened in 1978, 1981 and July 2014.
History
A similar plant at Skikda was planned in 1967, and opened in 1972. On 19 January 2004 an explosion at this site killed 29 people and caused $940m damage.
Revenues to the country's government were worth about £16m per year in 1967.
A new £9.6m gas separation plant was built in 1972, to produce butane and propane, connected to a 500-mile pipeline to the Hassi Messaoud gas field.
A nearby petrochemical plant and associated oil refinery was built in 1973.
By 2005, Algeria was the second-largest exporter of natural gas to Europe, after Russia. It supplied 20% of Europe's gas, including 50% of the natural gas required by Spain.
The £2bn Gassi Touil project was planned to build a new plant at the site in 2009; it was built by Chiyoda Corporation of Japan and Snamprogetti of Italy, and eventually opened in November 2013.
The original gas plant was decommissioned in 2010.
Prime Minister Ahmed Ouyahia visited the plant on Sunday 1 October 2017, to inaugurate two new natural gas tankers, operated by the Hyproc Shipping Company - the Tessala, named after the town Tessala in Sidi Bel Abbès Province, and Ougarta, named after the Ougarta Range of hills.
Supply to the UK
In the early 1960s Britain's domestic gas was supplied from 28 million tonnes of coal. How to supply Britain with natural gas was heavily discussed by the Select Committee on Nationalised Industries. On Friday 3 November 1961, the Minister for Power authorised the supply of natural gas by ship from a port in Algeria, supplied by the Hassi R'Mel gas field, at the time the third-largest natural gas field in the world; now it is the 18th-largest.
A contract had been signed by the Gas Council for the supply of natural gas for fifteen years from the plant.
The natural gas was transported by two tankers owned by Shell Tankers, taking 700,000 tons of natural gas per year to Essex, to land owned by the North Thames Gas Board, in around sixty journeys a year, roughly one every six days. This gas provided 10% of Britain's gas needs. Each tanker carried 12,000 tons, enough for half a day of Britain's gas needs. The journey time to Essex was four days, over 1,500 miles. The first gas arrived in Essex on Wednesday 14 October 1964.
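As a consistency check on these figures (simple arithmetic added for illustration; the 365-day year is an assumption):

```python
annual_tonnage = 700_000   # tons of natural gas shipped to Essex per year
cargo_per_trip = 12_000    # tons carried by a tanker on each journey

trips_per_year = annual_tonnage / cargo_per_trip
days_between_trips = 365 / trips_per_year

print(round(trips_per_year))          # about 58 journeys a year
print(round(days_between_trips, 1))   # roughly one journey every 6.3 days
```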
This gas was carried onward by a new £7.5m, 200-mile pipeline, the start of the NTS. It provided the first natural gas supply in the UK, after a test shipment in February 1959.
In June 1968, the Gas Council planned a similar plant to cost £3.5m in North Lanarkshire, Scotland, to supply a nearby gas terminal, and a gas terminal in Derbyshire.
Supply to France
The first tanker to arrive in France unloaded its gas in March 1965, for Gaz de France.
By 1974, most gas from the site was being supplied to France, to a gas terminal at Fos in the south of France. An agreement was signed in late 1972 to supply gas also to Belgium, Switzerland, Austria and southern Germany, via a gas terminal at Monfalcone in Italy.
Supply to the United States
The first natural gas shipment to the US left on Sunday 30 October 1971. The US had signed a 25-year contract.
Operation
Algeria had been ruled by the French for 132 years, becoming independent in July 1962, only for its army, led by Houari Boumédiène of the Revolutionary Council, to take power in June 1965; he would stay as leader until 1978. The country was known as the Algerian Democratic People's Republic.
On 10 June 1967, Algeria placed an embargo on exports to the UK. On Monday 26 June 1967 the plant ceased operation; the supply to France stopped as well.
The British tankers were allowed again to load from September 1967.
Structure
The site at Bethioua, built in 1964, covered 180 acres. It could process 50 million cubic feet of natural gas per day. It had three processing structures: two processed gas for the UK, and the other processed gas for France.
See also
Energy in Algeria
List of countries by natural gas production
Trans-Mediterranean Pipeline
References
1964 establishments in Algeria
Algeria–United Kingdom relations
Buildings and structures in Laghouat Province
Commercial buildings completed in 1964
Economic history of Algeria
Energy history of the United Kingdom
Energy infrastructure completed in 1964
Natural gas plants
Natural gas industry in Algeria
Natural gas industry in France
Natural gas industry in the United Kingdom | Arzew Gas Terminal | Chemistry | 1,121 |
62,945,037 | https://en.wikipedia.org/wiki/Teacup%20galaxy | The Teacup galaxy, also known as the Teacup AGN or SDSS J1430+1339 is a low redshift type 2 quasar, showing an extended loop of ionized gas resembling a handle of a teacup, which was discovered by volunteers of the Galaxy Zoo project and labeled as a Voorwerpje.
Galaxy
The Teacup galaxy is dominated by a bulge and has an asymmetric structure with a shell-like structure and a tidal tail. The shell and tail are signatures of a recent merger of two galaxies. Dust lanes in the system are interpreted as a gas-rich merger. Several candidate star clusters were identified in this galaxy with Hubble Space Telescope images. Observations with the Gran Telescopio Canarias showed that the Teacup Galaxy has a giant reservoir of ionized gas extending up to 111 kpc. The optical/radio bubbles seem to be expanding across this intergalactic medium.
Active galactic nucleus
Early studies of the Teacup AGN suggested that it was fading, although there was no clear evidence. Observations with VLT/SINFONI showed a blueshifted nuclear outflow with a velocity of 1600–1800 km/s. Observations in X-rays with Swift, XMM-Newton and Chandra revealed a powerful, highly obscured active galactic nucleus. This new result suggests that a fading AGN is not required to explain the observations. The quasar has dimmed by only a factor of 25 or less over the past 100,000 years.
Bubbles
One bubble was discovered by Galaxy Zoo volunteers in SDSS images as a 5 kpc loop of ionized gas. The loop is dominated by emission lines, such as hydrogen alpha and doubly ionized oxygen, which gives the loop seen in SDSS images a purple color. The emission of [O II] is extremely strong in the Teacup AGN and the quasar 3C 48 shows a similar [O II]/Hβ ratio.
Follow-up observations with the Very Large Array showed two 10-12 kpc bubbles, one "eastern bubble", consistent with the loop in optical observations and a "western bubble", only visible in radio wavelengths. The study also found a bright emission towards the north-east of the AGN, which is consistent with high-velocity ionized gas (-740 km/s). The bubbles are either created by small-scale radio jets or by quasar winds. Observations with Chandra revealed a loop in x-ray emission, consistent with the "eastern bubble". The Chandra data also show evidence for hotter gas within the bubble, which may imply that a wind of material is blowing away from the black hole. Such a wind, which was driven by radiation from the quasar, may have created the bubbles found in the Teacup.
The bubbles were observed with VLT/MUSE, showing that the jet strongly perturbs the host interstellar medium (ISM). At the edge of the bubble the researchers find a ≤100-150 Myr young population of stars, which indicates triggered star formation. This so-called positive feedback is predicted. Observations with ALMA found that the radio jet is compressing and accelerating molecular gas. This drives a lateral outflow, perpendicular to the radio jet. This is based on observations of carbon monoxide (CO) gas.
See also
Extended emission-line region
IC 2497
Hanny's Voorwerp
Galaxy Zoo
Zooniverse
List of quasars
References
External links
Hubble spies the Teacup, and I spy Hubble blog post from the Galaxy Zoo website
Voorwerpjes in Space NASA Astronomy Picture of the Day
VLA Finds Unexpected Storm at Galaxy's Core press-release by NRAO
SDSS J1430+1339: Storm Rages in Cosmic Teacup photo album by the website of Chandra
1436754
F14281+1352
Quasars
Boötes | Teacup galaxy | Astronomy | 794 |
65,042,180 | https://en.wikipedia.org/wiki/Geographic%20centre%20of%20Uganda | The geographic centre of Uganda is north of Lake Kyoga in Olyaka village, Olyaka parish in Namasale sub-county in Amolatar District, Northern Uganda.
The point is marked by the Amolatar Monument aka Uganda Tribes Monument which displays the names of all ethnic tribes in Uganda. The Amolatar peninsula offered refuge to different tribes during the Karimojong cattle rustling of the 1970s through to the 1980s and early 1990s most of whom ended up settling in the district. Once a year, in September, people from all tribes of the region gather at this place and pray.
The method by which the coordinates of this geographical centre were determined is not known. The centre point of a bounding box completely enclosing the area of Uganda gives another pair of coordinates (1.368153° N, 32.303236° E), which belongs to a point along the Kampala–Gulu Highway, west of Lake Kyoga.
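A minimal sketch of the bounding-box calculation mentioned above (the extreme coordinates of Uganda used here are approximate values assumed for illustration, so the result only roughly matches the quoted point):

```python
# Approximate extreme coordinates of Uganda, assumed for illustration only.
south, north = -1.48, 4.23   # southernmost / northernmost latitudes (degrees)
west, east = 29.57, 35.03    # westernmost / easternmost longitudes (degrees)

# The bounding-box centre is simply the midpoint of each pair of extremes.
centre_lat = (south + north) / 2
centre_lon = (west + east) / 2

print(f"bounding-box centre: {centre_lat:.3f} N, {centre_lon:.3f} E")
# With these inputs the result lands near 1.4 N, 32.3 E, close to the
# coordinates quoted above for the bounding-box centre.
```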
References
Uganda
Geography of Uganda | Geographic centre of Uganda | Physics,Mathematics | 202 |
75,591,974 | https://en.wikipedia.org/wiki/Stefano%20Pluchino | Stefano Pluchino (born May 31, 1971) is Professor of Regenerative Neuroimmunology, within the Department of Clinical Neurosciences, at the University of Cambridge.
His research studies whether the accumulation of neurological disability observed in patients with chronic inflammatory neurological conditions can be slowed down using next generation molecular therapies.
The overarching aim is to understand the basic mechanisms that allow exogenously delivered stem cells, gene therapy vectors and/or exosomes to create an environment that preserves damaged axons or prevents neurons from dying. Such mechanisms are being harnessed and used to modulate disease states to repair and/or regenerate critical components of the nervous system.
He is best known for having provided compelling evidence in support of the feasibility and efficacy of advanced stem cell therapies in rodent and non-human primate models of inflammatory neurological diseases, including multiple sclerosis. His work has contributed to reshape the classical view that advanced cell therapeutics (ACTs), including cellular grafts, may exert their therapeutic effects not only through structural cell replacement, but also through modulation of mitochondrial function and neuroinflammatory pathways, and has inspired the first-in-kind clinical trials of allogeneic somatic neural stem cells in patients with progressive MS.
His most recent research has also elucidated the role of mitochondrial complex I activity in microglia in sustaining neuroinflammation. This finding, reported in a study published in Nature, offers a new avenue for understanding the mechanisms underlying progressive multiple sclerosis (MS) and suggests a new target for disease-modifying therapies: by targeting mitochondrial complex I activity in microglia, researchers may be able to intervene in the neuroinflammatory processes that contribute to disease progression, potentially slowing or halting it.
His combined efforts towards the identification of new druggable targets, as well as the development of advanced regenerative therapies, underscore the importance of continued research into the intricate mechanisms underlying neurological diseases and the development of targeted therapies that can address these mechanisms.
Education
Born in 1971, Pluchino grew up in Ragusa, Italy. He attended liceo classico Umberto I in Ragusa. He earned an M.D., a full residency in Neurology and a Ph.D. in Experimental Neurosciences from the University of Siena, Italy (joint with San Raffaele Scientific Institute, Milan), under the mentorship of Gianvito Martino in 2004. The title of his PhD thesis was ‘Development of a neural stem cell-based therapy for experimental multiple sclerosis in mice’.
He then completed his post-doctoral research at San Raffaele Scientific Institute, and Vita-Salute San Raffaele University, Milan. He was also an instructor in Experimental Neurosciences at University Vita-Salute San Raffaele, Milan until 2010.
In 2010, Pluchino joined the faculty at the University of Cambridge – School of Clinical Medicine, with a laboratory at the Van Geest Centre for Brain Repair, on the Forvie site of the Cambridge Biomedical Campus. He became University Lecturer and Honorary Consultant in Neurology, as well as principal investigator at the Wellcome–MRC Cambridge Stem Cell Institute. He was promoted to University Reader in Regenerative Neuroimmunology in 2016. In 2021, Pluchino was further promoted to Professor of Regenerative Neuroimmunology, in the Department of Clinical Neurosciences.
Research and career
Pluchino's research studies whether the accumulation of neurological disability observed in patients with chronic inflammatory neurological conditions can be slowed down using next generation molecular therapies. The overarching aim is to understand the basic mechanisms that allow exogenously delivered stem cells, gene therapy vectors and/or exosomes to create an environment that preserves damaged axons or prevents neurons from dying. Such mechanisms may be harnessed and used to modulate disease states to repair and/or regenerate critical components of the nervous system.
In addition to his positions in the Department of Clinical Neurosciences at the University of Cambridge, Pluchino serves as Chair of the Scientific Advisory Board at ReNeuron lcc.
Awards and honors
2003 AINI Award
2003 European Charcot Foundation (ECF) Award
2004 SIICA Award
2006 Serono Foundation Multiple Sclerosis Award
2007 FISM Rita Levi-Montalcini Award
2008 Regional Agency for Instruction, Formation and Work (ARIFL) Research and Internationalization Award
2010 Royan International Research Award
References
External links
Stefano Pluchino - Top Italian Scientist in Neurosciences & Psychology
Cambridge Immunology Network
PluchinoLab website
Preserving the Brain | Forum on neurodegenerative diseases
Stefano Pluchino's lecture at "Premio Rita Levi Montalcini". FISM Congress 2022
Italian neuroscientists
Living people
Stem cell researchers
Professors of the University of Cambridge
1971 births
University of Siena alumni
People from Ragusa, Sicily | Stefano Pluchino | Biology | 1,068 |
175,119 | https://en.wikipedia.org/wiki/Tire%20iron | A tire iron (also tire lever or tire spoon) is a specialized metal or plastic tool used in working with tires. Tire irons have not been in common use for automobile tires since the shift to the use of tubeless tires in the late 1950s.
Bicycle tire irons are still in use for those tires which have a separate inner tube, and can have a hooked C-shape cut into one end of the iron so that it may be hooked on a bicycle spoke to hold it in place.
Description and use
Tire irons, which usually come in pairs or threes, are used to pry the edge of a tire away from the rim of the wheel it has been mounted on. After one iron has pried a portion of the tire from its wheel, it is held in position while a second iron is applied further along the tire to pry more of the tire away from the wheel. This allows enough of the tire to be separated so that the first iron can be removed, and used again on the far side of the other iron. Alternating in this way, a person can work all the way around the tire to fully remove it from the wheel, in order to reach the tube that sits inside.
In the first half of the 20th century, tire irons became a colloquial shorthand for strength, as in "I couldn't get rid of him with a pair of tire irons," and they frequently appeared in cartoons in similar situations. The usage is now considered passé.
Bicycle tire irons
Tire irons for bicycles are usually referred to as "tire levers", as they are often made of plastic, not metal.
Tire levers for bicycle tires have one end that is tapered and slightly curved. The other end is usually hooked so that it can be hooked around a spoke to keep the tire bead free of the rim at one point, allowing a second lever to be manipulated forward, progressively loosening a larger segment of the tire bead from the rim.
A common feature of tire levers is the lack of sharp edges. The slightest pinch of an inner tube by a lever can weaken or puncture the tube. It is good practice to examine a set of tire levers for any sharp edges and to file them smooth and round. Another problem, though less critical, is that a steel lever would scratch aluminum rims.
Classically, tire levers were made of metal. However, plastic ones are now manufactured that are less sharp and less likely to puncture the tube. There are also some single-lever varieties, which can be inserted under the bead at one point and then quickly pushed around the rim to pop the bead off.
Tire levers are not necessary or desirable in all cases. The tire can often be reinstalled on the rim, and sometimes removed from it, without levers, which reduces the chance of puncture caused by pinching the tube between the rim and the tire bead.
See also
Bead breaker
Crowbar
Lug wrench
References
External links
Bicycle tools
Tires
Mechanical hand tools | Tire iron | Physics | 644 |
1,315,510 | https://en.wikipedia.org/wiki/Upsampling | In digital signal processing, upsampling, expansion, and interpolation are terms associated with the process of resampling in a multi-rate digital signal processing system. Upsampling can be synonymous with expansion, or it can describe an entire process of expansion and filtering (interpolation). When upsampling is performed on a sequence of samples of a signal or other continuous function, it produces an approximation of the sequence that would have been obtained by sampling the signal at a higher rate (or density, as in the case of a photograph). For example, if compact disc audio at 44,100 samples/second is upsampled by a factor of 5/4, the resulting sample-rate is 55,125.
Upsampling by an integer factor
Rate increase by an integer factor L can be explained as a 2-step process, with an equivalent implementation that is more efficient:
Expansion: Create a sequence, x_L[n], comprising the original samples, x[n], separated by L − 1 zeros. A notation for this operation is: x_L[n] = x[n/L] when n is an integer multiple of L, and x_L[n] = 0 otherwise.
Interpolation: Smooth out the discontinuities using a lowpass filter, which replaces the zeros.
In this application, the filter is called an interpolation filter, and its design is discussed below. When the interpolation filter is an FIR type, its efficiency can be improved, because the zeros contribute nothing to its dot product calculations. It is an easy matter to omit them from both the data stream and the calculations. The calculation performed by a multirate interpolating FIR filter for each output sample is a dot product:
y[j + nL] = Σ_{k=0}^{K} x[n − k]·h[j + kL],   j = 0, 1, ..., L − 1,
where the h[•] sequence is the impulse response of the interpolation filter, and K is the largest value of k for which h[j + kL] is non-zero.
The interpolation filter output sequence is defined by a convolution:
y[m] = Σ_k x_L[k]·h[m − k].
The only terms for which x_L[k] can be non-zero are those for which k is an integer multiple of L. Thus x_L[k] = x[k/L] for those values of k, and the convolution can be rewritten as:
y[m] = Σ_n x[n]·h[m − nL].
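The two-step description above translates directly into a short numerical sketch. The Python/NumPy fragment below is illustrative only; the 9-tap windowed-sinc filter is a placeholder rather than a recommended design.

import numpy as np

def upsample_naive(x, h, L):
    """Zero-stuff x by a factor of L, then convolve with the lowpass filter h."""
    xL = np.zeros(len(x) * L)
    xL[::L] = x                   # expansion: original samples separated by L-1 zeros
    return np.convolve(xL, h)     # interpolation: lowpass filtering replaces the zeros

# Crude example filter with gain L and cutoff 0.5/L (illustration only).
L = 2
n = np.arange(-4, 5)
h = L * np.sinc(n / L) * np.hamming(len(n))
y = upsample_naive(np.array([1.0, 0.5, -0.2, 0.8]), h, L)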
In the case L = 2, h[•] can be designed as a half-band filter, where almost half of the coefficients are zero and need not be included in the dot products. Impulse response coefficients taken at intervals of L form a subsequence, and there are L such subsequences (called phases) multiplexed together. Each of the L phases of the impulse response is filtering the same sequential values of the x data stream and producing one of L sequential output values. In some multi-processor architectures, these dot products are performed simultaneously, in which case it is called a polyphase filter.
For completeness, we now mention that a possible, but unlikely, implementation of each phase is to replace the coefficients of the other phases with zeros in a copy of the h array, and process the x_L sequence at L times faster than the original input rate. Then L − 1 of every L outputs are zero. The desired y sequence is the sum of the L phases, where L − 1 terms of each sum are identically zero. Computing L − 1 zeros between the useful outputs of a phase and adding them to a sum is effectively decimation. It's the same result as not computing them at all. That equivalence is known as the second Noble identity. It is sometimes used in derivations of the polyphase method.
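The efficient form described above can be expressed by splitting h into its L phases and filtering the original (not zero-stuffed) samples with each phase. This is a minimal sketch, under the same illustrative-filter assumption as the previous example.

import numpy as np

def upsample_polyphase(x, h, L):
    """Interpolate x by L using the polyphase decomposition of h."""
    K = int(np.ceil(len(h) / L))
    h_pad = np.pad(h, (0, K * L - len(h)))           # pad so h splits evenly into L phases
    y = np.empty(len(x) * L)
    for j in range(L):
        phase = h_pad[j::L]                           # coefficients h[j], h[j+L], h[j+2L], ...
        y[j::L] = np.convolve(x, phase)[:len(x)]      # one of the L interleaved output streams
    return y

Up to start-up transients, this gives the same output as zero-stuffing followed by a full convolution, while performing only the non-zero multiplications.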
Interpolation filter design
Let X(f) be the Fourier transform of any function, x(t), whose samples at some interval, T, equal the x[n] sequence. Then the discrete-time Fourier transform (DTFT) of the x[n] sequence is the Fourier series representation of a periodic summation of X(f).
When T has units of seconds, f has units of hertz (Hz). Sampling L times faster (at interval T/L) increases the periodicity by a factor of L,
which is also the desired result of interpolation. An example of both these distributions is depicted in the first and third graphs of Fig 2.
When the additional samples are inserted zeros, they decrease the sample-interval to T/L. Omitting the zero-valued terms of the Fourier series, the DTFT of the zero-stuffed sequence can be written as the same sum as before,
which is equivalent to the DTFT of the original x[n] sequence regardless of the value of L. That equivalence is depicted in the second graph of Fig. 2. The only difference is that the available digital bandwidth is expanded to L/T, which increases the number of periodic spectral images within the new bandwidth. Some authors describe that as new frequency components. The second graph also depicts a lowpass filter, resulting in the desired spectral distribution (third graph). The filter's bandwidth is the Nyquist frequency of the original x[n] sequence. In units of Hz that value is 0.5/T, but filter design applications usually require normalized units. (see Fig. 2, table)
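In normalized units the required lowpass therefore has a cutoff of 0.5/L cycles per intermediate (upsampled) sample. A simple windowed-sinc construction, shown here only as an illustration and not as a prescribed design, is:

import numpy as np

def interpolation_filter(L, num_taps=63):
    """Windowed-sinc lowpass with cutoff 0.5/L cycles per (upsampled) sample."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = np.sinc(n / L) * np.hamming(num_taps)   # sinc(n/L) has cutoff 0.5/L
    return L * h / np.sum(h)                    # gain of L compensates the inserted zeros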
Upsampling by a fractional factor
Let L/M denote the upsampling factor, where L > M.
Upsample by a factor of L
Downsample by a factor of M
Upsampling requires a lowpass filter after increasing the data rate, and downsampling requires a lowpass filter before decimation. Therefore, both operations can be accomplished by a single filter with the lower of the two cutoff frequencies. For the L > M case, the interpolation filter cutoff, 0.5/L cycles per intermediate sample, is the lower frequency.
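Putting the pieces together, a rate change by L/M can be sketched as zero-stuffing by L, a single lowpass at the lower of the two cutoffs, and keeping every M-th output. The fragment below is a plain illustration; in practice a polyphase routine (for example SciPy's signal.resample_poly) would normally be used for efficiency.

import numpy as np

def resample_by_rational(x, L, M, h):
    """Change the sample rate of x by the factor L/M using one lowpass filter h.

    h is assumed to have a cutoff of min(0.5/L, 0.5/M) cycles per intermediate
    sample, e.g. the windowed-sinc helper sketched in the previous section."""
    xL = np.zeros(len(x) * L)
    xL[::L] = x               # upsample by L (zero-stuffing)
    y = np.convolve(xL, h)    # interpolate / anti-alias with a single filter
    return y[::M]             # downsample by M (keep every M-th sample)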
See also
Downsampling
Multi-rate digital signal processing
Half-band filter
Oversampling
Sampling (information theory)
Signal (information theory)
Data conversion
Interpolation
Poisson summation formula
Notes
Page citations
References
Further reading
(discusses a technique for bandlimited interpolation)
Digital signal processing
Signal processing | Upsampling | Technology,Engineering | 1,085 |
58,105,173 | https://en.wikipedia.org/wiki/NGC%205018 | NGC 5018 is an elliptical galaxy located in the constellation of Virgo at an approximate distance of 132.51 Mly. NGC 5018 was discovered in 1788 by William Herschel.
Three supernovae have been observed in NGC 5018: SN 2002dj (type Ia, mag. 17), SN 2017isq (type Ia, mag. 15.3), and SN 2021fxy (type Ia, mag. 13.9).
See also
Galaxy
References
External links
5018
Virgo (constellation)
Elliptical galaxies
Astronomical objects discovered in 1788 | NGC 5018 | Astronomy | 119 |
65,978,125 | https://en.wikipedia.org/wiki/Myxococcus%20llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogochensis | Myxococcus llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogochensis is a gram-negative, rod-shaped species of myxobacteria found in soil. It is a predator on other bacteria.
The ends of the rod-shaped vegetative cells taper slightly. The colonies are usually pale brown and show swarming motility. It produces orange, roughly spherical fruiting bodies. A draft sequence of its genome showed significant differences from all previously known species of the genus Myxococcus.
The species was isolated from soil collected near the village of Llanfairpwllgwyngyll, on the island of Anglesey in North Wales, and its specific name was given after the settlement's 58-character lengthened name, which is the longest in Europe.
The scientific name of this bacterial species is considered the longest name in the binomial nomenclature system, bearing 73 letters in total.
The species name has been criticized for not following recommendations in the International Code of Nomenclature of Prokaryotes, which specifies that long and difficult-to-pronounce names should be avoided. Since the International Journal of Systematic and Evolutionary Microbiology plays an important role in nomenclature validation, some critics have argued that the species name could not be considered valid before being published in that journal. With its publication in a list in 2021, the name was confirmed as valid.
See also
List of long species names
References
Myxococcota
Bacteria described in 2020
Llanfairpwllgwyngyll | Myxococcus llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogochensis | Biology | 297 |
63,925,261 | https://en.wikipedia.org/wiki/Quinolizidine%20alkaloids | Quinolizidine alkaloids are natural products that have a quinolizidine structure; this includes the lupine alkaloids.
Occurrence
Quinolizidine alkaloids can be found in the legume family, especially in papilionaceous plants. While the lupine alkaloids, as their name suggests, can be found in lupines, tinctorin, for example, was isolated from dyer's broom.
Examples
More than 200 quinolizidine alkaloids are known; they can be classified into six structural types:
the lupinine type with 34 known structures, including lupinine and its derivatives
the camoensine type with 6 known structures, including camoensin
the sparteine type with 66 structures, including sparteine, lupanine and angustifoline
the α-pyridone type with 25 structures, including anagyrine and cytisine
the matrine type with 31 structures, including matrine
and the ormosanine type with 19 structures, including ormosanine.
Properties
Cytisine is the toxic main alkaloid of laburnum. Similar to nicotine, it has a stimulating to hallucinogenic effect in low doses and a respiratory paralysing effect in higher doses. Cytisine and matrine are active ingredients of the Sophora beans from Mexico and the cow Seng and Shinkyogan drugs from China and Japan.
Quinolizidine alkaloids defend plants against pests and diseases, and breeding to reduce quinolizidine alkaloid (QA) concentrations lowers these resistances. They have various effects on warm-blooded animals and lead to poisoning of grazing livestock (sheep and cattle); cytisine and anagyrine are particularly responsible for this. The effects of poisoning are stimulation, coordination disorders, shortness of breath, cramps and finally death from respiratory paralysis. Anagyrine is also teratogenic. The only quinolizidine alkaloid used therapeutically is sparteine, which has an antiarrhythmic and labor-promoting effect.
References | Quinolizidine alkaloids | Chemistry | 428 |
70,032,359 | https://en.wikipedia.org/wiki/Konda%20Hakuch%C5%8D%20Haniwa%20Production%20Site | The is an archaeological site with the ruins of a Kofun period factory for the production of haniwa clay funerary pottery, located in what is now the Hakucho neighborhood of the city of Habikino in Osaka Prefecture in the Kansai region of Japan. It received protection as a National Historic Site in 1973, with the area under protection expanded in 1975.
Overview
The Konda Hakuchō site is located between the Konda Mitoyama Kofun (tomb of Emperor Ōjin) and the Hakayama Kofun in the Furuichi Kofun Cluster, and was the location where the thousands of haniwa used in these and other burial mounds in the area were produced. The kilns are divided into two groups, with a total of eleven kilns located thus far. Each has a width of about 1.5 meters, a length of about 7 meters, and lies at an inclination of about 12 degrees on the slope of a hill. Only a part of each base, the fire mouth, flue and the ash field have survived. Most of the artifacts found are cylindrical haniwa pieces, but figurative haniwa pieces of various types have also been found. Nearby, the foundation pillars of several raised-floor buildings in orderly rows were found. It is possible that into the Nara period, when haniwa were no longer being produced, the site became the location of the district office for ancient Furuichi District. The remains of a Haji ware workshop from the Nara period have also been found. At present, the site is an archaeological park with one of the kilns restored to its original appearance.
The site is about a 10-minute walk from Furuichi Station on the Kintetsu Railway Minami Osaka Line.
See also
List of Historic Sites of Japan (Osaka)
References
External links
Ibaraki Prefectural Board of Education
Habikino City official site
Kofun period
History of Osaka Prefecture
Habikino
Historic Sites of Japan
Izumi Province
Japanese pottery kiln sites | Konda Hakuchō Haniwa Production Site | Chemistry,Engineering | 417 |
666,313 | https://en.wikipedia.org/wiki/Voodoo%20Science | Voodoo Science: The Road from Foolishness to Fraud is a book published in 2000 by physics professor Robert L. Park, critical of research that falls short of adhering to the scientific method. Other people have used the term "voodoo science", but amongst academics it is most closely associated with Park. Park offers no explanation as to why he appropriated the word voodoo to describe the four categories detailed below. The book is critical of, among other things, homeopathy, cold fusion and the International Space Station.
Categories
Park uses the term voodoo science (see the quote section below, Page 10) as covering four categories which evolve from self-delusion to fraud:
pathological science, wherein genuine scientists deceive themselves
junk science, speculative theorizing which bamboozles rather than enlightens
pseudoscience proper, work falsely claiming to have a scientific basis, which may be dependent on supernatural explanations
fraudulent science, exploiting bad science for the purposes of fraud
Park criticizes junk science as the creature of "scientists, many of whom have impressive credentials, who craft arguments deliberately intended to deceive or confuse."
Examples cited
Perpetual motion, free energy suppression and fringe physics claims
Robert Fludd
Garabed T. K. Giragossian
The Energy Machine of Joseph Newman
Better World Technologies (Dennis Lee)
Blacklight Power, formerly HydroCatalysis (Randell Mills)
Cold fusion (Stanley Pons and Martin Fleischmann)
Patterson Power Cell (James Patterson)
Gravitational shielding (Eugene Podkletnov)
Human spaceflight (in terms of actual importance to science since the rise of robotic spacecraft)
International Space Station (for claims of necessity to conduct scientific research)
Gerard K. O'Neill, L5 Society and space colonization
Robert Zubrin, Mars Society, Biosphere 2 and a human mission to Mars
Voodoo science protected by government secrecy
Project Mogul and the Roswell UFO incident resulting in a loss of public trust, as well as the later alien autopsy video hoax
Edward Teller and Lowell Wood's work on the Strategic Defense Initiative (especially regarding the X-ray laser, but also "Brilliant Pebbles")
Great Oil Sniffer Hoax
Superstitions and pseudoscience
Mars effect (astrology) claimed by Michel Gauquelin
Parapsychology (e.g. Robert G. Jahn and Dean Radin)
Placebos and alternative medicine
Vitamin O
Homeopathy
water memory (proposed by Jacques Benveniste)
Animal magnetism
Magnet therapy
Therapeutic touch (debunked by Emily Rosa at age nine)
Other health claims
Maharishi Effect (using Transcendental Meditation (TM) to effect a decrease in societal violence; the spike in murders during the 1993 Washington D.C. study is specifically mentioned)
Deepak Chopra (who makes claims linking Ayurveda (traditional medicine native to India) with quantum mechanics)
Electromagnetic radiation and health (especially related to power lines and cancer risk)
"Paul Brodeur and Microwave News in particular, had given the public a seriously distorted view of the scientific facts." (Page 158)
Contributing factors
Mainstream media reporting voodoo science uncritically as infotainment
Abolition of the Office of Technology Assessment
Establishment of the National Center for Complementary and Alternative Medicine
Park also discusses the Daubert standard for excluding junk science from litigation.
Quotes
I came to realize that many people choose scientific beliefs the same way they choose to be Methodists, or Democrats, or Chicago Cubs fans. They judge science by how well it agrees with the way they want the world to be. (Pages VIII-IX)
[P]ractitioners [of pseudoscience] may believe it to be science, just as witches and faith healers may truly believe they can call forth supernatural powers. What may begin as an honest error, however, has a way of evolving through almost imperceptible steps from self-delusion to fraud. The line between foolishness and fraud is thin. Because it is not always easy to tell when that line is crossed, I use the term voodoo science to cover them all: pathological science, junk science, pseudoscience and fraudulent science. This book is meant to help the reader to recognize voodoo science and to understand the forces that seem to conspire to keep it alive. (Page 10)
The integrity of science is anchored in the willingness of scientists to test their ideas and results in direct confrontation with their scientific peers. (Page 16)
America's astronauts have been left stranded in low-Earth orbit, like passengers waiting beside an abandoned stretch of track for a train that will never come, bypassed by the advance of science. (Page 91)
Few scientists or inventors set out to commit fraud. In the beginning, most believe they have made a great discovery. But what happens when they finally realize that things are not behaving as they believed? (Page 104)
[T]he uniquely American myth of the self-educated genius fighting against a pompous, close-minded establishment. (Page 112)
They are betting against the laws of thermodynamics. No one has ever won that wager. (Page 138)
Warning signs
Drawing on examples used in Voodoo Science, Park outlined seven warning signs that a claim may be pseudoscientific in a 2003 article for The Chronicle of Higher Education:
Discoverers make their claims directly to the popular media, rather than to fellow scientists.
Discoverers claim that a conspiracy has tried to suppress the discovery.
The claimed effect appears so weak that observers can hardly distinguish it from noise. No amount of further work increases the signal.
Anecdotal evidence is used to back up the claim.
True believers cite ancient traditions in support of the new claim.
The discoverer or discoverers work in isolation from the mainstream scientific community.
The discovery, if true, would require a change in the understanding of the fundamental laws of nature.
Reception
Matt Nisbet in the Skeptical Inquirer noted that the reaction to Voodoo Science has been mostly favorable.
Bob Goldstein in a book review for Nature Cell Biology described Park as an equivalent to Richard Dawkins and Stephen Jay Gould, scientific writers who have "talent for defending a view of the world that is perfectly rational and free of witchcraft and superstition."
American chemist Nicholas Turro wrote "the book is entertaining and provocative reading... Whether or not you agree with Park's take on voodoo science, a message of the book is that if scientists do not take a more significant role in the way that science is disseminated to the public and especially to politicians, voodoo science will continue to survive."
The mathematician Malcolm Sherman in the American Scientist gave the book a positive review stating "Park does more than analyze and expose various kinds of bad ("voodoo") science. He demonstrates how valid science is distorted or ignored by the media and by those (including scientists) seeking to influence public policy." The physicist Kenneth R. Foster also positively reviewed the book concluding "Park is an articulate and skeptical voice of reason about science."
Reviewing the book for The New York Times, Ed Regis compared it positively to the 1957 book by Martin Gardner, Fads and Fallacies in the Name of Science, calling Voodoo Science a "worthy successor" and praising it for explaining why various purportedly scientific claims were in fact impossible. Science writer Kendrick Frazier wrote "Robert Park has brought us a book that has a freshness and originality—and an importance and potential for influence—perhaps not seen since Gardner’s first."
Robin McKie for The Observer described it as "an admirable analysis: wittily written, vivid and put together without a hint of malice."
Rachel Hay in a review wrote that Park had "debunked expertly" pseudoscience topics such as homeopathy, cold fusion and perpetual motion machines, but that the book is not easily accessible to students. However, S. Elizabeth Bird, an anthropology professor, recommended it for "students who need to establish a grasp of the scientific method."
Bruce Lewenstein wrote a critical review claiming that Park had lumped pathological science, junk science, pseudoscience and fraud together as voodoo science, which is problematic because "each category alone is fraught with definitional, historical, and analytical difficulties." Brian Josephson wrote that the book, while giving "the official story regarding a number of 'mistaken beliefs'", did not provide "the additional information that might lead one to conclude that the official view does not tell the whole story."
See also
Antiscience
Cargo cult science
Denialism
Politicization of science
Scientific misconduct
Scientific skepticism
List of books about the politics of science
List of cognitive biases
List of experimental errors and frauds in physics
List of topics characterized as pseudoscience
Quackery
Debunking
1023 Campaign
Flim-Flam!
Frye standard
References
External links
"The rock that fell to Earth". The Verge.
2000 non-fiction books
Fringe physics
Popular science books
Scientific misconduct
Scientific skepticism
Scientific skepticism mass media | Voodoo Science | Technology | 1,827 |
46,984,994 | https://en.wikipedia.org/wiki/Neugrund%20breccia | Neugrund breccia is a type of rock consisting of gneissic breccia and amphibolite originating from the Neugrund crater. Neugrund breccia is different from Ordovician breccia, which is found in a similar region but was formed millions of years later after a different meteor strike.
Neugrund breccia formed through the cementation of rock fragments produced by the meteorite impact. Glacial action distributed erratics of breccia throughout an area of over 10,000 km2 surrounding the impact site. Boulders of Neugrund breccia can be found in north-western Estonia and are especially concentrated around the island of Osmussaar.
The largest known Neugrund breccia formation is Skarvan. It is located near the west coast of Osmussaar.
References
Breccias | Neugrund breccia | Materials_science | 179 |
4,157,586 | https://en.wikipedia.org/wiki/Ditrigonal%20dodecadodecahedron | In geometry, the ditrigonal dodecadodecahedron (or ditrigonary dodecadodecahedron) is a nonconvex uniform polyhedron, indexed as U41. It has 24 faces (12 pentagons and 12 pentagrams), 60 edges, and 20 vertices. It has extended Schläfli symbol b{5,}, as a blended great dodecahedron, and Coxeter diagram . It has 4 Schwarz triangle equivalent constructions, for example Wythoff symbol 3 | 5, and Coxeter diagram .
Related polyhedra
Its convex hull is a regular dodecahedron. It additionally shares its edge arrangement with the small ditrigonal icosidodecahedron (having the pentagrammic faces in common), the great ditrigonal icosidodecahedron (having the pentagonal faces in common), and the regular compound of five cubes.
Furthermore, it may be viewed as a facetted dodecahedron: the pentagrammic faces are inscribed in the dodecahedron's pentagons. Its dual, the medial triambic icosahedron, is a stellation of the icosahedron.
It is topologically equivalent to a quotient space of the hyperbolic order-6 pentagonal tiling, by distorting the pentagrams back into regular pentagons. As such, it is a regular polyhedron of index two.
See also
List of uniform polyhedra
References
External links
Uniform polyhedra | Ditrigonal dodecadodecahedron | Physics | 306 |
38,900,381 | https://en.wikipedia.org/wiki/Tricholoma%20griseoviolaceum | Tricholoma griseoviolaceum is a mushroom of the agaric genus Tricholoma. It was described as new to science in 1996.
The cap ranges from in diameter; it is purplish gray with a dark center, and brownish gray in age. The stalk is long and 1–2 cm wide. The flesh is whitish gray. The spores are white. The odor and taste resemble cucumbers. Its edibility is unknown.
Similar species include Tricholoma atroviolaceum, T. portentosum, and T. virgatum.
See also
List of North American Tricholoma
List of Tricholoma species
References
griseoviolaceum
Fungi described in 1996
Fungi of North America
Fungus species | Tricholoma griseoviolaceum | Biology | 157 |
20,268,944 | https://en.wikipedia.org/wiki/Simion%20Stoilow%20Prize | The Simion Stoilow Prize () is the prize offered by the Romanian Academy for achievements in mathematics. It is named in honor of Simion Stoilow.
The prize is awarded either for a mathematical work or for a cycle of works.
The award consists of 2,000 lei and a diploma. The prize was established in 1963 and is awarded annually. Prizes of the Romanian Academy for a particular year are awarded two years later.
Honorees
Honorees of the Simion Stoilow Prize have included:
2020: Victor Daniel Lie
2019: Marius Ghergu; Bogdan Teodor Udrea
2018: Iulian Cîmpean
2017: Aurel Mihai Fulger
2016: Arghir Dani Zărnescu
2015: No award
2014: Florin Ambro
2013: Petru Jebelean
2012: George Marinescu
2011: Dan Timotin
2010: Laurențiu Leuștean; Mihai Mihăilescu
2009: Miodrag Iovanov; Sebastian Burciu
2008: Nicolae Bonciocat; Călin Ambrozie
2007: Cezar Joița; Bebe Prunaru; Liviu Ignat
2006: Radu Pantilie
2005: Eugen Mihăilescu, for the work "Estimates for the stable dimension for holomorphic maps"; Radu Păltânea, for the cycle of works "Approximation theory using positive linear operators"
2000: Liliana Pavel, for the book Hipergrupuri ("Hypergroups")
1999: Vicențiu Rădulescu for the work "Boundary value problems for nonlinear elliptic equations and hemivariational inequalities"
1995: No award
1994: No award
1993: No award
1992: Florin Rădulescu
1991: Ovidiu Cârjă
1990: Ștefan Mirică
1989: Gelu Popescu
1988: Cornel Pasnicu
1987: Călin-Ioan Gheorghiu; Titus Petrila
1986: Vlad Bally; Paltin Ionescu
1985: Vasile Brânzănescu; Paul Flondor; Dan Polisevschi; Mihai Putinar
1984: Toma Albu; ; Dan Vuza
1983: Mircea Puta; Ion Chițescu; Eugen Popa
1982: Mircea Craioveanu; Mircea Puta
1981: Lucian Bădescu
1980: Dumitru Gașpar; Costel Peligrad; Mihai Pimsner; Sorin T. Popa
1979: Dumitru Motreanu; Dorin Popescu; Ilie Valusescu
1978: Aurel Bejancu; Gheorghe Micula
1977: Alexandru Brezuleanu; Nicolae Radu;
1976: Zoia Ceaușescu; Ion Cuculescu; Nicolae Popa
1975: Șerban Strătilă; Elena Stroescu;
1974: Ioana Ciorănescu; Dan Pascali; Constantin Vârsan
1973: Vasile Istrătescu; Ioan Marusciac; ; Veniamin Urseanu
1972: Bernard Bereanu; Nicolae Pavel; Gustav Peeters; Elena Moldovan Popoviciu
1971: Nicolae Popescu
1970: Viorel Barbu;
1969: Ion Suciu
1968:
1967: Constantin Apostol
1966: Dan Burghelea; Cabiria Andreian Cazacu;
1965: ; Alexandru Lascu
1964: ;
1963: ;
See also
List of mathematics awards
References
Prizes of the Romanian Academy
Mathematics awards | Simion Stoilow Prize | Technology | 712 |
22,996,180 | https://en.wikipedia.org/wiki/Quasistatic%20approximation | Quasistatic approximation(s) refers to different domains and different meanings. In the most common acceptance, quasistatic approximation refers to equations that keep a static form (do not involve time derivatives) even if some quantities are allowed to vary slowly with time. In electromagnetism it refers to mathematical models that can be used to describe devices that do not produce significant amounts of electromagnetic waves. For instance the capacitor and the coil in electrical networks.
Overview
The quasistatic approximation can be understood through the idea that the sources in the problem change sufficiently slowly that the system can be taken to be in equilibrium at all times. This approximation can then be applied to areas such as classical electromagnetism, fluid mechanics, magnetohydrodynamics, thermodynamics, and more generally systems described by hyperbolic partial differential equations involving both spatial and time derivatives. In simple cases, the quasistatic approximation is allowed when the typical spatial scale divided by the typical temporal scale is much smaller than the characteristic velocity with which information is propagated. The problem gets more complicated when several length and time scales are involved. In the strict acceptance of the term the quasistatic case corresponds to a situation where all time derivatives can be neglected. However some equations can be considered as quasistatic while others are not, leading to a system still being dynamic. There is no general consensus in such cases.
Fluid dynamics
In fluid dynamics, only quasi-hydrostatics (in which no time-derivative term is present) is considered a quasistatic approximation; flows, as well as acoustic wave propagation, are usually treated as dynamic.
Thermodynamics
In thermodynamics, a distinction between quasistatic regimes and dynamic ones is usually made in terms of equilibrium thermodynamics versus non-equilibrium thermodynamics. As in electromagnetism some intermediate situations also exist; see for instance local equilibrium thermodynamics.
Electromagnetism
In classical electromagnetism, there are at least two consistent quasistatic approximations of Maxwell equations: quasi-electrostatics and quasi-magnetostatics depending on the relative importance of the two dynamic coupling terms. These approximations can be obtained using time constants evaluations or can be shown to be Galilean limits of electromagnetism.
Retarded times point of view
In magnetostatics equations such as Ampère's Law or the more general Biot–Savart law allow one to solve for the magnetic fields produced by steady electrical currents. Often, however, one may want to calculate the magnetic field due to time varying currents (accelerating charge) or other forms of moving charge. Strictly speaking, in these cases the aforementioned equations are invalid, as the field measured at the observer must incorporate distances measured at the retarded time, that is the observation time minus the time it took for the field (traveling at the speed of light) to reach the observer. The retarded time is different for every point to be considered, hence the resulting equations are quite complicated; often it is easier to formulate the problem in terms of potentials; see retarded potential and Jefimenko's equations.
From this point of view, the quasistatic approximation is obtained by using the present time instead of the retarded time, or equivalently by assuming that the speed of light is infinite. To first order, the errors from using only the Biot–Savart law rather than both terms of Jefimenko's magnetic field equation fortuitously cancel.
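Schematically, in the standard notation for retarded potentials (adopted here only for illustration), the retarded time and its quasistatic limit are

t_r = t - \frac{|\mathbf{r} - \mathbf{r}'|}{c}, \qquad \lim_{c \to \infty} t_r = t,

so that in the limit of infinite propagation speed every source point is evaluated at the observation time and the magnetostatic (Biot–Savart) form is recovered.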
Notes
Electromagnetism
Concepts in physics | Quasistatic approximation | Physics | 720 |
430,792 | https://en.wikipedia.org/wiki/Necklace%20polynomial | In combinatorial mathematics, the necklace polynomial, or Moreau's necklace-counting function, introduced by , counts the number of distinct necklaces of n colored beads chosen out of α available colors, arranged in a cycle. Unlike the usual problem of graph coloring, the necklaces are assumed to be aperiodic (not consisting of repeated subsequences), and counted up to rotation (rotating the beads around the necklace counts as the same necklace), but without flipping over (reversing the order of the beads counts as a different necklace). This counting function also describes the dimensions in a free Lie algebra and the number of irreducible polynomials over a finite field.
Definition
The necklace polynomials are a family of polynomials M(α, n) in the variable α such that
α^n = Σ_{d | n} d · M(α, d).
By Möbius inversion they are given by
M(α, n) = (1/n) Σ_{d | n} μ(n/d) · α^d,
where μ is the classic Möbius function.
A closely related family, called the general necklace polynomial or general necklace-counting function, is:
N(α, n) = (1/n) Σ_{d | n} φ(d) · α^{n/d},
where φ is Euler's totient function.
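These formulas can be evaluated directly. The pure-Python sketch below is illustrative (the helper functions are written out here and are not part of the article); it can be checked against the example sequence given later.

from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    """Moebius function: 0 if n has a squared prime factor, else (-1)**(number of prime factors)."""
    if n == 1:
        return 1
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

def totient(n):
    """Euler's totient function (naive count of integers coprime to n)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def M(alpha, n):
    return sum(mobius(n // d) * alpha**d for d in divisors(n)) // n

def N(alpha, n):
    return sum(totient(d) * alpha**(n // d) for d in divisors(n)) // n

# For alpha = 2 and n = 1, 2, 3, ... this gives 2, 1, 2, 3, 6, 9, 18, ...,
# and N(alpha, n) equals sum(M(alpha, d) for d in divisors(n)) for every n.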
Applications
The necklace polynomials M(α, n) and N(α, n) appear as:
The number of aperiodic necklaces (or equivalently Lyndon words), which are cyclic arrangements of n colored beads having α available colors. Two such necklaces are considered equal if they are related by a rotation (not considering reflections). Aperiodic refers to necklaces without rotational symmetry, having n distinct rotations. Correspondingly, N(α, n) gives the number of necklaces including the periodic ones: this is easily computed using Pólya theory.
The dimension of the degree n component of the free Lie algebra on α generators ("Witt's formula"), or equivalently the number of Hall words of length n. Correspondingly, N(α, n) should be the dimension of the degree n component of a free Jordan algebra.
The number of monic irreducible polynomials of degree n over a finite field with α elements (when α is a prime power). Correspondingly, N(α, n) is the number of monic polynomials of degree n which are primary (a power of an irreducible).
The exponents in the cyclotomic identity: 1/(1 − αz) = Π_{j ≥ 1} (1/(1 − z^j))^{M(α, j)}.
Although these various types of objects are all counted by the same polynomial, their precise relationships remain unclear. For example, there is no canonical bijection between the irreducible polynomials and the Lyndon words. However, there is a non-canonical bijection as follows. For any degree n monic irreducible polynomial over a field F with α elements, its roots lie in a Galois extension field L with α^n elements. One may choose an element x ∈ L such that {x, σx, ..., σ^{n−1}x} is an F-basis for L (a normal basis), where σ is the Frobenius automorphism σ(y) = y^α. Then the bijection can be defined by taking a necklace, viewed as an equivalence class of functions f : Z/nZ → F, to the irreducible polynomial φ(T) = (T − y)(T − σy) ⋯ (T − σ^{n−1}y) for y = Σ_{i ∈ Z/nZ} f(i) σ^i x. Different cyclic rearrangements of f, i.e. different representatives of the same necklace equivalence class, yield cyclic rearrangements of the factors of φ(T), so this correspondence is well-defined.
Relations between M and N
The polynomials for M and N are easily related in terms of Dirichlet convolution of arithmetic functions, regarding α as a constant.
The formula for M gives n·M(α, n) = Σ_{d | n} μ(n/d) α^d, i.e. nM = μ ∗ G, where G(n) = α^n.
The formula for N gives n·N(α, n) = Σ_{d | n} φ(n/d) α^d, i.e. nN = φ ∗ G.
Their relation gives N = 1 ∗ M, i.e. N(α, n) = Σ_{d | n} M(α, d), or equivalently nN = Id ∗ (nM), since the function Id(n) = n is completely multiplicative.
Any two of these imply the third, for example:
nN = φ ∗ G = (μ ∗ Id) ∗ G = Id ∗ (μ ∗ G) = Id ∗ (nM)
by cancellation in the Dirichlet algebra.
Examples
For α = 2, starting with length zero, the M(2, n) values form the integer sequence
1, 2, 1, 2, 3, 6, 9, 18, 30, 56, 99, 186, 335, ...
Identities
The polynomials obey various combinatorial identities, given by Metropolis & Rota. For example:
M(αβ, n) = Σ_{lcm(i, j) = n} gcd(i, j) · M(α, i) · M(β, j),
where "gcd" is greatest common divisor and "lcm" is least common multiple. Metropolis and Rota also prove more general identities of the same kind, from which further identities follow.
References
Combinatorics on words
Enumerative combinatorics | Necklace polynomial | Mathematics | 780 |
5,918,814 | https://en.wikipedia.org/wiki/Book%20of%20Roads%20and%20Kingdoms | The Book of Roads and Kingdoms (, Kitāb al-Masālik waʿl-Mamālik) is a group of Islamic manuscripts composed from the Middle Ages to the early modern period. They emerged from the administrative tradition of listing pilgrim and post stages. Their text covers the cities, roads, topography, and peoples of the Muslim world, interspersed with personal anecdotes. A theoretical explanation of the "Inhabited Quarter" of the world, comparable to the ecumene, frames the world with classical concepts like the seven climes.
The books include illustrations so geometric that they are barely recognizable as maps. These schematic maps do not attempt a mimetic depiction of physical boundaries. With little change in design, the treatises typically offer twenty regional maps and a disc-shaped map of the world surrounded by the Encircling Ocean. The maps have a flat quality, but the textual component implies a spherical Earth. Andalusi scholar Abi Bakr Zuhri explained, "Their objective is the depiction of the earth, even if it does not correspond to reality. Because the earth is spherical but the [map] is simple".
The first, incomplete Kitāb al-Masālik wa'l-Mamālik by Ja‘far ibn Ahmad al-Marwazi is now lost. The earliest surviving version was written by Ibn Khordadbeh circa 870 CE, during the reigns of Abbasid caliphs al-Wathiq and al-Mu'tamid. The earliest known version of the idiosyncratic cartography was composed by al-Istakhri circa 950 CE, although only copies by later artists survive. As he was a follower of Abu Zayd al-Balkhi, this style of map-making is often referred to as the "Balkhī school", or the "Classical School".
Leiden University Libraries holds مختصر كتاب المسالك والممالك لابي اسحاق ابراهيم بن محمد الاصطخري / World map in a summary of Kitab al-masalik wa’l mamalik, MS Or. 3101, 1193.
The maps are sometimes called the "Atlas of Islam", or abbreviated as KMMS maps. This tradition of mapping appears in related works including Ibn Hawqal's Ṣūrat al-’Arḍ (; "The face of the Earth").
Works
Book of Roads and Kingdoms, written in the 9th century by Ibn Khordadbeh.
, written in the early 10th century by Istakhri.
Book of Roads and Kingdoms, written in the mid 11th century by al-Bakri in Spain.
, written in the 10th century by Ibn Hawqal.
, written in the 10th century by .
Book of Roads and Kingdoms, written in the 10th century by Muhammad ibn Yūsuf al-Warrāq.
Book of Roads and Kingdoms, written in the 9th century by Ahmad ibn al-Harith al-Kharraz (al-Khazzaz).
Book of Roads and Kingdoms, written in the 10th century by Abu Abdallah Muhammad ibn Ahmad al-Jayhani.
Gallery
See also
Geography and cartography in the medieval Islamic world
Surat Al-Ard
Notes
References
Atlases
Geographic information systems
Geographical works of the medieval Islamic world
Geography books
Historic maps of the world
History of geography
Maps | Book of Roads and Kingdoms | Technology | 722 |
9,655,587 | https://en.wikipedia.org/wiki/Society%20for%20Applied%20Spectroscopy | The Society for Applied Spectroscopy (SAS) is an organization promoting research and education in the fields of spectroscopy, optics, and analytical chemistry. Founded in 1958, it is currently headquartered in Albany, New York. In 2006 it had about 2,000 members worldwide.
SAS is perhaps best known for its technical conference with the Federation of Analytical Chemistry and Spectroscopy Societies and short courses on various aspects of spectroscopy and data analysis. The society publishes the scientific journal Applied Spectroscopy.
SAS is affiliated with American Institute of Physics (AIP), the Coblentz Society, the Council for Near Infrared Spectroscopy (CNIRS), Federation of Analytical Chemistry and Spectroscopy Societies (FACSS), The Instrumentation, Systems, and Automation Society (ISA), and Optica.
SAS provides a number of awards with honoraria to encourage and recognize outstanding achievements.
See also
Spectroscopy
American Institute of Physics (AIP)
The Instrumentation, Systems, and Automation Society (ISA)
Optical Society of America (OSA)
References
External links
Coblentz
Council for Near Infrared Spectroscopy (CNIRS)
Federation of Analytical Chemistry and Spectroscopy Societies (FACSS)
Scientific societies based in the United States
Spectroscopy
Analytical chemistry | Society for Applied Spectroscopy | Physics,Chemistry | 237 |
61,448,894 | https://en.wikipedia.org/wiki/C9H11Cl2N | The molecular formula C9H11Cl2N may refer to:
2,4-Dichloroamphetamine
3,4-Dichloroamphetamine | C9H11Cl2N | Chemistry | 53 |
12,731,316 | https://en.wikipedia.org/wiki/HD%20231701 | HD 231701 is a yellow-white hued star in the northern constellation of Sagitta, near the southern constellation border with Aquila. With an apparent visual magnitude of 8.97, it is too dim to be viewed with the naked eye, but can be seen with powerful binoculars or a small telescope. Parallax measurements provide a distance estimate of approximately 356 light years from the Sun, but it is drifting closer with a radial velocity of −63 km/s. It is predicted to come as close as in 1.345 million years.
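As a quick illustration of how a parallax-based distance like the one quoted above is obtained, the relation d[parsec] = 1/p[arcsecond] can be evaluated directly; the parallax value used below is back-calculated from the quoted distance and is only an approximation, not a catalogued figure.

# Distance from parallax: d [parsec] = 1 / p [arcsec].
parallax_mas = 9.2                               # assumed, approximate value in milliarcseconds
distance_pc = 1.0 / (parallax_mas / 1000.0)      # about 108.7 parsecs
distance_ly = distance_pc * 3.2616               # 1 parsec is about 3.2616 light years
print(round(distance_ly))                        # about 355 light years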
HD 231701 is named Uruk. The name was selected in the NameExoWorlds campaign by Iraq, during the 100th anniversary of the IAU. Uruk was an ancient city of the Sumer and Babylonian civilizations in Mesopotamia.
This object is an ordinary F-type main-sequence star with a stellar classification of F8 V. It is around 3 to 4.5 billion years old and may be evolving onto the subgiant branch. It is spinning with a projected rotational velocity of 4 km/s and has low chromospheric activity. HD 231701 has 1.2 times the mass of the Sun and 1.45 times the Sun's radius. It is radiating 2.6 times the luminosity of the Sun from its photosphere at an effective temperature of 6,081 K.
In 2007, the N2K Consortium used the radial velocity technique to discover a Jupiter-like planet orbiting at a distance of from the star with a period of 141.6 days.
See also
List of extrasolar planets
References
External links
F-type main-sequence stars
Planetary systems with one confirmed planet
Sagitta
Durchmusterung objects
231701
096078 | HD 231701 | Astronomy | 363 |
26,894,908 | https://en.wikipedia.org/wiki/British%20Fluid%20Power%20Association | The British Fluid Power Association is a trade association in the United Kingdom that represents the hydraulic and pneumatic equipment industry, utilising properties of fluid power.
History
It started in 1959 as AHEM, becoming BFPA in 1986. A division of the organisation, the British Fluid Power Distributors Association (BFPDA) was formed in 1989.
Structure
It is based in Chipping Norton in Oxfordshire, just off the northern spur of the A44 in the north-east of the town. There are three types of membership: Full, Associate and Education.
Function
It acts as a marketing organisation (mostly abroad) for the industry and collects industry-wide statistics. Its technical committees also help in implementation and origination of standards for the BSI Group.
It represents companies involved with:
Electrohydraulics (e.g. power steering)
Pneumatic controls
Motion control
Linear motion
Hydraulic accumulators
Hydraulic pumps and Hydraulic motors
Valves
Pneumatic and hydraulic cylinders
Hydraulic seals
Hose and fittings
Marketing and industry statistical information
See also
National Fluid Power Association
International Association of Hydraulic Engineering and Research
References
External links
BFPA
IFPEX
Hydraulic engineering organizations
Organisations based in Oxfordshire
West Oxfordshire District
Organizations established in 1959
Fluid power
Trade associations based in the United Kingdom
1959 establishments in the United Kingdom | British Fluid Power Association | Physics,Engineering | 257 |
44,897,299 | https://en.wikipedia.org/wiki/Hydroxyprogesterone%20acetate | Hydroxyprogesterone acetate (OHPA), sold under the brand name Prodox, is an orally active progestin related to hydroxyprogesterone caproate (OHPC) which has been used in clinical and veterinary medicine. It has reportedly also been used in birth control pills.
OHPA is a progestin, or a synthetic progestogen, and hence is an agonist of the progesterone receptor, the biological target of progestogens like progesterone.
OHPA was discovered in 1953 and was introduced for medical use in 1956.
Medical uses
OHPA has been used in the treatment of a variety of gynecological disorders, including secondary amenorrhea, functional uterine bleeding, infertility, habitual abortion, dysmenorrhea, and premenstrual syndrome.
OHPA (100 mg) was reportedly marketed in combination with mestranol (80 μg) as a sequential combined birth control pill under the brand name Hormolidin. The preparation was available in the early 1970s. The firm that manufactured it, known as Gador, was based in Argentina.
Available forms
Side effects
Pharmacology
Pharmacodynamics
OHPA is a progestogen and acts as an agonist of the progesterone receptor (PR), both PRA and PRB isoforms (IC50 = 16.8 nM and 12.6 nM, respectively). It has more than 50-fold higher affinity for the PR isoforms than 17α-hydroxyprogesterone, a little less than half the affinity of progesterone, and slightly higher affinity than OHPC. Additional studies have reported on the affinity of OHPA for the PR.
OHPA is of relatively low potency as a progestogen, which may explain its relatively limited use. It is 100-fold less potent than medroxyprogesterone acetate, 400-fold less potent than chlormadinone acetate, and 1,200-fold less potent than cyproterone acetate in animal assays. In terms of producing full progestogenic changes on the endometrium in women, 75 to 100 mg/day oral OHPA is equivalent to 20 mg/day parenteral progesterone, and OHPA is at least twice as potent as oral ethisterone in such regards. It is also reportedly more potent than OHPC. OHPA has been found to be effective as an oral progestogen-only pill at a dosage of 30 mg/day.
Pharmacokinetics
OHPA has very low but nonetheless significant oral bioavailability and can be taken by mouth. The pharmacokinetics of OHPA have been reviewed.
A single intramuscular injection of 150 to 350 mg OHPA in microcrystalline aqueous suspension has been found to have a duration of action of 9 to 16 days in terms of clinical biological effect in the uterus in women.
Chemistry
OHPA, also known as 17α-hydroxyprogesterone acetate or as 17α-acetoxypregn-4-ene-3,20-dione, is a synthetic pregnane steroid and a derivative of progesterone. It is the acetate ester of 17α-hydroxyprogesterone, as well as a parent compound of a number of progestins including chlormadinone acetate, cyproterone acetate, medroxyprogesterone acetate, and megestrol acetate.
Synthesis
Chemical syntheses of OHPA have been described.
History
In 1949, it was discovered that 17α-methylprogesterone had twice the progestogenic activity of progesterone when administered parenterally, and this finding led to renewed interest in 17α-substituted derivatives of progesterone as potential progestins. Along with OHPC, OHPA was synthesized by Karl Junkmann of Schering AG in 1953 and was first reported by him in the medical literature in 1954. OHPC shows very low oral activity and was introduced for use via intramuscular injection by Squibb in 1956 under the brand name Delalutin. Although a substantial prolongation of action occurs when OHPC is formulated in oil, the same was not observed to a significant extent with OHPA, and this is likely why OHPC was chosen by Schering for development over OHPA.
Subsequently, Upjohn unexpectedly discovered that OHPA, unlike OHPC and progesterone, is orally active and shows marked progestogenic activity with oral administration, a finding that had been missed by the Schering researchers (who were primarily interested in the oil solubility of such esters). OHPA was found to possess two to three times the oral activity of 17α-methylprogesterone. Upjohn reported the oral activity of OHPA in the medical literature in 1957 and introduced the drug for medical use as Prodox in 25 mg and 50 mg oral tablet formulations later the same year. OHPA was indicated for the treatment of a variety of gynecological disorders in women. However, it saw relatively little use, which was perhaps due to its comparatively low potency relative to a variety of other progestins such as medroxyprogesterone acetate and norethisterone. These progestins were introduced around the same time and hence may have been favored.
In 1960, OHPA was also introduced as Prodox as an oral progestin for veterinary use for the indication of estrus suppression in dogs. However, probably due to its high cost and the inconvenience of daily oral administration, the drug was not a market success. It was superseded for this indication by medroxyprogesterone acetate (brand name Promone) in 1963, which could be administered by injection conveniently once every six months, although this preparation was discontinued in 1966 for various reasons and hence was not a market success either.
Society and culture
Generic names
Hydroxyprogesterone acetate is the generic name of the drug and its .
Brand names
OHPA is or was marketed under the brand name Prodox initially for clinical use and then for veterinary use. Other brand names of OHPA include Gestageno, Gestageno Gador, Kyormon, Lutate-Inj, Prodix, and Prokan. OHPA may also be or have been marketed in combination with estradiol enantate under the brand names Atrimon and Protegin in Argentina and Nicaragua.
Availability
OHPA is no longer marketed and hence is no longer available in any country.
See also
Mestranol/hydroxyprogesterone acetate
References
Abandoned drugs
Acetate esters
Enones
Pregnanes
Progestogen esters
Progestogens
Veterinary drugs | Hydroxyprogesterone acetate | Chemistry | 1,436 |
1,909,668 | https://en.wikipedia.org/wiki/3-ring%20release%20system | The 3-ring release system is a parachute component that is widely used by sport skydivers and military freefall parachutists to attach the two risers of a main parachute to the harness that bears the load under the parachute.
Invented in its original large ring form by Bill Booth, and subsequently scaled down for thinner Type 17 webbing risers the three-ring system allows a skydiver to quickly cut-away a malfunctioning main parachute with a single motion. Skydivers usually need to do this quickly during emergencies in which they need to deploy a reserve parachute. The three-ring system is simple, inexpensive, reliable, and requires fewer operations than earlier parachute release systems while reducing the physical force needed.
The large bottom ring is securely attached to the skydiver's harness, the middle ring is securely attached to the end of the parachute riser, and the small ring is securely attached to the parachute riser above the middle ring. The middle ring is passed through the large ring and looped upwards; the small ring is then passed through the middle ring and looped upwards. Continuing in the same manner, a cord loop is passed through the small ring, looped upwards, and finally passes through a grommet to the opposing side of the parachute riser. A semi-rigid cable attached to a release handle then passes through this loop, securing the loop. Releasing the cord loop by removing the cable with a tug causes the three-ring system to cascade free and quickly disconnect the riser from the harness.
Each ring in the series multiplies the mechanical advantage of the loop of cord that is held in place by the semi-rigid cable (a Lolon-F or Teflon impregnated plastic coated steel cable, depending on manufacturer).
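To see why the cascade matters, consider a purely illustrative calculation; the per-stage leverage ratios and the riser load below are assumed round numbers, not manufacturer figures.

# Hypothetical leverage ratios for each stage of the 3-ring cascade (illustrative only).
riser_load_kg = 100          # load carried by one riser (assumed)
stage_ratios = [10, 10, 2]   # assumed mechanical advantage of each successive stage

force = riser_load_kg
for ratio in stage_ratios:
    force /= ratio           # each stage reduces the force held by the next element

print(force)                 # with these assumptions the cord loop holds about 0.5 kg,
                             # which is why the release cable can be pulled out easily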
Variations
There are a few different variations of the 3-ring system. The original 3-ring release from the late 1970s is now known as large 3-rings. A version using smaller rings (mini rings) was introduced in the 1980s. The reasons for the development of the mini ring system and the associated smaller risers were mostly aesthetic; the mini rings do not increase safety but actually reduce the mechanical advantage inherent in the system thereby increasing the pull force a jumper must apply to cut-away. Tandem systems still use rings that are even larger than the original rings, and some tandem rigs even use four rings (e.g., Advance Tandem by Basik). Other variations have placed the rings under the risers facing back instead of forward of the risers facing front or varied the geometry of the rings for example using an elongated middle ring for a claimed improvement in mechanical advantage on Aerodyne's miniforce system.
Safety concerns
Since the introduction of the 3-ring system, variations in the design have raised safety concerns. For example, the move to mini rings and mini risers caused riser failures on some designs until riser strength was improved. The failure of some manufacturers to include stiff riser inserts and other hard-housing cable guides, which allow the free movement of the cutaway cable when risers and webbing are twisted, has made it difficult to cut away from malfunctions involving riser twists or harness deformation. The tolerance in the manufacture of the fabric risers and their connection to the rings is critical in maintaining the mechanical advantage of the 3-ring system, and this has been compromised in some designs. Reversed risers, which place the rings under the risers, have prevented the rings from moving freely and releasing in some cutaway scenarios. Replacement of the Lolon-F coating with Teflon-impregnated compounds has caused both cracking and cable-stripping problems.
Maintenance
Regular maintenance of the 3-ring system and risers is essential. Manufacturers recommend that the risers be disconnected from the harness and flexed, that the rings be checked for cracks or corrosion, and that the cable be removed from the housing, cleaned, and lubricated, typically with a silicone-based lubricant.
References
Parachuting | 3-ring release system | Engineering | 812 |
34,286,437 | https://en.wikipedia.org/wiki/Digital%20sensor | A digital sensor is an electronic or electrochemical sensor, where data is digitally converted and transmitted. Sensors are often used for analytical measurements, e.g. the measurement of chemical and physical properties of liquids. Examples of measured parameters are pH value, conductivity, oxygen, redox potentials. Such measurements are used in the industrialized world and give vital input for process control.
Analog sensors were used in the past, but digital sensors have come to dominate in the age of microprocessors. The differences between the two types, and the reasons for the development of digital sensors, are discussed below.
General aspects
Digital sensors are the modern successors of analog sensors. They are progressively replacing analog sensors because they overcome the traditional drawbacks of analog sensor systems.
History
Electronic and electrochemical sensors are typically one part of a measuring chain. A measuring chain comprises the sensor itself, a cable, and a transmitter.
In the traditional analog systems, the sensor converts the measuring parameter (e.g. pH value) into an analog electrical signal. This analog electrical signal is connected to a transmitter via a cable. The transmitter transforms the electrical signal into a readable form (display, current outputs, bus data transmission, etc.).
The sensor and the cable often are not connected permanently, but through electrical connectors.
This classical design with connectors and transmission of small currents through a cable has four main drawbacks:
1) Humidity and corrosion of the connector falsify the signal.
2) The cable must be shielded and of very high quality to prevent the measuring signal from being altered by electromagnetic noise.
3) The sensor cannot be calibrated or adjusted until installation, because the influence of the cable (length, resistance, impedance) cannot be neglected.
4) The cable length is limited.
Use and design
Digital sensors have been developed to overcome the traditional disadvantages of analog sensors.
Digital sensors are widely used in water and industrial processes. They measure parameters such as pH, redox potential, conductivity, dissolved oxygen, ammonium, nitrate, SAC, turbidity.
A digital sensor system consists of the sensor itself, a cable, and a transmitter. The differences with analog sensor systems are:
a) The sensor has an electronic chip. The measuring signal is directly converted into a digital signal inside the sensor. The data transmission through the cable is also digital. This digital data transmission is unaffected by cable length, cable resistance or impedance, and is not influenced by electromagnetic noise. Standard cables can be used.
b) The connection between sensor and cable can be contactless, using inductive coupling, so humidity and the associated corrosion are no longer an issue. Alternatively, fibre-optic cables may be an option for long or electromagnetically hostile connections.
c) The sensor can be calibrated apart from the system.
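As a rough illustration of point (a) above, the sketch below models a sensor head that digitizes the electrode voltage internally and sends a framed digital value to the transmitter. The 12-bit resolution, the voltage range, and the checksum framing are assumptions chosen for illustration, not a real sensor protocol.

```python
# Minimal sketch of a digital sensor head: the analog electrode voltage is
# digitized inside the sensor, and only a framed digital value is sent down
# the cable. Resolution, scaling and checksum are illustrative assumptions.

def digitize(voltage, v_min=-0.5, v_max=0.5, bits=12):
    """Convert an electrode voltage to an integer ADC count inside the sensor."""
    span = v_max - v_min
    count = round((voltage - v_min) / span * (2 ** bits - 1))
    return max(0, min(2 ** bits - 1, count))

def frame(count):
    """Pack the ADC count with a simple checksum for transmission over the cable."""
    payload = count.to_bytes(2, "big")
    checksum = sum(payload) % 256
    return payload + bytes([checksum])

def receive(data):
    """Transmitter side: verify the checksum and recover the count unchanged,
    regardless of cable length or resistance."""
    payload, checksum = data[:2], data[2]
    if sum(payload) % 256 != checksum:
        raise ValueError("corrupted frame")
    return int.from_bytes(payload, "big")

count = digitize(0.177)            # e.g. a pH electrode voltage
assert receive(frame(count)) == count
```

Because the value travels as digits rather than as a small analog current, cable resistance and electromagnetic noise can corrupt a frame (which is then detected) but cannot silently shift the reading.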
See also
Image sensor
References
(German language, titles translated to English)
H. Galster: pH-Messung (pH Measurement), 1990, VCH Verlagsgesellschaft mbH,
C.H. Hamann, W. Vielstich: Elektrochemie I (Electrochemistry I), 1975, Verlag Chemie,
Schröter / Lautenschläger / Bibrack: Taschenbuch der Chemie (Pocketbook of Chemistry), 2001, Verlag Harri Deutsch,
U. Tietze, Ch. Schenk: Halbleiter-Schaltungstechnik (Semiconductor Circuit Technology), 2010, Springer Verlag,
Sensors
Digital technology | Digital sensor | Technology,Engineering | 723 |
412,765 | https://en.wikipedia.org/wiki/Organic%20certification | Organic certification is a certification process for producers of organic food and other organic agricultural products. In general, any business directly involved in food production can be certified, including seed suppliers, farmers, food processors, retailers and restaurants. A lesser known counterpart is certification for organic textiles (or organic clothing) that includes certification of textile products made from organically grown fibres.
Requirements vary from country to country (List of countries with organic agriculture regulation), and generally involve a set of production standards for growing, storage, processing, packaging and shipping that include:
avoidance of synthetic chemical inputs (e.g. fertilizer, pesticides, antibiotics, food additives), irradiation, and the use of sewage sludge;
avoidance of genetically modified seed;
use of farmland that has been free from prohibited chemical inputs for a number of years (often, three or more);
for livestock, adhering to specific requirements for feed, housing, and breeding;
keeping detailed written production and sales records (audit trail);
maintaining strict physical separation of organic products from non-certified products;
undergoing periodic on-site inspections.
In some countries, certification is overseen by the government, and commercial use of the term organic is legally restricted. Certified organic producers are also subject to the same agricultural, food safety and other government regulations that apply to non-certified producers.
Certified organic foods are not necessarily pesticide-free, as certain pesticides are allowed.
Purpose
Organic certification addresses a growing worldwide demand for organic food. It is intended to assure quality, prevent fraud, and to promote commerce. While such certification was not necessary in the early days of the organic movement, when small farmers would sell their produce directly at farmers' markets, as organics have grown in popularity, more and more consumers are purchasing organic food through traditional channels, such as supermarkets. As such, consumers must rely on third-party regulatory certification.
For organic producers, certification identifies suppliers of products approved for use in certified operations. For consumers, "certified organic" serves as a product assurance, similar to "low fat", "100% whole wheat", or "no artificial preservatives".
Certification is essentially aimed at regulating and facilitating the sale of organic products to consumers. Individual certification bodies have their own service marks, which can act as branding to consumers—a certifier may promote the high consumer recognition value of its logo as a marketing advantage to producers.
Methods
Third-party
In third party certification, the farm or the processing of the agriculture produce is certified in accordance with national or international organic standards by an accredited organic certification agency. To certify a farm, the farmer is typically required to engage in a number of new activities, in addition to normal farming operations:
Study the organic standards, which cover in specific detail what is and is not allowed for every aspect of farming, including storage, transport and sale.
Compliance — farm facilities and production methods must comply with the standards, which may involve modifying facilities, sourcing and changing suppliers, etc.
Documentation — extensive paperwork is required, detailing farm history and current set-up, and usually including results of soil and water tests.
Planning — a written annual production plan must be submitted, detailing everything from seed to sale: seed sources, field and crop locations, fertilization and pest control activities, harvest methods, storage locations, etc.
Inspection — annual on-farm inspections are required, with a physical tour, examination of records, and an oral interview. The vast majority of the inspections are pre-scheduled visits.
Fee — an annual inspection/certification fee (currently starting at $400–$2,000/year, in the US and Canada, depending on the agency and the size of the operation). There are financial assistance programs for qualifying certified operations.
Record-keeping — written, day-to-day farming and marketing records, covering all activities, must be available for inspection at any time.
In addition, short-notice or surprise inspections can be made, and specific tests (e.g. soil, water, plant tissue) may be requested.
For first-time farm certification, the soil must meet basic requirements of being free from use of prohibited substances (synthetic chemicals, etc.) for a number of years. A conventional farm must adhere to organic standards for this period, often two to three years. This is known as being in transition. Transitional crops are not considered fully organic.
Certification for operations other than farms follows a similar process. The focus is on the quality of ingredients and other inputs, and processing and handling conditions. A transport company would be required to detail the use and maintenance of its vehicles, storage facilities, containers, and so forth. A restaurant would have its premises inspected and its suppliers verified as certified organic.
Participatory
Participatory Guarantee Systems (PGS) represent an alternative to third party certification, especially adapted to local markets and short supply chains. They can also complement third party certification with a private label that brings additional guarantees and transparency. PGS enable the direct participation of producers, consumers and other stakeholders in:
the choice and definition of the standards
the development and implementation of certification procedures
the certification decisions
Participatory Guarantee Systems are also referred to as "participatory certification".
Alternative certification options
The word organic is central to the certification (and organic food marketing) process, and this is also questioned by some. Where organic laws exist, producers cannot use the term legally without certification. To bypass this legal requirement for certification, various alternative certification approaches, using currently undefined terms like "authentic" and "natural", are emerging. In the US, motivated by the cost and legal requirements of certification (as of Oct. 2002), the private farmer-to-farmer association, Certified Naturally Grown, offers a "non-profit alternative eco-labelling program for small farms that grow using USDA Organic methods but are not a part of the USDA Certified Organic program."
In the UK, the interests of smaller-scale growers who use "natural" growing methods are represented by the Wholesome Food Association, which issues a symbol based largely on trust and peer-to-peer inspection.
Organic certification and the Millennium Development Goals (MDGs)
Organic certification, as well as fair trade certification, has the potential to directly and indirectly contribute to the achievement of some of the Millennium Development Goals (MDGs), which are the eight international development goals that were established following the Millennium Summit of the United Nations in 2000, with all United Nations member states committed to help achieve the MDGs by 2015. With the growth of ethical consumerism in developed countries, imports of eco-friendly and socially certified produce from the poor in developing countries have increased, which could contribute towards the achievement of the MDGs. A study by Setboonsarng (2008) reveals that organic certification substantially contributes to MDG1 (poverty and hunger) and MDG7 (environmental sustainability) by way of premium prices and better market access, among others. This study concludes that for this market-based development scheme to broaden its poverty impacts, public sector support in harmonizing standards, building up the capacity of certifiers, developing infrastructure, and innovating alternative certification systems will be required.
International food standards
The body Codex Alimentarius of the Food and Agriculture Organization of the United Nations was established in November 1961. The Commission's main goals are to protect the health of consumers and ensure fair practices in the international food trade. The Codex Alimentarius is recognized by the World Trade Organization as an international reference point for the resolution of disputes concerning food safety and consumer protection. One of their goals is to provide proper food labelling (general standard, guidelines on nutrition labelling, guidelines on labelling claims).
National variations
In some countries, organic standards are formulated and overseen by the government. The United States, the European Union, Canada and Japan have comprehensive organic legislation, and the term "organic" may be used only by certified producers. Being able to put the word "organic" on a food product is a valuable marketing advantage in today's consumer market, but does not guarantee the product is legitimately organic. Certification is intended to protect consumers from misuse of the term, and make buying organics easy. However, the organic labeling made possible by certification itself usually requires explanation. In countries without organic laws, government guidelines may or may not exist, while certification is handled by non-profit organizations and private companies.
Internationally, equivalency negotiations are underway, and some agreements are already in place, to harmonize certification between countries, facilitating international trade. There are also international certification bodies, including members of the International Federation of Organic Agriculture Movements (IFOAM), working on harmonization efforts. Where formal agreements do not exist between countries, organic product for export is often certified by agencies from the importing countries, who may establish permanent foreign offices for this purpose. In 2011 IFOAM introduced a new program, the IFOAM Family of Standards, that attempts to simplify harmonization. The vision is to establish the use of one single global reference (the COROS) to assess the quality of standards rather than focusing on bilateral agreements.
Certcost was a research project that conducted research and prepared reports about the certification of organic food. The project was supported by the European Commission and was active from 2008 to 2011. The website will be available until 2016.
North America
United States
In the United States, "organic" is a labeling term for food or agricultural products ("food, feed or fiber") that have been produced according to USDA organic regulations, which define standards that "integrate cultural, biological, and mechanical practices that foster cycling of resources, promote ecological balance, and conserve biodiversity". USDA standards recognize four types of organic production:
Crops: "Plants that are grown to be harvested as food, livestock feed, or fiber used to add nutrients to the field."
Livestock: "Animals that can be used in the production of food, fiber, or feed."
Processed/multi-ingredient products: "Items that have been handled and packaged (e.g. chopped carrots) or combined, processed, and packaged (e.g. bread or soup)."
Wild crops: "Plants from a growing site that is not cultivated."
Organic agricultural operations should ultimately maintain or improve soil and water quality, and conserve wetlands, woodlands, and wildlife.
The Organic Foods Production Act of 1990 "requires the Secretary of Agriculture to establish a National List of Allowed and Prohibited Substances which identifies synthetic substances that may be used, and the non- synthetic substances that cannot be used, in organic production and handling operations."
The Secretary of Agriculture promulgated regulations establishing the National Organic Program (NOP). The final rule was published in the Federal Register in 2000.
USDA Organic certification confirms that the farm or handling facility (whether within the United States or internationally) complies with USDA organic regulations. Farms or handling facilities can be certified by private, foreign, or State entities, whose agents are accredited by the USDA (accredited agents are listed on the USDA website). Any farm or business that grosses more than $5,000 annually in organic sales must be certified. Farms and businesses that make less than $5,000 annually are "exempt", and must follow all the requirements as stated in the USDA regulations except for two requirements:
Exempt operations do not need to be certified to "sell, label, or represent" their products as organic, but may not use the USDA organic seal or label their products as "certified organic". Exempt operations may pursue optional certification if they wish to use the USDA organic seal.
Exempt operations are not required to have a system plan that documents the specific practices and substances used in the production or handling of their organic products
Exempt operations are also barred from selling their products as ingredients for use in another producer or handler's certified organic product, and may be required by buyers to sign an affidavit affirming adherence to USDA organic regulations.
Before an operation may sell, label or represent their products as "organic" (or use the USDA organic seal), it must undergo a 3-year transition period where any land used to produce raw organic commodities must be left untreated with prohibited substances.
Operations seeking certification must first submit an application for organic certification to a USDA-accredited certifying agent including the following:
A detailed description of the operation seeking certification
A history of substances used on the land over the prior 3 years
A list of the organic products grown, raised, or processed
A written "Organic System Plan (OSP)" which outlines the practices and substances intended for use during future organic production.
Processors/handlers who are not primarily a farm (and farms with livestock and/or crops that also process products) must complete an Organic Handling Plan (OHP), and also include a product profile and label for each product
Certifying agents then review the application to confirm that the operation's practices follow USDA regulations, and schedule an inspection to verify adherence to the OSP, maintenance of records, and overall regulatory compliance.
Inspection
During the site visit, the inspector observes onsite practices and compares them to the OSP, looks for any potential contamination by prohibited materials (or any risk of potential contamination), and takes soil, tissue, or product samples as needed. At farming operations, the inspector will also examine the fields, water systems, storage areas, and equipment, assess pest and weed management, check feed production, purchase records, livestock and their living conditions, and records of animal health management practices. For processing and handling facilities, the inspector evaluates the receiving, processing, and storage areas for organic ingredients and finished products, as well as assessing any potential hazards or contamination points (from "sanitation systems, pest management materials, or nonorganic processing aids"). If the facility also processes or handles nonorganic materials, the inspector will also analyze the measures in place to prevent commingling.
If the written application and operational inspection are successful, the certifying agent will issue an organic certificate to the applicant. The producer or handler must then submit an updated application and OSP, pay recertification fees to the agent, and undergo annual onsite inspections to receive recertification annually. Once certified, producers and handlers can have up to 75% of their organic certification costs reimbursed through the USDA Organic Certification Cost-Share Programs.
Federal legislation defines three levels of organic foods. Products made entirely with certified organic ingredients, methods, and processing aids can be labeled "100% organic" (including raw agricultural commodities that have been certified), while only products with at least 95% organic ingredients may be labeled "organic" (any non-organic ingredients used must fall under the exemptions of the National List). Under these two categories, no nonorganic agricultural ingredients are allowed when organic ingredients are available. Both of these categories may also display the "USDA Organic" seal, and must state the name of the certifying agent on the information panel.
A third category, containing a minimum of 70% organic ingredients, can be labeled "made with organic ingredients", but may not display the USDA Organic seal. Any remaining agricultural ingredients must be produced without excluded methods, including genetic modification, irradiation, or the application of synthetic fertilizers, sewage sludge, or biosolids. Non-agricultural ingredients used must be allowed on the National List. Organic ingredients must be marked in the ingredients list (e.g., "organic dill" or with an asterisk denoting organic status). In addition, products may also display the logo of the certification body that approved them.
Products made with less than 70% organic ingredients cannot be advertised as "organic", but can list individual ingredients that are organic as such in the product's ingredient statement. Also, under USDA rules, organic ingredients from plants cannot be genetically modified.
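The percentage tiers described above can be summarised in a simplified decision helper. The sketch below is illustrative only: real eligibility also depends on the National List, excluded methods, and the nature of each ingredient, none of which are modelled here.

```python
# Simplified illustration of the USDA labeling tiers by organic-ingredient
# percentage. Real eligibility also depends on the National List, excluded
# methods, and other requirements that this sketch deliberately ignores.

def usda_label_category(organic_percent):
    if organic_percent >= 100:
        return "100% organic (may use the USDA Organic seal)"
    if organic_percent >= 95:
        return "organic (may use the USDA Organic seal)"
    if organic_percent >= 70:
        return "made with organic ingredients (no USDA seal)"
    return "may only list individual organic ingredients"

print(usda_label_category(96))   # organic (may use the USDA Organic seal)
print(usda_label_category(72))   # made with organic ingredients (no USDA seal)
```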
Livestock feed is only eligible for labeling as "100% Organic" or "Organic".
Alcoholic products are also subject to the Alcohol and Tobacco Tax and Trade Bureau regulations. Any use of added sulfites in wine made with organic grapes means that the product is only eligible for the "made with" labeling category and therefore may not use the USDA organic seal. Wine labeled as made with other organic fruit cannot have sulfites added to it.
Organic textiles may be labeled organic and use the USDA organic seal if the finished product is certified organic and produced in full compliance with USDA organic regulations. If all of a specific fiber used in a product is certified organic, the label may state the percentage of organic fibers and identify the organic material.
Organic certification mandates that the certifying inspector must be able to complete both "trace-back" and "mass balance audits" for all ingredients and products. A trace-back audit confirms the existence of a record trail from time of purchase/production through the final sale. A mass balance audit verifies that enough organic product and ingredients have been produced or purchased to match the amount of product sold. Each ingredient and product must have an assigned lot number to ensure the existence of a proper audit trail.
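A mass balance audit of this kind reduces to an arithmetic check that certified inputs cover certified outputs. The sketch below is a hypothetical illustration rather than any certifier's actual tooling; the record fields and the allowance for processing losses are assumptions.

```python
# Hypothetical illustration of a mass balance check: the quantity of organic
# product sold must be covered by the quantity produced or purchased.
# Record fields and the allowance for processing losses are assumptions.

def mass_balance_ok(produced_kg, purchased_kg, sold_kg, loss_allowance=0.05):
    """True if sales do not exceed certified inputs (allowing for stated losses)."""
    available = (produced_kg + purchased_kg) * (1 - loss_allowance)
    return sold_kg <= available

records = {"produced_kg": 12_000, "purchased_kg": 3_000, "sold_kg": 14_800}
if not mass_balance_ok(**records):
    print("Flag for auditor: sales exceed certified organic inputs")
```

A trace-back audit is the complementary check: following lot numbers backwards from a sale to the purchase or production records that account for it.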
Some of the earliest organizations to carry out organic certification in North America were the California Certified Organic Farmers, founded in 1973, and the voluntary standards and certification program popularized by the Rodale Press in 1972. Some retailers have their stores certified as organic handlers and processors to ensure organic compliance is maintained throughout the supply chain until delivered to consumers, such as Vitamin Cottage Natural Grocers, a 60-year-old chain based in Colorado.
Violations of USDA Organic regulations carry fines up to $11,000 per violation, and can also lead to suspension or revocation of a farm or business's organic certificate.
Once certified, USDA organic products can be exported to countries currently engaged in organic trade agreements with the U.S., including Canada, the European Union, Japan, and Taiwan, and do not require additional certification as long as the terms of the agreement are met.
Canada
In Canada, certification was implemented at the federal level on June 30, 2009. Mandatory certification is required for agricultural products represented as organic in import, export and inter-provincial trade, or that bear the federal organic logo. In Quebec, provincial legislation provides government oversight of organic certification within the province, through the Quebec Accreditation Board (Conseil D'Accréditation Du Québec). Only products that use at least 95% organic materials in production are allowed to bear the Canadian organic logo. Products containing between 70% and 95% organic ingredients may declare the percentage of organic ingredients, but they do not meet the requirements to bear the certified logo. Transitioning from a conventional agricultural operation to an organic operation takes producers up to three years, during which time products cannot be marketed as organic and producers do not receive pricing premiums on their goods. Cows, sheep, and goats are the only livestock that may be transitioned to organic under Canada's regulations; they must undergo organic management for one year before their products can be considered certified organic.
South America
Argentina
In Argentina, the Organic certification was implemented in December 2012, through a Ministry of Agriculture resolution. Organic products are labeled with the Orgánico Argentina seal, which is administered by SENASA and issued by four private companies. Organic production is regulated by the 25.127 Act, passed in 1999.
During 2019, land certified with the Argentine seal was used for organic production.
Europe
Public organic certification
EU countries acquired comprehensive organic legislation with the implementation of the EU-Eco-regulation in 1992. Supervision of certification bodies is handled at the national level. In March 2002 the European Commission issued an EU-wide label for organic food, which has been mandatory throughout the EU since July 2010, following a two-year transition period.
The share of farmland converted to produce certified organic food has risen significantly in the EU15 countries, from 1.8% in 1998 to 4.1% in 2005. For the broader EU25 countries, however, the statistics report an overall share of just 1.5% as of 2005. The statistics show a larger turnover of organic food in some countries, reaching 10% in France and 14% in Germany. In France, 21% of available vegetables, fruits, milk and eggs were certified as organic. Numbers for 2010 show that 5.4% of German farmland had been converted to produce certified organic food, as had 10.4% of Swiss farmland and 11.7% of Austrian farmland. Non-EU countries have widely adopted the European certification regulations for organic food to increase exports to EU countries.
In 2009 a new logo was chosen through a design competition and online public vote. The new logo is a green rectangle that shows twelve stars (from the European flag) placed such that they form the shape of a leaf in the wind. Unlike earlier labels no words are presented on the label lifting the requirement for translations referring to organic food certification.
The new EU organic label has been in use since July 2010 and has replaced the old European organic label. However, producers who already had printed, ready-to-use packaging bearing the old label were allowed to use it for a further two years.
The EU organic label was developed based on Denmark's organic food policy and the rules behind the Danish organic food label, which currently has the highest rate of recognition in the world: 98% of its users recognise the label and 90% trust it.
The current EU organic label is meant to signal to the consumer that at least 95% of the ingredients used in the processed organic food are of organic origin, with 5% considered an acceptable margin.
Private organic certification
Besides the public organic certification regulation EU-Eco-regulation in 1992, there are various private organic certifications available:
Demeter International is the largest certification organization for biodynamic agriculture, and is one of three predominant organic certifiers. Demeter Biodynamic Certification is used in over 50 countries to verify that biodynamic products meet international standards in production and processing. The Demeter certification program was established in 1928, and as such was the first ecological label for organically produced foods.
Bio Suisse established in 1981 is the Swiss organic farmer umbrella organization. International activities are mainly focused on imports towards Switzerland and do not support export activities.
Global Organic Textile Standard (GOTS) is a private standard for organic clothing for the entire post-harvest processing (including spinning, knitting, weaving, dyeing and manufacturing) of apparel and home textiles made with organic fibres (such as organic cotton, organic wool etc.). It includes both environmental and social criteria. Established in 2002, the standard is used in over 68 countries and is endorsed by USDA and IFOAM - Organics International. The material must be at least 95% organic, as certified by "recognized international or national standards". If the material is 70% organic, it can be labeled as "made with organic".
Czech Republic
The following private bodies certify organic produce: KEZ, o. p. s. (CZ-BIO-001), ABCert, AG (CZ-BIO-002) and BIOCONT CZ, s. r. o. (CZ-BIO-003). These bodies oversee the processes tied to the issuing of certificates of origin. Monitoring of compliance with Regulation (ES) No 882/2004 is provided by the government body ÚKZÚZ (Central Institute for Supervising and Testing in Agriculture).
France
In France, organic certification was introduced in 1985. It has established a green-white logo of "AB - agriculture biologique". The certification for the AB label fulfills the EU regulations for organic food. The certification process is overseen by a public institute ("Agence française pour le développement et la promotion de l'agriculture biologique" usually shortened to "Agence bio") established in November 2001. The actual certification authorities include a number of different institutes like Aclave, Agrocert, COSMEBIO, Ecocert SA, Qualité France SA, Ulase, SGS ICS.
Germany
In Germany, the national label was introduced in September 2001, following the political campaign of "Agrarwende" (major agricultural shift) led by minister Renate Künast of the Greens party. This campaign was started after the outbreak of mad cow disease in 2000; its effects on farming are still challenged by other political parties. The national "Bio" label, in its hexagonal green-black-white shape, has gained wide popularity: by 2007, 2,431 companies had certified 41,708 products under it. The popularity of the label extends to neighbouring countries such as Austria, Switzerland and France.
In the German-speaking countries there have been older non-government organizations that had issued labels for organic food long before the advent of the EU organic food regulations. Their labels are still used widely as they significantly exceed the requirements of the EU regulations. An organic food label like "demeter" from Demeter International has been in use since 1928 and this label is still regarded as providing the highest standards for organic food in the world. Other active NGOs include Bioland (1971), Biokreis (1979), Biopark (1991), Ecoland (1997), Ecovin (1985), Gäa e.V. (1989), Naturland (1981) and Bio Suisse (1981).
Greece
In Greece, there are 16 certification and inspection bodies approved by the EU. Most certifications are obtained from DIO and BIOHELLAS.
Ireland
In Ireland, organic certification is available from the Irish Organic Farmers and Growers Association, Demeter Standards Ltd. and Organic Trust Ltd.
Switzerland
In Switzerland, products sold as organic must comply at a minimum with the Swiss organic regulation (Regulation 910.18). Higher standards are required before a product can be labelled with the Bio Suisse label.
Sweden
In Sweden, organic certification is handled by the organisation KRAV with members such as farmers, processors, trade and also consumer, environmental and animal welfare interests.
Ukraine
In Ukraine, organic production is regulated in accordance with the Law of Ukraine On Basic Principles and Requirements for Organic Production, Circulation and Labelling of Organic Products. The majority of Ukrainian producers, processing units and traders are also certified under international organic legislation (e.g. the EU Organic Regulations, NOP, etc.). The Order on the Approval of the State Logo for Organic Products was approved by the Ministry of Agrarian Policy and Food of Ukraine in 2019. The state logo for organic products is registered as a trademark and owned by the Ministry of Agrarian Policy and Food of Ukraine. The requirements for proper use of the Ukrainian state logo for organic products and labelling are described on the website of the Ministry of Agrarian Policy and Food of Ukraine as well as in the Methodical Recommendations on the Use of the State Logo for Organic Products.
In the summer of 2023, the State Register of Operators that Produce Organic Products in Compliance with the Legislation on Organic Production, Circulation and Labelling of Organic Products and the State Register of Certification Bodies in Organic Production and Circulation of Organic Products, both maintained by the Ministry of Agrarian Policy and Food of Ukraine, were launched in Ukraine.
A certificate confirming production and/or circulation of organic products under the legislation other than Ukrainian, shall be recognised in Ukraine with the view to import or export of such products, provided that it has been issued by the foreign certification body included in the List of foreign certification bodies which is maintained by the State Service of Ukraine on Food Safety and Consumer Protection.
State Institution "Entrepreneurship and Export Promotion Office" (EEPO, Ukraine) plays an important role in Ukrainian organic export facilitation.
Milestones and other useful information about the Ukrainian organic sector are available at the specialised Ukrainian organic web portal OrganicInfo.ua.
United Kingdom
In the United Kingdom, organic certification is handled by a number of organizations, regulated by The Department for Environment, Food and Rural Affairs (DEFRA), of which the largest are the Soil Association and Organic Farmers and Growers. While UK certification bodies are required to meet the EU minimum organic standards for all member states, they may choose to certify to standards that exceed the minimums, as is the case with the Soil Association.
Asia and Oceania
Australia
In Australia, organic certification is performed by several organisations that are accredited by the Biosecurity section of the Department of Agriculture (Australia), formerly the Australian Quarantine and Inspection Service, under the National Standard for Organic and Biodynamic Produce. All claims about the organic status of products sold in Australia are covered under the Competition and Consumer Act 2010.
In Australia, the Organic Federation of Australia is the peak body for the organic industry in Australia and is part of the government's Organic Consultative Committee Legislative Working Group that sets organic standards.
Department of Agriculture accreditation is a legal requirement for all organic products exported from Australia. Export Control (Organic Produce Certification) Orders are used by the Department to assess organic certifying bodies and recognise them as approved certifying organisations. Approved certifying organisations are assessed by the Department for both initial recognition and on an at least annual basis thereafter to verify compliance.
In the absence of domestic regulation, DOA accreditation also serves as a 'de facto' benchmark for certified product sold on the domestic market. Despite its size and growing share of the economy, "the organic industry in Australia remains largely self-governed. There is no specific legislation for domestic organic food standardisation and labelling at the state or federal level as there is in the USA and the EU".
Australian approved certifying organisations
The Department has several approved certifying organisations that manage the certification process of organic and bio-dynamic operators in Australia. These certifying organisations perform a number of functions on the Department's behalf:
Assess organic and bio-dynamic operators to determine compliance to the National Standard for Organic and Bio-Dynamic Produce and importing country requirements.
Issue a Quality Management Certificate (QM Certificate) to organic operators to recognise compliance to export requirements.
Issue Organic Produce Certificates (Export Documentation) for consignments of organic and bio-dynamic produce being exported.
As of 2015, there are seven approved certifying organisations:
AUS-QUAL Pty Ltd (AUSQUAL)
Australian Certified Organic (ACO)
Bio-Dynamic Research Institute (BDRI)
NASAA Certified Organic (NCO)
Organic Food Chain (OFC)
Safe Food Production Queensland (SFQ)
Tasmanian Organic-dynamic Producers (TOP)
There are 2567 certified organic businesses reported in Australia in 2014. They include 1707 primary producers, 719 processors and manufacturers, 141 wholesalers and retailers plus other operators.
Australia does not have a national logo or seal to identify which products are certified organic, instead the logos of the individual certifying organisations are used.
China
In China, organic certification is administered by a government agency named the Certification and Accreditation Administration of the People's Republic of China (CNCA). The implementation of certification work, including site checking and laboratory tests on soil, water, and product quality, is performed by the China Quality Certification Center (CQC), an agency of the Administration of Quality Supervision, Inspection and Quarantine (AQSIQ). Organic certification procedures in China follow the China Organic Standard GB/T 19630.1-4—2011, issued in 2011. This standard governs the certification process performed by the CQC, including application, inspection, laboratory test procedures, certification decisions and post-certification administration. Certificates issued by the CQC are valid for one year.
Two logos are currently used by the CQC for labelling products with organic certification: the Organic logo and the CQC logo. A separate "conversion to organic" logo is no longer used.
As of 2018, there were more than 19,000 valid certificates and 66 organic certification bodies in China.
India
In India, APEDA regulates the certification of organic products as per National Standards for Organic Production. "The NPOP standards for production and accreditation system have been recognized by European Commission and Switzerland as equivalent to their country standards." Organic food products manufactured and exported from India are marked with the India Organic certification mark issued by the APEDA. APEDA has recognized 11 inspection certification bodies, some of which are branches of foreign certification bodies, others are local certification bodies.
Japan
In Japan, the Japanese Agricultural Standard (JAS) was fully implemented as law in April 2001. This was revised in November 2005 and all JAS certifiers were required to be re-accredited by the Ministry of Agriculture.
Singapore
As of 2014 the Agri-Food & Veterinary Authority of Singapore had no organic certification process, but instead relied on international certification bodies; it does not track local producers who claim to have obtained organic certification.
Cambodia
In Cambodia, Cambodian Organic Agriculture Association (COrAA) is the only organization that is authorized to give certificate for organic agricultural products. It is a nationwide private organization working for the promotion of organic and sustainable agriculture in Cambodia. COrAA has developed both organic and chemical-free agricultural standards and provides third-party-certification to producers following these standards. In addition, the services that COrAA provides include technical training for the conversion from chemical/conventional to organic farming, marketing support, organic awareness building among the general public, and a platform for dialogue and cooperation among organic stakeholders in Cambodia.
Africa
Kenya
In Kenya, the Kenya Organic Agriculture Network (KOAN) is mandated to coordinate the Organic Sector. It is the national Coordinator and Issuer of the certificate under Participatory Guarantee System (PGS). KOAN is also the custodian of the Kilimohai Organic Mark of Organic Certification under the East Africa Organic Products Standards.
Issues
Organic certification is not without its critics. Some of the staunchest opponents of chemical-based farming and factory farming practices also oppose formal certification. They see it as a way to drive independent organic farmers out of business, and to undermine the quality of organic food. Other organizations such as the Organic Trade Association work within the organic community to foster awareness of legislative and other related issues, and enable the influence and participation of organic proponents.
Obstacles to small independent producers
Originally, in the 1960s through the 1980s, the organic food industry was composed mainly of small, independent farmers selling locally. Organic "certification" was a matter of trust, based on a direct relationship between farmer and consumer. Critics view regulatory certification as a potential barrier to entry for small producers, burdening them with increased costs, paperwork, and bureaucracy.
In China, due to government regulations, international companies wishing to market organic produce must be independently certified. It is reported that "Australian food producers are spending up to $50,000 to be certified organic by Chinese authorities to crack the burgeoning middle-class market of the Asian superpower." Whilst the certification process is described by producers as "extremely difficult and very expensive", a number of organic producers have acknowledged the ultimately positive effect of gaining access to the emerging Chinese market. For example, figures from Australian organic infant formula and baby food producer Bellamy's Organic indicate export growth, to China alone, of 70 per cent per year since gaining Chinese certification in 2008, while similar producers have shown export growth of 20 per cent to 30 per cent a year following certification.
Peak Australian organic certification body, Australian Certified Organic, has stated however that "many companies have baulked at risking the money because of the complex, unwieldy and expensive process to earn Chinese certification." By comparison, equivalent certification costs in Australia are less than $2,000 (AUD), with costs in the United States as low as $750 (USD) for a similarly sized business.
Manipulative use of regulations
Manipulation of certification regulations as a way to mislead or outright dupe the public is a very real concern. Some examples are creating exceptions (allowing non-organic inputs to be used without loss of certification status) and creative interpretation of standards to meet the letter, but not the intention, of particular rules. For example, a complaint filed with the USDA in February 2004 against Bayliss Ranch, a food ingredient producer and its certifying agent, charged that tap water had been certified organic, and advertised for use in a variety of water-based body care and food products, in order to label them "organic" under US law. Steam-distilled plant extracts, consisting mainly of tap water introduced during the distilling process, were certified organic, and promoted as an organic base that could then be used in a claim of organic content. The case was dismissed by the USDA, as the products had been actually used only in personal care products, over which the department at the time extended no labeling control. The company subsequently adjusted its marketing by removing reference to use of the extracts in food products.
In 2013, the Australian Competition & Consumer Commission said that water can no longer be labelled as organic water because, based on organic standards, water cannot be organic and it is misleading and deceptive to label any water as such.
False assurance of quality
The label itself can be used to mislead customers into believing that food labelled as organic is safer, healthier and more nutritious. Thus, a product may be labelled organic yet have no significant nutritional advantage over other products.
Erosion of standards
Critics of formal certification also fear an erosion of organic standards. Provided with a legal framework within which to operate, lobbyists can push for amendments and exceptions favorable to large-scale production, resulting in "legally organic" products produced in ways similar to current conventional food. Combined with the fact that organic products are now sold predominantly through high volume distribution channels such as supermarkets, the concern is that the market is evolving to favor the biggest producers, and this could result in the small organic farmer being squeezed out.
In the United States large food companies, have "assumed a powerful role in setting the standards for organic foods". Many members of standard-setting boards come from large food corporations. As more corporate members have joined, many nonorganic substances have been added to the National List of acceptable ingredients. The United States Congress has also played a role in allowing exceptions to organic food standards. In December 2005, the 2006 agricultural appropriations bill was passed with a rider allowing 38 synthetic ingredients to be used in organic foods, including food colorings, starches, sausage and hot-dog casings, hops, fish oil, chipotle chili pepper, and gelatin; this allowed Anheuser-Busch in 2007 to have its Wild Hop Lager certified organic "even though [it] uses hops grown with chemical fertilizers and sprayed with pesticides."
See also
Biopesticide
Certified Naturally Grown
Farm assurance
Herbicide
List of countries with organic agriculture regulation
List of organic food topics
NSF International
Organic clothing
Organic cotton
Organic farming
Organic food culture
Standards of identity for food
References
Citations and notes
General
Agricultural Marketing Service, USDA National Organic Program: Final Rule (7 CFR Part 205; Federal Register, Vol. 65, No. 246, 21 December 2000)
OCPP/Pro-Cert Canada Organic Agriculture & Food Standard (OC/PRO IS 350/150)
The Australian Organic Industry: A Profile, 2004, (pdf)
Certification marks
Ecolabelling
European Union food law
Farm assurance
Management theory
Product certification | Organic certification | Mathematics | 8,031 |
49,646,576 | https://en.wikipedia.org/wiki/South%20Lawn%20car%20park | The South Lawn car park is a parking garage at the University of Melbourne, constructed in 1971–72 using an innovative structural system of reinforced concrete shells with parabolic profiles supported on short columns, designed by the engineer Jan van der Molen. The car park was added to the Victorian Heritage Register on 6 April 1994.
History
The car park was proposed in the university Campus Master Plan prepared by Bryce Mortlock in 1970, partly to deal with increased demand for parking while retaining the landscape character of the core part of the university. Loder and Bayley, in association with Harris, Lange and Partners, were commissioned to prepare the designs, with Jan van der Molen as engineer in charge. Ellis Stones and Ronald Rayment, the first graduates of a landscape design course in Victoria, undertook the landscape design both above the car park and along the edges facing the Baillieu Library and John Medley Building.
The proposal met with some controversy, with eighteen appeals being made to the Building Regulations Committee before approval was finally granted. John Loder, from Loder and Bayly, was presented with three options but reputedly excluded the others and recommended only van der Molen's design to the University.
Design and construction
The design comprises a series of reinforced concrete shells with parabolic profiles supported on short columns. The columns encase pipes to drain the soil above for the planting of the lawn and trees of the South Lawn. Van der Molen's design of sophisticated hyperbolic-paraboloidal platforms was described as "...saucer-shaped flowerpots on columns, interconnected to form arches". The deep dishes of the concrete forms allowed large trees to be planted on the roof. Excavations involved substantial earthworks to retain the lawn at the same level as the 'Old Quad' building, the historic core of the university. Works commenced in May 1971 and the car park was completed by November 1972. The east entrance to the car park incorporates a door from a 1745 house in St Stephen's Green, Dublin, while the west entrance is constructed with the salvaged doorway and is framed by two Atlas figures from the demolished Colonial Bank offices in Elizabeth Street in the Melbourne central business district.
In popular culture
The car park was used as a setting for a ballet sequence in an ABC television broadcast, a number of student film projects and art installations, and for the police garage scene in the first Mad Max movie, and it has been featured in many architectural publications and exhibitions. The architectural historian Professor Miles Lewis described the structure at the time as the "...most important non-residential design in the country". This iconic car park was also part of the set for Troye Sivan's "You" collaboration with Tate McRae and DJ Regard.
Gallery
References
External links
Garages (parking)
Structural system
University of Melbourne buildings
Heritage-listed buildings in Melbourne
Buildings and structures completed in 1972
1972 establishments in Australia
Buildings and structures in the City of Melbourne (LGA)
Transport in the City of Melbourne (LGA) | South Lawn car park | Technology,Engineering | 602 |
2,372,519 | https://en.wikipedia.org/wiki/Coordination%20polymerization | Coordination polymerisation is a form of polymerization that is catalyzed by transition metal salts and complexes.
Types of coordination polymerization of alkenes
Heterogeneous Ziegler–Natta polymerization
Coordination polymerization started in the 1950s with heterogeneous Ziegler–Natta catalysts based on titanium tetrachloride and organoaluminium co-catalysts. The mixing of TiCl4 with trialkylaluminium complexes produces Ti(III)-containing solids that catalyze the polymerization of ethene and propene. The nature of the catalytic center has been of intense interest but remains uncertain. Many additives and variations have been reported for the original recipes.
Homogeneous Ziegler–Natta polymerization
In some applications heterogeneous Ziegler–Natta polymerization has been superseded by homogeneous catalysts such as the Kaminsky catalyst discovered in the 1970s. The 1990s brought forward a new range of post-metallocene catalysts. Typical monomers are nonpolar ethene and propene. The development of coordination polymerization that enables copolymerization with polar monomers is more recent. Examples of monomers that can be incorporated are methyl vinyl ketones, methyl acrylate, and acrylonitrile.
Kaminsky catalysts are based on metallocenes of group 4 metals (Ti, Zr, Hf) activated with methylaluminoxane (MAO).
Polymerizations catalysed by metallocenes occur via the Cossee–Arlman mechanism. The active site is usually anionic but cationic coordination polymerization also exists.
Specialty monomers
Many alkenes do not polymerize in the presence of Ziegler–Natta or Kaminsky catalysts. This problem applies to polar olefins such as vinyl chloride, vinyl ethers, and acrylate esters.
Butadiene polymerization
The annual production of polybutadiene is 2.1 million tons (2000). The process employs a neodymium-based homogeneous catalyst.
Principles
Coordination polymerization has a great impact on the physical properties of vinyl polymers such as polyethylene and polypropylene compared to the same polymers prepared by other techniques such as free-radical polymerization. The polymers tend to be linear rather than branched and have much higher molar mass. Coordination-type polymers are also stereoregular and can be isotactic or syndiotactic instead of just atactic. This tacticity introduces crystallinity in otherwise amorphous polymers. These differences in polymerization type give rise to the distinction between low-density polyethylene (LDPE), high-density polyethylene (HDPE), and even ultra-high-molecular-weight polyethylene (UHMWPE).
Coordination polymerization of other substrates
Coordination polymerization can also be applied to non-alkene substrates. Dehydrogenative coupling of silanes, dihydro- and trihydrosilanes, to polysilanes has been investigated, although the technology has not been commercialized. The process entails coordination and often oxidative addition of Si-H centers to metal complexes.
Lactides also polymerize in the presence of Lewis acidic catalysts to give polylactide.
See also
Cossee–Arlman mechanism
Ziegler–Natta catalyst
Polymerization
Coordination bond
References
Polymerization reactions | Coordination polymerization | Chemistry,Materials_science | 705 |
169,208 | https://en.wikipedia.org/wiki/Marsh | In ecology, a marsh is a wetland that is dominated by herbaceous plants rather than by woody plants. More generally, the word can be used for any low-lying and seasonally waterlogged terrain. In Europe and in agricultural literature, low-lying meadows that require draining and embanked polderlands are also referred to as marshes or marshland.
Marshes can often be found at the edges of lakes and streams, where they form a transition between the aquatic and terrestrial ecosystems. They are often dominated by grasses, rushes or reeds. If woody plants are present they tend to be low-growing shrubs, and the marsh is sometimes called a carr. This form of vegetation is what differentiates marshes from other types of wetland such as swamps, which are dominated by trees, and mires, which are wetlands that have accumulated deposits of acidic peat.
Marshes provide habitats for many kinds of invertebrates, fish, amphibians, waterfowl and aquatic mammals. This biological productivity means that marshes contain 0.1% of global sequestered terrestrial carbon. Moreover, they have an outsized influence on climate resilience of coastal areas and waterways, absorbing high tides and other water changes due to extreme weather. Though some marshes are expected to migrate upland, most natural marshlands will be threatened by sea level rise and associated erosion.
Basic information
Marshes provide a habitat for many species of plants, animals, and insects that have adapted to living in flooded conditions or other environments. The plants must be able to survive in wet mud with low oxygen levels. Many of these plants, therefore, have aerenchyma, channels within the stem that allow air to move from the leaves into the rooting zone. Marsh plants also tend to have rhizomes for underground storage and reproduction. Common examples include cattails, sedges, papyrus and sawgrass. Aquatic animals, from fish to salamanders, are generally able to live with a low amount of oxygen in the water. Some can obtain oxygen from the air instead, while others can live indefinitely in conditions of low oxygen. The pH in marshes tends to be neutral to alkaline, as opposed to bogs, where peat accumulates under more acid conditions.
Values and ecosystem services
Marshes provide habitats for many kinds of invertebrates, fish, amphibians, waterfowl and aquatic mammals. Marshes have extremely high levels of biological production, some of the highest in the world, and therefore are important in supporting fisheries.
Marshes also improve water quality by acting as a sink to filter pollutants and sediment from the water that flows through them. Marshes partake in water purification by providing nutrient and pollution consumption. Marshes (and other wetlands) are able to absorb water during periods of heavy rainfall and slowly release it into waterways and therefore reduce the magnitude of flooding. Marshes also provide the services of tourism, recreation, education, and research.
Types of marshes
Marshes differ depending mainly on their location and salinity. These factors greatly influence the range and scope of animal and plant life that can survive and reproduce in these environments. The three main types of marsh are salt marshes, freshwater tidal marshes, and freshwater marshes. These three can be found worldwide, and each contains a different set of organisms.
Salt marshes
Saltwater marshes are found around the world in mid to high latitudes, wherever there are sections of protected coastline. They are located close enough to the shoreline that the motion of the tides affects them, and, sporadically, they are covered with water. They flourish where the rate of sediment buildup is greater than the rate at which the land level is sinking. Salt marshes are dominated by specially adapted rooted vegetation, primarily salt-tolerant grasses.
Salt marshes are most commonly found in lagoons, estuaries, and on the sheltered side of a shingle or sandspit. The currents there carry the fine particles around to the quiet side of the spit, and sediment begins to build up. These locations allow the marshes to absorb the excess nutrients from the water running through them before they reach the oceans and estuaries. These marshes are slowly declining. Coastal development and urban sprawl have caused significant loss of these essential habitats.
Freshwater tidal marshes
Although considered a freshwater marsh, this form of marsh is affected by ocean tides. However, without the stresses of salinity at work in its saltwater counterpart, the diversity of the plants and animals that live in and use freshwater tidal marshes is much higher than in salt marshes. The most severe threats to this form of marsh are the increasing size and pollution of the cities surrounding them.
Freshwater marshes
Ranging greatly in size and geographic location, freshwater marshes make up North America's most common form of wetland. They are also the most diverse of the three types of marsh. Some examples of freshwater marsh types in North America are:
Wet meadows
Wet meadows occur in shallow lake basins, low-lying depressions, and the land between shallow marshes and upland areas. They also occur on the edges of large lakes and rivers. Wet meadows often have very high plant diversity and high densities of buried seeds. They are regularly flooded but are often dry in the summer.
Vernal pools
Vernal pools are a type of marsh found only seasonally in shallow depressions in the land. They can be covered in shallow water, but in the summer and fall, they can be completely dry. In western North America, vernal pools tend to form in open grasslands, whereas in the east, they often occur in forested landscapes. Further south, vernal pools form in pine savannas and flatwoods. Many amphibian species depend upon vernal pools for spring breeding; these ponds provide a habitat free from fish, which eat the eggs and young of amphibians. An example is the endangered gopher frog. Similar temporary ponds occur in other world ecosystems, where they may have local names. However, the term vernal pool can be applied to all such temporary pool ecosystems.
Playa lakes
Playa lakes are a form of shallow freshwater marsh in the southern high plains of the United States. Like vernal pools, they are only present at certain times of the year and generally have a circular shape. As the playa dries during the summer, conspicuous plant zonation develops along the shoreline.
Prairie potholes
Prairie potholes are found in northern North America, such as the Prairie Pothole Region. Glaciers once covered these landscapes, and as a result, shallow depressions were formed in great numbers. These depressions fill with water in the spring. They provide important breeding habitats for many species of waterfowl. Some pools only occur seasonally, while others retain enough water to be present all year.
Riverine wetlands
Many kinds of marsh occur along the fringes of large rivers. The different types are produced by factors such as water level, nutrients, ice scour, and waves.
Embanked marshlands
Large tracts of tidal marsh have been embanked and artificially drained. They are usually known by the Dutch name of polders. In Northern Germany and Scandinavia they are called Marschland, Marsch or marsk; in France marais maritime. In the Netherlands and Belgium, they are designated as marine clay districts. In East Anglia, a region in the East of England, the embanked marshes are also known as Fens.
Restoration
Some areas have already lost 90% of their wetlands, including marshes. They have been drained to create agricultural land or filled to accommodate urban sprawl. Restoration is returning marshes to the landscape to replace those lost in the past. Restoration can be done on a large scale, such as by allowing rivers to flood naturally in the spring, or on a small scale by returning wetlands to urban landscapes.
See also
References
External links
Marshes of the Lowcountry (South Carolina) – Beaufort County Library
Fluvial landforms
Pedology
Wetlands | Marsh | Environmental_science | 1,593
10,560,323 | https://en.wikipedia.org/wiki/Rytov%20number | The Rytov number is a fundamental scaling parameter for laser propagation through atmospheric turbulence. Rytov numbers greater than 0.2 are generally considered to be strong scintillation. A Rytov number of 0 would indicate no turbulence, thus no scintillation of the beam.
References
Wave mechanics | Rytov number | Physics | 59 |
10,178,889 | https://en.wikipedia.org/wiki/Dieter%20Enders | Dieter Enders (17 March 1946 – 29 June 2019) was a German organic chemist who did work developing asymmetric synthesis, in particular using modified prolines as chiral auxiliaries. The most widely applied of his chiral auxiliaries are the complementary SAMP and RAMP auxiliaries, which allow for asymmetric alpha-alkylation of aldehydes and ketones. In 1974 he obtained his doctorate from the University of Gießen studying under Dieter Seebach and followed this with a postdoc at Harvard University studying with Elias James Corey. He then moved back to Gießen to obtain his Habilitation in 1979, whereupon he became a lecturer, soon obtaining Professorship in 1980 as Professor of Organic Chemistry at Bonn. In 1985 he moved to Aachen, where he was Full Professor of Organic Chemistry and Director. He was editor-in-chief of Synthesis and was on the advisory boards of many other journals including Letters in Organic Chemistry and SynLett.
During his career he won many awards, including:
1993 Gottfried Wilhelm Leibniz Prize of the Deutsche Forschungsgemeinschaft
1995 Yamada Award, Japan
2000 Max-Planck-Forschungspreis for Chemistry
2002 Emil-Fischer-Medaille of the GDCh
2014 Ryoji Noyori Prize, Japan
External links
Dieter Enders Home Page
Curriculum Vitae Prof. Dr. Dieter Enders
1946 births
2019 deaths
University of Giessen alumni
Harvard University alumni
Academic staff of the University of Bonn
20th-century German chemists
Gottfried Wilhelm Leibniz Prize winners
Academic staff of RWTH Aachen University
German organic chemists
21st-century German chemists | Dieter Enders | Chemistry | 339 |
31,982,409 | https://en.wikipedia.org/wiki/Federation%20of%20Oils%2C%20Seeds%20and%20Fats%20Associations | The Federation of Oils, Seeds and Fats Associations (FOSFA International) is the main trade association for the oil, seeds and fats industry. It regulates legal contracts in the trade/industry.
History
FOSFA was incorporated in 1968. It serves communities in the United Kingdom.
Function
85% of worldwide trade in oils and fats is under FOSFA contracts. It regulates trade in the industry. Its rules cover products transported under Cost, Insurance and Freight (CIF) or Free on Board (FOB) terms.
The advantage of having the vast majority of worldwide trade under FOSFA contracts is that using standard contracts reduces the risk of misinterpretations or misunderstandings between trading parties. Additionally, these standard form contracts are familiar to trading parties and reflective of trade practices that are longstanding in the industry.
It holds week-long residential training courses during the Autumn at The University of Greenwich.
References
Oils
Organisations based in the City of London
International trade organizations
Arbitration organizations
Organizations established in 1968
Food industry trade groups based in the United Kingdom | Federation of Oils, Seeds and Fats Associations | Chemistry | 214 |
24,663,426 | https://en.wikipedia.org/wiki/PICMG%202.4 | PICMG 2.4 is a specification by PICMG that standardizes user IO pin mappings from ANSI/VITA standard IP sites to J3/P3, J4/P4, and J5/P5 on a CompactPCI backplane.
Status
Adopted: 9 September 1998
Current revision: 1.0
References
Open standards
PICMG standards | PICMG 2.4 | Technology | 74 |
60,245,045 | https://en.wikipedia.org/wiki/Sofia%20Feltzing | Johanna Sofia Nikolina Feltzing (born 26 June 1965 in Högsbo, Gothenburg, Sweden) is a Swedish astronomer and Professor of Astronomy at Lund University since 2011. Feltzing was the first woman to complete a PhD in astronomy at Uppsala, and the tenth in Sweden.
Biography
Feltzing completed her PhD at Uppsala University in 1996, publishing a dissertation about the chemical evolution of the Milky Way. She was a postdoctoral researcher at Royal Greenwich Observatory and the Institute of Astronomy, Cambridge at Cambridge University from 1996 to 1998. In 1998, she moved to Lund Observatory.
Feltzing's research primarily concerns understanding galaxy formation and evolution by studying the stars and gas of the Milky Way. She has also studied dwarf spheroidal galaxies and globular star clusters.
In 2013, Feltzing was awarded the Strömer-Ferrnerska prize of 20,000SEK by the Royal Swedish Academy of Sciences for "her spectroscopic and photometric studies which have been crucial contributions to a deeper understanding of the development of the Milky Way and its surrounding galaxies."
In 2015, Feltzing was elected to the Royal Swedish Academy of Sciences.
In 2021, an article in the journal Nature reported that an investigation into victimisation by the University of Lund, using the method Faktaundersökning, found that Feltzing could have committed acts of victimisation against some other employees. An independent group, Academic Rights Watch in Sweden, described the investigation as substandard, resembling an extrajudicial mock trial and in some cases misinterpreting Swedish law.
Notes
Swedish women academics
People from Gothenburg
21st-century Swedish astronomers
1965 births
Members of the Royal Swedish Academy of Sciences
Women astronomers
Living people | Sofia Feltzing | Astronomy | 342 |
4,912,746 | https://en.wikipedia.org/wiki/Selcall | Selcall (selective calling) is a type of squelch protocol used in radio communications systems, in which transmissions include a brief burst of sequential audio tones. Receivers that are set to respond to the transmitted tone sequence will open their squelch, while others will remain muted.
Selcall is a radio signalling protocol mainly in use in Europe, Asia, Australia and New Zealand, and continues to be incorporated in radio equipment marketed in those areas.
Details
The transmission of a selcall code involves the generation and sequencing of a series of predefined, audible tones. Both the tone frequencies, and sometimes the tone periods, must be known in advance by both the transmitter and the receiver. Each predefined tone represents a single digit. A series of tones therefore represents a series of digits that represents a number. The number encoded in a selcall burst is used to address one or more receivers. If the receiver is programmed to recognise a certain number, then it will un-mute its speaker so that the transmission can be heard; an unrecognised number is ignored and therefore the receiver remains muted.
Tone Sets
A selcall tone set contains 16 tones that represent 16 digits. The digits correspond to the 16 hexadecimal digits, i.e. 0-9 and A-F. Digits A-F are typically reserved for control purposes. For example, digit "E" is typically used as the repeat digit.
There are eight well-known selcall tone sets.
Tone Periods
The physical characteristics of the transmitted sequence of tones are tightly controlled. Each tone is generated for a predefined period, on the order of tens of milliseconds. Each subsequent tone is transmitted immediately after the preceding one for the same period, until the sequence is complete.
Typical tone periods include 20ms, 30ms (sometimes 33ms), 40ms, 50ms, 60ms, 70ms, 80ms, 90ms and 100ms.
The longer the tone period, the more reliable the decoding of the tone sequence. Naturally, the longer the tone period, the greater the duration of the selcall tone burst; longer bursts may be enough to force the user to pause before speaking, especially if using the leading-edge ANI scheme.
A typical tone period selection is 40ms, so for a 5-tone sequence this represents a total selcall duration of 5 x 40ms = 200ms. However, this is vendor-specific; for example, commercial radios from Ericsson use a tone period of 100ms with a 700ms first tone. The extended first tone allows radios to run a tone scan across several channels without missing a call.
Repeat Tone
Each tone in a selcall sequence must be unique. Typically, the receiving device cannot discriminate between two consecutive tones, where the frequency of those two tones is the same; that is, two consecutive tones with the same frequency will be decoded as a single digit. Therefore, where there are two consecutive digits to be transmitted that are the same, the second digit will be replaced by the repeat digit. The repeat digit is nearly always assigned as "E". On reception, if the receiving device decodes a sequence that contains a repeat digit, then it will substitute it with the preceding digit, thereby reconstituting the original sequence.
For example; the sequence "12334" is actually transmitted as "123E4".
If a transmission would have multiple repeats, like "12333", it would be transmitted as "123E3" so that the same problem does not recur.
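The substitution rule is simple enough to sketch in a few lines of Python. The sketch below is illustrative only (the function names are hypothetical, and a real encoder also has to generate the actual tone frequencies and periods described above):

REPEAT_DIGIT = "E"

def encode_selcall(digits):
    # Replace the second of any two identical consecutive digits with the repeat digit.
    out = []
    for d in digits:
        out.append(REPEAT_DIGIT if out and out[-1] == d else d)
    return "".join(out)

def decode_selcall(tones):
    # Substitute each received repeat digit with the digit that preceded it.
    out = []
    for d in tones:
        out.append(out[-1] if d == REPEAT_DIGIT and out else d)
    return "".join(out)

print(encode_selcall("12334"))  # 123E4
print(encode_selcall("12333"))  # 123E3
print(decode_selcall("123E4"))  # 12334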
Implementations
Automatic Number Identification
Automatic Number Identification or ANI, is a scheme that uses selcall for identification purposes. Typically a mobile radio will be configured to transmit a preconfigured selcall sequence when the user presses the ‘push-to-talk’ (PTT) button, which will automatically identify them to other devices listening on the same frequency on the radio network.
There are two ANI schemes; leading-edge and trailing-edge. Leading-edge ANI will transmit the selcall sequence as soon as the user presses the PTT button. Trailing-edge ANI will transmit the selcall sequence as soon as the user releases the PTT button.
Some selcall implementations use the last digit in the selcall sequence to signify some sort of status or condition, for example emergency or duress. Both transmitting and receiving devices are configured such that they attribute the same significance to each of the status codes. Often a device that decodes a certain status can display a predefined message to alert the user.
Together, ANI and status provide a convenient way to rapidly relay information via the radio network, without the user having to speak. For example, an ambulance paramedic in the field, having encountered some emergency, can simply press and release the PTT button on their radio to signal their predicament to the base. The ANI will identify the caller, the status code will indicate the scenario and the base can dispatch assistance as required.
Status Gap
A variation on selcall transmission that includes a status code is for the transmitting device to insert one or two tone periods of silence between the preceding tones and the status tone; the so-called status gap. Another variation is to prolong the status tone by another tone period; the so-called two tone-period status tone.
Proprietary Implementations
In Motorola sales brochures for obsolete equipment marketed in Europe, such as Syntor mobiles, Syntor X mobiles, Mitrek mobiles, Mostar mobiles, and Maxar mobiles, the scheme is named Select 5.
As push-to-talk identifier
A similar proprietary Motorola format used a seven-tone sequence and was called MODAT. Radios with this option were marketed in the US during the 1970s and 1980s. MODAT encoders in Motorola radios can be configured to send five-tone sequences with code plans compatible to CCIR, ZVEI, or the proprietary Motorola seven-tone-sequential format. These systems send tone sequences to identify a unit (unit ID) rather than for selective calling. Some systems used CTCSS and MODAT. In a unit ID application, every radio has a different five- or seven-tone code. Each time the push-to-talk is pressed, the tone sequence is transmitted. This code is displayed at the dispatch console to identify which unit has called. In some cases the code is translated to a vehicle number or other identifier.
External links
Radio technology | Selcall | Technology,Engineering | 1,344 |
52,452,510 | https://en.wikipedia.org/wiki/NGC%206440 | NGC 6440 is a globular cluster of stars in the southern constellation of Sagittarius. It was discovered by German-English astronomer William Herschel on 28 May 1786. With an apparent visual magnitude of 9.3 and an angular diameter of , it can be observed as a fuzzy blob when viewed through a small telescope. Its Shapley–Sawyer Concentration Class is V.
This cluster is located at a distance of from the Sun. It is situated toward the galactic bulge of the Milky Way, about from the Galactic Center. The center of the cluster is fairly concentrated, but does not appear to have undergone a core collapse. It has a core radius of , and a half-mass radius of . Observations suggest it is one of the most metal–rich globular clusters in the galaxy, and it is close to solar metallicity. NGC 6440 is a rich target for astrophysical X-ray sources. To date, thirteen pulsars have been discovered in NGC 6440.
References
External links
Globular clusters
Sagittarius (constellation)
6440 | NGC 6440 | Astronomy | 219 |
72,606,747 | https://en.wikipedia.org/wiki/International%20Bathymetric%20Chart%20of%20the%20Southern%20Ocean | The International Bathymetric Chart of the Southern Ocean (IBCSO) is a regional mapping initiative of the General Bathymetric Chart of the Oceans (GEBCO). IBSCO receives support from the Nippon Foundation – GEBCO Seabed 2030 Project.
Background
IBCSO is a joint project by the International Hydrographic Organization, the Scientific Committee on Antarctic Research, the General Bathymetric Chart of the Oceans and the Seabed 2030 Project. The project aims to identify and pool all bathymetry data in the Southern Ocean and use that data to produce gridded bathymetric maps of the seafloor.
The extent of the project is bound by 50°S, stretching from the southern tip of South America to the coastal waters of Antarctica. The IBCSO project is currently hosted by the bathymetry department at the Alfred Wegener Institute for Polar and Marine Research in Bremerhaven.
Description
Bathymetric data from all data holders are reviewed and pooled at 1 meter resolution to form the basis of the bathymetric data set. This includes all seafloor mapping sources from modern echo sounding methods such as multibeam echosounders and singlebeam echosounders to historic lead line measurements. A weighted blockmedian filter is run across the pooled data set to create a spatial map of high and low quality data.
The low quality data is processed using a spline interpolation algorithm and used as a background layer. The high quality data from e.g. modern multibeam echosounder data is then added on top and incorporated in the background surface using a bending algorithm. Regions with no data coverage are padded by bathymetric data from the dataset collected by the Shuttle Radar Topography Mission.
The gridded product is made available to the public at a 500m x 500m resolution in a polar stereographic projection (EPSG: 9354), with either bedrock data for the Antarctic continent based on BedMachine or ice surface topography derived from various sources such as REMA. The most recent version is also incorporated into the annual release of the General Bathymetric Chart of the Oceans grid.
The first version of IBCSO was published in 2013, covering the Southern Ocean south of 60°S. More than 4,200 million ocean soundings of diverse types and quality were incorporated.
IBCSO became associated with and has been supported by the Nippon Foundation – Seabed 2030 Project since 2017. IBCSO version 2 was published in 2022 and extended the bathymetric map to 50°S, increasing the area covered by a factor of 2.5 compared to IBCSO version 1. 92.7% of map data originate from multibeam data, 6.7% originate from singlebeam data, and the remaining ~1% comes from mixed sources (seismic reflection, lidar, etc.).
Versions
IBCSO Version 1
500x500 meter resolution
coverage up to 60°S
IBCSO version 2
500x500 meter resolution
coverage up to 50°S
References
External links
IBCSO Version 1
IBCSO version 2
IBCSO Products
SCAR website of the IBCSO project
Current coverage of the world's oceans by SEABED2030 Project (hosted by the University of Stockholm, Sweden)
Oceanography
World maps
Hydrography | International Bathymetric Chart of the Southern Ocean | Physics,Environmental_science | 673 |
3,981,163 | https://en.wikipedia.org/wiki/Circulator%20pump | A circulator pump or circulating pump is a specific type of pump used to circulate gases, liquids, or slurries in a closed circuit with small elevation changes. They are commonly found circulating water in a hydronic heating or cooling system. They are specialized in providing a large flow rate rather than providing much head, as they are supposed to only overcome the friction of a piping system, as opposed to a regular centrifugal pump which may need to lift a fluid significantly.
Circulator pumps as used in hydronic systems are usually electrically powered centrifugal pumps. As used in homes, they are often small, sealed, and rated at a fraction of a horsepower, but in commercial applications they range in size up to many horsepower and the electric motor is usually separated from the pump body by some form of mechanical coupling. The sealed units used in home applications often have the motor rotor, pump impeller, and support bearings combined and sealed within the water circuit. This avoids one of the principal challenges faced by the larger, two-part pumps: maintaining a water-tight seal at the point where the pump drive shaft enters the pump body.
Small- to medium-sized circulator pumps are usually supported entirely by the pipe flanges that join them to the rest of the hydronic plumbing. Large pumps are usually pad-mounted.
Pumps that are used solely for closed hydronic systems can be made with cast iron components as the water in the loop will either become de-oxygenated or be treated with chemicals to inhibit corrosion. But pumps that have a steady stream of oxygenated, potable water flowing through them must be made of more expensive materials such as bronze.
Use with domestic hot water
Circulating pumps are often used to circulate domestic hot water so that a faucet will provide hot water instantly upon demand, or (more conserving of energy) a short time after a user's request for hot water. In regions where water conservation issues are rising in importance with rapidly expanding and urbanizing populations local water authorities offer rebates to homeowners and builders that install a circulator pump to save water. In typical one-way plumbing without a circulation pump, water is simply piped from the water heater through the pipes to the tap. Once the tap is shut off, the water remaining in the pipes cools producing the familiar wait for hot water the next time the tap is opened. By adding a circulator pump and constantly circulating a small amount of hot water through the pipes from the heater to the farthest fixture and back to the heater, the water in the pipes is always hot, and no water is wasted during the wait. The tradeoff is the energy wasted in operating the pump and the additional demand on the water heater to make up for the heat lost from the constantly hot pipes.
While the majority of these pumps mount nearest to the hot water heater and have no adjustable temperature capabilities, a significant reduction in energy can be achieved by using a temperature adjustable thermostatically controlled circulation pump mounted at the last fixture on the loop. Thermostatically controlled circulation pumps allow owners to choose the desired temperature of hot water to be maintained within the hot water pipes since most homes do not require degree water instantly out of their taps. Thermostatically controlled circulation pumps cycle on and off to maintain a user's chosen temperature and consume less energy than a continuously operating pump. By installing a thermostatically controlled pump just after the farthest fixture on the loop, cyclic pumping maintains ready hot water up to the last fixture on the loop instead of wasting energy heating the piping from the last fixture to the water heater. Installing a circulation pump at the farthest fixture on a hot water circulation loop is often not feasible due to limited available space, cosmetics, noise restrictions or lack of available power. Recent advancements in hot water circulation technology allow for benefiting from temperature controlled pumping without having to install the pump at the last fixture on the hot water loop. These advanced hot water circulation systems utilize a water contacting temperature probe strategically installed at the last fixture on the loop to minimize the energy wasted heating lengthy return pipes. Thermal insulation applied to the pipes helps mitigate this second loss and minimize the amount of water that must be pumped to keep hot water constantly available.
The traditional hot water recirculation system uses the existing cold water line as return line from the point of use located farthest from the hot water tank back to the hot water tank. The first of two system types has a pump mounted at the hot water heater while a "normally open" thermostatic control valve gets installed at the farthest fixture from the water heater and closes once hot water contacts the valve to control crossover flow between the hot and cold lines. A second type of system uses a thermostatically controlled pump which gets installed at the farthest fixture from the water heater. These thermostatically controlled pumps often have a built-in "normally closed" check-valve which prevents water in the cold water line from entering into the hot water line. Compared to a dedicated return line, using the cold water line as a return has the disadvantage of heating the cold water pipe (and the contained water). Accurate temperature monitoring and active flow control can minimize loss of cold water within the cold water line.
Technological advancements within the industry allow for incorporating timers to limit the operations during specific hours of the day to reduce energy waste by only operating when occupants are likely to use hot water. Additional advancements in technology include pumps which cycle on and off to maintain hot water temperature versus a continuously operating pump which consumes more electrical energy. Reduced energy waste and discomfort is possible by preventing occurrences of hot water line siphoning in open-loop hot water circulation systems which utilize the cold water line to return water back to the water heater. Hot water line siphoning occurs when water from within the hot water line siphons or is forced into the cold water line due to differences in water pressure between the hot and cold water lines. Utilizing a "normally closed" solenoid valve significantly reduces energy consumption by preventing siphoning of non-hot water out of hot water lines during cold water use. Using cold water instantly lowers the water pressure in the cold water lines; the higher water pressure in the hot water lines then forces water through "normally open" thermostatic crossover valves and backflow check valves (which only prevent cold water from flowing into the hot water line), increasing the energy demand on the water heater.
Circulator pump potential side effects
It is important to take note of the increased heat in the piping system, which in turn increases system pressure. Piping that is sensitive to the water condition (i.e., copper, and soft water) will be adversely affected by the continual flow. Although water is conserved, the parasitic heat loss through the piping will be greater as a result of the increased heat passing through it.
Quantitative measures of function
During the pump operation, there is a drop of the liquid flow in the center of the rotor, causing the inflow of the liquid through the suction port. In the event of an excessive pressure decrease, in some parts of the rotor, the pressure can be lower than the saturation pressure corresponding to the temperature of the pumped liquid, causing the so-called cavitation, i.e. liquid evaporation. To prevent this, the pressure in the suction port (at the inlet of the pump) should be higher than the saturation pressure corresponding to the liquid temperature by the net positive suction head (NPSH).
The following parameters are characteristic for the circulating pumps: capacity Q, pump pressure ∆p (delivery head ∆H), energy consumption P with pump unit efficiency η, impeller rotational speed n, NPSH and sound level L.
In practice, the graphical relationship between the values Q, ∆p (∆H), P and η is used. These are called the pump curves. They are determined by studies whose methodology is standardized. These curves are specified for water pumped with a density of 1000 kg/m3 and kinematic viscosity of 1 mm2/s. When the circulating pump is used for liquids of different density and viscosity, the pump curves have to be recalculated. These curves are provided in catalogues and in operation and maintenance manuals, and their course is covered by the pump manufacturer's warranty.
EU regulation for circulators
As from 1 January 2013, circulators must comply with European regulation 641/2009. This regulation is part of the ecodesign policy of the European Union.
See also
Zone valve
References
Bibliography
Pumps | Circulator pump | Physics,Chemistry | 1,777 |
3,076,863 | https://en.wikipedia.org/wiki/Machine%20epsilon | Machine epsilon or machine precision is an upper bound on the relative approximation error due to rounding in floating point number systems. This value characterizes computer arithmetic in the field of numerical analysis, and by extension in the subject of computational science. The quantity is also called macheps and it has the symbols Greek epsilon .
There are two prevailing definitions, denoted here as rounding machine epsilon or the formal definition and interval machine epsilon or mainstream definition.
In the mainstream definition, machine epsilon is independent of rounding method, and is defined simply as the difference between 1 and the next larger floating point number.
In the formal definition, machine epsilon is dependent on the type of rounding used and is also called unit roundoff, which has the symbol bold Roman u.
The two terms can generally be considered to differ by simply a factor of two, with the formal definition yielding an epsilon half the size of the mainstream definition, as summarized in the tables in the next section.
Values for standard hardware arithmetics
The following table lists machine epsilon values for standard floating-point formats.
Alternative definitions for epsilon
The IEEE standard does not define the terms machine epsilon and unit roundoff, so differing definitions of these terms are in use, which can cause some confusion.
The two terms differ by simply a factor of two. The more widely used term (referred to as the mainstream definition in this article) is used in most modern programming languages and defines machine epsilon simply as the difference between 1 and the next larger floating point number. The formal definition can generally be considered to yield an epsilon half the size of the mainstream definition, although its definition does vary depending on the form of rounding used.
The two terms are described at length in the next two subsections.
Formal definition (Rounding machine epsilon)
The formal definition for machine epsilon is the one used by Prof. James Demmel in lecture scripts, the LAPACK linear algebra package, numerics research papers and some scientific computing software. Most numerical analysts use the words machine epsilon and unit roundoff interchangeably with this meaning, which is explored in depth throughout this subsection.
Rounding is a procedure for choosing the representation of a real number in a floating point number system. For a number system and a rounding procedure, machine epsilon is the maximum relative error of the chosen rounding procedure.
Some background is needed to determine a value from this definition. A floating point number system is characterized by a radix which is also called the base, , and by the precision , i.e. the number of radix digits of the significand (including any leading implicit bit). All the numbers with the same exponent, , have the spacing, . The spacing changes at the numbers that are perfect powers of ; the spacing on the side of larger magnitude is times larger than the spacing on the side of smaller magnitude.
Since machine epsilon is a bound for relative error, it suffices to consider numbers with exponent . It also suffices to consider positive numbers. For the usual round-to-nearest kind of rounding, the absolute rounding error is at most half the spacing, or . This value is the biggest possible numerator for the relative error. The denominator in the relative error is the number being rounded, which should be as small as possible to make the relative error large. The worst relative error therefore happens when rounding is applied to numbers of the form where is between and . All these numbers round to with relative error . The maximum occurs when is at the upper end of its range. The in the denominator is negligible compared to the numerator, so it is left off for expediency, and just is taken as machine epsilon. As has been shown here, the relative error is worst for numbers that round to , so machine epsilon also is called unit roundoff meaning roughly "the maximum error that can occur when rounding to the unit value".
Thus, the maximum spacing between a normalised floating point number, , and an adjacent normalised number is .
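As a concrete illustration (assuming IEEE 754 binary64, i.e. radix 2 with a 53-bit significand, and round-to-nearest), the rounding machine epsilon is u = 0.5 × 2^(1−53) = 2^−53 ≈ 1.11 × 10^−16, while the spacing of the floating point numbers at 1 (the interval definition discussed below) is 2^(1−53) = 2^−52 ≈ 2.22 × 10^−16.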
Arithmetic model
Numerical analysis uses machine epsilon to study the effects of rounding error. The actual errors of machine arithmetic are far too complicated to be studied directly, so instead, the following simple model is used. The IEEE arithmetic standard says all floating-point operations are done as if it were possible to perform the infinite-precision operation, and then, the result is rounded to a floating-point number. Suppose (1) , are floating-point numbers, (2) is an arithmetic operation on floating-point numbers such as addition or multiplication, and (3) is the infinite precision operation. According to the standard, the computer calculates:
By the meaning of machine epsilon, the relative error of the rounding is at most machine epsilon in magnitude, so:
where in absolute magnitude is at most or u. The books by Demmel and Higham in the references can be consulted to see how this model is used to analyze the errors of, say, Gaussian elimination.
Mainstream definition (Interval machine epsilon)
This alternative definition is significantly more widespread: machine epsilon is the difference between 1 and the next larger floating point number. This definition is used in language constants in Ada, C, C++, Fortran, MATLAB, Mathematica, Octave, Pascal, Python and Rust etc., and defined in textbooks like «Numerical Recipes» by Press et al.
By this definition, ε equals the value of the unit in the last place relative to 1, i.e. (where is the base of the floating point system and is the precision) and the unit roundoff is u = ε / 2, assuming round-to-nearest mode, and u = ε, assuming round-by-chop.
The prevalence of this definition is rooted in its use in the ISO C Standard for constants relating to floating-point types and corresponding constants in other programming languages. It is also widely used in scientific computing software and in the numerics and computing literature.
How to determine machine epsilon
Where standard libraries do not provide precomputed values (as <float.h> does with FLT_EPSILON, DBL_EPSILON and LDBL_EPSILON for C and <limits> does with std::numeric_limits<T>::epsilon() in C++), the best way to determine machine epsilon is to refer to the table, above, and use the appropriate power formula. Computing machine epsilon is often given as a textbook exercise. The following examples compute interval machine epsilon in the sense of the spacing of the floating point numbers at 1 rather than in the sense of the unit roundoff.
Note that results depend on the particular floating-point format used, such as float, double, long double, or similar as supported by the programming language, the compiler, and the runtime library for the actual platform.
Some formats supported by the processor might not be supported by the chosen compiler and operating system. Other formats might be emulated by the runtime library, including arbitrary-precision arithmetic available in some languages and libraries.
In a strict sense the term machine epsilon means the accuracy directly supported by the processor (or coprocessor), not some accuracy supported by a specific compiler for a specific operating system, unless it's known to use the best format.
IEEE 754 floating-point formats have the property that, when reinterpreted as a two's complement integer of the same width, they monotonically increase over positive values and monotonically decrease over negative values (see the binary representation of 32 bit floats). They also have the property that , and (where is the aforementioned integer reinterpretation of ). In languages that allow type punning and always use IEEE 754–1985, we can exploit this to compute a machine epsilon in constant time. For example, in C:
typedef union {
    long long i64;
    double d64;
} dbl_64;

/* Reinterpret the double as an integer, step to the next representable
   value, and return the spacing between the two neighbouring doubles. */
double machine_eps (double value)
{
    dbl_64 s;
    s.d64 = value;
    s.i64++;
    return s.d64 - value;
}
This will give a result of the same sign as value. If a positive result is always desired, the return statement of machine_eps can be replaced with:
return (s.i64 < 0 ? value - s.d64 : s.d64 - value);
Example in Python:
def machineEpsilon(func=float):
    # Repeatedly halve a candidate epsilon until adding it to 1 no longer
    # changes the result; the last value that did change it is returned.
    machine_epsilon = func(1)
    while func(1) + machine_epsilon != func(1):
        machine_epsilon_last = machine_epsilon
        machine_epsilon = func(machine_epsilon) / func(2)
    return machine_epsilon_last
64-bit doubles give 2.220446e-16, which is 2^−52 as expected.
Approximation
The following simple algorithm can be used to approximate the machine epsilon, to within a factor of two (one order of magnitude) of its true value, using a linear search.
epsilon = 1.0;
while (1.0 + 0.5 * epsilon) ≠ 1.0:
epsilon = 0.5 * epsilon
The machine epsilon can also simply be calculated as two to the negative power of the number of bits used for the mantissa.
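For example, assuming IEEE 754 binary64 doubles with 52 explicit mantissa bits, a quick check in Python:

import sys

# binary64 stores 52 explicit mantissa bits, so the interval machine epsilon is 2**-52
eps = 2.0 ** -52
print(eps)                             # 2.220446049250313e-16
print(eps == sys.float_info.epsilon)   # True where float is an IEEE 754 double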
Relationship to absolute relative error
If is the machine representation of a number then the absolute relative error in the representation is
Proof
The following proof is limited to positive numbers and machine representations using round-by-chop.
If is a positive number we want to represent, it will be between a machine number below and a machine number above .
If , where is the number of bits used for the magnitude of the significand, then:
Since the representation of will be either or ,
Although this proof is limited to positive numbers and round-by-chop, the same method can be used to prove the inequality in relation to negative numbers and round-to-nearest machine representations.
See also
Floating point, general discussion of accuracy issues in floating point arithmetic
Unit in the last place (ULP)
Notes and references
Anderson, E.; LAPACK Users' Guide, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, third edition, 1999.
Cody, William J.; MACHAR: A Subroutine to Dynamically Determine Machine Parameters, ACM Transactions on Mathematical Software, Vol. 14(4), 1988, 303–311.
Besset, Didier H.; Object-Oriented Implementation of Numerical Methods, Morgan & Kaufmann, San Francisco, CA, 2000.
Demmel, James W., Applied Numerical Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1997.
Higham, Nicholas J.; Accuracy and Stability of Numerical Algorithms, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, second edition, 2002.
Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; and Flannery, Brian P.; Numerical Recipes in Fortran 77, 2nd ed., Chap. 20.2, pp. 881–886
Forsythe, George E.; Malcolm, Michael A.; Moler, Cleve B.; "Computer Methods for Mathematical Computations", Prentice-Hall, 1977
External links
MACHAR, a routine (in C and Fortran) to "dynamically compute machine constants" (ACM algorithm 722)
Diagnosing floating point calculations precision, Implementation of MACHAR in Component Pascal and Oberon based on the Fortran 77 version of MACHAR published in Numerical Recipes (Press et al., 1992).
Computer arithmetic
Articles with example C code
Articles with example Python (programming language) code | Machine epsilon | Mathematics | 2,400 |
8,099,541 | https://en.wikipedia.org/wiki/Pogo%20pin | A pogo pin or spring-loaded pin is a type of electrical connector mechanism with spring plungers that is used in many modern electronic applications and in the electronics testing industry. They are used for their improved durability over other electrical contacts, and the resilience of their electrical connection to mechanical shock and vibration.
The name pogo pin comes from the pin's resemblance to a pogo stick: the integrated helical spring in the pin applies a constant normal force against the back of the mating receptacle or contact plate, counteracting any unwanted movement which might otherwise cause an intermittent connection. This helical spring makes pogo pins unique, since most other types of pin mechanisms use a cantilever spring or expansion sleeve.
A complete connection path requires a mating receptacle for the pin to engage, which is termed a target or land. A pogo target consists of a flat or concave metal surface, which unlike the pins, has no moving parts. Targets may be separate components in the complete connector assembly, or in the case of printed circuit boards, simply a plated area of the board.
Spring-loaded pins are precision parts fabricated with a turning and spinning process which does not require a mold, thus allowing the production of smaller quantities at a lower cost.
Structure
A basic spring-loaded pin consists of 3 main parts: a plunger, barrel, and spring. When force is applied to the pin, the spring is compressed and the plunger moves inside the barrel. The shape of the barrel retains the plunger, stopping the spring from pushing it out when the pin is not locked in place.
In the design of electrical contacts, a certain amount of friction is required to hold a connector in place and retain the contact finish. However, high friction is undesirable because it increases stress and wear on the contact springs and housings. Thus, a precise normal force, typically around 1 newton, is required to generate this friction. Since a spring-loaded pin needs to have a slight gap between the plunger and barrel so that it can slide easily, momentary disconnections can happen when there is vibration or movement. In order to counter this, the plunger usually has a small tilt to ensure a continuous connection.
Many manufacturers have created their own proprietary variations on this design, most commonly by varying the interface between the plunger and spring. For example, a ball may be added between the two components, or the plunger may have an angled or countersunk tip.
Materials
The plunger and barrel of pogo pins usually use brass or copper as a base material on which a thin layer of nickel is applied.
As common in electrical connectors, manufacturers often apply a gold plating that improves the durability and contact resistance.
The springs are usually made of copper alloys or spring steel.
Applications
Spring-loaded connectors are used for a wide variety of applications, in both industrial and consumer electronics:
Board-to-board connectors (usually permanent)
Ingress-protected connectors in consumer devices, e.g. smart watches, rugged computers
Battery terminals on laptops
Magnetic charging or signal connectors, e.g. laptop docks and chargers
High-frequency connectors, e.g. antennas, monitor connectors
Printed circuit board testing
Integrated circuit testing
Battery testing
Other electronics testing
Connector arrangement
When pogo pins are used in a connector, they are usually arranged in a dense array, connecting many individual nodes of two electrical circuits. They are commonly found in automatic test equipment like bed of nails testers, where they facilitate the rapid, reliable connection of the devices under test (DUTs). In one extremely high-density configuration, the array takes the form of a ring containing hundreds or thousands of individual pogo pins; this device is sometimes referred to as a pogo tower.
They can also be used for more permanent connections, for example, in the Cray-2 supercomputer.
When used in the highest-performance applications, pogo pins must be very carefully designed to allow not only high reliability across many mating/unmating cycles but also high-fidelity transmission of the electrical signals. The pins themselves must be hard, yet plated with a substance (such as gold) that provides for reliable contact. Within the body of the pin, the plunger must make good electrical contact with the body lest the higher-resistance spring carry the signal (along with the undesirable inductance that the spring represents). The design of pogo pins to be used in matched-impedance circuits is especially challenging; to maintain the correct characteristic impedance, the pins are sometimes arranged with one signal-carrying pin surrounded by four, five, or six grounded pins.
Combination with magnets
Spring-loaded connectors may be combined with magnets to form a strong and reliable connection, a technique which has been employed extensively for consumer electronics such as 2-in-1 PCs and high-frequency data transfer. One notable example of this is Apple's MagSafe connector.
Commercial products
Although often used as a generic name, pogo pin is a registered trademark of Everett Charles Technologies (ECT).
See also
Electrical connector, in which pogo pins are sometimes used
Jumper (computing), performs a similar function but bridges a circuit between two pins
In-circuit test, a common application of pogo pins
Fuzz Button, a high performance electrical connection
References
Electronic test equipment
Electrical connectors | Pogo pin | Technology,Engineering | 1,100 |
12,543,181 | https://en.wikipedia.org/wiki/Ribonucleoprotein%20particle | A ribonucleoprotein particle (RNP) is a complex formed between RNA and RNA-binding proteins (RBPs). The term RNP foci can also be used to denote intracellular compartments involved in processing of RNA transcripts.
RNA/RBP complexes
RBPs interact with RNA through various structural motifs. Aromatic amino acid residues in RNA-binding proteins result in stacking interactions with RNA. Lysine residues in the helical portion of RNA binding proteins help to stabilize interactions with other nucleic acids as a result of the force of attraction between the positively-charged lysine side chains and the negatively-charged phosphate "backbone" of RNA.
It is hypothesized that RNA sequences in the 3'-untranslated region determine the binding of RBPs, and that these RBPs determine the post-transcriptional fate of mRNAs.
RNP granules
RNP granules are a highly diverse group of compartments. These include stress granules, processing bodies, and exosomes in somatic cells. Many RNP granules are cell type and/or species specific. For example, chromatoid bodies are found only in male germ cells, whereas transport granules have so far been found only in neurons and oocytes. RNP granules function mainly by physically separating or associating transcripts with proteins. They function in the storage, processing, degradation and transportation of their associated transcripts.
RNP granules have been shown to have particular importance in cells where post-transcriptional regulation is of vital importance. For example, in neurons where transcripts must be transported and stored in dendrites for the formation and strengthening of connections, in oocytes/embryos where mRNAs are stored for years before being translated, and in developing sperm cells where transcription is halted before development is complete.
See also
Messenger RNP, complex between mRNA and protein(s) present in nucleus
Heterogeneous ribonucleoprotein particle, complexes of RNA and protein present in the cell nucleus
References
Cell biology | Ribonucleoprotein particle | Biology | 422 |
3,989 | https://en.wikipedia.org/wiki/Banach%20space | In mathematics, more specifically in functional analysis, a Banach space (pronounced ) is a complete normed vector space. Thus, a Banach space is a vector space with a metric that allows the computation of vector length and distance between vectors and is complete in the sense that a Cauchy sequence of vectors always converges to a well-defined limit that is within the space.
Banach spaces are named after the Polish mathematician Stefan Banach, who introduced this concept and studied it systematically in 1920–1922 along with Hans Hahn and Eduard Helly.
Maurice René Fréchet was the first to use the term "Banach space" and Banach in turn then coined the term "Fréchet space".
Banach spaces originally grew out of the study of function spaces by Hilbert, Fréchet, and Riesz earlier in the century. Banach spaces play a central role in functional analysis. In other areas of analysis, the spaces under study are often Banach spaces.
Definition
A Banach space is a complete normed space
A normed space is a pair
consisting of a vector space over a scalar field (where is commonly or ) together with a distinguished
norm Like all norms, this norm induces a translation invariant
distance function, called the canonical or (norm) induced metric, defined for all vectors by
This makes into a metric space
A sequence is called or or if for every real there exists some index such that
whenever and are greater than
The normed space is called a and the canonical metric is called a if is a , which by definition means for every Cauchy sequence in there exists some such that
where because this sequence's convergence to can equivalently be expressed as:
The norm of a normed space is called a if is a Banach space.
L-semi-inner product
For any normed space there exists an L-semi-inner product on such that for all in general, there may be infinitely many L-semi-inner products that satisfy this condition. L-semi-inner products are a generalization of inner products, which are what fundamentally distinguish Hilbert spaces from all other Banach spaces. This shows that all normed spaces (and hence all Banach spaces) can be considered as being generalizations of (pre-)Hilbert spaces.
Characterization in terms of series
The vector space structure allows one to relate the behavior of Cauchy sequences to that of converging series of vectors.
A normed space is a Banach space if and only if each absolutely convergent series in converges to a value that lies within
Topology
The canonical metric of a normed space induces the usual metric topology on which is referred to as the canonical or norm induced topology.
Every normed space is automatically assumed to carry this Hausdorff topology, unless indicated otherwise.
With this topology, every Banach space is a Baire space, although there exist normed spaces that are Baire but not Banach. The norm is always a continuous function with respect to the topology that it induces.
The open and closed balls of radius centered at a point are, respectively, the sets
Any such ball is a convex and bounded subset of but a compact ball / neighborhood exists if and only if is a finite-dimensional vector space.
In particular, no infinite–dimensional normed space can be locally compact or have the Heine–Borel property.
If is a vector and is a scalar then
Using shows that this norm-induced topology is translation invariant, which means that for any and the subset is open (respectively, closed) in if and only if this is true of its translation
Consequently, the norm induced topology is completely determined by any neighbourhood basis at the origin. Some common neighborhood bases at the origin include:
where is a sequence in of positive real numbers that converges to in (such as or for instance).
So for example, every open subset of can be written as a union
indexed by some subset where every may be picked from the aforementioned sequence (the open balls can be replaced with closed balls, although then the indexing set and radii may also need to be replaced).
Additionally, can always be chosen to be countable if is a , which by definition means that contains some countable dense subset.
Homeomorphism classes of separable Banach spaces
All finite–dimensional normed spaces are separable Banach spaces and any two Banach spaces of the same finite dimension are linearly homeomorphic.
Every separable infinite–dimensional Hilbert space is linearly isometrically isomorphic to the separable Hilbert sequence space with its usual norm
The Anderson–Kadec theorem states that every infinite–dimensional separable Fréchet space is homeomorphic to the product space of countably many copies of (this homeomorphism need not be a linear map).
Thus all infinite–dimensional separable Fréchet spaces are homeomorphic to each other (or said differently, their topology is unique up to a homeomorphism).
Since every Banach space is a Fréchet space, this is also true of all infinite–dimensional separable Banach spaces, including
In fact, is even homeomorphic to its own unit which stands in sharp contrast to finite–dimensional spaces (the Euclidean plane is not homeomorphic to the unit circle, for instance).
This pattern in homeomorphism classes extends to generalizations of metrizable (locally Euclidean) topological manifolds known as , which are metric spaces that are around every point, locally homeomorphic to some open subset of a given Banach space (metric Hilbert manifolds and metric Fréchet manifolds are defined similarly).
For example, every open subset of a Banach space is canonically a metric Banach manifold modeled on since the inclusion map is an open local homeomorphism.
Using Hilbert space microbundles, David Henderson showed in 1969 that every metric manifold modeled on a separable infinite–dimensional Banach (or Fréchet) space can be topologically embedded as an subset of and, consequently, also admits a unique smooth structure making it into a Hilbert manifold.
Compact and convex subsets
There is a compact subset of whose convex hull is closed and thus also compact (see this footnote for an example).
However, like in all Banach spaces, the convex hull of this (and every other) compact subset will be compact. But if a normed space is not complete then it is in general guaranteed that will be compact whenever is; an example can even be found in a (non-complete) pre-Hilbert vector subspace of
As a topological vector space
This norm-induced topology also makes into what is known as a topological vector space (TVS), which by definition is a vector space endowed with a topology making the operations of addition and scalar multiplication continuous. It is emphasized that the TVS is only a vector space together with a certain type of topology; that is to say, when considered as a TVS, it is not associated with any particular norm or metric (both of which are "forgotten"). This Hausdorff TVS is even locally convex because the set of all open balls centered at the origin forms a neighbourhood basis at the origin consisting of convex balanced open sets. This TVS is also normable, which by definition refers to any TVS whose topology is induced by some (possibly unknown) norm. Normable TVSs are characterized by being Hausdorff and having a bounded convex neighborhood of the origin.
All Banach spaces are barrelled spaces, which means that every barrel is a neighborhood of the origin (all closed balls centered at the origin are barrels, for example) and guarantees that the Banach–Steinhaus theorem holds.
Comparison of complete metrizable vector topologies
The open mapping theorem implies that if and are topologies on that make both and into complete metrizable TVS (for example, Banach or Fréchet spaces) and if one topology is finer or coarser than the other then they must be equal (that is, if or then ).
So for example, if and are Banach spaces with topologies and and if one of these spaces has some open ball that is also an open subset of the other space (or equivalently, if one of or is continuous) then their topologies are identical and their norms are equivalent.
Completeness
Complete norms and equivalent norms
Two norms, and on a vector space are said to be equivalent if they induce the same topology; this happens if and only if there exist positive real numbers such that for all If and are two equivalent norms on a vector space then is a Banach space if and only if is a Banach space.
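Writing the two norms as ‖·‖ and ‖·‖′ (notation assumed), the inequality in question is the standard one: the norms are equivalent exactly when there exist constants c, C > 0 with

```latex
c\,\|x\| \;\le\; \|x\|' \;\le\; C\,\|x\| \qquad \text{for all } x \in X .
```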
See this footnote for an example of a continuous norm on a Banach space that is not equivalent to that Banach space's given norm.
All norms on a finite-dimensional vector space are equivalent and every finite-dimensional normed space is a Banach space.
Complete norms vs complete metrics
A metric on a vector space is induced by a norm if and only if it is translation invariant and absolutely homogeneous, which means that for all scalars and all vectors; in this case the function defines a norm and the canonical metric induced by this norm is equal to the original metric.
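In symbols (notation assumed: the metric d, a scalar s, and vectors x, y, z), the two conditions and the resulting correspondence read:

```latex
d(x+z,\,y+z) = d(x,y), \qquad d(sx,\,sy) = |s|\,d(x,y), \qquad
\|x\| := d(x,0), \qquad d(x,y) = \|x-y\| .
```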
Suppose that is a normed space and that is the norm topology induced on it. Suppose that is a metric on the space such that the topology it induces is equal to this norm topology. If the metric is translation invariant then the normed space is a Banach space if and only if the metric space is complete.
If the metric is not translation invariant, then it may be possible for the normed space to be a Banach space but for the metric space to not be complete (see this footnote for an example). In contrast, a theorem of Klee, which also applies to all metrizable topological vector spaces, implies that if there exists a complete metric that induces the norm topology, then the normed space is a Banach space.
A Fréchet space is a locally convex topological vector space whose topology is induced by some translation-invariant complete metric.
Every Banach space is a Fréchet space but not conversely; indeed, there even exist Fréchet spaces on which no norm is a continuous function (such as the space of real sequences with the product topology).
However, the topology of every Fréchet space is induced by some countable family of real-valued (necessarily continuous) maps called seminorms, which are generalizations of norms.
It is even possible for a Fréchet space to have a topology that is induced by a countable family of norms (such norms would necessarily be continuous)
but to not be a Banach/normable space because its topology can not be defined by any norm.
An example of such a space is the Fréchet space whose definition can be found in the article on spaces of test functions and distributions.
Complete norms vs complete topological vector spaces
There is another notion of completeness besides metric completeness and that is the notion of a complete topological vector space (TVS) or TVS-completeness, which uses the theory of uniform spaces.
Specifically, the notion of TVS-completeness uses a unique translation-invariant uniformity, called the canonical uniformity, that depends only on vector subtraction and the topology that the vector space is endowed with, and so in particular, this notion of TVS completeness is independent of whatever norm induced the topology (and even applies to TVSs that are not metrizable).
Every Banach space is a complete TVS. Moreover, a normed space is a Banach space (that is, its norm-induced metric is complete) if and only if it is complete as a topological vector space.
If is a metrizable topological vector space (such as any norm induced topology, for example), then it is a complete TVS if and only if it is a sequentially complete TVS, meaning that it is enough to check that every Cauchy sequence in the space converges to some point of it (that is, there is no need to consider the more general notion of arbitrary Cauchy nets).
If is a topological vector space whose topology is induced by some (possibly unknown) norm (such spaces are called normable), then it is a complete topological vector space if and only if it may be assigned a norm that induces its topology and also makes it into a Banach space.
A Hausdorff locally convex topological vector space is normable if and only if its strong dual space is normable, in which case is a Banach space ( denotes the strong dual space of whose topology is a generalization of the dual norm-induced topology on the continuous dual space ; see this footnote for more details).
If is a metrizable locally convex TVS, then is normable if and only if is a Fréchet–Urysohn space.
This shows that in the category of locally convex TVSs, Banach spaces are exactly those complete spaces that are both metrizable and have metrizable strong dual spaces.
Completions
Every normed space can be isometrically embedded onto a dense vector subspace of some Banach space, where this Banach space is called a completion of the normed space. This Hausdorff completion is unique up to isometric isomorphism.
More precisely, for every normed space there exist a Banach space and a mapping such that is an isometric mapping and is dense in If is another Banach space such that there is an isometric isomorphism from onto a dense subset of then is isometrically isomorphic to
This Banach space is the Hausdorff completion of the normed space The underlying metric space for is the same as the metric completion of with the vector space operations extended from to The completion of is sometimes denoted by
General theory
Linear operators, isomorphisms
If and are normed spaces over the same ground field the set of all continuous -linear maps is denoted by In infinite-dimensional spaces, not all linear maps are continuous. A linear mapping from a normed space to another normed space is continuous if and only if it is bounded on the closed unit ball of Thus, the vector space can be given the operator norm
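The operator norm referred to here is the standard one: for a continuous linear map T between the two normed spaces (notation assumed),

```latex
\|T\| \;=\; \sup \{\, \|Tx\| : x \in X,\ \|x\| \le 1 \,\} .
```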
For a Banach space, the space is a Banach space with respect to this norm. In categorical contexts, it is sometimes convenient to restrict the function space between two Banach spaces to only the short maps; in that case the space reappears as a natural bifunctor.
If is a Banach space, the space forms a unital Banach algebra; the multiplication operation is given by the composition of linear maps.
If and are normed spaces, they are isomorphic normed spaces if there exists a linear bijection such that and its inverse are continuous. If one of the two spaces or is complete (or reflexive, separable, etc.) then so is the other space. Two normed spaces and are isometrically isomorphic if in addition, is an isometry, that is, for every in The Banach–Mazur distance between two isomorphic but not isometric spaces and gives a measure of how much the two spaces and differ.
Continuous and bounded linear functions and seminorms
Every continuous linear operator is a bounded linear operator and if dealing only with normed spaces then the converse is also true. That is, a linear operator between two normed spaces is bounded if and only if it is a continuous function. So in particular, because the scalar field (which is or ) is a normed space, a linear functional on a normed space is a bounded linear functional if and only if it is a continuous linear functional. This allows for continuity-related results (like those below) to be applied to Banach spaces. Although boundedness is the same as continuity for linear maps between normed spaces, the term "bounded" is more commonly used when dealing primarily with Banach spaces.
If is a subadditive function (such as a norm, a sublinear function, or real linear functional), then is continuous at the origin if and only if is uniformly continuous on all of ; and if in addition then is continuous if and only if its absolute value is continuous, which happens if and only if is an open subset of
And very importantly for applying the Hahn–Banach theorem, a linear functional is continuous if and only if this is true of its real part; moreover, the real part completely determines the functional, which is why the Hahn–Banach theorem is often stated only for real linear functionals.
Also, a linear functional on is continuous if and only if the seminorm is continuous, which happens if and only if there exists a continuous seminorm such that ; this last statement involving the linear functional and seminorm is encountered in many versions of the Hahn–Banach theorem.
Basic notions
The Cartesian product of two normed spaces is not canonically equipped with a norm. However, several equivalent norms are commonly used, such as
which correspond (respectively) to the coproduct and product in the category of Banach spaces and short maps (discussed above). For finite (co)products, these norms give rise to isomorphic normed spaces, and the product (or the direct sum ) is complete if and only if the two factors are complete.
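Two standard choices of such a norm (the specific formulas are assumptions here, but they are the usual ones) are the sum norm and the max norm on the Cartesian product, which realize the coproduct and the product respectively:

```latex
\|(x,y)\|_1 = \|x\|_X + \|y\|_Y , \qquad
\|(x,y)\|_\infty = \max\!\big( \|x\|_X ,\, \|y\|_Y \big) .
```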
If is a closed linear subspace of a normed space there is a natural norm on the quotient space
The quotient is a Banach space when is complete. The quotient map from onto sending to its class is linear, onto and has norm except when in which case the quotient is the null space.
The closed linear subspace of is said to be a complemented subspace of if is the range of a surjective bounded linear projection In this case, the space is isomorphic to the direct sum of and the kernel of the projection
Suppose that and are Banach spaces and that There exists a canonical factorization of as
where the first map is the quotient map, and the second map sends every class in the quotient to the image in This is well defined because all elements in the same class have the same image. The mapping is a linear bijection from onto the range whose inverse need not be bounded.
Classical spaces
Basic examples of Banach spaces include: the Lp spaces and their special cases, the sequence spaces that consist of scalar sequences indexed by natural numbers ; among them, the space of absolutely summable sequences and the space of square summable sequences; the space of sequences tending to zero and the space of bounded sequences; the space of continuous scalar functions on a compact Hausdorff space equipped with the max norm,
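The max norm mentioned at the end of this list is the usual supremum norm on the space of continuous functions on a compact Hausdorff space K (notation assumed):

```latex
\|f\| \;=\; \max_{x \in K} |f(x)| .
```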
According to the Banach–Mazur theorem, every Banach space is isometrically isomorphic to a subspace of some For every separable Banach space there is a closed subspace of such that
Any Hilbert space serves as an example of a Banach space. A Hilbert space on is complete for a norm of the form
where
is the inner product, linear in its first argument that satisfies the following:
For example, the space is a Hilbert space.
The Hardy spaces and the Sobolev spaces are examples of Banach spaces that are related to spaces and have additional structure. They are important in different branches of analysis, harmonic analysis and partial differential equations among others.
Banach algebras
A Banach algebra is a Banach space over or together with a structure of algebra over , such that the product map is continuous. An equivalent norm on can be found so that for all
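The condition satisfied by the renormed product is the usual submultiplicativity of a Banach algebra norm (symbols assumed):

```latex
\|x\,y\| \;\le\; \|x\|\,\|y\| \qquad \text{for all elements } x, y .
```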
Examples
The Banach space with the pointwise product, is a Banach algebra.
The disk algebra consists of functions holomorphic in the open unit disk and continuous on its closure: Equipped with the max norm on the disk algebra is a closed subalgebra of
The Wiener algebra is the algebra of functions on the unit circle with absolutely convergent Fourier series. Via the map associating a function on to the sequence of its Fourier coefficients, this algebra is isomorphic to the Banach algebra where the product is the convolution of sequences.
For every Banach space the space of bounded linear operators on with the composition of maps as product, is a Banach algebra.
A C*-algebra is a complex Banach algebra with an antilinear involution such that The space of bounded linear operators on a Hilbert space is a fundamental example of C*-algebra. The Gelfand–Naimark theorem states that every C*-algebra is isometrically isomorphic to a C*-subalgebra of some The space of complex continuous functions on a compact Hausdorff space is an example of commutative C*-algebra, where the involution associates to every function its complex conjugate
Dual space
If is a normed space and the underlying field (either the real or the complex numbers), the continuous dual space is the space of continuous linear maps from into or continuous linear functionals.
The notation for the continuous dual is in this article.
Since is a Banach space (using the absolute value as norm), the dual is a Banach space, for every normed space The Dixmier–Ng theorem characterizes the dual spaces of Banach spaces.
The main tool for proving the existence of continuous linear functionals is the Hahn–Banach theorem.
In particular, every continuous linear functional on a subspace of a normed space can be continuously extended to the whole space, without increasing the norm of the functional.
An important special case is the following: for every vector in a normed space there exists a continuous linear functional on such that
When is not equal to the zero vector, the functional must have norm one, and is called a norming functional for
The Hahn–Banach separation theorem states that two disjoint non-empty convex sets in a real Banach space, one of them open, can be separated by a closed affine hyperplane.
The open convex set lies strictly on one side of the hyperplane, the second convex set lies on the other side but may touch the hyperplane.
A subset in a Banach space is total if the linear span of is dense in The subset is total in if and only if the only continuous linear functional that vanishes on is the zero functional: this equivalence follows from the Hahn–Banach theorem.
If is the direct sum of two closed linear subspaces and then the dual of is isomorphic to the direct sum of the duals of and
If is a closed linear subspace in one can associate the in the dual,
The orthogonal is a closed linear subspace of the dual. The dual of is isometrically isomorphic to
The dual of is isometrically isomorphic to
The dual of a separable Banach space need not be separable, but:
When is separable, the above criterion for totality can be used for proving the existence of a countable total subset in
Weak topologies
The weak topology on a Banach space is the coarsest topology on for which all elements in the continuous dual space are continuous.
The norm topology is therefore finer than the weak topology.
It follows from the Hahn–Banach separation theorem that the weak topology is Hausdorff, and that a norm-closed convex subset of a Banach space is also weakly closed.
A norm-continuous linear map between two Banach spaces and is also weakly continuous, that is, continuous from the weak topology of to that of
If is infinite-dimensional, there exist linear maps which are not continuous. The space of all linear maps from to the underlying field (this space is called the algebraic dual space, to distinguish it from the continuous dual space) also induces a topology on which is finer than the weak topology, and much less used in functional analysis.
On a dual space there is a topology weaker than the weak topology of the dual, called the weak* topology.
It is the coarsest topology on for which all evaluation maps where ranges over are continuous.
Its importance comes from the Banach–Alaoglu theorem.
The Banach–Alaoglu theorem can be proved using Tychonoff's theorem about infinite products of compact Hausdorff spaces.
When is separable, the unit ball of the dual is a metrizable compact in the weak* topology.
Examples of dual spaces
The dual of is isometrically isomorphic to : for every bounded linear functional on there is a unique element such that
The dual of is isometrically isomorphic to .
The dual of Lebesgue space is isometrically isomorphic to when and
For every vector in a Hilbert space the mapping
defines a continuous linear functional on The Riesz representation theorem states that every continuous linear functional on is of the form for a uniquely defined vector in
The mapping is an antilinear isometric bijection from onto its dual
When the scalars are real, this map is an isometric isomorphism.
When is a compact Hausdorff topological space, the dual of is the space of Radon measures in the sense of Bourbaki.
The subset of consisting of non-negative measures of mass 1 (probability measures) is a convex w*-closed subset of the unit ball of
The extreme points of are the Dirac measures on
The set of Dirac measures on equipped with the w*-topology, is homeomorphic to
The result has been extended by Amir and Cambern to the case when the multiplicative Banach–Mazur distance between and is
The theorem is no longer true when the distance is
In the commutative Banach algebra the maximal ideals are precisely kernels of Dirac measures on
More generally, by the Gelfand–Mazur theorem, the maximal ideals of a unital commutative Banach algebra can be identified with its characters—not merely as sets but as topological spaces: the former with the hull-kernel topology and the latter with the w*-topology.
In this identification, the maximal ideal space can be viewed as a w*-compact subset of the unit ball in the dual
Not every unital commutative Banach algebra is of the form for some compact Hausdorff space However, this statement holds if one places in the smaller category of commutative C*-algebras.
Gelfand's representation theorem for commutative C*-algebras states that every commutative unital C*-algebra is isometrically isomorphic to a space.
The Hausdorff compact space here is again the maximal ideal space, also called the spectrum of in the C*-algebra context.
Bidual
If is a normed space, the (continuous) dual of the dual is called the bidual, or second dual, of
For every normed space there is a natural map,
This defines as a continuous linear functional on that is, an element of The map is a linear map from to
As a consequence of the existence of a norming functional for every this map is isometric, thus injective.
For example, the dual of is identified with and the dual of is identified with the space of bounded scalar sequences.
Under these identifications, is the inclusion map from to It is indeed isometric, but not onto.
If is surjective, then the normed space is called reflexive (see below).
Being the dual of a normed space, the bidual is complete, therefore, every reflexive normed space is a Banach space.
Using the isometric embedding it is customary to consider a normed space as a subset of its bidual.
When is a Banach space, it is viewed as a closed linear subspace of If is not reflexive, the unit ball of is a proper subset of the unit ball of
The Goldstine theorem states that the unit ball of a normed space is weakly*-dense in the unit ball of the bidual.
In other words, for every in the bidual, there exists a net in so that
The net may be replaced by a weakly*-convergent sequence when the dual is separable.
On the other hand, elements of the bidual of that are not in cannot be weak*-limits of sequences in since is weakly sequentially complete.
Banach's theorems
Here are the main general results about Banach spaces that go back to the time of Banach's book () and are related to the Baire category theorem.
According to this theorem, a complete metric space (such as a Banach space, a Fréchet space or an F-space) cannot be equal to a union of countably many closed subsets with empty interiors.
Therefore, a Banach space cannot be the union of countably many closed subspaces, unless it is already equal to one of them; a Banach space with a countable Hamel basis is finite-dimensional.
The Banach–Steinhaus theorem is not limited to Banach spaces.
It can be extended for example to the case where is a Fréchet space, provided the conclusion is modified as follows: under the same hypothesis, there exists a neighborhood of in such that all in are uniformly bounded on
This result is a direct consequence of the preceding Banach isomorphism theorem and of the canonical factorization of bounded linear maps.
This is another consequence of Banach's isomorphism theorem, applied to the continuous bijection from onto sending to the sum
Reflexivity
The normed space is called reflexive when the natural map
is surjective. Reflexive normed spaces are Banach spaces.
This is a consequence of the Hahn–Banach theorem.
Further, by the open mapping theorem, if there is a bounded linear operator from the Banach space onto the Banach space then is reflexive.
Indeed, if the dual of a Banach space is separable, then is separable.
If is reflexive and separable, then the dual of is separable, so is separable.
Hilbert spaces are reflexive. The spaces are reflexive when More generally, uniformly convex spaces are reflexive, by the Milman–Pettis theorem.
The spaces are not reflexive.
In these examples of non-reflexive spaces the bidual is "much larger" than
Namely, under the natural isometric embedding of into given by the Hahn–Banach theorem, the quotient is infinite-dimensional, and even nonseparable.
However, Robert C. James has constructed an example of a non-reflexive space, usually called "the James space" and denoted by such that the quotient is one-dimensional.
Furthermore, this space is isometrically isomorphic to its bidual.
When is reflexive, it follows that all closed and bounded convex subsets of are weakly compact.
In a Hilbert space the weak compactness of the unit ball is very often used in the following way: every bounded sequence in has weakly convergent subsequences.
Weak compactness of the unit ball provides a tool for finding solutions in reflexive spaces to certain optimization problems.
For example, every convex continuous function on the unit ball of a reflexive space attains its minimum at some point in
As a special case of the preceding result, when is a reflexive space over every continuous linear functional in attains its maximum on the unit ball of
The following theorem of Robert C. James provides a converse statement.
The theorem can be extended to give a characterization of weakly compact convex sets.
On every non-reflexive Banach space there exist continuous linear functionals that are not norm-attaining.
However, the Bishop–Phelps theorem states that norm-attaining functionals are norm dense in the dual of
Weak convergences of sequences
A sequence in a Banach space is weakly convergent to a vector if converges to for every continuous linear functional in the dual The sequence is a weakly Cauchy sequence if converges to a scalar limit for every in
A sequence in the dual is weakly* convergent to a functional if converges to for every in
Weakly Cauchy sequences, weakly convergent and weakly* convergent sequences are norm bounded, as a consequence of the Banach–Steinhaus theorem.
When the sequence in is a weakly Cauchy sequence, the limit above defines a bounded linear functional on the dual that is, an element of the bidual of and is the limit of in the weak*-topology of the bidual.
The Banach space is weakly sequentially complete if every weakly Cauchy sequence is weakly convergent in
It follows from the preceding discussion that reflexive spaces are weakly sequentially complete.
An orthonormal sequence in a Hilbert space is a simple example of a weakly convergent sequence, with limit equal to the zero vector.
The unit vector basis of for or of is another example of a weakly null sequence, that is, a sequence that converges weakly to zero.
For every weakly null sequence in a Banach space, there exists a sequence of convex combinations of vectors from the given sequence that is norm-converging to zero.
The unit vector basis of is not weakly Cauchy.
Weakly Cauchy sequences in are weakly convergent, since -spaces are weakly sequentially complete.
Actually, weakly convergent sequences in are norm convergent. This means that satisfies Schur's property.
Results involving the basis
Weakly Cauchy sequences and the basis are the opposite cases of the dichotomy established in the following deep result of H. P. Rosenthal.
A complement to this result is due to Odell and Rosenthal (1975).
By the Goldstine theorem, every element of the unit ball of is weak*-limit of a net in the unit ball of When does not contain every element of is the weak*-limit of a sequence in the unit ball of
When the Banach space is separable, the unit ball of the dual equipped with the weak*-topology, is a metrizable compact space and every element in the bidual defines a bounded function on :
This function is continuous for the compact topology of if and only if is actually in considered as subset of
Assume in addition for the rest of the paragraph that does not contain
By the preceding result of Odell and Rosenthal, the function is the pointwise limit of a sequence of continuous functions on the unit ball of the dual; it is therefore a first Baire class function on that ball.
The unit ball of the bidual is a pointwise compact subset of the first Baire class on
Sequences, weak and weak* compactness
When is separable, the unit ball of the dual is weak*-compact by the Banach–Alaoglu theorem and metrizable for the weak* topology, hence every bounded sequence in the dual has weakly* convergent subsequences.
This applies to separable reflexive spaces, but more is true in this case, as stated below.
The weak topology of a Banach space is metrizable if and only if is finite-dimensional. If the dual is separable, the weak topology of the unit ball of is metrizable.
This applies in particular to separable reflexive Banach spaces.
Although the weak topology of the unit ball is not metrizable in general, one can characterize weak compactness using sequences.
A Banach space is reflexive if and only if each bounded sequence in has a weakly convergent subsequence.
A weakly compact subset in is norm-compact. Indeed, every sequence in has weakly convergent subsequences by Eberlein–Šmulian, that are norm convergent by the Schur property of
Type and cotype
A way to classify Banach spaces is through the probabilistic notions of type and cotype; these two measure how far a Banach space is from a Hilbert space.
Schauder bases
A Schauder basis in a Banach space is a sequence of vectors in with the property that for every vector there exist uniquely defined scalars depending on such that
Banach spaces with a Schauder basis are necessarily separable, because the countable set of finite linear combinations with rational coefficients (say) is dense.
It follows from the Banach–Steinhaus theorem that the linear mappings are uniformly bounded by some constant
Let denote the coordinate functionals which assign to every in the coordinate of in the above expansion.
They are called biorthogonal functionals. When the basis vectors have norm the coordinate functionals have norm in the dual of
Most classical separable spaces have explicit bases.
The Haar system is a basis for
The trigonometric system is a basis in when
The Schauder system is a basis in the space
The question of whether the disk algebra has a basis remained open for more than forty years, until Bočkarev showed in 1974 that admits a basis constructed from the Franklin system.
Since every vector in a Banach space with a basis is the limit of with of finite rank and uniformly bounded, the space satisfies the bounded approximation property.
The first example by Enflo of a space failing the approximation property was at the same time the first example of a separable Banach space without a Schauder basis.
Robert C. James characterized reflexivity in Banach spaces with a basis: the space with a Schauder basis is reflexive if and only if the basis is both shrinking and boundedly complete.
In this case, the biorthogonal functionals form a basis of the dual of
Tensor product
Let and be two -vector spaces. The tensor product of and is a -vector space with a bilinear mapping which has the following universal property:
If is any bilinear mapping into a -vector space then there exists a unique linear mapping such that
The image under of a couple in is denoted by and called a simple tensor.
Every element in is a finite sum of such simple tensors.
There are various norms that can be placed on the tensor product of the underlying vector spaces, amongst others the projective cross norm and injective cross norm introduced by A. Grothendieck in 1955.
In general, the tensor product of complete spaces is not complete again. When working with Banach spaces, it is customary to say that the projective tensor product of two Banach spaces and is the completion of the algebraic tensor product equipped with the projective tensor norm, and similarly for the injective tensor product
Grothendieck proved in particular that
where is a compact Hausdorff space, the Banach space of continuous functions from to and the space of Bochner-measurable and integrable functions from to and where the isomorphisms are isometric.
The two isomorphisms above are the respective extensions of the map sending the tensor to the vector-valued function
Tensor products and the approximation property
Let be a Banach space. The tensor product is identified isometrically with the closure in of the set of finite rank operators.
When has the approximation property, this closure coincides with the space of compact operators on
For every Banach space there is a natural norm linear map
obtained by extending the identity map of the algebraic tensor product. Grothendieck related the approximation problem to the question of whether this map is one-to-one when is the dual of
Precisely, for every Banach space the map
is one-to-one if and only if has the approximation property.
Grothendieck conjectured that and must be different whenever and are infinite-dimensional Banach spaces.
This was disproved by Gilles Pisier in 1983.
Pisier constructed an infinite-dimensional Banach space such that and are equal. Furthermore, just as Enflo's example, this space is a "hand-made" space that fails to have the approximation property. On the other hand, Szankowski proved that the classical space does not have the approximation property.
Some classification results
Characterizations of Hilbert space among Banach spaces
A necessary and sufficient condition for the norm of a Banach space to be associated to an inner product is the parallelogram identity:
It follows, for example, that the Lebesgue space is a Hilbert space only when
If this identity is satisfied, the associated inner product is given by the polarization identity. In the case of real scalars, this gives:
For complex scalars, defining the inner product so as to be -linear in antilinear in the polarization identity gives:
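The identities referred to in the last few sentences are the standard ones; with the inner product written ⟨·,·⟩ and taken linear in its first argument (notation assumed), they read:

```latex
\|x+y\|^2 + \|x-y\|^2 = 2\|x\|^2 + 2\|y\|^2 \qquad \text{(parallelogram identity)}

\langle x, y\rangle = \tfrac{1}{4}\big( \|x+y\|^2 - \|x-y\|^2 \big) \qquad \text{(real case)}

\langle x, y\rangle = \tfrac{1}{4}\big( \|x+y\|^2 - \|x-y\|^2 \big)
                    + \tfrac{i}{4}\big( \|x+iy\|^2 - \|x-iy\|^2 \big) \qquad \text{(complex case)}
```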
To see that the parallelogram law is sufficient, one observes in the real case that the form is symmetric, and in the complex case, that it satisfies the Hermitian symmetry property. The parallelogram law implies that the form is additive in its first argument.
It follows that it is linear over the rationals, thus linear by continuity.
Several characterizations of spaces isomorphic (rather than isometric) to Hilbert spaces are available.
The parallelogram law can be extended to more than two vectors, and weakened by the introduction of a two-sided inequality with a constant : Kwapień proved that if
for every integer and all families of vectors then the Banach space is isomorphic to a Hilbert space.
Here, denotes the average over the possible choices of signs
In the same article, Kwapień proved that the validity of a Banach-valued Parseval's theorem for the Fourier transform characterizes Banach spaces isomorphic to Hilbert spaces.
Lindenstrauss and Tzafriri proved that a Banach space in which every closed linear subspace is complemented (that is, is the range of a bounded linear projection) is isomorphic to a Hilbert space. The proof rests upon Dvoretzky's theorem about Euclidean sections of high-dimensional centrally symmetric convex bodies. In other words, Dvoretzky's theorem states that for every integer any finite-dimensional normed space, with dimension sufficiently large compared to contains subspaces nearly isometric to the -dimensional Euclidean space.
The next result gives the solution of the so-called homogeneous space problem. An infinite-dimensional Banach space is said to be homogeneous if it is isomorphic to all its infinite-dimensional closed subspaces. A Banach space isomorphic to is homogeneous, and Banach asked for the converse.
An infinite-dimensional Banach space is hereditarily indecomposable when no subspace of it can be isomorphic to the direct sum of two infinite-dimensional Banach spaces.
The Gowers dichotomy theorem asserts that every infinite-dimensional Banach space contains either a subspace with an unconditional basis, or a hereditarily indecomposable subspace, which in particular is not isomorphic to its closed hyperplanes.
If is homogeneous, it must therefore have an unconditional basis. It follows then from the partial solution obtained by Komorowski and Tomczak–Jaegermann, for spaces with an unconditional basis, that is isomorphic to
Metric classification
If is an isometry from the Banach space onto the Banach space (where both and are vector spaces over ), then the Mazur–Ulam theorem states that must be an affine transformation.
In particular, if the isometry maps the zero of one space to the zero of the other, then it must be linear. This result implies that the metric in Banach spaces, and more generally in normed spaces, completely captures their linear structure.
Topological classification
Finite-dimensional Banach spaces are homeomorphic as topological spaces if and only if they have the same dimension as real vector spaces.
The Anderson–Kadec theorem (1965–66) states that any two infinite-dimensional separable Banach spaces are homeomorphic as topological spaces. Kadec's theorem was extended by Torunczyk, who proved that any two Banach spaces are homeomorphic if and only if they have the same density character, the minimum cardinality of a dense subset.
Spaces of continuous functions
When two compact Hausdorff spaces and are homeomorphic, the Banach spaces and are isometric. Conversely, when is not homeomorphic to the (multiplicative) Banach–Mazur distance between and must be greater than or equal to 2; see above the results by Amir and Cambern.
Although uncountable compact metric spaces can have different homeomorphism types, one has the following result due to Milutin:
The situation is different for countably infinite compact Hausdorff spaces.
Every countably infinite compact is homeomorphic to some closed interval of ordinal numbers
equipped with the order topology, where is a countably infinite ordinal.
The Banach space is then isometric to . When are two countably infinite ordinals, and assuming the spaces and are isomorphic if and only if .
For example, the Banach spaces
are mutually non-isomorphic.
Examples
Derivatives
Several concepts of a derivative may be defined on a Banach space. See the articles on the Fréchet derivative and the Gateaux derivative for details.
The Fréchet derivative allows for an extension of the concept of a total derivative to Banach spaces. The Gateaux derivative allows for an extension of a directional derivative to locally convex topological vector spaces.
Fréchet differentiability is a stronger condition than Gateaux differentiability.
The quasi-derivative is another generalization of directional derivative that implies a stronger condition than Gateaux differentiability, but a weaker condition than Fréchet differentiability.
Generalizations
Several important spaces in functional analysis, for instance the space of all infinitely often differentiable functions or the space of all distributions on are complete but are not normed vector spaces and hence not Banach spaces.
In Fréchet spaces one still has a complete metric, while LF-spaces are complete uniform vector spaces arising as limits of Fréchet spaces.
See also
Notes
References
Bibliography
External links
Functional analysis
Science and technology in Poland
Topological vector spaces | Banach space | Mathematics | 9,277 |
42,407,465 | https://en.wikipedia.org/wiki/Sakaguchi%20test | The Sakaguchi test is a chemical test used to detect the presence of arginine in proteins. It is named after the Japanese food scientist and organic chemist Shoyo Sakaguchi (1900–1995), who described the test in 1925. The Sakaguchi reagent used in the test consists of 1-Naphthol and a drop of sodium hypobromite. The guanidino group (–NH–C(=NH)–NH2) in arginine reacts with the Sakaguchi reagent to form a red-coloured complex.
References
Protein methods
Chemical tests | Sakaguchi test | Chemistry,Biology | 115 |
70,740,807 | https://en.wikipedia.org/wiki/Thermotomaculum%20hydrothermale | Thermotomaculum hydrothermale is a species of Acidobacteriota.
References
Bacteria
Bacteria described in 2017 | Thermotomaculum hydrothermale | Biology | 26 |
28,868,152 | https://en.wikipedia.org/wiki/Jacobi%20operator | A Jacobi operator, also known as Jacobi matrix, is a symmetric linear operator acting on sequences which is given by an infinite tridiagonal matrix. It is commonly used to specify systems of orthonormal polynomials over a finite, positive Borel measure. This operator is named after Carl Gustav Jacob Jacobi.
The name derives from a theorem from Jacobi, dating to 1848, stating that every symmetric matrix over a principal ideal domain is congruent to a tridiagonal matrix.
Self-adjoint Jacobi operators
The most important case is the one of self-adjoint Jacobi operators acting on the Hilbert space of square summable sequences over the positive integers . In this case it is given by
where the coefficients are assumed to satisfy
The operator will be bounded if and only if the coefficients are bounded.
There are close connections with the theory of orthogonal polynomials. In fact, the solution of the recurrence relation
is a polynomial of degree n and these polynomials are orthonormal with respect to the spectral measure corresponding to the first basis vector .
This recurrence relation is also commonly written as
Applications
It arises in many areas of mathematics and physics. The case a(n) = 1 is known as the discrete one-dimensional Schrödinger operator. It also arises in:
The Lax pair of the Toda lattice.
The three-term recurrence relationship of orthogonal polynomials, orthogonal over a positive and finite Borel measure.
Algorithms devised to calculate Gaussian quadrature rules, derived from systems of orthogonal polynomials.
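As a concrete illustration of this last point, the following minimal Python sketch (the function name and the use of numpy are my own choices) builds the truncated Jacobi matrix for the orthonormal Legendre polynomials, whose recurrence coefficients are b(n) = 0 and a(n) = n/√(4n² − 1), and recovers the Gauss–Legendre quadrature nodes and weights from its eigendecomposition (the Golub–Welsch construction):

```python
import numpy as np

def gauss_legendre_from_jacobi(n):
    # Off-diagonal entries a(k) = k / sqrt(4k^2 - 1) of the symmetric,
    # tridiagonal Jacobi matrix for the orthonormal Legendre polynomials;
    # the diagonal entries b(k) vanish because the weight is symmetric.
    k = np.arange(1, n)
    a = k / np.sqrt(4.0 * k**2 - 1.0)
    J = np.diag(a, 1) + np.diag(a, -1)          # truncated n x n Jacobi matrix
    # Golub-Welsch: the eigenvalues are the quadrature nodes; the weights are
    # mu_0 * (first component of each normalized eigenvector)^2, where mu_0 = 2
    # is the total mass of the Legendre weight on [-1, 1].
    nodes, vectors = np.linalg.eigh(J)
    weights = 2.0 * vectors[0, :] ** 2
    return nodes, weights

nodes, weights = gauss_legendre_from_jacobi(5)
print(np.dot(weights, nodes**4))   # integral of x^4 over [-1, 1], approximately 0.4
```

The eigenvalues of the truncated matrix are the zeros of the degree-n orthogonal polynomial, which is exactly the connection between the Jacobi operator, orthogonal polynomials, and quadrature described above.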
Generalizations
When one considers Bergman space, namely the space of square-integrable holomorphic functions over some domain, then, under general circumstances, one can give that space a basis of orthogonal polynomials, the Bergman polynomials. In this case, the analog of the tridiagonal Jacobi operator is a Hessenberg operator – an infinite-dimensional Hessenberg matrix. The system of orthogonal polynomials is given by
and . Here, D is the Hessenberg operator that generalizes the tridiagonal Jacobi operator J for this situation. Note that D is the right-shift operator on the Bergman space: that is, it is given by
The zeros of the Bergman polynomial correspond to the eigenvalues of the principal submatrix of D. That is, the Bergman polynomials are the characteristic polynomials for the principal submatrices of the shift operator.
See also
Hankel matrix
References
External links
Operator theory
Hilbert spaces
Recurrence relations | Jacobi operator | Physics,Mathematics | 507 |
50,123,287 | https://en.wikipedia.org/wiki/Forensic%20epidemiology | The discipline of forensic epidemiology (FE) is a hybrid of principles and practices common to both forensic medicine and epidemiology. FE is directed at filling the gap between clinical judgment and epidemiologic data for determinations of causality in civil lawsuits and criminal prosecution and defense.
Forensic epidemiologists formulate evidence-based probabilistic conclusions about the type and quantity of causal association between an antecedent harmful exposure and an injury or disease outcome in both populations and individuals. The conclusions resulting from an FE analysis can support legal decision-making regarding guilt or innocence in criminal actions, and provide an evidentiary support for findings of causal association in civil actions.
Applications of forensic epidemiologic principles are found in a wide variety of types of civil litigation, including cases of medical negligence, toxic or mass tort, pharmaceutical adverse events, medical device and consumer product failures, traffic crash-related injury and death, person identification and life expectancy.
History
The term Forensic Epidemiology was first associated with the investigation of bioterrorism in 1999, and coined by Dr. Ken Alibek, the former chief deputy of the Soviet bioweapons program. The scope of FE at that time was confined to the investigation of epidemics that were potentially man-made. After the US Anthrax attacks of 2001 the CDC defined forensic epidemiology as a means of investigating possible acts of bioterrorism.
At the present time FE is more widely known and described as the systematic application of epidemiology to disputed issues of causation that are decided in (primarily) civil, but also criminal courts. The use of epidemiologic data and analysis as a basis for assessing general causation in US courts, particularly in toxic tort cases, has been described for more than 30 years, beginning with the investigation of the alleged relationship between exposure to the Swine Flu vaccine in 1976 and subsequent cases of Guillain–Barré syndrome.
More recently FE has also been described as an evidence-based method of quantifying the probability of specific causation in individuals. The approach is particularly helpful when a clinical differential diagnosis approach to causation is disputed. Examples covering a wide variety of applications of FE are listed below under Examples of Investigative Questions Addressed by Forensic Epidemiologists.
Methods and principles
Comparative risk ratio
The metric of a case-specific FE analysis of cause is the comparative risk ratio (CRR). The CRR is a metric unique to FE; it allows for the comparison of probabilities applicable to the investigated circumstances of an individual injury or disease. Because a CRR is based on the unique circumstances surrounding the injury or disease of an individual, it may or may not be derived from a population-based relative risk (RR) or odds ratio (OR). An example of an RR analysis that could be used as a CRR is as follows: for an unbelted driver who was seriously injured in a traffic crash, an important causal question might be what role the failure to use a seat belt played in causing his injury. A relevant RR analysis would consist of the examination of the frequency of serious injury in 1000 randomly selected unbelted drivers exposed to a 20 mph frontal collision versus the frequency of serious injury in 1000 randomly selected restrained drivers exposed to the same collision severity and type. If the frequency of serious injury in the group exposed to the presumptive hazard (failure to use a seat belt) was 0.15 and the frequency in the unexposed (belted) group was 0.05, then the CRR would be the same as the RR of 0.15/0.05. The RR design of the analysis dictates that the populations underlying the numerator and denominator of the CRR be substantially similar in all respects, with the exception of exposure to the investigated hazard, which in the example was the failure to use a seat belt.
In some instances encountered in a legal setting, however, the numerator and denominator risk must be derived from dissimilar populations in order to fit the circumstances of an investigated injury or disease. In such a case the CRR cannot be derived from either an RR or OR. An example of such a situation occurs when the numerator is a per event risk, and the denominator is a per-time risk (also known as a cumulative risk). An example of this type of analysis would be the investigation of a pulmonary embolism (PE) that occurred a week after a patient sustained a lower extremity fracture in a traffic crash. Such complications often result from blood clots forming in the legs and then traveling to the lungs. If the patient had a history of deep vein thrombosis (DVT) in the lower extremities prior to the crash, then a CRR might consist of the comparison between the risk of a PE following a lower extremity fracture (a per event rate) and the 1-week risk of PE in a patient with DVT (a time-dependent probability).
Another example of a CRR based on dissimilar populations is when there are only a limited number of potential causes to be compared. An example is the investigation of the cause of an adverse reaction in a person who took two different drugs at the same time, both of which could have caused the reaction (and which, for the example, do not interact with each other). In such a situation, the CRR applicable to the unique circumstances experienced by the individual could be estimated by comparing the adverse reaction rate for the two drugs.
Attributable proportion under the exposed
The attributable proportion under the exposed (APe) is an indication of the proportion of patients who were exposed to the potential cause and got sick because of this exposure. It can only be used if the RR > 1 and can be calculated by [(RR − 1)/RR × 100%]. When the CRR is based on an RR, these formulae also apply to the CRR. The result of the analysis, given as an RR, CRR, or APe, meets the legal standard of what is "more likely true than not" when the RR or CRR is > 2.0 (with a 95% confidence interval lower boundary of > 1.0), or the APe is > 50%. The APe is also known as the "Probability of Causation (PC)", a term that is defined in the US Code of Federal Regulations (Federal Register / Vol. 67, No. 85 / Thursday, May 2, 2002 / Rules and Regulations p. 22297) and elsewhere.
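As a quick numerical illustration of these formulas using the seat-belt example above (a minimal sketch; the function and variable names are my own):

```python
def attributable_proportion(rr):
    """Attributable proportion under the exposed, APe = (RR - 1) / RR * 100%.
    Only meaningful when rr > 1."""
    if rr <= 1:
        raise ValueError("APe is only defined for RR > 1")
    return (rr - 1) / rr * 100.0

# Seat-belt example from the text: 0.15 injury frequency among unbelted drivers
# versus 0.05 among belted drivers exposed to the same crash severity and type.
risk_exposed, risk_unexposed = 0.15, 0.05
crr = risk_exposed / risk_unexposed          # here the CRR coincides with the RR
ape = attributable_proportion(crr)

print(crr)   # 3.0  -> exceeds the > 2.0 threshold mentioned in the text
print(ape)   # 66.7 -> exceeds the > 50% threshold
```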
Causal methodology
Analysis of causation, particularly for injury or other conditions with a relatively short latency period between exposure and outcome, is accomplished using a 3-step approach, as follows:
Plausibility: This first step addresses whether it is biologically possible for the injury event to have caused the condition (a.k.a. general causation), and follows a special application of the viewpoints set forth by Hill (see below). A finding of plausibility is unrelated to the frequency of the injury, because even if the injury occurs in only 1 in 100 or fewer cases of exposure to the event, it is still plausibly caused by the event. Plausibility is a relatively low hurdle to clear in a causal analysis, and is largely satisfied by the lack of evidence of implausibility of the relationship. Plausibility is often, but not necessarily, established with epidemiologic data or information.
Temporality: This second step examines the clinical and other evidence of the timing between the onset of the symptoms of injury and the injury event, and must be satisfied to assess specific causation. First, it must be established that the sequence of the injury and the event is appropriate; the symptoms cannot be identically present prior to the event. Further, the onset of the symptoms of injury cannot be either too latent or insufficiently latent, depending on the nature of the exposure and outcome.
Lack of a more probable alternative explanation: This final step examines the probability of the injury condition occurring at the same point in time in the individual, given what is known about the individual from the review of medical records and other evidence, but in the absence of the injury event (a.k.a. differential diagnosis). First, evidence of competing injury events must be evaluated, and compared for risk (often via analysis of epidemiologic data). Then, the likelihood of the condition occurring spontaneously must be assessed, given the known history of the individual.
United States case law on injury causation methodology
The 3-step methodology was challenged in United States District Court for the District of Colorado in Etherton v Auto-Owners Insurance Company. The defendant challenged, among other things, the reliability and fit of the methods described by the expert. After an extensive examination and discussion of the 3-step process used by the expert, the court found that the methodology appropriately fit the specific facts of the case, and that a population-based (epidemiologic) approach was an appropriate part of the causal methodology. The court denied the defendant's motion to strike the expert's testimony in the order, which was entered on 3/31/14.
The Defendant appealed the ruling from the District Court, and in July 2016, the Tenth Circuit U.S. Court of Appeals affirmed the 3-step causal methodology as generally accepted and well established for assessing injury causation, under the Daubert standard. See Etherton v. Auto-Owners Insurance Company, No. 14-1164 (10th Cir, 7/19/16).
Hill viewpoints
Plausibility of an investigated association can be assessed in an FE investigation, in part, via application of the Hill criteria, named for a 1965 publication by Sir Austin Bradford-Hill, in which he described nine "viewpoints" by which an association described in an epidemiologic study could be assessed for causality. Hill declined to call his viewpoints "criteria" lest they be considered a checklist for assessing causation. The term "Hill criteria" is used widely in the literature, however, and for convenience is used in the present discussion. Of the nine criteria, there are seven that have utility for assessing the plausibility of an investigated specific causal relationship, as follows:
Coherence: A causal conclusion should not contradict present substantive knowledge. It should "make sense" given current knowledge
Analogy: The results of a previously described causal relationship may be translatable to the circumstances of a current investigation
Consistency: The repeated observation of the investigated relationship in different circumstances or across a number of studies lends strength to a causal inference
Specificity: The degree to which the exposure is associated with a particular outcome
Biological plausibility: The extent to which the observed association can be explained by known scientific principles
Experiment: In some cases there may be evidence from randomized experiments (i.e., drug trials)
Dose response: The probability, frequency, or severity of the outcome increases with increased amount of exposure
Subsequent authors have added the feature of Challenge/ Dechallenge/ Rechallenge for circumstances when the exposure is repeated over time and there is the ability to observe the associated outcome response, as might occur with an adverse reaction to a medication. Additional considerations when assessing an association are the potential impact of confounding and bias in the data, which can obscure a true relationship. Confounding refers to a situation in which an association between an exposure and outcome is all or partly the result of a factor that affects the outcome but is unaffected by the exposure. Bias refers to a form of error that may threaten the validity of a study by producing results that are systematically different from the true results. Two main categories of bias in epidemiologic studies are selection bias, which occurs when study subjects are selected as a result of another unmeasured variable that is associated with both the exposure and outcome of interest; and information bias, which is systematic error in the assessment of a variable. While useful when assessing a previously unexplored association, there is no combination or minimal number of these criteria that must be met in order to conclude that a plausible relationship exists between a known exposure and an observed outcome.
In many FE investigations there is no need for a causal plausibility analysis if a general causal relationship is well established. In large part, plausibility of a relationship is entertained once implausibility has been rejected. The two remaining Hill criteria are temporality and strength of association. While both criteria have utility in assessing specific causation, temporality is the feature of an association that must be present, at least with regard to sequence (i.e., the exposure must precede the outcome), in order to consider a relationship causal. Temporal proximity can also be useful in some specific causation evaluations, as the closer the investigated exposure and the outcome are in time the less opportunity there is for an intervening cause to act. Another feature of temporality that may have a role in a specific causation evaluation is latency. An outcome may occur too soon or too long after an exposure to be considered causally related. As an example, some food borne illnesses must incubate for hours or days after ingestion, and thus an illness that begins directly following a meal, and which is later found to be caused by a food borne microorganism that requires >12 h incubation, was not caused by the investigated meal, even if an investigation were to reveal the microorganism in the ingested food. Strength of association is the criterion that is used in general causation to assess the impact of the exposure on the population, and is often quantified in terms of RR. In a specific causation evaluation the strength of the association between the exposure and the outcome is quantified by the CRR, as described above.
Test accuracy
Test accuracy investigation is a standard practice in clinical epidemiology. In this setting, a diagnostic test is scrutinized to determine by various measures how often a test result is correct. In FE the same principles are used to evaluate the accuracy of proposed tests leading to conclusions that are central to fact finder determinations of guilt or innocence in criminal investigations, and of causality in civil matters. The utility of a test is highly dependent on its accuracy, which is determined by a measure of how often a positive or negative test result truly represents the actual status that is being tested. For any test or criterion there are typically four possible results: (1) a true positive (TP), in which the test correctly identifies tested subjects with the condition of interest; (2) a true negative (TN), in which the test correctly identifies test subjects who do not have the condition of interest; (3) a false positive (FP), in which the test is positive even though the condition is not present; and (4) a false negative (FN), in which the test is negative even though the condition is present. A 2 × 2 contingency table illustrates the relationships between test results and condition presence, as well as the following test accuracy parameters (a worked numerical sketch follows the list below):
Sensitivity (the rate at which the test is positive when the condition is present) TP/(TP + FN)
Specificity (the rate at which the test is negative when the condition is absent) TN/(TN + FP)
Positive predictive value (the rate at which the condition is present when the test is positive) TP/(TP + FP)
Negative predictive value (the rate at which the condition is absent when the test is negative) TN/(TN + FN)
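The four parameters above can be computed directly from the counts in the contingency table; a minimal sketch (the function name, argument names, and example counts are my own, purely for illustration):

```python
def test_accuracy(tp, fp, fn, tn):
    """Accuracy parameters of a binary test from its 2x2 contingency table."""
    return {
        "sensitivity": tp / (tp + fn),                # positive when condition present
        "specificity": tn / (tn + fp),                # negative when condition absent
        "positive_predictive_value": tp / (tp + fp),  # condition present when test positive
        "negative_predictive_value": tn / (tn + fn),  # condition absent when test negative
    }

# Hypothetical counts, for illustration only.
print(test_accuracy(tp=90, fp=30, fn=10, tn=870))
```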
Bayesian reasoning
Probability is used to characterize the degree of belief in the truth of an assertion. The basis for such a belief can be a physical system that produces outcomes at a rate that is uniform over time, such as a gaming device like a roulette wheel or a die. With such a system, the observer does not influence the outcome; a fair six-sided die that is rolled enough times will land on any one of its sides 1/6th of the time. An assertion of a probability based in a physical system is easily tested with sufficient randomized experimentation. Conversely, the basis for a high degree of belief in an asserted claim may be a personally held perspective that cannot be tested. This does not mean that the assertion is any less true than one that can be tested. As an example, one might truthfully assert that “if I eat a banana there is a high probability that it will make me nauseous” based upon experience unknown to anyone but one's self. It is difficult to test such assertions, which are evaluated through collateral evidence of plausibility and analogy, often based on similar personal experience. In forensic settings, assertions of belief are often characterized as probabilities, that is, what is most likely, for a given set of facts. For circumstances in which a variety of conditions exist that may modify or “ condition” the probability of a particular outcome or scenario, a method of quantifying the relationship between the modifying conditions and the probability of the outcome employs Bayesian reasoning, named for Bayes’ Theorem or Law upon which the approach is based. Most simply stated, Bayes’ Law allows for a more precise quantification of the uncertainty in a given probability. As applied in a forensic setting, Bayes’ Law tells us what we want to know given what we do know. Although Bayes’ Law is known in forensic sciences primarily for its application to DNA evidence, a number of authors have described the use of Bayesian reasoning for other applications in forensic medicine, including identification and age estimation.
Post-test probability
The post-test probability is a highly useful Bayesian equation that allows for the calculation of the probability that a condition is present when the test is positive, conditioned by the pretest prevalence of the condition of interest:
post-test probability = (sensitivity × prevalence) / [(sensitivity × prevalence) + (1 − specificity) × (1 − prevalence)]
The equation results in a positive predictive value for a given pre-event or pretest prevalence. In a circumstance in which the pretest prevalence is considered "indifferent" (i.e., 0.5), the prevalence and (1 − prevalence) values cancel out, and the calculation simplifies to a positive predictive value.
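A minimal sketch of this calculation (an illustration, not part of the source): the helper below implements the post-test probability formula given above; the example sensitivities, specificities and prevalences are hypothetical.

```python
def post_test_probability(sensitivity, specificity, prevalence):
    """Probability that the condition is present given a positive test,
    conditioned on the pretest prevalence (Bayes' Law)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# With an "indifferent" prevalence of 0.5 the prevalence terms cancel and the
# result reduces to sensitivity / (sensitivity + 1 - specificity).
print(post_test_probability(0.95, 0.90, 0.5))   # ~0.90
print(post_test_probability(0.95, 0.90, 0.01))  # ~0.09 for a rare condition
```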
Examples of investigative questions
What is the likelihood that the asbestos exposure that Mr X experienced during his employment at company Z caused his lung cancer?
How likely is it that the DNA found on the forensic scene belongs to Mr X? What is the chance that you are wrong? Could you in your probability calculation take into account the other evidence that points towards the identification of Mr X?
Could you estimate the probability that the leg amputation of Mrs Y could have been prevented if the delay in diagnosis had not occurred?
How likely is it that the heart failure of Mrs Y was indeed caused by the side effect of this drug?
What is the chance that the death that followed the administration of the opiate by 20 minutes was due to the drug and not to other (unknown) factors?
What is the chance that Mr. X would have needed neck surgery when he did if he had not been in a minor traffic crash the prior month?
How likely is it that the bladder cancer of Mrs Y was caused by passive smoking during her imprisonment given the fact that she was an ex-smoker herself?
Which liability percentage is reasonable in the given circumstance?
What would be the life expectancy of Mr X at the time of his death if the wrongful death had not occurred?
How long is Mr X expected to survive, given his brain/ spinal cord injury, on a more probable than not basis?
Given the medical and non-medical evidence at hand regarding the circumstances of this traffic crash, what is the probability that Mrs Y was the driver?
Given the medical and non-medical evidence at hand regarding the circumstances of this car accident, what is the probability that Mr X was wearing a seat belt?
What is the probability that Mrs Y's need for surgery resulted from the crash, vs. that it would have occurred at the same time if the crash had not happened?
References
Further reading
External links
International Epidemiological Association
Team Forensic Epidemiology, Maastricht University, Michael Freeman & Maurice Zeegers
Journal of Forensic and Legal Medicine
Epidemiology
Epidemiology | Forensic epidemiology | Environmental_science | 4,185 |
9,930,466 | https://en.wikipedia.org/wiki/Controlled%20aerodynamic%20instability%20phenomena | The term controlled aerodynamic instability phenomena was first used by Cristiano Augusto Trein in the Nineteenth KKCNN Symposium on Civil Engineering held in Kyoto, Japan, in 2006. The concept is based on the idea that aerodynamic instability phenomena, such as Kármán vortex street, flutter, galloping and buffeting, can be driven into a controlled motion and be used to extract energy from the flow, becoming an alternative approach for wind power generation systems.
Justification
Nowadays, when wind power generation is discussed, the image that comes to mind is that of a large wind turbine being turned by the wind. However, several alternative approaches have been proposed in recent decades, showing that wind turbines are not the only possibility for exploiting the wind for power generation.
In 1977 Jeffery experimented with an oscillating aerofoil system based on a vertically mounted pivoting wing which flapped in the wind. Farthing discovered that this free flutter could cease automatically, providing high-wind protection, and developed floating and pile-based models for pumping surface and well water as well as compressing air with auxiliary battery charging. In 1981 McKinney and DeLaurier proposed a system called the wingmill, based on a rigid horizontal airfoil with articulated pitching and plunging to extract energy from the flow. This system stimulated Moores in 2003 to conduct further investigations into applications of the idea.
Following the same trend, other studies have already been carried out, for example the flutter power generation system proposed by Isogai et al. in 2003, which uses the flutter instability caused by the wind on an aerofoil to extract energy from the flow. In this branch, Matsumoto et al. went further, proposing enhancements for that system and assessing the feasibility of its usage with bluff bodies. The "kite motors" of Dave Santos utilize aerofoil instabilities.
Controlled aerodynamic instability phenomena
The wind interacts with the obstacles in its path by transferring part of its energy to them; this energy is converted into forces on the bodies, driving them into different levels of motion that depend directly on their aeroelastic and geometric characteristics. A large number of studies has been conducted on these interactions and their dependencies, aimed at understanding the aerodynamic phenomena that arise from them, such as the Kármán vortex street, galloping, buffeting and flutter, mainly regarding bluff bodies. By understanding such phenomena it is possible to predict instabilities and their consequent motions, providing designers with the data they need to arrange structures properly.
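As a rough illustration of one such phenomenon (not taken from the source), the frequency of vortex shedding behind a bluff circular cylinder in a Kármán vortex street can be estimated from the Strouhal relation f = St·U/D, with St of roughly 0.2 over a wide range of Reynolds numbers; the cylinder size and wind speed below are hypothetical.

```python
def shedding_frequency(wind_speed_m_s, diameter_m, strouhal=0.2):
    """Approximate Karman vortex-shedding frequency in hertz, f = St * U / D."""
    return strouhal * wind_speed_m_s / diameter_m

# A hypothetical 0.1 m cylinder in a 10 m/s wind sheds vortices at about 20 Hz;
# an energy-harvesting device would be tuned to respond near this frequency.
print(shedding_frequency(10.0, 0.1))
```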
In the great majority of cases – e.g. in civil buildings – such motions are useless and undesirable, so design approaches focus on avoiding them. However, these instabilities may also be used profitably: if they are controlled and driven into a predictable motion, they can provide mechanical power to run, for example, turbines, machinery and electricity generators.
By using the knowledge acquired to date about these aerodynamic instabilities, and by developing new features, it is possible to propose ways to stimulate them to an optimal state and use them for power generation purposes. In that way, alternative approaches to the windmill may be proposed and developed. Farthing Econologica applies the practical requirements for a windmill to greatly whittle down the possibilities.
References and notes
External links
EnergyKiteSystems
Aerodynamics
Wind turbines | Controlled aerodynamic instability phenomena | Chemistry,Engineering | 699 |
5,083,172 | https://en.wikipedia.org/wiki/Eustress | The term eustress means "beneficial stress"—either psychological, physical (e.g., exercise), or biochemical/radiological (hormesis).
The word was introduced by endocrinologist Hans Selye (1907-1982) in 1976;
he combined the Greek prefix eu- meaning "good", and the English word stress, to give the literal meaning "good stress". The Oxford English Dictionary traces early use of the word (in psychological usage) to 1968.
Eustress is the positive cognitive response to stress that is healthy, or gives one a feeling of fulfilment or other positive feelings. Hans Selye created the term as a subgroup of stress to differentiate the wide variety of stressors and manifestations of stress.
Eustress is not defined by the stress or type, but rather how one perceives that stressor (e.g., a negative threat versus a positive challenge). Eustress refers to a positive response one has to a stressor, which can depend on one's current feelings of control, desirability, location, and timing of the stressor. Thus, the suggestion in a book title: Eustress and Distress: Neither Good Nor Bad, but Rather the Same?. Potential indicators of eustress may include responding to a stressor with a sense of meaning, hope, or vigor. Eustress has also been positively correlated with life satisfaction and well-being.
Definition
Eustress occurs when the gap between what one has and what one wants is stretched slightly, but not to the point of being overwhelming. The goal is not too far out of reach but is still slightly more than one can handle. This fosters challenge and motivation since the goal is in sight. The function of challenge is to motivate a person toward improvement and a goal. Challenge is an opportunity-related emotion that allows people to achieve unmet goals. Eustress is indicated by hope and active engagement. Eustress has a significantly positive correlation with life satisfaction and hope. It is typically assumed that experiencing chronic stress, either in the form of distress or eustress, is negative. However, eustress can instead fuel physiological thriving by positively influencing the underlying biological processes implicated in physical recovery and immunity.
Measurement
Occupational eustress may be measured on subjective levels such as quality of life or work life, job pressure, psychological coping resources, complaints, overall stress level, and mental health. Other subjective methodological practices have included interviews with focus groups asking about stressors and stress level. In one study participants were asked to remember a past stressful event and then answer questionnaires on coping skills, job well-being, and appraisal of the situation (viewing the stressful event as a challenge or a threat). Common subjective methodologies were incorporated in a holistic stress model created in 2007 to acknowledge the importance of eustress, particularly in the workplace. This model uses hope, positive affect, meaningfulness, and manageability as a measure of eustress, and negative psychological states, negative affect, anxiety, and anger as a measure of distress. Objective measures have also been used and include blood pressure, muscle tension, and absenteeism rates. Further physiological research has looked for neuroendocrine changes as a result of eustress and distress. Research has shown that catecholamines change rapidly in response to pleasurable stimuli. Studies have demonstrated that eustress and distress produce different responses in the neuroendocrine system, particularly dependent on the amount of personal control one feels over a stressor.
Compared with distress
Distress is the most commonly referred to type of stress, having negative implications, whereas eustress is usually related to desirable events in a person's life. Selye first differentiated the two in an article he wrote in 1975. In this article Selye argued that persistent stress that is not resolved through coping or adaptation should be known as distress, and may lead to anxiety, withdrawal, and depressive behavior. In contrast, if stress enhances one's functioning it may be considered eustress. Both can be equally taxing on the body, and are cumulative in nature, depending on a person's way of adapting to the stressor that caused it. The body itself cannot physically discern between distress and eustress. Differentiation between the two is dependent on one's perception of the stress, but it is believed that the same stressor may cause both eustress and distress. One context in which this may occur is societal trauma (e.g. the Black Death, World War II), which may cause great distress, but also eustress in the form of hardiness, coping, and fostering a sense of community. The Yerkes–Dodson model demonstrates the optimum balance of stress with a bell curve. This model is supported by research demonstrating that emotional-coping and behavioral-coping strategies are related to changes in perceived stress level on the Yerkes–Dodson curve. However, the Yerkes–Dodson curve has become increasingly questioned: a review of the psychological literature pertaining to work performance found that less than 5% of papers supported the inverted U-shaped curve, whereas nearly 50% found a "negative linear" relationship (any level of stress inhibits performance).
Occupational
Much of the research on eustress has focused on its presence in the workplace. In the workplace, stress can often be interpreted as a challenge, which generally denotes positive eustress, or as a hindrance, which refers to distress that interferes with one's ability to accomplish a job or task.
Research has focused on increasing eustress in the workplace, in an effort to promote positive reactions to an inevitably stressful environment. Companies are interested in learning more about eustress and its positive effects to increase productivity. Eustress creates a better environment for employees, which makes them perform better and cost less. Occupational stress costs the United States somewhere between 200 and 300 billion dollars per year. If this were eustress instead of distress, companies might avoid a portion of these losses and the U.S. economy could improve as well. Stress has also been linked to the six leading causes of death: "disease, accidents, cancer, liver disease, lung ailments, suicide." If workers get sick and/or die, there is obviously a cost to the company in sick time and training new employees. It is better to have productive, happy employees. Eustress is necessary for achievement. Eustress is related to well-being and positive attitudes, and thus, increases work performance.
Other scholars within the positive organizational behavior movement tend to deemphasize the instrumental advantages of eustress to organizations; such scholars theorize that managing for eustress is more appropriately viewed as a means for improving worker well-being than a performance/motivation/profit-seeking manipulation. This line of exploration emphasizes minimizing distress and optimizing eustress. These scholars explicitly note that the utility of eustress has limits, and that typically positive stressors experienced in too high of an amplitude or of excessive duration can result in individual distress.
Techniques such as stress management interventions (SMI) have been employed to increase occupational eustress. SMIs often incorporate exercise, meditation, and relaxation techniques to decrease distress and increase positive perceptions of stress in the workplace. Rather than decrease stress in the workplace, SMI techniques attempt to increase eustress with positive reactions to stressful stimuli. Working within the Challenge-Hindrance Framework, positive primary interventions focus on relating stressors to the accomplishment of goals and personal development.
Self-efficacy
Eustress is primarily based on perceptions. It is how you perceive your given situation and how you perceive your given task. It is not what is actually happening, but a person's perception of what is happening. Eustress is thus related to self-efficacy. Self-efficacy is one's judgment of how they can carry out a required task, action or role. Some contributing factors are a person's beliefs about the effectiveness about their options for courses of action and their ability to perform those actions. If a person has low self-efficacy, they will see the demand as more distressful than eustressful because the perceived level of what the person has is lower. When a person has high self-efficacy, they can set goals higher and be motivated to achieve them. The goal then is to increase self-efficacy and skill in order to enable people to increase eustress.
Flow
When an individual appraises a situation as stressful, they add the label for distress or eustress to the issue at hand. If a situation induces eustress, the person may feel motivated and can experience flow. Positive psychologist, Mihaly Csikszentmihalyi, created this concept which is described as the moments when one is completely absorbed into an enjoyable activity with no awareness of surroundings. Flow is an extremely productive state in which an individual experiences their prime performance. The core elements are absorption, enjoyment and intrinsic motivation.
Flow is the "ultimate eustress experience – the epitome of eustress". Hargrove, Nelson and Cooper described eustress as being focused on a challenge, fully present and exhilarated, which almost exactly mirrors the definition of flow. Flow is considered a peak experience or "the single most joyous, happiest, most blissful moment of your life." Hargrove, Becker, and Hargrove build upon this work by modeling positive interventions that may lead to thriving and savoring.
Factors
There are several factors that may increase or decrease one's chances of experiencing eustress and, through eustress, experiencing flow.
Stress is also influenced by hereditary predispositions and expectations of society. Thus, a person could already be at a certain advantage or disadvantage toward experiencing eustress.
If a person enjoys experiencing new things and believes they have importance in the world, they are more likely to experience flow.
Flow is negatively related to self-directedness, or an extreme sense of autonomy.
Persistence is positively related to flow and closely related to intrinsic motivation.
People with an internal locus of control have an increased chance of flow because they believe they can increase their skill level to match the challenge.
Perfectionism, however, is negatively related to flow. A perfectionist tends to downplay their skill level, making the perceived gap too big and the challenge too large for flow to be experienced. At the opposite end of the perfectionism spectrum, the chances of flow increase.
Active procrastination is positively related to flow. By actively delaying work, the person increases the challenge. Then once the challenge is matched with the person's high skill levels, the person can experience flow. Those who passively procrastinate or do not procrastinate do not have these same experiences. It is only with the purposeful procrastination that a person is able to increase the challenge.
Mindset is a significant factor in determining distress versus eustress. Optimistic people and those with high self-esteem contribute to eustress experiences. The positive mindset increases the chances of eustress and a positive response to stressors. Currently, the predominant mindset toward stress is that stress is debilitating. However, mindsets toward stress can be changed.
See also
Distress, the opposite of eustress
Hans Selye, who founded this theory of stress
References
External links
Eustress at Whole Health Stress Management Lecture
American Psychological Association (APA)
Stress (biological and psychological)
Motivation
Behavioral neuroscience | Eustress | Biology | 2,385 |
58,579,686 | https://en.wikipedia.org/wiki/Graduate%20of%20Pharmacy | The Graduate of Pharmacy (Ph.G.) is an obsolete academic pharmacy degree. It was superseded by the Bachelor of Pharmacy degree (B.Pharm.) in the early part of the 20th century.
References
Pharmacology
Academic degrees in healthcare | Graduate of Pharmacy | Chemistry | 53 |
25,122,245 | https://en.wikipedia.org/wiki/European%20Interoperability%20Framework | The European Interoperability Framework (EIF) is a set of recommendations which specify how administrations, businesses and citizens communicate with each other within the European Union and across Member State borders.
The EIF 1.0 was issued under the Interoperable Delivery of European eGovernment Services to public Administrations, Businesses and Citizens programme (IDABC). The EIF continues under the new ISA programme, which replaced the IDABC programme on 31 December 2009.
EIF in effect is an Enterprise architecture framework targeted at the largest possible scale, designed to promote integration spanning multiple sovereign Nation States, specifically EU Member States.
For further examples of Enterprise Architecture frameworks designed to operate at different levels of scale, see also Alternative Enterprise Architecture Frameworks.
Versions
EIF Version 1.0
EIF Version 1.0 was published in November 2004.
Further non-technology obstacles that stand in the way of greater EIF adoption include the fact that EU Member States currently differ widely in terms of:
Scope of government - services provided, degree of state ownership of businesses, scale of armed forces, police and border control operations
Structure of government - central/local government balance, what departments exist, how departments interact
Citizen/state interaction models - processes related to key life events (births, marriages, deaths), document issue procedures, support for foreign languages
EIF Version 2
Draft Version 2 of the EIF was the subject of a political debate, in which the main technological and commercial issues related to the role of lobbying for proprietary software.
EIF 2 was adopted by the European Commission as the Annex II - EIF (European Interoperability Framework) of the Communication “Towards interoperability for European public services” on 16 December 2010.
'New EIF'
On 23 March 2017, the ISA2 programme released a new version of the EIF. This version dropped version numbers, is simply called the 'new EIF', and incorporates the policy changes of the intervening years.
See also
Semantic Interoperability Centre Europe
References
External links
Documentation on the European Interoperability Framework 1.0 and Draft version of 2.0
Documentation on the European Interoperability Framework
European Union development policy
Information society and the European Union
Interoperability | European Interoperability Framework | Engineering | 437 |
24,005,414 | https://en.wikipedia.org/wiki/C40H56O3 | {{DISPLAYTITLE:C40H56O3}}
The molecular formula C40H56O3 (molar mass: 584.57 g/mol, exact mass: 584.4229 u) may refer to:
Antheraxanthin
Capsanthin
Flavoxanthin
Molecular formulas | C40H56O3 | Physics,Chemistry | 68 |
74,993,509 | https://en.wikipedia.org/wiki/Zero-click%20result | A zero-click result is the successful resolution of a web query when the user gets their desired result immediately on the search engine results page without having to navigate to any followup source of information.
Conventional pageview tracking does not detect zero-click results, and consequently, conventional digital marketing strategies which rely on pageview analysis do not apply. There are adaptive marketing strategies which can take into account zero-click results.
Scholarly research at the intersection of neuroscience and human–computer interaction has methods of using neuroimaging on users while they use their devices to detect satisfaction with zero-click results. Such measures are necessary to observe natural behavior because otherwise, users may not react in detectable ways as they use computer applications.
References
Search engine optimization
Internet search engines
Internet terminology
Digital marketing | Zero-click result | Technology | 157 |
73,789,292 | https://en.wikipedia.org/wiki/AT%202021lwx | AT 2021lwx (also known as ZTF20abrbeie or "Scary Barbie") is the most energetic non-quasar optical transient astronomical event ever observed, with a peak luminosity of 7 × 1045 erg per second (erg s−1) and a total radiated energy between 9.7 × 1052 erg to 1.5 × 1053 erg over three years. Despite being lauded as the largest explosion ever, GRB 221009A was both more energetic and brighter. It was first identified in imagery obtained on 13 April 2021 by the Zwicky Transient Facility (ZTF) astronomical survey and is believed to be due to the accretion of matter into a super massive black hole (SMBH) heavier than one hundred million solar masses (). It has a redshift of z = 0.9945, which would place it at a distance of about eight billion light-years from earth, and is located in the constellation Vulpecula. No host galaxy has been detected.
Forced photometry of earlier ZTF imagery showed AT 2021lwx had already begun brightening by 16 June 2020, as ZTF20abrbeie. It was also detected independently in data from the Asteroid Terrestrial-impact Last Alert System (ATLAS) as ATLAS20bkdj, and the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) as PS22iin. At the Neil Gehrels Swift Observatory, X-ray observations were made with the X-ray Telescope, and ultraviolet observations with the Ultraviolet-Optical Telescope (UVOT).
Subrayan et al. originally interpreted it to be a tidal disruption event between an SMBH (~10⁸ solar masses) and a massive star (~14 solar masses). Wiseman et al. disfavor this interpretation, and instead believe the most likely scenario is "the sudden accretion of a large amount of gas, potentially a giant molecular cloud" (~1,000 solar masses), onto an SMBH (>10⁸ solar masses).
The inferred mass of the SMBH, based on the light-to-mass ratio and the observed brightness, is about one hundred million to one billion solar masses. However, the theoretical limit for an accreting supermassive black hole is one hundred million solar masses. Given the best-understood models of accreting SMBHs, this event may involve the most massive SMBH that can possibly accrete matter.
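As a rough, back-of-the-envelope sketch (not from the source), the mass scale can be checked against the Eddington limit, which caps the luminosity of steady accretion at roughly 1.26 × 10³⁸ erg/s per solar mass; a black hole radiating the observed peak of about 7 × 10⁴⁵ erg/s at or below this limit must then exceed several times 10⁷ solar masses.

```python
EDDINGTON_PER_SOLAR_MASS = 1.26e38   # erg/s per solar mass, for ionized hydrogen
PEAK_LUMINOSITY = 7e45               # erg/s, observed peak of AT 2021lwx

def minimum_mass_solar(luminosity_erg_s):
    """Smallest black-hole mass (in solar masses) that can radiate the given
    luminosity without exceeding the Eddington limit."""
    return luminosity_erg_s / EDDINGTON_PER_SOLAR_MASS

print(f"{minimum_mass_solar(PEAK_LUMINOSITY):.1e}")  # about 5.6e7 solar masses
```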
See also
Ophiuchus Supercluster eruption, a 5 × 10⁶¹ erg event that may have occurred up to 240 million years ago, revealed by a giant radio fossil
MS 0735.6+7421, a 10⁶¹ erg eruption that has been occurring for the last 100 million years
GRB 080916C, an 8.8 × 10⁵⁴ erg gamma-ray burst seen in 2008
GRB 221009A, a 1.2 × 10⁵⁵ erg gamma-ray burst seen in 2022
References
Astronomical objects discovered in 2023
Astronomical events
Supermassive black holes
Vulpecula | AT 2021lwx | Physics,Astronomy | 638 |
71,096,076 | https://en.wikipedia.org/wiki/Alloy%20230 | Alloy 230 is a nickel alloy, made up of mostly nickel and chromium, with smaller amounts of tungsten and molybdenum. This combination of metals results in a number of desirable properties including excellent strength, oxidation resistance at temperatures of up to and nitriding-resistance. Alloy 230 is one of the most nitriding-resistant alloys available.
Composition
Properties
Alloy 230 is also identified by the UNS number UNSN06230. It displays excellent strength at high temperatures, which is why it is often used in high temperature applications such as combustion linings on turbine engines, burner flame shrouds and furnace retorts. It also displays oxidation resistance at temperatures of up to , which again makes it ideal for high temperature applications. Its exceptional nitriding-resistance also makes it the preferred choice for nitriding furnace internal parts, as it remains unaffected by the treatment. It is also easily weldable and can be formed by hot or cold-working.
References
Nickel alloys | Alloy 230 | Chemistry | 207 |
918,643 | https://en.wikipedia.org/wiki/Washer%20%28hardware%29 | A washer is a thin plate (typically disk-shaped, but sometimes square) with a hole (typically in the middle) that is normally used to distribute the load of a threaded fastener, such as a bolt or nut. Other uses are as a spacer, spring (Belleville washer, wave washer), wear pad, preload indicating device, locking device, and to reduce vibration (rubber washer).
Washers are usually metal or plastic. High-quality bolted joints require hardened steel washers to prevent the loss of pre-load due to brinelling after the torque is applied. Washers are also important for preventing galvanic corrosion, particularly by insulating steel screws from aluminium surfaces. They may also be used in rotating applications, as a bearing. A thrust washer is used when a rolling element bearing is not needed either from a cost-performance perspective or due to space restraints. Coatings can be used to reduce wear and friction, either by hardening the surface or by providing a solid lubricant (i.e. a self-lubricating surface).
The origin of the word is unknown. The first recorded use of the word was in 1346; however, the first time its definition was recorded was in 1611.
Rubber or fiber gaskets used in taps (or faucets, valves, and other piping connections) as seal against water leaks are sometimes referred to colloquially as washers; but, while they may look similar, washers and gaskets are usually designed for different functions and made differently.
Washer types
Most washers can be categorized into three broad types:
Plain washers, which spread a load, and prevent damage to the surface being fixed, or provide some sort of insulation such as electrical
Spring washers, which have axial flexibility and are used to prevent fastening or loosening due to vibrations
Locking washers, which prevent fastening or loosening by preventing unscrewing rotation of the fastening device; locking washers are usually also spring washers.
Plain washers
Spring and locking washers
Lock washers, locknuts, jam nuts, and thread-locking fluid are ways to prevent vibration from loosening a bolted joint.
Gaskets
The term washer is often applied to various gasket types such as those used to seal the control valve in taps.
Specialised types
The DIN 125 metric washer standard refers to subtypes A and B. ISO 7089 calls these Form A and ISO 7090 calls them Form B. They are all the same overall size, but Form B is chamfered on one side.
Materials
Washers can be fabricated from a variety of materials including, but not limited to:
Steel – Carbon steel, spring steel, A2 (304) stainless steel, and A4 (316/316L) stainless steel
Non-ferrous metal – Copper, brass, aluminium, titanium, iron, bronze, and zinc
Alloy – Silicon bronze, Inconel, Monel, and Hastelloy
Plastic – Thermoplastics and thermosetting polymers such as polyethylene, PTFE (Teflon)
Nylon – Nylon 6, Nylon 66, Nylatron, and Tecamid MDS
Specialty – Fibers, ceramics, rubber, felt, leather, bimetals, and mica
Phenolic – The material has good electrical insulation, is lightweight, tough, has low moisture absorption, is heat resistant, and is resistant to chemicals and corrosion. Phenolic washers are substitutes for flat metallic washers in cases where electrical insulation is required. Phenolic washers are stamped out of large sheets of the phenolic material. The term "phenolic washer" is sometimes used for stamped washers from laminated materials such as paper, canvas, and Mylar.
Corrosion resistance
A number of techniques are used to enhance the corrosion resistant properties of certain washer materials:
Metallic coatings – Typical coatings used to produce corrosion resistant washers are zinc, cadmium, and nickel. Zinc coating acts as a sacrificial surface layer that falls victim to corrosive materials before the washer's material can be harmed. Cadmium produces a high-quality protective surface but is toxic, both biologically and environmentally. Nickel coatings add protection from corrosion only when the finish is dense and non-porous.
Electroplating – This method involves coating the washer by electrolytic deposition using metals such as chromium or silver.
Phosphating – A resilient, but abrasive surface is achieved by incorporating a zinc-phosphate layer and corrosion-protective oil.
Browning or bluing – Exposing the washer (typically steel) to a chemical compound or alkali salt solution causes an oxidizing chemical reaction, which results in the creation of a corrosion-resistant, colored surface. The integrity of the coating can be improved by treating the finished product with a water-displacing oil.
Chemical plating – This technique utilizes a nickel-phosphor alloy that is precipitated onto the washer surface, creating an extremely corrosion- and abrasive-resistant surface.
Type and form
The American National Standards Institute (ANSI) provides standards for general use flat washers. Type A is a series of steel washers at broad tolerances, where precision is not critical. Type B is a series of flat washers with tighter tolerances where outside diameters are categorized as "narrow", "regular" or "wide" for specific bolt sizes.
"Type" is not to be confused with "form" (but often is). The British Standard for Metric Series Metal Washers (BS4320), written in 1968, coined the term "form". The forms go from A to G and dictate the outside diameter and thickness of the flat washers.
Form A: Normal diameter, normal thickness
Form B: Normal diameter, light thickness
Form C: Large diameter, normal thickness
Form D: Large diameter, light thickness
Form E: Normal diameter, normal thickness
Form F: Large diameter, normal thickness
Form G: Largest diameter, larger thickness.
The term washer 'form' is used quite freely by stockists when comparing different washer material types. In relation to BS4320 specifically, washer forms 'A' to 'D' inclusive are designated 'bright metal' washers and are supplied self-finished in various metals including steel alloys, brass and copper, whereas BS4320 washer forms 'E' to 'G' inclusive are designated 'black' (uncoated) mild steel washers, which are normally specified with a supplementary protective coating supply condition.
Standard metric flat washers sizes
Washers of standard metric sizes equivalent to BS4320 Form A are listed in the table below. Measurements in the table refer to the dimensions of the washers as described by the drawing. Specifications for standard metric flat washers were known as DIN 125 (withdrawn) and replaced with ISO 7089. DIN (Deutsches Institut für Normung - German Institute for Standardization) standards are issued for a variety of components including industrial fasteners as Metric DIN 125 Flat Washers. The DIN standards remain common in Germany, Europe and globally even though the transition to ISO standards is taking place. DIN standards continue to be used for parts which do not have ISO equivalents or for which there is no need for standardization.
See also
Bit guard
Dowel
Unified Thread Standard
Notes
References
Further reading
Parmley, Robert. (2000). "Section 11: Washers." Illustrated Sourcebook of Mechanical Components. New York: McGraw Hill. Drawings, designs and discussion of various uses of washers.
External links
Dimensions of Global washers (http://www.fastenerdata.co.uk/flat-washers)
ASME Plain washer dimensions (Type A and Type B)
Typical USA Flat Washer Dimensions USS, SAE, Fender, and NAS washer ID & OD (mm)
American National Standard (ANSI) Type B Plain Washers
SAE Flat Washers Type A Plain Washers
USS & SAE Combined Flat Washer Dimensions
Flat Washer Thickness Table Steel Gage Thicknesses, non-metric
Split Lockwashers: Truth vs. Myth Hill Country Engineering
Using machine washers Machine Design - Using washers
Hardware (mechanical)
Springs (mechanical)
Ironmongery | Washer (hardware) | Physics,Technology,Engineering | 1,716 |
42,826,100 | https://en.wikipedia.org/wiki/Kleinian%20integer | In mathematical cryptography, a Kleinian integer is a complex number of the form , with m and n rational integers. They are named after Felix Klein.
The Kleinian integers form a ring called the Kleinian ring, which is the ring of integers in the imaginary quadratic field . This ring is a unique factorization domain.
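For illustration (not part of the original article), the sketch below represents a Kleinian integer m + nω with ω = (1 + √−7)/2 as the pair (m, n); multiplication uses the relation ω² = ω − 2, and the field norm N(m + nω) = m² + mn + 2n² is multiplicative. The function names are invented for the example.

```python
def norm(m, n):
    """Field norm of the Kleinian integer m + n*w, where w = (1 + sqrt(-7))/2."""
    return m * m + m * n + 2 * n * n

def multiply(a, b):
    """Product of two Kleinian integers given as pairs (m, n), using w*w = w - 2."""
    m1, n1 = a
    m2, n2 = b
    return (m1 * m2 - 2 * n1 * n2, m1 * n2 + n1 * m2 + n1 * n2)

a, b = (1, 2), (3, -1)
print(norm(*multiply(a, b)) == norm(*a) * norm(*b))  # True: the norm is multiplicative
```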
See also
Eisenstein integer
Gaussian integer
References
. (Review).
Quadratic irrational numbers
Ring theory | Kleinian integer | Mathematics | 90 |
30,977,268 | https://en.wikipedia.org/wiki/Sodium%20ethyl%20xanthate | Sodium ethyl xanthate (SEX) is an organosulfur compound with the chemical formula . It is a pale yellow powder, which is usually obtained as the dihydrate. Sodium ethyl xanthate is used in the mining industry as a flotation agent. A closely related potassium ethyl xanthate (KEX) is obtained as the anhydrous salt.
Production
Akin to the preparation of most xanthates, sodium ethyl xanthate can be prepared by treating sodium ethoxide with carbon disulfide:
CH3CH2ONa + CS2 → CH3CH2OCS2Na
Properties and reactions
Sodium ethyl xanthate is a pale yellow powder. Its aqueous solutions are stable at high pH if not heated. It rapidly hydrolyses at pH less than 9 at 25 °C. It is the conjugate base of the ethyl xanthic acid, a strong acid with pKa of 1.6 and pKb estimated as 12.4 for the conjugate base. Sodium ethyl xanthate easily adsorbs on the surface of many sulfide minerals, a key step in froth flotation.
Xanthates are susceptible to hydrolysis and oxidation at low pH; protonation gives the unstable xanthic acid, which decomposes to the alcohol and carbon disulfide:
CH3CH2OCS2− + H+ → CH3CH2OCS2H → CH3CH2OH + CS2
Oxidation gives diethyl dixanthogen disulfide:
2 CH3CH2OCS2− → (CH3CH2OCS2)2 + 2 e−
Detection
Sodium ethyl xanthate can be identified through optical absorption peaks in the infrared (1179, 1160, 1115, 1085 cm−1) and ultraviolet (300 nm) ranges. There are at least six chemical detection methods:
The iodometric method relies on oxidation to dixanthogen by iodine, with the product detected with a starch indicator. This method, however, is not selective and suffers from interference from other sulfur-containing chemicals.
Xanthate can be reacted with copper sulfate or copper tartrate, resulting in a copper xanthate residue which is detected with iodine. This method has the advantage of being insensitive to sulfite, thiosulfate and carbonate impurities.
In the acid-base detection method, a dilute aqueous xanthate solution is reacted with a copious amount of 0.01 M hydrochloric acid yielding carbon disulfide and alcohol, which are evaluated. The excess acid and impurities are removed through filtering and titration.
In the argentometric method, sodium ethyl xanthate is reacted with silver nitrate in a dilute solution. The resulting silver xanthate is detected with a 10% aqueous solution of iron nitrate. The drawbacks of this method are the high cost of silver and the blackening of silver xanthate by silver nitrate, which reduces the detection accuracy.
In the mercurimetric method, xanthate is dissolved in 40% aqueous solution of dimethylamine, followed by heating and titration with o-hydroxymercuribenzoate. The product is detected with dithiofluorescein.
Perchloric acid method involves dissolution of xanthate in water-free acetic acid. The product is titrated with perchloric acid and detected with crystal violet.
Sodium ethyl xanthate can also be quantified using gravimetry, by weighing the lead xanthate residue obtained after reacting SEX with 10% solution of lead nitrate. There are also several electrochemical detection methods, which can be combined with some of the above chemical techniques.
Applications
Sodium ethyl xanthate is used in the mining industry as flotation agent for recovery of metals, such as copper, nickel, silver or gold, as well as solid metal sulfides or oxides from ore slurries. This application was introduced by Cornelius H. Keller in 1925. Other applications include defoliant, herbicide, and an additive to rubber to protect it against oxygen and ozone.
In 2000, Australia produced up to 10,000 tonnes of sodium ethyl xanthate and imported about 6,000 tonnes, mostly from China. The material produced in Australia is the so-called 'liquid sodium ethyl xanthate' that refers to a 40% aqueous solution of the solid. It is obtained by treating carbon disulfide with sodium hydroxide and ethanol. Its density is 1.2 g/cm3 and the freezing point is −6 °C.
Safety
Sodium ethyl xanthate has moderate oral and dermal toxicity in animals and is irritating to eyes and skin. It is especially toxic to aquatic life and therefore its disposal is strictly controlled. The median lethal dose (male albino mice, oral, 10% solution at pH ~11) is 730 mg/kg of body weight, with most deaths occurring in the first day. The most affected organs were the central nervous system, liver and spleen.
Since 1993, sodium ethyl xanthate has been classified as a Priority Existing Chemical in Australia, meaning that its manufacture, handling, storage, use or disposal may result in adverse health or environmental effects. This decision was justified by the widespread use of the chemical in industry and its decomposition to the toxic and flammable carbon disulfide gas. Of two reported spillages of sodium ethyl xanthate in Australia, one resulted in the evacuation of 100 people and the hospitalization of 6 workers who were exposed to the fumes; in the other, residents of the spillage area complained of headache, dizziness, and nausea. Consequently, during high-risk sodium ethyl xanthate handling operations, workers are required by the Australian regulations to be equipped with protective clothing, anti-static gloves, boots and full-face respirators or self-contained breathing apparatus.
References
Bibliography
Priority existing chemical Report No. 5 Sodium Ethyl Xanthate, National Industrial Chemicals Notification and Assessment Scheme, Dep. of Health and Ageing, Australian Government (1995)
Priority Existing Chemical. Secondary Notification Assessment Report No. 5S Sodium Ethyl Xanthate, National Industrial Chemicals Notification and Assessment Scheme, Dep. of Health and Ageing, Australian Government, (February 2000)
Salts
Thiocarbonyl compounds
Organic sodium salts | Sodium ethyl xanthate | Chemistry | 1,259 |
41,441,632 | https://en.wikipedia.org/wiki/Kent%20Design%20Awards | These awards were created to celebrate design excellence in Kent and were first staged in 2003 and are usually held every two years. They were then renamed 'Kent Design and Development Awards' in 2012. Then have stayed as the 'Kent Design and Development Awards' in 2014.
2003
Commercial and Industrial Building winner - Holiday Extras HQ Building, Newingreen, Hythe
Public Building winner - Riverhead Infant School, Sevenoaks
Urban Design and Town Centre Renewal winner - St. Mildreds Lavender Mews, Canterbury
Best Individual House - Lynwood, Tunbridge Wells (private residence)
Housebuilding for Quality winner - Ingress Park, Greenhithe
Overall winner - Lynwood, Tunbridge Wells (private residence)
Highly Commended was Romney Warren Visitor Centre
2004
Housebuilding for Quality winner - Vista (private residence), Dungeness
Public Building/Education winner - St Augustine's RC School, Hythe
Town and Village Renaissance - Horsebridge and Brownings Yard, Whitstable
Overall Winner - St Augustine's RC School, Hythe
2005/2006
Public Building winner - Trosley Country Park amenity block
Commercial, Industrial and retail winner - Kings Hill Village Centre
Housebuilding winner - Iden Farm Cottage, Boughton Monchelsea, near Maidstone
Building Renovation winner - The Old Gymnasium, Deal Cavalry Barracks, Deal
Best New Neighbourhood winner - Affordable village housing in Ash Grove, St Margaret's at Cliffe, near Dover.
Overall Winner - The Goods Shed, Canterbury
2007/2008
Also nominated was the Sevenoaks Kaleidoscope museum, library and gallery, although it was misleadingly named as a winner in an architect's brochure.
Commercial, Industrial and retail winner - Broadside (HQ of MHS Homes), Chatham
Housebuilding winner - Sandling Park (a residential scheme), Maidstone
Building Renovation (joint winners) - Pilkington Building and Drill Hall Library, within the Universities at Medway
Public Building winner - Parrock Street public toilets, Gravesend (Gravesham Community Project PFI)
Landscape category winner - Lower Leas Coastal Park, Folkestone
Overall Winner - The Pines Calyx, St Margaret's at Cliffe
2010
30 projects were shortlisted in seven categories from more than 60 entries.
The Medway Building at the University of Kent as part of the Universities at Medway, was nominated for Best Public Building. Also nominated was Crossway Low Energy House, near Maidstone.
Conservation & Craftsmanship Category winner - The Darnley Mausoleum, Cobham
Town & Village Renaissance winner - Ashford Shared Space
Residential overall winner - The Quays (towers with the former Chatham Dockyard), Chatham Maritime
Residential (major development) winner - The Quays, Chatham Maritime
Residential (minor development) winner - El Ray, Dungeness
Commercial, Industrial & Retail winner - Deal Pier
Public Buildings (general) winner - Quarterhouse, Folkestone
Public Buildings (schools) winner - St. James the Great Primary & Nursery School, East Malling
Project of the Year - the Lord Sandy Bruce-Lockhart Award - The Darnley Mausoleum, Cobham
2012 (Renamed as 'Kent Design and Development Awards')
Jointly organised and sponsored by 'DHA Planning' (town planning and transport consultancy), Kent County Council and Ward Homes (public housing management).
94 nominees including Sevenoaks School Performing Arts Centre and Cornwallis Academy.
Commercial, Industrial and Retail winner - Rocksalt Restaurant, Folkestone
Public Buildings Education winner - Marlowe Theatre, Canterbury
Civils and Infrastructure winner - Dover Esplanade, sea frontage
Environmental Performance winner - Hadlow College
Minor Residential winner - Hill House, Ulcombe
Major Residential winner - Rosemary Gardens, Park Wood
Public Buildings, Community winner - Turner Contemporary
Public Buildings, Education winner - Walderslade Primary School
Project of the Year (Sponsored by DHA Planning) - Rocksalt Restaurant, Folkestone
2014 Kent Design and Development Awards
The shortlist was announced in September 2014;
Categories include:
Major Residential category - Horsted Park, Chatham
Minor Residential category - Pobble House, Romney Marsh
Commercial, Industrial and Retail category - Medway Crematorium
Civils and Infrastructure category - Sandwich Town Tidal Defences
Education Public Buildings category - Goat Lees Primary School, Ashford
Community Public Buildings category - Cyclopark, Gravesend
Environmental Performance category - Goat Lees Primary School, Ashford
Overall winner ‘Project of the year’ - Goat Lees Primary School, Ashford,
2016 Awards
Twenty-three developments were shortlisted for the eight categories;
Winners:
Commercial, Industrial and Retail category - The Wing, Capel-le-Ferne
Conservation category - Command of the Oceans at Chatham Historic Dockyard,
Environmental Performance category - North Vat, a house near Dungeness,
Infrastructure and Renewables category - the cut and cover tunnel at Hermitage Quarry, Barming, by Gallagher Ltd,
Education Public Buildings category - The Yarrow in Broadstairs,
Community Public Buildings category - Fairfield (part of East Kent College) in Dartford
Minor Residential category - Nautical Mews in Margate,
Major Residential category - Farrow Court in Ashford and Wallis Fields in Maidstone,
The Wing for the Battle of Britain Memorial Trust at Capel-le-Ferne was named Project of the Year.
References
External links
Kent Design and Development Awards 2012
Design awards
Architecture awards
Architecture in the United Kingdom
British awards
Awards established in 2000
Kent | Kent Design Awards | Engineering | 1,077 |
152,671 | https://en.wikipedia.org/wiki/Z3%20%28computer%29 | The Z3 was a German electromechanical computer designed by Konrad Zuse in 1938, and completed in 1941. It was the world's first working programmable, fully automatic digital computer. The Z3 was built with 2,600 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was stored on punched film. Initial values were entered manually.
The Z3 was completed in Berlin in 1941. It was not considered vital, so it was never put into everyday operation. Based on the work of the German aerodynamics engineer Hans Georg Küssner (known for the Küssner effect), a "Program to Compute a Complex Matrix" was written and used to solve wing flutter problems. Zuse asked the German government for funding to replace the relays with fully electronic switches, but funding was denied during World War II since such development was deemed "not war-important".
The original Z3 was destroyed on 21 December 1943 during an Allied bombardment of Berlin. That Z3 was originally called V3 (Versuchsmodell 3 or Experimental Model 3) but was renamed so that it would not be confused with Germany's V-weapons. A fully functioning replica was built in 1961 by Zuse's company, Zuse KG, which is now on permanent display at Deutsches Museum in Munich.
The Z3 was demonstrated in 1998 to be, in principle, Turing-complete. However, because it lacked conditional branching, the Z3 only meets this definition by speculatively computing all possible outcomes of a calculation.
Thanks to this machine and its predecessors, Konrad Zuse has often been suggested as the inventor of the computer.
Design and development
Zuse designed the Z1 in 1935 to 1936 and built it from 1936 to 1938. The Z1 was wholly mechanical and only worked for a few minutes at a time at most. Helmut Schreyer advised Zuse to use a different technology. As a doctoral student at the Technische Hochschule in Charlottenburg (now Technische Universität Berlin) in 1937 he worked on the implementation of Boolean operations and (in today's terminology) flip-flops on the basis of vacuum tubes. In 1938, Schreyer demonstrated a circuit on this basis to a small audience, and explained his vision of an electronic computing machine – but since the largest operational electronic devices contained far fewer tubes this was considered practically infeasible. In that year when presenting the plan for a computer with 2,000 electron tubes, Zuse and Schreyer, who was an assistant at Telecommunication Institute at Technische Universität Berlin, were discouraged by members of the institute who knew about the problems with electron tube technology. Zuse later recalled: "They smiled at us in 1939, when we wanted to build electronic machines ... We said: The electronic machine is great, but first the components have to be developed." In 1940, Zuse and Schreyer managed to arrange a meeting at the Oberkommando der Wehrmacht (OKW) to discuss a potential project for developing an electronic computer, but when they estimated a duration of two or three years, the proposal was rejected.
Zuse decided to implement the next design based on relays. The realization of the Z2 was helped financially by Kurt Pannke, who manufactured small calculating machines. The Z2 was completed and presented to an audience of the Deutsche Versuchsanstalt für Luftfahrt ("German Laboratory for Aviation") in 1940 in Berlin-Adlershof. Zuse was lucky – this presentation was one of the few instances where the Z2 actually worked and could convince the DVL to partly finance the next design.
In 1941, improving on the basic Z2 machine, he built the Z3 in a highly secret project of the German government. Joseph Jennissen (1905–1977), member of the "Research-Leadership" (Forschungsführung) in the Reich Air Ministry acted as a government supervisor for orders of the ministry to Zuse's company ZUSE Apparatebau. A further intermediary between Zuse and the Reich Air Ministry was the aerodynamicist Herbert A. Wagner.
The Z3 was completed in 1941 and was faster and far more reliable than the Z1 and Z2. The Z3 floating-point arithmetic was improved over that of the Z1 in that it implemented exception handling "using just a few relays": the exceptional values (plus infinity, minus infinity and undefined) could be generated and passed through operations. It further added a square root instruction.
The Z3, like its predecessors, stored its program on an external punched tape, thus no rewiring was necessary to change programs. However, it did not have conditional branching found in later universal computers.
On 12 May 1941, the Z3 was presented to an audience of scientists including the professors Alfred Teichmann and Curt Schmieden of the Deutsche Versuchsanstalt für Luftfahrt ("German Laboratory for Aviation") in Berlin, today known as the German Aerospace Center in Cologne.
Zuse moved on to the Z4 design, which he completed in a bunker in the Harz mountains, alongside Wernher von Braun's ballistic missile development. When World War II ended, Zuse retreated to Hinterstein in the Alps with the Z4, where he remained for several years.
Instruction set
The Z3 operated as a stack machine with a stack of two registers, R1 and R2. The first load operation in a program would load the contents of a memory location into R1; the next load operation would load the contents of a memory location into R2. Arithmetic instructions would operate on the contents of R1 and R2, leaving the result in R1, and clearing R2; the next load operation would load into R2. A store operation would store the contents of R1 into a memory location, and clear R1; the next load operation would load the contents of a memory location into R1.
A read keyboard operation would read a number from the keyboard into R1 and clear R2. A display instruction would display the contents of R1 and clear R2; the next load instruction would load into R2.
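The following Python sketch (an illustration, not based on Zuse's actual notation; the mnemonics and class name are invented) mimics the two-register discipline described above for a load, an addition and a store.

```python
class Z3Sketch:
    """Toy model of the Z3's two-register load/store discipline."""

    def __init__(self):
        self.memory = [0.0] * 64   # 64 words of data memory
        self.r1 = None             # register R1
        self.r2 = None             # register R2

    def load(self, addr):
        # The first load fills R1; the next load fills R2.
        if self.r1 is None:
            self.r1 = self.memory[addr]
        else:
            self.r2 = self.memory[addr]

    def add(self):
        # Arithmetic operates on R1 and R2, leaves the result in R1 and clears R2.
        self.r1, self.r2 = self.r1 + self.r2, None

    def store(self, addr):
        # A store writes R1 back to memory and clears R1.
        self.memory[addr] = self.r1
        self.r1 = None

m = Z3Sketch()
m.memory[0], m.memory[1] = 2.5, 4.0
m.load(0)    # R1 <- 2.5
m.load(1)    # R2 <- 4.0
m.add()      # R1 <- 6.5, R2 cleared
m.store(2)   # memory[2] <- 6.5, R1 cleared
print(m.memory[2])
```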
Z3 as a universal Turing machine
It was possible to construct loops on the Z3, but there was no conditional branch instruction. Nevertheless, the Z3 was Turing-complete – how to implement a universal Turing machine on the Z3 was shown in 1998 by Raúl Rojas. He proposed that the tape program would have to be long enough to execute every possible path through both sides of every branch. It would compute all possible answers, but the unneeded results would be canceled out (a kind of speculative execution). Rojas concludes, "We can therefore say that, from an abstract theoretical perspective, the computing model of the Z3 is equivalent to the computing model of today's computers. From a practical perspective, and in the way the Z3 was really programmed, it was not equivalent to modern computers."
This seeming limitation belies the fact that the Z3 provided a practical instruction set for the typical engineering applications of the 1940s. Mindful of the existing hardware restrictions, Zuse's main goal at the time was to have a workable device to facilitate his work as a civil engineer.
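A small illustration of the idea (not from the source, and far simpler than Rojas's construction): both arms of a conditional are evaluated, and the unwanted result is cancelled arithmetically rather than skipped by a branch.

```python
def select(condition_bit, value_if_true, value_if_false):
    """Keep one of two precomputed results without a conditional jump."""
    return condition_bit * value_if_true + (1 - condition_bit) * value_if_false

x = 3.0
is_negative = int(x < 0)           # stands in for reading the sign bit
print(select(is_negative, -x, x))  # absolute value of x, computed branch-free
```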
Relation to other work
The success of Zuse's Z3 is often attributed to its use of the simple binary system. This was invented roughly three centuries earlier by Gottfried Leibniz; Boole later used it to develop his Boolean algebra. Zuse was inspired by Hilbert's and Ackermann's book on elementary mathematical logic Principles of Mathematical Logic. In 1937, Claude Shannon introduced the idea of mapping Boolean algebra onto electronic relays in a seminal work on digital circuit design. Zuse, however, did not know of Shannon's work and developed the groundwork independently for his first computer Z1, which he designed and built from 1935 to 1938.
Zuse's coworker Helmut Schreyer built an electronic digital experimental model of a computer using 100 vacuum tubes in 1942, but it was lost at the end of the war.
An analog computer was built by the rocket scientist Helmut Hölzer in 1942 at the Peenemünde Army Research Center to simulate V-2 rocket trajectories.
The Colossus (1943), built by Tommy Flowers, and the Atanasoff–Berry computer (1942) used thermionic valves (vacuum tubes) and binary representation of numbers. Programming was by means of re-plugging patch panels and setting switches.
The ENIAC computer, completed after the war, used vacuum tubes to implement switches and used decimal representation for numbers. Until 1948 programming was, as with Colossus, by patch leads and switches.
The Manchester Baby of 1948 along with the Manchester Mark 1 and EDSAC both of 1949 were the world's earliest working computers that stored program instructions and data in the same space. In this they implemented the stored-program concept which is frequently (but erroneously) attributed to a 1945 paper by John von Neumann and colleagues. Von Neumann is said to have given due credit to Alan Turing, and the concept had actually been mentioned earlier by Konrad Zuse himself, in a 1936 patent application (that was rejected). Konrad Zuse himself remembered in his memoirs: "During the war it would have barely been possible to build efficient stored program devices anyway." Friedrich L. Bauer later wrote: "His visionary ideas (live programs) which were only to be published years afterwards aimed at the right practical direction but were never implemented by him."
Specifications
Average calculation speed: addition – 0.8 seconds, multiplication – 3 seconds
Arithmetic unit: Binary floating-point, 22-bit, add, subtract, multiply, divide, square root
Data memory: 64 22-bit words
Program memory: Punched celluloid tape
Input: Decimal floating-point numbers
Output: Decimal floating-point numbers
Input and Output was facilitated by a terminal, with a special keyboard for input and a row of lamps to show results
Elements: Around 2,000 relays (1,400 for the memory)
Frequency: 5–10 hertz
Power consumption: Around 4,000 watts
Weight: Around
Modern reconstructions
A modern reconstruction directed by Raúl Rojas and Horst Zuse started in 1997 and finished in 2003. It is now in the Konrad Zuse Museum in Hünfeld, Germany. Memory was halved to 32 words. Power consumption is about 400 W, and weight is about .
In 2008, Horst Zuse started a reconstruction of the Z3 by himself. It was presented in 2010 in the Konrad Zuse Museum in Hünfeld.
See also
History of computing hardware
Reverse Polish notation (RPN)
Notes
References
Further reading
External links
Z3 page at Horst Zuse's website
The Life and Work of Konrad Zuse
Paul E. Ceruzzi Collection on Konrad Zuse (CBI 219). Charles Babbage Institute, University of Minnesota. Collection contains published reports, articles, product literature, and other materials.
1940s computers
Z3
One-of-a-kind computers
German inventions of the Nazi period
World War II German electronics
Computer-related introductions in 1941
Konrad Zuse
Computers designed in Germany
Serial computers | Z3 (computer) | Technology | 2,297 |
211,922 | https://en.wikipedia.org/wiki/Impulse%20%28physics%29 | In classical mechanics, impulse (symbolized by J or Imp) is the change in momentum of an object. If the initial momentum of an object is p1, and a subsequent momentum is p2, the object has received an impulse J:
J = p2 − p1.
Momentum is a vector quantity, so impulse is also a vector quantity, and the same relation holds between the impulse and momentum vectors.
Newton's second law of motion states that the rate of change of momentum of an object is equal to the resultant force F acting on the object:
F = dp/dt,
so the impulse J delivered by a steady force F acting for a time Δt is:
J = F Δt.
The impulse delivered by a varying force is the integral of the force F with respect to time:
J = ∫ F dt.
The SI unit of impulse is the newton second (N⋅s), and the dimensionally equivalent unit of momentum is the kilogram metre per second (kg⋅m/s). The corresponding English engineering unit is the pound-second (lbf⋅s), and in the British Gravitational System, the unit is the slug-foot per second (slug⋅ft/s).
Mathematical derivation in the case of an object of constant mass
Impulse J produced from time t1 to t2 is defined to be
J = ∫_{t1}^{t2} F dt,
where F is the resultant force applied from t1 to t2.
From Newton's second law, force is related to momentum p by
F = dp/dt.
Therefore,
J = ∫_{t1}^{t2} (dp/dt) dt = p2 − p1 = Δp,
where Δp is the change in linear momentum from time t1 to t2. This is often called the impulse-momentum theorem (analogous to the work-energy theorem).
As a result, an impulse may also be regarded as the change in momentum of an object to which a resultant force is applied. The impulse may be expressed in a simpler form when the mass is constant:
J = ∫_{t1}^{t2} F dt = Δp = m v2 − m v1,
where
F is the resultant force applied,
t1 and t2 are times when the impulse begins and ends, respectively,
m is the mass of the object,
v2 is the final velocity of the object at the end of the time interval, and
v1 is the initial velocity of the object when the time interval begins.
Impulse has the same units and dimensions as momentum. In the International System of Units, these are N⋅s = kg⋅m/s. In English engineering units, they are lbf⋅s.
The term "impulse" is also used to refer to a fast-acting force or impact. This type of impulse is often idealized so that the change in momentum produced by the force happens with no change in time. This sort of change is a step change, and is not physically possible. However, this is a useful model for computing the effects of ideal collisions (such as in videogame physics engines). Additionally, in rocketry, the term "total impulse" is commonly used and is considered synonymous with the term "impulse".
Variable mass
The application of Newton's second law for variable mass allows impulse and momentum to be used as analysis tools for jet- or rocket-propelled vehicles. In the case of rockets, the impulse imparted can be normalized by unit of propellant expended, to create a performance parameter, specific impulse. This fact can be used to derive the Tsiolkovsky rocket equation, which relates the vehicle's propulsive change in velocity to the engine's specific impulse (or nozzle exhaust velocity) and the vehicle's propellant-mass ratio.
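The relation between specific impulse, mass ratio, and the propulsive change in velocity can be written out as a small sketch of the Tsiolkovsky rocket equation; the specific impulse and mass figures below are illustrative assumptions rather than values taken from this article.

```python
import math

# Minimal sketch of the Tsiolkovsky rocket equation:
#   Δv = Isp * g0 * ln(m0 / m1)
# where Isp is the specific impulse in seconds, g0 is standard gravity,
# m0 the initial (wet) mass and m1 the final (dry) mass. Numbers are assumed.

G0 = 9.80665  # m/s^2, standard gravity

def delta_v(isp_seconds: float, m0: float, m1: float) -> float:
    """Ideal velocity change for a single stage, ignoring drag and gravity losses."""
    return isp_seconds * G0 * math.log(m0 / m1)

if __name__ == "__main__":
    print(delta_v(isp_seconds=300.0, m0=10_000.0, m1=3_000.0))  # ≈ 3542 m/s
```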
See also
Wave–particle duality defines the impulse of a wave collision. The preservation of momentum in the collision is then called phase matching. Applications include:
Compton effect
Nonlinear optics
Acousto-optic modulator
Electron phonon scattering
Dirac delta function, mathematical abstraction of a pure impulse
Notes
References
External links
Dynamics
Classical mechanics
Vector physical quantities
Mechanical quantities
de:Impuls#Kraftstoß | Impulse (physics) | Physics,Mathematics | 698 |
1,852,396 | https://en.wikipedia.org/wiki/Water%20user%20board | A Water User Board (WUB), or Water User Association (WUA) is a group of water users, such as irrigators, who pool their financial, technical, material, and human resources for the operation and maintenance of a water system. A WUA usually elects leaders, handles disputes internally, collects fees, and implements maintenance. In most areas, WUA membership depends on one's relationship to a water source (such as groundwater or a canal).
Local Water User's Boards are widely used to manage irrigation in Peru, and are increasingly used to manage irrigation in the Dominican Republic, although with mixed results.
Characteristics of enduring, self-governing WUAs
Political scientist Elinor Ostrom has identified seven important characteristics of organizations which manage common resources well:
Clearly defined boundaries. The membership of the institution must be well defined. It must be clear who has legitimate access to the resource, who is under the authority of the association, and who the “others” are that must be prevented from access. Additionally, the boundaries of the resource must be defined. In the case of WUAs, the membership would likely be all landowners that receive water from a main canal and the resource would be the flows. This is known as a hydrologic organizing structure. However, some groups choose to organize in ways more familiar to their culture. There are cases of organization by village or kinship which also have had success.
Appropriation, rule, and local conditions congruence. It is necessary for the resource appropriations and rules to be adapted to a local area. Ostrom stresses that it is not specific rules which are necessary for strong institutions but rather rules to which the members agree. Rules made by locals will inevitably make sense with local conditions.
Collective-choice. It is necessary that all members have the opportunity to play a role in changing the rules. All those directly affected (i.e. irrigators) should be able to voice their opinions and vote. While officials are elected to execute duties, the real authority rests with the general assembly of water users.
Monitoring. In order for all users to make a credible commitment to one another and fully cooperate, they must know their fellow users are not stealing. Monitoring may take the form of water guards or more sophisticated gages.
Graduated sanctions. Penalties for those breaking the rules of the organization must be imposed by the members (or an elected board). The penalties should be commensurate with the infraction and could even lead to expulsion from the WUA. Such severe penalties deter users from attempting to steal.
Conflict-resolution mechanisms. One of the beauties of WUAs is the ability to handle disputes on the local level. This avoids the tortuous legal processes in the judicial system and adds to the accountability among the group. The members are apt to make equitable decisions for disputes knowing they may be in a similar situation in the future.
Minimal recognition of right to organize. Members must have the ability to organize without being challenged by external government authorities. In other words, they must be given true authority over their resource and the members in it.
WUA are fundamentally a participatory, bottom-up concept. Though they have existed for centuries, they have received particular attention in recent decades as a development tool. WUAs have been organized in developing countries as diverse and distant as Thailand, Brazil, Turkey, Somalia, and Nepal among others.
References
Water
Water and politics
Water management | Water user board | Environmental_science | 698 |
47,175,211 | https://en.wikipedia.org/wiki/Akshamsaddin | Akshamsaddin (Muhammad Shams al-Din bin Hamzah, ) (1389 in Damascus – 16 February 1459 in Göynük, Bolu), was an influential Ottoman Sunni Muslim scholar, poet, and mystic saint.
Biography
He was the grandson of Shahab al-Din al-Suhrawardi and a descendant of Abu Bakr al-Siddiq. He was an influential tutor and adviser to Sultan Mehmed the Conqueror. After completing his work with his master Sheikh Hacı Bayram-ı Veli, he founded the Shamsiyya-Bayramiyya Sufi order. He discovered the lost grave of Abu Ayyub al-Ansari (the companion of Muhammad) in Constantinople preceding the Siege of Constantinople.
In addition to his fame in religious sciences and Tasawwuf, Akshamsaddin was popular in the fields of medicine and pharmacology. There is not much reference to how he acquired this knowledge, but the Orientalist Elias John Wilkinson Gibb notes in his work History of Ottoman Poetry that Akshamsaddin learned from Haji Bayram Wali during his years with him. Akshamsaddin was also knowledgeable in the treatment of psychological and spiritual disorders. Akshamsaddin mentioned the microbe in his work Maddat ul-Hayat (The Material of Life) about two centuries prior to Antonie van Leeuwenhoek's discovery through experimentation:
Different sources claim that Akshemsaddin had seven or twelve sons; the youngest was the noted poet Ḥamd Allāh Ḥamdī.
Works
Risalat an-Nuriya
Khall-e Mushkilat
Maqamat-e Awliya
Kitab ut-Tib
Maddat ul-Hayat
References
Abu Ayyub al-Ansari
Abu Bakr
Bayramiye order
15th-century Muslim scholars of Islam
15th-century Muslim theologians
Muslims from the Ottoman Empire
Ottoman Sufis
Turkish Sufis
Sunni Sufis
Sunni Muslim scholars of Islam
15th-century writers from the Ottoman Empire
1389 births
1459 deaths
Microbiology
15th-century poets from the Ottoman Empire
Mehmed II
Male poets from the Ottoman Empire
Scientists from the Ottoman Empire
Muslim scholars | Akshamsaddin | Chemistry,Biology | 452 |
16,910,595 | https://en.wikipedia.org/wiki/Technetium%20star | A technetium star, or more properly a Tc-rich star, is a star whose stellar spectrum contains absorption lines of the light radioactive metal technetium. The most stable isotope of technetium is 97Tc, with a half-life of 4.21 million years, which is too short for the metal to be surviving material from before the star's formation. Therefore, the detection in 1952 of technetium in stellar spectra provided unambiguous proof of nucleosynthesis in stars, one of the more extreme cases being R Geminorum.
Stars containing technetium belong to the class of asymptotic giant branch stars (AGB)—stars that are like red giants, but with a slightly higher luminosity, and which burn hydrogen in an inner shell. Members of this class of stars switch to helium shell burning with an interval of some 100,000 years, in "dredge-ups". Technetium stars belong to the classes M, MS, S, SC and C-N. They are most often variable stars of the long period variable types.
Current research indicates that the presence of technetium in AGB stars occurs after some evolution and that a significant number of these stars do not exhibit the metal in their spectra. The presence of technetium seems to be related to the "third dredge-up" in the history of the stars. In between the thermal pulses of these AGB stars, heavy elements are formed in the region between the hydrogen- and helium-fusing shells via the slow neutron capture process, the s-process. The material is then brought to the surface via deep convection events. 99Tc, an isotope with a half-life of only 200,000 years, is produced in AGB stars and brought to the surface during thermal pulses. Its presence is taken as a reliable indicator that a third dredge-up has taken place.
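A quick back-of-the-envelope calculation using the half-life quoted above for 99Tc shows why surface technetium must have been synthesized recently rather than inherited: after a few tens of half-lives essentially none of it remains. The sketch below is a minimal illustration of ordinary exponential decay, using only the half-life figure given in the text.

```python
# Sketch: fraction of a radioactive isotope remaining after time t,
# N(t)/N0 = 2 ** (-t / t_half). The 99Tc half-life is the value quoted above.

def fraction_remaining(t_years: float, half_life_years: float) -> float:
    return 2.0 ** (-t_years / half_life_years)

if __name__ == "__main__":
    tc99_half_life = 2.0e5  # years, as quoted above for 99Tc
    for t in (2.0e5, 1.0e6, 1.0e7):
        print(f"after {t:.1e} yr: {fraction_remaining(t, tc99_half_life):.2e}")
    # After 10 million years, well under one part in 10**15 of the original
    # 99Tc remains, so any 99Tc seen in a spectrum must have been made recently.
```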
See also
References
External links
Star types
Technetium | Technetium star | Astronomy | 401 |
61,125,401 | https://en.wikipedia.org/wiki/From%20Argonavis | From Argonavis (stylized as from ARGONAVIS, originally titled Argonavis from BanG Dream! in 2018–2021) is a Japanese multimedia project by Bushiroad. An anime television series by Sanzigen aired from April 10 to July 3, 2020, on the Super Animeism block. A rhythm mobile game by DeNa titled Argonavis from BanG Dream! AAside featuring the main band Argonavis was released in Japan on January 14, 2021. A compilation anime film titled Gekijōban Argonavis: Ryūsei no Obligato premiered on November 19, 2021, and a new anime film titled Gekijōban Argonavis Axia premiered in March 2023.
In November 2021, it was announced that the project had changed its name from Argonavis from BanG Dream! to From Argonavis, meaning the project now stands on its own rather than being part of BanG Dream!. A new company dedicated to managing the project, Argonavis Co., Ltd., was also established, with Daisuke Hyūga as its public relations manager. Other changes included the establishment of a fan club, the shutdown of the rhythm game AAside's servers, and a new smartphone game in development.
Concept
From Argonavis' former name was stylistically written in all caps (ARGONAVIS from BanG Dream!) to differentiate the project and band names. Although it was titled Argonavis from BanG Dream!, the BanG Dream! franchise creator Takaaki Kidani stated that there would be no interaction between the girls in the main BanG Dream! universe and the new project, as they are in different worlds from one another. While the Argonavis project was originally planned as an extension of the general BanG Dream! franchise, mixed reception to the appearance of male characters in the original all-female franchise prompted Argonavis to be turned into an independent project in an alternative continuity.
Unlike the original BanG Dream!, which is set in Shinjuku, Tokyo, Argonavis from BanG Dream! is set in Hakodate, Hokkaido. The first band, Argonavis, consists of five first-year university students. They made their debut with a "0th Live" held on July 29, 2018. The second "0th Live" was held on September 15, followed by the third on December 10 of the same year. The lives were held at Shimokitazawa GARDEN. Argonavis' first original song, "Steady Goes!", was distributed for free to those who attended their first "0th" live.
The band's first single was released on February 20, 2019. Their second single "Starting Over" was released on August 21, 2019.
The band's first live was held on May 17 at Maihama Amphitheater, Chiba Prefecture. The project also announced a manga serialization as well as a music video for "Goal Line". Their second live, titled , will be held on December 5, 2019, at Tokyo Dome City Hall.
On November 5, 2019, Bushiroad announced that the franchise would have both an anime series scheduled for spring 2020 and a rhythm mobile game scheduled for early spring 2021. The game's story takes place after the story of the anime. The franchise also introduced three new bands: εpsilonΦ, Fujin Rizing!, and Fantome Iris. The bands will be featured in the new game along with Argonavis and Gyroaxia.
Characters
Argonavis
A pop rock band of college students based in Hakodate, Hokkaido.
Vocalist. A first-year university student who is studying at the Faculty of Law. He could not forget the excitement of the outdoor live he saw as a child, and wished to stand on a big stage one day. However, since he is not good at communicating with other people, he would only sing on his own at karaoke sessions until he was scouted by Yūto, who was looking for a vocalist for Argonavis. He is usually a calm person but gets fired up when it comes to music.
Guitarist. A first-year university student who is studying at the Faculty of Literature. Born into a prestigious family in Hakodate, he immersed himself in music activities due to his inferiority complex towards his superior older brothers. He is strong-minded and optimistic, and does not doubt that he will one day become successful with his band and that his family will finally look at him. With his creed "we wouldn't know before we try it", he confidently formed Argonavis.
Bassist. A first-year university student who is studying at the Faculty of Literature. His father used to be a seafarer and his mother's whereabouts are unknown. He has always been with his older brother since they were small. He started to become interested in bass because his brother was in a band. He is a prudent character who makes negative remarks to those who try to talk positively. However, he does that to make the band successful.
Keyboardist. A first-year university student who is studying at the Faculty of Political Science and Economics. Excelling in study, sports, and music to the point that he could do anything, he is nicknamed "Shindou" (meaning child prodigy) by people around him. Above all, he hoped to be a baseball player, but got hurt before the Koshien and had to give up his baseball career. He doesn't show his real emotions, but will respond to people who need him and try his best.
Drummer. A first-year university student who is studying at the Faculty of Business. He wants to make a name for himself and earn money with the band to rebuild his parents' dairy farm, which is sunk in debt. Drawing on his experience playing taiko when he was young, he showcased a powerful performance and sold himself on joining Argonavis; his drumming is surprisingly powerful for someone of his height and stature. With thorough pragmatism and a personality that dislikes waste, he is constantly high-spirited and a mood maker.
Gyroaxia
A hard rock band of college students based in Sapporo, Hokkaido.
Vocalist. He leads GYROAXIA with his powerful vocals and overwhelming charisma. He expects nothing but the best from his band members; if they do not live up to his expectations, he won't hesitate to cut them off without warning. He seems to be doing all this to get back at his father, a legendary bandman, for abandoning him and his mother. He has no interests other than music, to the point where he initially has a disconnect with his bandmates.
Guitarist. Bewitched by Nayuta Asahi's talent, this leader of GYROAXIA spares no effort in making his name well-known. He is proficient at gathering intelligence and uses it to collect information on rival bands, among others. While he adores Nayuta, he does not adore him as his own person; rather, he adores him as a vocalist. Since he and Wataru Matoba live in different homes due to their parents' divorce, he worries about him a lot.
Guitarist. A stubborn man who is also exceedingly patient. He is the enthusiastic, hardworking type who won't stop trying until he excels. While he strongly opposes Nayuta's dictatorial ways, he recognizes the talent in GYROAXIA and strongly believes they deserve to be on top. He is the only one with the guts to bare his fangs at Nayuta, but against someone as unpredictable as him, he always holds back from directly confronting him.
Bassist. He committed a crime back on his home planet and was banished to Earth as punishment. His crime was "the inability to make people happy". So he will stay on this planet until he is able to make someone happy... or so he says. He's a rare kind of genius—one able to join GYROAXIA without much of a sweat.
Drummer. He was a kickboxer before switching over to drums to get more popular. Making use of his natural strength, he drives GYROAXIA's rhythm into the ears of anyone who listens in a flashy manner. He is nice to kids and women and looks superficial at first, but he is actually quite stoic. He is among the oldest in GYROAXIA and also admires Nayuta. However, he doesn't get along with Kenta's idea of throwing away his humanity for him.
Fantôme Iris
A visual kei band from Nagoya. They go by stage names during lives. The members are all working adults.
Vocalist.
Guitarist.
Guitarist.
Bassist.
Drummer.
Fujin Rizing
A ska band of college students based in Nagasaki.
Vocalist and Saxophonist.
Guitarist.
Bassist.
Trombonist.
Drummer.
Epsilon Phi
A techno pop electronic rock band from Kyoto composed of middle school and high school students.
Vocalist.
Vocalist and Guitarist. Older twin brother of Kanata Nijo
Bassist. Younger twin brother of Haruka Nijo.
Synthesizer.
Drummer.
Straystride
A two-person rock band from Osaka. They previously won the LRFes and had their major debut but ended up disbanding.
Vocalist.
MC/rapper.
Other characters
Owner of the cafe Submariner.
Manager of Gyroaxia.
Vocalist of the legendary band SYANA and Nayuta Asahi's Father.
Music
An animated music video for "Goal Line", animated by Sanzigen, will be released some time in 2019. The band's second single, "Starting Over", was used as the theme song of the video game Cardfight!! Vanguard Ex; its coupling song is the ending song of the Cardfight!! Vanguard anime adaptation.
Media
Anime
An anime adaptation for the franchise was announced on November 4, 2019. The series is animated by Sanzigen and directed by Hiroshi Nishikiori, with Nobuhiro Mōri handling series composition, Hikaru Miyoshi designing the characters, and Ryō Takahashi composing the series' music. It aired from April 10 to July 3, 2020, on the Super Animeism block on MBS, TBS, and other channels.
Episode list
Films
During the Argonavis AAside New Year Live-Streamed 'NaviZome' Online event on January 9, 2021, it was announced that a new anime film project is in production. A compilation film titled Gekijōban Argonavis: Ryūsei no Obligato has also been announced and premiered on November 19, 2021. The new anime film project, titled Gekijōban Argonavis Axia, was originally set to premiere in Japanese theaters in Q3 2022, but it was later delayed to November 4, 2022, and then to March 24, 2023.
Mobile game
A mobile rhythm game developed by DeNA titled Argonavis from BanG Dream! AAside (with AA pronounced "Double A") was released on January 14, 2021. The game featured three bands in addition to Argonavis and Gyroaxia: Fantôme Iris, Fujin Rizing!, and εpsilonΦ. It was initially planned for a late 2020 release before being postponed to spring 2021 to continue development and ensure a better product. On January 31, 2022, the game was shut down; however, it was announced that a new game would enter development soon. On May 22, 2023, the game ARGONAVIS -Kimi ga Mita Stage e- (アルゴナビス -キミが見たステージへ-, ARGONAVIS -To the Stage You've Dreamed Of) was announced, along with the addition of Straystride. The game is a raising simulator, in which players raise the stats of band members by levelling up cards and using the "produce" mechanic. It was planned for a summer 2023 release, but was postponed to winter 2023, and then again to the first half of 2024. The game was officially released on February 7, 2024.
Note list
References
External links
2020 anime television series debuts
Animeism
Bushiroad
Japanese idol video games
Japanese pop music groups
Japanese rock music groups
Muse Communication
Music in anime and manga
Multimedia works
Sanzigen
Shōnen manga
Shueisha manga | From Argonavis | Technology | 2,544 |
42,978,048 | https://en.wikipedia.org/wiki/HD%20240237 | HD 240237 is a star in the northern constellation of Cassiopeia. It is an orange star that can be viewed with binoculars or a small telescope, but is too faint to be seen with the naked eye at an apparent visual magnitude of 8.19. This object is located at a distance of approximately 3,100 light years away from the Sun based on parallax, but is drifting closer with a radial velocity of −25 km/s.
This is an aging giant star with a stellar classification of K2III; a star that has exhausted the supply of hydrogen at its core and expanded to 78 times the radius of the Sun. S. Gettel and associates (2011) estimate the star is around 270 million years old with 1.7 times the mass of the Sun. However, S. G. Sousa and associates found a much lower mass of 0.61 times the mass of the Sun. It is radiating 1,244 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 3,878 K.
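The quoted luminosity can be cross-checked against the radius and effective temperature given above using the Stefan-Boltzmann relation in solar units, L/L_sun = (R/R_sun)^2 × (T/T_sun)^4. The small sketch below assumes a solar effective temperature of about 5772 K, a standard value not taken from this article.

```python
# Sketch: consistency check of the quoted luminosity of HD 240237 using the
# Stefan-Boltzmann relation in solar units. T_SUN is an assumed standard value.

T_SUN = 5772.0  # K, assumed solar effective temperature

def luminosity_solar_units(radius_solar: float, t_eff: float) -> float:
    return radius_solar ** 2 * (t_eff / T_SUN) ** 4

if __name__ == "__main__":
    # Values quoted in the text: R ≈ 78 R_sun, T_eff ≈ 3,878 K.
    print(luminosity_solar_units(78.0, 3878.0))  # ≈ 1.24e3, close to the quoted 1,244 L_sun
```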
Planetary system
In 2011, Gettel et al. announced the discovery of a planet orbiting this star. They estimated a mass around 5 times that of Jupiter, with an orbital period of and a moderate eccentricity. Sousa et al. (2015) gave a much lower estimate of . The designation b for this object derives from the order of discovery. The designation b is given to the first planet orbiting a given star, followed by the other lowercase letters of the alphabet. In the case of HD 240237, there was only one planet, so only the letter b is used.
References
K-type giants
Planetary systems with one confirmed planet
Cassiopeia (constellation)
Durchmusterung objects
240237
114840
J23154222+5802358 | HD 240237 | Astronomy | 376 |
5,731,861 | https://en.wikipedia.org/wiki/Van%20Hiele%20model | In mathematics education, the Van Hiele model is a theory that describes how students learn geometry. The theory originated in 1957 in the doctoral dissertations of Dina van Hiele-Geldof and Pierre van Hiele (wife and husband) at Utrecht University, in the Netherlands. The Soviets did research on the theory in the 1960s and integrated their findings into their curricula. American researchers did several large studies on the van Hiele theory in the late 1970s and early 1980s, concluding that students' low van Hiele levels made it difficult to succeed in proof-oriented geometry courses and advising better preparation at earlier grade levels. Pierre van Hiele published Structure and Insight in 1986, further describing his theory. The model has greatly influenced geometry curricula throughout the world through emphasis on analyzing properties and classification of shapes at early grade levels. In the United States, the theory has influenced the geometry strand of the Standards published by the National Council of Teachers of Mathematics and the Common Core Standards.
Van Hiele levels
The student learns by rote to operate with [mathematical] relations that he does not understand, and of which he has not seen the origin…. Therefore the system of relations is an independent construction having no rapport with other experiences of the child. This means that the student knows only what has been taught to him and what has been deduced from it. He has not learned to establish connections between the system and the sensory world. He will not know how to apply what he has learned in a new situation. - Pierre van Hiele, 1959
The best known part of the van Hiele model are the five levels which the van Hieles postulated to describe how children learn to reason in geometry. Students cannot be expected to prove geometric theorems until they have built up an extensive understanding of the systems of relationships between geometric ideas. These systems cannot be learned by rote, but must be developed through familiarity by experiencing numerous examples and counterexamples, the various properties of geometric figures, the relationships between the properties, and how these properties are ordered. The five levels postulated by the van Hieles describe how students advance through this understanding.
The five van Hiele levels are sometimes misunderstood to be descriptions of how students understand shape classification, but the levels actually describe the way that students reason about shapes and other geometric ideas. Pierre van Hiele noticed that his students tended to "plateau" at certain points in their understanding of geometry and he identified these plateau points as levels. In general, these levels are a product of experience and instruction rather than age. This is in contrast to Piaget's theory of cognitive development, which is age-dependent. A child must have enough experiences (classroom or otherwise) with these geometric ideas to move to a higher level of sophistication. Through rich experiences, children can reach Level 2 in elementary school. Without such experiences, many adults (including teachers) remain in Level 1 all their lives, even if they take a formal geometry course in secondary school. The levels are as follows:
Level 0. Visualization: At this level, the focus of a child's thinking is on individual shapes, which the child is learning to classify by judging their holistic appearance. Children simply say, "That is a circle," usually without further description. Children identify prototypes of basic geometrical figures (triangle, circle, square). These visual prototypes are then used to identify other shapes. A shape is a circle because it looks like a sun; a shape is a rectangle because it looks like a door or a box; and so on. A square seems to be a different sort of shape than a rectangle, and a rhombus does not look like other parallelograms, so these shapes are classified completely separately in the child’s mind. Children view figures holistically without analyzing their properties. If a shape does not sufficiently resemble its prototype, the child may reject the classification. Thus, children at this stage might balk at calling a thin, wedge-shaped triangle (with sides 1, 20, 20 or sides 20, 20, 39) a "triangle", because it's so different in shape from an equilateral triangle, which is the usual prototype for "triangle". If the horizontal base of the triangle is on top and the opposing vertex below, the child may recognize it as a triangle, but claim it is "upside down". Shapes with rounded or incomplete sides may be accepted as "triangles" if they bear a holistic resemblance to an equilateral triangle. Squares are called "diamonds" and not recognized as squares if their sides are oriented at 45° to the horizontal. Children at this level often believe something is true based on a single example.
Level 1. Analysis: At this level, the shapes become bearers of their properties. The objects of thought are classes of shapes, which the child has learned to analyze as having properties. A person at this level might say, "A square has 4 equal sides and 4 equal angles. Its diagonals are congruent and perpendicular, and they bisect each other." The properties are more important than the appearance of the shape. If a figure is sketched on the blackboard and the teacher claims it is intended to have congruent sides and angles, the students accept that it is a square, even if it is poorly drawn. Properties are not yet ordered at this level. Children can discuss the properties of the basic figures and recognize them by these properties, but generally do not allow categories to overlap because they understand each property in isolation from the others. For example, they will still insist that "a square is not a rectangle." (They may introduce extraneous properties to support such beliefs, such as defining a rectangle as a shape with one pair of sides longer than the other pair of sides.) Children begin to notice many properties of shapes, but do not see the relationships between the properties; therefore they cannot reduce the list of properties to a concise definition with necessary and sufficient conditions. They usually reason inductively from several examples, but cannot yet reason deductively because they do not understand how the properties of shapes are related.
Level 2. Abstraction: At this level, properties are ordered. The objects of thought are geometric properties, which the student has learned to connect deductively. The student understands that properties are related and one set of properties may imply another property. Students can reason with simple arguments about geometric figures. A student at this level might say, "Isosceles triangles are symmetric, so their base angles must be equal." Learners recognize the relationships between types of shapes. They recognize that all squares are rectangles, but not all rectangles are squares, and they understand why squares are a type of rectangle based on an understanding of the properties of each. They can tell whether it is possible or not to have a rectangle that is, for example, also a rhombus. They understand necessary and sufficient conditions and can write concise definitions. However, they do not yet understand the intrinsic meaning of deduction. They cannot follow a complex argument, understand the place of definitions, or grasp the need for axioms, so they cannot yet understand the role of formal geometric proofs.
Level 3. Deduction: Students at this level understand the meaning of deduction. The object of thought is deductive reasoning (simple proofs), which the student learns to combine to form a system of formal proofs (Euclidean geometry). Learners can construct geometric proofs at a secondary school level and understand their meaning. They understand the role of undefined terms, definitions, axioms and theorems in Euclidean geometry. However, students at this level believe that axioms and definitions are fixed, rather than arbitrary, so they cannot yet conceive of non-Euclidean geometry. Geometric ideas are still understood as objects in the Euclidean plane.
Level 4. Rigor: At this level, geometry is understood at the level of a mathematician. Students understand that definitions are arbitrary and need not actually refer to any concrete realization. The object of thought is deductive geometric systems, for which the learner compares axiomatic systems. Learners can study non-Euclidean geometries with understanding. People can understand the discipline of geometry and how it differs philosophically from non-mathematical studies.
American researchers renumbered the levels as 1 to 5 so that they could add a "Level 0" which described young children who could not identify shapes at all. Both numbering systems are still in use. Some researchers also give different names to the levels.
Properties of the levels
The van Hiele levels have five properties:
1. Fixed sequence: the levels are hierarchical. Students cannot "skip" a level. The van Hieles claim that much of the difficulty experienced by geometry students is due to being taught at the Deduction level when they have not yet achieved the Abstraction level.
2. Adjacency: properties which are intrinsic at one level become extrinsic at the next. (The properties are there at the Visualization level, but the student is not yet consciously aware of them until the Analysis level. Properties are in fact related at the Analysis level, but students are not yet explicitly aware of the relationships.)
3. Distinction: each level has its own linguistic symbols and network of relationships. The meaning of a linguistic symbol is more than its explicit definition; it includes the experiences the speaker associates with the given symbol. What may be "correct" at one level is not necessarily correct at another level. At Level 0 a square is something that looks like a box. At Level 2 a square is a special type of rectangle. Neither of these is a correct description of the meaning of "square" for someone reasoning at Level 1. If the student is simply handed the definition and its associated properties, without being allowed to develop meaningful experiences with the concept, the student will not be able to apply this knowledge beyond the situations used in the lesson.
4. Separation: a teacher who is reasoning at one level speaks a different "language" from a student at a lower level, preventing understanding. When a teacher speaks of a "square" she or he means a special type of rectangle. A student at Level 0 or 1 will not have the same understanding of this term. The student does not understand the teacher, and the teacher does not understand how the student is reasoning, frequently concluding that the student's answers are simply "wrong". The van Hieles believed this property was one of the main reasons for failure in geometry. Teachers believe they are expressing themselves clearly and logically, but their Level 3 or 4 reasoning is not understandable to students at lower levels, nor do the teachers understand their students’ thought processes. Ideally, the teacher and students need shared experiences behind their language.
5. Attainment: The van Hieles recommended five phases for guiding students from one level to another on a given topic:
Information or inquiry: students get acquainted with the material and begin to discover its structure. Teachers present a new idea and allow the students to work with the new concept. By having students experience the structure of the new concept in a similar way, they can have meaningful conversations about it. (A teacher might say, "This is a rhombus. Construct some more rhombi on your paper.")
Guided or directed orientation: students do tasks that enable them to explore implicit relationships. Teachers propose activities of a fairly guided nature that allow students to become familiar with the properties of the new concept which the teacher desires them to learn. (A teacher might ask, "What happens when you cut out and fold the rhombus along a diagonal? the other diagonal?" and so on, followed by discussion.)
Explicitation: students express what they have discovered and vocabulary is introduced. The students’ experiences are linked to shared linguistic symbols. The van Hieles believe it is more profitable to learn vocabulary after students have had an opportunity to become familiar with the concept. The discoveries are made as explicit as possible. (A teacher might say, "Here are the properties we have noticed and some associated vocabulary for the things you discovered. Let's discuss what these mean.")
Free orientation: students do more complex tasks enabling them to master the network of relationships in the material. They know the properties being studied, but need to develop fluency in navigating the network of relationships in various situations. This type of activity is much more open-ended than the guided orientation. These tasks will not have set procedures for solving them. Problems may be more complex and require more free exploration to find solutions. (A teacher might say, "How could you construct a rhombus given only two of its sides?" and other problems for which students have not learned a fixed procedure.)
Integration: students summarize what they have learned and commit it to memory. The teacher may give the students an overview of everything they have learned. It is important that the teacher not present any new material during this phase, but only a summary of what has already been learned. The teacher might also give an assignment to remember the principles and vocabulary learned for future work, possibly through further exercises. (A teacher might say, "Here is a summary of what we have learned. Write this in your notebook and do these exercises for homework.") Supporters of the van Hiele model point out that traditional instruction often involves only this last phase, which explains why students do not master the material.
For Dina van Hiele-Geldof's doctoral dissertation, she conducted a teaching experiment with 12-year-olds in a Montessori secondary school in the Netherlands. She reported that by using this method she was able to raise students' levels from Level 0 to 1 in 20 lessons and from Level 1 to 2 in 50 lessons.
Research
Using van Hiele levels as the criterion, almost half of geometry students are placed in a course in which their chances of being successful are only 50-50. — Zalman Usiskin, 1982
Researchers found that the van Hiele levels of American students are low. European researchers have found similar results for European students. Many, perhaps most, American students do not achieve the Deduction level even after successfully completing a proof-oriented high school geometry course, probably because material is learned by rote, as the van Hieles claimed. This appears to be because American high school geometry courses assume students are already at least at Level 2, ready to move into Level 3, whereas many high school students are still at Level 1, or even Level 0. See the Fixed Sequence property above.
Criticism and modifications of the theory
The levels are discontinuous, as defined in the properties above, but researchers have debated as to just how discrete the levels actually are. Studies have found that many children reason at multiple levels, or intermediate levels, which appears to be in contradiction to the theory. Children also advance through the levels at different rates for different concepts, depending on their exposure to the subject. They may therefore reason at one level for certain shapes, but at another level for other shapes.
Some researchers have found that many children at the Visualization level do not reason in a completely holistic fashion, but may focus on a single attribute, such as the equal sides of a square or the roundness of a circle. They have proposed renaming this level the syncretic level. Other modifications have also been suggested, such as defining sub-levels between the main levels, though none of these modifications have yet gained popularity.
Further reading
The Van Hiele Levels of Geometric Understanding by Marguerite Mason
Young Children's Developing Understanding of Geometric Shapes by Mary Anne Hannibal
References
External links
The van Hiele Levels of Geometric Understanding — PDF of Frequently Asked Questions about the van Hiele model, with bibliography
Linking the Van Hiele Theory to Instruction — Activities based on the van Hiele theory
The Development of Spatial and Geometric Thinking: the Importance of Instruction.
Van Hiele Levels and Achievement in Secondary School Geometry — Large 1982 Chicago study analyzing the van Hiele model and its import on understanding American high school students' achievement in geometry
A Framework for Geometry K – 12 — PowerPoint Presentation
The van Hiele Model of Geometric Thinking A short presentation of the main aspects of the van Hiele Model
International conference "Van Hiele Theory in Mathematical Education", Croatia. Organized by the University of Zadar, Department for Teacher Education, and HUNI - Hrvatska udruga nastavnika istraživača (Croatian association of teacher researchers). Professional lectures and workshops covered topics such as action research, aspects of the van Hiele levels applied to functions, and a test proposal for use in Croatian state schools.
Geometry education
Pedagogy | Van Hiele model | Mathematics | 3,391 |
15,492 | https://en.wikipedia.org/wiki/Imperial%20units | The imperial system of units, imperial system or imperial units (also known as British Imperial or Exchequer Standards of 1826) is the system of units first defined in the British Weights and Measures Act 1824 and continued to be developed through a series of Weights and Measures Acts and amendments.
The imperial system developed from earlier English units as did the related but differing system of customary units of the United States. The imperial units replaced the Winchester Standards, which were in effect from 1588 to 1825. The system came into official use across the British Empire in 1826.
By the late 20th century, most nations of the former empire had officially adopted the metric system as their main system of measurement, but imperial units are still used alongside metric units in the United Kingdom and in some other parts of the former empire, notably Canada.
The modern UK legislation defining the imperial system of units is given in the Weights and Measures Act 1985 (as amended).
Implementation
The Weights and Measures Act 1824 was initially scheduled to go into effect on 1 May 1825. The Weights and Measures Act 1825 pushed back the date to 1 January 1826. The 1824 act allowed the continued use of pre-imperial units provided that they were customary, widely known, and clearly marked with imperial equivalents.
Apothecaries' units
Apothecaries' units are not mentioned in the acts of 1824 and 1825. At the time, apothecaries' weights and measures were regulated "in England, Wales, and Berwick-upon-Tweed" by the London College of Physicians, and in Ireland by the Dublin College of Physicians. In Scotland, apothecaries' units were unofficially regulated by the Edinburgh College of Physicians. The three colleges published, at infrequent intervals, pharmacopoeias, the London and Dublin editions having the force of law.
Imperial apothecaries' measures, based on the imperial pint of 20 fluid ounces, were introduced by the publication of the London Pharmacopoeia of 1836, the Edinburgh Pharmacopoeia of 1839, and the Dublin Pharmacopoeia of 1850. The Medical Act 1858 transferred to the Crown the right to publish the official pharmacopoeia and to regulate apothecaries' weights and measures.
Units
Length
Metric equivalents in this article usually assume the latest official definition. Before this date, the most precise measurement of the imperial Standard Yard was metres.
Area
Volume
The Weights and Measures Act 1824 invalidated the various different gallons in use in the British Empire, declaring them to be replaced by the statute gallon (which became known as the imperial gallon), a unit close in volume to the ale gallon. The 1824 act defined the volume of a gallon to be that of of distilled water weighed in air with brass weights with the barometer standing at at a temperature of . The 1824 act went on to give this volume as . The Weights and Measures Act 1963 refined this definition to be the volume of 10 pounds of distilled water of density weighed in air of density against weights of density , which works out to . The Weights and Measures Act 1985 defined a gallon to be exactly (approximately ).
British apothecaries' volume measures
These measurements were in use from 1826, when the new imperial gallon was defined. For pharmaceutical purposes, they were replaced by the metric system in the United Kingdom on 1 January 1971. In the US, though no longer recommended, the apothecaries' system is still used occasionally in medicine, especially in prescriptions for older medications.
Mass and weight
In the 19th and 20th centuries, the UK used three different systems for mass and weight.
troy weight, used for precious metals;
avoirdupois weight, used for most other purposes; and
apothecaries' weight, now virtually unused since the metric system is used for all scientific purposes.
The distinction between mass and weight is not always clearly drawn. Strictly a pound is a unit of mass, but it is commonly referred to as a weight. When a distinction is necessary, the term pound-force may refer to a unit of force rather than mass. The troy pound () was made the primary unit of mass by the Weights and Measures Act 1824 and its use was abolished in the UK on 1 January 1879, with only the troy ounce () and its decimal subdivisions retained. The Weights and Measures Act 1855 made the avoirdupois pound the primary unit of mass. In all the systems, the fundamental unit is the pound, and all other units are defined as fractions or multiples of it.
Natural equivalents
The 1824 Act of Parliament defined the yard and pound by reference to the prototype standards, and it also defined the values of certain physical constants, to make provision for re-creation of the standards if they were to be damaged. For the yard, the length of a pendulum beating seconds at the latitude of Greenwich at Mean Sea Level in vacuo was defined as inches. For the pound, the mass of a cubic inch of distilled water at an atmospheric pressure of 30 inches of mercury and a temperature of 62° Fahrenheit was defined as 252.458 grains, with there being 7,000 grains per pound.
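The seconds-pendulum provision can be sanity-checked with the small-oscillation formula L = gT²/(4π²); a pendulum "beating seconds" ticks once per second, so its full period is two seconds. The sketch below uses an assumed modern round figure for g rather than the value implied by the 1824 act, so the result is only approximate.

```python
import math

# Sketch: length of a seconds pendulum from the small-oscillation formula
#   L = g * T^2 / (4 * pi^2)
# A pendulum "beating seconds" ticks once per second, so its full period is 2 s.
# g is an assumed round value, not the figure specified in the 1824 act.

def seconds_pendulum_length(g: float = 9.81) -> float:
    period = 2.0  # seconds
    return g * period ** 2 / (4.0 * math.pi ** 2)

if __name__ == "__main__":
    length_m = seconds_pendulum_length()
    print(f"{length_m:.4f} m ≈ {length_m / 0.0254:.2f} inches")  # ≈ 0.9940 m ≈ 39.13 in
```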
Following the destruction of the original prototypes in the 1834 Houses of Parliament fire, it proved impossible to recreate the standards from these definitions, and a new Weights and Measures Act 1855 was passed which permitted the recreation of the prototypes from recognized secondary standards.
Current use
United Kingdom
Since the Weights and Measures Act 1985, British law defines base imperial units in terms of their metric equivalent. The metric system is routinely used in business and technology within the United Kingdom, with imperial units remaining in widespread use amongst the public. All UK roads use the imperial system except for weight limits, and newer height or width restriction signs give metric alongside imperial.
Traders in the UK may accept requests from customers specified in imperial units, and scales which display in both unit systems are commonplace in the retail trade. Metric price signs may be accompanied by imperial price signs provided that the imperial signs are no larger and no more prominent than the metric ones.
The United Kingdom completed its official partial transition to the metric system in 1995, with imperial units still legally mandated for certain applications such as draught beer and cider, and road signs. Therefore, the speedometers on vehicles sold in the UK must be capable of displaying miles per hour. Even though the troy pound was outlawed in the UK by the Weights and Measures Act 1878, the troy ounce may still be used for the weights of precious stones and metals. The original railways (many built in the Victorian era) remain heavy users of imperial units, with distances officially measured in miles and yards or miles and chains, and also feet and inches, and speeds in miles per hour.
Some British people still use one or more imperial units in everyday life for distance (miles, yards, feet, and inches) and some types of volume measurement (especially milk and beer in pints; rarely for canned or bottled soft drinks, or petrol). , many British people also still use imperial units in everyday life for body weight (stones and pounds for adults, pounds and ounces for babies). Government documents aimed at the public may give body weight and height in imperial units as well as in metric. A survey in 2015 found that many people did not know their body weight or height in both systems. As of 2017, people under the age of 40 preferred the metric system but people aged 40 and over preferred the imperial system. As in other English-speaking countries, including Australia, Canada and the United States, the height of horses is usually measured in hands, standardised to . Fuel consumption for vehicles is commonly stated in miles per gallon (mpg), though official figures always include litres per equivalents and fuel is sold in litres. When sold draught in licensed premises, beer and cider must be sold in pints, half-pints or third-pints. Cow's milk is available in both litre- and pint-based containers in supermarkets and shops. Areas of land associated with farming, forestry and real estate are commonly advertised in acres and square feet but, for contracts and land registration purposes, the units are always hectares and square metres.
Office space and industrial units are usually advertised in square feet. Steel pipe sizes are sold in increments of inches, while copper pipe is sold in increments of millimetres. Road bicycles have their frames measured in centimetres, while off-road bicycles have their frames measured in inches. Display sizes for screens on television sets and computer monitors are always diagonally measured in inches. Food sold by length or width, e.g. pizzas or sandwiches, is generally sold in inches. Clothing is usually sized in inches, with the metric equivalent often shown as a small supplementary indicator. Gas is usually measured by the cubic foot or cubic metre, but is billed like electricity by the kilowatt hour.
Pre-packaged products can show both metric and imperial measures, and it is also common to see imperial pack sizes with metric only labels, e.g. a tin of Lyle's Golden Syrup is always labelled with no imperial indicator. Similarly most jars of jam and packs of sausages are labelled with no imperial indicator.
India
India began converting to the metric system from the imperial system between 1955 and 1962. The metric system in weights and measures was adopted by the Indian Parliament in December 1956 with the Standards of Weights and Measures Act, which took effect beginning 1 October 1958. By 1962, metric units became "mandatory and exclusive."
Today all official measurements are made in the metric system. In common usage some older Indians may still refer to imperial units. Some measurements, such as the heights of mountains, are still recorded in feet. Tyre rim diameters are still measured in inches, as used worldwide. Industries like the construction and the real estate industry still use both the metric and the imperial system though it is more common for sizes of homes to be given in square feet and land in acres.
In Standard Indian English, as in Australian, Canadian, New Zealand, Singaporean, and British English, metric units such as the litre, metre, and tonne utilise the traditional spellings brought over from French, which differ from those used in the United States and the Philippines. The imperial long ton is invariably spelt with one 'n'.
Hong Kong
Hong Kong has three main systems of units of measurement in current use:
The Chinese units of measurement of the Qing Empire (no longer in widespread use in China);
British imperial units; and
The metric system.
In 1976 the Hong Kong Government started the conversion to the metric system, and as of 2012 measurements for government purposes, such as road signs, are almost always in metric units. All three systems are officially permitted for trade, and in the wider society a mixture of all three systems prevails.
The Chinese system's most commonly used units for length are (lei5), (zoeng6), (cek3), (cyun3), (fan1) in descending scale order. These units are now rarely used in daily life, the imperial and metric systems being preferred. The imperial equivalents are written with the same basic Chinese characters as the Chinese system. In order to distinguish between the units of the two systems, the units can be prefixed with "Ying" (, jing1) for the imperial system and "Wa" (, waa4) for the Chinese system. In writing, derived characters are often used, with an additional (mouth) radical to the left of the original Chinese character, for writing imperial units. The most commonly used units are the mile or "li" (, li1), the yard or "ma" (, maa5), the foot or "chek" (, cek3), and the inch or "tsun" (, cyun3).
The traditional measure of flat area is the square foot (, fong1 cek3, ping4 fong1 cek3) of the imperial system, which is still in common use for real estate purposes. The measurement of agricultural plots and fields is traditionally conducted in (mau5) of the Chinese system.
For the measurement of volume, Hong Kong officially uses the metric system, though the gallon (, gaa1 leon4-2) is also occasionally used.
Canada
During the 1970s, the metric system and SI units were introduced in Canada to replace the imperial system. Within the government, efforts to implement the metric system were extensive; almost any agency, institution, or function provided by the government uses SI units exclusively. Imperial units were eliminated from all public road signs, though both systems of measurement can still be found on privately owned signs, such as the height warnings at the entrance of a parkade. In the 1980s, momentum to fully convert to the metric system stalled when the government of Brian Mulroney was elected. There was heavy opposition to metrication, and as a compromise the government maintains legal definitions for and allows use of imperial units as long as metric units are shown as well.
The law requires that measured products (such as fuel and meat) be priced in metric units and an imperial price can be shown if a metric price is present. There tends to be leniency in regards to fruits and vegetables being priced in imperial units only.
Environment Canada still offers an imperial unit option beside metric units, even though weather is typically measured and reported in metric units in the Canadian media. Some radio stations near the United States border (such as CIMX and CIDR) primarily use imperial units to report the weather. Railways in Canada also continue to use imperial units.
Imperial units are still used in ordinary conversation. Today, Canadians typically use a mix of metric and imperial measurements in their daily lives. The use of the metric and imperial systems varies by age. The older generation mostly uses the imperial system, while the younger generation more often uses the metric system. Quebec has implemented metrication more fully. Newborns are measured in SI at hospitals, but the birth weight and length is also announced to family and friends in imperial units. Drivers' licences use SI units, though many English-speaking Canadians give their height and weight in imperial. In livestock auction markets, cattle are sold in dollars per hundredweight (short), whereas hogs are sold in dollars per hundred kilograms. Imperial units still dominate in recipes, construction, house renovation and gardening. Land is now surveyed and registered in metric units whilst initial surveys used imperial units. For example, partitioning of farmland on the prairies in the late 19th and early 20th centuries was done in imperial units; this accounts for imperial units of distance and area retaining wide use in the Prairie Provinces.
In English-speaking Canada commercial and residential spaces are mostly (but not exclusively) constructed using square feet, while in French-speaking Quebec commercial and residential spaces are constructed in metres and advertised using both square metres and square feet as equivalents. Carpet or flooring tile is purchased by the square foot, but less frequently also in square metres. Motor-vehicle fuel consumption is reported in both litres per and statute miles per imperial gallon, leading to the erroneous impression that Canadian vehicles are 20% more fuel-efficient than their apparently identical American counterparts for which fuel economy is reported in statute miles per US gallon (neither country specifies which gallon is used). Canadian railways maintain exclusive use of imperial measurements to describe train length (feet), train height (feet), capacity (tons), speed (mph), and trackage (miles).
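The roughly 20% gap in quoted fuel economy follows directly from the two gallon sizes. The sketch below uses the standard modern definitions of the imperial and US gallons (4.54609 L and 3.785411784 L respectively), which are well-established values rather than figures stated in this article, together with an arbitrary example consumption figure.

```python
# Sketch: why the same car appears about 20% more fuel-efficient in miles per
# imperial gallon than in miles per US gallon - the imperial gallon is larger.

IMPERIAL_GALLON_L = 4.54609    # litres, standard definition
US_GALLON_L = 3.785411784      # litres, standard definition
MILE_KM = 1.609344             # kilometres per statute mile

def l_per_100km_to_mpg(l_per_100km: float, gallon_litres: float) -> float:
    km_per_litre = 100.0 / l_per_100km
    return km_per_litre / MILE_KM * gallon_litres

if __name__ == "__main__":
    consumption = 8.0  # L/100 km, assumed example figure
    imp = l_per_100km_to_mpg(consumption, IMPERIAL_GALLON_L)
    us = l_per_100km_to_mpg(consumption, US_GALLON_L)
    print(f"{imp:.1f} mpg (imperial) vs {us:.1f} mpg (US), ratio {imp / us:.3f}")
    # ratio ≈ 1.201, i.e. the roughly 20% difference described above
```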
Imperial units also retain common use in firearms and ammunition. Imperial measures are still used in the description of cartridge types, even when the cartridge is of relatively recent invention (e.g., .204 Ruger, .17 HMR, where the calibre is expressed in decimal fractions of an inch). Ammunition that is already classified in metric is still kept metric (e.g., 9×19mm). In the manufacture of ammunition, bullet and powder weights are expressed in terms of grains for both metric and imperial cartridges.
In keeping with the international standard, air navigation is based on nautical units, e.g., the nautical mile, which is neither imperial nor metric, and altitude is measured in imperial feet.
Australia
While metrication in Australia has largely ended the official use of imperial units, international conventions that use imperial units are still followed for particular measurements.
In licensed venues, draught beer and cider are sold in glasses and jugs with sizes based on the imperial fluid ounce, though rounded to the nearest 5 mL.
Newborns are measured in metric at hospitals, but the birth weight and length are sometimes also announced to family and friends in imperial units.
Screen sizes are frequently described in inches instead of, or as well as, centimetres.
Property size is occasionally described in acres, but mostly in square metres or hectares.
Marine navigation is done in nautical miles, and water-based speed limits are in nautical miles per hour.
Historical writing and presentations may include pre-metric units to reflect the context of the era represented.
The illicit drug trade in Australia still often uses imperial measurements, particularly when dealing with smaller amounts closer to end-user levels, e.g. an "8-ball", an eighth of an ounce (about 3.5 grams); cannabis is often traded in ounces ("oz") and pounds ("p").
Firearm barrel lengths are almost always referred to in inches; ammunition is also still measured in grains and ounces as well as grams.
A person's height is frequently and informally described in feet and inches, but on official records it is described in metres.
The influence of British and American culture in Australia has been noted to be a cause for residual use of imperial units of measure.
New Zealand
New Zealand introduced the metric system on 15 December 1976. Aviation was exempt, with altitude and airport elevation continuing to be measured in feet whilst navigation is done in nautical miles; all other aspects (fuel quantity, aircraft weight, runway length, etc.) use metric units.
Screen sizes for devices such as televisions, monitors and phones, and wheel rim sizes for vehicles, are stated in inches, as is the convention in the rest of the world. A 1992 study found continued use of imperial units for birth weight and human height alongside metric units.
Ireland
Ireland has officially changed over to the metric system since entering the European Union, with distances on new road signs being metric since 1997 and speed limits being metric since 2005. The imperial system remains in limited use – for sales of beer in pubs (traditionally sold by the pint). All other goods are required by law to be sold in metric units, with traditional quantities being retained in the packaging of goods like butter and sausages. The majority of cars sold pre-2005 feature speedometers with miles per hour as the primary unit, but with a kilometres per hour display. Often signs such as those for bridge height can display both metric and imperial units. Imperial measurements continue to be used colloquially by the general population, especially for height and distance measurements such as feet, inches, and acres, as well as for weight, with pounds and stones still in common use among people of all ages. Measurements such as yards have fallen out of favour with younger generations. Ireland's railways still use imperial measurements for distances and speed signage. Property is usually listed in square feet as well as square metres.
Horse racing in Ireland continues to use stones, pounds, miles and furlongs as measurements.
Bahamas
Imperial measurements remain in general use in the Bahamas.
Legally, both the imperial and metric systems are recognised by the Weights and Measures Act 2006.
Belize
Both imperial units and metric units are used in Belize. Both systems are legally recognized by the National Metrology Act.
Myanmar
According to the CIA, in June 2009, Myanmar was one of three countries that had not adopted the SI metric system as their official system of weights and measures. Metrication efforts began in 2011 with the help of the German National Metrology Institute. The Burmese government set a goal to metricate by 2019, which was not met.
Other countries
Some imperial measurements remain in limited use in Malaysia, the Philippines, Sri Lanka and South Africa. Measurements in feet and inches, especially for a person's height, are frequently encountered in conversation and non-governmental publications.
Prior to metrication, it was a common practice in Malaysia for people to refer to unnamed locations and small settlements along major roads by referring to how many miles the said locations were from the nearest major town. In some cases, these eventually became the official names of the locations; in other cases, such names have been largely or completely superseded by new names. An example of the former is Batu 32 (literally "Mile 32" in Malay), which refers to the area surrounding the intersection between Federal Route 22 (the Tamparuli-Sandakan highway) and Federal Route 13 (the Sandakan-Tawau highway). The area is so named because it is 32 miles west of Sandakan, the nearest major town.
Petrol is still sold by the imperial gallon in Anguilla, Antigua and Barbuda, Belize, Myanmar, the Cayman Islands, Dominica, Grenada, Montserrat, St Kitts and Nevis and St. Vincent and the Grenadines. The United Arab Emirates Cabinet in 2009 issued the Decree No. (270 / 3) specifying that, from 1 January 2010, the new unit sale price for petrol will be the litre and not the gallon, which was in line with the UAE Cabinet Decision No. 31 of 2006 on the national system of measurement, which mandates the use of International System of units as a basis for the legal units of measurement in the country. Sierra Leone switched to selling fuel by the litre in May 2011.
In October 2011, the Antigua and Barbuda government announced the re-launch of the Metrication Programme in accordance with the Metrology Act 2007, which established the International System of Units as the legal system of units. The Antigua and Barbuda government has committed to a full conversion from the imperial system by the first quarter of 2015.
See also
Explanatory notes
Citations
General sources
Appendices B and C of NIST Handbook 44
Also available as a PDF file.
6 George IV chapter 12, 1825 (statute)
External links
British Weights And Measures Association
Canada Weights and Measures Act 1970-71-72
General table of units of measure – NIST – pdf
How Many? A Dictionary of Units of Measurement
Customary units of measurement
Systems of units
1824 introductions | Imperial units | Mathematics | 4,575 |
46,492,736 | https://en.wikipedia.org/wiki/Yuma%20creation%20myth | The Yuma creation myth comes from the Yuma people, or Quechan, living in southwestern Arizona. The Yuma developed a pictographic system that was probably older than Egyptian hieroglyphics. The early Yuma people probably worshiped in caves, and many pictographs show scenes from nature, trading and mythology.
In the beginning of the creation myth of the Yuma, there was nothing but water. Then Kokomaht came up out of the water. Bakohtal was born out of the water too, but he was forever blind because of a lie Kokomaht told him. Kokomaht said, "I opened my eyes when I was underwater," but this was a lie. Once the twins were born, Bakohtal tried to create humans, but they were not right. Instead of feet and hands they had lumps, but Bakohtal, being blind, thought they were perfect. However, Kokomaht made a truly perfect being out of mud. He waved it four times toward the north and it stood on its feet. Then Kokomaht created the earth on top of the frog, Hanyi. Once he was finished and had made Komashtam'hó, his son, he knew that his work was done, so he lay down on the earth and Hanyi sucked the breath out of him. Komashtam'hó made a sun and moon and smothered it with his spit, and told the people, "Look, this is the sun; it will give you warmth and peace." Before Komashtam'hó made a great flood, animals were people. Then Komashtam'hó ordered everybody to shave their hair.
The Great Flood
Komashtam'hó was displeased with the fact that the animal people did not look good with their hair cut, so he changed them into animals. However, the newly formed animals were violent and dangerous, so he sent a great flood. After a long time, Markohuvek, a person, asked Komashtam'hó why he was making a great flood. Komashtam'hó replied to Markohuvek that he was trying to protect the people from the newly formed animals. Markohuvek replied that gradually people would freeze to death since there wasn't any fire. So Komashtam'hó made a great fire to evaporate the water. However, he accidentally made it too hot and was slightly burned. Komashtam'hó then took a giant pole and smashed the house of his father. Water welling from the house became the Colorado River, and in it swam fish, eels, and other water animals. These were the beings created by Bakotahl. As for Bakotahl, he lies under the earth, and when there is an earthquake, the Yuma say, "It is the Blind Evil One stirring down below."
References
Creation myths
Quechan | Yuma creation myth | Astronomy | 603 |
65,452,829 | https://en.wikipedia.org/wiki/Transparent%20exopolymer%20particles | Transparent exopolymer particles (TEPs) are extracellular acidic polysaccharides produced by phytoplankton and bacteria in saltwater, freshwater, and wastewater. They are extremely abundant and play a significant role in biogeochemical cycling of carbon and other elements in water. Through this, they also play a role in the structure of food webs and trophic levels. TEP production and overall concentration have been observed to be higher in the Pacific Ocean compared to the Atlantic, and are more related to solar radiation in the Pacific. TEP concentration has been found to decrease with depth, having the highest concentration at the surface, especially associated with the sea surface microlayer (SML), either by upward flux or sea surface production. Chlorophyll a has been found to be the best indicator of TEP concentration, rather than heterotrophic grazing abundance, further emphasizing the role of phytoplankton in TEP production. TEP concentration is especially enhanced by haptophyte phytoplanktonic dominance, solar radiation exposure, and close proximity to sea ice. TEPs also do not seem to show any diel cycles. High concentrations of TEPs in the surface ocean slow the sinking of solid particle aggregations, prolonging pelagic residence time. TEPs may provide an upward flux of materials such as bacteria, phytoplankton, carbon, and trace nutrients. High TEP concentrations were found under Arctic sea ice, probably released by sympagic algae. TEP is efficiently recycled in the ocean, as heterotrophic grazers such as zooplankton and protists consume TEP and produce new TEP precursors to be reused, further emphasizing the importance of TEPs in marine carbon cycling. TEP abundance tends to be higher in coastal, shallow waters compared to deeper, oceanic waters. Diatom-dominated phytoplankton colonies produce larger, and stickier, TEPs, which may indicate that TEP size distribution and composition may be a useful tool in determining aggregate planktonic community structure.
TEPs are formed from cell surface mucus sloughing, the disintegration of bacterial colonies, and precursors released by growing or senescent phytoplankton. TEP precursors can be fibrillar, forming larger colloids, or aggregations, and within hours to days after release from the cell are fully formed transparent exopolymer particles. While most exopolymeric substances range from loose slimes to tight shells surrounding cells, TEPs exist as individual particles, allowing them to aggregate and be collected by filtration. They are highly sticky, forming aggregations of solid particles known as marine snow, and are actually associated with all marine aggregations investigated thus far. TEPs have a high C:N ratio compared to the Redfield Ratio, suggesting the significance of TEPs in the promotion of carbon sequestration and particle sedimentation to the benthos, but this is complicated due to bacterial decomposition, as well as heterotrophic grazing by zooplankton such as euphausiids and protists. This also suggests that TEPs may represent a link between the oceanic microbial loop and other food webs, as well as creating short circuit food webs within the pelagic.
TEPs provide a surface within the pelagic ocean for bacterial colonies to form. The bacterial colonies associated with TEPs tend to be dominated by Alteromonadaceae, specifically taxonomic units previously associated with microgel habitats, Marinobacter and Glaciecola. A novel species of bacteria, Lentisphaera araneosa, was discovered colonizing TEPs off the coast of Oregon. Phytoplankton have been found to be the most significant source of TEP, but TEP abundance is also positively correlated with bacterial abundance. Bacteria either enhance the production of TEP by phytoplankton or contribute to the production of it. TEP presence is necessary for the sedimentation of diatoms, but are not involved in the sedimentation of foraminifera. Prochlorococcus sp. decay from increased solar radiation was found to promote TEP production, suggesting that picocyanobacteria are a source material for TEP. During a controlled diatom bloom, TEP concentrations saw exponential growth during bloom growth, flocculation, and senescence, but the production of TEP did not increase after nutrient depletion. In fact, TEP concentration was found to be a linear function of chlorophyll a and POC, suggesting that TEP production is linked to phytoplankton growth. The ratio of TEP to phytoplankton was a determining factor in bloom flocculation. During flocculation, TEP, due to its high stickiness, aggregated with itself and phytoplankton, but phytoplankton did not independently flocculate to themselves. Bacterial degradation may have contributed to TEP concentration loss.
The significance of TEPs in biogeochemical cycling and trophic cascading has always been suspected, but were not able to be accurately quantified until recently. Using light microscopy to quantitatively analyze TEP is a slow and tedious process. The use of Alcian blue to stain these otherwise transparent molecules has been beneficial in more efficiently analyzing them using spectrophotometry. TEPs have been referred to as ‘protobiofilms’ due to their intense colonization by bacteria, displaying many characteristics of a biofilm without being attached to a surface. Planktonic microgels, another term for TEPs, and their role as protobiofilms, may be of some significance to water and water treatment industries. TEPs may be useful in the desalination and water treatment industries through its contribution to biofouling mechanisms.
References
Polysaccharides | Transparent exopolymer particles | Chemistry | 1,224 |
7,733,261 | https://en.wikipedia.org/wiki/ACF2 | ACF2 (Access Control Facility 2) is a commercial, discretionary access control software security system developed for the MVS (z/OS today), VSE (z/VSE today) and VM (z/VM today) IBM mainframe operating systems by SKK, Inc. Barry Schrager, Eberhard Klemens, and Scott Krueger combined to develop ACF2 at London Life Insurance in London, Ontario in 1978. The "2" was added to the ACF2 name by Cambridge Systems (who had the North American marketing rights for the product) to differentiate it from the prototype, which was developed by Schrager and Klemens at the University of Illinois—the prototype name was ACF. The "2" also helped to distinguish the product from IBM's ACF/VTAM.
ACF2 was developed in response to IBM's RACF product (developed in 1976), which was IBM's answer to the 1974 SHARE Security and Data Management project's requirement whitepaper. ACF2's design was guided by these requirements, taking a resource-rule oriented approach. Unique to ACF2 were the concepts of "Protection by Default" and resource pattern masking.
As a result of the competitive tension between RACF and ACF2, IBM matured the SAF (Security Access Facility) interface in MVS (now z/OS), which allowed any security product to process operating system ("OS"), third-party software and application security calls, enabling the mainframe to secure all facets of mainframe operations.
SKK and ACF2 were sold to UCCEL Corporation in 1986, which in turn was purchased by Computer Associates International, Inc. in 1987. Broadcom Inc. now (2019) markets ACF2 as CA ACF2.
References
Operating system security
Computer access control frameworks
CA Technologies
IBM mainframe software | ACF2 | Technology | 397 |
10,767,147 | https://en.wikipedia.org/wiki/Lightning%20rocket | A lightning rocket is a rocket that unravels a conductor, such as a fine copper wire, as it ascends, to conduct lightning charges to the ground. Lightning strikes derived from this process are called "triggered lightning."
Design
A grounded conducting lightning rod is positioned alongside the launch tube and connected to the conductive path, thereby controlling the time and location of a lightning strike from the thundercloud. The conductor trailed by the rocket can be either a physical wire or a column of ionized gas produced by the engine. A lightning rocket using solid propellant may have cesium salts added, which produce a conductive path when the exhaust gases are discharged from the rocket. In a liquid-propellant rocket, a solution of calcium chloride is used to form the conductive path.
The system consists of a specially designed launch pad with lightning rods and conductors attached. The launch pad is either controlled wirelessly or via pneumatic line to the control station to prevent the discharge traveling to the control equipment. The fine copper wire (more recently reinforced with kevlar) is attached to the ground and plays out from the rocket as it ascends. The initial strike follows this wire and is as a result unusually straight. As the wire is vaporized by the initial strike, subsequent strikes are more angular in nature and follow the ionization trail of the initial strike. Rockets of this type are used for both lightning research and lightning control.
Betts system
The Betts lightning rocket, patented by Robert E. Betts in 2003, consists of a rocket launcher connected to a detection device that measures electrostatic and ionic changes in close proximity to the launcher and that also fires the rocket. This system is designed to control the time and the location of a lightning strike. As the rocket flies toward the thundercloud, a liquid is expelled aft, forming a column of particles in the air that are more electrically conductive than the surrounding air. In a similar fashion to the system employing a solid propellant to produce the conductor, this conductive path conducts a lightning strike to ground, thereby controlling the time and location of a lightning strike from the thundercloud.
References
External links
July 25, 2002, triggered lightning video
Transient Response of a Tall Object to Lightning
Lightning | Lightning rocket | Physics | 460 |
78,320,288 | https://en.wikipedia.org/wiki/List%20of%20banned%20and%20restricted%20pesticides%20in%20India | This article provides a comprehensive list of pesticides currently subject to specific regulatory restrictions in India as of 31 March 2024. Restrictions are implemented to protect public health, the environment, and to ensure safer agricultural practices by limiting the use, production, and application of certain pesticides.
Pesticides banned from manufacture, import, and use
Pesticides banned for use but manufacture allowed for export
Pesticides withdrawn
Pesticides refused registration
Pesticides restricted for use in the country
Recent proposals
On 14 May 2020, the Government of India proposed further restrictions on the use of 27 pesticides that are already banned in other countries. This decision follows recommendations from an expert committee that reviewed the safety, environmental impact, and international regulatory status of these substances. The proposal seeks to ban the import, manufacture, sale, transport, and use of these pesticides in agriculture, citing risks such as carcinogenicity, endocrine disruption, and toxicity to aquatic organisms and pollinators.
Proposed pesticides for restricted use in India
References
Pesticides by country
Pesticide regulation | List of banned and restricted pesticides in India | Chemistry | 208 |
37,730,844 | https://en.wikipedia.org/wiki/Paris-Saclay | Paris-Saclay is a research-intensive cluster and business cluster currently under construction in the south of Paris, France. It encompasses research facilities, two major French universities with their higher education institutions (grandes écoles), and research centers of private companies. In 2013, the Technology Review ranked Paris-Saclay among the top eight research clusters in the world. In 2014, it comprised almost 15% of French scientific research capacity.
The earliest settlements are from the 1950s, and this area was subsequently extended several times during the 1970s and 2000s. Several projects are underway to continue the development of the campus, including the relocation of some facilities.
The area is now home to many of the Europe's largest high-tech corporations, and to the two French universities Paris-Saclay University (CentraleSupélec, ENS Paris-Saclay, Paris-Saclay Faculty of Science, etc.) and the Polytechnic Institute of Paris (École Polytechnique, Telecom Paris, etc.). The Paris-Saclay University was ranked 15th in the world in the 2023 ARWU ranking. It was also placed 1st in the world for Mathematics and 9th in the world for Physics (1st in Europe).
The goal was to strengthen the cluster to build an international scientific and technological hub that can compete with other high-technology business districts, such as Silicon Valley or Cambridge, MA. This project started in 2006 and is likely to end in 2022. The main part is the construction of the campus du plateau de Saclay.
History
First post-war settlement
Several French national institutions settled on the plateau after the end of World War II. The CNRS, headed by Frédéric Joliot-Curie, was the first to settle there, buying the Button estate at Gif-sur-Yvette in 1946. The following year, the newly created CEA (whose High Commissioner was also Joliot-Curie) began to purchase land, and that same year ONERA settled on the plateau in Palaiseau. The Saclay center was inaugurated in 1952.
At the same time, higher education institutions settled nearby. The University of Paris also set up in the region in 1955 with the purchase of 50 hectares in the communes of Orsay and Bures. This Orsay campus brought together laboratories of the Paris Faculty of Sciences (later the University of Paris-Sud), which moved there in 1956. Other institutions followed, with HEC moving to the town of Jouy-en-Josas in 1964 and the École supérieure d'optique arriving on the Orsay campus in 1965.
Research centers related to private companies also settled at that time in 1968 with the arrival of the Central Research Laboratory of Thomson-CSF.
Second wave of settlement in the 1970s
In the 1970s, the École polytechnique and Supélec settled on the plateau, the first in 1976 in the Palaiseau area, the other in 1975 in the Moulon area. The plan at the time was for other schools to be installed soon after. The Moulon farm, which currently houses the genetics and plant breeding laboratories, was restored in 1978.
Institutions on the plateau at this time began to join together in an association d'établissements scientifiques (association of scientific institutions, AES) to reflect on future developments of the area.
Third wave of the 2000s
At the beginning of the twenty-first century, research centers of private companies settled on the campus. In 2000, Danone chose to establish a research and development center in the area of Palaiseau, joined in 2006 by Thales laboratories, and in 2009 by Kraft Foods, which invested €15 million to install one of its global expertise centers. Other relocation projects were also studied, including an EDF research center in 2010.
Two thematic advanced research networks, Digiteo and the Triangle de la physique, were also created on the campus in 2006. SOLEIL, whose creation was decided in 2000 after three years of opposition from Claude Allègre, was inaugurated the same year, built with a budget of 313 million euros. The neuroimaging center NeuroSpin was also launched on the plateau in 2006.
The first building constructed specifically for the campus was the Pôle commun de recherche en informatique (joint computer science research cluster), which was inaugurated in November 2011.
Development projects
The proposed new construction and renovation of the campus was launched by President Nicolas Sarkozy, who wanted to create a "French Silicon Valley". The entire project is estimated at three billion euros of funding.
The different steps to set up the campus are part of several government operations.
The opération d'intérêt national de Massy Palaiseau Saclay Versailles Saint-Quentin-en-Yvelines was established in 2006. Larger than the campus, it provides for the creation of a science and technology cluster on the Saclay plateau. It is supported by the "Grand Paris" project, which also provides that the campus will be accessible by the future line 18 of the Paris Metro.
In 2010, the "plan campus" permits an investment of 850 million euros.
Through the national loan programme, a further billion euros was invested. The Saclay campus is one of the winners of the « initiatives d'excellence » programme and so was awarded another grant of 950 million euros. On 30 October 2012, Jean-Marc Ayrault confirmed the funding for the future operation of the Campus Paris-Saclay project: a billion euros for real estate operations designed to bring institutions together, 850 million from the plan campus, and an additional billion from the investments-for-the-future programme.
In February 2001, the Versailles Saint-Quentin-en-Yvelines University became a founding member of the scientific cooperation foundation foreshadowing the future campus on the Saclay plateau.
In November 2011, Mines ParisTech finally withdrew from the project.
Three administrative structures have been created for this project:
The Établissement Public Paris-Saclay, which is now the EPA Paris-Saclay, chaired by Pierre Veltz.
The Fondation de coopération scientifique Plateau de Saclay is the structure that carries the project. It must unite the various institutions at the university and scientific level. It was successively chaired by Alain Bravo, Paul Vialle (from 28 April 2009 until his resignation on 30 March 2011) and Dominique Vernay.
The consortium des établissements du Plateau de Saclay, which brings together 23 institutions.
Development status
Under construction
The last institutions to move on campus are mainly schools from the Paris-Saclay University, such as:
AgroParisTech in 2021. In particular, AgroParisTech and INRAE have a project for the construction of a common building in Palaiseau, which will include all INRA Île-de-France activities not located in Jouy-en-Josas or Versailles
the Paris-Saclay Faculty of Pharmacy in 2022
and the Departments of Chemistry and Biology of the Paris-Saclay Faculty of Science in 2022.
In service on campus
Institutions that have already moved on campus, such as:
part the Paris-Saclay University
the École normale supérieure Paris-Saclay (moved in 2020)
Centre for nanosciences and nanotechnologies (moved in 2016)
Institut d'optique Graduate School (moved in 2007)
CentraleSupélec (moved in 2015)
part of the Polytechnic Institute of Paris
ENSTA Paris (moved in 2012)
ENSAE Paris (moved in 2017)
Télécom Paris (moved in 2019)
Télécom SudParis (moved in 2019)
Along with other institutions already located in the cluster, these education institutions are to be merged in Paris-Saclay University, such as:
part of Paris-Saclay University,
the Paris-Saclay Faculty of Science, which placed its university 1st in the world for Mathematics and 9th in the world for Physics (1st in Europe) in the 2020 ARWU ranking (on campus since 1956)
Polytech Paris-Saclay (on campus since 2004)
part of the Polytechnic Institute of Paris
École Polytechnique (on campus since 1976)
HEC Paris (on campus since 1964)
This Paris-Saclay University was ranked 14th in the world in the 2020 ARWU ranking. The Polytechnic Institute of Paris, formed around the École Polytechnique, was ranked 61st internationally by the QS World University Rankings 2021, 93rd by the Times Higher Education World University Rankings 2020, and 2nd by the Times Higher Education Small University Rankings.
Companies established at Paris-Saclay
Town planning
The campus has currently three main areas:
Urban campus area
Quartier de Moulon
The area, located in the cities of Orsay and Gif-sur-Yvette, includes the main campus of Paris-Saclay University, which has 15,000 students in the area, with its graduate schools CentraleSupélec and the École normale supérieure Paris-Saclay, its Faculty of Science, its Polytechnic University School and the Paris-Saclay University Institute of Technology. The area is eventually expected to host around 8,100 staff, 5,000 students in the engineering schools and 8,000 students in the university's faculty of science alone.
The French National Centre for Scientific Research has been located at Gif-sur-Yvette since 1946. The area has a dozen research and service units, as well as 1,500 people.
As part of the development in the 2010s, it should accommodate several components of Paris-Saclay University (earth sciences, economics and management, law and sport), as well as several pooled facilities planned by the campus operation (conference center, accommodation centers for students and international doctoral students, business premises, documentation, logistics).
Quartier de la Vauve
The area, located in the city of Palaiseau, includes the main campus of the Polytechnic Institute of Paris, the second research university of Paris-Saclay, with the École Polytechnique, the ENSTA Paris, the ENSAE Paris, the Telecom Paris and Telecom SudParis.
It also includes the ONERA and the Paris-Saclay University's Institut d'Optique Graduate School and AgroParisTech / INRAE in 2021. The IPSA aerospace College moved to Ivry-sur-Seine in 2009.
Other areas
"Jouy-en-Josas" area
HEC Paris, associate member of the Polytechnic Institute of Paris, has been located at Jouy-en-Josas since 1964. INRAE has 1,400 people in the area, and facilities for experimentation on livestock and microbiology. An extension of these activities provided for the arrival of more than 300 people in 2012, with the construction of Biosafety P3 facilities for virology.
"Orme/Saclay" area
It includes the CEA's Saclay Nuclear Research Centre, member of Paris-Saclay University, the Orphée reactor and SOLEIL in Saint-Aubin.
"Nozay" area
It includes Nokia in France (former Alcatel-Lucent), in Nozay.
Versailles Satory area
The Satory site is located in the immediate vicinity of the Palace of Versailles, in the historic heart of the city. At the hinge between the Bièvre valley and Saint-Quentin-en-Yvelines, it is divided into two parts. The western part includes Army establishments and companies linked to the defence sector, such as Nexter Systems and Renault Trucks Defense. It also brings together several players in the field of mobility, with the presence of IFSTTAR, a public transport research organisation, the Citroën Racing motor sports team and the Val d'Or circuit, which also includes test tracks. The eastern part is home to logistics and training units of the Gendarmerie Nationale and the French Army, as well as 5,000 housing units for staff and their families.
Saint-Quentin-en-Yvelines area
As part of the Paris-Saclay project, the EPA Paris-Saclay is being asked to support development operations undertaken by the Yvelines département and Saint-Quentin-en-Yvelines. The rail corridor, which divides the latter in two, constitutes a reserve of space available for construction.
The ESTACA Paris-Saclay institution moved to Saint-Quentin-en-Yvelines in 2015.
Projects critics
Various extensions of the campus were criticized by environmental movements in the early 1990s, who accused the campus of reducing the size of the agricultural areas. These criticisms were reformulated during the expansion projects of the 2000s.
Some also criticize the project for promoting the Grandes Ecoles too much, especially with regard to the governance of the campus. The Snesup (Syndicat national de l'enseignement supérieur) denounces "a project based on an elitist vision of higher education" and the exclusion of many institutions from the board of directors. The management project initiated by the "campus plan" has also been criticized by local politicians, who fault the state for being the sole leader of the project, and by other project stakeholders, who accuse the state of intervening too much.
The organization referred to as a business cluster is also criticized by the actors who doubt its effectiveness or fear that its development would be detrimental to other geographical areas, as in the case of the University of Paris-Sud and the École normale supérieure Paris-Saclay leaving towns in the Paris region, or in the case of grandes écoles leaving Paris.
See also
Paris-Saclay University
Polytechnic Institute of Paris
Plateau de Saclay
Business cluster
Research-intensive cluster
Research park
Science park
List of technology centers
Bibliography
Plan Campus du plateau de Saclay, Tome 1, Paris, March 2009, 65 p.
Plan Campus du plateau de Saclay, Tome 2, Paris, March 2009, 115 p.
References
External links
Official website
Education in Île-de-France
High-technology business districts in France
Information technology places
Science parks in France
Essonne
Yvelines
Planned developments
Education in France
Science and technology in France | Paris-Saclay | Technology | 2,833 |
14,055,092 | https://en.wikipedia.org/wiki/Rhizoctonia%20noxia | Rhizoctonia noxia is a species of fungus in the order Cantharellales. Basidiocarps (fruit bodies) are thin, effused, and web-like. The species is tropical to sub-tropical and is mainly known as a plant pathogen, the causative agent of "kole-roga" or black rot of coffee and various blights of citrus and other trees.
Taxonomy
The fungus responsible for kole-roga of coffee was sent from India to Mordecai Cubitt Cooke at the Royal Botanic Gardens, Kew who named it Pellicularia koleroga in 1876. Cooke, however, described only hyphae and some small warted spores, later presumed to be from a contaminating mould. As a result Donk, when reviewing Pellicularia in 1954, dismissed both the genus and P. koleroga as "nomina confusa", later (1958) substituting the new name Koleroga noxia for the species. Based on a re-examination of specimens, Roberts (1999) considered Koleroga to be a synonym of Ceratobasidium.
Molecular research, based on cladistic analysis of DNA sequences, has, however, now placed Ceratobasidium species (excepting the type) in synonymy with Rhizoctonia.
means "rot disease" in the Kannada language of Karnataka.
Description
Fruit bodies are effused, thin, and whitish. Microscopically they have colourless hyphae, 3 to 8 μm wide, without clamp connections. The basidia are ellipsoid to broadly club-shaped, 10 to 12 by 7 to 8 μm, bearing four sterigmata. The basidiospores are narrow and fusiform, 9 to 13 by 3 to 5 μm.
Habitat and distribution
Rhizoctonia noxia has only been collected as a plant pathogen on living stems and leaves of commercial crops (including coffee, citrus, and persimmon) on which it causes a web blight. It has been reported from Asia (including India and Vietnam) and from the Americas (including Colombia, Guatemala, Jamaica, Puerto Rico, Trinidad, United States, and Venezuela).
References
Fungal plant pathogens and diseases
Cantharellales
Taxa named by Marinus Anton Donk
Fungi described in 1958
Fungus species | Rhizoctonia noxia | Biology | 489 |
6,935,971 | https://en.wikipedia.org/wiki/Alpha%20cleavage | Alpha-cleavage (α-cleavage) in organic chemistry refers to the act of breaking the carbon-carbon bond adjacent to the carbon bearing a specified functional group.
Mass spectrometry
This topic is generally discussed when covering tandem mass spectrometry fragmentation, where it occurs by the same mechanisms.
As an example of an alpha-cleavage mechanism, an electron is knocked off an atom (usually by electron collision) to form a radical cation. Electron removal generally happens in the following order: 1) lone pair electrons, 2) pi bond electrons, 3) sigma bond electrons.
One of the lone pair electrons moves down to form a pi bond with an electron from an adjacent (alpha) bond. The other electron from the bond moves to an adjacent atom (not one adjacent to the lone pair atom) creating a radical. This creates a double bond adjacent to the lone pair atom (oxygen is a good example) and breaks/cleaves the bond from which the two electrons were removed.
In molecules containing carbonyl groups, alpha-cleavage often competes with McLafferty rearrangement.
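As a hypothetical illustration (the 2-butanone example and the helper below are not from the source), alpha-cleavage on either side of the carbonyl of 2-butanone (CH3COCH2CH3, nominal mass 72) predicts acylium fragments at m/z 43 and 57, which is how such cleavages appear in an electron-ionization mass spectrum. A minimal Python sketch of the fragment arithmetic:

NOMINAL_MASS = {"C": 12, "H": 1, "O": 16}  # nominal (integer) atomic masses

def nominal_mass(formula):
    # formula is a dict of element counts, e.g. {"C": 2, "H": 3, "O": 1}
    return sum(NOMINAL_MASS[el] * n for el, n in formula.items())

molecular_ion = nominal_mass({"C": 4, "H": 8, "O": 1})   # 72, the 2-butanone radical cation
loss_of_CH3   = nominal_mass({"C": 3, "H": 5, "O": 1})   # 57, CH3CH2-C≡O+ after losing •CH3
loss_of_C2H5  = nominal_mass({"C": 2, "H": 3, "O": 1})   # 43, CH3-C≡O+ after losing •C2H5
print(molecular_ion, loss_of_CH3, loss_of_C2H5)          # 72 57 43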
Photochemistry
In photochemistry, it is the homolytic cleavage of a bond adjacent to a specified group.
See also
Inductive cleavage
References
Organic reactions
Tandem mass spectrometry | Alpha cleavage | Physics,Chemistry | 262 |
23,582,421 | https://en.wikipedia.org/wiki/C14H16N4 | The molecular formula C14H16N4 (molar mass: 240.30 g/mol, exact mass: 240.1375 u) may refer to:
Imiquimod
Budralazine
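The molar and exact (monoisotopic) masses quoted above can be reproduced from standard atomic weights and lightest-isotope masses. A minimal sketch follows; the atomic-weight values are rounded reference figures, so the computed molar mass may differ from the quoted value in the last decimal place.

ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007}       # standard atomic weights
ISOTOPIC_MASS = {"C": 12.0, "H": 1.007825, "N": 14.003074}   # lightest-isotope masses
FORMULA = {"C": 14, "H": 16, "N": 4}

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in FORMULA.items())
exact_mass = sum(ISOTOPIC_MASS[el] * n for el, n in FORMULA.items())
print(f"molar mass ≈ {molar_mass:.2f} g/mol")  # ≈ 240.31
print(f"exact mass ≈ {exact_mass:.4f} u")      # ≈ 240.1375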
Molecular formulas | C14H16N4 | Physics,Chemistry | 48 |
312,943 | https://en.wikipedia.org/wiki/Blacklight | A blacklight, also called a UV-A light, Wood's lamp, or ultraviolet light, is a lamp that emits long-wave (UV-A) ultraviolet light and very little visible light. One type of lamp has a violet filter material, either on the bulb or in a separate glass filter in the lamp housing, which blocks most visible light and allows through UV, so the lamp has a dim violet glow when operating. Blacklight lamps which have this filter have a lighting industry designation that includes the letters "BLB". This stands for "blacklight blue". A second type of lamp produces ultraviolet but does not have the filter material, so it produces more visible light and has a blue color when operating. These tubes are made for use in "bug zapper" insect traps, and are identified by the industry designation "BL". This stands for "blacklight".
Blacklight sources may be specially designed fluorescent lamps, mercury-vapor lamps, light-emitting diodes (LEDs), lasers, or incandescent lamps. In medicine, forensics, and some other scientific fields, such a light source is referred to as a Wood's lamp, named after Robert Williams Wood, who invented the original Wood's glass UV filters.
Although many other types of lamp emit ultraviolet light with visible light, blacklights are essential when UV-A light without visible light is needed, particularly in observing fluorescence, the colored glow that many substances emit when exposed to UV. They are employed for decorative and artistic lighting effects, diagnostic and therapeutic uses in medicine, the detection of substances tagged with fluorescent dyes, rock-hunting, scorpion-hunting, the detection of counterfeit money, the curing of plastic resins, attracting insects and the detection of refrigerant leaks affecting refrigerators and air conditioning systems. Strong sources of long-wave ultraviolet light are used in tanning beds.
Medical hazard
UV-A presents a potential hazard when eyes and skin are exposed, especially to high power sources. According to the World Health Organization, UV-A is responsible for the initial tanning of skin and it contributes to skin ageing and wrinkling. UV-A may also contribute to the progression of skin cancers. Additionally, UV-A can have negative effects on eyes in both the short-term and long-term.
Types
Fluorescent
Fluorescent blacklight tubes are typically made in the same fashion as normal fluorescent tubes except that a phosphor that emits UVA light instead of visible white light is used on the inside of the tube. The type most commonly used for blacklights, designated blacklight blue or "BLB" by the industry, has a dark blue filter coating on the tube, which filters out most visible light, so that fluorescence effects can be observed. These tubes have a dim violet glow when operating. They should not be confused with "blacklight" or "BL" tubes, which have no filter coating, and have a brighter blue color. These are made for use in "bug zapper" insect traps where the emission of visible light does not interfere with the performance of the product. The phosphor typically used for a near 368 to 371 nanometre emission peak is either europium-doped strontium fluoride or europium-doped strontium borate, while the phosphor used to produce a peak around 350 to 353 nanometres is lead-doped barium silicate. "Blacklight blue" lamps peak at 365 nm.
Manufacturers use different numbering systems for blacklight tubes. Philips' is becoming outdated (as of 2010), while the (German) Osram system is becoming dominant outside North America. The following table lists the tubes generating blue, UVA and UVB, in order of decreasing wavelength of the most intense peak. Approximate phosphor compositions, major manufacturer's type numbers and some uses are given as an overview of the types available. "Peak" position is approximated to the nearest 10 nm. "Width" is the measure between points on the shoulders of the peak that represent 50% intensity.
Bug zappers
Another class of UV fluorescent bulb is designed for use in bug zappers. Insects are attracted to the UV light, which they are able to see, and are then electrocuted by the device. These bulbs use the same UV-A emitting phosphor blend as the filtered blacklight, but since they do not need to suppress visible light output, they do not use a purple filter material in the bulb. Plain glass blocks out less of the visible mercury emission spectrum, making them appear light blue-violet to the naked eye. These lamps are referred to by the designation "blacklight" or "BL" in some North American lighting catalogs. These types are not suitable for applications which require the low visible light output of "BLB" tube lamps.
Incandescent
A blacklight may also be formed by simply using a UV filter coating such as Wood's glass on the envelope of a common incandescent bulb. This was the method that was used to create the very first blacklight sources. Although incandescent bulbs are a cheaper alternative to fluorescent tubes, they are exceptionally inefficient at producing UV light since most of the light emitted by the filament is visible light which must be blocked. Due to its black body spectrum, an incandescent light radiates less than 0.1% of its energy as UV light. Incandescent UV bulbs, due to the necessary absorption of the visible light, become very hot during use. This heat is, in fact, encouraged in such bulbs, since a hotter filament increases the proportion of UVA in the black-body radiation emitted. This high running-temperature reduces the life of the lamp from a typical 1,000 hours to around 100 hours.
Mercury vapor
High-power mercury vapor blacklight lamps are made in power ratings of 100 to 1,000 watts. These do not use phosphors, but rely on the intensified and slightly broadened 350–375 nm spectral line of mercury from a high-pressure discharge whose pressure depends upon the specific type. These lamps use envelopes of Wood's glass or similar optical filter coatings to block out all the visible light and also the short wavelength (UVC) lines of mercury at 184.4 and 253.7 nm, which are harmful to the eyes and skin. A few other spectral lines, falling within the pass band of the Wood's glass between 300 and 400 nm, contribute to the output.
These lamps are used mainly for theatrical purposes and concert displays. They are more efficient UVA producers per unit of power consumption than fluorescent tubes.
LED
Ultraviolet light can be generated by some light-emitting diodes, but wavelengths shorter than 380 nm are uncommon and the emission peaks are broad, so only the very lowest-energy UV photons are emitted, alongside a predominantly visible output.
Safety
Although blacklights produce light in the UV range, their spectrum is mostly confined to the longwave UVA region, that is, UV radiation nearest in wavelength to visible light, with low frequency and therefore relatively low energy. Although small, some of a conventional blacklight's output still falls in the UVB range. UVA is the safest of the three spectra of UV light, although high exposure to UVA has been linked to the development of skin cancer in humans. The relatively low energy of UVA light does not cause sunburn. It can damage collagen fibers, so may accelerate skin aging and cause wrinkles. It can also degrade vitamin A in the skin.
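The wavelength-energy relationship behind this can be made concrete with the photon-energy formula E = hc/λ. The sketch below uses representative wavelengths chosen purely for illustration (365 nm as a typical blacklight peak, 253.7 nm as a mercury UVC line), not figures asserted by the source:

h = 6.62607015e-34    # Planck constant, J·s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

for label, wavelength_nm in [("UVA (blacklight peak)", 365.0),
                             ("UVB", 310.0),
                             ("UVC (mercury line)", 253.7)]:
    energy_eV = h * c / (wavelength_nm * 1e-9) / eV
    print(f"{label:22s} {wavelength_nm:6.1f} nm -> {energy_eV:.2f} eV per photon")
# Longer-wavelength UVA photons (~3.4 eV) carry noticeably less energy than UVB (~4.0 eV)
# or UVC (~4.9 eV) photons, which is why UVA is the least energetic of the three bands.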
UVA light has been shown to cause DNA damage, but not directly, like UVB and UVC. Due to its longer wavelength, it is absorbed less and reaches deeper into skin layers, where it produces reactive chemical intermediates such as hydroxyl and oxygen radicals, which in turn can damage DNA and result in a risk of melanoma. The weak output of blacklights is not sufficient to cause DNA damage or cellular mutations in the way that direct summer sunlight can, although there are reports that overexposure to the type of UV radiation used for creating artificial suntans on sunbeds can cause DNA damage, photo-aging (damage to the skin from prolonged exposure to sunlight), toughening of the skin, suppression of the immune system, cataract formation and skin cancer.
UV-A can have negative effects on eyes in both the short-term and long-term.
Uses
Ultraviolet radiation is invisible to the human eye, but illuminating certain materials with UV radiation causes the emission of visible light, causing these substances to glow with various colors. This is called fluorescence, and has many practical uses. Blacklights are required to observe fluorescence, since other types of ultraviolet lamps emit visible light which drowns out the dim fluorescent glow.
Medical applications
A Wood's lamp is a diagnostic tool used in dermatology by which ultraviolet light is shone (at a wavelength of approximately 365 nanometers) onto the skin of the patient; a technician then observes any subsequent fluorescence. For example, porphyrins—associated with some skin diseases—will fluoresce pink. Though the technique for producing a source of ultraviolet light was devised by Robert Williams Wood in 1903 using "Wood's glass", it was in 1925 that the technique was used in dermatology by Margarot and Deveze for the detection of fungal infection of hair. It has many uses, both in distinguishing fluorescent conditions from other conditions and in locating the precise boundaries of the condition.
Fungal and bacterial infections
It is also helpful in diagnosing:
Fungal infections. Some forms of tinea, such as Trichophyton tonsurans, do not fluoresce.
Bacterial infections
Corynebacterium minutissimum is coral red
Pseudomonas is yellow-green
Cutibacterium acnes, a bacterium involved in acne causation, exhibits an orange glow under a Wood's lamp.
Ethylene glycol poisoning
A Wood's lamp may be used to rapidly assess whether an individual is suffering from ethylene glycol poisoning as a consequence of antifreeze ingestion. Manufacturers of ethylene glycol-containing antifreezes commonly add fluorescein, which causes the patient's urine to fluoresce under Wood's lamp.
Diagnosis
Wood's lamp is useful in diagnosing conditions such as tuberous sclerosis and erythrasma (caused by Corynebacterium minutissimum, see above). Additionally, detection of porphyria cutanea tarda can sometimes be made when urine turns pink upon illumination with Wood's lamp. Wood's lamps have also been used to differentiate hypopigmentation from depigmentation such as with vitiligo. A vitiligo patient's skin will appear yellow-green or blue under the Wood's lamp. Its use in detecting melanoma has been reported.
Security and authentication
Blacklight is commonly used to authenticate oil paintings, antiques and banknotes. It can also differentiate real currency from counterfeit notes because, in many countries, legal banknotes have fluorescent symbols on them that only show under a blacklight. In addition, the paper used for printing money does not contain any of the brightening agents which cause commercially available papers to fluoresce under blacklight. Both of these features make illegal notes easier to detect and more difficult to successfully counterfeit. The same security features can be applied to identification cards such as passports or driver's licenses.
Other security applications include the use of pens containing a fluorescent ink, generally with a soft tip, that can be used to "invisibly" mark items. If the objects that are so marked are subsequently stolen, a blacklight can be used to search for these security markings. At some amusement parks, nightclubs and at other, day-long (or night-long) events, a fluorescent mark is rubber stamped onto the wrist of a guest who can then exercise the option of leaving and being able to return again without paying another admission fee.
Biology
Fluorescent materials are also very widely used in numerous applications in molecular biology, often as "tags" which bind themselves to a substance of interest (for example, DNA), so allowing their visualization.
Thousands of moth and insect collectors all over the world use various types of blacklights to attract moth and insect specimens for photography and collecting. It is one of the preferred light sources for attracting insects and moths at night. They can illuminate animal excreta, such as urine and vomit, that are not always visible to the naked eye.
Fault detection
Blacklight is used extensively in non-destructive testing. Fluorescing fluids are applied to metal structures and illuminated, allowing easy detection of cracks and other weaknesses.
If a leak is suspected in a refrigerator or an air conditioning system, a UV tracer dye can be injected into the system along with the compressor lubricant oil and refrigerant mixture. The system is then run in order to circulate the dye across the piping and components and then the system is examined with a blacklight lamp. Any evidence of fluorescent dye then pinpoints the leaking part which needs replacement.
Art and decor
Blacklight is used to illuminate pictures painted with fluorescent colors, particularly on black velvet, which intensifies the illusion of self-illumination. The use of such materials, often in the form of tiles viewed in a sensory room under UV light, is common in the United Kingdom for the education of students with profound and multiple learning difficulties. Such fluorescence from certain textile fibers, especially those bearing optical brightener residues, can also be used for recreational effect, as seen, for example, in the opening credits of the James Bond film A View to a Kill. Blacklight puppetry is performed in a blacklight theater.
Mineral identification
Blacklights are a common tool for rock-hunting and identification of minerals by their fluorescence. The most common minerals and rocks that glow under UV light are fluorite, calcite, aragonite, opal, apatite, chalcedony, corundum (ruby and sapphire), scheelite, selenite, smithsonite, sphalerite, and sodalite. The first person to observe fluorescence in minerals was George Stokes in 1852. He noted the ability of fluorite to produce a blue glow when illuminated with ultraviolet light and called this phenomenon “fluorescence” after the mineral fluorite. Lamps used to visualise seams of fluorite and other fluorescent minerals are commonly used in mines, but they tend to be on an industrial scale. The lamps need to be short-wavelength and of scientific grade to be useful for this purpose. The UVP range of handheld UV lamps is well suited to this purpose and is used by geologists to identify the best sources of fluorite in existing mines or potential new mines. Some transparent selenite crystals exhibit an “hourglass” pattern under UV light that is not visible in natural light. These crystals are also phosphorescent. Limestone, marble, and travertine can glow because of the presence of calcite. Granite, syenite, and granitic pegmatite rocks can also glow.
Curing resins
UV light can be used to harden particular glues, resins and inks by causing a photochemical reaction inside those substances. This process of hardening is called ‘curing’. UV curing is adaptable to printing, coating, decorating, stereolithography, and in the assembly of a variety of products and materials. In comparison to other technologies, curing with UV energy may be considered a low-temperature process, a high-speed process, and is a solventless process, as cure occurs via direct polymerization rather than by evaporation. Originally introduced in the 1960s, this technology has streamlined and increased automation in many industries in the manufacturing sector. A primary advantage of curing with ultraviolet light is the speed at which a material can be processed. Speeding up the curing or drying step in a process can reduce flaws and errors by decreasing time that an ink or coating spends wet. This can increase the quality of a finished item, and potentially allow for greater consistency. Another benefit to decreasing manufacturing time is that less space needs to be devoted to storing items which can not be used until the drying step is finished.
Because UV energy has unique interactions with many different materials, UV curing allows for the creation of products with characteristics not achievable via other means. This has led to UV curing becoming fundamental in many fields of manufacturing and technology, where changes in strength, hardness, durability, chemical resistance, and many other properties are required.
Cockpit lighting, LSD testing and tanning
One of the innovations for night and all-weather flying used by the US, UK, Japan and Germany during World War II was the use of UV interior lighting to illuminate the instrument panel, giving a safer alternative to the radium-painted instrument faces and pointers, and an intensity that could be varied easily and without visible illumination that would give away an aircraft's position. This went so far as to include the printing of charts that were marked in UV-fluorescent inks, and the provision of UV-visible pencils and slide rules such as the E6B.
They may also be used to test for LSD, which fluoresces under blacklight while common substitutes such as 25I-NBOMe do not.
Strong sources of long-wave ultraviolet light are used in tanning beds.
See also
Blacklight poster
List of light sources
Footnotes
References
External links
http://mississippientomologicalmuseum.org.msstate.edu/collecting.preparation.methods/Blacklight.traps.htm
American inventions
Articles containing video clips
Luminescence
Types of lamp
Ultraviolet radiation | Blacklight | Physics,Chemistry | 3,690 |
14,745,018 | https://en.wikipedia.org/wiki/Indian%20Institute%20of%20Science%20Education%20and%20Research%2C%20Mohali | Indian Institute of Science Education and Research, Mohali (IISER Mohali) is an autonomous public research institute established in 2007 at Mohali, Punjab, India. It is one of the seven Indian Institutes of Science Education and Research (IISERs), established by the Ministry of Human Resource Development, Government of India, to carry out research in frontier areas of science and to provide science education at the undergraduate and postgraduate levels. It was established after IISER Pune and IISER Kolkata and is recognized as an Institute of National Importance by the Government of India. The institute focuses on pure research as well as interdisciplinary research in various fields of science.
History
The institute was approved by the Planning Commission in New Delhi in July 2006, and land was provided by the Punjab state government. The foundation stone of IISER Mohali was laid on 27 September 2006 by the then Prime Minister of India, Manmohan Singh. The Computing Facility of IISER Mohali was inaugurated on 3 September 2007 by T. Ramasami (Secretary, Department of Science and Technology). The ground-breaking ceremony for the IISER boundary wall was held on 29 December 2008 at the proposed campus site in Knowledge City, Sector 81, S.A.S. Nagar. The ceremony was performed by N. Sathyamurthy, the founding director of the institute.
C.N.R. Rao inaugurated the Chemistry Research Laboratory on 8 April 2009. The Central Analytical Facility of IISER Mohali was inaugurated in March 2010. Initially, the institute operated from a transit campus in the Mahatma Gandhi State Institute of Public Administration (MGSIPA), Chandigarh. In March 2010, the institute started shifting to its permanent campus in the Knowledge City at Sector 81 with the opening of the Central Analytical Facility (CAF), and it completed the move in May 2013 by shutting operations in the MGSIPA complex, Sector 26, Chandigarh.
Academics
Academic Programs
The institute offers the following programs:
Integrated Master's level (B.S.-M.S.): Admission to this program is after 10+2 years of school training and is done through the IISERs Joint Admissions Committee.
Integrated Doctoral Program (Int. Ph.D.): Integrated Ph.D. involves a master's degree (M.S.) followed by a doctorate (Ph.D.). Students after three years of undergraduate education can join the program.
Doctoral Program (Ph.D.): IISER Mohali has a separate doctoral program, in hard sciences or in the Humanities & Social Sciences Department, which requires a master's degree as qualification.
Admissions
Admissions to UG courses in IISERs are done exclusively through the IISER Aptitude Test (IAT).
Reputation and Rankings
The National Institutional Ranking Framework (NIRF) ranked it 49th in the research category and 64th overall in India in 2024.
Organization and administration
Departments
IISER Mohali currently has six departments:
Department of Physical Sciences
Department of Chemical Sciences
Department of Mathematical Sciences
Department of Biological Science
Department of Earth Science
Department of Humanities and Social Sciences
Facilities
NMR Research Facility (NMR)
X-ray Facility - X-ray Diffraction Crystallograph
Cell Culture Facility
Animal house
Atomic Force microscope
Laser Raman and AFM Facility-Raman Infrared spectroscope
Circular Dichroic Spectrometer
Atmospheric Chemistry Facility
Computing Facility
Scanning Electron Microscopy
DC Sputtering
PLD Machine
Cryostat
Dilution refrigerator
Liquid Helium Facility
Liquid Nitrogen Facility
FemtoLaser facility
Proton Transfer Reaction Mass Spectrometer (PTR-MS)
Laser micro-Raman spectroscope
Single Crystal X-ray Diffractometer
Crystal Growth Laboratory
PPMS
SQUID
Tetra and mono arc furnace
Tube furnace
Conferences held
7th JNOST Conference: 15–18 December 2011
History of Chemistry in India, 2013
Conference on Nonlinear Systems and Dynamics, 2013
ICTS program: Knot theory and its Applications, 10–20 December 2013.
43rd National Seminar on Crystallography: 28–30 March 2014
32nd meeting of the Astronomical Society of India (ASI): 20–22 March 2014
International Workshop "Knots, Braids and Topology", 15–17 October 2014
International Workshop "ATMW: Lattices--Geometry and Dynamics", 17–22 December 2014
National Conference on Ethology and Evolution (30 October to 1 November 2015)
International Conference on Gravitation and Cosmology (ICGC) 2015
Conference on Nonlinear Systems and Dynamics, 2015
30th Annual Conference of the Ramanujan Mathematical Society, 15–17 May 2015.
GIAN course on "Quantum Criticality in Heavy Fermions: an Experimental Perspective", 22–28 March 2018
National Conference On Quantum Condensed Matter, 25–27 July 2018
9th International Conference on Gravitation and Cosmology, 10–13 December 2019
Student life
Amenities
Health Centre
Counseling Service
Accommodation & Transport including visitors hostel
World Class Library of 8 levels
Sports Complex complete with two courts each for basketball, tennis, and volleyball
Cricket cum Football ground in the stadium which has a seating capacity of 1000
Computer Centre with High-Performance Scientific Computing cluster
Various labs
Gym
National Science Day celebrations
National Science Day celebrations on 28 February are a regular annual feature at IISER Mohali. Invitations are sent to schools in Mohali, Chandigarh, Panchkula and nearby areas.
The focus of the day is on science and mathematics demonstrations prepared by IISER Mohali students and faculty members. A large number of schools send teams for the inter-school competitions held on this day, such as science quizzes, group discussions, a treasure hunt, junkyard wars, and poster presentations. Other non-competitive events, such as documentary screenings and anti-superstition demonstrations, are also held. The day usually ends with a 'panel discussion' in which the school students ask science-related questions to a panel of faculty members of IISER Mohali.
Since 2015, the Science Day celebrations have been shifted to 27 September, IISER Mohali's Foundation Day, as this date is more convenient for school students in the region.
Opportunity Cell
The Opportunity Cell was first proposed by the Student Representative Council in October 2011 as a joint student-faculty body to provide guidance to students about research and job opportunities. In 2012–13, the Opportunity Cell established a summer research and internship programme with the National Centre for Biological Sciences (NCBS), Bangalore, Connexios Life Sciences and Lucid Software Limited (Lucid). It also organised various seminars such as "Alternative Careers in Science", "Research Opportunities at University of St Andrews", etc. Currently the cell disseminates information about summer research programmes, PhD positions and research-oriented jobs.
Magazine
Manthan, IISER Mohali's student magazine was revived in the summer of 2018 after a long gap. Six editions, along with a lockdown Life in Quarantine edition, have been published since its revival.
Clubs
1. Phi@i - Physics Club
2. Biology Discussion Forum (BDF)
3. Infinity - Math Club
4. Turing Club- Computation Club
5. Curie Club - Chemistry Club
6. Robotics Club
7. Lumiére - Photography Club
8. Itehad - Dance Club
9. Aria - Music Club
10. Ambient - Environment Club
11. Miles - Running Club
12. DarPan - Drama Club
13. Literary and Debating Society (LDS)
14. Rang - Art club
15. IISER Mohali Quiz Club (IMQC)
16. Astronomy Club
17. Movie club
18. IEC - Entrepreneurship Club
19. Adventure Sports club - Trekking and stuff
20. Gaming club
21. IMLC - IISER Mohali LGBTQ collective.
Notable people
Current faculty
Inder Bir Singh Passi, Bhatnagar Prize winning Mathematician
Anand Kumar Bachhawat, Geneticist and Biochemist
Kausik Chattopadhyay, N-Bios laureate
Kapil Hari Paranjape, Bhatnagar Prize winning Mathematician
Sudeshna Sinha, Physicist
Anu Sabhlok, architect and well-known geographer and feminist scholar
Somdatta Sinha, theoretical biologist
Debi Prasad Sarkar, Bhatnagar Prize winning biochemist
Former Faculty
Meera Nanda, Historian and Philosopher of Science
Narayanasami Sathyamurthy, Bhatnagar Prize winning Chemist and President of Chemical Research Society of India. He was the director of IISER Mohali from 2007 to 2017
References
External links
2007 establishments in Punjab, India
Mohali
Chemical research institutes
Research institutes established in 2007
Research institutes in Punjab, India
Education in Mohali | Indian Institute of Science Education and Research, Mohali | Chemistry | 1,745 |
46,795 | https://en.wikipedia.org/wiki/Mono%20Lake | Mono Lake ( ) is a saline soda lake in Mono County, California, formed at least 760,000 years ago as a terminal lake in an endorheic basin. The lack of an outlet causes high levels of salts to accumulate in the lake which make its water alkaline.
The desert lake has an unusually productive ecosystem based on brine shrimp, which thrive in its waters, and provides critical habitat for two million annual migratory birds that feed on the shrimp and alkali flies (Ephydra hians). Historically, the native Kutzadika'a people ate the alkali flies' pupae, which live in the shallow waters around the edge of the lake.
When the city of Los Angeles diverted water from the freshwater streams flowing into the lake, it lowered the lake level, which imperiled the migratory birds. The Mono Lake Committee formed in response and won a legal battle that forced Los Angeles to partially replenish the lake level.
Geology
Mono Lake occupies part of the Mono Basin, an endorheic basin that has no outlet to the ocean. Dissolved salts in the runoff thus remain in the lake and raise the water's pH levels and salt concentration. The tributaries of Mono Lake include Lee Vining Creek, Rush Creek and Mill Creek which flows through Lundy Canyon.
The basin was formed by geological forces over the last five million years: basin and range crustal stretching and associated volcanism and faulting at the base of the Sierra Nevada.
From 4.5 to 2.6 million years ago, large volumes of basalt were extruded around what is now Cowtrack Mountain (east and south of Mono Basin); eventually covering and reaching a maximum thickness of . Later volcanism in the area occurred 3.8 million to 250,000 years ago. This activity was northwest of Mono Basin and included the formation of Aurora Crater, Beauty Peak, Cedar Hill (later an island in the highest stands of Mono Lake), and Mount Hicks.
Lake Russell was the prehistoric predecessor to Mono Lake, during the Pleistocene. Its shoreline reached the modern-day elevation of , about higher than the present-day lake. As of 1.6 million years ago, Lake Russell discharged to the northeast, into the Walker River drainage. After the Long Valley Caldera eruption 760,000 years ago, Lake Russell discharged into Adobe Lake to the southeast, then into the Owens River, and eventually into Lake Manly in Death Valley. Prominent shore lines of Lake Russell, called strandlines by geologists, can be seen west of Mono Lake.
The area around Mono Lake is currently geologically active. Volcanic activity is related to the Mono–Inyo Craters: the most recent eruption occurred 350 years ago, resulting in the formation of Paoha Island. Panum Crater (on the south shore of the lake) is an example of a combined rhyolite dome and cinder cone.
Tufa towers
Many columns of limestone rise above the surface of Mono Lake. These limestone towers consist primarily of calcium carbonate minerals such as calcite (CaCO3). This type of limestone rock is referred to as tufa, which is a term used for limestone that forms in low to moderate temperatures.
Tufa tower formation
Mono Lake is a highly alkaline lake, or soda lake. Alkalinity is a measure of how much base is present in a solution, and of how well the solution can neutralize acids. Carbonate (CO₃²⁻) and bicarbonate (HCO₃⁻) are both bases. Hence, Mono Lake has a very high content of dissolved inorganic carbon. When calcium ions (Ca²⁺) are supplied, the water precipitates carbonate minerals such as calcite (CaCO₃). Subsurface waters enter the bottom of Mono Lake through small springs. High concentrations of dissolved calcium ions in these subsurface waters cause large amounts of calcite to precipitate around the spring orifices.
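As an illustrative sketch of the net chemistry described above (the real system involves additional ions and equilibria), the precipitation reactions are commonly written, in LaTeX notation, as:

\mathrm{Ca^{2+} + CO_3^{2-} \longrightarrow CaCO_3\,(s)}

\mathrm{Ca^{2+} + 2\,HCO_3^{-} \longrightarrow CaCO_3\,(s) + CO_2 + H_2O}

In both cases, calcium delivered by the springs combines with the lake's abundant dissolved inorganic carbon, and the solid calcite accumulates around the spring orifices.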
The tufa originally formed at the bottom of the lake, and it took many decades or even centuries to form the well-recognized tufa towers. When lake levels fell, the tufa towers came to rise above the water surface and stand as the pillars seen today (see the Lake-level history section below for more information).
Tufa morphology
Description of the Mono Lake tufa dates back to the 1880s, when Edward S. Dana and Israel C. Russell made the first systematic descriptions of the Mono Lake tufa. The tufa occurs as "modern" tufa towers. There are tufa sections from old shorelines, when the lake levels were higher. These pioneering works in tufa morphology are referred to by researchers and were confirmed by James R. Dunn in 1953. The tufa types can roughly be divided into three main categories based on morphology:
Lithoid tufa - massive and porous with a rock-like appearance
Dendritic tufa - branching structures that look similar to small shrubs
Thinolitic tufa - large well-formed crystals of several centimeters
Through time, many hypotheses were developed regarding the formation of the large thinolite crystals (also referred to as glendonite) in thinolitic tufa. It was relatively clear that the thinolites represented a calcite pseudomorph after some unknown original crystal. The original crystal was only determined when the mineral ikaite was discovered in 1963. Ikaite, or hexahydrated CaCO3, is metastable and only crystallizes at near-freezing temperatures. It is also believed that calcite crystallization inhibitors such as phosphate, magnesium, and organic carbon may aid in the stabilization of ikaite. When heated, ikaite breaks down and becomes replaced by smaller crystals of calcite. In the Ikka Fjord of Greenland, ikaite was also observed to grow in columns similar to the tufa towers of Mono Lake. This has led scientists to believe that thinolitic tufa is an indicator of past climates in Mono Lake because they reflect very cold temperatures.
Tufa chemistry
Russell (1883) studied the chemical composition of the different tufa types in Lake Lahontan, a large Pleistocene system of multiple lakes in California, Nevada, and Oregon. Not surprisingly, he found that the tufas consisted primarily of CaO and CO2. However, they also contain minor constituents of MgO (~2 wt%), Fe/Al oxides (0.25–1.29 wt%), and P2O5 (0.3 wt%).
Climate
Limnology
The limnology of the lake shows it contains approximately 280 million tons of dissolved salts, with the salinity varying depending upon the amount of water in the lake at any given time. Before 1941, average salinity was approximately 50 grams per liter (g/L) (compared to a value of 31.5 g/L for the world's oceans). In January 1982, when the lake reached its lowest level of , the salinity had nearly doubled to 99 g/L. In 2002, it was measured at 78 g/L and is expected to stabilize at an average 69 g/L as the lake replenishes over the next 20 years.
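A rough mass-balance sketch (an illustration only, assuming the total dissolved salt mass stays approximately constant while the water volume changes) shows why salinity tracks lake level:

S \approx \frac{m_{\text{salt}}}{V_{\text{water}}} \quad\Rightarrow\quad \frac{S_{1982}}{S_{\text{pre-1941}}} \approx \frac{V_{\text{pre-1941}}}{V_{1982}}

so a lake that loses roughly half its water volume roughly doubles in salinity, consistent with the rise from about 50 g/L before 1941 to 99 g/L in 1982 quoted above.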
An unintended consequence of ending the water diversions was the onset of a period of "meromixis" in Mono Lake. In the time prior to this, Mono Lake was typically "monomictic", which means that at least once each year the deeper waters and the shallower waters of the lake mixed thoroughly, thus bringing oxygen and other nutrients to the deep waters. In meromictic lakes, the deeper waters do not undergo this mixing; the deeper layers are more saline than the water near the surface, and are typically nearly devoid of oxygen. As a result, becoming meromictic greatly changes a lake's ecology.
Mono Lake has experienced meromictic periods in the past; this most recent episode of meromixis, brought on by the end of the water diversions, commenced in 1994 and had ended by 2004.
Lake-level history
An important characteristic of Mono Lake is that it is a closed lake, meaning it has no outflow. Water can only escape the lake if it evaporates or is lost to groundwater. This may cause closed lakes to become very saline. The reconstruction of historical Mono Lake levels through carbon and oxygen isotopes have also revealed a correlation with well-documented changes in climate.
In the recent past, Earth experienced periods of increased glaciation known as ice ages. This geological period of ice ages is known as the Pleistocene, which lasted until ~11 ka. Lake levels in Mono Lake can reveal how the climate fluctuated. For example, during the cold climate of the Pleistocene the lake level was higher because there was less evaporation and more precipitation. Following the Pleistocene, the lake level was generally lower due to increased evaporation and decreased precipitation associated with a warmer climate.
The lake level has fluctuated during the Holocene, since the end of the ice ages. The Holocene high point is at elevation , reached in approximately 1820 BCE. The low point before modern diversions is at elevation , reached in 143 CE. The lowest modern level due to diversions is at , reached in 1980.
Ecology
Aquatic life
The hypersalinity and high alkalinity of the lake (a pH of 10, equivalent to about 4 milligrams of NaOH per liter of water) mean that no fish are native to the lake. An attempt by the California Department of Fish and Game to stock the lake failed.
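The equivalence between pH 10 and about 4 milligrams of NaOH per liter can be checked with a short back-of-the-envelope calculation (a sketch assuming dilute, ideal conditions, complete dissociation of NaOH, and the standard ion product of water at 25 °C):

\mathrm{pH} = 10 \;\Rightarrow\; [\mathrm{OH^-}] = 10^{-(14-10)} = 10^{-4}\ \mathrm{mol/L}

10^{-4}\ \mathrm{mol/L} \times 40\ \mathrm{g/mol\ (NaOH)} = 4\times 10^{-3}\ \mathrm{g/L} = 4\ \mathrm{mg/L}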
The whole food chain of the lake is based on the high population of single-celled planktonic algae present in the photic zone of the lake. These algae reproduce rapidly during winter and early spring after winter runoff brings nutrients to the surface layer of water. By March the lake is "as green as pea soup" with photosynthesizing algae.
The lake is famous for the Mono Lake brine shrimp (Artemia monica), a tiny species no bigger than a thumbnail that is endemic to the lake. During the warmer summer months, an estimated 4–6 trillion brine shrimp inhabit the lake. Brine shrimp have no food value for humans, but are a staple for birds of the region. The brine shrimp feed on microscopic algae.
Alkali flies, Ephydra hians, live along the shores of the lake and walk underwater, encased in small air bubbles, for grazing and to lay eggs. These flies are an important source of food for migratory and nesting birds.
Eight nematode species were found living in the littoral sediment:
Auanema spec., which is notable for its extreme arsenic resistance (it survives concentrations 500 times higher than humans can), for having three sexes, and for being viviparous.
Pellioditis spec.
Mononchoides americanus
Diplogaster rivalis
species of the family Mermithidae
Prismatolaimus dolichurus
2 species of the order Monhysterida
Birds
Mono Lake is a vital resting and eating stop for migratory shorebirds and has been recognized as a site of international importance by the Western Hemisphere Shorebird Reserve Network.
Nearly 2,000,000 waterbirds, including 35 species of shorebirds, use Mono Lake to rest and eat for at least part of the year. Some shorebirds that depend on the resources of Mono Lake include American avocets, killdeer, and sandpipers. One to two million eared grebes and phalaropes use Mono Lake during their long migrations.
Late every summer tens of thousands of Wilson's phalaropes and red-necked phalaropes arrive from their nesting grounds, and feed until they continue their migration to South America or the tropical oceans respectively.
In addition to migratory birds, a few species spend several months to nest at Mono Lake. Mono Lake has the second largest nesting population of California gulls, Larus californicus, second only to the Great Salt Lake in Utah. Since abandoning the landbridged Negit Island in the late 1970s, California gulls have moved to some nearby islets and have established new, if less protected, nesting sites. Cornell University and Point Blue Conservation Science have continued the study of nesting populations on Mono Lake that was begun 35 years ago. Snowy plovers also arrive at Mono Lake each spring to nest along the northern and eastern shores.
History
Native Americans
The indigenous people of Mono Lake are from a band of the Northern Paiute, called the Kutzadika'a. They speak the Northern Paiute language. The Kutzadika'a traditionally forage alkali fly pupae, called kutsavi in their language.
The term "Mono" is derived from "Monachi", a Yokuts term for the tribes that live on both the east and west side of the Sierra Nevada.
During early contact, the first known Mono Lake Paiute chief was Captain John.
The Mono tribe has two bands: Eastern and Western. The Eastern Mono joined the Western Mono bands' villages annually at Hetch Hetchy Valley, Yosemite Valley, and along the Merced River to gather acorns and other plants, and to trade. The Western Mono and Eastern Mono traditionally lived in the south-central Sierra Nevada foothills, including the historical Yosemite Valley.
Present day Mono Reservations are currently located in Big Pine, Bishop, and several in Madera County and Fresno County, California.
Conservation efforts
The city of Los Angeles diverted water from the Owens River into the Los Angeles Aqueduct in 1913. In 1941, the Los Angeles Department of Water and Power extended the Los Angeles Aqueduct system farther northward into the Mono Basin with the completion of the Mono Craters Tunnel between the Grant Lake Reservoir on Rush Creek and the Upper Owens River. So much water was diverted that evaporation soon exceeded inflow and the surface level of Mono Lake fell rapidly. By 1982 the lake was reduced to , 69 percent of its 1941 surface area. By 1990, the lake had dropped 45 vertical feet and had lost half its volume relative to the 1941 pre-diversion water level. As a result, alkaline sands and formerly submerged tufa towers became exposed, the water salinity doubled, and Negit Island became a peninsula, exposing the nests of California gulls to predators (such as coyotes), and forcing the gull colony to abandon this site.
In 1974, ecologist David Gaines and his student David Winkler studied the Mono Lake ecosystem and became instrumental in alerting the public of the effects of the lower water level with Winkler's 1976 ecological inventory of the Mono Basin. The National Science Foundation funded the first comprehensive ecological study of Mono Lake, conducted by Gaines and undergraduate students. In June 1977, the Davis Institute of Ecology of the University of California published a report, "An Ecological Study of Mono Lake, California," which alerted California to the ecological dangers posed by the redirection of water away from the lake for municipal uses.
Gaines formed the Mono Lake Committee in 1978. He and Sally Judy, a UC Davis student, led the committee and pursued an informational tour of California. They joined with the Audubon Society to fight a now famous court battle, the National Audubon Society v. Superior Court, to protect Mono Lake through state public trust laws. While these efforts have resulted in positive change, the surface level is still below historical levels, and exposed shorelines are a source of significant alkaline dust during periods of high winds.
Owens Lake, the once-navigable terminus of the Owens River which had sustained a healthy ecosystem, is now a dry lake bed during dry years due to water diversion beginning in the 1920s. Mono Lake was spared this fate when the California State Water Resources Control Board (after over a decade of litigation) issued an order (SWRCB Decision 1631) to protect Mono Lake and its tributary streams on September 28, 1994. SWRCB Board Vice-chair Marc Del Piero was the sole Hearing Officer (see D-1631). In 1941 the surface level was at above sea level. As of October 2022, Mono Lake was at above sea level. The lake level of above sea level is the goal, designed to ensure that the lake would be able to reach and sustain a minimum surface level that is generally agreed to be the minimum for keeping the ecosystem healthy. It has been more difficult during years of drought in the American West.
In popular culture
Artwork
In 1968, the artist Robert Smithson made Mono Lake Non-Site (Cinders near Black Point) using pumice collected while visiting Mono on July 27, 1968, with his wife Nancy Holt and Michael Heizer (both prominent visual artists). In 2004, Nancy Holt made a short film entitled Mono Lake using Super 8 footage and photographs of this trip. An audio recording by Smithson and Heizer, two songs by Waylon Jennings, and Michel Legrand's Le Jeu, the main theme of Jacques Demy's film Bay of Angels (1963), were used for the soundtrack.
The Diver, a photo taken by Aubrey Powell of Hipgnosis for Pink Floyd's album Wish You Were Here (1975), features what appears to be a man diving into a lake, creating no ripples. The photo was taken at Mono Lake, and the tufa towers are a prominent part of the landscape. The effect was actually created when the diver performed a handstand underwater until the ripples dissipated.
In print
Mark Twain's Roughing It, published in 1872, provides an informative early description of Mono Lake in its natural condition in the 1860s. Twain found the lake to be lying "in a lifeless, treeless, hideous desert... the loneliest place on earth."
In film
A scene featuring a volcano in the film Fair Wind to Java (1953) was shot at Mono Lake.
Most of the film High Plains Drifter (1973) by Clint Eastwood was shot on the southern shores of Mono Lake in the 1970s. An entire town was built here for the film, and later removed when shooting was complete.
In music
The music video for glam metal band Cinderella's 1988 power ballad "Don't Know What You Got ('Till It's Gone)" was filmed by the lake.
See also
Bodie, a nearby ghost town
List of lakes in California
Mono Lake Tufa State Reserve
Mono Basin National Scenic Area
GFAJ-1, an organism from Mono Lake that has been at the center of a scientific controversy over hypothetical arsenic in DNA.
List of drying lakes
Whoa Nellie Deli, located in Lee Vining, California, overlooking Mono Lake
Monolake, a Berlin-based electronic music project named after the lake
References
Bibliography
Jayko, A.S., et al. (2013). Methods and Spatial Extent of Geophysical Investigations, Mono Lake, California, 2009 to 2011. Reston, Va.: U.S. Department of the Interior, U.S. Geological Survey.
External links
Mono Lake Area Visitor Information
Mono Lake Tufa State Nature Reserve
Mono Lake Committee website
Mono Lake Visitor Guide
Landsat image of Mono Lake
Roadside Geology and Mining History of the Owens Valley and Mono Basin
Saline lakes of the United States
Shrunken lakes
Lakes of Mono County, California
California placenames of Native American origin
Inyo National Forest
Mono people
Native American history of California
Lakes of the Sierra Nevada (United States)
Lakes of the Great Basin
Environment of California
Tourist attractions in Mono County, California
Endorheic lakes of California
Environmental controversies
Lakes of California
Lakes of Northern California
Geological type localities
Eutrophication | Mono Lake | Chemistry,Environmental_science | 3,989 |
74,223,180 | https://en.wikipedia.org/wiki/Medicinal%20Research%20Reviews | Medicinal Research Reviews is a bimonthly peer-reviewed scientific journal that publishes reviews on topics related to medicinal research. It is published by Wiley and was established in 1980. The editor-in-chief is Amanda E. Hargrove (Duke University).
The journal publishes critical reviews of topics including pathophysiology, genomics and proteomics, and the clinical characteristics of important drugs.
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2021 impact factor of 12.388.
References
External links
English-language journals
Wiley (publisher) academic journals
Academic journals established in 1980
Bimonthly journals
Medicinal chemistry journals | Medicinal Research Reviews | Chemistry | 141 |
2,520,554 | https://en.wikipedia.org/wiki/Plant%20milk | Plant milk is a category of non-dairy beverages made from a water-based plant extract for flavoring and aroma. Nut milk is a subcategory made from nuts, while other plant milks may be created from grains, pseudocereals, legumes, seeds or coconut. Plant-based milks are consumed as alternatives to dairy milk and provide similar qualities, such as a creamy mouthfeel, as well as a bland or palatable taste. Many are sweetened or flavored (e.g., vanilla).
As of 2021, there were about 17 different types of plant milks, of which almond, oat, soy, coconut and pea are the highest-selling worldwide. Production of plant milks—particularly soy, oat, and pea milks—can offer environmental advantages over animal milks in terms of greenhouse gas emissions and land and water use.
Plant-based beverages have been consumed for centuries, with the term "milk-like plant juices" used since the 13th century. In the 21st century, one of these drinks is commonly referred to as a plant-based milk, alternative milk, non-dairy milk or vegan milk. For commerce, plant-based beverages are typically packaged in containers similar and competitive to those used for dairy milk, but cannot be labeled as "milk" within the European Union.
Across various cultures, plant milk has been both a beverage and a flavor ingredient in sweet and savory dishes (such as the use of coconut milk in curries). These drinks are compatible with vegetarian and vegan lifestyles. Plant milks are also used to make ice cream alternatives, plant cream, vegan cheese, and yogurt-analogues (such as soy yogurt). The global plant milk market was estimated to reach US$62 billion by 2030.
History
Before commercial production of 'milks' from legumes, beans and nuts, plant-based mixtures resembling milk have existed for centuries. The Wabanaki and other Native American tribal nations in the northeastern United States made milk and infant formula from nuts.
In English, the word "milk" has been used to refer to "milk-like plant juices" since 1200 CE.
Recipes from the 13th-century Levant exist describing almond milk. Soy was a plant milk used in China during the 14th century. In medieval England, almond milk was used in dishes such as ris alkere (a type of rice pudding) and appears in the recipe collection The Forme of Cury. Coconut milk (and coconut cream) are traditional ingredients in many cuisines such as in South and Southeast Asia, and are often used in curries.
Plant milks may be regarded as milk substitutes in Western countries, but have traditionally been consumed in other parts of the world, especially ones where there are higher rates of lactose intolerance.
Types
Common plant milks are almond milk, coconut milk, rice milk, and soy milk. Other plant milks include hemp milk, oat milk, pea milk, and peanut milk.
Plant milks can be made from:
Grains: barley, fonio, maize, millet, oat, rice, rye, sorghum, teff, triticale, spelt, wheat
Pseudocereals: amaranth, buckwheat, quinoa
Legumes: lupin, pea, peanut, soy, chickpea
Nuts: almond, brazil, cashew, hazelnut, macadamia, pecan, pistachio, walnut
Seeds: chia seed, flax seed, hemp seed, pumpkin seed, sesame seed, sunflower seed
Other: coconut (fruit; drupe), banana (fruit; berry), potato (tuber), tiger nut (tuber)
A blend is a plant milk created by mixing two or more types together. Examples of blends are almond-coconut milk and almond-cashew milk.
Other traditional plant milk recipes include:
Kunu, a Nigerian beverage made from sprouted millet, sorghum, or maize
Sikhye, a traditional sweet Korean rice beverage
Amazake, a Japanese rice milk
Manufacturing
Although there are variations in the manufacturing of plant milks according to the starting plant material, as an example, the general technique for soy milk involves several steps, including:
cleaning, soaking and dehulling the beans
grinding of the starting material to produce a slurry, powder or emulsion
heating the processed plant material to denature lipoxidase enzymes to minimize their effects on flavor
removing sedimentable solids by filtration
adding water, sugar (or sugar substitutes) and other ingredients to improve flavor, aroma, and micronutrient content
pasteurizing the pre-final liquid
homogenizing the liquid to break down fat globules and particles for a smooth mouthfeel
packaging, labeling and storage at
The actual content of the highlighted plant in commercial plant milks may be only around 2%. Other ingredients commonly added to plant milks during manufacturing include guar gum, xanthan gum, or sunflower lecithin for texture and mouthfeel, select micronutrients (such as calcium, B vitamins, and vitamin D), salt, and natural or artificial ingredients—such as flavours characteristic of the featured plant—for aroma, color, and taste. Plant milks are also used to make ice cream, plant cream, vegan cheese, and yogurt-analogues, such as soy yogurt.
The production of almond-based dairy substitutes has been criticized on environmental grounds as large amounts of water and pesticides are used. The emissions, land, and water footprints of plant milks vary, due to differences in crop water needs, farming practices, region of production, production processes, and transportation. Production of plant-based milks, particularly soy and oat milks, can offer environmental advantages over animal milks in terms of greenhouse gas emissions, land and water use.
Nutritional comparison with cow's milk
Many plant milks aim to contain the same proteins, vitamins and lipids as those produced by lactating mammals. Generally, because plant milks are manufactured using processed extracts of the starting plant, plant milks are lower in nutrient density than dairy milk and are fortified during manufacturing to add precise levels of micronutrients, commonly calcium and vitamins A and D. Animal milks are also commonly fortified, and many countries have laws mandating fortification of milk products with certain nutrients, commonly vitamins A and D.
Packaging and commerce
Plant-based milks have emerged as an alternative to dairy in response to consumer dietary requests and changing attitudes about animals and the environment. Huffington Post stated that due to health and environmental reasons as well as changing consumer trends, more individuals regularly buy non-dairy alternatives to milk. Between 1974 and 2020, dairy milk consumption of people aged between 16 and 24 in the United Kingdom decreased from 94% to 73%. In Australia, there is decreased confidence within the dairy industry, with only 53% being optimistic in the future profitability and demand for dairy products per a Dairy Australia report.
To improve competition, plant milks are typically packaged in containers similar to those of dairy milks. A scientific journal article argued that plant-milk companies send the message that plant milks are 'good and wholesome' and dairy milk is 'bad for the environment', and the article also reported that an increasing number of young people associate dairy with environmental damage. There has been an increased concern that dairy production has adverse effects on biodiversity, water and land use. These negative links between dairy and the environment have also been communicated through audiovisual material against dairy production, such as 'Cowspiracy' and 'What the Health'. Animal welfare concerns have also contributed to the declining popularity of dairy milk in many Western countries. Advertising for plant milks may also contrast the intensive farming effort to produce dairy milk with the relative ease of harvesting plant sources, such as oats, rice or soybeans. In 2021, an advertisement for oat milk brand Oatly aired during the Super Bowl.
In the United States, plant milk sales grew steadily by 61% over the period 2012 to 2018. As of 2019, the plant-based milk industry in the US is worth $1.8 billion per year. In 2018, the value of 'dairy alternatives' around the world was said to be $8 billion. Among plant milks, almond (64% market share), soy (13% market share), and coconut (12% market share) were category leaders in the United States during 2018. Oat milk sales increased by 250% in Canada during 2019, and its growing consumption in the United States and United Kingdom led to production shortages from unprecedented consumer demand. In 2020, one major coffee retailer – Starbucks – added oat milk, coconut milk, and almond milk beverages to its menus in the United States and Canada. During 2020, oat milk sales in the United States increased to $213 million, becoming the second most consumed plant milk after almond milk ($1.5 billion in 2020 sales).
A key dietary reason for the increase in popularity of plant-based milks is lactose intolerance. For example, the most common food causing intolerance in Australia is lactose and affects 4.5% of the population. In the United States, around 40 million people are lactose intolerant.
Labeling and terminology
Historically, a number of plant-based beverages have been traditionally referred to as "milk". One of the first reliable modern English dictionaries, Samuel Johnson's 1755 A Dictionary of the English Language, gave two definitions of the word "milk". The first described "the liquor with which animals feed their young from the breast", and the second an "emulsion made by contusion of seeds", using almond milk as an example.
In the late 20th and early 21st centuries, the use of the term "milk" for plant-based drinks became controversial. As demand for plant-based milks increased, dairy manufacturers and distributors advocated for legally restricting the term to animal products only: arguing that consumers may confuse the two, or be misled as to the nutritional content of plant-based alternatives.
Many jurisdictions strictly regulate the use of the term "milk" on food labelling. Some countries have outright banned its use for non-dairy products, while others mandate that "milk" only be used with qualifiers (such as "oat milk") on non-dairy alternatives. Where use of the term "milk" is restricted, plant milks may be labeled with terms reflecting their composition (such as "oat drink"), or absence of ingredients (such as "dairy-free").
Australia and New Zealand
Food standards in Australia and New Zealand are developed by the same common body, called Food Standards Australia New Zealand. As of 2024, products sold as 'milk' without qualifiers in Australia or New Zealand "must be milk", which is defined as an animal product. Qualifiers such as "soy milk" are allowed, due to the use of quotation marks in the legislative instrument.
Canada
The Canadian Food Inspection Agency limits the use of the word "milk" solely to ″the normal lacteal secretion, free from colostrum, obtained from the mammary gland of an animal″.
Europe
In December 2013, European Union regulations stated that the terms "milk", "butter", "cheese", "cream" and "yoghurt" can be used to market and advertise products derived only from animal milk, with a small number of exceptions, including coconut milk, peanut butter and ice cream. In 2017, the Landgericht Trier (Trier regional court), Germany, asked the Court of Justice of the European Union to clarify European food-labeling law (Case C-422/16), with the court stating that plant-based products cannot be marketed as milk, cream, butter, cheese or yoghurt within the European Union because these terms are reserved for animal products; exceptions to this do not include tofu and soy. Although plant-based dairy alternatives are not allowed to be called "milk", "cheese" and the like, they are allowed to be described as buttery or creamy. However, there are exceptions for each of the EU languages, based on established use of livestock terms for non-livestock products. The list's extent varies widely; for example, there is only one exception in Polish, and 20 exceptions in English.
A proposal for further restrictions failed at second reading in the European Parliament, in May 2021. The proposal, called Amendment 171, would have outlawed labels including 'yogurt-style' and 'cheese alternative'.
In the United Kingdom, strict standards are applied via acts of parliament to food labeling for terms such as milk, cheese, cream, yogurt, which are protected to describe dairy products and may not be used to describe non-dairy produce. These rules date from the United Kingdom's membership of the European Union, and are still in force in Great Britain. To contrast, as of September 2023, the EU Regulation (EU) No 1169/2011 applies directly to Northern Ireland.
India
The FSSAI stipulates that products need a declaration with the phrase "non-dairy product" if the product is a 'plant-based beverage', and these must not be labelled with any dairy term. The use of the word 'milk' is limited to animal products. The regulator makes exceptions for cases where, internationally, dairy terms were already traditionally in use, as with coconut milk and peanut butter.
United States
In the United States, the dairy industry petitioned the FDA to ban the use of terms like "milk", "cheese", "cream" and "butter" on plant-based analogues (except for peanut butter). FDA commissioner Scott Gottlieb stated on July 17, 2018, that the term "milk" is used imprecisely in the labeling of non-dairy beverages, such as soy milk, oat milk and almond milk: "An almond doesn't lactate", he said. In 2019, the US National Milk Producers Federation petitioned the FDA to restrict labeling of plant-based milks, claiming they should be described as "imitation". In response, the Plant-Based Foods Association stated the word "imitation" was disparaging, and there was no evidence that consumers were misled or confused about plant-based milks. A 2018 survey by the International Food Information Council Foundation found that consumers in the United States do not typically confuse plant-based analogues with animal milk or dairy products. As of 2021, though the USDA is investigating and various state legislatures are considering regulation, various courts have determined that reasonable consumers are not confused, and the FDA has enacted no regulations against plant-based milk labels.
In 2021, the FDA issued a final rule that amends yogurt's standard of identity (which remains a product of "milk-derived ingredients"), and was expected to issue industry guidance on "Labeling of Plant-based Milk Alternatives" in 2022.
Proponents of plant-based milk assert that these labeling requirements are infantilizing to consumers and burdensome and unfair on dairy-alternatives. Critics of the FDA's labeling requirements also asserted that there is often collusion between government officials and the dairy industry in an attempt to maintain dairy dominance in the market. For example, in 2017, Sen. Tammy Baldwin of Wisconsin introduced the "Defending Against Imitations and Replacements of Yogurt, Milk, and Cheese to Promote Regular Intake of Dairy Everyday (DAIRY PRIDE) Act" which would prevent almond milk, coconut milk and cashew milk from being labeled with terms like "milk", "yogurt", and "cheese". Proponents of plant-based dairy alternatives argued that dairy sales are decreasing faster than plant sales are increasing and that therefore, attacking plant milks as being the chief reason for a decline in dairy consumption is inaccurate. A 2020 USDA study found that the "increase in sales over 2013 to 2017 of plant-based options is one-fifth the size of the decrease in Americans' purchases of cow's milk."
Health recommendations
Health authorities recommend that plant milks should not be given to infants younger than 12 months unless commercially prepared infant formula is available, such as soy infant formula. A 2020 clinical review stated that only appropriate commercial infant formulas should be used as alternatives to human milk which contains a substantial source of calcium, vitamin D and protein in the first year of life and that plant milks "do not represent an equivalent source of such nutrients".
The Healthy Drinks, Healthy Kids 2023 guidelines state that infants younger than 12 months should not drink plant milks. They suggest that children between 12 and 24 months may consume fortified soy milk, but not other non-dairy milks such as almond, oat and rice, which are deficient in key nutrients. A 2022 review suggested that the best option for toddlers (1–3 years old) who do not consume cow's milk would be to have at least 250 mL/day of fortified soy milk.
For vegan infants younger than 12 months who are not breastfed, the New Zealand Ministry of Health recommends soy infant formula and advises against the use of plant milks. A 2019 Consensus Statement from the Academy of Nutrition and Dietetics, American Academy of Pediatric Dentistry, American Academy of Pediatrics, and the American Heart Association concluded that plant milks are not recommended for infants younger than 12 months and that for children aged 1–5 years plant milks may be useful for those with allergies or intolerances to cow's milk but should only be consumed after a consultation with a professional health care provider.
See also
Lactose intolerance
List of dishes made using coconut milk
Milk substitute
Non-dairy creamer
Plant cream
Roasted grain drink
Soy milk maker
Soy yogurt
Vegan cheese
References
External links
Wikibooks Cookbook category for Nut and Grain Milk recipes''
Cold drinks
Food ingredients
Imitation foods
Milk substitutes
Non-alcoholic drinks
Vegan cuisine
Vegetarianism and drinks | Plant milk | Technology | 3,714 |
55,486,970 | https://en.wikipedia.org/wiki/NGC%20481 | NGC 481 is a lenticular galaxy in the constellation Cetus. It is located approximately 229 million light-years from Earth and was discovered on November 20, 1886, by astronomer Lewis A. Swift.
See also
List of galaxies
List of NGC objects (1–1000)
References
External links
SEDS
Lenticular galaxies
0481
4899
Astronomical objects discovered in 1886
Discoveries by Lewis Swift
Cetus | NGC 481 | Astronomy | 80 |
642,328 | https://en.wikipedia.org/wiki/McKinsey%20%26%20Company | McKinsey & Company (informally McKinsey or McK) is an American multinational strategy and management consulting firm that offers professional services to corporations, governments, and other organizations. Founded in 1926 by James O. McKinsey, McKinsey is the oldest and largest of the "Big Three" management consultancies (MBB). The firm mainly focuses on the finances and operations of its clients.
Under the direction of Marvin Bower, McKinsey expanded into Europe during the 1940s and 1950s. In the 1960s, McKinsey's Fred Gluck—along with Boston Consulting Group's Bruce Henderson, Bill Bain at Bain & Company, and Harvard Business School's Michael Porter—initiated a program designed to transform corporate culture. A 1975 publication by McKinsey's John L. Neuman introduced the business practice of "overhead value analysis" that contributed to a downsizing trend that eliminated many jobs in middle management.
McKinsey has been the subject of significant controversy and of multiple criminal investigations into its business practices. The company has been criticized for its role in promoting OxyContin use during the opioid crisis in North America, its work with Enron, and its work for authoritarian regimes like Saudi Arabia and Russia. The US Justice Department has opened a criminal investigation, with a grand jury convened to determine charges, into the firm's role in the opioid crisis and into possible obstruction of justice related to its activities in that sector.
McKinsey has a notoriously competitive hiring process and is widely seen as one of the most selective employers in the world. McKinsey recruits primarily from top business schools and was one of the first management consultancies to recruit a limited number of candidates with advanced academic degrees (e.g., PhD) and deep field expertise, and who have demonstrated business acumen and analytical skills. McKinsey publishes a business magazine, the McKinsey Quarterly.
History
Early history
McKinsey & Company was founded in Chicago under the name James O. McKinsey & Company in 1926 by James O. McKinsey, a professor of accounting at the University of Chicago. He conceived the idea after he had witnessed inefficiencies in military suppliers while he was working for the United States Army Ordnance Department. The firm called itself an "accounting and management firm" and started out giving advice on using accounting principles as a management tool. McKinsey's first partners were AT Kearney, hired in 1929, and Marvin Bower, hired in 1933.
Bower is credited with establishing McKinsey's values and principles in 1937, based on his experience as a lawyer. The firm developed an "up or out" policy, where consultants who are not promoted are asked to leave. In 1937, Bower established a set of rules: that consultants should put the interests of clients before McKinsey's revenues, not discuss client affairs, tell the truth even if it means challenging the client's opinion, and only perform work that is both necessary and that McKinsey can do well. Bower created the firm's principle of only working with CEOs, which was later expanded to CEOs of subsidiaries and divisions. He also created McKinsey's principle of only working with clients the firm felt would follow its advice. Bower also established the firm's language.
In 1932, the company opened its second office in New York City. In 1935, McKinsey left the firm temporarily to be the chairman and CEO of client Marshall Field's.
In 1935, McKinsey merged with accounting firm Scovell, Wellington & Company, creating the New York-based McKinsey, Wellington & Co. and splitting off the accounting practice into Chicago-based Wellington & Company. A Wellington project that accounted for 55 percent of McKinsey, Wellington & Company's billings was about to expire and Kearney and Bower had disagreements about how to run the firm. Bower wanted to expand nationally and hire young business school graduates, whereas Kearney wanted to stay in Chicago and hire experienced accountants.
In 1937, James O. McKinsey died after catching pneumonia. This led to the division of McKinsey, Wellington & Company in 1939. The accounting practice returned to Scovell, Wellington & Company, while the management engineering practice was split into McKinsey & Company and McKinsey, Kearney & Company. Bower had partnered with Guy Crockett from Scovell Wellington, who invested in the new McKinsey & Company and became managing partner, while Marvin Bower is credited with founding the firm's principles and strategy as his deputy. The New York office purchased exclusive rights to the McKinsey name from the former McKinsey Chicago office which was separated with AT Kearney in 1946.
Years of growth
McKinsey & Company grew quickly in the 1940s and 1950s, especially in Europe. It had 88 staff in 1951 and more than 200 by the 1960s, including 37 in London by 1966. In the same year, McKinsey had six offices in major US cities, including San Francisco, Cleveland, Los Angeles and Washington D.C., as well as six abroad. These foreign offices were primarily in Europe, such as in London, Paris, and Amsterdam, as well as in Melbourne. By this time, one third of the company's revenues originated from its European offices. Guy Crockett stepped down as managing director in 1959, and Marvin Bower was elected in his place. McKinsey's profit-sharing, executive and planning committees were formed in 1951. The organization's client base expanded especially among governments, defense contractors, blue-chip companies and military organizations in the post–World War II era. McKinsey became a private corporation with shares owned exclusively by McKinsey employees in 1956.
After Bower stepped down in 1967, the firm's revenues declined. New competitors like the Boston Consulting Group and Bain & Company created increased competition for McKinsey by marketing specific branded products, such as the Growth–Share Matrix, and by selling their industry expertise.
In 1971, McKinsey created the Commission on Firm Aims and Goals, which found that McKinsey had become too focused on geographic expansion and lacked adequate industry knowledge. The commission advised that McKinsey slow its growth and develop industry specialties.
In 1975, John L. Neuman, a McKinsey consultant at the time, published "Make Overhead Cuts That Last" in Harvard Business Review, in which he introduced new rules for scientific management such as "overhead value analysis" (OVA). OVA guided McKinsey's "path to downsizing", responding to the "mid-century corporation's excessive reliance on middle management". Neuman wrote that the "process, though swift, is not painless. Since overhead expenses are typically 70% to 85% people-related and most savings come from work-force reductions, cutting overhead does demand some wrenching decisions."
In 1976, Ron Daniel was elected managing director, serving until 1988. Daniel and Fred Gluck helped shift the firm away from its generalist approach by developing 15 specialized working groups within McKinsey called Centers of Competence and by developing practice areas called Strategy, Operations and Organization. Daniel also began McKinsey's knowledge management efforts in 1987. This led to the creation of an IT system that tracked McKinsey engagements, a process to centralize knowledge from each practice area and a resource directory of internal experts. By the end of his tenure in 1988, the firm was growing again and had opened new offices in Rome, Helsinki, São Paulo and Minneapolis.
Fred Gluck was McKinsey's managing director from 1988 to 1994. The firm's revenues doubled during his tenure. He organized McKinsey into 72 "islands of activity" that were organized under seven sectors and seven functional areas. By 1997, McKinsey had grown eightfold over its size in 1977. In 1989 the firm tried to acquire talent in IT services through a $10 million purchase of the Information Consulting Group (ICG), but a culture clash caused 151 out of the 254 ICG staff members to leave by 1993.
In 1994, Rajat Gupta became the first non-American-born partner to be elected as the firm's managing director. By the end of his tenure, McKinsey had grown from 2,900 to 7,700 staff and 58 to 84 locations. He opened new international offices in cities such as Moscow, Beijing and Bangkok. Continuing the structure developed by prior directors, Gupta also created 16 industry groups charged with understanding specific markets and instituted a three-term limit for the managing director. McKinsey created practice areas for manufacturing and business technology in the late 1990s.
McKinsey set up "accelerators" in the 1990s, where the firm accepted stock-based reimbursement to help internet startups; the company performed more than 1,000 e-commerce projects from 1998 to 2000 alone.
An October 1, 2000, article in the New York Times described the compulsory mini-courses that McKinsey—and its two largest rivals Boston Consulting and Bain—offered their "hyper-educated" young new recruits. Once completed, these newly certified management consultants would begin their work of "advising the executives of multibillion-dollar companies" on "projects" not related to their academic backgrounds—"[l]awyers would help packaged-foods companies develop new products, and physicists would tell Internet start-ups how to stand out from the crowd."
The burst of the dot-com bubble led to a reduction in utilization rates of McKinsey's consultants from 64 to 52 percent. Though McKinsey avoided dismissing any personnel following the decline, the decline in revenues and losses from equity-based payments as stock lost value, together with a recession in 2001, meant the company had to reduce its prices, cut expenses and reduce hiring.
In 2001, McKinsey launched several practices that focused on the public and social sector. It took on many public sector or non profit clients on a pro bono basis. By 2002, McKinsey had invested a $35.8 million budget on knowledge management, up from $8.3 million in 1999. Its revenues were 50, 20, and 30 percent from strategy, operations, and technology consulting, respectively.
In 2003, Ian Davis, the head of the London office, was elected to the position of managing director. Davis promised a return to the company's core values after a period in which the firm had expanded rapidly, which some McKinsey consultants felt was a departure from the company's heritage. Also in 2003, the firm established a headquarters for the Asia-Pacific region in Shanghai. By 2004, more than 60 percent of McKinsey's revenues were generated outside the U.S. The company started a Social Sector Office (SSO) in 2008, which is divided into three practices: Global Public Health, Economic Development and Opportunity Creation (EDHOC) and Philanthropy. McKinsey does much of its pro-bono work through the SSO, whereas a Business Technology Office (BTO), founded in 1997, provides consulting on technology strategy.
By 2009, the firm consisted of 400 directors (senior partners), up from 151 in 1993. Dominic Barton was elected as managing director, a role he was re-elected for in 2012 and 2015.
Recent history
Rajat Gupta along with another McKinsey executive, Anil Kumar, were among those convicted in a government investigation into insider trading for sharing inside information with Galleon Group hedge fund owner Raj Rajaratnam. Though McKinsey was not accused of any wrongdoing, the convictions were embarrassing for the firm, since it prides itself on integrity and client confidentiality. McKinsey no longer maintains a relationship with either senior partner.
Senior partner Anil Kumar, described as Gupta's protégé, left the firm after the allegations in 2009 and pleaded guilty in January 2010. While he and other partners had been pitching McKinsey's consulting services to the Galleon Group, Kumar and Rajaratnam reached a private consulting agreement, violating McKinsey's policies on confidentiality. Gupta was arrested by the FBI in October 2011 on charges of sharing insider information from confidential board meetings with Rajaratnam; in June 2012, he was convicted of four counts of conspiracy and securities fraud and acquitted on two counts. At least twice, Gupta used a McKinsey phone to call Rajaratnam and retained other perks—an office, an assistant, and a $6 million retirement salary that year—as a senior partner emeritus.
After the scandal McKinsey instituted new policies and procedures to discourage future indiscretions from consultants, including investigating other partners' ties to Gupta.
In February 2018, Kevin Sneader was elected as managing director; his three-year term began on July 1, 2018.
McKinsey has consulted for multiple cities, states and government organizations during the COVID-19 pandemic. During the first four months of the pandemic, McKinsey obtained in excess of $100 million in consulting work, including no-bid contracts with the United States Department of Veterans Affairs and the Air Force. The reopening guidelines for Florida's Miami-Dade County, produced with McKinsey's input, were criticized by local media and officials for complexity and lack of clarity.
McKinsey discontinued its investment banking advisory unit in 2021, citing "personnel matters" as the reason.
In 2021, McKinsey's Australian office made two acquisitions: Hypothesis, a digital product development company, and Venturetec, an innovation consulting firm.
On June 1, 2022, McKinsey announced that it had acquired Caserta, a data engineering firm.
In March 2023, McKinsey announced a layoff of 1,400 employees, a rare round of job cuts for the firm.
Organization and services
Structure
McKinsey & Company was originally organized as a partnership before being legally restructured as a private corporation with shares owned by its partners in 1956. It retains the structure of a partnership, and senior employees are called "partners". The company has a flat hierarchy, and each member is assigned a mentor. Since the 1960s, McKinsey's managing director has been elected by a vote of senior directors to serve up to three three-year terms or until reaching the mandatory retirement age of 60. The firm is also managed by a series of committees, each of which has its own area of responsibility.
By 2013, McKinsey was described as having a decentralized structure, whereby different offices operate similarly, but independently. The company's budgeting is centralized, but individual consultants are given a large degree of autonomy. As a global firm McKinsey does not have a traditional "headquarters"; the managing partner chooses his or her home office.
List of global managing partners
James O. McKinsey (1926–1935), Chicago office
Guy Crockett (1939–1950)
Marvin Bower (1950–1967), New York office
Gil Clee (1967–1968)
Chester Walton (1968–1973)
Alonzo L. McDonald (1973–1976)
Ron Daniel (1976–1988)
Frederick Gluck (1988–1994), New York office
Rajat Gupta (1994–2003), New York office
Ian Davis (2003–2009), London office
Dominic Barton (2009–2018), London office
Kevin Sneader (2018–2021), Hong Kong office
Bob Sternfels (2021– ), San Francisco office
Consulting services
McKinsey & Company provides strategy and management consulting services, such as providing advice on an acquisition, developing a plan to restructure a sales force, creating a new business strategy or providing advice on downsizing, according to the 2013 book The Firm. The 1999 book The McKinsey Way said that McKinsey consultants designed and implemented studies to evaluate management decisions, using data and interviews to test hypotheses, which were then presented to senior management, typically in a PowerPoint presentation and a booklet.
McKinsey & Company has traditionally charged approximately 25 percent more than competing firms. Its invoices traditionally contain only a single line.
A typical McKinsey engagement (called a "study") can last between two and twelve months and involves three to six McKinsey consultants. An engagement is usually managed by a generalist who covers the region in which the client's headquarters is located, together with specialists who have industry or functional expertise. Unlike some competing consulting firms, McKinsey does not have a policy against working for multiple competing companies (although individual consultants are barred from doing so).
Recruiting
McKinsey & Company was the first management consultancy to hire recent graduates instead of experienced business managers, when it started doing so in 1953.
According to a 1997 article in The Observer, McKinsey recruited recent graduates and "imbue[d] them with a religious conviction" in the firm, then culled through them with its "up-or-out" policy. The "up-or-out" policy, established in 1951, meant that consultants who were not promoted within the firm were asked to leave. By 1997, about one-fifth of McKinsey's consultants departed under the up-or-out policy each year. McKinsey's practice of hiring recent graduates and the "up-or-out" philosophy were originally based on Marvin Bower's experiences at the law firm Jones Day in the 1930s, as well as the "Cravath system" used at the law firm Cravath, Swaine and Moore.
In recent years, it has consistently been recognized by Vault as the most prestigious consulting firm employer in the world.
In 2018, 800,000 candidates applied for 8,000 jobs.
While many recruits have MBAs, the firm has increasingly hired candidates with other backgrounds: by 1999 it was recruiting holders of advanced degrees in science, medicine, engineering or law, and by 2009 fewer than half of the firm's recruits were business majors.
Culture
A November 1, 1993, profile story in Fortune magazine said that McKinsey & Company was "the most well-known, most secretive, most high-priced, most prestigious, most consistently successful, most envied, most trusted, most disliked management consulting firm on earth". In the article, McKinsey was cited as claiming that its consultants were not motivated by money, and that partners talked to each other with "a sense of personal affection and admiration". The article described a culture clash that occurred in the early 1990s, leading to the departure of 151 out of the 254 ICG staff members.
In their 1997 book Dangerous Company: Management Consultants and the Businesses They Save and Ruin, authors James O'Shea and Charles Madigan said that McKinsey's culture had often been compared to religion, because of the influence, loyalty and zeal of its members. The firm has a policy against discussing specific client situations. A September 1997 story in The News & Observer said that McKinsey's internal culture was "collegiate and ruthlessly competitive", and the firm has been described as arrogant. Ethan Rasiel's 1999 book The McKinsey Way described a culture at McKinsey whereby members were not supposed to "sell" their services.
The Sunday Times wrote that McKinsey was a pioneer in the industry—the "first firm to hire MBA graduates from the top business schools to staff its projects, rather than relying on older industry personnel." The firm was still trying to keep a "very low profile public image" in 2005. That year, an article in The Guardian said that at McKinsey "hours are long, expectations high and failure not acceptable". According to an October 2009 Reuters article, the firm had a "button-down culture" focused on "playing by the rules". In his 2013 book, The Firm: The Story of McKinsey and Its Secret Influence on American Business, Duff McDonald described how McKinsey's consultants were expected to become a part of the community and recruit clients from church, charitable foundations, board positions and other community involvements. McDonald wrote that McKinsey calls itself "The Firm" and its employees "members". BusinessWeek summarized The Firm's description of McKinsey as a "fading empire, where hubris and changing times have diminished the firm's stature."
In his February 2020 in-depth article in The Atlantic, Daniel Markovits argues that McKinsey promotes "intellect and elite credentials" and "Meritocrats" over "directly relevant experience".
Influence
Many of McKinsey's alumni become CEOs of major corporations or hold important government positions, carrying McKinsey's values and culture into the organizations they join.
In his 2010 publication, The Lords of Strategy: The Secret Intellectual History of the New Corporate World, business journalist Walter Kiechel traced the roots of a profound change in corporate management to "four mavericks" in the 1960s—Fred Gluck at McKinsey & Company, Boston Consulting Group's Bruce Henderson, Bill Bain at Bain & Company, and Harvard Business School professor, Michael Porter. Kiechel recounted how they "revolutionized the way we think about business, changed the very soul of the corporation, and transformed the way we work," according to the Harvard Business Press synopsis.
McKinsey has been either directly involved in, or closely associated with, a number of notable scandals, involving Enron in 2001, Galleon in 2009, Valeant in 2015, Saudi Arabia in 2018, China in 2018, ICE in 2019, an internal conflict of interest in 2019 and Purdue Pharma in 2019, among others. By 2019, major news outlets, including The New York Times and ProPublica, had raised concerns about McKinsey's business practices.
Research and publishing
McKinsey & Company consultants regularly publish books, research and articles about business and management. The firm spends $50–$100 million a year on research. McKinsey was one of the first organizations to fund management research, when it founded the Foundation for Management Research in 1955. The firm began publishing a business magazine, The McKinsey Quarterly, in 1964. It funds the McKinsey Global Institute, which studies global economic trends and was founded in 1990. It also launched the McKinsey Institute for Black Economic Mobility in 2020 to fund research focused on advancing inclusive growth & racial equity globally. Many consultants are contributors to the Harvard Business Review. McKinsey consultants published only two books from 1960 to 1980, then more than 50 from 1980 to 1996. McKinsey's publications and research give the firm a "quasi-academic" image.
A McKinsey book, In Search of Excellence, was published in 1982. It featured eight characteristics of successful businesses based on an analysis of 43 top performing companies. It marked the beginning of McKinsey's shift from accounting to "softer" aspects of management, like skills and culture. According to David Guest from King's College, In Search of Excellence became popular among business managers because it was easy to read, well-marketed and some of its core messages were valid. However, it was disliked by academics because of flaws in its methodology. Additionally, a 1984 analysis by BusinessWeek found that many of those companies identified as "excellent" in the book no longer met the criteria only two years later.
A 1997 article and a book McKinsey published in 2001 on "The War for Talent" prompted academics and the business community to start focusing more on talent management. The authors found that the best-performing companies were "obsessed" with acquiring and managing the best talent. They advocated that companies rank employees by their performance and promote "stars", while targeting under-performers for improvement or layoffs. After the book was published, Enron, a company which followed many of its principles, was involved in a scandal that led to its bankruptcy. In May 2001, a Stanford professor wrote a paper critical of the "War for Talent" approach, arguing that it prioritized individuals at the expense of the larger organization.
McKinsey consultants published Creative Destruction in 2001. The book suggested that CEOs need to be willing to change or rebuild a company, rather than protect what they have created. It found that out of the first S&P 500 list from 1957, only 74 companies were still in business by 1998. The New York Times said it "makes a cogent argument that in times of rampant, uncertain change ... established companies are handcuffed by success." McKinsey consultants had earlier published The Alchemy of Growth (1999), which established three "horizons" for growth: core enhancements, new growth platforms and options.
In February 2011, McKinsey surveyed 1,300 US private-sector employers on their expected response to the Affordable Care Act (ACA). Thirty percent of respondents said they would probably or definitely stop offering employer-sponsored health coverage after the ACA went into effect in 2014. These results, published in June 2011 in the McKinsey Quarterly, became "a useful tool for critics of the ACA and a deep annoyance for defenders of the law", according to an article in Time magazine. Supporters of healthcare reform argued that the survey's findings far exceeded estimates by the Congressional Budget Office and insisted that McKinsey disclose the survey's methodology. Two weeks after publishing the survey results, McKinsey released the contents of the survey, including the questionnaire and 206 pages of survey data. In its accompanying statement, McKinsey said the survey was intended to capture the attitude of employers at a certain point in time, not to make a prediction.
Since 1990, McKinsey has been publishing Valuation: Measuring and Managing the Value of Companies, a textbook on valuation.
In 2022, McKinsey senior partners Carolyn Dewar, Scott Keller, and Vikram Malhotra authored the book CEO Excellence, which was published by Scribner.
Environmental consulting
Marginal abatement cost curves attempt to compare the financial costs of different options for reducing pollution in a region and are used in emissions trading, policy discussions and incentive programs. McKinsey & Company released its first marginal abatement cost (MAC) curve for greenhouse gas emissions in February 2007, which was updated to version two in January 2009. McKinsey & Company's MAC curve has become the most widely used and is the basis for McKinsey's consulting on climate change and sustainability.
McKinsey's curve identifies abatement measures with negative net cost, a result that has been controversial among economists. The International Association for Energy Economics said in The Energy Journal that McKinsey's cost curve was popular among policymakers because it suggests they can take "bold action towards improving energy efficiency without imposing costs on society."
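As an illustration of the general technique (not McKinsey's data or methodology), the sketch below builds a toy MAC curve in Python: each abatement option is given an invented annual abatement potential and net annual cost, the cost per tonne of CO2 avoided is computed, and the options are sorted from cheapest, including negative-cost efficiency measures, to most expensive. All option names and figures are hypothetical.

# Minimal sketch of how a marginal abatement cost (MAC) curve is assembled.
# All figures below are invented for illustration only.
options = [
    # (name, abatement potential in MtCO2 per year, net annual cost in millions of dollars)
    ("LED lighting retrofit", 30, -450),      # negative cost: energy savings exceed spending
    ("Building insulation", 50, -300),
    ("Onshore wind", 120, 600),
    ("Industrial carbon capture", 80, 4000),
]

# Cost per tonne abated: (millions of dollars) / (millions of tonnes) = dollars per tonne.
curve = sorted(
    ((name, potential, cost / potential) for name, potential, cost in options),
    key=lambda item: item[2],
)

cumulative = 0
for name, potential, cost_per_tonne in curve:
    cumulative += potential
    print(f"{name:<28} {cost_per_tonne:>8.1f} $/tCO2   cumulative {cumulative} MtCO2/yr")

Plotting cumulative abatement on the horizontal axis against cost per tonne on the vertical axis yields the stepped curve familiar from such reports; the negative-cost steps at the left are the ones whose realism economists have questioned.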
In a 2010 report, the Rainforest Foundation UK said McKinsey's cost curve methodology was misleading for policy decisions regarding the Reduced Emissions from Deforestation and Forest Degradation (REDD) program. The report argued that McKinsey's calculations exclude certain implementation and governance costs, which makes it favor industrial uses of forests while discouraging subsistence projects. Greenpeace said the curve has allowed Indonesia and Guyana to win financial incentives from the United Nations by creating inflated estimates of current deforestation so they could demonstrate reductions in comparison. McKinsey said they had made it clear in the cost-curve publications that cost curves do not translate "mechanically" into policy implications and that policymakers should consider "many other factors" before introducing new laws.
In April 2022, in its report "Global Energy Perspective", the company predicted that fossil fuel use would peak between 2023 and 2025 and would account for 43% of energy consumption in 2050, with emissions peaking before 2030 in all scenarios. A September 2024 update, however, said that fossil fuel consumption was expected to plateau between 2025 and 2035 and to remain a major part of the energy mix, accounting for 40%–60% of energy supply by 2050, with emissions peaking between 2025 and 2035. The report cited "a more challenging geopolitical landscape", rising electricity demand (especially from artificial intelligence) and technical problems complicating the transition, and noted that the low cost of renewables can make them less profitable, so regulation may be required to drive adoption. The company also published a new report, "Global Materials Perspective", which says that the energy transition requires intensive mining of many materials; even as coal mining is reduced, the share of global emissions from the mining and metals industry is projected to decline only from 15% to 13% by 2035.
Significant consulting projects
McKinsey & Company's founder, James O. McKinsey, introduced the concept of budget planning as a management framework in his fifth book Budgetary Control in 1922. The firm's first client was the treasurer of Armour & Company, who, along with other early McKinsey clients, had read Budgetary Control. In 1931 McKinsey created a methodology for analyzing a company called the General Survey Outline (GSO), which was established based on ideas introduced in the 1924 book Business Administration. It was also known as the Banker's Survey, because McKinsey's clients who used it in the 1930s were predominantly banks. After the Wagner Act gave certain rights to employees to organize into unions in 1935, McKinsey started consulting corporations on employee relations. Later in the 1950s, the work of a McKinsey consultant on compensation was influential in "skyrocketing executive pay". It also helped many companies such as Heinz, IBM and Hoover expand into Europe.
In the 1940s, McKinsey helped many corporations convert to wartime production for World War II. McKinsey created a report in 1953 for Dwight D. Eisenhower that was used to guide government appointments, and in 1958 it helped organize NASA into an organization that relies heavily on contractors. In 1973, McKinsey & Company led a project for a consortium of grocery chains represented by the U.S. Supermarket Ad Hoc Committee on a Uniform Grocery Product Code to create the barcode. According to the book Business Research Methods, the barcode became commonplace after a study by McKinsey persuaded Kroger to adopt it.
In the 1970s and 1980s, McKinsey helped European companies change their organizational structure to M-form (Multidivisional Form), which organizes the company into semi-autonomous divisions that function around a product, industry or customer, rather than a function or expertise.
In the 1980s, AT&T reduced investments in cell towers due to McKinsey's prediction that there would only be 900,000 cell phone subscribers by 2000. According to The Firm this was "laughably off the mark" from the 109 million cellular subscribers by 2000. At the time cell phones were bulky and expensive. The firm helped the Dutch government facilitate a turnaround for Hoogovens, the world's largest steel company as of 2013, through a $1 billion bankruptcy bailout. It also implemented a turnaround for the city of Glasgow, which had problems with unemployment and crime. McKinsey created the corporate structure for NationsBank, when it was still a small company known as North Carolina National Bank. McKinsey was hired by General Motors to do a large-scale re-organization to help it compete with Japanese auto-makers. The book The Firm said it was an "unmitigated disaster" because McKinsey focused on corporate structure, whereas GM needed to compete with Japanese automakers through manufacturing process improvement. A McKinsey consultant said GM did not follow their advice.
A 2002 article in BusinessWeek said that a series of bankruptcies of McKinsey clients, such as Swissair, Kmart, and Global Crossing, in the 1990s raised questions as to whether McKinsey was responsible or had a lapse in judgement. McKinsey recommended that Swissair avoid high operating costs in its home country by developing partnerships with airlines based in other regions. In order to attract partners, Swissair acquired more than $1 billion in shares of other airlines, many of which were failing. This led to huge losses and even bankruptcy for Swissair.
As part of a lawsuit against Allstate, 13,000 McKinsey documents were released, showing that McKinsey had recommended that Allstate reduce payouts to insurance claimants by offering low settlements, delaying processing to wear out claimants through attrition, and fighting customers who protested in court. Allstate's profits doubled over the ten years after it adopted McKinsey's strategy, but the strategy also led to lawsuits alleging that the company was cheating claimants out of legitimate insurance claims.
Controversies
The firm has been associated with a number of notable scandals, including the collapse of Enron in 2001, the 2007–2008 financial crisis, and facilitating state capture in South Africa. It has also drawn controversy for involvement with Purdue Pharma, U.S. Immigration and Customs Enforcement, and authoritarian regimes. Michael Forsythe and Walt Bogdanich, reporters for The New York Times, wrote a book entitled When McKinsey Comes to Town about the company's history of controversial and allegedly unethical work.
Enron
Enron was the creation of Jeff Skilling, a McKinsey consultant of 21 years, who was later jailed for his role in the company's collapse. Enron reportedly used McKinsey on 20 different projects, and McKinsey consultants were said to have "used Enron as their sandbox."
Prior to the Enron scandal, McKinsey helped it shift from an oil and gas production company into an electric commodities trader, which led to significant growth in profits and revenues. According to The Independent, there was "no suggestion that McKinsey was complicit in the subsequent scandal, [but] critics say the arrogance of Enron's leaders is emblematic of the McKinsey culture." The government did not investigate McKinsey, who said they did not provide advice on Enron's accounting. The Wall Street Journal questioned McKinsey's "liability" and its "close relationship with Enron", and a 2002 BusinessWeek article suggested that they had ignored warning signs.
In his July 2002 in-depth BusinessWeek article on the aftermath of the Enron scandal, John Byrne wrote that McKinsey had been a "key architect of the strategic thinking that made Enron a Wall Street darling. In books, articles, and essays, its partners regularly stamped their imprimatur on many of Enron's strategies and practices, helping to position the energy giant as a corporate innovator worthy of emulation. The firm may not be the subject of any investigations, but its close involvement with Enron raises the question of whether McKinsey, like some other professional firms, ignored warning flags in order to keep an important account." BusinessWeek described how McKinsey's culture had changed as the "number of partners grew from 427 to 891", making it a "less personal place". According to the article, "some current and former McKinsey consultants" said that McKinsey had lost the "ingrained values" that used to guide the firm. Citing the example of the dot-com bubble, McKinsey had begun to take on "less prestigious companies" as clients and had allowed "its focus on building agenda-shaping relationships with top management at leading companies to slip." As well, "there was a noticeable tilt toward bringing in revenue at the expense of developing knowledge." McKinsey denied this.
McKinsey denied giving Enron advice on financing issues or having suspicions that Enron was using improper accounting methods.
2008 financial crisis
McKinsey is said to have played a significant role in the 2008 financial crisis by promoting the securitization of mortgage assets and by encouraging banks to fund their balance sheets with debt, driving up risk, which "poisoned the global financial system and precipitated the 2008 credit meltdown". Furthermore, McKinsey advised Allstate Insurance to purposely give low offers to claimants; The Huffington Post revealed that the strategy was to make claims "so expensive and so time-consuming that lawyers would start refusing to help clients." In addition, in 2016, former McKinsey partner Navdeep Arora was convicted of illegally depleting State Farm of over $500,000 over a period of eight years, in collaboration with a State Farm employee.
Valeant
Valeant, a Canadian pharmaceutical company investigated by the SEC in 2015, was accused of improper accounting and of using predatory price hikes to boost growth. The Financial Times stated that "Valeant's downfall is not exactly McKinsey's fault but its fingerprints are everywhere." Three of its six senior executives were recent ex-McKinsey employees, as was the chair of its "talent and compensation" committee. MIO Partners, McKinsey's in-house investment arm, was a private investor in Valeant, and McKinsey consulted for Valeant on drug prices and acquisitions.
Role in opioid epidemic
McKinsey advised opioid makers on how to "turbocharge" sales of OxyContin, proposed strategies to counter the emotional messages from mothers whose teenagers had overdosed on OxyContin, and helped opioid makers circumvent regulation. The firm also advised Purdue Pharma to offer pharmacies rebates based on the number of overdoses and addictions attributable to pills they sold. In 2019, McKinsey projected that over 2,400 CVS customers would have an overdose or become reliant on opioids, and estimated that a rebate of $14,810 per "event" would mean that Purdue would have to pay CVS $36.8 million that year. In February 2021, McKinsey reached agreements with attorneys general in 49 states, five U.S. territories, and the District of Columbia; across the settlements, the firm agreed to pay nearly $600 million to settle investigations into its role in promoting sales of OxyContin and fueling the broader opioid epidemic. McKinsey has since apologized for its advice to opioid makers.
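For orientation, the rebate figures reported above can be checked with simple arithmetic. The snippet below assumes the rebate was to be applied once per projected "event", which is an assumption about how the estimate was constructed rather than a detail stated in the reporting.

# Quick check of the reported rebate arithmetic (assumes one rebate per projected "event").
rebate_per_event = 14_810           # dollars per overdose or addiction "event", as reported
projected_events = 2_400            # "over 2,400" projected CVS events, as reported

print(rebate_per_event * projected_events)      # 35,544,000 dollars, roughly $35.5 million
print(round(36_800_000 / rebate_per_event))     # about 2,485 events implied by the $36.8 million figure

Under that assumption, the reported $36.8 million total corresponds to slightly more than 2,400 events, consistent with the "over 2,400" wording.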
Records show that McKinsey worked for Purdue Pharma and other opioid makers over a 15-year period, from 2004 to 2019. During 2018 and 2019, McKinsey collected at least $400 million consulting for pharmaceutical companies. McKinsey advised Mallinckrodt, the largest manufacturer of generic opioids, as well as Endo International, for which McKinsey consulted on the marketing of Opana; McKinsey's consulting helped grow Endo into a leading generics manufacturer. McKinsey recommended targeting doctors who treat back pain in elderly and long-term care patients.
In April 2022, the New York Times reported that McKinsey had frequently allowed partners and other consultants to work for both government clients, such as the FDA, and pharmaceutical clients, such as Purdue. These actions violated McKinsey's own internal ethical guidelines.
In December 2023, Reuters reported that McKinsey had agreed to pay an additional $78 million to settle claims by health insurers that McKinsey's consulting for drug companies had helped to fuel "an epidemic of opioid addiction." Reuters reported that the settlement would be the last in a series and that McKinsey "admitted to no wrongdoing."
In 2024, the company became the subject of a criminal investigation by the US Justice Department into its role in advising opioid manufacturers on how to boost sales, and a grand jury was convened to determine what charges should be brought against the firm. It was also investigated for obstruction of justice during the period when concerns about its opioid work were mounting. The firm settled the investigation in December 2024 for $650 million, with conditions including a five-year ban on marketing controlled substances. The agreement was filed in federal court in Abingdon, Virginia, resolving criminal charges brought as part of the latest corporate prosecution concerning the marketing of addictive painkillers.
Rikers Island jail complex
New York City paid McKinsey $27.5 million between 2014 and 2017 to reduce assaults at the Rikers Island jail complex, but the violence grew and the city abandoned many of the firm's recommendations.
The consultancy's alleged failings included not soliciting the views of inmates or clinic staff; using the encrypted messaging app Wickr that deletes messages, allegedly to avoid transparency; initiatives involving the expanded use of Tasers, shotguns and K9 patrol dogs; replacing troublesome inmates with more accommodating ones in the test area, which skewed the data in favor of the project; the use of ineffective data-analytics software; and spreadsheet errors that inflated the baseline rate of violence, against which the project was measured.
McKinsey advised New York City's Rikers Island jail complex and tested an anti-violence strategy named "Restart" in selected Rikers housing units. Jail administrators reported that the strategy resulted in violent incidents dropping more than 70% inside those housing units. It was later found that McKinsey consultants and jail officials had rigged the program by grouping compliant inmates into the test units, and that violent incidents at the jail, including "slashings and stabbings", increased over 1,000% from 2011 to 2016.
Fine for insider trading by investment affiliate
In February 2019, The New York Times ran a series of articles about McKinsey and the in-house hedge fund it operates – McKinsey Investment Office, or MIO Partners. The articles claimed that there was "potential for undisclosed conflicts of interest between the fund's investments and the advice the firm sells to clients", since the hedge fund could benefit from the inside knowledge obtained through management consulting services.
The firm responded that "MIO and McKinsey employ separate staffs. MIO staff have no nonpublic knowledge of McKinsey clients. For the vast majority of assets under management, decisions about specific investments are made by third-party managers".
In 2019, McKinsey paid the Justice Department $15 million from fees earned to settle allegations that it had failed to disclose potential conflicts of interest in three bankruptcy cases on which the firm had advised. In 2021, MIO Partners, the McKinsey & Co. affiliate that invests almost $31 billion on behalf of its employees, was fined $18 million by the US Securities and Exchange Commission (SEC). The SEC said that some of the people making investment decisions for MIO Partners were McKinsey & Co. employees who had visibility into confidential information about companies McKinsey was advising, including advance knowledge of upcoming mergers, bankruptcies, and financial results announcements.
Accusations of conflicts of interest in US bankruptcies
In January 2022, the Second U.S. Circuit Court of Appeals in Manhattan revived a lawsuit against McKinsey & Co. filed by retired turnaround specialist Jay Alix, accusing the consulting firm of concealing potential conflicts when seeking permission from bankruptcy courts to perform lucrative work on corporate restructurings.
In July 2023, former Prima Wawona CEO Dan Gerawan filed a lawsuit alleging that the investment firm Paine Schwartz used Prima Wawona, then America's largest stone fruit producer, to create financial gain for McKinsey and that many of the Paine Schwartz employees were former McKinsey employees. The lawsuit alleges that in late 2020, Paine Schwartz hired McKinsey as consultants, without board approval, to make massive changes in Prima Wawona's operations. Thereafter, the company's performance deteriorated quickly, according to the suit. Moody's downgraded the company's outlook from "stable" to "negative." In October 2023, Prima Wawona filed for bankruptcy. McKinsey was the company's largest creditor, owed $8 million. The company was in default on $679 million in debt. Efforts to sell the company in bankruptcy failed. In January 2024, the company announced that it would liquidate, lay off all 5,400 employees, and sell off more than 13,000 acres of farmland.
Controversial clients and association with authoritarian regimes
Role in U.S. Immigration and Customs Enforcement (ICE)
McKinsey stopped working for U.S. Immigration and Customs Enforcement (ICE) after it was disclosed that the firm had done more than $20 million in consulting work for the agency. McKinsey managing partner Kevin Sneader said the contract, not widely known within the company until The New York Times reported it, had "rightly raised" concerns. In 2019, The New York Times and ProPublica reported on newly uncovered documents which showed that McKinsey, as part of its work with ICE, proposed cuts in spending on food and medical care for migrants. McKinsey also advocated for an acceleration of the deportation process, causing concerns among ICE staff that the due process rights of the migrants would be violated. Previously, McKinsey managing partner, Kevin Sneader, had claimed that McKinsey had done no work for ICE in terms of developing and implementing immigration policy; the uncovered documents showed that to be false.
Role in Saudi clampdown on dissidents
In October 2018, in the wake of the assassination of Jamal Khashoggi, a Saudi dissident and journalist, The New York Times reported that McKinsey had identified the most prominent Saudi dissidents on Twitter and that the Saudi government subsequently repressed the dissidents and their families. One of the dissidents, Khalid al-Alkami, was arrested. Another dissident identified by McKinsey, Omar Abdulaziz, who lives in Canada, had two brothers imprisoned by the Saudi authorities and his cell phone hacked. McKinsey issued a statement, saying "We are horrified by the possibility, however remote, that [the report] could have been misused. We have seen no evidence to suggest that it was misused, but we are urgently investigating how and with whom the document was shared." In December 2018, The New York Times reported that "the kingdom is such a vital client for the firm—the source of nearly 600 projects from 2011 to 2016 alone—that McKinsey chose to participate in a major Saudi investment conference in October 2018 even after the killing and dismemberment of a Washington Post columnist by Saudi agents."
On February 12, 2019, the European Parliament's Greens/EFA group presented a motion for a resolution on the situation of women's rights defenders in Saudi Arabia, denouncing the involvement of foreign public relations companies in representing Saudi Arabia and handling its public image, particularly McKinsey & Company.
Saudi Arabian influence disclosure
In February 2024, McKinsey was questioned in court about possible violations of federal disclosure rules. The company, along with three other consulting firms, was accused of refusing to provide information about their work for Saudi Arabia's Public Investment Fund and failing to disclose themselves as agents of the Saudi government. Representatives of the firms warned that their staff could face jail if they complied with the subpoenas, breaking strict rules imposed by Saudi Arabia on what information consulting firms can share with the US government.
Support of authoritarian regimes
McKinsey's business and policy support for authoritarian regimes came under scrutiny in December 2018, in the wake of a lavish company retreat in China held adjacent to prisons where thousands of Uyghurs were being detained without cause. In December 2021, NBC News reported McKinsey's connection to a manufacturing facility owned by DJI, a drone maker sanctioned by the United States Department of the Treasury for alleged complicity in aiding the persecution of Uyghurs in China. In the preceding few years, McKinsey's clients included Saudi Arabia's absolute monarchy, Turkey's autocratic leader Recep Tayyip Erdogan, ousted former president of Ukraine Viktor Yanukovych, and several Chinese and Russian companies under sanctions.
China
In 2015, McKinsey's think tank, the Urban China Initiative, advised the Chinese government on its 13th five-year plan and its Made in China 2025 policy. As part of a project for China's National Development and Reform Commission, McKinsey's think tank advised the Chinese government to "deepen co-operation between business and the military and push foreign companies out of sensitive industries," according to a 2024 Financial Times report. In response, US legislators Marco Rubio and Michael McCaul stated that McKinsey had undermined US security and called for it to be banned from securing US federal government contracts. In October 2024, several US lawmakers called on the United States Department of Justice to investigate whether McKinsey misrepresented its work with Chinese government entities, including state-owned enterprises. On October 18, 2024, the U.S. House of Representatives Select Committee on the CCP reported that "McKinsey Equipped America's Foremost Adversary and Misrepresented Work for the Chinese Military Under Oath."
Work with Russian arms manufacturers
McKinsey is reported to have provided consulting services for the Russian state-owned enterprise Rostec, which manufactures missile engines used in Russia's war on Ukraine. According to January 2023 reporting from Die Zeit, McKinsey consultants provided consulting services to Gazprom and Rostec while also working in Germany on behalf of the German Federal Ministry of Defence. According to US Senator Maggie Hassan, McKinsey has displayed a "pattern of behavior" that raised "grave concerns about conflicts of interest." McKinsey has also done work for Sberbank, VTB Bank, Gazprom, and Rosneft, which are all closely tied to the Kremlin.
Government corruption scandals
South African corruption scandal
The Gupta family (no relation to Rajat Gupta) had strategically placed corrupt individuals across South African government departments, utilities and infrastructure entities. It is alleged that McKinsey was complicit in this corruption by using the Guptas to obtain consulting contracts from certain state-owned enterprises, including Eskom and Transnet. Working with Trillian Capital Partners (a consultancy owned by a Gupta associate), McKinsey provided services to the value of R1 billion ($75 million) annually, and Trillian was paid a commission for facilitating the business for McKinsey. McKinsey hired law firm Norton Rose Fulbright to carry out an internal investigation of the allegations. McKinsey's then managing partner, Dominic Barton, issued a statement following the internal investigation, in which the firm "admitted that it found violations of its professional standards but denied any acts of bribery, corruption, and payments to Trillian."
Corruption Watch, a South African non-governmental organization, filed a complaint about the controversial contract to the US Department of Justice, alleging that there was a criminal conspiracy between McKinsey, Trillian and Eskom in contravention of US and South African law. It was revealed in January 2018 that criminal complaints were filed against McKinsey & Company by the South African Companies and Intellectual Property Commission. South African prosecutors confirmed that they would enforce the seizing of assets from McKinsey.
South Africa's National Prosecuting Authority concluded in early 2018 that the payments to McKinsey and its local business partner, Trillian, were illegal, involving crimes such as fraud, theft, corruption and money laundering. McKinsey had subsequently been in discussion with Eskom and the National Prosecuting Authority's Asset Forfeiture Unit to agree on a transparent, legally appropriate process for returning the R 1 billion (US$74M) it had been paid – it was confirmed on 6 July 2018 that this had been concluded. Eskom confirmed it received R99.5M in interest from McKinsey on July 23, 2018. The interest payment covers the two years since McKinsey was paid almost R 1bn in 2016.
Information relating to allegedly corrupt practices by McKinsey at Transnet in 2011 and 2012 came to light in late July 2018. The weekly Mail & Guardian newspaper reported that a "...new forensic treasury report shows how controversial former Transnet and Eskom chief financial officer Anoj Singh enjoyed overseas trips at the expense of international consulting firm McKinsey, which scored multi-billion rand contracts at the state owned entities." The "...report reiterates treasury's recommendations that Singh's conduct with regards to McKinsey should be referred to the elite crime-fighting unit, the Hawks, for investigations under the Prevention and Combating of Corrupt Activities Act (Precca). Under Precca, Singh would be investigated for allegations of corruption as payment for the overseas trips alone would constitute a form of gratification, which is illegal." The Sunday City Press reported that the forensic report in turn reported that "multinational advisory firm McKinsey paid for Singh to go on lavish international trips to Dubai, Russia, Germany and the UK, after which their contract with Transnet was massively extended." McKinsey issued a statement that the allegations were incorrect. McKinsey stated that "based on an extensive review encompassing interviews, email records and expense documents, our understanding is that McKinsey did not pay for Mr. Singh's airfare and hotel lodgings in connection with the CFO Forum and the meetings that took place around the CFO Forum in London and elsewhere in 2012 and 2013." On 11 October 2019, the United States Treasury department announced that it had imposed wide-ranging financial sanctions on three Gupta brothers, Ajay, Atul and Rajesh (aka Tony) and their business associate Salim Essa under the United States Magnitsky Act.
The Economist reported in November 2019 that McKinsey's scandals, such as the 2016 South Africa scandal and the allegations of conflict of interest tied to its $12.7bn investment affiliate, McKinsey Investment Office (MIO), are relatively recent in terms of its long history. The article said that the legal challenges facing McKinsey's new global managing partner, Kevin Sneader, may be related to the company's fast-paced growth, with 2,200 more partners than in 2009. Over the same period, the number of employees increased from 17,000 to 30,000 worldwide.
In 2020, McKinsey representatives giving testimony to the Zondo Commission of Inquiry into State Capture placed blame for the firm's involvement in the corruption scandal on former McKinsey partner, Vikas Sagar. During 2021 McKinsey & Co. agreed to repay R 870 million (US$63M) in fees to South African state logistics company Transnet SOC Ltd., seeking to distance itself from contracts linked to corruption allegations. In April 2022 the Zondo Commission recommended that key Eskom executives be criminally investigated for improperly awarding consulting contracts to McKinsey & Company.
South Africa's National Prosecuting Authority announced on Friday, 30 September 2022 that it had criminally charged both McKinsey South Africa and former McKinsey partner, Vikas Sagar, with fraud, corruption and theft related to a contract to advise Transnet on buying new locomotives.
In 2024, McKinsey was ordered to pay a $122 million criminal penalty and enter into a three-year deferred prosecution agreement to settle an investigation by the Justice Department and South Africa's National Prosecuting Authority into violations of the Foreign Corrupt Practices Act (FCPA); half of the penalty, roughly R1.1 billion, will be paid to South Africa. According to the settlement, bribes were paid to South African government officials between 2012 and 2016 to secure consulting contracts.
French presidential corruption scandal
In December 2022, it was reported that the French National Financial Prosecutors' Office had raided the headquarters of President Emmanuel Macron's Renaissance party and McKinsey's Paris office. The raids were related to probes into false election campaign accounting, as well as possible favoritism and conspiracy. The probe had been widened in October 2022 from an initial focus on McKinsey's taxes to include alleged underreporting of campaign consulting costs and allegations of favoritism. According to the French Senate, the company paid no corporate income tax in France for a decade, despite a turnover of €329 million. McKinsey consultants are alleged to have worked as unpaid volunteers on Macron's 2017 and 2022 election campaigns, in violation of French law. The firm is subsequently alleged to have benefitted from special access and favorable government treatment, including the awarding of lucrative government contracts. The French media has dubbed the scandal the "McKinsey Affair" or "McKinseygate". McKinsey faces possible charges for corruption and tax fraud as a result of the investigation. As of July 2023, the case was still pending.
Canadian government consulting scandal
A January 2023 investigative report by CBC News revealed that Justin Trudeau's government had spent at least $117.4 million on McKinsey consulting since coming to power, compared to $2.2 million spent by the prior government. According to documents obtained by Radio-Canada, all of those contracts were sole-source, meaning other firms were not given the chance to bid for the contracts. Further investigative reporting identified at least $84 million in McKinsey consulting expenses between March 2021 and November 2022 alone.
According to anonymous sources with major roles at Immigration, Refugees and Citizenship Canada (IRCC), McKinsey is reported to have a particularly large and growing influence over Canadian immigration policy. Policy is reported to have been decided on without input from public servants, and with minimal consideration for the public interest. Canada's immigration targets have closely followed goals set in a plan by previous McKinsey head Dominic Barton, who outlined these plans in his 2016 report of the Advisory Council on Economic Growth and through his work with the Century Initiative. Both the report and the Century Initiative advocate for a steep increase in immigration to bring Canada's population to 100 million by 2100. According to one of the IRCC whistleblowers, the department was informed that Barton's report was a "foundational plan" in spite of reservations expressed by the then-immigration minister, John McCallum.
On January 10, 2023, Canadian opposition parties, including the Conservative Party of Canada, the New Democratic Party of Canada, and Bloc Quebecois, called for a parliamentary inquiry into federal contracts awarded to McKinsey. The opposition is demanding that the government disclose "contracts, conversations, records of work done, meetings held, text messages, email exchanges, everything that the government has with the company since taking office". McKinsey has thus far refused to answer CBC News questions regarding its role and agreements with the federal government, while the government has refused to provide copies of the company's reports. In response to the controversy, McKinsey issued a statement on its website indicating that it "welcomes the opportunity" to provide information to parliament, and that it "does not make policy recommendations on immigration or any other topic".
Trudeau asked fellow Liberal Party members, Treasury Board President Mona Fortier and Procurement Minister Helena Jaczek, to review the contracts and return a final report. On 23 March 2023, the Treasury Board announced that audits had determined that departments did not consistently follow certain administrative rules and procedures, but that there was "broad compliance with values and ethics commitments." According to the Treasury Board, while the audits raised questions about fairness, transparency and conflicts of interest, no evidence was found of political direction in awarding the contracts.
Climate change
Climate action letter and AMC
In 2021, more than 1,100 McKinsey employees signed a letter calling out the firm for working with 43 of the 100 most polluting companies. According to The New York Times, these 43 clients alone were responsible for more than a third of the world's carbon emissions in 2018. The letter called on the firm to disclose how much carbon its clients emit. McKinsey executives said they would continue to advise major carbon polluters. Several employees resigned from the firm following the letter.
In April 2022, McKinsey, Alphabet Inc., Shopify, Meta Platforms, and Stripe, Inc. announced a $925 million advance market commitment of carbon dioxide removal (CDR) from companies that are developing CDR technology over the next 9 years.
2023 Africa Climate Summit
More than 400 civil society groups signed a letter of protest to Kenyan President William Ruto, accusing McKinsey of influencing the 2023 Africa Climate Summit. The letter claimed that the agenda for the summit, as proposed by McKinsey, "reflects the interests of the US, McKinsey and the Western corporations they represent." Leaked documents revealed that the consulting firm tried to push controversial carbon market schemes that would benefit its fossil fuel clients.
COP28
In 2023, an AFP investigation based on multiple leaked documents revealed that McKinsey was using its position as primary advisor to the COP28 hosts, the United Arab Emirates, to push the interests of its oil and gas clients, including ExxonMobil and Aramco. Sources involved in preparatory meetings for COP28 accused McKinsey of putting its own interests ahead of the climate. McKinsey's energy scenario for the COP28 presidency would allow continued investment in fossil fuels, which would undermine the goals of the Paris Agreement; its "energy transition narrative" recommends that oil use be reduced by only 50% by 2050 and that trillions of dollars continue to be invested in high-emission assets each year until at least 2050.
Diversity, equity, and inclusion
McKinsey & Company has published several reports on the business benefits of diversity, equity, and inclusion. The reports have been criticized for insufficiently distinguishing correlation from causation, lacking robustness, and overgeneralizing from narrow findings to broad claims.
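As a purely illustrative aside, not drawn from the McKinsey reports or their critiques, the sketch below shows the statistical point behind the correlation-versus-causation criticism: a hidden confounder, labelled here as firm size, can produce a sizeable correlation between a diversity score and a performance score even when neither directly affects the other. All variables and numbers are synthetic.

# Synthetic example: a confounder ("size") drives both metrics, creating correlation without causation.
import random

random.seed(0)
size = [random.gauss(0, 1) for _ in range(5000)]            # hidden confounder
diversity = [s + random.gauss(0, 1) for s in size]          # larger firms happen to score higher
performance = [s + random.gauss(0, 1) for s in size]        # larger firms also perform better

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / (vx * vy) ** 0.5

print(round(corr(diversity, performance), 2))   # about 0.5, despite no direct causal link

Critics argue that without controlling for such confounders, or using a research design that supports causal inference, a reported correlation between diversity metrics and financial performance cannot establish that one causes the other.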
Explanatory notes
References
Further reading
External links
1926 establishments in Illinois
Consulting firms established in 1926
Macroeconomics consulting firms
History of Chicago
International management consulting firms
Life sciences industry
Management consulting firms of the United States
Outsourcing companies
Privately held companies based in Illinois
Privately held companies based in New York City | McKinsey & Company | Biology | 12,737 |
20,062 | https://en.wikipedia.org/wiki/Meditation | Meditation is a practice in which an individual uses a technique to train attention and awareness and detach from reflexive, "discursive thinking," achieving a mentally clear and emotionally calm and stable state, while not judging the meditation process itself.
Techniques are broadly classified into focused (or concentrative) and open monitoring methods. Focused methods involve attention to specific objects like breath or mantras, while open monitoring includes mindfulness and awareness of mental events.
Meditation is practiced in numerous religious traditions, though it is also practised independently from any religious or spiritual influences for its health benefits. The earliest records of meditation (dhyana) are found in the Upanishads, and meditation plays a salient role in the contemplative repertoire of Jainism, Buddhism and Hinduism. Meditation-like techniques are also known in Judaism, Christianity and Islam, in the context of remembrance of and prayer and devotion to God.
Asian meditative techniques have spread to other cultures where they have found application in non-spiritual contexts, such as business and health. Meditation may significantly reduce stress, fear, anxiety, depression, and pain, and enhance peace, perception, self-concept, and well-being. Research is ongoing to better understand the effects of meditation on health (psychological, neurological, and cardiovascular) and other areas.
Etymology
The English meditation is derived from Old French meditacioun, in turn from Latin meditatio from a verb meditari, meaning "to think, contemplate, devise, ponder". In the Catholic tradition, the use of the term meditatio as part of a formal, stepwise process of meditation goes back to at least the 12th-century monk Guigo II, before which the Greek word theoria was used for the same purpose.
Apart from its historical usage, the term meditation was introduced as a translation for Eastern spiritual practices, referred to as dhyāna in Hinduism, Buddhism, and Jainism, which comes from the Sanskrit root dhyai, meaning to contemplate or meditate. The Greek word theoria derives from the same root.
The term "meditation" in English may also refer to practices from Islamic Sufism, or other traditions such as Jewish Kabbalah and Christian Hesychasm.
Definitions
Difficulties in defining meditation
No universally accepted definition for meditation
Meditation has proven difficult to define as it covers a wide range of dissimilar practices in different traditions and cultures. In popular usage, the word "meditation" and the phrase "meditative practice" are often used imprecisely to designate practices found across many cultures. These can include almost anything that is claimed to train the attention of mind or to teach calmness or compassion. There remains no definition of necessary and sufficient criteria for meditation that has achieved widespread acceptance within the modern scientific community.
Separation of technique from tradition
Some of the difficulty in precisely defining meditation has been in recognizing the particularities of the many various traditions; and theories and practice can differ within a tradition. Taylor noted that even within a faith such as "Hindu" or "Buddhist", schools and individual teachers may teach distinct types of meditation.
Ornstein noted that "Most techniques of meditation do not exist as solitary practices but are only artificially separable from an entire system of practice and belief." For instance, while monks meditate as part of their everyday lives, they also engage in the codified rules and live together in monasteries in specific cultural settings that go along with their meditative practices.
Dictionary definitions
Dictionaries give both the original Latin meaning of "think[ing] deeply about (something)", as well as the popular usages of "focusing one's mind for a period of time", "the act of giving your attention to only one thing, either as a religious activity or as a way of becoming calm and relaxed", and "to engage in mental exercise (such as concentrating on one's breathing or repetition of a mantra) for the purpose of reaching a heightened level of spiritual awareness."
Scholarly definitions
In modern psychological research, meditation has been defined and characterized in various ways. Many of these emphasize the role of attention and characterize the practice of meditation as attempts to detach from reflexive, "discursive thinking," not judging the meditation-process itself ("logical relaxation"), to achieve a deeper, more devout, or more relaxed state.
Bond et al. (2009) identified criteria for defining a practice as meditation "for use in a comprehensive systematic review of the therapeutic use of meditation", using "a 5-round Delphi study with a panel of 7 experts in meditation research" who were also trained in diverse but empirically highly studied (Eastern-derived or clinical) forms of meditation.
Several other definitions of meditation have been used by influential modern reviews of research on meditation across multiple traditions:
Walsh & Shapiro (2006): "Meditation refers to a family of self-regulation practices that focus on training attention and awareness in order to bring mental processes under greater voluntary control and thereby foster general mental well-being and development and/or specific capacities such as calm, clarity, and concentration"
Cahn & Polich (2006): "Meditation is used to describe practices that self-regulate the body and mind, thereby affecting mental events by engaging a specific attentional set.... regulation of attention is the central commonality across the many divergent methods"
Jevning et al. (1992): "We define meditation... as a stylized mental technique... repetitively practiced for the purpose of attaining a subjective experience that is frequently described as very restful, silent, and of heightened alertness, often characterized as blissful"
Goleman (1988): "the need for the meditator to retrain his attention, whether through concentration or mindfulness, is the single invariant ingredient in... every meditation system"
Classifications
Focused and open methods
In the West, meditation techniques have often been classified in two broad categories, which in actual practice are often combined: focused (or concentrative) meditation and open monitoring (or mindfulness) meditation:
Focused methods include paying attention to the breath, to an idea or feeling (such as mettā – loving-kindness), to a kōan, or to a mantra (such as in transcendental meditation), and single point meditation. Open monitoring methods include mindfulness, shikantaza and other awareness states.
Other possible typologies
Another typology divides meditation approaches into concentrative, generative, receptive and reflective practices:
concentrative: focused attention, including breath meditation, TM, and visualizations;
generative: developing qualities like loving kindness and compassion;
receptive: open monitoring;
reflective: systematic investigation, contemplation.
The Buddhist tradition often divides meditative practice into samatha, or calm abiding, and vipassana, insight. Mindfulness of breathing, a form of focused attention, calms down the mind; this calmed mind can then investigate the nature of reality, by monitoring the fleeting and ever-changing constituents of experience, by reflective investigation, or by "turning back the radiance," focusing awareness on awareness itself and discerning the true nature of mind as awareness itself.
Matko and Sedlmeier (2019) "call into question the common division into 'focused attention' and 'open-monitoring' practices." They argue for "two orthogonal dimensions along which meditation techniques could be classified," namely "activation" and "amount of body orientation," proposing seven clusters of techniques: "mindful observation, body-centered meditation, visual concentration, contemplation, affect-centered meditation, mantra meditation, and meditation with movement."
Jonathan Shear argues that transcendental meditation is an "automatic self-transcending" technique, different from focused attention and open monitoring. In this kind of practice, "there is no attempt to sustain any particular condition at all. Practices of this kind, once started, are reported to automatically 'transcend' their own activity and disappear, to be started up again later if appropriate." Yet, Shear also states that "automatic self-transcending" also applies to the way other techniques such as from Zen and Qigong are practiced by experienced meditators "once they had become effortless and automatic through years of practice."
Technique
Posture
Asanas or body postures such as padmasana (full-lotus, half-lotus), cross-legged sitting, seiza, and kneeling positions are popular meditative postures in Hinduism, Buddhism and Jainism, although other postures such as sitting, supine (lying), and standing are also used. Meditation is also sometimes done while walking, known as kinhin, while doing a simple task mindfully, known as samu, or while lying down, known as shavasana.
Frequency
The Transcendental Meditation technique recommends practice of 20 minutes twice per day. Some techniques suggest less time, especially when starting meditation, and Richard Davidson has quoted research saying benefits can be achieved with a practice of only 8 minutes per day. Research shows improvement in meditation time with simple oral and video training. Some meditators practice for much longer, particularly when on a course or retreat. Some meditators find practice best in the hours before dawn.
Supporting aids
Use of prayer beads
Some religions have traditions of using prayer beads as tools in devotional meditation. Most prayer beads and Christian rosaries consist of pearls or beads linked together by a thread. The Roman Catholic rosary is a string of beads containing five sets of ten small beads each. Eastern and Oriental Orthodox Christians have traditions of using prayer ropes called Comboschini or Meqetaria as an aid to prayerful meditation. The Hindu japa mala has 108 beads, the figure 108 itself carrying spiritual significance as the energy of the recited sounds is said to equate to Om; the malas used in Gaudiya Vaishnavism, the Hare Krishna tradition, and Jainism likewise have 108 beads. Buddhist prayer beads also have 108 beads, but hold a different meaning: in Buddhism, there are 108 human passions that impede enlightenment. Each bead is counted once as a person recites a mantra until the person has gone all the way around the mala. The Muslim misbaha has 99 beads. The materials used for beads also vary considerably. Beads made from seeds of rudraksha trees are considered sacred by devotees of Shiva, while followers of Vishnu revere the wood that comes from the Tulsi plant, also known as Holy Basil.
Striking the meditator
The Buddhist literature has many stories of Enlightenment being attained through disciples being struck by their masters. T. Griffith Foulk recounts that the encouragement stick was an integral part of the Zen practice when he trained.
Using a narrative
Neuroscientist and long-time meditator Richard Davidson has expressed the view that having a narrative can help the maintenance of daily practice. For instance, he himself prostrates to the teachings, and meditates "not primarily for my benefit, but for the benefit of others".
Psychedelics
Studies suggest the potential of psychedelics, such as psilocybin and DMT, to enhance meditative training.
Meditation traditions
Origins
The history of meditation is intimately bound up with the religious context within which it was practiced. Rossano suggested that the emergence of the capacity for focused attention, an element of many methods of meditation, may have contributed to the latest phases of human biological evolution. Some of the earliest references to meditation, as well as proto-Samkhya, are found in the Upanishads of India. According to Wynne, the earliest clear references to meditation are in the middle Upanishads and the Mahabharata (including the Bhagavad Gita). According to Gavin Flood, the earlier Brihadaranyaka Upanishad is describing meditation when it states that "Having become calm and concentrated, one perceives the self (Ātman) within oneself" (BU 4.4.23).
Indian religions
Hinduism
There are many schools and styles of meditation within Hinduism. In pre-modern and traditional Hinduism, Yoga and Dhyana are practised to recognize 'pure awareness', or 'pure consciousness', undisturbed by the workings of the mind, as one's eternal self. In Advaita Vedanta jivatman, individual self, is recognized as illusory, and in Reality identical with the omnipresent and non-dual Ātman-Brahman. In the dualistic Yoga school and Samkhya, the Self is called Purusha, a pure consciousness undisturbed by Prakriti, 'nature'. Depending on the tradition, the liberative event is named moksha, vimukti or kaivalya.
One of the most influential texts of classical Hindu Yoga is Patañjali's Yoga sutras (c. 400 CE), a text associated with Yoga and Samkhya and influenced by Buddhism, which outlines eight limbs leading to kaivalya ("aloneness") or inner awareness. The first four, known as the "outer limbs," include ethical discipline (yamas), rules (niyamas), physical postures (āsanas), and breath control (prāṇāyama). The fifth, withdrawal from the senses (pratyāhāra), transitions into the "inner limbs" that are one-pointedness of mind (dhāraṇā), meditation (dhyāna), and finally samādhi.
Later developments in Hindu meditation include the compilation of Hatha Yoga (forceful yoga) compendiums like the Hatha Yoga Pradipika, the development of Bhakti yoga as a major form of meditation, and Tantra. Another important Hindu yoga text is the Yoga Yajnavalkya, which makes use of Hatha Yoga and Vedanta Philosophy.
Mantra meditation
The Bhagavata Purana emphasizes that mantra meditation is a key practice for achieving liberation; through it, practitioners can achieve a direct vision of the divine. The text integrates both Vedic and tantric elements, where mantras are seen not only as sacred sounds but as embodiments of the deity. This approach reflects a shift from the impersonal meditation on the sound-form of Brahman (Om) in the Upanishads to a personal, devotional focus on Krishna in the Bhagavata Purana.
Jainism
Jainism has three elements called the Ratnatraya ("Three Jewels"): right perception and faith, right knowledge and right conduct. Meditation in Jainism aims to reach and to remain in the pure state of soul which is believed to be pure consciousness, beyond any attachment or aversion. The practitioner strives to be just a knower-seer (gyata-drashta). Jain meditation can be broadly categorized into Dharma dhyana and Shukla dhyana. Dharma dhyana is discriminating knowledge (bheda-vijñāna) of the tattvas (truths or fundamental principles), while shukla dhyana is meditation proper.
Jainism uses meditation techniques such as pindāstha-dhyāna, padāstha-dhyāna, rūpāstha-dhyāna, rūpātita-dhyāna, and savīrya-dhyāna. In padāstha dhyāna, one focuses on a mantra, a combination of core letters or words on deity or themes. Jain followers practice mantra regularly by chanting loudly or silently in mind.
The meditation technique of contemplation includes agnya vichāya, in which one contemplates seven facts – life and non-life, the inflow, bondage, stoppage and removal of karmas, and the final accomplishment of liberation. In apaya vichāya, one contemplates the incorrect insights one indulges in, which eventually develops right insight. In vipaka vichāya, one reflects on the eight causes or basic types of karma. In sansathan vichāya, one thinks about the vastness of the universe and the loneliness of the soul.
Buddhism
Buddhists pursue meditation as part of the path toward awakening and nirvana. The closest words for meditation in the classical languages of Buddhism are bhāvanā ("development"), and the core practices of body contemplations (repulsiveness and cemetery contemplations) and anapanasati (mindfulness of in-and-out breathing) culminating in jhāna/dhyāna or samādhi.
While most classical and contemporary Buddhist meditation guides are school-specific, the root meditative practices of various body recollections and breath meditation have been preserved and transmitted in almost all Buddhist traditions, through Buddhist texts like the Satipatthana Sutta and the Dhyana sutras, and through oral teacher-student transmissions. These ancient practices are supplemented with various distinct interpretations of, and developments in, these practices.
The Theravāda tradition stresses the development of samatha and vipassana, postulating over fifty methods for developing mindfulness based on the Satipatthana Sutta, and forty for developing concentration based on the Visuddhimagga.
The Tibetan tradition incorporated Sarvastivada and Tantric practices, wedded with Madhyamaka philosophy, and developed thousands of visualization meditations.
The Zen tradition incorporated mindfulness and breath-meditation via the Dhyana sutras, which are based on the Sarvastivada-tradition. Sitting meditation, known as zazen, is a central part of Zen practice. Downplaying the "petty complexities" of satipatthana and the body-recollections (but maintaining the awareness of immanent death), the early Chan-tradition developed the notions or practices of wu nian ("no thought, no fixation on thought, such as one's own views, experiences, and knowledge") and fēi sīliàng (非思量, Japanese: hishiryō, "nonthinking"); and kanxin ("observing the mind") and shou-i pu i (守一不移, "maintaining the one without wavering"), turning the attention from the objects of experience to the nature of mind, the perceiving subject itself, which is equated with Buddha-nature.
The Silk Road transmission of Buddhism introduced Buddhist meditation to other Asian countries, reaching China in the 2nd century CE, and Japan in the 6th century CE. In the modern era, Buddhist meditation techniques have become popular in the wider world, due to the influence of Buddhist modernism on Asian Buddhism, and western lay interest in Zen and the Vipassana movement, with many non-Buddhists taking up meditative practices. The modernized concept of mindfulness (based on the Buddhist term sati) and related meditative practices have in turn led to mindfulness-based therapies.
Dhyana
Dhyana is often presented as a form of focused attention or concentration, as in Buddhaghosa's Theravada classic the Visuddhimagga ("Path of purification", 5th c. CE). According to a number of contemporary scholars and scholar-practitioners, however, it is actually a description of the development of perfected equanimity and mindfulness, apparently induced by satipatthana, an open monitoring of the breath, without trying to regulate it. The same description, in a different formula, can be found in the bojjhanga, the "seven factors of awakening," and may therefore refer to the core program of early Buddhist bhavana. According to Vetter, dhyana seems to be a natural development from the sense-restraint and moral constrictions prescribed by the Buddhist tradition.
Samatha and vipassana
The Buddha identified two paramount mental qualities that arise from wholesome meditative practice or bhavana, namely samatha ("calm", "serenity", "tranquility") and vipassana (insight). As the developing tradition started to emphasize the value of liberating insight, and dhyana came to be understood as concentration, samatha and vipassana were understood as two distinct meditative techniques. In this understanding, samatha steadies, composes, unifies and concentrates the mind, while vipassana enables one to see, explore and discern "formations" (conditioned phenomena based on the five aggregates).
According to this understanding, which is central to Theravada orthodoxy but also plays a role in Tibetan Buddhism, through the meditative development of serenity, one is able to weaken the obscuring hindrances and bring the mind to a collected, pliant, and still state (samadhi). This quality of mind then supports the development of insight and wisdom (Prajñā) which is the quality of mind that can "clearly see" (vi-passana) the nature of phenomena. What exactly is to be seen varies within the Buddhist traditions. In Theravada, all phenomena are to be seen as impermanent, suffering, not-self and empty. When this happens, one develops dispassion (viraga) for all phenomena, including all negative qualities and hindrances and lets them go. It is through the release of the hindrances and ending of craving through the meditative development of insight that one gains liberation.
Sikhism
In Sikhism, simran (meditation) and good deeds are both necessary to achieve the devotee's spiritual goals; without good deeds meditation is futile. When Sikhs meditate, they aim to feel God's presence and emerge in the divine light. It is only God's divine will or order that allows a devotee to desire to begin to meditate. Nām japnā involves focusing one's attention on the names or great attributes of God.
Taoism
Taoist meditation has developed techniques including concentration, visualization, qi cultivation, contemplation, and mindfulness meditations in its long history. Traditional Daoist meditative practices influenced Buddhism, creating the unique meditative practices of Chinese Buddhism that then spread through the rest of East Asia from around the 5th century. Traditional Chinese medicine and the Chinese martial arts were both influenced by, and influences on, Taoist meditation.
Livia Kohn distinguishes three basic types of Taoist meditation: "concentrative", "insight", and "visualization". Ding 定 (literally means "decide; settle; stabilize") refers to "deep concentration", "intent contemplation", or "perfect absorption". Guan 觀 (lit. "watch; observe; view") meditation seeks to merge and attain unity with the Dao. It was developed by Tang dynasty (618–907) Taoist masters based upon the Tiantai Buddhist practice of Vipassanā "insight" or "wisdom" meditation. Cun 存 (lit. "exist; be present; survive") has a sense of "to cause to exist; to make present" in the meditation techniques popularized by the Taoist Shangqing and Lingbao Schools. A meditator visualizes or actualizes solar and lunar essences, lights, and deities within their body, which supposedly results in health and longevity, even xian 仙/仚/僊, "immortality".
The Neiye ("Inward training"), an essay in the Guanzi (late 4th century BCE), is the oldest received writing on the subject of qi cultivation and breath-control meditation techniques. For instance, "When you enlarge your mind and let go of it, when you relax your vital breath and expand it, when your body is calm and unmoving: And you can maintain the One and discard the myriad disturbances. ... This is called "revolving the vital breath": Your thoughts and deeds seem heavenly."
The Taoist Zhuangzi (c. 3rd century BCE) records zuowang or "sitting forgetting" meditation. Confucius asked his disciple Yan Hui to explain what "sit and forget" means: "I slough off my limbs and trunk, dim my intelligence, depart from my form, leave knowledge behind, and become identical with the Transformational Thoroughfare."
Taoist meditation practices are central to Chinese martial arts (and some Japanese martial arts), especially the qi-related neijia "internal martial arts". Some well-known examples are daoyin ("guiding and pulling"), qigong ("life-energy exercises"), neigong ("internal exercises"), neidan ("internal alchemy"), and tai chi ("great ultimate boxing"), which is thought of as moving meditation. One common explanation contrasts "movement in stillness", referring to energetic visualization of qi circulation in qigong and zuochan ("seated meditation"), with "stillness in movement", referring to a state of meditative calm in tai chi forms. There are also unifying or middle-road forms, such as Wuxingheqidao, which seek to unify internal alchemical forms with more external forms.
Abrahamic religions
Judaism
Judaism has made use of meditative practices for thousands of years. For instance, in the Torah, the patriarch Isaac is described as going "לשוח" (lasuach) in the field – a term understood by all commentators as some type of meditative practice (Genesis 24:63). Similarly, there are indications throughout the Tanakh (the Hebrew Bible) that the prophets meditated. In the Old Testament, there are two Hebrew words for meditation: hāgâ, to sigh or murmur but also to meditate, and sîḥâ, to muse or rehearse in one's mind.
Classical Jewish texts espouse a wide range of meditative practices, often associated with the cultivation of kavanah or intention. The first layer of rabbinic law, the Mishnah, describes ancient sages "waiting" for an hour before their prayers, "in order to direct their hearts to the Omnipresent One" (Mishnah Berakhot 5:1). Other early rabbinic texts include instructions for visualizing the Divine Presence (B. Talmud Sanhedrin 22a) and breathing with conscious gratitude for every breath (Genesis Rabba 14:9).
One of the best-known types of meditation in early Jewish mysticism was the work of the Merkabah, from the root /R-K-B/ meaning "chariot" (of God). Some meditative traditions have been encouraged in Kabbalah, and some Jews have described Kabbalah as an inherently meditative field of study. Kabbalistic meditation often involves the mental visualization of the supernal realms. Aryeh Kaplan has argued that the ultimate purpose of Kabbalistic meditation is to understand and cleave to the Divine.
Meditation has been of interest to a wide variety of modern Jews. In modern Jewish practice, one of the best known meditative practices is called "hitbodedut" (התבודדות, alternatively transliterated as "hisbodedus"), and is explained in Kabbalistic, Hasidic, and Mussar writings, especially the Hasidic method of Rabbi Nachman of Breslav. The word derives from the Hebrew word "boded" (בודד), meaning the state of being alone. Another Hasidic system is the Habad method of "hisbonenus", related to the Sephirah of "Binah", Hebrew for understanding. This practice is the analytical reflective process of making oneself understand a mystical concept well, following on from and internalising its study in Hasidic writings. The Musar Movement, founded by Rabbi Israel Salanter in the middle of the nineteenth century, emphasized meditative practices of introspection and visualization that could help to improve moral character. Conservative rabbi Alan Lew has emphasized the important role meditation plays in the process of teshuvah (repentance). Jewish Buddhists have adopted Buddhist styles of meditation.
Christianity
Christian meditation is a term for a form of prayer in which a structured attempt is made to get in touch with and deliberately reflect upon the revelations of God. In the Roman Empire, by 20 BCE Philo of Alexandria had written on some form of "spiritual exercises" involving attention (prosoche) and concentration, and by the 3rd century Plotinus had developed meditative techniques. The word meditation comes from the Latin word meditatum, which means "to concentrate" or "to ponder"; the monk Guigo II introduced this terminology in the 12th century AD. Christian meditation is the process of deliberately focusing on specific thoughts (e.g. a biblical scene involving Jesus and the Virgin Mary) and reflecting on their meaning in the context of the love of God. Christian meditation is sometimes taken to mean the middle level in a broad three-stage characterization of prayer: it then involves more reflection than first-level vocal prayer, but is more structured than the multiple layers of contemplation in Christianity.
Between the 10th and 14th centuries, hesychasm was developed, particularly on Mount Athos in Greece, and involves the repetition of the Jesus prayer. Interactions with Indians or the Sufis may have influenced the Eastern Christian meditation approach to hesychasm, but this is unproven.
Western Christian meditation contrasts with most other approaches in that it does not involve the repetition of any phrase or action and requires no specific posture. Western Christian meditation progressed from the 6th century practice of Bible reading among Benedictine monks called Lectio Divina, i.e. divine reading. Its four formal steps as a "ladder" were defined by the monk Guigo II in the 12th century with the Latin terms lectio, meditatio, oratio, and contemplatio (i.e. read, ponder, pray, contemplate). Western Christian meditation was further developed by saints such as Ignatius of Loyola and Teresa of Avila in the 16th century.
On 28 April 2021, Pope Francis, in an address to the General Audience, said that meditation is a need for everyone. He noted that the term "meditation" has had many meanings throughout history, and that "the ancients used to say that the organ of prayer is the heart."
In Catholic Christianity, the Rosary is a devotion for the meditation of the mysteries of Jesus and Mary. "The gentle repetition of its prayers makes it an excellent means to moving into deeper meditation. It gives us an opportunity to open ourselves to God's word, to refine our interior gaze by turning our minds to the life of Christ. The first principle is that meditation is learned through practice. Many people who practice rosary meditation begin very simply and gradually develop a more sophisticated meditation. The meditator learns to hear an interior voice, the voice of God." Similarly, the chotki of the Eastern Orthodox denomination, the Wreath of Christ of the Lutheran faith, and the Anglican prayer beads of the Episcopalian tradition are used for Christian prayer and meditation.
According to Edmund P. Clowney, Christian meditation contrasts with Eastern forms of meditation as radically as the portrayal of God the Father in the Bible contrasts with depictions of Krishna or Brahman in Indian teachings. Unlike some Eastern styles, most styles of Christian meditation do not rely on the repeated use of mantras, and yet are also intended to stimulate thought and deepen meaning. Christian meditation aims to heighten the personal relationship based on the love of God that marks Christian communion. In Aspects of Christian meditation, the Catholic Church warned of potential incompatibilities in mixing Christian and Eastern styles of meditation. In 2003, in A Christian reflection on the New Age the Vatican announced that the "Church avoids any concept that is close to those of the New Age".
Islam
Dhikr (zikr) is a type of meditation within Islam, meaning remembering and mentioning God; it has involved the repetition of the 99 Names of God since the 8th or 9th century. It is interpreted in different meditative techniques in Sufism or Islamic mysticism. It became one of the essential elements of Sufism as it was systematized traditionally, and is juxtaposed with fikr (thinking), which leads to knowledge. By the 12th century, the practice of Sufism included specific meditative techniques, and its followers practiced breathing controls and the repetition of holy words.
Sufism uses a meditative procedure like Buddhist concentration, involving high-intensity and sharply focused introspection. In the Oveyssi-Shahmaghsoudi Sufi order, for example, muraqabah takes the form of tamarkoz, "concentration" in Persian.
Tafakkur or tadabbur in Sufism literally means reflection upon the universe: this is considered to permit access to a form of cognitive and emotional development that can emanate only from the higher level, i.e. from God. The sensation of receiving divine inspiration awakens and liberates both heart and intellect, permitting such inner growth that the apparently mundane actually takes on the quality of the infinite. Muslim teachings embrace life as a test of one's submission to God.
Dervishes of certain Sufi orders practice whirling, a form of physically active meditation.
Baháʼí Faith
In the teachings of the Baháʼí Faith, which derives from an Islamic context but is universalist in orientation, meditation is a primary tool for spiritual development, involving reflection on the words of God. While prayer and meditation are linked, where meditation happens generally in a prayerful attitude, prayer is seen specifically as turning toward God, and meditation is seen as a communion with one's self where one focuses on the divine.
In Baháʼí teachings the purpose of meditation is to strengthen one's understanding of the words of God, and to make one's soul more susceptible to their potentially transformative power, more receptive to the need for both prayer and meditation to bring about and maintain a spiritual communion with God.
Bahá'u'lláh, the founder of the religion, never specified any particular form of meditation, and thus each person is free to choose their own form. However, he did state that Baháʼís should read a passage of the Baháʼí writings twice a day, once in the morning and once in the evening, and meditate on it. He also encouraged people to reflect on their actions and worth at the end of each day. During the Nineteen Day Fast, a period of the year during which Baháʼís adhere to a sunrise-to-sunset fast, they meditate and pray to reinvigorate their spiritual forces.
Modern spirituality
Modern dissemination in the West
Meditation has spread in the West since the late 19th century, accompanying increased travel and communication among cultures worldwide. Most prominent has been the transmission of Asian-derived practices to the West. In addition, interest in some Western-based meditative practices has been revived, and these have been disseminated to a limited extent to Asian countries.
Ideas about Eastern meditation had begun "seeping into American popular culture even before the American Revolution through the various sects of European occult Christianity", and such ideas "came pouring in [to America] during the era of the transcendentalists, especially between the 1840s and the 1880s." The following decades saw further spread of these ideas to America.
More recently, in the 1960s, another surge in Western interest in meditative practices began. The rise of communist political power in Asia led to many Asian spiritual teachers taking refuge in Western countries, oftentimes as refugees. In addition to spiritual forms of meditation, secular forms of meditation have taken root. Rather than focusing on spiritual growth, secular meditation emphasizes stress reduction, relaxation and self-improvement.
The 2012 US National Health Interview Survey of 34,525 subjects found that 8% of US adults used meditation, with lifetime and 12-month prevalence of meditation use of 5.2% and 4.1% respectively. Meditation use among workers was 10% (up from 8% in 2002).
Mantra meditation, with the use of a japa mala and especially with focus on the Hare Krishna maha-mantra, is a central practice of the Gaudiya Vaishnava faith tradition and the International Society for Krishna Consciousness, also known as the Hare Krishna movement. Other popular New Religious Movements include the Ramakrishna Mission, Vedanta Society, Divine Light Mission, Chinmaya Mission, Osho, Sahaja Yoga, Transcendental Meditation, Oneness University, Brahma Kumaris, Vihangam Yoga and Heartfulness Meditation (Sahaj Marg).
New Age
New Age meditations are often influenced by Eastern philosophy, mysticism, yoga, Hinduism and Buddhism, yet may contain some degree of Western influence. In the West, meditation found its mainstream roots through the social revolution of the 1960s and 1970s, when many of the youth of the day rebelled against traditional religion as a reaction against what some perceived as the failure of Christianity to provide spiritual and ethical guidance. New Age meditation as practised by the early hippies is known for its techniques of blanking out the mind and releasing oneself from conscious thinking. This is often aided by repetitive chanting of a mantra, or focusing on an object. New Age meditation evolved into a range of purposes and practices, from serenity and balance to access to other realms of consciousness to the concentration of energy in group meditation to the supreme goal of samadhi, as in the ancient yogic practice of meditation.
Guided meditation
Guided meditation is a form of meditation which uses a number of different techniques to achieve or enhance the meditative state. It may simply be meditation done under the guidance of a trained practitioner or teacher, or it may be through the use of imagery, music, and other techniques. The session can be either in person, via media comprising music or verbal instruction, or a combination of both. The most common form is a combination of meditation music and receptive music therapy, guided imagery, relaxation, mindfulness, and journaling.
Because of the different combinations used under the one term, it can be difficult to attribute positive or negative outcomes to any of the various techniques. Furthermore, the term is frequently used interchangeably with "guided imagery" and sometimes with "creative visualization" in popular psychology and self-help literature. It is less commonly used in scholarly and scientific publications. Consequently, guided meditation cannot be understood as a single technique but rather as an aggregate of techniques that are integral to its practice.
Guided meditation as an aggregate or synthesis of techniques includes meditation music, receptive music therapy, guided imagery, relaxation, meditative praxis, and self-reflective journaling, all of which have been shown to have therapeutic benefits when employed as an adjunct to primary strategies. Benefits include lower levels of stress, fewer asthmatic episodes, reduced physical pain, insomnia, episodic anger, negative or irrational thinking, and anxiety, as well as improved coping skills, focus, and a general feeling of well-being.
Effects
Research on the processes and effects of meditation is a subfield of neurological research. Modern scientific techniques, such as functional magnetic resonance imaging and electroencephalography, have been used to observe neurological responses during meditation. Concerns have been raised about the quality of meditation research, including the particular characteristics of individuals who tend to participate.
Meditation lowers heart rate, oxygen consumption, breathing frequency, stress hormones, lactate levels, and sympathetic nervous system activity (associated with the fight-or-flight response), along with a modest decline in blood pressure. However, those who have meditated for two or three years were found to already have low blood pressure. During meditation, the decrease in oxygen consumption averages 10 to 20 percent over the first three minutes; during sleep, by comparison, oxygen consumption decreases by around 8 percent over four or five hours. For meditators who have practiced for years, breath rate can drop to three or four breaths per minute and "brain waves slow from the usual beta (seen in waking activity) or alpha (seen in normal relaxation) to much slower delta and theta waves".
Studies demonstrate that meditation has a moderate effect to reduce pain. There is insufficient evidence for any effect of meditation on positive mood, attention, eating habits, sleep, or body weight.
Luberto et al. (2017), in a systematic review and meta-analysis of the effects of meditation on empathy, compassion, and prosocial behaviors, found that meditation practices had small to medium effects on self-reported and observable outcomes, concluding that such practices can "improve positive prosocial emotions and behaviors". However, a meta-review published in Scientific Reports showed that the evidence is very weak and "that the effects of meditation on compassion were only significant when compared to passive control groups suggests that other forms of active interventions (like watching a nature video) might produce similar outcomes to meditation".
"Challenging" and adverse effects
Contemplative traditions
Throughout East Asia, the detrimental and undesirable effects of incorrect meditation and mindfulness practice are well documented, owing to the long and varied history of cultivation in these fields. Many traditional herbal, intentional and manual treatments have been prescribed, from the past to the present day, for what is diagnosed as zouhuorumo.
Meditation may induce "challenging" and "unwanted" experiences, and adverse effects on physical and mental health. Some of these experiences and effects are documented in the contemplative traditions, but they can be quite perplexing and burdensome when meditation is expected to produce beneficial rather than detrimental health outcomes. The problem is compounded by the scarcity of publicly accessible support or explanatory frameworks for novices and laypeople, which would help a practitioner know when it is appropriate to self-manage and when it is advisable to seek professional advice about the adverse symptoms that may arise in this field of self-cultivation.
According to Farias et al. (2020), the most common adverse effects occur in people with a history of anxiety and depression. Other adverse psychological symptoms may include narcissistic or sociopathic behaviour, depersonalization or an altered sense of self or the world, distorted emotions or thoughts, and a mild form of psychosis including auditory and visual hallucinations. In extreme cases in patients with underlying undiagnosed or historical emotional conditions, there have been instances of self-harm.
According to Schlosser et al. (2019), "preliminary findings suggest that their occurrence is highly dependent on a complex interaction of contextual factors." For instance, meditation-related psychosis has been linked to sleep deprivation, preceding mental dispositions, and meditation without sufficient social support or any explanatory framework. However, according to Farias et al. (2020), "minor adverse effects have been observed in individuals with no previous history of mental health problems." Farias et al. (2020) further note that "it is also possible that participants predisposed to heightened levels of anxiety and depression are more likely to begin or maintain a meditation practice to manage their symptoms."
According to Farias et al. (2020), the prevalence of adverse effects is 8.3%, "similar to those reported for psychotherapy practice in general." Schlosser et al. (2019) reported that of 1,232 regular meditators with at least two months of meditation experience, about a quarter reported having had particularly unpleasant meditation-related experiences which they thought may have been caused by their meditation practice. Meditators with high levels of repetitive negative thinking and those who only engage in deconstructive meditation (vipassana/insight meditation) were more likely to report unpleasant side effects.
The appraisal of the experiences may be determined by the framework used to interpret these experiences. Schlosser et al. "found strong evidence that religious participants have lower odds of having particularly unpleasant meditation-related experiences," and "found weak evidence that female participants were less likely to have unpleasant meditation-related experiences," and note the importance of "understanding when these experiences are constitutive elements of meditative practice rather than merely negative effects."
Difficult experiences encountered in meditation are mentioned in traditional sources, and some may be considered to be an expected part of the process, as Salguero has noted.
The Visuddhimagga mentions various unpleasant stages, and possible "unwholesome or frightening visions" are mentioned in Practical Insight Meditation: Basic and Progressive Stages, a practical manual on vipassanā meditation by Mahāsi Sayādaw. Classical sources mention makyō, Zen sickness, and related difficulties, such as zouhuorumo and mojing. Traditional sources also prescribe cures for these experiences, for example Hakuin Ekaku's treatment of Zen sickness.
Mindfulness
Both the soundness of the scientific foundations of mindfulness, and the desirability of its social effects, have been questioned. Hafenbrack et al. (2022), in a study on mindfulness with 1400 participants, found that focused-breathing meditation can dampen the relationship between transgressions and the desire to engage in reparative prosocial behaviors. Poullin et al. (2021) found that mindfulness can increase the trait of selfishness. The study, consisting of two interrelated parts and totaling 691 participants, found that a mindfulness induction, compared to a control condition, led to decreased prosocial behavior. This effect was moderated by self-construals, such that people with relatively independent self-construals became less prosocial while people with relatively interdependent self-construals became more so. In the Western world, where independent self-construals generally predominate (a self-centric orientation), meditation may thus have potentially detrimental effects. These findings about meditation's socially problematic effects imply that it can be contraindicated to use meditation as a tool to handle acute personal conflicts or relational difficulties; in the words of Andrew Hafenbrack, one of the authors of the study, "If we 'artificially' reduce our guilt by meditating it away, we may end up with worse relationships, or even fewer relationships".
Secular applications
Psychotherapy
Carl Jung (1875–1961) was an early western explorer of eastern religious practices. He clearly advocated ways to increase the conscious awareness of an individual. Yet he expressed some caution concerning a westerner's direct immersion in eastern practices without some prior appreciation of the differing spiritual and cultural contexts. Erich Fromm (1900–1980) later explored spiritual practices of the east.
Clinical
Since the 1970s, clinical psychology and psychiatry have developed meditation techniques for numerous psychological conditions. Mindfulness practice is employed in psychology to alleviate mental and physical conditions, such as by affecting the endocrine system and thereby reducing depression, and by helping to alleviate stress and anxiety. Mindfulness is also used as a form of interventional therapy in the treatment of addiction, including drug addiction, although the quantity and quality of evidence-based research has been poor.
The US National Center for Complementary and Integrative Health (NCCIH) states that "Meditation and mindfulness practices may have a variety of health benefits and may help people improve the quality of their lives. Recent studies have investigated if meditation or mindfulness helps people manage anxiety, stress, depression, pain, or symptoms related to withdrawal from nicotine, alcohol, or opioids." However, the NCCIH goes on to caution that "results from the studies have been difficult to analyze and may have been interpreted too optimistically."
A 2014 review found that practice of mindfulness meditation for two to six months by people undergoing long-term psychiatric or medical therapy could produce moderate improvements in pain, anxiety, and depression. In 2017, the American Heart Association issued a scientific statement that meditation may be a reasonable adjunct practice and intervention to help reduce the risk of cardiovascular diseases, with the qualification that meditation needs to be better defined in higher-quality clinical research of these disorders. Recent findings have also provided evidence of meditation affecting migraines in adults: mindfulness meditation may allow for a decrease in migraine episodes and a drop in migraine medication usage.
Early low-quality and low-quantity evidence indicates that meditation may help with irritable bowel syndrome, insomnia, cognitive decline in the elderly, and post-traumatic stress disorder. Sitting in silence, body-scan meditation and concentrating on breathing were shown in a 2016 review to moderately decrease symptoms of PTSD and depression in war veterans and to build resilience to the stresses of active service. Researchers have found that participating in mindfulness meditation can aid insomnia patients by improving sleep quality and total wake time. Mindfulness meditation is a supportive therapy that aids in the treatment of patients diagnosed with insomnia.
In the workplace
A 2010 review of the literature on spirituality and performance in organizations found an increase in corporate meditation programs.
As of 2016 around a quarter of U.S. employers were using stress reduction initiatives. The goal was to help reduce stress and improve reactions to stress. Aetna now offers its program to its customers. Google also implements mindfulness, offering more than a dozen meditation courses, with the most prominent one, "Search Inside Yourself", having been implemented since 2007. General Mills offers the Mindful Leadership Program Series, a course which uses a combination of mindfulness meditation, yoga and dialogue with the intention of developing the mind's capacity to pay attention.
Many military organizations around the world have found meditation and mindfulness practice can support a range of benefits related to combat, including support for mental health, mental clarity, focus and stress control.
In school
A review of 15 peer-reviewed studies of youth meditation in schools indicated that transcendental meditation had a moderate effect on wellbeing and a small effect on social competence. Insufficient research has been done on the effect of meditation on academic achievement. Evidence has also shown possible improvements in stress and cognitive performance from meditation taught in schools.
Positive effects on emotion regulation, stress and anxiety can also be seen in university and nursing students.
Relaxation response and biofeedback
Herbert Benson of Harvard Medical School conducted a series of clinical tests on meditators from various disciplines, including the Transcendental Meditation technique and Tibetan Buddhism. In 1975, Benson published a book titled The Relaxation Response where he outlined his own version of meditation for relaxation. Also in the 1970s, the American psychologist Patricia Carrington developed a similar technique called Clinically Standardized Meditation (CSM). In Norway, another sound-based method called Acem Meditation developed a psychology of meditation and has been the subject of several scientific studies.
Biofeedback has been used by many researchers since the 1950s in an effort to enter deeper states of mind.
See also
Altered state of consciousness
Autogenic training
Ego death
Flow
Four foundations of mindfulness
Hypnosis
Immanence
Mechanisms of mindfulness meditation
Mushin (mental state)
Narrative identity
Psychology of religion
Sensory deprivation
Tukdam
Notes
References
Sources
Printed sources
Web sources
Further reading
Articles containing video clips
Concepts in the philosophy of mind
Concepts in the philosophy of science
History of psychology
Mind–body interventions
New Age practices
Personal development
Religion articles needing expert attention
Religious practices
Spiritual practice
Silence
Yoga | Meditation | Biology | 10,531 |
16,864,178 | https://en.wikipedia.org/wiki/Torch%20Triple%20X | The Torch Triple X (or XXX) was a UNIX workstation computer produced by the British company Torch Computers, and launched in 1985. It was based on the Motorola 68010 microprocessor and ran a version of UNIX System V.
Hardware
The Triple X was based on an 8 MHz 68010 CPU, with a Hitachi 6303 "service processor". The CPU was accompanied by a 68451 memory management unit and a 68450 DMA controller. Both VMEbus and a BBC Micro-compatible "1MHz bus" expansion buses were provided, as was a SCSI host adapter, and an optional Ethernet interface. Both RS-423 and X.25-compatible synchronous serial ports were provided. This latter feature made the Triple X attractive to the UK academic community, where X.25 networks were prevalent at the time.
Standard RAM capacity was 1 MB, expandable to 7 MB via VME cards. A 720 KB, 5.25-in floppy disk drive and ST-506-compatible 20 MB hard disk were fitted as standard, interfaced to the SCSI bus via an OMTI adapter.
Either a 10 or 13 inch colour monitor was supplied. Two graphics modes were available: 720 × 256 pixels in four colours, or 720 × 512 in two colours.
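Both modes describe the same amount of pixel data: assuming a simple packed-pixel framebuffer with no padding (an assumption for illustration, since the Triple X's actual video memory layout is not described here), each mode needs about 45 KB of video memory, consistent with a single fixed-size framebuffer being reconfigured between the two modes. A quick check:

```python
# Rough framebuffer size for the Triple X's two documented graphics modes,
# assuming packed pixels with no padding (an assumption for illustration).
def framebuffer_bytes(width: int, height: int, colours: int) -> int:
    bits_per_pixel = (colours - 1).bit_length()  # 4 colours -> 2 bpp, 2 colours -> 1 bpp
    return width * height * bits_per_pixel // 8

print(framebuffer_bytes(720, 256, 4))  # 46080 bytes (45 KB)
print(framebuffer_bytes(720, 512, 2))  # 46080 bytes (45 KB)
```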
The Triple X had a novel touch-sensitive "soft" power switch. When switching off, this commanded the operating system to shut down gracefully before powering down.
Software
The Triple X's firmware was called Caretaker. The native operating system was Uniplus+ UNIX System V Release 2. A graphical user interface called OpenTop was also included as standard.
Quad X
The Quad X was an enhanced version of the Triple X, with a 68020 processor and three VME expansion slots. It was produced only in small numbers before Torch became insolvent.
References
Bibliography
Computer workstations
68k-based computers
32-bit computers
Computers designed in the United Kingdom | Torch Triple X | Technology | 405 |
20,144,524 | https://en.wikipedia.org/wiki/Anaerobic%20corrosion | Anaerobic corrosion (also known as hydrogen corrosion) is a form of metal corrosion occurring in anoxic water. Typically following aerobic corrosion, anaerobic corrosion involves a redox reaction that reduces hydrogen ions and oxidizes a solid metal. This process can occur in either abiotic conditions through a thermodynamically spontaneous reaction or biotic conditions through a process known as bacterial anaerobic corrosion. Along with other forms of corrosion, anaerobic corrosion is significant when considering the safe, permanent storage of chemical waste.
Chemical mechanisms
The overall process of corrosion can be represented by a bimodal function, where the type of corrosion varies with time, including both oxygen-driven and anaerobic mechanisms. The dominant process will depend on the given conditions. During oxygen-driven corrosion, layers of rust form, creating various non-homogenous anoxic niches throughout the metal's surface. Within the niches the diffusion of oxygen is inhibited, leading to the ideal conditions for anaerobic corrosion to occur.
Abiotic
Under anoxic conditions, the mechanism for corrosion requires a substitute for oxygen as the oxidizing agent in the redox reaction. For abiotic anaerobic corrosion, that substitute is the hydrogen ion produced in the dissociation of water and the proceeding reduction of the hydrogen ions into diatomic hydrogen gas. The anodic half-reaction involves the oxidation of a metal in aqueous solution into a metal hydroxide. A common reaction that represents this process is the transformation of solid iron in steel into ferrous hydroxide as visualized in the following overall redox reaction.
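The reaction referred to above, with iron as the representative metal, is commonly written in the corrosion literature as follows (a standard formulation included here for completeness, not quoted from this article's sources):

```latex
\mathrm{Fe} + 2\,\mathrm{H_2O} \longrightarrow \mathrm{Fe(OH)_2} + \mathrm{H_2}
```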
The ferrous hydroxide may be oxidized further by additional hydrogen ions in water to form the mineral magnetite (Fe3O4) in the process called the Schikorr reaction.
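The Schikorr reaction itself is conventionally written as (again a standard textbook form, added for completeness):

```latex
3\,\mathrm{Fe(OH)_2} \longrightarrow \mathrm{Fe_3O_4} + \mathrm{H_2} + 2\,\mathrm{H_2O}
```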
In general, the anaerobic corrosion of metals such as iron and copper occurs at very slow rates. However, in chloride-containing aqueous environments, the rate increases because of the introduction of new mechanisms with the addition of chloride anions.
Biotic
When in biotic conditions, anaerobic corrosion can be facilitated by the metabolic activity of microorganisms in the surrounding environment. This process is known as microbiologically-influenced corrosion or bacterial anaerobic corrosion. Most notably, dissolved sulfides produced by sulfate-reducing bacteria (SRB) react with solid metals and hydrogen ions to form metal sulfides in a redox reaction.
Environmental significance
The effects of anaerobic corrosion are evident when evaluating the safety of chemical waste disposal. Currently, the permanent disposal of nuclear waste commonly takes place in deep geological repositories (DGRs) that use copper coatings to prevent metal corrosion. In a DGR, four major types of corrosion are expected to occur: oxygen-driven, radiation-influenced, anaerobic, and microbiologically-influenced corrosion. Of these, microbiologically-influenced corrosion is the most notable in terms of the magnitude of corrosion. The ability of microorganisms such as SRB to survive in a wide range of environments also adds to their relevance when considering the threat of corrosion to permanent chemical waste disposal.
See also
Corrosion
Bacterial anaerobic corrosion
Electrochemistry
Sulfate-reducing microorganism
Redox reaction
References
Corrosion
Hydrogen production | Anaerobic corrosion | Chemistry,Materials_science | 691 |
54,053,902 | https://en.wikipedia.org/wiki/Multi-mission%20Modular%20Spacecraft | Multi-mission Modular Spacecraft, also known as the MMS, was originally designed by NASA to serve the widest possible array of functions for the space program and thereby decrease the cost of space missions. It was designed to operate in four distinct mission areas. Development of the MMS began about a decade before it entered service in the 1980s and 1990s. The basic MMS was made up of three modules: the attitude control, communications and data handling, and power subsystems. The idea of a modular system serving many purposes pioneered approaches that still shape space technology today, leaving a lasting legacy. The MMS was intended to be "Shuttle compatible", i.e. recoverable/serviceable by the Space Shuttle orbiter.
Missions
It was used for:
Solar Maximum Mission (SMM), 1980
Landsat 4, 1982
Landsat 5, 1984
Upper Atmosphere Research Satellite (UARS), 1991
Extreme Ultraviolet Explorer (EUVE), 1992
TOPEX/Poseidon, 1992
Development
Before the MMS became the standardized spacecraft system, NASA studied how to make space exploration more cost-effective. To achieve lower-cost space travel, NASA approached the idea with a production-line mentality, reusing inherited parts in as many aspects of the rocket as possible to allow fast production. The first idea was to use existing spacecraft designs with slight modifications. The main issue was designing a computer that could serve any mission with only slight modifications. To do this, NASA developed a computer for the MMS that could serve various types of missions within the Solar System, such as solar, stellar, and Earth missions. Because this space computer was designed to be easily changed, instead of building a new computer with all new hardware for every mission, only software changes were needed. This design greatly reduced the cost of developing new spacecraft.
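The software-over-hardware approach described above can be illustrated with a small, purely hypothetical sketch: a fixed onboard computer whose mission-specific behaviour is supplied as swappable software. The class and function names are invented for illustration and do not reflect NASA's actual flight software.

```python
from typing import Callable, Dict

# Hypothetical illustration of the MMS idea: one fixed computer,
# adapted to each mission by loading different software rather than
# rebuilding the hardware.
class OnboardComputer:
    def __init__(self) -> None:
        self.mission_programs: Dict[str, Callable[[], str]] = {}

    def load_program(self, mission: str, program: Callable[[], str]) -> None:
        # Swapping software, not hardware, adapts the craft to a new mission.
        self.mission_programs[mission] = program

    def run(self, mission: str) -> str:
        return self.mission_programs[mission]()

computer = OnboardComputer()
computer.load_program("solar", lambda: "point instruments at the Sun")
computer.load_program("earth", lambda: "image the Earth's surface")
print(computer.run("solar"))
print(computer.run("earth"))
```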
Modules
What made the MMS so effective was its adaptability, which allowed the spacecraft to conduct missions in a multitude of areas. The MMS was designed using multiple modules that made this possible. The modules included the ACS module, power module, small impulse propulsion module, large impulse propulsion module, C&DH module, and module support structure. This system allowed for interchangeable software and hardware, and ultimately allowed the spacecraft to be repaired and reused at a lower cost.
References
NASA satellites
Spacecraft components
Spacecraft design | Multi-mission Modular Spacecraft | Astronomy,Engineering | 483 |