id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
607,530 | https://en.wikipedia.org/wiki/Reaction%20mechanism | In chemistry, a reaction mechanism is the step by step sequence of elementary reactions by which an overall chemical reaction occurs.
A chemical mechanism is a theoretical conjecture that tries to describe in detail what takes place at each stage of an overall chemical reaction. The detailed steps of a reaction are not observable in most cases. The conjectured mechanism is chosen because it is thermodynamically feasible and has experimental support in isolated intermediates (see next section) or other quantitative and qualitative characteristics of the reaction. It also describes each reactive intermediate, activated complex, and transition state, which bonds are broken (and in what order), and which bonds are formed (and in what order). A complete mechanism must also explain the reason for the reactants and catalyst used, the stereochemistry observed in reactants and products, all products formed and the amount of each.
The electron or arrow pushing method is often used in illustrating a reaction mechanism; for example, see the illustration of the mechanism for benzoin condensation in the following examples section.
Mechanisms are also of interest in inorganic chemistry. An often-quoted mechanistic experiment involved the reaction of the labile hexaaquo chromous reductant with the exchange-inert pentammine cobalt(III) chloride.
Reaction intermediates
Reaction intermediates are chemical species, often unstable and short-lived. They can, however, sometimes be isolated. They are neither reactants nor products of the overall chemical reaction, but temporary products and/or reactants in the mechanism's reaction steps. Reaction intermediates are often confused with the transition state. The transition states are, in contrast, fleeting, high-energy species that cannot be isolated. The kinetics (relative rates of the reaction steps and the rate equation for the overall reaction) are discussed in terms of the energy required for the conversion of the reactants to the proposed transition states (molecular states that correspond to maxima on the reaction coordinates, and to saddle points on the potential energy surface for the reaction).
Chemical kinetics
Information about the mechanism of a reaction is often provided by analyzing chemical kinetics to determine the reaction order in each reactant.
Illustrative is the oxidation of carbon monoxide by nitrogen dioxide:
CO + NO2 → CO2 + NO
The rate law for this reaction is: rate = k[NO2]^2
This form shows that the rate-determining step does not involve CO. Instead, the slow step involves two molecules of NO2. A possible mechanism for the overall reaction that explains the rate law is:
2 NO2 → NO3 + NO (slow)
NO3 + CO → NO2 + CO2 (fast)
Each step is called an elementary step, and each has its own rate law and molecularity. The sum of the elementary steps gives the net reaction.
When determining the overall rate law for a reaction, the slowest step is the step that determines the reaction rate. Because the first step (in the above reaction) is the slowest step, it is the rate-determining step. Because it involves the collision of two NO2 molecules, it is a bimolecular reaction with a rate which obeys the rate law rate = k[NO2]^2.
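The role of the slow step can also be illustrated numerically. The following short Python sketch (an illustration added here, not taken from the article; the rate constants and starting concentrations are arbitrary values chosen only so that the first step is much slower than the second) integrates the two elementary steps and shows that the amount of CO2 formed is essentially independent of the initial CO concentration, as the rate law requires.

# Minimal Euler integration of the proposed two-step mechanism.
# All rate constants and concentrations are arbitrary illustrative values.
def simulate(co0, k_slow=1.0e-3, k_fast=1.0e3, t_end=5.0, dt=1.0e-4):
    no2, co, no3, no, co2 = 1.0, co0, 0.0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        r1 = k_slow * no2 * no2        # 2 NO2 -> NO3 + NO    (slow)
        r2 = k_fast * no3 * co         # NO3 + CO -> NO2 + CO2 (fast)
        no2 += (-2.0 * r1 + r2) * dt
        no3 += (r1 - r2) * dt
        no += r1 * dt
        co += -r2 * dt
        co2 += r2 * dt
        t += dt
    return co2

# Doubling the initial CO concentration barely changes the CO2 produced,
# because the overall rate is governed by the slow NO2 + NO2 step alone.
print(simulate(co0=1.0))
print(simulate(co0=2.0))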
Other reactions may have mechanisms of several consecutive steps. In organic chemistry, the reaction mechanism for the benzoin condensation, put forward in 1903 by A. J. Lapworth, was one of the first proposed reaction mechanisms.
A chain reaction is an example of a complex mechanism, in which the propagation steps form a closed cycle.
In a chain reaction, the intermediate produced in one step generates an intermediate in another step.
Intermediates are called chain carriers. The chain carriers are often radicals, but they can be ions as well. In nuclear fission they are neutrons.
Chain reactions have several steps, which may include:
Chain initiation: this can be by thermolysis (heating the molecules) or photolysis (absorption of light) leading to the breakage of a bond.
Propagation: a chain carrier makes another carrier.
Branching: one carrier makes more than one carrier.
Retardation: a chain carrier may react with a product reducing the rate of formation of the product. It makes another chain carrier, but the product concentration is reduced.
Chain termination: radicals combine and the chain carriers are lost.
Inhibition: chain carriers are removed by processes other than termination, such as by forming radicals.
Even though all these steps can appear in one chain reaction, the minimum necessary ones are initiation, propagation, and termination.
An example of a simple chain reaction is the thermal decomposition of acetaldehyde (CH3CHO) to methane (CH4) and carbon monoxide (CO). The experimental reaction order is 3/2, which can be explained by a Rice-Herzfeld mechanism.
This reaction mechanism for acetaldehyde has 4 steps with rate equations for each step:
Initiation: CH3CHO → •CH3 + •CHO (Rate = k1[CH3CHO])
Propagation: CH3CHO + •CH3 → CH4 + CH3CO• (Rate = k2[CH3CHO][•CH3])
Propagation: CH3CO• → •CH3 + CO (Rate = k3[CH3CO•])
Termination: •CH3 + •CH3 → CH3CH3 (Rate = k4[•CH3]^2)
For the overall reaction, the rates of change of the concentration of the intermediates •CH3 and CH3CO• are zero, according to the steady-state approximation, which is used to account for the rate laws of chain reactions.
d[•CH3]/dt = k1[CH3CHO] − k2[•CH3][CH3CHO] + k3[CH3CO•] − 2k4[•CH3]^2 = 0
and d[CH3CO•]/dt = k2[•CH3][CH3CHO] − k3[CH3CO•] = 0
The sum of these two equations is k1[CH3CHO] − 2k4[•CH3]^2 = 0. This may be solved to find the steady-state concentration of •CH3 radicals as [•CH3] = (k1/2k4)^(1/2) [CH3CHO]^(1/2).
It follows that the rate of formation of CH4 is d[CH4]/dt = k2[•CH3][CH3CHO] = k2 (k1/2k4)^(1/2) [CH3CHO]^(3/2)
Thus the mechanism explains the observed rate expression for the principal products CH4 and CO. The exact rate law may be even more complicated; there are also minor products such as acetone (CH3COCH3) and propanal (CH3CH2CHO).
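The steady-state algebra above can be checked with a computer algebra system. The following sketch (added for illustration; the symbol names A for [CH3CHO], m for [•CH3] and ac for [CH3CO•] are choices made here, not notation from the original treatment) uses sympy to solve the two steady-state conditions and recover the 3/2-order rate of methane formation.

import sympy as sp

k1, k2, k3, k4, A = sp.symbols('k1 k2 k3 k4 A', positive=True)   # A stands for [CH3CHO]
m, ac = sp.symbols('m ac', positive=True)                         # m = [•CH3], ac = [CH3CO•]

d_m_dt = k1*A - k2*m*A + k3*ac - 2*k4*m**2    # right-hand side of d[•CH3]/dt
d_ac_dt = k2*m*A - k3*ac                      # right-hand side of d[CH3CO•]/dt

ac_ss = sp.solve(sp.Eq(d_ac_dt, 0), ac)[0]                    # [CH3CO•] at steady state
m_roots = sp.solve(sp.Eq(d_m_dt.subs(ac, ac_ss), 0), m)       # quadratic in [•CH3]
m_ss = [root for root in m_roots if root.is_positive][0]      # keep the physically meaningful root

rate_ch4 = sp.simplify(k2 * m_ss * A)       # d[CH4]/dt = k2[•CH3][CH3CHO]
print(sp.simplify(m_ss))                    # equivalent to (k1*A/(2*k4))**(1/2)
print(rate_ch4)                             # equivalent to k2*(k1/(2*k4))**(1/2)*A**(3/2), order 3/2 in [CH3CHO]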
Other experimental methods to determine mechanism
Many experiments that suggest the possible sequence of steps in a reaction mechanism have been designed, including:
measurement of the effect of temperature (Arrhenius equation) to determine the activation energy
spectroscopic observation of reaction intermediates
determination of the stereochemistry of products, for example in nucleophilic substitution reactions
measurement of the effect of isotopic substitution on the reaction rate
for reactions in solution, measurement of the effect of pressure on the reaction rate to determine the volume change on formation of the activated complex
for reactions of ions in solution, measurement of the effect of ionic strength on the reaction rate
direct observation of the activated complex by pump-probe spectroscopy
infrared chemiluminescence to detect vibrational excitation in the products
electrospray ionization mass spectrometry.
crossover experiments.
Theoretical modeling
A correct reaction mechanism is an important part of accurate predictive modeling. For many combustion and plasma systems, detailed mechanisms are not available or require development.
Even when information is available, identifying and assembling the relevant data from a variety of sources, reconciling discrepant values and extrapolating to different conditions can be a difficult process without expert help. Rate constants or thermochemical data are often not available in the literature, so computational chemistry techniques or group additivity methods must be used to obtain the required parameters.
Computational chemistry methods can also be used to calculate potential energy surfaces for reactions and determine probable mechanisms.
Molecularity
Molecularity in chemistry is the number of colliding molecular entities that are involved in a single reaction step.
A reaction step involving one molecular entity is called unimolecular.
A reaction step involving two molecular entities is called bimolecular.
A reaction step involving three molecular entities is called trimolecular or termolecular.
In general, reaction steps involving more than three molecular entities do not occur, because it is statistically improbable in terms of the Maxwell distribution to find such a transition state.
See also
Organic reactions by mechanism
Nucleophilic acyl substitution
Neighbouring group participation
Finkelstein reaction
Lindemann mechanism
Electrochemical reaction mechanism
Nucleophilic abstraction
References
L. G. Wade, Organic Chemistry, 7th ed., 2010
External links
Reaction mechanisms for combustion of hydrocarbons
Mechanism
Chemical kinetics
Chemical reaction engineering
Combustion | Reaction mechanism | [
"Chemistry",
"Engineering"
] | 1,869 | [
"Reaction mechanisms",
"Chemical reaction engineering",
"Chemical engineering",
"Combustion",
"Physical organic chemistry",
"nan",
"Chemical kinetics"
] |
607,547 | https://en.wikipedia.org/wiki/Otto%20Finsch | Friedrich Hermann Otto Finsch (8 August 1839, Warmbrunn – 31 January 1917, Braunschweig) was a German ethnographer, naturalist and colonial explorer. He is known for a two-volume monograph on the parrots of the world which earned him a doctorate. He also wrote on the people of New Guinea and was involved in plans for German colonization in Southeast Asia. Several species of bird (such as Oenanthe finschii, Iole finschii, Psittacula finschii) are named after him, as are the town of Finschhafen in Morobe Province, Papua New Guinea, and a crater on the Moon.
Biography
Finsch was born at Bad Warmbrunn in Silesia to Moritz Finsch and Mathilde née Leder. His father was in the glass trade and he too trained as a glass painter. An interest in birds led him to use his artistic skills for the purpose. Finsch went to Budapest in 1857 and studied at the Royal Hungarian University, earning money by preparing natural history specimens. He then spent two years in Russe, Bulgaria on an invitation from the Austrian Consul and gave private tutoring in German while exploring the birdlife of the region. He published his first paper in the Journal für Ornithologie on the birds of Bulgaria. This experience helped him obtain a curatorial position at the Rijksmuseum van Natuurlijke Historie in Leiden (1862–1865) assisting Herman Schlegel. In 1864 he returned to Germany on the suggestion of Gustav Hartlaub to become curator of the museum in Bremen and became its director in 1876. After publishing the two-volume monograph on the parrots of the world, Die Papageien, monographisch bearbeitet (1867–68), he obtained an honorary doctorate from the Friedrich Wilhelms University in Bonn. Apart from ornithology he also took an interest in ethnology. In 1876 he accompanied the zoologist Alfred Brehm on an expedition to Turkestan and northwest China.
Finsch resigned as curator of the museum in 1878 in order that he could resume his travels, sponsored by the Humboldt Foundation. Between spring 1879 and 1885 he made several visits to the Polynesian Islands, New Zealand, Australia and New Guinea. His proposal was to obtain as many artefacts as possible with the claim that native cultures, fauna and flora were fast vanishing. Finsch was shocked by the punitive actions of the English Methodist missionary George Brown (1835–1917) and was concerned by the violent conflicts between the natives and westerners. He also found no support for contemporary ideas on race with neat categories and found instead a continuum of variations in the human form. After witnessing a cannibal feast at Matupit he commented that the people were still not classifiable as "savages" as they maintained neat agriculture, had their own song and dance, and followed commerce. He returned to Germany in 1882 and began to promote the creation of German colonies in the Pacific along with the South Sea Plotters, an influential group led by the banker Adolph von Hansemann. In 1884 he returned aboard the steamer Samoa to New Guinea as Bismarck's Imperial Commissioner to explore potential harbours under the guise of a scientific expedition, and negotiated for the north-eastern portion of that island, together with New Britain and New Ireland, to become a German protectorate. These territories were named Kaiser-Wilhelmsland and the Bismarck Archipelago. The capital of the colony was named Finschhafen in his honour. In 1885 he was the first European to discover the Sepik river, and he named it after Kaiserin Augusta, the German Empress. Newspapers of the period speculated that he would be appointed as an administrator to the new territories but this never happened. He was instead offered a position as station director which involved menial administrative tasks that would get in the way of his plans to explore and study the region. He returned to Germany and spent much of the subsequent period without formal employment. Finsch had been married to Josephine Wychodil from around 1873 but they divorced around 1880. In 1886 he married Elisabeth née Hoffman (1860–1925). Elisabeth was a talented artist and she illustrated many of his catalogues. Finsch was briefly an advisor to the Neuguinea-Kompagnie. In 1898 he abandoned his dreams in ethnology and returned to ornithology, becoming curator of the bird collections at the Rijksmuseum in Leiden. He did not enjoy this period, noting that life for him, his wife and their daughter Esther felt like living in exile. He also wrote several accounts of his past work, including Wie ich Kaiser-Wilhelmsland erwarb (How I acquired Kaiser Wilhelm's Land, 1902) and Kaiser-Wilhelmsland. Eine friedliche Kolonialerwerbung (Kaiser Wilhelm's Land: A peaceful colonial acquisition, 1905). Seeking a return to Germany, he finally joined the ethnographical department of the Municipal Museum in Brunswick in 1904 and worked there for the remainder of his life. In 1909 he was titled professor by the Duke of Braunschweig and honoured with a 'medal for distinguished services for art and science' in silver.
One of his major works was on the parrots of the world. This was not without its critics, since he often tried to rename genera apparently to gain taxonomic authorship.
Several species of birds bear his name, including the lilac-crowned parrot (Amazona finschi), Finsch's wheatear (Oenanthe finschii), Finsch's bulbul (Alophoixus finschii), and the grey-headed parakeet (Psittacula finschii). A species of monitor lizard, Varanus finschi, is named after him, because he collected what would become the holotype for this species. The crater Finsch on the Moon is also named in his honor.
In 2008, following international treaties, some of the human remains that he had collected from Cape York and the Torres Straits that were held in the Charité Medical University in Berlin were repatriated. Additional remains have also been repatriated.
Published works
Catalog der Ausstellung ethnographischer und naturwissenschaftlicher Sammlungen (Bremen: Diercksen und Wichlein, 1877).
Anthropologische Ergebnisse einer Reise in der Südsee und dem malayischen Archipel in den Jahren 1879–1882 (Berlin: A. Asher & Co., 1884).
Otto Finsch, Masks of Faces of Races of Men from the South Sea Islands and the Malay Archipelago, taken from Living Originals in the Years 1879–82 (Rochester, NY: Ward's Natural Sciences Establishment, 1888).
Ethnologische Erfahrungen und Belegstücke aus der Südsee: Beschreibender Katalog einer Sammlung im K.K. naturhistorischen Hofmuseum in Wien (Wien: A. Holder, 1893).
Die Papageien / monographisch bearbeitet von Otto Finsch Leiden: Brill, 1867–68.
with Gustav Hartlaub, "Die Vögel der Palau-Gruppe. Über neue und weniger gekannte Vögel von den Viti-, Samoa- und Carolinen-Inseln." Journal des Museum Godeffroy, Heft 8, 1875 and Heft 12, 1876.
References
Other sources
Herbert Abel, Otto Finsch: Ein Lebensbild Zur 50. Wiederkehr des Todestages am 31. Januar 1967. Jahrbuch der Schlesischen Friedrich-Wilhelms-Universität zu Breslau. Band XII. Wuerzburg: Holzner-Verlag.
Howes, Hilary, 2018. « A “Perceptive Observer” in the Pacific: Life and Work of Otto Finsch » in Bérose - Encyclopédie internationale des histoires de l’anthropologie
External links
Resources related to research : BEROSE - International Encyclopaedia of the Histories of Anthropology. "Finsch, Otto (1839-1917)", Paris, 2018. (ISSN 2648-2770)
AMNH anthropology collection
Digitised works by Otto Finsch at Biodiversity Heritage Library
1839 births
1917 deaths
People from Jelenia Góra
People from the Province of Silesia
19th-century German biologists
19th-century German explorers
German ornithologists
Taxon authorities
Explorers of Papua New Guinea
Explorers from the Kingdom of Prussia
Scholars from the Kingdom of Prussia | Otto Finsch | [
"Biology"
] | 1,765 | [
"Taxon authorities",
"Taxonomy (biology)"
] |
607,567 | https://en.wikipedia.org/wiki/Lacey%20V.%20Murrow%20Memorial%20Bridge | The Lacey V. Murrow Memorial Bridge is a floating bridge in the Seattle metropolitan area of the U.S. state of Washington. It is one of the Interstate 90 floating bridges that carries the eastbound lanes of Interstate 90 across Lake Washington from Seattle to Mercer Island. Westbound traffic is carried by the adjacent Homer M. Hadley Memorial Bridge.
The Murrow Bridge is the second-longest floating bridge in the world; the longest is the Governor Albert D. Rosellini Bridge–Evergreen Point, a few miles north on the same lake. The original Murrow Bridge opened in 1940 and was named the Lake Washington Floating Bridge. It was renamed the Lacey V. Murrow Bridge in 1967. The original bridge sank in 1990 while closed for renovation, and the current bridge opened in 1993.
Along with the east portals of the Mount Baker Ridge Tunnel, the bridge is an official City of Seattle landmark and a National Historic Civil Engineering Landmark. While the bridge originally had an opening span at its center to allow major waterborne traffic to pass, the only boat passages currently are elevated fixed spans at the termini.
History
The bridge was the brainchild of engineer Homer Hadley, who had made the first proposal in 1921. The bridge came about after intensive lobbying, particularly by George Lightfoot, who came to be called the "father of the bridge." Lightfoot began campaigning for the bridge in 1930, enlisting the support of Miller Freeman. Construction began January 1, 1939 and was completed in 1940. The construction cost for the project, including approaches, was approximately $9 million. It was partially financed by a bond issue of $4.184 million. Opened July 2, 1940, the bridge carried US 10 (later decommissioned and renamed Interstate 90). Tolls were removed in 1949. The bridge sank in a storm on November 25, 1990 during refurbishment and repair. The current bridge was built in 1993. The eponymous Lacey V. Murrow was the second director of the Washington State Highway Department and a highly decorated U.S. Air Force officer who served as a bomber pilot in World War II. A graduate of Washington State College in Pullman, he was the oldest brother of CBS commentator Edward R. Murrow.
The original bridge was built under a contract awarded to the Puget Sound Bridge and Dredging Company in the amount of $3.254 million. It included a movable span that could be retracted into a pocket in the center of the fixed span to permit large boats to pass. This design resulted in a roadway bulge that required vehicles to swerve twice across polished steel joints as they passed the bulge. A reversible lane system, indicated by lighted overhead lane control signals with arrow and 'X' signs, compounded the hazard by putting one lane of traffic on the "wrong" side of the bulge during morning and evening rush hours in an effort to alleviate traffic into or out of Seattle. There were many serious collisions on the bridge. The problems grew worse as the traffic load increased over the years and far outstripped the designed capacity. Renovation or replacement became essential and a parallel bridge, the Homer M. Hadley Memorial Bridge, was completed in 1989, and named for Hadley in 1993.
With the opening of the new bridge, the 49-year-old Murrow Bridge closed on June 23, 1989, for renovation that was projected to take about three years.
1990 disaster
On November 25, 1990, while under re-construction, the original bridge sank because of a series of human errors and decisions. The process started because the bridge needed resurfacing and was to be widened by means of cantilevered additions in order to meet the necessary lane-width specifications of the Interstate Highway System. The Washington State Department of Transportation (WSDOT) decided to use hydrodemolition (high-pressure water) to remove unwanted material (the sidewalks on the bridge deck). Water from this hydrodemolition was considered contaminated under environmental law and could not be allowed to flow into Lake Washington. Engineers then analyzed the pontoons of the bridge, and realized that they were over-engineered and the water could be stored temporarily in the pontoons. The watertight doors for the pontoons were therefore removed.
A large storm on November 22–24 (the Thanksgiving holiday weekend) filled some of the pontoons with rain and lake water. On Saturday, November 24, workers noticed that the bridge was about to sink, and started pumping out some of the pontoons; on Sunday, November 25, a section of the bridge sank, dumping the contaminated water into the lake along with tons of bridge material. It sank when one pontoon filled and dragged the rest down, because they were cabled together and there was no way to separate the sections under load. No one was hurt or killed, since the bridge was closed for renovation. All of the sinking was captured on film and shown on live TV. The cost of the disaster was $69 million in damages. A dozen anchoring cables for the new Hadley bridge were damaged, and it was closed for a short time afterward. Westbound traffic was allowed to resume first, and eastbound traffic resumed in early December.
The disaster delayed the bridge's reopening by 14 months, to September 12, 1993.
Precedents and lessons learned
WSDOT had lost another floating bridge, the Hood Canal Bridge, in February 1979 under similar circumstances. It is now known that the other major floating bridge in Washington, the Evergreen Point Floating Bridge, was under-engineered for local environmental conditions; that 1963 bridge was replaced with a new floating span in 2016.
See also
List of bridges documented by the Historic American Engineering Record in Washington (state)
List of bridges in Seattle
Homer M. Hadley Memorial Bridge
Notes
External links
Bridge Camera, includes some weather information
King-5 television video of the sinking
Bridge disasters in the United States
Bridge disasters caused by construction error
Bridges in Seattle
Bridges in King County, Washington
Bridges completed in 1993
Bridges completed in 1940
Interstate 90
Landmarks in Seattle
Monuments and memorials in Washington (state)
Historic Civil Engineering Landmarks
Road bridges in Washington (state)
Bridges on the Interstate Highway System
Transportation disasters in Washington (state)
Former toll bridges in Washington (state)
1940 establishments in Washington (state)
Construction accidents in the United States
Pontoon bridges in the United States
Former National Register of Historic Places in Washington (state)
Historic American Engineering Record in Washington (state)
Concrete bridges in Washington (state) | Lacey V. Murrow Memorial Bridge | [
"Engineering"
] | 1,295 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
607,577 | https://en.wikipedia.org/wiki/Activated%20complex | In chemistry, an activated complex represents a collection of intermediate structures in a chemical reaction when bonds are breaking and forming. The activated complex is an arrangement of atoms in an arbitrary region near the saddle point of a potential energy surface. The region represents not one defined state, but a range of unstable configurations that a collection of atoms pass through between the reactants and products of a reaction. Activated complexes have partial reactant and product character, which can significantly impact their behaviour in chemical reactions.
The terms activated complex and transition state are often used interchangeably, but they represent different concepts. Transition states only represent the highest potential energy configuration of the atoms during the reaction, while activated complex refers to a range of configurations near the transition state. In a reaction coordinate, the transition state is the configuration at the maximum of the diagram while the activated complex can refer to any point near the maximum.
Transition state theory (also known as activated complex theory) studies the kinetics of reactions that pass through a defined intermediate state with standard Gibbs energy of activation ΔG‡. The transition state, represented by the double dagger symbol ‡, represents the exact configuration of atoms that has an equal probability of forming either the reactants or products of the given reaction.
The activation energy is the minimum amount of energy to initiate a chemical reaction and form the activated complex. The energy serves as a threshold that reactant molecules must surpass to overcome the energy barrier and transition into the activated complex. Endothermic reactions absorb energy from the surroundings, while exothermic reactions release energy. Some reactions occur spontaneously, while others necessitate an external energy input. The reaction can be visualized using a reaction coordinate diagram to show the activation energy and potential energy throughout the reaction.
Activated complexes were first discussed in transition state theory (also called activated complex theory), which was first developed by Eyring, Evans, and Polanyi in 1935.
Reaction rate
Transition state theory
Transition state theory explains the dynamics of reactions. The theory is based on the idea that there is an equilibrium between the activated complex and reactant molecules. The theory incorporates concepts from collision theory, which states that for a reaction to occur, reacting molecules must collide with a minimum energy and correct orientation. The reactants are first transformed into the activated complex before breaking into the products. From the properties of the activated complex and reactants, the reaction rate constant is k = (kB T / h) K,
where K is the equilibrium constant for forming the activated complex from the reactants, kB is the Boltzmann constant, T is the thermodynamic temperature, and h is the Planck constant. Transition state theory is based on classical mechanics, as it assumes that as the reaction proceeds, the molecules will never return to the transition state.
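As a concrete illustration of this expression, the short Python sketch below (added here; the 50 kJ/mol barrier and the temperatures are arbitrary example values, and the equilibrium constant is written in terms of the standard Gibbs energy of activation mentioned above) evaluates the rate constant at a few temperatures.

import math

kB = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607015e-34     # Planck constant, J*s
R = 8.314462618        # molar gas constant, J/(mol*K)

def tst_rate_constant(delta_g_activation, temperature):
    # k = (kB*T/h) * K, with K written as exp(-deltaG_activation/(R*T));
    # deltaG_activation in J/mol, temperature in kelvin.
    K = math.exp(-delta_g_activation / (R * temperature))
    return kB * temperature / h * K

for T in (250.0, 300.0, 350.0):
    print(T, tst_rate_constant(50.0e3, T))   # the rate constant rises steeply with temperature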
Symmetry
An activated complex with high symmetry can decrease the accuracy of rate expressions. Error can arise from introducing symmetry numbers into the rotational partition functions for the reactants and activated complexes. To reduce errors, symmetry numbers can be omitted by multiplying the rate expression by a statistical factor:
where the statistical factor is the number of equivalent activated complexes that can be formed, and the Q are the partition functions from the symmetry numbers that have been omitted.
The activated complex is a collection of molecules that forms and then explodes along a particular internal normal coordinate. Ordinary molecules have three translational degrees of freedom, and their properties are similar to those of activated complexes. However, activated complexes have an extra degree of translation associated with their approach to the energy barrier, crossing it, and then dissociating.
See also
Coordination complex
Reaction intermediate
References
Chemical kinetics
Reaction mechanisms | Activated complex | [
"Chemistry"
] | 705 | [
"Reaction mechanisms",
"Chemical reaction engineering",
"Chemical kinetics",
"Physical organic chemistry"
] |
607,631 | https://en.wikipedia.org/wiki/Rhombic%20dodecahedron | In geometry, the rhombic dodecahedron is a convex polyhedron with 12 congruent rhombic faces. It has 24 edges, and 14 vertices of 2 types. As a Catalan solid, it is the dual polyhedron of the cuboctahedron. As a parallelohedron, the rhombic dodecahedron can be used to tessellate space with its copies, creating a rhombic dodecahedral honeycomb. There are some variations of the rhombic dodecahedron, one of which is the Bilinski dodecahedron. There are some stellations of the rhombic dodecahedron, one of which is Escher's solid. The rhombic dodecahedron also appears in garnet crystals, in architecture, in practical applications, and in toys.
As a Catalan solid
Metric properties
The rhombic dodecahedron is a polyhedron with twelve rhombi, each of whose long face-diagonal is exactly √2 times the length of the short face-diagonal, and whose acute angle measures arccos(1/3), approximately 70.53°. Its dihedral angle between two rhombi is 120°.
The rhombic dodecahedron is a Catalan solid, meaning the dual polyhedron of an Archimedean solid, the cuboctahedron; they share the same symmetry, the octahedral symmetry. It is face-transitive, meaning the symmetry group of the solid acts transitively on its set of faces. In elementary terms, this means that for any two faces, there is a rotation or reflection of the solid that leaves it occupying the same region of space while moving a face to another one. Along with the rhombic triacontahedron, it is one of the two Catalan solids whose isometry group is edge-transitive; the other convex edge-transitive polyhedra are the five Platonic solids and the two quasiregular Archimedean solids, the cuboctahedron (its dual) and the icosidodecahedron.
Denoting by a the edge length of a rhombic dodecahedron,
the radius of its inscribed sphere (tangent to each of the rhombic dodecahedron's faces) is (√6/3)a ≈ 0.8165a
the radius of its midsphere is (2√2/3)a ≈ 0.9428a
the radius of the sphere passing through the six order four vertices, but not through the eight order 3 vertices, is (2√3/3)a ≈ 1.1547a
the radius of the sphere passing through the eight order three vertices is exactly equal to the length of the sides: a
The surface area A and the volume V of the rhombic dodecahedron with edge length a are A = 8√2 a² ≈ 11.314a² and V = (16√3/9) a³ ≈ 3.079a³.
The rhombic dodecahedron can be viewed as the convex hull of the union of the vertices of a cube and an octahedron where the edges intersect perpendicularly. The six vertices where four rhombi meet correspond to the vertices of the octahedron, while the eight vertices where three rhombi meet correspond to the vertices of the cube.
The graph of the rhombic dodecahedron is nonhamiltonian.
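These metric claims can be checked numerically. The sketch below (an added illustration; it uses the standard vertex coordinates given in the construction subsection that follows, for which the edge length is √3) builds the convex hull of the cube and octahedron vertices and compares the resulting vertex count, volume and surface area with the values above.

import itertools
import numpy as np
from scipy.spatial import ConvexHull

cube = np.array(list(itertools.product((-1.0, 1.0), repeat=3)))   # the 8 order-3 vertices
octa = np.array([[2, 0, 0], [-2, 0, 0], [0, 2, 0], [0, -2, 0], [0, 0, 2], [0, 0, -2]], float)  # the 6 order-4 vertices
hull = ConvexHull(np.vstack([cube, octa]))

a = np.sqrt(3.0)              # edge length for these coordinates
print(len(hull.vertices))     # 14 vertices in total
print(hull.volume / a**3)     # about 3.0792, i.e. 16*sqrt(3)/9
print(hull.area / a**2)       # about 11.3137, i.e. 8*sqrt(2)
print(2.0 / a)                # about 1.1547, the order-4 circumradius in units of a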
Construction
For edge length √3, the eight vertices where three faces meet at their obtuse angles have Cartesian coordinates (±1, ±1, ±1). The six vertices where four faces meet at their acute angles have coordinates (±2, 0, 0), (0, ±2, 0) and (0, 0, ±2).
The rhombic dodecahedron can be seen as a degenerate limiting case of a pyritohedron, with permutation of coordinates and with parameter h = 1.
These coordinates illustrate that a rhombic dodecahedron can be seen as a cube with a square pyramid attached to each face, the six pyramids being of just the right proportions to fit together into a cube of the same size. Therefore, the rhombic dodecahedron has twice the volume of the inscribed cube with edges equal to the short diagonals of the rhombi. Alternatively, the rhombic dodecahedron can be constructed by inverting the six square pyramids until their apices meet at the cube's center.
As a space-filling polyhedron
The rhombic dodecahedron is a space-filling polyhedron, meaning it can be applied to tessellate three-dimensional space: it can be stacked to fill a space, much like hexagons fill a plane. It is a parallelohedron because copies of it can fill space in a honeycomb in which they all meet face-to-face. More generally, every parallelohedron is a zonohedron, a centrally symmetric polyhedron with centrally symmetric faces. As a parallelohedron, the rhombic dodecahedron can be constructed with four sets of six parallel edges.
The rhombic dodecahedral honeycomb (or dodecahedrille) is an example of a honeycomb constructed by filling space with rhombic dodecahedra. It is dual to the tetroctahedrille or half cubic honeycomb, and it can be described by two Coxeter diagrams. With D3d symmetry, the rhombic dodecahedron can be seen as an elongated trigonal trapezohedron. The honeycomb can be seen as the Voronoi tessellation of the face-centered cubic lattice. It is the Brillouin zone of body-centered cubic (bcc) crystals. Some minerals such as garnet form a rhombic dodecahedral crystal habit. As Johannes Kepler noted in his 1611 book on snowflakes (Strena seu de Nive Sexangula), honey bees use the geometry of rhombic dodecahedra to form honeycombs from a tessellation of cells, each of which is a hexagonal prism capped with half a rhombic dodecahedron. The rhombic dodecahedron also appears in the unit cells of diamond and diamondoids. In these cases, four vertices (alternate threefold ones) are absent, but the chemical bonds lie on the remaining edges.
A rhombic dodecahedron can be dissected into four congruent, obtuse trigonal trapezohedra around its center. These rhombohedra are the cells of a trigonal trapezohedral honeycomb. Analogously, a regular hexagon can be dissected into 3 rhombi around its center. These rhombi are the tiles of a rhombille.
Appearances
Practical usage
In spacecraft reaction wheel layout, a tetrahedral configuration of four wheels is commonly used. For wheels that perform equally (from a peak torque and max angular momentum standpoint) in both spin directions and across all four wheels, the maximum torque and maximum momentum envelopes for the 3-axis attitude control system (considering idealized actuators) are given by projecting the tesseract representing the limits of each wheel's torque or momentum into 3D space via the 3 × 4 matrix of wheel axes; the resulting 3D polyhedron is a rhombic dodecahedron. Such an arrangement of reaction wheels is not the only possible configuration (a simpler arrangement consists of three wheels mounted to spin about orthogonal axes), but it is advantageous in providing redundancy to mitigate the failure of one of the four wheels (with degraded overall performance available from the remaining three active wheels) and in providing a more convex envelope than a cube, which leads to less agility dependence on axis direction (from an actuator/plant standpoint). Spacecraft mass properties influence overall system momentum and agility, so decreased variance in envelope boundary does not necessarily lead to increased uniformity in preferred axis biases (that is, even with a perfectly distributed performance limit within the actuator subsystem, preferred rotation axes are not necessarily arbitrary at the system level).
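A rough numerical sketch of this projection is given below (added for illustration; the tetrahedral wheel-axis directions and the ±1 torque limits are assumptions of the example rather than data for any particular spacecraft). Pushing the sixteen corners of the per-wheel torque "tesseract" through a 3 × 4 matrix of wheel axes yields a convex envelope with the fourteen vertices of a rhombic dodecahedron.

import itertools
import numpy as np
from scipy.spatial import ConvexHull

# Four wheel spin axes pointing at alternating corners of a cube (a tetrahedral layout),
# one unit vector per row.
axes = np.array([[1.0, 1.0, 1.0],
                 [1.0, -1.0, -1.0],
                 [-1.0, 1.0, -1.0],
                 [-1.0, -1.0, 1.0]]) / np.sqrt(3.0)

# Every combination of the four wheels saturated at +1 or -1 torque (corners of a tesseract).
wheel_torques = np.array(list(itertools.product((-1.0, 1.0), repeat=4)))   # shape (16, 4)
body_torques = wheel_torques @ axes                                         # shape (16, 3)

hull = ConvexHull(body_torques)
print(len(hull.vertices))     # 14, matching the vertex count of a rhombic dodecahedron
radii = np.sort(np.linalg.norm(body_torques[hull.vertices], axis=1))
print(np.round(radii, 3))     # two distinct radii, in the ratio 2/sqrt(3) as for the two vertex types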
The polyhedron is also the basis for the HEALPix grid, used in cosmology for storing and manipulating maps of the cosmic microwave background, and in computer graphics for storing environment maps.
Miscellaneous
The collections of the Louvre include a die in the shape of a rhombic dodecahedron dating from Ptolemaic Egypt. The faces are inscribed with Greek letters representing the numbers 1 through 12: Α Β Γ Δ Ε Ϛ Z Η Θ Ι ΙΑ ΙΒ. The function of the die is unknown.
Other related figures
Topologically equivalent forms
Other symmetry constructions of the rhombic dodecahedron are also space-filling, and as parallelotopes they are similar to variations of space-filling truncated octahedra. One such form has 4 square faces, eight 60-degree rhombic faces, and D4h dihedral symmetry of order 16. It can be seen as a cuboctahedron with square pyramids attached on the top and bottom.
In 1960, Stanko Bilinski discovered a second rhombic dodecahedron with 12 congruent rhombus faces, the Bilinski dodecahedron. It has the same topology but different geometry. The rhombic faces in this form have diagonals in the golden ratio.
The deltoidal dodecahedron is another topological equivalent of the rhombic dodecahedron. It is isohedral with tetrahedral symmetry of order 24, distorting rhombic faces into kites (deltoids). It has 8 vertices adjusted in or out in alternate sets of 4, with the limiting case a tetrahedral envelope. Variations can be parametrized by (a,b), where b and a depend on each other such that the tetrahedron defined by the four vertices of a face has volume zero, i.e. is a planar face. (1,1) is the rhombic solution. As a approaches 1/2, b approaches infinity. It always holds that 1/a + 1/b = 2, with a, b > 1/2.
(±2, 0, 0), (0, ±2, 0), (0, 0, ±2)
(a, a, a), (−a, −a, a), (−a, a, −a), (a, −a, −a)
(−b, −b, −b), (−b, b, b), (b, −b, b), (b, b, −b)
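The coplanarity condition behind this parametrization can be verified symbolically. The sketch below (added for illustration) takes the kite face with vertices (2, 0, 0), (a, a, a), (0, 2, 0) and (b, b, −b), the analogue of the rhombic face through (2, 0, 0), (1, 1, 1), (0, 2, 0) and (1, 1, −1), and recovers the relation 1/a + 1/b = 2 by requiring the four points to be coplanar.

import sympy as sp

a, b = sp.symbols('a b', positive=True)

p0 = sp.Matrix([2, 0, 0])
p1 = sp.Matrix([a, a, a])
p2 = sp.Matrix([0, 2, 0])
p3 = sp.Matrix([b, b, -b])

# The four points are coplanar exactly when this determinant vanishes.
det = sp.Matrix.hstack(p1 - p0, p2 - p0, p3 - p0).det()
print(sp.factor(det))                    # a multiple of (2*a*b - a - b)
print(sp.solve(sp.Eq(det, 0), b)[0])     # b = a/(2*a - 1), i.e. 1/a + 1/b = 2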
Stellations
Like many convex polyhedra, the rhombic dodecahedron can be stellated by extending the faces or edges until they meet to form a new polyhedron. Several such stellations have been described by Dorman Luke. The first stellation, often called the stellated rhombic dodecahedron, can be seen as a rhombic dodecahedron with each face augmented by attaching a rhombic-based pyramid to it, with a pyramid height such that the sides lie in the face planes of the neighbouring faces. Luke describes four more stellations: the second and third stellations (expanding outwards), one formed by removing the second from the third, and another by adding the original rhombic dodecahedron back to the previous one.
Related polytope
The rhombic dodecahedron forms the hull of the vertex-first projection of a tesseract to three dimensions. There are exactly two ways of decomposing a rhombic dodecahedron into four congruent rhombohedra, giving eight possible rhombohedra as projections of the tesseract's 8 cubic cells. One set of projective vectors is: u = (1,1,−1,−1), v = (−1,1,−1,1), w = (1,−1,−1,1).
The rhombic dodecahedron forms the maximal cross-section of a 24-cell, and also forms the hull of its vertex-first parallel projection into three dimensions. The rhombic dodecahedron can be decomposed into six congruent (but non-regular) square dipyramids meeting at a single vertex in the center; these form the images of six pairs of the 24-cell's octahedral cells. The remaining 12 octahedral cells project onto the faces of the rhombic dodecahedron. The non-regularity of these images is due to projective distortion; the facets of the 24-cell are regular octahedra in 4-space.
This decomposition gives an interesting method for constructing the rhombic dodecahedron: cut a cube into six congruent square pyramids, and attach them to the faces of a second cube. The triangular faces of each pair of adjacent pyramids lie on the same plane, and so merge into rhombi. The 24-cell may also be constructed in an analogous way using two tesseracts.
See also
Truncated rhombic dodecahedron
Archimede construction systems
References
Further reading
(Section 3-9)
(The thirteen semiregular convex polyhedra and their duals, Page 19, Rhombic dodecahedron)
The Symmetries of Things 2008, John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, (Chapter 21, Naming the Archimedean and Catalan polyhedra and tilings, p. 285, Rhombic dodecahedron )
External links
Virtual Reality Polyhedra – The Encyclopedia of Polyhedra
Computer models
Relating a Rhombic Triacontahedron and a Rhombic Dodecahedron, Rhombic Dodecahedron 5-Compound and Rhombic Dodecahedron 5-Compound by Sándor Kabai, The Wolfram Demonstrations Project.
Paper projects
Rhombic Dodecahedron Calendar – make a rhombic dodecahedron calendar without glue
Another Rhombic Dodecahedron Calendar – made by plaiting paper strips
Practical applications
Archimede Institute Examples of actual housing construction projects using this geometry
Catalan solids
Quasiregular polyhedra
Space-filling polyhedra
Zonohedra
Golden ratio | Rhombic dodecahedron | [
"Mathematics"
] | 2,850 | [
"Golden ratio"
] |
607,686 | https://en.wikipedia.org/wiki/Palatini%20variation | In general relativity and gravitation the Palatini variation is nowadays thought of as a variation of a Lagrangian with respect to the connection.
In fact, as is well known, the Einstein–Hilbert action for general relativity was first formulated purely in terms of the spacetime metric g_μν. In the Palatini variational method one takes as independent field variables not only the ten components g_μν but also the forty components of the affine connection Γ^λ_μν, assuming, a priori, no dependence of the Γ^λ_μν on the g_μν and their derivatives.
The reason the Palatini variation is considered important is that it means that the use of the Christoffel connection in general relativity does not have to be added as a separate assumption; the information is already in the Lagrangian. For theories of gravitation which have more complex Lagrangians than the Einstein–Hilbert Lagrangian of general relativity, the Palatini variation sometimes gives more complex connections and sometimes tensorial equations.
Attilio Palatini (1889–1949) was an Italian mathematician who received his doctorate from the University of Padova, where he studied under Levi-Civita and Ricci-Curbastro.
The history of the subject, and Palatini's connection with it, are not straightforward (see references). In fact, it seems that what the textbooks now call "Palatini formalism" was actually invented in 1925 by Einstein, and as the years passed, people tended to mix up the Palatini identity and the Palatini formalism.
See also
Palatini identity
Self-dual Palatini action
Tetradic Palatini action
References
[English translation by R. Hojman and C. Mukku in P. G. Bergmann and V. De Sabbata (eds.) Cosmology and Gravitation, Plenum Press, New York (1980)]
Lagrangian mechanics
General relativity | Palatini variation | [
"Physics",
"Mathematics"
] | 394 | [
"Lagrangian mechanics",
"Classical mechanics",
"General relativity",
"Relativity stubs",
"Theory of relativity",
"Dynamical systems"
] |
607,751 | https://en.wikipedia.org/wiki/International%20maritime%20signal%20flags | International maritime signal flags are various flags used to communicate with ships. The principal system of flags and associated codes is the International Code of Signals. Various navies have flag systems with additional flags and codes, and other flags are used in special uses, or have historical significance.
Usage
There are various methods by which the flags can be used as signals:
A series of flags can spell out a message, each flag representing a letter.
Individual flags have specific and standard meanings; for example, diving support vessels raise the "A" flag indicating their inability to move from their current location because they have a diver underwater and to warn other vessels to keep clear to avoid endangering the diver(s) with their propellers.
One or more flags form a code word whose meaning can be looked up in a code book held by both parties. An example is the Popham numeric code used at the Battle of Trafalgar.
In yacht racing and dinghy racing, flags have other meanings; for example, the P flag is used as the "preparatory" flag to indicate an imminent start, and the S flag means "shortened course" (for more details see Race signals).
NATO uses the same flags, with a few unique to warships, alone or in short sets to communicate various unclassified messages. The NATO usage generally differs from the international meanings, and therefore warships will fly the Code/answer flag above the signal to indicate it should be read using the international meaning.
During the Allied occupations of Axis countries after World War II, use and display of those nations' national flags was banned. In order to comply with the international legal requirement that a ship identify its registry by displaying the appropriate national ensign, swallow-tailed versions of the C, D, and E signal flags were designated as, respectively, provisional German, Okinawan, and Japanese civil ensigns. Being swallowtails, they are commonly referred to as the "C-pennant" (German: C-Doppelstander), "D-pennant", and "E-pennant".
Letter flags (with ICS meaning)
Notes
Number flags
Substitute
Substitute or repeater flags allow messages with duplicate characters to be signaled without the need for multiple sets of flags.
The four NATO substitute flags are as follows:
The International Code of Signals includes only the first three of these substitute flags. To illustrate their use, here are some messages and the way they would be encoded:
See also
References
External links
"How Ships Talk With Flags", October 1944, Popular Science
John Savard's flag page. Collection of different flag systems.
Freeware to aid memorizing the flags
La flag-alfabeto - signal flags used for the Esperanto language - the flags for the Esperanto letters with diacritical marks have the lighter color in the normal flag replaced with light green, which is not used in any normal flag.
International flags
Latin-script representations
Maritime flags
Maritime signalling
Nonverbal communication
Optical communications
Signal flags | International maritime signal flags | [
"Engineering"
] | 601 | [
"Optical communications",
"Telecommunications engineering"
] |
607,864 | https://en.wikipedia.org/wiki/Sunrise%20problem | The sunrise problem can be expressed as follows: "What is the probability that the sun will rise tomorrow?" The sunrise problem illustrates the difficulty of using probability theory when evaluating the plausibility of statements or beliefs.
According to the Bayesian interpretation of probability, probability theory can be used to evaluate the plausibility of the statement, "The sun will rise tomorrow."
The sunrise problem was first introduced publicly in 1763 by Richard Price in his famous coverage of Thomas Bayes' foundational work in Bayesianism.
Laplace's approach
The problem was treated by Pierre-Simon Laplace by means of his rule of succession. Let p be the long-run frequency of sunrises, i.e., the sun rises on 100 × p% of days. Prior to knowing of any sunrises, one is completely ignorant of the value of p. Laplace represented this prior ignorance by means of a uniform probability distribution on p.
For instance, the probability that p is between 20% and 50% is just 30%. This must not be interpreted to mean that in 30% of all cases, p is between 20% and 50%. Rather, it means that one's state of knowledge (or ignorance) justifies one in being 30% sure that the sun rises between 20% of the time and 50% of the time. Given the value of p, and no other information relevant to the question of whether the sun will rise tomorrow, the probability that the sun will rise tomorrow is p. But we are not "given the value of p". What we are given is the observed data: the sun has risen every day on record. Laplace inferred the number of days by saying that the universe was created about 6000 years ago, based on a young-earth creationist reading of the Bible.
To find the conditional probability distribution of p given the data, one uses Bayes' theorem, which some call the Bayes–Laplace rule. Having found the conditional probability distribution of p given the data, one may then calculate the conditional probability, given the data, that the sun will rise tomorrow. That conditional probability is given by the rule of succession. The plausibility that the sun will rise tomorrow increases with the number of days on which the sun has risen so far. Specifically, assuming p has an a-priori distribution that is uniform over the interval [0,1], and that, given the value of p, the sun independently rises each day with probability p, the desired conditional probability after k observed sunrises is (k + 1)/(k + 2).
By this formula, if one has observed the sun rising 10000 times previously, the probability it rises the next day is 10001/10002. Expressed as a percentage, this is approximately a 99.99% chance.
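This calculation can be verified symbolically. The short sketch below (added as an illustration) uses sympy to integrate the uniform prior against the likelihood p^k and recovers the rule-of-succession value (k + 1)/(k + 2), including the 10001/10002 figure quoted above.

import sympy as sp

p = sp.symbols('p', nonnegative=True)
k = sp.symbols('k', positive=True, integer=True)

# Posterior predictive probability of a sunrise tomorrow after k sunrises in k days,
# with a uniform prior on p: integral of p*p**k over [0, 1] divided by integral of p**k.
predictive = sp.integrate(p * p**k, (p, 0, 1)) / sp.integrate(p**k, (p, 0, 1))

print(sp.simplify(predictive))      # (k + 1)/(k + 2)
print(predictive.subs(k, 10000))    # 10001/10002, roughly a 99.99% chance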
However, Laplace recognized this to be a misapplication of the rule of succession through not taking into account all the prior information available immediately after deriving the result.
E.T. Jaynes noted that Laplace's warning had gone unheeded by workers in the field.
A reference class problem arises: the plausibility inferred will depend on whether we take the past experience of one person, of humanity, or of the earth. A consequence is that each referent would hold different plausibility of the statement. In Bayesianism, any probability is a conditional probability given what one knows. That varies from one person to another.
See also
Rule of succession
Problem of induction
Doomsday argument: a similar problem that raises intense philosophical debate
Newcomb's paradox
Unsolved problems in statistics
Additive smoothing (also called Laplace smoothing)
References
Further reading
Howie, David. (2002). Interpreting probability: controversies and developments in the early twentieth century. Cambridge University Press. pp. 24.
Probability problems
Statistical inference
Bayesian statistics | Sunrise problem | [
"Mathematics"
] | 764 | [
"Probability problems",
"Mathematical problems"
] |
607,942 | https://en.wikipedia.org/wiki/Blown%20flap | Blown flaps, blown wing or jet flaps are powered aerodynamic high-lift devices used on the wings of certain aircraft to improve their low-speed flight characteristics. They use air blown through nozzles to shape the airflow over the rear edge of the wing, directing the flow downward to increase the lift coefficient. There are a variety of methods to achieve this airflow, most of which use jet exhaust or high-pressure air bled off of a jet engine's compressor and then redirected to follow the line of trailing-edge flaps.
Blown flaps may refer specifically to those systems that use internal ductwork within the wing to direct the airflow, or more broadly to systems like upper surface blowing or nozzle systems on conventional underwing engine that direct air through the flaps. Blown flaps are one solution among a broader category known as powered lift, which also includes various boundary layer control systems, systems using directed prop wash, and circulation control wings.
Internal blown flaps were used on some land and carrier-based fast jets in the 1960s, including the Lockheed F-104, Blackburn Buccaneer and certain versions of the Mikoyan-Gurevich MiG-21. They generally fell from favour because they imposed a significant maintenance overhead in keeping the ductwork clean and various valve systems working properly, along with the disadvantage that an engine failure reduced lift in precisely the situation where it is most desired. The concept reappeared in the form of upper and lower blowing in several transport aircraft, both turboprop and turbofan.
Mechanism
In a conventional blown flap, a small amount of the compressed air produced by the jet engine is "bled" off at the compressor stage and piped to channels running along the rear of the wing. There, it is forced through slots in the wing flaps of the aircraft when the flaps reach certain angles. Injecting high energy air into the boundary layer produces an increase in the stalling angle of attack and maximum lift coefficient by delaying boundary layer separation from the airfoil. Boundary layer control by mass injecting (blowing) prevents boundary layer separation by supplying additional energy to the particles of fluid which are being retarded in the boundary layer. Therefore, injecting a high velocity air mass into the air stream essentially tangent to the wall surface of the airfoil reverses the boundary layer friction deceleration; thus, the boundary layer separation is delayed.
The lift of a wing can be greatly increased with blowing flow control. With mechanical slots, the natural boundary layer limits the boundary layer control pressure to the freestream total head. Blowing with a small proportion of engine airflow (internal blown flap) increases the lift. Using much higher quantities of gas from the engine exhaust, which increases the effective chord of the flap (the jet flap), produces supercirculation, or forced circulation up to the theoretical potential flow maximum. Surpassing this limit requires the addition of direct thrust.
Development of the general concept continued at NASA in the 1950s and 1960s, leading to simplified systems with similar performance. The externally blown flap arranges the engine to blow across the flaps at the rear of the wing. Some of the jet exhaust is deflected downward directly by the flap, while additional air travels through the slots in the flap and follows the outer edge due to the Coandă effect. The similar upper-surface blowing system arranges the engines over the wing and relies completely on the Coandă effect to redirect the airflow. Although not as effective as direct blowing, these "powered lift" systems are nevertheless quite powerful and much simpler to build and maintain.
A more recent and promising blow-type flow control concept is the counter-flow fluid injection which is able to exert high-authority control to global flows using low energy modifications to key flow regions. In this case, the air blow slit is located at the pressure side near the leading edge stagnation point location and the control air-flow is directed tangentially to the surface but with a forward direction. During the operation of such a flow control system two different effects are present. One effect, boundary layer enhancement, is caused by the increased turbulence levels away from the wall region thus transporting higher-energy outer flow into the wall region. In addition to that another effect, the virtual shaping effect, is utilized to aerodynamically thicken the airfoil at high angles of attack. Both these effects help to delay or eliminate flow separation.
In general, blown flaps can improve the lift of a wing by two to three times. Whereas a complex triple-slotted flap system on a Boeing 747 produces a coefficient of lift of about 2.45, external blowing (upper surface blowing on a Boeing YC-14) improves this to about 7, and internal blowing (jet flap on Hunting H.126) to 9.
History
Williams states some flap blowing tests were done at the Royal Aircraft Establishment before the Second World War, and that extensive tests were done during the war in Germany including flight tests with Arado Ar 232, Dornier Do 24 and Messerschmitt Bf 109 aircraft. Lachmann states the Arado and Dornier aircraft used an ejector-driven single flow of air which was sucked over part of the trailing edge span and blown over the remainder. The ejector was chemically powered using high pressure vapour. The Bf 109 used engine-driven blowers for flap blowing.
Rebuffet and Poisson-Quinton describe tests in France at O.N.E.R.A. after the war with combined suction at the leading edge of the first flap section and blowing at the second flap section, using a jet engine compressor bleed ejector to give both suction and blowing. Flight testing was done on a Breguet Vultur aircraft.
Tests were also done at Westland Aircraft by W.H. Paine after the war with reports dated 1950 and 1951.
In the United States, a Grumman F9F Panther was modified with flap blowing based on work done by John Attinello in 1951. Engine compressor bleed was used. The system was known as "Supercirculation Boundary Layer Control" or BLC for short.
Between 1951 and 1955, Cessna did flap blowing tests on Cessna 309 and 319 aircraft using the Arado system.
During the 1950s and 60s, fighter aircraft generally evolved towards smaller wings in order to reduce drag at high speeds. Compared to fighters of a generation earlier, such as the Supermarine Spitfire and the Messerschmitt Bf 109 (whose wing loading was already considered very high for its day), 1950s-era designs such as the Lockheed F-104 Starfighter had wing loadings about four times as high.
One serious downside to these higher wing loadings is at low speed, when there is not enough wing left to provide lift to keep the plane flying. Even huge flaps could not offset this to any large degree, and as a result many aircraft landed at fairly high speeds, and were noted for accidents as a result.
The major reason flaps were not effective is that the airflow over the wing could only be "bent so much" before it stopped following the wing profile, a condition known as flow separation. There is a limit to how much air the flaps can deflect overall. There are ways to improve this, through better flap design; modern airliners use complex multi-part flaps for instance. However, large flaps tend to add considerable complexity, and take up room on the outside of the wing, which makes them unsuitable for use on a fighter.
The principle of the jet flap, a type of internally blown flap, was proposed and patented in 1952 by the British National Gas Turbine Establishment (NGTE) and thereafter investigated by the NGTE and the Royal Aircraft Establishment.
The concept was first tested at full-scale on the experimental Hunting H.126. It reduced the stall speed to a figure most light aircraft cannot match. The jet flap used a large percentage of the engine exhaust, rather than compressor bleed air, for blowing.
One of the first production aircraft with blown flaps was the Lockheed F-104 Starfighter, which entered service in January 1958. After prolonged development problems, the BLCS proved to be enormously useful in compensating for the Starfighter's tiny wing surface. The Lockheed T2V SeaStar, with blown flaps, had entered service in May 1957 but was to have persistent maintenance problems with the BLCS which led to its early retirement. In June 1958, the Supermarine Scimitar with blown flaps entered service. Blown flaps were used on the North American Aviation A-5 Vigilante, the Vought F-8 Crusader variants E(FN) and J, the McDonnell Douglas F-4 Phantom II and the Blackburn Buccaneer. The Mikoyan-Gurevich MiG-21 and Mikoyan-Gurevich MiG-23 had blown flaps. Petrov states long-term operation of these aircraft showed high reliability of the BLC systems. The TSR-2, which was cancelled before it entered service, had full-span blown flaps.
Starting in the 1970s, the lessons of air combat over Vietnam changed thinking considerably. Instead of aircraft designed for outright speed, general maneuverability and load capacity became more important in most designs. The result is an evolution back to larger planforms to provide more lift. For instance the General Dynamics F-16 Fighting Falcon has a wing loading of , and uses leading edge extensions to provide considerably more lift at higher angles of attack, including approach and landing. Some later combat aircraft achieved the required low-speed characteristics using swing-wings. Internal flap blowing is still used to supplement externally blown flaps on the Shin Meiwa US-1A.
Some aircraft currently (2015) in service that require a STOL performance use external flap blowing and, in some cases, also use internal flap blowing on flaps as well as on control surfaces such as the rudder to ensure adequate control and stability at low speeds. External blowing concepts are known as the "externally blown flap" (used on the Boeing C-17 Globemaster), "upper surface blowing" (used on the Antonov An-72 and Antonov An-74) and "vectored slipstream", or "over the wing blowing", used on the Antonov An-70 and the Shin Meiwa US-1A and ShinMaywa US-2.
Powered high-lift systems, such as externally blown flaps, are not used for civil transport aircraft for reasons given by Reckzeh, which include complexity, weight, cost, sufficient existing runway lengths and certification rules.
See also
Boundary layer
Boundary layer control
Coandă effect
Circulation control wing
Thrust vectoring
References
Aircraft controls
Boundary layers
Aircraft wing design | Blown flap | [
"Chemistry"
] | 2,165 | [
"Boundary layers",
"Fluid dynamics"
] |
607,975 | https://en.wikipedia.org/wiki/Carcinology | Carcinology is a branch of zoology that consists of the study of crustaceans. Crustaceans are a large class of arthropods classified by having a hard exoskeleton made of chitin or chitin and calcium, three body regions, and jointed, paired appendages. Crustaceans include lobsters, crayfish, shrimp, krill, copepods, barnacles and crabs. Most crustaceans are aquatic, but some can be terrestrial, sessile, or parasitic. Other names for carcinology are malacostracology, crustaceology, and crustalogy, and a person who studies crustaceans is a carcinologist or occasionally a malacostracologist, a crustaceologist, or a crustalogist.
The word carcinology derives from Greek καρκίνος, karkínos, "crab"; and -λογία, -logia.
Subfields
Carcinology is a subdivision of arthropodology, the study of arthropods which includes arachnids, insects, and myriapods. Carcinology branches off into taxonomically oriented disciplines such as:
astacology – the study of crayfish
cirripedology – the study of barnacles
copepodology – the study of copepods
Journals
Scientific journals devoted to the study of crustaceans include:
Crustaceana
Journal of Crustacean Biology
Nauplius
See also
Entomology
Publications in carcinology
List of carcinologists
References
Crustaceans
Subfields of zoology
Subfields of arthropodology | Carcinology | [
"Biology"
] | 333 | [
"Subfields of zoology"
] |
608,002 | https://en.wikipedia.org/wiki/Tyramine | Tyramine (also spelled tyramin), also known under several other names, is a naturally occurring trace amine derived from the amino acid tyrosine. Tyramine acts as a catecholamine releasing agent. Notably, it is unable to cross the blood-brain barrier, resulting in only non-psychoactive peripheral sympathomimetic effects following ingestion. A hypertensive crisis can result, however, from ingestion of tyramine-rich foods in conjunction with the use of monoamine oxidase inhibitors (MAOIs).
Occurrence
Tyramine occurs widely in plants and animals, and is metabolized by various enzymes, including monoamine oxidases. In foods, it often is produced by the decarboxylation of tyrosine during fermentation or decay. Foods that are fermented, cured, pickled, aged, or spoiled have high amounts of tyramine. Tyramine levels go up when foods are at room temperature or go past their freshness date.
Specific foods containing considerable amounts of tyramine include:
Strong or aged cheeses: cheddar, Swiss, Parmesan, Stilton, Gorgonzola or blue cheeses, Camembert, feta, Muenster
Meats that are cured, smoked, or processed: such as salami, pepperoni, dry sausages, hot dogs, bologna, bacon, corned beef, pickled or smoked fish, caviar, aged chicken livers, soups or gravies made from meat extract
Pickled or fermented foods: sauerkraut, kimchi, tofu (especially stinky tofu), pickles, miso soup, bean curd, tempeh, sourdough breads
Condiments: soy, shrimp, fish, miso, teriyaki, and bouillon-based sauces
Drinks: beer (especially tap or home-brewed), vermouth, red wine, sherry, liqueurs
Beans, vegetables, and fruits: fermented or pickled vegetables, overripe fruits
Chocolate
Scientists increasingly regard tyramine in food as a safety concern. Proposed regulations aim to control biogenic amines in food through various strategies, including the use of suitable fermentation starter cultures or the suppression of their decarboxylase activity. Some authors report that these measures have already produced results, and that tyramine content in food is now lower than it has been in the past.
In plants
Mistletoe (toxic and not used by humans as a food, but historically used as a medicine).
In animals
Tyramine also plays a role in animals including: In behavioral and motor functions in Caenorhabditis elegans; Locusta migratoria swarming behaviour; and various nervous roles in Rhipicephalus, Apis, Locusta, Periplaneta, Drosophila, Phormia, Papilio, Bombyx, Chilo, Heliothis, Mamestra, Agrotis, and Anopheles.
Biological activity
Tyramine is a norepinephrine and dopamine releasing agent (NDRA) and indirectly acting sympathomimetic. Evidence for the presence of tyramine in the human brain has been confirmed by postmortem analysis. Additionally, the possibility that tyramine acts directly as a neuromodulator was revealed by the discovery of a G protein-coupled receptor with high affinity for tyramine, called the trace amine-associated receptor (TAAR1). The TAAR1 receptor is found in the brain, as well as peripheral tissues, including the kidneys. Tyramine is a full agonist of the TAAR1 in rodents and humans.
Tyramine is physiologically metabolized by monoamine oxidases (primarily MAO-A), FMO3, PNMT, DBH, and CYP2D6. Human monoamine oxidase enzymes metabolize tyramine into 4-hydroxyphenylacetaldehyde. If monoamine metabolism is compromised by the use of monoamine oxidase inhibitors (MAOIs) and foods high in tyramine are ingested, a hypertensive crisis can result, as tyramine also can displace stored monoamines, such as dopamine, norepinephrine, and epinephrine, from pre-synaptic vesicles. Tyramine is considered a "false neurotransmitter", as it enters noradrenergic nerve terminals and displaces large amounts of norepinephrine, which enters the blood stream and causes vasoconstriction.
Additionally, cocaine has been found to block the rise in blood pressure normally produced by tyramine, an effect attributed to cocaine's blockade of noradrenaline reuptake at the nerve terminal.
The first signs of this effect were discovered by a British pharmacist who noticed that his wife, who at the time was on MAOI medication, had severe headaches when eating cheese. For this reason, it is still called the "cheese reaction" or "cheese crisis", although other foods can cause the same problem.
Most processed cheeses do not contain enough tyramine to cause hypertensive effects, although some aged cheeses (such as Stilton) do.
A large dietary intake of tyramine (or a dietary intake of tyramine while taking MAO inhibitors) can cause the tyramine pressor response, which is defined as an increase in systolic blood pressure of 30 mmHg or more. The increased release of norepinephrine (noradrenaline) from neuronal cytosol or storage vesicles is thought to cause the vasoconstriction and increased heart rate and blood pressure of the pressor response. In severe cases, adrenergic crisis can occur. Although the mechanism is unclear, tyramine ingestion also triggers migraine attacks in sensitive individuals and can even lead to stroke. Vasodilation, dopamine, and circulatory factors are all implicated in the migraines. Double-blind trials suggest that the effects of tyramine on migraine may be adrenergic.
Research reveals a possible link between migraines and elevated levels of tyramine. A 2007 review published in Neurological Sciences presented data showing migraine and cluster diseases are characterized by an increase of circulating neurotransmitters and neuromodulators (including tyramine, octopamine, and synephrine) in the hypothalamus, amygdala, and dopaminergic system. People with migraine are over-represented among those with inadequate natural monoamine oxidase, resulting in similar problems to individuals taking MAO inhibitors. Many migraine attack triggers are high in tyramine.
If one has had repeated exposure to tyramine, however, there is a decreased pressor response; tyramine is degraded to octopamine, which is subsequently packaged in synaptic vesicles with norepinephrine (noradrenaline). Therefore, after repeated tyramine exposure, these vesicles contain an increased amount of octopamine and a relatively reduced amount of norepinephrine. When these vesicles are secreted upon tyramine ingestion, there is a decreased pressor response, as less norepinephrine is secreted into the synapse, and octopamine does not activate alpha or beta adrenergic receptors.
When using a MAO inhibitor (MAOI), an intake of approximately 10 to 25 mg of tyramine is required for a severe reaction, compared to 6 to 10 mg for a mild reaction.
Tyramine, like phenethylamine, is a monoaminergic activity enhancer (MAE) of serotonin, norepinephrine, and dopamine in addition to its catecholamine-releasing activity. That is, it enhances the action potential-mediated release of these monoamine neurotransmitters. The compound is active as a MAE at much lower concentrations than the concentrations at which it induces the release of catecholamines. The MAE actions of tyramine and other MAEs may be mediated by TAAR1 agonism.
Biosynthesis
Biochemically, tyramine is produced by the decarboxylation of tyrosine via the action of the enzyme tyrosine decarboxylase. Tyramine can, in turn, be converted to methylated alkaloid derivatives N-methyltyramine, N,N-dimethyltyramine (hordenine), and N,N,N-trimethyltyramine (candicine).
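As a quick numerical check of the decarboxylation step, the sketch below uses the RDKit cheminformatics toolkit (an assumption; any package that computes molecular weights from SMILES would do) to confirm that tyrosine minus carbon dioxide matches tyramine's molecular weight.

```python
# Sketch: verify that tyrosine -> tyramine + CO2 is mass-consistent.
from rdkit import Chem
from rdkit.Chem import Descriptors

tyrosine = Chem.MolFromSmiles("NC(Cc1ccc(O)cc1)C(=O)O")  # tyrosine
tyramine = Chem.MolFromSmiles("NCCc1ccc(O)cc1")          # tyramine
co2 = Chem.MolFromSmiles("O=C=O")                        # carbon dioxide

print(round(Descriptors.MolWt(tyrosine) - Descriptors.MolWt(co2), 2))  # ~137.18
print(round(Descriptors.MolWt(tyramine), 2))                           # ~137.18
```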
In humans, tyramine is produced from tyrosine, as shown in the following diagram.
Chemistry
In the laboratory, tyramine can be synthesized in various ways, in particular by the decarboxylation of tyrosine.
Society and culture
Legal status
United States
Tyramine is a Schedule I controlled substance, categorized as a hallucinogen, making it illegal to buy, sell, or possess in the state of Florida without a license at any purity level or any form whatsoever. The language in the Florida statute says tyramine is illegal in "any material, compound, mixture, or preparation that contains any quantity of [tyramine] or that contains any of [its] salts, isomers, including optical, positional, or geometric isomers, and salts of isomers, if the existence of such salts, isomers, and salts of isomers is possible within the specific chemical designation."
This ban is likely the product of lawmakers overly eager to ban substituted phenethylamines, which tyramine is, in the mistaken belief that ring-substituted phenethylamines are hallucinogenic drugs like the 2C series of psychedelic substituted phenethylamines. The further banning of tyramine's optical isomers, positional isomers, or geometric isomers, and salts of isomers where they exist, means that meta-tyramine and phenylethanolamine, a substance found in every living human body, and other common, non-hallucinogenic substances are also illegal to buy, sell, or possess in Florida. Given that tyramine occurs naturally in many foods and drinks (most commonly as a by-product of bacterial fermentation), e.g. wine, cheese, and chocolate, Florida's total ban on the substance may prove difficult to enforce.
Notes
References
Antihypotensive agents
Migraine
Monoamine oxidase inhibitors
Monoaminergic activity enhancers
Norepinephrine-dopamine releasing agents
Peripherally selective drugs
Phenethylamine alkaloids
Phenethylamines
TAAR1 agonists
Trace amines
4-Hydroxyphenyl compounds | Tyramine | [
"Chemistry"
] | 2,296 | [
"Alkaloids by chemical classification",
"Phenethylamine alkaloids"
] |
608,006 | https://en.wikipedia.org/wiki/Pilot%20ACE | The Pilot ACE (Automatic Computing Engine) was one of the first computers built in the United Kingdom. Built at the National Physical Laboratory (NPL) in the early 1950s, it was also one of the earliest general-purpose, stored-program computers – joining other UK designs like the Manchester Mark 1 and EDSAC of the same era. It was a preliminary version of the full ACE, which was designed by Alan Turing, who left NPL before the construction was completed.
History
Pilot ACE was built to a cut down version of Turing's full ACE design. After Turing left NPL (in part because he was disillusioned by the lack of progress on building the ACE), James H. Wilkinson took over the project. Donald Davies, Harry Huskey and Mike Woodger were involved with the design. The Pilot ACE ran its first program on 10 May 1950, and was demonstrated to the press in November 1950.
Although originally intended as a prototype, it became clear that the machine was a potentially useful resource, especially given the lack of other computing devices at the time. After some upgrades to make operational use practical, it went into service in late 1951 and saw considerable operational service over the next several years. One reason Pilot ACE was useful is that it was able to perform floating-point arithmetic necessary for scientific calculations. Wilkinson tells the story of how this came to be.
When first built, Pilot ACE did not have hardware for either multiplication or division, in contrast to other computers at that time. (Hardware multiplication was added later.) Pilot ACE started out using fixed-point multiplication and division implemented as software. It soon became apparent that fixed-point arithmetic was a bad idea because the numbers quickly went out of range. It only took a short time to write new software so that Pilot ACE could do floating-point arithmetic. After that, James Wilkinson became an expert and wrote a book on rounding errors in floating-point calculations, which eventually sold well.
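The range problem Wilkinson describes is easy to reproduce in modern terms. The sketch below (Python, not Pilot ACE code; the 16.16 fixed-point split of a 32-bit word is an assumption chosen only for illustration) shows how a modest product overflows a fixed-point word while the same calculation is routine in floating point.

```python
# Illustrative only: why fixed-point values "quickly went out of range".
SCALE = 2 ** 16          # treat a signed 32-bit word as 16 integer bits + 16 fraction bits

def fixed_mul(a, b):
    """Multiply two 16.16 fixed-point words, flagging overflow of the 32-bit result."""
    result = (a * b) >> 16
    if result >= 2 ** 31:
        raise OverflowError("product no longer fits in one 32-bit word")
    return result

x = int(300.0 * SCALE)   # 300 is representable (integer part limited to about +/-32768)
y = int(200.0 * SCALE)   # 200 is representable
# fixed_mul(x, y) raises OverflowError because 60000 exceeds the integer range,
# whereas 300.0 * 200.0 is trivial once floating-point arithmetic is used.
```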
Pilot ACE used approximately 800 vacuum tubes. Its main memory consisted of mercury delay lines with an original capacity of 128 words of 32 bits each, which was later expanded to 352 words. A 4096-word drum memory was added in 1954. Its basic clock rate, 1 megahertz, was the fastest of the early British computers. The time to execute instructions was highly dependent on where they were in memory (due to the use of delay-line memory). An addition could take anywhere from 64 to 1024 microseconds.
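The wide spread in addition times follows from the delay-line geometry. Below is a back-of-the-envelope sketch assuming 32-bit words at the 1 MHz bit rate (a 32-microsecond word time) and long lines holding 32 words; these figures are assumptions consistent with, but not stated in, the text.

```python
# Rough timing model of delay-line access; not a simulation of the real machine.
BIT_TIME_US = 1.0                      # 1 MHz clock -> 1 microsecond per bit
WORD_TIME_US = 32 * BIT_TIME_US        # a 32-bit word passes the read head every 32 us
LINE_WORDS = 32                        # assumed words per long mercury delay line

def wait_for_word(current_word, wanted_word, line_words=LINE_WORDS):
    """Microseconds spent waiting for a word to circulate back to the read head."""
    gap = (wanted_word - current_word) % line_words
    return (gap if gap else line_words) * WORD_TIME_US

print(wait_for_word(0, 2))    # 64 us: an optimally placed operand
print(wait_for_word(0, 0))    # 1024 us: worst case, wait a full circulation
```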
The machine was so successful that a commercial version of it, named the DEUCE, was constructed and sold by the English Electric Company.
Pilot ACE was shut down in May 1955, and was given to the Science Museum, where it remains today.
Software
Installing the magnetic drum in 1954 opened the way to develop a control program for running programs dealing with matrices. Following urging by J. M. Hahn of the British Aircraft Corporation, Brian W. Munday developed General Interpretive Programme (GIP), which required only simple codewords to run a collection of programs called "bricks". Each brick could perform a single task, such as to solve a set of simultaneous equations, to invert a matrix, and to perform matrix multiplication. Though there was nothing new in this concept, where GIP was unique was in the simplicity of the codewords that did not specify the bounds of the matrices. Bounds were taken from the matrix on the drum, where the bounds were the second and third element stored. When a matrix was punched on cards, the bounds were given as the first two elements. Thus, once a program was written, it could run automatically with different sizes of matrices, without needing to change the program.
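A hypothetical sketch of that convention follows (the function and layout are illustrative, reflecting the card convention of bounds first, not GIP's actual encoding): because the matrix carries its own bounds as its leading elements, the same program can process any size of matrix unchanged.

```python
# Illustrative layout: a matrix whose first two stored elements give its bounds.
def read_matrix(words):
    rows, cols = int(words[0]), int(words[1])
    data = words[2:2 + rows * cols]
    return [data[r * cols:(r + 1) * cols] for r in range(rows)]

# A 2 x 3 matrix punched with its bounds first; a 5 x 5 one would need no program change.
m = read_matrix([2, 3, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
print(m)  # [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
```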
GIP was running in 1954, and was re-written for DEUCE, the successor to Pilot ACE.
Bricks to be used with GIP were written by M. Woodger, who devised a unique scheme for storing array elements, namely, "block floating". To use regular floating-point would have required two words for each element. The compromise was to use a single exponent for all the elements of an array. Thus, only one word was required for each element. Only the largest element(s) were normalized. Smaller elements were scaled accordingly. Though there was some loss of precision associated with the smaller elements, it was not great, considering that elements tended to be within a factor of ten of each other. The exponent was stored with the matrix, along with the dimensions.
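A minimal sketch of the block-floating idea (modern Python with an illustrative word size; not Woodger's actual code): one exponent is stored for the whole array, each element occupies a single word-sized mantissa, and only the largest element uses the full precision.

```python
import math

def block_float(values, mantissa_bits=30):
    """Encode a list of numbers with one shared power-of-two exponent."""
    exponent = math.frexp(max(abs(v) for v in values))[1]  # set by the largest element
    scale = 2 ** (mantissa_bits - exponent)
    return [round(v * scale) for v in values], exponent    # one word per element, plus the exponent

def block_unfloat(mantissas, exponent, mantissa_bits=30):
    scale = 2 ** (mantissa_bits - exponent)
    return [m / scale for m in mantissas]

mant, exp = block_float([123.456, 0.000789, -45.0])
print(block_unfloat(mant, exp))  # the smallest element returns with reduced relative precision
```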
See also
List of vacuum-tube computers
References
Bibliography
James H. Wilkinson, Turing's Work at the National Physical Laboratory and the Construction of Pilot ACE, DEUCE and ACE (in Nicholas Metropolis, J. Howlett, Gian-Carlo Rota, (editors), A History of Computing in the Twentieth Century, Academic Press, New York, 1980)
Martin Campbell-Kelly, Programming the Pilot ACE (in IEEE Annals of the History of Computing, Vol. 3 (No. 2), 1981, pp. 133–162)
B. Jack Copeland (editor), Alan Turing's Automatic Computing Engine. Oxford University Press, 2005,
B. Jack Copeland, Alan Turing's Electronic Brain: The Struggle to Build the ACE, the World's Fastest Computer, Oxford University Press, 2012,
Michael R. Williams, A History of Computing Technology. IEEE Computer Society Press, 1997. . Chap. 8.3.4.
How Alan Turing's Pilot ACE changed computing, BBC News, 15 May 2010
Further reading
Simon H. Lavington, Early British Computers: The Story of Vintage Computers and The People Who Built Them (Manchester University Press, 1980)
David M. Yates, Turing's Legacy: A History of Computing at the National Physical Laboratory, 1945–1995 (Science Museum, London, 1997, )
External links
Oral history interview with Donald W. Davies, Charles Babbage Institute, University of Minnesota. Davies describes computer projects at the U.K. National Physical Laboratory, from the 1947 design work of Alan Turing to the development of the two ACE computers. Davies discusses a much larger, second ACE, and the decision to contract with English Electric Company to build the DEUCE—possibly the first commercially produced computer in Great Britain.
The Pilot ACE in the Science Museum Group Collection
How Alan Turing's Pilot ACE changed computing
The world's first multi-tasking computer
1950s computers
Early British computers
One-of-a-kind computers
32-bit computers
Vacuum tube computers
Computer-related introductions in 1950
English inventions
Collection of the Science Museum, London
Serial computers | Pilot ACE | [
"Technology"
] | 1,331 | [
"Serial computers",
"Computers"
] |
608,115 | https://en.wikipedia.org/wiki/Star%20of%20Bethlehem | The Star of Bethlehem, or Christmas Star, appears in the nativity story of the Gospel of Matthew chapter 2 where "wise men from the East" (Magi) are inspired by the star to travel to Jerusalem. There, they meet King Herod of Judea, and ask him:
Herod calls together his scribes and priests who, quoting a verse from the Book of Micah, interpret it as a prophecy that the Jewish Messiah would be born in Bethlehem to the south of Jerusalem. Secretly intending to find and kill the Messiah in order to preserve his own kingship, Herod invites the wise men to return to him on their way home.
The star leads them to Jesus' Bethlehem birthplace, where they worship him and give him gifts. The wise men are then given a divine warning not to return to Herod, so they return home by a different route.
Many Christians believe the star was a miraculous sign. Some theologians claimed that the star fulfilled a prophecy, known as the Star Prophecy. Astronomers have made several attempts to link the star to unusual celestial events, such as a conjunction of Jupiter and Saturn or Jupiter and Venus, a comet, or a supernova. Some modern scholars do not consider the story to be describing a historical event, but rather a pious fiction added later to the main gospel account.
The subject is a favorite at planetarium shows during the Christmas season. However, most ancient sources and Church tradition generally indicate that the wise men visited Bethlehem sometime after Jesus' birth. The visit is traditionally celebrated on Epiphany (January 6) in Western Christianity.
The account in the Gospel of Matthew describes Jesus with the broader Greek word παιδίον (paidion), which can mean either "infant" or "child", rather than the more specific word for infant, βρέφος (brephos). This possibly implies that some time has passed since the birth. However, the same word is also used in the Gospel of Luke specifically concerning Jesus' birth and his later presentation at the temple. Herod I has all male Hebrew babies in the area up to age two killed in the Massacre of the Innocents.
Matthew's narrative
The Gospel of Matthew tells how the Magi (often translated as "wise men", but more accurately astrologers) arrive at the court of Herod in Jerusalem and tell the king of a star which signifies the birth of the King of the Jews:
Herod is "troubled", not because of the appearance of the star, but because the Magi have told him that a "king of the Jews" had been born, which he understands to refer to the Messiah, a leader of the Jewish people whose coming was believed to be foretold in scripture. He asks his advisors where the Messiah would be born. They answer Bethlehem, birthplace of King David, and quote the prophet Micah. The king passes this information along to the Magi.
In a dream, they are warned not to return to Jerusalem, so they leave for their own country by another route. When Herod realizes he has been tricked, he orders the execution of all male children in Bethlehem "two years old and younger," based on the age the child could be in regard to the information the magi had given him concerning the time the star first appeared.
Joseph, warned in a dream, takes his family to Egypt for their safety. The gospel links the escape to a verse from scripture, which it interprets as a prophecy: "Out of Egypt I called my son." This was a reference to the departure of the Hebrews from Egypt under Moses, so the quote suggests that Matthew saw the life of Jesus as recapitulating the story of the Jewish people, with Judea representing Egypt and Herod standing in for pharaoh.
After Herod dies, Joseph and his family return from Egypt, and settle in Nazareth in Galilee. This is also said to be a fulfillment of a prophecy ("He will be called a Nazorean," (NRSV) which could be attributed to Judges 13:5 regarding the birth of Samson and the Nazirite vow. The word Nazareth is related to the word which means "sprout", and which some Bible commentators think refers to Isaiah 11:1: "There shall come forth a Rod from the stem of Jesse, And a Branch shall grow out of his roots."
Explanations
Pious fiction
Scholars who see the gospel nativity stories as later apologetic accounts created to establish the messianic status of Jesus regard the Star of Bethlehem as a pious fiction. Aspects of Matthew's account which have raised questions of the historical event include: Matthew is the only one of the four gospels which mentions either the Star of Bethlehem or the Magi. Some scholars suggest that Jesus was born in Nazareth, and that the Bethlehem nativity narratives were later additions to the gospels intended to present his birth as the fulfillment of prophecy.
According to Bart D. Ehrman, the Matthew account conflicts with that given in the Gospel of Luke, in which the family of Jesus already lives in Nazareth, travel to Bethlehem for the census, and return home almost immediately.
Fulfillment of prophecy
The ancients believed that astronomical phenomena were connected to terrestrial events. Miracles were routinely associated with the birth of important people, including the Hebrew patriarchs, as well as Greek and Roman heroes.
The Star of Bethlehem is traditionally linked to the Star Prophecy in the Book of Numbers:
Although possibly intended to refer to a time that was long past, since the kingdom of Moab had long ceased to exist by the time the Gospels were being written, this passage had become widely seen as a reference to the coming of a Messiah. It was, for example, cited by Josephus, who believed it referred to Emperor Vespasian. Origen, one of the most influential early Christian theologians, connected this prophecy with the Star of Bethlehem:
Origen suggested that the Magi may have decided to travel to Jerusalem when they "conjectured that the man whose appearance had been foretold along with that of the star, had actually come into the world".
The Magi are sometimes called "kings" because of the belief that they fulfill prophecies in Isaiah and Psalms concerning a journey to Jerusalem by gentile kings. Isaiah mentions gifts of gold and incense. In the Septuagint, the Greek translation of the Old Testament probably used by Matthew, these gifts are given as gold and frankincense, similar to Matthew's "gold, frankincense, and myrrh." The gift of myrrh symbolizes mortality, according to Origen.
While Origen argued for a naturalistic explanation, John Chrysostom viewed the star as purely miraculous: "How then, tell me, did the star point out a spot so confined, just the space of a manger and shed, unless it left that height and came down, and stood over the very head of the young child? And at this the evangelist was hinting when he said, "Lo, the star went before them, till it came and stood over where the young Child was."
Astronomical object
Although the word magi (Greek μάγοι) is usually translated as "wise men," in this context it probably means 'astronomer'/'astrologer'. The involvement of astrologers in the story of the birth of Jesus was problematic for the early Church, because they condemned astrology as demonic; a widely cited explanation was that of Tertullian, who suggested that astrology was allowed 'only until the time of the Gospel'.
Planetary conjunction
In 1614, German astronomer Johannes Kepler determined that a series of three conjunctions of the planets Jupiter and Saturn occurred in the year 7 BC. He argued (incorrectly) that a planetary conjunction could create a nova, which he linked to the Star of Bethlehem. Modern calculations show that there was a gap of nearly a degree (approximately twice a diameter of the moon) between the planets, so these conjunctions were not visually impressive. An ancient almanac has been found in Babylon which covers the events of this period, but does not indicate that the conjunctions were of any special interest. In the 20th century, Professor Karlis Kaufmanis, an astronomer, argued that this was an astronomical event where Jupiter and Saturn were in a triple conjunction in the constellation Pisces. Archaeologist and Assyriologist Simo Parpola has also suggested this explanation.
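The claimed one-degree gap can be checked with a modern ephemeris. The sketch below assumes the skyfield library and a long-span JPL ephemeris file (de422.bsp); the file name, its availability, and the use of astronomical year numbering (-6 for 7 BC) are assumptions, and the scan simply reports the closest Jupiter-Saturn approach during that year.

```python
# Rough check, not the analysis cited in the text: minimum Jupiter-Saturn
# separation during 7 BC (astronomical year -6), in degrees.
from skyfield.api import load

ts = load.timescale()
eph = load('de422.bsp')                 # long-span ephemeris; assumed to be available
earth = eph['earth']
jupiter = eph['jupiter barycenter']
saturn = eph['saturn barycenter']

t = ts.tt(-6, 1, range(1, 366))         # one sample per day through the year
sep = earth.at(t).observe(jupiter).apparent().separation_from(
      earth.at(t).observe(saturn).apparent())
print(min(sep.degrees))                 # of order one degree, consistent with the text
```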
In 3–2 BC, there was a series of seven conjunctions, including three between Jupiter and Regulus and a strikingly close conjunction between Jupiter and Venus near Regulus on June 17, 2 BC. "The fusion of two planets would have been a rare and awe-inspiring event", according to Roger Sinnott. Another Venus–Jupiter conjunction occurred earlier in August, 3 BC. While these events occurred after the generally accepted date of 4 BC for the death of Herod, they did occur during the reign of Caesar Augustus (who is referenced in the Gospel of Luke), and early Christian historians Eusebius and Clement of Alexandria calculated the birth of Jesus to 3-2 BC. Since the conjunction would have been seen in the west at sunset it could not have led the magi south from Jerusalem to Bethlehem.
Double occultation on Saturday (Sabbath) April 17, 6 BC
Astronomer Michael R. Molnar argues that the "star in the east" refers to an astronomical event with astrological significance in the context of ancient Greek astrology. He suggests a link between the Star of Bethlehem and a double occultation of Jupiter by the Moon on March 20 and April 17 of 6 BC in Aries, particularly the second occultation on April 17. Occultations of planets by the Moon are quite common, but Firmicus Maternus, an astrologer to Roman Emperor Constantine, wrote that an occultation of Jupiter in Aries was a sign of the birth of a divine king. He argues that Aries rather than Pisces was the zodiac symbol for Judea, a fact that would affect previous interpretations of astrological material. Molnar's theory was debated by scientists, theologians, and historians during a colloquium on the Star of Bethlehem at the Netherlands' University of Groningen in October 2014. Harvard astronomer Owen Gingerich supports Molnar's explanation but noted technical questions. "The gospel story is one in which King Herod was taken by surprise," said Gingerich. "So it wasn't that there was suddenly a brilliant new star sitting there that anybody could have seen [but] something more subtle." Astronomer David A. Weintraub says, "If Matthew's wise men actually undertook a journey to search for a newborn king, the bright star didn't guide them; it only told them when to set out."
It has also been pointed out that these events were quite close to the Sun and would not have been visible to the naked eye.
Regulus, Jupiter, and Venus
Attorney Frederick Larson examined the biblical account in the Gospel of Matthew, chapter 2 and found the following nine qualities of Bethlehem's Star: It signified birth, it signified kingship, it was related to the Jewish nation, and it rose "in the East"; King Herod had not been aware of it; it appeared at an exact time; it endured over time; and, according to Matthew, it was in front of the Magi when they traveled south from Jerusalem to Bethlehem, and then stopped over Bethlehem.
Using the Starry Night astronomy software, and an article written by astronomer Craig Chester based on the work of archeologist and historian Ernest L. Martin, Larson thinks all nine characteristics of the Star of Bethlehem are found in events that took place in the skies of 3–2 BC. Highlights include a triple conjunction of Jupiter, called the king planet, with the fixed star Regulus, called the king star, starting in September 3 BC. Larson believes that may be the time of Jesus' conception.
By June of 2 BC, nine months later (the length of the human gestation period), Jupiter had continued moving in its orbit around the Sun and appeared in close conjunction with Venus. In Hebrew, Jupiter is called Tzedek (צדק), meaning "righteousness", a term also used for the Messiah. Because the planet Venus represents love and fertility, Chester suggested that astrologers would have viewed the close conjunction of Jupiter and Venus as indicating a coming new king of Israel, and that Herod would have taken them seriously. Astronomer Dave Reneke independently found the June 2 BC planetary conjunction, and noted it would have appeared as a "bright beacon of light". According to Chester, the disks of Jupiter and Venus would have appeared to touch, and there has not been as close a Venus-Jupiter conjunction since then.
Jupiter next continued to move and then stopped in its apparent retrograde motion on December 25 of 2 BC over the town of Bethlehem. Since planets in their orbits have a "stationary point", a planet moves eastward through the stars but, "As it approaches the opposite point in the sky from the sun, it appears to slow, come to a full stop, and move backward (westward) through the sky for some weeks. Again it slows, stops, and resumes its eastward course," said Chester. The date of December 25 that Jupiter appeared to stop while in retrograde took place in the season of Hanukkah, and is the date later chosen to celebrate Christmas.
Heliacal rising
The Magi told Herod that they saw the star "in the East," or according to some translations, "at its rising", which may imply the routine appearance of a constellation, or an asterism. One theory interprets the phrase in Matthew 2:2, "in the east," as an astrological term concerning a "heliacal rising." This translation was proposed by Edersheim and Heinrich Voigt, among others. The view was rejected by the philologist Franz Boll (1867–1924). Two modern translators of ancient astrological texts insist that the text does not use the technical terms for either a heliacal or an acronycal rising of a star. However, one concedes that Matthew may have used layman's terms for a rising.
Comet
Other writers have suggested that the star was a comet. Halley's Comet was visible in 12 BC, and another object, possibly a comet or nova, was seen by Chinese and Korean stargazers in about 5 BC. This object was observed for over seventy days, possibly with no movement recorded. Ancient writers described comets as "hanging over" specific cities, just as the Star of Bethlehem was said to have "stood over" the "place" where Jesus was (the town of Bethlehem). However, this explanation is generally thought unlikely, as in ancient times comets were generally seen as bad omens. The comet explanation has recently been promoted by Colin Nicholl, whose theory involves a hypothetical comet which could have appeared in 6 BC.
Supernova
Physicist Frank Tipler has proposed that the star of Bethlehem was a supernova or hypernova occurring in the nearby Andromeda Galaxy. Although it is difficult to detect a supernova remnant in another galaxy, or obtain an accurate date of when it occurred, supernova remnants have been detected in Andromeda.
Another theory proposes a supernova of February 23, 4 BC, whose remnant has been identified with PSR 1913+16, the Hulse-Taylor pulsar. It is said to have appeared in the constellation of Aquila, near the intersection of the winter colure and the equator of date. The nova was "recorded in China, Korea, and Palestine" (probably meaning the Biblical account).
A nova or comet was recorded in China in 4 BC. "In the reign of Ai-ti, in the third year of the Chien-p'ing period. In the third month, day , there was a rising at Hoku" (Han Shu, The History of the Former Han Dynasty). The date is equivalent to April 24, 4 BC. This identifies the date when it was first observed in China. It was also recorded in Korea: "In the fifty-fourth year of Hyokkose Wang, in the spring, second month, day , a appeared at Hoku" (Samguk Sagi, The Historical Record of the Three Kingdoms). The Korean text may have been corrupted because Ho (1962) points out that "the day did not fall in the second month that year but on the first month" (February 23) and on the third month (April 24). The original must have read "day , first month" (February 23) or "day , third month" (April 24). The latter would coincide with the date in the Chinese records although professor Ho suggests the date was "probably February 23, 4 BC."
Relating the star historically to Jesus' birth
If the story of the Star of Bethlehem described an actual event, it might identify the year Jesus was born. The Gospel of Matthew describes the birth of Jesus as taking place when Herod was king. According to Josephus, Herod died after a lunar eclipse and before a Passover Feast. Some scholars suggested dates in 5 BC, because it allows seven months for the events Josephus documented between the lunar eclipse and the Passover rather than the 29 days allowed by lunar eclipse in 4 BC. Others suggest it was an eclipse in 1 BC. The narrative implies that Jesus was born sometime between the first appearance of the star and the appearance of the Magi at Herod's court. That the king is said to have ordered the execution of boys two years of age and younger implies that the Star of Bethlehem appeared within the preceding two years. Some scholars date the birth of Jesus as 6–4 BC, while others suggest Jesus' birth was in 3–2 BC.
The Gospel of Luke says the census from Caesar Augustus took place when Quirinius was governor of Syria. Tipler suggests this took place in AD 6, nine years after the death of Herod, and that the family of Jesus left Bethlehem shortly after the birth. Some scholars explain the apparent disparity as an error on the part of the author of the Gospel of Luke, concluding that he was more concerned with creating a symbolic narrative than a historical account, and was either unaware of, or indifferent to, the chronological difficulty.
However, there is some debate among Bible translators about the correct reading of Luke 2:2 (). Instead of translating the registration as taking place "when" Quirinius was governor of Syria, some versions translate it as "before" or use "before" as an alternative, which Harold Hoehner, F.F. Bruce, Ben Witherington and others have suggested may be the correct translation. While not in agreement, Emil Schürer also acknowledged that such a translation can be justified grammatically. According to Josephus, the tax census conducted by the Roman senator Quirinius particularly irritated the Jews, and was one of the causes of the Zealot movement of armed resistance to Rome. From this perspective, Luke may have been trying to differentiate the census at the time of Jesus' birth from the tax census mentioned in Acts 5:37 that took place under Quirinius at a later time. One ancient writer identified the census at Jesus' birth, not with taxes, but with a universal pledge of allegiance to the emperor.
Jack Finegan noted some early writers' reckoning of the regnal years of Augustus are the equivalent to 3/2 BC, or 2 BC or later for the birth of Jesus, including Irenaeus (3/2 BC), Clement of Alexandria (3/2 BC), Tertullian (3/2 BC), Julius Africanus (3/2 BC), Hippolytus of Rome (3/2 BC), Hippolytus of Thebes (3/2 BC), Origen (3/2 BC), Eusebius of Caesarea (3/2 BC), Epiphanius of Salamis (3/2 BC), Cassiodorus Senator (3 BC), Paulus Orosius (2 BC), Dionysus Exiguus (1 BC), and Chronographer of the Year 354 (AD 1). Finegan places the death of Herod in 1 BC, and says if Jesus was born two years or less before Herod the Great died, the birth of Jesus would have been in 3 or 2 BC. Finegan also notes the Alogi reckoned Jesus's birth with the equivalent of 4 BC or AD 9.
Religious interpretations
Catholicism
The Catholic Church has no official position on the nature of the Star of Bethlehem, beyond holding that it led the Magi to Jesus. Various theologians have speculated on the star's nature: star, angel, light, person, bird, illusion, hallucination, natural phenomenon, etc.
Eastern Orthodoxy
In the Eastern Orthodox Church, the Star of Bethlehem is interpreted as a miraculous event of symbolic and pedagogical significance, regardless of whether it coincides with a natural phenomenon; a sign sent by God to lead the Magi to the Christ Child. This is illustrated in the Troparion of the Nativity:
In Orthodox Christian iconography, the Star of Bethlehem is often depicted not as golden, but as a dark aureola, a semicircle at the top of the icon, indicating the Uncreated Light of Divine grace, with a ray pointing to "the place where the young child lay" (Matthew 2:9). Sometimes the faint image of an angel is drawn inside the aureola.
Simon the Athonite founded the monastery of Simonopetra on Mount Athos after seeing a star he identified with the Star of Bethlehem.
The Church of Jesus Christ of Latter-day Saints
LDS members believe that the Star of Bethlehem was an actual astronomical event visible the world over. In the 1830 Book of Mormon, which they believe contains writings of ancient prophets, Samuel the Lamanite prophesies that a new star will appear as a sign that Jesus has been born, and Nephi later writes about the fulfillment of this prophecy.
Jehovah's Witnesses
Members of Jehovah's Witnesses believe that the "star" was a vision or sign created by Satan, rather than a sign from God. This is because it led the pagan astrologers first to Jerusalem where King Herod consequently found out about the birth of the "king of the Jews", with the result that he attempted to have Jesus killed.
Seventh-day Adventist
In her 1898 book, The Desire of Ages, Ellen White states "That star was a distant company of shining angels, but of this the wise men were ignorant."
Depiction in art
Paintings and other pictures of the Adoration of the Magi may include a depiction of the star in some form. In the fresco by Giotto di Bondone, it is depicted as a comet. In the tapestry of the subject designed by Edward Burne-Jones (and in the related watercolour), the star is held by an angel.
The colourful star lantern known as a paról is a cherished and ubiquitous symbol of Christmas for Filipinos, its design and light recalling the star. In its basic form, the paról has five points and two "tails" that evoke rays of light pointing the way to the baby Jesus, and the candles once placed inside the lanterns have been superseded by electric illumination.
In the Church of the Nativity in Bethlehem, a silver star with 14 undulating rays marks the location traditionally claimed to be that of Jesus' birth.
In European textiles a common eight-pointed star design is known as the Holy Star of Bethlehem. The design has been used in stone, metal, wood-work and embroidery in the Middle East since antiquity and is one of the oldest patterns in Palestinian tatreez. In 2019 US congresswoman Rashida Tlaib was sworn in wearing a thobe that featured the design. On Vogue Arabia's November 2023 cover the star took a central position in the celebration of Palestinian embroidery. The design also features on Christmas sweaters.
See also
Caesar's Comet
Star of David – the Jewish symbol of King David, of which the Star of Bethlehem is often said to have been a miraculous appearance.
RCW 103
Flag of Nagaland — where the Star of Bethlehem is a symbol of the Christian identity of the Naga people
Notes
References
Bibliography
External links
Case, Shirley Jackson (2006). Jesus: A New Biography, Gorgias Press LLC: New Ed. .
Coates, Richard (2008) 'A linguist's angle on the Star of Bethlehem', Astronomy and Geophysics, 49, pp. 27–49
Consolmagno S.J., Guy (2010) Looking for a star or Coming to Adore?
Gill, Victoria: Star of Bethlehem: the astronomical explanations and Reading the Stars by Helen Jacobus with link to, Jacobus, Helen, Ancient astrology: how sages read the heavens/ Did the heavens predict a king?, BBC
Jenkins, R.M., "The Star of Bethlehem and the Comet of 66AD ", Journal of the British Astronomy Association, June 2004, 114, pp. 336–43. This article argues that the Star of Bethlehem is a historical fiction influenced by the appearance of Halley's Comet in AD 66.
Larson, Frederick A. What Was the Star?
Nicholl, Colin R., The Great Christ Comet: Revealing the True Star of Bethlehem . Crossway, 2015.
Star of Bethlehem Bibliography. Provides an extensive bibliography with Web links to online sources.
Astrology
Astronomical myths
Biblical Magi in the New Testament
Christian mythology
Christian terminology
Gospel of Matthew
Nativity of Jesus in the New Testament
Symbols of Nagaland | Star of Bethlehem | [
"Astronomy"
] | 5,233 | [
"Star of Bethlehem",
"Astronomical myths",
"Astrology",
"History of astronomy"
] |
608,162 | https://en.wikipedia.org/wiki/Suprachiasmatic%20nucleus | The suprachiasmatic nucleus or nuclei (SCN) is a small region of the brain in the hypothalamus, situated directly above the optic chiasm. It is responsible for regulating sleep cycles in animals. Reception of light inputs from photosensitive retinal ganglion cells allow it to coordinate the subordinate cellular clocks of the body and entrain to the environment. The neuronal and hormonal activities it generates regulate many different body functions in an approximately 24-hour cycle.
The SCN also interacts with many other regions of the brain. It contains several cell types, neurotransmitters and peptides, including vasopressin and vasoactive intestinal peptide.
Disruption of or damage to the SCN has been associated with various mood disorders and sleep disorders, underlining the significance of the SCN in regulating circadian timing.
Neuroanatomy
The SCN is situated in the anterior part of the hypothalamus immediately dorsal, or superior (hence supra) to the optic chiasm bilateral to (on either side of) the third ventricle. It consists of two nuclei composed of approximately 10,000 neurons.
The morphology of the SCN is species dependent. Distribution of different cell phenotypes across specific SCN regions, such as the concentration of VP-IR neurons, can cause the shape of the SCN to change.
The nucleus can be divided into ventrolateral and dorsolateral portions, also known as the core and shell, respectively. These regions differ in their expression of the clock genes, the core expresses them in response to stimuli whereas the shell expresses them constitutively.
In terms of projections, the core receives innervation via three main pathways, the retinohypothalamic tract, geniculohypothalamic tract, and projections from some raphe nuclei. The dorsomedial SCN is mainly innervated by the core and also by other hypothalamic areas. Lastly, its output is mainly to the subparaventricular zone and dorsomedial hypothalamic nucleus which both mediate the influence SCN exerts over circadian regulation of the body.
The most abundant peptides found within the SCN are arginine-vasopressin (AVP), vasoactive intestinal polypeptide (VIP), and peptide histidine-isoleucine (PHI). Each of these peptides are localized in different regions. Neurons with AVP are found dorsomedially, whereas VIP-containing and PHI-containing neurons are found ventrolaterally.
Circadian clock
Different organisms such as bacteria, plants, fungi, and animals, show genetically based near-24-hour rhythms. Although all of these clocks appear to be based on a similar type of genetic feedback loop, the specific genes involved are thought to have evolved independently in each kingdom. Many aspects of mammalian behavior and physiology show circadian rhythmicity, including sleep, physical activity, alertness, hormone levels, body temperature, immune function, and digestive activity. Early experiments on the function of the SCN involved lesioning the SCN in hamsters. SCN lesioned hamsters lost their daily activity rhythms. Further, when the SCN of a hamster was transplanted into an SCN lesioned hamster, the hamster adopted the rhythms of the hamster from which the SCN was transplanted. Together, these experiments suggest that the SCN is sufficient for generating circadian rhythms in hamsters.
Later studies have shown that skeletal, muscle, liver, and lung tissues in rats generate 24-hour rhythms, which dampen over time when isolated in a dish, where the SCN maintains its rhythms. Together, these data suggest a model whereby the SCN maintains control across the body by synchronizing "slave oscillators," which exhibit their own near-24-hour rhythms and control circadian phenomena in local tissue.
The SCN receives input from specialized photosensitive ganglion cells in the retina via the retinohypothalamic tract. Neurons in the ventrolateral SCN (vlSCN) have the ability for light-induced gene expression. Melanopsin-containing ganglion cells in the retina have a direct connection to the ventrolateral SCN via the retinohypothalamic tract. When the retina receives light, the vlSCN relays this information throughout the SCN allowing entrainment, synchronization, of the person's or animal's daily rhythms to the 24-hour cycle in nature. The importance of entraining organisms, including humans, to exogenous cues such as the light/dark cycle, is reflected by several circadian rhythm sleep disorders, where this process does not function normally.
Neurons in the dorsomedial SCN (dmSCN) are believed to have an endogenous 24-hour rhythm that can persist under constant darkness (in humans averaging about 24 hours 11 min). A GABAergic mechanism is involved in the coupling of the ventral and dorsal regions of the SCN.
Circadian rhythms of endothermic (warm-blooded) and ectothermic (cold-blooded) vertebrates
Information about the direct neuronal regulation of metabolic processes and circadian rhythm-controlled behaviors is not well known among either endothermic or ectothermic vertebrates, although extensive research has been done on the SCN in model animals such as the mammalian mouse and ectothermic reptiles, particularly lizards. The SCN is known to be involved not only in photoreception through innervation from the retinohypothalamic tract, but also in thermoregulation of vertebrates capable of homeothermy as well as regulating locomotion and other behavioral outputs of the circadian clock within ectothermic vertebrates. The behavioral differences between both classes of vertebrates when compared to the respective structures and properties of the SCN as well as various other nuclei proximate to the hypothalamus provide insight into how these behaviors are the consequence of differing circadian regulation. Ultimately, many neuroethological studies must be done to completely ascertain the direct and indirect roles of the SCN on circadian-regulated behaviors of vertebrates.
The SCN of endotherms and ectotherms
In general, external temperature does not influence endothermic animal circadian rhythm because of the ability of these animals to keep their internal body temperature constant through homeostatic thermoregulation; however, peripheral oscillators (see Circadian rhythm) in mammals are sensitive to temperature pulses and will experience resetting of the circadian clock phase and associated genetic expression, suggesting how peripheral circadian oscillators may be separate entities from one another despite having a master oscillator within the SCN. Furthermore, when individual neurons of the SCN from a mouse were treated with heat pulses, a similar resetting of oscillators was observed, but when an intact SCN was treated with the same heat pulse treatment the SCN was resistant to temperature change by exhibiting an unaltered circadian oscillating phase. In ectothermic animals, particularly the ruin lizard, Podarcis siculus, temperature has been shown to affect the circadian oscillators within the SCN. This reflects a potential evolutionary relationship among endothermic and ectothermic vertebrates as ectotherms rely on environmental temperature to affect their circadian rhythms and behavior while endotherms have an evolved SCN that is resistant to external temperature fluctuations and uses photoreception as a means for entraining the circadian oscillators within their SCN. In addition, the differences of the SCN between endothermic and ectothermic vertebrates suggest that the neuronal organization of the temperature-resistant SCN in endotherms is responsible for driving thermoregulatory behaviors in those animals differently from those of ectotherms, since they rely on external temperature for engaging in certain behaviors.
Behaviors controlled by the SCN of vertebrates
Significant research has been conducted on the genes responsible for controlling circadian rhythm, in particular within the SCN. Knowledge of the gene expression of Clock (Clk) and Period2 (Per2), two of the many genes responsible for regulating circadian rhythm within the individual cells of the SCN, has allowed for a greater understanding of how genetic expression influences the regulation of circadian rhythm-controlled behaviors. Studies on thermoregulation of ruin lizards and mice have informed some connections between the neural and genetic components of both vertebrates when experiencing induced hypothermic conditions. Certain findings have reflected how evolution of SCN both structurally and genetically has resulted in the engagement of characteristic and stereotyped thermoregulatory behavior in both classes of vertebrates.
Mice: Among vertebrates, it is known that mammals are endotherms that are capable of homeostatic thermoregulation. It has been shown that mice display thermosensitivity within the SCN. However, the regulation of body temperature in hypothermic mice is more sensitive to the amount of light in their environment. Even while fasted, mice in darkened conditions and experiencing hypothermia maintained a stable internal body temperature. In light conditions, mice showed a drop in body temperature under the same fasting and hypothermic conditions. Through analyzing genetic expression of Clock genes in wild-type and knockout strains, as well as analyzing the activity of neurons within the SCN and connections to proximate nuclei of the hypothalamus in the aforementioned conditions, it has been shown that the SCN is the center of control for circadian body temperature rhythm. This circadian control, thus, includes both direct and indirect influence of many of the thermoregulatory behaviors that mammals engage in to maintain homeostasis.
Ruin lizards: Several studies have been conducted on the genes expressed in circadian oscillating cells of the SCN during various light and dark conditions, as well as effects from inducing mild hypothermia in reptiles. In terms of structure, the SCNs of lizards have a closer resemblance to those of mice, possessing a dorsomedial portion and a ventrolateral core. However, genetic expression of the circadian-related Per2 gene in lizards is similar to that in reptiles and birds, despite the fact that birds have been known to have a distinct SCN structure consisting of a lateral and medial portion. Studying the lizard SCN because of the lizard's small body size and ectothermy is invaluable to understanding how this class of vertebrates modifies its behavior within the dynamics of circadian rhythm, but it has not yet been determined whether the systems of cold-blooded vertebrates were slowed as a result of decreased activity in the SCN or showed decreases in metabolic activity as a result of hypothermia.
Other signals from the retina
The SCN is one of many nuclei that receive nerve signals directly from the retina.
Some of the others are the lateral geniculate nucleus (LGN), the superior colliculus, the basal optic system, and the pretectum:
The LGN passes information about color, contrast, shape, and movement on to the visual cortex and itself signals to the SCN.
The superior colliculus controls the movement and orientation of the eye.
The basal optic system also controls eye movements.
The pretectum controls the size of the pupil.
Genetic Basis of SCN Function
The SCN is the central circadian pacemaker of mammals, serving as the coordinator of mammalian circadian rhythms. Neurons in an intact SCN show coordinated circadian rhythms in electrical activity. Neurons isolated from the SCN have been shown to produce and sustain circadian rhythms in vitro, suggesting that each individual neuron of the SCN can function as an independent circadian oscillator at the cellular level. Each cell of the SCN synchronizes its oscillations to the cells around it, resulting in a network of mutually reinforced and precise oscillations constituting the SCN master clock.
Mammals
The SCN functions as a circadian biological clock in vertebrates including teleosts, reptiles, birds, and mammals. In mammals, the rhythms produced by the SCN are driven by a transcription-translation negative feedback loop (TTFL) composed of interacting positive and negative transcriptional feedback loops. Within the nucleus of an SCN cell, the genes Clock and Bmal1 (mop3) encode the BHLH-PAS transcription factors CLOCK and BMAL1 (MOP3), respectively. CLOCK and BMAL1 are positive activators that form CLOCK-BMAL1 heterodimers. These heterodimers then bind to E-boxes upstream of multiple genes, including per and cry, to enhance and promote their transcription and eventual translation. In mammals, there are three known homologs for the period gene in Drosophila, namely per1, per2, and per3.
As per and cry are transcribed and translated into PER and CRY, the proteins accumulate and form heterodimers in the cytoplasm. The heterodimers are phosphorylated at a rate that determines the period of the TTFL and then translocate into the nucleus, where the phosphorylated PER-CRY heterodimers act on CLOCK and/or BMAL1 to inhibit their activity. Although the role of phosphorylation in the TTFL mechanism is known, the specific kinetics are yet to be elucidated. PER and CRY thus function as negative repressors and inhibit the transcription of per and cry. Over time, the PER-CRY heterodimers degrade and the cycle begins again, with a period of about 24.5 hours. The integral genes involved, termed "clock genes," are highly conserved both in SCN-bearing vertebrates such as mice, rats, and birds and in non-SCN-bearing animals such as Drosophila.
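The delayed negative feedback described above can be illustrated with a deliberately simplified toy simulation. The sketch below is a minimal delayed negative-feedback oscillator, not a model of real SCN biochemistry: the single variable, the explicit time delay, the Hill-type repression term and every parameter value are assumptions chosen only to show that such a loop can sustain oscillations; the script then estimates the resulting period from the simulated peaks.

```python
import numpy as np

# Minimal sketch of a delayed negative-feedback oscillator (illustrative only).
# P loosely stands for a repressor such as PER-CRY: its production is suppressed
# by its own level a fixed delay earlier, mimicking the negative limb of the TTFL.
dt = 0.01                           # integration step, in hours (assumed)
hours = 240.0                       # total simulated time: ten days
tau = 8.0                           # feedback delay in hours (assumed)
v, K, n, k = 1.0, 1.0, 4.0, 0.5     # production, threshold, Hill exponent, decay (assumed)

steps = int(hours / dt)
delay_steps = int(tau / dt)
P = np.full(steps, 0.1)             # constant initial history over the first tau hours

for t in range(delay_steps, steps - 1):
    delayed = P[t - delay_steps]                    # repressor level tau hours earlier
    production = v / (1.0 + (delayed / K) ** n)     # delayed self-repression
    P[t + 1] = P[t] + dt * (production - k * P[t])  # forward Euler update

# Crude period estimate: mean spacing between local maxima in the second half of the run.
half = steps // 2
peaks = [i for i in range(half, steps - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
if len(peaks) > 1:
    print(f"estimated period: {np.mean(np.diff(peaks)) * dt:.1f} hours")
```

In the real TTFL there is no single explicit delay; the lag arises from transcription, translation, phosphorylation-controlled accumulation and nuclear entry of the PER-CRY complexes, which is why the phosphorylation rate mentioned above sets the period of the cycle.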
Electrophysiology
Neurons in the SCN fire action potentials in a 24-hour rhythm, even under constant conditions: the firing rate reaches a maximum at mid-day and falls again during the night. Rhythmic expression of circadian regulatory genes in the SCN requires depolarization of SCN neurons mediated by calcium and cAMP, and this depolarization contributes to the magnitude of the rhythmic gene expression in the SCN.
Further, the SCN synchronizes nerve impulses which spread to various parasympathetic and sympathetic nuclei. The sympathetic nuclei drive glucocorticoid output from the adrenal gland which activates Per1 in the body cells, thus resetting the circadian cycle of cells in the body. Without the SCN, rhythms in body cells dampen over time, which may be due to lack of synchrony between cells.
Many SCN neurons are sensitive to light stimulation via the retina. The photic response is likely linked to effects of light on circadian rhythms. In addition, application of melatonin in live rats and isolated SCN cells can decrease the firing rate of these neurons. Variations in light input due to jet lag, seasonal changes, and constant light conditions all change the firing rhythm of SCN neurons, demonstrating the relationship between light and SCN neuronal functioning.
Clinical significance
Irregular sleep-wake rhythm disorder
Irregular sleep-wake rhythm (ISWR) disorder is thought to be caused by structural damage to the SCN, decreased responsiveness of the circadian clock to light and other stimuli, and decreased exposure to light. People who tend to stay indoors and limit their exposure to light experience decreased nocturnal melatonin production. This decrease in nocturnal melatonin production corresponds with greater SCN-generated wakefulness during the night, causing irregular sleep patterns.
Major depressive disorder
Major depressive disorder (MDD) has been associated with altered circadian rhythms. Patients with MDD show weaker rhythms of clock gene expression in the brain. In a study conducted with mice, disturbance of SCN rhythms produced anxiety-like behavior, weight gain, helplessness, and despair, and abnormal glucocorticoid levels occurred in mice with no Bmal1 expression in the SCN.
Alzheimer's disease
The functional disruption of the SCN can be observed in early stages of Alzheimer's disease (AD). Changes in the SCN and in melatonin secretion are major factors that cause circadian rhythm disturbances. These disturbances alter the normal physiology of sleep, including the timing of the biological clock and the regulation of body temperature during rest. Patients with AD experience insomnia, hypersomnia, and other sleep disorders as a result of the degeneration of the SCN and changes in critical neurotransmitter concentrations.
History
The idea that the SCN is the main sleep cycle regulator in mammals was proposed by Robert Moore, who conducted experiments using radioactive amino acids to determine where the retinohypothalamic projection terminates in rodents. Early lesioning experiments in mice, guinea pigs, cats, and opossums established that removal of the SCN abolishes circadian rhythms in mammals.
See also
Chronobiology
Photosensitive ganglion cell
Sense of time
Retinohypothalamic tract
Shift work sleep disorder
Non-24-hour sleep–wake disorder
References
External links
Diagram at thebrain.mcgill.ca
Hypothalamus
Circadian rhythm
Sleep physiology | Suprachiasmatic nucleus | [
"Biology"
] | 3,670 | [
"Behavior",
"Sleep physiology",
"Sleep",
"Circadian rhythm"
] |
608,168 | https://en.wikipedia.org/wiki/Titadine | Titadyn 30 AG (often referred to as Titadine) is a type of compressed dynamite used in mining and manufactured in southern France by Titanite S.A. The explosive comes in the form of salmon-coloured tubes of a range of diameters, from 50 to 120 mm. Titadine is very powerful and fast-burning, with an energy rating of 4650 J/g and a speed of detonation of over 6,000 m/s.
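For a rough sense of scale, the quoted energy rating can be compared with the conventional TNT-equivalence benchmark. The snippet below is only illustrative arithmetic; the TNT figure of 4184 J/g is an assumption introduced here for comparison and is not taken from the article.

```python
# Rough TNT-equivalence arithmetic for the quoted energy rating (illustrative only).
TITADYN_ENERGY_J_PER_G = 4650.0   # energy rating quoted above
TNT_BENCHMARK_J_PER_G = 4184.0    # conventional TNT-equivalence value (assumption)

relative = TITADYN_ENERGY_J_PER_G / TNT_BENCHMARK_J_PER_G
print(f"Energy per gram relative to the TNT benchmark: {relative:.2f}")
# Prints about 1.11, i.e. roughly 11% more energy per gram than the benchmark.
```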
It was used in bomb attacks by the separatist group ETA in Spain. In September 1999 a combined group of ETA members and Breton separatists raided a factory at Plevin, Brittany, stealing over eight tonnes of Titadyn, some of which was subsequently sold to the Islamist resistance group Hamas, according to Spain's El Mundo newspaper. Another raid took place in March 2001 when an explosives factory near Grenoble in France was targeted and 1.6 tonnes of Titadyn was stolen. Much of it was later recovered by Spanish police in raids, or was used by ETA in car bomb attacks in Spanish cities.
References
External links
Last report of explosives used in the 11-M
Explosives | Titadine | [
"Chemistry"
] | 238 | [
"Explosives",
"Explosions"
] |
608,181 | https://en.wikipedia.org/wiki/Brood%20parasitism | Brood parasitism is a subclass of parasitism: a behavioural pattern in which animals rely on others to raise their young. The strategy appears among birds, insects and fish. The brood parasite manipulates a host, either of the same or of another species, to raise its young as if it were its own, usually using egg mimicry, with eggs that resemble the host's. The strategy involves a form of aggressive mimicry called Kirbyan mimicry.
The evolutionary strategy relieves the parasitic parents from the investment of rearing young. This benefit comes at the cost of provoking an evolutionary arms race between parasite and host as they coevolve: many hosts have developed strong defenses against brood parasitism, such as recognizing and ejecting parasitic eggs, or abandoning parasitized nests and starting over. It is less obvious why most hosts do care for parasite nestlings, given that for example cuckoo chicks differ markedly from host chicks in size and appearance. One explanation, the mafia hypothesis, proposes that parasitic adults retaliate by destroying host nests where rejection has occurred; there is experimental evidence to support this. Intraspecific brood parasitism also occurs, as in many duck species. Here there is no visible difference between host and parasite eggs, which may be why the parasite eggs are so readily accepted. In eider ducks, the first and second eggs in a nest are especially subject to predation, perhaps explaining why they are often laid in another eider nest.
Evolutionary strategy
Brood parasitism is an evolutionary strategy that relieves the parasitic parents from the investment of rearing young or building nests for the young by getting the host to raise their young for them. This enables the parasitic parents to spend more time on other activities such as foraging and producing further offspring.
Adaptations for parasitism
Among specialist avian brood parasites, mimetic eggs are a nearly universal adaptation. The generalist brown-headed cowbird may have evolved an egg coloration mimicking a number of their hosts. Size may also be important for the incubation and survival of parasitic species; it may be beneficial for parasitic eggs to be similar in size to the eggs of the host species.
The eggshells of brood parasites are often thicker than those of the hosts. For example, two studies of cuckoos parasitizing great reed warblers reported thickness ratios of 1.02 : 0.87 and 1.04 : 0.81. The function of this thick eggshell is debated. One hypothesis, the puncture resistance hypothesis, states that the thicker eggshells serve to prevent hosts from breaking the eggshell, thus killing the embryo inside. This is supported by a study in which marsh warblers damaged their own eggs more often when attempting to break cuckoo eggs, but incurred less damage when trying to puncture great reed warbler eggs put in the nest by researchers. Another hypothesis is the laying damage hypothesis, which postulates that the eggshells are adapted to damage the eggs of the host when the former is being laid, and prevent the parasite's eggs from being damaged when the host lays its eggs. In support of this hypothesis, eggs of the shiny cowbird parasitizing the house wren and the chalk-browed mockingbird and the brown-headed cowbird parasitizing the house wren and the red-winged blackbird damaged the host's eggs when dropped, and sustained little damage when host eggs were dropped on them.
Most avian brood parasites have very short egg incubation periods and rapid nestling growth. In many brood parasites, such as cuckoos and honeyguides, this short egg incubation period is due to internal incubation of the egg before laying, which can last up to 24 hours longer in cuckoos than in their hosts. Some non-parasitic cuckoos also have longer internal incubation periods, suggesting that this trait was not an adaptation following brood parasitism but instead predisposed birds to become brood parasites. This is likely facilitated by a heavier yolk in the egg providing more nutrients. Being larger than the host's young at hatching is a further adaptation to being a brood parasite.
Evolutionary arms race
Bird parasites mitigate the risk of egg loss by distributing eggs amongst a number of different hosts. As such behaviours damage the host, they often result in an evolutionary arms race between parasite and host as they coevolve.
Some host species have strong rejection defenses, forcing the parasitic species to evolve excellent mimicry. In other species, hosts do not defend against parasites, and the parasitic mimicry is poor.
Intraspecific brood parasitism among coots significantly increases the reproductive fitness of the parasite, but only about half of the eggs laid parasitically in other coot nests survive. This implies that coots have somewhat effective anti-parasitism strategies. Similarly, the parasitic offspring of bearded reedlings, compared to offspring in non-parasitic nests, tend to develop much more slowly and often do not reach full maturity.
Given that the cost to the host of egg removal by the parasite is unrecoverable, the best strategy for hosts is to avoid parasitism in the first place. This can take several forms, including selecting nest sites which are difficult to parasitize, starting incubation early so they are already sitting on the nests when parasites visit them early in the morning, and aggressively defending their territory.
Once a parasitic egg has arrived in a host's nest, the next most optimal defense is to eject the parasitic egg. This requires the host to distinguish which eggs are not theirs, by identifying pattern differences or changes in the number of eggs. Eggs may be ejected by grasping, if the host has a large enough beak, or by puncturing. When the parasitic eggs are mimetic, hosts may mistake one of their own eggs for a parasite's. A host might also damage its own eggs while trying to eject a parasite's egg.
Among hosts that do not eject parasitic eggs, some abandon parasitized nests and start over again. However, at high enough parasitism frequencies, this becomes maladaptive as the new nest will most likely also be parasitized. Some host species modify their nests to exclude the parasitic egg, either by weaving over the egg or by rebuilding a new nest over the existing one. For instance, American coots may kick the parasites' eggs out, or build a new nest beside the brood nests where the parasites' chicks starve to death. In the western Bonelli's warbler, a small host, small dummy parasitic eggs were always ejected, whilst with large dummy parasitic eggs, nest desertion was more frequent.
Mafia hypothesis
There is a question as to why the majority of the hosts of brood parasites care for the nestlings of their parasites. Not only do these brood parasites usually differ significantly in size and appearance, but it is also highly probable that they reduce the reproductive success of their hosts. The "mafia hypothesis" proposes that when a brood parasite discovers that its egg has been rejected, it destroys the host's nest and injures or kills the nestlings. The threat of such a response may encourage compliant behavior from the host. Mafia-like behavior occurs in the brown-headed cowbird of North America, and the great spotted cuckoo of Europe. The great spotted cuckoo lays most of its eggs in the nests of the European magpie. It repeatedly visits nests it has parasitised, a precondition for the mafia hypothesis. In experiments, nests from which the parasite's egg has been removed are destroyed by the cuckoo, supporting the hypothesis. An alternative explanation is that the destruction encourages the magpie host to build a new nest, giving the cuckoo another opportunity for parasitism. Similarly, the brown-headed cowbird parasitises the prothonotary warbler. In other experiments, 56% of egg-ejected nests were predated upon, against 6% of non-ejected nests. 85% of parasitized nests rebuilt by hosts were destroyed. Hosts that ejected parasite eggs produced 60% fewer young than those that accepted the cowbird eggs.
Similarity hypothesis
Common cuckoo females have been proposed to select hosts with egg characteristics similar to their own. The hypothesis suggests that the female monitors a population of potential hosts and chooses nests from within this group. Study of museum nest collections shows a similarity between cuckoo eggs and typical eggs of the host species. A low percentage of parasitized nests were shown to contain cuckoo eggs not corresponding to the specific host egg morph. In these mismatched nests, a high percentage of the cuckoo eggs corresponded to the egg morph of another host species with similar nesting sites. This has been pointed to as evidence for selection by similarity. The hypothesis has been criticised for providing no mechanism for choosing nests and for not identifying the cues by which they might be recognised.
Hosts raise offspring
Sometimes hosts are completely unaware that they are caring for a bird that is not their own. This most commonly occurs because the host cannot differentiate the parasitic eggs from their own. It may also occur when hosts temporarily leave the nest after laying the eggs. The parasites lay their own eggs into these nests so their nestlings share the food provided by the host. It may occur in other situations. For example, female eiders prefer to lay eggs in the nests with one or two existing eggs of others because the first egg is the most vulnerable to predators. The presence of others' eggs reduces the probability that a predator will attack her egg when a female leaves the nest after laying the first egg.
Sometimes, the parasitic offspring kills the host nest-mates during competition for resources. For example, parasitic cowbird chicks kill the host nest-mates if food intake for each of them is low, but not if the food intake is adequate.
Taxonomic range
Birds
Intraspecific
In many socially monogamous bird species, extra-pair matings result in males outside the pair bond siring offspring, allowing those males to escape the parental investment of raising their young. In duck species such as the goldeneye, this form of cuckoldry is taken a step further, as females often lay their eggs in the nests of other individuals. Intraspecific brood parasitism has been recorded in 234 bird species, including 74 Anseriformes, 66 Passeriformes, 32 Galliformes, 19 Charadriiformes, 8 Gruiformes, 6 Podicipediformes, and small numbers of species in other orders.
Interspecific
Interspecific brood-parasites include the indigobirds, whydahs, and honeyguides in Africa, cowbirds, Old World cuckoos, black-headed ducks, and some New World cuckoos in the Americas. Seven independent origins of obligate interspecific brood parasitism in birds have been proposed. While there is still some controversy over when and how many origins of interspecific brood parasitism have occurred, recent phylogenetic analyses suggest two origins in Passeriformes (once in New World cowbirds: Icteridae, and once in African Finches: Viduidae); three origins in Old World and New World cuckoos (once in Cuculinae, Phaenicophaeinae, and in Neomorphinae-Crotophaginae); a single origin in Old World honeyguides (Indicatoridae); and in a single species of waterfowl, the black-headed duck (Heteronetta atricapilla).
Most avian brood parasites are specialists which parasitize only a single host species or a small group of closely related host species, but four out of the five parasitic cowbirds (all except the screaming cowbird) are generalists which parasitize a wide variety of hosts; the brown-headed cowbird has 221 known hosts. They usually lay only one egg per nest, although in some cases, particularly the cowbirds, several females may use the same host nest.
The common cuckoo presents an interesting case in which the species as a whole parasitizes a wide variety of hosts, including the reed warbler and dunnock, but individual females specialize in a single species. Genes regulating egg coloration appear to be passed down exclusively along the maternal line, allowing females to lay mimetic eggs in the nest of the species they specialize in. Females generally parasitize nests of the species which raised them. Male common cuckoos fertilize females of all lines, which maintains sufficient gene flow among the different maternal lines to prevent speciation.
The mechanisms of host selection by female cuckoos are somewhat unclear, though several hypotheses have been suggested in an attempt to explain the choice. These include genetic inheritance of host preference, host imprinting on young birds, returning to place of birth and subsequently choosing a host randomly ("natal philopatry"), choice based on preferred nest site (nest-site hypothesis), and choice based on preferred habitat (habitat-selection hypothesis). Of these hypotheses, nest-site selection and habitat selection have been the most supported by experimental analysis.
Fish
Mouthbrooding parasites
A mochokid catfish of Lake Tanganyika, Synodontis multipunctatus, is a brood parasite of several mouthbrooding cichlid fish. The catfish eggs are incubated in the host's mouth, and—in the manner of cuckoos—hatch before the host's own eggs. The young catfish eat the host fry inside the host's mouth, effectively taking up virtually the whole of the host's parental investment.
Nest parasites
A cyprinid minnow, Pungtungia herzi, is a brood parasite of the percichthyid freshwater perch Siniperca kawamebari, which lives in the south of the Japanese islands of Honshu, Kyushu and Shikoku, and in South Korea. Host males guard territories against intruders during the breeding season, creating a patch of reeds as a spawning site or "nest". Females (one or more per site) visit the site to lay eggs, which the male then defends. The parasite's eggs are smaller and stickier than the host's. In one study area, 65.5% of host sites were parasitised.
Insects
Kleptoparasites
There are many different types of cuckoo bees, all of which lay their eggs in the nest cells of other bees, but they are normally described as kleptoparasites (Greek: klepto-, to steal), rather than as brood parasites, because the immature stages are almost never fed directly by the adult hosts. Instead, they simply take food gathered by their hosts. Examples of cuckoo bees are Coelioxys rufitarsis, Melecta separata, Nomada and Epeoloides.
Kleptoparasitism in insects is not restricted to bees; several lineages of wasp including most of the Chrysididae, the cuckoo wasps, are kleptoparasites. The cuckoo wasps lay their eggs in the nests of other wasps, such as those of the potters and mud daubers. Some species of beetle are kleptoparasites, as well. Meloe americanus larvae are known to enter bee nests and feed on the provisions reserved for the bee larva.
True brood parasites
True brood parasitism is rare among insects. Cuckoo bumblebees (the subgenus Psithyrus) are among the few insects which, like cuckoos and cowbirds, are fed by adult hosts. Their queens kill and replace the existing queen of a colony of the host species, and then use the host workers to feed their brood.
One of only four true brood-parasitic wasps is Polistes semenowi. This paper wasp has lost the ability to build its own nest, and relies on its host, P. dominula, to raise its brood. The adult host feeds the parasite larvae directly, unlike typical kleptoparasitic insects. Such insect social parasites are often closely related to their hosts, an observation known as Emery's rule.
Host insects are sometimes tricked into bringing offspring of another species into their own nests, as with the parasitic butterfly, Phengaris rebeli, and the host ant Myrmica schencki. The butterfly larvae release chemicals that confuse the host ant into believing that the P. rebeli larvae are actually ant larvae. Thus, the M. schencki ants bring back the P. rebeli larvae to their nests and feed them, much like the chicks of cuckoos and other brood-parasitic birds. This is also the case for the parasitic butterfly, Niphanda fusca, and its host ant Camponotus japonicus. The butterfly releases cuticular hydrocarbons that mimic those of the host male ant. The ant then brings the third instar larvae back into its own nest and raises them until pupation.
See also
Broodiness
Aggressive mimicry – including host-parasite mimicry
Kleptoparasite
Slave-making ant
Notes
References
External links
Field Museum: host lists for all known brood-parasitic birds
Parasitism
Bird breeding | Brood parasitism | [
"Biology"
] | 3,502 | [
"Parasitism",
"Symbiosis"
] |
608,186 | https://en.wikipedia.org/wiki/%CE%92-Carboline | β-Carboline (9H-pyrido[3,4-b]indole) represents the basic chemical structure for more than one hundred alkaloids and synthetic compounds. The effects of these substances depend on their respective substituent. Natural β-carbolines primarily influence brain functions but can also exhibit antioxidant effects. Synthetically designed β-carboline derivatives have recently been shown to have neuroprotective, cognitive enhancing and anti-cancer properties.
Pharmacology
The pharmacological effects of specific β-carbolines depend on their substituents. For example, the natural β-carboline harmine carries substituents at positions 1 and 7 and acts as a selective inhibitor of the DYRK1A protein kinase, a protein necessary for neurodevelopment. It also exhibits various antidepressant-like effects in rats by interacting with serotonin receptor 2A. Furthermore, it increases levels of brain-derived neurotrophic factor (BDNF) in the rat hippocampus; a decreased BDNF level has been associated with major depression in humans. The antidepressant effect of harmine might also be due to its function as an MAO-A inhibitor, reducing the breakdown of serotonin and noradrenaline.
A synthetic derivative, 9-methyl-β-carboline, has shown neuroprotective effects including increased expression of neurotrophic factors and enhanced respiratory chain activity. This derivative has also been shown to enhance cognitive function, increase dopaminergic neuron count and facilitate synaptic and dendritic proliferation. It also exhibited therapeutic effects in animal models for Parkinson's disease and other neurodegenerative processes.
However, β-carbolines with substituents at position 3 reduce the effect of benzodiazepines on GABA-A receptors and can therefore have convulsive, anxiogenic and memory-enhancing effects. Moreover, 3-hydroxymethyl-β-carboline blocks the sleep-promoting effect of flurazepam in rodents and – by itself – can decrease sleep in a dose-dependent manner. Another derivative, methyl-β-carboline-3-carboxylate, stimulates learning and memory at low doses but can promote anxiety and convulsions at high doses. With additional modification at position 9, similar positive effects on learning and memory have been observed without promoting anxiety or convulsions.
β-carboline derivatives also enhance the production of the antibiotic reveromycin A in soil-dwelling Streptomyces species. Specifically, expression of biosynthetic genes is facilitated by binding of the β-carboline to a large ATP-binding regulator of the LuxR family.
Lactobacillus spp. also secrete a β-carboline (1-acetyl-β-carboline) that prevents the pathogenic fungus Candida albicans from changing to a more virulent growth form (yeast-to-filament transition). In this way, the β-carboline counteracts imbalances in microbiome composition that cause pathologies ranging from vaginal candidiasis to fungal sepsis.
Since β-carbolines also interact with various cancer-related molecules such as DNA, enzymes (GPX4, kinases, etc.) and proteins (ABCG2/BCRP1, etc.), they are also discussed as potential anticancer agents.
Explorative human studies for the medical use of β-carbolines
The extract of the liana Banisteriopsis caapi has been used by the tribes of the Amazon as an entheogen and was described as a hallucinogen in the middle of the 19th century. In the early 20th century, European pharmacists identified harmine as the active substance. This discovery stimulated interest in further investigating its potential as a medicine. For example, Louis Lewin, a prominent pharmacologist, demonstrated a dramatic benefit in neurological impairments after injections of B. caapi in patients with postencephalitic Parkinsonism. By 1930, it was generally agreed that hypokinesia, drooling, mood, and sometimes rigidity improved with harmine treatment. Altogether, 25 studies had been published in the 1920s and 1930s about patients with Parkinson's disease and postencephalitic Parkinsonism. The pharmacological effects of harmine have been attributed mainly to its central monoamine oxidase (MAO) inhibitory properties. In vivo and rodent studies have shown that extracts of Banisteriopsis caapi and also Peganum harmala lead to striatal dopamine release. Furthermore, harmine supports the survival of dopaminergic neurons in MPTP-treated mice. Since harmine also antagonizes N-methyl-d-aspartate (NMDA) receptors, some researchers speculatively attributed the rapid improvement in patients with Parkinson's disease to these antiglutamatergic effects. However, the advent of synthetic anticholinergic drugs at that time led to the total abandonment of harmine.
Structure
β-Carbolines belong to the group of indole alkaloids and consist of a pyridine ring that is fused to an indole skeleton. The structure of β-carboline is similar to that of tryptamine, with the ethylamine chain re-connected to the indole ring via an extra carbon atom, to produce a three-ringed structure. The biosynthesis of β-carbolines is believed to follow this route from analogous tryptamines. Different levels of saturation are possible in the third ring, where certain bonds may be present as either single or double bonds.
Examples of β-carbolines
Some of the more important β-carbolines, such as harmine, harmaline and tetrahydroharmine, differ in their substituents and in the level of saturation of the third ring.
Natural occurrence
β-Carboline alkaloids are widespread in prokaryotes, plants and animals. Some β-carbolines, notably tetrahydro-β-carbolines, may be formed naturally in plants and the human body with tryptophan, serotonin and tryptamine as precursors.
Altogether, eight plant families are known to express 64 different kinds of β-carboline alkaloids. For example, the β-carbolines harmine, harmaline, and tetrahydroharmine are components of the liana Banisteriopsis caapi and play a pivotal role in the pharmacology of the indigenous psychedelic drug ayahuasca. Moreover, the seeds of Peganum harmala (Syrian Rue) contain between 0.16% and 5.9% β-carboline alkaloids (by dry weight).
A specific group of β-carboline derivatives, termed eudistomins, were extracted from ascidians (marine tunicates of the family Ascidiacea) such as Ritterella sigillinoides, Lissoclinum fragile or Pseudodistoma aureum.
Nostocarboline was isolated from a freshwater cyanobacterium.
The fully aromatic β-carbolines also occur in many foodstuffs, though in lower concentrations. The highest amounts have been detected in brewed coffee, raisins, well-done fish and meats. Smoking is another source of fully aromatic β-carbolines, with levels up to thousands of μg per smoker each day.
β-Carbolines have also been found in the cuticle of scorpions, causing their skin to fluoresce when exposed to ultraviolet light at certain wavelengths (e.g. blacklight).
See also
γ-Carboline
Harmala alkaloid
Oxopropaline
Tryptamine
References
External links
TiHKAL #44
TiHKAL in general
Beta-carbolines in coffee
Anxiolytics
Convulsants
Entheogens
Monoamine oxidase inhibitors
GABAA receptor negative allosteric modulators
Indole alkaloids
Tryptamines | Β-Carboline | [
"Chemistry"
] | 1,708 | [
"Tryptamine alkaloids",
"Alkaloids by chemical classification",
"Indole alkaloids"
] |
608,199 | https://en.wikipedia.org/wiki/War%20of%20aggression | A war of aggression, sometimes also war of conquest, is a military conflict waged without the justification of self-defense, usually for territorial gain and subjugation, in contrast with the concept of a just war.
Wars without international legality (i.e. not out of self-defense nor sanctioned by the United Nations Security Council) can be considered wars of aggression; however, this alone usually does not constitute the definition of a war of aggression; certain wars may be unlawful but not aggressive (a war to settle a boundary dispute where the initiator has a reasonable claim, and limited aims, is one example).
In the judgment of the International Military Tribunal at Nuremberg, which followed World War II, "War is essentially an evil thing. Its consequences are not confined to the belligerent states alone, but affect the whole world. To initiate a war of aggression, therefore, is not only an international crime; it is the supreme international crime differing only from other war crimes in that it contains within itself the accumulated evil of the whole."
Article 39 of the United Nations Charter provides that the UN Security Council shall determine the existence of any act of aggression and "shall make recommendations, or decide what measures shall be taken in accordance with Articles 41 and 42, to maintain or restore international peace and security". The Rome Statute of the International Criminal Court refers to the crime of aggression as one of the "most serious crimes of concern to the international community", and provides that the crime falls within the jurisdiction of the International Criminal Court (ICC). However, the Rome Statute stipulates that the ICC may not exercise its jurisdiction over the crime of aggression until such time as the states parties agree on a definition of the crime and set out the conditions under which it may be prosecuted. At the Kampala Review Conference on 11 June 2010, a total of 111 State Parties to the Court agreed by consensus to adopt a resolution accepting the definition of the crime and the conditions for the exercise of jurisdiction over this crime. The relevant amendments to the Statute entered into force on July 17, 2018 after being ratified by 35 States Parties.
Possibly the first trial for waging aggressive war is that of the Sicilian king Conradin in 1268.
Definitions
The origin of the concept, the author Peter Maguire argues, emerged from the debate on Article 231 of the Treaty of Versailles of 1919: "Germany accepts the responsibility of Germany and her allies for causing all the loss and damage to which the Allied and Associated Governments and their nationals have been subjected as a consequence of the war imposed upon them by the aggression of Germany and her allies." Maguire argues:
The Japanese invasion of Manchuria had a significant negative effect on the moral strength and influence of the League of Nations. As critics had predicted, the League was powerless if a strong nation decided to pursue an aggressive policy against other countries, allowing a country such as Japan to commit blatant aggression without serious consequences. Adolf Hitler and Benito Mussolini were also aware of this, and ultimately both followed Japan's example in aggression against their neighbors: in the case of Italy, against Ethiopia (1935–1937) and Albania (1939); and Germany, against Czechoslovakia (1938–1939) and Poland (1939).
In November 1935, the League of Nations condemned Italy's aggression in Ethiopia and imposed economic sanctions. The prominent jurist Hans Kelsen argued that in the Ethiopian case, the League had "at least made certain efforts to fulfill its duty in the cases of illegal aggression undertaken by member states against other member states."
The Convention for the Definition of Aggression
Two Conventions for the Definition of Aggression were signed in London on 3 and 4 July 1933. The first was signed by Czechoslovakia, Romania, the Soviet Union, Turkey and Yugoslavia, and came into effect on 17 February 1934, when it was ratified by all of them but Turkey. The second was signed by Afghanistan (ratified 20 October 1933), Estonia (4 December), Latvia (4 December), Persia (16 November), Poland (16 October), Romania (16 October), the Soviet Union (16 October) and Turkey, which ratified both treaties on 23 March 1934. Finland acceded to the second convention on 31 January 1934. The second convention was the first to be registered with the League of Nations Treaty Series on 29 March 1934, while the first was registered on 26 April. As Lithuania refused to sign any treaty including Poland, it signed the definition of aggression in a separate pact with the Soviet Union on 5 July 1933, also in London, and exchanged ratifications on 14 December. It was registered in the Treaty Series on 16 April 1934.
The signatories of both treaties were also signatories of the Kellogg–Briand Pact prohibiting aggression, and were seeking an agreed definition of the latter. Czechoslovakia, Romania and Yugoslavia were members of the Little Entente, and their signatures alarmed Bulgaria, since the definition of aggression clearly covered its support of the Internal Macedonian Revolutionary Organization. Both treaties base their definition on the "Politis Report" of the Committee of Security Questions made 24 March 1933 to the Conference for the Reduction and Limitation of Armaments, in answer to a proposal of the Soviet delegation. The Greek politician Nikolaos Politis was behind the inclusion of "support for armed bands" as a form of aggression. Ratifications for both treaties were deposited in Moscow, as the convention was primarily the work of Maxim Litvinov, the Soviet signatory. The convention defined an act of aggression as follows:
Declaration of war upon another State.
Invasion by its armed forces, with or without a declaration of war, of the territory of another State.
Attack by its land, naval or air forces, with or without a declaration of war, on the territory, vessels or aircraft of another State.
Naval blockade of the coasts or ports of another State.
Provision of support to armed bands formed in its territory which have invaded the territory of another State, or refusal, notwithstanding the request of the invaded State, to take, in its own territory, all the measures in its power to deprive those bands of all assistance or protection.
The League prerogative under that convention to expel a League member found guilty of aggression was used by the League Assembly only once, against the Soviet government itself, on December 14, 1939, following the Soviet invasion of Finland.
Primary documents:
Text of the Convention of 3 July
Text of the Convention of 4 July
Text of the Convention of 5 July
The Nuremberg Principles
In 1945, the London Charter of the International Military Tribunal defined three categories of crimes, including crimes against peace. This definition was first used by Finland to prosecute its political leadership in the Finnish war-responsibility trials. The principles were later known as the Nuremberg Principles.
In 1950, the Nuremberg Tribunal defined Crimes against Peace, in Principle VI, specifically Principle VI(a), submitted to the United Nations General Assembly, as:
See: Nuremberg Trials: "The legal basis for the jurisdiction of the court was that defined by the Instrument of Surrender of Germany, political authority for Germany had been transferred to the Allied Control Council, which having sovereign power over Germany could choose to punish violations of international law and the laws of war. Because the court was limited to violations of the laws of war, it did not have jurisdiction over crimes that took place before the outbreak of war on September 1, 1939."
For committing this crime, the Nuremberg Tribunal sentenced a number of persons responsible for starting World War II. One consequence of this is that nations starting an armed conflict must now argue that they are exercising either the right of self-defense, the right of collective defense, or – it seems – the enforcement of the criminal law of jus cogens. This has made formal declarations of war uncommon after 1945.
Reading the Tribunal's final judgment in court, British alternate judge Norman Birkett said:
Associate Supreme Court Justice William O. Douglas charged that the Allies were guilty of "substituting power for principle" at Nuremberg: "I thought at the time and still think that the Nuremberg trials were unprincipled. Law was created ex post facto to suit the passion and clamor of the time."
The United Nations Charter
The relevant provisions of the Charter of the United Nations mentioned in the RSICC article 5.2 were framed to include the Nuremberg Principles. The specific principle is Principle VI.a "Crimes against peace", which was based on the provisions of the London Charter of the International Military Tribunal that was issued in 1945 and formed the basis for the post World War II war crime trials. The Charter's provisions based on the Nuremberg Principle VI.a are:
The Inter-American Treaty of Reciprocal Assistance (Rio Pact)
The Inter-American Treaty of Reciprocal Assistance, signed in Rio de Janeiro on September 2, 1947, included a clear definition of aggression. Article 9 stated:
In addition to other acts which the Organ of Consultation may characterize as aggression, the following shall be considered as such:
Further discussions on defining aggression
The discussions on definition of aggression under the UN began in 1950, following the outbreak of the Korean War. As the western governments, headed by Washington, were in favor of defining the governments of North Korea and the People's Republic of China as aggressor states, the Soviet government proposed to formulate a new UN resolution defining aggression and based on the 1933 convention. As a result, on November 17, 1950, the General Assembly passed resolution 378, which referred the issue to be defined by the International Law Commission. The commission deliberated over this issue in its 1951 session and due to large disagreements among its members, decided "that the only practical course was to aim at a general and abstract definition (of aggression)". However, a tentative definition of aggression was adopted by the commission on June 4, 1951, which stated:
Aggression is the use of force by a State or Government against another State or Government, in any manner, whatever the weapons used and whether openly or otherwise, for any reason or for any purpose other than individual or collective self-defence or in pursuance of a decision or recommendation by a competent organ of the United Nations.
General Assembly Resolution 3314
On December 14, 1974, the United Nations General Assembly adopted Resolution 3314, which defined the crime of aggression. This definition is not binding as such under international law, though it may reflect customary international law.
This definition makes a distinction between aggression (which "gives rise to international responsibility") and war of aggression (which is "a crime against international peace"). Acts of aggression are defined as armed invasions or attacks, bombardments, blockades, armed violations of territory, permitting other states to use one's own territory to perpetrate acts of aggression and the employment of armed irregulars or mercenaries to carry out acts of aggression. A war of aggression is a series of acts committed with a sustained intent. The definition's distinction between an act of aggression and a war of aggression makes it clear that not every act of aggression would constitute a crime against peace; only a war of aggression does. States would nonetheless be held responsible for acts of aggression.
The wording of the definition has been criticised by many commentators. Its clauses on the use of armed irregulars are notably vague, as it is unclear what level of "involvement" would entail state responsibility. It is also highly state-centric, in that it deems states to be the only actors liable for acts of aggression. Domestic or transnational insurgent groups, such as those that took part in the Sierra Leone Civil War and the Yugoslav Wars, were key players in their respective conflicts despite being non-state parties; they would not have come within the scope of the definition.
The Definition of Aggression also does not cover acts by international organisations. The two key military alliances at the time of the definition's adoption, NATO and the Warsaw Pact, were non-state parties and thus were outside the scope of the definition. Moreover, the definition does not deal with the responsibilities of individuals for acts of aggression. It is widely perceived as an insufficient basis on which to ground individual criminal prosecutions.
While this Definition of Aggression has often been cited by opponents of conflicts such as the 1999 Kosovo War and the 2003 Iraq War, it has no binding force in international law. The doctrine of Nulla poena sine lege means that, in the absence of binding international law on the subject of aggression, no penalty exists for committing acts in contravention of the definition. It is only recently that heads of state have been indicted over acts committed in wartime, in the cases of Slobodan Milošević of Serbia and Charles Taylor of Liberia. However, both were charged with war crimes, i.e., violations of the laws of war, rather than with the broader offence of "a crime against international peace" as envisaged by the Definition of Aggression.
The definition is not binding on the Security Council. The United Nations Charter empowers the General Assembly to make recommendations to the United Nations Security Council but the Assembly may not dictate to the Council. The resolution accompanying the definition states that it is intended to provide guidance to the Security Council to aid it "in determining, in accordance with the Charter, the existence of an act of aggression". The Security Council may apply or disregard this guidance as it sees fit. Legal commentators argue that the Definition of Aggression has had "no visible impact" on the deliberations of the Security Council.
Rome Statute of the International Criminal Court
The Rome Statute of the International Criminal Court lists the crime of aggression as one of the most serious crimes of concern to the international community, and provides that the crime falls within the jurisdiction of the International Criminal Court (ICC). However, Article 5.2 of the Rome Statute states that "The Court shall exercise jurisdiction over the crime of aggression once a provision is adopted in accordance with articles 121 and 123 defining the crime and setting out the conditions under which the Court shall exercise jurisdiction with respect to this crime. Such a provision shall be consistent with the relevant provisions of the Charter of the United Nations." The Assembly of States Parties of the ICC adopted such a definition in 2010 at the Review Conference in Kampala, Uganda.
See also
Right of conquest
Command responsibility
Crime against peace
Peremptory norm
International criminal law
International law
Invasion
Jus ad bellum
List of war crimes
Nuremberg principles
Preventive war
Six-day war
Russian invasion of Ukraine
Voluntary war
War crime
War of liberation
Notes
References
List of reference documents (alphabetical by author):
Lyal S. Sunga The Emerging System of International Criminal Law: Developments in Codification and Implementation, Kluwer (1997) 508 p.
Lyal S. Sunga Individual Responsibility in International Law for Serious Human Rights Violations, Nijhoff (1992) 252 p.
H. K. Thompson, Jr. and Henry Strutz, Dönitz at Nuremberg: A Reappraisal, Torrance, Calif.: 1983.
J. Hogan-Doran and B. van Ginkel, "Aggression as a Crime under International Law and the Prosecution of Individuals by the Proposed International Criminal Court" Netherlands International Law Review, Volume 43, Issue 3, December 1996, pp. 321–351, T.M.C. Asser Press 1996.
External links
Dinstein, Yoram. Aggression, Max Planck Encyclopedia of Public International Law
Amendments to the Rome Statute of the International Criminal Court on the crime of aggression
Stefano Pietropaoli, Defining evil. The war of aggression and international law
From Nuremberg to Kampala – Reflections on the Crime of Aggression, Address by Judge Dr. jur. h. c. Hans-Peter Kaul of the ICC at the 4th International Humanitarian Law Dialogs, 2010
Historical Review of Developments relating to Aggression, 2003 (UN publication)
Aggression
Military law
International criminal law
Aggression in international law
Aggression
Crime of aggression | War of aggression | [
"Biology"
] | 3,212 | [
"Behavior",
"Aggression",
"Aggression in international law"
] |
608,228 | https://en.wikipedia.org/wiki/Goma-2 | Goma-2 was a type of high explosive manufactured for industrial use (chiefly mining) by Unión Española de Explosivos S.A.
It was a gelatinous, nitroglycol-based explosive widely used within Spain and exported abroad.
It was used by ETA in the 1980s and 1990s.
There were two variants of Goma-2: Goma-2 EC and Goma-2 ECO. As of 2017, the manufacturer MAXAM Corp. S. L. has reformulated the Goma-type ammonia gelatine dynamites which are marketed worldwide under the new Riodin trade name.
Properties
Goma-2 explosive was a mixture of several chemicals:
Ammonium nitrate - 60–70%
Nitroglycol - 26–34%
Nitrocellulose - 0.5–2%
Dibutyl phthalate - 1–3%
Fuels - 1–3%
As with other commercial blasting explosives, detonators were needed to initiate a detonation (usually a blasting cap # 8).
Terrorist use
Goma-2 ECO was the explosive used in the 2004 Madrid train bombings. The terrorist Jamal Ahmidan, also known as El Chino, bought the explosive illegally from a mine in northern Spain. The same cell that carried out the Madrid bombings also planned to use the explosive to derail a high-speed train. In 1973, about 80 kilograms of the explosive was used by ETA in Operation Ogro to assassinate Luis Carrero Blanco; the explosion was so powerful that it threw Carrero Blanco's car over a five-story building.
References
External links
MAXAM
Content of Goma-2 ECO
NordNorsk Goma-2 ECO fact sheet
Explosives | Goma-2 | [
"Chemistry"
] | 352 | [
"Explosives",
"Explosions"
] |
608,311 | https://en.wikipedia.org/wiki/List%20of%20video%20game%20industry%20people | Below is a list of notable people who work or have worked in the video game industry.
The list is divided into different roles, but some people fit into more than one category. For example, Sid Meier is both a game designer and programmer. In these cases, the people appear in both sections.
Art and animation
Dennis Hwang: graphic designer working for Niantic
Edmund McMillen: artist whose art style is synonymous with Flash games
Jordan Mechner: introduced realistic movement to video games with Prince of Persia
Jim Sachs: created new standard for quality of art with the release of the Amiga game Defender of the Crown
Derek Yu: Indie video game artist, designer, and blogger. Known for working on Spelunky, Spelunky 2, Aquaria, Eternal Daughter, I'm O.K – A Murder Simulator, and DRL
Company officers
J Allard: Microsoft executive who helped create and lead the original Xbox project
David Baszucki: founder and CEO of the Roblox Corporation
Marc Blank: co-founder of Infocom
Cliff Bleszinski: founder of Boss Key Productions
Doug Bowser: president of Nintendo of America (2019–present)
Arjan Brussee: co-founder of Guerrilla Games & Boss Key Productions
Jon Burton: founder of Traveller's Tales and its parent company TT Games
Nolan Bushnell: founder of Atari
David Cage: founder of Quantic Dream
Doug Carlston: co-founder of Brøderbund
Trevor Chan: founder and CEO of Enlight Software
Adrian Chmielarz: founder of Metropolis Software, People Can Fly, and The Astronauts
Raphaël Colantonio: founder of Arkane Studios
Josef Fares: founder of Hazelight Studios
Reggie Fils-Aimé: former president of Nintendo of America (2006–2019)
Greg Fischbach: CEO of Acclaim Entertainment until its bankruptcy
Jack Friedman: Founder of Jakks Pacific, LJN, and THQ
Andy Gavin & Jason Rubin: founders of Naughty Dog
David Gordon: founder of Programma International
Yves Guillemot: co-founder and CEO of Ubisoft
Shuntaro Furukawa: President of Nintendo (2018–present)
Hal Halpin: President of ECA
John Hanke: founder and CEO of Niantic
Cai Haoyu: founder and former chairman of miHoYo
Trip Hawkins: founder of Electronic Arts
Akihiro Hino: founder and CEO of Level-5
Sam Houser: co-founder and President of Rockstar Games
Atsushi Inaba, Hideki Kamiya, Shinji Mikami, & Tatsuya Minami: founders of PlatinumGames
Tsunekazu Ishihara: CEO of The Pokémon Company
Tomonobu Itagaki: Founder of Team Ninja & Valhalla Game Studios
Satoru Iwata: Former President of Nintendo (2002–2015)
Jennell Jaquays: started the game design unit at Coleco
Sampo Karjalainen: Founder of Sulake
Tatsumi Kimishima: Former President of Nintendo (2015–2018)
Michael Kogan: founder of Taito
Bobby Kotick: former CEO of Activision Blizzard (2008-2023)
Kagemasa Kōzuki: founder of Konami
Ken Kutaragi: Former President of Sony Computer Entertainment, Inc. (1997 - 2007)
Doug Lowenstein: founder and former President of the Entertainment Software Association
Hiroshi Matsuyama: CEO of CyberConnect2
Masafumi Miyamoto: founder of Square
Shigeru Miyamoto: Representative Director of Nintendo
Hidetaka Miyazaki: President of FromSoftware, creator of the Dark Souls series
Peter Moore: COO at Electronic Arts
Peter Molyneux: founder of Lionhead Studios, co-founder of Bullfrog Productions
Michael Morhaime: co-founder and former president of Blizzard Entertainment
Masaya Nakamura: founder of Namco
Jay Obernolte: founder of FarSight Studios
Philip & Andrew Oliver: co-founders of Blitz Games; twins
Scott Orr: founder of GameStar, Glu Mobile
Mark Pincus & Justin Waldron: founders of Zynga
Randy Pitchford: CEO of Gearbox Software
Ted Price: President of Insomniac Games
Paul Reiche III & Fred Ford: founders of Toys for Bob
John Riccitiello: former CEO of Unity Technologies (2014-2023)
Warren Robinett: Founder of The Learning Company
John Romero: co-founded at least seven game companies: Capitol Ideas Software, Inside Out Software, Ideas from the Deep, id Software, Ion Storm, Monkeystone Games, and Gazillion Entertainment
Bonnie Ross: Founder and former vice-president of 343 Industries (2007-2022)
Yoot Saito: founder and CEO of Vivarium Inc.
Alex Seropian & Jason Jones: Founders of Bungie
Jeremiah Slaczka: co-founder of 5th Cell
Jeff Spangenberg: founder of Retro Studios
Phil Spencer: head of the Xbox brand
Tim & Chris Stamper: founders of Ultimate Play the Game & Rare
Goichi Suda: Founder & CEO of Grasshopper Manufacture
Hirokazu Tanaka: President of Creatures
Kenzo Tsujimoto: founder of Capcom and Irem
Feargus Urquhart: CEO of Obsidian Entertainment
Swen Vincke: founder and CEO of Larian Studios
Christopher Weaver: founder of Bethesda Softworks and co-founder of ZeniMax Media
Paul Wedgwood: founder and former CEO of Splash Damage
Jordan Weisman: founder of FASA
Maximo Cavazzani: founder and CEO of etermax
David Whatley: founder of Simutronics
Jim Whitehurst: CEO of Unity Technologies (2023-Present)
Andrew Wilson: CEO of Electronic Arts (2013–Present) & director of Intel (2017–Present)
Hiroshi Yamauchi: former president of Nintendo (1949–2002)
Riccardo Zacconi: founder of King
Strauss Zelnick: CEO of Take-Two Interactive
Design
Hardware
Ralph Baer: inventor of the Magnavox Odyssey, the first video game console
Seamus Blackley: main designer and developer of the original Xbox
William Higinbotham: main developer of Tennis for Two, one of the earliest video games
Josef Kates: engineer who developed the first digital game-playing machine
Ken Kutaragi: creator of the PlayStation brand
Jerry Lawson: pioneered the video game cartridge by designing the Fairchild Channel F console
Palmer Luckey: founder of Oculus VR (now Reality Labs) and the designer of the Oculus Rift
Ivan Sutherland: Internet pioneer who is regarded as the "father of computer graphics." Also invented the first virtual reality headset with the help of his students
Xiaoyuan Tu & Wei Yen: founders of AiLive, who helped create the motion sensing hardware for the Wii
Gunpei Yokoi: inventor of the Game & Watch, Game Boy and WonderSwan
Music and sound
Online gaming
Richard Bartle: wrote the first MUD along with Roy Trubshaw
David Baszucki: creator of Roblox
John D. Carmack: developed an early online version of Doom which supported up to four players; later Quake supported 16 players which helped popularize online gaming
Jess Cliffe & Minh Le: developed the first Counter-Strike game and thus started the franchise.
J. Todd Coleman: Lead creative director of Shadowbane, Wizard101, Pirate101, Crowfall, and many other MMORPG titles.
Don Daglow: designed first MMORPG with graphics, Neverwinter Nights for AOL
Alex Evans: created the game engine for the LittleBigPlanet games & Dreams
Jeff Kaplan: lead designer of Overwatch
Sampo Karjalainen & Aapo Kyrölä: creators of Habbo Hotel
John De Margheriti: CEO of BigWorld Pty Ltd, makers of Massively Multiplayer Online Game Middleware (MMOG) technology
Elonka Dunin: General Manager at Simutronics, senior editor of IGDA Online Games White Papers
Kelton Flinn: designer of Air Warrior and many other pioneering online games, co-founder of Kesmai
Richard Garriott (a.k.a. Lord British): Creator of Ultima Online, Work on Lineage, Lineage II (Electronic Arts, NCsoft)
Dean Hall: Creator of DayZ
IceFrog: lead designer of Defense of the Ancients and Dota 2
Raph Koster: LegendMUD, Ultima Online, Star Wars Galaxies. (Electronic Arts, Sony Online Entertainment)
Brad McQuaid: co-creator of EverQuest (Verant Interactive, Sony Online Entertainment, Sigil Games)
Rob Pardo: lead designer and producer of World of Warcraft
Philip Rosedale: founded the virtual world Second Life
John Smedley: co-creator of EverQuest (Verant Interactive, Sony Online Entertainment) and president of Sony Online Entertainment
Robin Walker: co-developer of Team Fortress Classic and Team Fortress 2
Gordon Walton: executive producer
Jordan Weisman: founder of 42 Entertainment, co-creator of I Love Bees and The Beast
Will Wright: creator of The Sims Online (Electronic Arts)
Naoki Yoshida: producer and director of Final Fantasy XIV and its following expansions.
Producing
Eiji Aonuma, The Legend of Zelda series
Mark Cerny, Jak and Daxter series, Spyro the Dragon series and Ratchet & Clank
Katsuya Eguchi, Animal Crossing series, Star Fox series and Wii series
Guillaume de Fondaumiere, Fahrenheit (or Indigo Prophecy), Heavy Rain, Beyond: Two Souls and Detroit: Become Human
Yuji Horii, Dragon Quest series
Sam Houser, Grand Theft Auto series, Bully, The Warriors, Max Payne 3
Todd Howard, The Elder Scrolls III: Morrowind, Oblivion, Skyrim, Fallout 3, Fallout 4, Fallout 76 and Starfield
Keiji Inafune, Megaman character designer, producer of Dead Rising and Onimusha
Hideo Kojima, Metal Gear, Zone of the Enders, and Death Stranding
Ken Levine, BioShock, System Shock 2
Hisashi Nogami, Animal Crossing series and Splatoon series
Jade Raymond, Assassin's Creed
John Romero, executive producer and designer of Heretic, Hexen: Beyond Heretic
Hironobu Sakaguchi, Final Fantasy series
Bruce Shelley, Age of Empires
Rod Fergusson, Gears of War series
Warren Spector, Thief, Deus Ex
Daniel Stahl, Star Trek Online, Champions Online
Yu Suzuki, Virtua Fighter series, Shenmue
Satoshi Tajiri, Pokémon franchise
Dave D. Taylor, Abuse
Swen Vincke: Baldur's Gate 3 and Divinity series
Programming
Michael Abrash: rendering optimization for Quake, author of graphics programming books
Dave Akers: programmer of the arcade games Klax and Escape from the Planet of the Robot Monsters
Ed Boon: programmer and creator of the Mortal Kombat series
Jens Bergensten: lead developer of Minecraft since 2011
Danielle Bunten Berry: M.U.L.E., Seven Cities of Gold
Jonathan Blow: Braid and The Witness
David Braben: co-creator of Elite
Bill Budge: Raster Blaster and Pinball Construction Set
John D. Carmack: Wolfenstein 3D, Doom, Quake, co-founded id Software
Don Daglow: 1970s mainframe games Baseball, Dungeon; also did Intellivision Utopia, first sim game
Fred Ford: lead programmer of The Horde, Pandemonium!, the Star Control series, and the Skylanders series
Richard Garriott (a.k.a. Lord British): creator of the Ultima Online series, Tabula Rasa and founder of Origin Systems
Nasir Gebelli: programmer of Final Fantasy (NES), Secret of Mana (Super NES), and Apple II games
Mark Healey: worked on Theme Park, Magic Carpet, Dungeon Keeper, the Fun School games, the LittleBigPlanet games, and Dreams
Rebecca Heineman: Out of this world and The Bard's Tale
William Higinbotham: designer and programmer of Tennis for Two, one of the first video games developed during the early history of video games
Alec Holowka: Aquaria, I'm O.K – A Murder Simulator, Night in the Woods
Wesley Huntress: Rendezvous: A Space Shuttle Simulator and Wilderness: A Survival Adventure
André LaMothe: author of several game programming books
Maddy Thorson: founder of Extremely OK Games (previously Matt Makes Games) and lead developer of TowerFall and Celeste
Al Lowe: Leisure Suit Larry series
Seumas McNally: founder of Longbow Digital Arts and lead programmer of DX-Ball, DX-Ball 2, and Tread Marks. The Seumas McNally Grand Prize is named after him.
Jordan Mechner: Karateka and Prince of Persia series
Sid Meier: Civilization series, Railroad Tycoon, co-founder of Firaxis Games
Alan Miller: original programmer for Atari 2600, co-founded Activision and Accolade
Jeff Minter: founder of Llamasoft and programmer of most of their games
David Mullich: The Prisoner and other Edu-Ware games
Yuji Naka: Sonic the Hedgehog and other Sega games
Gabe Newell: Half-Life; co-founder of Valve
Markus Persson (a.k.a. Notch): created Minecraft; founder of Mojang
Steve Polge: Unreal series and other Epic Games games
Zoë Quinn: programmer and video game blogger. Known for developing Depression Quest and for her role in the Gamergate controversy
Frédérick Raynal: Alone in the Dark and the Little Big Adventure series
Chris Roberts: programmer and designer of Freelancer, Star Citizen, and the Wing Commander games.
Warren Robinett: Adventure, Rocky's Boots, & Robot Odyssey
John Romero: Commander Keen, Doom, Quake
Jim Sachs: programmer of Saucer Attack and other home computer era games
Chris Sawyer: programmer and designer of the RollerCoaster Tycoon series and other games
Cher Scarlett: programmer who worked at Blizzard Entertainment and known for her role in California Department of Fair Employment and Housing v. Activision Blizzard
Tim Schafer: programmer and designer of Full Throttle, Grim Fandango, Psychonauts, Brütal Legend, and Broken Age. Also worked on the Monkey Island games.
Ken Silverman: author of the Build engine game engine
Tim Sweeney: founded Epic Games, Unreal series and the Unreal Engine
Swen Vincke: founder of Larian Studios, lead developer of Baldur's Gate 3 and Divinity series
Anne Westfall: programmer of the Archon series of games
Will Wright: programmer of first games in SimCity series and co-founder of Maxis
Brianna Wu: programmer known for working on Revolution 60 and for her role in the Gamergate controversy
Corrinne Yu: Halo lead and principal engine architect (Microsoft Halo team), Gearbox Software (studio wide) Director of Platform Technology, Ion Storm (studio wide) Director of Technology, Prey engine lead programmer at 3D Realms
References
Video game development
Video game industry
+People | List of video game industry people | [
"Technology"
] | 3,064 | [
"Computing-related lists",
"Video game lists"
] |
608,385 | https://en.wikipedia.org/wiki/E%C3%B6rs%20Szathm%C3%A1ry | Eörs Szathmáry (born 1959) is a Hungarian theoretical evolutionary biologist at the now-defunct Collegium Budapest Institute for Advanced Study and at the Department of Plant Taxonomy and Ecology of Eötvös Loránd University, Budapest. He is the co-author with John Maynard Smith of The Major Transitions in Evolution, a seminal work which continues to contribute to ongoing issues in evolutionary biology. He is a member of the Batthyány Society of Professors.
Main interest
His main interest is theoretical evolutionary biology and focuses on the common principles of the major steps in evolution, such as the origin of life, the emergence of cells, the origin of animal societies, and the appearance of human language. Together with his mentor, John Maynard Smith, he has published two important books which serve as the main references in the field (The Major Transitions in Evolution, Freeman, 1995, and The Origins of Life, Oxford University Press, 1999). Both books have been translated into other languages (so far, German, French, Japanese, and Hungarian). He serves on the editorial board of several journals (Journal of Theoretical Biology, Journal of Evolutionary Biology, Origins of Life and Evolution of the Biosphere, Evolutionary Ecology and Evolution of Communication).
Awards
Professor Szathmáry was awarded the New Europe Prize in 1996 by a group of institutes for advanced study. He used the prize to establish the NEST (New Europe School for Theoretical Biology) foundation, whose task is to help young Hungarian theoretical biologists. The Juhász-Nagy junior fellowship that he endowed in 1996 at Collegium Budapest also serves this purpose. In 1996 he was the Executive Vice-President of ICSEB V (Fifth International Congress of Systematic and Evolutionary Biology) that took place in Budapest, partially sponsored by Collegium Budapest. He served as President of the International Organisation for Systematic and Evolutionary Biology (IOSEB, 1996–2002). The Hungarian Academy of Sciences acknowledged his outstanding scientific contribution with the Academy Prize in 1999. He was invited to prestigious institutions, including the Wissenschaftskolleg zu Berlin and the Collège de France. He is a member of Academia Europaea and the Hungarian Academy of Sciences.
Achievements
Professor Szathmáry's main achievements include:
a mathematical description of some phases of early evolution;
a scenario for the origin of the genetic code;
an analysis of epistasis in terms of metabolic control theory;
a demonstration of the selection consequences of parabolic growth;
a derivation of the optimal size of the genetic alphabet;
a general framework to discuss the major transitions in evolution;
a discussion of the confusion between the hypercycle and collectively autocatalytic systems.
Author
Apart from the aforementioned co-authored books, he has also published numerous papers in important journals, including Nature, Science, Proceedings of the National Academy of Sciences of the USA, and Journal of Theoretical Biology.
References
1959 births
Living people
Hungarian biologists
Theoretical biologists
Evolutionary biologists
Members of Academia Europaea | Eörs Szathmáry | [
"Biology"
] | 600 | [
"Bioinformatics",
"Theoretical biologists"
] |
608,406 | https://en.wikipedia.org/wiki/Moonmilk | Moonmilk (sometimes called mondmilch, also known as montmilch or cave milk) is a white, creamy substance found inside limestone, dolomite, and possibly other types of caves. It is a precipitate from limestone comprising aggregates of fine crystals of varying composition, usually made of carbonates such as calcite, aragonite, hydromagnesite, and/or monohydrocalcite.
Formation and Composition
Moonmilk forms as a result of several processes, including both chemical reactions and possible bacterial action. One hypothesis suggests that moonmilk is created by the bacterium Macromonas bipunctata. However, no microbiological studies have been carried out to confirm this. Moonmilk was originally thought to be created by moon rays, a misconception reflected in its name.
It is possible that moonmilk forms when water dissolves and softens the karst in caves, carrying dissolved nutrients that are used by microbes, such as Actinomycetes. As microbial colonies grow, they trap and accumulate chemically precipitated crystals in an organic matter-rich matrix. These heterotrophic microbes, which produce CO2 as a waste product of respiration and possibly organic acids, may help to dissolve the carbonate.
Historical and Cultural Uses
In 2017, archaeologists at the University of Chinese Academy of Sciences in China discovered a bronze jar dating back over 2,700 years, containing animal fat combined with moonmilk. This mixture is believed to have been used as a cosmetic face cream by Chinese noblemen.
Being soft, moonmilk was frequently used as a medium for finger fluting, a form of prehistoric art.
The Swiss naturalist Conrad Gessner described moonmilk's use as a medicine in the 16th century. It continued to be prescribed until the 19th century.
Notable Formations
The world's largest formation of brushite moonmilk is found in the Big Room of Kartchner Caverns State Park in southern Arizona.
References
External links
Moonmilk and Cave-dwelling Microbes
Micromonas bipunctata
The Virtual Cave: Moonmilk
Novedades Rio Subterráneo de Leche de Luna (Spanish)
Speleothems | Moonmilk | [
"Biology"
] | 459 | [
"Bacteria stubs",
"Bacteria"
] |
608,432 | https://en.wikipedia.org/wiki/List%20of%20video%20game%20musicians | The following is a list of computer and video game musicians, those who have worked in the video game industry to produce video game soundtracks or otherwise contribute musically. A broader list of major figures in the video game industry is also available.
For a full article, see video game music. The list is sorted in alphabetical order by last name.
A
Rod Abernethy – The Hobbit, Star Trek: Legacy, King Arthur, Rise of the Kasai, Blazing Angels, Marvel Universe, The Gauntlet, The Sims Bustin' Out
Masamichi Amano – Quest 64
Yoshino Aoki – Breath of Fire III, Breath of Fire IV, Mega Man Battle Network series, Mega Man Star Force series
Hirokazu Ando - Kirby series, Super Smash Bros., Super Smash Bros. Melee
Noriyuki Asakura – Tenchu: Stealth Assassins, Tenchu 2: Birth of the Stealth Assassins, Tenchu: Wrath of Heaven, Way of the Samurai, Way of the Samurai 2, Kamiwaza
B
Michael Bacon – VS, Duke Nukem: Land of the Babes, EverQuest II: The Shadow Odyssey
Angelo Badalamenti – Indigo Prophecy
Kelly Bailey – Half-Life, Half-Life 2, Half-Life 2: Episode One, Half-Life 2: Episode Two, Portal
Clint Bajakian – Star Wars Jedi Knight II: Jedi Outcast, Star Wars Jedi Knight: Jedi Academy, Star Wars: Knights of the Old Republic
Lorne Balfe – Call of Duty: Modern Warfare 2, Skylanders: Spyro's Adventure, Assassin's Creed: Revelations, Skylanders: Giants, Assassin's Creed III, Beyond: Two Souls
Danny Baranowsky – Super Meat Boy, Canabalt, Crypt of the Necrodancer and The Binding of Isaac
Stephen Barton – Apex Legends, Call of Duty 4: Modern Warfare, Titanfall series, Star Wars Jedi: Fallen Order, Star Wars Jedi: Survivor, Watch Dogs: Legion
Joe Basquez – Ultima Online
Jean Baudlot – Bio Challenge, Bad Dudes, Future Wars, Castle Warrior, Beach Volley, Snow Brothers, Operation Stealth, Ivanhoe, Cruise for a Corpse, Flashback: The Quest for Identity
Stephen Baysted – Project Cars series, Fast and Furious Crossroads, Need for Speed: Shift 2 Unleashed
Robin Beanland – Killer Instinct series, Conker's Bad Fur Day, Sea of Thieves
David Bergeaud – Ratchet and Clank series, Resistance: Fall of Man
Daniel Bernstein – Blood, Claw
Teddy Blass – Chain Shooter, Fortune's Prime
Alexander Brandon – Unreal, Unreal Tournament, Deus Ex, Gauntlet: Seven Sorrows, Alpha Protocol, Unreal 2, Deus Ex: Invisible War, Battlestar Galactica, Bejeweled 3 (with Peter Hajba)
Allister Brimble – Backyard Sports series, Medal of Honor: Infiltrator, Mortal Kombat, Mortal Kombat II, RollerCoaster Tycoon, RollerCoaster Tycoon 2, Star Wars: Episode II – Attack of the Clones, Star Wars: Flight of the Falcon, Star Wars Episode I: Jedi Power Battles, X-COM: Terror from the Deep
Jeff Broadbent – Assassin's Creed Identity, Call of Duty: Mobile, Grid, PlanetSide Arena, PlanetSide 2, Resident Evil 3, Resident Evil: Resistance, Transformers: Dark of the Moon
Russell Brower – World of Warcraft: The Burning Crusade, World of Warcraft: Wrath of the Lich King, World of Warcraft: Cataclysm, Diablo III, Starcraft 2
Bill Brown – Command and Conquer: Generals, Command & Conquer: Generals Zero Hour, Return to Castle Wolfenstein, Tom Clancy's Ghost Recon, Tom Clancy's Ghost Recon: Island Thunder, Tom Clancy's Ghost Recon: Jungle Storm, Tom Clancy's Rainbow Six, Tom Clancy's Rainbow Six: Rogue Spear, Tom Clancy's Rainbow Six: Black Thorn, Tom Clancy's Rainbow Six 3: Raven Shield, Tom Clancy's Rainbow Six: Lockdown, Wolfenstein: Enemy Territory
David Buckley — Shrek Forever After (video game), additional music for Metal Gear Solid 4: Guns of the Patriots, Batman: Arkham Knight and Batman: Arkham VR with Nick Arundel
C
Sean Callery – James Bond 007: Everything or Nothing, 24: The Game
Pedro Macedo Camacho – Star Citizen, Wolfenstein II: The New Colossus, Audiosurf (Independent Games Festival 2008 Excellence in Audio Award Winner), Fury (Auran, Gamecock, Codemasters), A Vampyre Story (Autumn Moon Entertainment)
Marc Canham — Stuntman (video game), Taz: Wanted, Driver 3, Act of War: Direct Action, 24: The Game, Reservoir Dogs (video game), Driver: Parallel Lines, Driver 76, Far Cry 2, Killzone 2, Split/Second, Chime, Driver: San Francisco, The Secret World, Infamous First Light, Infamous Second Son
Stuart Chatwood – Road Rash 3D, NHL 2002, Prince of Persia: The Sands of Time, Prince of Persia: Warrior Within, Prince of Persia: The Two Thrones, Battles of Prince of Persia, Prince of Persia: Revelations, Prince of Persia: Rival Swords
Jun Chikuma – Faxanadu, Adventure Island, Bomberman series
Jamie Christopherson – Lineage II: The Chaotic Chronicle, Lost Planet, Lord of the Rings: The Battle For Middle-Earth, Metal Gear Rising: Revengeance
Elia Cmíral – The Last Express
Combichrist – DmC: Devil May Cry
Gareth Coker – Ori and the Blind Forest, Ori and the Will of the Wisps, Ark: Survival Evolved, Immortals Fenyx Rising, Halo Infinite
Peter Connelly – Three games from the Tomb Raider series
Stewart Copeland – Urban Strike, Spyro the Dragon series (Spyro the Dragon to Spyro: Enter the Dragonfly), Alone in the Dark: The New Nightmare
Normand Corbeil – Fahrenheit, Heavy Rain, Beyond: Two Souls
Jonathan Coulton – Portal, Portal 2, Left 4 Dead 2
Jessica Curry – Dear Esther, Everybody's Gone to the Rapture
D
Ben Daglish
Joris de Man – Killzone series, Horizon Zero Dawn
Charles Deenen – M.C. Kids, The Lost Vikings (with Allister Brimble), Descent II (mixing)
Rom Di Prisco – Fortnite, Xtreme Sports Arcade, Rebel Moon Rising, Need for Speed II, Need for Speed III: Hot Pursuit, World Cup 98, Sled Storm, Carnivores 2, Need for Speed: High Stakes, NHL 2000, FIFA 2000, 007 Racing, Need for Speed: Porsche Unleashed, Rune, Blair Witch Volume 2: The Legend of Coffin Rock, Rune: Viking Warlord, NHL 2001, Rune: Halls of Valhalla, SSX Tricky, Need for Speed: Hot Pursuit 2, NHL 2002, SpyHunter 2, Dead Man's Hand, Full Auto, Full Auto 2: Battlelines, Unreal Tournament 3, SSX, Guacamelee! and Guacamelee! 2
Sascha Dikiciyan – Quake II, The Long Dark, Mass Effect 3, Quake 3 Arena, James Bond: Tomorrow Never Dies
Ramin Djawadi – Game of Thrones: A Telltale Games Series, Gears of War 4, Gears 5, Medal of Honor, Medal of Honor: Warfighter
James Dooley – Epic Mickey
Christopher Drake – Batman: Arkham Origins, Injustice: Gods Among Us, Injustice 2
Howard Drossin – Comix Zone, Sonic Spinball
Dynamedion - Hitman: Absolution, Halo Legends, Call of Duty 4, Mortal Kombat X
E
Randy Edelman
Greg Edmonson – Uncharted series
Takahito Eguchi – The Bouncer, Final Fantasy X-2
Jared Emerson-Johnson – Sam & Max series, Wallace and Gromit's Grand Adventures, Back to the Future: The Game, Jurassic Park: The Game, The Walking Dead (video game)
Eminence Symphony Orchestra – Odin Sphere, Deltora Quest: The Seven Jewels, Valkyria Chronicles, Diablo III, Soulcalibur IV
Jon Everist - BattleTech (video game), Overwatch 2, Shadowrun: Hong Kong, The Solitaire Conspiracy
F
Eveline Fischer (now Eveline Novakovic) – Donkey Kong Country (with Robin Beanland and David Wise), Donkey Kong Country 3: Dixie Kong's Double Trouble! (with David Wise)
Ron Fish – Batman: Arkham City, God of War
The Flight – Alien: Isolation, Horizon: Zero Dawn
Tim Follin – Ghouls and Ghosts (Amiga and C64 versions), Ecco the Dolphin: Defender of the Future (Dreamcast), and various assorted tracks for 8- and 16-bit videogames.
Troels Brun Folmann – Tomb Raider: Legend and Tomb Raider: Anniversary
Dan Forden – Mortal Kombat series
Toby Fox – Undertale, Hiveswap, Deltarune, Super Smash Bros. Ultimate, Little Town Hero, Pokémon Sword and Shield
Hiroshi Fujioka – Growlanser II, Growlanser III, Langrisser III
Yasuhiko Fukuda – sometimes credited as Hirohiko Fukuda, known for Emerald Dragon (SNES)
Kenichiro Fukui - Einhänder
Brad Fuller (composer) – Tengen Tetris, Marble Madness, Gauntlet II, S.T.U.N. Runner, RoadBlasters, Xybots, Blasteroids, Klax (video game), Steel Talons, Off the Wall (video game), Toobin', Rampart (video game), APB (1987 video game), 720°, Peter Pack Rat, T-Mek, Rolling Thunder (video game), Vindicators, Space Lords, Firefox (video game), Road Runner (video game), Explore Technologies
Takeshi Furukawa – The Last Guardian
G
Martin Galway – Commodore 64 sound programmer and composer for Ocean Software and Imagine Software (after Ocean bought the company).
Genki Rockets – Lumines II, Child of Eden
Raphaël Gesqua – Flashback: The Quest For Identity, Mr. Nutz, Moto Racer
Michael Giacchino – Medal of Honor series (1999–2003, 2007), Call of Duty series (2003-2004), Secret Weapons Over Normandy, BLACK
Mick Gordon – Need for Speed: Shift, Shift 2 Unleashed, Killer Instinct, Wolfenstein: The New Order, Doom, Doom Eternal
Simon Gosling – Croc 2
Jason Graves – Command and Conquer 4, Dead Space series, City of Heroes, Silent Hunter series, Tomb Raider (2013 video game), Until Dawn
Fred Gray – Shadowfire, Mutants, Madballs, Enigma Force, Black Lamp, Eco, Stargoose, Victory Road
Gustaf Grefberg – Enclave (video game), The Chronicles of Riddick: Escape from Butcher Bay
Harry Gregson-Williams – Metal Gear Solid 2: Sons of Liberty, Metal Gear Solid 3: Snake Eater, Metal Gear Solid 4: Guns of the Patriots
Mark Griskey – Harry Potter and the Goblet of Fire, Star Wars: Knights of the Old Republic 2: The Sith Lords, Star Wars Episode 3: Revenge of the Sith, Rayman Raving Rabbids, Rayman Raving Rabbids 2
H
Gordy Haab – Star Wars: The Old Republic, Star Wars Battlefront series, Halo Wars 2
Peter Hajba – Bejeweled series
Masashi Hamauzu – SaGa Frontier 2, Tobal No. 1, Final Fantasy X, Final Fantasy XIII trilogy, Final Fantasy VII Remake
Kentarō Haneda – Wizardry 1, 2, and 3; Wizardry V: Heart of the Maelstrom
James Hannigan – Command and Conquer: Red Alert 3, Command & Conquer 4: Tiberian Twilight, F1 Manager, FIFA Soccer Manager, Grand Prix 4, Harry Potter & the Deathly Hallows – Part 1 , Harry Potter & the Order of the Phoenix, Harry Potter & the Deathly Hallows – Part 2 , Harry Potter & the Half-Blood Prince, The Lord of the Rings: Aragorn's Quest, Sim Coaster, Sim Theme Park, Warhammer: Dark Omen
Jon Hare – Cannon Fodder series, Sensible Soccer series, Sensible Golf
Kurt Harland – Soul Reaver
Aki Hata (occasionally credited as AKI) – Rocket Knight Adventures (with Masanori Ohuchi, Masanori Adachi, Hiroshi Kobayashi and Michiru Yamane), Dynamite Headdy (with Norio Hanzawa and Nazo² Suzuki)
Kärtsy Hatakka – Max Payne, Max Payne 2
Christophe Héral – Beyond Good & Evil, Rayman Origins, Rayman Legends
Norihiko Hibino – Metal Gear: Ghost Babel, Zone of the Enders, Metal Gear Solid 2: Sons of Liberty, Boktai, Metal Gear Solid 3: Snake Eater
Miki Higashino – Genso Suikoden II, Genso Suikogaiden series
Susumu Hirasawa – Sword of the Berserk: Guts' Rage, BERSERK ~Hawk of the Millennium Empire Arc - Chapter of the Holy Demon War~
Joe Hisaishi - professional name of Mamoru Fujisawa, composer and musical director known for Ni no Kuni
Silas Hite – Skate 3, The Simpsons, The Sims 2, The Sims 2: University, The Sims 2: Open For Business, The Sims 2: Pets, The Sims 2: Castaway, The Sims 2: Bon Voyage, My Sims, My Sims: Agents, MySims Kingdom, MySims Racing, Boom Blox, Boom Blox: Bash Party, Academy of Champions, Mean Girls, Legally Blonde, Wordsworth, Frogger: Ancient Shadow (with Mutato Muzika)
Michael Hoenig – Baldur's Gate, Baldur's Gate II: Shadows of Amn
Alec Holowka – Aquaria
Shinji Hosoe – Ridge Racer series, Street Fighter EX series
Niamh Houston - Super Hexagon, Dicey Dungeons
Rob Hubbard – Many Atari 8-bit and C64 games including International Karate and Jet Set Willy
Chris Huelsbeck – Apidya, Great Giana Sisters, Turrican series
Michael Hunter – Grand Theft Auto: San Andreas, Grand Theft Auto IV
Andrew Hulshult - Dusk, Quake Champions, Doom Eternal: The Ancient Gods Part 1
I
Go Ichinose – Pokémon series, Drill Dozer, Pocket Card Jockey
Arata Iiyoshi - Kamaitachi no Yoru series (with Kojiro Nakashima), Pokémon Mystery Dungeon series, Ninjala, Bemani series
Tsuneo Imahori – Gungrave
Laura Intravia - Arkhangel: The House of the Seven Stars
Mark Isham
Jun Ishikawa – Kirby series
Daisuke Ishiwatari – Guilty Gear series
Naoki Itamura – Tail to Nose: Great Championship, Pipe Dream, Aero Fighters, Hyper V-Ball, F-1 Grand Prix series
Kenji Ito – SaGa series, Seiken Densetsu 1, Tobal No. 1, Shin Megami Tensei: Devil Survivor 2
Noriyuki Iwadare – Langrisser, Lunar, Grandia, Growlanser, Phoenix Wright: Ace Attorney − Trials and Tribulations, Radiata Stories
Masaharu Iwata – Final Fantasy Tactics (with Hitoshi Sakimoto), Stella Deus: The Gate of Eternity (with Hitoshi Sakimoto), Tactics Ogre: Let Us Cling Together (with Hitoshi Sakimoto)
Hiroyuki Iwatsuki – Mitsume ga Tōru (NES), Wild Guns (with Haruo Ohashi), Mighty Morphin Power Rangers: The Fighting Edition (with Haruo Ohashi), Mighty Morphin Power Rangers: The Movie (SNES, with Haruo Ohashi), Mighty Morphin Power Rangers (SNES, with Kinuyo Yamashita and Iku Mizutani)
Takahiro Izutani – Metal Gear Solid: Portable Ops, Yakuza 2, Metal Gear Solid 4: Guns of the Patriots, Ninja Blade
J
Henry Jackman - Just Cause 3, Uncharted 4: A Thief's End
Steve Jablonsky – The Sims 3, Prince of Persia: The Forgotten Sands, Command & Conquer, Transformers, Gears of War series
Lee Jackson – Rise of the Triad, Duke Nukem 3D, Shadow Warrior, Stargunner
Richard Jacques – Sonic R, Metropolis Street Racer, Headhunter, Sonic Chronicles: The Dark Brotherhood
JAM Project – Super Robot Wars series
Richard Joseph – Sensible Software, Bitmap Brothers, many others from 1986 to 2006
K
Akari Kaida – Breath of Fire III, Mega Man & Bass, Mega Man Battle Network, Ōkami
Yuki Kajiura – Xenosaga Episode II: Jenseits von Gut und Böse, Xenosaga Episode III: Also sprach Zarathustra
Yoko Kanno – Genghis Khan, Nobunaga's Ambition series, Uncharted Waters series, Macross Ace Frontier, Macross Ultimate Frontier, Macross Triangle Frontier
Jake Kaufman – Shantae series, M&M's Minis Madness, Legend of Kay, TMNT (Nintendo DS), Contra 4
Hiroshi Kawaguchi – Space Harrier, Out Run, Fantasy Zone, After Burner
Kenji Kawai – Sansara Naga (series), Deep Fear, Folklore
Motohiro Kawashima - Batman Returns (Sega 8-bit versions), Streets of Rage series
Hiroki Kikuta – Secret of Mana, Seiken Densetsu 3, Soukaigi, Koudelka
Grant Kirkhope – Banjo-Kazooie, Donkey Kong 64, Banjo-Tooie, GoldenEye 007, Perfect Dark, Blast Corps
Frank Klepacki – All of Westwood Studios' games while the developer was independent, including the popular Command & Conquer series.
Chris Kline – Bionic Commando (2009), Pinball Hall of Fame: The Gottlieb Collection, Pinball Hall of Fame: The Williams Collection
Mark Knight - Duke Nukem: Total Meltdown, Dungeon Keeper 2, F1 2015/2016, Populous: The Beginning
Geoff Knorr - Civilization IV, Civilization V, Elemental: Fallen Enchantress, Civilization: Beyond Earth, Civilization VI
Saori Kobayashi – Panzer Dragoon series, Shadowgate 64: Trials of the Four Towers
Konami Kukeiha Club (KONAMI's sound team)
Koji Kondo – Super Mario series, The Legend of Zelda series, Star Fox 64, Yume Kojo: Doki Doki Panic, Shin Onigashima, The Mysterious Murasame Castle
Yuzo Koshiro – Castlevania: Portrait of Ruin (with Michiru Yamane), ActRaiser, ActRaiser 2, Ys series, Streets of Rage series, Revenge of Shinobi, Super Adventure Island, Etrian Odyssey series, 7th Dragon series
Taro Kudo – Axelay, Super Castlevania IV (with Masanori Adachi)
Kukeiha Club – see KONAMI KuKeiHa CLUB
Jesper Kyd – Assassin's Creed series (Ezio story), Borderlands series, Hitman series. Darksiders II, Warhammer: End Times – Vermintide, Warhammer: Vermintide 2, Warhammer 40,000: Darktide.
L
Michael Land – Monkey Island series, Star Wars games, The Dig
Tim Larkin – realMyst, Uru: Ages Beyond Myst, Pariah (video game), Myst V: End of Ages
Jean-Marc Lederman – Atlantis, Atlantis Sky Patrol, Fairies, Mystic Inn, Titanic Hidden Expedition, Snow Racer 1998, Solar Crusade, Turbogems, Fever Frenzy, SocioTown, Force of Arms
Barry Leitch – Gauntlet Dark Legacy, Rush 2, Rush 2049, Spider, Privateer Righteous Fire, TFX, Lotus 2, Utopia, Top Gear, Pixter, Supercars 2
Christopher Lennertz – Medal of Honor series (2003–2005), James Bond 007: From Russia with Love, The Sims 3: Pets, Mass Effect 3, Scalebound
Paul Leonard-Morgan – Battlefield Hardline, Warhammer 40,000: Dawn of War III, Cyberpunk 2077
Daniel Licht – Silent Hill: Downpour, Dishonored series
Russell Lieblich – Early Intellivision games
Ian Livingstone - F1 racing series, Battlefield 1943, Grid 2, Grid Legends, MotoGP 18 & 19, Napoleon: Total War, Total War: Warhammer, Total War: Warhammer III, Warhammer 40,000: Fire Warrior
Richard Ludlow - Hexany Audio
Rob Lord – Aladdin
Alph Lyla (CAPCOM's sound team)
M
Naoki Maeda – Bemani series
Jun Maeda – Moon, Kanon, Air, Clannad, Tomoyo After: It's a Wonderful Life, Little Busters!, Rewrite
Josh Mancell – Crash Bandicoot (first four games), Interstate '82 (with Mark Mothersbaugh), Jak and Daxter Trilogy, The Condemned, The Megalex, Johnny Mnemonic
Mark Mancina
Christopher Mann – Independence War Deluxe Edition and Independence War 2: Edge of Chaos
Kevin Manthei – Kung Fu Panda, Marvel Universe Online, Upshift Strikeracer, Xiaolin Showdown, Ultimate Spider-Man, Kill Switch, Twisted Metal Black, Civilization II
Jerry Martin – SimCity 4: Rush Hour
Junichi Masuda – Pokémon video game series, Pulseman, Mario & Wario
Noriko Matsueda – Bahamut Lagoon, Chrono Trigger, Tobal No. 1, The Bouncer, Final Fantasy X-2
Hayato Matsuo – Shiren the Wanderer series, Ogre Battle series, Front Mission 3, Final Fantasy XII
Michael McCann – Splinter Cell: Double Agent, Deus Ex: Human Revolution, Deus Ex: Mankind Divided, XCOM: Enemy Unknown, XCOM 2
Peter McConnell – Grim Fandango, Psychonauts
Bear McCreary - God of War, SOCOM 4: U.S. Navy SEALs, Dark Void
Nathan McCree – Tomb Raider, Tomb Raider 2 and Tomb Raider 3
Shoji Meguro – Shin Megami Tensei III: Nocturne, Devil Summoner series, Persona series, Metaphor: ReFantazio
Robyn Miller – Myst, Riven, Obduction (video game)
Toru Minegishi – The Legend of Zelda series, Super Mario 3D World, Splatoon series
Yasunori Mitsuda – Chrono Trigger, Mario Party, Xenogears, Xenosaga, Chrono Cross, Front Mission: Gun Hazard (with Nobuo Uematsu, Junya Nakano, and Masashi Hamauzu), Radical Dreamers, Legaia 2: Duel Saga, Shadow Hearts
Takenobu Mitsuyoshi - Daytona USA, Virtua Fighter series
Yuu Miyake – Tekken series, Katamari Damacy
Hiroshi Miyazaki (sometimes referred to as Miyashiro Sugito or MIYA) – Captain Tsubasa 5: Hasha no Shogo Campione, Tecmo Super Bowl (SFC), Ninja Gaiden III: The Ancient Ship of Doom, Ninja Gaiden Trilogy, Kagero: Deception II, Deception III: Dark Delusion, Monster Rancher Hop-A-Bout
Naoshi Mizuta – Rockman & Forte, Parasite Eve 2, Final Fantasy XI
Jonathan Morali - Life Is Strange series
Mike Morasky – Portal, Portal 2, Team Fortress 2, Left 4 Dead, Left 4 Dead 2, Counter-Strike: Global Offensive, Half-Life: Alyx
Mark Morgan – Fallout, Fallout 2, Planescape: Torment, Descent II, Wasteland series
Akihiko Mori – Wonder Project J series
Trevor Morris – Need for Speed Carbon, Dragon Age: Inquisition
Mark Mothersbaugh – Crash Bandicoot (as a music producer), The Sims 2, Sewer Shark
Atsuhiro Motoyama – Umihara Kawase, Ace Striker, Battle Bakraid, Bloody Roar (video game), Sorcer Striker, Dimahoo, Tekken Advance, Kuru Kuru Kururin, Kururin Paradise, Fire Pro Wrestling Returns, Style Savvy
Rika Muranaka – Castlevania: Symphony of the Night, Silent Hill, Metal Gear Solid series (all ending themes)
Mutato Muzika – see Mark Mothersbaugh
N
Hideki Naganuma – Jet Set Radio, Jet Set Radio Future, Ollie King, Sonic Rush
Masato Nakamura – Sonic the Hedgehog, Sonic the Hedgehog 2
Takayuki Nakamura – Virtua Fighter, Tobal 2, Ehrgeiz
Junya Nakano – Front Mission: Gun Hazard, Dewprism (Threads of Fate in the U.S.), Tobal No. 1, Final Fantasy X
Akito Nakatsuka – Zelda II: The Adventure of Link, Ice Climber
Junichi Nakatsuru – Soul series, Ace Combat series, Super Smash Bros. for Nintendo 3DS / Wii U
Manabu Namiki – Battle Garegga, Armed Police Batrider, DoDonPachi Dai Ou Jou, Ketsui: Kizuna Jigoku Tachi, Espgaluda, Mushihimesama, Deathsmiles, Konami ReBirth series
Michiko Naruke – Wild Arms series
Tomohito Nishiura – Dark Cloud, Dark Cloud 2
Graeme Norgate – TimeSplitters, TimeSplitters 2, TimeSplitters: Future Perfect
O
Martin O'Donnell – Halo series, Myth series, Destiny, Oni
Hisayoshi Ogura – Zuntata sound team, Darius, Darius II (also called Sagaia), Darius Gaiden, G Darius, The Ninja Warriors, Rainbow Islands (Master System version, with Tadashi Kimijima)
Kow Otani – Shadow of the Colossus
Tomoya Ohtani – Sonic the Hedgehog series, Rhythm Thief & the Emperor's Treasure
Keiichi Okabe – Drakengard 3, Nier series, Tekken series
Shinji Orito – Dōsei, Moon, One: Kagayaku Kisetsu e, Kanon, Air, Clannad, Tomoyo After: It's a Wonderful Life, Little Busters!, Rewrite
Michiru Ōshima – Genghis Khan II: Clan of the Grey Wolf, ICO
Kenichi Ōkuma – Ring ni Kakero
P
John Paesano – Mass Effect: Andromeda, Detroit: Become Human, Spider-Man
Winifred Phillips – Assassin's Creed Liberation, LittleBigPlanet 3, God of War, Homefront: The Revolution, Total War Battles: Kingdom, LittleBigPlanet 2, Speed Racer, LittleBigPlanet Vita, The Da Vinci Code, LittleBigPlanet Karting, Call of Champions, Shrek the Third, LittleBigPlanet 2: Toy Story (DLC), SimAnimals, LittleBigPlanet 2: Cross Controller, Charlie and the Chocolate Factory, Legend of the Guardians: The Owls of Ga'Hoole, The Maw, Fighter Within, Spore Hero
Stéphane Picq – MegaRace, Qin, KULT: The Temple of Flying Saucers, Dune, Extase, Jumping Jackson, Purple Saturn Day, Full Metal Planete, Lost Eden, KGB (computer game), Commander Blood
Kirill Pokrovsky – Divinity series
Robert "Bobby" Prince – Doom, Doom II, The Ultimate Doom, Wolfenstein 3D, Spear of Destiny, Commander Keen in Goodbye Galaxy!, Commander Keen in Aliens Ate My Babysitter, Duke Nukem 3D, Rise of the Triad, Axis and Allies, DemonStar, Abuse, Word Rescue, Pickle Wars, Math Rescue, Xenophage: Alien Bloodsport, Catacomb 3D
Marcin Przybyłowicz – The Witcher series, Cyberpunk 2077
Ari Pulkkinen – Angry Birds, Angry Birds Seasons, Trine (video game), Trine 2, Dead Nation, Outland (video game), Super Stardust HD, Shadowgrounds, Shadowgrounds Survivor
R
Lena Raine – Celeste, Minecraft, Chicory: A Colorful Tale, Deltarune
Simon Ravn – Viking: Battle for Asgard, Empire: Total War, Napoleon: Total War
Mike Reagan – God of War, God of War II, God of War III, God of War: Ghost of Sparta, Darkwatch, Darksiders, Life Is Strange: Farewell, Conan, Devil's Third, Trials Evolution, Twisted Metal: Black
Trent Reznor – Quake
Kevin Riepl – Unreal Tournament 2003, Unreal Tournament 2004, Unreal Championship 2: The Liandri Conflict, The Bible Game, Gears of War, Unreal Tournament 3, Huxley, Hunted: The Demon's Forge, Aliens: Colonial Marines
Stephen Rippy – Age of Empires series, Halo Wars
Paul Romero – Heroes of Might and Magic series, EverQuest
Daniel Rosenfeld – Minecraft
Lior Rosner – Syphon Filter: Dark Mirror
Mark Rutherford – Rogue Warrior (video game) (2009) – Bethesda Softworks, Aliens vs. Predator (2010) – Sega, NeverDead – Konami, Sniper Elite V2 – Rebellion Developments and 505 Games.
S
Toshihiko Sahashi – Blue Stinger
Sakari – Independent game musicians from around the world.
Hitoshi Sakimoto – Super Hockey '94, Radiant Silvergun, Final Fantasy Tactics, Final Fantasy Tactics Advance, Final Fantasy XII, Breath of Fire V: Dragon Quarter, Tactics Ogre: Let Us Cling Together (with Masaharu Iwata)
Motoi Sakuraba – Tales Series (with Shinji Tamura), Tenshi no Uta: Shiroki Tsubasa no Inori, Star Ocean series, Golden Sun series, Hiouden Series, Valkyrie Profile, Mario Tennis and Mario Golf series, Baten Kaitos series, Mario Sports Superstars
Tom Salta – Deathloop, Halo: Spartan Assault, Halo: Spartan Strike, Need For Speed Underground 2, Tom Clancy's Ghost Recon: Future Soldier, Tom Clancy's Ghost Recon Advanced Warfighter, Tom Clancy's Ghost Recon Advanced Warfighter 2, Tom Clancy's H.A.W.X 2, Wolfenstein: Youngblood
Michael Salvatori – Halo series, Destiny series
George 'The Fat Man' Sanger – Wing Commander, The 7th Guest, Master of Orion
Nobuyoshi Sano – Drakengard, Ghost in the Shell: Stand Alone Complex (PS2), Ridge Racer series, Tekken series
Gustavo Santaolalla – The Last of Us series
Ryuji Sasai – Final Fantasy Mystic Quest, Bushido Blade 2, Final Fantasy Legend III (with Chihiro Fujioka), Rudora no Hihou (Rudra's Secret Treasure), Tobal No.1 (with Yasunori Mitsuda, Masashi Hamauzu, Kenji Ito, Yasuhiro Kawakami, Junya Nakano, Yoko Shimomura & Noriko Matsueda), Xak (with Tadahiro Nitta)
Tenpei Sato – Marl Kingdom series, Disgaea: Hour of Darkness, Phantom Brave
Kan Sawada
Hiroyuki Sawano – Xenoblade Chronicles X
Kazuo Sawa – Nekketsu Kouha: Kunio-Kun, River City Ransom, Super Dodge Ball
Sarah Schachner - Assassin's Creed Origins, Assassin's Creed Valhalla, Call of Duty: Infinite Warfare, Call of Duty: Modern Warfare
Rik Schaffer – The Elder Scrolls Online, Neverwinter Nights 2: Mask of the Betrayer, Vampire: The Masquerade – Bloodlines, Vampire: The Masquerade – Bloodlines 2
Brian L. Schmidt – NARC (video game), John Madden Football and many others.
Garry Schyman – Voyeur, Destroy All Humans!, BioShock, Middle-earth: Shadow of Mordor
Andrew Sega – Unreal, Unreal Tournament, Freelancer, Crusader series
Mark Seibert – Quest for Glory series, King's Quest series
Kazuyuki Sekiguchi - Momotaro Dentetsu series
Tsuyoshi Sekito – All-Star Pro Wrestling series, Brave Fencer Musashi, Final Fantasy II (WonderSwan Color and Final Fantasy Origins versions), Chrono Trigger (PlayStation version), Romancing SaGa: Minstrel's Song, Final Fantasy VII Advent Children, Teenage Mutant Ninja Turtles III: The Manhattan Project
Jun Senoue – Sonic the Hedgehog series
Alex Seropian – Marathon
Russell Shaw – Dungeon Keeper, Syndicate, Fable and Fable 2
Laura Shigihara – To The Moon, Finding Paradise, Plants vs. Zombies, Rakuen, Quintessence: The Blighted Venom and High School Story
Go Shiina – Tales, Mr. Driller, The Idolmaster, God Eater series
Yoko Shimomura – Street Fighter II, Front Mission series, Live-A-Live, Super Mario RPG, Parasite Eve, Legend of Mana, Mario & Luigi series, Kingdom Hearts series, Final Fantasy XV
Hidenori Shoji - Like a Dragon series, Super Monkey Ball, Super Monkey Ball 2, F-Zero GX
Mark Snow – Syphon Filter: The Omega Strain, Syphon Filter: Dark Mirror
Masayoshi Soken – Mario Hoops 3-on-3, Mario Sports Mix, Final Fantasy XIV, Final Fantasy XVI
Maribeth Solomon - Sunless Sea, Sunless Skies
Jeremy Soule – The Elder Scrolls, Harry Potter series. Secret of Evermore, Total Annihilation, Icewind Dale, Neverwinter Nights, Dungeon Siege, Guildwars, Star Wars: Knights of the Old Republic series, Prey (2006)
Christopher Stevens – Syphon Filter 3
Martin Stig Andersen – Limbo, Inside, Wolfenstein II: The New Colossus
Mikolai Stroinski – The Vanishing of Ethan Carter, The Witcher 3: Wild Hunt, Sniper Ghost Warrior 3, Gwent: The Witcher Card Game, Thronebreaker: The Witcher Tales, Sniper Ghost Warrior Contracts, Metamorphosis, Chernobylite, Age of Empires 4, Diablo Immortal
Koichi Sugiyama – Dragon Quest series, E.V.O.: Search for Eden, Shiren the Wanderer series, Hanjuku Hero: Aa, Sekaiyo Hanjukunare...!, Tetris 2 + Bombliss
Keiichi Suzuki – Mother, EarthBound
T
Bobby Tahouri – Rise of the Tomb Raider, Marvel's Avengers
Masafumi Takada – killer7, God Hand, No More Heroes, Earth Defense Force, Danganronpa
Yukihide Takekawa – Soul Blazer
Tommy Tallarico – Advent Rising, Earthworm Jim series (Earthworm Jim 2 on), Spot Goes To Hollywood, MDK, Maximo: Ghosts to Glory, Wild 9
Hirokazu 'Hip' Tanaka – Balloon Fight, EarthBound, Kid Icarus, Metroid, Mother, Super Mario Land, Tetris
Kōhei Tanaka – Paladin's Quest, Lennus 2 (Paladin's Quest 2), Xardion, Alundra, Sakura Wars series
Kumi Tanioka – Final Fantasy XI, Final Fantasy Crystal Chronicles, Code Age Commanders
Mikko Tarmia – Amnesia: The Dark Descent, The Penumbra Series, Overgrowth
Tsukasa Tawada – Super Earth Defense Force, Ihatovo Monogatari, Thoroughbred Breeder (series)
Jeroen Tel – Cybernoid, Cybernoid II, Eliminator, Turbo Outrun
Soichi Terada – Ape Escape series (except Ape Escape 2)
Chance Thomas – Lord of the Rings Online, Left Behind: Eternal Forces, Marvel Ultimate Alliance, X-Men: The Official Game, The Lord of the Rings: War of the Ring, Unreal II: The Awakening
Chris Tilton – Mercenaries: Playground of Destruction, Black, Assassin's Creed Unity
Christopher Tin – Civilization IV, Civilization VI
Magome Togoshi – Air, Clannad, Planetarian: The Reverie of a Little Planet, Tomoyo After: It's a Wonderful Life, Little Busters!
Kazumi Totaka – Super Mario Land 2, Yoshi's Story, Animal Crossing series, Luigi's Mansion, Wii Sports, The Legend of Zelda: Link's Awakening
Yuka Tsujiyoko – Fire Emblem series, Paper Mario, Paper Mario: The Thousand-Year Door
Hyakutaro Tsukumo - Hyper Duel, Blast Wind, Thunder Force V, Armored Hunter Gunhound EX
Brian Tyler - Lego Universe, Call of Duty: Modern Warfare 3, Need for Speed: The Run, Far Cry 3, Army of Two: The Devil's Cartel, Assassin's Creed IV: Black Flag
Jeff Tymoschuk – James Bond: Nightfire, DeathSpank series, Penny Arcade series, Sleeping Dogs
U
Matt Uelmen – Diablo, Diablo II, StarCraft, World of Warcraft
Tatsuya Uemura – Performan, Tiger-Heli, Flying Shark, Twin Cobra, Hellfire, Zero Wing, Out Zone, Dogyuun
Nobuo Uematsu – Final Fantasy series, Apple Town Monogatari, Cruise Chaser Blassty, King's Knight, DynamiTracer, Front Mission: Gun Hazard, Ehrgeiz, Makaitoushi SaGa (Final Fantasy Legend I), SaGa 2: Hihou Densetsu (Final Fantasy Legend II), Rad Racer, Chrono Trigger, Blue Dragon, Lost Odyssey, Super Smash Bros. Brawl (Main Theme), The Last Story, Terra Battle, Fantasian
V
Jan Valta – Kingdom Come: Deliverance, Kingdom Come: Deliverance II series
Michiel van den Bos – Unreal, Age of Wonders, Unreal Tournament, Deus Ex, Overlord
Jeff van Dyck – Audio Director of The Creative Assembly (Total War franchise), Electronic Arts sports games (e.g. Need for Speed)
Cris Velasco – Hellgate: London, God of War, Mass Effect 3, ZombiU
Neil D. Voss – Tetrisphere, The New Tetris, Racing Gears Advance and others.
Rich Vreeland – Puzzle Agent, Drawn to Life: The Next Chapter, Fez, Runner 2, Hyper Light Drifter
Chris Vrenna – American McGee's Alice, Doom 3, Quake 4
W
Jph Wacheski
Jack Wall – Splinter Cell: Pandora Tomorrow, Myst III: Exile, Myst IV: Revelation, Jade Empire, Mass Effect series
Guy Whitmore – Blood, Claw, Die Hard: Nakatomi Plaza, Shivers, No One Lives Forever
David Whittaker – many Atari 8-bit, C64, and Amiga games, including Amaurote, BMX Simulator, Colony, Grand Prix Simulator, Panther, Speedball, Shadow of the Beast and Obliterator
Austin Wintory - Journey, flOw, Monaco: What's Yours is Mine, Tooth and Tail, Assassin's Creed Syndicate, Abzu
David Wise – All NES games by Rare, Donkey Kong Country series, Diddy Kong Racing, Jet Force Gemini, Star Fox Adventures, Wizards and Warriors series
Jezz Woodroffe – composed music for the Horror Soft titles Elvira II: The Jaws of Cerberus (with Philip Nixon) and Waxworks
Tim Wright – Welsh composer who goes by the name CoLD SToRAGE, known for his work on Shadow of the Beast II, Shadow of the Beast III, Agony, Lemmings, Wipeout and Colony Wars
Y
Ippo Yamada - Mega Man series, Azure Striker Gunvolt series, Gal Gun series
Kenji Yamamoto – Dragon Ball Z: Super Butōden 1, 2, & 3 (#1: with Kumagorou; #2: with Switch-E, Kumatarou; #3: with Amayang, Chatrasch, Switch-E), Dragon Ball Z: Super Goku Den 1 & 2, Dragon Ball Z: Ultimate Battle 22, Dragon Ball Z: The Legend, Dragon Ball: Final Bout, Dragon Ball Z: Budokai 1, 2, & 3, Dragon Ball Z: Shin Budokai 1 & 2, Dragon Ball Z: Harukanaru Goku Densetsu, Dragon Ball Z: Burst Limit, Dragon Ball Z: Infinite World.
Kenji Yamamoto – Metroid series, Famicom Wars, Famicom Detective Club: The Girl Who Stands Behind, Donkey Kong Country Returns
Michiru Yamane – Twinbee (NES), Castlevania series, Gungage, Suikoden III, Suikoden IV, Bloodstained: Ritual of the Night
Akira Yamaoka – Silent Hill series, Contra: Shattered Soldier
Toshiharu Yamanishi - Thunder Force series
Kinuyo Yamashita – Castlevania, Esper Dream, Arumana no Kiseki, Stinger, Maze of Galious, Mega Man X3, Medabot, Bass Masters Classic, Power Rangers: Lightspeed Rescue, WWF WrestleMania 2000, Buffy the Vampire Slayer, Croc 2, Monsters, Inc., WWF Road to WrestleMania, Power Rangers: Dino Thunder, Keitai Denjū Telefang
Yousuke Yasui - Eschatos, Ginga Force, Natsuki Chronicles
Mahito Yokota – Donkey Kong Jungle Beat, Super Mario Galaxy series, The Legend of Zelda: Skyward Sword, New Super Mario Bros. U, Captain Toad: Treasure Tracker
Ryo Yonemitsu - Ys series
Kenneth Young – Media Molecule, London Studio : LittleBigPlanet series, Tearaway series
Yes - Homeworld
Z
Hans Zimmer – Call of Duty: Modern Warfare 2, Crysis 2, FIFA 19
ZUN – Touhou
Zuntata (Taito's sound team)
Inon Zur – Dragon Age, EverQuest, Fallout, Prince of Persia, Star Trek, Syberia series. Icewind Dale II, Champions of Norrath, Crysis, The Elder Scrolls: Blades, Starfield
See also
OverClocked ReMix
References
Video game musicians
Video game musician
+Musicians | List of video game musicians | [
"Technology"
] | 8,807 | [
"Computing-related lists",
"Video game lists"
] |
608,507 | https://en.wikipedia.org/wiki/Hypolimnion | The hypolimnion or under lake is the dense, bottom layer of water in a thermally-stratified lake. The word "hypolimnion" is derived from . It is the layer that lies below the thermocline.
Typically the hypolimnion is the coldest layer of a lake in summer, and the warmest layer during winter. In deep, temperate lakes, the bottom-most waters of the hypolimnion are typically close to 4 °C throughout the year. The hypolimnion may be much warmer in lakes at warmer latitudes. Being at depth, it is isolated from surface wind-mixing during summer, and usually receives insufficient irradiance (light) for photosynthesis to occur.
Oxygen dynamics
The deepest portions of the hypolimnion often have lower oxygen concentrations than the surface waters (i.e., epilimnion). While oxygen can typically exchange between surface waters and the atmosphere (i.e., in the absence of ice cover), bottom waters are comparatively isolated from atmospheric replenishment of oxygen. In particular, during periods of thermal stratification, gas exchange between the epilimnion and hypolimnion is limited by the density difference between these two layers. Consequently, decomposition of organic matter in the water column and sediments can cause oxygen concentrations to decline to the point of hypoxia (low oxygen) or anoxia (no oxygen). In dimictic, eutrophic lakes, the hypolimnion is often anoxic throughout a majority of the stratified period. However, hypolimnetic oxygen concentrations are replenished in the fall and early winter in many temperate lakes, as lake turnover allows mixing of oxic surface waters and anoxic bottom waters.
Notably, anoxic conditions in temperate lakes have the potential to create a positive feedback, whereby anoxia during a given year begets increasingly severe and frequent occurrences of anoxia in future years. Anoxia can lead to release of nutrients from sediment, which contribute to increased phytoplankton growth. Increased phytoplankton growth subsequently increases decomposition, perpetuating hypolimnetic oxygen declines. This positive feedback effect has been termed the Anoxia Begets Anoxia feedback.
Hypolimnetic aeration
In eutrophic lakes where the hypolimnion is anoxic, hypolimnetic aeration may be used to add oxygen to the hypolimnion. Adding oxygen to the system through aeration can be costly because it requires significant amounts of energy.
See also
Epilimnion
Metalimnion
References
External links
Water on the Web
Limnology
Aquatic ecology | Hypolimnion | [
"Biology"
] | 584 | [
"Aquatic ecology",
"Ecosystems"
] |
608,510 | https://en.wikipedia.org/wiki/Epilimnion | The epilimnion or surface layer is the top-most layer in a thermally stratified lake.
The epilimnion is the layer that is most affected by sunlight, its thermal energy heating the surface, thereby making it warmer and less dense. As a result, the epilimnion sits above the deeper metalimnion and hypolimnion, which are colder and denser. Additionally, the epilimnion typically has a higher pH and a higher dissolved oxygen concentration than the hypolimnion.
Physical Structure
Properties
In the water column, the epilimnion sits above all other layers and is only present in stratified lakes. Its upper surface is in contact with the air, which exposes it to wind action and allows the water to experience turbulence; turbulence and convection together produce waves that increase aeration. Below the epilimnion lies the metalimnion, which contains the thermocline. The thermocline arises from the temperature difference between the epilimnion and the metalimnion: because the epilimnion is at the top of the lake and in contact with the air, it receives more of the sun's heat and is therefore warmer than the layers below. In certain areas the epilimnion freezes over during winter, cutting the lake off from direct aeration. Because of the epilimnion's susceptibility to changes in air temperature, it is often used to monitor warming trends.
Lake Turnover and Mixing
In most stratified lakes, seasonal changes in spring and fall air temperatures cause the epilimnion to warm up or cool down. During these seasonal changes, stratified lakes may experience a lake turnover, in which the epilimnion and hypolimnion mix together and the lake generally becomes unstratified, with a uniform temperature and an even distribution of nutrients throughout. Lakes are named according to how many times they turn over in a year: monomictic lakes mix once, dimictic lakes mix twice, and polymictic lakes mix more than twice. Turnover can follow seasonal differences or even occur daily. In some cases this causes the lake to have inverse stratification, where the epilimnion holds cooler water than the hypolimnion.
Chemistry
Because the layer is open to the air, the epilimnion is in a constant state of exchange of dissolved gases with the atmosphere and usually holds high amounts of dissolved O2 and CO2. The epilimnion's thickness can be affected by light exposure: more transparent lakes receive greater levels of light, leading to more stored energy in the water and a shallower epilimnion. The epilimnion is also an area of concern for algal blooms due to phosphorus and nitrogen runoff from terrestrial sources. Wind erosion carrying soil particles can also introduce many different nutrients into the water, and those particles enter the lake system through the epilimnion.
Biology
Because of its closeness to the surface and its position as the area receiving the most sunlight, the epilimnion is well suited to phytoplankton and other primary producers. Algal blooms are common in this layer as a result of large accumulations of nutrients. In response to the abundance of algae and phytoplankton, many fish species feed in this layer, and birds often use the epilimnion as an area for resting or fishing. Many insects also make use of the epilimnion for nest making and habitat. Human interactions are likewise an important part of the biology of the epilimnion. Direct interactions include recreational uses such as swimming, boating, and other activities; indirect interactions come from sewage, runoff from agricultural fields, and land development. All of these can affect the properties of the epilimnion.
References
External links
http://wow.nrri.umn.edu/wow/teacher/thermal/teaching.html
Limnology
Aquatic ecology | Epilimnion | [
"Biology"
] | 924 | [
"Aquatic ecology",
"Ecosystems"
] |
608,625 | https://en.wikipedia.org/wiki/Buffer%20overflow%20protection | Buffer overflow protection is any of various techniques used during software development to enhance the security of executable programs by detecting buffer overflows on stack-allocated variables, and preventing them from causing program misbehavior or from becoming serious security vulnerabilities. A stack buffer overflow occurs when a program writes to a memory address on the program's call stack outside of the intended data structure, which is usually a fixed-length buffer. Stack buffer overflow bugs are caused when a program writes more data to a buffer located on the stack than what is actually allocated for that buffer. This almost always results in corruption of adjacent data on the stack, which could lead to program crashes, incorrect operation, or security issues.
Typically, buffer overflow protection modifies the organization of stack-allocated data so it includes a canary value that, when destroyed by a stack buffer overflow, shows that a buffer preceding it in memory has been overflowed. By verifying the canary value, execution of the affected program can be terminated, preventing it from misbehaving or from allowing an attacker to take control over it. Other buffer overflow protection techniques include bounds checking, which checks accesses to each allocated block of memory so they cannot go beyond the actually allocated space, and tagging, which ensures that memory allocated for storing data cannot contain executable code.
Overfilling a buffer allocated on the stack is more likely to influence program execution than overfilling a buffer on the heap because the stack contains the return addresses for all active function calls. However, similar implementation-specific protections also exist against heap-based overflows.
There are several implementations of buffer overflow protection, including those for the GNU Compiler Collection, LLVM, Microsoft Visual Studio, and other compilers.
Overview
A stack buffer overflow occurs when a program writes to a memory address on the program's call stack outside of the intended data structure, which is usually a fixed-length buffer. Stack buffer overflow bugs are caused when a program writes more data to a buffer located on the stack than what is actually allocated for that buffer. This almost always results in corruption of adjacent data on the stack, and in cases where the overflow was triggered by mistake, will often cause the program to crash or operate incorrectly. Stack buffer overflow is a type of the more general programming malfunction known as buffer overflow (or buffer overrun). Overfilling a buffer on the stack is more likely to derail program execution than overfilling a buffer on the heap because the stack contains the return addresses for all active function calls.
Stack buffer overflow can be caused deliberately as part of an attack known as stack smashing. If the affected program is running with special privileges, or if it accepts data from untrusted network hosts (for example, a public webserver), then the bug is a potential security vulnerability that allows an attacker to inject executable code into the running program and take control of the process. This is one of the oldest and more reliable methods for attackers to gain unauthorized access to a computer.
Typically, buffer overflow protection modifies the organization of data in the stack frame of a function call to include a "canary" value that, when destroyed, shows that a buffer preceding it in memory has been overflowed. This provides the benefit of preventing an entire class of attacks. According to some researchers, the performance impact of these techniques is negligible.
Stack-smashing protection is unable to protect against certain forms of attack. For example, it cannot protect against buffer overflows in the heap. There is no sane way to alter the layout of data within a structure; structures are expected to be the same between modules, especially with shared libraries. Any data in a structure after a buffer is impossible to protect with canaries; thus, programmers must be very careful about how they organize their variables and use their structures.
Canaries
Canaries or canary words or stack cookies are known values that are placed between a buffer and control data on the stack to monitor buffer overflows. When the buffer overflows, the first data to be corrupted will usually be the canary, and a failed verification of the canary data will therefore alert of an overflow, which can then be handled, for example, by invalidating the corrupted data. A canary value should not be confused with a sentinel value.
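As a concrete illustration, the following is a minimal C sketch of what a canary check amounts to when written out by hand; the global guard value, the function name, and the stack layout shown are hypothetical, since real compilers insert the equivalent code automatically, control where the canary sits relative to the buffer, and keep the guard value secret.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical process-wide guard value; a real implementation would set
   this to an unpredictable value at program start-up. */
static uintptr_t __guard = 0x2f3c1d5aUL;

void vulnerable(const char *input)
{
    uintptr_t canary = __guard; /* intended to sit between the buffer and
                                   the saved control data on the stack    */
    char buf[16];

    strcpy(buf, input);         /* unchecked copy: a linear overflow must
                                   pass through the canary before it can
                                   reach the return address               */

    if (canary != __guard) {    /* verify the canary before returning     */
        fprintf(stderr, "stack smashing detected\n");
        abort();
    }
}

int main(int argc, char **argv)
{
    if (argc > 1)
        vulnerable(argv[1]);
    return 0;
}
```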
The terminology is a reference to the historic practice of using canaries in coal mines, since they would be affected by toxic gases earlier than the miners, thus providing a biological warning system. Canaries are alternately known as stack cookies, which is meant to evoke the image of a "broken cookie" when the value is corrupted.
There are three types of canaries in use: terminator, random, and random XOR. Current versions of StackGuard support all three, while ProPolice supports terminator and random canaries.
Terminator canaries
Terminator canaries use the observation that most buffer overflow attacks are based on string operations that end at string terminators. Accordingly, the canary is built from null terminators, CR, LF, and FF characters. As a result, the attacker must write a null character before writing the return address in order to avoid altering the canary. This prevents attacks using strcpy() and other methods that stop copying at a null character; the drawback is that the canary value is known. Even with the protection, an attacker could potentially overwrite the canary with its known value and the control information with mismatched values, thereby passing the canary check, which is executed shortly before the processor's return-from-call instruction.
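For illustration only, a terminator canary can be assembled from exactly those terminating bytes; the constant below is a sketch, and the exact byte order and composition differ between implementations.

```c
#include <stdint.h>

/* A 32-bit terminator canary built from string-terminating bytes:
   NUL (0x00), CR (0x0D), LF (0x0A), and FF (0xFF). A strcpy()-style
   overflow stops at the NUL byte, so it cannot rewrite the canary in
   full without changing its value. */
static const uint32_t terminator_canary =
    (0x00u << 24) | (0x0Du << 16) | (0x0Au << 8) | 0xFFu;
```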
Random canaries
Random canaries are randomly generated, usually from an entropy-gathering daemon, to prevent an attacker from knowing their value. Usually, it is not logically possible or plausible for an attacker to read the canary before exploiting the overflow; the canary is a secure value known only to those who need to know it, in this case the buffer overflow protection code.
Normally, a random canary is generated at program initialization, and stored in a global variable. This variable is usually padded by unmapped pages so that attempting to read it using any kinds of tricks that exploit bugs to read off RAM cause a segmentation fault, terminating the program. It may still be possible to read the canary if the attacker knows where it is or can get the program to read from the stack.
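The sketch below shows one plausible way to generate such a canary at program initialization on a POSIX system, assuming /dev/urandom as the entropy source; the variable and function names are illustrative rather than taken from any real implementation.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

/* Illustrative global canary; real implementations may additionally
   surround it with unmapped pages to hinder attempts to read it. */
uintptr_t stack_canary;

/* Runs before main(): fill the canary with bytes from /dev/urandom. */
__attribute__((constructor))
static void init_stack_canary(void)
{
    int fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0 ||
        read(fd, &stack_canary, sizeof stack_canary) != (ssize_t)sizeof stack_canary)
        abort();                      /* no entropy available: fail closed */
    close(fd);
}
```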
Random XOR canaries
Random XOR canaries are random canaries that are XOR-scrambled using all or part of the control data. In this way, once the canary or the control data is clobbered, the canary value is wrong.
Random XOR canaries have the same vulnerabilities as random canaries, except that the "read from stack" method of getting the canary is a bit more complicated. The attacker must get the canary, the algorithm, and the control data in order to re-generate the original canary needed to spoof the protection.
In addition, random XOR canaries can protect against a certain type of attack in which a buffer inside a structure is overflowed into a pointer, and that pointer is then redirected to point at a piece of control data. Because of the pointer, the control data or return value can be changed without overwriting the canary; but because of the XOR encoding, the canary check will still fail if the control data or return value is changed.
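A minimal sketch of the idea follows. The function names and the fixed "secret" value are assumptions for illustration; a real implementation derives the secret randomly and performs the scrambling in compiler-generated code.

```c
#include <stdint.h>
#include <stdio.h>

static uintptr_t secret_canary = 0x5A5A5A5AUL;   /* would be random in practice */

uintptr_t make_xor_canary(uintptr_t saved_return_address)
{
    return secret_canary ^ saved_return_address; /* value stored on the stack */
}

int xor_canary_ok(uintptr_t stored, uintptr_t saved_return_address)
{
    /* Fails if either the stored canary or the return address was altered. */
    return (stored ^ saved_return_address) == secret_canary;
}

int main(void)
{
    uintptr_t ret = 0x401000;                    /* pretend saved return address */
    uintptr_t c   = make_xor_canary(ret);
    printf("intact: %d, after tampering: %d\n",
           xor_canary_ok(c, ret), xor_canary_ok(c, ret + 8));
    return 0;
}
```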
Although these canaries protect the control data from being altered by clobbered pointers, they do not protect any other data or the pointers themselves. Function pointers especially are a problem here, as they can be overflowed into and can execute shellcode when called.
Bounds checking
Bounds checking is a compiler-based technique that adds run-time bounds information for each allocated block of memory, and checks all pointers against those at run-time. For C and C++, bounds checking can be performed at pointer calculation time or at dereference time.
Implementations of this approach use either a central repository, which describes each allocated block of memory, or fat pointers, which contain both the pointer and additional data, describing the region that they point to.
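The fat-pointer flavour can be sketched as below. This is an illustration of the general technique, not the actual representation used by any specific compiler; the type and function names are assumptions.

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    char *ptr;    /* current pointer value   */
    char *base;   /* start of the allocation */
    char *limit;  /* one past its end        */
} fat_ptr;

fat_ptr fat_malloc(size_t n)
{
    char *p = calloc(n, 1);
    fat_ptr f = { p, p, p ? p + n : p };
    return f;
}

char fat_load(fat_ptr f, size_t index)          /* checked dereference */
{
    if (f.ptr + index < f.base || f.ptr + index >= f.limit) {
        fprintf(stderr, "bounds violation\n");
        abort();
    }
    return f.ptr[index];
}

int main(void)
{
    fat_ptr buf = fat_malloc(8);
    fat_load(buf, 3);     /* in bounds                                       */
    fat_load(buf, 8);     /* one past the end: aborts with "bounds violation" */
    free(buf.base);
    return 0;
}
```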
Tagging
Tagging is a compiler-based or hardware-based (requiring a tagged architecture) technique for tagging the type of a piece of data in memory, used mainly for type checking. By marking certain areas of memory as non-executable, it effectively prevents memory allocated to store data from containing executable code. Also, certain areas of memory can be marked as non-allocated, preventing buffer overflows.
Historically, tagging has been used for implementing high-level programming languages; with appropriate support from the operating system, tagging can also be used to detect buffer overflows. An example is the NX bit hardware feature, supported by Intel, AMD and ARM processors.
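On POSIX systems the non-executable marking is visible to user space through the memory-mapping interface, as in the sketch below (a Linux-style example assuming MAP_ANONYMOUS is available); the NX bit is what lets the hardware enforce the missing PROT_EXEC permission.

```c
#include <sys/mman.h>
#include <string.h>

int main(void)
{
    size_t len = 4096;
    /* Writable data region, deliberately mapped without PROT_EXEC. */
    unsigned char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return 1;

    memset(buf, 0xC3, len);   /* x86 "ret" bytes, but treated purely as data */

    /* Jumping into buf would fault on NX-capable hardware; code that really
     * needs to execute generated instructions must explicitly request
     * PROT_EXEC, which is not done here. */
    munmap(buf, len);
    return 0;
}
```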
Implementations
GNU Compiler Collection (GCC)
Stack-smashing protection was first implemented by StackGuard in 1997, and published at the 1998 USENIX Security Symposium. StackGuard was introduced as a set of patches to the Intel x86 backend of GCC 2.7. StackGuard was maintained for the Immunix Linux distribution from 1998 to 2003, and was extended with implementations for terminator, random and random XOR canaries. StackGuard was suggested for inclusion in GCC 3.x at the GCC 2003 Summit Proceedings, but this was never achieved.
From 2001 to 2005, IBM developed GCC patches for stack-smashing protection, known as ProPolice. It improved on the idea of StackGuard by placing buffers after local pointers and function arguments in the stack frame. This helped avoid the corruption of pointers, preventing access to arbitrary memory locations.
Red Hat engineers, however, identified problems with ProPolice, and in 2005 re-implemented stack-smashing protection for inclusion in GCC 4.1. This work introduced the -fstack-protector flag, which protects only some vulnerable functions, and the -fstack-protector-all flag, which protects all functions whether they need it or not.
In 2012, Google engineers implemented the -fstack-protector-strong flag to strike a better balance between security and performance. This flag protects more kinds of vulnerable functions than -fstack-protector does, but not every function, providing better performance than -fstack-protector-all. It has been available in GCC since version 4.9.
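The kind of function the "strong" heuristic instruments looks like the sketch below; the compile command in the comment is shown only as an illustration of the flag discussed above.

```c
/* Compile with e.g.:  gcc -fstack-protector-strong -O2 greet.c */
#include <stdio.h>

void greet(const char *name)
{
    char buffer[32];                 /* local array: the "strong" heuristic
                                        selects functions like this one     */
    snprintf(buffer, sizeof(buffer), "Hello, %s", name);
    puts(buffer);
}                                    /* compiler-inserted canary check runs
                                        here, before the function returns   */

int main(int argc, char **argv)
{
    greet(argc > 1 ? argv[1] : "world");
    return 0;
}
```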
All Fedora packages are compiled with -fstack-protector since Fedora Core 5, and -fstack-protector-strong since Fedora 20. Most packages in Ubuntu are compiled with -fstack-protector since 6.10. Every Arch Linux package is compiled with -fstack-protector since 2011. All Arch Linux packages built since 4 May 2014 use -fstack-protector-strong. Stack protection is only used for some packages in Debian, and only for the FreeBSD base system since 8.0. Stack protection is standard in certain operating systems, including OpenBSD, Hardened Gentoo and DragonFly BSD.
StackGuard and ProPolice cannot protect against overflows in automatically allocated structures that overflow into function pointers. ProPolice at least will rearrange the allocation order to get such structures allocated before function pointers. A separate mechanism for pointer protection was proposed in PointGuard and is available on Microsoft Windows.
Microsoft Visual Studio
The compiler suite from Microsoft implements buffer overflow protection since version 2003 through the /GS command-line switch, which is enabled by default since version 2005. Using /GS- disables the protection.
IBM Compiler
Stack-smashing protection can be turned on by the compiler flag -qstackprotect.
Clang/LLVM
Clang supports the same -fstack-protector options as GCC and a stronger "safe stack" (-fsanitize=safe-stack) system with similarly low performance impact. Clang also has three buffer overflow detectors, namely AddressSanitizer (-fsanitize=address), UBSan (-fsanitize=bounds), and the unofficial SafeCode (last updated for LLVM 3.0).
These systems have different tradeoffs in terms of performance penalty, memory overhead, and classes of detected bugs.
Intel Compiler
Intel's C and C++ compiler supports stack-smashing protection with options similar to those provided by GCC and Microsoft Visual Studio.
Fail-Safe C
Fail-Safe C is an open-source memory-safe ANSI C compiler that performs bounds checking based on fat pointers and object-oriented memory access.
StackGhost (hardware-based)
Invented by Mike Frantzen, StackGhost is a simple tweak to the register window spill/fill routines which makes buffer overflows much more difficult to exploit. It uses a unique hardware feature of the Sun Microsystems SPARC architecture (that being: deferred on-stack in-frame register window spill/fill) to detect modifications of return pointers (a common way for an exploit to hijack execution paths) transparently, automatically protecting all applications without requiring binary or source modifications. The performance impact is negligible, less than one percent. The resulting gdb issues were resolved by Mark Kettenis two years later, allowing enabling of the feature. Following this event, the StackGhost code was integrated (and optimized) into OpenBSD/SPARC.
See also
Control-flow integrity
Address space layout randomization
Executable space protection
Memory debugger
Static code analysis
References
External links
The GCC 2003 Summit Proceedings (PDF)
Smashing the Stack for Fun and Profit by Aleph One
ProPolice official home
Immunix StackGuard Homepage
Original StackGuard paper in USENIX Security 1998
StackGhost: Hardware Facilitated Stack Protection
FreeBSD 5.4 and 6.2 propolice implementation
Four different tricks to bypass StackShield and StackGuard protection
Stack Smashing Protector
Software bugs
Computer security exploits | Buffer overflow protection | [
"Technology"
] | 2,881 | [
"Computer security exploits"
] |
608,675 | https://en.wikipedia.org/wiki/30%20Hudson%20Street | 30 Hudson Street, also known as Goldman Sachs Tower, is a , 42-story building in Jersey City, New Jersey. It is the second tallest building in New Jersey. Completed in 2004, the tower was designed by César Pelli, and was the tallest building in the state for 14 years. It houses offices, a cafeteria, a health unit, and a full-service fitness facility including a physical therapy clinic.
The building is in the Exchange Place area close to a PATH station and is accessible by the Hudson-Bergen Light Rail at the Essex Street and Exchange Place stops.
The tower sits on the waterfront overlooking the Hudson River and Lower Manhattan and is visible from all five of the New York City boroughs. On a clear day, the building may be visible from Highlands, New Jersey to the south and from Bear Mountain, New York to the north.
Originally intended to be a dedicated-use building for Goldman Sachs' middle and back office units, lower-than-projected staffing levels at the bank following the global financial crisis forced Goldman to seek occupancy from other tenants to avoid forgone rental income. Royal Bank of Canada currently shares the space, with plans for other professional service firms to take occupancy as well in the near future. Since 2020, the building has also housed the headquarters of Organon International, AIG and Lord Abbett.
History
Originally the tower was meant to be the centerpiece of an entire Goldman Sachs campus at Exchange Place, which was to include a training center, a university, and a large hotel complex. Many of the company's Manhattan-based equity traders refused to move away from Wall Street, delaying the occupation of the building's top 13 floors, which remained vacant until early 2008.
Once a derelict and mostly industrial part of Jersey City, the Exchange Place area forms part of New Jersey's Gold Coast, a revitalized strip of land along the formerly industrial west bank of the Hudson. Economic development in the 2000s spurred large-scale residential, commercial, and office development along the waterfront.
Although the location was largely rejected by the company's financial executives, 4,000 Goldman Sachs employees made the move to the building, including much of the company's real estate, technology, operations, and administrative departments. The building is certified under LEED-NC Version 2.0 of the U.S. Green Building Council. The building has been surrounded by pedestrian protective scaffolding since 2010.
The company completed construction of another tower in 2010 to house the bulk of their sales and trading departments. It is located at 200 West Street in Lower Manhattan just north of Brookfield Place (originally the World Financial Center), almost directly across the water from 30 Hudson. Under their "Venice strategy", Goldman Sachs launched a ferry service between the two buildings in 2013, operated by the Billybey Ferry Company.
In media
The building was used by the Bernie Sanders 2016 presidential campaign to symbolize Goldman Sachs and Hillary Clinton's ties to the company.
The building can briefly be seen in The Avengers when Iron Man prevents New York City from being struck by a missile, and in Spider-Man: Homecoming, when Tony Stark is about to revoke Peter Parker's Spider-Man suit.
The building, along with its surrounding skyscrapers, is the background image for the recorded audio for Markus Schulz's Global DJ Broadcast World Tour recorded in Barcode in nearby Elizabeth.
Gallery
See also
List of tallest buildings by U.S. state
List of tallest buildings in Jersey City
List of tallest buildings in New Jersey
List of tallest buildings in the United States
International Finance Centre Tower 1 in Hong Kong, designed similarly to the Goldman Sachs Tower
200 West Street in Lower Manhattan, Goldman Sachs global headquarters
References
Notes
External links
Skyscrapers in Jersey City, New Jersey
César Pelli buildings
Skyscrapers in New Jersey
Goldman Sachs
Skyscraper office buildings in Jersey City, New Jersey
Office buildings completed in 2004
Leadership in Energy and Environmental Design certified buildings
2004 establishments in New Jersey | 30 Hudson Street | [
"Engineering"
] | 804 | [
"Building engineering",
"Leadership in Energy and Environmental Design certified buildings"
] |
608,751 | https://en.wikipedia.org/wiki/Pinealocyte | Pinealocytes are the main cells contained in the pineal gland, located behind the third ventricle and between the two hemispheres of the brain. The primary function of the pinealocytes is the secretion of the hormone melatonin, important in the regulation of circadian rhythms. In humans, the suprachiasmatic nucleus of the hypothalamus communicates the message of darkness to the pinealocytes, and as a result, controls the day and night cycle. It has been suggested that pinealocytes are derived from photoreceptor cells. Research has also shown the decline in the number of pinealocytes by way of apoptosis as the age of the organism increases. There are two different types of pinealocytes, type I and type II, which have been classified based on certain properties including shape, presence or absence of infolding of the nuclear envelope, and composition of the cytoplasm.
Types of pinealocytes
Type 1 pinealocytes
Type 1 pinealocytes are also known as light pinealocytes because they stain at a low density when viewed under a light microscope and appear lighter to the human eye. These Type 1 cells have been identified through research to have a round or oval shape and a diameter ranging from 7–11 micrometers. Type 1 pinealocytes are typically more numerous in both children and adults than Type 2 pinealocytes. They are also considered to be the more active cell because of the presence of certain cellular contents, including a high concentration of mitochondria. Another finding consistent with Type 1 pinealocytes is the increase in the amount of lysosomes and dense granules present in the cells as the age of the organism increases, possibly indicating the importance of autophagocytosis in these cells. Research has also shown that Type 1 pinealocytes contain the neurotransmitter serotonin, which later is converted to melatonin, the main hormone secreted by the pineal gland.
Type 2 pinealocytes
Type 2 pinealocytes are also known as dark pinealocytes because they stain at a high density when viewed under a light microscope and appear darker to the human eye. As indicated by research and microscopy, they are round, oval, or elongated cells with a diameter of about 7–11.2 micrometers. The nucleus of a Type 2 pinealocyte contains many infoldings which contain large amounts of rough endoplasmic reticulum and ribosomes. An abundance of cilia and centrioles has also been found in these Type 2 cells of the pineal gland. Unique to the Type 2 is the presence of vacuoles containing 2 layers of membrane. As Type 1 cells contain serotonin, Type 2 cells contain melatonin and are thought to have similar characteristics as endocrine and neuronal cells.
Synaptic ribbons
Synaptic ribbons are organelles seen in pinealocytes using electron microscopy. Synaptic ribbons are found in pinealocytes in both children and adults, but are not found in human fetuses. Research on rats has revealed more information about these organelles. The characteristic protein of synaptic ribbons is RIBEYE, as revealed by light and electron microscopy. In lower vertebrates, synaptic ribbons serve as a photoreceptive organ, but in upper vertebrates, they serve secretory functions within the cell. The presence of proteins such as Munc13-1 indicates that they are important in neurotransmitter release. At night, synaptic ribbons of rats appear larger and slightly curved, but during the day, they appear smaller and rod-like.
Evolution of pinealocytes
A common theory on the evolution of pinealocytes is that they evolved from photoreceptor cells. It is speculated that in ancestral vertebrates, the pinealocytes served the same function as photoreceptor cells, such as retinal cells; in many non-mammalian vertebrates, pineal cells in the retina are still actively photoreceptive, although these cells do not contribute to a visual image. Structural, functional, and genetic similarities exist between the two cell types. Structurally, both develop from the area of the brain designated the diencephalon, also the area containing the thalamus and hypothalamus, during embryological development. Both types of cells have similar features, including cilia, folded membranes, and polarity. Functional evidence for this theory of evolution can be seen in non-mammalian vertebrates. The retention of photosensitivity of the pinealocytes of lampreys, fish, amphibians, reptiles, and birds and the secretion of melatonin by some of these lower vertebrates suggests that mammalian pinealocytes may have once served as photoreceptor cells. Researchers have also indicated the presence of several photoreceptor proteins found in the retina in the pinealocytes in chicken and fish. Genetic evidence demonstrates that phototransduction genes expressed in the photoreceptors of the retina are also present in pinealocytes.
More evidence for the evolution of pinealocytes from photoreceptor cells is the similarities between the ribbon complexes in the two types of cells. The presence of the protein RIBEYE and other proteins in both pinealocytes and sensory cells (both photoreceptors and hair cells) suggests that the two cells are related to one another evolutionarily. Differences between the two synaptic ribbons exist in the presence of certain proteins, such as ERC2/CAST1, and the distribution of proteins within the complexes of each cell.
Melatonin
Regulation
Regulation of melatonin synthesis is important to melatonin’s main function in circadian rhythms. The main molecular control mechanism that exists for melatonin secretion in vertebrates is the enzyme AANAT (arylalkylamine N-acetyltransferase). The expression of the AANAT gene is controlled by the transcription factor pCREB, and this is evident when cells treated with epithalone, a peptide which affects pCREB transcription, have a resulting increase in melatonin synthesis. AANAT is activated through a protein kinase A system in which cyclic AMP (cAMP) is involved. The activation of AANAT leads to an increase in melatonin production. Though there are some differences specific to certain species of vertebrates, the effect of cAMP on AANAT and AANAT on melatonin synthesis remains fairly consistent.
Melatonin synthesis is also regulated by the nervous system. Nerve fibers in the retinohypothalamic tract connect the retina to the suprachiasmatic nucleus (SCN). The SCN stimulates the release of norepinephrine from sympathetic nerve fibers from the superior cervical ganglia that synapse with the pinealocytes. Norepinephrine causes the production of melatonin in the pinealocytes by stimulating the production of cAMP. Because the release of norepinephrine from the nerve fibers occurs at night, this system of regulation maintains the body’s circadian rhythms.
Synthesis
Pinealocytes synthesize the hormone melatonin by first converting the amino acid tryptophan to serotonin. The serotonin is then acetylated by the AANAT enzyme and converted into N-acetylserotonin. N-acetylserotonin is converted into melatonin by the enzyme hydroxyindole O-methyltransferase (HIOMT), also known as acetylserotonin O-methyltransferase (ASMT). Activity of these enzymes is high during the night and regulated by the mechanisms previously discussed involving norepinephrine.
See also
List of human cell types derived from the germ layers
References
External links
Endocrine system | Pinealocyte | [
"Biology"
] | 1,619 | [
"Organ systems",
"Endocrine system"
] |
608,754 | https://en.wikipedia.org/wiki/File%20descriptor | In Unix and Unix-like computer operating systems, a file descriptor (FD, less frequently fildes) is a process-unique identifier (handle) for a file or other input/output resource, such as a pipe or network socket.
File descriptors typically have non-negative integer values, with negative values being reserved to indicate "no value" or error conditions.
File descriptors are a part of the POSIX API. Each Unix process (except perhaps daemons) should have three standard POSIX file descriptors, corresponding to the three standard streams: standard input (file descriptor 0), standard output (file descriptor 1), and standard error (file descriptor 2).
Overview
In the traditional implementation of Unix, file descriptors index into a per-process file descriptor table maintained by the kernel, that in turn indexes into a system-wide table of files opened by all processes, called the file table. This table records the mode with which the file (or other resource) has been opened: for reading, writing, appending, and possibly other modes. It also indexes into a third table called the inode table that describes the actual underlying files. To perform input or output, the process passes the file descriptor to the kernel through a system call, and the kernel will access the file on behalf of the process. The process does not have direct access to the file or inode tables.
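A minimal illustration of this descriptor-based I/O model is sketched below: the process only ever holds the small integer, and the kernel's tables do the rest. The path used is an arbitrary example and error handling is abbreviated.

```c
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[256];
    int fd = open("/etc/hostname", O_RDONLY);   /* kernel returns a table index  */
    if (fd < 0)
        return 1;

    ssize_t n = read(fd, buf, sizeof(buf));     /* fd is passed back on each call */
    if (n > 0)
        write(STDOUT_FILENO, buf, (size_t)n);   /* descriptor 1: standard output  */

    close(fd);                                  /* releases the table entry       */
    return 0;
}
```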
On Linux, the set of file descriptors open in a process can be accessed under the path /proc/PID/fd/, where PID is the process identifier. File descriptor /proc/PID/fd/0 is stdin, /proc/PID/fd/1 is stdout, and /proc/PID/fd/2 is stderr. As a shortcut to these, any running process can also access its own file descriptors through the folders /proc/self/fd and /dev/fd.
In Unix-like systems, file descriptors can refer to any Unix file type named in a file system. As well as regular files, this includes directories, block and character devices (also called "special files"), Unix domain sockets, and named pipes. File descriptors can also refer to other objects that do not normally exist in the file system, such as anonymous pipes and network sockets.
The FILE data structure in the C standard I/O library usually includes a low level file descriptor for the object in question on Unix-like systems. The overall data structure provides additional abstraction and is instead known as a file handle.
Operations on file descriptors
The following lists typical operations on file descriptors on modern Unix-like systems. Most of these functions are declared in the <unistd.h> header, but some are in the <fcntl.h> header instead.
Creating file descriptors
(Linux)
(Linux)
(Linux)
(Linux)
(Linux)
(Linux)
(Linux)
(Linux)
(with flag CLONE_PIDFD, Linux)
(Linux)
(Linux)
(BSD)
(kFreeBSD)
Deriving file descriptors
Operations on a single file descriptor
,
,
,
,
, (also used for sending FDs to other processes over a Unix domain socket)
,
,
, (Linux)
, (Linux)
(Linux)
(Linux)
(Linux)
(Linux)
(kFreeBSD)
(with P_PIDFD ID type, Linux)
(stdio function:converts file descriptor to FILE*)
(stdio function: prints to file descriptor)
Operations on multiple file descriptors
,
,
, , (Linux, takes a single epoll filedescriptor to wait on many other file descriptors)
(for Linux)
(for BSD-based systems).
, (for Linux)
(for Linux)
Operations on the file descriptor table
The fcntl() function is used to perform various operations on a file descriptor, depending on the command argument passed to it. There are commands to get and set attributes associated with a file descriptor, including F_GETFD, F_SETFD, F_GETFL and F_SETFL.
(BSD and Solaris only; deletes all file descriptors greater than or equal to specified number)
(for Linux)
(duplicates an existing file descriptor guaranteeing to be the lowest number available file descriptor)
, (Close fd1 if necessary, and make file descriptor fd1 point to the open file of fd2)
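On POSIX systems the duplication described in the last entry is provided by dup2(); the sketch below uses it for the classic shell-style redirection of standard output to a file. The file name is illustrative.

```c
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int logfd = open("output.log", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (logfd < 0)
        return 1;

    dup2(logfd, STDOUT_FILENO);   /* close fd 1 if open, point it at output.log */
    close(logfd);                 /* the duplicate on fd 1 keeps the file open   */

    write(STDOUT_FILENO, "redirected\n", 11);
    return 0;
}
```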
Operations that modify process state
(sets the process's current working directory based on a directory file descriptor)
(maps ranges of a file into the process's address space)
File locking
and
Sockets
(creates a new file descriptor for an incoming connection)
(shuts down one or both halves of a full duplex connection)
Miscellaneous
(a large collection of miscellaneous operations on a single file descriptor, often associated with a device)
at suffix operations
A series of new operations has been added to many modern Unix-like systems, as well as numerous C libraries, to be standardized in a future version of POSIX. The at suffix signifies that the function takes an additional first argument supplying a file descriptor from which relative paths are resolved, the forms lacking the at suffix thus becoming equivalent to passing a file descriptor corresponding to the current working directory. The purpose of these new operations is to defend against a certain class of TOCTOU attacks.
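The pattern can be sketched with openat(), one of the standardized at-suffix calls; the directory and file names below are assumptions for illustration.

```c
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int dirfd = open("/var/spool/app", O_RDONLY | O_DIRECTORY);
    if (dirfd < 0)
        return 1;

    /* "job.txt" is resolved inside the directory dirfd already refers to,
     * rather than via the process-wide current working directory, which
     * narrows the window for time-of-check/time-of-use races. */
    int fd = openat(dirfd, "job.txt", O_RDONLY);
    if (fd >= 0)
        close(fd);

    close(dirfd);
    return 0;
}
```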
File descriptors as capabilities
Unix file descriptors behave in many ways as capabilities. They can be passed between processes across Unix domain sockets using the sendmsg() system call. Note, however, that what is actually passed is a reference to an "open file description" that has mutable state (the file offset, and the file status and access flags). This complicates the secure use of file descriptors as capabilities, since when programs share access to the same open file description, they can interfere with each other's use of it by changing its offset or whether it is blocking or non-blocking, for example. In operating systems that are specifically designed as capability systems, there is very rarely any mutable state associated with a capability itself.
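A sketch of the sending side of this mechanism follows, using sendmsg() with an SCM_RIGHTS control message over an already-connected Unix domain socket; the one-byte payload and the helper's name are assumptions, and error handling is abbreviated.

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int send_fd(int sock, int fd_to_send)
{
    char            dummy = '*';                      /* at least one data byte  */
    struct iovec    io    = { .iov_base = &dummy, .iov_len = 1 };
    char            ctrl[CMSG_SPACE(sizeof(int))];    /* room for one descriptor */
    struct msghdr   msg   = { 0 };
    struct cmsghdr *cmsg;

    msg.msg_iov        = &io;
    msg.msg_iovlen     = 1;
    msg.msg_control    = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    cmsg             = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type  = SCM_RIGHTS;                    /* "rights" = descriptors  */
    cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;      /* receiver gets a new fd  */
}
```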
A Unix process' file descriptor table is an example of a C-list.
See also
fuser (Unix)
lsof
File Control Block (FCB) - an alternative scheme in CP/M and early versions of DOS
References
POSIX
Unix file system technology
"Technology"
] | 1,346 | [
"Computer standards",
"POSIX"
] |
608,908 | https://en.wikipedia.org/wiki/Dust%20jacket | The dust jacket (sometimes book jacket, dust wrapper or dust cover) of a book is the detachable outer cover, usually made of paper and printed with text and illustrations. This outer cover has folded flaps that hold it to the front and back book covers; these flaps may also double as bookmarks.
Dust jackets originally displayed cover information on top of a simple binding, at a time when it was not feasible to print directly onto the binding. The role of a dust jacket has been largely supplanted by modern hardcover printing technologies, which print such information directly onto the binding.
Modern dust covers still serve to display promotional material and shield the book from damage. The back panel or flaps of the dust cover are printed with biographical information about the author, a summary of the book from the publisher (known as a blurb) or critical praise from celebrities or authorities in the book's subject area. The back of a dust jacket often has a barcode for retail purchase, and the book's ISBN. The information on the dust jacket often resembles that of the binding but may have additional promotions about an edition, and the information on the flaps is not typically copied onto the binding.
The dust jacket protects the book covers from damage. However, since it is itself relatively fragile, and since dust jackets have practical, aesthetic, and sometimes financial value, the jacket may in turn be wrapped in another jacket, usually transparent, especially if the book is a library volume.
Early history
Before the 1820s, most books were published unbound and were generally sold to customers either in this form, or in simple bindings executed for the bookseller, or in bespoke bindings commissioned by the customer. At this date, publishers did not have their books bound in uniform "house" bindings, so there was no reason for them to issue dust jackets. Book owners did occasionally fashion their own jackets out of leather, wallpaper, fur, or other material, and many other types of detachable protective covers were made for codices, manuscripts, and scrolls from ancient times through the Middle Ages and into the modern period.
At the end of the 18th century, publishers began to issue books in plain paper-covered boards, sometimes with a printed spine label; this form of binding was intended to be temporary. Some collections of loose prints were issued at this period in printed paper wrappings, again intended to be temporary. In the first two decades of the nineteenth century, publishers started issuing some smaller books in bindings of printed paper-covered boards, and throughout the 1820s and 1830s some small popular books, notably annual gift books and almanacs, were issued in detachable printed pasteboard sheaths. These small boxes are sometimes loosely and erroneously referred to as the first dust jackets. True publisher's bindings in cloth and leather, in which all, or a substantial part of, an edition were bound, were also introduced shortly before 1820, by the innovative publisher William Pickering.
Oldest dust jackets
After publishers' cloth bindings started coming into common use on all types of books in the 1820s, the first publishers' dust jackets appeared by the end of that decade. The earliest known examples were issued on English literary annuals which were popular from the 1820s to the 1850s. These books often had fancy bindings that needed protection. The jackets that were used at this time completely enclosed the books like wrapping paper and were sealed shut with wax or glue.
The oldest publishers' dust jacket now on record was issued in 1829 on an English annual, Friendship's Offering for 1830. It was discovered at the Bodleian Library in Oxford by Michael Turner, a former curator and Head of Conservation at the Library. Its existence was announced by Oxford in 2009. It is three years older than the previous oldest known jacket, which was discovered in 1934 by the English bookman John Carter on another English annual, The Keepsake for 1833 (issued in 1832). Both jackets are of the type that completely enclosed the books.
Most jackets of this type were torn when they were opened and then discarded like gift-wrapping paper; they were not designed to be reused, and surviving examples are known on only a handful of titles. The scarcity of jackets of this type, together with the lack of written documentation from publishers of the period, makes it very hard to determine how widely these all-enclosing jackets were used during the period from 1820 to 1850, but they were likely common on ornately bound annuals and on some trade books.
The earliest known dust jackets of the modern style, with flaps, which covered just the binding and left the text block exposed, date from the 1850s, although this type of jacket was likely in at least limited use some years earlier. This is the jacket that became standard in the publishing industry and is still in use today. It is believed that flap-style jackets were in general use by the 1880s, and probably earlier, although the number of surviving examples from the 1850s, 1860s, and 1870s is too small to prove exactly when they became ubiquitous, and again, there are no known publishers' records that document the use of dust jackets during these decades. There are, however, enough surviving examples from the 1890s to state unequivocally that dust jackets were all but universal throughout that decade. They were probably issued more often than not by the 1860s and 1870s in Europe, Great Britain, and the United States.
Late 19th and early 20th centuries
Throughout the nineteenth century, nearly all dust jackets were discarded at or soon after purchase. Many were discarded in bookstores as the books were put out for display, or when they were sold; there is evidence that this was common practice in England until World War I. The period from the 1820s to 1900 was a golden age for publishers' decorative bookbinding, and most dust jackets were much plainer than the books they covered, often simply repeating the main elements of the binding decoration in black on cream or brown paper. For this reason, most people preferred to display their books in their bindings, much as earlier generations had displayed their library books in their gold-tooled individual bindings, usually in leather or vellum. Even late in the nineteenth century there were still some publishers who were not using dust jackets at all (the English publisher Methuen is one example). Some firms, such as subscription houses which sold millions of cheap books door-to-door, probably never used them.
Cloth dust jackets became popular late in the nineteenth century. These jackets, with the outer cloth usually reinforced with an underlayer of paper, were issued mostly on ornate gift editions, often in two volumes and often with a slipcase. Other types of publishers' boxes were also popular in the second half of the nineteenth century, including many made to hold multi-volume sets of books. The jackets on boxed volumes were often plain, sometimes with cutouts on the spine to allow the title or volume numbers of the books to be seen.
After 1900, fashion and the economics of publishing caused book bindings to become less decorative, and it was cheaper for publishers to make the jackets more attractive. By around 1920, most of the artwork and decoration had migrated from the binding to the dust jacket, and jackets were routinely printed with multiple colors, extensive advertising and blurbs; even the underside of the jacket was now sometimes used for advertising.
As dust jackets became more attractive than the bindings, more people began to keep the jackets on their books, at least until they became soiled, torn, or worn out. One bit of evidence that indicates when jackets became saved objects is the movement of the printed price from the spine of the jacket to a corner of one of the flaps. This also occurred in the 1910s and early 1920s. When jackets were routinely discarded at point of purchase, it did not matter where the price was printed (and many early jackets were not printed with any price), but now if book buyers of the 1910s and 1920s wanted to save the jacket and give a book as a gift, they could clip off the price without ruining the jacket.
In 1939, Arthur Brody, a student at Columbia University, invented a film-based jacket, which is used by libraries to protect paper dust jackets.
Supplementary bands
In Japan, both hardcover and softcover books frequently come with two dust jackets – a full-sized one, serving the same purpose as in the West (it is usually retained with the book), and a thin "obi" ("belt"; colloquially "belly band" in English), which is generally disposed of and serves a similar function to 19th-century Western dust jackets.
Similar bands occasionally appear in the west, for example in Palookaville #20.
As collectible items
Dust jackets from the 1920s and later were often decorated in art deco styles which are highly prized by collectors. Some of them are worth far more than the books they cover. The most famous example is the jacket on the first edition of The Great Gatsby by F. Scott Fitzgerald, published in 1925. Without jacket, the book brings $1,000 or so. With the jacket it can bring $20,000 or $30,000 or more, depending on condition. One copy in a near mint jacket was listed for sale in 2009 for half a million dollars. The most valuable jackets are usually those on the high spots of literature. Condition is of paramount importance to value. Other examples of highly prized jackets include those on most of Ernest Hemingway's titles, and the first editions of books such as Harper Lee's To Kill A Mockingbird, J. D. Salinger's Catcher in the Rye and Dashiell Hammett's The Maltese Falcon, among many others. Prices for dust jackets have become so inflated in recent years that even early reprints of certain titles in jacket can command good prices. Conversely, if the book itself is unimportant, or at least has little demand, the jacket is usually of little value either, but nearly all surviving pre-1920 jackets add some additional value to the book they cover.
Some collectors and dealers, in an effort to increase the value of a first edition that has lost its original jacket, will take a jacket from a later printing and "marry" it to the earlier one. This practice persists because some customers will pay more for a first edition in a later jacket than they would for a jacketless copy. However, switching jackets muddles the bibliographical record and creates a forgery of sorts.
See also
Blurb
Book collecting
Book cover
Book design
Footnotes
Further reading
G. Thomas Tanselle, Book-Jackets: Their History, Forms, and Use. Charlottesville, VA: Bibliographical Society of the University of Virginia, 2011.
Mark R. Godburn: Nineteenth-century dust-jackets. Pinner, Middlesex, England: Private Libraries Association; New Castle, Delaware: Oak Knoll Press, 2016. , .
External links
Mark R. Godburn, Early Dust Jackets website.
Dust Jacket Artists
"A Brief History of the Dust Jacket" – biblio.com
Book design
Book terminology
Book covers | Dust jacket | [
"Engineering"
] | 2,252 | [
"Book design",
"Design"
] |
608,916 | https://en.wikipedia.org/wiki/Slipcase | A slipcase is a five-sided box, usually made of high-quality cardboard, into which binders, books or book sets are slipped for protection, leaving the spine exposed. Special editions of books are often slipcased for a stylish appearance when placed on a bookshelf. A few publishers, such as the Folio Society, publish nearly all their books in slipcases.
Protective slipcases have been issued for records, cassettes, 8-track tapes, film, video cassettes, compact discs, DVDs and even toys instead of or in addition to the more common jewel cases or keep case, and may be chosen for aesthetic or economic reasons. Larger slipcases that are designed to house one or more items are often used in packaging for special edition releases or box sets.
See also
Solander box
References
External links
Making the 10-Minute Slipcase
Cloth Covered Slipcase: Making the Box, Covering the Box
Publishing
Containers
Book design | Slipcase | [
"Engineering"
] | 195 | [
"Book design",
"Design"
] |
608,935 | https://en.wikipedia.org/wiki/Marginalia | Marginalia (or apostils) are marks made in the margins of a book or other document. They may be scribbles, comments, glosses (annotations), critiques, doodles, drolleries, or illuminations.
Biblical manuscripts
Biblical manuscripts have notes in the margin, for liturgical use. Numbers of texts' divisions are given at the margin (Ammonian Sections, Eusebian Canons). There are some scholia, corrections and other notes usually made later by hand in the margin. Marginalia may also be of relevance because many ancient or medieval writers of marginalia may have had access to other relevant texts that, although they may have been widely copied at the time, have since then been lost due to wars, prosecution, or censorship. As such, they might give clues to an earlier, more widely known context of the extant form of the underlying text than is currently appreciated. For this reason, scholars of ancient texts usually try to find as many still existing manuscripts of the texts they are researching, because the notes scribbled in the margin might contain additional clues to the interpretation of these texts.
History
The scholia on classical manuscripts are the earliest known form of marginalia.
In Europe, before the invention of the printing press, books were copied by hand, originally onto vellum and later onto paper. Paper was expensive and vellum was much more expensive. A single book cost as much as a house. Books, therefore, were long-term investments expected to be handed down to succeeding generations. Readers commonly wrote notes in the margins of books in order to enhance the understanding of later readers. Of the 52 extant manuscript copies of Lucretius' "De rerum natura" (On the Nature of Things) available to scholars, all but three contain marginal notes.
The practice of writing in the margins of books gradually declined over several centuries after the invention of the printing press. Printed books gradually became much less expensive, so they were no longer regarded as long-term assets to be improved for succeeding generations. The first Gutenberg Bible was printed in the 1450s. Hand annotations occur in most surviving books through the end of the 1500s. Marginalia did not become unusual until sometime in the 1800s.
Fermat's claim, written around 1637, that he had a proof of Fermat's Last Theorem too large to fit in the margin is the most famous mathematical marginal note. Voltaire, in the 1700s, annotated books in his library so extensively that his annotations have been collected and published. The first recorded use of the word marginalia is in 1819 in Blackwood's Magazine. From 1845 to 1849 Edgar Allan Poe titled some of his reflections and fragmentary material "Marginalia". Five volumes of Samuel T. Coleridge's marginalia have been published. Beginning in the 1990s, attempts have been made to design and market e-book devices permitting a limited form of marginalia.
Some famous marginalia were serious works, or drafts thereof, written in margins due to scarcity of paper. Voltaire composed in book margins while in prison, and Sir Walter Raleigh wrote a personal statement in margins just before his execution.
Recent studies
Marginalia can add to or detract from the value of an association copy of a book, depending on the author of the marginalia and on the book.
Catherine C. Marshall, doing research on the future of user interface design, has studied the phenomenon of user annotation of texts. She discovered that in several university departments, students would scour the piles of textbooks at used book dealers for consistently annotated copies. The students had a good appreciation for their predecessors' distillation of knowledge. In recent years, the marginalia left behind by university students as they engage with library textbooks has also been a topic of interest to sociologists looking to understand the experience of being a university student.
The former Moscow correspondent of The Financial Times, John Lloyd, has stated that he was shown Stalin's copy of Machiavelli's The Prince, with marginal comments.
American poet Billy Collins has explored the phenomenon of annotation within his poem titled "Marginalia".
A study on medieval and Renaissance manuscripts where snails are depicted on marginalia shows that these illustrations are a comic relief due to the similarity between the armor of knights and the shell of snails.
Writers known for their marginalia
David Foster Wallace
Edgar Allan Poe
Herman Melville
Isaac Newton
John Adams
Machiavelli
Mark Twain
Michel de Montaigne
Oscar Wilde
Pierre de Fermat
Samuel T. Coleridge
Sylvia Plath
Hester Thrale Piozzi
Voltaire
See also
Annotation, often in the form of a margin note but written by another hand.
Interpolation (manuscripts)
References
Other resources
Alston, R. C. Books with Manuscript: A short title catalog of Books with Manuscript Notes in the British Library. London: British Library, 1994.
Camille, M. (1992). Image on the edge: the margins of medieval art. Harvard University Press.
Coleridge, S. T. Marginalia, Ed. George Walley and H. J. Jackson. The Collected works of Samuel Taylor Coleridge 12. Bolligen Series 75. 5 vols. Princeton University Press, 1980-.
Jackson, H. J. Marginalia: Readers writing in Books, New Haven: Yale University Press, 2001. N.B: one of the first books on this subject
Screti, Z. (2024). Finding the Marginal in Marginalia: The Importance of Including Marginalia Descriptions in Catalog Entries. Collections, 20(1), 122-141.
Spedding, P., & Tankard, P. (2021). Marginal notes: social reading and the literal margins. Palgrave Macmillan.
External links
. Barry Brahier, 2006 (University of Minnesota).
Book design
Book collecting
Writing | Marginalia | [
"Engineering"
] | 1,203 | [
"Book design",
"Design"
] |
609,062 | https://en.wikipedia.org/wiki/Foxing | Foxing is an age-related process of deterioration that causes spots and browning on old paper documents such as books, postage stamps, old paper money and certificates. The name may be a variant form of the English West country dialect term foust and Scots foze, to become moldy. Alternatively, it may derive from the fox-like reddish-brown color of the stains. Paper so affected is said to be "foxed".
Foxing is seldom found in incunabula, or books printed before 1501. Decrease in rag fibre quality may be a culprit; as demand for paper rose in later centuries, papermakers used less water and spent less time cleansing the rag fibres used to make paper. An early work of art to have been affected by foxing is the Portrait of a Man in Red Chalk, a drawing on paper by Leonardo da Vinci.
Foxing also occurs in biological study skins or specimens, as an effect of chemical reactions or mold on melanin. Textiles, such as articles of clothing, so affected may also be said to be foxed.
Aside from foxing, other types of age-related paper deterioration include destruction of the lignin by sunlight and absorbed atmospheric pollution, typically causing the paper to become brown and crumble at the edges, and acid-related damage to cheap paper such as newsprint, which manufacturers make without neutralizing acidic contaminants.
Causes of foxing
The causes of foxing are not well understood. One conjecture is that foxing is caused by a fungal growth on the paper. Another is that foxing is caused by the effect on certain papers of the oxidation of iron, copper, or other substances in the pulp or rag from which the paper was made. It is possible that multiple factors are involved. High humidity may contribute to foxing.
Repairing foxed documents
Foxed documents can be repaired, with greater or lesser success, using sodium borohydride, proprietary bleaches, dilute hydrogen peroxide or lasers. Each method risks side effects or damage to the paper or ink.
Another method is to scan the image and process that image using a high-level image processing program. This can usually remove the effects of foxing while leaving text and images intact.
In biological specimens
It is generally not advisable to repair study specimens, except perhaps for mechanical damage. Type specimens should – if at all possible – not be altered in any way. If foxing affects the study value of a specimen (e.g. in bird or mammal skins or in insects, where it may affect diagnostic coloration), this might rather be remarked on the specimen label. Color standards can provide a means of documenting coloration before or in the early stages of foxing.
See also
List of used book conditions
Distressing
References
Cited Sources
Related Works
Smithe, Frank B (1974): Naturalists' Color Guide Supplement. American Museum of Natural History, NYC. .
Smithe, Frank B (1975-): Naturalist's Color Guide. American Museum of Natural History, NYC. .
Smithe, Frank B (1981): Naturalist's Color Guide Part III. American Museum of Natural History, NYC. .
External links
The Library of Congress: 'Preserving Works on Paper'
Foxing
Book collecting
Materials degradation
Papermaking
Philatelic terminology | Foxing | [
"Materials_science",
"Engineering"
] | 672 | [
"Materials degradation",
"Materials science"
] |
609,079 | https://en.wikipedia.org/wiki/Paper%20towel | A paper towel is an absorbent, disposable towel made from paper. In Commonwealth English, paper towels for kitchen use are also known as kitchen rolls, kitchen paper, or kitchen towels. For home use, paper towels are usually sold in a roll of perforated sheets, but some are sold in stacks of pre-cut and pre-folded layers for use in paper-towel dispensers. Unlike cloth towels, paper towels are disposable and intended to be used only once. Paper towels absorb water because they are loosely woven, which enables water to travel between the fibers, even against gravity (capillary effect). They have similar purposes to conventional towels, such as drying hands, wiping windows and other surfaces, dusting, and cleaning up spills. Paper towel dispensers are commonly used in toilet facilities shared by many people (such as at schools or shopping malls), as they are often considered more hygienic than hot-air hand dryers or shared cloth towels.
History
In 1907, the Philadelphia-based Scott Paper Company developed the first restroom tissues. They started the paper towel industry when they began selling Sani-Towels and used advertising to convince the public that paper towels were essential for personal hygiene.
In 1919, William E. Corbin, Henry Chase, and Harold Titus began experimenting with paper towels in the Research and Development building of the Brown Company in Berlin, New Hampshire. By 1922, Corbin perfected their product and began mass-producing it at the Cascade Mill on the Berlin/Gorham line. This product was called Nibroc Paper Towels (Corbin spelled backwards). In 1931, the Scott Paper Company introduced their paper towel rolls for kitchens.
Paper towels are commonly used for drying hands in public bathrooms. In the 21st century, however, electric jet-air dryers have threatened their dominance. While there is no clear scientific consensus over which method is more hygienic, the paper towel industry and hand dryer manufacturers such as Dyson have each attempted to discredit each other by funding studies which spur sensationalist headlines and running advertisements. The public relations battle has also been fueled by animosity between both sides.
Production
Paper towels are made from either virgin or recycled paper pulp, which is extracted from wood or fiber crops. They are sometimes bleached during the production process to lighten coloration, and may also be decorated with colored images on each square (such as flowers or teddy bears). Resin size is used to improve the wet strength. Paper towels are packed individually and sold as stacks, or are held on a continuous roll, and come in two distinct classes: domestic and institutional. Many companies produce paper towels. Some common brand names are Bounty, Seventh Generation, Scott, Viva, and Kirkland brand among many others.
Market
Tissue products in North America, including paper towels, are divided into consumer and commercial markets, with household consumer usage accounting for approximately two thirds of total North American consumption. Commercial usage, or otherwise any use outside of the household, accounts for the remaining third of North American consumption. The growth in commercial use of paper towels can be attributed to the migration from folded towels (in public bathrooms, for example) to roll towel dispensers, which reduces the amount of paper towels used by each patron.
Within the forest products industry, paper towels are a major part of the "tissue market", second only to toilet paper.
Globally, Americans are the highest per capita users of paper towels in the home, consuming roughly 50% more than Europeans and nearly 500% more than Latin Americans. By contrast, people in the Middle East tend to prefer reusable cloth towels, and people in Europe tend to prefer reusable cleaning sponges.
Paper towels are popular primarily among people who have disposable income, so their use is higher in wealthy countries and low in developing countries.
Growing hygiene consciousness during the COVID-19 pandemic led to a boost in paper towel market growth.
Environmental issues
Paper towels are a global product with rising production and consumption. Being second in tissue consumption only to toilet paper (36% vs. 45% in the U.S.), the proliferation of paper towels, which are mostly non-recyclable, has globally adverse effects on the environment. However, paper towels made from recycled paper do exist, and are sold at many outlets. Some are manufactured from bamboo, which grows faster than trees.
Electric hand dryers are an alternative to using paper towels for hand drying. However, paper towels are quicker than hand dryers: after ten seconds, paper towels achieve 90% dryness, while hot air dryers require 40 seconds to achieve a similar dryness. Electric hand dryers may also spread bacteria to hands and clothing.
See also
Hand washing
Handle-o-Meter
References
External links
Paper products
Cleaning products
Personal hygiene products
Domestic implements
Disposable products
American inventions
20th-century inventions
Towels | Paper towel | [
"Chemistry"
] | 1,018 | [
"Cleaning products",
"Products of chemical industry"
] |
609,096 | https://en.wikipedia.org/wiki/Cotton%20swab | Cotton swabs (American English) or cotton buds (British English) are wads of cotton wrapped around a short rod made of wood, rolled paper, or plastic. They are most commonly used for ear cleaning, although this is not recommended by physicians. Other uses for cotton swabs include first aid, cosmetics application, cleaning, infant care, and crafts. Some countries have banned the plastic-stemmed versions in favor of biodegradable alternatives over concerns about marine pollution.
History
The first mass-produced cotton swab was developed in 1923 by Polish-American Jew Leo Gerstenzang after he watched his wife attach wads of cotton to toothpicks to clean their infant's ears. His product was originally named "Baby Gays" in recognition of their being intended for infants before being renamed "Q-tips Baby Gays", with the "Q" standing for "quality". The product eventually became known as "Q-tips", which went on to become the most widely sold brand name of cotton swabs in North America. The term "Q-tip" is often used as a genericized trademark for a cotton swab in the United States and Canada. The Q-tips brand is owned by Elida Beauty. It was formerly owned by Unilever and had over $200 million in US sales in 2014. "Johnson's buds" are made by Johnson & Johnson.
However, according to the United States Patent Case (C-10,415) Q-Tips, Inc. v. Johnson & Johnson, 108 F. Supp. 845 (D.N.J. 1952), it would appear that the first commercial producer of cotton tipped applicators was a Mrs. Hazel Tietjen Forbis, who manufactured them in her home. She also owned a patent on the article, numbered 1,652,108, dated December 6, 1927, and sold the product under the appellation Baby Nose-Gay. In 1925, Leo Gerstenzang Co., Inc. purchased an assignment of the product patent from Mrs. Forbis. On January 2, 1937, Q-Tips, Inc's president, Mr. Leo Gerstenzang, and his wife Mrs. Ziuta Gerstenzang formed a partnership and purchased from Mrs. Forbis "All merchandise, machinery and fixtures now contained in the premises 132 W. 36th Street and used by said Q-Tips, Inc., for the manufacture of Q-Tips or medicated swabs together with the accounts receivable of said Q-Tips, Inc." The contract recited that Q-Tips, Inc. was the owner of patents covering the manufacture of applicators.
Originally, when cotton tipped applicators were made by Mrs. Forbis, they were sold under the name of Baby Nose-Gays. In 1925, after The Leo Gerstenzang Co., Inc. purchased an assignment of the product patent from Mrs. Forbis, the packages of applicators were labelled Baby-Gays. In 1926, the legend was changed to read "Q-Tips Baby Gays", and in 1927 an application was made to register the mark "Q-Tips Baby Gays". Sometime after 1926, the words "Baby Gays" were dropped and the concern began to develop "Q-Tips" as its identifying mark, applying for registration of it on September 14, 1933. Packages were made up using blue paper with pictures of double tipped applicators upon them, features which have been the basis for the Q Tips packaged sign since that time. The design of the crossed applicators was made by dropping them and then photographing the resulting pattern.
Description
The traditional cotton swab has a single tip on a wooden handle, and these are still often used, especially in medical settings. They are usually relatively long. These often are packaged sterile, one or two to a paper or plastic sleeve. The advantage of the paper sleeve and the wooden handle is that the package can be autoclaved to be sterilized (plastic sleeves or handles would melt in the autoclave).
Cotton swabs manufactured for home use are usually shorter and usually double-tipped. The handles were first made of wood, then made of rolled paper, which is still most common (although tubular plastic is also used). They are often sold in large quantities, 100 or more to a container.
Plastic swab stems exist in a wide variety of colors, such as blue, pink or green. However, the cotton itself is traditionally white.
Use
The most common use for cotton swabs is to clean the ear canal by removing earwax. This use is usually against manufacturer instructions. Cotton swabs are also commonly used for cosmetic purposes such as applying and removing makeup and touching up nail polish, as well as for household uses such as cleaning and arts and crafts.
Medical-type swabs are often used to take microbiological cultures. The swabs are rubbed onto or into the infected area, then wiped across the culture medium, such as an agar plate, where bacteria from the swab may grow. They are also used to take DNA samples, most commonly by scraping cells from the inner cheek in the case of humans. They can be used to apply medicines to a targeted area, to selectively remove substances from a targeted area, or to apply cleaning substances like Betadine. They are also used as an applicator for various cosmetics, ointments, and other substances.
A related area is the use of swabs for microbiological environmental monitoring. Once taken, the swab can be streaked onto an agar plate, or the contents of the tip removed by agitation or dilution into the broth. The broth can either then be filtered or incubated and examined for microbial growth.
Cotton swabs are also often used outside of the field of personal hygiene:
They are often used in the construction of plastic model kits, for various applications during the application of decals or painting. Special brands of cotton swabs exist for this purpose, characterised by sturdier cotton heads and varied shapes of those heads.
They can be used in the dyne test for measuring surface energy. This use is problematic, as manufacturers differ in the binders they use to fix the cotton to the stem, affecting the outcome of the test.
They are frequently used for cleaning the laser diode lens of an optical drive in conjunction with rubbing alcohol. Similarly, they are used for cleaning larger computer parts such as video cards and fans. They were also widely used in the past to clean video game cartridges.
Role in medical diagnostics
The importance of swab technology in medical diagnostics is immense. Swabs are a primary tool for collecting patient specimens, vital for accurately detecting pathogens, DNA sampling, and disease diagnosis. The collection's precise nature and the swab's quality are critical in ensuring reliable test results.
Examples include:
Nasopharyngeal swabs for respiratory virus detection
Swabs for efficient DNA material collection
Swabs to assess the presence of microbial infection, in sterility and prevention of contamination
Medical risks
The use of cotton swabs in the ear canal has no associated medical benefits and poses definite medical risks. Cerumen (ear wax) is a naturally occurring, normally extruded, product of the external auditory canal that protects the skin inside the ear, serves beneficial lubrication and cleaning functions, and provides some protection from bacteria, fungi, insects, and water.
Attempts to remove cerumen with cotton swabs may result in cerumen impaction, a buildup or blockage of cerumen in the ear canal, which can cause pain, hearing problems, ringing in the ear, or dizziness, and may require medical treatment to resolve. The use of cotton swabs in the ear canal is one of the most common causes of perforated eardrum, a condition which sometimes requires surgery to correct.
A 2004 study found that the "use of a cotton-tip applicator to clean the ear seems to be the leading cause of otitis externa in children and should be avoided." Instead, wiping wax away from the ear with a washcloth after a shower almost completely cleans the outer one-third of the ear canal, where earwax is made. In the US between 1990 and 2010, an estimated 263,338 children went to hospital emergency rooms for cotton swab injuries, an average of roughly 13,167 such visits per year.
Environmental impact
Plastic cotton swabs are often flushed down the toilet, increasing the risk of marine pollution. Some manufacturers and retailers have stopped the production and sale of plastic swabs and are only selling biodegradable paper versions.
The European Union instated a ban on the use of plastic-stemmed cotton swabs in 2021. Italy had previously instated a ban in 2019 and Monaco in 2020. England, Scotland, Wales, and the Isle of Man each instated a ban between 2019 and 2021.
See also
Cotton pad
Ear pick
References
External links
American inventions
Cotton
Disposable products
Microbiology equipment
Personal hygiene products
Polish inventions | Cotton swab | [
"Biology"
] | 1,884 | [
"Microbiology equipment"
] |
609,099 | https://en.wikipedia.org/wiki/Gastrin | Gastrin is a peptide hormone that stimulates secretion of gastric acid (HCl) by the parietal cells of the stomach and aids in gastric motility. It is released by G cells in the pyloric antrum of the stomach, duodenum, and the pancreas.
Gastrin binds to cholecystokinin B receptors to stimulate the release of histamines in enterochromaffin-like cells, and it induces the insertion of K+/H+ ATPase pumps into the apical membrane of parietal cells (which in turn increases H+ release into the stomach cavity). Its release is stimulated by peptides in the lumen of the stomach.
Physiology
Genetics
In humans, the GAS gene is located on the long arm of the seventeenth chromosome (17q21).
Synthesis
Gastrin is a linear peptide hormone produced by G cells of the duodenum and in the pyloric antrum of the stomach. It is secreted into the bloodstream. The encoded polypeptide is preprogastrin, which is cleaved by enzymes in posttranslational modification to produce progastrin (an intermediate, inactive precursor) and then gastrin in various forms, primarily the following three:
gastrin-34 ("big gastrin")
gastrin-17 ("little gastrin")
gastrin-14 ("minigastrin")
Also, pentagastrin is an artificially synthesized, five amino acid sequence identical to the last five amino acid sequence at the C-terminus end of gastrin.
The numbers refer to the amino acid count.
Release
Gastrin is released in response to certain stimuli. These include:
stomach antrum distension
vagal stimulation (mediated by the neurocrine bombesin, or GRP in humans)
the presence of partially digested proteins, especially amino acids, in the stomach. Aromatic amino acids are particularly powerful stimuli for gastrin release.
hypercalcemia (via calcium-sensing receptors)
Gastrin release is inhibited by:
the presence of acid (primarily the secreted HCl) in the stomach (a case of negative feedback)
somatostatin also inhibits the release of gastrin, along with secretin, GIP (gastroinhibitory peptide), VIP (vasoactive intestinal peptide), glucagon and calcitonin.
Function
The presence of gastrin stimulates parietal cells of the stomach to secrete hydrochloric acid (HCl)/gastric acid. This is done both directly on the parietal cell and indirectly via binding onto CCK2/gastrin receptors on ECL cells in the stomach, which respond by releasing histamine, which in turn acts in a paracrine manner on parietal cells stimulating them to secrete H+ ions. This is the major stimulus for acid secretion by parietal cells.
Along with the above-mentioned function, gastrin has been shown to have additional functions as well:
Stimulates parietal cell maturation and fundal growth.
Causes chief cells to secrete pepsinogen, the zymogen (inactive) form of the digestive enzyme pepsin.
Increases antral muscle mobility and promotes stomach contractions.
Strengthens antral contractions against the pylorus, and relaxes the pyloric sphincter, which increases the rate of gastric emptying.
Plays a role in the relaxation of the ileocecal valve.
Induces pancreatic secretions and gallbladder emptying.
May impact lower esophageal sphincter (LES) tone, causing it to contract, although pentagastrin, rather than endogenous gastrin, may be the cause.
Gastrin contributes to the gastrocolic reflex.
Factors influencing secretion
Factors influencing secretion of gastrin can be divided into 2 categories:
Physiologic
Gastric lumen
Stimulatory factors (i.e. during the gastric phase): dietary protein and amino acids (meat), hypercalcemia.
Inhibitory factor: acidity (pH below 3) - a negative feedback mechanism, exerted via the release of somatostatin from δ cells in the stomach, which inhibits gastrin and histamine release.
Paracrine
Stimulatory factor: bombesin or gastrin-releasing peptide (GRP)
Inhibitory factor: somatostatin - acts on somatostatin-2 receptors on G cells in a paracrine manner via local diffusion in the intercellular spaces, but also systemically through its release into the local mucosal blood circulation; it also inhibits acid secretion by acting on parietal cells.
Nervous
Stimulatory factors: Beta-adrenergic agents, cholinergic agents, gastrin-releasing peptide (GRP)
Inhibitory factor: Enterogastric reflex
Circulation
Stimulatory factor: gastrin
Inhibitory factors: gastric inhibitory peptide (GIP), secretin, somatostatin, glucagon, calcitonin
Pathophysiologic
Paraneoplastic
Gastrinoma paraneoplastic oversecretion (see Role in disease)
Role in disease
In the Zollinger–Ellison syndrome, gastrin is produced at excessive levels, often by a gastrinoma (a gastrin-producing tumor, mostly benign) of the duodenum or the pancreas. To investigate for hypergastrinemia (high blood levels of gastrin), a "pentagastrin test" can be performed.
In autoimmune gastritis, the immune system attacks the parietal cells, leading to hypochlorhydria (low stomach acid secretion). This results in an elevated gastrin level in an attempt to compensate for the increased pH in the stomach. Eventually, all the parietal cells are lost and achlorhydria results, leading to a loss of negative feedback on gastrin secretion. Plasma gastrin concentration is elevated in virtually all individuals with mucolipidosis type IV (mean 1507 pg/mL; range 400-4100 pg/mL; normal 0-200 pg/mL) secondary to a constitutive achlorhydria. This finding facilitates the diagnosis of patients with this neurogenetic disorder. Additionally, elevated gastrin levels may be present in chronic gastritis resulting from H. pylori infection.
History
Its existence was first suggested in 1905 by the British physiologist John Sydney Edkins, and gastrins were isolated in 1964 by Hilda Tracy and Roderic Alfred Gregory at the University of Liverpool. In 1964 the structure of gastrin was determined.
References
Further reading
External links
Overview at colostate.edu
Peptide hormones
Gastric hormones
Digestive system
Cholecystokinin agonists | Gastrin | [
"Biology"
] | 1,457 | [
"Digestive system",
"Organ systems"
] |
609,125 | https://en.wikipedia.org/wiki/Expression%20%28mathematics%29 | In mathematics, an expression is a written arrangement of symbols following the context-dependent, syntactic conventions of mathematical notation. Symbols can denote numbers, variables, operations, and functions. Other symbols include punctuation marks and brackets, used for grouping where there is not a well-defined order of operations.
Expressions are commonly distinguished from formulas: expressions are a kind of mathematical object, whereas formulas are statements about mathematical objects. This is analogous to natural language, where a noun phrase refers to an object, and a whole sentence refers to a fact. For example, is an expression, while the inequality is a formula.
To evaluate an expression means to find a numerical value equivalent to the expression. Expressions can be evaluated or simplified by replacing operations that appear in them with their result. For example, the expression simplifies to , and evaluates to
An expression is often used to define a function, by taking the variables to be arguments, or inputs, of the function, and assigning the output to be the evaluation of the resulting expression. For example, and define the function that associates to each number its square plus one. An expression with no variables would define a constant function. Usually, two expressions are considered equal or equivalent if they define the same function. Such an equality is called a "semantic equality", that is, both expressions "mean the same thing."
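As a rough illustration of evaluation, simplification, and defining a function from an expression, here is a short sketch using the Python library SymPy; the particular expressions and values are illustrative and not drawn from the text above.

```python
# Minimal sketch with SymPy; the expressions and values are illustrative.
from sympy import symbols, sympify, lambdify

x = symbols("x")

expr = sympify("x**2 + 1")          # parse the written expression into an object
print(expr.subs(x, 3))              # evaluate at x = 3 -> 10
print(sympify("2*x + 3*x"))         # like terms are combined -> 5*x

f = lambdify(x, expr)               # use the expression to define a function
print(f(3))                         # 10
```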
History
Early written mathematics
The earliest written mathematics likely began with tally marks, where each mark represented one unit, carved into wood or stone. An example of early counting is the Ishango bone, found near the Nile and dating back over 20,000 years ago, which is thought to show a six-month lunar calendar. Ancient Egypt developed a symbolic system using hieroglyphics, assigning symbols for powers of ten and using addition and subtraction symbols resembling legs in motion. This system, recorded in texts like the Rhind Mathematical Papyrus (c. 2000–1800 BC), influenced other Mediterranean cultures. In Mesopotamia, a similar system evolved, with numbers written in a base-60 (sexagesimal) format on clay tablets written in Cuneiform, a technique originating with the Sumerians around 3000 BC. This base-60 system persists today in measuring time and angles.
Syncopated stage
The "syncopated" stage of mathematics introduced symbolic abbreviations for commonly used operations and quantities, marking a shift from purely geometric reasoning. Ancient Greek mathematics, largely geometric in nature, drew on Egyptian numerical systems (especially Attic numerals), with little interest in algebraic symbols, until the arrival of Diophantus of Alexandria, who pioneered a form of syncopated algebra in his Arithmetica, which introduced symbolic manipulation of expressions. His notation represented unknowns and powers symbolically, but without modern symbols for relations (such as equality or inequality) or exponents. An unknown number was called . The square of was ; the cube was ; the fourth power was ; the fifth power was ; and meant to subtract everything on the right from the left. So for example, what would be written in modern notation as:
Would be written in Diophantus's syncopated notation as:
In the 7th century, Brahmagupta used different colours to represent the unknowns in algebraic equations in the Brāhmasphuṭasiddhānta. Greek and other ancient mathematical advances, were often trapped in cycles of bursts of creativity, followed by long periods of stagnation, but this began to change as knowledge spread in the early modern period.
Symbolic stage and early arithmetic
The transition to fully symbolic algebra began with Ibn al-Banna' al-Marrakushi (1256–1321) and Abū al-Ḥasan ibn ʿAlī al-Qalaṣādī, (1412–1482) who introduced symbols for operations using Arabic characters. The plus sign (+) appeared around 1351 with Nicole Oresme, likely derived from the Latin et (meaning "and"), while the minus sign (−) was first used in 1489 by Johannes Widmann. Luca Pacioli included these symbols in his works, though much was based on earlier contributions by Piero della Francesca. The radical symbol (√) for square root was introduced by Christoph Rudolff in the 1500s, and parentheses for precedence by Niccolò Tartaglia in 1556. François Viète’s New Algebra (1591) formalized modern symbolic manipulation. The multiplication sign (×) was first used by William Oughtred and the division sign (÷) by Johann Rahn.
René Descartes further advanced algebraic symbolism in La Géométrie (1637), where he introduced the use of letters at the end of the alphabet (x, y, z) for variables, along with the Cartesian coordinate system, which bridged algebra and geometry. Isaac Newton and Gottfried Wilhelm Leibniz independently developed calculus in the late 17th century, with Leibniz's notation becoming the standard.
Variables and evaluation
In elementary algebra, a variable in an expression is a letter that represents a number whose value may change. To evaluate an expression with a variable means to find the value of the expression when the variable is assigned a given number. Expressions can be evaluated or simplified by replacing operations that appear in them with their result, or by combining like-terms.
For example, take the expression ; it can be evaluated at in the following steps:
, (replace x with 3)
(use definition of exponent)
(simplify)
A term is a constant or the product of a constant and one or more variables. Some examples include The constant of the product is called the coefficient. Terms that are either constants or have the same variables raised to the same powers are called like terms. If there are like terms in an expression, one can simplify the expression by combining the like terms. One adds the coefficients and keeps the same variable.
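A minimal sketch of combining like terms, assuming a toy representation in which each term is stored as a coefficient keyed on its variable and power (the representation and the sample terms are my own, purely illustrative):

```python
# Combine like terms by summing coefficients that share the same (variable, power) key.
# The sample terms stand for 3*x**2 + 5*x - 2*x**2 + 4 (illustrative only).
from collections import defaultdict

terms = [(3, ("x", 2)), (5, ("x", 1)), (-2, ("x", 2)), (4, ("", 0))]

combined = defaultdict(int)
for coeff, key in terms:
    combined[key] += coeff          # like terms share a key, so their coefficients add

print(dict(combined))               # {('x', 2): 1, ('x', 1): 5, ('', 0): 4}, i.e. x**2 + 5*x + 4
```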
Any variable can be classified as being either a free variable or a bound variable. For a given combination of values for the free variables, an expression may be evaluated, although for some combinations of values of the free variables, the value of the expression may be undefined. Thus an expression represents an operation over constants and free variables and whose output is the resulting value of the expression.
For a non-formalized language, that is, in most mathematical texts outside of mathematical logic, for an individual expression it is not always possible to identify which variables are free and bound. For example, in , depending on the context, the variable can be free and bound, or vice-versa, but they cannot both be free. Determining which value is assumed to be free depends on context and semantics.
Equivalence
An expression is often used to define a function, or denote compositions of functions, by taking the variables to be arguments, or inputs, of the function, and assigning the output to be the evaluation of the resulting expression. For example, and define the function that associates to each number its square plus one. An expression with no variables would define a constant function. In this way, two expressions are said to be equivalent if, for each combination of values for the free variables, they have the same output, i.e., they represent the same function. The equivalence between two expressions is called an identity and is sometimes denoted with
For example, in the expression the variable is bound, and the variable is free. This expression is equivalent to the simpler expression ; that is The value for is 36, which can be denoted
Polynomial evaluation
A polynomial consists of variables and coefficients, that involve only the operations of addition, subtraction, multiplication and exponentiation to nonnegative integer powers, and has a finite number of terms. The problem of polynomial evaluation arises frequently in practice. In computational geometry, polynomials are used to compute function approximations using Taylor polynomials. In cryptography and hash tables, polynomials are used to compute k-independent hashing.
In the former case, polynomials are evaluated using floating-point arithmetic, which is not exact. Thus different schemes for the evaluation will, in general, give slightly different answers. In the latter case, the polynomials are usually evaluated in a finite field, in which case the answers are always exact.
For evaluating the univariate polynomial the most naive method would use multiplications to compute , use multiplications to compute and so on for a total of multiplications and additions. Using better methods, such as Horner's rule, this can be reduced to multiplications and additions. If some preprocessing is allowed, even more savings are possible.
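A sketch comparing a naive evaluation with Horner's rule for a univariate polynomial given by its coefficient list (the function names and example coefficients are my own):

```python
# Evaluate p(x) = a_0 + a_1*x + ... + a_n*x**n from its list of coefficients.
def eval_naive(coeffs, x):
    # Recomputing each power from scratch costs on the order of n**2 multiplications.
    return sum(a * x**k for k, a in enumerate(coeffs))

def eval_horner(coeffs, x):
    # Horner's rule: n multiplications and n additions.
    result = 0
    for a in reversed(coeffs):
        result = result * x + a
    return result

coeffs = [5, -3, 0, 2]                       # represents 5 - 3x + 2x**3 (illustrative)
assert eval_naive(coeffs, 4) == eval_horner(coeffs, 4) == 121
```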
Computation
A computation is any type of arithmetic or non-arithmetic calculation that is "well-defined". The notion that mathematical statements should be 'well-defined' had been argued by mathematicians since at least the 1600s, but agreement on a suitable definition proved elusive. A candidate definition was proposed independently by several mathematicians in the 1930s. The best-known variant was formalised by the mathematician Alan Turing, who defined a well-defined statement or calculation as any statement that could be expressed in terms of the initialisation parameters of a Turing machine. Turing's definition apportioned "well-definedness" to a very large class of mathematical statements, including all well-formed algebraic statements, and all statements written in modern computer programming languages.
Despite the widespread uptake of this definition, there are some mathematical concepts that have no well-defined characterisation under this definition. This includes the halting problem and the busy beaver game. It remains an open question as to whether there exists a more powerful definition of 'well-defined' that is able to capture both computable and 'non-computable' statements. All statements that can be expressed in modern programming languages, including C++, Python, and Java, are well-defined.
Common examples of computation are basic arithmetic and the execution of computer algorithms. A calculation is a deliberate mathematical process that transforms one or more inputs into one or more outputs or results. For example, multiplying 7 by 6 is a simple algorithmic calculation. Extracting the square root or the cube root of a number using mathematical models is a more complex algorithmic calculation.
Rewriting
Expressions can be computed by means of an evaluation strategy. To illustrate, executing a function call f(a,b) may first evaluate the arguments a and b, store the results in references or memory locations ref_a and ref_b, then evaluate the function's body with those references passed in. This gives the function the ability to look up the original argument values passed in through dereferencing the parameters (some languages use specific operators to perform this), to modify them via assignment as if they were local variables, and to return values via the references. This is the call-by-reference evaluation strategy. Evaluation strategy is part of the semantics of the programming language definition. Some languages, such as PureScript, have variants with different evaluation strategies. Some declarative languages, such as Datalog, support multiple evaluation strategies. Some languages define a calling convention.
In rewriting, a reduction strategy or rewriting strategy is a relation specifying a rewrite for each object or term, compatible with a given reduction relation. A rewriting strategy specifies, out of all the reducible subterms (redexes), which one should be reduced (contracted) within a term. One of the most common systems involves lambda calculus.
Well-defined expressions
The language of mathematics exhibits a kind of grammar (called formal grammar) about how expressions may be written. There are two considerations for well-definedness of mathematical expressions, syntax and semantics. Syntax is concerned with the rules used for constructing, or transforming the symbols of an expression without regard to any interpretation or meaning given to them. Expressions that are syntactically correct are called well-formed. Semantics is concerned with the meaning of these well-formed expressions. Expressions that are semantically correct are called well-defined.
Well-formed
The syntax of mathematical expressions can be described somewhat informally as follows: the allowed operators must have the correct number of inputs in the correct places (usually written with infix notation), the sub-expressions that make up these inputs must be well-formed themselves, have a clear order of operations, etc. Strings of symbols that conform to the rules of syntax are called well-formed, and those that are not well-formed are called ill-formed and do not constitute mathematical expressions.
For example, in arithmetic, the expression 1 + 2 × 3 is well-formed, but
.
is not.
However, being well-formed is not enough to be considered well-defined. For example in arithmetic, the expression is well-formed, but it is not well-defined. (See Division by zero). Such expressions are called undefined.
Well-defined
Semantics is the study of meaning. Formal semantics is about attaching meaning to expressions. An expression that defines a unique value or meaning is said to be well-defined. Otherwise, the expression is said to be ill defined or ambiguous. In general the meaning of expressions is not limited to designating values; for instance, an expression might designate a condition, or an equation that is to be solved, or it can be viewed as an object in its own right that can be manipulated according to certain rules. Certain expressions that designate a value simultaneously express a condition that is assumed to hold, for instance those involving the operator to designate an internal direct sum.
In algebra, an expression may be used to designate a value, which might depend on values assigned to variables occurring in the expression. The determination of this value depends on the semantics attached to the symbols of the expression. The choice of semantics depends on the context of the expression. The same syntactic expression 1 + 2 × 3 can have different values (mathematically 7, but also 9), depending on the order of operations implied by the context (See also Operations § Calculators).
For real numbers, the product is unambiguous because ; hence the notation is said to be well defined. This property, also known as associativity of multiplication, guarantees the result does not depend on the sequence of multiplications; therefore, a specification of the sequence can be omitted. The subtraction operation is non-associative; despite that, there is a convention that is shorthand for , thus it is considered "well-defined". On the other hand, Division is non-associative, and in the case of , parenthesization conventions are not well established; therefore, this expression is often considered ill-defined.
Unlike with functions, notational ambiguities can be overcome by means of additional definitions (e.g., rules of precedence, associativity of the operator). For example, in the programming language C, the operator - for subtraction is left-to-right-associative, which means that a-b-c is defined as (a-b)-c, and the operator = for assignment is right-to-left-associative, which means that a=b=c is defined as a=(b=c). In the programming language APL there is only one rule: from right to left – but parentheses first.
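These conventions can be inspected concretely; for example, Python's standard ast module exposes the grouping implied by that language's precedence and associativity rules (the snippet is a sketch, not part of the text above):

```python
# Inspect how precedence and associativity group an expression in one language (Python).
import ast

# Subtraction is left-associative: "a - b - c" parses with (a - b) as the left operand.
tree = ast.parse("a - b - c", mode="eval")
print(ast.dump(tree.body))

# Precedence: the multiplication binds tighter, so 1 + 2 * 3 evaluates to 7.
print(1 + 2 * 3)
```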
Formal definition
The term 'expression' is part of the language of mathematics, that is to say, it is not defined within mathematics, but taken as a primitive part of the language. To attempt to define the term would not be doing mathematics, but rather, one would be engaging in a kind of metamathematics (the metalanguage of mathematics), usually mathematical logic. Within mathematical logic, mathematics is usually described as a kind of formal language, and a well-formed expression can be defined recursively as follows:
The alphabet consists of:
A set of individual constants: Symbols representing fixed objects in the domain of discourse, such as numerals (1, 2.5, 1/7, ...), sets (, ...), truth values (T or F), etc.
A set of individual variables: A countably infinite amount of symbols representing variables used for representing an unspecified object in the domain. (Usually letters like , or )
A set of operations: Function symbols representing operations that can be performed on elements over the domain, like addition (+), multiplication (×), or set operations like union (∪), or intersection (∩). (Functions can be understood as unary operations)
Brackets ( )
With this alphabet, the recursive rules for forming a well-formed expression (WFE) are as follows:
Any constant or variable as defined are the atomic expressions, the simplest well-formed expressions (WFE's). For instance, the constant or the variable are syntactically correct expressions.
Let be a metavariable for any n-ary operation over the domain, and let be metavariables for any WFE's.
Then is also well-formed. For the most often used operations, more convenient notations (like infix notation) have been developed over the centuries.
For instance, if the domain of discourse is the real numbers, can denote the binary operation +, then is well-formed. Or can be the unary operation so is well-formed.
Brackets are initially around each non-atomic expression, but they can be deleted in cases where there is a defined order of operations, or where order doesn't matter (i.e. where operations are associative).
A well-formed expression can be thought as a syntax tree. The leaf nodes are always atomic expressions. Operations and have exactly two child nodes, while operations , and have exactly one. There are countably infinitely many WFE's, however, each WFE has a finite number of nodes.
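A small sketch of this recursive view, representing each well-formed expression as a syntax tree built from nested tuples whose first entry names the operation (the encoding is my own, chosen only for illustration):

```python
# A WFE as a syntax tree: atoms are numbers or variable names; non-atomic nodes
# are tuples of the form (operation, operand, ...).
import math

OPS = {"+": lambda a, b: a + b,
       "*": lambda a, b: a * b,
       "sqrt": math.sqrt}

def evaluate(node, env):
    if isinstance(node, (int, float)):       # atomic constant
        return node
    if isinstance(node, str):                # atomic variable: look up its assigned value
        return env[node]
    op, *args = node                         # non-atomic: apply the operation to its children
    return OPS[op](*(evaluate(a, env) for a in args))

expr = ("+", ("*", "x", "x"), ("sqrt", 4))   # (x * x) + sqrt(4)
print(evaluate(expr, {"x": 3}))              # 11.0
```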
Lambda calculus
Formal languages allow formalizing the concept of well-formed expressions.
In the 1930s, a new type of expression, the lambda expression, was introduced by Alonzo Church and Stephen Kleene for formalizing functions and their evaluation. The lambda operators (lambda abstraction and function application) form the basis for lambda calculus, a formal system used in mathematical logic and programming language theory.
The equivalence of two lambda expressions is undecidable (but see unification (computer science)). This is also the case for the expressions representing real numbers, which are built from the integers by using the arithmetical operations, the logarithm and the exponential (Richardson's theorem).
Types of expressions
Algebraic expression
An algebraic expression is an expression built up from algebraic constants, variables, and the algebraic operations (addition, subtraction, multiplication, division and exponentiation by a rational number). For example, is an algebraic expression. Since taking the square root is the same as raising to the power , the following is also an algebraic expression:
See also: Algebraic equation and Algebraic closure
Polynomial expression
A polynomial expression is an expression built with scalars (numbers of elements of some field), indeterminates, and the operators of addition, multiplication, and exponentiation to nonnegative integer powers; for example
Using associativity, commutativity and distributivity, every polynomial expression is equivalent to a polynomial, that is, an expression that is a linear combination of products of integer powers of the indeterminates. For example, the above polynomial expression is equivalent to (denotes the same polynomial as) its expanded form.
Many authors do not distinguish polynomials and polynomial expressions. In this case, the expression of a polynomial expression as a linear combination is called the canonical form, normal form, or expanded form of the polynomial.
Computational expression
In computer science, an expression is a syntactic entity in a programming language that may be evaluated to determine its value or fail to terminate, in which case the expression is undefined. It is a combination of one or more constants, variables, functions, and operators that the programming language interprets (according to its particular rules of precedence and of association) and computes to produce ("to return", in a stateful environment) another value. This process, for mathematical expressions, is called evaluation.
In simple settings, the resulting value is usually one of various primitive types, such as string, Boolean, or numerical (such as integer, floating-point, or complex).
In computer algebra, formulas are viewed as expressions that can be evaluated as a Boolean, depending on the values that are given to the variables occurring in the expressions. For example takes the value false if is given a value less than 1, and the value true otherwise.
Expressions are often contrasted with statements—syntactic entities that have no value (an instruction).
Except for numbers and variables, every mathematical expression may be viewed as the symbol of an operator followed by a sequence of operands. In computer algebra software, the expressions are usually represented in this way. This representation is very flexible, and many things that seem not to be mathematical expressions at first glance, may be represented and manipulated as such. For example, an equation is an expression with "=" as an operator, a matrix may be represented as an expression with "matrix" as an operator and its rows as operands.
See: Computer algebra expression
Logical expression
In mathematical logic, a "logical expression" can refer to either terms or formulas. A term denotes a mathematical object while a formula denotes a mathematical fact. In particular, terms appear as components of a formula.
A first-order term is recursively constructed from constant symbols, variables, and function symbols.
An expression formed by applying a predicate symbol to an appropriate number of terms is called an atomic formula, which evaluates to true or false in bivalent logics, given an interpretation.
For example, is a term built from the constant 1, the variable , and the binary function symbols and ; it is part of the atomic formula which evaluates to true for each real-numbered value of .
Formal expression
A formal expression is a kind of string of symbols, created by the same production rules as standard expressions; however, it is used without regard to the meaning of the expression. In this way, two formal expressions are considered equal only if they are syntactically equal, that is, if they are the exact same expression. For instance, the formal expressions "2" and "1+1" are not equal.
See also
Analytic expression
Closed-form expression
Formal calculation
Functional programming
Infinite expression
Number sentence
Rewriting
Signature (logic)
Notes
References
Works Cited
Abstract algebra
Logical expressions
Elementary algebra | Expression (mathematics) | [
"Mathematics"
] | 4,613 | [
"Algebra",
"Mathematical logic",
"Elementary algebra",
"Elementary mathematics",
"Logical expressions",
"Abstract algebra"
] |
609,147 | https://en.wikipedia.org/wiki/Transmission%20%28mechanical%20device%29 | A transmission (also called a gearbox) is a mechanical device which uses a gear set—two or more gears working together—to change the speed, direction of rotation, or torque multiplication/reduction in a machine.
Transmissions can have a single fixed-gear ratio, multiple distinct gear ratios, or continuously variable ratios. Variable-ratio transmissions are used in all sorts of machinery, especially vehicles.
Applications
Early uses
Early transmissions included the right-angle drives and other gearing in windmills, horse-powered devices, and steam-powered devices. Applications of these devices included pumps, mills and hoists.
Bicycles
Bicycles traditionally have used hub gear or Derailleur gear transmissions, but there are other more recent design innovations.
Automobiles
Since the torque and power output of an internal combustion engine varies with its rpm, automobiles powered by ICEs require multiple gear ratios to keep the engine within its power band to produce optimal power, fuel efficiency, and smooth operation. Multiple gear ratios are also needed to provide sufficient acceleration and velocity for safe and reliable operation at modern highway speeds. ICEs typically operate over a range of approximately 600–7000 rpm, while the vehicle's range of speeds requires the wheels to rotate in the range of 0–1800 rpm.
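A back-of-the-envelope sketch of the arithmetic involved; the gear ratios, final-drive ratio, and engine speed below are illustrative values, not figures for any particular vehicle:

```python
# Wheel speed for a given engine speed: wheel_rpm = engine_rpm / (gear_ratio * final_drive).
# All numbers are illustrative.
GEAR_RATIOS = {1: 3.5, 2: 2.0, 3: 1.3, 4: 1.0, 5: 0.8}
FINAL_DRIVE = 3.9

def wheel_rpm(engine_rpm, gear):
    return engine_rpm / (GEAR_RATIOS[gear] * FINAL_DRIVE)

# With the engine held near 3000 rpm, each higher gear trades torque multiplication
# for a higher road speed.
for gear in GEAR_RATIOS:
    print(gear, round(wheel_rpm(3000, gear)))
```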
In the early mass-produced automobiles, the standard transmission design was manual: the combination of gears was selected by the driver through a lever (the gear stick) that displaced gears and gear groups along their axes. Starting in 1939, cars using various types of automatic transmission became available in the US market. These vehicles used the engine's own power to change the effective gear ratio depending on the load so as to keep the engine running close to its optimal rotation speed. Automatic transmissions now are used in more than two thirds of cars globally, and on almost all new cars in the US.
Most currently-produced passenger cars with gasoline or diesel engines use transmissions with 4–10 forward gear ratios (also called speeds) and one reverse gear ratio. Electric vehicles typically use a fixed-gear or two-speed transmission with no reverse gear ratio.
Motorcycles
Fixed-ratio
The simplest transmissions used a fixed ratio to provide either a gear reduction or increase in speed, sometimes in conjunction with a change in the orientation of the output shaft. Examples of such transmissions are used in helicopters and wind turbines. In the case of a wind turbine, the first stage of the gearbox is usually a planetary gear, to minimize the size while withstanding the high torque inputs from the turbine.
Multi-ratio
Many transmissions – especially for transportation applications – have multiple gears that are used to change the ratio of input speed (e.g. engine rpm) to the output speed (e.g. the speed of a car) as required for a given situation. Gear (ratio) selection can be manual, semi-automatic, or automatic.
Manual
A manual transmission requires the driver to manually select the gears by operating a gear stick and clutch (which is usually a foot pedal for cars or a hand lever for motorcycles).
Most transmissions in modern cars use synchromesh to synchronise the speeds of the input and output shafts. However, prior to the 1950s, most cars used non-synchronous transmissions.
Sequential manual
A sequential manual transmission is a type of non-synchronous transmission used mostly for motorcycles and racing cars. It produces faster shift times than synchronized manual transmissions, through the use of dog clutches rather than synchromesh. Sequential manual transmissions also restrict the driver to selecting either the next or previous gear, in a successive order.
Semi-automatic
A semi-automatic transmission is where some of the operation is automated (often the actuation of the clutch), but the driver's input is required to move off from a standstill or to change gears.
Automated manual / clutchless manual
An automated manual transmission (AMT) is essentially a conventional manual transmission that uses automatic actuation to operate the clutch and/or shift between gears.
Many early versions of these transmissions were semi-automatic in operation, such as Autostick, which automatically control only the clutch, but still require the driver's input to initiate gear changes. Some of these systems are also referred to as clutchless manual systems. Modern versions of these systems that are fully automatic in operation, such as Selespeed and Easytronic, can control both the clutch operation and the gear shifts automatically, without any input from the driver.
Automatic
An automatic transmission does not require any input from the driver to change forward gears under normal driving conditions.
Hydraulic automatic
The most common design of automatic transmissions is the hydraulic automatic, which typically uses planetary gearsets that are operated using hydraulics. The transmission is connected to the engine via a torque converter (or a fluid coupling prior to the 1960s), instead of the friction clutch used by most manual transmissions and dual-clutch transmissions.
Dual-clutch (DCT)
A dual-clutch transmission (DCT) uses two separate clutches for odd and even gear sets. The design is often similar to two separate manual transmissions with their respective clutches contained within one housing, and working as one unit. In car and truck applications, the DCT functions as an automatic transmission, requiring no driver input to change gears.
Continuously-variable ratio
A continuously variable transmission (CVT) can change seamlessly through a continuous range of gear ratios. This contrasts with other transmissions that provide a limited number of gear ratios in fixed steps. The flexibility of a CVT with suitable control may allow the engine to operate at a constant RPM while the vehicle moves at varying speeds.
CVTs are used in cars, tractors, side-by-sides, motor scooters, snowmobiles, bicycles, and earthmoving equipment.
The most common type of CVT uses two pulleys connected by a belt or chain; however, several other designs have also been used at times.
Noise and vibration
Gearboxes are often a major source of noise and vibration in vehicles and stationary machinery. Higher sound levels are generally emitted when the vehicle is engaged in lower gears. The design life of the lower ratio gears is shorter, so cheaper gears may be used, which tend to generate more noise due to a smaller overlap ratio, lower mesh stiffness, and other factors compared with the helical gears used for the high ratios. This fact has been used to analyze vehicle-generated sound since the late 1960s, and has been incorporated into the simulation of urban roadway noise and corresponding design of urban noise barriers along roadways.
See also
Bicycle gearing
Direct-drive mechanism
List of auto parts
Transfer case
References
Mechanisms (engineering) | Transmission (mechanical device) | [
"Physics",
"Engineering"
] | 1,326 | [
"Mechanical power transmission",
"Mechanics",
"Mechanical engineering",
"Mechanisms (engineering)"
] |
609,152 | https://en.wikipedia.org/wiki/Signal%20transmission | In telecommunications, transmission (sometimes abbreviated as "TX") is the process of sending or propagating an analog or digital signal via a medium that is wired, wireless, or fiber-optic.
Transmission system technologies typically refer to physical layer protocol duties such as modulation, demodulation, line coding, equalization, error control, bit synchronization and multiplexing, but it may also involve higher-layer protocol duties, for example, digitizing an analog signal, and data compression.
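As one small, self-contained illustration of a physical-layer duty, here is a sketch of Manchester line coding; the bit-to-transition mapping follows one common convention, and other standards invert it:

```python
# Manchester line coding: each bit becomes a mid-bit transition, which also carries
# the clock and so helps bit synchronization. Convention used here (others invert it):
#   1 -> low-to-high (0, 1),  0 -> high-to-low (1, 0).
def manchester_encode(bits):
    signal = []
    for b in bits:
        signal.extend((0, 1) if b else (1, 0))
    return signal

print(manchester_encode([1, 0, 1, 1]))   # [0, 1, 1, 0, 0, 1, 0, 1]
```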
Transmission of a digital message, or of a digitized analog signal, is known as data transmission.
Examples of transmission are the sending of signals with limited duration, for example, a block or packet of data, a phone call, or an email.
See also
Radio transmitter
References
Telecommunications engineering | Signal transmission | [
"Engineering"
] | 159 | [
"Electrical engineering",
"Telecommunications engineering"
] |
609,188 | https://en.wikipedia.org/wiki/Kinemage | A kinemage (short for kinetic image) is an interactive graphic scientific illustration. It often is used to visualize molecules, especially proteins although it can also represent other types of 3-dimensional data (such as geometric figures, social networks, or tetrahedra of RNA base composition). The kinemage system is designed to optimize ease of use, interactive performance, and the perception and communication of detailed 3D information. The kinemage information is stored in a text file, human- and machine-readable, that describes the hierarchy of display objects and their properties, and includes optional explanatory text. The kinemage format is a defined chemical MIME type of 'chemical/x-kinemage' with the file extension '.kin'.
Early history
Kinemages were first developed by David Richardson at Duke University School of Medicine, for the Protein Society's journal Protein Science that premiered in January 1992. For its first 5 years (1992–1996), each issue of Protein Science included a supplement on floppy disk of interactive, kinemage 3D computer graphics to illustrate many of the articles, plus the Mage software (cross-platform, free, open-source) to display them; kinemage supplementary material is still available on the journal web site. Mage and RasMol were the first widely used macromolecular graphics programs to support interactive display on personal computers. Kinemages are used for teaching, and for textbook supplements, individual exploration, and analysis of macromolecular structures.
Research uses
More recently, with the availability of a much wider variety of other molecular graphics tools, presentation use of kinemages has been overtaken by a wide variety of research uses, concomitant with new display features and with the development of software that produces kinemage-format output from other types of molecular calculations. All-atom contact analysis adds and optimizes explicit hydrogen atoms, and then uses patches of dot surface to display the hydrogen bond, van der Waals, and steric clash interactions between atoms. The results can be used visually (in kinemages) and quantitatively to analyze the detailed interactions between molecular surfaces, most extensively for the purpose of validating and improving the molecular models from experimental x-ray crystallography data. Both Mage and KiNG (see below) have been enhanced for kinemage display of data in higher than 3 dimensions (moving between views in various 3-D projections, coloring and selecting candidate clusters of datapoints, and switching to a parallel coordinates representation), used for instance for defining clusters of favorable RNA backbone conformations in the 7-dimensional space of backbone dihedral angles between one ribose and the next.
Online web use
KiNG is an open-source kinemage viewer, written in the programming language Java by Ian Davis and Vincent Chen, that can work interactively either standalone on a user machine with no network connection, or as a web service in a web page. The interactive nature of kinemages is their primary purpose and attribute. To appreciate their nature, the demonstration KiNG in browser has two examples that can be moved around in 3D, plus instructions for how to embed a kinemage on a web page. The figure below shows KiNG being used to remodel a lysine sidechain in a high-resolution crystal structure. KiNG is one of the viewers provided on each structure page at the Protein Data Bank site, and displays validation results in 3D on the MolProbity site. Kinemages can also be shown in immersive virtual reality systems, with the open-source KinImmerse software. All of the kinemage display and all-atom contact software is available free and open-source on the kinemage web site.
See also
Molecular graphics
Ribbon diagram
Comparison of software for molecular mechanics modeling
References
External links
Duke University original, with examples and software
kinemage example in a browser
RCSB Protein Data Bank
MolProbity: structure validation, with KiNG on-line kinemages
Chemistry software | Kinemage | [
"Chemistry"
] | 823 | [
"Chemistry software",
"nan"
] |
609,194 | https://en.wikipedia.org/wiki/Personality%20test | A personality test is a method of assessing human personality constructs. Most personality assessment instruments (despite being loosely referred to as "personality tests") are in fact introspective (i.e., subjective) self-report questionnaire (Q-data, in terms of LOTS data) measures or reports from life records (L-data) such as rating scales. Attempts to construct actual performance tests of personality have been very limited even though Raymond Cattell with his colleague Frank Warburton compiled a list of over 2000 separate objective tests that could be used in constructing objective personality tests. One exception, however, was the Objective-Analytic Test Battery, a performance test designed to quantitatively measure 10 factor-analytically discerned personality trait dimensions. A major problem with both L-data and Q-data methods is that because of item transparency, rating scales, and self-report questionnaires are highly susceptible to motivational and response distortion ranging from lack of adequate self-insight (or biased perceptions of others) to downright dissimulation (faking good/faking bad) depending on the reason/motivation for the assessment being undertaken.
The first personality assessment measures were developed in the 1920s and were intended to ease the process of personnel selection, particularly in the armed forces. Since these early efforts, a wide variety of personality scales and questionnaires have been developed, including the Minnesota Multiphasic Personality Inventory (MMPI), the Sixteen Personality Factor Questionnaire (16PF), the Comrey Personality Scales (CPS), among many others. Although popular especially among personnel consultants, the Myers–Briggs Type Indicator (MBTI) has numerous psychometric deficiencies. More recently, a number of instruments based on the Five Factor Model of personality have been constructed such as the Revised NEO Personality Inventory. However, the Big Five and related Five Factor Model have been challenged for accounting for less than two-thirds of the known trait variance in the normal personality sphere alone.
Estimates of how much the personality assessment industry in the US is worth range anywhere from $2 billion to $4 billion a year (as of 2013). Personality assessment is used in a wide range of contexts, including individual and relationship counseling, clinical psychology, forensic psychology, school psychology, career counseling, employment testing, occupational health and safety and customer relationship management.
History
The origins of personality assessment date back to the 18th and 19th centuries, when personality was assessed through phrenology, the measurement of bumps on the human skull, and physiognomy, which assessed personality based on a person's outer appearances. Sir Francis Galton took another approach to assessing personality late in the 19th century. Based on the lexical hypothesis, Galton estimated the number of adjectives that described personality in the English dictionary. Galton's list was eventually refined by Louis Leon Thurstone to 60 words that were commonly used for describing personality at the time. Through factor analyzing responses from 1300 participants, Thurstone was able to reduce this severely restricted pool of 60 adjectives into seven common factors. This procedure of factor analyzing common adjectives was later utilized by Raymond Cattell (7th most highly cited psychologist of the 20th Century—based on the peer-reviewed journal literature), who subsequently utilized a data set of over 4000 affect terms from the English dictionary that eventually resulted in construction of the Sixteen Personality Factor Questionnaire (16PF) which also measured up to eight second-stratum personality factors. Of the many introspective (i.e., subjective) self-report instruments constructed to measure the putative Big Five personality dimensions, perhaps the most popular has been the Revised NEO Personality Inventory (NEO-PI-R) However, the psychometric properties of the NEO-PI-R (including its factor analytic/construct validity) has been severely criticized.
Another early personality instrument was the Woodworth Personal Data Sheet, a self-report inventory developed for World War I and used for the psychiatric screening of new draftees.
Overview
There are many different types of personality assessment measures. The self-report inventory involves administration of many items requiring respondents to introspectively assess their own personality characteristics. This is highly subjective, and because of item transparency, such Q-data measures are highly susceptible to motivational and response distortion. Respondents are required to indicate their level of agreement with each item using a Likert scale or, more accurately, a Likert-type scale. An item on a personality questionnaire, for example, might ask respondents to rate the degree to which they agree with the statement "I talk to a lot of different people at parties" on a scale from 1 ("strongly disagree") to 5 ("strongly agree").
Historically, the most widely used multidimensional personality instrument is the Minnesota Multiphasic Personality Inventory (MMPI), a psychopathology instrument originally designed to assess archaic psychiatric nosology.
In addition to subjective/introspective self-report inventories, there are several other methods for assessing human personality, including observational measures, ratings of others, projective tests (e.g., the TAT and Ink Blots), and actual objective performance tests (T-data).
Topics
Norms
The meaning of personality test scores are difficult to interpret in a direct sense. For this reason substantial effort is made by producers of personality tests to produce norms to provide a comparative basis for interpreting a respondent's test scores. Common formats for these norms include percentile ranks, z scores, sten scores, and other forms of standardized scores.
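A sketch of converting a raw score into two of the norm-referenced scores mentioned above; the norm-group mean and standard deviation are made-up values standing in for a real standardization sample:

```python
# Convert a raw score to a z score and a sten score relative to a norm group.
# The norm mean and SD are made-up; real norms come from a standardization sample.
NORM_MEAN, NORM_SD = 50.0, 10.0

def z_score(raw):
    return (raw - NORM_MEAN) / NORM_SD

def sten_score(raw):
    # Sten ("standard ten") scores have mean 5.5 and SD 2, clipped to the 1-10 range.
    s = round(z_score(raw) * 2 + 5.5)
    return max(1, min(10, s))

print(z_score(65), sten_score(65))       # 1.5 and 8
```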
Test development
A substantial amount of research and thinking has gone into the topic of personality test development. Development of personality tests tends to be an iterative process whereby a test is progressively refined. Test development can proceed on theoretical or statistical grounds. There are three commonly used general strategies: Inductive, Deductive, and Empirical. Scales created today will often incorporate elements of all three methods.
Deductive assessment construction begins by selecting a domain or construct to measure. The construct is thoroughly defined by experts and items are created which fully represent all the attributes of the construct definition. Test items are then selected or eliminated based upon which will result in the strongest internal validity for the scale. Measures created through deductive methodology are equally valid and take significantly less time to construct compared to inductive and empirical measures. The clearly defined and face valid questions that result from this process make them easy for the person taking the assessment to understand. Although subtle items can be created through the deductive process, these measures are often not as capable of detecting lying as other methods of personality assessment construction.
Inductive assessment construction begins with the creation of a multitude of diverse items. The items created for an inductive measure are not intended to represent any theory or construct in particular. Once the items have been created, they are administered to a large group of participants. This allows researchers to analyze natural relationships among the questions and label components of the scale based upon how the questions group together. Several statistical techniques can be used to determine the constructs assessed by the measure. Exploratory Factor Analysis and Confirmatory Factor Analysis are two of the most common data reduction techniques that allow researchers to create scales from responses on the initial items.
The Five Factor Model of personality was developed using this method. Advanced statistical methods include the opportunity to discover previously unidentified or unexpected relationships between items or constructs. It also may allow for the development of subtle items that prevent test takers from knowing what is being measured and may represent the actual structure of a construct better than a pre-developed theory. Criticisms include a vulnerability to finding item relationships that do not apply to a broader population, difficulty identifying what may be measured in each component because of confusing item relationships, or constructs that were not fully addressed by the originally created questions.
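A minimal sketch of the data-reduction step using scikit-learn's FactorAnalysis; the item responses are random placeholders, and a real analysis would involve careful sampling, item screening, and rotation choices:

```python
# Exploratory data reduction over item responses (rows = respondents, columns = items).
# The responses are random placeholders, used only to show the mechanics.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(300, 20))    # 300 people, 20 Likert-type items

fa = FactorAnalysis(n_components=5).fit(responses)
loadings = fa.components_                         # shape (5, 20): how items group onto factors
print(loadings.shape)
```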
Empirically derived personality assessments require statistical techniques. One of the central goals of empirical personality assessment is to create a test that validly discriminates between two distinct dimensions of personality. Empirical tests can take a great deal of time to construct. In order to ensure that the test is measuring what it is purported to measure, psychologists first collect data through self- or observer reports, ideally from a large number of participants.
Self- vs. observer-reports
A personality test can be administered directly to the person being evaluated or to an observer. In a self-report, the individual responds to personality items as they pertain to the person himself/herself. Self-reports are commonly used. In an observer-report, a person responds to the personality items as those items pertain to someone else. To produce the most accurate results, the observer needs to know the individual being evaluated. Combining the scores of a self-report and an observer report can reduce error, providing a more accurate depiction of the person being evaluated. Self- and observer-reports tend to yield similar results, supporting their validity.
Direct observation reports
Direct observation involves a second party directly observing and evaluating someone else. The second party observes how the target of the observation behaves in certain situations (e.g., how a child behaves in a schoolyard during recess). The observations can take place in a natural (e.g., a schoolyard) or artificial setting (social psychology laboratory). Direct observation can help identify job applicants (e.g., work samples) who are likely to be successful or maternal attachment in young children (e.g., Mary Ainsworth's strange situation). The object of the method is to directly observe genuine behaviors in the target. A limitation of direct observation is that the target persons may change their behavior because they know that they are being observed. A second limitation is that some behavioral traits are more difficult to observe (e.g., sincerity) than others (e.g., sociability). A third limitation is that direct observation is more expensive and time-consuming than a number of other methods (e.g., self-report).
Personality tests in the workplace
Though personality tests date back to the early 20th century, it was not until 1988 when it became illegal in the United States for employers to use polygraphs that they began to more broadly utilize personality tests. The idea behind these personality tests is that employers can reduce their turnover rates and prevent economic losses in the form of people prone to thievery, drug abuse, emotional disorders or violence in the workplace. There is a chance that an applicant may fake responses to personality test items in order to make the applicant appear more attractive to the employing organization than the individual actually is.
Personality tests are often part of management consulting services, as having a certification to conduct a particular test is a way for a consultant to offer an additional service and demonstrate their qualifications. The tests are used in narrowing down potential job applicants, as well as which employees are more suitable for promotion. The United States federal government is a notable customer of personality test services outside the private sector with approximately 200 federal agencies, including the military, using personality assessment services.
Despite evidence showing personality tests as one of the least reliable metrics in assessing job applicants, they remain popular as a way to screen candidates.
Test evaluation
There are several criteria for evaluating a personality test. For a test to be successful, users need to be sure that (a) test results are replicable and (b) the test measures what its creators purport it to measure. Fundamentally, a personality test is expected to demonstrate reliability and validity. Reliability refers to the extent to which test scores, if a test were administered to a sample twice within a short period of time, would be similar in both administrations. Test validity refers to evidence that a test measures the construct (e.g., neuroticism) that it is supposed to measure.
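Test-retest reliability is commonly summarized as the correlation between scores from two administrations of the same test. A minimal sketch follows; the score arrays are invented purely for illustration.

```python
import numpy as np

# Total scores for eight respondents on two administrations of the same test,
# taken a few weeks apart. The numbers are illustrative only.
first_administration = np.array([34, 28, 41, 30, 25, 38, 33, 29])
second_administration = np.array([36, 27, 40, 31, 27, 37, 35, 28])

# Test-retest reliability estimated as the Pearson correlation between the two
# score sets; values close to 1 indicate that scores are stable across administrations.
reliability = np.corrcoef(first_administration, second_administration)[0, 1]
print(f"Test-retest reliability: {reliability:.2f}")
```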
Analysis
Respondents' answers to the items provide the data for the analysis. Analysis of data is a long process. Two major theories are used here: classical test theory (CTT), used for the observed score, and item response theory (IRT), "a family of models for persons' responses to items". The two theories focus upon different 'levels' of responses, and researchers are encouraged to use both in order to fully appreciate their results.
Non-response
Firstly, item non-response needs to be addressed. Non-response can be either unit non-response, where a person gave no response to any of the n items, or item non-response, i.e., to an individual question. Unit non-response is generally dealt with by exclusion. Item non-response should be handled by imputation; the method used can vary between test and questionnaire items.
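As a rough illustration of the two cases, the sketch below excludes unit non-response and fills item non-response with the item mean; mean imputation is only one of several possible methods, and the data are invented.

```python
import numpy as np

# Item responses for five respondents on four items; np.nan marks item non-response.
data = np.array([
    [3.0, 4.0, np.nan, 2.0],
    [np.nan, np.nan, np.nan, np.nan],  # unit non-response: no items answered
    [5.0, 3.0, 4.0, 4.0],
    [2.0, np.nan, 3.0, 3.0],
    [4.0, 4.0, 5.0, np.nan],
])

# Unit non-response is typically handled by excluding the respondent.
answered_any = ~np.all(np.isnan(data), axis=1)
data = data[answered_any]

# Item non-response is handled here by imputing the item (column) mean.
item_means = np.nanmean(data, axis=0)
data = np.where(np.isnan(data), item_means, data)
print(np.round(data, 2))
```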
Scoring
The conventional method of scoring items is to assign '0' for an incorrect answer and '1' for a correct answer. When tests have more response options (e.g., multiple-choice items), scoring may assign '0' for an incorrect answer, '1' for a partly correct answer, and '2' for a correct answer. Personality tests can also be scored using a dimensional (normative) or a typological (ipsative) approach. Dimensional approaches such as the Big 5 describe personality as a set of continuous dimensions on which individuals differ. From the item scores, an 'observed' score is computed. This is generally found by summing the un-weighted item scores.
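A minimal sketch of dichotomous scoring and of an un-weighted observed score; the answer key and responses below are invented for illustration.

```python
# Answer key and one respondent's answers for six dichotomously scored items.
key = ["a", "c", "b", "d", "a", "b"]
responses = ["a", "c", "d", "d", "b", "b"]

# '1' for a correct (keyed) response, '0' otherwise.
item_scores = [1 if r == k else 0 for r, k in zip(responses, key)]

# The 'observed' score is the un-weighted sum of the item scores.
observed_score = sum(item_scores)
print(item_scores, observed_score)  # [1, 1, 0, 1, 0, 1] 4
```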
Criticism and controversy
Personality versus social factors
In the 1960s and 1970s some psychologists dismissed the whole idea of personality, considering much behaviour to be context-specific. This idea was supported by the fact that personality often does not predict behaviour in specific contexts. However, more extensive research has shown that when behaviour is aggregated across contexts, personality can be a good predictor of behaviour. Almost all psychologists now acknowledge that both social and individual difference factors (i.e., personality) influence behaviour. The debate is currently more about the relative importance of each of these factors and how these factors interact.
Respondent faking
One problem with self-report measures of personality is that respondents are often able to distort their responses. Intentional faking is the distortion of responses in order to gain a benefit. There are two main types of faking: faking-good, presenting a better self-image, and faking-bad, presenting a worse self-image.
Several meta-analyses show that people are able to substantially change their scores on personality tests when such tests are taken under high-stakes conditions, such as part of a job selection procedure.
Work in experimental settings has also shown that when student samples have been asked to deliberately fake on a personality test, they clearly demonstrated that they are capable of doing so.
A 2007 study of over 5,000 job applicants who completed the same personality test twice, six months apart, found no significant differences between the two sets of results, potentially indicating that people may not significantly distort their responses.
Several strategies have been adopted for reducing and detecting respondent faking. Researchers are looking at the timing of responses on electronically administered tests to assess faking. Items with brief, simple syntax tend to show longer response times when faked than when answered truthfully; longer, more complex, and negatively phrased items do not show differences in timing. One strategy involves providing a warning on the test that methods exist for detecting faking and that detection will result in negative consequences for the respondent (e.g., not being considered for the job). Forced choice (ipsative testing) has three formats: PICK (selecting a best-fitting statement), MOLE (selecting a most and a least fitting statement), and RANK (a most-to-least-alike ranking). The effectiveness of using forced choice to prevent faking is inconclusive.
More recently, Item Response Theory approaches have been adopted with some success in identifying item response profiles that flag fakers. While people can fake in practice, they seldom do so to any significant level. To successfully fake means knowing what the ideal answer would be. Even with something as simple as assertiveness, people who are unassertive and try to appear assertive often endorse the wrong items. This is because unassertive people confuse assertion with aggression, anger, oppositional behavior, etc.
Psychological research
Research on the importance of personality and intelligence in education shows evidence that when others provide the personality rating, rather than providing a self-rating, the outcome is nearly four times more accurate for predicting grades.
Additional applications
The MBTI questionnaire is a popular tool for people to use as part of self-examination or to find a shorthand to describe how they relate to others in society. It is well known from its widespread adoption in hiring practices, but popular among individuals for its focus exclusively on positive traits and "types" with memorable names. Some users of the questionnaire self-identify by their personality type on social media and dating profiles. Due to the publisher's strict copyright enforcement, many assessments come from free websites which provide modified tests based on the framework.
Unscientific personality type quizzes are also a common form of entertainment. In particular Buzzfeed became well known for publishing user-created quizzes, with personality-style tests often based on deciding which pop culture character or celebrity the user most resembles.
Personality tests have also been used as a form of aptitude test in workplace or school environments. A test covering 15 personality traits, including the "Big Five" personality traits, was used in a study to see whether there is a correlation between pilots' personality scores and success in the aviation field. The results showed that pilots with high scores in conscientiousness and self-confidence but low levels of neuroticism had higher passing scores on aviation tests. However, personality tests are not an exact science and cannot accurately predict the "ideal pilot."
Personality tests are also being adapted for use on livestock. Researchers look at whether the animals are bold, fearful, or fearless, and how they interact with other livestock. These tests are designed to predict a wide variety of outcomes and how well the animals will do on the farm. For example, with chickens the test can predict whether they will vocalize their fear. Such tests can help farmers improve the well-being and productivity of their animals.
Dangers
Privacy is one concern: applicants may be forced to reveal private thoughts and feelings through their responses as a condition of employment. Another danger is the illegal discrimination of certain groups under the guise of a personality test.
In addition to the risks of personality test results being used outside of an appropriate context, they can give inaccurate results when conducted incorrectly. In particular, ipsative personality tests are often misused in recruitment and selection, where they are mistakenly treated as if they were normative measures.
Effects of technological advancements on the field
New technological advancements are increasing the possible ways that data can be collected and analyzed, and broadening the types of data that can be used to reliably assess personality. Although qualitative assessments of job applicants' social media have existed for nearly as long as social media itself, many scientific studies have successfully quantified patterns in social media usage into various metrics to assess personality quantitatively. Smart devices, such as smart phones and smart watches, are also now being used to collect data in new ways and in unprecedented quantities. Brain scan technology has also improved dramatically and is being explored as a way to analyze individuals' personalities.
Aside from the advancing data collection methods, data processing methods are also improving rapidly. Strides in big data and pattern recognition in enormous databases (data mining) have allowed for better data analysis than ever before. This also allows for the analysis of large amounts of data that were difficult or impossible to reliably interpret before (for example, from the internet). Other areas of current work include the gamification of personality tests, both to make the tests more interesting and to reduce the effects of psychological phenomena that skew personality assessment data.
This data collection has also brought about machine-created personality tests based on a person's digital footprint. So far, such tests have proven to be fairly accurate, but the technology is very new.
With new data collection methods comes new ethical concerns, such as over the analysis of one's public data to make assessments on their personality and when consent is needed.
Examples of personality tests
The first modern personality test was the Woodworth Personal Data Sheet, which was first used in 1919. It was designed to help the United States Army screen out recruits who might be susceptible to shell shock.
The Rorschach inkblot test was introduced in 1921 as a way to determine personality by the interpretation of inkblots.
The Thematic Apperception Test was commissioned by the Office of Strategic Services (O.S.S.) in the 1930s to identify personalities that might be susceptible to being turned by enemy intelligence.
The Minnesota Multiphasic Personality Inventory was published in 1942 as a way to aid in assessing psychopathology in a clinical setting. It can also be used to assess the Personality Psychopathology Five (PSY-5), which are similar to the Five Factor Model (FFM; or Big Five personality traits). These five scales on the MMPI-2 include aggressiveness, psychoticism, disconstraint, negative emotionality/neuroticism, and introversion/low positive emotionality.
Myers–Briggs Type Indicator (MBTI) is a questionnaire designed to measure psychological preferences in how people perceive the world and make decisions. The indicator, developed during World War II by Isabel Myers and Katharine Briggs, is based on Carl Jung's Psychological Types. The 16 types are formed from combinations of Extroversion-Introversion, Sensing-Intuition, Thinking-Feeling and Judging-Perceiving: the MBTI uses two opposing behavioral divisions on each of four scales to yield a "personality type".
OAD Survey is an adjective word list designed to measure seven work-related personality traits and job behaviors: Assertiveness-Compliance, Extroversion-Introversion, Patience-Impatience, Detail-Broad, High Versatility-Low Versatility, Low Emotional IQ-High Emotional IQ, Low Creativity-High Creativity. It was first published in 1990, with periodic norm revisions to assure scale validity, reliability, and non-bias.
Keirsey Temperament Sorter, developed by David Keirsey, is influenced by Isabel Myers' sixteen types and Ernst Kretschmer's four types.
The True Colors Test developed by Don Lowry in 1978 is based on the work of David Keirsey in his book, Please Understand Me, as well as the Myers-Briggs Type Indicator and provides a model for understanding personality types using the colors blue, gold, orange and green to represent four basic personality temperaments.
The 16PF Questionnaire (16PF) was developed by Raymond Cattell and his colleagues in the 1940s and 1950s in a search to try to discover the basic traits of human personality using scientific methodology. The test was first published in 1949, and is now in its 5th edition, published in 1994. It is used in a wide variety of settings for individual and marital counseling, career counseling and employee development, in educational settings, and for basic research.
The EQSQ Test, developed by Simon Baron-Cohen and Sally Wheelwright, centers on the empathizing-systemizing theory of the male versus the female brain types.
The Personality and Preference Inventory (PAPI), originally designed by Dr Max Kostick, Professor of Industrial Psychology at Boston State College, in Massachusetts, USA, in the early 1960s evaluates the behaviour and preferred work styles of individuals.
The Strength Deployment Inventory, developed by Elias Porter in 1971, is based on his theory of Relationship Awareness. Porter was the first known psychometrician to use colors (Red, Green and Blue) as shortcuts to communicate the results of a personality test.
The Newcastle Personality Assessor (NPA), created by Daniel Nettle, is a short questionnaire designed to quantify personality on five dimensions: Extraversion, Neuroticism, Conscientiousness, Agreeableness, and Openness.
The DISC assessment is based on the research of William Moulton Marston and later work by John Grier, and identifies four personality types: Dominance; Influence; Steadiness and Conscientiousness. It is used widely in Fortune 500 companies, for-profit and non-profit organizations.
The Winslow Personality Profile measures 24 traits on a decile scale. It has been used in the National Football League, the National Basketball Association, the National Hockey League and every draft choice for Major League Baseball for the last 30 years and can be taken online for personal development.
Other personality tests include Forté Profile, Millon Clinical Multiaxial Inventory, Eysenck Personality Questionnaire, Swedish Universities Scales of Personality, Edwin E. Wagner's The Hand Test, and Enneagram of Personality.
The HEXACO Personality Inventory – Revised (HEXACO PI-R) is based on the HEXACO model of personality structure, which consists of six domains, the five domains of the Big Five model, as well as the domain of Honesty-Humility.
The Personality Inventory for DSM-5 (PID-5) was developed in September 2012 by the DSM-5 Personality and Personality Disorders Workgroup with regard to a personality trait model proposed for DSM-5. The PID-5 includes 25 maladaptive personality traits as determined by Krueger, Derringer, Markon, Watson, and Skodol.
The Process Communication Model (PCM), developed by Taibi Kahler with NASA funding, was used to assist with shuttle astronaut selection. It is now a non-clinical personality assessment, communication, and management methodology applied to corporate management, interpersonal communications, education, and real-time analysis of call centre interactions, among other uses.
The Birkman Method (TBM) was developed by Roger W. Birkman in the late 1940s. The instrument consists of ten scales describing "occupational preferences" (Interests), 11 scales describing "effective behaviors" (Usual behavior) and 11 scales describing interpersonal and environmental expectations (Needs). A corresponding set of 11 scale values was derived to describe "less than effective behaviors" (Stress behavior). TBM was created empirically. The psychological model is most closely associated with the work of Kurt Lewin. Occupational profiling consists of 22 job families with over 200 associated job titles connected to O*Net.
The International Personality Item Pool (IPIP) is a public domain set of more than 2000 personality items which can be used to measure many personality variables, including the Five Factor Model.
The Guilford-Zimmerman Temperament Survey examined 10 factors representing normal personality and was used both in longitudinal studies and to examine the personality profiles of Italian pilots.
The Short Dark Triad (SD-3) examines three socially unacceptable traits: narcissism, Machiavellianism, and psychopathy.
The Dark Triad Dirty Dozen (DTDD) is a 12-item version of a dark triad test.
Personality tests of the five factor model
Several different measures of the Big Five personality traits exist:
The NEO PI-R, or the Revised NEO Personality Inventory, is one of the most significant measures of the Five Factor Model (FFM). The measure was created by Costa and McCrae and contains 240 items in the form of sentences. Costa and McCrae divided each of the five domains into six facets (30 facets in total) and thereby changed the way the FFM is measured.
The Five-Factor Model Rating Form (FFMRF) was developed by Lynam and Widiger in 2001 as a shorter alternative to the NEO PI-R. The form consists of 30 facets, 6 facets for each of the Big Five factors.
The Ten-Item Personality Inventory (TIPI) and the Five Item Personality Inventory (FIPI) are very abbreviated rating forms of the Big Five personality traits.
The Five Factor Personality Inventory – Children (FFPI-C) was developed to measure personality traits in children based upon the Five Factor Model (FFM).
The Big Five Inventory (BFI), developed by John, Donahue, and Kentle, is a 44-item self-report questionnaire consisting of adjectives that assess the domains of the Five Factor Model (FFM). The 10-Item Big Five Inventory (BFI-10) is a simplified version of the well-established BFI, developed to provide a personality inventory under time constraints. The BFI-10 assesses the five dimensions of the BFI using only two items each in order to reduce its length.
The Semi-structured Interview for the Assessment of the Five-Factor Model (SIFFM) is the only semi-structured interview intended to measure a personality model or personality disorder. The interview assesses the five domains and 30 facets as presented by the NEO PI-R, and it additionally assesses both normal and abnormal extremes of each facet.
See also
References
Personality | Personality test | [
"Biology"
] | 5,880 | [
"Behavior",
"Personality",
"Human behavior"
] |
609,487 | https://en.wikipedia.org/wiki/Gelfond%E2%80%93Schneider%20theorem | In mathematics, the Gelfond–Schneider theorem establishes the transcendence of a large class of numbers.
History
It was originally proved independently in 1934 by Aleksandr Gelfond and Theodor Schneider.
Statement
If a and b are algebraic numbers with a ≠ 0, 1, and b irrational, then any value of a^b is a transcendental number.
Comments
The values of a and b are not restricted to real numbers; complex numbers are allowed (here complex numbers are not regarded as rational when they have an imaginary part not equal to 0, even if both the real and imaginary parts are rational).
In general, a^b = exp(b log a) is multivalued, where log stands for the complex natural logarithm. (This is the multivalued inverse of the exponential function exp.) This accounts for the phrase "any value of" in the theorem's statement.
An equivalent formulation of the theorem is the following: if α and γ are nonzero algebraic numbers, and we take any non-zero logarithm of α, then (log γ)/(log α) is either rational or transcendental. This may be expressed as saying that if log α and log γ are linearly independent over the rationals, then they are linearly independent over the algebraic numbers. The generalisation of this statement to more general linear forms in logarithms of several algebraic numbers is in the domain of transcendental number theory.
If the restriction that a and b be algebraic is removed, the statement does not remain true in general. For example, (√2^√2)^√2 = √2^2 = 2.
Here, a is √2^√2, which (as proven by the theorem itself) is transcendental rather than algebraic. Similarly, if a = 3 and b = log 2 / log 3, which is transcendental, then a^b = 2 is algebraic. A characterization of the values for a and b which yield a transcendental a^b is not known.
Kurt Mahler proved the p-adic analogue of the theorem: if a and b are in Cp, the completion of the algebraic closure of Qp, and they are algebraic over Q, and if |a − 1|p < 1 and |b − 1|p < 1, then (logp a)/(logp b) is either rational or transcendental, where logp is the p-adic logarithm function.
Corollaries
The transcendence of the following numbers follows immediately from the theorem:
Gelfond–Schneider constant and its square root
Gelfond's constant
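A short sketch of how the transcendence of both constants follows from the theorem; these are standard arguments, written here in LaTeX rather than in the article's plain-text notation.

```latex
% Gelfond–Schneider constant: a = 2 and b = \sqrt{2} are algebraic, a \neq 0, 1, b irrational,
% so 2^{\sqrt{2}} is transcendental; its square root is covered by the same argument:
\sqrt{2^{\sqrt{2}}} = 2^{\sqrt{2}/2}, \qquad b = \tfrac{\sqrt{2}}{2} \ \text{algebraic and irrational.}
% Gelfond's constant: take a = -1 and b = -i, both algebraic, with b irrational; then
e^{\pi} = \left(e^{i\pi}\right)^{-i} = (-1)^{-i} \ \text{is transcendental.}
```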
Applications
The Gelfond–Schneider theorem answers affirmatively Hilbert's seventh problem.
See also
Lindemann–Weierstrass theorem
Baker's theorem; an extension of the result
Schanuel's conjecture; if proven it would imply both the Gelfond–Schneider theorem and the Lindemann–Weierstrass theorem
References
Further reading
External links
A proof of the Gelfond–Schneider theorem
Transcendental numbers
Theorems in number theory | Gelfond–Schneider theorem | [
"Mathematics"
] | 550 | [
"Mathematical theorems",
"Theorems in number theory",
"Mathematical problems",
"Number theory"
] |
609,717 | https://en.wikipedia.org/wiki/Aluminium%E2%80%93silicon%20alloys | Aluminium–silicon alloys, or Silumin, is a general name for a group of lightweight, high-strength aluminium alloys based on the aluminium–silicon system (AlSi), which consist predominantly of aluminium, with silicon as the quantitatively most important alloying element. Pure AlSi alloys cannot be hardened; the commonly used alloys AlSiCu (with copper) and AlSiMg (with magnesium) can be hardened. The hardening mechanism corresponds to that of AlCu and AlMgSi.
AlSi alloys are by far the most important of all aluminum cast materials. They are suitable for all casting processes and have excellent casting properties. Important areas of application are in car parts, including engine blocks and pistons. In addition, their use as a functional material for high-energy heat storage in electric vehicles is currently being focused on.
Alloying elements
Aluminium-silicon alloys typically contain 3% to 25% silicon content. Casting is the primary use of aluminum-silicon alloys, but they can also be utilized in rapid solidification processes and powder metallurgy. Alloys used by powder metallurgy, rather than casting, may contain even more silicon, up to 50%. Silumin has a high resistance to corrosion, making it useful in humid environments.
The addition of silicon to aluminum also makes it less viscous when in liquid form, which, together with its low cost (as both component elements are relatively cheap to extract), makes it a very good casting alloy. Silumin with good castability may give a stronger finished casting than a potentially stronger alloy that is more difficult to cast.
All aluminium alloys also contain iron as an admixture. It is generally undesirable because it lowers strength and elongation at break. Together with Al and Si it forms the β-phase AlFeSi, which is present in the structure in the form of small needles. However, iron also prevents the castings from sticking to the molds in die casting, so that special die-casting alloys contain a small amount of iron, while iron is avoided as far as possible in other alloys.
Manganese also reduces the tendency to stick, but affects the mechanical properties less than iron. Manganese forms a phase with other elements that is in the form of globulitic (round) grains.
Copper occurs in almost all technical alloys, at least as an admixture. From a content of 0.05% Cu, the corrosion resistance is reduced. Additions of about 1% Cu are alloyed to increase strength through solid solution strengthening. This also improves machinability. In the case of the AlSiCu alloys, higher proportions of copper are also added, which means that the materials can be hardened (see Aluminum-copper alloy).
Together with silicon, magnesium forms the Mg2Si (magnesium silicide) phase, which is the basis of hardenability, similar to aluminum-magnesium-silicon alloys (AlMgSi). In these there is an excess of Mg, so the structure consists of aluminum mixed crystal with magnesium and Mg2Si. In the AlSiMg alloys, on the other hand, there is an excess of silicon and the structure consists of aluminum mixed crystal, silicon and Mg2Si.
Silicon powders are used in aluminum-silicon alloys for enhancing strength and castability, providing better durability under high-stress conditions. It also improves the fluidity of molten aluminum which allows easier casting of complex shapes with fewer defects.
Small additions of titanium and boron serve to refine the grain.
Pure aluminium–silicon alloys
Aluminum forms a eutectic with silicon, which is at 577 °C, with a Si content of 12.5% or 12.6%. Up to 1.65% Si can be dissolved in aluminum at this temperature. However, the solubility decreases rapidly with temperature. At 500 °C it is still 0.8% Si, at 400 °C 0.3% Si and at 250 °C only 0.05% Si. At room temperature, silicon is practically insoluble. Aluminum cannot be dissolved in silicon at all, not even at high temperatures. Only in the molten state are both completely soluble. Increases in strength due to solid solution strengthening are negligible.
Pure AlSi alloys are smelted from primary aluminium, while AlSi alloys with other elements are usually smelted from secondary aluminium. The pure AlSi alloys are medium strength, non-hardenable, but corrosion resistant, even in salt water environments.
The exact properties depend on whether the composition of the alloy is above, near or below the eutectic point. Castability increases with increasing Si content and is best at about 17% Si; the mechanical properties are best at 6% to 12% Si.
The mold filling capacity reaches its maximum at 12% Si, but is also good with other contents.
The tendency to form cavities is lowest at 6% to 8% Si and considered low overall.
The tendency to hot cracking is low with less than 6% Si.
Otherwise, AlSi alloys generally have favorable casting properties: the shrinkage is only 1.25% and the influence of the wall thickness is small.
Hypereutectic alloys, with a silicon content of 16 to 19%, such as Alusil, can be used in high-wear applications such as pistons, cylinder liners and internal combustion engine blocks. The metal is etched after casting, exposing hard, wear-resistant silicon precipitates. The rest of the surface becomes slightly porous and retains oil. Overall this makes for an excellent bearing surface, and at lower cost than traditional bronze bearing bushes.
Hypoeutectic alloys
Hypoeutectic (sub-eutectic) alloys have a silicon content of less than 12%. In them, the aluminium solidifies first. As the temperature falls and the proportion of solidified aluminium increases, the silicon content of the residual melt increases until the eutectic point is reached. Then the entire residual melt solidifies as a eutectic. The microstructure is consequently characterized by primary aluminium, which is often present in the form of dendrites, and the eutectic of the residual melt lying between them. The lower the silicon content, the larger the dendrites.
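Using the figures quoted earlier (a eutectic at about 12.6% Si and a maximum solubility of about 1.65% Si in aluminium at the eutectic temperature), the lever rule gives a rough estimate of the fractions of primary dendrites and eutectic. The 7% Si alloy below is an arbitrary hypoeutectic example, and the calculation is a sketch, not a substitute for a full phase-diagram computation.

```python
# Lever-rule estimate of the microstructure of a hypoeutectic AlSi alloy,
# evaluated just as the eutectic (577 C) starts to solidify.
C_ALLOY = 7.0      # wt% Si of the alloy (arbitrary hypoeutectic example)
C_ALPHA = 1.65     # wt% Si soluble in the aluminium solid solution at 577 C
C_EUTECTIC = 12.6  # wt% Si of the eutectic

# Fractions of primary aluminium dendrites and of eutectic.
f_primary = (C_EUTECTIC - C_ALLOY) / (C_EUTECTIC - C_ALPHA)
f_eutectic = 1.0 - f_primary

print(f"primary aluminium: {f_primary:.0%}, eutectic: {f_eutectic:.0%}")
# roughly 51% primary dendrites and 49% eutectic for a 7% Si alloy
```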
In pure AlSi alloys, the eutectic is often in a degenerate form. Instead of the fine structure that is otherwise typical of eutectics with its good mechanical properties, AlSi takes the form of a coarse-grained structure on slow cooling, in which silicon forms large plates or needles. These can sometimes be seen with the naked eye and make the material brittle. This is not a problem in chill casting, since the cooling rates are high enough to avoid degeneration.
In sand casting in particular, with its slow cooling rates, additional elements are added to the melt to prevent degeneration. Sodium, strontium and antimony are suitable. These elements are added to the melt at around 720 °C to 780 °C, causing supercooling that reduces the diffusion of silicon; the result is the usual fine eutectic, with higher strength and elongation at break.
Eutectic and near-eutectic alloys
Alloys with 11% Si to 13% Si are counted among the eutectic alloys. Annealing improves elongation and fatigue strength. Solidification is shell-forming in untreated alloys and smooth-walled in refined alloys, resulting in very good castability. Above all, the flowability and mold filling ability are very good, which is why eutectic alloys are suitable for thin-walled parts.
Hypereutectic alloys
Alloys with more than 13% Si are referred to as over- or hypereutectic. The Si content is usually up to 17%, with special piston alloys also over 20%. Hypereutectic alloys have very low thermal expansion and are very wear resistant. In contrast to many other alloys, AlSi alloys do not show their maximum fluidity near the eutectic, but at 14 to 16% Si, in the case of overheating at 17% to 18% Si. The tendency to hot cracking is minimal in the range from 10% to 14%. In the case of hypereutectic alloys, the silicon crystals solidify first in the melt, until the remaining melt solidifies as a eutectic. For grain refinement copper-phosphorus alloys are used. The hard and brittle silicon leads to increased tool wear during subsequent machining, which is why diamond tools are sometimes used (See also Machinability).
Aluminium–silicon–magnesium alloys
AlSiMg alloys with small additions of magnesium (below 0.3 to 0.6% Mg) can be hardened both cold and warm. The proportion of magnesium decreases with increasing silicon content, which is between 5% Si and 10% Si. They are related to the AlMgSi alloys: Both are based on the fact that magnesium silicide Mg2Si is precipitated, which is present in the material in the form of finely divided particles and thus increases the strength. In addition, magnesium increases the elongation at break. In contrast to AlSiCu, which can also be hardened, these alloys are corrosion-resistant and easy to cast. However, copper is present as an impurity in some AlSiMg alloys, which reduces corrosion resistance. This applies above all to materials that have been melted from secondary aluminium.
Aluminium–silicon–copper alloys
AlSiCu alloys are also heat-hardenable and additionally high-strength, but susceptible to corrosion and less, but still adequately, castable. It is often smelted from secondary aluminium. The hardening is based on the same mechanism as the AlCu alloys. The copper content is 1% to 4%, that of silicon 4% to 10%. Small additions of magnesium improve strength.
Compositions of standardized varieties
All data are in percent by mass. The rest is aluminum.
Wrought alloys
Cast Alloys
Mechanical properties of standardized and non-standard grades
4000 series
4000 series are alloyed with silicon. Variations of aluminium–silicon alloys intended for casting (and therefore not included in 4000 series) are also known as silumin.
Applications
Within the Aluminum Association numeric designation system, Silumin corresponds to alloys of two systems: 3xx.x, aluminum–silicon casting alloys also containing magnesium and/or copper, and 4xx.x, binary aluminum–silicon casting alloys. Copper increases strength, but reduces corrosion resistance.
In general, AlSi alloys are mainly used in foundries, especially for vehicle construction. Wrought alloys are very rare. They are used as a filler metal (welding wire) or as a solder in brazing. In some cases, forged AlSi pistons are also built for aviation.
AlSi eutectic casting alloys are used for machine parts, cylinder heads, cylinder crankcases, impellers and ribbed bodies. Hypereutectic (high silicon) alloys are used for engine parts because of low thermal expansion and high strength and wear resistance. This also includes special piston alloys with around 25% Si.
Alloys with additions of magnesium (AlSiMg) can be hardened by heat treatment. An example use case is wheel rims produced by low-pressure casting, chosen for their good strength, corrosion resistance, and elongation at break. Alloys with about 10% Si are used for cylinder heads, switch housings, intake manifolds, transformer tanks, wheel suspensions and oil pans. Alloys with 5% Si to 7% Si are used for chassis parts and wheels. At levels of 9% Si, they are suitable for structural components and body nodes.
The copper-containing AlSiCu alloys are used for gear housings, crankcases and cylinder heads because of their heat resistance and hardenability.
In addition to the use of AlSi alloys as a structural material, in which the mechanical properties are paramount, another area of application is latent heat storage. In the phase change of the alloy at 577 °C, thermal energy can be stored in the form of the enthalpy of fusion. AlSi can therefore also be used as a metallic phase change material (mPCM). Compared to other phase change materials, metals are characterized by a high specific energy density combined with high thermal conductivity. The latter is important for the rapid entry and exit of heat in the storage material and thus increases the performance of a heat storage system. These advantageous properties of mPCM such as AlSi are of particular importance for vehicle applications, since low masses and volumes as well as high thermal performance are the main goals here. By using storage systems based on mPCM, the range of electric cars can be increased by storing the thermal energy needed for heating in the mPCM instead of taking it from the traction battery.
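A rough order-of-magnitude estimate of the heat storable per kilogram of a near-eutectic AlSi mPCM follows; the specific heat and enthalpy of fusion used below are assumed round numbers of plausible magnitude, not values taken from this article.

```python
# Order-of-magnitude estimate of the thermal energy stored in 1 kg of near-eutectic AlSi
# heated from 300 C to just above the 577 C phase change.
MASS_KG = 1.0
CP_J_PER_KG_K = 1_000.0    # assumed average specific heat of the alloy, J/(kg*K)
H_FUSION_J_PER_KG = 5.0e5  # assumed enthalpy of fusion, roughly 500 kJ/kg
DELTA_T_K = 577.0 - 300.0

sensible = MASS_KG * CP_J_PER_KG_K * DELTA_T_K  # heat stored by the temperature rise
latent = MASS_KG * H_FUSION_J_PER_KG            # heat stored in the phase change at 577 C
total_kwh = (sensible + latent) / 3.6e6

print(f"sensible: {sensible / 1e3:.0f} kJ, latent: {latent / 1e3:.0f} kJ, total: {total_kwh:.2f} kWh")
```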
Near-eutectic AlSi melts are also used for hot-dip aluminizing. In the process of continuous strip galvanizing, steel strips are finished with a heat-resistant metallic coating 10-25 μm thick. Hot-dip aluminized sheet steel is an inexpensive material for thermally stressed components. Unlike zinc coatings, the coating does not provide cathodic protection under atmospheric conditions.
Characteristics
High castability, fluidity, corrosion resistance, ductility, and low density.
Usable for large castings, which can operate under heavy load conditions.
Considered to not be a heat-treatable alloy, but the addition of Mg & Cu can allow it to be heat treated, e.g. AΠ4 alloys.
Strengthened by solution treatment, e.g. adding 0.01% sodium (in the form of sodium fluoride [NaF] and sodium chloride [NaCl]) to the melt just before casting.
A disadvantage is a tendency for porosity in the casting, i.e. the casting can become foam-like. This can be avoided by casting under pressure in autoclaves.
References
Further reading
Aluminium alloys
Aluminium–silicon alloys | Aluminium–silicon alloys | [
"Chemistry"
] | 2,848 | [
"Alloys",
"Aluminium alloys"
] |
609,737 | https://en.wikipedia.org/wiki/Duality%20%28mathematics%29 | In mathematics, a duality translates concepts, theorems or mathematical structures into other concepts, theorems or structures in a one-to-one fashion, often (but not always) by means of an involution operation: if the dual of A is B, then the dual of B is A. In other cases the dual of the dual – the double dual or bidual – is not necessarily identical to the original (also called primal). Such involutions sometimes have fixed points, so that the dual of A is A itself. For example, Desargues' theorem is self-dual in this sense under the standard duality in projective geometry.
In mathematical contexts, duality has numerous meanings. It has been described as "a very pervasive and important concept in (modern) mathematics" and "an important general theme that has manifestations in almost every area of mathematics".
Many mathematical dualities between objects of two types correspond to pairings, bilinear functions from an object of one type and another object of the second type to some family of scalars. For instance, linear algebra duality corresponds in this way to bilinear maps from pairs of vector spaces to scalars, the duality between distributions and the associated test functions corresponds to the pairing in which one integrates a distribution against a test function, and Poincaré duality corresponds similarly to intersection number, viewed as a pairing between submanifolds of a given manifold.
From a category theory viewpoint, duality can also be seen as a functor, at least in the realm of vector spaces. This functor assigns to each space its dual space, and the pullback construction assigns to each arrow f: V → W its dual f*: W* → V*.
Introductory examples
In the words of Michael Atiyah,
The following list of examples shows the common features of many dualities, but also indicates that the precise meaning of duality may vary from case to case.
Complement of a subset
A simple duality arises from considering subsets of a fixed set S. To any subset A ⊆ S, the complement A^c consists of all those elements in S that are not contained in A. It is again a subset of S. Taking the complement has the following properties:
Applying it twice gives back the original set, i.e., (A^c)^c = A. This is referred to by saying that the operation of taking the complement is an involution.
An inclusion of sets A ⊆ B is turned into an inclusion in the opposite direction B^c ⊆ A^c.
Given two subsets A and B of S, A is contained in B^c if and only if B is contained in A^c.
This duality appears in topology as a duality between open and closed subsets of some fixed topological space X: a subset U of X is closed if and only if its complement in X is open. Because of this, many theorems about closed sets are dual to theorems about open sets. For example, any union of open sets is open, so dually, any intersection of closed sets is closed. The interior of a set is the largest open set contained in it, and the closure of the set is the smallest closed set that contains it. Because of the duality, the complement of the interior of any set A is equal to the closure of the complement of A.
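A small sketch checking the three complement properties on explicit finite sets; the particular sets are arbitrary.

```python
# Complement duality on subsets of a fixed finite set S.
S = frozenset(range(10))
A = frozenset({1, 2, 3})
B = frozenset({1, 2, 3, 4, 5})

def complement(X):
    return S - X

# 1. Involution: taking the complement twice gives back the original set.
assert complement(complement(A)) == A

# 2. Inclusion reversal: A <= B implies complement(B) <= complement(A).
assert A <= B and complement(B) <= complement(A)

# 3. A is contained in complement(B) iff B is contained in complement(A),
#    checked here for one further pair of sets.
C, D = frozenset({0, 1}), frozenset({8, 9})
assert (C <= complement(D)) == (D <= complement(C))
print("all three complement-duality properties hold on this example")
```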
Dual cone
A duality in geometry is provided by the dual cone construction. Given a set C of points in the plane (or more generally points in R^n), the dual cone is defined as the set C* consisting of those points (x1, x2) satisfying
x1 c1 + x2 c2 ≥ 0 for all points (c1, c2) in C, as illustrated in the diagram.
Unlike for the complement of sets mentioned above, it is not in general true that applying the dual cone construction twice gives back the original set C. Instead, C** is the smallest cone containing C, which may be bigger than C. Therefore this duality is weaker than the one above, in that
Applying the operation twice gives back a possibly bigger set: for all C, C is contained in C**. (For some C, namely the cones, the two are actually equal.)
The other two properties carry over without change:
It is still true that an inclusion C ⊆ D is turned into an inclusion in the opposite direction (D* ⊆ C*).
Given two subsets C and D of the plane, C is contained in D* if and only if D is contained in C*.
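A numerical sketch of the dual-cone definition for a finite set of points in the plane; the point set and the grid of test points are arbitrary choices made for illustration.

```python
import numpy as np

# A finite set C of points in the plane.
C = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 2.0]])

def in_dual_cone(x, points):
    """x lies in the dual cone of `points` iff <x, c> >= 0 for every c in `points`."""
    return bool(np.all(points @ np.asarray(x) >= 0))

# Sample a grid of candidate points; keep those lying in C* and in C** respectively.
grid = np.array([[a, b] for a in np.linspace(-3, 3, 61) for b in np.linspace(-3, 3, 61)])
C_star = grid[[in_dual_cone(x, C) for x in grid]]
C_star_star = grid[[in_dual_cone(x, C_star) for x in grid]]

# Every original point of C is contained in C**, the smallest cone containing C.
assert all(in_dual_cone(c, C_star) for c in C)
print(len(C_star), "grid points lie in C*,", len(C_star_star), "grid points lie in C**")
```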
Dual vector space
A very important example of a duality arises in linear algebra by associating to any vector space V its dual vector space V*. Its elements are the linear functionals φ: V → K, where K is the field over which V is defined.
The three properties of the dual cone carry over to this type of duality by replacing subsets of the plane by vector spaces and inclusions of such subsets by linear maps. That is:
Applying the operation of taking the dual vector space twice gives another vector space V**. There is always a map V → V**. For some V, namely precisely the finite-dimensional vector spaces, this map is an isomorphism.
A linear map V → W gives rise to a map in the opposite direction (W* → V*).
Given two vector spaces V and W, the maps from V to W* correspond to the maps from W to V*.
A particular feature of this duality is that V and V* are isomorphic for certain objects, namely finite-dimensional vector spaces. However, this is in a sense a lucky coincidence, for giving such an isomorphism requires a certain choice, for example the choice of a basis of V. This is also true in the case that V is a Hilbert space, via the Riesz representation theorem.
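For a finite-dimensional space both isomorphisms can be written down explicitly; the notation below is a standard sketch and is not taken from this article.

```latex
% Dual basis: given a basis e_1, \dots, e_n of V, define e^1, \dots, e^n \in V^* by
e^i(e_j) = \delta_{ij} = \begin{cases} 1 & i = j, \\ 0 & i \neq j. \end{cases}
% The resulting isomorphism V \cong V^* (e_i \mapsto e^i) depends on the chosen basis.
% By contrast, the map into the double dual is canonical (basis-free):
\operatorname{ev}\colon V \to V^{**}, \qquad \operatorname{ev}(v)(\varphi) = \varphi(v)
\quad \text{for } v \in V,\ \varphi \in V^*,
% and it is an isomorphism precisely when V is finite-dimensional.
```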
Galois theory
In all the dualities discussed before, the dual of an object is of the same kind as the object itself. For example, the dual of a vector space is again a vector space. Many duality statements are not of this kind. Instead, such dualities reveal a close relation between objects of seemingly different nature. One example of such a more general duality is from Galois theory. For a fixed Galois extension K / F, one may associate the Galois group Gal(K/L) to any intermediate field L (i.e., F ⊆ L ⊆ K). This group is a subgroup of the Galois group Gal(K/F). Conversely, to any such subgroup H ⊆ Gal(K/F) there is the fixed field K^H consisting of elements fixed by the elements in H.
Compared to the above, this duality has the following features:
An extension L ⊃ L′ of intermediate fields gives rise to an inclusion of Galois groups in the opposite direction: Gal(K/L) ⊆ Gal(K/L′).
Associating Gal(K/L) to L and K^H to H are inverse to each other. This is the content of the fundamental theorem of Galois theory.
Order-reversing dualities
Given a poset (short for partially ordered set; i.e., a set that has a notion of ordering but in which two elements cannot necessarily be placed in order relative to each other), the dual poset comprises the same ground set but the converse relation. Familiar examples of dual partial orders include
the subset and superset relations and on any collection of sets, such as the subsets of a fixed set . This gives rise to the first example of a duality mentioned above.
the divides and multiple-of relations on the integers.
the descendant-of and ancestor-of relations on the set of humans.
A duality transform is an involutive antiautomorphism f of a partially ordered set S, that is, an order-reversing involution f: S → S. In several important cases these simple properties determine the transform uniquely up to some simple symmetries. For example, if f1, f2 are two duality transforms then their composition is an order automorphism of S; thus, any two duality transforms differ only by an order automorphism. For example, all order automorphisms of a power set S = 2^R are induced by permutations of R.
A concept defined for a partial order P will correspond to a dual concept on the dual poset P^d. For instance, a minimal element of P will be a maximal element of P^d: minimality and maximality are dual concepts in order theory. Other pairs of dual concepts are upper and lower bounds, lower sets and upper sets, and ideals and filters.
In topology, open sets and closed sets are dual concepts: the complement of an open set is closed, and vice versa. In matroid theory, the family of sets complementary to the independent sets of a given matroid themselves form another matroid, called the dual matroid.
Dimension-reversing dualities
There are many distinct but interrelated dualities in which geometric or topological objects correspond to other objects of the same type, but with a reversal of the dimensions of the features of the objects. A classical example of this is the duality of the Platonic solids, in which the cube and the octahedron form a dual pair, the dodecahedron and the icosahedron form a dual pair, and the tetrahedron is self-dual. The dual polyhedron of any of these polyhedra may be formed as the convex hull of the center points of each face of the primal polyhedron, so the vertices of the dual correspond one-for-one with the faces of the primal. Similarly, each edge of the dual corresponds to an edge of the primal, and each face of the dual corresponds to a vertex of the primal. These correspondences are incidence-preserving: if two parts of the primal polyhedron touch each other, so do the corresponding two parts of the dual polyhedron. More generally, using the concept of polar reciprocation, any convex polyhedron, or more generally any convex polytope, corresponds to a dual polyhedron or dual polytope, with an i-dimensional feature of an n-dimensional polytope corresponding to an (n − i − 1)-dimensional feature of the dual polytope. The incidence-preserving nature of the duality is reflected in the fact that the face lattices of the primal and dual polyhedra or polytopes are themselves order-theoretic duals. Duality of polytopes and order-theoretic duality are both involutions: the dual polytope of the dual polytope of any polytope is the original polytope, and reversing all order-relations twice returns to the original order. Choosing a different center of polarity leads to geometrically different dual polytopes, but all have the same combinatorial structure.
From any three-dimensional polyhedron, one can form a planar graph, the graph of its vertices and edges. The dual polyhedron has a dual graph, a graph with one vertex for each face of the polyhedron and with one edge for every two adjacent faces. The same concept of planar graph duality may be generalized to graphs that are drawn in the plane but that do not come from a three-dimensional polyhedron, or more generally to graph embeddings on surfaces of higher genus: one may draw a dual graph by placing one vertex within each region bounded by a cycle of edges in the embedding, and drawing an edge connecting any two regions that share a boundary edge. An important example of this type comes from computational geometry: the duality for any finite set S of points in the plane between the Delaunay triangulation of S and the Voronoi diagram of S. As with dual polyhedra and dual polytopes, the duality of graphs on surfaces is a dimension-reversing involution: each vertex in the primal embedded graph corresponds to a region of the dual embedding, each edge in the primal is crossed by an edge in the dual, and each region of the primal corresponds to a vertex of the dual. The dual graph depends on how the primal graph is embedded: different planar embeddings of a single graph may lead to different dual graphs. Matroid duality is an algebraic extension of planar graph duality, in the sense that the dual matroid of the graphic matroid of a planar graph is isomorphic to the graphic matroid of the dual graph.
A kind of geometric duality also occurs in optimization theory, but not one that reverses dimensions. A linear program may be specified by a system of real variables (the coordinates for a point in Euclidean space R^n), a system of linear constraints (each specifying that the point lie in a halfspace; the intersection of these halfspaces is a convex polytope, the feasible region of the program), and a linear function (what to optimize). Every linear program has a dual problem with the same optimal solution, but the variables in the dual problem correspond to constraints in the primal problem and vice versa.
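A small numerical illustration of a primal/dual pair attaining the same optimal value; the particular coefficients are made up, and scipy's linprog is used simply as a convenient solver.

```python
import numpy as np
from scipy.optimize import linprog

# Primal: minimize c.x subject to A x >= b, x >= 0.
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([4.0, 6.0])

primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)

# Dual: maximize b.y subject to A^T y <= c, y >= 0.
# linprog minimizes, so the dual objective is negated.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2)

# The dual has one variable per primal constraint; by strong duality both optima agree.
print(f"primal optimum: {primal.fun:.3f}, dual optimum: {-dual.fun:.3f}")  # both 10.000
```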
Duality in logic and set theory
In logic, functions or relations A and B are considered dual if A(¬x) = ¬B(x), where ¬ is logical negation. The basic duality of this type is the duality of the ∃ and ∀ quantifiers in classical logic. These are dual because ∃x.¬P(x) and ¬∀x.P(x) are equivalent for all predicates P in classical logic: if there exists an x for which P fails to hold, then it is false that P holds for all x (but the converse does not hold constructively). From this fundamental logical duality follow several others:
A formula is said to be satisfiable in a certain model if there are assignments to its free variables that render it true; it is valid if every assignment to its free variables makes it true. Satisfiability and validity are dual because the invalid formulas are precisely those whose negations are satisfiable, and the unsatisfiable formulas are those whose negations are valid. This can be viewed as a special case of the previous item, with the quantifiers ranging over interpretations.
In classical logic, the ∧ and ∨ operators are dual in this sense, because (¬x ∧ ¬y) and ¬(x ∨ y) are equivalent. This means that for every theorem of classical logic there is an equivalent dual theorem. De Morgan's laws are examples. More generally, ⋀(¬xi) = ¬⋁(xi). The left side is true if and only if ∀i.¬xi, and the right side if and only if ¬∃i.xi.
In modal logic, □p means that the proposition p is "necessarily" true, and ◇p that p is "possibly" true. Most interpretations of modal logic assign dual meanings to these two operators. For example, in Kripke semantics, "p is possibly true" means "there exists some world W such that p is true in W", while "p is necessarily true" means "for all worlds W, p is true in W". The duality of □ and ◇ then follows from the analogous duality of ∀ and ∃. Other dual modal operators behave similarly. For example, temporal logic has operators denoting "will be true at some time in the future" and "will be true at all times in the future" which are similarly dual.
Other analogous dualities follow from these:
Set-theoretic union and intersection are dual under the set complement operator ( )^c. That is, A^c ∩ B^c = (A ∪ B)^c, and more generally, ⋂ Aα^c = (⋃ Aα)^c. This follows from the duality of ∀ and ∃: an element x is a member of ⋂ Aα^c if and only if ∀α. x ∉ Aα, and is a member of (⋃ Aα)^c if and only if ¬∃α. x ∈ Aα.
Bidual
The dual of the dual, called the bidual or double dual, depending on context, is often identical to the original (also called primal), and duality is an involution. In this case the bidual is not usually distinguished, and instead one only refers to the primal and dual. For example, the dual poset of the dual poset is exactly the original poset, since the converse relation is defined by an involution.
In other cases, the bidual is not identical with the primal, though there is often a close connection. For example, the dual cone of the dual cone of a set contains the primal set (it is the smallest cone containing the primal set), and is equal if and only if the primal set is a cone.
An important case is for vector spaces, where there is a map from the primal space to the double dual, V → V**, known as the "canonical evaluation map". For finite-dimensional vector spaces this is an isomorphism, but these are not identical spaces: they are different sets. In category theory, this is generalized by the notion of dual objects and by a "natural transformation" from the identity functor to the double dual functor. For vector spaces (considered algebraically), this is always an injection. This can be generalized algebraically to a dual module. There is still a canonical evaluation map, but it is not always injective; if it is, this is known as a torsionless module; if it is an isomorphism, the module is called reflexive.
For topological vector spaces (including normed vector spaces), there is a separate notion of a topological dual, denoted V′ to distinguish it from the algebraic dual V*, with different possible topologies on the dual, each of which defines a different bidual space V′′. In these cases the canonical evaluation map V → V′′ is not in general an isomorphism. If it is, this is known (for certain locally convex vector spaces with the strong dual space topology) as a reflexive space.
In other cases, showing a relation between the primal and bidual is a significant result, as in Pontryagin duality (a locally compact abelian group is naturally isomorphic to its bidual).
Dual objects
A group of dualities can be described by endowing, for any mathematical object X, the set of morphisms Hom(X, D) into some fixed object D, with a structure similar to that of X. This is sometimes called internal Hom. In general, this yields a true duality only for specific choices of D, in which case X* = Hom(X, D) is referred to as the dual of X. There is always a map from X to the bidual, that is to say, the dual of the dual, X → X** = Hom(Hom(X, D), D).
It assigns to some x ∈ X the map that associates to any map f: X → D (i.e., an element in Hom(X, D)) the value f(x).
Depending on the concrete duality considered and also depending on the object X, this map may or may not be an isomorphism.
Dual vector spaces revisited
The construction of the dual vector space V* = Hom(V, K)
mentioned in the introduction is an example of such a duality. Indeed, the set of morphisms, i.e., linear maps V → K, forms a vector space in its own right. The map V → V** mentioned above is always injective. It is surjective, and therefore an isomorphism, if and only if the dimension of V is finite. This fact characterizes finite-dimensional vector spaces without referring to a basis.
Isomorphisms of and and inner product spaces
A vector space V is isomorphic to V* precisely if V is finite-dimensional. In this case, such an isomorphism is equivalent to a non-degenerate bilinear form φ: V × V → K.
In this case V is called an inner product space.
For example, if K is the field of real or complex numbers, any positive definite bilinear form gives rise to such an isomorphism. In Riemannian geometry, V is taken to be the tangent space of a manifold and such positive bilinear forms are called Riemannian metrics. Their purpose is to measure angles and distances. Thus, duality is a foundational basis of this branch of geometry. Another application of inner product spaces is the Hodge star which provides a correspondence between the elements of the exterior algebra. For an n-dimensional vector space, the Hodge star operator maps k-forms to (n − k)-forms. This can be used to formulate Maxwell's equations. In this guise, the duality inherent in the inner product space exchanges the role of magnetic and electric fields.
Duality in projective geometry
In some projective planes, it is possible to find geometric transformations that map each point of the projective plane to a line, and each line of the projective plane to a point, in an incidence-preserving way. For such planes there arises a general principle of duality in projective planes: given any theorem in such a plane projective geometry, exchanging the terms "point" and "line" everywhere results in a new, equally valid theorem. A simple example is that the statement "two points determine a unique line, the line passing through these points" has the dual statement that "two lines determine a unique point, the intersection point of these two lines". For further examples, see Dual theorems.
A conceptual explanation of this phenomenon in some planes (notably field planes) is offered by the dual vector space. In fact, the points in the projective plane correspond to one-dimensional subvector spaces V ⊂ R^3 while the lines in the projective plane correspond to subvector spaces W of dimension 2. The duality in such projective geometries stems from assigning to a one-dimensional V the subspace of (R^3)* consisting of those linear maps f: R^3 → R which satisfy f(V) = 0. As a consequence of the dimension formula of linear algebra, this space is two-dimensional, i.e., it corresponds to a line in the projective plane associated to (R^3)*.
The (positive definite) bilinear form ⟨·, ·⟩: R^3 × R^3 → R, ⟨x, y⟩ = x1 y1 + x2 y2 + x3 y3,
yields an identification of this projective plane with the one associated to (R^3)*. Concretely, the duality assigns to V ⊂ R^3 its orthogonal complement V⊥ = {w ∈ R^3 : ⟨v, w⟩ = 0 for all v ∈ V}. The explicit formulas in duality in projective geometry arise by means of this identification.
Topological vector spaces and Hilbert spaces
In the realm of topological vector spaces, a similar construction exists, replacing the dual by the topological dual vector space. There are several notions of topological dual space, and each of them gives rise to a certain concept of duality. A topological vector space that is canonically isomorphic to its bidual is called a reflexive space:
Examples:
As in the finite-dimensional case, on each Hilbert space H its inner product ⟨·, ·⟩ defines a map H → H*, v ↦ (w ↦ ⟨w, v⟩), which is a bijection due to the Riesz representation theorem. As a corollary, every Hilbert space is a reflexive Banach space.
The dual normed space of an Lp-space is Lq where 1/p + 1/q = 1, provided that 1 ≤ p < ∞, but the dual of L∞ is bigger than L1. Hence L1 is not reflexive.
Distributions are linear functionals on appropriate spaces of functions. They are an important technical means in the theory of partial differential equations (PDE): instead of solving a PDE directly, it may be easier to first solve the PDE in the "weak sense", i.e., find a distribution that satisfies the PDE and, second, to show that the solution must, in fact, be a function. All the standard spaces of distributions (the distributions, the tempered distributions, and the compactly supported distributions) are reflexive locally convex spaces.
Further dual objects
The dual lattice of a lattice L is given by Hom(L, Z), the set of linear functions on the real vector space containing the lattice that map the points of the lattice to the integers Z. This is used in the construction of toric varieties. The Pontryagin dual of a locally compact topological group G is given by Hom(G, S^1), the continuous group homomorphisms with values in the circle (with multiplication of complex numbers as group operation).
Dual categories
Opposite category and adjoint functors
In another group of dualities, the objects of one theory are translated into objects of another theory and the maps between objects in the first theory are translated into morphisms in the second theory, but with direction reversed. Using the parlance of category theory, this amounts to a contravariant functor between two categories and :
which for any two objects X and Y of C gives a map
That functor may or may not be an equivalence of categories. There are various situations where such a functor is an equivalence between the opposite category Cᵒᵖ of C, and D. Using a duality of this type, every statement in the first theory can be translated into a "dual" statement in the second theory, where the direction of all arrows has to be reversed. Therefore, any duality between categories C and D is formally the same as an equivalence between C and Dᵒᵖ (equivalently, between Cᵒᵖ and D). However, in many circumstances the opposite categories have no inherent meaning, which makes duality an additional, separate concept.
A category that is equivalent to its dual is called self-dual. An example of a self-dual category is the category of Hilbert spaces.
Many category-theoretic notions come in pairs in the sense that they correspond to each other while considering the opposite category. For example, Cartesian products Y1 × Y2 and disjoint unions Y1 ⊔ Y2 of sets are dual to each other in the sense that
Hom(X, Y1 × Y2) = Hom(X, Y1) × Hom(X, Y2)
and
Hom(Y1 ⊔ Y2, X) = Hom(Y1, X) × Hom(Y2, X)
for any set X. This is a particular case of a more general duality phenomenon, under which limits in a category C correspond to colimits in the opposite category Cᵒᵖ; further concrete examples of this are epimorphisms vs. monomorphisms, in particular factor modules (or groups etc.) vs. submodules, direct products vs. direct sums (also called coproducts to emphasize the duality aspect). Therefore, in some cases, proofs of certain statements can be halved, using such a duality phenomenon. Further notions related by such a categorical duality are projective and injective modules in homological algebra, fibrations and cofibrations in topology and more generally model categories.
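The set-level duality of products and disjoint unions mentioned above can be made concrete: a map into a product is the same data as a pair of maps, and a map out of a disjoint union is the same data as a pair of maps. A small sketch (the function names are illustrative):

```python
# Hom(X, A x B)  <->  Hom(X, A) x Hom(X, B)
def pair_into_product(f, g):
    """Combine f: X -> A and g: X -> B into a single map X -> A x B."""
    return lambda x: (f(x), g(x))

def split_from_product(h):
    """Split h: X -> A x B into its two component maps."""
    return (lambda x: h(x)[0]), (lambda x: h(x)[1])

# Hom(A + B, X)  <->  Hom(A, X) x Hom(B, X), with the disjoint union
# A + B modelled as pairs tagged by 0 (left) or 1 (right).
def copair_from_sum(f, g):
    """Combine f: A -> X and g: B -> X into a single map A + B -> X."""
    return lambda tagged: f(tagged[1]) if tagged[0] == 0 else g(tagged[1])

def split_from_sum(h):
    """Split h: A + B -> X into its restrictions to A and to B."""
    return (lambda a: h((0, a))), (lambda b: h((1, b)))

# Round-trip checks on sample inputs.
h = pair_into_product(lambda x: x + 1, lambda x: x * 2)
f, g = split_from_product(h)
assert (f(3), g(3)) == h(3) == (4, 6)

k = copair_from_sum(str.upper, str.lower)
u, v = split_from_sum(k)
assert (u("ab"), v("CD")) == (k((0, "ab")), k((1, "CD"))) == ("AB", "cd")
```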
Two functors F: C → D and G: D → C are adjoint if for all objects c in C and d in D
HomD(F(c), d) ≅ HomC(c, G(d))
in a natural way. Actually, the correspondence of limits and colimits is an example of adjoints, since there is an adjunction
colim ⊣ Δ
between the colimit functor that assigns to any diagram in C indexed by some category I its colimit and the diagonal functor Δ that maps any object c of C to the constant diagram which has c at all places. Dually,
Δ ⊣ lim.
Spaces and functions
Gelfand duality is a duality between commutative C*-algebras A and compact Hausdorff spaces X: it assigns to X the space of continuous functions (which vanish at infinity) from X to C, the complex numbers. Conversely, the space X can be reconstructed from A as the spectrum of A. Both Gelfand and Pontryagin duality can be deduced in a largely formal, category-theoretic way.
In a similar vein there is a duality in algebraic geometry between commutative rings and affine schemes: to every commutative ring A there is an affine spectrum, Spec A. Conversely, given an affine scheme S, one gets back a ring by taking global sections of the structure sheaf OS. In addition, ring homomorphisms are in one-to-one correspondence with morphisms of affine schemes, so that there is an equivalence
(Commutative rings)op ≅ (affine schemes)
Affine schemes are the local building blocks of schemes. The previous result therefore says that the local theory of schemes is the same as commutative algebra, the study of commutative rings.
Noncommutative geometry draws inspiration from Gelfand duality and studies noncommutative C*-algebras as if they were functions on some imagined space. Tannaka–Krein duality is a non-commutative analogue of Pontryagin duality.
Galois connections
In a number of situations, the two categories which are dual to each other actually arise from partially ordered sets, i.e., there is some notion of an object "being smaller" than another one. A duality that respects the orderings in question is known as a Galois connection. An example is the standard duality in Galois theory mentioned in the introduction: a bigger field extension corresponds—under the mapping that assigns to any extension L ⊃ K (inside some fixed bigger field Ω) the Galois group Gal(Ω / L)—to a smaller group.
The collection of all open subsets of a topological space X forms a complete Heyting algebra. There is a duality, known as Stone duality, connecting sober spaces and spatial locales.
Birkhoff's representation theorem relating distributive lattices and partial orders
Pontryagin duality
Pontryagin duality gives a duality on the category of locally compact abelian groups: given any such group G, the character group
χ(G) = Hom (G, S1)
given by continuous group homomorphisms from G to the circle group S1 can be endowed with the compact-open topology. Pontryagin duality states that the character group is again locally compact abelian and that
G ≅ χ(χ(G)).
Moreover, discrete groups correspond to compact abelian groups; finite groups correspond to finite groups. On the one hand, Pontryagin duality is a special case of Gelfand duality. On the other hand, it is the conceptual basis of Fourier analysis; see below.
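For the finite cyclic group Z/n the characters are g ↦ e2πikg/n; they again form a cyclic group of order n, and evaluating group elements on characters exhibits G ≅ χ(χ(G)) in this special case. A small numerical sketch:

```python
import cmath

n = 6  # work in the finite cyclic group G = Z/6

def character(k):
    """The k-th character of Z/n, a homomorphism into the circle group."""
    return lambda g: cmath.exp(2j * cmath.pi * k * g / n)

# Characters multiply pointwise, and chi_k * chi_l = chi_{(k + l) mod n},
# so the character group chi(G) is again cyclic of order n.
chi2, chi3, chi5 = character(2), character(3), character(5)
for g in range(n):
    assert cmath.isclose(chi2(g) * chi3(g), chi5(g))

# Double duality: each g in G defines an evaluation character chi -> chi(g)
# on chi(G); distinct elements give distinct evaluations, realising
# G = chi(chi(G)) in this finite case.
def evaluation(g):
    return tuple((round(character(k)(g).real, 9), round(character(k)(g).imag, 9))
                 for k in range(n))

assert len({evaluation(g) for g in range(n)}) == n
```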
Analytic dualities
In analysis, problems are frequently solved by passing to the dual description of functions and operators.
Fourier transform switches between functions on a vector space and its dual:
f̂(ξ) = ∫ f(x) e−2πixξ dx,
and conversely
f(x) = ∫ f̂(ξ) e2πixξ dξ.
If f is an L2-function on R or RN, say, then so is f̂, and applying the transform twice returns the original function up to reflection: the transform of f̂, evaluated at x, equals f(−x). Moreover, the transform interchanges operations of multiplication and convolution on the corresponding function spaces. A conceptual explanation of the Fourier transform is obtained by the aforementioned Pontryagin duality, applied to the locally compact groups R (or RN etc.): any character of R is given by ξ ↦ e−2πixξ. The dualizing character of Fourier transform has many other manifestations, for example, in alternative descriptions of quantum mechanical systems in terms of coordinate and momentum representations.
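A quick numerical illustration of this multiplication/convolution exchange, using the discrete Fourier transform as a finite analogue of the statement above (numpy's unnormalised FFT convention is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(64)
g = rng.standard_normal(64)

# Circular convolution computed directly from its definition ...
conv = np.array([sum(f[m] * g[(k - m) % 64] for m in range(64))
                 for k in range(64)])

# ... equals the inverse transform of the pointwise product of transforms:
# the Fourier transform turns convolution into multiplication.
conv_via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real
assert np.allclose(conv, conv_via_fft)

# Applying the (unnormalised) transform twice returns the reflected signal,
# scaled by N: FFT(FFT(f))[k] = N * f[(-k) mod N].
double = np.fft.fft(np.fft.fft(f))
assert np.allclose(double / 64, f[(-np.arange(64)) % 64])
print("convolution theorem and reflection property verified")
```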
Laplace transform is similar to Fourier transform and interchanges operators of multiplication by polynomials with constant coefficient linear differential operators.
Legendre transformation is an important analytic duality which switches between velocities in Lagrangian mechanics and momenta in Hamiltonian mechanics.
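A small symbolic check of this duality for the simplest case, assuming only sympy: the Legendre transform of the free-particle Lagrangian L(v) = ½mv² with respect to the velocity is the Hamiltonian H(p) = p²/(2m), and transforming back recovers L.

```python
import sympy as sp

m, v, p = sp.symbols("m v p", positive=True)

L = sp.Rational(1, 2) * m * v**2            # free-particle Lagrangian

# Legendre transform in v: set p = dL/dv, then H(p) = p*v - L with v eliminated.
p_of_v = sp.diff(L, v)                      # p = m*v
v_of_p = sp.solve(sp.Eq(p, p_of_v), v)[0]   # v = p/m
H = sp.simplify(p * v_of_p - L.subs(v, v_of_p))
assert sp.simplify(H - p**2 / (2 * m)) == 0
print(H)                                    # p**2/(2*m)

# The transform is involutive: transforming H back in p recovers L.
w = sp.symbols("w", positive=True)          # a fresh velocity symbol
p_back = sp.solve(sp.Eq(w, sp.diff(H, p)), p)[0]   # w = dH/dp  =>  p = m*w
L_back = sp.simplify(w * p_back - H.subs(p, p_back))
assert sp.simplify(L_back - sp.Rational(1, 2) * m * w**2) == 0
```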
Homology and cohomology
Theorems showing that certain objects of interest are the dual spaces (in the sense of linear algebra) of other objects of interest are often called dualities. Many of these dualities are given by a bilinear pairing of two K-vector spaces
A ⊗ B → K.
For perfect pairings, there is, therefore, an isomorphism of A to the dual of B.
Poincaré duality
Poincaré duality of a smooth compact complex manifold X is given by a pairing of singular cohomology with C-coefficients (equivalently, sheaf cohomology of the constant sheaf C)
Hi(X) ⊗ H2n−i(X) → C,
where n is the (complex) dimension of X. Poincaré duality can also be expressed as a relation of singular homology and de Rham cohomology, by asserting that the map
(ω, c) ↦ ∫c ω
(integrating a differential (2n − k)-form ω over a (2n − k)-(real-)dimensional cycle c) is a perfect pairing.
Poincaré duality also reverses dimensions; it corresponds to the fact that, if a topological manifold is represented as a cell complex, then the dual of the complex (a higher-dimensional generalization of the planar graph dual) represents the same manifold. In Poincaré duality, this homeomorphism is reflected in an isomorphism of the kth homology group and the (n − k)th cohomology group.
Duality in algebraic and arithmetic geometry
The same duality pattern holds for a smooth projective variety over a separably closed field, using l-adic cohomology with Qℓ-coefficients instead. This is further generalized to possibly singular varieties, using intersection cohomology instead, a duality called Verdier duality. Serre duality or coherent duality are similar to the statements above, but apply to cohomology of coherent sheaves instead.
With increasing level of generality, it turns out, an increasing amount of technical background is helpful or necessary to understand these theorems: the modern formulation of these dualities can be done using derived categories and certain direct and inverse image functors of sheaves (with respect to the classical analytical topology on manifolds for Poincaré duality, l-adic sheaves and the étale topology in the second case, and with respect to coherent sheaves for coherent duality).
Yet another group of similar duality statements is encountered in arithmetic: étale cohomology of finite, local and global fields (also known as Galois cohomology, since étale cohomology over a field is equivalent to group cohomology of the (absolute) Galois group of the field) admits similar pairings. The absolute Galois group G(Fq) of a finite field, for example, is isomorphic to Ẑ, the profinite completion of Z, the integers. Therefore, the perfect pairing (for any G-module M)
Hn(G, M) × H1−n (G, Hom (M, Q/Z)) → Q/Z
is a direct consequence of Pontryagin duality of finite groups. For local and global fields, similar statements exist (local duality and global or Poitou–Tate duality).
See also
Adjoint functor
Autonomous category
Convex body and polar body.
Dual abelian variety
Dual basis
Dual (category theory)
Dual code
Duality (electrical engineering)
Duality (optimization)
Dualizing module
Dualizing sheaf
Dual lattice
Dual norm
Dual numbers, a certain associative algebra; the term "dual" here is synonymous with double, and is unrelated to the notions given above.
Dual system
Koszul duality
Langlands dual
Linear programming#Duality
List of dualities
Matlis duality
Petrie duality
Pontryagin duality
S-duality
T-duality, Mirror symmetry
Notes
References
Duality in algebraic topology
James C. Becker and Daniel Henry Gottlieb, A History of Duality in Algebraic Topology
| Duality (mathematics) | [
"Mathematics"
] | 6,752 | [
"Mathematical structures",
"Category theory",
"Duality theories",
"Geometry"
] |
609,808 | https://en.wikipedia.org/wiki/Kinesin | A kinesin is a protein complex belonging to a class of motor proteins found in eukaryotic cells. Kinesins move along microtubule (MT) filaments and are powered by the hydrolysis of adenosine triphosphate (ATP) (thus kinesins are ATPases, a type of enzyme). The active movement of kinesins supports several cellular functions including mitosis, meiosis and transport of cellular cargo, such as in axonal transport, and intraflagellar transport. Most kinesins walk towards the plus end of a microtubule, which, in most cells, entails transporting cargo such as protein and membrane components from the center of the cell towards the periphery. This form of transport is known as anterograde transport. In contrast, dyneins are motor proteins that move toward the minus end of a microtubule in retrograde transport.
Discovery
The first kinesins, discovered in 1985, were identified as microtubule-based anterograde intracellular transport motors on the basis of their motility in cytoplasm extruded from the giant axon of the squid.
The founding member of this superfamily, kinesin-1, was isolated as a heterotetrameric fast axonal organelle transport motor consisting of four parts: two identical motor subunits (called Kinesin Heavy Chain (KHC) molecules) and two other molecules each known as a Kinesin Light Chain (KLC). These were discovered via microtubule affinity purification from neuronal cell extracts. Subsequently, a different, heterotrimeric plus-end-directed MT-based motor named kinesin-2, consisting of two distinct KHC-related motor subunits and an accessory "KAP" subunit, was purified from echinoderm egg/embryo extracts and is best known for its role in transporting protein complexes (intraflagellar transport particles) along axonemes during ciliogenesis. Molecular genetic and genomic approaches have led to the recognition that the kinesins form a diverse superfamily of motors that are responsible for multiple intracellular motility events in eukaryotic cells. For example, the genomes of mammals encode more than 40 kinesin proteins, organized into at least 14 families named kinesin-1 through kinesin-14.
Structure
Overall structure
Members of the kinesin superfamily vary in shape but the prototypical kinesin-1 motor consists of two Kinesin Heavy Chain (KHC) molecules which form a protein dimer (molecule pair) that binds two light chains (KLCs), which are unique for different cargos.
The heavy chain of kinesin-1 comprises a globular head (the motor domain) at the amino terminal end connected via a short, flexible neck linker to the stalk – a long, central alpha-helical coiled coil domain – that ends in a carboxy terminal tail domain which associates with the light-chains. The stalks of two KHCs intertwine to form a coiled coil that directs dimerization of the two KHCs. In most cases transported cargo binds to the kinesin light chains, at the TPR motif sequence of the KLC, but in some cases cargo binds to the C-terminal domains of the heavy chains.
Kinesin motor domain
The head is the signature of kinesin and its amino acid sequence is well conserved among various kinesins. Each head has two separate binding sites: one for the microtubule and the other for ATP. ATP binding and hydrolysis as well as ADP release change the conformation of the microtubule-binding domains and the orientation of the neck linker with respect to the head; this results in the motion of the kinesin. Several structural elements in the head, including a central beta-sheet domain and the Switch I and II domains, have been implicated as mediating the interactions between the two binding sites and the neck domain. Kinesins are structurally related to G proteins, which hydrolyze GTP instead of ATP. Several structural elements are shared between the two families, notably the Switch I and Switch II domain.
Basic kinesin regulation
Kinesins tend to have low basal enzymatic activity which becomes significant when microtubule-activated. In addition, many members of the kinesin superfamily can be self-inhibited by the binding of tail domain to the motor domain. Such self-inhibition can then be relieved via additional regulation such as binding to cargo, cargo adapters or other microtubule-associated proteins.
Cargo transport
In the cell, small molecules, such as gases and glucose, diffuse to where they are needed. Large molecules synthesised in the cell body, intracellular components such as vesicles and organelles such as mitochondria are too large (and the cytosol too crowded) to be able to diffuse to their destinations. Motor proteins fulfill the role of transporting large cargo about the cell to their required destinations. Kinesins are motor proteins that transport such cargo by walking unidirectionally along microtubule tracks, hydrolysing one molecule of adenosine triphosphate (ATP) at each step. It was thought that ATP hydrolysis powered each step, the energy released propelling the head forwards to the next binding site. However, it has been proposed that the head diffuses forward and the force of binding to the microtubule is what pulls the cargo along. In addition, viruses (HIV, for example) exploit kinesins to allow virus particle shuttling after assembly.
There is significant evidence that cargoes in-vivo are transported by multiple motors.
Direction of motion
Motor proteins travel in a specific direction along a microtubule. Microtubules are polar, meaning the heads only bind to the microtubule in one orientation, while ATP binding gives each step its direction through a process known as neck linker zippering.
Kinesins were previously known to move cargo towards the plus (+) end of a microtubule, also known as anterograde transport/orthograde transport. However, it has recently been discovered that in budding yeast cells the kinesin Cin8 (a member of the Kinesin-5 family) can move toward the minus end as well, in retrograde transport. This means these unique yeast kinesin homotetramers have the novel ability to move bi-directionally. Kinesin has, so far, only been shown to move toward the minus end when in a group, with motors sliding in the antiparallel direction in an attempt to separate microtubules. This dual directionality has been observed under identical conditions where free Cin8 molecules move towards the minus end, but cross-linking Cin8 moves toward the plus ends of each cross-linked microtubule. One specific study tested the speed at which Cin8 motors moved; the results yielded a range of about 25–55 nm/s, in the direction of the spindle poles. On an individual basis, it has been found that by varying ionic conditions Cin8 motors can become as fast as 380 nm/s. It is suggested that the bidirectionality of yeast kinesin-5 motors such as Cin8 and Cut7 is a result of coupling with other Cin8 motors and helps to fulfill the role of dynein in budding yeast, as opposed to the human homologue of these motors, the plus-directed Eg5. Minus-end-directed movement is also found in kinesin-14 family proteins (such as Drosophila melanogaster NCD, budding yeast KAR3, and Arabidopsis thaliana ATK5), which walk in the opposite direction, toward the microtubule minus end. This is not typical of kinesins but rather an exception to the normal direction of movement.
Another type of motor protein, known as dyneins, move towards the minus end of the microtubule. Thus, they transport cargo from the periphery of the cell towards the center. An example of this would be transport occurring from the terminal boutons of a neuronal axon to the cell body (soma). This is known as retrograde transport.
Mechanism of movement
In 2023 direct visualization of kinesin "walking" along a microtubule in real-time was reported. In a "hand-over-hand" mechanism, the kinesin heads step past one another, alternating the lead position. Thus in each step the leading head becomes the trailing head, while the trailing head becomes the leading head.
This cycle begins with the trailing head releasing inorganic phosphate (Pi) derived from the hydrolysis of ATP.
The trailing head detaches from the microtubule and rotates into its rightward displaced unbound state.
The leading head binds ATP which causes the neck linker to dock to it, which moves the trailing head around the leading head into a position further along the microtubule in the direction of travel. The trailing head remains unbound.
The ATP in the leading head is hydrolyzed.
The trailing head releases its ADP and then binds to the microtubule, becoming the leading head; a toy simulation of this cycle is sketched below.
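A toy stochastic sketch of this hand-over-hand cycle; the step size (~8 nm of centre-of-mass advance per ATP) and the cycle rate used below are illustrative assumptions for demonstration, not measured values for any particular kinesin.

```python
import random

STEP_NM = 8.0       # approx. centre-of-mass advance per hydrolysed ATP (assumed)
CYCLE_RATE = 100.0  # assumed complete chemomechanical cycles per second

def walk(duration_s, seed=1):
    """Simulate one kinesin walking processively: each completed cycle
    (Pi release, trailing-head detachment, ATP binding and neck-linker
    docking, hydrolysis, ADP release and rebinding) advances the motor
    by one step toward the microtubule plus end."""
    rng = random.Random(seed)
    t, position_nm, steps = 0.0, 0.0, 0
    while True:
        dwell = rng.expovariate(CYCLE_RATE)   # memoryless waiting time per cycle
        if t + dwell > duration_s:
            break
        t += dwell
        position_nm += STEP_NM
        steps += 1
    return position_nm, steps

distance_nm, n_steps = walk(1.0)
print(f"{n_steps} steps, {distance_nm:.0f} nm travelled in 1 s")
```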
Theoretical modeling
A number of theoretical models of the molecular motor protein kinesin have been proposed. Many challenges are encountered in theoretical investigations given the remaining uncertainties about the roles of protein structures, the precise way energy from ATP is transformed into mechanical work, and the roles played by thermal fluctuations. This is a rather active area of research. There is a need especially for approaches which better make a link with the molecular architecture of the protein and data obtained from experimental investigations.
The single-molecule dynamics are already well described, but it seems that these nanoscale machines typically work in large teams.
Single-molecule dynamics are based on the distinct chemical states of the motor and observations about its mechanical steps. For small concentrations of adenosine diphosphate, the motor's behaviour is governed by the competition of two chemomechanical motor cycles which determine the motor's stall force. A third cycle becomes important for large ADP concentrations. Models with a single cycle have been discussed too. Seiferth et al. demonstrated how quantities such as the velocity or the entropy production of a motor change when adjacent states are merged in a multi-cyclic model until eventually the number of cycles is reduced.
Recent experimental research has shown that kinesins, while moving along microtubules, interact with each other, the interactions being short-range and weakly attractive (1.6 ± 0.5 kBT). One model that has been developed takes into account these particle interactions, where the dynamic rates change accordingly with the energy of interaction. If the energy is positive, the rate of creating bonds (q) will be higher while the rate of breaking bonds (r) will be lower. The rates of entrance into and exit from the microtubule are changed as well by the energy (see figure 1 in reference 30). If the second site is occupied the rate of entrance will be α·q, and if the last-but-one site is occupied the rate of exit will be β·r. This theoretical approach agrees with the results of Monte Carlo simulations for this model, especially for the limiting case of very large negative energy. The results for the normal totally asymmetric simple exclusion process (TASEP) can be recovered from this model by setting the interaction energy equal to zero.
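A minimal Monte Carlo sketch of an interacting exclusion process of this kind. The interaction energy E (in units of kBT) rescales the hopping, entrance and exit rates roughly as described above, and E = 0 recovers ordinary TASEP; the exact rate-modification rule and all parameter values are illustrative assumptions rather than the published model.

```python
import math
import random

def simulate(L=200, alpha=0.3, beta=0.5, E=0.0, sweeps=5000, seed=0):
    """Interacting TASEP on an open lattice of L sites.
    A positive (attractive) interaction energy E, in units of kBT, raises the
    bond-creation rate q and lowers the bond-breaking rate r; E = 0 recovers
    the rates of the ordinary TASEP."""
    rng = random.Random(seed)
    q = math.exp(+E / 2)   # hop that creates a motor-motor bond
    r = math.exp(-E / 2)   # hop that breaks a motor-motor bond
    lattice = [0] * L
    exits = 0
    for _ in range(sweeps * L):
        i = rng.randrange(-1, L)              # -1 encodes an entry attempt
        if i == -1:                            # entry at the left boundary
            if not lattice[0]:
                rate = alpha * (q if lattice[1] else 1.0)
                if rng.random() < min(1.0, rate):
                    lattice[0] = 1
        elif i == L - 1:                       # exit at the right boundary
            if lattice[i]:
                rate = beta * (r if lattice[L - 2] else 1.0)
                if rng.random() < min(1.0, rate):
                    lattice[i] = 0
                    exits += 1
        else:                                  # bulk hop from site i to i + 1
            if lattice[i] and not lattice[i + 1]:
                breaks_bond = i > 0 and lattice[i - 1]
                creates_bond = i + 2 < L and lattice[i + 2]
                rate = (r if breaks_bond else 1.0) * (q if creates_bond else 1.0)
                if rng.random() < min(1.0, rate):
                    lattice[i], lattice[i + 1] = 0, 1
    return exits / sweeps   # crude estimate of the particle current per sweep

print(simulate(E=0.0), simulate(E=1.6))   # E = 0 corresponds to plain TASEP
```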
Mitosis
In recent years, it has been found that microtubule-based molecular motors (including a number of kinesins) have a role in mitosis (cell division). Kinesins are important for proper spindle length and are involved in sliding microtubules apart within the spindle during prometaphase and metaphase, as well as depolymerizing microtubule minus ends at centrosomes during anaphase. Specifically, Kinesin-5 family proteins act within the spindle to slide microtubules apart, while the Kinesin 13 family act to depolymerize microtubules.
Kinesin superfamily
Human kinesin superfamily members include the following proteins, which in the standardized nomenclature developed by the community of kinesin researchers, are organized into 14 families named kinesin-1 through kinesin-14:
1A – KIF1A, 1B – KIF1B, 1C – KIF1C = kinesin-3
2A – KIF2A, 2C – KIF2C = kinesin-13
3A – KIF3A, 3B – KIF3B, 3C – KIF3C = kinesin-2
4A – KIF4A, 4B – KIF4B = kinesin-4
5A – KIF5A, 5B – KIF5B, 5C – KIF5C = kinesin-1
6 – KIF6 = kinesin-9
7 – KIF7 = kinesin-4
9 – KIF9 = kinesin-9
11 – KIF11 = kinesin-5
12 – KIF12 = kinesin-12
13A – KIF13A, 13B – KIF13B = kinesin-3
14 – KIF14 = kinesin-3
15 – KIF15 = kinesin-12
16B – KIF16B = kinesin-3
17 – KIF17 = kinesin-2
18A – KIF18A, 18B – KIF18B = kinesin-8
19 – KIF19 = kinesin-8
20A – KIF20A, 20B – KIF20B = kinesin-6
21A – KIF21A, 21B – KIF21B = kinesin-4
22 – KIF22 = kinesin-10
23 – KIF23 = kinesin-6
24 – KIF24 = kinesin-13
25 – KIF25 = kinesin-14
26A – KIF26A, 26B – KIF26B = kinesin-11
27 – KIF27 = kinesin-4
C1 – KIFC1, C2 – KIFC2, C3 – KIFC3 = kinesin-14
kinesin-1 light chains:
1 – KLC1, 2 – KLC2, 3 – KLC3, 4 – KLC4
kinesin-2 associated protein:
KIFAP3 (also known as KAP-1, KAP3)
See also
Axonal transport
Dynein
Intraflagellar transport along cilia
Kinesin 8
Kinesin 13
KRP
Molecular motor
Transport by multiple-motor proteins
References
Further reading
External links
MBInfo - Kinesin transports cargo along microtubules
Animated model of kinesin walking
Ron Vale's Seminar: "Molecular Motor Proteins"
Animation of kinesin movement ASCB image library
The Inner Life of a Cell, 3D animation featuring a Kinesin transporting a vesicle
The Kinesin Homepage
3D electron microscopy structures of kinesin from the EM Data Bank(EMDB)
Motor proteins | Kinesin | [
"Chemistry"
] | 3,126 | [
"Molecular machines",
"Motor proteins"
] |
609,812 | https://en.wikipedia.org/wiki/Dynein | Dyneins are a family of cytoskeletal motor proteins (though they are actually protein complexes) that move along microtubules in cells. They convert the chemical energy stored in ATP to mechanical work. Dynein transports various cellular cargos, provides forces and displacements important in mitosis, and drives the beat of eukaryotic cilia and flagella. All of these functions rely on dynein's ability to move towards the minus-end of the microtubules, known as retrograde transport; thus, they are called "minus-end directed motors". In contrast, most kinesin motor proteins move toward the microtubules' plus-end, in what is called anterograde transport.
Classification
Dyneins can be divided into two groups: cytoplasmic dyneins and axonemal dyneins, which are also called ciliary or flagellar dyneins.
cytoplasmic
heavy chain: DYNC1H1, DYNC2H1
intermediate chain: DYNC1I1, DYNC1I2
light intermediate chain: DYNC1LI1, DYNC1LI2, DYNC2LI1
light chain: DYNLL1, DYNLL2, DYNLRB1, DYNLRB2, DYNLT1, DYNLT3
axonemal
heavy chain: DNAH1, DNAH2, DNAH3, DNAH5, DNAH6, DNAH7, DNAH8, DNAH9, DNAH10, DNAH11, DNAH12, DNAH13, DNAH14, DNAH17
intermediate chain: DNAI1, DNAI2
light intermediate chain: DNALI1
light chain: DNAL1, DNAL4
Function
Axonemal dynein causes sliding of microtubules in the axonemes of cilia and flagella and is found only in cells that have those structures.
Cytoplasmic dynein, found in all animal cells and possibly plant cells as well, performs functions necessary for cell survival such as organelle transport and centrosome assembly. Cytoplasmic dynein moves processively along the microtubule; that is, one or the other of its stalks is always attached to the microtubule so that the dynein can "walk" a considerable distance along a microtubule without detaching.
Cytoplasmic dynein helps to position the Golgi complex and other organelles in the cell. It also helps transport cargo needed for cell function such as vesicles made by the endoplasmic reticulum, endosomes, and lysosomes (Karp, 2005). Dynein is involved in the movement of chromosomes and positioning the mitotic spindles for cell division. Dynein carries organelles, vesicles and possibly microtubule fragments along the axons of neurons toward the cell body in a process called retrograde axonal transport. Additionally, dynein motor is also responsible for the transport of degradative endosomes retrogradely in the dendrites.
Mitotic spindle positioning
Cytoplasmic dynein positions the spindle at the site of cytokinesis by anchoring to the cell cortex and pulling on astral microtubules emanating from centrosome. While a postdoctoral student at MIT, Tomomi Kiyomitsu discovered how dynein has a role as a motor protein in aligning the chromosomes in the middle of the cell during the metaphase of mitosis. Dynein pulls the microtubules and chromosomes to one end of the cell. When the end of the microtubules become close to the cell membrane, they release a chemical signal that punts the dynein to the other side of the cell. It does this repeatedly so the chromosomes end up in the center of the cell, which is necessary in mitosis. Budding yeast have been a powerful model organism to study this process and has shown that dynein is targeted to plus ends of astral microtubules and delivered to the cell cortex via an offloading mechanism.
Viral replication
Dynein and kinesin can both be exploited by viruses to mediate the viral replication process. Many viruses use the microtubule transport system to transport nucleic acid/protein cores to intracellular replication sites after invasion at the host cell membrane. Not much is known about viruses' motor-specific binding sites, but it is known that some viruses contain proline-rich sequences (that diverge between viruses) which, when removed, reduce dynactin binding, axon transport (in culture), and neuroinvasion in vivo. This suggests that proline-rich sequences may be a major binding site that co-opts dynein.
Structure
Each molecule of the dynein motor is a complex protein assembly composed of many smaller polypeptide subunits. Cytoplasmic and axonemal dynein contain some of the same components, but they also contain some unique subunits.
Cytoplasmic dynein
Cytoplasmic dynein, which has a molecular mass of about 1.5 megadaltons (MDa), is a dimer of dimers, containing approximately twelve polypeptide subunits: two identical "heavy chains", 520 kDa in mass, which contain the ATPase activity and are thus responsible for generating movement along the microtubule; two 74 kDa intermediate chains which are believed to anchor the dynein to its cargo; two 53–59 kDa light intermediate chains; and several light chains.
The force-generating ATPase activity of each dynein heavy chain is located in its large doughnut-shaped "head", which is related to other AAA proteins, while two projections from the head connect it to other cytoplasmic structures. One projection, the coiled-coil stalk, binds to and "walks" along the surface of the microtubule via a repeated cycle of detachment and reattachment. The other projection, the extended tail, binds to the light intermediate, intermediate and light chain subunits which attach dynein to its cargo. The alternating activity of the paired heavy chains in the complete cytoplasmic dynein motor enables a single dynein molecule to transport its cargo by "walking" a considerable distance along a microtubule without becoming completely detached.
In the apo-state of dynein, the motor is nucleotide free, the AAA domain ring exists in an open conformation, and the MTBD exists in a high affinity state. Much about the AAA domains remains unknown, but AAA1 is well established as the primary site of ATP hydrolysis in dynein. When ATP binds to AAA1, it initiates a conformational change of the AAA domain ring into the “closed” configuration, movement of the buttress, and a conformational change in the linker. The linker becomes bent and shifts from AAA5 to AAA2 while remaining bound to AAA1. One attached alpha-helix from the stalk is pulled by the buttress, sliding the helix half a heptad repeat relative to its coiled-coil partner, and kinking the stalk. As a result, the MTBD of dynein enters a low-affinity state, allowing the motor to move to new binding sites. Following hydrolysis of ATP, the stalk rotates, moving dynein further along the MT. Upon the release of the phosphate, the MTBD returns to a high affinity state and rebinds the MT, triggering the power stroke. The linker returns to a straight conformation and swings back to AAA5 from AAA2, creating a lever action that produces the greatest displacement of dynein achieved by the power stroke. The cycle concludes with the release of ADP, which returns the AAA domain ring back to the “open” configuration.
Yeast dynein can walk along microtubules without detaching; however, in metazoans, cytoplasmic dynein must be activated by the binding of dynactin, another multisubunit protein that is essential for mitosis, and a cargo adaptor. The tri-complex, which includes dynein, dynactin and a cargo adaptor, is ultra-processive and can walk long distances without detaching in order to reach the cargo's intracellular destination. Cargo adaptors identified thus far include BicD2, Hook3, FIP3 and Spindly. The light intermediate chain, which is a member of the Ras superfamily, mediates the attachment of several cargo adaptors to the dynein motor. The other tail subunits may also help facilitate this interaction as evidenced in a low resolution structure of dynein-dynactin-BicD2.
One major form of motor regulation within cells for dynein is dynactin. It may be required for almost all cytoplasmic dynein functions. Currently, it is the best studied dynein partner. Dynactin is a protein that aids in intracellular transport throughout the cell by linking to cytoplasmic dynein. Dynactin can function as a scaffold for other proteins to bind to. It also functions as a recruiting factor that localizes dynein to where it should be. There is also some evidence suggesting that it may regulate kinesin-2. The dynactin complex is composed of more than 20 subunits, of which p150(Glued) is the largest. There is no definitive evidence that dynactin by itself affects the velocity of the motor. It does, however, affect the processivity of the motor. The binding regulation is likely allosteric: experiments have shown that the enhancements provided in the processivity of the dynein motor do not depend on the p150 subunit binding domain to the microtubules.
Axonemal dynein
Axonemal dyneins come in multiple forms that contain either one, two or three non-identical heavy chains (depending upon the organism and location in the cilium). Each heavy chain has a globular motor domain with a doughnut-shaped structure believed to resemble that of other AAA proteins, a coiled coil "stalk" that binds to the microtubule, and an extended tail (or "stem") that attaches to a neighboring microtubule of the same axoneme. Each dynein molecule thus forms a cross-bridge between two adjacent microtubules of the ciliary axoneme. During the "power stroke", which causes movement, the AAA ATPase motor domain undergoes a conformational change that causes the microtubule-binding stalk to pivot relative to the cargo-binding tail with the result that one microtubule slides relative to the other (Karp, 2005). This sliding produces the bending movement needed for cilia to beat and propel the cell or other particles. Groups of dynein molecules responsible for movement in opposite directions are probably activated and inactivated in a coordinated fashion so that the cilia or flagella can move back and forth. The radial spoke has been proposed as the (or one of the) structures that synchronizes this movement.
The regulation of axonemal dynein activity is critical for flagellar beat frequency and cilia waveform. Modes of axonemal dynein regulation include phosphorylation, redox, and calcium. Mechanical forces on the axoneme also affect axonemal dynein function. The heavy chains of inner and outer arms of axonemal dynein are phosphorylated/dephosphorylated to control the rate of microtubule sliding. Thioredoxins associated with the other axonemal dynein arms are oxidized/reduced to regulate where dynein binds in the axoneme. Centrin and components of the outer axonemal dynein arms detect fluctuations in calcium concentration. Calcium fluctuations play an important role in altering cilia waveform and flagellar beat frequency (King, 2012).
History
The protein responsible for movement of cilia and flagella was first discovered and named dynein in 1963 (Karp, 2005). 20 years later, cytoplasmic dynein, which had been suspected to exist since the discovery of flagellar dynein, was isolated and identified (Karp, 2005).
Chromosome segregation during meiosis
Segregation of homologous chromosomes to opposite poles of the cell occurs during the first division of meiosis. Proper segregation is essential for producing haploid meiotic products with a normal complement of chromosomes. The formation of chiasmata (crossover recombination events) appears to generally facilitate proper segregation. However, in the fission yeast Schizosaccharomyces pombe, when chiasmata are absent, dynein promotes segregation. Dhc1, the motor subunit of dynein, is required for chromosomal segregation in both the presence and absence of chiasmata. The dynein light chain Dlc1 protein is also required for segregation, specifically when chiasmata are absent.
See also
Molecular motors
References
Further reading
External links
Ron Vale's Seminar: "Molecular Motor Proteins"
Motor proteins | Dynein | [
"Chemistry"
] | 2,757 | [
"Molecular machines",
"Motor proteins"
] |
609,865 | https://en.wikipedia.org/wiki/Tape%20head | A tape head is a type of transducer used in tape recorders to convert electrical signals to magnetic fluctuations and vice versa. They can also be used to read credit/debit/gift cards because the strip of magnetic tape on the back of a credit card stores data the same way that other magnetic tapes do. Cassettes, reel-to-reel tapes, 8-tracks, VHS tapes, and even floppy disks and early hard drive disks all use the same principle of physics to store and read back information. The medium is magnetized in a pattern. It then moves at a constant speed over an electromagnet. Since the moving tape is carrying a changing magnetic field with it, it induces a varying voltage across the head. That voltage can then be amplified and connected to speakers in the case of audio, or measured and sorted into ones and zeroes in the case of digital data.
Principles of operation
The electromagnetic arrangement of a tape head is generally similar for all types, though the physical design varies considerably depending on the application - for example videocassette recorders (VCR) use rotating heads which implement a helical scan, whereas most audio recorders have fixed heads. A head consists of a core of magnetic material arranged into a doughnut shape or toroid, into which a very narrow gap has been let. This gap is filled with a diamagnetic material, such as gold. This forces the magnetic flux out of the gap into the magnetic tape medium more than air would, and also forces the magnetic flux out of the magnetic tape medium into the gap. The flux thus magnetises the tape or induces current in the coil at that point. A coil of wire wrapped around the core opposite the gap interfaces to the electrical side of the apparatus. The basic head design is fully reversible - a variable magnetic field at the gap will induce an electric current in the coil, and an electric current in the coil will induce a magnetic field at the gap.
Reversibility
While a head is reversible in principle, and very often in practice, there are desirable characteristics that differ between the playback and recording phases. One of these is the impedance of the coil - playback preferring a high impedance, and recording a low one. In the very best tape recorders, separate heads are used to avoid compromising these desirable characteristics. Having separate heads for recording and playback has other advantages, such as off-tape monitoring during recording, etc.
Head gap width
The width of the head gap is also critical - the narrower the gap, the better the head will be - a narrow gap gives much better transcription in the magnetic domain (which translates to more output with high-frequency signals in the case of playback heads). The desirability of a narrow gap means that most practical heads are made by forming a narrow V-shaped groove in the back face of the core, and grinding away the front face until the V-groove is just breached. In this way, gaps of the order of micrometres are achievable.
A record head, on the other hand, has a gap typically six times larger than that of the replay head; this gives a larger flux to magnetise the tape. The ideal gap sizes in a cassette deck are thus a wide record head gap and a narrow playback head gap. The larger gap does not affect frequency response because the 'image' is largely made by the trailing edge of the gap. A combined record/replay head has a compromise-size gap, typically three times that of a replay-only head.
There are also negative aspects of narrow head gaps, particularly for magnetic recording. The narrower the head gap, the more bias signal must be used to maintain linearity of the signal on tape which in turn will reduce the high frequency headroom or SOL (Saturated Output Level), particularly with slower tape speeds. Manufacturers must find a compromise between intended tape speeds and head gaps for this reason.
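The playback side of this trade-off can be put into numbers with the standard first-order gap-loss model, in which the reproduced amplitude falls as |sin(x)/x| with x = πg/λ, where g is the effective gap length and λ = v/f is the recorded wavelength. The gap widths and tape speed below are illustrative.

```python
import math

def gap_loss_db(gap_um, freq_hz, tape_speed_mm_s):
    """First-order playback gap loss 20*log10|sin(x)/x| with x = pi*g/lambda.
    The response has its first null where the gap equals one recorded wavelength."""
    wavelength_um = (tape_speed_mm_s * 1000.0) / freq_hz   # micrometres per cycle
    x = math.pi * gap_um / wavelength_um
    ratio = abs(math.sin(x) / x) if x else 1.0
    return 20.0 * math.log10(ratio) if ratio > 0 else float("-inf")

# Compact-cassette speed (47.6 mm/s); compare a narrow 1 um playback gap
# with a record-sized 6 um gap used for playback.
for gap_um in (1.0, 6.0):
    losses = {f: round(gap_loss_db(gap_um, f, 47.6), 1)
              for f in (1000, 10000, 18000)}
    print(f"gap {gap_um} um:", losses, "dB")
```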
Types
The physical design of a head depends on whether it is fixed or rotating. In either case, the face of the head where the gap is must be made hard wearing and highly smooth to avoid excessive head wear. It can also be seen that due to the construction method of the head gap, head wear will tend to widen the gap, reducing the head's performance over time. The vertical alignment of the heads (the azimuth) must also match between recording and playback for good fidelity, and the gap should be as close to exactly vertical as possible for highest frequency response. Most tape transport mechanisms will allow fine mechanical adjustment of the azimuth of the heads. Sometimes this can be achieved by automatic circuitry - the actual mechanical azimuth adjustment being carried out by taking advantage of the piezo effect of certain types of crystal material.
Rotating heads
Rotating play heads, as used in video recorders, digital audio tape and other applications, are used to achieve a high relative head/tape speed while maintaining a low overall tape transport speed. One or more transducers are mounted on a rotating drum set at an angle to the tape. The drum spins rapidly compared to the speed that the tape moves past it, so that the transducers describe a path of stripes across the tape, rather than linearly along it as a fixed head does. The wear characteristics of such helical scan heads are even more critical, and highly polished heads and tapes are required. The electrical signals of rotating heads are coupled either inductively or capacitively - there is no direct connection to the head coils.
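A back-of-envelope comparison of head-to-tape speed for a fixed head versus a rotating drum; the drum diameter, rotation rate and linear tape speed are rough VHS-class figures used purely for illustration.

```python
import math

DRUM_DIAMETER_M = 0.062         # assumed ~62 mm head drum (illustrative)
DRUM_REV_PER_S = 30.0           # assumed ~1800 rpm drum rotation
LINEAR_TAPE_SPEED_M_S = 0.0234  # assumed ~23.4 mm/s linear tape transport

# A fixed head only ever sees the linear transport speed of the tape.
fixed_head_speed = LINEAR_TAPE_SPEED_M_S

# A rotating head sweeps roughly the drum circumference every revolution, so
# the head-to-tape ("writing") speed is dominated by the drum rotation.
rotating_head_speed = math.pi * DRUM_DIAMETER_M * DRUM_REV_PER_S

print(f"fixed head   : {fixed_head_speed:.3f} m/s")
print(f"rotating head: {rotating_head_speed:.2f} m/s "
      f"(~{rotating_head_speed / fixed_head_speed:.0f}x higher)")
```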
Erase heads
An erase head is constructed in a similar manner to a record or replay head, but has a much larger gap, or more frequently, two large gaps. The erase head is powered during recording from a high frequency source (usually the same oscillator that provides the AC bias). In some inexpensive cassette recorder designs, the erase head is a permanent magnet that is mechanically moved into contact with the moving tape only during recording. Permanent magnet erase heads are also sometimes used in machines that are equipped with DC bias.
Cross-field heads
Instead of feeding both the bias signal and the audio signal into the same recording head, a few brands of audio tape recorder, notably Tandberg, Akai and its US cousin Roberts, used a separate bias head on the opposite side of the tape from the recording head; this system was termed cross-field.
Head materials
Record and replay heads are traditionally made of soft iron (the softness is an essential requisite for good record and replay characteristics). This material features extremely good electro-acoustical properties, but wears away fairly rapidly with a consequent deterioration of performance. Some higher end recorders featured heads made from ferrite, which features excellent electro-acoustical properties while being a very hard material which resists wear. Its two main disadvantages are that it is brittle and easily damaged, and that it has a much higher noise output due to the Barkhausen effect. In more recent years, more exotic materials have appeared, some involving ceramics, which offer the best of both of the traditional materials.
Cleaning
With use, the head becomes dirty from loose tape shedding, which distorts the sound. The tape head can be cleaned using a cloth with alcohol. Video head cleaner can be used to clean video, audio, erase, or control track heads.
See also
Recording head
Magnetic tape sound recording
References
Magnetic devices
Audio storage
Tape recording | Tape head | [
"Technology"
] | 1,502 | [
"Recording devices",
"Tape recording"
] |
610,000 | https://en.wikipedia.org/wiki/EUMETSAT | The European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) is an intergovernmental organisation created through an international convention agreed by a current total of 30 European Member States.
EUMETSAT's primary objective is to establish, maintain and exploit European systems of operational meteorological satellites. EUMETSAT is responsible for the launch and operation of the satellites and for delivering satellite data to end-users as well as contributing to the operational monitoring of climate and the detection of global climate changes.
The activities of EUMETSAT contribute to a global meteorological satellite observing system coordinated with other space-faring states.
Satellite observations are an essential input to numerical weather prediction systems and also assist the human forecaster in the diagnosis of potentially hazardous weather developments. Of growing importance is the capacity of weather satellites to gather long-term measurements from space in support of climate change studies.
EUMETSAT is not an institution or agency of the European Union, although the majority of its members are EU member states. The organisation became a signatory to the International Charter on Space and Major Disasters in 2012, thus providing for the global charitable use of its space assets.
Member and cooperating states
The national mandatory contributions of member states are proportional to their gross national income. However, the cooperating countries contribute only half of the fee they would pay for full membership. The convention establishing EUMETSAT was opened for signature in 1983 and entered into force on 19 June 1986.
Satellite programmes
There are two types of programmes:
Geostationary satellites, providing a continuous view of the Earth disc from a stationary position in space.
Polar-orbiting satellites, flying at a much lower altitude, sending back more precise details about atmospheric temperature and moisture profiles, although with less frequent global coverage.
High-level, stationary in space (Geostationary satellites)
The current provision of geostationary satellite surveillance is enabled by the Meteosat series of satellites operated by EUMETSAT, generating images of the full Earth disc and data for forecasting.
The first generation of Meteosat, launched in 1977, provided continuous, reliable observations to a large user group. In response to demand for more frequent and comprehensive data, Meteosat Second Generation (MSG) was developed with key improvements in swift recognition and prediction of thunderstorms, fog, and the small depressions which can lead to dangerous wind storms. MSG was launched in 2004. To capture foreseeable user needs up to 2025, a Meteosat Third Generation (MTG) is in active preparation.
Low-level orbiting (Polar satellites)
EUMETSAT Polar System
The lack of observational coverage in certain parts of the globe, particularly the Pacific Ocean and continents of the southern hemisphere, has led to the increasingly important role for polar-orbiting satellite data in numerical weather prediction and climate monitoring.
EUMETSAT Polar System (EPS) Metop mission consists of three polar orbiting Metop satellites, to be flown successively for more than 14 years. The first, Metop-A, was launched by a Russian Soyuz-2.1a rocket from Baikonur on October 19, 2006, at 22:28 Baikonur time (16:28 UTC). Metop-A was initially controlled by ESOC for the LEOP phase immediately following launch, with control handed over to EUMETSAT 72 hours after lift-off. EUMETSAT's first commands to the satellite were sent at 14:04 UTC on October 22, 2006.
The second EPS satellite, Metop-B, was launched from Baikonur on 17 September 2012, and the third, Metop-C, was launched from Centre Spatial Guyanais in Kourou, French Guiana on 7 November 2018 by Arianespace using a Soyuz ST-B launch vehicle with a Fregat-M upper stage.
Positioned at approximately 817 km above the Earth, special instruments on board Metop-A can deliver far more precise details about atmospheric temperature and moisture profiles than a geostationary satellite.
The satellites also ensure that the more remote regions of the globe, particularly in Northern Europe as well as the oceans in the Southern hemisphere, are fully covered.
The EPS programme is also the European half of a joint program with NOAA, called the International Joint Polar System (IJPS). NOAA has operated a continuous series of low earth orbiting meteorological satellite since April 1960. Many of the instruments on Metop are also operated on NOAA/POES satellites, providing similar data types across the IJPS.
Instruments on Metop
A/DCS (Advanced Data Collection System)
AMSU-A1 and AMSU-A2
ASCAT Advanced Scatterometer
AVHRR (Advanced Very High Resolution Radiometer)
GOME-2 (Global Ozone Monitoring Experiment) — instrument to monitor ozone levels
GRAS (GNSS Receiver for Atmospheric Sounding: global navigation satellite systems radio occultation)
HIRS (High Resolution Infrared Sounder)
IASI (Infrared atmospheric sounding interferometer)
MHS (Microwave Humidity Sounder)
SARP-3 and SARR (Search And Rescue Processor and Search And Rescue Repeater)
SEM (Space Environment Monitor, to measure the intensity of the Earth's radiation belts and the proton/electron flux.)
Jason / Sentinel-6
Jason-2
The Jason-2 programme is an international partnership across multiple organisations, including EUMETSAT, CNES, and the US agencies NASA and NOAA.
Jason-2 was launched successfully from Vandenberg Air Force Base aboard a Delta-II rocket on 20 June 2008, 7:46 UTC.
Jason-2 reliably delivers detailed oceanographic data vital to our understanding of weather forecasting and climate change monitoring. Jason-2 provides data on the decadal (10-yearly) oscillations in large ocean basins, such as the Atlantic Ocean; mesoscale variability, and surface wind and wave conditions. Jason-2 measurements contribute to the European Centre for Medium-Range Weather Forecasts (ECMWF) satellite data assimilation, helping improve global atmosphere and ocean forecasting.
Altimetric data from Jason-2 have also helped create detailed decade-long global observations and analyses of the El Niño and La Niña phenomena, opening the way to new discoveries about ocean circulation and its effects on climate, and providing new insights into ocean tides, turbulent ocean eddies and marine gravity.
Jason-3
Jason-3 was launched on 17 January 2016 from Vandenberg Air Force Base in California on a SpaceX Falcon 9 launcher. It has been operational since 14 October 2016.
Jason-3 is on a non-Sun-synchronous low Earth orbit at 66° inclination and 1336 km altitude, optimised to eliminate tidal aliasing from sea surface height and mean sea level measurements. Jason-2 flies on the same orbit but at 162°.
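From the quoted altitudes the orbital periods can be estimated with Kepler's third law; the Earth radius and gravitational parameter below are standard values, and the results (~112 minutes for the Jason orbit, ~101 minutes for Metop) are rough estimates for illustration.

```python
import math

MU_EARTH_KM3_S2 = 398600.4418   # Earth's standard gravitational parameter
EARTH_RADIUS_KM = 6371.0        # mean Earth radius

def orbital_period_minutes(altitude_km):
    """Circular-orbit period from Kepler's third law, T = 2*pi*sqrt(a^3/mu)."""
    a = EARTH_RADIUS_KM + altitude_km       # semi-major axis of a circular orbit
    return 2.0 * math.pi * math.sqrt(a**3 / MU_EARTH_KM3_S2) / 60.0

print(f"Jason-3 (~1336 km): {orbital_period_minutes(1336):.1f} minutes per orbit")
print(f"Metop   (~817 km) : {orbital_period_minutes(817):.1f} minutes per orbit")
```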
It is built on the same cooperation as Jason-2, involving EUMETSAT, NOAA, CNES and NASA, with Copernicus expected to support the European contribution to operations, as part of its HPOA activity, which also covers contributions to the Jason-CS programme.
Sentinel-6/Jason-CS
The Jason satellites were succeeded by the Sentinel-6 for the radar altimeter mission, part of the European Union's Copernicus Programme for Earth observation, with the objective of providing an operational service for high-precision measurements of global sea-level. This mission is implemented as a multi-partner cooperation between the European Commission and EUMETSAT, ESA, NOAA and NASA, with support from the French space agency, CNES.
The mission, implemented through the two Sentinel-6/Jason-CS satellites (Sentinel-6 Michael Freilich and Sentinel-6B), aims to continue high precision ocean altimetry measurements in the 2020–2030 time-frame. A secondary objective is to collect high resolution vertical profiles of temperature, using the GNSS Radio-Occultation sounding technique, to assess temperature changes in the troposphere and stratosphere and to support Numerical Weather Prediction.
The launch of the first satellite – Sentinel-6 Michael Freilich – occurred successfully on 21 November 2020 from Vandenberg AFB in California, USA on a SpaceX Falcon-9 launch vehicle. The satellite was named in honour of Michael Freilich (oceanographer), an oceanographer and former director of NASA's Earth Science Division. Sentinel-6 Michael Freilich succeeded Jason-3 as the reference mission for satellite ocean altimetry in April 2022.
The launch of Sentinel-6B is foreseen for late-2025, also on a SpaceX Falcon-9.
See also
EUMETNET
the European Centre for Medium-Range Weather Forecasts (ECMWF)
the French CNES (CNES)
the US National Oceanic and Atmospheric Administration (NOAA), the US equivalent of EUMETSAT
the US NASA (NASA), the US equivalent of the ESA
References
External links
EUMETSAT weather satellite viewer Online EUMETSAT weather satellite viewer with 2 months of archived data.
European space programmes
Satellite meteorology
Meteorological organizations
Space organizations
Intergovernmental organizations established by treaty
International organisations based in Germany
1986 establishments in Europe
Scientific organizations established in 1986 | EUMETSAT | [
"Astronomy",
"Engineering"
] | 1,878 | [
"Space programs",
"European space programmes",
"Astronomy organizations",
"Space organizations"
] |
610,074 | https://en.wikipedia.org/wiki/Alexander%20Oparin | Alexander Ivanovich Oparin (2 March [O.S. 18 February] 1894 – 21 April 1980) was a Soviet biochemist notable for his theories about the origin of life and for his book The Origin of Life.
He also studied the biochemistry of material processing by plants and enzyme reactions in plant cells. He showed that many food production processes were based on biocatalysis and developed the foundations for industrial biochemistry in the USSR.
Life
Oparin was born in Uglich in 1894 into a merchant family. He and his parents soon moved to Kokayevo, a nearby village. Oparin had an older brother, , who became an economist.
Oparin graduated from the Moscow State University in 1917 and became a professor of biochemistry there in 1927. Many of his early papers were about plant enzymes and their role in metabolism. His first experimental studies were devoted to the chemistry of respiration. In them, he showed that chlorogenic acid is an essential component of redox reactions in the cell. In 1924 he put forward a hypothesis suggesting that life on Earth developed through a gradual chemical evolution of carbon-based molecules in the Earth's primordial soup. In 1935, along with academician Aleksei Bach, he founded the Biochemistry Institute of the Soviet Academy of Sciences. In 1939, Oparin became a Corresponding Member of the Academy, and, in 1946, a full member. In 1937, he organized the Department of Technical Biochemistry at the Moscow Technological Institute of Food Industry.
In the 1940s and 1950s, Oparin supported the theories of Trofim Lysenko and Olga Lepeshinskaya, who made claims about "the origin of cells from noncellular matter". "Taking the party line" helped advance his career. However, according to cytologist :
From 1942 to 1960, Oparin headed the Department of Plant Biochemistry at Moscow State University, where he gave lectures on general biochemistry, technical biochemistry, and special courses on enzymology and the problem of the origin of life. In 1970, he was elected President of the International Society for the Study of the Origins of Life.
Oparin was one of the academicians of the USSR Academy of Sciences who signed a letter from scientists to the newspaper Pravda in 1973 condemning "the behavior of Academician A.D. Sakharov." The letter accused Sakharov of having "made a number of statements discrediting the political system, foreign and domestic policies of the Soviet Union," and the academics assessed his human rights activities as "discrediting the honor and dignity of the Soviet scientist."
Oparin died in Moscow on 21 April 1980, and was interred in Novodevichy Cemetery in Moscow.
Oparin became Hero of Socialist Labour in 1969, received the Lenin Prize in 1974, and was awarded the Lomonosov Gold Medal in 1979 "for outstanding achievements in biochemistry". He was also a five-time recipient of the Order of Lenin.
Theory of the origin of life
Although Oparin started out reviewing various panspermia theories, including those of Hermann von Helmholtz and William Thomson (Lord Kelvin), he was primarily interested in how life began. As early as 1922, he asserted that:
There is no fundamental difference between a living organism and lifeless matter. The complex combination of manifestations and properties characteristic of life must have arisen as a part of the process of the evolution of matter.
Taking into account the recent discovery of methane in the atmospheres of Jupiter and the other giant planets, Oparin suggested that the infant Earth had possessed a strongly reducing atmosphere, containing methane, ammonia, hydrogen and water vapor. In his opinion, these were the raw materials for the evolution of life.
In Oparin's formulation, there were first only simple solutions of organic matter, the behavior of which was governed by the properties of their component atoms and the arrangement of these atoms into a molecular structure. Gradually though, he said, the resulting growth and increased complexity of molecules brought new properties into being and a new colloidal-chemical order developed as a successor to more simple relationships between and among organic chemicals. These newer properties were determined by the interactions of these more complex molecules.
Oparin posited that this process brought biological orderliness into prominence. According to Oparin, competition, speed of cell growth, survival of the fittest, struggle for existence and, finally, natural selection determined the form of material organization characteristic of modern-day living things.
Oparin outlined a way he thought that basic organic chemicals might have formed into microscopic localized systems, from which primitive living things could have developed. He cited work done by de Jong and Sidney W. Fox on coacervates and research by others, including himself, into organic chemicals which, in solution, might spontaneously form droplets and layers. Oparin suggested that different types of coacervates could have formed in the Earth's primordial ocean and been subject to a selection process that led, eventually, to life.
While Oparin himself was unable to conduct experiments to test any of these ideas, later researchers tried. In 1953, Stanley Miller attempted an experiment to investigate whether chemical self-organization could have been possible on pre-historic Earth. The Miller–Urey experiment introduced heat (to provide reflux) and electrical energy (sparks, to simulate lightning) into a mixture of several simple components that would be present in a reducing atmosphere. Within a fairly short period of time a variety of familiar organic compounds, such as amino acids, were synthesised. The compounds that formed were somewhat more complex than the molecules present at the beginning of the experiment.
The influence of dialectical materialism on Oparin's theory
The Communist Party's official interpretation of Marxism, dialectical materialism, fit Oparin's speculation on the origins of life as 'a flow, an exchange, a dialectical unity'. This notion was reinforced by Oparin's association with Lysenko.
Major works
Oparin, A. I. Proiskhozhdenie zhizni. Moscow: Izd. Moskovskii Rabochii, 1924.
English translations:
Oparin, A. I. "The origin of life", translation by Ann Synge. In: Bernal, J. D. (ed.), The origin of life, Weidenfeld & Nicolson, London, 1967, p. 199–234. Google, Valencia University.
Oparin, A. I. The Origin and Development of Life (NASA TTF-488). Washington, D.C.: GPO, 1968.
Oparin, A. I. Vozniknovenie zhizni na zemle. Moscow: Izd. Akad. Nauk SSSR, 1936.
English translations:
Oparin, A. I. The Origin of Life, 1st ed., New York: Macmillan, 1938.
Oparin, A. I. The Origin of Life, 2nd ed., New York: Dover, 1953, reprinted in 2003, Google.
Oparin, A. I. The Origin of Life on the Earth, 3rd ed., New York: Academic Press, 1957, BHL
Oparin, A., Fesenkov, V. Life in the Universe. Moscow: USSR Academy of Sciences publisher, 3rd edition, 1956.
English translation: Oparin, A., and V. Fesenkov. Life in the Universe. New York: Twayne Publishers (1961).
"The External Factors in Enzyme Interactions Within a Plant Cell"
"Life, Its Nature, Origin and Evolution"
"The History of the Theory of Genesis and Evolution of Life"
See also
Abiogenesis
Biochemistry
List of independent discoveries ("Primordial soup" theory of the origin of life from carbon-based molecules, 1924)
Microsphere
Oparin Medal
Proteinoid
Sidney W. Fox
Stanley Miller
References
1894 births
1980 deaths
20th-century biochemists
People from Uglich
Academic staff of the D. Mendeleev University of Chemical Technology of Russia
Foreign members of the Bulgarian Academy of Sciences
Full Members of the USSR Academy of Sciences
Members of the German Academy of Sciences at Berlin
Members of the German National Academy of Sciences Leopoldina
Moscow State University alumni
Members of the Supreme Soviet of the Russian Soviet Federative Socialist Republic, 1951–1955
Members of the Supreme Soviet of the Russian Soviet Federative Socialist Republic, 1955–1959
Heroes of Socialist Labour
Kalinga Prize recipients
Recipients of the Lenin Prize
Recipients of the Lomonosov Gold Medal
Recipients of the Order of Lenin
Recipients of the Order of the Red Banner of Labour
Evolutionary biologists
Proceedings of the USSR Academy of Sciences editors
Origin of life
Russian biochemists
Russian biologists
Soviet biochemists
Soviet biologists
Burials at Novodevichy Cemetery | Alexander Oparin | [
"Technology",
"Biology"
] | 1,801 | [
"Science and technology awards",
"Recipients of the Lomonosov Gold Medal",
"Origin of life",
"Biological hypotheses"
] |
610,149 | https://en.wikipedia.org/wiki/Detonating%20cord | Detonating cord (also called detonation cord, detacord, detcord, blasting rope, or primer cord) is a thin, flexible plastic tube usually filled with pentaerythritol tetranitrate (PETN, pentrite). With the PETN exploding at a rate of approximately 6,000–7,000 m/s, any common length of detonation cord appears to explode instantaneously. It is a high-speed fuse which explodes, rather than burns, and is suitable for detonating high explosives. The detonation velocity is sufficient to use it for synchronizing multiple charges to detonate almost simultaneously even if the charges are placed at different distances from the point of initiation. It is used to reliably and inexpensively chain together multiple explosive charges. Typical uses include mining, drilling, demolitions, and warfare.
"Cordtex" and "Primacord" are two of many trademarks which have slipped into use as a generic term for this material.
Effects
As a transmission medium, it can act as a downline between the initiator (usually a trigger) and the blast area, and as a trunkline connecting several different explosive charges. As a timing mechanism, detonation cord detonates at a very reliable rate (about 6,000–7,000 m/s), enabling engineers to control the pattern in which charges are detonated. This is particularly useful for demolitions, when structural elements need to be destroyed in a specific order to control the collapse of a building.
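For a rough sense of the timing scale, a minimal sketch is given below that converts a cord length into a propagation delay using the velocity range quoted above; the 6,500 m/s midpoint and the example lengths are illustrative assumptions, not manufacturer specifications.

```python
# Rough propagation delay along a length of detonating cord.
# Assumes a mid-range detonation velocity (illustrative, not a product spec).
DETONATION_VELOCITY_M_S = 6_500  # assumed midpoint of the 6,000-7,000 m/s range

def propagation_delay_ms(length_m: float) -> float:
    """Milliseconds for the detonation front to traverse length_m of cord."""
    return length_m / DETONATION_VELOCITY_M_S * 1000.0

for length in (10, 50, 100):  # example trunkline lengths in metres
    print(f"{length:>3} m -> {propagation_delay_ms(length):.2f} ms")
```

Even a 100 m trunkline is traversed in roughly 15 ms, which is why charges connected by cord appear to fire simultaneously unless deliberate delays are introduced.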
While it looks like nylon cord, the core is a compressed powdered explosive, usually PETN (pentrite), and it is initiated by the use of a blasting cap. Detonation cord will initiate most commercial high explosives (dynamite, gelignite, sensitised gels, etc.) but will not initiate less sensitive blasting agents like ANFO on its own. 25 to 50 grain/foot (5.3 to 10.6 g/m) detonation cord has approximately the same initiating power as a #8 blasting cap in every 2 to 4 inches (5 to 10 cm) along its entire length. A small charge of PETN, TNT, or other explosive booster is required to bridge between the cord and a charge of insensitive blasting agent like ANFO or most water gels.
Rating
Detonating cord is rated in explosive mass per unit length. This is expressed in grains per foot in the United States, or in grams per metre elsewhere. A "grams per metre" rating will be roughly one fifth the "grains per foot" rating. For example, "50 grain det. cord" refers to detonating cord which has 50 grains of explosive per foot of length—or approximately 10 g/m. This is a typical "default" rating for connecting charges for blasting; lighter detonating cords may be used for "low noise blasting" and movie special effects, while heavier cords, used where the cord is employed to have some direct explosive effect—such as for precision rock carving work—may use 50 to 250 grain/foot (10 to 50 g/m) detonating cord.
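As a quick check of the unit conversion described above, the sketch below turns a grains-per-foot rating into grams per metre; the unit factors are exact definitions, and the example ratings are simply the ones mentioned in the text.

```python
# Convert detonating-cord ratings from grains of explosive per foot
# to grams of explosive per metre (the unit factors are exact definitions).
GRAMS_PER_GRAIN = 0.06479891
METRES_PER_FOOT = 0.3048

def grains_per_foot_to_g_per_m(rating_gr_ft: float) -> float:
    """Return the equivalent explosive loading in grams per metre."""
    return rating_gr_ft * GRAMS_PER_GRAIN / METRES_PER_FOOT

for rating in (25, 50, 250):  # example ratings from the text
    print(f"{rating:>3} grain/ft is about {grains_per_foot_to_g_per_m(rating):.1f} g/m")
```

The conversion factor works out to about 0.21, which is where the "roughly one fifth" rule of thumb comes from.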
Direct employment
Low-yield detonating cord can be used as a precision cutting charge to remove cables, pipes, wiring, fiber optics, and other utility bundles by placing one or more complete wraps around the target. Detonation cord is used in commercial boilers to break up clinkers (solidified coal ash slag) adhering to tube structures. A vertically centered cord lowered into the water of a drilled well can also clear clogging that obstructs water flow. Higher-yield detonating cord can be used to cut down small trees, although the process is very uneconomical compared to using bulk explosive, or even a chainsaw. High-yield detonating cord placed by divers has been used to remove old dock pilings and other underwater obstructions.
Creating a slipknot from detonating cord yields a field-improvised device that can be quickly employed to cut a locked doorknob off a door. Detonation cord can be taped in several rings to the outline of a military man-sized target and detonated, breaching a man-sized hole through wooden doors or light interior walls. Detonating cord is also employed directly in building demolition where thin concrete slabs need to be broken via channels drilled parallel to the surface, an advantage over dynamite since a lower minimum of explosive force may be used and smaller diameter holes are sufficient to contain the explosive. Anything much more substantial than these uses requires the use of additional explosives.
Colloquialisms
In Filipino, the corresponding word mitsa has come to be used in the phrase mitsa ng buhay, which translates to "detonating cord of [one's] life", a metaphor for something that is very likely to cause one's death via direct jeopardy (e.g. extreme sports, versus smoking).
In media
Detonation cord was referenced in the 2009 film A Perfect Getaway by Timothy Olyphant's character as "a handy tool".
A length of detonation cord was used to clear a path through a minefield in the 2009 film Terminator Salvation.
Detonation cord was shown to be used by US cavalry troops to clear trees from a landing zone in the 2002 movie We Were Soldiers.
A spool of primer cord was used by Alan Alda's character Hawkeye in the TV series M*A*S*H (Season 9, Episode 12, "Depressing News") to demolish his newly built replica of the Washington Monument crafted from tongue depressors.
Fiona Glenanne from the TV show Burn Notice often uses detonating cord to breach enemy buildings.
Agent Gibbs from NCIS uses detonating cord and a dead-man switch to bring a Mafia family into submission.
See also
Shock tube detonator - another type of tubular explosive detonation system with a very light loading of explosive and no direct blasting effect
References
External links
Explosives
Detonators
Pyrotechnic initiators | Detonating cord | [
"Chemistry"
] | 1,252 | [
"Explosives",
"Explosions"
] |
610,165 | https://en.wikipedia.org/wiki/Cusp%20form | In number theory, a branch of mathematics, a cusp form is a particular kind of modular form with a zero constant coefficient in the Fourier series expansion.
Introduction
A cusp form is distinguished in the case of modular forms for the modular group by the vanishing of the constant coefficient $a_{0}$ in the Fourier series expansion (see q-expansion) $f(z) = a_{0} + a_{1}q + a_{2}q^{2} + \cdots$, where $q = e^{2\pi i z}$.
This Fourier expansion exists as a consequence of the presence, in the modular group's action on the upper half-plane, of the transformation $z \mapsto z + 1$.
For other groups, there may be some translation through several units, in which case the Fourier expansion is in terms of a different parameter. In all cases, though, the limit as q → 0 is the limit in the upper half-plane as the imaginary part of z → ∞. Taking the quotient by the modular group, this limit corresponds to a cusp of a modular curve (in the sense of a point added for compactification). So, the definition amounts to saying that a cusp form is a modular form that vanishes at a cusp. In the case of other groups, there may be several cusps, and the definition becomes a modular form vanishing at all cusps. This may involve several expansions.
Dimension
The dimensions of spaces of cusp forms are, in principle, computable via the Riemann–Roch theorem. For example, the Ramanujan tau function τ(n) arises as the sequence of Fourier coefficients of the cusp form of weight 12 for the modular group, with $a_{1} = 1$. The space of such forms has dimension 1, which means this definition is possible; and that accounts for the action of Hecke operators on the space being by scalar multiplication (Mordell's proof of Ramanujan's identities). Explicitly it is the modular discriminant $\Delta(z) = q\prod_{n=1}^{\infty}(1-q^{n})^{24}$,
which represents (up to a normalizing constant) the discriminant of the cubic on the right side of the Weierstrass equation of an elliptic curve; and the 24-th power of the Dedekind eta function. The Fourier coefficients here are written $\Delta(z) = \sum_{n=1}^{\infty}\tau(n)\,q^{n}$
and called 'Ramanujan's tau function', with the normalization τ(1) = 1.
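To illustrate how these coefficients can be obtained in practice, the sketch below expands a truncated power series for the product formula of the modular discriminant; the truncation order N and the brute-force multiplication are illustrative choices rather than an efficient algorithm.

```python
# Compute the first few Fourier coefficients tau(n) of the weight-12 cusp form
# Delta(z) = q * prod_{n >= 1} (1 - q^n)^24 by expanding a truncated power series.
N = 10  # truncation order (illustrative choice)

series = [1] + [0] * N  # coefficients of the partial product, starting from 1

for n in range(1, N + 1):
    for _ in range(24):              # multiply by (1 - q^n) twenty-four times
        new = series[:]
        for k in range(n, N + 1):
            new[k] -= series[k - n]  # effect of the -q^n term
        series = new

# Multiplying by q shifts indices: the coefficient of q^m in Delta is series[m - 1].
tau = {m: series[m - 1] for m in range(1, N + 1)}
print(tau)  # starts 1, -24, 252, -1472, 4830, ...
```

The multiplicativity visible in these values (for example τ(6) = τ(2)τ(3)) reflects the Hecke operator action mentioned above.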
Related concepts
In the larger picture of automorphic forms, the cusp forms are complementary to Eisenstein series, in a discrete spectrum/continuous spectrum, or discrete series representation/induced representation distinction typical in different parts of spectral theory. That is, Eisenstein series can be 'designed' to take on given values at cusps. There is a large general theory, depending though on the quite intricate theory of parabolic subgroups, and corresponding cuspidal representations.
Consider a standard parabolic subgroup $P$ of some reductive group $G$ over the adele ring $\mathbb{A}$. An automorphic form $\varphi$ on $G(\mathbb{A})$ is called cuspidal if, for all parabolic subgroups $P'$ such that $P_{0} \subseteq P' \subsetneq G$, the constant term $\varphi_{P'}$ vanishes, where $P_{0}$ is the standard minimal parabolic subgroup. The constant term $\varphi_{P'}$ is defined as $\varphi_{P'}(g) = \int_{N'(\mathbb{Q})\backslash N'(\mathbb{A})} \varphi(ng)\,dn$, where $N'$ is the unipotent radical of $P'$.
References
Serre, Jean-Pierre, A Course in Arithmetic, Graduate Texts in Mathematics, No. 7, Springer-Verlag, 1978.
Shimura, Goro, An Introduction to the Arithmetic Theory of Automorphic Functions, Princeton University Press, 1994.
Gelbart, Stephen, Automorphic Forms on Adele Groups, Annals of Mathematics Studies, No. 83, Princeton University Press, 1975.
Moeglin, C.; Waldspurger, J.-L. Spectral Decomposition and Eisenstein Series: A Paraphrase of the Scriptures, translated by L. Schneps. Cambridge University Press, 1995.
Modular forms | Cusp form | [
"Mathematics"
] | 722 | [
"Modular forms",
"Number theory"
] |
610,184 | https://en.wikipedia.org/wiki/Eric%20Lander | Eric Steven Lander (born February 3, 1957) is an American mathematician and geneticist who is a professor of biology at the Massachusetts Institute of Technology (MIT), and a professor of systems biology at Harvard Medical School. Eric Lander is founding director emeritus of the Broad Institute of MIT and Harvard.
Lander served as the 11th director of the Office of Science and Technology Policy and Science Advisor to the President in Joe Biden's presidential Cabinet. In response to allegations that he had engaged in bullying and abusive conduct, Lander apologized and resigned from the Biden Administration effective February 18, 2022.
Early life and education
Lander was born in Brooklyn, New York City, to Jewish parents, the son of Rhoda G. Lander, a social studies teacher, and Harold Lander, an attorney. He was captain of the math team at Stuyvesant High School, graduating in 1974 as valedictorian and an International Mathematical Olympiad Silver Medalist for the U.S. At age 17, he wrote a paper on quasiperfect numbers, for which he won the Westinghouse Science Talent Search.
After graduating from Stuyvesant High School as valedictorian in 1974, Lander graduated from Princeton University in 1978 as valedictorian and with a Bachelor of Arts in Mathematics. He completed his senior thesis, "On the structure of projective modules", under John Coleman Moore's supervision. He then moved to the University of Oxford where he was a Rhodes Scholar and student of Wolfson College, Oxford. He was awarded a Doctor of Philosophy degree by the University of Oxford in 1980 with a thesis on algebraic coding theory and symmetric block designs supervised by Peter Cameron.
Career
During his career, Lander has worked on human genetic variation, human population history, genome evolution, non-coding RNAs, three-dimensional folding of the human genome and genome-wide association studies to discover the genes essential for biological processes using CRISPR-based editing.
Early mathematical career
As a mathematician, Lander studied combinatorics and applications of representation theory to coding theory. He enjoyed mathematics but did not wish to spend his life in such a "monastic" career. Unsure what to do next, he took a job teaching managerial economics at Harvard Business School. At the suggestion of his brother, developmental biologist Arthur Lander, he started to look at neurobiology, saying at the time, "because there's a lot of information in the brain". To understand mathematical neurobiology, he felt he had to study cellular neurobiology; this, in turn, led to studying microbiology and eventually genetics. "When I finally feel I have learned genetics, I should get back to these other problems. But I'm still trying to get the genetics right", Lander said.
Lander later became acquainted with David Botstein, a geneticist at the Massachusetts Institute of Technology (MIT). Botstein was working on a way to unravel how subtle differences in complex genetic systems can become disorders such as cancer, diabetes, schizophrenia, and even obesity. The two collaborated to develop a computer algorithm to analyze the maps of genes. In 1986 Lander joined the Whitehead Institute and became an assistant professor at MIT. He was awarded a MacArthur Fellowship in 1987. In 1990, he founded the Whitehead Institute/MIT Center for Genome Research (WICGR). The WICGR became one of the world's leading centers of genome research, and under Lander's leadership made great progress in developing new methods of analyzing mammalian genomes. It also made important breakthroughs in applying this information to the study of human genetic variation and formed the basis for the foundation of the Broad Institute—a transformation Lander spearheaded.
Human Genome Project
Two main groups attempted to sequence the human genome. The first was the Human Genome Project, a loosely organized, publicly funded effort that intended to publish the information it obtained freely and without restrictions. Many research groups from countries all over the world were involved in this effort. The second was undertaken by Celera Genomics, which intended to patent the information obtained and charge subscriptions for use of the sequence data. Established first, the Human Genome Project moved slowly in the early phases as the Department of Energy's role was unclear and sequencing technology was in its infancy. Officially, the Human Genome Project had an eight-year head start before Celera entered the race, though discussions for the Human Genome Project began fourteen years before Celera announced their own project. Because the Human Genome Project was a $3 billion publicly funded venture, the consortia raced to enter as much of the human genome into the public domain as quickly as possible once Celera began work in 1998. This was a change of strategy for the Human Genome Project, because many scientists at the time wanted to establish a more complete copy of the genome, not simply publish the many fragments individually. Lander aggressively pressured Human Genome Project scientists to work longer and faster to publish genome fragments before Celera. Lander himself is now listed on 73 patents and patent applications related to genomics.
In February 2001, both the Human Genome Project and Celera published drafts of the human genome in the scientific journals Nature and Science, respectively. In the Human Genome Project's Nature publication, the Whitehead Institute for Biomedical Research, Center for Genome Research, was listed first, with Lander listed as the first named author.
Leveraging Celera's sequencing and analysis techniques, the Whitehead Institute also made a contribution to the sequencing of the mouse genome, an important step in fully understanding the molecular biology of mice, which are often used as model organisms in studies of everything from human diseases to embryonic development. The WICGR has since sequenced the genomes of Ciona savignyi (sea squirt), the pufferfish, the filamentous fungus Neurospora crassa, and multiple relatives of Saccharomyces cerevisiae, one of the most studied yeasts. The Ciona savignyi genome provides a good system for exploring the evolutionary origins of all vertebrates. Pufferfish have smaller-sized genomes than other vertebrates; as a result, their genomes are "mini" models for vertebrates. The sequencing of the yeasts related to Saccharomyces cerevisiae will facilitate the identification of key gene regulatory elements, some of which may be common to all eukaryotes (including both plant and animal kingdoms).
Lander was the founding editor of the Annual Review of Genomics and Human Genetics. He remained editor till 2004.
After Human Genome Project
Lander is the founding director of the Broad Institute, a collaboration between MIT, Harvard, the Whitehead Institute, and affiliated hospitals. Its goal is "to bring the power of genomics to bear on the understanding of disease and to accelerate the search for cures." In particular, Lander has made discoveries in the cell and molecular biology of cancer and has pushed precision medicine approaches. He is often credited as one of the drivers of the Broad Institute's meteoric rise during the 16 years he was a director.
During the Obama presidency, Lander cochaired the Presidential Council of Advisors on Science and Technology.
Toast to James Watson
Lander toasted James Watson in 2018 on his 90th birthday, which caused controversy in the wake of Watson's widely criticized comments around intelligence and race. Lander had included a brief aside in his toast stating that Watson was flawed, but later apologized for the toast after significant outrage from academics on Twitter. STAT News noted that other scientists had similarly toasted Watson without eliciting similar outrage.
CRISPR-Cas9 controversy
Lander received criticism for allegedly diminishing the accomplishments of Jennifer Doudna and Emmanuelle Charpentier after publishing "The Heroes of CRISPR" in Cell. Some argued that his article was misogynistic for having written women scientists out of the history. Of particular note, Lander was accused of a conflict of interest, as the Broad Institute had been competing with UC Berkeley for patent rights to commercialize CRISPR. Lander responded by suggesting he had not meant "to diminish anybody" and noted that science is collaborative by nature. Criticism from other academics and biologists was particularly harsh online, owing to previous resentment of Lander. During questioning for his role as Science Advisor to the President, Lander admitted that he had made a mistake in understating the accomplishments of Doudna and Charpentier.
Forensic science and criminal justice
In 1989, Lander provided expert testimony in the New York criminal case People v. Castro. He showed that the then-current method of interpreting DNA evidence was liable to give false positive matches, implicating innocent defendants. Two of the defense attorneys in that case, Peter Neufeld and Barry Scheck, went on to found the Innocence Project, an organization that uses DNA analysis to exonerate wrongly convicted prisoners. Lander is a member of the Innocence Project's board of directors.
Science Advisor to the President
In 2009, Lander was appointed by President Obama as co-chair of the President's Council of Advisors on Science and Technology (PCAST), serving for the entire term (2009 to 2017).
In January 2021, President-elect Joe Biden nominated Lander as Science Advisor to the President and announced that he would elevate the position to a Cabinet-level post. That same month, the organization "500 Women Scientists" published an editorial in Scientific American urging that someone else be named to the position, because Lander was well known within the scientific community for offending women.
His nomination had been held up possibly due to requests for clarification about his having attended two gatherings where Jeffrey Epstein, a wealthy large-scale donor to science who was also a convicted sex offender, was present. He was also questioned about accusations of sexism and his toast to James Watson. On April 29, a confirmation hearing was held in the Senate Committee on Commerce, Science, and Transportation. On May 20, the committee voted to report favorably on the nomination, with five Republican senators voting against. On May 28, 2021, before a Memorial Day recess, his nomination was confirmed by voice vote by the full Senate. Lander was sworn in as director of the Office of Science and Technology Policy (OSTP) on June 2, 2021. He took his oath using a rare 1492 copy of the Pirkei Avot.
On February 7, 2022, Politico reported on a White House investigation in which fourteen current and former Office of Science and Technology Policy staffers had accused Lander on February 4 of having bullied and demeaned his subordinates. Lander issued an apology to staff on February 4 that included: "I am devastated that I caused hurt to past and present colleagues by the way in which I have spoken to them... I believe it is not possible to continue effectively in my role, and the work of this office is far too important to be hindered." He resigned on February 7. In the following month, Politico published an analysis of Lander's connections with Eric Schmidt, documenting the appearance of conflicts of interest related to Schmidt's financial support for many of the employees of the OSTP.
After resignation
Since 2023, Lander has returned to his tenured professorships at MIT and Harvard as well as to the Broad Institute as a Core Institute Member and Founding Director Emeritus. While some opinion pieces argued that "Eric Lander is getting uncanceled", The Chronicle of Higher Education noted that some staffers at the Broad expressed alarm at Lander's sudden return without further discussion from their leadership. In 2023, Lander started a non-profit called Science for America focused on "moonshot" ideas such as nuclear fusion and cancer research.
Recognition and service
In 1999, Lander received the Golden Plate Award of the American Academy of Achievement.
In 2004, Lander was named one of Time magazine's 100 most influential people of our time for his work on the Human Genome Project. He has appeared in numerous PBS documentaries about genetics. He was ranked #2 on the MIT150 list of MIT's innovators and ideas.
In December 2008, Lander and Harold E. Varmus were named co-chairs of the Obama administration's Council of Advisors on Science and Technology. In 2012 he received the Dan David Prize.
Lander is a member of the advisory board to the USA Science and Engineering Festival.
In 2013, Lander was awarded the first Breakthrough Prize in Life Sciences. In 2016, Semantic Scholar AI program ranked him #1 on its list of most influential biomedical researchers.
In 2016, he received the Award for Excellence in Molecular Diagnostics from the Association for Molecular Pathology.
In 2017, Lander received an honoris causa doctorate from the Université catholique de Louvain. Also in 2017, he received the William Allan Award from the American Society of Human Genetics.
In 2019, he served on the Life Sciences jury for the Infosys Prize. In 2020, Pope Francis appointed him a member of the Pontifical Academy of Science. In 2021, Lander, who holds many patents, disclosed ownership of assets worth more than $45 million.
References
External links
Lander at MIT
MIT Broad Institute Bio
1957 births
20th-century American mathematicians
21st-century American mathematicians
21st-century American biologists
20th-century American Jews
21st-century American Jews
American Rhodes Scholars
Annual Reviews (publisher) editors
Biden administration cabinet members
Biotechnologists
Broad Institute people
Fellows of the AACR Academy
Genetic epidemiologists
Harvard Business School faculty
Human Genome Project scientists
International Mathematical Olympiad participants
Jewish American members of the Cabinet of the United States
Jewish American scientists
Living people
MacArthur Fellows
Massachusetts Institute of Technology School of Science faculty
Mathematicians from Brooklyn
Mathematics popularizers
Members of the United States National Academy of Sciences
Princeton University alumni
Scientists from Brooklyn
Stuyvesant High School alumni
Whitehead Institute faculty
Members of the National Academy of Medicine
Directors of the Office of Science and Technology Policy | Eric Lander | [
"Engineering"
] | 2,872 | [
"Human Genome Project scientists"
] |
610,202 | https://en.wikipedia.org/wiki/Fine%20structure | In atomic physics, the fine structure describes the splitting of the spectral lines of atoms due to electron spin and relativistic corrections to the non-relativistic Schrödinger equation. It was first measured precisely for the hydrogen atom by Albert A. Michelson and Edward W. Morley in 1887, laying the basis for the theoretical treatment by Arnold Sommerfeld, introducing the fine-structure constant.
Background
Gross structure
The gross structure of line spectra is the structure predicted by the quantum mechanics of non-relativistic electrons with no spin. For a hydrogenic atom, the gross structure energy levels only depend on the principal quantum number $n$. However, a more accurate model takes into account relativistic and spin effects, which break the degeneracy of the energy levels and split the spectral lines. The scale of the fine structure splitting relative to the gross structure energies is on the order of $(Z\alpha)^{2}$, where $Z$ is the atomic number and $\alpha$ is the fine-structure constant, a dimensionless number equal to approximately 1/137.
Relativistic corrections
The fine structure energy corrections can be obtained by using perturbation theory. To perform this calculation one must add three corrective terms to the Hamiltonian: the leading order relativistic correction to the kinetic energy, the correction due to the spin–orbit coupling, and the Darwin term coming from the quantum fluctuating motion or zitterbewegung of the electron.
These corrections can also be obtained from the non-relativistic limit of the Dirac equation, since Dirac's theory naturally incorporates relativity and spin interactions.
Hydrogen atom
This section discusses the analytical solutions for the hydrogen atom as the problem is analytically solvable and is the base model for energy level calculations in more complex atoms.
Kinetic energy relativistic correction
The gross structure assumes the kinetic energy term of the Hamiltonian takes the same form as in classical mechanics, which for a single electron means $\mathcal{H}_{0} = \frac{p^{2}}{2m_{e}} + V$, where $V$ is the potential energy, $p$ is the momentum, and $m_{e}$ is the electron rest mass.
However, when considering a more accurate theory of nature via special relativity, we must use a relativistic form of the kinetic energy, $T = \sqrt{p^{2}c^{2} + m_{e}^{2}c^{4}} - m_{e}c^{2}$, where the first term is the total relativistic energy and the second term is the rest energy of the electron ($c$ is the speed of light). Expanding the square root for large values of $c$, we find $T = \frac{p^{2}}{2m_{e}} - \frac{p^{4}}{8m_{e}^{3}c^{2}} + \cdots$
Although there are an infinite number of terms in this series, the later terms are much smaller than earlier terms, and so we can ignore all but the first two. Since the first term above is already part of the classical Hamiltonian, the first order correction to the Hamiltonian is $\mathcal{H}_{\mathrm{kinetic}} = -\frac{p^{4}}{8m_{e}^{3}c^{2}}$.
Using this as a perturbation, we can calculate the first order energy corrections due to relativistic effects: $E^{(1)} = \left\langle\psi^{0}\right|\mathcal{H}_{\mathrm{kinetic}}\left|\psi^{0}\right\rangle = -\frac{1}{8m_{e}^{3}c^{2}}\left\langle\psi^{0}\right|p^{4}\left|\psi^{0}\right\rangle$, where $\psi^{0}$ is the unperturbed wave function. Recalling the unperturbed Hamiltonian, we see $p^{2}\left|\psi^{0}\right\rangle = 2m_{e}\left(E_{n} - V\right)\left|\psi^{0}\right\rangle$. We can use this result to further calculate the relativistic correction: $E^{(1)} = -\frac{1}{2m_{e}c^{2}}\left(E_{n}^{2} - 2E_{n}\langle V\rangle + \left\langle V^{2}\right\rangle\right)$.
For the hydrogen atom, $V(r) = -\frac{e^{2}}{4\pi\varepsilon_{0}r}$, $\left\langle\frac{1}{r}\right\rangle = \frac{1}{a_{0}n^{2}}$, and $\left\langle\frac{1}{r^{2}}\right\rangle = \frac{1}{a_{0}^{2}n^{3}\left(\ell+\frac{1}{2}\right)}$, where $e$ is the elementary charge, $\varepsilon_{0}$ is the vacuum permittivity, $a_{0}$ is the Bohr radius, $n$ is the principal quantum number, $\ell$ is the azimuthal quantum number and $r$ is the distance of the electron from the nucleus. Therefore, the first order relativistic correction for the hydrogen atom is $E^{(1)} = -\frac{E_{n}^{2}}{2m_{e}c^{2}}\left(\frac{4n}{\ell+\frac{1}{2}} - 3\right)$, where we have used $E_{n} = -\frac{e^{2}}{8\pi\varepsilon_{0}a_{0}n^{2}}$.
On final calculation, the order of magnitude of the relativistic correction to the ground state is $-9\times10^{-4}\ \mathrm{eV}$.
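As a quick numerical check of the last formula (using the standard values $E_{1} \approx -13.6\ \mathrm{eV}$ and $m_{e}c^{2} \approx 5.11 \times 10^{5}\ \mathrm{eV}$, inserted here purely for illustration), the ground state $n = 1$, $\ell = 0$ gives
$$E^{(1)} \approx -\frac{(13.6\ \mathrm{eV})^{2}}{2\,(5.11\times10^{5}\ \mathrm{eV})}\left(\frac{4\cdot 1}{\tfrac{1}{2}} - 3\right) \approx -\left(1.8\times10^{-4}\ \mathrm{eV}\right)\times 5 \approx -9\times10^{-4}\ \mathrm{eV},$$
consistent with the size of the correction quoted above.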
Spin–orbit coupling
For a hydrogen-like atom with $Z$ protons ($Z = 1$ for hydrogen), orbital angular momentum $\mathbf{L}$ and electron spin $\mathbf{S}$, the spin–orbit term is given by $\mathcal{H}_{\mathrm{SO}} = \left(\frac{Ze^{2}}{4\pi\varepsilon_{0}}\right)\left(\frac{g_{s}}{2m_{e}^{2}c^{2}}\right)\frac{\mathbf{L}\cdot\mathbf{S}}{r^{3}}$, where $g_{s}$ is the spin g-factor.
The spin–orbit correction can be understood by shifting from the standard frame of reference (where the electron orbits the nucleus) into one where the electron is stationary and the nucleus instead orbits it. In this case the orbiting nucleus functions as an effective current loop, which in turn will generate a magnetic field. However, the electron itself has a magnetic moment due to its intrinsic angular momentum. The two magnetic vectors, $\mathbf{B}$ and $\boldsymbol{\mu}_{s}$, couple together so that there is a certain energy cost depending on their relative orientation. This gives rise to the energy correction of the form $\Delta E_{\mathrm{SO}} \propto \mathbf{L}\cdot\mathbf{S}/r^{3}$.
Notice that an important factor of 2 has to be added to the calculation, called the Thomas precession, which comes from the relativistic calculation that changes back to the electron's frame from the nucleus frame.
Since $\left\langle\frac{1}{r^{3}}\right\rangle = \frac{Z^{3}}{a_{0}^{3}n^{3}\,\ell\left(\ell+\frac{1}{2}\right)(\ell+1)}$ by the Kramers–Pasternack relations, and $\langle\mathbf{L}\cdot\mathbf{S}\rangle = \frac{\hbar^{2}}{2}\left[j(j+1) - \ell(\ell+1) - s(s+1)\right]$, the expectation value of the spin–orbit Hamiltonian follows by combining these two results.
Thus the spin–orbit coupling is of the same order of magnitude as the relativistic kinetic energy correction, of order $(Z\alpha)^{2}$ relative to the gross-structure energies.
When weak external magnetic fields are applied, the spin–orbit coupling contributes to the Zeeman effect.
Darwin term
There is one last term in the non-relativistic expansion of the Dirac equation. It is referred to as the Darwin term, as it was first derived by Charles Galton Darwin, and is given by $\mathcal{H}_{\mathrm{Darwin}} = \frac{\hbar^{2}}{8m_{e}^{2}c^{2}}\,4\pi\left(\frac{Ze^{2}}{4\pi\varepsilon_{0}}\right)\delta^{3}(\mathbf{r})$.
The Darwin term affects only the s orbitals. This is because the wave function of an electron with $\ell > 0$ vanishes at the origin, hence the delta function has no effect. For example, it gives the 2s orbital the same energy as the 2p orbital by raising the 2s state.
The Darwin term changes potential energy of the electron. It can be interpreted as a smearing out of the electrostatic interaction between the electron and nucleus due to zitterbewegung, or rapid quantum oscillations, of the electron. This can be demonstrated by a short calculation.
Quantum fluctuations allow for the creation of virtual electron–positron pairs with a lifetime estimated by the uncertainty principle, $\Delta t \approx \hbar/(m_{e}c^{2})$. The distance the particles can move during this time is $\xi \approx c\,\Delta t = \hbar/(m_{e}c)$, the Compton wavelength. The electrons of the atom interact with those pairs. This yields a fluctuating electron position $\mathbf{r} + \boldsymbol{\xi}$. Using a Taylor expansion, the effect on the potential $V$ can be estimated: $V(\mathbf{r}+\boldsymbol{\xi}) \approx V(\mathbf{r}) + \boldsymbol{\xi}\cdot\nabla V(\mathbf{r}) + \frac{1}{2}\sum_{ij}\xi_{i}\xi_{j}\,\partial_{i}\partial_{j}V(\mathbf{r})$. Averaging over the fluctuations, $\langle\boldsymbol{\xi}\rangle = 0$ and $\langle\xi_{i}\xi_{j}\rangle = \tfrac{1}{3}\delta_{ij}\langle\xi^{2}\rangle$, gives the average potential $\langle V(\mathbf{r}+\boldsymbol{\xi})\rangle = V(\mathbf{r}) + \tfrac{1}{6}\langle\xi^{2}\rangle\,\nabla^{2}V(\mathbf{r})$. Approximating $\langle\xi^{2}\rangle \approx \hbar^{2}/(m_{e}c)^{2}$, this yields the perturbation of the potential due to fluctuations: $\delta V = \frac{\hbar^{2}}{6m_{e}^{2}c^{2}}\,\nabla^{2}V$. To compare with the expression above, plug in the Coulomb potential: $\delta V = \frac{\hbar^{2}}{6m_{e}^{2}c^{2}}\,4\pi\left(\frac{Ze^{2}}{4\pi\varepsilon_{0}}\right)\delta^{3}(\mathbf{r})$.
This is only slightly different.
Another mechanism that affects only the s-state is the Lamb shift, a further, smaller correction that arises in quantum electrodynamics that should not be confused with the Darwin term. The Darwin term gives the s-state and p-state the same energy, but the Lamb shift makes the s-state higher in energy than the p-state.
Total effect
The full Hamiltonian is given by $\mathcal{H} = \mathcal{H}_{0} + \mathcal{H}_{\mathrm{kinetic}} + \mathcal{H}_{\mathrm{SO}} + \mathcal{H}_{\mathrm{Darwin}}$, where $\mathcal{H}_{0}$ is the Hamiltonian from the Coulomb interaction.
The total effect, obtained by summing the three components up, is given by the following expression: $\Delta E = \frac{E_{n}\,(Z\alpha)^{2}}{n^{2}}\left(\frac{n}{j+\frac{1}{2}} - \frac{3}{4}\right)$, where $j$ is the total angular momentum quantum number ($j = \frac{1}{2}$ if $\ell = 0$ and $j = \ell \pm \frac{1}{2}$ otherwise). It is worth noting that this expression was first obtained by Sommerfeld based on the old Bohr theory; i.e., before modern quantum mechanics was formulated.
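To make the size of the effect concrete, the sketch below evaluates the first-order expression above for the hydrogen 2p levels; the constants and the choice of levels are illustrative, and the code assumes the perturbative formula as written in this section (no Lamb shift or other QED corrections).

```python
# First-order fine-structure shift for hydrogen-like levels, using the
# perturbative formula quoted above (assumption: point nucleus, no QED terms).
ALPHA = 7.2973525693e-3   # fine-structure constant
RYDBERG_EV = 13.605693    # hydrogen gross-structure binding energy scale in eV

def fine_structure_shift_ev(n: int, j: float, Z: int = 1) -> float:
    """Shift of level (n, j) relative to the gross-structure energy E_n, in eV."""
    E_n = -RYDBERG_EV * Z**2 / n**2
    return E_n * (Z * ALPHA)**2 / n**2 * (n / (j + 0.5) - 0.75)

splitting = fine_structure_shift_ev(2, 1.5) - fine_structure_shift_ev(2, 0.5)
print(f"2p3/2 - 2p1/2 splitting: {splitting:.2e} eV")  # roughly 4.5e-5 eV
```

The result, a few times $10^{-5}$ eV, corresponds to the small splitting of the hydrogen spectral lines that the term "fine structure" originally described.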
Exact relativistic energies
The total effect can also be obtained by using the Dirac equation. The exact energies are given by $E_{jn} = -m_{e}c^{2}\left[1 - \left(1 + \left(\frac{Z\alpha}{n - j - \frac{1}{2} + \sqrt{\left(j+\frac{1}{2}\right)^{2} - (Z\alpha)^{2}}}\right)^{2}\right)^{-1/2}\right]$.
This expression, which contains all higher order terms that were left out in the other calculations, expands to first order to give the energy corrections derived from perturbation theory. However, this equation does not contain the hyperfine structure corrections, which are due to interactions with the nuclear spin. Other corrections from quantum field theory such as the Lamb shift and the anomalous magnetic dipole moment of the electron are not included.
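For comparison with the perturbative result, a short sketch of the exact expression quoted above is given here; the physical constants are standard values inserted for illustration, and nuclear-size, hyperfine, and QED effects are ignored.

```python
import math

# Exact Dirac binding energy for a hydrogen-like ion (point nucleus assumed;
# hyperfine structure, the Lamb shift, and other QED corrections are excluded).
ALPHA = 7.2973525693e-3        # fine-structure constant
ELECTRON_REST_EV = 510998.95   # electron rest energy m_e c^2 in eV

def dirac_binding_energy_ev(n: int, j: float, Z: int = 1) -> float:
    """Binding energy (negative) of the (n, j) level from the Dirac formula."""
    gamma = math.sqrt((j + 0.5)**2 - (Z * ALPHA)**2)
    x = Z * ALPHA / (n - (j + 0.5) + gamma)
    return ELECTRON_REST_EV * (1.0 / math.sqrt(1.0 + x**2) - 1.0)

print(f"H 1s1/2: {dirac_binding_energy_ev(1, 0.5):.4f} eV")   # about -13.6059 eV
print(f"H 2p1/2: {dirac_binding_energy_ev(2, 0.5):.6f} eV")
print(f"H 2p3/2: {dirac_binding_energy_ev(2, 1.5):.6f} eV")
```

Expanding this expression in powers of $(Z\alpha)^{2}$ reproduces the perturbative correction of the previous section, as the text notes.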
See also
Angular momentum coupling
Fine electronic structure
References
External links
Hyperphysics: Fine Structure
University of Texas: The fine structure of hydrogen
Atomic physics | Fine structure | [
"Physics",
"Chemistry"
] | 1,523 | [
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
" and optical physics"
] |
610,329 | https://en.wikipedia.org/wiki/Clandestine%20chemistry | Clandestine chemistry is chemistry carried out in secret, and particularly in illegal drug laboratories. Larger labs are usually run by gangs or organized crime intending to produce for distribution on the black market. Smaller labs can be run by individual chemists working clandestinely in order to synthesize smaller amounts of controlled substances or simply out of a hobbyist interest in chemistry, often because of the difficulty in ascertaining the purity of other, illegally synthesized drugs obtained on the black market. The term clandestine lab is generally used in any situation involving the production of illicit compounds, regardless of whether the facilities being used qualify as a true laboratory.
History
Ancient forms of clandestine chemistry included the manufacturing of explosives.
From 1919 to 1933, the United States prohibited the sale, manufacture, or transportation of alcoholic beverages. This opened a door for brewers to supply their own town with alcohol. Just like modern-day drug labs, distilleries were placed in rural areas. The term moonshine generally referred to "corn whiskey", that is, a whiskey-like liquor made from corn. Today, American-made corn whiskey can be labeled or sold under that name, or as Bourbon or Tennessee whiskey, depending on the details of the production process.
Psychoactive substances
By precursor chemicals
Prepared substances (as opposed to those that occur naturally in a consumable form, such as cannabis and psilocybin mushrooms) require reagents. Some drugs, like cocaine and morphine, are extracted from plant sources and refined with the aid of chemicals. Semi-synthetic drugs such as heroin are made starting from alkaloids extracted from plant sources which are the precursors for further synthesis. In the case of heroin, a mixture of alkaloids is extracted from the opium poppy (Papaver somniferum) by incising its seed capsule, whereupon a milky fluid (the opium 'latex') bleeds out of the incisions which is then left to dry out and scraped off the bulbs, yielding raw opium. Morphine, one of many alkaloids in opium, is then extracted out of the opium by acid-base extraction and turned into heroin by reacting it with acetic anhydride. Other drugs (such as methamphetamine and MDMA) are normally made from commercially available chemicals, though both can also be made from naturally occurring precursors. Methamphetamine can also be made from ephedrine, one of the naturally occurring alkaloids in ephedra (Ephedra sinica). MDMA can be made from safrole, the major constituent of several etheric oils like sassafras. Governments have adopted a strategy of chemical control as part of their overall drug control and enforcement plans. Chemical control offers a means of attacking illicit drug production and disrupting the process before the drugs have entered the market.
Because many legitimate industrial chemicals such as anhydrous ammonia and iodine are also necessary in the processing and synthesis of most illicitly produced drugs, preventing the diversion of these chemicals from legitimate commerce to illicit drug manufacturing is a difficult job. Governments often place restrictions on the purchase of large quantities of chemicals that can be used in the production of illicit drugs, usually requiring licenses or permits to ensure that the purchaser has a legitimate need for them.
Suppliers of precursor chemicals
Chemicals critical to the production of cocaine, heroin, and synthetic drugs are produced in many countries throughout the world. Many manufacturers and suppliers exist in Europe, China, India, the United States, and many other countries.
Historically, chemicals critical to the synthesis or manufacture of illicit drugs are introduced into various venues via legitimate purchases by companies that are registered and licensed to do business as chemical importers or handlers. Once in a country or state, the chemicals are diverted by rogue importers or chemical companies, by criminal organizations and individual violators, or acquired as a result of coercion and/or theft on the part of drug traffickers. In response to stricter international controls, drug traffickers have increasingly been forced to divert chemicals by mislabeling the containers, forging documents, establishing front companies, using circuitous routing, hijacking shipments, bribing officials, or smuggling products across international borders.
Enforcement of controls on precursor chemicals
General
The Multilateral Chemical Reporting Initiative encourages governments to exchange information on a voluntary basis in order to monitor international chemical shipments. Over the past decade, key international bodies like the Commission on Narcotic Drugs and the U.N. General Assembly's Special Session (UNGASS) have addressed the issue of chemical diversion in conjunction with U.S. efforts. These organizations raised specific concerns about potassium permanganate and acetic anhydride.
To facilitate the international flow of information about precursor chemicals, the United States, through its relationship with the Inter-American Drug Control Abuse Commission (CICAD), continues to evaluate the use of precursor chemicals and assist countries in strengthening controls. Many nations still lack the capacity to determine whether the import or export of precursor chemicals is related to legitimate needs or illicit drugs. The problem is complicated by the fact that many chemical shipments are either brokered or transshipped through third countries in an attempt to disguise their purpose or destination.
Beginning in July 2001, the International Narcotics Control Board (INCB) has opted to organize an international conference with the goal of devising a specific action plan to counter the traffic in MDMA precursor chemicals. They hope to prevent the diversion of chemicals used in the production of amphetamine-type stimulants (ATS), including MDMA (ecstasy) and methamphetamine.
In June 2015, the European Commission approved Regulation (EU) 2015/1013 which outlined for the monitoring of drug precursors traded between the Union and third countries. The Regulation also establishes uniform procedures for licensing and registration of operators and users who are listed in a European database tracking drug precursors.
Despite this long history of law enforcement actions, restrictions of chemicals, and even covert military actions, many illicit drugs are still widely available all over the world.
Cocaine
Operation Purple is a U.S. DEA driven international chemical control initiative designed to reduce the illicit manufacture of cocaine in the Andean Region, identifying rogue firms and suspect individuals; gathering intelligence on diversion methods, trafficking trends, and shipping routes; and taking administrative, civil and/or criminal action as appropriate. Critical to the success of this operation is the communication network that gives notification of shipments and provides the government of the importer sufficient time to verify the legitimacy of the transaction and take appropriate action. The effects of this initiative have been dramatic and far-reaching. Operation Purple has exposed a significant vulnerability among traffickers, and has grown to include almost thirty nations. According to the DEA, Operation Purple has been highly effective at interfering with cocaine production. However, illicit chemists always find new methods to evade the DEA's scrutiny.
In countries where strict chemical controls have been put in place, illicit drug production has been seriously affected. For example, few of the chemicals needed to process coca leaf into cocaine are manufactured in Bolivia or Peru. Most are smuggled in from neighbouring countries with advanced chemical industries or diverted from a smaller number of licit handlers. Increased interdiction of chemicals in Peru and Bolivia has contributed to final product cocaine from those countries being of lower, minimally oxidized quality.
As a result, Bolivian lab operators are now using inferior substitutes such as cement instead of lime and sodium bicarbonate instead of ammonia and recycled solvents like ether. Some non-solvent fuels such as gasoline, kerosene and diesel fuel are even used in place of solvents.
Manufacturers are attempting to streamline a production process that virtually eliminates oxidation to produce cocaine base. Some laboratories are not using sulfuric acid during the maceration state; consequently, less cocaine alkaloid is extracted from the leaf, producing less cocaine hydrochloride, the powdered cocaine marketed for overseas consumption.
Heroin
Similarly, heroin-producing countries depend on supplies of acetic anhydride (AA) from the international market. This heroin precursor continues to account for the largest volume of internationally seized chemicals, according to the International Narcotics Control Board. Since July 1999, there have been several notable seizures of acetic anhydride in Turkey (amounting to nearly seventeen metric tons) and Turkmenistan (totaling seventy-three metric tons).
Acetic anhydride, the most commonly used chemical agent in heroin processing, is virtually irreplaceable. According to the DEA, Mexico remains the only heroin source route to heroin laboratories in Afghanistan. Authorities in Uzbekistan, Turkmenistan, Kyrgyzstan, and Kazakhstan routinely seize ton-quantity shipments of diverted acetic anhydride.
The lack of acetic anhydride has caused clandestine chemists in some countries to substitute it for lower quality precursors such as acetic acid and results in the formation of impure black tar heroin that contains a mixture of drugs not found in heroin made with pure chemicals.
DEA's Operation Topaz is a coordinated international strategy targeting acetic anhydride. In place since March 2001, a total of thirty-one countries are currently organized participants in the program in addition to regional participants. The DEA reports that as of June 2001, some 125 consignments of acetic anhydride had been tracked totaling 618,902,223 kilograms. As of July 2001, there has been approximately 20 shipments of AA totaling 185,000 kilograms either stopped or seized.
Amphetamines
The practice of clandestine chemistry to synthesize controlled substance analogues and circumvent drug laws was first noticed in the late 1960s, as types of drugs became controlled substances in many countries. With the Title 21 United States Code (USC) Controlled Substances Act (CSA) of October 27, 1970 amphetamines became controlled substances in the United States. Prior to this, amphetamine sulfate first became widely available as an over-the-counter (OTC) nasal decongestant inhaler in 1933, marketed by SKF under the brand name Benzedrine. Shortly afterward, physicians began documenting amphetamine's general stimulant properties and subsequently its potential for treating narcolepsy, which prompted SKF in 1938 to begin also manufacturing amphetamine sulfate as tablets. Initially, the frequency of amphetamine use was negligible; however, by 1959 its popularity as a therapeutic agent and also an illicit drug had skyrocketed nationwide, causing the Federal Bureau of Narcotics (FBN) to reclassify amphetamine from OTC to prescription-only.
Methamphetamine
As of the early 1990s, methamphetamine use was concentrated among young white males in California and nearby states. Since then its use has spread both demographically and geographically. Methamphetamine has been a favorite among various populations including motorcycle gangs, truckers, laborers, soldiers, and ravers. Known as a "club drug", the National Institute on Drug Abuse tracks its incidence of use in children as young as twelve, and the prevalence of users increases with age.
In the 1980s and early 1990s, most methamphetamine production in the United States occurred in small independent laboratories. Phenylacetone, one precursor of methamphetamine, became a Schedule II controlled immediate precursor in 1979. Underground chemists searched for alternative methods for producing methamphetamine. The two predominant methods which appeared both involve the reduction of ephedrine or pseudoephedrine to methamphetamine. At the time, neither was a watched chemical, and pills containing the substance could be bought by the thousands without raising any kind of suspicion.
In the 1990s, the DEA recognized that legally imported precursors were being diverted to the production of methamphetamine. Changes to federal regulations in 1988 and throughout the 1990s enabled the DEA to more closely track the ephedrine and pseudoephedrine precursors. Many individual States have enacted precursor control laws which limit the sale of over-the-counter cold medications which contain ephedrine or pseudoephedrine. This made it somewhat more difficult for underground chemists to produce methamphetamine. In May 1995, the DEA shut down two major suppliers of precursors in the United States, seizing 25 metric tons of ephedrine and pseudoephedrine from Clifton Pharmaceuticals and 500 cases of pseudoephedrine from X-Pressive Looks, Inc. (XLI). The immediate market impact suggests that they had been providing more than 50 percent of the precursors used nationally to produce methamphetamine. However, the market rapidly rebounded.
The methamphetamine situation also changed in the mid-1990s as Mexican organized crime became a major player in its production and distribution, operating "super-labs" which produced a substantial percentage of the drugs being sold. According to the DEA, the seizure of 3.5 metric tons of pseudoephedrine in Texas in 1994 revealed that Mexican trafficking groups were producing methamphetamine on an unprecedented scale. More recent reports indicate an ongoing presence of Mexican trafficking.
By process
Distillation
Alcohol
Another old form of clandestine chemistry is the illegal brewing and distillation of alcohol. This is frequently done to avoid taxation on spirits.
In some countries, moonshine stills are illegal to sell, import, and own without permission. However, enthusiasts explain on internet forums how to obtain equipment and assemble it into a still. To cut costs, stainless steel vessels are often replaced with plastic stills, vessels made from polypropylene that can withstand relatively high heat.
Catalysts
Pyrolysis
THC
Conversion of CBD to THC can occur with heat acting as a catalyst.
By contamination
Alcoholic drinks
Alcoholic drinks that are known to be contaminated include:
Diethylene glycol, used dangerously by some winemakers in sweet wines
Moonshine
Black tar heroin
Black tar heroin is a free base form of heroin that is sticky like tar or hard like coal. Its dark color is the result of crude processing methods that leave behind impurities.
Black tar as a type contains a variable admixture of morphine derivatives—predominantly 6-MAM (6-monoacetylmorphine), which is another result of crude acetylation. The lack of proper reflux during acetylation fails to remove much of the moisture retained in the acetylating agent, glacial acetic acid.
Contaminated cocaine
Black cocaine
Black cocaine is a mixture of regular cocaine base or cocaine hydrochloride with various other substances.
Cocaine paste
Coca paste (paco, basuco, oxi) is a crude extract of the coca leaf which contains 40% to 91% cocaine freebase along with companion coca alkaloids and varying quantities of benzoic acid, methanol, and kerosene.
Krokodil
Illicitly produced desomorphine is typically far from pure and often contains large amounts of toxic substances and contaminants as a result of being "cooked" and used without any significant effort to remove the byproducts and leftovers from synthesis. Injecting any such mixture can cause serious damage to the skin, blood vessels, bone and muscles, sometimes requiring limb amputation in long-term users. Its melting point is 189 °C.
Causes of this damage are from iodine, phosphorus and other toxic substances that are present after synthesis.
Methamphetamine
A common adulterant is dimethyl sulfone, a solvent and cosmetic base without known effect on the nervous system; other adulterants include dimethylamphetamine HCl, ephedrine HCl, sodium thiosulfate, sodium chloride, sodium glutamate, and a mixture of caffeine with sodium benzoate.
Although the prevalence of domestic meth labs continues to be high in western states, they have spread throughout the United States. It has been suggested that "do-it-yourself" meth production in rural areas is reflective of a broader DIY approach that includes activities such as hunting, fishing, and fixing one’s cars, trucks, equipment, and house. Toxic chemicals resulting from methamphetamine production may be hoarded or clandestinely dumped, damaging land, water, plant life and wild life, and posing a risk to humans. Waste from methamphetamine labs is frequently dumped on federal, public, and tribal lands. The chemicals involved can explode and clandestine chemistry has been implicated in both house and wild land fires.
In Oregon, Brett Sherry of the Oregon Clandestine Drug Lab Cleanup Program has been quoted as stating that only 10–20% of drug labs are discovered by police.
Statistics reporting the prevalence of meth labs and arrest of meth producers can vary greatly from county to county and state to state. Factors affecting policing and reporting include funding, specialized training, support from local residents, and willingness to make the issue a priority in policing. How information is categorized and tracked may also inflate or minimize the apparent results.
Missouri has reported some of the highest rates of meth-lab arrests in the country, and has pursued an aggressive and highly publicized policy of policing meth labs. This has resulted in as many as 205 cases per year in one county. In contrast, West Virginia reports and/or prosecutes very few cases. It's possible that these low numbers are because of cost.
In WV, a police agency which reports a meth lab is responsible for the cost of its cleanup—which can run to tens of thousands of dollars, as proper disposal of toxic and hazardous materials is very expensive. The high cost of cleanup is a clear disincentive for all agencies, but especially those with limited budgets.
In 2016, Michigan reported an increase in incidents following the formation of the Midland County Methamphetamine Protocol Team in 2015. However, many of the cases reported involved meth users making small amounts of the drug using a crude and dangerous "one-pot method". These small operations were for both personal use and for sale to others.
The DEA's El Paso Intelligence Center data from 2012 to 2014 show a downward trend in the number of clandestine methamphetamine labs, down from a high of 15,196 in 2010. Drug seizure quantities, on the other hand, have been steadily increasing since 2007, according to data from the DEA's System to Retrieve Information from Drug Evidence (STRIDE).
Cleanup
Clean up processes were regulated by the EPA as of 2007. The Methamphetamine Remediation Research Act of 2007 required EPA to develop guidelines for remediation of former methamphetamine labs. This creates guidelines for States and local agencies to improve "our national understanding of identifying the point at which former methamphetamine laboratories become clean enough to inhabit again." The legislation also required that EPA periodically update the guidelines, as appropriate, to reflect the best available knowledge and research.
Making a former meth lab site safer for habitation requires two basic efforts:
Gross chemical removal This is the process in which law enforcement or Drug Enforcement Administration contractors remove the obvious dangers from the site. Obvious dangers include containers of chemicals, equipment, and apparatus that could be used to make illegal drugs, drug paraphernalia, and other illegal items. This process does not clean up or remove chemical spills, stains or residue that could be harmful to inhabitants. A property that has had only a gross chemical removal is not fit for habitation.
Clandestine remediation The cleaning of interior structures and, if applicable, the surrounding land, surface waters and groundwater by an EPA approved or National Crime Scene Cleanup Association certified company. This is the process of removing the residue and waste from the site after the gross chemical removal is done. A property that has been remediated should present minimal to no health risk to occupants.
MPPP
MPTP may be accidentally produced during the manufacture of MPPP. 1-Methyl-4-phenylpyridinium (MPP+), a metabolite of MPTP, causes rapid onset of irreversible symptoms similar to Parkinson's disease.
PCP
Embalming fluid has been found as a by-product of PCP manufacture. Marijuana cigarettes dipped in embalming fluid, sometimes also laced with PCP are known as fry or fry sticks.
Explosives
Clandestine chemistry is not limited to drugs; it is also associated with explosives, and other illegal chemicals. Of the explosives manufactured illegally, nitroglycerin and acetone peroxide are easiest to produce due to the ease with which the precursors can be acquired.
Uncle Fester is a writer who commonly writes about different aspects of clandestine chemistry. Secrets of Methamphetamine Manufacture is among his most popular books, and is considered required reading for DEA agents. More of his books deal with other aspects of clandestine chemistry, including explosives, and poisons. Fester is, however, considered by many to be a faulty and unreliable source for information in regard to the clandestine manufacture of chemicals.
See also
The Hive (website)
Owsley Stanley
William Leonard Pickard
Nicholas Sand
Casey William Hardison
Uncle Fester (author)
Rolling meth lab
Cold water extraction
DEA list of chemicals
Breaking Bad
References
External links
Clandestine labs FAQ at Erowid
New 'shake-and-bake' method for making crystal meth gets around drug laws but is no less dangerous
Chemistry
Illegal drug trade
Science and law
Adulteration | Clandestine chemistry | [
"Chemistry"
] | 4,357 | [
"Adulteration",
"Drug safety"
] |
610,367 | https://en.wikipedia.org/wiki/Labrador%20Sea | The Labrador Sea is an arm of the North Atlantic Ocean between the Labrador Peninsula and Greenland. The sea is flanked by continental shelves to the southwest, northwest, and northeast. It connects to the north with Baffin Bay through the Davis Strait. It is a marginal sea of the Atlantic.
The sea formed upon separation of the North American Plate and Greenland Plate that started about 60 million years ago and stopped about 40 million years ago. It contains one of the world's largest turbidity current channel systems, the Northwest Atlantic Mid-Ocean Channel (NAMOC), that runs for thousands of kilometers along the sea bottom toward the Atlantic Ocean.
The Labrador Sea is a major source of the North Atlantic Deep Water, a cold water mass that flows at great depth along the western edge of the North Atlantic.
History
The Labrador Sea formed upon separation of the North American Plate and Greenland Plate that started about 60 million years ago (Paleocene) and stopped about 40 million years ago. A sedimentary basin, which is now buried under the continental shelves, formed during the Cretaceous. Onset of magmatic sea-floor spreading was accompanied by volcanic eruptions of picrites and basalts in the Paleocene at the Davis Strait and Baffin Bay.
Between about 500 BC and 1300 AD, the southern coast of the sea contained Dorset, Beothuk, and Inuit settlements; Dorset tribes were later replaced by Thule people.
Extent
The International Hydrographic Organization defines the limits of the Labrador Sea as follows:
On the North: the South limit of Davis Strait [The parallel of 60° North between Greenland and Labrador].
On the East: a line from Cape St. Francis (Newfoundland) to Cape Farewell (Greenland).
On the West: the East Coast of Labrador and Newfoundland and the Northeast limit of the Gulf of St. Lawrence – a line running from Cape Bauld (the north point of Kirpon Island) to the East extreme of Belle Isle and on to the Northeast Ledge. Thence a line joining this ledge with the East extreme of Cape St. Charles (52°13'N) in Labrador.
Natural Resources Canada uses a slightly different definition, putting the northern boundary of the Labrador Sea on a straight line from a headland on Killiniq Island abutting Lady Job Harbour to Cape Farewell.
Oceanography
The Labrador Sea is about deep and wide where it joins the Atlantic Ocean. It becomes shallower, to less than towards Baffin Bay (see depth map) and passes into the wide Davis Strait. A deep turbidity current channel system, which is about wide and long, runs on the bottom of the sea, near its center from the Hudson Strait into the Atlantic. It is called the Northwest Atlantic Mid-Ocean Channel (NAMOC) and is one of the world's longest drainage systems of Pleistocene age. It appears as a submarine river bed with numerous tributaries and is maintained by high-density turbidity currents flowing within the levees.
The water temperature varies between in winter and in summer. The salinity is relatively low, at 31–34.9 parts per thousand. Two-thirds of the sea is covered in ice in winter. Tides are semi-diurnal (i.e. occur twice a day), reaching .
There is an anticlockwise water circulation in the sea. It is initiated by the East Greenland Current and continued by the West Greenland Current, which brings warmer, more saline waters northwards, along the Greenland coasts up to the Baffin Bay. Then, the Baffin Island Current and Labrador Current transport cold and less saline water southward along the Canadian coast. These currents carry numerous icebergs and therefore hinder navigation and exploration of the gas fields beneath the sea bed. The speed of the Labrador current is typically , but can reach in some areas, whereas the Baffin Current is somewhat slower at about . The Labrador Current maintains the water temperature at and salinity between 30 and 34 parts per thousand.
The sea provides a significant part of the North Atlantic Deep Water (NADW) — a cold water mass that flows at great depth along the western edge of the North Atlantic, spreading out to form the largest identifiable water mass in the World Ocean. The NADW consists of three parts of different origin and salinity, and the top one, the Labrador Sea Water (LSW), is formed in the Labrador Sea. This part occurs at a medium depth and has a relatively low salinity (34.84–34.89 parts per thousand), low temperature and high oxygen content compared to the layers above and below it. LSW also has lower vorticity, i.e. less tendency to form vortices, than any other water in the North Atlantic, which reflects its high homogeneity. It has a potential density of 27.76–27.78 mg/cm3 relative to the surface layers, meaning it is denser, and thus sinks below the surface and remains homogeneous and unaffected by surface fluctuations.
Fauna
The northern and western parts of the Labrador Sea are covered in ice between December and June. This drift ice serves as a breeding ground for several types of pinnipeds (including Atlantic walrus and bearded, grey, harbor, harp, hooded and ringed seals). Several cetacean species feed in these abundant waters in early spring, including blue, fin, humpback, long-finned pilot, minke, North Atlantic right, sei and sperm whales. The sea contains one of the two primary populations of sei whales, the other being the Scotian Shelf. Pods of beluga (white) whales are more common further to the north, west and south (notably in Baffin Bay, where their population reaches around 20,000 animals), and further afield in Hudson Bay and the Gulf of Saint Lawrence. While somewhat rarer in the Labrador Sea—especially since the 1950s— some sightings still take place. Additionally, pods of orca are drawn to the sea by the large shoals of fish, as well as the many marine mammal species they may hunt (including other cetaceans and pinnipeds), such as harbour porpoise and Atlantic white-sided, common, striped and white-beaked dolphins.
The sea is also a feeding ground for Atlantic salmon. Shrimp fisheries began in 1978, intensifying by 2000, in addition to cod fishing. However, by the 1990s cod fishing had already depleted the cod population near the Labrador and West Greenland banks, and it was therefore halted in 1992. Other fishery targets include haddock, Atlantic herring, lobster, several species of flatfish, and pelagic fish such as sand lance and capelin. These are most abundant in the southern parts of the sea.
The Labrador duck was a common bird on the Canadian coast until the 19th century, but is now extinct. Other coastal animals include the Labrador wolf (Canis lupus labradorius), woodland caribou (Rangifer tarandus caribou), moose (Alces alces), black bear (Ursus americanus), Canada lynx (Lynx canadensis), red fox (Vulpes vulpes), Arctic fox (Alopex lagopus), wolverine (G. gulo), American mink (Neogale vison), North American river otter (Lontra canadensis), snowshoe hare (Lepus americanus), grouse (Dendragapus spp.), osprey (Pandion haliaetus), raven (Corvus corax), ducks, geese, swans, partridge and pheasant. Occasionally, coastal polar bear (Ursus maritimus) sightings occur along the sea, mainly further north but sometimes as far south as Conception Bay and the mouth of the Gulf of Saint Lawrence.
Flora
Coastal vegetation includes black spruce (Picea mariana), tamarack, white spruce (P. glauca), dwarf birch (Betula spp.), aspen, willow (Salix spp.), ericaceous shrubs (Ericaceae), cottongrass (Eriophorum spp.), sedge (Carex spp.), lichens and moss. Evergreen bushes of Labrador tea, which is used to make herbal teas, are common in the area, both on the Greenland and Canadian coasts.
References
Oceanography
Seas of Greenland
Seas of the Atlantic Ocean
Bodies of water of Newfoundland and Labrador
Seas of Canada
Seas of North America
Geography of North America
Cenozoic rifts and grabens | Labrador Sea | [
"Physics",
"Environmental_science"
] | 1,753 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
610,419 | https://en.wikipedia.org/wiki/Waybill | A waybill is a document issued by a carrier giving details and instructions relating to the shipment of a consignment of cargo. Typically it will show the names of the consignor and consignee, the point of origin of the consignment, its destination, and route. Most freight forwarders and trucking companies use an in-house waybill called a house bill. These typically contain "conditions of contract of carriage" terms on the back of the form that cover limits to liability and other terms and conditions.
A waybill is similar to a courier's receipt, which contains the details of the consignor and the consignee and the point of origin and the destination.
Air waybills
Most airlines use a different form called an air waybill which lists additional items such as airport of destination, flight number, and time.
Sea waybills
The UK Carriage of Goods by Sea Act 1992 s.1(1) applies to:
bills of lading s.1(2),
sea waybills s.1(3), and
ships' delivery orders s.1(4),
... whether in paper or electronic form s.1(5).
Under s.1(3) of the Act, a sea waybill is: "any document which is not a bill of lading but is such a receipt for goods as contains a contract for the carriage of goods by sea; and identifies the person to whom delivery of the goods is to be made by the carrier in accordance with that contract".
s.2 continues: "...a person who becomes the person who (without being an original party to the contract of carriage) is the person to whom delivery of the goods to which a sea waybill relates is to be made by the carrier in accordance with that contract ... shall (by virtue of becoming the person to whom delivery is to be made) have transferred to and vested in him all rights of suit under the contract of carriage as if he had been a party to the contract of carriage".
Note: the UK's Contracts (Rights of Third Parties) Act 1999 does NOT apply to contracts for the carriage of goods by sea.
See also
Carriage of goods
References
Freight transport
Business law | Waybill | [
"Physics"
] | 466 | [
"Physical systems",
"Transport",
"Transport stubs"
] |
610,431 | https://en.wikipedia.org/wiki/Computer%20Professionals%20for%20Social%20Responsibility | Computer Professionals for Social Responsibility (CPSR) was a global organization promoting the responsible use of computer technology. CPSR was incorporated in 1983 following discussions and organizing that began in 1981. It educated policymakers and the public on a wide range of issues. CPSR incubated numerous projects such as Privaterra, the Public Sphere Project, the Electronic Privacy Information Center, the 21st Century Project, the Civil Society Project, and the Computers, Freedom and Privacy Conference. Founded by U.S. computer scientists at Stanford University and Xerox PARC, CPSR had members in over 30 countries on six continents. CPSR was a non-profit 501(c)(3) organization registered in California.
When CPSR was established, it was concerned solely with the use of computers in warfare. It was focused on the Strategic Computing Initiative, a US Defense project to use artificial intelligence in military systems, but added opposition to the Strategic Defense Initiative (SDI) shortly after that program was announced. The Boston chapter helped organize a debate on the software reliability of SDI systems, which drew national attention to these issues ("Software Seen as Obstacle in Developing 'Star Wars'", Philip M. Boffey, The New York Times, September 16, 1986). Later, workplace issues, privacy, and community networks were added to CPSR's agenda.
CPSR began as a chapter-based organization and had chapters in Palo Alto, Boston, Seattle, Austin, Washington DC, Portland (Oregon) and other US locations as well as a variety of international chapters including Peru and Spain. The chapters often developed innovative projects including a slide show about the dangers of launch on warning (Boston chapter) and the Seattle Community Network (Seattle chapter).
CPSR sponsored two conferences: the Participatory Design Conferences which was held biennially and the Directions and Implications of Advanced Computing (DIAC) symposium series which was launched in 1987 in Seattle. The DIAC symposia have been convened roughly every other year since that time. Four books (Directions and Implications of Advanced Computing; Reinventing Technology, Rediscovering Community; Community Practice in the Network Society; Shaping the Network Society; "Liberating Voices: A Pattern Language for Communication Revolution") and two special sections in the Communications of the ACM ("Social Responsibility" and "Social Computing") resulted from the DIAC symposia.
CPSR awarded the Norbert Wiener Award for Social and Professional Responsibility. Some notable recipients include David Parnas, Joseph Weizenbaum, Severo Ornstein, Kristen Nygaard, Barbara Simons, Antonia Stone, Peter G. Neumann, Marc Rotenberg, Mitch Kapor, and Douglas Engelbart. The final award in 2013 went posthumously to the organisation's first executive director, Gary Chapman. Since CPSR's dissolution, the IEEE Society on Social Implications of Technology (SSIT) now presents the Norbert Wiener Award.
There is ongoing debate about holding computer professionals accountable for unforeseen negative consequences of their work; some believe, however, that most computer-related disasters can be prevented through a deeper understanding of professional responsibility. The organization was dissolved in May 2013.
References
External links
Computer Professionals for Social Responsibility
Documentary film about Norbert Wiener Award winner, Joseph Weizenbaum ("Weizenbaum. Rebel at Work." )
Computer Professionals for Social Responsibility Records, 1983–1991. Charles Babbage Institute, University of Minnesota.
Oral history interview with Severo Ornstein and Laura Gould, Charles Babbage Institute, University of Minnesota. Oral history interview by Bruce Bruemmer, 1994, discussing the formation and activities of Computer Professionals for Social Responsibility.
Computing and society
Information technology organizations
Organizations established in 1981
Organizations disestablished in 2013
Privacy in the United States | Computer Professionals for Social Responsibility | [
"Technology"
] | 768 | [
"Information technology",
"Computing and society",
"Information technology organizations"
] |
610,501 | https://en.wikipedia.org/wiki/X-ray%20background | The observed X-ray background is thought to result, at the "soft" end (below 0.3 keV), from galactic X-ray emission, the "galactic" X-ray background, and, at the "hard" end (above 0.3 keV), from a combination of many unresolved X-ray sources outside of the Milky Way, the "cosmic" X-ray background (CXB).
The galactic X-ray background is produced largely by emission from hot gas in the Local Bubble within 100 parsecs of the Sun.
Deep surveys with X-ray telescopes, such as the Chandra X-ray Observatory, have demonstrated that around 80% of the cosmic X-ray background is due to resolved extra-galactic X-ray sources, the bulk of which are unobscured ("type-1") and obscured ("type-2") active galactic nuclei (AGN).
References
T Shanks, I Georgantopoulos, GC Stewart, KA Pounds, "The origin of the cosmic X-ray background", Nature 353, 315 - 320 (26 September 1991);
Xavier Barcons, The X-ray Background, 1992 Cambridge University Press, 324 pages
Audio Cain/Gay (2009) Astronomy Cast X-ray Astronomy
See also
X-ray astronomy
Wilkinson Microwave Anisotropy Probe
S150 Galactic X-Ray Mapping
X-ray astronomy
X-ray background
Observational astronomy
Cosmic background radiation | X-ray background | [
"Astronomy"
] | 302 | [
"Observational astronomy",
"X-ray astronomy",
"Astronomical X-ray sources",
"Astronomical objects",
"Astronomical sub-disciplines"
] |
610,527 | https://en.wikipedia.org/wiki/Whiteout%20%28weather%29 | Whiteout or white-out is a weather condition in which visibility and contrast are severely reduced by snow, fog, or sand. The horizon disappears from view while the sky and landscape appear featureless, leaving no points of visual reference by which to navigate.
A whiteout may be due simply to extremely heavy snowfall rates, as seen in lake effect conditions, or to other factors such as diffuse lighting from overcast clouds, mist or fog, or a background of snow. A person traveling in a true whiteout is at significant risk of becoming completely disoriented and losing their way, even in familiar surroundings. Motorists typically have to stop their cars where they are, as the road is impossible to see. Normal snowfalls and blizzards, in which snow falls at a steady hourly rate, or in which relief visibility is poor yet a clear field of view extends for a long distance, are often incorrectly called whiteouts.
Types
There are three different forms of a whiteout:
In blizzard conditions, snow already on the ground can become windblown, reducing visibility to near zero.
In snowfall conditions, the volume of snow falling may obscure objects reducing visibility to near zero. An example of this is during lake-effect snow or mountain-effect snow, where the volume of snow can be many times greater than normal snows or blizzards.
Where ground-level thick fog exists in a snow-covered environment, especially on open areas devoid of features.
Variations
A whiteout should not be confused with flat-light. Whilst there are similarities, both the causes and effects are different.
A whiteout is a reduction and scattering of sunlight.
Cause: Sunlight is blocked, reduced and scattered by ice crystals in falling snow, wind-blown spin-drift, water droplets in low-lying clouds or localised fog, etc. The remaining scattered light is merged and blended.
Result: Due to a reduction in reflected light, visual references (e.g. the horizon, terrain features, slope aspect) are significantly reduced or completely blocked. This leads to an inability to position yourself relative to the surroundings. In severe conditions an individual may experience a loss of kinesthesia (the ability to discern position and movement), confusion, loss of balance, and an overall reduction in the ability to operate.
Flat-light is a diffusion of sunlight.
Cause: Sunlight is both scattered and diffused by atmospheric particles (e.g. water molecules, ice crystals) and by snow lying on the ground; this causes light to be received from multiple directions. Commonly, the effect is increased during a whiteout and/or later in the day when the sun drops towards the horizon, due to sunlight passing through the atmosphere for a greater distance.
Result: Light is received from multiple directions, with each light source producing overlapping shadows that cancel each other out. This dulls the scene and removes indicators such as tone and contrast, making it difficult to discern similarly coloured slope features. The loss of visual indicators of shape and edge detail results in objects and features seeming to blend into each other, producing a flat, featureless vista. An effect of this visual blending may be a loss of depth perception, resulting in disorientation.
Hazards
Whiteout conditions pose threats to mountain climbers, skiers, aviation, and mobile ground traffic. Motorists, especially those on large high-speed routes, are also at risk. There have been many major multiple-vehicle collisions associated with whiteout conditions: a motorist ahead may come to a complete stop when they cannot see the road, while the motorist behind is still moving.
Local, short-duration whiteout conditions can be created artificially in the vicinity of airports and helipads due to aircraft operations. Snow on the ground can be stirred up by helicopter rotor down-wash or airplane jet blast, presenting hazards to both aircraft and bystanders on the ground.
See also
Air New Zealand Flight 901, an air accident on Mount Erebus, Antarctica in 1979 caused in part by whiteout conditions.
Black ice
Snowsquall
References
Severe weather and convection
Weather hazards
Snow or ice weather phenomena
Fog
Road hazards
Weather hazards to aircraft
Articles containing video clips | Whiteout (weather) | [
"Physics",
"Technology"
] | 838 | [
"Physical phenomena",
"Visibility",
"Physical quantities",
"Fog",
"Weather hazards",
"Weather",
"Road hazards"
] |
610,554 | https://en.wikipedia.org/wiki/Jodi%20%28art%20collective%29 | Jodi is a collective of two internet artists, Joan Heemskerk (born 1968 in Kaatsheuvel, the Netherlands) and Dirk Paesmans (born 1965 in Brussels, Belgium), formed in 1994. They were some of the first artists to create Web art and later started to create software art and artistic computer game modification. Their most well-known art piece is their website wwwwwwwww.jodi.org, which is a landscape of intricate designs made in basic HTML. JODI is represented by Upstream Gallery, Amsterdam.
The artists
Joan Heemskerk was born in 1968 in Kaatsheuvel, the Netherlands, and Dirk Paesmans was born in 1965 in Brussels, Belgium. They both have a background in photography and video art and studied at the CADRE Laboratory for New Media at San Jose State University in California. Paesmans also studied at Kunstakademie Düsseldorf with the founder of video art Nam June Paik.
Both Heemskerk and Paesmans live and work out of the Netherlands.
Artworks
In 1999 they began the practice of modifying old video games such as Wolfenstein 3D to create art mods like SOD. Their efforts were celebrated in the 1999 Webby Awards, where they took top prize in the category of "net art." Jodi used their 5-word acceptance speech (a Webby Award tradition) to criticize the event with the words "Ugly commercial sons of bitches." Further video game modifications soon followed for Quake, Jet Set Willy, and the latest, Max Payne 2 (2006), to create a new set of art games. Jodi's approach to game modification is comparable in many ways to deconstructivism in architecture, because they would disassemble the game to its basic parts and reassemble it in ways that do not make intuitive sense. One of their more well-known modifications of Quake places the player inside a closed cube with swirling black-and-white patterns on each side. The pattern is the result of a glitch in the game engine discovered by the artists, presumably through trial and error; it is generated live as the Quake engine tries, and fails, to visualize the interior of a cube with black-and-white checkered wallpaper.
"Screen Grab" Period (2002- )
Since 2002, Jodi have been in what has been called their "Screen Grab" period, making video works by recording a computer monitor's output while working, playing video games, or coding. The "Screen Grab" period began with the four-screen video installation My%Desktop (2002), which premiered at the Plugin Media Lab in Basel. The piece appeared to depict large, malfunctioning Mac OS 9 monitors that displayed cascading windows, error messages, and files endlessly replicating themselves. To make this video, Jodi pointed-and-clicked and dragged-and-dropped frantically to give an appearance of uncontrolled chaos.
Their exhibition Jodi: goodmorning goodnight was on display at the Whitney Museum from 2013 to 2015. Another project, OXO (2018), premiered at the Lightbox Gallery at Harvard University and, later that year, would also form Jodi's contribution to the group exhibition Difference Engine at the Lisson Gallery in New York City, New York. The piece is an interactive multichannel installation based on old computer games and tic-tac-toe.
Alongside Difference Engine, Jodi also held their first solo exhibition in the Los Angeles area— a self-titled exhibition at the And/Or Gallery in Pasadena, California which involved, in part, recreating the gallery's coffered ceiling on the floor to be navigated by visitors.
A 2012 Vice magazine article said JODI's work "underlines the innate anarchy of the online medium, an arena that we've come to recognize as public but one that the duo constantly undermines and tweaks to their own purposes."
As of October 2019, My%Desktop is part of the permanent collection presentation of the new MoMA (Museum of Modern Art) in New York. The work is presented as a monumental installation of four adjacent projections, showing screen grabs of JODI's desktop-performance.
Collections
The work of JODI is represented in the permanent collection of the Museum of Modern Art, ZKM Center for Art and Media Karlsruhe, among other venues.
See also
net.art
Superbad.com
New media art
References
Sources
Conner, Michael. (2013). Required Reading: A Closer Look at JODI's 'Untitled Game'. Rhizome Journal. http://rhizome.org/editorial/2013/oct/16/required-reading-closer-look-jodis-untitled-game/
Galloway, Alexander. (2016) Jodi's Infrastructure. E-flux Journal #74. http://www.e-flux.com/journal/74/59810/jodi-s-infrastructure/
Saltiel, Natalie. (2011). From the Rhizome Artbase: %20 Wrong (2000)-JODI. Rhizome Journal. http://rhizome.org/editorial/2011/jul/5/20wrong-jodi-artbase/?ref=search_title.
External links
jodi.org
Full archive of Jodi
Talk with Dirk Peasmans, May 2006
Artists' Biography and list of video works by JODI at Electronic Arts Intermix eai.org.
Thomas Dreher: History of Computer Art, chap. VI.3.2 HTML Art with a wider explanation of one of Jodi's early works.
map.jodi.org
sod.jodi.org
From the Rhizome Artbase: %20wrong (2000)- JODI
Net.artists
Belgian contemporary artists
Webby Award winners
San Jose State University alumni | Jodi (art collective) | [
"Technology"
] | 1,229 | [
"Multimedia",
"Net.artists"
] |
610,582 | https://en.wikipedia.org/wiki/Escapement | An escapement is a mechanical linkage in mechanical watches and clocks that gives impulses to the timekeeping element and periodically releases the gear train to move forward, advancing the clock's hands. The impulse action transfers energy to the clock's timekeeping element (usually a pendulum or balance wheel) to replace the energy lost to friction during its cycle and keep the timekeeper oscillating. The escapement is driven by force from a coiled spring or a suspended weight, transmitted through the timepiece's gear train. Each swing of the pendulum or balance wheel releases a tooth of the escapement's escape wheel, allowing the clock's gear train to advance or "escape" by a fixed amount. This regular periodic advancement moves the clock's hands forward at a steady rate. At the same time, the tooth gives the timekeeping element a push, before another tooth catches on the escapement's pallet, returning the escapement to its "locked" state. The sudden stopping of the escapement's tooth is what generates the characteristic "ticking" sound heard in operating mechanical clocks and watches.
The first mechanical escapement, the verge escapement, was invented in medieval Europe during the 13th century and was the crucial innovation that led to the development of the mechanical clock. The design of the escapement has a large effect on a timepiece's accuracy, and improvements in escapement design drove improvements in time measurement during the era of mechanical timekeeping from the 13th through the 19th century.
Escapements are also used in other mechanisms besides timepieces. Manual typewriters used escapements to step the carriage as each letter (or space) was typed.
History
The invention of the escapement was an important step in the history of technology, as it made the all-mechanical clock possible. The first all-mechanical escapement, the verge escapement, was invented in 13th-century Europe. It allowed timekeeping methods to move from continuous processes such as the flow of water in water clocks, to repetitive oscillatory processes such as the swing of pendulums, enabling more accurate timekeeping. Oscillating timekeepers are the controlling devices in all modern clocks.
Liquid-driven escapements
The earliest liquid-driven escapement was described by the Greek engineer Philo of Byzantium in the 3rd century BC in chapter 31 of his technical treatise Pneumatics, as part of a washstand. A counterweighted spoon, supplied by a water tank, tips over in a basin when full, releasing a spherical piece of pumice in the process. Once the spoon has emptied, it is pulled up again by the counterweight, closing the door on the pumice by the tightening string. Remarkably, Philo's comment that "its construction is similar to that of clocks" indicates that such escapement mechanisms were already integrated in ancient water clocks.
In China, the Tang dynasty Buddhist monk Yi Xing, along with government official Liang Lingzan, made the escapement in 723 (or 725) AD for the workings of a water-powered armillary sphere and clock drive, which was the world's first clockwork escapement. Song dynasty horologists Zhang Sixun and Su Song duly applied escapement devices in their astronomical clock towers in the 10th and 11th centuries, respectively, where water flowed into a container on a pivot. However, the technology later stagnated and retrogressed. According to historian Derek J. de Solla Price, the Chinese escapement spread west and was the source of Western escapement technology.
According to Ahmad Y. Hassan, a mercury escapement in a Spanish work for Alfonso X in 1277 can be traced back to earlier Arabic sources. Knowledge of these mercury escapements may have spread through Europe with translations of Arabic and Spanish texts.
However, none of these were true mechanical escapements, since they still depended on the flow of liquid through a hole to measure time. In these designs, a container tipped over each time it filled up, thus advancing the clock's wheels each time an equal quantity of water was measured out. The time between releases depended on the rate of flow, as do all liquid clocks. The rate of flow of a liquid through a hole varies with temperature and viscosity changes and decreases with pressure as the level of liquid in the source container drops. The development of mechanical clocks depended on the invention of an escapement which would allow a clock's movement to be controlled by an oscillating weight, which would stay constant.
Mechanical escapements
The first mechanical escapement, the verge escapement, was used in a bell-ringing apparatus called an alarum for several centuries before it was adapted to clocks. Some sources claim that French architect Villard de Honnecourt invented the first escapement in 1237, citing a drawing in his notebooks of a rope linkage intended to turn a statue of an angel to follow the sun; however, the consensus is that this was not an escapement.
Astronomer Robertus Anglicus wrote in 1271 that clockmakers were trying to invent an escapement, but had not yet been successful. Records of financial transactions for the construction of clocks point to the late 13th century as the most likely date for when tower clock mechanisms transitioned from water clocks to mechanical escapements. Most sources agree that mechanical escapement clocks existed by 1300.
However, the earliest available description of an escapement was not a verge escapement, but a variation called a strob escapement. Described in Richard of Wallingford's 1327 manuscript on the clock that he built at the Abbey of St. Albans, this consisted of a pair of escape wheels on the same axle, with alternating radial teeth. The verge rod was suspended between them, with a short crosspiece that rotated first in one direction and then the other as the staggered teeth pushed past. Although no other example is known, it is possible that this was the first clock escapement design.
The verge became the standard escapement used in all other early clocks and watches, and remained the only known escapement for 400 years. Its performance was limited by friction and recoil, but most importantly, the early controllers used with verge escapements, the foliot and the plain balance wheel, lacked a balance spring and thus had no natural "beat", severely limiting their timekeeping accuracy.
A great leap in the accuracy of escapements happened after 1657, due to the invention of the pendulum and the addition of the balance spring to the balance wheel, which made the timekeepers in both clocks and watches harmonic oscillators. The resulting improvement in timekeeping accuracy enabled greater focus on the accuracy of the escapement. The next two centuries, the "golden age" of mechanical horology, saw the invention of over 300 escapement designs, although only about ten of these were ever widely used in clocks and watches.
The invention of the crystal oscillator and the quartz clock in the 1920s, which became the most accurate clock by the 1930s, shifted technological research in timekeeping to electronic methods, and escapement design ceased to play a role in advancing timekeeping precision.
Reliability
The reliability of an escapement depends on the quality of workmanship and the level of maintenance given. A poorly constructed or poorly maintained escapement will cause problems. The escapement must accurately convert the oscillations of the pendulum or balance wheel into rotation of the clock or watch gear train, and it must deliver enough energy to the pendulum or balance wheel to maintain its oscillation.
In many escapements, the unlocking of the escapement involves sliding motion; for example, in the animation shown above, the pallets of the anchor slide against the escapement wheel teeth as the pendulum swings. The pallets are often made of very hard materials such as polished stone (for example, artificial ruby), but even so, they normally require lubrication. Since lubricating oil degrades over time due to evaporation, dust, oxidation, etc., periodic re-lubrication is needed. If this is not done, the timepiece may work unreliably or stop altogether, and the escapement components may be subjected to rapid wear. The increased reliability of modern watches is due primarily to the higher-quality oils used for lubrication. Lubricant lifetimes can be greater than five years in a high-quality watch.
Some escapements avoid sliding friction; examples include the grasshopper escapement invented by John Harrison in the 18th century. This may avoid the need for lubrication in the escapement (though it does not obviate the requirement for lubrication of other parts of the gear train).
Accuracy
The accuracy of a mechanical clock depends on the accuracy of its timing device. If this is a pendulum, then the period of swing of the pendulum determines the accuracy. If the pendulum rod is made of metal it will expand and contract with heat, lengthening or shortening the pendulum; this changes the time taken for a swing. Special alloys are used in expensive pendulum-based clocks to minimize this distortion. The arc through which a pendulum swings also varies; highly accurate pendulum-based clocks use very small arcs in order to minimize the circular error.
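To put a rough number on the thermal effect described above, the following sketch (Python; the steel expansion coefficient and one-kelvin temperature rise are illustrative assumptions, not figures from this article) estimates the daily rate change of a simple metal-rod pendulum from T = 2π√(L/g).

```python
import math

# For a simple pendulum, T = 2*pi*sqrt(L/g), so a small length change gives
# dT/T = dL/(2L).  Thermal expansion of the rod: dL/L = alpha * dTemp.
alpha_steel = 12e-6       # per kelvin, typical linear expansion of steel (assumed)
d_temp = 1.0              # kelvin, illustrative temperature rise
seconds_per_day = 86400

d_length_frac = alpha_steel * d_temp
d_period_frac = d_length_frac / 2          # from dT/T = dL/(2L)
daily_error = d_period_frac * seconds_per_day

print(f"Fractional period change: {d_period_frac:.1e}")
print(f"Clock loses about {daily_error:.2f} s per day per 1 K of warming")
# roughly 0.5 s/day, which is why precision pendulum rods use low-expansion alloys
```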
Pendulum-based clocks can achieve outstanding accuracy. Even into the 20th century, pendulum-based clocks were reference timepieces in laboratories.
Escapements play a big part in accuracy as well. The precise point in the pendulum's travel at which impulse is supplied will affect how closely to time the pendulum will swing. Ideally, the impulse should be evenly distributed on either side of the lowest point of the pendulum's swing. This is called "being in beat." This is because pushing a pendulum when it is moving towards mid-swing makes it gain, whereas pushing it while it is moving away from mid-swing makes it lose. If the impulse is evenly distributed then it gives energy to the pendulum without changing the time of its swing.
The pendulum's period depends slightly on the size of the swing. If the amplitude changes from 4° to 3°, the period of the pendulum decreases by about 0.013 percent, which translates into a gain of about 12 seconds per day. This is because the restoring force on the pendulum varies with the sine of the displacement angle rather than linearly with it, so the period is only approximately independent of amplitude in the small-angle regime. For the period to be truly independent of amplitude, the bob's path would have to be cycloidal rather than circular. To minimize this amplitude effect, pendulum swings are kept as small as possible.
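As a check on the 0.013 percent and 12 seconds per day quoted above, this sketch applies the standard first-order circular-error correction T ≈ T₀(1 + θ₀²/16), with θ₀ in radians; the formula is textbook pendulum physics rather than something stated in this article.

```python
import math

def period_factor(amplitude_deg):
    """First-order circular-error correction: T/T0 = 1 + theta0**2 / 16, theta0 in radians."""
    theta0 = math.radians(amplitude_deg)
    return 1 + theta0 ** 2 / 16

seconds_per_day = 86400
f4 = period_factor(4.0)   # period factor at 4 degrees amplitude
f3 = period_factor(3.0)   # period factor at 3 degrees amplitude

fractional_decrease = (f4 - f3) / f4       # period shortens as amplitude falls from 4 to 3 degrees
print(f"Period decrease: {fractional_decrease * 100:.4f} %")        # about 0.013 %
print(f"Daily gain: {fractional_decrease * seconds_per_day:.1f} s") # about 12 s per day
```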
As a rule, whatever the method of impulse, the action of the escapement should have the smallest achievable effect on the oscillator, whether it is a pendulum or the balance in a watch. This effect, which all escapements have to a greater or lesser degree, is known as the escapement error.
Any escapement with sliding friction will need lubrication, but as this deteriorates the friction will increase, and, perhaps, insufficient power will be transferred to the timing device. If the timing device is a pendulum, the increased frictional forces will decrease the Q factor, increasing the resonance band, and decreasing its precision. For spring-driven clocks, the impulse force applied by the spring changes as the spring is unwound, following Hooke's law. For gravity-driven clocks, the impulse force also increases as the driving weight falls and more chain suspends the weight from the gear train; in practice, however, this effect is only seen in large public clocks, and it can be avoided by a closed-loop chain.
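The declining mainspring drive described above can be sketched with the Hooke's-law idealization, in which the spring's torque is proportional to the remaining wind. The stiffness and winding angle below are assumed, order-of-magnitude values, not data from this article.

```python
# Idealized mainspring: torque proportional to remaining wind angle (Hooke's law).
k = 2.0e-6            # N*m of torque per degree of wind (assumed, illustrative)
full_wind_deg = 2000  # winding angle when fully wound (assumed, illustrative)

for unwound in (0, 500, 1000, 1500):
    torque = k * (full_wind_deg - unwound)      # drive torque falls as the spring runs down
    print(f"{unwound:>4} deg unwound -> drive torque {torque * 1e3:.2f} mN*m")
# This steady fall in drive torque is what devices such as the fusee or remontoire,
# mentioned elsewhere in this article, were designed to compensate for.
```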
Watches and smaller clocks do not use pendulums as the timing device. Instead, they use a balance spring: a fine spring connected to a metal balance wheel that oscillates (rotates back and forth). Most modern mechanical watches have a working frequency of 3–4 Hz (oscillations per second) or 6–8 beats per second (21,600–28,800 beats per hour; bph). Faster or slower speeds are used in some watches (33,600 bph or 19,800 bph). The working frequency depends on the balance spring's stiffness (spring constant); to keep time, the stiffness should not vary with temperature. Consequently, balance springs use sophisticated alloys; in this area, watchmaking is still advancing. As with the pendulum, the escapement must provide a small kick each cycle to keep the balance wheel oscillating. Also, the same lubrication problem occurs over time; the watch will lose accuracy (typically it will speed up) when the escapement lubrication starts failing.
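The link between balance-spring stiffness and beat rate mentioned above can be illustrated by treating the balance as a torsional harmonic oscillator, f = (1/2π)√(κ/I), with two beats per oscillation. The stiffness κ and moment of inertia I below are assumed, order-of-magnitude values chosen only to land near a common 4 Hz (28,800 bph) rate.

```python
import math

# Balance wheel as a torsional oscillator: f = sqrt(kappa / I) / (2 * pi)
kappa = 3.8e-5    # N*m per radian, balance-spring stiffness (assumed, illustrative)
inertia = 6.0e-8  # kg*m^2, balance-wheel moment of inertia (assumed, illustrative)

freq_hz = math.sqrt(kappa / inertia) / (2 * math.pi)  # oscillations per second
bph = freq_hz * 2 * 3600                              # two beats per oscillation

print(f"Frequency: {freq_hz:.2f} Hz")                 # about 4 Hz
print(f"Beat rate: {bph:,.0f} beats per hour")        # about 28,800 bph
```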
Pocket watches were the predecessor of modern wristwatches. Pocket watches, being in the pocket, were usually in a vertical orientation. Gravity causes some loss of accuracy as it magnifies over time any lack of symmetry in the weight of the balance. The tourbillon was invented to minimize this: the balance and spring are put in a cage that rotates (typically but not necessarily, once a minute), smoothing gravitational distortions. This very clever and sophisticated clockwork is a prized complication in wristwatches, even though the natural movement of the wearer tends to smooth gravitational influences anyway.
The most accurate commercially produced mechanical clock was the electromechanical Shortt-Synchronome free pendulum clock invented by W. H. Shortt in 1921, which had an uncertainty of about 1 second per year. The most accurate mechanical clock to date is probably the electromechanical Littlemore Clock, built by noted archaeologist E. T. Hall in the 1990s. In Hall's paper, he reports an uncertainty of 3 parts in 10⁹ measured over 100 days (an uncertainty of about 0.02 seconds over that period). Both of these clocks are electromechanical clocks: they use a pendulum as the timekeeping element, but electrical power rather than a mechanical gear train to supply energy to the pendulum.
Mechanical escapements
Since 1658 when the introduction of the pendulum and balance spring made accurate timepieces possible, it has been estimated that more than three hundred different mechanical escapements have been devised, but only about 10 have seen widespread use. These are described below. In the 20th century, electric timekeeping methods replaced mechanical clocks and watches, so escapement design became a little-known curiosity.
Verge escapement
The earliest mechanical escapement, from the late 1200s was the verge escapement, also known as the crown-wheel escapement. It was used in the first mechanical clocks and was originally controlled by a foliot, a horizontal bar with weights at either end. The escapement consists of an escape wheel shaped somewhat like a crown, with pointed teeth sticking axially out of the side, oriented horizontally. In front of the crown wheel is a vertical shaft, attached to the foliot at the top, which carries two metal plates (pallets) sticking out like flags from a flag pole, oriented about ninety degrees apart, so only one engages the crown wheel teeth at a time. As the wheel turns, one tooth pushes against the upper pallet, rotating the shaft and the attached foliot. As the tooth pushes past the upper pallet, the lower pallet swings into the path of the teeth on the other side of the wheel. A tooth catches on the lower pallet, rotating the shaft back the other way, and the cycle repeats. A disadvantage of the escapement was that each time a tooth landed on a pallet, the momentum of the foliot pushed the crown wheel backward a short distance before the force of the wheel reversed the motion. This is called "recoil" and was a source of wear and inaccuracy.
The verge was the only escapement used in clocks and watches for 350 years. In spring-driven clocks and watches, it required a fusee to even out the force of the mainspring. It was used in the first pendulum clocks for about 50 years after the pendulum clock was invented in 1656. In a pendulum clock, the crown wheel and staff were oriented so they were horizontal, and the pendulum was hung from the staff. However, the verge is the most inaccurate of the common escapements, and after the pendulum was introduced in the 1650s, the verge began to be replaced by other escapements, being abandoned only by the late 1800s. By this time, the fashion for thin watches had required that the escape wheel be made very small, amplifying the effects of wear, and when a watch of this period is wound up today, it will often be found to run very fast, gaining many hours per day.
Cross-beat escapement
Jost Bürgi invented the cross-beat escapement in 1584, a variation of the verge escapement which had two foliots that rotated in opposite directions. According to contemporary accounts, his clocks achieved remarkable accuracy of within a minute per day, two orders of magnitude better than other clocks of the time. However, this improvement was probably not due to the escapement itself, but rather to better workmanship and his invention of the remontoire, a device that isolated the escapement from changes in drive force. Without a balance spring, the crossbeat would have been no more isochronous than the verge.
Galileo's escapement
Galileo's escapement is a design for a clock escapement, invented around 1637 by Italian scientist Galileo Galilei (1564 - 1642). It was the earliest design of a pendulum clock. Since he was by then blind, Galileo described the device to his son, who drew a sketch of it. The son began construction of a prototype, but both he and Galileo died before it was completed.
Anchor escapement
Invented around 1657 by Robert Hooke, the anchor (see animation to the right) quickly superseded the verge to become the standard escapement used in pendulum clocks through to the 19th century. Its advantage was that it reduced the wide pendulum swing angles of the verge to 3–6°, making the pendulum nearly isochronous, and allowing the use of longer, slower-moving pendulums, which used less energy. The anchor is responsible for the long narrow shape of most pendulum clocks, and for the development of the grandfather clock, the first anchor clock to be sold commercially, which was invented around 1680 by William Clement, who disputed credit for the escapement with Hooke.
The anchor consists of an escape wheel with pointed, backward slanted teeth, and an "anchor"-shaped piece pivoted above it which rocks from side to side, linked to the pendulum. The anchor has slanted pallets on the arms which alternately catch on the teeth of the escape wheel, receiving impulses. Operation is mechanically similar to the verge escapement, and it has two of the verge's disadvantages: (1) The pendulum is constantly being pushed by an escape wheel tooth throughout its cycle, and is never allowed to swing freely, which disturbs its isochronism, and (2) it is a recoil escapement; the anchor pushes the escape wheel backward during part of its cycle. This causes backlash, increased wear in the clock's gears, and inaccuracy. These problems were eliminated in the deadbeat escapement, which slowly replaced the anchor in precision clocks.
Deadbeat escapement
The Graham or deadbeat escapement was an improvement of the anchor escapement, first made by Thomas Tompion to a design by Richard Towneley in 1675, although it is often credited to Tompion's successor George Graham, who popularized it in 1715. In the anchor escapement, the swing of the pendulum pushes the escape wheel backward during part of its cycle. This 'recoil' disturbs the motion of the pendulum, causing inaccuracy, and reverses the direction of the gear train, causing backlash and introducing high loads into the system, leading to friction and wear. The main advantage of the deadbeat is that it eliminated recoil.
In the deadbeat, the pallets have a second curved "locking" face on them, concentric about the pivot on which the anchor turns. During the extremities of the pendulum's swing, the escape wheel tooth rests against this locking face, providing no impulse to the pendulum, which prevents recoil. Near the bottom of the pendulum's swing, the tooth slides off the locking face onto the angled "impulse" face, giving the pendulum a push, before the pallet releases the tooth. The deadbeat was first used in precision regulator clocks, but because of its greater accuracy superseded the anchor in the 19th century. It is used in almost all modern pendulum clocks except for tower clocks which often use gravity escapements.
Pin wheel escapement
Invented around 1741 by Louis Amant, this version of a deadbeat escapement can be made quite rugged. Instead of using teeth, the escape wheel has round pins that are stopped and released by a scissors-like anchor. This escapement, which is also called Amant escapement or (in Germany) Mannhardt escapement, is used quite often in tower clocks.
Detent escapement
The detent or chronometer escapement was used in marine chronometers, although some precision watches during the 18th and 19th centuries also used it. It was considered the most accurate of the balance wheel escapements before the beginning of the 20th century, when lever escapement chronometers began to outperform them in competition. The early form was invented by Pierre Le Roy in 1748, who created a pivoted detent type of escapement, though this was theoretically deficient. The first effective design of detent escapement was invented by John Arnold around 1775, but with the detent pivoted. This escapement was modified by Thomas Earnshaw in 1780 and patented by Wright (for whom he worked) in 1783; however, as depicted in the patent it was unworkable. Arnold also designed a spring detent escapement but, with improved design, Earnshaw's version eventually prevailed as the basic idea underwent several minor modifications during the last decade of the 18th century. The final form appeared around 1800, and this design was used until mechanical chronometers became obsolete in the 1970s.
The detent is a detached escapement; it allows the balance wheel to swing undisturbed during most of its cycle, except the brief impulse period, which is only given once per cycle (every other swing). Because the driving escape wheel tooth moves almost parallel to the pallet, the escapement has little friction and does not need oiling. For these reasons among others, the detent was considered the most accurate escapement for balance wheel timepieces. John Arnold was the first to use the detent escapement with an overcoil balance spring (patented 1782), and with this improvement his watches were the first truly accurate pocket timekeepers, keeping time to within 1 or 2 seconds per day. These were produced from 1783 onwards.
However, the escapement had disadvantages that limited its use in watches: it was fragile and required skilled maintenance; it was not self-starting, so if the watch was jarred in use so the balance wheel stopped, it would not start up again; and it was harder to manufacture in volume. Therefore, the self-starting lever escapement became dominant in watches.
Cylinder escapement
The horizontal or cylinder escapement, invented by Thomas Tompion in 1695 and perfected by George Graham in 1726, was one of the escapements which replaced the verge escapement in pocketwatches after 1700. A major attraction was that it was much thinner than the verge, allowing watches to be made fashionably slim. Clockmakers found it suffered from excessive wear, so it was not much used during the 18th century, except in a few high-end watches with cylinders made from ruby. The French solved this problem by making the cylinder and escape wheel of hardened steel, and the escapement was used in large numbers in inexpensive French and Swiss pocketwatches and small clocks from the mid-19th to the 20th century.
Rather than pallets, the escapement uses a cutaway cylinder on the balance wheel shaft, which the escape teeth enter one by one. Each wedge-shaped tooth impulses the balance wheel by pressure on the cylinder edge as it enters, is held inside the cylinder as it turns, and impulses the wheel again as it leaves out the other side. The wheel usually had 15 teeth and impulsed the balance over an angle of 20° to 40° in each direction. It is a frictional rest escapement, with the teeth in contact with the cylinder over the whole balance wheel cycle, and so was not as accurate as "detached" escapements like the lever, and the high friction forces caused excessive wear and necessitated more frequent cleaning.
Duplex escapement
The duplex watch escapement was invented by Robert Hooke around 1700, improved by Jean Baptiste Dutertre and Pierre Le Roy, and put in final form by Thomas Tyrer, who patented it in 1782.
The early forms had two escape wheels. The duplex escapement was difficult to make but achieved much higher accuracy than the cylinder escapement, and could equal that of the (early) lever escapement and when carefully made was almost as good as a detent escapement.
It was used in quality English pocketwatches from about 1790 to 1860, and in the Waterbury, a cheap American 'everyman's' watch, during 1880–1898.
In the duplex, as in the chronometer escapement to which it has similarities, the balance wheel only receives an impulse during one of the two swings in its cycle.
The escape wheel has two sets of teeth (hence the name 'duplex'); long locking teeth project from the side of the wheel, and short impulse teeth stick up axially from the top. The cycle starts with a locking tooth resting against the ruby disk. As the balance wheel swings counterclockwise through its center position, the notch in the ruby disk releases the tooth. As the escape wheel turns, the pallet is in just the right position to receive a push from an impulse tooth. Then the next locking tooth drops onto the ruby roller and stays there while the balance wheel completes its cycle and swings back clockwise (CW), and the process repeats. During the CW swing, the impulse tooth falls momentarily into the ruby roller notch again but is not released.
The duplex is technically a frictional rest escapement; the tooth resting against the roller adds some friction to the balance wheel during its swing but this is very minimal. As in the chronometer, there is little sliding friction during impulse since pallet and impulse tooth are moving almost parallel, so little lubrication is needed.
However, it lost favor to the lever; its tight tolerances and sensitivity to shock made duplex watches unsuitable for active people. Like the chronometer, it is not self-starting and is vulnerable to "setting;" if a sudden jar stops the balance during its CW swing, it cannot get started again.
Lever escapement
The lever escapement, invented by Thomas Mudge in 1750, has been used in the vast majority of watches since the 19th century. Its advantages are (1) it is a "detached" escapement; unlike the cylinder or duplex escapements the balance wheel is only in contact with the lever during the short impulse period when it swings through its centre position and swings freely the rest of its cycle, increasing accuracy, and (2) it is a self-starting escapement, so if the watch is shaken so that the balance wheel stops, it will automatically start again. The original form was the rack lever escapement, in which the lever and the balance wheel were always in contact via a gear rack on the lever. Later, it was realized that all the teeth from the gears could be removed except one, and this created the detached lever escapement. British watchmakers used the English detached lever, in which the lever was at right angles to the balance wheel. Later Swiss and American manufacturers used the inline lever, in which the lever is inline between the balance wheel and the escape wheel; this is the form used in modern watches. In 1798, Louis Perron invented an inexpensive, less accurate form called the pin-pallet escapement, which was used in cheap "dollar watches" in the early 20th century and is still used in cheap alarm clocks and kitchen timers.
Grasshopper escapement
A rare but interesting mechanical escapement is John Harrison's grasshopper escapement, invented in 1722. In this escapement, the pendulum is driven by two hinged arms (pallets). As the pendulum swings, the end of one arm catches on the escape wheel and drives it slightly backwards; this releases the other arm, which moves out of the way to allow the escape wheel to pass. When the pendulum swings back again, the other arm catches the wheel, pushes it back and releases the first arm, and so on. The grasshopper escapement has been used in very few clocks since Harrison's time. Grasshopper escapements made by Harrison in the 18th century are still operating. Most escapements wear far more quickly, and waste far more energy. However, like other early escapements, the grasshopper impulses the pendulum throughout its cycle; it is never allowed to swing freely, causing error due to variations in drive force, and 19th-century clockmakers found it uncompetitive with more detached escapements like the deadbeat. Nevertheless, with enough care in construction it is capable of accuracy. A modern experimental grasshopper clock, the Burgess Clock B, had a measured error of only a fraction of a second over 100 running days. After two years of operation, it had an error of only ±0.5 sec after barometric correction.
Gravity escapement
A gravity escapement uses a small weight or a weak spring to give an impulse directly to the pendulum. The earliest form consisted of two arms which were pivoted very close to the suspension spring of the pendulum with one arm on each side of the pendulum. Each arm carried a small deadbeat pallet with an angled plane leading to it. When the pendulum lifted one arm far enough, its pallet would release the escape wheel. Almost immediately, another tooth on the escape wheel would start to slide up the angle face on the other arm thereby lifting the arm. It would reach the pallet and stop. The other arm meanwhile was still in contact with the pendulum and coming down again to a point lower than it had started from. This lowering of the arm provides the impulse to the pendulum. The design was developed steadily from the middle of the 18th century to the middle of the 19th century. It eventually became the escapement of choice for turret clocks, because their wheel trains are subjected to large variations in drive force caused by the large exterior hands, with their varying wind, snow, and ice loads. Since in a gravity escapement, the drive force from the wheel train does not itself impel the pendulum but merely resets the weights that provide the impulse, the escapement is not affected by variations in drive force.
The 'Double Three-legged Gravity Escapement' shown here is a form of escapement first devised by a barrister named Bloxam and later improved by Lord Grimthorpe. It is the standard for all accurate 'Tower' clocks.
In the animation shown here, the two "gravity arms" are coloured blue and red. The two three-legged escape wheels are also coloured blue and red. They work in two parallel planes so that the blue wheel only impacts the locking block on the blue arm and the red wheel only impacts the red arm. In a real escapement, these impacts give rise to loud audible "ticks" and these are indicated by the appearance of a * beside the locking blocks. The three black lifting pins are key to the operation of the escapement. They cause the weighted gravity arms to be raised by an amount indicated by the pair of parallel lines on each side of the escapement. This gain in potential energy is the energy given to the pendulum on each cycle. For the Trinity College Cambridge Clock, a mass of around 50 grams is lifted through 3 mm each 1.5 seconds - which works out to 1 mW of power. The driving power from the falling weight is about 12 mW, so there is a substantial excess of power used to drive the escapement. Much of this energy is dissipated in the acceleration and deceleration of the frictional "fly" attached to the escape wheels.
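The power figure quoted above can be checked with a one-line energy calculation (potential energy per lift divided by the cycle time). The short Python sketch below does exactly that; the 50 g, 3 mm and 1.5 s values are taken from the text, and nothing else is assumed.

```python
# Sanity check of the impulse power quoted for the Trinity College Cambridge clock.
g = 9.81     # gravitational acceleration, m/s^2
m = 0.050    # mass lifted by the gravity arms, kg (50 g, from the text)
h = 0.003    # height of each lift, m (3 mm, from the text)
t = 1.5      # duration of one cycle, s

energy_per_lift = m * g * h      # potential energy gained per cycle, joules
power = energy_per_lift / t      # average power delivered to the pendulum, watts

print(f"energy per lift: {energy_per_lift * 1e3:.2f} mJ")   # ~1.47 mJ
print(f"impulse power:   {power * 1e3:.2f} mW")             # ~0.98 mW, i.e. about 1 mW
```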
The great clock in Elizabeth Tower at Westminster that rings London's Big Ben uses a double three-legged gravity escapement.
Coaxial escapement
Invented around 1974 and patented 1980 by British watchmaker George Daniels, the coaxial escapement is one of the few new watch escapements adopted commercially in modern times.
It could be regarded as having its distant origins in the escapement invented by Robert Robin, c. 1792, which gives a single impulse in one direction with locking achieved by passive lever pallets; the design of the coaxial escapement is more akin to that of another Robin variant, the Fasoldt escapement, which was invented and patented by the American Charles Fasoldt in 1859.
Both Robin and Fasoldt escapements give impulse in one direction only.
The latter escapement has a lever with unequal drops; this engages with two escape wheels of differing diameters. The smaller impulse wheel acts on the single pallet at the end of the lever, whilst the pointed lever pallets lock on the larger wheel.
The balance engages with and is impelled by the lever through a roller pin and lever fork. The lever 'anchor' pallet locks the larger wheel and, on this being unlocked, a pallet on the end of the lever is given an impulse by the smaller wheel through the lever fork. The return stroke is 'dead', with the 'anchor' pallets serving only to lock and unlock, impulse being given in one direction through the single lever pallet.
As with the duplex, the locking wheel is larger in order to reduce pressure and thus friction.
The Daniels escapement, however, achieves a double impulse with passive lever pallets serving only to lock and unlock the larger wheel. On one side, impulse is given by means of the smaller wheel acting on the lever pallet through the roller and impulse pin. On the return, the lever again unlocks the larger wheel, which gives an impulse directly onto an impulse roller on the balance staff.
The main advantage is that this enables both impulses to occur on or around the centre line, with disengaging friction in both directions.
This mode of impulse is in theory superior to the lever escapement, which has engaging friction on the entry pallet; this has long been recognized as a disturbing influence on the isochronism of the balance.
Purchasers no longer buy mechanical watches primarily for their accuracy, so manufacturers had little interest in investing in the required tooling, although Omega finally adopted the escapement in 1999.
Other modern watch escapements
Since low-cost quartz watches achieve accuracy far greater than any mechanical watch, improved escapement designs are no longer motivated by practical timekeeping needs but serve as novelties in the high-end watch market. In an effort to attract publicity, in recent decades some high-end mechanical watchmakers have introduced new escapements. None of these has been adopted by any watchmaker beyond its original creator.
Based on patents initially submitted by Rolex on behalf of inventor Nicolas Déhon, the constant escapement was developed by Girard-Perregaux as working prototypes in 2008 (Nicolas Déhon was then head of Girard-Perregaux R&D department) and in watches by 2013.
The key component of this escapement is a silicon buckled-blade which stores elastic energy. This blade is flexed to a point close to its unstable state and is released with a snap each swing of the balance wheel to give the wheel an impulse, after which it is cocked again by the wheel train. The advantage claimed is that since the blade imparts the same amount of energy to the wheel each release, the balance wheel is isolated from variations in impulse force due to the wheel train and mainspring which cause inaccuracies in conventional escapements.
Parmigiani Fleurier with its Genequand escapement and Ulysse Nardin with its Ulysse Anchor escapement have taken advantage of the properties of silicon flat springs. The independent watchmaker, De Bethune, has developed a concept where a magnet makes a resonator vibrate at high frequency, replacing the traditional balance spring.
Electromechanical escapements
In the late 19th century, electromechanical escapements were developed for pendulum clocks. In these, a switch or phototube energised an electromagnet for a brief section of the pendulum's swing. On some clocks, the pulse of electricity that drove the pendulum also drove a plunger to move the gear train.
Hipp clock
In 1843, Matthäus Hipp first mentioned driving a purely mechanical clock by means of a switch called the "échappement à palette". A modified version of that escapement, the so-called "Hipp toggle", has been used from the 1860s in electrically driven pendulum clocks. From the 1870s, in an improved version, the pendulum drove a ratchet wheel via a pawl on the pendulum rod, and the ratchet wheel drove the rest of the clock train to indicate the time. The pendulum was not impelled on every swing or even at a set interval of time; it was only impelled when its arc of swing had decayed below a certain level. As well as the counting pawl, the pendulum carried a small vane, known as a Hipp's toggle, pivoted at the top and completely free to swing. It was placed so that it dragged across a triangular polished block with a vee-groove in its top. When the arc of swing of the pendulum was large enough, the vane crossed the groove and swung free on the other side. If the arc was too small, the vane never left the far side of the groove, and when the pendulum swung back it pushed the block strongly downwards. The block carried a contact which completed the circuit to the electromagnet that impelled the pendulum. The pendulum was thus impelled only as required.
This type of clock was widely used as a master clock in large buildings to control numerous slave clocks. Most telephone exchanges used such a clock to control timed events, such as those needed for the setup and charging of telephone calls, by issuing pulses of varying durations, such as every second, every six seconds, and so on.
Synchronome switch
Designed in 1895 by Frank Hope-Jones, the Synchronome switch and gravity escapement were the basis for the majority of Synchronome clocks in the 20th century, and also for the slave pendulum in the Shortt–Synchronome free pendulum clock. A gathering arm attached to the pendulum advances a 15-tooth count wheel one position at a time, with a pawl preventing movement in the reverse direction. The wheel carries a vane which, once per 30-second turn, releases the gravity arm. When the gravity arm falls it pushes against a pallet attached directly to the pendulum, giving it a push. Once the arm has fallen, it makes an electrical contact that energises an electromagnet to reset the gravity arm and also serves as the half-minute impulse for the slave clocks.
Free pendulum clock
In the 20th century, the English horologist William Hamilton Shortt invented a free pendulum clock, patented in September 1921 and manufactured by the Synchronome Company, with an accuracy of one-hundredth of a second a day. In this system the timekeeping "master" pendulum, whose rod is made from Invar, a special steel alloy with 36% nickel whose length changes very little with temperature, swings as free of external influence as possible, sealed in a vacuum chamber, and does no work. It is in mechanical contact with its escapement for only a fraction of a second every 30 seconds. A secondary "slave" pendulum turns a ratchet, which triggers an electromagnet slightly less than every thirty seconds. This electromagnet releases a gravity lever onto the escapement above the master pendulum. A fraction of a second later (but exactly every 30 seconds), the motion of the master pendulum releases the gravity lever to fall farther. In the process, the gravity lever gives a tiny impulse to the master pendulum, which keeps that pendulum swinging. The gravity lever falls onto a pair of contacts, completing a circuit that does several things:
energizes a second electromagnet to raise the gravity lever above the master pendulum to its top position,
sends a pulse to activate one or more clock dials, and
sends a pulse to a synchronizing mechanism that keeps the slave pendulum in step with the master pendulum.
Since it is the slave pendulum that releases the gravity lever, this synchronization is vital to the functioning of the clock. The synchronizing mechanism used a small spring attached to the shaft of the slave pendulum and an electromagnetic armature that would catch the spring if the slave pendulum was running slightly late, thus shortening the period of the slave pendulum for one swing. The slave pendulum was adjusted to run slightly slow, such that on approximately every other synchronization pulse the spring would be caught by the armature.
This form of clock became a standard for use in observatories (roughly 100 such clocks were manufactured), and was the first clock capable of detecting small variations in the speed of Earth's rotation.
See also
Escapement (radio control)
References
Notes
Further reading
External links
Mark Headrick's horology page, with animated pictures of many escapements
Performance Of The Daniels Coaxial Escapement, Horological Journal, August 2004
Watch and Clock Escapements, The Keystone (magazine), 1904, via Project Gutenberg: "A Complete Study in Theory and Practice of the Lever, Cylinder and Chronometer Escapements, Together with a Brief Account of the Origin and Evolution of the Escapement in Horology."
US Patent number 5140565, issued 23 March 1992, for a cycloidal pendulum similar to that of Huygens
findarticles.com: Obituary of Professor Edward Hall, The Independent (London), 16 August 2001
American Watchmakers-Clockmakers Institute, non-profit trade association
Federation of the Swiss Watch Industry FH, watch industry trade association
Method for transmitting bursts of mechanical energy from a power source to an oscillating
Alternative Escapements, Europa Star, September 2014
Evolution of the escapement, Monochrome-watches, Xavier Markl, February 2016
Ancient Greek technology
Ancient inventions
Chinese inventions
English inventions
Greek inventions
Hellenistic engineering
Mechanical power control
Timekeeping components | Escapement | [
"Physics",
"Technology"
] | 8,755 | [
"Timekeeping components",
"Mechanics",
"Mechanical power control",
"Components"
] |
610,583 | https://en.wikipedia.org/wiki/Sinc%20function | In mathematics, physics and engineering, the sinc function ( ), denoted by , has two forms, normalized and unnormalized.
In mathematics, the historical unnormalized sinc function is defined for x ≠ 0 by sinc(x) = sin(x)/x.
Alternatively, the unnormalized sinc function is often called the sampling function, indicated as Sa(x).
In digital signal processing and information theory, the normalized sinc function is commonly defined for x ≠ 0 by sinc(x) = sin(πx)/(πx).
In either case, the value at x = 0 is defined to be the limiting value sinc(0) = lim sin(ax)/(ax) = 1 as x → 0, for all real a ≠ 0 (the limit can be proven using the squeeze theorem).
The normalization causes the definite integral of the function over the real numbers to equal 1 (whereas the same integral of the unnormalized sinc function has a value of π). As a further useful property, the zeros of the normalized sinc function are the nonzero integer values of x.
The normalized sinc function is the Fourier transform of the rectangular function with no scaling. It is used in the concept of reconstructing a continuous bandlimited signal from uniformly spaced samples of that signal.
The only difference between the two definitions is in the scaling of the independent variable (the x-axis) by a factor of π. In both cases, the value of the function at the removable singularity at zero is understood to be the limit value 1. The sinc function is then analytic everywhere and hence an entire function.
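As a concrete illustration of the two conventions and of the removable singularity at zero, the short Python/NumPy sketch below evaluates both forms. Note that numpy.sinc implements the normalized convention, so the unnormalized form is obtained here with a small helper function; the name sinc_unnormalized is just a local choice, not a standard API.

```python
import numpy as np

def sinc_unnormalized(x):
    """Unnormalized sinc: sin(x)/x, with the limiting value 1 at x = 0."""
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)            # handles the removable singularity at x = 0
    nz = x != 0
    out[nz] = np.sin(x[nz]) / x[nz]
    return out

x = np.array([0.0, 0.5, 1.0, 2.0, np.pi])
print(np.sinc(x))              # normalized form: 1 at 0, zeros (to rounding) at the integers 1 and 2
print(sinc_unnormalized(x))    # unnormalized form: 1 at 0, zero (to rounding) at x = pi

# The two forms differ only by a scaling of the argument by pi:
assert np.allclose(np.sinc(x), sinc_unnormalized(np.pi * x))
```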
The function has also been called the cardinal sine or sine cardinal function. The term sinc was introduced by Philip M. Woodward in his 1952 article "Information theory and inverse probability in telecommunication", in which he said that the function "occurs so often in Fourier analysis and its applications that it does seem to merit some notation of its own", and his 1953 book Probability and Information Theory, with Applications to Radar.
The function itself was first mathematically derived in this form by Lord Rayleigh in his expression (Rayleigh's formula) for the zeroth-order spherical Bessel function of the first kind.
Properties
The zero crossings of the unnormalized sinc are at non-zero integer multiples of π, while zero crossings of the normalized sinc occur at non-zero integers.
The local maxima and minima of the unnormalized sinc correspond to its intersections with the cosine function. That is, sin(x)/x = cos(x) at every point x where the derivative of sin(x)/x is zero and thus a local extremum is reached. This follows from the derivative of the sinc function: d/dx (sin x / x) = (cos x − sin(x)/x) / x for x ≠ 0.
The first few terms of the infinite series for the x-coordinate of the n-th extremum with positive x-coordinate are x_n ≈ q − q⁻¹ − (2/3)q⁻³ − ⋯, where q = (n + 1/2)π,
and where odd n lead to a local minimum, and even n to a local maximum. Because of symmetry around the y-axis, there exist extrema with x-coordinates −x_n. In addition, there is an absolute maximum at x = 0, where the function takes the value 1.
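The claim that the extrema lie on the cosine curve can be checked numerically: the derivative above vanishes exactly where tan x = x. The plain-Python sketch below uses simple bisection (no special libraries) to find the first positive solution and confirms that sin(x)/x and cos(x) agree there.

```python
import math

def f(x):
    # Numerator of the derivative of sin(x)/x; it is zero exactly where tan(x) = x.
    return x * math.cos(x) - math.sin(x)

# The first positive extremum (a minimum) lies between pi and 3*pi/2.
lo, hi = math.pi, 1.5 * math.pi
for _ in range(80):                  # plain bisection
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

x_ext = 0.5 * (lo + hi)
print(x_ext)                          # ~4.4934, the first positive root of tan x = x
print(math.sin(x_ext) / x_ext)        # ~-0.2172
print(math.cos(x_ext))                # same value: the extremum lies on the cosine curve
```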
The normalized sinc function has a simple representation as the infinite product sin(πx)/(πx) = ∏_{n=1}^∞ (1 − x²/n²),
and is related to the gamma function Γ through Euler's reflection formula: sin(πx)/(πx) = 1/(Γ(1 + x) Γ(1 − x)).
Euler discovered that sin(x)/x = ∏_{n=1}^∞ cos(x/2ⁿ),
and because of the product-to-sum identity
Euler's product can be recast as a sum
The continuous Fourier transform of the normalized sinc (to ordinary frequency f) is the rectangular function: ∫_{−∞}^{∞} sinc(x) e^(−i2πfx) dx = rect(f),
where the rectangular function rect(f) is 1 for argument between −1/2 and 1/2, and zero otherwise. This corresponds to the fact that the sinc filter is the ideal (brick-wall, meaning rectangular frequency response) low-pass filter.
This Fourier integral, including the special case ∫_{−∞}^{∞} sin(πx)/(πx) dx = rect(0) = 1, is an improper integral (see Dirichlet integral) and not a convergent Lebesgue integral, as ∫_{−∞}^{∞} |sin(πx)/(πx)| dx = +∞.
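One direction of this transform pair is easy to probe numerically: integrating exp(2πifx) over f from −1/2 to 1/2 (the inverse Fourier transform of the rectangular function) reproduces the normalized sinc. The sketch below uses a hand-rolled trapezoidal rule so that it depends only on NumPy.

```python
import numpy as np

def rect_inverse_ft(x, n=20001):
    """Integral of exp(2*pi*1j*f*x) df over f in [-1/2, 1/2], by the trapezoidal rule."""
    f = np.linspace(-0.5, 0.5, n)
    integrand = np.exp(2j * np.pi * f * x)
    df = f[1] - f[0]
    return np.real(np.sum((integrand[:-1] + integrand[1:]) * 0.5 * df))

for x in [0.0, 0.25, 1.0, 2.5]:
    print(x, rect_inverse_ft(x), np.sinc(x))   # the two columns agree to many decimal places
```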
The normalized sinc function has properties that make it ideal in relation to interpolation of sampled bandlimited functions (a short numerical sketch follows this list):
It is an interpolating function, i.e., sinc(0) = 1, and sinc(k) = 0 for every nonzero integer k.
The functions x_k(t) = sinc(t − k) (k integer) form an orthonormal basis for bandlimited functions in the function space L²(ℝ), with highest angular frequency ω_H = π (that is, highest cycle frequency f_H = 1/2).
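A small numerical sketch of the interpolation property: a signal containing only frequencies below 0.5 cycles per sample (an arbitrary test signal chosen here, not anything from the text) is sampled at the integers and then reconstructed with the Whittaker–Shannon sum of shifted sinc functions. Truncating the infinite sum to a finite window is the only source of error.

```python
import numpy as np

def signal(t):
    # Bandlimited test signal: both frequencies are below 0.5 cycles per sample.
    return np.sin(2 * np.pi * 0.11 * t) + 0.5 * np.cos(2 * np.pi * 0.23 * t)

n = np.arange(-200, 201)      # integer sample instants
samples = signal(n)

def reconstruct(t):
    # Whittaker-Shannon interpolation: samples weighted by shifted normalized sincs.
    return np.sum(samples * np.sinc(t - n))

for t in [0.3, 7.75, 12.5]:
    print(t, signal(t), reconstruct(t))   # close agreement; the small residual comes from truncation
```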
Other properties of the two sinc functions include:
The unnormalized sinc is the zeroth-order spherical Bessel function of the first kind, j₀(x). The normalized sinc is j₀(πx).
The antiderivative of the unnormalized sinc is the sine integral: ∫₀ˣ (sin t)/t dt = Si(x), where Si is the sine integral function.
The unnormalized sinc function sin(x)/x is one of two linearly independent solutions to the linear ordinary differential equation x·y″ + 2y′ + x·y = 0. The other is cos(x)/x, which is not bounded at x = 0, unlike its sinc function counterpart.
Using normalized sinc,
The following improper integral involves the (not normalized) sinc function:
Relationship to the Dirac delta distribution
The normalized sinc function can be used as a nascent delta function, meaning that the following weak limit holds: lim_{a→0} (1/a) sinc(x/a) = δ(x).
This is not an ordinary limit, since the left side does not converge. Rather, it means that lim_{a→0} ∫_{−∞}^{∞} (1/a) sinc(x/a) φ(x) dx = φ(0) for every Schwartz function φ, as can be seen from the Fourier inversion theorem.
In the above expression, as a → 0, the number of oscillations per unit length of the sinc function approaches infinity. Nevertheless, the expression always oscillates inside an envelope of ±1/(πx), regardless of the value of a.
This complicates the informal picture of δ(x) as being zero for all x except at the point x = 0, and illustrates the problem of thinking of the delta function as a function rather than as a distribution. A similar situation is found in the Gibbs phenomenon.
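The weak limit can be illustrated numerically with a smooth, rapidly decaying test function. The sketch below uses a Gaussian as an arbitrary stand-in for a Schwartz function and shrinks the scale parameter a, showing the smeared integral approaching the value of the test function at zero; the integration window and grid size are just convenient choices.

```python
import numpy as np

phi = lambda x: np.exp(-x ** 2)       # a Schwartz test function with phi(0) = 1

def smeared(a, half_width=50.0, n=2_000_001):
    """Trapezoidal estimate of the integral of (1/a) * sinc(x/a) * phi(x) over x."""
    x = np.linspace(-half_width, half_width, n)
    y = np.sinc(x / a) / a * phi(x)
    dx = x[1] - x[0]
    return np.sum((y[:-1] + y[1:]) * 0.5 * dx)

for a in [1.0, 0.3, 0.1, 0.03]:
    print(a, smeared(a))              # values approach phi(0) = 1 as a shrinks
```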
Summation
All sums in this section refer to the unnormalized sinc function.
The sum of sinc(n) over the integers n from 1 to ∞ equals (π − 1)/2: sin(1)/1 + sin(2)/2 + sin(3)/3 + ⋯ = (π − 1)/2.
The sum of the squares also equals (π − 1)/2: Σ_{n=1}^∞ (sin(n)/n)² = (π − 1)/2.
When the signs of the addends alternate and begin with +, the sum equals 1/2: Σ_{n=1}^∞ (−1)^{n+1} sin(n)/n = 1/2.
The alternating sums of the squares and cubes also equal 1/2.
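These values are easy to probe numerically. The partial sums below (plain Python, unnormalized sinc) agree with (π − 1)/2 and 1/2 to roughly six decimal places at a million terms; convergence of the first and third sums is only conditional, so the cutoff N is simply a pragmatic choice.

```python
import math

N = 1_000_000
s1 = sum(math.sin(n) / n for n in range(1, N + 1))                     # sum of sinc(n)
s2 = sum((math.sin(n) / n) ** 2 for n in range(1, N + 1))              # sum of sinc(n)^2
s3 = sum((-1) ** (n + 1) * math.sin(n) / n for n in range(1, N + 1))   # alternating sum of sinc(n)

print(s1, (math.pi - 1) / 2)   # both ~1.070796
print(s2, (math.pi - 1) / 2)   # both ~1.070796
print(s3, 0.5)                 # both ~0.5
```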
Series expansion
The Taylor series of the unnormalized function can be obtained from that of the sine (which also yields its value of 1 at x = 0): sin(x)/x = Σ_{n=0}^∞ (−x²)ⁿ/(2n + 1)! = 1 − x²/3! + x⁴/5! − ⋯
The series converges for all x. The normalized version follows easily: sin(πx)/(πx) = 1 − (πx)²/3! + (πx)⁴/5! − ⋯
Euler famously compared this series to the expansion of the infinite product form to solve the Basel problem.
Higher dimensions
The product of 1-D sinc functions readily provides a multivariate sinc function for the square Cartesian grid (lattice): sinc_C(x, y) = sinc(x) sinc(y), whose Fourier transform is the indicator function of a square in the frequency space (i.e., the brick wall defined in 2-D space). The sinc function for a non-Cartesian lattice (e.g., hexagonal lattice) is a function whose Fourier transform is the indicator function of the Brillouin zone of that lattice. For example, the sinc function for the hexagonal lattice is a function whose Fourier transform is the indicator function of the unit hexagon in the frequency space. For a non-Cartesian lattice this function cannot be obtained by a simple tensor product. However, the explicit formula for the sinc function for the hexagonal, body-centered cubic, face-centered cubic and other higher-dimensional lattices can be explicitly derived using the geometric properties of Brillouin zones and their connection to zonotopes.
For example, a hexagonal lattice can be generated by the (integer) linear span of the vectors
Denoting
one can derive the sinc function for this hexagonal lattice as
This construction can be used to design Lanczos window for general multidimensional lattices.
Sinhc
Some authors, by analogy, define the hyperbolic sine cardinal function, sinhc(x) = sinh(x)/x.
See also
(cartography)
References
External links
Signal processing
Elementary special functions | Sinc function | [
"Technology",
"Engineering"
] | 1,496 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
610,677 | https://en.wikipedia.org/wiki/Automobile%20engine%20replacement | A replacement automobile engine is an engine or a major part of one that is sold individually without any other parts required to make a functional car (for example a drivetrain). These engines are produced either as aftermarket parts or as reproductions of an engine that has gone out of production.
Use
Replacement engines are used to replace classic car engines that are in poor condition or broken, or to install a more powerful or more fuel efficient engine in a vehicle. Replacement engines are often used to make old cars more reliable for daily driving. Classic car hobbyists may also install reproductions of a rare powerplant in a classic car (this is most often seen in Mopar muscle cars that have the 426 Hemi installed into them).
Aftermarket engines are used in many forms of motorsport. Some late model racecar series use "crate engines" many of which are made by independent firms. This ensures that drivers all have similarly powered racecars. Legends and Allison Legacy Series cars also use sealed crate motors.
Types of replacement engines
The four most common types of replacement engines are:
Remanufactured engines (also known as "re-manned," "reconditioned," or "re-engineered")
Rebuilt engines
Used engines
New engines (also known as "crate engines")
Terminology
Short block - everything between the cylinder head and the oil pan (excluding those items)
Long block - a short block, with mounted and gasketed cylinder head, valves and camshaft
Crate engine - a new or remanufactured engine, considered to be equivalent to a new engine. Parts include more than a long block, including intake manifold, and carburetor or fuel injection system, oil pan, valve covers, and perhaps an alternator
Short block
A short block is an engine sub-assembly comprising the portion of the cylinder block below the head gasket but above the oil pan, which usually includes the assembled engine block, crankshaft, connecting rods, and pistons with piston rings properly installed. An in-block cam engine short block includes the camshaft, timing gear, and any balance shafts. Short blocks for overhead-cam engines do not include those parts. Short block engines became popular after World War II, when mass production led to great consistency between individual engines; before then, most engines were hand-built and had idiosyncratic variations. Short blocks became less popular after the 1970s when overhead camshaft (OHC) engines became the norm, as the rational unit of replacement was the long block, which includes the head, camshaft and valve gear.
A short block is the preferred replacement component for a worn-out engine that requires major servicing beyond the capabilities of a local repair garage, when instead a machine shop may be needed. The short block represents the major wear items of an engine: piston rings, and potentially a rebore of the cylinder bores or replacement liners, together with reground bearings on the crankshaft. Although replacing the rings or bearing shells was at one time considered typical garage work, the need for a boring or crank grinding machine now exceed the capabilities of a standard automotive repair garage. A short block includes the preassembled set of major parts needed that generally exceed the capability of the garage, in one item.
The third item sometimes requiring machining, the re-cutting of valve seats in the cylinder head, was less frequently needed. Grinding of valves to fit was once a regular garage task, as was light re-cutting with hand tools, when cast iron seats were common. Once steel seat inserts came into use, either as a result of the switch to unleaded petrol in the 1970s or fitted into high-performance aluminium heads, machining of heads and the replacement of seats became equally commonly required. Aluminium cylinder heads could also be damaged by warping after overheating, often requiring machining to re-flatten them.
A short block has advantages over dismantling the engine and sending the crankshaft and other related automotive parts away for rework. It is usually quicker to obtain, requiring only a single shipment, rather than having to ship parts to and from the machine shop and the interim time spent at the shop to re-machine those parts. The short block would also have been built in a workshop that was hopefully cleaner and more organised for the specialism of engine building.
Short blocks were generally OHV engines: sidevalve engines were pre-eminent before the short block appeared as a common item, and they in any case offered little saving by omitting their (simple) head.
Long block
A long block is an engine sub-assembly that consists of the assembled short block with crankshaft, cylinder head, camshaft (usually), and valve train. A long block does not include the fuel system, electrical, intake, and exhaust components, as well as other components. A long block may include the balancer/damper, timing cover, oil pan and valve covers.
A long block engine replacement typically requires swapping out parts from the original engine to the long block. These parts can include the oil pan, timing cover, valve covers, intake manifold, emission-control parts, carburetor or fuel injection system, the exhaust manifold(s), alternator, starter, power steering pump (if any), and air conditioner compressor (if any).
Crate engine
Technically, a "crate engine" or "crate motor" is any automobile engine that is shipped to the installer in a crate, which can include short or long block configurations. For this article, a crate engine is defined as a fully-assembled engine that includes more than what is typically installed on a long block; the exact configuration will vary by vendor. It is also sometimes known as a "deluxe long block", which usually includes the fuel and intake system, distributor, oil pan, and ignition system. In some cases, exhaust manifold(s) and the water pump are included.
Crate engines are manufactured by many different companies, but they all share the same characteristics of being complete engines nearly ready to install once removed from the crate. Generally a crate engine still will require additional parts to be fitted, which can range from minor (engine oil dipstick) to major (intake manifold and fuel injection system), depending on the engine package purchased and the targeted vehicle application. This type of engine has various applications including general replacement, hot rod builds, and motorsports competition. Crate engines are often seen as an economical and more reliable solution as opposed to engine overhauls or custom builds. Such engines are built by specialist engine builders, working in clean and well-equipped workshops, rather than general purpose repair garages.
Crate engines may be either brand new, or substantially rebuilt. If rebuilt, they will have been rebuilt to an extent such that they are considered to be of equal quality, reliability and expected lifetime as a new engine.
Applications
Crate engines are well suited in many different vehicle platforms. Engines are often used in the following applications:
General automobile engine replacement
Custom hot rod street builds
Marine engine replacements
Motorsports Competition (Asphalt, dirt track, drag racing etc.)
Advantages
Crate engines are often seen as an economical choice no matter what the application is. In general automobile engine replacement, a crate engine is often very competitively priced when compared to the cost of a full rebuild of a faulty engine. It is also quicker to ship from stock than to wait an equal time for parts, then to begin a rebuild. Installers often opt for the crate engine because of the cost and ease of replacement. Crate engines are typically a bolt-in replacement with no internal work being performed to the engine compared to a complete overhaul that requires internal part replacement by trained mechanics. Hot Rod and other custom street applications also often choose a crate engine because of the higher value when compared to a custom built engine.
In motorsports, the crate engine option has become very popular for various reasons. Crate engines are often a more affordable option when compared to a purpose-built race engine so budget racers often go this route. The crate engine also has developed a large fan base in many different racing series because of the competitive racing. As all racers in the field have identical engines, the races are won by driver's talent and chassis setup, and not the amount of horsepower a team can afford to build into their engine.
EV crate engines
In general, simply swapping an internal combustion engine for an electric traction motor is not sufficient; a complete electric vehicle (EV) drivetrain conversion also requires installation of a storage battery, inverter, reduction gear, and controller. Most of these separate components can be packaged with the motor in a unit that is dimensionally compatible with the existing engine compartment, but the battery is usually the bulkiest, heaviest component of an EV powertrain and can create a significant challenge for fitment. In recent years, the restoration and EV conversion of a classic car has become known as an electromod, a portmanteau of electrification and restomod.
Aftermarket
Hobbyists have been converting cars to EVs since at least the 1960s. Historically, these have used aircraft starter motors and lead-acid batteries; several books have been written to document and guide these conversions, including The Complete Book of Electric Vehicles (Shacket, 1979), How to Convert to an Electric Car (Lucas & Riess, 1980), Convert It (Brown & Prange, 1993), and Build Your Own Electric Vehicle (Brant, 1994). Many recent non-factory electromods are implemented by extracting and adapting the complete drivetrain (traction motor(s), battery, controller, and inverter) from an existing mass-produced EV, such as Tesla. East Coast Defender demonstrated a Tesla EV-sourced powertrain conversion of a 1969–96 Range Rover Classic, developed with Electric Classic Cars, to Motor Trend in 2021. In October 2019 there were no purpose-built crate engine EV kits available commercially, but such projects were in development. For example, EV West announced their Revolt Tesla Crate Motor in 2020, which married an electric traction motor from Tesla with a gear reduction unit and ended in a universal joint yoke, a suitable interface for a driveshaft. Mechanically, the motor is fitted with mounts compatible with Chevrolet small-block engines to take advantage of numerous small-block repower kits.
Automobile manufacturers
Previously in 2018, Chevrolet Performance advanced an "electric crate motor" concept with the unveiling of the eCOPO Camaro at that year's SEMA show. The eCOPO Camaro was a 2019 COPO Camaro which was equipped with a pair of BorgWarner HVH250-150 motor assemblies instead of the conventional piston engine. The electric traction motor essentially served as a drop-in replacement with the same bellhousing bolt pattern and crankshaft flange as the LS engine family, so the car retained the same transmission, driveshaft, and axles as the conventional COPO Camaro. At the 2019 SEMA show, Chevrolet continued to develop the concept, following up with the E-10 Concept, which used the powertrains from two Bolts repackaged into a restored 1962 C-10 pickup truck. The following year for SEMA, Chevrolet showcased the "Electric Connect and Cruise" eCrate package in October 2020, which included the main drivetrain components of a single Bolt (motor, battery, controller, and inverter), and was demonstrated as a retrofit to a restored 1977 K5 Blazer. The kit was scheduled to go on sale in the second half of 2021. The Bolt motor is modified by removing the differential and reduction gear unit, then fitting an adapter plate and crank flange, allowing it to bolt to a conventional transmission. Together with the controller and inverter, the motor occupies approximately the same space as a small-block V8; the battery presents a greater challenge for packaging, which is why the initial development has focused on trucks. It was still being explored as a "future business opportunity", according to Chevrolet Vice President Scott Bell.
In November 2021, Ford Performance released the "Eluminator" crate EV motor, which was the same traction motor used in the Ford Mustang Mach-E GT Performance Edition and used to power the 1978 F-100 Eluminator restomod pickup truck. As of 3 November 2021, it was available for pre-order but not yet shipping.
Common crate engines used in North American racing
General Motors began developing several small block crate race car engines in 2001, and they were released into production in 2002. The engines are sealed, with repairs done by certified rebuilders.
Chevrolet 602
The Chevrolet engine debuted in 2002 with part number 1958602 and sold for a little under $4000 in 2012. It has 350 cubic inch displacement via a 4.000 inch bore and 3.480 inch stroke. The 602 engine is equipped with iron heads, a cast-iron block, and aluminum pistons. It produces about 350 horsepower and 390 foot-pounds of torque at 9.1:1 compression.
Applications for this engine include: IMCA Hobby Stock, IMCA Northern Sport Modified, IMCA Southern Sport Modified, Mid-American Stock cars. Northeastern (United States) Sportman, Crate Racing USA, and others.
Chevrolet 603
The Chevrolet engine has part number 88959603. It has 355 cubic inch displacement and 405 foot pounds of torque at 10.1:1 compression. The 603 engine is equipped with aluminum heads, steel crank, and high silicon pistons. The American Canadian Tour (ACT) late model sportsman utilize this engine.
Chevrolet 604
The Chevrolet engine debuted in 2002 with part number 88958604 and sold for about $5000 in 2012. The 604 engine is equipped with aluminum heads, forged steel crankshaft, and an aluminum intake. It produces about 400 horsepower and 400 foot-pounds of torque with a 9.6:1 compression.
Applications for this engine includes IMCA Modifieds (starting in 2013), CRA All Stars Tour (allowed but not required), United Crate Racing Alliance, Big 8 Series, RUSH Late Models, Crate Racing USA, and others.
Ford 347
Ford Performance 347 Cubic Inches 415 HP Sealed Racing Engine
Replacement blocks
New castings of some engines are sometimes produced by independent companies. These blocks commonly replace rare or popular designs for aftermarket rebuilding, especially when the original is no longer produced. They are sometimes available in aluminum instead of original iron, or in stronger alloys. Often they imitate the larger available displacements that were produced in small numbers or allow for displacements never available.
See also
Engine tuning
Engine swap
Engine configuration
References
External links
GM crate engines
Ford crate engines
Most common types of replacement engines
Complete Crate Engines-Auto Parts USA
Automobile engines | Automobile engine replacement | [
"Technology"
] | 2,981 | [
"Engines",
"Automobile engines"
] |
4,205,483 | https://en.wikipedia.org/wiki/Report%20Definition%20Language | Report Definition Language (RDL) is a standard proposed by Microsoft for defining reports.
RDL is an XML application primarily used with Microsoft SQL Server Reporting Services. It is usually written using Visual Studio, although there are also third-party tools; it may also be created or edited by hand in a text editor. SQL Server Reporting Services or other third-party reporting frameworks use RDL to define charts, graphs, calculations, text, images (through links), and other report objects and render them in a variety of formats.
There are three high-level sections in a typical RDL file (a schematic example follows the list):
Page style - The objects to display including fields, images, graphs, and tables.
Field definitions - The extended attributes of fields that are populated with formulas, dynamic data, or database-derived data.
Parameters and Database connections - Parameters that may be furnished by the user or passed in from another application; and database connections and queries for pulling data into the report.
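A schematic sketch of that three-part structure is shown below, generated with Python's standard xml.etree module. The element names (Report, DataSources, DataSets, Body, ReportItems) reflect common RDL usage, but the exact required elements and the namespace URI vary by RDL schema version, so everything here — including the names ExampleSource and ExampleDataSet — should be treated as illustrative rather than as a valid, complete report definition.

```python
# Schematic RDL-like skeleton; element names and namespace are assumptions, not a full schema.
import xml.etree.ElementTree as ET

NS = "http://schemas.microsoft.com/sqlserver/reporting/2008/01/reportdefinition"  # assumed version
ET.register_namespace("", NS)

report = ET.Element(f"{{{NS}}}Report")

# Parameters and database connections
data_sources = ET.SubElement(report, f"{{{NS}}}DataSources")
source = ET.SubElement(data_sources, f"{{{NS}}}DataSource", Name="ExampleSource")  # hypothetical name
ET.SubElement(source, f"{{{NS}}}ConnectionProperties")

# Field definitions live in data sets populated by queries
data_sets = ET.SubElement(report, f"{{{NS}}}DataSets")
ET.SubElement(data_sets, f"{{{NS}}}DataSet", Name="ExampleDataSet")                # hypothetical name

# Page style: report items (text boxes, tables, charts) are placed in the body
body = ET.SubElement(report, f"{{{NS}}}Body")
ET.SubElement(body, f"{{{NS}}}ReportItems")

print(ET.tostring(report, encoding="unicode"))
```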
References
External links
Report Definition Language Specification
XML-based standards
Microsoft database software | Report Definition Language | [
"Technology"
] | 209 | [
"Computer standards",
"XML-based standards"
] |
4,205,544 | https://en.wikipedia.org/wiki/World%20Wide%20Molecular%20Matrix | The World Wide Molecular Matrix (WWMM) was a proposed electronic repository for unpublished chemical data. First introduced in 2002 by Peter Murray-Rust and his colleagues in the chemistry department at the University of Cambridge in the United Kingdom, WWMM provided a free, easily searchable database for information about thousands of complicated molecules, data that would otherwise remain inaccessible to scientists.
Murray-Rust, a chemical informatics specialist, has estimated that 80% of the results produced by chemists around the world is never published in scientific journals. Most of this data is not ground-breaking, yet it could conceivably be of use to scientists doing related projects—if they could access it. The WWMM was proposed as a solution to this problem. It would house the results of experiments on over 100,000 molecules in physical chemistry, organic chemistry, biochemistry and medicinal chemistry.
In other scientific fields, the need for a similar depository to house inaccessible information could be more acute. In a presentation at the "CERN Workshop on Innovations in Scholarly Communications (OAI4)", Murray-Rust said that chemistry actually leads other fields in published data. He estimated that the majority of the data in some scientific fields never reaches publication.
Although scientific in nature, the WWMM was part of the broader open archives and open source movements, pushes to make more and more information freely available to any user via the Internet or World Wide Web. In his CERN presentation, Murray-Rust stated that the WWMM was a "response to the expense of [scientific] journals", and he asked the rhetorical question, "Can we win the war to make data open, or will it be absorbed into the publishing and pseudo-publishing world?" Murray-Rust and his colleagues are also responsible for the development of the Chemical Mark-up Language (CML), a variant of XML intended for chemists.
See also
The open archives initiative (OAI)
The science of Informatics
Chemical Mark-up Language (CML)
References
External links
The home page of Dr. Peter Murray-Rust at the University of Cambridge
The Cambridge Center for molecular informatics
An outline of the WWMM
CERN Workshop on Innovations in Scholarly Communication (OAI4)
Data management | World Wide Molecular Matrix | [
"Technology"
] | 458 | [
"Data management",
"Data"
] |
4,205,730 | https://en.wikipedia.org/wiki/Meat%20hook | A meat hook is any hook normally used in butcheries to hang meat. This form of hook is a variation on the classic S hook.
Types
An S-shaped hook or jointed hook is used to hang up meat or the carcasses of animals such as pigs and cattle on a moving conveyor line. The jointed hook is able to swivel, allowing the carcass to be turned more easily.
A gambrel hook or stick is a frame (shaped like a horse's hind leg) with hooks for suspending a carcass in a more spread out fashion.
A grip hook is a single hook with a handle of some kind, to hold on to a carcass while butchering.
A bacon hook or bacon hanger is a multi-pronged coat-hanger type hook, used to hang bacon joints and other meat.
References
External links
Mechanical hand tools | Meat hook | [
"Physics"
] | 184 | [
"Mechanics",
"Mechanical hand tools"
] |
4,205,746 | https://en.wikipedia.org/wiki/Vacuum%20engineering | Vacuum engineering is the field of engineering that deals with the practical use of vacuum in industrial and scientific applications. Vacuum may improve the productivity and performance of processes otherwise carried out at normal air pressure, or may make possible processes that could not be done in the presence of air. Vacuum engineering techniques are widely applied in materials processing such as drying or filtering, chemical processing, application of metal coatings to objects, manufacture of electron devices and incandescent lamps, and in scientific research. Key developments in modern science owe their roots to exploiting vacuum engineering, be it discovering fundamental physics using particle accelerators (one needs to evacuate the space where elementary particles are made to collide), the advanced analytical equipment used to study physical properties of materials or the vacuum chambers within which cryogenic systems are placed to execute operations in solid state Qubits for quantum computation. Vacuum engineering also has its deep bearings in manufacturing technology.
Vacuum techniques vary depending on the desired vacuum pressure to be achieved. For a "rough" vacuum, over 100 Pascals pressure, conventional methods of analysis, materials, pumps and measuring instruments can be used, whereas ultrahigh vacuum systems use specialized equipment to achieve pressures below one-millionth of one Pascal. At such low pressures, even metals may emit enough gas to cause serious contamination.
Design and mechanism
Vacuum systems usually consist of gauges, vapor jet and other pumps, vapor traps and valves, along with other connecting piping. A vessel operating under vacuum may be a processing tank, a steam simulator, a particle accelerator, or any other type of enclosed chamber maintained at less than atmospheric gas pressure. Since a vacuum is created in an enclosed chamber, the ability to withstand external atmospheric pressure is the usual design consideration. To guard against buckling or collapse, the outer shell of the vacuum chamber is carefully evaluated, and any sign of deterioration is corrected by increasing the thickness of the shell itself. The main materials used for vacuum design are usually mild steel, stainless steel, and aluminum. Other materials such as glass are used for gauge glasses, view ports, and sometimes electrical insulation. The interior of the vacuum chamber should always be smooth and free of rust and defects.
High-pressure solvents are usually used to remove any excess oil and contaminants that would negatively affect the vacuum. Because a vacuum chamber is an enclosed space, only very specific detergents can be used, to prevent any hazards during cleaning. Any vacuum chamber should have a certain number of access and viewing ports, usually in the form of flange connections for the attachment of pumps, piping or any other parts required for system operation. The design of the vacuum chamber's sealing capability is extremely important: the chamber itself must be airtight to maintain the vacuum. This is ensured through leak checking, generally using a mass spectrometer leak detector. All openings and connections are also assembled with O-rings and gaskets to prevent any further leakage of air into the system.
Technology
Vacuum engineering uses techniques and equipment that vary depending on the level of vacuum used. Pressure slightly reduced from atmospheric pressure may be used to control airflow in ventilation systems, or in material handling systems. Lower-pressure vacuums may be used in vacuum evaporation in processing of food stuffs without excessive heating. Higher grades of vacuum are used for degassing, vacuum metallurgy, and in the production of light bulbs and cathode ray tubes. So-called "ultrahigh" vacuums are required for certain semiconductor processing; the "hardest" vacuums with the lowest pressure are produced for experiments in physics, where even a few stray atoms of air would interfere with the experiment in progress.
Apparatus used varies with decreasing pressure. Blowers give way to various kinds of reciprocating and rotary pumps. For some important applications, a steam ejector can quickly evacuate a large process vessel to a rough vacuum, sufficient for some processes or as a preliminary to more complete pumping processes. The invention of the Sprengel pump was a critical step in the development of the incandescent light bulb as it allowed creation of a vacuum that was higher than previously available, which extended the life of the bulbs. At higher vacuum levels (lower pressures), diffusion pumps, absorption, cryogenic pumps are used. Pumps are more like "compressors" since they gather the rarefied gases in the vacuum vessel and push them into a much higher pressure, smaller volume, exhaust. A chain of two or more different kinds of vacuum pumps may be used in a vacuum system, with one "roughing" pump removing most of the mass of air from the system, and the additional stages handling relatively smaller amounts of air at lower and lower pressures. In some applications, a chemical element is used to combine with the air remaining in an enclosure after pumping. For example, in electronic vacuum tubes, a metallic "getter" was heated by induction to remove the air left after initial pump down and closure of the tubes. The "getter" would also slowly remove any gas evolved within the tube during its remaining life, maintaining sufficiently good vacuum.
Applications
Vacuum technology is a method used to evacuate air from a closed volume by creating a pressure differential from the closed volume to some vent, the ultimate vent being the open atmosphere. In an industrial vacuum system, a vacuum pump or generator creates this pressure differential. A variety of technical inventions grew out of the idea of vacuum explored during the 17th century, ranging from vacuum-generation pumps to X-ray tubes, which were later introduced to the medical field as sources of X-ray radiation. The vacuum environment has come to play an important role in scientific research, as new discoveries continue to be made by returning to the fundamentals of pressure. A "perfect vacuum" cannot be realized, but it can be very nearly approximated thanks to the technological advances of the early 20th century. Vacuum engineering today uses a range of different materials, from aluminum to zirconium and just about everything in between. There is a popular belief that vacuum technology deals only with valves, flanges, and other vacuum components, but novel scientific discoveries are often made with the assistance of these traditional vacuum technologies, especially in high-tech fields. Vacuum engineering is used for compound semiconductors, power devices, memory and logic devices, and photovoltaics.
Another technical invention is the vacuum pump, which removes gas molecules from a sealed volume, leaving behind a partial vacuum. More than one vacuum pump is often used in a single application to maintain a continuous flow, keeping a clear path for gas molecules to be drawn out of the process in an attempt to approach a perfect vacuum. A partial vacuum can be produced by positive displacement pumps. A positive displacement pump transfers the gas load from the inlet to the exhaust port, but because of its design limitations it can only achieve a relatively low vacuum. To reach a higher vacuum, other techniques must be used. Using a series of pumps, such as following a fast pump-down with a positive displacement pump, will create a much better vacuum than using a single pump. The combination of pumps used is usually determined by the vacuum requirements of the system. Some applications in the chemical, pharmaceutical, oil and gas and other industries require complex process vacuum systems.
Materials
Materials for use in vacuum systems must be carefully evaluated. Many materials have a degree of porosity, while unimportant at ordinary pressures, would continually admit minute amounts of air into a vacuum system if incorrectly used. Some items, such as rubber and plastic, give off gases into a vacuum that can contaminate the system. At high and ultrahigh vacuum levels, even metals must be carefully selected - air molecules and moisture can cling to the surface of metals, and any trapped gas within the metal may percolate to the surface under vacuum. In some vacuum systems, a simple coating of low-volatile grease is sufficient to seal gaps in joints, but at ultrahigh vacuum, fittings must be carefully machined and polished to minimize trapped gas. It is usual practice to bake components of a high-vacuum system; at high temperatures, any gases or moisture adhering to the surface are driven off. However, this requirement affects which materials can be used. For low pressure applications, it is possible to post process even 3-D printed plastic to make vacuum systems.
Particle accelerators are the largest ultrahigh vacuum systems and can be up to kilometres in length.
Vacuum systems have been studied for a long time so now the properties of basic materials used in vacuum tubes (carbon, ceramics, copper, glass, graphite, iron, mica, nickel, precious metals, refractory metals, steel, and all relevant alloys) and well understood, including their joining techniques and how to deal with common problems such as secondary emission and voltage breakdown.
History
The word "vacuum" originates from the Latin "vacuus", meaning "empty". Physicists use vacuum to describe a partially empty space from which air or other gases have been removed. The idea of a vacuum as empty space was speculated about as early as the 5th century BC by Greek philosophers; Aristotle (384–322 BC) held that a vacuum, truly empty space, could never exist in nature. This idea persisted for centuries, until vacuum physics and technology emerged in the 17th century. In the mid 17th century, Evangelista Torricelli studied the properties of a vacuum generated by a mercury column in a glass tube; this became the barometer, an instrument to observe variations in atmospheric air pressure. Otto von Guericke spectacularly demonstrated the effect of atmospheric pressure in 1654, when teams of horses could not separate two 20-inch diameter hemispheres, which had been placed together and evacuated. In 1698, Thomas Savery patented a steam pump that relied on condensation of steam to produce a low-grade vacuum, for pumping water out of mines. The apparatus was improved in the Newcomen atmospheric engine of 1712; while inefficient, it allowed coal mines to be exploited that would otherwise flood with ground water. Galileo (1564–1642) was one of the first physicists to conduct experiments on vacuum, measuring the force needed to produce one with a piston in a cylinder. This discovery was shared with other scientists, and the French scientist and philosopher Blaise Pascal used it for further research on vacuum; his findings paralleled Torricelli's, as Pascal used similar methods to produce a vacuum with mercury. It was not until 1661 that the mayor of the city of Magdeburg, Otto von Guericke, drew on these discoveries to create the first air pump, modify water pump designs, and improve manometers. Vacuum engineering nowadays provides solutions for thin-film needs in the mechanical industry; this method of engineering is typically used for R&D or large-scale material production.
Vacuum was used to propel trains experimentally.
Pump technology hit a plateau until Geissler and Sprengel in the mid 19th century finally gave access to the high-vacuum regime. This led to the study of electrical discharges in vacuum, the discovery of cathode rays, the discovery of X-rays, and the discovery of the electron. The photoelectric effect was observed in high vacuum, a key discovery that led to the formulation of quantum mechanics and much of modern physics.
See also
Foreline
Joining materials
Negative pressure (disambiguation)
Suction
Ultra high vacuum
Vacuum metallurgy
Vacuum arc remelting
Vacuum deposition
Vacuum induction melting
Vacuum plasma spraying
Vacuum molding
Vacuum casting
Vacuum chamber
Vacuum distillation
Vacuum evaporation
Vacuum flange
Vacuum furnace
Vacuum gauge
Vacuum grease
Vacuum oven
Vacuum packing
Vacuum pump
Vacuum tube
References
Vacuum pumps
Engineering disciplines
V | Vacuum engineering | [
"Physics",
"Engineering"
] | 2,489 | [
"Applied and interdisciplinary physics",
"Vacuum pumps",
"Vacuum",
"Mechanical engineering",
"nan",
"Electrical engineering",
"Vacuum systems",
"Matter"
] |
4,205,876 | https://en.wikipedia.org/wiki/Auxochrome | In organic chemistry, an auxochrome () is a group of atoms attached to a chromophore which modifies the ability of that chromophore to absorb light. They themselves fail to produce the colour, but instead intensify the colour of the chromogen when present along with the chromophores in an organic compound. Examples include the hydroxyl (), amino (), aldehyde (), and methyl mercaptan groups ().
An auxochrome is a functional group of atoms with one or more lone pairs of electrons which, when attached to a chromophore, alters both the wavelength and intensity of absorption. If these groups are in direct conjugation with the pi-system of the chromophore, they may increase the wavelength at which the light is absorbed and as a result intensify the absorption. A feature of these auxochromes is the presence of at least one lone pair of electrons, which can be viewed as extending the conjugated system by resonance.
Effects on chromophore
An auxochrome intensifies the colour of an organic compound. For example, benzene does not display colour as it does not have a chromophore, but nitrobenzene is pale yellow because of the presence of a nitro group (−NO2), which acts as a chromophore. p-Hydroxynitrobenzene, in which the −OH group acts as an auxochrome, exhibits a deep yellow colour. Here the auxochrome (−OH) is conjugated with the chromophore −NO2. Similar behaviour is seen in azobenzene, which has a red colour, while p-hydroxyazobenzene is dark red in colour.
The presence of an auxochrome in a chromogen is essential to make a dye. However, if an auxochrome is present in the meta position to the chromophore, it does not affect the colour.
An auxochrome is known as a functional group that produces a bathochromic shift, also known as red shift because it increases the wavelength of absorption, therefore moving closer to infrared light. Woodward−Fieser rules estimate the shift in wavelength of maximum absorption for several auxochromes attached to a conjugated system in an organic molecule.
An auxochrome helps a dye to bind to the object that is to be coloured. Electrolytic dissociation of the auxochrome group helps in binding and it is due to this reason a basic substance takes an acidic dye.
Explanation for the colour modification
A molecule exhibits colour because it absorbs colours only of certain frequencies and reflects or transmits others. They are capable of absorbing and emitting light of various frequencies. Light waves with frequency very close to their natural frequency are absorbed readily. This phenomenon, known as resonance, means that the molecule can absorb radiation of a particular frequency which is the same as the frequency of electron movement within the molecule. The chromophore is the part of the molecule where the energy difference between two different molecular orbitals falls within the range of the visible spectrum and hence absorbs some particular colours from visible light. Hence the molecule appears coloured. When auxochromes are attached to the molecule, the natural frequency of the chromophore gets changed and thus the colour gets modified. Different auxochromes produce different effects in the chromophore which in turn causes absorption of light from other parts of the spectrum. Normally, auxochromes which intensify the colour are chosen.
Classification
There are mainly two types of auxochromes:
Acidic: −COOH, −OH, −SO3H
Basic: −NH2, −NHR, −NR2
References
Chemical compounds
Color
Chemical reactions | Auxochrome | [
"Physics",
"Chemistry"
] | 783 | [
"Chemical compounds",
"Molecules",
"nan",
"Matter"
] |
4,206,053 | https://en.wikipedia.org/wiki/Linnaean%20enterprise | The Linnaean enterprise is the task of identifying and describing all living species. It is named after Carl Linnaeus, a Swedish botanist, ecologist and physician who laid the foundations for the modern scheme of taxonomy.
As of 2006, the Linnaean enterprise is considered to have barely begun. There are estimated to be 10 million living species, but only about 1.5–1.8 million have even been named, and fewer than 1% of these have been studied enough to understand the basics of their ecological roles. The Linnaean enterprise plays a role in both applied and basic science. In applied science, it can assist in finding new natural products and species (bioprospecting) and in developing effective conservation practices. In basic science, it allows for an understanding of evolutionary biology and of how ecosystems function.
The cost of completing the Linnaean Enterprise has been estimated at US $5 billion.
Name
Carl Linnaeus (1707–1778) was one of the most well known natural scientists of his time. Very unsatisfied with the contemporary way of naming living things, he was responsible for creating the binomial nomenclature system still used in science to name species of organisms. Linnaeus's work laid the basis of modern taxonomy. As part of his work, Linnaeus formally described and classified numerous species of plants and animals, and created binomial (scientific) names that still are used today for many of the most common species in Europe. Notably, Linnaeus's taxonomic system was the first where humans were taxonomically grouped with apes, classifying both genus Homo as well as Simia (now defunct and replaced by several other genera) to be members of order Primates.
See also
Catalogue of Life
Encyclopedia of Life
Wikispecies
References
Sources
Edward O. Wilson, A Global Biodiversity Map, Science 29 September 2000: Vol. 289. no. 5488, p. 2279
Taxonomy (biology)
Carl Linnaeus | Linnaean enterprise | [
"Biology"
] | 386 | [
"Taxonomy (biology)"
] |
4,206,155 | https://en.wikipedia.org/wiki/The%20World%20Economy%3A%20Historical%20Statistics | The World Economy: Historical Statistics is a landmark book by Angus Maddison. Published in 2004 by the OECD Development Centre, it studies the growth of populations and economies across the centuries: not just the world economy as it is now, but how it was in the past.
Among other things, it showed that Europe's gross domestic product (GDP) per capita grew faster than that of the leading Asian economies from 1000 AD onward, again reaching a higher level than elsewhere from the 15th century, while Asian GDP per capita remained static until 1800, when it even began to shrink in absolute terms, as Maddison demonstrated in a subsequent book. At the same time, Maddison showed the Asian economies recovering lost ground from the 1950s, and documented the much faster rise of Japan and East Asia and the economic shrinkage of Russia in the 1990s.
The book is a mass of statistical tables, mostly on a decade-by-decade basis, along with notes explaining the methods employed in arriving at particular figures. It is available both as a paperback book and in electronic format. Some tables are available on the official website.
See also
List of regions by past GDP (PPP) per capita
Angus Maddison statistics of the ten largest economies by GDP (PPP)
Maddison Project, a project started in March 2010 to continue Maddison's work after his death
References
External links
Angus Maddison's Homepage at the Groningen Growth and Development Centre
Official website of The World Economy
2004 non-fiction books
Demography
Economic growth
Books about economic history | The World Economy: Historical Statistics | [
"Environmental_science"
] | 313 | [
"Demography",
"Environmental social science"
] |
4,206,403 | https://en.wikipedia.org/wiki/Proto-Ionians | The Proto-Ionians are the hypothetical earliest speakers of the Ionic dialects of Ancient Greek, chiefly in the works of Jean Faucounau. The relation of Ionic to the other Greek dialects has been subject to some debate. It is mostly grouped with Arcadocypriot as opposed to Doric, reflecting two waves of migration into Greece following the Proto-Greek period, but sometimes also as separate from Arcadocypriot on equal footing with Doric, suggesting three distinct waves of migration.
Position of Ionic Greek
Mainstream Greek linguistics separates the Greek dialects into two large genetic groups, one including Doric Greek and the other including both Arcadocypriot and Ionic Greek. But alternative approaches proposing three groups are not uncommon; Thumb and Kieckers (1932) propose three groups, classifying Ionic as genetically just as separate from Arcadocypriot as from Doric, and a few other linguists (Vladimir Georgiev, C. J. Ruijgh, P. Lévêque, etc.) have taken similar positions. The bipartite classification is known as the "Risch–Chadwick theory", named after its two famous proponents, Ernst Risch and John Chadwick.
The "Proto-Ionians" first appear in the work of Ernst Curtius (1887), who believed that the Attic-Ionic dialect group was due to an "Ionicization" of Attica by immigration from Ionia in historical times. Curtius hypothesized that there had been a "Proto-Ionian" migration from the Balkans to western Anatolia in the same period that brought the Arcadic dialect (the successor of the Mycenean Greek stage yet undiscovered in the time of Curtius) to mainland Greece. Curtius' hypothesis was endorsed by George Hempl in 1920. Hempl preferred to call these hypothetical, early Anatolian Greeks "Javonians". Hempl attempted to defend a reading of Hittite cuneiform as Greek, in spite of the establishment of the Hittite language as a separate branch of Indo-European by Hrozný in 1917.
Faucounau
The tripartite theory was revived by amateur linguist Jean Faucounau. In his view, the first Greek settlers in their historical territory were the (Pelasgic) "proto-Ionians", who were separated around 3000 BC from both the proto-Dorians and the proto-Mycenaeans. Faucounau traces this three-wave model to similar views put forward by Paul Kretschmer in the 1890s and the 1900s (i.e., before the decipherment of Linear B), with a modification: the (proto-Ionic) First wave came by sea, the "Proto-Ionians" settling first in the Cycladic Islands, then in Euboea and Attica. The last two waves are the generally accepted arrival of the Mycenaean Greeks (the linguistic predecessors of the Arcadocypriot speakers) in around 1700 BC and the Dorian invasion around 1100 BC.
Following Georgiev, Faucounau makes three arguments for the proto-Ionic language. The first is the explanation of certain Mycenaean forms as loan-words from the proto-Ionians already present in Greece: he points to the unexpected absence of digamma from some Mycenaean words, the occasional resolution of Indo-European vocalic r to -or/-ro- instead of -ar/-ra- (as in to-pe-za for τράπεζα), and the explanation of Mycenaean pa-da-yeu as Greek παδάω/πηδάω, "spring, leap, bound", which he interprets as both cognate with, and having the same meaning as, English paddle.
The second argument is a refinement of a long-established argument in archaeoastronomy, developed most recently by Michael Ovenden, which considers the motion of the North Pole with respect to the fixed stars, because of the precession of the equinoxes. Ovenden concluded, from the slant of the constellations in the present sky and the hypothesis that Aratus and Hipparchus (insofar as his work survives) correctly and completely represent immemorial tradition, that the constellations we now use had been devised when the Pole was in Draco, about 2800 BC. He also concluded that the inventors probably lived between 34°30' and 37°30' N., north of most ancient civilizations, and so were likely to be the Minoans.
Dr. Crommelin, FRAS, has disputed this latitude, arguing that the constellation makers could only see down to about 54° S, but that this is compatible with latitudes as low as the 31°N of Alexandria, because stars which only skirt the southern horizon by a few degrees are not effectively visible. Assuming a Greek latitude would render Canopus and Fomalhaut invisible. Crommelin estimates the constellation makers at 2460 BC; R. A. Proctor has estimated 2170 BC, and E. W. Maunder 2700 BC.
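The latitude argument can be made concrete with a little spherical-astronomy arithmetic (this sketch is added here for illustration and is neither Crommelin's nor Faucounau's calculation): from latitude φ an observer can, in principle, see stars down to declination φ − 90°, but stars that culminate only a few degrees above the horizon are not effectively visible.

```python
# From geographic latitude lat (degrees north), the southernmost declination
# that ever rises is lat - 90.  Requiring a star to culminate at least
# `min_altitude` degrees above the horizon raises that limit.
def southern_limit(lat_deg: float, min_altitude: float = 0.0) -> float:
    return lat_deg - 90.0 + min_altitude

for lat in (31.0, 35.0, 37.3):            # Alexandria, south Crete, Syros
    print(lat, southern_limit(lat), southern_limit(lat, min_altitude=5.0))
# From 31 N the geometric limit is -59 deg; demanding roughly 5 deg of
# altitude gives about -54 deg, the practical limit argued for above.
```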
Faucounau's addition to this is the argument that Crete is also too far south, that the names of the constellations are (Ionic) Greek, not Minoan, and therefore that the constellation makers must be the proto-Ionians in the Cyclades. The south coast of Crete follows 35°N latitude; Syros, which he identifies as a center of proto-Ionian civilization, is at 37°20'. On this basis, he identifies the proto-Ionians with the archaeological Early Cycladic II culture: after all, they made round "frying pans," and one of them with an incised spiral, and the Phaistos Disc is round with an incised spiral.
His third argument depends on Herodotus's somewhat obscure use of the word Pelasgian for various peoples, Greek-speaking and otherwise, around the Aegean basin. Faucounau claims that the word, which he derives idiosyncratically from πελαγος, "sea", means the descendants of the proto-Ionians. Some of them lost their language because they settled among foreigners; others, such as the Athenians, preserved their language - Attic, apparently, arises from a mixture of proto-Ionian and other dialects. He does not explain why Homer speaks of Dodona, inland in north-western Greece, as Pelasgian (Il, 16,233); nor why no place in historic Ionia is called Pelasgian.
He adds to the above arguments with archaeological facts. For example, the Treaty of Alaksandu between Wilusa and the Hittite empire bore a Greek name at a time when there was no Mycenaean pottery at Troy. Faucounau considers that all these arguments are an indirect confirmation of his own decipherment claim of the Phaistos Disk as proto-Ionic.
Faucounau's "Proto-Ionic Disk Language" has most of the properties of Homeric Greek, including loss of labiovelars and even of digamma (both are preserved intact in the Mycenaean of the 14th century BC). Digamma, in Faucounau's reading of the Phaistos Disk, has in some instances passed to y, a sound shift not known from any other Greek dialect, but suspected in Ionic (e.g. Ion. païs v/etym. paus). For Faucounau, the Pelasgians, the Trojans, the Carians and the Philistines are all descended from the Proto-Ionians.
Faucounau's work on this subject has received two scholarly notices. Paul Faure, as below, writes warmly of many parts of the Proto-Ionian theory. He declines to address the decipherment, and omits the Celts; he also dates the Middle Cycladic culture only from 2700 BC, not 2900. Yves Duhoux expresses his disbelief in the decipherment, but does not mention the wider theory, except to deny that the Disc came from Syros. Faucounau's paper on the statistical problem of how many glyphs are likely to be omitted from a short text has never been cited. Most of it addresses the long-solved case in which the glyphs are equally likely.
See also
Pelasgians
Greek dialects
Dorian invasion
References
Jean Faucounau, Le déchiffrement du Disque de Phaistos, Paris 1999.
Jean Faucounau, Les Proto-Ioniens : histoire d'un peuple oublié, Paris 2001. Esp. pp. 33ff, 35ff, 37f, p. 57, p. 61, p. 63 124.
Review: Paul Faure, Revue des études grecques Vol. 15 (2002), p. 424f.
Jean Faucounau, Les Peuples de la Mer et leur Histoire, Paris 2003.
Jean Faucounau, Les Origines Grecques à l'Age de Bronze, Paris 2005.
Vladimir Georgiev, "Mycénien et homérique: Le problème du digamma" in Proceedings of the Cambridge Colloquium on Mycenaean Studies, Cambridge 1966, p. 104-124.
Vladimir Georgiev, "Le traitement des sonantes voyelles indo-européennes et le problème du caractère de la langue mycénienne" in Acta Mycenaea, Salamanca 1972, p. 361-379.
Jonathan M. Hall, Hellenicity: Between Ethnicity and Culture. University of Chicago Press, 2002, p. 39.
George Hempl, Prehistoric Wanderings of the Hittite Greeks, in Mediterranean Studies, Vol III. Stanford University Press (1931).
Paul Kretschmer, Einleitung in die Geschichte der griechischen Sprache (1896).
Pierre Lévêque, L'aventure grecque, p. 16-29.
Michael W. Ovenden, The Origin of the Constellations in The Philosophical Journal 3 (1966), p. 1-18.
A. C. D. Crommelin "The ancient Constellation Figures" in Hutchinson's Splendour of the Heavens London, 1923 Vol . II pp. 640–669.
Cornelis J. Ruijgh, in Les Civilisations égéennes, René Treuil et al. edit, (Paris 1989), p. 401-423.
Cornelis J. Ruijgh, Sur la position dialectale du Mycénien in Atti e Memorie del Secondo Congresso Internazionale di Micenologia (Roma 1996) p. 115-124.
A. Thumb, E. Kieckers, Handbuch der griechischen Dialekte (1932).
Liddell, Scott, Jones, A Greek–English Lexicon, s.v. πηδάω.
National Geographic Atlas of the World (1992 ed.) p. 66.
External links
Discussion by Faucounau of the "Risch-Chadwick Theory"
Archaeoastronomy
Hypotheses
Aegean languages in the Bronze Age
Ionians | Proto-Ionians | [
"Astronomy"
] | 2,346 | [
"Archaeoastronomy",
"Astronomical sub-disciplines"
] |
4,206,717 | https://en.wikipedia.org/wiki/Position%20angle | In astronomy, position angle (usually abbreviated PA) is the convention for measuring angles on the sky. The International Astronomical Union defines it as the angle measured relative to the north celestial pole (NCP), turning positive into the direction of the right ascension. In the standard (non-flipped) images, this is a counterclockwise measure relative to the axis into the direction of positive declination.
In the case of observed visual binary stars, it is defined as the angular offset of the secondary star from the primary relative to the north celestial pole.
As the example illustrates, if one were observing a hypothetical binary star with a PA of 30°, that means an imaginary line in the eyepiece drawn from the north celestial pole to the primary (P) would be offset from the secondary (S) such that the angle would be 30°.
When graphing visual binaries, the NCP is, as in the illustration, normally drawn from the center point (origin) that is the Primary downward–that is, with north at bottom–and PA is measured counterclockwise. Also, the direction of the proper motion can, for example, be given by its position angle.
The definition of position angle is also applied to extended objects like galaxies, where it refers to the angle made by the major axis of the object with the NCP line.
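As an illustration of the convention (a sketch added here, not taken from this article), the position angle of a secondary object relative to a primary can be computed from their equatorial coordinates with the standard spherical-trigonometry expression, measured from north through east:

```python
import math

def position_angle(ra1_deg, dec1_deg, ra2_deg, dec2_deg):
    """Position angle of object 2 as seen from object 1, in degrees,
    measured from north (0 deg) through east (90 deg)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1_deg, dec1_deg, ra2_deg, dec2_deg))
    d_ra = ra2 - ra1
    x = math.sin(d_ra) * math.cos(dec2)                                        # east component
    y = math.cos(dec1) * math.sin(dec2) - math.sin(dec1) * math.cos(dec2) * math.cos(d_ra)  # north component
    return math.degrees(math.atan2(x, y)) % 360.0

# A companion slightly north-east of the primary has a PA between 0 and 90 deg.
print(round(position_angle(10.0, 20.0, 10.01, 20.005), 1))
```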
Nautics
The concept of the position angle is inherited from nautical navigation on the oceans, where the optimum compass course is the course from a known position to a target position with minimum effort. Setting aside the influence of winds and ocean currents, the optimum course is the course of smallest distance between the two positions on the ocean surface. Computing the compass course is known as the inverse geodetic problem.
This article considers only the abstraction of minimizing the distance between two points while traveling on the surface of a sphere of some radius R: in which direction (the angle relative to North) should the ship steer to reach the target position?
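On a spherical Earth the initial compass course (forward azimuth) can be computed with the standard great-circle formula; the sketch below is a minimal spherical approximation added for illustration, not a full solution of the inverse geodetic problem on an ellipsoid.

```python
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle course in degrees clockwise from North,
    for start (lat1, lon1) and target (lat2, lon2) given in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    d_lon = math.radians(lon2 - lon1)
    x = math.sin(d_lon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(d_lon)
    return math.degrees(math.atan2(x, y)) % 360.0

# Example: approximate course from London (51.5 N, 0.1 W) to New York (40.7 N, 74.0 W);
# the great-circle course starts out well north of due west.
print(round(initial_bearing(51.5, -0.1, 40.7, -74.0), 1))
```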
See also
Parallactic angle
Angular distance
Further reading
References
External links
The Orbits of 150 Visual Binary Stars, by Dibon Smith (Accessed 2/26/06)
Astronomical coordinate systems
Angle
Observational astronomy | Position angle | [
"Physics",
"Astronomy",
"Mathematics"
] | 440 | [
"Geometric measurement",
"Scalar physical quantities",
"Physical quantities",
"Observational astronomy",
"Astronomical coordinate systems",
"Coordinate systems",
"Wikipedia categories named after physical quantities",
"Angle",
"Astronomical sub-disciplines"
] |
4,207,234 | https://en.wikipedia.org/wiki/Dehn%20plane | In geometry, Max Dehn introduced two examples of planes, a semi-Euclidean geometry and a non-Legendrian geometry, that have infinitely many lines parallel to a given one that pass through a given point, but where the sum of the angles of a triangle is at least . A similar phenomenon occurs in hyperbolic geometry, except that the sum of the angles of a triangle is less than . Dehn's examples use a non-Archimedean field, so that the Archimedean axiom is violated. They were introduced by and discussed by .
Dehn's non-archimedean field Ω(t)
To construct his geometries, Dehn used a non-Archimedean ordered Pythagorean field Ω(t), a Pythagorean closure of the field of rational functions R(t), consisting of the smallest field of real-valued functions on the real line containing the real constants, the identity function t (taking any real number to itself) and closed under the operation ω ↦ √(1 + ω²). The field Ω(t) is ordered by putting x > y if the function x is larger than y for sufficiently large reals. An element x of Ω(t) is called finite if m < x < n for some integers m, n, and is called infinite otherwise.
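To make the ordering and the notions of finite and infinite elements concrete (an illustrative aside, not part of the original text): since t(r) = r exceeds every integer n for sufficiently large r, the element t is infinite, its reciprocal is a positive infinitesimal, and closure under the Pythagorean operation keeps elements such as √(1 + t²) inside the field. In symbols:

```latex
t > n \ \text{for every integer } n, \qquad
0 < \frac{1}{t} < \frac{1}{n} \ \text{for every positive integer } n, \qquad
\sqrt{1+t^{2}} \in \Omega(t).
```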
Dehn's semi-Euclidean geometry
The set of all pairs (x, y), where x and y are any (possibly infinite) elements of the field Ω(t), and with the usual metric
d((x1, y1), (x2, y2)) = √((x1 − x2)² + (y1 − y2)²),
which takes values in Ω(t), gives a model of Euclidean geometry. The parallel postulate is true in this model, but if the deviation from the perpendicular is infinitesimal (meaning smaller than any positive rational number), the intersecting lines intersect at a point that is not in the finite part of the plane. Hence, if the model is restricted to the finite part of the plane (points (x, y) with x and y finite), a geometry is obtained in which the parallel postulate fails but the sum of the angles of a triangle is π. This is Dehn's semi-Euclidean geometry.
Dehn's non-Legendrian geometry
In the same paper, Dehn also constructed an example of a non-Legendrian geometry where there are infinitely many lines through a point not meeting another line, but the sum of the angles in a triangle exceeds π. Riemann's elliptic geometry over Ω(t) consists of the projective plane over Ω(t), which can be identified with the affine plane of points (x:y:1) together with the "line at infinity", and has the property that the sum of the angles of any triangle is greater than π. The non-Legendrian geometry consists of the points (x:y:1) of this affine subspace such that tx and ty are finite (where as above t is the element of Ω(t) represented by the identity function). Legendre's theorem states that the sum of the angles of a triangle is at most π, but assumes Archimedes's axiom, and Dehn's example shows that Legendre's theorem need not hold if Archimedes' axiom is dropped.
References
Planes (geometry)
Non-Euclidean geometry | Dehn plane | [
"Mathematics"
] | 678 | [
"Planes (geometry)",
"Mathematical objects",
"Infinity"
] |
4,207,339 | https://en.wikipedia.org/wiki/Cerulenin | Cerulenin is an antifungal antibiotic that inhibits fatty acid and steroid biosynthesis. It was the first natural product antibiotic known to inhibit lipid synthesis. In fatty acid synthesis, it has been reported to bind in equimolar ratio to b-keto-acyl-ACP synthase, one of the seven moieties of fatty acid synthase, blocking the interaction of malonyl-CoA. It also has the related activity of stimulating fatty acid oxidation through the activation of CPT1, another enzyme normally inhibited by malonyl-CoA. Inhibition involves covalent thioacylation that permanently inactivates the enzymes. These two behaviors may increase the availability of energy in the form of ATP, perhaps sensed by AMPK, in the hypothalamus.
In sterol synthesis, cerulenin inhibits HMG-CoA synthetase activity. It was also reported that cerulenin specifically inhibited fatty acid biosynthesis in Saccharomyces cerevisiae without having an effect on sterol formation. In general, however, cerulenin is considered to have inhibitory effects on sterol synthesis.
Cerulenin causes a dose-dependent decrease in HER2/neu protein levels in breast cancer cells, from 14% at 1.25 to 78% at 10 milligrams per liter, and targeting of fatty acid synthase by related drugs has been suggested as a possible treatment. Antiproliferative and pro-apoptotic effects have been shown in colon cells as well. At an intraperitoneal dose of 30 milligrams per kilogram, it has been shown to inhibit feeding and induce dramatic weight loss in mice by a mechanism similar to, but independent or downstream of, leptin signaling. It is found naturally in the industrial strain Cephalosporium caerulens (Sarocladium oryzae, the sheath rot pathogen of rice).
See also
Satoshi Ōmura
References
External links
Cerulenin from Fermentek
Antifungals
Carboxamides
Ketones
Alkene derivatives
Epoxides | Cerulenin | [
"Chemistry"
] | 448 | [
"Ketones",
"Functional groups"
] |
4,207,510 | https://en.wikipedia.org/wiki/Oil | An oil is any nonpolar chemical substance that is composed primarily of hydrocarbons and is hydrophobic (does not mix with water) and lipophilic (mixes with other oils). Oils are usually flammable and surface active. Most oils are unsaturated lipids that are liquid at room temperature.
The general definition of oil includes classes of chemical compounds that may be otherwise unrelated in structure, properties, and uses. Oils may be animal, vegetable, or petrochemical in origin, and may be volatile or non-volatile. They are used for food (e.g., olive oil), fuel (e.g., heating oil), medical purposes (e.g., mineral oil), lubrication (e.g. motor oil), and the manufacture of many types of paints, plastics, and other materials. Specially prepared oils are used in some religious ceremonies and rituals as purifying agents.
Etymology
First attested in English in 1176, the word oil comes from Old French oile, from Latin oleum, which in turn comes from the Greek ἔλαιον (elaion), "olive oil, oil", and that from ἐλαία (elaia), "olive tree", "olive fruit". The earliest attested forms of the word are the Mycenaean Greek e-ra-wo and e-rai-wo, written in the Linear B syllabic script.
Types
Organic oils
Organic oils are produced in remarkable diversity by plants, animals, and other organisms through natural metabolic processes. Lipid is the scientific term for the fatty acids, steroids and similar chemicals often found in the oils produced by living things, while oil refers to an overall mixture of chemicals. Organic oils may also contain chemicals other than lipids, including proteins, waxes (class of compounds with oil-like properties that are solid at common temperatures) and alkaloids.
Lipids can be classified by the way that they are made by an organism, their chemical structure and their limited solubility in water compared to oils. They have a high carbon and hydrogen content and are considerably lacking in oxygen compared to other organic compounds and minerals; they tend to be relatively nonpolar molecules, but may include both polar and nonpolar regions as in the case of phospholipids and steroids.
Mineral oils
Crude oil, or petroleum, and its refined components, collectively termed petrochemicals, are crucial resources in the modern economy. Crude oil originates from ancient fossilized organic materials, such as zooplankton and algae, which geochemical processes convert into oil. The name "mineral oil" is a misnomer, in that minerals are not the source of the oil—ancient plants and animals are. Mineral oil is organic. However, it is classified as "mineral oil" instead of as "organic oil" because its organic origin is remote (and was unknown at the time of its discovery), and because it is obtained in the vicinity of rocks, underground traps, and sands. Mineral oil also refers to several specific distillates of crude oil.
Applications
Cooking
Several edible vegetable and animal oils, and also fats, are used for various purposes in cooking and food preparation. In particular, many foods are fried in oil much hotter than boiling water. Oils are also used for flavoring and for modifying the texture of foods (e.g. stir fry). Cooking oils are derived either from animal fat, as butter, lard and other types, or plant oils from olive, maize, sunflower and many other species.
Cosmetics
Oils are applied to hair to give it a lustrous look, to prevent tangles and roughness and to stabilize the hair to promote growth. See hair conditioner.
Religion
Oil has been used throughout history as a religious medium. It is often considered a spiritually purifying agent and is used for anointing purposes. As a particular example, holy anointing oil has been an important ritual liquid for Judaism and Christianity.
Health
Oils have been consumed since ancient times and have both nutritional and medicinal uses. Olive oil is a good example: it is rich in fat, which is also why it was used for lighting in ancient Greece and Rome, and it was used to bulk out food so people would have more energy to burn through the day. Olive oil was also used at that time to clean the body, as a crude form of soap: applied to the skin, it trapped moisture while drawing grime to the surface, and was then scraped off with a wooden stick, removing the excess grime; any fresh grime collecting in the remaining film could be washed off easily in water, since oil is hydrophobic. Fish oils contain omega-3 fatty acids, which help with inflammation and reduce fat in the bloodstream.
Painting
Color pigments are easily suspended in oil, making it suitable as a supporting medium for paints. The oldest known extant oil paintings date from 650 AD.
Heat transfer
Oils are used as coolants in oil cooling, for instance in electric transformers. Heat transfer oils are used as coolants (see oil cooling), for heating (e.g. in oil heaters), and in other applications of heat transfer.
Lubrication
Given that they are non-polar, oils do not easily adhere to other substances. This makes them useful as lubricants for various engineering purposes. Mineral oils are more commonly used as machine lubricants than biological oils are. Whale oil was preferred for lubricating clocks because it does not evaporate and leave dust, although its use was banned in the US in 1980.
It is a long-running myth that spermaceti from whales has still been used in NASA projects such as the Hubble Space Telescope and the Voyager probe because of its extremely low freezing temperature. Spermaceti is not actually an oil, but a mixture mostly of wax esters, and there is no evidence that NASA has used whale oil.
Fuel
Some oils burn in liquid or aerosol form, generating light and heat which can be used directly or converted into other forms of energy such as electricity or mechanical work. In order to obtain many fuel oils, crude oil is pumped from the ground and is shipped via oil tanker or a pipeline to an oil refinery. There, it is converted from crude oil to diesel fuel (petrodiesel), ethane (and other short-chain alkanes), fuel oils (the heaviest of commercial fuels, used in ships and furnaces), gasoline (petrol), jet fuel, kerosene, benzene (historically), and liquefied petroleum gas. A barrel of crude oil yields diesel, jet fuel, gasoline and heating oil, along with other products split between heavy fuel oil and liquefied petroleum gases; because of refinery processing gain, the total volume of refined products is slightly greater than the volume of the original barrel of crude.
In the 18th and 19th centuries, whale oil was commonly used for lamps, which was replaced with natural gas and then electricity.
Chemical feedstock
Crude oil can be refined into a wide variety of component hydrocarbons. Petrochemicals are the refined components of crude oil and the chemical products made from them. They are used as detergents, fertilizers, medicines, paints, plastics, synthetic fibers, and synthetic rubber.
Organic oils are another important chemical feedstock, especially in green chemistry.
See also
Emulsifier, a chemical which allows oil and water to mix
References
External links
Petroleum Online e-Learning resource from IHRDC
Chemical substances | Oil | [
"Physics",
"Chemistry"
] | 1,562 | [
"Oils",
"Carbohydrates",
"Materials",
"nan",
"Chemical substances",
"Matter"
] |
4,207,738 | https://en.wikipedia.org/wiki/Electron%20spectroscopy | Electron spectroscopy refers to a group formed by techniques based on the analysis of the energies of emitted electrons such as photoelectrons and Auger electrons. This group includes X-ray photoelectron spectroscopy (XPS), which also known as Electron Spectroscopy for Chemical Analysis (ESCA), Electron energy loss spectroscopy (EELS), Ultraviolet photoelectron spectroscopy (UPS), and Auger electron spectroscopy (AES). These analytical techniques are used to identify and determine the elements and their electronic structures from the surface of a test sample. Samples can be solids, gases or liquids.
Chemical information is obtained only from the uppermost atomic layers of the sample (depth 10 nm or less) because the energies of Auger electrons and photoelectrons are quite low, typically 20 - 2000 eV. For this reason, electron spectroscopy techniques are used to analyze surface chemicals.
History
The development of electron spectroscopy can be considered to have begun in 1887 when the German physicist Heinrich Rudolf Hertz discovered the photoelectric effect but was unable to explain it. In 1900, Max Planck (1918 Nobel Prize in Physics) suggested that energy carried by electromagnetic waves could only be released in "packets" of energy. In 1905, Albert Einstein (1921 Nobel Prize in Physics) explained Planck's discovery and the photoelectric effect. He presented the hypothesis that light energy is carried in discrete quantized packets (photons), each with energy E = hν, to explain the experimental observations. Two years after this publication, in 1907, P. D. Innes recorded the first XPS spectrum.
After numerous developments and the Second World War, Kai Siegbahn (Nobel Prize in 1981) with his research group in Uppsala, Sweden registered in 1954 the first XPS device to produce a high energy-resolution XPS spectrum. In 1967, Siegbahn published a comprehensive study of XPS and its usefulness, which he called electron spectroscopy for chemical analysis (ESCA). Concurrently with Siegbahn's work, in 1962, David W. Turner at Imperial College London (and later Oxford University) developed ultraviolet photoelectron spectroscopy (UPS) for molecular species using a helium lamp.
Basic theory
In electron spectroscopy, depending on the technique, irradiating the sample with high-energy particles such as X-ray photons, electron beam electrons, or ultraviolet radiation photons, causes Auger electrons and photoelectrons to be emitted. Figure 1 illustrates this on the basis of a single event in which, for example, an incoming X-ray photon from a particular energy range (E=hv) transfers its energy to an electron in the inner shell of an atom. The absorption of this photon ejects the electron and leaves a hole in the atomic shell (see figure 1 (a)). The hole can be filled in two ways, forming different characteristic rays that are specific to each element. If an electron from a shell with a higher energy level jumps to fill the hole, the energy difference can be emitted as a fluorescent photon (figure 1 (b)). In the Auger phenomenon, when the electron jumps from the higher energy level, its energy instead causes an adjacent or nearby electron to be ejected, forming an Auger electron (figure 1 (c)).
As can be seen from the discussion above and figure 1, Auger electrons and photoelectrons are different in their physical origin, however, both types of electrons carry similar information about the chemical elements in material surfaces. Each element has its own special Auger electron or photon electron energy from which these can be identified. The binding energy of a photoelectron can be calculated by the formula below.
Ebinding = hν − Ekinetic − φ
where Ebinding is the binding energy of the photoelectron, hν is the energy of the incoming radiation particle, Ekinetic is the kinetic energy of the photoelectron measured by the device and φ is the work function.
The kinetic energy of the Auger electron is approximately equal to the energy difference between the binding energies of the electron shells involved in the Auger process. This can be calculated as follows:
Ekinetic ≈ EA − EB − EC
where Ekinetic is the kinetic energy of the Auger electron, EA is the binding energy of the inner shell from which the initial core electron was ejected, and EB and EC are the binding energies of the first and second outer shells involved in the Auger process.
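As a worked numerical example (added here for illustration; the photon energy is that of a common Al Kα X-ray source, while the work function and measured values are made-up numbers):

```python
# Photoelectron binding energy from an XPS measurement:
#   E_binding = h*nu - E_kinetic - phi
PHOTON_ENERGY_EV = 1486.6   # Al K-alpha X-ray source
WORK_FUNCTION_EV = 4.5      # assumed spectrometer work function (illustrative)

def binding_energy(kinetic_ev, photon_ev=PHOTON_ENERGY_EV, phi=WORK_FUNCTION_EV):
    return photon_ev - kinetic_ev - phi

# A photoelectron detected at 1197.1 eV kinetic energy corresponds to a
# binding energy of about 285 eV (roughly the C 1s region).
print(binding_energy(1197.1))

# Auger electron kinetic energy, approximated from the three level energies:
#   E_kinetic ~ E_A - E_B - E_C   (independent of the excitation energy)
def auger_kinetic(e_core, e_b, e_c):
    return e_core - e_b - e_c

print(auger_kinetic(532.0, 30.0, 30.0))  # purely illustrative level energies
```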
Types of electron spectroscopy
X-ray photoelectron spectroscopy
Auger electron spectroscopy
Electron energy loss spectroscopy
Ultraviolet photoelectron spectroscopy
References
Spectroscopy | Electron spectroscopy | [
"Physics",
"Chemistry"
] | 903 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Electron spectroscopy",
"Instrumental analysis",
"Spectroscopy"
] |
4,207,958 | https://en.wikipedia.org/wiki/Point%20of%20interest | A point of interest (POI) is a specific point location that someone may find useful or interesting. An example is a point on the Earth representing the location of the Eiffel Tower, or a point on Mars representing the location of its highest mountain, Olympus Mons. Most consumers use the term when referring to hotels, campsites, fuel stations or any other categories used in modern automotive navigation systems.
Users of a mobile device can be provided with geolocation and time-aware POI service that recommends geolocations nearby and with a temporal relevance (e.g. POI to special services in a ski resort are available only in winter).
The term is widely used in cartography, especially in electronic variants including GIS, and GPS navigation software. In this context the synonym waypoint is common.
A GPS point of interest specifies, at minimum, the latitude and longitude of the POI, assuming a certain map datum. A name or description for the POI is usually included, and other information such as altitude or a telephone number may also be attached. GPS applications typically use icons to graphically represent different categories of POI on a map.
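A minimal sketch (not any standard format) of the kind of record such an application might hold for each POI, assuming the WGS84 datum:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PointOfInterest:
    latitude: float                 # degrees, WGS84
    longitude: float                # degrees, WGS84
    name: str
    category: str = "generic"       # e.g. "fuel", "hotel", "campsite"
    elevation_m: Optional[float] = None
    phone: Optional[str] = None

eiffel_tower = PointOfInterest(48.8584, 2.2945, "Eiffel Tower", category="landmark")
print(eiffel_tower)
```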
A region of interest (ROI) and a volume of interest (VOI) are similar in concept, denoting a region or a volume (which may contain various individual POIs).
In medical fields such as histology, pathology, and histopathology, points of interest are selected from the general background in a field of view; for example, among hundreds of normal cells, the pathologist may find 3 or 4 neoplastic cells that stand out from the others upon staining.
POI collections
Digital maps for modern GPS devices typically include a basic selection of POI for the map area.
However, websites exist that specialize in the collection, verification, management and distribution of POI which end-users can load onto their devices to replace or supplement the existing POI. While some of these websites are generic, and will collect and categorize POI for any interest, others are more specialized in a particular category (such as speed cameras) or GPS device (e.g. TomTom/Garmin). End-users also have the ability to create their own custom collections.
Commercial POI collections, especially those that ship with digital maps, or that are sold on a subscription basis are usually protected by copyright. However, there are also many websites from which royalty-free POI collections can be obtained, e.g. SPOI - Smart Points of Interest, which is distributed under ODbL license.
Applications
The applications for POI are extensive. As GPS-enabled devices as well as software applications that use digital maps become more available, so too the applications for POI are also expanding. Newer digital cameras for example can automatically tag a photograph using Exif with the GPS location where a picture was taken; these pictures can then be overlaid as POI on a digital map or satellite image such as Google Earth. Geocaching applications are built around POI collections. In vehicle tracking systems, POIs are used to mark destination points and/or offices to that users of GPS tracking software would easily monitor position of vehicles according to POIs.
File formats
Many different file formats, including proprietary formats, are used to store point of interest data, even where the same underlying WGS84 system is used.
Reasons for variations to store the same data include:
A lack of standards in this area (GPX is a notable attempt to address this).
Attempts by some software vendors to protect their data through obfuscation.
Licensing issues that prevent companies from using competitor's file specifications.
Memory saving, for example, by converting floating point latitude and longitude co-ordinates into smaller integer values (see the sketch after this list).
Speed and battery life (operations using integer latitude and longitude values are less CPU-intensive than those that use floating point values).
Requirements to add custom fields to the data.
Use of older reference systems that predate GPS (for example UTM or the British national grid reference system)
Readability/possibility to edit (plain text files are human-readable and may be edited)
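As an illustration of the memory-saving point above (a common fixed-point scheme, sketched here; the 1e-7-degree scale is one typical choice, not a universal standard):

```python
# Store latitude/longitude as integers in units of 1e-7 degree (roughly 1 cm
# of resolution at the equator) instead of floating point values.
SCALE = 10_000_000

def encode(deg: float) -> int:
    return round(deg * SCALE)

def decode(value: int) -> float:
    return value / SCALE

lat, lon = 48.8584, 2.2945
print(encode(lat), encode(lon))            # 488584000 22945000
print(decode(encode(lat)), decode(encode(lon)))
```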
The following are some of the file formats used by different vendors and devices to exchange POI (and in some cases, also navigation tracks):
ASCII Text (.asc .txt .csv .plt)
Topografix GPX (.gpx)
Garmin Mapsource (.gdb)
Google Earth Keyhole Markup Language (.kml .kmz)
Pocket Street Pushpins (.psp)
Maptech Marks (.msf)
Maptech Waypoint (.mxf)
Microsoft MapPoint Pushpin (.csv)
OziExplorer (.wpt)
TomTom Overlay (.ov2) and TomTom plain text format (.asc)
OpenStreetMap data (.osm)
Third party and vendor-supplied utilities are available to convert point of interest data between different formats to allow them to be exchanged between otherwise incompatible GPS devices or systems. Furthermore, many applications will support the generic ASCII text file format, although this format is more prone to error due to its loose structure as well as the many ways in which GPS co-ordinates can be represented (e.g. decimal vs degree/minute/second). POI format converters are often named after the POI file format they convert and convert to, such as KML2GPX (converts KML to GPX) and KML2OV2 (converts KML to OV2).
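As a small illustration of such a conversion (a sketch assuming a simple name,latitude,longitude CSV layout; real POI CSV files vary, and a robust converter would need to handle their quirks and XML-escape names):

```python
import csv

GPX_HEADER = '<?xml version="1.0" encoding="UTF-8"?>\n<gpx version="1.1" creator="poi-convert">\n'
GPX_FOOTER = "</gpx>\n"

def csv_to_gpx(csv_path: str, gpx_path: str) -> None:
    """Convert rows of name,latitude,longitude into GPX waypoints."""
    with open(csv_path, newline="", encoding="utf-8") as src, \
         open(gpx_path, "w", encoding="utf-8") as dst:
        dst.write(GPX_HEADER)
        for name, lat, lon in csv.reader(src):
            # Names are written verbatim here; escape them in real use.
            dst.write(f'  <wpt lat="{lat}" lon="{lon}"><name>{name}</name></wpt>\n')
        dst.write(GPX_FOOTER)

# Example usage:
# csv_to_gpx("pois.csv", "pois.gpx")
```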
See also
Automotive navigation system
Geocoded photograph
Map database management
OpenLR
Tourist attraction
World Geodetic System (Used to represent GPS co-ordinates)
Photogrammetry
References
Global Positioning System
Geographical technology
Navigation | Point of interest | [
"Technology",
"Engineering"
] | 1,191 | [
"Global Positioning System",
"Wireless locating",
"Aircraft instruments",
"Aerospace engineering"
] |
4,208,984 | https://en.wikipedia.org/wiki/Parliamentary%20Office%20of%20Science%20and%20Technology | The Parliamentary Office of Science and Technology (POST) is the Parliament of the United Kingdom's in-house source of independent, balanced and accessible analysis of public policy issues related to science and technology. POST serves both Houses of Parliament (the House of Commons and the House of Lords).
It strives to ensure that MPs and Peers can have confidence in its analyses should they wish to cite them in debate. These principles are reflected in the structure of POST’s Board with members from the Commons and Lords together with distinguished scientists and engineers from the wider world.
History
Since 1939, a group of MPs and peers interested in science and technology, through the first parliamentary "All Party Group", the UK Parliamentary and Scientific Committee (P&S), had encouraged UK Parliamentarians to explore the implications of scientific developments for society and public policy. As the UK economy became more dependent on technological progress, and the varied effects of technology (especially on the environment) became more apparent, it was felt that UK Parliament needed its own resources on such issues. Parliamentarians not only required access to knowledge and insights into the implications of technology for their constituents and society, but also needed to exercise their scrutiny functions over UK government legislation and administration. This thinking was also influenced by the fact that specialised parliamentary science and technology organisations already existed overseas.
P&S members (Sir Ian Lloyd MP, Sir Trevor Skeet MP, Sir Gerry Vaughan MP, Lords Kennet, Gregson and Flowers among others) visited already established organisations in the US, Germany and France, and this reinforced their view that modern Parliaments needed their own ‘intelligence’ on science and technology-related issues. Initially they asked the then Thatcher government to fund such services at Westminster but were asked first to demonstrate a real need. This led to the P&S creating a charitable foundation to raise funds from P&S members.
The parliamentary reaction was positive and led to the appointment of a first Director, Dr Michael Norton. In 1989, POST was formally established as a charitable foundation, though not an internal part of Parliament.
POST had attracted more resources by 1992 and then recruited 3 specialist science advisers and began its fellowship programme with the UK research councils.
In 1992 the House of Commons Information Committee, supported by the House of Lords, recommended that Parliament should itself fund POST for 3 years, and a subsequent review in 1995 extended this for a further 5 years. This was the result of POST demonstrating real interest and demand from MPs and peers.
POST's financial reliance on donations from bodies external to Parliament, even those as prestigious as the Royal Society, had always slightly compromised the perceived independence of the office.
From 1997 the chair of the POST board was appointed by government Whips. Since then, chairs have been Dr Ian Gibson MP 1997-2001, Dr Phyllis Starkey MP 2001-2005, and Dr Ashok Kumar MP 2005-2010. Dr Kumar died shortly before the UK 2010 general election. Since then, subject to parliamentary approval, the Chair of POST has been Adam Afriyie MP.
In 1998 Professor David Cope took over as Director of POST. He guided the Office towards the goal of establishing POST as a permanent office of the UK Parliament - and dramatically expanded its staffing, wider links and more general recognition of its role as a special, distinctive, institution that the UK Parliament had agreed, enthusiastically, to endorse.
In 2001, both Houses decided that POST should indeed be established as a permanent bicameral institution, funded exclusively by Parliament itself, through the two Houses, in a ratio of 70%/30%, Commons/Lords.
In 2009, POST celebrated its 20th anniversary with, among other events, a special conference, arranged by the Director, Prof Cope, on "Images of the Future". The keynote participants were the Hon. Bart Gordon, Chair of the US House of Representatives' Committee on Science and Technology and Dr Jim Dator of the University of Hawaii Futures Research Centre, along with many other Members and staff of Parliaments across the world.
Because of the enthusiasm of Members of both Houses, POST has enjoyed a unique status within Parliament. It was from its inception attached administratively to the House of Commons Commission (though always with its link to the House of Lords).
This arrangement was intended specifically to distinguish the office and its functions from the Libraries of either House. These conduct non peer-reviewed "studies", of varying quality. POST - on the other hand - was expected to provide proactive analysis and options, freed from the immediate pressure of political or administrative expectations, trying to embrace futures thinking, and above all, to subject all its work to the most critical peer review analysis.
POST'S Director, Prof David Cope, returned to Cambridge University in 2012 and Dr Chris Tyler was appointed to lead POST by parliamentary authorities.
Dr Tyler left POST in 2017 for a policy position at University College London.
The Acting Directorship of POST was taken over by a longstanding staff member, Dr Chandrika Nath. In 2018, in an acknowledgement of POST's immutable science and technology assessment role, she was appointed Executive Director of the international Scientific Committee on Antarctic Research, the highly prestigious intergovernmental organisation.
Dr Grant Hill-Cawthorne (University of Sydney) became Head of POST from May 2018.
Activities
Science and technology in parliament
Most parliamentarians do not have a scientific or technological background but science and technology issues are increasingly integral to public policy. Parliamentarians are bombarded daily with lobbying, public enquiries and media stories about science and technology. These cover diverse areas such as medical advances, environmental issues and global communications. POST helps parliamentarians examine such issues effectively by providing information resources, in depth analysis and impartial advice. POST works closely with a wide range of organisations involved in science and technology, including select committees, all-party parliamentary groups, government departments, scientific societies, policy think tanks, business, academia and research funders.
Aim
POST's aim is to inform parliamentary debate through:
Publishing POSTnotes (short briefing notes) and longer reports. POSTnotes can be downloaded from the publications section of the POST website. Both focus on current science and technology issues and aim to anticipate policy implications for parliamentarians.
Supporting select committees, with informal advice, oral briefings, data analyses, background papers or follow-up research. Committees may approach POST for such advice at any stage in an inquiry.
Informing both Houses on public dialogue activities in science and technology.
Organising discussions to stimulate debate on a wide range of topical issues, from small working groups to large lectures.
Horizon scanning to anticipate issues of science and technology that are likely to impact on policy
How POST works
A parliamentary board guides POST's choice of subjects. A team of advisers conduct analyses, drawing on a wide range of external expertise. All reports and POSTnotes are externally peer reviewed, and scrutinised by the board before publication.
POST's work falls into four topical areas:
Biological Sciences and Health
Physical Sciences and ICT
Environment and Energy
Social Sciences
International activities
POST is a member of the European Parliamentary Technology Assessment network, which brings together parliamentary organisations throughout Europe sharing information and working on joint projects. POST also liaises with science and technology organisations across the world.
From November 2005 to 2009, POST, collaborating with four of its sister organisations - at the Danish, Dutch, Flemish and German Parliaments - provided technology assessment services to the Science and Technology Options Assessment unit of the European Parliament, in Brussels and Strasbourg.
POST Africa Programme
From 2001 POST received a growing number of requests for advice from parliamentarians in developing countries. It became clear that a real need existed to strengthen capacity in developing country parliaments. In 2005, POST held discussions on this issue with the Gatsby Foundation, which led to a special initiative to assist African Parliaments, and other organisations in their countries, in building parliamentary capacity to handle policy issues related to science and technology. At a time when there is growing awareness of the importance of science and technology in decision making, as demonstrated by, for example, the focus on science, technology and innovation at the African Union summit meeting in January 2007, this programme continues to contribute towards the overall objective of ensuring that parliaments have the capacity to scrutinise decision making processes and act as the national fora for discussion and debate on the broad implications of issues with a basis in science and technology. By sharing information and best practice with overseas parliaments and assemblies, the programme supports one of the primary objectives of the House: to promote public knowledge and understanding of the work and role of Parliament through the provision of information and access.
The POST Board
(Appointed 2010)
The POST Board oversees POST's objectives, outputs and future work programme. It meets quarterly. The Board comprises:
14 parliamentarians drawn from the House of Commons (10) and the House of Lords (4), roughly reflecting the balance of parties in Parliament.
Leading non-parliamentarians from the science and technology community.
Representatives of the House of Lords and the Department of Information Services of the House of Commons.
Officers
Chairman: TBD
Vice-Chairman: Professor the Lord Winston
Head: Dr Grant Hill-Cawthorne
House of Lords
Lord Oxburgh, KBE, PhD, FRS
Lord Haskel
Lord Patel, KT, FMedSci, FRSE
Externals
Professor Elizabeth Fisher
Professor Sarah Whatmore, FBA
Paul Martynenko, FBCS
Professor Sir Bernard Silverman, FRS, FAcSS
Ex Officio Board Members
Penny Young, The House of Commons Librarian and Managing Director Research & Information, House of Commons
Nicolas Besly, Clerk of Select Committees, House of Lords
Edward Potton, Head of Science and Environment Section, House of Commons Library
Lynn Gardner, Principal Clerk, Committee Office, House of Commons
Dr Grant Hill-Cawthorne, Head of POST
Staff
Permanent staff
POST has eight science advisers, covering the fields of biology and health; physical sciences and ICT; environment and energy; and social sciences. Science advisers generally have a postgraduate qualification and science policy experience.
Fellows
POST runs formal fellowship schemes with scientific societies and research councils, whereby PhD students can spend three months working at POST through an extension of their maintenance grants. These include:
Arts and Humanities Research Council
Biotechnology and Biological Sciences Research Council
British Ecological Society
British Psychological Society
Economic and Social Research Council
Engineering and Physical Sciences Research Council
The Institute of Food Science and Technology
Institute of Physics
Medical Research Council
Natural Environment Research Council
Nuffield Foundation Flowers Fellowship
Royal Society of Chemistry
Science & Technology Facilities Council
Wellcome Trust Ethics and Society
Wellcome Trust Medical History and Humanities
The following two organisations collaborate to offer an annual Fellowship in memory of the late chemical engineer and parliamentarian Ashok Kumar MP. This Fellowship enables an engineering or science PhD student to spend three months working at POST and gain a better understanding of how Parliament works. By 2017 the sixth Ashok Kumar Fellow had been appointed to work with POST; she was a postgraduate engineering student, Erin Johnson, from Imperial College London.
Institution of Chemical Engineers (IChemE)
Northeast of England Process Industry Cluster (NEPIC)
For more information on fellowship applications see the 'POST Fellowships' section of the POST website.
See also
Ashok Kumar (British politician)
Institution of Chemical Engineers (IChemE)
Northeast of England Process Industry Cluster (NEPIC)
Parliamentary and Scientific Committee
References
External links
Parliament of the United Kingdom
Science policy in the United Kingdom
Scientific organisations based in the United Kingdom
Technology assessment organisations
1989 establishments in the United Kingdom | Parliamentary Office of Science and Technology | [
"Technology"
] | 2,310 | [
"Technology assessment organisations"
] |
4,209,079 | https://en.wikipedia.org/wiki/Pit%20of%20despair | The pit of despair was a name used by American comparative psychologist Harry Harlow for a device he designed, technically called a vertical chamber apparatus, that he used in experiments on rhesus macaque monkeys at the University of Wisconsin–Madison in the 1970s. The aim of the research was to produce an animal model of depression. Researcher Stephen Suomi described the device as "little more than a stainless-steel trough with sides that sloped to a rounded bottom":
A in. wire mesh floor 1 in. above the bottom of the chamber allowed waste material to drop through the drain and out of holes drilled in the stainless-steel. The chamber was equipped with a food box and a water-bottle holder, and was covered with a pyramid top [removed in the accompanying photograph], designed to discourage incarcerated subjects from hanging from the upper part of the chamber.
Harlow had already placed newly born monkeys in isolation chambers for up to one year. With the "pit of despair", he placed monkeys between three months and three years old who had already bonded with their mothers in the chamber alone for up to ten weeks. Within a few days, they had stopped moving about and remained huddled in a corner.
Background
Much of Harlow's scientific career was spent studying maternal bonding, what he described as the "nature of love". These experiments involved rearing newborn "total isolates" and monkeys with surrogate mothers, ranging from toweling-covered cones to a machine that modeled abusive mothers by assaulting the baby monkeys with cold air or spikes.
In 1971, Harlow's wife died of cancer and he began to suffer from depression. He was treated and returned to work but, as Lauren Slater writes, his colleagues noticed a difference in his demeanor. He abandoned his research into maternal attachment and developed an interest in isolation and depression.
Harlow's first experiments involved isolating a monkey in a cage surrounded by steel walls with a small one-way mirror, so the experimenters could look in, but the monkey could not look out. The only connection the monkey had with the world was when the experimenters' hands changed his bedding or delivered fresh water and food. Baby monkeys were placed in these boxes soon after birth; four were left for 30 days, four for six months, and four for a year. After 30 days, the "total isolates", as they were called, were found to be "enormously disturbed". After being isolated for a year, they barely moved, did not explore or play, and were incapable of having sexual relations. When placed with other monkeys for a daily play session, they were badly bullied. Two of them refused to eat and starved themselves to death.
Harlow also wanted to test how isolation would affect parenting skills, but the isolates were unable to mate. Harlow devised what he called a "rape rack", to which the female isolates were tied in normal monkey mating posture. He found that, just as they were incapable of having sexual relations, they were also unable to parent their offspring, either abusing or neglecting them. "Not even in our most devious dreams could we have designed a surrogate as evil as these real monkey mothers were", he wrote. Having no social experience themselves, they were incapable of appropriate social interaction. One mother held her baby's face to the floor and chewed off his feet and fingers. Another crushed her baby's head. Most of them simply ignored their offspring.
These experiments showed Harlow what total and partial isolation did to developing monkeys, but he felt he had not captured the essence of depression, which he believed was characterized by feelings of loneliness, helplessness, and a sense of being trapped, or being "sunk in a well of despair", he said.
Vertical chamber apparatus
The technical name for the new depression chamber was "vertical chamber apparatus", though Harlow himself insisted on calling it the "pit of despair". He had at first wanted to call it the "dungeon of despair", and also used terms like "well of despair", and "well of loneliness". Blum writes that his colleagues tried to persuade him not to use such descriptive terms, that a less visual name would be easier, politically speaking. Gene Sackett of the University of Washington in Seattle, one of Harlow's doctoral students who went on to conduct additional deprivation studies, said, "He first wanted to call it a dungeon of despair. Can you imagine the reaction to that?".
Most of the monkeys placed inside it were at least three months old and had already bonded with others. The point of the experiment was to break those bonds in order to create the symptoms of depression. The chamber was a small, inverted metal pyramid, with slippery sides slanting down to a point. The monkey was placed in the point. The opening was covered with mesh. The monkeys would spend the first day or two trying to climb up the slippery sides. After a few days, they gave up. Harlow wrote, "most subjects typically assume a hunched position in a corner of the bottom of the apparatus. One might presume at this point that they find their situation to be hopeless." Stephen J. Suomi, another of Harlow's doctoral students, placed some monkeys in the chamber in 1970 for his PhD. He wrote that he could find no monkey who had any defense against it. Even the happiest monkeys came out damaged.
Reception
The experiments were condemned, both at the time and later, from within the scientific community and elsewhere in academia. In 1974, American literary critic Wayne C. Booth wrote that "Harry Harlow and his colleagues go on torturing their nonhuman primates decade after decade, invariably proving what we all knew in advance—that social creatures can be destroyed by destroying their social ties." He writes that Harlow made no mention of the criticism of the morality of his work.
Charles Snowdon, a junior member of the faculty at the time, who became head of psychology at Wisconsin, said that Harlow had himself been very depressed by his wife's cancer. Snowdon was appalled by the design of the vertical chambers. He asked Suomi why they were using them, and Harlow replied, "Because that's how it feels when you're depressed." Leonard Rosenblum, who studied under Harlow, told Lauren Slater that Harlow enjoyed using shocking terms for his apparatus because "he always wanted to get a rise out of people".
Another of Harlow's students, William Mason, who also conducted deprivation experiments elsewhere, said that Harlow "kept this going to the point where it was clear to many people that the work was really violating ordinary sensibilities, that anybody with respect for life or people would find this offensive. It's as if he sat down and said, 'I'm only going to be around another ten years. What I'd like to do, then, is leave a great big mess behind.' If that was his aim, he did a perfect job."
See also
Animal testing
Britches (monkey)
Flowerpot technique, a method of sleep deprivation in laboratory animals
Psychological torture
Psychological trauma
Research ethics
Silver Spring monkeys
Unnecessary Fuss, video showing brain damage experiments on baboons
Notes
References
Stephens, M.L. Maternal Deprivation Experiments in Psychology: A Critique of Animal Models. AAVS, NAVS, NEAVS, 1986.
Suomi, Stephen John. "Experimental Production of Depressive Behavior in Young Rhesus Monkeys: Thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Psychology) at the University of Wisconsin," University of Wisconsin, 1971, p. 33.
Further reading
Harry Harlow's Monkey Love Experiments
1970s in science
1971 in science
Academic scandals
Animal cruelty incidents
Animal testing in the United States
Animal testing on non-human primates
Anti-vivisection movement
Clinical research ethics
Cruelty to animals
Ethically disputed research practices towards animals
Medical controversies in the United States
Psychology experiments | Pit of despair | [
"Chemistry"
] | 1,605 | [
"Animal testing",
"Anti-vivisection movement",
"Vivisection"
] |
4,209,093 | https://en.wikipedia.org/wiki/Bacterial%20cell%20structure | A bacterium, despite its simplicity, contains a well-developed cell structure which is responsible for some of its unique biological structures and pathogenicity. Many structural features are unique to bacteria and are not found among archaea or eukaryotes. Because of the simplicity of bacteria relative to larger organisms and the ease with which they can be manipulated experimentally, the cell structure of bacteria has been well studied, revealing many biochemical principles that have been subsequently applied to other organisms.
Cell morphology
Perhaps the most elemental structural property of bacteria is their morphology (shape). Typical examples include:
coccus (spherical)
bacillus (rod-like)
coccobacillus (between a sphere and a rod)
spiral (corkscrew-like)
filamentous (elongated)
Cell shape is generally characteristic of a given bacterial species, but can vary depending on growth conditions. Some bacteria have complex life cycles involving the production of stalks and appendages (e.g. Caulobacter) and some produce elaborate structures bearing reproductive spores (e.g. Myxococcus, Streptomyces). Bacteria generally form distinctive cell morphologies when examined by light microscopy and distinct colony morphologies when grown on Petri plates.
Perhaps the most obvious structural characteristic of bacteria is (with some exceptions) their small size. For example, Escherichia coli cells, an "average" sized bacterium, are about 2 μm (micrometres) long and 0.5 μm in diameter, with a cell volume of 0.6–0.7 μm³. This corresponds to a wet mass of about 1 picogram (pg), assuming that the cell consists mostly of water. The dry mass of a single cell can be estimated as 23% of the wet mass, amounting to 0.2 pg. About half of the dry mass of a bacterial cell consists of carbon, and also about half of it can be attributed to proteins. Therefore, a typical fully grown 1-liter culture of Escherichia coli (at an optical density of 1.0, corresponding to c. 10⁹ cells/ml) yields about 1 g wet cell mass. Small size is extremely important because it allows for a large surface area-to-volume ratio which allows for rapid uptake and intracellular distribution of nutrients and excretion of wastes. At low surface area-to-volume ratios the diffusion of nutrients and waste products across the bacterial cell membrane limits the rate at which microbial metabolism can occur, making the cell less evolutionarily fit. The reason for the existence of large cells is unknown, although it is speculated that the increased cell volume is used primarily for storage of excess nutrients.
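As a rough, illustrative check of these figures (not taken from the cited sources), the short Python sketch below approximates the cell as a simple cylinder, which gives a somewhat smaller volume than the quoted 0.6–0.7 μm³, and uses the nominal 1 pg wet mass from the text for the culture-yield estimate.

# Back-of-the-envelope check of the E. coli figures quoted above,
# approximating the cell as a cylinder 2 um long and 0.5 um in diameter.
import math

length_um = 2.0            # cell length, micrometres
radius_um = 0.5 / 2        # cell radius, micrometres

volume_um3 = math.pi * radius_um ** 2 * length_um              # ~0.4 um^3; the article quotes 0.6-0.7
surface_um2 = 2 * math.pi * radius_um * (radius_um + length_um)
sa_to_volume = surface_um2 / volume_um3                        # large ratio favours rapid nutrient exchange

wet_mass_pg = 1.0                  # nominal wet mass per cell from the text, picograms
dry_mass_pg = 0.23 * wet_mass_pg   # dry mass is about 23% of the wet mass

# A 1-litre culture at about 1e9 cells/ml should then weigh roughly 1 g (wet):
culture_wet_mass_g = 1e9 * 1000 * wet_mass_pg * 1e-12

print(f"volume ~ {volume_um3:.2f} um^3, surface/volume ~ {sa_to_volume:.0f} per um")
print(f"dry mass per cell ~ {dry_mass_pg:.2f} pg")
print(f"wet mass of a 1-litre culture ~ {culture_wet_mass_g:.1f} g")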
Comparison of a typical bacterial cell and a typical human cell (assuming both cells are spheres):
Cell wall
The cell envelope is composed of the cell membrane and the cell wall. As in other organisms, the bacterial cell wall provides structural integrity to the cell. In prokaryotes, the primary function of the cell wall is to protect the cell from internal turgor pressure caused by the much higher concentrations of proteins and other molecules inside the cell compared to its external environment. The bacterial cell wall differs from that of all other organisms by the presence of peptidoglycan, which is located immediately outside of the cell membrane. Peptidoglycan is made up of a polysaccharide backbone consisting of alternating N-acetylmuramic acid (NAM) and N-acetylglucosamine (NAG) residues in equal amounts. Peptidoglycan is responsible for the rigidity of the bacterial cell wall, and for the determination of cell shape. It is relatively porous and is not considered to be a permeability barrier for small substrates. While all bacterial cell walls (with a few exceptions, such as the extracellular parasite Mycoplasma) contain peptidoglycan, not all cell walls have the same overall structures. Since the cell wall is required for bacterial survival, but is absent in some eukaryotes, several antibiotics (notably the penicillins and cephalosporins) stop bacterial infections by interfering with cell wall synthesis, while having no effect on human cells, which have no cell wall, only a cell membrane. There are two main types of bacterial cell walls, those of Gram-positive bacteria and those of Gram-negative bacteria, which are differentiated by their Gram staining characteristics. For both these types of bacteria, particles of approximately 2 nm can pass through the peptidoglycan. If the bacterial cell wall is entirely removed, it is called a protoplast, while if it is partially removed, it is called a spheroplast. Beta-lactam antibiotics such as penicillin inhibit the formation of peptidoglycan cross-links in the bacterial cell wall. The enzyme lysozyme, found in human tears, also digests the cell wall of bacteria and is the body's main defense against eye infections.
Gram-positive cell wall
Gram-positive cell walls are thick and the peptidoglycan (also known as murein) layer constitutes almost 95% of the cell wall in some Gram-positive bacteria and as little as 5-10% of the cell wall in Gram-negative bacteria. The peptidoglycan layer takes up the crystal violet dye and stains purple in the Gram stain. Bacteria within the Deinococcota group may also exhibit Gram-positive staining but contain some cell wall structures typical of Gram-negative bacteria.
The cell wall of some Gram-positive bacteria can be completely dissolved by lysozymes which attack the bonds between N-acetylmuramic acid and N-acetylglucosamine. In other Gram-positive bacteria, such as Staphylococcus aureus, the walls are resistant to the action of lysozymes. They have O-acetyl groups on carbon-6 of some muramic acid residues.
The matrix substances in the walls of Gram-positive bacteria may be polysaccharides or teichoic acids. The latter are very widespread, but have been found only in Gram-positive bacteria. There are two main types of teichoic acid: ribitol teichoic acids and glycerol teichoic acids. The latter one is more widespread. These acids are polymers of ribitol phosphate and glycerol phosphate, respectively, and are located only on the surface of many Gram-positive bacteria. However, the exact function of teichoic acid is debated and not fully understood. Some are lipid-linked to form lipoteichoic acids. Because lipoteichoic acids are covalently linked to lipids within the cytoplasmic membrane, they are responsible for linking and anchoring the peptidoglycan to the cytoplasmic membrane. Lipoteichoic acid is a major component of the gram-positive cell wall. One of its purposes is providing an antigenic function. The lipid element is found in the membrane, where its adhesive properties assist in anchoring it to the membrane. Teichoic acids give the gram-positive cell wall an overall negative charge due to the presence of phosphodiester bonds between teichoic acid monomers.
Outside the cell wall, many gram-positive bacteria have an S-layer of "tiled" proteins. The S-layer assists attachment and biofilm formation. Outside the S-layer, there is often a capsule of polysaccharides. The capsule helps the bacterium evade host phagocytosis. In laboratory culture, the S-layer and capsule are often lost by reductive evolution (the loss of a trait in absence of positive selection).
Gram-negative cell wall
Gram-negative cell walls are much thinner than the Gram-positive cell walls, and they contain a second plasma membrane superficial to their thin peptidoglycan layer, in turn adjacent to the cytoplasmic membrane. Gram-negative bacteria stain as pink in the Gram stain. The chemical structure of the outer membrane's lipopolysaccharide is often unique to specific bacterial sub-species and is responsible for many of the antigenic properties of these strains.
In addition to the peptidoglycan layer, the Gram-negative cell wall also contains an additional outer membrane composed of phospholipids and lipopolysaccharides which face into the external environment. The highly charged nature of lipopolysaccharides confers an overall negative charge to the Gram-negative cell wall. The chemical structure of the outer membrane lipopolysaccharides is often unique to specific bacterial strains, and is responsible for many of their antigenic properties.
As a phospholipid bilayer, the lipid portion of the outer membrane is largely impermeable to all charged molecules. However, channels called porins are present in the outer membrane that allow for passive transport of many ions, sugars and amino acids across the outer membrane. These molecules are therefore present in the periplasm, the region between the plasma membrane and outer membrane. The periplasm contains the peptidoglycan layer and many proteins responsible for substrate binding or hydrolysis and reception of extracellular signals. The periplasm is thought to exist as a gel-like state rather than a liquid due to the high concentration of proteins and peptidoglycan found within it. Because of its location between the cytoplasmic and outer membranes, signals received and substrates bound are available to be transported across the cytoplasmic membrane using transport and signaling proteins embedded there.
Many uncultivated Gram-negative bacteria also have an S-layer and a capsule. These structures are often lost during laboratory cultivation.
Plasma membrane
The plasma membrane or bacterial cytoplasmic membrane is composed of a phospholipid bilayer and thus has all of the general functions of a cell membrane such as acting as a permeability barrier for most molecules and serving as the location for the transport of molecules into the cell. In addition to these functions, prokaryotic membranes also function in energy conservation as the location about which a proton motive force is generated. Unlike eukaryotes, bacterial membranes (with some exceptions e.g. Mycoplasma and methanotrophs) generally do not contain sterols. However, many microbes do contain structurally related compounds called hopanoids which likely fulfill the same function. Unlike eukaryotes, bacteria can have a wide variety of fatty acids within their membranes. Along with typical saturated and unsaturated fatty acids, bacteria can contain fatty acids with additional methyl, hydroxy or even cyclic groups. The relative proportions of these fatty acids can be modulated by the bacterium to maintain the optimum fluidity of the membrane (e.g. following temperature change).
Gram-negative bacteria and mycobacteria have both an inner and an outer membrane. As a phospholipid bilayer, the lipid portion of the bacterial outer membrane is impermeable to charged molecules. However, channels called porins are present in the outer membrane that allow for passive transport of many ions, sugars and amino acids across the outer membrane. These molecules are therefore present in the periplasm, the region between the cytoplasmic and outer membranes. The periplasm contains the peptidoglycan layer and many proteins responsible for substrate binding or hydrolysis and reception of extracellular signals. The periplasm is thought to exist in a gel-like state rather than a liquid due to the high concentration of proteins and peptidoglycan found within it. Because of its location between the cytoplasmic and outer membranes, signals received and substrates bound are available to be transported across the cytoplasmic membrane using transport and signaling proteins embedded there.
Extracellular (external) structures
Fimbriae and pili
Fimbriae (sometimes called "attachment pili") are protein tubes that extend out from the outer membrane in many members of the Pseudomonadota. They are generally short in length and present in high numbers about the entire bacterial cell surface. Fimbriae usually function to facilitate the attachment of a bacterium to a surface (e.g. to form a biofilm) or to other cells (e.g. animal cells during pathogenesis). A few organisms (e.g. Myxococcus) use fimbriae for motility to facilitate the assembly of multicellular structures such as fruiting bodies. Pili are similar in structure to fimbriae but are much longer and present on the bacterial cell in low numbers. Pili are involved in the process of bacterial conjugation where they are called conjugation pili or "sex pili". Type IV pili (non-sex pili) also aid bacteria in gripping surfaces.
S-layers
An S-layer (surface layer) is a cell surface protein layer found in many different bacteria and in some archaea, where it serves as the cell wall. All S-layers are made up of a two-dimensional array of proteins and have a crystalline appearance, the symmetry of which differs between species. The exact function of S-layers is unknown, but it has been suggested that they act as a partial permeability barrier for large substrates. For example, an S-layer could conceivably keep extracellular proteins near the cell membrane by preventing their diffusion away from the cell. In some pathogenic species, an S-layer may help to facilitate survival within the host by conferring protection against host defence mechanisms.
Glycocalyx
Many bacteria secrete extracellular polymers outside of their cell walls, collectively called the glycocalyx. These polymers are usually composed of polysaccharides and sometimes protein. Capsules are relatively impermeable structures that cannot be stained with dyes such as India ink. They are structures that help protect bacteria from phagocytosis and desiccation. The slime layer is involved in the attachment of bacteria to other cells or inanimate surfaces to form biofilms. Slime layers can also be used as a food reserve for the cell.
Flagella
Perhaps the most recognizable extracellular bacterial cell structures are flagella. Flagella are whip-like structures protruding from the bacterial cell wall and are responsible for bacterial motility (movement). The arrangement of flagella about the bacterial cell is unique to the species observed. Common forms include:
Monotrichous – Single flagellum
Lophotrichous – A tuft of flagella found at one of the cell poles
Amphitrichous – Single flagellum found at each of two opposite poles
Peritrichous – Multiple flagella found at several locations about the cell
The bacterial flagellum consists of three basic components: a whip-like filament, a motor complex, and a hook that connects them. The filament is approximately 20 nm in diameter and consists of several protofilaments, each made up of thousands of flagellin subunits. The bundle is held together by a cap and may or may not be encapsulated. The motor complex consists of a series of rings anchoring the flagellum in the inner and outer membranes, followed by a proton-driven motor that drives rotational movement in the filament.
Intracellular (internal) structures
In comparison to eukaryotes, the intracellular features of the bacterial cell are extremely simple. Bacteria do not contain organelles in the same sense as eukaryotes. Instead, the chromosome and perhaps ribosomes are the only easily observable intracellular structures found in all bacteria. There do exist, however, specialized groups of bacteria that contain more complex intracellular structures, some of which are discussed below.
The bacterial DNA and plasmids
Unlike eukaryotes, the bacterial DNA is not enclosed inside of a membrane-bound nucleus but instead resides inside the cytoplasm. The processes concerning the transfer of genetic information — translation, transcription, and DNA replication — therefore all occur within the same compartment and can interact with other cytoplasmic structures, most notably ribosomes. Bacterial DNA can be located in two places:
Bacterial chromosome, located in the irregularly shaped region known as the nucleoid
Extrachromosomal DNA, located outside of the nucleoid region as circular or linear plasmids
The bacterial DNA is not packaged using histones to form chromatin as in eukaryotes but instead exists as a highly compact supercoiled structure, the precise nature of which remains unclear. Most bacterial chromosomes are circular, although some examples of linear chromosomes exist (e.g. Borrelia burgdorferi). Usually, a single bacterial chromosome is present, although some species with multiple chromosomes have been described.
Along with chromosomal DNA, most bacteria also contain small independent pieces of DNA called plasmids that often encode advantageous traits but are not essential to their bacterial host. Plasmids can be easily gained or lost by a bacterium and can be transferred between bacteria as a form of horizontal gene transfer.
Ribosomes and other multiprotein complexes
In most bacteria the most numerous intracellular structure is the ribosome, the site of protein synthesis in all living organisms. All prokaryotes have 70S (where S = Svedberg units) ribosomes, while eukaryotes contain larger 80S ribosomes in their cytosol. The 70S ribosome is made up of a 50S and a 30S subunit. The 50S subunit contains the 23S and 5S rRNA while the 30S subunit contains the 16S rRNA. These rRNA molecules differ in size in eukaryotes and are complexed with a large number of ribosomal proteins, the number and type of which can vary slightly between organisms. While the ribosome is the most commonly observed intracellular multiprotein complex in bacteria, other large complexes do occur and can sometimes be seen using microscopy.
Intracellular membranes
While not typical of all bacteria some microbes contain intracellular membranes in addition to (or as extensions of) their cytoplasmic membranes. An early idea was that bacteria might contain membrane folds termed mesosomes, but these were later shown to be artifacts produced by the chemicals used to prepare the cells for electron microscopy. Examples of bacteria containing intracellular membranes are phototrophs, nitrifying bacteria and methane-oxidising bacteria. Intracellular membranes are also found in bacteria belonging to the poorly studied Planctomycetota group, although these membranes more closely resemble organellar membranes in eukaryotes and are currently of unknown function. Chromatophores are intracellular membranes found in phototrophic bacteria. Used primarily for photosynthesis, they contain bacteriochlorophyll pigments and carotenoids.
Cytoskeleton
The prokaryotic cytoskeleton is the collective name for all structural filaments in prokaryotes. It was once thought that prokaryotic cells did not possess cytoskeletons, but advances in imaging technology and structure determination have shown the presence of filaments in these cells. Homologues for all major cytoskeletal proteins in eukaryotes have been found in prokaryotes. Cytoskeletal elements play essential roles in cell division, protection, shape determination, and polarity determination in various prokaryotes.
Nutrient storage structures
Most bacteria do not live in environments that contain large amounts of nutrients at all times. To accommodate these transient levels of nutrients, bacteria have several different methods of nutrient storage, laying down reserves in times of plenty for use in times of want. For example, many bacteria store excess carbon in the form of polyhydroxyalkanoates or glycogen. Some microbes store soluble nutrients such as nitrate in vacuoles. Sulfur is most often stored as elemental (S⁰) granules which can be deposited either intra- or extracellularly. Sulfur granules are especially common in bacteria that use hydrogen sulfide as an electron source. Most of the above-mentioned examples can be viewed using a microscope and are surrounded by a thin nonunit membrane to separate them from the cytoplasm.
Inclusions
Inclusions are considered to be nonliving components of the cell that do not possess metabolic activity and are not bounded by membranes. The most common inclusions are glycogen, lipid droplets, crystals, and pigments. Volutin granules are cytoplasmic inclusions of complexed inorganic polyphosphate. These granules are called metachromatic granules due to their displaying the metachromatic effect; they appear red or blue when stained with the blue dyes methylene blue or toluidine blue.
Gas vacuoles
Gas vacuoles are membrane-bound, spindle-shaped vesicles, found in some planktonic bacteria and Cyanobacteria, that provide buoyancy to these cells by decreasing their overall cell density. Positive buoyancy is needed to keep the cells in the upper reaches of the water column, so that they can continue to perform photosynthesis. They are made up of a shell of protein that has a highly hydrophobic inner surface, making it impermeable to water (and stopping water vapour from condensing inside) but permeable to most gases. Because the gas vesicle is a hollow cylinder, it is liable to collapse when the surrounding pressure increases. Natural selection has fine-tuned the structure of the gas vesicle to maximise its resistance to buckling, including an external strengthening protein, GvpC, rather like the green thread in a braided hosepipe. There is a simple relationship between the diameter of the gas vesicle and the pressure at which it will collapse – the wider the gas vesicle the weaker it becomes. However, wider gas vesicles are more efficient, providing more buoyancy per unit of protein than narrow gas vesicles. Different species produce gas vesicles of different diameters, allowing them to colonise different depths of the water column (fast-growing, highly competitive species with wide gas vesicles in the topmost layers; slow-growing, dark-adapted species with strong narrow gas vesicles in the deeper layers). The diameter of the gas vesicle will also help determine which species survive in different bodies of water. Deep lakes that experience winter mixing expose the cells to the hydrostatic pressure generated by the full water column. This will select for species with narrower, stronger gas vesicles.
The cell achieves its height in the water column by synthesising gas vesicles. As the cell rises up, it is able to increase its carbohydrate load through increased photosynthesis. Too high and the cell will suffer photobleaching and possible death; however, the carbohydrate produced during photosynthesis increases the cell's density, causing it to sink. The daily cycle of carbohydrate build-up from photosynthesis and carbohydrate catabolism during dark hours is enough to fine-tune the cell's position in the water column, bringing it up toward the surface when its carbohydrate levels are low and it needs to photosynthesise, and allowing it to sink away from the harmful UV radiation when the cell's carbohydrate levels have been replenished. An extreme excess of carbohydrate causes a significant change in the internal pressure of the cell, which causes the gas vesicles to buckle and collapse and the cell to sink out.
Microcompartments
Bacterial microcompartments are widespread, organelle-like structures made of a protein shell that surrounds and encloses various enzymes. They provide a further level of organization: they are compartments within bacteria that are surrounded by polyhedral protein shells, rather than by lipid membranes. These "polyhedral organelles" localize and compartmentalize bacterial metabolism, a function performed by the membrane-bound organelles in eukaryotes.
Carboxysomes
Carboxysomes are bacterial microcompartments found in many autotrophic bacteria such as Cyanobacteria, Knallgasbacteria, Nitroso- and Nitrobacteria. They are proteinaceous structures resembling phage heads in their morphology and contain the enzymes of carbon dioxide fixation in these organisms (especially ribulose bisphosphate carboxylase/oxygenase, RuBisCO, and carbonic anhydrase). It is thought that the high local concentration of the enzymes along with the fast conversion of bicarbonate to carbon dioxide by carbonic anhydrase allows faster and more efficient carbon dioxide fixation than possible inside the cytoplasm. Similar structures are known to harbor the coenzyme B12-containing glycerol dehydratase, the key enzyme of glycerol fermentation to 1,3-propanediol, in some Enterobacteriaceae (e. g. Salmonella).
Magnetosomes
Magnetosomes are bacterial microcompartments found in magnetotactic bacteria that allow them to sense and align themselves along a magnetic field (magnetotaxis). The ecological role of magnetotaxis is unknown but is thought to be involved in the determination of optimal oxygen concentrations. Magnetosomes are composed of the mineral magnetite or greigite and are surrounded by a lipid bilayer membrane. The morphology of magnetosomes is species-specific.
Endospores
Perhaps the best known bacterial adaptation to stress is the formation of endospores. Endospores are bacterial survival structures that are highly resistant to many different types of chemical and environmental stresses and therefore enable the survival of bacteria in environments that would be lethal for these cells in their normal vegetative form. It has been proposed that endospore formation has allowed for the survival of some bacteria for hundreds of millions of years (e.g. in salt crystals) although these publications have been questioned. Endospore formation is limited to several genera of gram-positive bacteria such as Bacillus and Clostridium. It differs from reproductive spores in that only one spore is formed per cell resulting in no net gain in cell number upon endospore germination. The location of an endospore within a cell is species-specific and can be used to determine the identity of a bacterium. Dipicolinic acid is a chemical compound which composes 5% to 15% of the dry weight of bacterial spores and is implicated in being responsible for the heat resistance of endospores. Archaeologists have found viable endospores taken from the intestines of Egyptian mummies as well as from lake sediments in Northern Sweden estimated to be many thousands of years old.
References
Further reading
Cell Structure and Organization
External links
Animated guide to bacterial cell structure.
Bacteria
Bacteriology | Bacterial cell structure | [
"Biology"
] | 5,568 | [
"Prokaryotes",
"Microorganisms",
"Bacteria"
] |
4,209,645 | https://en.wikipedia.org/wiki/Slide%20stop | A slide stop, sometimes referred to as a slide lock, slide release, slide catch, or bolt hold open, is a function on a semi-automatic handgun that both visually indicates when it has expended all loaded ammunition and facilitates faster reloading by pulling back the slide or depressing the slide lock to advance the first round of a new magazine.
Description
The various terms relate to the two functions of the component: while it automatically catches the slide (locking it back) after the magazine's last round has been fired, thereby allowing the user to easily release the slide by pulling down on the switch, it also allows the user to purposefully stop or lock the slide back by pressing up on the switch while racking the slide.
Use
It is sometimes debated whether one should use the slide stop to release the slide. Some argue that this may cause extra wear on the firearm, or that the slide stop may be difficult to push. Some manufacturers recommend using the slide lock as a release, while others recommend racking the slide. Using the slide lock as a release can accelerate wear in some models.
References
Firearm components | Slide stop | [
"Technology"
] | 225 | [
"Firearm components",
"Components"
] |
4,209,687 | https://en.wikipedia.org/wiki/C/2006%20A1%20%28Pojma%C5%84ski%29 | Comet Pojmański is a non-periodic comet discovered by Grzegorz Pojmański on January 2, 2006, and formally designated C/2006 A1. Pojmański discovered the comet at Warsaw University Astronomic Observatory using the Las Campanas Observatory in Chile as part of the All Sky Automated Survey (ASAS). Kazimieras Cernis at the Institute of Theoretical Physics and Astronomy at Vilnius, Lithuania, located it the same night and before the announcement of Pojmański's discovery, in ultraviolet images taken a few days earlier by the SWAN instrument aboard the SOHO satellite. A pre-discovery picture was later found from December 29, 2005.
At the time of its discovery, the comet was roughly 113 million miles (181 million kilometers) from the Sun. Orbital elements indicated that on February 22, 2006, it would reach perihelion at a distance of 51.6 million miles, just over half the Earth's average distance from the Sun.
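Expressed in astronomical units, the perihelion distance can be checked with a one-line calculation (illustrative only; the conversion factor of about 92.96 million miles per AU is a standard value, not taken from this article).

# Perihelion distance of C/2006 A1 relative to Earth's average distance from the Sun.
perihelion_miles = 51.6e6
au_in_miles = 92.96e6      # 1 astronomical unit expressed in miles
print(f"{perihelion_miles / au_in_miles:.2f} AU")   # ~0.56 AU, i.e. just over half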
The comet moved on a northward path across the night sky, and reached maximum brightness around the beginning of March. Comet Pojmański reached the very fringe of naked-eye visibility at about magnitude 5, and was best visible through binoculars or a telescope. It could be found in the dawn sky within the constellation Capricornus, close to the horizon in the northern hemisphere, during late February, but viewing circumstances became better for the northern hemisphere as the comet departed southern skies and continued north.
By early March, the comet was located in Aquila, the Eagle, and by March 7 was located in the constellation Delphinus, the Dolphin.
Comet Pojmański brightened more than initially estimated, perhaps due to over-cautious estimates by astronomers. It had previously been estimated to reach a maximum brightness of around 6.5 magnitude, but became considerably brighter.
During the comet's appearance, it sported a tail of three to seven degrees (six to fourteen times the apparent lunar diameter) and a coma of up to about 10 arcseconds.
See also
All Sky Automated Survey
Bohdan Paczyński
References
External links
Space.com: "New Comet Brightens Rapidly" (Accessed 2/27/06)
Sky and Telescope: "A Surprise Comet in the Dawn" (Accessed 2/27/06)
Comet Data and Images from Warsaw University
Non-periodic comets
20060102
Science and technology in Poland | C/2006 A1 (Pojmański) | [
"Astronomy"
] | 493 | [
"Astronomy stubs",
"Comet stubs"
] |
4,209,700 | https://en.wikipedia.org/wiki/Tim%20Sumner%20%28physicist%29 | Timothy J. Sumner is Professor of Experimental Physics at Imperial College London. He is a member of the UK Dark Matter Collaboration, and Sumner's interests cover a wide range of astronomy-related fields, focusing particularly on particle physics.
He received his degree in Physics from Sussex University in 1974, and his DPhil in Experimental Physics from Sussex University, for work carried out jointly with the Institut Laue-Langevin in Grenoble. He joined the Cosmic-Ray and Space Physics group at Imperial College in 1979, and in 1984 became the project manager for flight hardware for the x-ray satellite ROSAT. He received a Group Achievement award from NASA for the project in 1990.
He became involved in the search for the direct demonstration of the existence of galactic dark matter, known as "Weakly Interacting Massive Particles" (WIMPs). He is a member of the UK Dark Matter Collaboration (one of four groups around the world looking for WIMPs) and was its spokesperson in the UK for 2002–07. New Scientist described him as "leading the search for galactic dark matter, including axions, at Imperial College London in the UK". He is now Principal Investigator of the ZEPLIN III dark matter experiment. He also leads the ELIXIR proposal for next generation instruments. In addition to ROSAT, he has worked on the space missions Gravity Probe B, which confirmed several predictions of Einstein's Theory of General Relativity, LISA, a gravitational wave observatory in space, and STEP, a mission to test the equivalence principle in space. He is also associated with GAUGE, a new proposal to the European Space Agency.
He is a Fellow of the Institute of Physics, and of the Royal Astronomical Society, and holds the position of Vice-Chair, COSPAR - Commission H.
Publications
Scopus lists him as having 132 peer-reviewed publications, with about 7,000 citations. The ones with the highest citation counts are:
Rowan-Robinson, M.; Mann, R. G.; Oliver, S. J.; Efstathiou, A.; Eaton, N.; Goldschmidt, P.; Mobasher, B.; Serjeant, S. B. G.; Sumner, T. J.; Danese, L.; Elbaz, D.; Franceschini, A.; Egami, E.; Kontizas, M.; Lawrence, A.; McMahon, R.; Norgaard-Nielsen, H. U.; Perez-Fournon, I.; Gonzalez-Serrano, J. I.; "Observations of the Hubble Deep Field with the Infrared Space Observatory - V. Spectral energy distributions; starburst models and star formation history" (1997) Monthly Notices of the Royal Astronomical Society, 289 (2), pp. 490–496. Cited 155 times.
Smith, P. F.; Arnison, G. T. J.; Homer, G. J.; Lewin, J. D.; Alner, G. J.; Spooner, N. J. C.; Quenby, J. J.; Sumner, T. J.; Bewick, A.; Li, J. P.; Shaul, D.; Ali, T.; Jones, W. G.; Smith, N. J. T.; Davies, G. J.; Lally, C. H.; Van Den Putte, M. J.; Barton, J. C.; Blake, P. R.; "Improved dark matter limits from pulse shape discrimination in a low background sodium iodide detector at the Boulby mine" (1996) Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, 379 (1-4), pp. 299–308. Cited 103 times.
Oliver, S.; Rowan-Robinson, M.; Alexander, D. M.; Almaini, O.; Balcells, M.; Baker, A. C.; Barcons, X.; Barden, M.; Bellas-Velidis, I.; Cabrera-Guerra, F.; Carballo, R.; Cesarsky, C. J.; Ciliegi, P.; Clements, D. L.; Crockett, H.; Danese, L.; Dapergolas, A.; Drolias, B.; Eaton, N.; Efstathiou, A.; Egami, E.; Elbaz, D.; Fadda, D.; Fox, M.; Franceschini, A.; Genzel, R.; Goldschmidt, P.; Graham, M.; Gonzalez-Serrano, J. I.; Gonzalez-Solares, E. A.; Granato, G. L.; Gruppioni, C.; Herbstmeier, U.; Héraudeau, Philippe; Joshi, M.; Kontizas, E.; Kontizas, M.; Kotilainen, J. K.; Kunze, D.; La Franca, F.; Lari, C.; Lawrence, A.; Lemke, D.; Linden-Vørnle, M. J. D.; Mann, R. G.; Márquez, I.; Masegosa, J.; Mattila, K.; McMahon, R. G.; Miley, G.; Missoulis, V.; Mobasher, B.; Morel, T.; Nørgaard-Nielsen, H.; Omont, A.; Papadopoulos, P.; Perez-Fournon, I.; Puget, J.-L.; Rigopoulou, D.; Rocca-Volmerange, B.; Serjeant, S.; Silva, L.; Sumner, T.; Surace, C.; Vaisanen, P.; Van Der Werf, P. P.; Verma, A.; Vigroux, L.; Villar-Martin, M.; Willott, C. J.; "The European Large Area ISO Survey - I. Goals, definition and observations" (2000) Monthly Notices of the Royal Astronomical Society, 316 (4), pp. 749–767. Cited 95 times.
Serjeant, S.; Oliver, S.; Rowan-Robinson, M.; Crockett, H.; Missoulis, V.; Sumner, T.; Gruppioni, C.; Mann, R. G.; Eaton, N.; Elbaz, D.; Clements, D. L.; Baker, A.; Efstathiou, A.; Cesarsky, C.; Danese, L.; Franceschini, A.; Genzel, R.; Lawrence, A.; Lemke, D.; McMahon, R. G.; Miley, G.; Puget, J.-L.; Rocca-Volmerange, B.; "The European Large Area ISO Survey - II. Mid-infrared extragalactic source counts" (2000) Monthly Notices of the Royal Astronomical Society; 316 (4), pp. 768–778. Cited 66 times.
Rowan-Robinson, M.; Lari, C.; Perez-Fournon, I.; Gonzalez-Solares, E. A.; La Franca, F.; Vaccari, M.; Oliver, S.; Gruppioni, C.; Ciliegi, P.; Héraudeau, Philippe; Serjeant, S.; Efstathiou, A.; Babbedge, T.; Matute, I.; Pozzi, F.; Franceschini, A.; Vaisanen, P.; Afonso-Luis, A.; Alexander, D. M.; Almaini, O.; Baker, A. C.; Basilakos, S.; Barden, M.; Del Burgo, C.; Bellas-Velidis, I.; Cabrera-Guerra, F.; Carballo, R.; Cesarsky, C. J.; Clements, D. L.; Crockett, H.; Danese, L.; Dapergolas, A.; Drolias, B.; Eaton, N.; Egami, E.; Elbaz, D.; Fadda, D.; Fox, M.; Genzel, R.; Goldschmidt, P.; Gonzalez-Serrano, J. I.; Graham, M.; Granato, G. L.; Hatziminaoglou, E.; Herbstmeier, U.; Joshi, M.; Kontizas, E.; Kontizas, M.; Kotilainen, J. K.; Kunze, D.; Lawrence, A.; Lemke, D.; Linden-Vørnle, M. J. D.; Mann, R .G.; Márquez, I.; Masegosa, J.; McMahon, R. G.; Miley, G.; Missoulis, V.; Mobasher, B.; Morel, T.; Nørgaard-Nielsen, H.; Omont, A.; Papadopoulos, P.; Puget, J.-L.; Rigopoulou, D.; Rocca-Volmerange, B.; Sedgwick, N.; Silva, L.; Sumner, T.; Surace, C.; Vila-Vilaro, B.; Van Der Werf, P.; Verma, A.; Vigroux, L.; Villar-Martin, M.; Willott, C. J.; Carramiñana, A.; Mujica, R.; "The European Large-Area ISO Survey (ELAIS): The final band-merged catalogue" (2004) Monthly Notices of the Royal Astronomical Society, 351 (4), pp. 1290–1306. Cited 65 times.
References
External links
Official home page at ICL
Living people
21st-century British astronomers
Academics of Imperial College London
Particle physicists
Year of birth missing (living people)
Alumni of the University of Sussex
Fellows of the Royal Astronomical Society
Fellows of the Institute of Physics
20th-century British astronomers | Tim Sumner (physicist) | [
"Physics"
] | 2,207 | [
"Particle physicists",
"Particle physics"
] |
4,209,907 | https://en.wikipedia.org/wiki/Measuring%20spoon | A measuring spoon is a spoon used to measure an amount of an ingredient, either liquid or dry, when cooking. Measuring spoons may be made of plastic, metal, and other materials. They are available in many sizes, including the teaspoon and tablespoon.
Country differences
International
Metric measuring spoons are available in sets, usually between four and six, typically with decilitre (100 ml), tablespoon (15 ml), teaspoon (5 ml) and millilitre measures. For fractional measures, there is often a line inside to indicate "half" or "a quarter", or a separate measure may be included, like dl.
United States
In the U.S., measuring spoons often come in sets, usually between four and six. This usually includes ¼, ½, and 1 teaspoon and 1 tablespoon.
The volume of a traditional US teaspoon is 4.9 ml and that of a tablespoon is 14.8 ml, only slightly less than standard metric measuring spoons.
Japan
In Japan, usually two spoons are used: a large spoon and a small spoon.
A large spoon is 15 milliliters, and a small spoon is 5 milliliters.
Sometimes a much smaller spoon may be used, usually a 2.5 milliliter spoon (½ small spoon).
Australia
The Australian definition of the tablespoon as a unit of volume is larger than most:
1 Australian tablespoon = 20 ml
= 2 dessertspoons (1 dessertspoon = 10 ml each)
= 4 teaspoons (1 teaspoon = 5 ml each)
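Because these national definitions differ, the same recipe measured with different spoons can give noticeably different volumes. The Python sketch below is an illustrative lookup built only from the volumes quoted in this article; the function name and structure are arbitrary.

# Spoon-to-millilitre conversion using the volumes quoted in this article.
SPOON_ML = {
    ("metric", "teaspoon"): 5.0,
    ("metric", "tablespoon"): 15.0,
    ("us", "teaspoon"): 4.9,          # traditional US teaspoon
    ("us", "tablespoon"): 14.8,       # traditional US tablespoon
    ("australia", "teaspoon"): 5.0,
    ("australia", "dessertspoon"): 10.0,
    ("australia", "tablespoon"): 20.0,
    ("japan", "small spoon"): 5.0,
    ("japan", "large spoon"): 15.0,
}

def spoons_to_ml(count, spoon, region="metric"):
    """Convert a number of spoons of the given kind and region to millilitres."""
    return count * SPOON_ML[(region.lower(), spoon.lower())]

# Example: 2 Australian tablespoons are 40 ml, but 2 US tablespoons are about 29.6 ml.
print(spoons_to_ml(2, "tablespoon", "australia"))   # 40.0
print(spoons_to_ml(2, "tablespoon", "us"))          # 29.6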
Specialized measuring spoons
Special spoons are manufactured to measure popular materials for common tasks. For example, for coffee the standard measuring spoon contains 7 grams of coffee powder, adequate for an espresso.
Measuring with cutlery spoons
Cutlery in many countries includes two spoons (besides the fork and knife, or butterknife). These cutlery spoons are also called a "teaspoon" and "tablespoon", but are not necessarily the same volume as measuring spoons with the same names: Cutlery spoons are not made to standard sizes and may hold 2.5~7.3 ml (50%~146% of 5 ml) for teaspoons and 7~20 ml (47%~133% of 15 ml) for tablespoons. The difference in size can be dangerous when cutlery is used for critical measurements, like medication.
See also
Measuring cup
Cooking weights and measures
Kitchen utensil
References
Spoons
Food preparation utensils
Cooking weights and measures
Volumetric instruments
Units of volume | Measuring spoon | [
"Mathematics",
"Technology",
"Engineering"
] | 597 | [
"Units of volume",
"Quantity",
"Measuring instruments",
"Volumetric instruments",
"Units of measurement"
] |
4,210,325 | https://en.wikipedia.org/wiki/Brass%20fastener | A brass fastener, butterfly clips, brad, paper fastener or split pin is a stationery item used for securing multiple sheets of paper together.
A patent for the fastener was issued in 1866 to George W. McGill.
The fastener is inserted into punched holes in the stack of paper, and the leaves, or tines, of the legs are separated and bent over to secure the paper. This holds the pin in place and the sheets of paper together. For a few sheets of paper, holes can be made using the sharp end of the fastener.
A split pin may be used in place of staples, but they are more commonly used in situations where rotation around the joint is desirable. This lends split pins to use in mobile paper and cardboard models, and they are often used as modern scrapbooking embellishments. In the film industry, brass fasteners are an industry standard in binding screenplays.
It is shaped somewhat like a nail with a round head and a flat, split length. Brass fasteners are made of a soft metal such as brass, and the tines are typically of two slightly different lengths to allow easy separation. A brass fastener is similar in design and function to its mechanical counterpart, the split pin.
References
External links
Paper Fasteners
Brass Fasteners
Fasteners
Stationery
Office equipment
Metallic objects | Brass fastener | [
"Physics",
"Engineering"
] | 271 | [
"Metallic objects",
"Fasteners",
"Construction",
"Physical objects",
"Matter"
] |
4,210,729 | https://en.wikipedia.org/wiki/Double-stranded%20RNA | Double-stranded RNA (dsRNA) is RNA with two complementary strands found in cells. It is similar to DNA but with the replacement of thymine by uracil and the adding of one oxygen atom. Despite the structural similarities, much less is known about dsRNA.
They form the genetic material of some viruses (double-stranded RNA viruses). dsRNA, such as viral RNA or siRNA, can trigger RNA interference in eukaryotes, as well as interferon response in vertebrates. In eukaryotes, dsRNA plays a role in the activation of the innate immune system against viral infections.
History of discovery
Watson and Crick had noted early on that the 2′ hydroxyl group on each RNA nucleotide would prevent RNA from forming a double helix similar to the one they had described for DNA.
In 1956, Alexander Rich and David R. Davies proposed the double-helical structure of RNA for the first time.
Structure
High molecular weight RNA in the 'A' form is referred to as dsRNA and possesses the following characteristics:
A cooperative type of temperature transition profiles with ionic strength-dependent Tm values;
Sedimentation coefficients (s20,w) above 8–9 S;
A base composition expected for an RNA duplex composed of two complementary, antiparallel strands stabilized by hydrogen bonds and hydrophobic interactions;
A molar absorbance (per phosphodiester group) lower than that of single-stranded RNA (ssRNA).
An absolute hypochromicity significantly more than ssRNA;
These characteristics are found in the genomes of various organisms, as well as in the double-stranded RNA that was formerly referred to as the "replicative form" and subsequently thought to be a byproduct of phage RNA replication. Alternatively, they are found in artificial high molecular weight double-stranded polyribonucleotide complexes like poly(A) · poly(U) or poly(I) · poly(C) complexes.
The widely recognized acidic forms of polyadenylate and polycytidylate can be introduced to these canonical double-stranded RNA species. Because the bases of these polyribonucleotides are protonated at pH values lower than adenine and cytosine's pK values, they assume a well-characterized [and for poly(A) particularly stable] double-stranded structure at acidic pH levels.
The more or less abundant self-complementary sequences found in all other forms of RNA, including rRNA, mRNA, tRNA, single-stranded viral RNA, and viroid RNA, can likewise form double-helical secondary structures, albeit incomplete and/or irregular.
Sources
Endogenous retroviruses, natural sense-antisense transcript pairs, mitochondrial transcripts, and repetitive nuclear sequences, including short and long interspersed elements (SINEs and LINEs), are some of the primary sources of endogenous dsRNA.
Properties
In general, dsRNAs share some significant characteristics:
They show a remarkable resistance to RNase A.
They are not transcribed from the DNA of the host genome.
The majority of them are consistently present in the host at a low concentration.
They do not appear to have a noticeable impact on the phenotype of their host.
They are effectively carried to the next generation.
dsRNAs range in size from 1.5 to 20 kbp. Smaller dsRNAs (<2.0 kbp) are frequently associated with virus-like particles, and some of these dsRNAs have already been identified as viruses belonging to the Partitiviridae family. They typically have two distinct linear dsRNA segments, each approximately 2.0 kbp in length. Segments larger than 10 kbp are unlikely to be linked to specific virus-like particles, as no unique virus-like particles have been identified in samples prepared using various purification techniques. For this reason, these large dsRNAs were previously referred to as enigmatic dsRNAs, endogenous dsRNAs, or RNA plasmids.
References
Nucleic acids
Double-stranded RNA viruses
RNA | Double-stranded RNA | [
"Chemistry"
] | 833 | [
"Biomolecules by chemical classification",
"Nucleic acids"
] |
4,210,737 | https://en.wikipedia.org/wiki/LIGA | LIGA is a fabrication technology used to create high-aspect-ratio microstructures. The term is a German acronym for – lithography, electroplating, and molding.
Overview
LIGA consists of three main processing steps: lithography, electroplating, and molding. There are two main LIGA-fabrication technologies: X-Ray LIGA, which uses X-rays produced by a synchrotron to create high-aspect-ratio structures, and UV LIGA, a more accessible method which uses ultraviolet light to create structures with relatively low aspect ratios.
Notable characteristics of X-ray LIGA-fabricated structures include:
high aspect ratios on the order of 100:1
parallel side walls with a flank angle on the order of 89.95°
smooth side walls with = , suitable for optical mirrors
structural heights from tens of micrometers to several millimeters
structural details on the order of micrometers over distances of centimeters
X-Ray LIGA
X-Ray LIGA is a fabrication process in microtechnology that was developed in the early 1980s by a team under the leadership of Erwin Willy Becker and Wolfgang Ehrfeld at the Institute for Nuclear Process Engineering (Institut für Kernverfahrenstechnik, IKVT) at the Karlsruhe Nuclear Research Center, since renamed to the Institute for Microstructure Technology (Institut für Mikrostrukturtechnik, IMT) at the Karlsruhe Institute of Technology (KIT). LIGA was one of the first major techniques to allow on-demand manufacturing of high-aspect-ratio structures (structures that are much taller than wide) with lateral precision below one micrometer.
In the process, an X-ray sensitive polymer photoresist, typically PMMA, bonded to an electrically conductive substrate, is exposed to parallel beams of high-energy X-rays from a synchrotron radiation source through a mask partly covered with a strong X-ray absorbing material. Chemical removal of exposed (or unexposed) photoresist results in a three-dimensional structure, which can be filled by the electrodeposition of metal. The resist is chemically stripped away to produce a metallic mold insert. The mold insert can be used to produce parts in polymers or ceramics through injection molding.
The LIGA technique's unique value is the precision obtained by the use of deep X-ray lithography (DXRL). The technique enables microstructures with high aspect ratios and high precision to be fabricated in a variety of materials (metals, plastics, and ceramics). Many of its practitioners and users are associated with, or are located close to, synchrotron facilities.
UV LIGA
UV LIGA utilizes an inexpensive ultraviolet light source, like a mercury lamp, to expose a polymer photoresist, typically SU-8. Because heating and transmittance are not an issue in optical masks, a simple chromium mask can be substituted for the technically sophisticated X-ray mask. These reductions in complexity make UV LIGA much cheaper and more accessible than its X-ray counterpart. However, UV LIGA is not as effective at producing precision molds and is thus used when cost must be kept low and very high aspect ratios are not required.
Process details
Mask
X-ray masks are composed of a transparent low-Z carrier, a patterned high-Z absorber, and a metallic ring for alignment and heat removal. Due to extreme temperature variations induced by the X-ray exposure, carriers are fabricated from materials with high thermal conductivity to reduce thermal gradients. Currently, vitreous carbon and graphite are considered the best material, as their use significantly reduces side-wall roughness. Silicon, silicon nitride, titanium, and diamond are also used as carrier substrates but not preferred, as the required thin membranes are comparatively fragile and titanium masks tend to round sharp features due to edge fluorescence. Absorbers are gold, nickel, copper, tin, lead, and other X-ray-absorbing metals.
Masks can be fabricated in several fashions. The most accurate and expensive masks are those created by electron-beam lithography, which provides resolutions as fine as in resist thick and features in resist thick. An intermediate method is the plated photomask, which provides resolution and can be outsourced at a cost on the order of $1000 per mask. The least expensive method is a direct photomask, which provides resolution in resist thick. In summary, masks can cost between $1000 and $20,000 and take between two weeks and three months for delivery. Due to the small size of the market, each LIGA group typically has its own mask-making capability. Future trends in mask creation include larger formats, from a diameter of to , and smaller feature sizes.
Substrate
The starting material is a flat substrate, such as a silicon wafer or a polished disc of beryllium, copper, titanium, or other material. The substrate, if not already electrically conductive, is covered with a conductive plating base, typically through sputtering or evaporation.
The fabrication of high-aspect-ratio structures requires the use of a photoresist able to form a mold with vertical sidewalls; thus, the photoresist must have a high selectivity and be relatively free from stress when applied in thick layers. The typical choice, poly(methyl methacrylate) (PMMA), is applied to the substrate by a glue-down process in which a precast, high-molecular-weight sheet of PMMA is attached to the plating base on the substrate. The applied photoresist is then milled down to the precise height by a fly cutter prior to pattern transfer by X-ray exposure. Because the layer must be relatively free from stress, this glue-down process is preferred over alternative methods such as casting. Further, the cutting of the PMMA sheet by the fly cutter requires specific operating conditions and tools to avoid introducing any stress and crazing of the photoresist.
Exposure
A key enabling technology of LIGA is the synchrotron, capable of emitting high-power, highly-collimated X-rays. This high collimation permits relatively large distances between the mask and the substrate without the penumbral blurring that occurs from other X-ray sources. In the electron storage ring or synchrotron, a magnetic field constrains electrons to follow a circular path, and the radial acceleration of the electrons causes electromagnetic radiation to be emitted forward. The radiation is thus strongly collimated in the forward direction and can be assumed to be parallel for lithographic purposes. Because of the much higher flux of usable collimated X-rays, shorter exposure times become possible. Photon energies for a LIGA exposure are approximately distributed between 2.5 and .
Unlike optical lithography, there are multiple exposure limits, identified as the top dose, bottom dose, and critical dose, whose values must be determined experimentally for a proper exposure. The exposure must be sufficient to meet the requirements of the bottom dose, the exposure under which a photoresist residue will remain, and the top dose, the exposure over which the photoresist will foam. The critical dose is the exposure at which unexposed resist begins to be attacked. Due to the insensitivity of PMMA, a typical exposure time for a -thick PMMA is six hours. During exposure, secondary radiation effects such as Fresnel diffraction, mask and substrate fluorescence, and the generation of Auger electrons and photoelectrons can lead to overexposure.
During exposure, the X-ray mask and the mask holder are heated directly by X-ray absorption and cooled by forced convection from nitrogen jets. Temperature rise in PMMA resist is mainly from heat conducted from the substrate backward into the resist and from the mask plate through the inner cavity air forward to the resist, with X-ray absorption being tertiary. Thermal effects include chemistry variations due to resist heating and geometry-dependent mask deformation.
Development
For high-aspect-ratio structures, the resist-developer system is required to have a ratio of dissolution rates in the exposed and unexposed areas of 1000:1. The standard, empirically optimized developer is a mixture of tetrahydro-1,4-oxazine (20%), 2-aminoethanol-1 (5%), 2-(2-butoxyethoxy)ethanol (60%), and water (15%). This developer provides the required ratio of dissolution rates and reduces stress-related cracking from swelling in comparison to conventional PMMA developers. After development, the substrate is rinsed with deionized water and dried either in a vacuum or by spinning. At this stage, the PMMA structures can be released as the final product (e.g., optical components) or can be used as molds for subsequent metal deposition.
Electroplating
In the electroplating step, nickel, copper, or gold is plated upward from the metalized substrate into the voids left by the removed photoresist. Taking place in an electrolytic cell, the current density, temperature, and solution are carefully controlled to ensure proper plating. In the case of nickel deposition from NiCl2 in a KCl solution, Ni is deposited on the cathode (metalized substrate) and Cl2 evolves at the anode. Difficulties associated with plating into PMMA molds include voids, where hydrogen bubbles nucleate on contaminants; chemical incompatibility, where the plating solution attacks the photoresist; and mechanical incompatibility, where film stress causes the plated layer to lose adhesion. These difficulties can be overcome through the empirical optimization of the plating chemistry and environment for a given layout.
Stripping
After exposure, development, and electroplating, the resist is stripped. One method for removing the remaining PMMA is to flood-expose the substrate and use the developing solution to cleanly remove the resist. Alternatively, chemical solvents can be used. Stripping of a thick resist chemically is a lengthy process, taking two to three hours in acetone at room temperature. In multilayer structures, it is common practice to protect metal layers against corrosion by backfilling the structure with a polymer-based encapsulant. At this stage, metal structures can be left on the substrate (e.g., microwave circuitry) or released as the final product (e.g., gears).
Replication
After stripping, the released metallic components can be used for mass replication through standard means of replication such as stamping or injection molding.
Commercialization
In the 1990s, LIGA was a cutting-edge MEMS fabrication technology, resulting in the design of components showcasing the technique's unique versatility. Several companies that began using the LIGA process later changed their business model (e.g., Steag microParts becoming Boehringer Ingelheim microParts, Mezzo Technologies). Currently, only two companies, HTmicro and microworks, continue their work in LIGA, benefiting from limitations of other competing fabrication technologies. UV LIGA, due to its lower production cost, is employed more broadly by several companies, such as Veco, Tecan, Temicon, and Mimotec in Switzerland, who supply the Swiss watch market with metal parts made of nickel and nickel-phosphorus.
Gallery
Below is a gallery of LIGA-fabricated structures arranged by date.
Notes
See also
Photolithography
X-ray lithography
Electroplating
Molding
Synchrotron
PMMA
SU-8 photoresist
Enriched Uranium — Aerodynamic Processes
References
External links
LiMiNT - LIGA process from Singapore Synchrotron Light Source
LIGA process Karlsruhe Institute of Technology, Institute of Microstructure Technology
Illustrated LIGA-process by Arndt Last
Materials science
Microtechnology
Lithography (microfabrication) | LIGA | [
"Physics",
"Materials_science",
"Engineering"
] | 2,428 | [
"Applied and interdisciplinary physics",
"Microtechnology",
"Materials science",
"nan",
"Nanotechnology",
"Lithography (microfabrication)"
] |
4,210,943 | https://en.wikipedia.org/wiki/Flexural%20strength | Flexural strength, also known as modulus of rupture, bend strength, or transverse rupture strength, is a material property, defined as the stress in a material just before it yields in a flexure test. The transverse bending test is most frequently employed, in which a specimen having either a circular or rectangular cross-section is bent until fracture or yielding using a three-point flexural test technique. The flexural strength represents the highest stress experienced within the material at its moment of yield. It is measured in terms of stress, here given the symbol σ.
Introduction
When an object formed of a single material, like a wooden beam or a steel rod, is bent (Fig. 1), it experiences a range of stresses across its depth (Fig. 2). At the edge of the object on the inside of the bend (concave face) the stress will be at its maximum compressive value. At the outside of the bend (convex face) the stress will be at its maximum tensile value. These inner and outer edges of the beam or rod are known as the 'extreme fibers'. Most materials generally fail under tensile stress before they fail under compressive stress.
Flexural versus tensile strength
The flexural strength would be the same as the tensile strength if the material were homogeneous. In fact, most materials have small or large defects in them which act to concentrate the stresses locally, effectively causing a localized weakness. When a material is bent only the extreme fibers are at the largest stress so, if those fibers are free from defects, the flexural strength will be controlled by the strength of those intact 'fibers'. However, if the same material was subjected to only tensile forces then all the fibers in the material are at the same stress and failure will initiate when the weakest fiber reaches its limiting tensile stress. Therefore, it is common for flexural strengths to be higher than tensile strengths for the same material. Conversely, a homogeneous material with defects only on its surfaces (e.g., due to scratches) might have a higher tensile strength than flexural strength.
If we don't take into account defects of any kind, it is clear that the material will fail under a bending force which is smaller than the corresponding tensile force. Both of these forces will induce the same failure stress, whose value depends on the strength of the material.
For a rectangular sample, the resulting stress under an axial force is given by the following formula:
$$\sigma = \frac{F}{bd}$$
This stress is not the true stress, since the cross section of the sample is considered to be invariable (engineering stress).
F is the axial load (force) at the fracture point
b is width
d is the depth or thickness of the material
The resulting stress for a rectangular sample under a load in a three-point bending setup (Fig. 3) is given by the formula below (see "Measuring flexural strength").
Equating these two failure stresses yields:
$$\frac{F_{\text{tensile}}}{bd} = \frac{3 F_{\text{flex}} L}{2 b d^{2}} \quad\Longrightarrow\quad F_{\text{tensile}} = F_{\text{flex}}\,\frac{3L}{2d}$$
Typically, L (the length of the support span) is much larger than d, so the fraction 3L/(2d) is larger than one.
Measuring flexural strength
For a rectangular sample under a load in a three-point bending setup (Fig. 3), starting with the classical form of maximum bending stress:
$$\sigma = \frac{Mc}{I}$$
M is the moment in the beam
c is the maximum distance from the neutral axis to the outermost fiber in the bending plane
I is the second moment of area
For a simply supported beam as shown in Fig. 3, assuming the load is centered between the supports, the maximum moment is at the center and is equal to:
$$M = \frac{FL}{4}$$
For a rectangular cross section,
$$c = \frac{d}{2}$$ (central axis to the outermost fiber of the rectangle)
$$I = \frac{bd^{3}}{12}$$ (second moment of area for a rectangle)
Combining these terms in the classical bending stress equation gives:
$$\sigma = \frac{3FL}{2bd^{2}}$$
F is the load (force) at the fracture point (N)
L is the length of the support span
b is width
d is thickness
For a rectangular sample under a load in a four-point bending setup where the loading span is one-third of the support span:
$$\sigma = \frac{FL}{bd^{2}}$$
F is the load (force) at the fracture point
L is the length of the support (outer) span
b is width
d is thickness
For the 4 pt bend setup, if the loading span is 1/2 of the support span (i.e. Li = 1/2 L in Fig. 4):
$$\sigma = \frac{3FL}{4bd^{2}}$$
If the loading span is neither 1/3 nor 1/2 the support span for the 4 pt bend setup (Fig. 4):
$$\sigma = \frac{3F(L - L_i)}{2bd^{2}}$$
Li is the length of the loading (inner) span
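As a quick cross-check of the formulas above, the following Python sketch computes the flexural strength of a rectangular bar in three-point and four-point bending; the load and specimen dimensions are hypothetical illustrative values, not data from this article.

```python
def flex_strength_3pt(F, L, b, d):
    """Three-point bend: sigma = 3FL / (2bd^2). SI units (N, m) give Pa."""
    return 3.0 * F * L / (2.0 * b * d**2)

def flex_strength_4pt(F, L, b, d, Li):
    """Four-point bend with inner (loading) span Li: sigma = 3F(L - Li) / (2bd^2)."""
    return 3.0 * F * (L - Li) / (2.0 * b * d**2)

# Hypothetical specimen: 4 mm x 3 mm bar on a 40 mm support span,
# failing at 100 N.
b, d, L, F = 4e-3, 3e-3, 40e-3, 100.0
print(flex_strength_3pt(F, L, b, d) / 1e6, "MPa")          # ~166.7 MPa
print(flex_strength_4pt(F, L, b, d, Li=L/3) / 1e6, "MPa")  # ~111.1 MPa for the same load
```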
See also
Euler–Bernoulli beam equation
Flexural modulus
Three-point flexural test
Four-point flexural test
References
J. M. Hodgkinson (2000), Mechanical Testing of Advanced Fibre Composites, Cambridge: Woodhead Publishing, Ltd., p. 132–133.
William D. Callister, Jr., Materials Science and Engineering, Hoboken: John Wiley & Sons, Inc., 2003.
ASTM C1161-02c(2008)e1, Standard Test Method for Flexural Strength of Advanced Ceramics at Ambient Temperature, ASTM International, West Conshohocken, PA.
Continuum mechanics | Flexural strength | [
"Physics"
] | 1,056 | [
"Classical mechanics",
"Continuum mechanics"
] |
4,211,219 | https://en.wikipedia.org/wiki/Kadomtsev%E2%80%93Petviashvili%20equation | In mathematics and physics, the Kadomtsev–Petviashvili equation (often abbreviated as KP equation) is a partial differential equation to describe nonlinear wave motion. Named after Boris Borisovich Kadomtsev and Vladimir Iosifovich Petviashvili, the KP equation is usually written as
$$\partial_x\!\left(\partial_t u + u\,\partial_x u + \epsilon^2\,\partial_{xxx} u\right) + \lambda\,\partial_{yy} u = 0,$$
where $\lambda = \pm 1$. The above form shows that the KP equation is a generalization to two spatial dimensions, x and y, of the one-dimensional Korteweg–de Vries (KdV) equation. To be physically meaningful, the wave propagation direction has to be not-too-far from the x direction, i.e. with only slow variations of solutions in the y direction.
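The relationship to the KdV equation can be sketched explicitly (assuming the form of the KP equation written above): if the solution is independent of y, the term $\lambda\,\partial_{yy}u$ vanishes, and one integration in x (with the constant of integration taken to be zero for solutions decaying at infinity) recovers the KdV equation
$$\partial_t u + u\,\partial_x u + \epsilon^2\,\partial_{xxx} u = 0.$$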
Like the KdV equation, the KP equation is completely integrable. It can also be solved using the inverse scattering transform much like the nonlinear Schrödinger equation.
In 2002, the regularized version of the KP equation, naturally referred to as the Benjamin–Bona–Mahony–Kadomtsev–Petviashvili equation (or simply the BBM-KP equation), was introduced as an alternative model for small amplitude long waves in shallow water moving mainly in the x direction in 2+1 space:
$$\partial_x\!\left(\partial_t u + \partial_x u + u\,\partial_x u - \partial_{xxt} u\right) + \lambda\,\partial_{yy} u = 0,$$
where $\lambda = \pm 1$. The BBM-KP equation provides an alternative to the usual KP equation, in a similar way that the Benjamin–Bona–Mahony equation is related to the classical Korteweg–de Vries equation, as the linearized dispersion relation of the BBM-KP is a good approximation to that of the KP but does not exhibit the unwanted limiting behavior as the Fourier variable dual to x approaches zero. The BBM-KP equation can be viewed as a weak transverse perturbation of the Benjamin–Bona–Mahony equation. As a result, the solutions of their corresponding Cauchy problems share an intriguing and complex mathematical relationship. Aguilar et al. proved that the solution of the Cauchy problem for the BBM-KP model equation converges to the solution of the Cauchy problem associated to the Benjamin–Bona–Mahony equation in an $L^2$-based Sobolev space, provided the corresponding initial data are suitably close as the transverse variable tends to infinity.
History
The KP equation was first written in 1970 by Soviet physicists Boris B. Kadomtsev (1928–1998) and Vladimir I. Petviashvili (1936–1993); it came as a natural generalization of the KdV equation (derived by Korteweg and De Vries in 1895). Whereas in the KdV equation waves are strictly one-dimensional, in the KP equation this restriction is relaxed. Still, both in the KdV and the KP equation, waves have to travel in the positive x-direction.
Connections to physics
The KP equation can be used to model water waves of long wavelength with weakly non-linear restoring forces and frequency dispersion. If surface tension is weak compared to gravitational forces, $\lambda = +1$ is used; if surface tension is strong, then $\lambda = -1$. Because of the asymmetry in the way x- and y-terms enter the equation, the waves described by the KP equation behave differently in the direction of propagation (x-direction) and transverse (y) direction; oscillations in the y-direction tend to be smoother (be of small-deviation).
The KP equation can also be used to model waves in ferromagnetic media, as well as two-dimensional matter–wave pulses in Bose–Einstein condensates.
Limiting behavior
For $\epsilon \ll 1$, typical x-dependent oscillations have a wavelength of $O(1/\epsilon)$, giving a singular limiting regime as $\epsilon \to 0$. The limit $\epsilon \to 0$ is called the dispersionless limit.
If we also assume that the solutions are independent of y as $\epsilon \to 0$, then they also satisfy the inviscid Burgers' equation:
$$\partial_t u + u\,\partial_x u = 0.$$
Suppose the amplitude of oscillations of a solution is asymptotically small — $O(\epsilon)$ — in the dispersionless limit. Then the amplitude satisfies a mean-field equation of Davey–Stewartson type.
See also
Novikov–Veselov equation
Schottky problem
Dispersionless KP equation
References
Further reading
. Translation of
External links
Partial differential equations
Exactly solvable models
Integrable systems
Solitons
Equations of fluid dynamics | Kadomtsev–Petviashvili equation | [
"Physics",
"Chemistry"
] | 893 | [
"Equations of fluid dynamics",
"Equations of physics",
"Integrable systems",
"Theoretical physics",
"Fluid dynamics"
] |
4,211,471 | https://en.wikipedia.org/wiki/Rapid%20sand%20filter | The rapid sand filter or rapid gravity filter is a type of filter used in water purification and is commonly used in municipal drinking water facilities as part of a multiple-stage treatment system. These systems are complex and expensive to operate and maintain, and therefore less suitable for small communities and developing nations.
History
Rapid sand filters were first developed in the 1890s, and improved designs were developed by the 1920s. The first modern rapid sand filtration plant was designed and built by George W. Fuller in Little Falls, New Jersey. Rapid sand filters were widely used in large municipal water systems by the 1920s, because they required smaller land areas compared to slow sand filters.
Design and operation
Rapid sand filters are typically designed as part of multi-stage treatment systems used by large municipalities. These systems are complex and expensive to operate and maintain, and therefore less suitable for small communities and developing nations. The filtration system requires a relatively small land area in proportion to the population served, and the design is less sensitive to changes in raw water quality, e.g. turbidity, than slow sand filters.
Rapid sand filters use relatively coarse sand (0.5 to 1.0 mm) and other granular media, such as anthracite, in beds of 0.6 to 1.2 metre depth to remove particles and impurities that have been trapped in a floc through the use of flocculation chemicals—typically alum. Since media other than silica sand can be used in such filters, a more modern term is "rapid filtration" instead of "rapid sand filtration." The unfiltered water flows at about 5 m/h through the filter medium under gravity or under pumped pressure, and the floc material is trapped in the sand matrix.
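As a rough illustration of how the filtration rate is used in sizing, the sketch below estimates the filter bed area needed for a given design flow, assuming the roughly 5 m/h loading rate mentioned above; the design flow and the number of filter cells are hypothetical example values.

```python
# Rough sizing of rapid gravity filters from a design flow and loading rate.
# The 5 m/h loading rate comes from the text; the design flow and number of
# filter cells are hypothetical example inputs.

def filter_area_m2(design_flow_m3_per_day, loading_rate_m_per_h=5.0):
    """Total bed area = flow / filtration (loading) rate."""
    flow_m3_per_h = design_flow_m3_per_day / 24.0
    return flow_m3_per_h / loading_rate_m_per_h

if __name__ == "__main__":
    total = filter_area_m2(24_000)   # e.g. a 24,000 m3/day plant (assumed)
    n_cells = 6                      # hypothetical number of filter cells
    print(f"total bed area ~ {total:.0f} m2, ~ {total / n_cells:.0f} m2 per cell")
```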
Mixing, flocculation and sedimentation processes are typical treatment stages that precede filtration. Chemical additives, such as coagulants, are often used in conjunction with the filtration system.
The two types of rapid sand filter are the gravity type (e.g. Paterson's filter) and pressure type (e.g. Candy's filter).
A disinfection system (typically using chlorine or ozone) is commonly used following filtration. Rapid sand filtration has very little effect on the taste, smell, and dissolved impurities of drinking water, unless activated carbon is included in the filter medium.
Rapid sand filters must be cleaned frequently, often several times a day, by backwashing, which involves reversing the direction of the water and adding compressed air. During backwashing, the bed is fluidized and care must be taken not to wash away the media.
The backwash sequence would typically be:
Close inlet valve
Isolate controller
Allow water to drain down
Close outlet valve
Start air blower
Open air inlet valve
Air scour for 0-10 minutes
Stop air blowers
Close air valve
Wait 30 seconds
Start washwater pumps
Open upwash valve slowly
Open wash out valve
Wash water for 0-10 minutes
Close upwash valve
Raise washwater weir
Open surface flush inlet
Surface flush for 0-5 minutes
Close washout valve
Lower washwater weir
Bring controller into service
Open inlet valve when filter is full
The byproduct of backwashing is sludge. Most treatment works use a sludge thickening process, except for plants which discharge untreated sludge to sewers if the composition is within the tolerable limits. The thickening process comprises batch settling tanks or continuous picket fence thickeners. Polyelectrolytes are added upstream to enhance settleability. Liquid from the process is routed to the inlet of the works. Thickening is followed by either lagooning, drying beds or filter pressing. Thickened sludge may be discharged to a sewer system, tankered away to landfill, or incinerated.
See also
Slow sand filter
Bank filtration
Biosand filter
Trickling filter
Automated pool cleaner
Notes
References
Water filters
Water technology | Rapid sand filter | [
"Chemistry"
] | 809 | [
"Water technology",
"Water treatment",
"Filters",
"Water filters"
] |
4,211,531 | https://en.wikipedia.org/wiki/Zero-energy%20building | A Zero-Energy Building (ZEB), also known as a Net Zero-Energy (NZE) building, is a building with net zero energy consumption, meaning the total amount of energy used by the building on an annual basis is equal to the amount of renewable energy created on the site or, in other definitions, by renewable energy sources off-site, using technologies such as heat pumps, high-efficiency windows and insulation, and solar panels.
The goal is that these buildings contribute less overall greenhouse gas to the atmosphere during operation than similar non-ZNE buildings. They do at times consume non-renewable energy and produce greenhouse gases, but at other times reduce energy consumption and greenhouse gas production elsewhere by the same amount. The development of zero-energy buildings is encouraged by the desire to have less of an impact on the environment, and their expansion is encouraged by tax breaks and savings on energy costs which make zero-energy buildings financially viable.
Terminology tends to vary between countries, agencies, cities, towns, and reports, so a general knowledge of this concept and its various uses is essential for a versatile understanding of clean energy and renewables. The International Energy Agency (IEA) and European Union (EU) most commonly use "Net Zero Energy", with the term "zero net" being mainly used in the US. A similar concept approved and implemented by the European Union and other agreeing countries is nearly Zero Energy Building (nZEB), with the goal of having all new buildings in the region under nZEB standards by 2020.
Overview
Typical code-compliant buildings consume 40% of the total fossil fuel energy in the US and European Union and are significant contributors of greenhouse gases. To combat such high energy usage, more and more buildings are starting to implement the carbon neutrality principle, which is viewed as a means to reduce carbon emissions and reduce dependence on fossil fuels. Although zero-energy buildings remain limited, even in developed countries, they are gaining importance and popularity.
Most zero-energy buildings use the electrical grid for energy storage but some are independent of the grid and some include energy storage onsite. Buildings that produce a surplus of energy over the year are called "energy-plus buildings", while buildings that fall slightly short are sometimes called "low energy houses". These buildings produce energy onsite using renewable technology like solar and wind, while reducing the overall use of energy with highly efficient lighting and heating, ventilation, and air conditioning (HVAC) technologies. The zero-energy goal is becoming more practical as the costs of alternative energy technologies decrease and the costs of traditional fossil fuels increase.
The development of modern zero-energy buildings became possible largely through the progress made in new energy and construction technologies and techniques. These include highly insulating spray-foam insulation, high-efficiency solar panels, high-efficiency heat pumps and highly insulating, low emissivity, triple and quadruple-glazed windows. These innovations have also been significantly improved by academic research, which collects precise energy performance data on traditional and experimental buildings and provides performance parameters for advanced computer models to predict the efficacy of engineering designs.
Zero-energy buildings can be part of a smart grid. Some advantages of these buildings are as follows:
Integration of renewable energy resources
Integration of plug-in electric vehicles – called vehicle-to-grid
Implementation of zero-energy concepts
Although the net zero concept is applicable to a wide range of resources, such as water and waste, energy is usually the first resource to be targeted because:
Energy, particularly electricity and heating fuel like natural gas or heating oil, is expensive. Hence reducing energy use can save the building owner money. In contrast, water and waste are inexpensive for the individual building owner.
Energy, particularly electricity and heating fuel, has a high carbon footprint. Hence reducing energy use is a major way to reduce the building's carbon footprint.
There are well-established means to significantly reduce the energy use and carbon footprint of buildings. These include: adding insulation, using heat pumps instead of furnaces, using low emissivity, triple or quadruple-glazed windows and adding solar panels to the roof.
In some countries, there are government-sponsored subsidies and tax breaks for installing heat pumps, solar panels, triple or quadruple-glazed windows and insulation that greatly reduce the cost of getting to a net-zero energy building for the building owner.
Optimizing zero-energy building for climate impact
The introduction of zero-energy buildings makes buildings more energy efficient and reduces the rate of carbon emissions once the building is in operation; however, there is still a lot of pollution associated with a building's embodied carbon. Embodied carbon is the carbon emitted in the making and transportation of a building's materials and construction of the structure itself; it is responsible for 11% of global GHG emissions and 28% of global building sector emissions. The importance of embodied carbon will grow as it will begin to account for the greater portion of a building's carbon emissions. In some newer, energy efficient buildings, embodied carbon has risen to 47% of the building's lifetime emissions. Focusing on embodied carbon is part of optimizing construction for climate impact and zero carbon emissions requires slightly different considerations from optimizing only for energy efficiency.
A 2019 study found that between 2020 and 2030, reducing upfront carbon emissions and switching to clean or renewable energy is more important than increasing building efficiency because "building a highly energy efficient structure can actually produce more greenhouse gas than a basic code compliant one if carbon-intensive materials are used." The study stated that because "Net-zero energy codes will not significantly reduce emissions in time, policy makers and regulators must aim for true net zero carbon buildings, not net zero energy buildings."
One way to reduce embodied carbon is by using low-carbon materials for construction such as straw, wood, linoleum, or cedar. For materials like concrete and steel, options to reduce embodied emissions do exist; however, these are unlikely to be available at large scale in the short term. In conclusion, it has been determined that the optimal design point for greenhouse gas reduction appears to be four-story multifamily buildings of low-carbon materials, such as those listed above, which could be a template for low-carbon-emitting structures.
Definitions
Despite sharing the name "zero net energy", there are several definitions of what the term means in practice, with a particular difference in usage between North America and Europe.
Zero net site energy use In this type of ZNE, the amount of energy provided by on-site renewable energy sources is equal to the amount of energy used by the building. In the United States, "zero net energy building" generally refers to this type of building.
Zero net source energy use This ZNE generates the same amount of energy as is used, including the energy used to transport the energy to the building. This type accounts for energy losses during electricity generation and transmission. These ZNEs must generate more electricity than zero net site energy buildings.
Net zero energy emissions Outside the United States and Canada, a ZEB is generally defined as one with zero net energy emissions, also known as a zero carbon building (ZCB) or zero emissions building (ZEB). Under this definition the carbon emissions generated from on-site or off-site fossil fuel use are balanced by the amount of on-site renewable energy production. Other definitions include not only the carbon emissions generated by the building in use, but also those generated in the construction of the building and the embodied energy of the structure. Others debate whether the carbon emissions of commuting to and from the building should also be included in the calculation. Recent work in New Zealand has initiated an approach to include building user transport energy within zero energy building frameworks.
Net zero cost In this type of building, the cost of purchasing energy is balanced by income from sales of electricity to the grid of electricity generated on-site. Such a status depends on how a utility credits net electricity generation and the utility rate structure the building uses.
Net off-site zero energy use A building may be considered a ZEB if 100% of the energy it purchases comes from renewable energy sources, even if the energy is generated off the site.
Off-the-grid Off-the-grid buildings are stand-alone ZEBs that are not connected to an off-site energy utility facility. They require distributed renewable energy generation and energy storage capability (for when the sun is not shining, wind is not blowing, etc.). An energy autarkic house is a building concept where the balance of its own energy consumption and production can be made on an hourly or even smaller basis. Energy autarkic houses can be taken off-the-grid.
Net Zero Energy Building Based on scientific analysis within the joint research program "Towards Net Zero Energy Solar Buildings", a methodological framework was set up which allows different definitions, in accordance with each country's political targets, specific (climate) conditions and respectively formulated requirements for indoor conditions: The overall conceptual understanding of a Net ZEB is an energy efficient, grid-connected building enabled to generate energy from renewable sources to compensate its own energy demand (see figure 1). The wording "Net" emphasizes the energy exchange between the building and the energy infrastructure. By the building-grid interaction, the Net ZEB becomes an active part of the renewable energy infrastructure. This connection to energy grids prevents seasonal energy storage and oversized on-site systems for energy generation from renewable sources like in energy autonomous buildings. The similarity of both concepts is a pathway of two actions: 1) reduce energy demand by means of energy efficiency measures and passive energy use; 2) generate energy from renewable sources. However, the Net ZEBs' grid interaction, and plans to widely increase their numbers, evoke considerations of increased flexibility in the shift of energy loads and reduced peak demands.
Positive Energy District Expanding some of the principles of zero-energy buildings to a city district level, Positive Energy Districts (PED) are districts or other urban areas that produce at least as much energy on an annual basis as they consume. The impetus to develop whole positive energy districts instead of single buildings is based on the possibility of sharing resources, managing energy systems efficiently across many buildings and reaching economies of scale.
Within this balancing procedure several aspects and explicit choices have to be determined:
The building system boundary is split into a physical boundary which determines which renewable resources are considered (e.g. in buildings footprint, on-site or even off-site) respectively how many buildings are included in the balance (single building, cluster of buildings) and a balance boundary which determines the included energy uses (e.g. heating, cooling, ventilation, hot water, lighting, appliances, IT, central services, electric vehicles, and embodied energy, etc.). It should be noticed that renewable energy supply options can be prioritized (e.g. by transportation or conversion effort, availability over the lifetime of the building or replication potential for future, etc.) and therefore create a hierarchy. It may be argued that resources within the building footprint or on-site should be given priority over off-site supply options.
The weighting system converts the physical units of different energy carriers into a uniform metric (site/final energy, source/primary energy renewable parts included or not, energy cost, equivalent carbon emissions and even energy or environmental credits) and allows their comparison and compensation among each other in one single balance (e.g. exported PV electricity can compensate for imported biomass). Politically influenced and therefore possibly asymmetrically or time-dependent conversion/weighting factors can affect the relative value of energy carriers and can influence the required energy generation capacity.
The balancing period is often assumed to be one year (suitable to cover all operation energy uses). A shorter period (monthly or seasonal) could also be considered as well as a balance over the entire life cycle (including embodied energy, which could also be annualized and counted in addition to operational energy uses).
The energy balance can be done in two balance types: 1) Balance of delivered/imported and exported energy (monitoring phase as self-consumption of energy generated on-site can be included); 2) Balance between (weighted) energy demand and (weighted) energy generation (for design phase as normal end users temporal consumption patterns -e.g. for lighting, appliances, etc.- are lacking). Alternatively, a balance based on monthly net values in which only residuals per month are summed up to an annual balance is imaginable. This can be seen either as a load/generation balance or as a special case of import/export balance where a "virtual monthly self-consumption" is assumed (see figure 2 and compare).
Besides the energy balance, Net ZEBs can be characterized by their ability to match the building's load with its energy generation (load matching) or to work beneficially with respect to the needs of the local grid infrastructure (grid interaction). Both can be expressed by suitable indicators which are intended as assessment tools only.
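As a minimal sketch of the balancing procedure described above, the following Python example computes a weighted annual import/export balance for a building; the monthly energy figures and the weighting factors are hypothetical illustrative values, not figures from any standard.

```python
# Minimal sketch of a weighted import/export (Net ZEB) balance.
# Monthly imported and exported energy (kWh) and the weighting factors
# below are hypothetical example values, not data from any standard.

IMPORTED_KWH = [900, 800, 650, 450, 300, 200, 180, 220, 350, 550, 750, 880]
EXPORTED_KWH = [120, 200, 380, 520, 680, 750, 780, 720, 540, 320, 160, 100]

W_IMPORT = 2.3   # e.g. primary-energy factor for grid electricity (assumed)
W_EXPORT = 2.3   # credit per exported kWh (assumed symmetric weighting)

def weighted_balance(imported, exported, w_in, w_out):
    """Positive result: weighted exports exceed weighted imports (net zero or better)."""
    return w_out * sum(exported) - w_in * sum(imported)

balance = weighted_balance(IMPORTED_KWH, EXPORTED_KWH, W_IMPORT, W_EXPORT)
status = "meets" if balance >= 0 else "does not meet"
print(f"weighted annual balance = {balance:.0f}; building {status} the net-zero target")
```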
Design and construction
The most cost-effective steps toward a reduction in a building's energy consumption usually occur during the design process. To achieve efficient energy use, zero energy design departs significantly from conventional construction practice. Successful zero energy building designers typically combine time tested passive solar, or artificial/fake conditioning, principles that work with the on-site assets. Sunlight and solar heat, prevailing breezes, and the cool of the earth below a building, can provide daylighting and stable indoor temperatures with minimum mechanical means. ZEBs are normally optimized to use passive solar heat gain and shading, combined with thermal mass to stabilize diurnal temperature variations throughout the day, and in most climates are superinsulated. All the technologies needed to create zero energy buildings are available off-the-shelf today.
Sophisticated 3-D building energy simulation tools are available to model how a building will perform with a range of design variables such as building orientation (relative to the daily and seasonal position of the sun), window and door type and placement, overhang depth, insulation type and values of the building elements, air tightness (weatherization), the efficiency of heating, cooling, lighting, and other equipment, as well as local climate. These simulations help the designers predict how the building will perform before it is built, and enable them to model the economic and financial implications on building cost benefit analysis, or even more appropriate – life-cycle assessment.
Zero-energy buildings are built with significant energy-saving features. The heating and cooling loads are lowered by using high-efficiency equipment (such as heat pumps rather than furnaces. Heat pumps are about four times as efficient as furnaces) added insulation (especially in the attic and in the basement of houses), high-efficiency windows (such as low emissivity, triple-glazed windows), draft-proofing, high efficiency appliances (particularly modern high-efficiency refrigerators), high-efficiency LED lighting, passive solar gain in winter and passive shading in the summer, natural ventilation, and other techniques. These features vary depending on climate zones in which the construction occurs. Water heating loads can be lowered by using water conservation fixtures, heat recovery units on waste water, and by using solar water heating, and high-efficiency water heating equipment. In addition, daylighting with skylights or solartubes can provide 100% of daytime illumination within the home. Nighttime illumination is typically done with fluorescent and LED lighting that use 1/3 or less power than incandescent lights, without adding unwanted heat. And miscellaneous electric loads can be lessened by choosing efficient appliances and minimizing phantom loads or standby power. Other techniques to reach net zero (dependent on climate) are Earth sheltered building principles, superinsulation walls using straw-bale construction, pre-fabricated building panels and roof elements plus exterior landscaping for seasonal shading.
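As a rough numerical illustration of why glazing performance matters for the heating load, the sketch below compares seasonal conductive losses through two window options; the areas, temperature difference, season length, and U-values are all hypothetical example figures, not data from this article.

```python
# Illustrative conductive heat-loss comparison for windows with different
# U-values (W/m^2.K). The area, temperature difference and heating-season
# length are hypothetical example values.

def seasonal_window_loss_kwh(u_value, area_m2, delta_t_k, hours):
    """Q = U * A * dT, integrated over the heating season, in kWh."""
    return u_value * area_m2 * delta_t_k * hours / 1000.0

AREA = 20.0          # m^2 of glazing (assumed)
DELTA_T = 20.0       # average indoor-outdoor temperature difference, K (assumed)
SEASON_HOURS = 5000  # length of heating season, h (assumed)

for label, u in [("double glazed (assumed U=2.8)", 2.8),
                 ("triple glazed, low-e (assumed U=0.8)", 0.8)]:
    kwh = seasonal_window_loss_kwh(u, AREA, DELTA_T, SEASON_HOURS)
    print(f"{label}: ~{kwh:.0f} kWh lost per season")
```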
Once the energy use of the building has been minimized it can be possible to generate all that energy on site using roof-mounted solar panels.
Zero-energy buildings are often designed to make dual use of energy including that from white goods. For example, using refrigerator exhaust to heat domestic water, ventilation air and shower drain heat exchangers, office machines and computer servers, and body heat to heat the building. These buildings make use of heat energy that conventional buildings may exhaust outside. They may use heat recovery ventilation, hot water heat recycling, combined heat and power, and absorption chiller units.
Energy harvest
ZEBs harvest available energy to meet their electricity and heating or cooling needs. By far the most common way to harvest energy is to use roof-mounted solar photovoltaic panels that turn the sun's light into electricity. Energy can also be harvested with solar thermal collectors (which use the sun's heat to heat water for the building). Heat pumps can also harvest heat and cool from the air (air-sourced) or ground near the building (ground-sourced otherwise known as geothermal). Technically, heat pumps move heat rather than harvest it, but the overall effect in terms of reduced energy use and reduced carbon footprint is similar. In the case of individual houses, various microgeneration technologies may be used to provide heat and electricity to the building, using solar cells or wind turbines for electricity, and biofuels or solar thermal collectors linked to a seasonal thermal energy storage (STES) for space heating. An STES can also be used for summer cooling by storing the cold of winter underground. To cope with fluctuations in demand, zero energy buildings are frequently connected to the electricity grid, export electricity to the grid when there is a surplus, and drawing electricity when not enough electricity is being produced. Other buildings may be fully autonomous.
Energy harvesting is most often more effective in regards to cost and resource utilization when done on a local but combined scale, for example a group of houses, cohousing, local district or village rather than an individual house basis. An energy benefit of such localized energy harvesting is the virtual elimination of electrical transmission and electricity distribution losses. On-site energy harvesting such as with roof top mounted solar panels eliminates these transmission losses entirely. Energy harvesting in commercial and industrial applications should benefit from the topography of each location. However, a site that is free of shade can generate large amounts of solar powered electricity from the building's roof and almost any site can use geothermal or air-sourced heat pumps. The production of goods under net zero fossil energy consumption requires locations of geothermal, microhydro, solar, and wind resources to sustain the concept.
Zero-energy neighborhoods, such as the BedZED development in the United Kingdom, and those that are spreading rapidly in California and China, may use distributed generation schemes. This may in some cases include district heating, community chilled water, shared wind turbines, etc. There are current plans to use ZEB technologies to build entire off-the-grid or net zero energy use cities.
The "energy harvest" versus "energy conservation" debate
One of the key areas of debate in zero energy building design is over the balance between energy conservation and the distributed point-of-use harvesting of renewable energy (solar energy, wind energy, and thermal energy). Most zero energy homes use a combination of these strategies.
As a result of significant government subsidies for photovoltaic solar electric systems, wind turbines, etc., there are those who suggest that a ZEB is a conventional house with distributed renewable energy harvesting technologies. Entire additions of such homes have appeared in locations where photovoltaic (PV) subsidies are significant, but many so called "Zero Energy Homes" still have utility bills. This type of energy harvesting without added energy conservation may not be cost effective with the current price of electricity generated with photovoltaic equipment, depending on the local price of power company electricity. The cost, energy and carbon-footprint savings from conservation (e.g., added insulation, triple-glazed windows and heat pumps) compared to those from on-site energy generation (e.g., solar panels) have been published for an upgrade to an existing house here.
Since the 1980s, passive solar building design and passive house have demonstrated heating energy consumption reductions of 70% to 90% in many locations, without active energy harvesting. For new builds, and with expert design, this can be accomplished with little additional construction cost for materials over a conventional building. Very few industry experts have the skills or experience to fully capture benefits of the passive design. Such passive solar designs are much more cost-effective than adding expensive photovoltaic panels on the roof of a conventional inefficient building. A few kilowatt-hours of photovoltaic panels (costing the equivalent of about US$2-3 dollars per annual kWh production) may only reduce external energy requirements by 15% to 30%. A high seasonal energy efficiency ratio 14 conventional air conditioner requires over 7 kW of photovoltaic electricity while it is operating, and that does not include enough for off-the-grid night-time operation. Passive cooling, and superior system engineering techniques, can reduce the air conditioning requirement by 70% to 90%. Photovoltaic-generated electricity becomes more cost-effective when the overall demand for electricity is lower.
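To make the cost comparison in this debate concrete, the sketch below compares the upfront cost per annual kilowatt-hour saved or generated for a few measures; the PV figure of roughly US$2–3 per annual kWh of production is taken from the text, while the conservation costs and savings are hypothetical examples only.

```python
# Back-of-the-envelope comparison of conservation versus on-site generation.
# The PV figure (~US$2-3 per annual kWh produced) comes from the text; the
# conservation costs and savings are hypothetical examples.

def cost_per_annual_kwh(upfront_cost_usd, annual_kwh_affected):
    """Upfront cost divided by the annual energy it saves or produces."""
    return upfront_cost_usd / annual_kwh_affected

measures = {
    "attic insulation (assumed)":  cost_per_annual_kwh(2_000, 2_500),
    "heat pump upgrade (assumed)": cost_per_annual_kwh(8_000, 6_000),
    "rooftop PV (text estimate)":  2.5,   # midpoint of the $2-3/annual-kWh range
}

for name, cost in sorted(measures.items(), key=lambda kv: kv[1]):
    print(f"{name}: ~${cost:.2f} per annual kWh")
```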
Combined approach in rapid retrofits for existing buildings
Companies in Germany and the Netherlands offer rapid climate retrofit packages for existing buildings, which add a custom designed shell of insulation to the outside of a building, along with upgrades for more sustainable energy use, such as heat pumps. Similar pilot projects are underway in the US.
Occupant behavior
The energy used in a building can vary greatly depending on the behavior of its occupants. The acceptance of what is considered comfortable varies widely. Studies of identical homes have shown dramatic differences in energy use in a variety of climates. An average widely accepted ratio of highest to lowest energy consumer in identical homes is about 3, with some identical homes using up to 20 times as much heating energy as the others. Occupant behavior can vary from differences in setting and programming thermostats, varying levels of illumination and hot water use, window and shading system operation and the amount of miscellaneous electric devices or plug loads used.
Utility concerns
Utility companies are typically legally responsible for maintaining the electrical infrastructure that brings power to our cities, neighborhoods, and individual buildings. Utility companies typically own this infrastructure up to the property line of an individual parcel, and in some cases own electrical infrastructure on private land as well.
In the US utilities have expressed concern that the use of Net Metering for ZNE projects threatens the utilities base revenue, which in turn impacts their ability to maintain and service the portion of the electrical grid that they are responsible for. Utilities have expressed concern that states that maintain Net Metering laws may saddle non-ZNE homes with higher utility costs, as those homeowners would be responsible for paying for grid maintenance while ZNE home owners would theoretically pay nothing if they do achieve ZNE status. This creates potential equity issues, as currently, the burden would appear to fall on lower-income households. A possible solution to this issue is to create a minimum base charge for all homes connected to the utility grid, which would force ZNE home owners to pay for grid services independently of their electrical use.
Additional concerns are that local distribution as well as larger transmission grids have not been designed to convey electricity in two directions, which may be necessary as higher levels of distributed energy generation come on line. Overcoming this barrier could require extensive upgrades to the electrical grid, however, as of 2010, this is not believed to be a major problem until renewable generation reaches much higher levels of penetration.
Development efforts
Wide acceptance of zero-energy building technology may require more government incentives or building code regulations, the development of recognized standards, or significant increases in the cost of conventional energy.
The Google photovoltaic campus and the Microsoft 480-kilowatt photovoltaic campus relied on US Federal, and especially California, subsidies and financial incentives. California is now providing US$3.2 billion in subsidies for residential-and-commercial near-zero-energy buildings. The details of other American states' renewable energy subsidies (up to US$5.00 per watt) can be found in the Database of State Incentives for Renewables and Efficiency. The Florida Solar Energy Center has a slide presentation on recent progress in this area.
The World Business Council for Sustainable Development has launched a major initiative to support the development of ZEB. Led by the CEO of United Technologies and the Chairman of Lafarge, the organization has both the support of large global companies and the expertise to mobilize the corporate world and governmental support to make ZEB a reality. Their first report, a survey of key players in real estate and construction, indicates that the costs of building green are overestimated by 300 percent. Survey respondents estimated that greenhouse gas emissions by buildings are 19 percent of the worldwide total, in contrast to the actual value of roughly 40 percent.
Influential zero-energy and low-energy buildings
Those who commissioned construction of passive houses and zero-energy homes (over the last three decades) were essential to iterative, incremental, cutting-edge, technology innovations. Much has been learned from many significant successes, and a few expensive failures.
The zero-energy building concept has been a progressive evolution from other low-energy building designs. Among these, the Canadian R-2000 and the German passive house standards have been internationally influential. Collaborative government demonstration projects, such as the superinsulated Saskatchewan House, and the International Energy Agency's Task 13, have also played their part.
Net zero energy building definition
The US National Renewable Energy Laboratory (NREL) published a report called Net-Zero Energy Buildings: A Classification System Based on Renewable Energy Supply Options. This is the first report to lay out a full spectrum classification system for Net Zero/Renewable Energy buildings that includes the full spectrum of Clean Energy sources, both on site and off site. This classification system identifies the following four main categories of Net Zero Energy Buildings/Sites/Campuses:
NZEB:A — A footprint renewables Net Zero Energy Building
NZEB:B — A site renewables Net Zero Energy Building
NZEB:C — An imported renewables Net Zero Energy Building
NZEB:D — An off-site purchased renewables Net Zero Energy Building
Applying this US Government Net Zero classification system means that every building can become net zero with the right combination of the key net zero technologies - PV (solar), GHP (geothermal heating and cooling, thermal batteries), EE (energy efficiency), sometimes wind, and electric batteries. A graphical exposé of the scale of impact of applying these NREL guidelines for net zero can be seen in the graphic at Net Zero Foundation titled "Net Zero Effect on U.S. Total Energy Use" showing a possible 39% US total fossil fuel use reduction by changing US residential and commercial buildings to net zero, 37% savings if we still use natural gas for cooking at the same level.
Net zero carbon conversion example
Many well known universities have professed to want to completely convert their energy systems off of fossil fuels. Capitalizing on the continuing developments in both photovoltaics and geothermal heat pump technologies, and in the advancing electric battery field, complete conversion to a carbon free energy solution is becoming easier. Large scale hydroelectric has been around since before 1900. An example of such a project is in the Net Zero Foundation's proposal at MIT to take that campus completely off fossil fuel use. This proposal shows the coming application of Net Zero Energy Buildings technologies at the District Energy scale.
Advantages and disadvantages
Advantages
isolation for building owners from future energy price increases
increased comfort due to more-uniform interior temperatures (this can be demonstrated with comparative isotherm maps)
reduced total cost of ownership due to improved energy efficiency
reduced total net monthly cost of living
reduced risk of loss from grid blackouts
minimal to no future energy price increases for building owners
reduced requirement for energy austerity and carbon emission taxes
improved reliability – photovoltaic systems have 25-year warranties and seldom fail during weather problems – the 1982 photovoltaic systems on the Walt Disney World EPCOT (Experimental Prototype Community of Tomorrow) Energy Pavilion were still in use until 2018, even through three hurricanes. They were taken down in 2018 in preparation for a new ride.
higher resale value as potential owners demand more ZEBs than available supply
the value of a ZEB building relative to similar conventional building should increase every time energy costs increase
contribute to the greater benefits of the society, e.g. providing sustainable renewable energy to the grid, reducing the need of grid expansion
Optimizing bottom-up urban building energy models (UBEM) can improve the accuracy of building energy simulation.
Disadvantages
initial costs can be higher – effort required to understand, apply, and qualify for ZEB subsidies, if they exist.
very few designers or builders have the necessary skills or experience to build ZEBs
possible declines in future utility company renewable energy costs may lessen the value of capital invested in energy efficiency
the price of new photovoltaic solar cell technology has been falling at roughly 17% per year – this will lessen the value of capital invested in a solar electric generating system – current subsidies may be phased out as photovoltaic mass production lowers future prices
challenge to recover higher initial costs on resale of building, but new energy rating systems are being introduced gradually.
while the individual house may use an average of net zero energy over a year, it may demand energy at the time when peak demand for the grid occurs. In such a case, the capacity of the grid must still provide electricity to all loads. Therefore, a ZEB may not reduce risk of loss from grid blackouts.
without an optimized thermal envelope the embodied energy, heating and cooling energy and resource usage is higher than needed. ZEBs by definition do not mandate a minimum heating and cooling performance level, thus allowing oversized renewable energy systems to fill the energy gap.
solar energy capture using the house envelope only works in locations unobstructed from the sun. The solar energy capture cannot be optimized in north (for northern hemisphere, or south for southern Hemisphere) facing shade, or wooded surroundings.
ZEBs are not free of carbon emissions: glass, for example, has a high embodied energy, and its production requires a lot of carbon.
Building regulations such as height restrictions or fire code may prevent implementation of wind or solar power or external additions to an existing thermal envelope.
Zero energy building versus green building
The goal of green building and sustainable architecture is to use resources more efficiently and reduce a building's negative impact on the environment. Zero energy buildings achieve one key goal of exporting as much renewable energy as it uses over the course of year; reducing greenhouse gas emissions. ZEB goals need to be defined and set, as they are critical to the design process. Zero energy buildings may or may not be considered "green" in all areas, such as reducing waste, using recycled building materials, etc. However, zero energy, or net-zero buildings do tend to have a much lower ecological impact over the life of the building compared with other "green" buildings that require imported energy and/or fossil fuel to be habitable and meet the needs of occupants.
Both terms, zero energy buildings and green buildings, have similarities and differences. "Green" buildings often focus on operational energy, and disregard the embodied carbon footprint from construction. According to the IPCC, embodied carbon will make up half of the total carbon emissions between now (2020) and 2050. On the other hand, zero energy buildings are specifically designed to produce enough energy from renewable energy sources to meet their own consumption requirements, and green buildings can be generally defined as buildings that reduce negative impacts on, or positively impact, the natural environment. There are several factors that must be considered before a building is determined to be a green building. Building a green building must include an efficient use of utilities such as water and energy, use of renewable energy, use of recycling and reusing practices to reduce waste, provision of proper indoor air quality, use of ethically sourced and non-toxic materials, use of a design that allows the building to adapt to changing environmental climates, and aspects of the design, construction, and operational process that address the environment and quality of life of its occupants. The term green building can also be used to refer to the practice of green building, which includes being resource efficient from its design, to its construction, to its operational processes, and ultimately to its deconstruction. The practice of green building differs slightly from zero energy buildings because it considers all environmental impacts, such as use of materials and water pollution, whereas the scope of zero energy buildings only includes the building's energy consumption and its ability to produce an equal amount, or more, of energy from renewable energy sources.
There are many unforeseen design challenges and site conditions required to efficiently meet the renewable energy needs of a building and its occupants, as much of this technology is new. Designers must apply holistic design principles, and take advantage of the free naturally occurring assets available, such as passive solar orientation, natural ventilation, daylighting, thermal mass, and night time cooling. Designers and engineers must also experiment with new materials and technological advances, striving for more affordable and efficient production.
Zero energy building versus zero heating building
With advances in ultra-low U-value glazing, a (nearly) zero heating building has been proposed to supersede nearly-zero energy buildings in the EU. The zero heating building reduces the reliance on passive solar design and makes the building more open to conventional architectural design. The zero heating building removes the need for a seasonal / winter utility power reserve.
The annual specific heating demand for a zero-heating house should not exceed 3 kWh/(m²·a). A zero heating building is simpler to design and to operate. For example: there is no need for modulated sun shading.
Certification
The two most common certifications for green building are Passive House, and LEED. The goal of Passive House is to be energy efficient and reduce the use of heating/cooling to below standard. LEED certification is more comprehensive in regards to energy use, a building is awarded credits as it demonstrates sustainable practices across a range of categories. Another certification that designates a building as a net zero energy building exists within the requirements of the Living Building Challenge (LBC) called the Net Zero Energy Building (NZEB) certification provided by the International Living Future Institute (ILFI). The designation was developed in November 2011 as the NZEB certification but was then simplified to the Zero Energy Building Certification in 2017. Included in the list of green building certifications, the BCA Green Mark rating system allows for the evaluation of buildings for their performance and impact on the environment
Worldwide
International initiatives
As a response to global warming and increasing greenhouse gas emissions, countries around the world have been gradually implementing different policies to tackle ZEB. Between 2008 and 2013, researchers from Australia, Austria, Belgium, Canada, Denmark, Finland, France, Germany, Italy, the Republic of Korea, New Zealand, Norway, Portugal, Singapore, Spain, Sweden, Switzerland, the United Kingdom and the US worked together in the joint research program called "Towards Net Zero Energy Solar Buildings". The program was created under the umbrella of the International Energy Agency (IEA) Solar Heating and Cooling Program (SHC) Task 40 / Energy in Buildings and Communities (EBC, formerly ECBCS) Annex 52 with the intent of harmonizing international definition frameworks regarding net-zero and very low energy buildings by dividing the work into subtasks. In 2015, the Paris Agreement was created under the United Nations Framework Convention on Climate Change (UNFCCC) with the intent of keeping the global temperature rise of the 21st century below 2 degrees Celsius and limiting the temperature increase to 1.5 degrees Celsius by limiting greenhouse gas emissions. While there was no enforced compliance, 197 countries signed the international treaty, which bound developed countries legally through a mutual cooperation where each party would update its INDC every five years and report annually to the COP. Due to the advantages of energy efficiency and carbon emission reduction, ZEBs are widely being implemented in many different countries as a solution to energy and environmental problems within the infrastructure sector.
Australia
National trajectory
In Australia, the Trajectory for Low Energy Buildings and its Addendum were agreed by all Commonwealth, state and territory energy ministers in 2019.
The Trajectory is a national plan that aims to achieve zero energy and carbon-ready commercial and residential buildings in Australia. It is a key initiative to address Australia’s 40% energy productivity improvement target by 2030 under the National Energy Productivity Plan.
On 7 July 2023, the Energy and Climate Change Ministerial Council agreed to update the Trajectory for Low Energy Buildings by the end of 2024.
The updates to the Trajectory will:
support the delivery of a low energy, net zero emissions residential and commercial building sector by 2050
consider the success of the existing program
help develop the policy pathway for the building sector to achieve net zero by 2050.
ZEB in Australia
Council House 2 (CH2) Council House 2, also known as CH2, is an office building located at 240 Little Collins Street in the Melbourne central business district, Australia. It is used by the City of Melbourne council, and in April 2005 became the first purpose-built office building in Australia to achieve a maximum Six Green Star rating.
Belgium
In Belgium there is a project with the ambition to make the Belgian city Leuven climate-neutral in 2030.
Brazil
In Brazil, the Ordinance No. 42, of February 24, 2021, approved the Inmetro Normative Instruction for the Classification of Energy Efficiency of Commercial, Service and Public Buildings (INI-C), which improves the Technical Quality Requirements for the Energy Efficiency Level of Commercial, Service and Public Buildings (RTQ-C), specifying the criteria and methods for classifying commercial, service and public buildings as to their energy efficiency. Annex D presents the procedures for determining the potential for local renewable energy generation and the assessment conditions for Near Zero Energy Buildings (NZEBs) and Positive Energy Buildings (PEBs).
Canada
The Canadian Home Builders Association - National oversees the Net Zero Homes certification label, a voluntary industry-led labeling initiative.
In December 2017, the BC Energy Step Code entered into legal force in British Columbia. Local British Columbia governments may use the standard to incentivize or require a level of energy efficiency in new construction that goes above and beyond the requirements of the base building code. The regulation is designed as a technical roadmap to help the province reach its target that all new buildings will attain a net zero energy ready level of performance by 2032.
In August 2017, the Government of Canada released Build Smart - Canada's Buildings Strategy, as a key driver of the Pan Canadian Framework on Clean Growth and Climate Change, Canada's national climate strategy. The Build Smart strategy seeks to dramatically increase the energy efficiency of Canadian buildings in pursuit of a net zero energy ready level of performance.
In Canada the Net-Zero Energy Home Coalition is an industry association promoting net-zero energy home construction and the adoption of a near net-zero energy home (nNZEH), NZEH Ready and NZEH standard.
The Canada Mortgage and Housing Corporation is sponsoring the EQuilibrium Sustainable Housing Competition that will see the completion of fifteen zero-energy and near-zero-energy demonstration projects across the country starting in 2008.
The EcoTerra House in Eastman, Quebec is Canada's first nearly net-zero energy housing built through the CMHC EQuilibrium Sustainable Housing Competition. The house was designed by Assoc. Prof. Dr. Masa Noguchi of the University of Melbourne for Alouette Homes and engineered by Prof. Dr. Andreas K. Athienitis of Concordia University.
In 2014, the public library building in Varennes, QC, became the first ZNE institutional building in Canada. The library is also LEED gold certified.
The EcoPlusHome in Bathurst, New Brunswick. The Eco Plus Home is a prefabricated test house built by Maple Leaf Homes and with technology from Bosch Thermotechnology.
Mohawk College will be building Hamilton's first net zero building.
China
With an estimated population of 1,439,323,776 people, China has become one of the world's leading contributors to greenhouse gas emissions due to its ongoing rapid urbanization. Even with the growing increase in building infrastructure, China has long been considered a country where overall energy demand has consistently grown less rapidly than gross domestic product (GDP). Since the late 1970s, China has been using half as much energy as it did in 1997, but due to its dense population and rapid growth of infrastructure, China has become the world's second largest energy consumer and is positioned to become the leading contributor to greenhouse gas emissions in the next century.
Since 2010, the Chinese government has released new national policies to raise ZEB design standards and has laid out a series of incentives to increase ZEB projects in China. In November 2015, China's Ministry of Housing and Urban-Rural Development (MOHURD) released a technical guide on passive and low energy green residential buildings. The guide was aimed at improving energy efficiency in China's infrastructure and was the first of its kind to be formally released. With rapid growth in ZEBs over the last three years, a further influx of ZEBs was estimated to be built in China by 2020, alongside the existing ZEB projects already built.
In response to the Paris Agreement in 2015, China set a target of peaking carbon emissions around 2030 while also aiming to lower carbon dioxide emissions per unit of GDP by 60 to 65 percent from 2005 levels. In 2020, Chinese Communist Party leader Xi Jinping declared in his address to the UN General Assembly that China would be carbon neutral by 2060, pushing forward climate change reforms. With more than 95 percent of China's energy originating from fuel sources that emit carbon dioxide, carbon neutrality in China will require an almost complete transition to sources such as solar, wind, hydro, or nuclear power. To achieve carbon neutrality, China's proposed energy quota policy will have to incorporate new monitoring mechanisms that ensure accurate measurement of the energy performance of buildings. Future research should investigate the challenges that could arise from implementing ZEB policies in China.
Net-zero energy projects in China
One of the new generation of net-zero energy office buildings is the 71-story Pearl River Tower in Guangzhou, China. Designed by Skidmore, Owings & Merrill LLP, the tower was intended to generate as much energy as it uses on an annual basis, following the four steps to net zero energy: reduction, absorption, reclamation, and generation. While initial plans for the Pearl River Tower included natural gas-fired microturbines for generating electricity, due to local regulations the design instead relies on photovoltaic panels integrated into the glazed roof and shading louvers, tactical building design, and electricity generation from vertical-axis wind turbines (VAWTs).
Denmark
The Strategic Research Centre on Zero Energy Buildings was established in 2009 at Aalborg University by a grant from the Danish Council for Strategic Research (DSF), the Programme Commission for Sustainable Energy and Environment, in cooperation with the Technical University of Denmark, the Danish Technological Institute, the Danish Construction Association and several private companies. The purpose of the centre is to develop zero energy building concepts through integrated, intelligent building technologies that ensure considerable energy conservation and optimal application of renewable energy. In cooperation with industry, the centre will create the necessary basis for long-term sustainable development in the building sector.
Germany
Technische Universität Darmstadt won first place in the international zero energy design 2007 Solar Decathlon competition with a Passivhaus (passive house) design plus renewables, scoring highest in the Architecture, Lighting, and Engineering contests.
Fraunhofer Institute for Solar Energy Systems ISE, Freiburg im Breisgau
Net zero energy, energy-plus or climate-neutral buildings in the next generation of electricity grids
India
India's first net zero building is Indira Paryavaran Bhawan, located in New Delhi, inaugurated in 2014. Features include passive solar building design and other green technologies. High-efficiency solar panels are proposed. It cools air from toilet exhaust using a thermal wheel in order to reduce load on its chiller system. It has many water conservation features.
Iran
In 2011, Payesh Energy House (PEH), or Khaneh Payesh Niroo, a collaboration of Fajr-e-Toseah Consultant Engineering Company and Vancouver Green Homes Ltd. under the management of Payesh Energy Group (EPG), launched the first net-zero passive house in Iran. This concept makes the design and construction of PEH a sample model and standardized process for mass production by MAPSA.
Another example of the new generation of zero energy office buildings is the 24-story OIIC Office Tower, started in 2011 as the OIIC Company headquarters. It combines modest energy-efficiency measures with large-scale distributed renewable energy generation from both solar and wind, and is managed by Rahgostar Naft Company in Tehran, Iran. The tower receives economic support from government subsidies that are now funding many significant fossil-fuel-free efforts.
Ireland
In 2005, a private company launched the world's first standardised passive house in Ireland; this concept makes the design and construction of passive houses a standardised process.
Conventional low energy construction techniques have been refined and modelled on the PHPP (Passive House Design Package) to create the standardised passive house.
Building offsite allows high precision techniques to be utilised and reduces the possibility of errors in construction.
In 2009 the same company started a project to use 23,000 liters of water in a seasonal storage tank, heated by evacuated solar tubes throughout the year, with the aim of providing the house with enough heat throughout the winter months and thus eliminating the need for any electrical heating to keep the house comfortably warm. The system is monitored and documented by a research team from the University of Ulster, and the results will be included in a PhD thesis.
In 2012 Cork Institute of Technology started renovation work on its 1974 building stock to develop a net zero energy building retrofit. The exemplar project will become Ireland's first zero energy testbed offering a post-occupancy evaluation of actual building performance against design benchmarks.
Jamaica
The first zero energy building in Jamaica and the Caribbean opened at the Mona Campus of the University of the West Indies (UWI) in 2017. The 2300 square foot building was designed to inspire more sustainable and energy efficient buildings in the area.
Japan
After the April 2011 Fukushima earthquake and the subsequent Fukushima Daiichi nuclear disaster, Japan experienced a severe power crisis that raised awareness of the importance of energy conservation.
In 2012, the Ministry of Economy, Trade and Industry; the Ministry of Land, Infrastructure, Transport and Tourism; and the Ministry of the Environment (Japan) summarized a road map for a low-carbon society, which sets the goal of making ZEH and ZEB the standard for new construction in 2020.
Mitsubishi Electric Corporation is constructing Japan's first zero energy office building, the SUSTIE ZEB test facility in Kamakura, set to be completed in October 2020 (as of September 2020). The facility is intended to develop ZEB technology and, with its net zero certification, is projected to reduce energy consumption by 103%.
Japan has set a goal that all new houses be net zero energy by 2030. The developer Sekisui House introduced its first net zero home in 2013 and is now planning Japan's first zero energy condominium in Nagoya City, a three-story building with 12 units. There are solar panels on the roof and fuel cells for each unit to provide backup power.
Korea (Republic of)
South Korea's mandatory ZEB requirements, which applied to buildings with a GFA of 1,000 m2 or more in 2021, will expand to buildings with a GFA of 500 m2 or more in 2022 and will apply to all public buildings starting in 2024. For private buildings, ZEB certification will be mandated for building permits with a GFA of over 100,000 m2 from 2023. After 2025, zero-energy construction for private buildings will be expanded to GFAs over 1,000 m2. The goal of the policy is to convert all public sector buildings to ZEB grade 3 (an energy independence rate of 60% to 80%) and all private buildings to ZEB grade 5 (an energy independence rate of 20% to 40%) by 2030.
EnergyX DY-Building (에너지엑스 DY빌딩), the first commercial Net-Zero Energy Building (NZEB, or ZEB grade 1) and the first Plus Energy Building (+ZEB, or ZEB grade plus) in Korea was opened and introduced in 2023. The energy technology and sustainable architectural platform company EnergyX developed, designed, and engineered the building with its proprietary technologies and services. EnergyX DY-Building received the ZEB certification with an energy independence rate (or energy self-sufficiency rate) of 121.7%.
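As an illustrative aside, the grade bands quoted above can be expressed as a simple lookup. The sketch below is hypothetical and encodes only the thresholds stated in this section (plus energy above 100%, grade 3 at 60 to 80%, grade 5 at 20 to 40%); the remaining boundaries of the five-grade scale are not covered here.

```python
# Hypothetical sketch: map an energy independence (self-sufficiency) rate
# to the ZEB bands quoted in this section. Only the bands stated above are
# encoded; the other grade boundaries of the five-grade scale are not.

def zeb_band(energy_independence_rate_pct: float) -> str:
    r = energy_independence_rate_pct
    if r > 100:
        return "plus energy building (+ZEB)"
    if 60 <= r <= 80:
        return "ZEB grade 3 band (2030 target for public buildings)"
    if 20 <= r <= 40:
        return "ZEB grade 5 band (2030 target for private buildings)"
    return "outside the bands quoted here"

# Example: the EnergyX DY-Building's reported rate of 121.7%
print(zeb_band(121.7))  # plus energy building (+ZEB)
```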
Malaysia
In October 2007, the Malaysia Energy Centre (PTM) successfully completed the development and construction of the PTM Zero Energy Office (ZEO) Building. The building has been designed to be a super-energy-efficient building using only 286 kWh/day. The renewable energy – photovoltaic combination is expected to result in a net zero energy requirement from the grid. The building is currently undergoing a fine tuning process by the local energy management team. Findings are expected to be published in a year.
In 2016, the Sustainable Energy Development Authority Malaysia (SEDA Malaysia) started a voluntary initiative called the Low Carbon Building Facilitation Program to support the country's low carbon cities program. Under the program, several demonstration projects managed to reduce energy and carbon by more than 50%, and some saved more than 75%. Continuous improvement of super energy efficient buildings, with significant implementation of on-site renewable energy, allowed a few of them to become nearly zero energy buildings (nZEB) as well as net-zero energy buildings (NZEB). In March 2018, SEDA Malaysia started the Zero Energy Building Facilitation Program.
Malaysia also has its own sustainability rating tool for low carbon and zero energy buildings, called GreenPASS, which was developed by the Construction Industry Development Board Malaysia (CIDB) in 2012 and is currently administered and promoted by SEDA Malaysia. GreenPASS is officially called the Construction Industry Standard (CIS) 20:2012.
Netherlands
In September 2006, the Dutch headquarters of the World Wildlife Fund (WWF) in Zeist was opened. This earth-friendly building gives back more energy than it uses. All materials in the building were tested against strict requirements laid down by the WWF and the architect.
Norway
In February 2009, the Research Council of Norway assigned The Faculty of Architecture and Fine Art at the Norwegian University of Science and Technology to host the Research Centre on Zero Emission Buildings (ZEB), which is one of eight new national Centres for Environment-friendly Energy Research (FME). The main objective of the FME-centres is to contribute to the development of good technologies for environmentally friendly energy and to raise the level of Norwegian expertise in this area. In addition, they should help to generate new industrial activity and new jobs. Over the next eight years, the FME-Centre ZEB will develop competitive products and solutions for existing and new buildings that will lead to market penetration of zero emission buildings related to their production, operation and demolition.
Singapore
Singapore unveiled a prominent development at the National University of Singapore (NUS) that is a net-zero energy building. The building, called SDE4, is located within a group of three buildings in its School of Design and Environment (SDE). The design achieved a Green Mark Platinum certification, as the building produces as much energy as it consumes with its solar-panel-covered rooftop, hybrid cooling system, and many integrated systems for optimum energy efficiency. This development was the first new-build zero-energy building to come to fruition in Singapore, and the first zero-energy building at NUS. The first retrofitted zero energy building in Singapore, at the Building and Construction Authority (BCA) Academy, was launched by the Minister for National Development Mah Bow Tan at the inaugural Singapore Green Building Week on October 26, 2009. Singapore's Green Building Week (SGBW) promotes sustainable development and celebrates the achievements of successfully designed sustainable buildings.
A net-zero energy building unveiled more recently is the SMU Connexion (SMUC). It is the first net-zero energy building in the city that also utilizes mass engineered timber (MET). It is designed to meet the Building and Construction Authority (BCA) Green Mark Platinum certification and has been in operation since January 2020.
Switzerland
The Swiss MINERGIE-A-Eco label certifies zero energy buildings. The first building with this label, a single-family home, was completed in Mühleberg in 2011.
United Arab Emirates
Masdar City in Abu Dhabi
Dubai The Sustainable City in Dubai
United Kingdom
In December 2006, the government announced that by 2016 all new homes in England would be zero energy buildings. To encourage this, an exemption from Stamp Duty Land Tax was planned. In Wales the plan was for the standard to be met earlier, in 2011, although the actual implementation date looked more likely to be 2012. However, as a result of a unilateral change of policy published at the time of the March 2011 budget, a more limited policy is now planned which, it is estimated, will only mitigate two thirds of the emissions of a new home.
BedZED development
Hockerton Housing Project
In January 2019 the Ministry of Housing, Communities and Local Government defined 'Zero Energy' simply as 'just meets current building standards', neatly solving this problem.
United States
In the US, ZEB research is currently being supported by the US Department of Energy (DOE) Building America Program, including industry-based consortia and researcher organizations at the National Renewable Energy Laboratory (NREL), the Florida Solar Energy Center (FSEC), Lawrence Berkeley National Laboratory (LBNL), and Oak Ridge National Laboratory (ORNL). From fiscal year 2008 to 2012, DOE plans to award $40 million to four Building America teams, the Building Science Corporation; IBACOS; the Consortium of Advanced Residential Buildings; and the Building Industry Research Alliance, as well as a consortium of academic and building industry leaders. The funds will be used to develop net-zero-energy homes that consume 50% to 70% less energy than conventional homes.
DOE is also awarding $4.1 million to two regional building technology application centers that will accelerate the adoption of new and developing energy-efficient technologies. The two centers, located at the University of Central Florida and Washington State University, will serve 17 states, providing information and training on commercially available energy-efficient technologies.
The U.S. Energy Independence and Security Act of 2007 created 2008 through 2012 funding for a new solar air conditioning research and development program, which should soon demonstrate multiple new technology innovations and mass production economies of scale.
The 2008 Solar America Initiative funded research and development into future development of cost-effective Zero Energy Homes in the amount of $148 million in 2008.
The Solar Energy Tax Credits have been extended until the end of 2016.
By Executive Order 13514, U.S. President Barack Obama mandated that by 2015, 15% of existing Federal buildings conform to new energy efficiency standards and 100% of all new Federal buildings be Zero-Net-Energy by 2030.
Energy Free Home Challenge
In 2007, the philanthropic Siebel Foundation created the Energy Free Home Foundation. The goal was to offer $20 million in global incentive prizes to design and build a 2,000 square foot (186 square meter) three-bedroom, two bathroom home with (1) net-zero annual utility bills that also has (2) high market appeal, and (3) costs no more than a conventional home to construct.
The plan included funding to build the top ten entries at $250,000 each, a $10 million first prize, and then a total of 100 such homes to be built and sold to the public.
Beginning in 2009, Thomas Siebel made many presentations about his Energy Free Home Challenge. The Siebel Foundation Report stated that the Energy Free Home Challenge was "Launching in late 2009".
The Lawrence Berkeley National Laboratory at the University of California, Berkeley participated in writing the "Feasibility of Achieving Zero-Net-Energy, Zero-Net-Cost Homes" for the $20-million Energy Free Home Challenge.
If implemented, the Energy Free Home Challenge would have provided increased incentives for improved technology and consumer education about zero energy buildings coming in at the same cost as conventional housing.
US Department of Energy Solar Decathlon
The US Department of Energy Solar Decathlon is an international competition that challenges collegiate teams to design, build, and operate the most attractive, effective, and energy-efficient solar-powered house. Achieving zero net energy balance is a major focus of the competition.
States
Arizona
Zero Energy House developed by the NAHB Research Center and John Wesley Miller Companies, Tucson.
California
The State of California has proposed that all new low- and mid-rise residential buildings, and all new commercial buildings, be designed and constructed to ZNE standards beginning in 2020 and 2030, respectively. The requirements, if implemented, will be promulgated via the California Building Code, which is updated on a three-year cycle and which currently mandates some of the highest energy efficiency standards in the United States. California is anticipated to further increase efficiency requirements by 2020, thus avoiding the trends discussed above of building standard housing and achieving ZNE by adding large amounts of renewables. The California Energy Commission is required to perform a cost-benefit analysis to prove that new regulations create a net benefit for residents of the state.
West Village, located on the University of California campus in Davis, California, was the largest ZNE-planned community in North America at the time of its opening in 2014. The development contains student housing for approximately 1,980 UC Davis students as well as leasable office space and community amenities including a community center, pool, gym, restaurant and convenience store. Office spaces in the development are currently leased by energy and transportation-related University programs. The project was a public-private partnership between the university and West Village Community Partnership LLC, led by Carmel Partners of San Francisco, a private developer, who entered into a 60-year ground lease with the university and was responsible for the design, construction, and implementation of the $300 million project, which is intended to be market-rate housing for Davis. This is unique as the developer designed the project to achieve ZNE at no added cost to themselves or to the residents. Designed and modeled to achieve ZNE, the project uses a mixture of passive elements (roof overhangs, well-insulated walls, radiant heat barriers, ducts in insulated spaces, etc.) as well as active approaches (occupancy sensors on lights, high-efficiency appliances and lighting, etc.). Designed to out-perform California's 2008 Title 24 energy codes by 50%, the project produced 87% of the energy it consumed during its first year in operation. The shortcoming in ZNE status is attributed to several factors, including improperly functioning heat pump water heaters, which have since been fixed. Occupant behavior is significantly different from that anticipated, with the all-student population using more energy on a per-capita basis than typical inhabitants of single-family homes in the area. One of the primary factors driving increased energy use appears to be the increased miscellaneous electrical loads (MEL, or plug loads) in the form of mini-refrigerators, lights, computers, gaming consoles, televisions, and other electronic equipment. The university continues to work with the developer to identify strategies for achieving ZNE status. These approaches include incentivizing occupant behavior and increasing the site's renewable energy capacity, which is a 4 MW photovoltaic array per the original design. The West Village site is also home to the Honda Smart Home US, a beyond-ZNE single-family home that incorporates cutting-edge technologies in energy management, lighting, construction, and water efficiency.
The IDeAs Z2 Design Facility is a net zero energy, zero carbon retrofit project occupied since 2007. It uses less than one fourth the energy of a typical U.S. office by applying strategies such as daylighting, radiant heating/cooling with a ground-source heat pump and high energy performance lighting and computing. The remaining energy demand is met with renewable energy from its building-integrated photovoltaic array. In 2009, building owner and occupant Integrated Design Associates (IDeAs) recorded actual measured energy use intensity of per year, with per year produced, for a net of per year. The building is also carbon neutral, with no gas connection, and with carbon offsets purchased to cover the embodied carbon of the building materials used in the renovation.
The Zero Net Energy Center, scheduled to open in 2013 in San Leandro, is to be a 46,000-square-foot electrician training facility created by the International Brotherhood of Electrical Workers Local 595 and the Northern California chapter of the National Electrical Contractors Association. Training will include energy-efficient construction methods.
The Green Idea House is a net zero energy, zero-carbon retrofit in Hermosa Beach.
George LeyVa Middle School Administrative Offices, occupied since fall 2011, is a net zero energy, net zero carbon emissions building of just over 9,000 square feet. With daylighting, variable refrigerant flow HVAC, and displacement ventilation, it is designed to use half of the energy of a conventional California school building, and, through a building-integrated solar array, provides 108% of the energy needed to offset its annual electricity use. The excess helps power the remainder of the middle school campus. It is the first publicly funded NZE K–12 building in California.
The Stevens Library at Sacred Heart Schools in California is the first net-zero library in the United States, receiving Net Zero Energy Building status from the International Living Future Institute, part of the PG&E Zero Net Energy Pilot Project.
The Santa Monica City Services Building is among the first net-zero energy, net-zero water public/municipal buildings in California. Completed in 2020, the 50,000-square-foot addition to the historic Santa Monica City Hall building was designed to provide its own energy and water, and to minimize energy use through efficient building systems.
At 402,000 square-feet, the California Air Resources Board Southern California Headquarters - Mary D. Nichols Campus, is the largest net-zero energy facility in the United States. A photovoltaic system covers 204,903 square-feet between the facility rooftop and parking pavilions. The +3.5 megawatt system is anticipated to generate roughly 6,235,000 kWh reusable energy per year. The facility was dedicated on November 18, 2021.
Colorado
The Moore House achieves net-zero energy usage with passive solar design, 'tuned' heat reflective windows, super-insulated and air-tight construction, natural daylighting, solar thermal panels for hot water and space heating, a photovoltaic (PV) system that generates more carbon-free electricity than the house requires, and an energy-recovery ventilator (ERV) for fresh air. The green building strategies used on the Moore House earned it a verified home energy rating system (HERS) score of −3.
The NREL Research Support Facility in Golden is a class A office building. Its energy efficiency features include a thermal-storage concrete structure, transpired solar collectors, 70 miles of radiant piping, high-efficiency office equipment, and an energy-efficient data center that reduces the data center's energy use by 50% over traditional approaches.
Wayne Aspinall Federal Building in Grand Junction, originally constructed in 1918, became the first net zero energy building listed on the National Register of Historic Places. On-site renewable energy generation is intended to produce 100% of the building's energy throughout the year, supported by the following energy efficiency features: variable refrigerant flow HVAC, a geo-exchange system, advanced metering and building controls, high-efficiency lighting systems, a thermally enhanced building envelope, an interior window system (to maintain the historic windows), and advanced power strips (APS) with individual occupancy sensors.
Tutt Library at Colorado College was renovated to be a net-zero library in 2017, making it the largest ZNE academic library. It received an Innovation Award from the National Association of College and University Business Officers.
Florida
The 1999 side-by-side Florida Solar Energy Center Lakeland demonstration project was called the "Zero Energy Home". It was a first-generation university effort that significantly influenced the creation of the U.S. Department of Energy, Energy Efficiency and Renewable Energy, Zero Energy Home program.
Illinois
The Walgreens store located at 741 Chicago Ave, Evanston, is the first of the company's stores to be built or converted to a net zero energy building. It is the first net zero energy retail store to be built and will pave the way for renovating and building net zero energy retail stores in the near future. The store includes the following energy efficiency features: a geo-exchange system, energy-efficient building materials, LED lighting with daylight harvesting, and carbon dioxide refrigerant.
The Electrical and Computer Engineering building at the University of Illinois at Urbana-Champaign, which was built in 2014, is a net zero building.
Iowa
The MUM Sustainable Living Center was designed to surpass LEED Platinum qualification. The Maharishi University of Management (MUM) in Fairfield, Iowa, founded by Maharishi Mahesh Yogi (best known for having brought Transcendental Meditation to the West) incorporates principles of Bau Biology (a German system that focuses on creating a healthy indoor environment), as well as Maharishi Vedic Architecture (an Indian system of architecture focused on the precise orientation, proportions and placement of rooms). The building is one of the few in the country to qualify as net zero, and one of even fewer that can claim the banner of grid positive via its solar power system. A rainwater catchment system and on-site natural waste-water treatment likewise take the building off (sewer) grid with respect to water and waste treatment. Additional green features include natural daylighting in every room, natural and breathable earth block walls (made by the program's students), purified rainwater for both potable and non-potable functions; and an on-site water purification and recycling system consisting of plants, algae, and bacteria.
Kentucky
Richardsville Elementary School, part of the Warren County Public School District in south-central Kentucky, is the first Net Zero energy school in the United States. To reach Net Zero, innovative energy reduction strategies were used by CMTA Consulting Engineers and Sherman Carter Barnhart Architects including dedicated outdoor air systems (DOAS) with dynamic reset, new IT systems, alternative methods to prepare lunches, and the use of solar photovoltaics. The project has an efficient thermal envelope constructed with insulated concrete form (ICF) walls, geothermal water source heat pumps, low-flow fixtures, and features daylighting extensively throughout. It is also the first truly wireless school in Kentucky.
Locust Trace AgriScience Center, an agricultural-based vocational school serving Fayette County Public Schools and surrounding districts, features a Net Zero Academic Building engineered by CMTA Consulting Engineers and designed by Tate Hill Jacobs Architects. The facility, located in Lexington, Kentucky, also has a greenhouse, riding arena with stalls, and a barn. To reach Net Zero in the Academic Building the project utilizes an air-tight envelope, expanded indoor temperature setpoints in specified areas to more closely model real-world conditions, a solar thermal system, and geothermal water source heat pumps. The school has further reduced its site impact by minimizing municipal water use through the use of a dual system consisting of a standard leach field system and a constructed wetlands system and using pervious surfaces to collect, drain, and use rainwater for crop irrigation and animal watering.
Massachusetts
The government of Cambridge has enacted a plan for "net zero" carbon emissions from all buildings in the city by 2040.
John W. Olver Transit Center, designed by Charles Rose Architects Inc, is an intermodal transit hub in Greenfield, Massachusetts. Built with American Recovery and Reinvestment Act funds, the facility was constructed with solar panels, geothermal wells, copper heat screens and other energy efficient technologies.
Michigan
The Mission Zero House is the 110-year-old Ann Arbor home of Greenovation.TV host and Environment Report contributor Matthew Grocoff. As of 2011, the home is the oldest home in America to achieve net-zero energy. The owners are chronicling their project on Greenovation.TV and The Environment Report on public radio.
The Vineyard Project is a Zero Energy Home (ZEH) thanks to passive solar design, 3.3 kW of photovoltaics, solar hot water, and geothermal heating and cooling. The home is pre-wired for a future wind turbine and uses only 600 kWh of energy per month (a minimum of 20 kWh of electricity per day), with many days net-metering backwards. The project also used ICF insulation throughout the entire house and is certified Platinum under the LEED for Homes program. The project was awarded Green Builder Magazine Home of the Year 2009.
The Lenawee Center for a Sustainable Future, a new campus for Lenawee Intermediate School District, serves as a living laboratory for the future of agriculture. It is the first Net Zero education building in Michigan, engineered by CMTA Consulting Engineers and designed by The Collaborative, Inc. The project includes solar arrays on the ground as well as the roof, a geothermal heating and cooling system, solar tubes, permeable pavement and sidewalks, a sedum green roof, and an overhang design to regulate building temperature.
Missouri
In 2010, architectural firm HOK worked with energy and daylighting consultant The Weidt Group to design a net zero carbon emissions Class A office building prototype in St. Louis, Missouri. The team chronicled its process and results on Netzerocourt.com.
New Jersey
The 31 Tannery Project, located in Branchburg, New Jersey, serves as the corporate headquarters for Ferreira Construction, the Ferreira Group, and Noveda Technologies. The 42,000-square-foot (3,900 m2) office and shop building was constructed in 2006 and is the first building in the state of New Jersey to meet New Jersey's Executive Order 54. The building is also the first Net Zero Electric Commercial Building in the United States.
New York
Green Acres, the first true zero-net energy development in America, is located in New Paltz, north of New York City. Greenhill Contracting began construction on this development of 25 single-family homes in summer 2008, with designs by BOLDER Architecture. After a full year of occupancy, from March 2009 to March 2010, the solar panels of the first occupied home in Green Acres generated 1,490 kWh more energy than the home consumed. The second occupied home has also achieved zero-net energy use. As of June 2011, five houses have been completed, purchased and occupied, two are under construction, and several more are being planned. The homes are built of insulated concrete forms with spray-foam-insulated rafters and triple-pane casement windows, heated and cooled by a geothermal system, to create extremely energy-efficient and long-lasting buildings. The heat recovery ventilator provides constant fresh air and, with low- or no-VOC (volatile organic compound) materials, the homes are very healthy to live in. Green Acres has been described as the first development of multiple buildings, residential or commercial, to achieve true zero-net energy use in the United States, and the first zero-net energy development of single-family homes in the world.
Greenhill Contracting has built two luxury zero-net energy homes in Esopus, completed in 2008. One house was the first Energy Star rated zero-net energy home in the Northeast and the first registered zero-net energy home on the US Department of Energy's Builder's Challenge website. These homes were the template for Green Acres and the other zero-net energy homes that Greenhill Contracting has built, in terms of methods and materials.
The headquarters of Hudson Solar, a dba of Hudson Valley Clean Energy, Inc., located in Rhinebeck and completed in 2007, was determined by the Northeast Sustainable Energy Association (NESEA) to be the first proven zero-net energy commercial building in New York State and the ten northeastern United States (October 2008). The building consumes less energy than it generates, using a solar electric system to generate power from the sun, geothermal heating and cooling, and solar thermal collectors to heat all its hot water.
Oklahoma
The first zero-energy design home was built in 1979 with support from President Carter's new United States Department of Energy. It relied heavily on passive solar building design for space heating, water heating and space cooling. It heated and cooled itself effectively in a climate where the summer peak temperature was 110 degrees Fahrenheit and the winter low was −10 °F. It did not use active solar systems. It is a double envelope house that uses a gravity-fed natural convection air flow design to circulate passive solar heat from its south-facing greenhouse glass through a thermal buffer zone in the winter. A swimming pool in the greenhouse provided thermal mass for winter heat storage. In the summer, air from two underground earth tubes is used to cool the thermal buffer zone and exhaust heat through 7,200 cfm of outer-envelope roof vents.
Oregon
Net Zero Energy Building Certification launched in 2011, with an international following. The first project, Painters Hall, is Pringle Creek's community center, café, office, art gallery, and event venue. Originally built in the 1930s, Painters Hall was renovated to LEED Platinum net zero energy building standards in 2010, demonstrating the potential of converting existing building stock into high-performance, sustainable buildings. Painters Hall features simple low-cost solutions for energy reduction, such as natural daylighting and passive cooling, that save money and increase comfort. A district ground-source geothermal loop serves the building's ground-source heat pump for highly efficient heating and air conditioning. Excess generation from the 20.2 kW rooftop solar array offsets pumping for the neighborhood's geo loop system. Open to the public, Painters Hall is a hub for gatherings of friends, neighbors, and visitors at the heart of a neighborhood designed around nature and community.
Pennsylvania
The Phipps Center for Sustainable Landscapes in Pittsburgh was designed to be one of the greenest buildings in the world. It achieved Net Zero Energy Building Certification from the Living Building Challenge in February 2014 and is pursuing full certification. The Phipps Center uses energy conservation technologies such as solar hot water collectors, carbon dioxide sensors, and daylighting, as well as renewable energy technologies to allow it to achieve Net Zero Energy status.
The Lombardo Welcome Center at Millersville University became the first building in the state to be zero-energy certified. This was the largest step in Millersville University's goal to be carbon neutral by 2040. According to the International Living Future Institute, the Lombardo Welcome Center is one of the highest-performing buildings in the country, generating 75% more energy than it currently uses.
Rhode Island
In Newport, the Paul W. Crowley East Bay MET School is the first Net Zero project to be constructed in Rhode Island. It is a 17,000 sq ft building, housing eight large classrooms, seven bathrooms and a kitchen. It will have PV panels to supply all necessary electricity for the building and a geothermal well which will be the source of heat.
Tennessee
civitas, designed by archimania in Memphis, Tennessee, is a case study home on the banks of the Mississippi River, currently under construction. It aims to embrace cultural, climatic, and economic challenges and will set a precedent for Southeastern high-performance design.
Texas
The University of North Texas (UNT) constructed a Zero Energy Research Laboratory on its 300-acre research campus, Discovery Park, in Denton, Texas. The project was funded at over $1,150,000 and primarily benefits students in mechanical and energy engineering (UNT became the first university to offer degrees in mechanical and energy engineering in 2006). The 1,200-square-foot structure is now completed; a ribbon-cutting ceremony for the Zero Energy Laboratory was held on April 20, 2012.
The West Irving Library in Irving, Texas, became the first net zero library in Texas in 2011, running entirely off solar energy. Since then it has produced a surplus. It has LEED gold certification.
Vermont
The Putney School's net zero Field House was opened on October 10, 2009. In use for over a year, as of December 2010, the Field House used 48,374 kWh and produced a total of 51,371 kWh during the first 12 months of operation, thus performing at slightly better than net-zero. Also in December, the building won an AIA-Vermont Honor Award.
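The "net zero" claim in such projects rests on a simple annual balance of production against consumption. The following sketch is illustrative only and uses the Field House figures quoted above.

```python
# Annual net-zero balance check, using the Field House figures quoted above
# (48,374 kWh consumed and 51,371 kWh produced over the first 12 months).

def annual_net_balance(consumed_kwh: float, produced_kwh: float) -> float:
    """Production minus consumption; non-negative means net zero was met."""
    return produced_kwh - consumed_kwh

surplus = annual_net_balance(consumed_kwh=48_374, produced_kwh=51_371)
print(f"annual surplus: {surplus:.0f} kWh")  # 2997 kWh
print("net zero met:", surplus >= 0)         # True
```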
The Charlotte Vermont House designed by Pill-Maharam Architects is a verified net zero energy house completed in 2007. The project won the Northeast Sustainable Energy Association's Net Zero Energy award in 2009.
See also
References
Further reading
Nisson, J. D. Ned; and Gautam Dutt, "The Superinsulated Home Book", John Wiley & Sons, 1985.
Markvart, Thomas, editor, "Solar Electricity", John Wiley & Sons, 2nd edition, 2000.
Clarke, Joseph, "Energy Simulation in Building Design", Butterworth-Heinemann, 2nd edition, 2001.
National Renewable Energy Laboratory, 2000 ZEB meeting report
Noguchi, Masa, ed., "The Quest for Zero Carbon Housing Solutions", Open House International, Vol.33, No.3, 2008, Open House International
Voss, Karsten; Musall, Eike: "Net zero energy buildings – International projects of carbon neutrality in buildings", Munich, 2011.
Low-energy building
Sustainable building
Sustainable architecture
Building biology
Energy economics
Environmental design
ru:Активный дом | Zero-energy building | [
"Engineering",
"Environmental_science"
] | 16,110 | [
"Environmental design",
"Sustainable building",
"Sustainable architecture",
"Building engineering",
"Energy economics",
"Construction",
"Environmental social science",
"Design",
"Building biology",
"Architecture"
] |
4,211,535 | https://en.wikipedia.org/wiki/UK%20Dark%20Matter%20Collaboration | The UK Dark Matter Collaboration (UKDMC) (1987–2007) was an experiment to search for weakly interacting massive particles (WIMPs). The consortium consisted of astrophysicists and particle physicists from the United Kingdom, who conducted experiments with the ultimate goal of detecting rare scattering events which would occur if galactic dark matter consists largely of a new heavy neutral particle. Detectors were set up underground in a halite seam at the Boulby Mine in North Yorkshire.
Background
WIMPs are considered prime candidates for dark matter, which accounts for approximately nine-tenths of the mass of certain galaxies, such as the Milky Way. WIMPs are predicted by several supersymmetric theories of particle physics. The particle detectors used for this experiment are placed 1100 metres below the surface of Yorkshire's Boulby mine.
History
UKDMC began in 1987, with principal participants from several notable institutions, including the Imperial College of Science, Technology and Medicine, the CCLRC's Rutherford Appleton Laboratory, and the University of Sheffield. Funding for the programme was provided by the Particle Physics and Astronomy Research Council (PPARC), as well as Cleveland Potash Ltd. which operates the mine where the experiments were conducted. The underground laboratory was officially opened on 18 April 2003, and the experiment ran until 2007 when collaborating institutions and scientists moved on to the related projects ZEPLIN-III and DRIFT-II.
Experiments
UKDMC operated multiple dark matter detectors and developed techniques for WIMP searches in crystals and xenon.
In 1996 the collaboration published limits obtained using room-temperature crystals. NAIAD, an array of NaI(Tl) crystals, ran from 2001 to 2003, collecting 44.9 kg×years of exposure and setting spin-independent and spin-dependent limits on WIMPs. The ZEPLIN series of searches followed.
References
External links
Official site
Experiments for dark matter search
Laboratories in the United Kingdom
Research institutes in North Yorkshire
Dark Matter Collaboration
Subterranea of the United Kingdom
Underground laboratories | UK Dark Matter Collaboration | [
"Physics"
] | 407 | [
"Dark matter",
"Experiments for dark matter search",
"Unsolved problems in physics"
] |
4,211,895 | https://en.wikipedia.org/wiki/Center%20for%20Auto%20Safety | The Center for Auto Safety is a Washington, D.C.–based 501(c)(3) consumer advocacy non-profit group focused on the automotive industry in the United States. Founded in 1970 by Consumers Union and Ralph Nader, the group focuses its efforts on enacting reform though public advocacy and pressuring the National Highway Traffic Safety Administration and automakers through litigation.
For decades, it was led by Executive Director Clarence Ditlow, who died in late 2016 from cancer. Ditlow was widely admired in the auto safety community, although he also had detractors among auto manufacturers. The Center for Auto Safety is currently led by Executive Director Jason Levine.
History
The Center for Auto Safety (the Center) was founded in 1970 by Consumers Union and Ralph Nader as a consumer safety group to protect drivers. Ralph Nader, the author of Unsafe at Any Speed, believed that automakers and the government were not adequately regulating safety. For many years, the Center was led by Clarence Ditlow, a well-known consumer safety advocate. The Center has advocated vigorously for driver safety and automaker accountability by pressuring government agencies and automakers with numerous lawsuits and campaigns. The Center has also published The Car Book annually, which presents the latest safety ratings, dealer prices, fuel economy, insurance premiums, and maintenance costs for new vehicles.
Lemon laws
The Center for Auto Safety counts the enacting of lemon laws in all 50 states among its greatest successes. The Center has testified over 50 times before Congressional Committees on auto safety, warranties and service bulletins, air pollution, consumer protection, and fuel economy. The Center was the leading consumer advocate in passage of Magnuson-Moss Warranty Act, fuel economy provisions of Energy Policy and Conservation Act and Technical Service Bulletin disclosure in MAP-21. The Center recently succeeded in a lawsuit against DOT Secretary Anthony Foxx, forcing NHTSA to make public all manufacturer communications to dealers regarding safety issues. Additionally, former Center Executive Director Clarence Ditlow and Ralph Nader published The Lemon Book in 1980 to educate drivers on how to avoid buying a "lemon" and what to do if they purchase one.
Recalls
The Center for Auto Safety has been involved in many campaigns to pressure automakers and NHTSA to issue recalls on dangerous car parts. Throughout its history, the Center has played a major role in numerous recalls including 6.7 million Chevrolets for defective engine mounts, 15 million Firestone 500 tires, 1.5 million Ford Pintos for exploding gas tanks, 3 million Evenflo child seats for defective latches. More recently, the Center was the main proponent for recalls of 7 million Toyotas for sudden acceleration, 2 million Jeeps for fuel tank fires, 11 million GM vehicles for defective ignition switches, and over 60 million exploding Takata airbag inflators.
Accomplishments
The Center for Auto Safety counts numerous far-reaching efforts among its successes:
"Lemon laws" enacted in all 50 states
State laws requiring auto manufacturers to disclose "hidden" warranties to consumers
The Firestone tire recall
The Ford Pinto recall due to its dangerous gas tank design
Exposure of a potentially lethal gas tank design in General Motors pickup trucks
Improved U.S. highway safety standards administered by the U.S. National Highway Traffic Safety Administration (NHTSA)
Recall of Jeep vehicles with fuel tanks that could explode in rear impact
Pressuring General Motors to take action on their faulty airbags and ignition switches
Annual publication of The Car Book to inform drivers of the safety of specific models
Better protection for drivers against rollover and roof crush in SUVs
Maintaining an online database of vehicle safety complaints submitted to the Center
Wrote Small—On Safety: The Designed-in Dangers of the Volkswagen.
References
External links
The Center for Auto Safety—Official website
The Safe Climate Campaign—Official website
1970 establishments in Washington, D.C.
Automotive safety
Consumer rights organizations
Organizations established in 1970
Political advocacy groups in the United States
Ralph Nader | Center for Auto Safety | [
"Physics"
] | 792 | [
"Physical systems",
"Transport",
"Transport activism"
] |
4,211,997 | https://en.wikipedia.org/wiki/Hydrophobic%20soil | Hydrophobic soil is a soil whose particles repel water. The layer of hydrophobicity is commonly found at or a few centimeters below the surface, parallel to the soil profile. This layer can vary in thickness and abundance and is typically covered by a layer of ash or burned soil.
Formation and structure
Hydrophobic soil most commonly forms when a fire or hot air disperses waxy compounds found in the uppermost litter layer of organic matter. After the compounds disperse, they mainly coat sandy soil particles near the surface in the upper layers of soil, making the soil hydrophobic. Other producers of hydrophobic coatings are contamination and industrial spillages, along with soil microbial activity. Hydrophobicity can also arise as a natural soil property resulting from the degradation of vegetation with natural wax properties, such as Eucalyptus.
In a particular New Zealand sand, this waxy lipid coating was found to consist primarily of hydrocarbons and triglycerides that were basic in pH, along with a smaller amount of acidic long-chain fatty acids. Capillary penetration between soil particles is limited by the hydrophobic coating on the particles: the hydrophilic head of the lipid attaches itself to the sand particle, leaving the hydrophobic tail shielding the outside of the particle, so each coated particle repels water.
Other important soil water averting factors have been found to include soil texture, microbiology, soil surface roughness, soil organic matter content, soil chemical composition, acidity, soil water content, soil type, mineralogy of clay particles, and seasonal variations of the region. Soil texture plays a large role in predicting whether a soil could be water repelling as larger grained particles in the soil such as sand have smaller surface areas, making them more prone to being fully coated by hydrophobic compounds. It is much more difficult to entirely coat a silt or clay particle with more surface area, but when it does happen, the resulting water repellency of the soil is severe. As soil organic matter in the form of plant or microbial biomass decomposes, physiochemical changes can release these hydrophobic compounds into the soil as well. This, however, depends on the type of microbial activity present in the soil as it can also hinder the development of hydrophobic compounds.
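To illustrate the surface-area argument above, the sketch below compares idealized spherical particles; the particle diameters used are assumed, typical-order values, not figures from this article.

```python
# For an idealized sphere, surface area per unit volume is 6/d, so coarse
# sand grains expose far less surface per unit volume than clay-sized
# particles. The diameters below are assumed, typical-order values.

def specific_surface_area(diameter_m: float) -> float:
    """Surface area per unit volume (m^2 per m^3) of a sphere."""
    return 6.0 / diameter_m

sand_d = 0.5e-3  # 0.5 mm sand grain (assumed)
clay_d = 2.0e-6  # 2 micrometre clay particle (assumed)

print(f"sand: {specific_surface_area(sand_d):,.0f} m^2/m^3")  # 12,000
print(f"clay: {specific_surface_area(clay_d):,.0f} m^2/m^3")  # 3,000,000
# The roughly 250x difference is why coarse grains are more easily coated
# in full by hydrophobic compounds.
```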
Hydrophobicity testing
Soil water repellence is almost always tested with the water droplet penetration time (WDPT) test first because of the simplicity of the test. This test is executed by recording the time it takes for one droplet of water to infiltrate a specific soil, indicating the stability of repellency. Water infiltration is expressed as water entering the soil in a spontaneous fashion and correlates with the angle of the water-soil contact. If the water-soil contact angle is greater than 90º, then the soil is determined to be hydrophobic. It has also been observed that if the test droplet is placed on hydrophobic soil, it will rapidly develop a particulate skin before disappearing.
Results of the WDPT:
Table 1: Characterizing the degree of hydrophobicity in soils based on the water droplet penetration test.
Another method for determining soil water repellency is the molarity of ethanol droplet (MED) test. The MED test uses solutions of ethanol of varying surface tensions to observe soil wetting within a time frame of 10 seconds. If there is no wetting within the specified timeframe, an aqueous solution of ethanol with lower surface tension is then placed on a different area of the sample. The results of the MED test depend on the molarity of the ethanol solution whose droplets were absorbed in the allotted 10 seconds. Classifying soil water repellency from this test can be done by using a MED index where a non-water repellent soil has an index of less than or equal to 1 and a severely water repellent soil has an index of greater than or equal to 2.2. The MED index, 90º surface tension, ethanol molarity, and volume percentage correlate and can be converted into one another. In this test, the liquid-air surface tension value of the ethanol solution that is absorbed within this timeframe is used as the ninety-degree surface tension of the soil. The water entry pressure associated with the tested soil is another indicator of infiltration rates as it is associated with the degree of water repellency along with soil pore size.
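The thresholds quoted above (a contact angle greater than 90°, a MED index of 1 or less for non-repellent soil, and 2.2 or more for severe repellency) can be summarized in a small classifier. The label for intermediate MED values in the sketch below is an assumption, since the text does not name those classes.

```python
# Classification sketch based on the thresholds quoted above. The label for
# intermediate MED values is an assumption; this section does not name it.

def is_hydrophobic(contact_angle_deg: float) -> bool:
    """A soil is considered hydrophobic above a 90-degree contact angle."""
    return contact_angle_deg > 90.0

def med_class(med_index: float) -> str:
    if med_index <= 1.0:
        return "non-water repellent"
    if med_index >= 2.2:
        return "severely water repellent"
    return "intermediate repellency (class not named here)"

print(is_hydrophobic(105.0))  # True
print(med_class(0.8))         # non-water repellent
print(med_class(2.5))         # severely water repellent
```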
Effect on agriculture and ecosystems
Hydrophobic soils and their aversion to water have consequences on plant water availability, plant-available nutrients, hydrology, and geomorphology of the affected area. By reducing the infiltration rate, runoff generation time is reduced and leads to an increase in the land flow of water during precipitation or irrigation events. Greater runoff increases erosion, causes uneven wetting patterns in soil, accelerates nutrient leaching reducing soil fertility, develops different flow paths in the region, and increases the risk of contamination in soils.
Drainage of nutrients occurs in weaker areas of repellency in hydrophobic soil where water preferentially drains into the soil. Because the water cannot drain into the stronger areas of hydrophobicity, the water finds pathways of preferential flow where it can infiltrate deeper into the soil profile. If irrigation or precipitation events are large, the water could potentially flow below the root zone, making it unavailable to any plant life and oftentimes taking fertilizers and nutrients with it. This additionally leads to an uneven distribution of nutrients and applied chemicals resulting in patchy vegetation.
In an agricultural setting, hydrophobic soil is a large constraint on crop yields. For example, in Australia, there have been documented reports of up to 80% loss in production due to soil water repellency. This is due to low rates of seed germination in soils as well as low plant available water levels.
Locations and appearance of hydrophobic soils
Hydrophobic soils have been found on all continents except Antarctica. They occur in dry regions of the United States, southern Australia, and the Mediterranean Basin, and in wet regions including Sweden, the Netherlands, British Columbia, and Colombia. Although hydrophobicity mainly appears in coarse-textured, sand-dominated soils, it affects all soil types and has been reported in forests, pastures, agricultural plots, and shrublands. Generally, the degree of hydrophobicity is more severe in the soils of legume-grass pastures than in cultivated agricultural fields.
Hydrophobic soil management
One method of managing sandy water-repellent soil is claying. This is done by adding selected clay materials, such as calcium bentonite, to give the soil more surface area per unit volume and improve soil mineralogy, supporting aggregation. After claying, a hydrophobic barley field's yield increased from 1.7 to 3.4 t/ha, and a field of lupins increased its yield by 1 t/ha within 2 years.
Liming is a method to reduce soil water repellency in acidic soils. The liming process consists of adding calcium carbonate, which raises the pH of the soil towards neutral. Separate from hydrophobic soils, increased calcium and pH are associated with increased infiltration through improved structure and aggregation in biologically active soils.
Increasing the soil's pH increases the ability of naturally occurring humic substances to improve infiltration in hydrophobic soils. Humic acid is only water-soluble at a pH greater than 6.5, while fulvic acid is soluble across the pH range. Both acids can reduce the surface tension of water when in solution. In contrast, soils with a deficiency of dissolved fulvic acid have been reported to show more severe water repellency.
The agricultural practice of tilling decreases the degree of soil water repellency. Tilling crop fields reduces the carbon content of the soil through mixing and mineralization, thus decreasing the likelihood of decomposition by microorganisms that can lead to the dispersal of the hydrophobic coating that triggers soil water repellency.
Naturally forming holes and cracks in hydrophobic soil patches allow water to infiltrate the surface. These can form from burrowing animals, root channels, or macropores from decayed roots. These macropores have been identified as essential pathways in forest ecosystems for water to penetrate the soil because they account for approximately 35% of the near-surface volume of the soil.
References
Hydrology
Types of soil | Hydrophobic soil | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,729 | [
"Hydrology",
"Environmental engineering"
] |
4,212,710 | https://en.wikipedia.org/wiki/Duvenhage%20lyssavirus | Duvenhage lyssavirus (DUVV) is a member of the genus Lyssavirus, which also contains the rabies virus. The virus was discovered in 1970, when a South African farmer (after whom the virus is named) died of a rabies-like encephalitic illness, after being bitten by a bat. In 2006, Duvenhage virus killed a second person, when a man was scratched by a bat in North West Province, South Africa, 80 km from the 1970 infection. He developed a rabies-like illness 27 days after the bat encounter, and died 14 days after the onset of illness. A 34-year-old woman who died in Amsterdam on December 8, 2007, was the third recorded fatality. She had been scratched on the nose by a small bat while travelling through Kenya in October 2007, and was admitted to hospital four weeks later with rabies-like symptoms.
Microbats are believed to be the natural reservoir of Duvenhage virus. It has been isolated twice from insectivorous bats, in 1981 from Miniopterus schreibersi, and in 1986 from Nycteris thebaica, and the virus is closely related to another bat-associated lyssavirus endemic to Africa, Lagos bat lyssavirus.
References
Lyssaviruses | Duvenhage lyssavirus | [
"Biology"
] | 278 | [
"Virus stubs",
"Viruses"
] |
4,213,165 | https://en.wikipedia.org/wiki/Weight%20distribution | Weight distribution is the apportioning of weight within a vehicle, especially cars, airplanes, and trains. Typically, it is written in the form x/y, where x is the percentage of weight in the front, and y is the percentage in the back.
In a vehicle which relies on gravity in some way, weight distribution directly affects a variety of vehicle characteristics, including handling, acceleration, traction, and component life. For this reason weight distribution varies with the vehicle's intended usage. For example, a drag car maximizes traction at the rear axle while countering the reactionary pitch-up torque. It generates this counter-torque by placing a small amount of counterweight at a great distance forward of the rear axle.
In the airline industry, load balancing is used to evenly distribute the weight of passengers, cargo, and fuel throughout an aircraft, so as to keep the aircraft's center of gravity close to its center of pressure to avoid losing pitch control. In military transport aircraft, it is common to have a loadmaster as a part of the crew; their responsibilities include calculating accurate load information for center of gravity calculations, and ensuring cargo is properly secured to prevent its shifting.
In large aircraft and ships, multiple fuel tanks and pumps are often used, so that as fuel is consumed, the remaining fuel can be positioned to keep the vehicle balanced, and to reduce stability problems associated with the free surface effect.
In the trucking industry, individual axle weight limits require balancing the cargo when the gross vehicle weight nears the legal limit.
See also
Center of mass
Center of percussion
Load transfer
Mass distribution
Roll center
Tilt test
Weight transfer
References
External links
Weight Distribution Calculator
Aerospace engineering
Mass
Vehicle technology | Weight distribution | [
"Physics",
"Mathematics",
"Engineering"
] | 344 | [
"Scalar physical quantities",
"Physical quantities",
"Quantity",
"Mass",
"Size",
"Vehicle technology",
"Mechanical engineering by discipline",
"Aerospace engineering",
"Wikipedia categories named after physical quantities",
"Matter"
] |
3,076,815 | https://en.wikipedia.org/wiki/Jabberd2 | jabberd2 is defunct software. It was an XMPP server, written in the C language and licensed as Free software under the GNU General Public License. It was inspired by jabberd14.
Current developers
Project maintainer and developer is Tomasz Sterna.
Former developers
The project leader was Justin Kirby.
The project coordinator was Stephen Marquard.
The original project creator was Rob Norris.
See also
Extensible Messaging and Presence Protocol
iChat Server
References
External links
jabberd2 homepage
Why jabberd2 is not a new major release of jabberd 1.4
Instant messaging server software | Jabberd2 | [
"Technology"
] | 126 | [
"Instant messaging",
"Instant messaging server software"
] |
3,076,863 | https://en.wikipedia.org/wiki/Machine%20epsilon | Machine epsilon or machine precision is an upper bound on the relative approximation error due to rounding in floating point number systems. This value characterizes computer arithmetic in the field of numerical analysis, and by extension in the subject of computational science. The quantity is also called macheps and it has the symbols Greek epsilon .
There are two prevailing definitions, denoted here as rounding machine epsilon or the formal definition and interval machine epsilon or mainstream definition.
In the mainstream definition, machine epsilon is independent of rounding method, and is defined simply as the difference between 1 and the next larger floating point number.
In the formal definition, machine epsilon is dependent on the type of rounding used and is also called unit roundoff, which has the symbol bold Roman u.
The two terms can generally be considered to differ by simply a factor of two, with the formal definition yielding an epsilon half the size of the mainstream definition, as summarized in the tables in the next section.
Values for standard hardware arithmetics
The following table lists machine epsilon values for standard floating-point formats.
Alternative definitions for epsilon
The IEEE standard does not define the terms machine epsilon and unit roundoff, so differing definitions of these terms are in use, which can cause some confusion.
The two terms differ by simply a factor of two. The more widely used term (referred to as the mainstream definition in this article) is used in most modern programming languages and simply defines machine epsilon as the difference between 1 and the next larger floating point number. The formal definition can generally be considered to yield an epsilon half the size of the mainstream definition, although its definition does vary depending on the form of rounding used.
The two terms are described at length in the next two subsections.
Formal definition (Rounding machine epsilon)
The formal definition for machine epsilon is the one used by Prof. James Demmel in lecture scripts, the LAPACK linear algebra package, numerics research papers and some scientific computing software. Most numerical analysts use the words machine epsilon and unit roundoff interchangeably with this meaning, which is explored in depth throughout this subsection.
Rounding is a procedure for choosing the representation of a real number in a floating point number system. For a number system and a rounding procedure, machine epsilon is the maximum relative error of the chosen rounding procedure.
Some background is needed to determine a value from this definition. A floating point number system is characterized by a radix, also called the base, $b$, and by the precision $p$, i.e. the number of radix digits of the significand (including any leading implicit bit). All the numbers with the same exponent, $e$, have the spacing $b^{e-p+1}$. The spacing changes at the numbers that are perfect powers of $b$; the spacing on the side of larger magnitude is $b$ times larger than the spacing on the side of smaller magnitude.
Since machine epsilon is a bound for relative error, it suffices to consider numbers with exponent $e = 0$. It also suffices to consider positive numbers. For the usual round-to-nearest kind of rounding, the absolute rounding error is at most half the spacing, or $b^{1-p}/2$. This value is the biggest possible numerator for the relative error. The denominator in the relative error is the number being rounded, which should be as small as possible to make the relative error large. The worst relative error therefore happens when rounding is applied to numbers of the form $1 + a$, where $a$ is between $0$ and $b^{1-p}/2$. All these numbers round to $1$ with relative error $a/(1+a)$. The maximum occurs when $a$ is at the upper end of its range. The $a$ in the denominator $1+a$ is negligible compared to 1, so the denominator is dropped for expediency, and just $b^{1-p}/2$ is taken as machine epsilon. As has been shown here, the relative error is worst for numbers that round to $1$, so machine epsilon also is called unit roundoff, meaning roughly "the maximum error that can occur when rounding to the unit value".
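As a worked instance of this derivation, consider IEEE 754 binary64, whose base and precision are fixed by the format; the two values below correspond to the rounding (formal) and interval (mainstream) definitions discussed in this article:
\[
b = 2,\ p = 53:\qquad \varepsilon_{\text{rounding}} = \tfrac{1}{2}\,b^{1-p} = 2^{-53} \approx 1.11 \times 10^{-16}, \qquad \varepsilon_{\text{interval}} = b^{1-p} = 2^{-52} \approx 2.22 \times 10^{-16}.
\]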
Thus, the maximum spacing between a normalised floating point number, $x$, and an adjacent normalised number is $2\varepsilon|x|$.
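This bound can be checked empirically. The following is a minimal Python sketch (assuming IEEE 754 binary64 floats and Python 3.9 or later for math.ulp); note that sys.float_info.epsilon equals 2ε in this section's notation:

import math
import sys

two_eps = sys.float_info.epsilon   # equals 2*epsilon in this section's notation (2**-52 for binary64)

# The gap between a normalised double x and the next larger representable
# double never exceeds 2*epsilon*|x|.
for x in (1.0, 1.5, math.pi, 1e300, 2.0 ** -1000):
    spacing = math.ulp(x)          # for positive x: distance to the next larger representable double
    assert spacing <= two_eps * abs(x)
    print(x, spacing, two_eps * abs(x))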
Arithmetic model
Numerical analysis uses machine epsilon to study the effects of rounding error. The actual errors of machine arithmetic are far too complicated to be studied directly, so instead, the following simple model is used. The IEEE arithmetic standard says all floating-point operations are done as if it were possible to perform the infinite-precision operation, and then, the result is rounded to a floating-point number. Suppose (1) $x$, $y$ are floating-point numbers, (2) $\circ$ is an arithmetic operation on floating-point numbers such as addition or multiplication, and (3) $\bullet$ is the corresponding infinite-precision operation. According to the standard, the computer calculates:

$$x \circ y = \operatorname{round}(x \bullet y)$$

By the meaning of machine epsilon, the relative error of the rounding is at most machine epsilon in magnitude, so:

$$x \circ y = (x \bullet y)(1 + z)$$

where $z$ in absolute magnitude is at most $\varepsilon$ or u. The books by Demmel and Higham in the references can be consulted to see how this model is used to analyze the errors of, say, Gaussian elimination.
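This model can be spot-checked numerically. The following minimal Python sketch (assuming IEEE 754 binary64 with round-to-nearest, as CPython uses on typical hardware) compares a floating-point product with the exact product of the stored operands using exact rational arithmetic:

from fractions import Fraction

x = 0.1
y = 0.3
computed = x * y                   # floating-point operation: round(x * y)

# Fraction(float) is exact, so this is the infinite-precision product of the
# doubles actually stored in x and y.
exact = Fraction(x) * Fraction(y)

rel_err = abs(Fraction(computed) - exact) / exact
u = Fraction(1, 2 ** 53)           # unit roundoff for binary64 under round-to-nearest
assert rel_err <= u
print(float(rel_err), float(u))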
Mainstream definition (Interval machine epsilon)
This alternative definition is significantly more widespread: machine epsilon is the difference between 1 and the next larger floating point number. This definition is used in language constants in Ada, C, C++, Fortran, MATLAB, Mathematica, Octave, Pascal, Python and Rust etc., and defined in textbooks like «Numerical Recipes» by Press et al.
By this definition, ε equals the value of the unit in the last place relative to 1, i.e. $\varepsilon = b^{-(p-1)}$ (where $b$ is the base of the floating point system and $p$ is the precision), and the unit roundoff is u = ε / 2, assuming round-to-nearest mode, and u = ε, assuming round-by-chop.
The prevalence of this definition is rooted in its use in the ISO C Standard for constants relating to floating-point types and corresponding constants in other programming languages. It is also widely used in scientific computing software and in the numerics and computing literature.
How to determine machine epsilon
Where standard libraries do not provide precomputed values (as <float.h> does with FLT_EPSILON, DBL_EPSILON and LDBL_EPSILON for C and <limits> does with std::numeric_limits<T>::epsilon() in C++), the best way to determine machine epsilon is to refer to the table, above, and use the appropriate power formula. Computing machine epsilon is often given as a textbook exercise. The following examples compute interval machine epsilon in the sense of the spacing of the floating point numbers at 1 rather than in the sense of the unit roundoff.
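For illustration, the precomputed value can also be read directly in Python (a minimal sketch; sys.float_info.epsilon is the Python analogue of C's DBL_EPSILON, and math.nextafter requires Python 3.9 or later):

import sys
import math

eps = sys.float_info.epsilon                   # interval machine epsilon of the Python float type (binary64)
print(eps)                                     # 2.220446049250313e-16
print(eps == 2.0 ** -52)                       # True: b**-(p - 1) with b = 2, p = 53
print(eps == math.nextafter(1.0, 2.0) - 1.0)   # True: difference between 1 and the next larger float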
Note that results depend on the particular floating-point format used, such as float, double, long double, or similar as supported by the programming language, the compiler, and the runtime library for the actual platform.
Some formats supported by the processor might not be supported by the chosen compiler and operating system. Other formats might be emulated by the runtime library, including arbitrary-precision arithmetic available in some languages and libraries.
In a strict sense the term machine epsilon means the accuracy directly supported by the processor (or coprocessor), not some accuracy supported by a specific compiler for a specific operating system, unless it's known to use the best format.
IEEE 754 floating-point formats have the property that, when reinterpreted as a two's complement integer of the same width, they monotonically increase over positive values and monotonically decrease over negative values (see the binary representation of 32 bit floats). They also have the property that $0 < |f(x)| < \infty$, and $|f(x+1) - f(x)| \geq |f(x) - f(x-1)|$ (where $f(x)$ is the aforementioned integer reinterpretation of $x$). In languages that allow type punning and always use IEEE 754–1985, we can exploit this to compute a machine epsilon in constant time. For example, in C:
typedef union {
    long long i64;
    double d64;
} dbl_64;

double machine_eps (double value)
{
    dbl_64 s;
    s.d64 = value;          /* store the double in the union */
    s.i64++;                /* reinterpret its bits as an integer and step to the adjacent representable double */
    return s.d64 - value;   /* the spacing between value and that neighbour */
}
This will give a result of the same sign as value. If a positive result is always desired, the return statement of machine_eps can be replaced with:
return (s.i64 < 0 ? value - s.d64 : s.d64 - value);
Example in Python:
def machineEpsilon(func=float):
    machine_epsilon = func(1)
    while func(1) + machine_epsilon != func(1):
        machine_epsilon_last = machine_epsilon
        machine_epsilon = func(machine_epsilon) / func(2)
    return machine_epsilon_last
64-bit doubles give 2.220446e-16, which is 2^−52 as expected.
Approximation
The following simple algorithm can be used to approximate the machine epsilon, to within a factor of two (one binary order of magnitude) of its true value, using a linear search.
epsilon = 1.0;
while (1.0 + 0.5 * epsilon) ≠ 1.0:
epsilon = 0.5 * epsilon
The machine epsilon can also simply be calculated as two to the negative power of the number of bits used for the mantissa; for IEEE 754 binary64, which stores 52 explicit mantissa bits, this gives $2^{-52}$.
Relationship to absolute relative error
If $y$ is the machine representation of a number $x$, then the absolute relative error in the representation is $\left| \dfrac{x - y}{x} \right| \leq \varepsilon$.
Proof
The following proof is limited to positive numbers and machine representations using round-by-chop.
If $x$ is a positive number we want to represent, it will lie between a machine number $x_b$ below $x$ and a machine number $x_u$ above $x$.
If $x_b = (1.b_1 b_2 \ldots b_t)_2 \times 2^k$, where $t$ is the number of bits used for the magnitude of the significand, then the adjacent machine number above is $x_u = x_b + 2^{k-t}$.
Since the representation of $x$ will be either $x_b$ or $x_u$, the error is bounded by the spacing, $|x - \operatorname{fl}(x)| \le x_u - x_b = 2^{k-t}$; and since $x \ge 2^k$, the relative error satisfies
$$\left| \frac{x - \operatorname{fl}(x)}{x} \right| \le \frac{2^{k-t}}{2^k} = 2^{-t}.$$
Although this proof is limited to positive numbers and round-by-chop, the same method can be used to prove the inequality in relation to negative numbers and round-to-nearest machine representations.
See also
Floating point, general discussion of accuracy issues in floating point arithmetic
Unit in the last place (ULP)
Notes and references
Anderson, E.; LAPACK Users' Guide, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, third edition, 1999.
Cody, William J.; MACHAR: A Subroutine to Dynamically Determine Machine Parameters, ACM Transactions on Mathematical Software, Vol. 14(4), 1988, 303–311.
Besset, Didier H.; Object-Oriented Implementation of Numerical Methods, Morgan & Kaufmann, San Francisco, CA, 2000.
Demmel, James W., Applied Numerical Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1997.
Higham, Nicholas J.; Accuracy and Stability of Numerical Algorithms, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, second edition, 2002.
Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; and Flannery, Brian P.; Numerical Recipes in Fortran 77, 2nd ed., Chap. 20.2, pp. 881–886
Forsythe, George E.; Malcolm, Michael A.; Moler, Cleve B.; Computer Methods for Mathematical Computations, Prentice-Hall, 1977
External links
MACHAR, a routine (in C and Fortran) to "dynamically compute machine constants" (ACM algorithm 722)
Diagnosing floating point calculations precision, Implementation of MACHAR in Component Pascal and Oberon based on the Fortran 77 version of MACHAR published in Numerical Recipes (Press et al., 1992).
Computer arithmetic
Articles with example C code
Articles with example Python (programming language) code | Machine epsilon | [
"Mathematics"
] | 2,400 | [
"Computer arithmetic",
"Arithmetic"
] |
3,076,958 | https://en.wikipedia.org/wiki/Dyskeratosis%20congenita | Dyskeratosis congenita (DKC), also known as Zinsser-Engman-Cole syndrome, is a rare progressive congenital disorder with a highly variable phenotype. The entity was classically defined by the triad of abnormal skin pigmentation, nail dystrophy, and leukoplakia of the oral mucosa, and myelodysplastic syndrome (MDS) or acute myeloid leukemia (AML), but these components do not always occur. DKC is characterized by short telomeres. The disease initially can affect the skin, but a major consequence is progressive bone marrow failure which occurs in over 80%, causing early mortality.
Presentation
DKC can be characterized by cutaneous pigmentation, premature graying, dystrophy of the nails, leukoplakia of the oral mucosa, continuous lacrimation due to atresia of the lacrimal ducts, often thrombocytopenia, anemia, testicular atrophy in male carriers, and predisposition to cancer. Liver abnormalities are also associated with this syndrome; nodular regenerative hyperplasia of the liver, although rare, is one of the many liver manifestations that short telomeres can cause.
Predisposition to cancer
It is thought that without functional telomerase, chromosomes will likely be attached together at their ends through the non-homologous end joining pathway. If this proves to be a common enough occurrence, malignancy even without telomerase present is possible. Myelodysplastic syndrome is associated with dyskeratosis congenita, usually presenting as a hypoplastic bone marrow that can resemble aplastic anemia. The two can be differentiated by the presence of >10% dysplasia in the affected cell lines, although this is sometimes not possible because the hypoplastic marrow leaves few blood cells to observe; genetic clones are more often than not absent in the hypoplastic myelodysplastic disorder associated with dyskeratosis congenita.
Genetics
Of the components of the telomerase RNA component (TERC), one of key importance is the box H/ACA domain. This H/ACA domain is responsible for maturation and stability of TERC and therefore of telomerase as a whole. The mammalian H/ACA ribonucleoprotein contains four protein subunits: dyskerin, Gar1, Nop10, and Nhp2. Mutations in Nop10, Nhp2 and dyskerin1 have all been shown to lead to DKC-like symptoms.
X-linked
The best characterized form of dyskeratosis congenita is a result of one or more mutations in the long arm of the X chromosome in the gene DKC1. This results in the X-linked recessive form of the disease wherein the major protein affected is dyskerin. Of the five mutations described by Heiss and colleagues in Nature Genetics, four were single nucleotide polymorphisms all resulting in the change of highly conserved amino acids. One case was an in-frame deletion resulting in the loss of a leucine residue, also conserved in mammals. In three of the cases, the specific amino acids affected (phenylalanine, proline, glycine) are found in the same locus in humans as they are in yeast (S. cerevisiae) and the brown rat (R. norvegicus). This establishes the sequence conservation and importance of dyskerin within the eukaryotes. The relevant role of dyskerin throughout most species is to catalyze the post-transcriptional pseudouridylation of specific uridines found in non-coding RNAs, such as ribosomal RNA (rRNA). Cbf5, the yeast analog of human dyskerin, is indeed known to be associated with the processing and maturation of rRNA. In humans, this role can be attributed to dyskerin. Thus, the X-linked form of this disease may result in specific issues related to dysfunctional RNA and perhaps a graver phenotype. Within the vertebrates, as opposed to single-celled eukaryotes, dyskerin is a key component of the telomerase RNA component (TERC) in the form of the H/ACA motif. This X-linked variety, like the Nop10 and Nhp2 mutations, demonstrates shortened telomeres as a result of lower TERC concentrations.
Autosomal dominant
3 genes: TERC, TERT, TINF2
The evidence supporting the importance of the H/ACA domain in human telomerase is abundant. At least one study has shown that these mutations affect telomerase activity by negatively affecting pre-RNP assembly and maturation of human telomerase RNA. Nonetheless, mutations that directly affect the telomerase RNA components would presumably exist and should also cause premature aging or DKC-like symptoms. Indeed, three families with mutations in the human TERC gene have been studied with intriguing results. In two of these families, two family-specific single nucleotide polymorphisms were present while in the other there persisted a large-scale deletion (821 base pairs of DNA) on chromosome 3 which includes 74 bases coding for a section of the H/ACA domain. These three different mutations result in a mild form of dyskeratosis congenita which uniquely follows an autosomal dominant pattern of inheritance. Premature graying, early dental loss, predisposition to skin cancer, as well as shortening of telomere length continue to be characteristic of this disease.
Autosomal recessive
6 genes:
The true phenotype of DKC individuals may depend upon which protein has incurred a mutation. One documented autosomal recessive mutation in a family that carries DKC has been found in NOP10. Specifically, the mutation is a change of base from cytosine to thymine in a highly conserved region of the NOP10 sequence. This mutation, on chromosome 15, results in an amino acid change from arginine to tryptophan. Homozygous recessive individuals show the symptoms of dyskeratosis congenita in full. As compared to age-matched normal individuals, those suffering from DKC have telomeres of a much shorter length. Furthermore, heterozygotes, those who have one normal allele and one coding for the disease, also show relatively shortened telomeres. The cause of this was determined to be a reduction in TERC levels in those with the Nop10 mutation. With TERC levels down, telomere maintenance, especially in development, would be presumed to suffer accordingly. This would lead to the telomere shortening described.
NHP2 mutations are similar in characterization to NOP10. These mutations are also autosomal recessive with three specific single-nucleotide polymorphisms being recognized which result in dyskeratosis congenita. Also, like NOP10, individuals with these NHP2 mutations have a reduction in the amount of telomerase RNA component (TERC) present in the cell. Again, it can be presumed that a reduction in TERC results in aberrant telomere maintenance and thus shortened telomeres. Those homozygous recessive for mutations in NHP2 do show shorter telomeres when compared with age-matched normal individuals.
Pathophysiology
Dyskeratosis congenita is a disorder of poor telomere maintenance mainly due to a number of gene mutations that give rise to abnormal ribosome function, termed ribosomopathy. Specifically, the disease is related to one or more mutations which directly or indirectly affect the vertebrate telomerase RNA component (TERC).
Telomerase is a reverse transcriptase which maintains a specific repeat sequence of DNA, the telomere, during development. Telomeres are placed by telomerase on both ends of linear chromosomes as a way to protect linear DNA from general forms of chemical damage and to correct for the chromosomal end-shortening that occurs during normal DNA replication. This end-shortening is the result of the eukaryotic DNA polymerases having no mechanism for synthesizing the final nucleotides present on the end of the "lagging strand" of double stranded DNA. DNA polymerase can only synthesize new DNA from an old DNA strand in the 5'→3' direction. Given that DNA has two strands that are complementary, one strand must be 5'→3' while the other is 3'→5'. This inability to synthesize in the 3'→5' directionality is compensated with the use of Okazaki fragments, short pieces of DNA that are synthesized 5'→3' from the 3'→5' as the replication fork moves. As DNA polymerase requires RNA primers for DNA binding in order to commence replication, each Okazaki fragment is thus preceded by an RNA primer on the strand being synthesized. When the end of the chromosome is reached, the final RNA primer is placed upon this nucleotide region, and it is inevitably removed. Unfortunately once the primer is removed, DNA polymerase is unable to synthesize the remaining bases.
Sufferers of DKC have been shown to have a reduction in TERC levels invariably affecting the normal function of telomerase which maintains these telomeres. With TERC levels down, telomere maintenance during development suffers accordingly. In humans, telomerase is inactive in most cell types after early development (except in extreme cases such as cancer). Thus, if telomerase is not able to efficiently affect the DNA in the beginning of life, chromosomal instability becomes a grave possibility in individuals much earlier than would be expected.
A study shows that proliferative defects in DC skin keratinocytes are corrected by expression of the telomerase reverse transcriptase, TERT, or by activation of endogenous telomerase through expression of papillomavirus E6/E7 of the telomerase RNA component, TERC.
Diagnosis
Since the disease has a wide variety of symptoms due to involvement of multiple systems of the body, diagnostic testing depends on the clinical findings in each individual patient. Commonly used tests include a complete blood count (CBC), bone marrow examination, leukocyte telomere length test (e.g. Flow FISH), pulmonary function test, and genetic testing.
Management
The mainstay of treatment in dyskeratosis congenita is hematopoietic stem cell transplantation, with the best outcomes obtained with a sibling donor. Short-term therapy in the initial stages is with anabolic steroids (oxymetholone, danazol), erythropoietin-like hormones, or granulocyte colony-stimulating factor (filgrastim); all of these therapies are directed at coping with the effects of bone marrow failure, which manifests as low red and white blood cell counts. These medications help to increase the blood components and make up for the deficiencies caused by bone marrow failure. Stem cell transplantation in dyskeratosis congenita has to be performed very carefully, with low-intensity radiation and chemotherapy conditioning, to avoid the potentially catastrophic effects of graft-versus-host disease and toxicity to other organs affected by short telomeres, which makes them very sensitive to radiation, especially the lungs and liver.
Prognosis
DC is associated with shorter life expectancy, but many live to at least age 60.
The main causes of mortality in these patients are related to bone marrow failure. Nearly 80% of patients with dyskeratosis congenita develop bone marrow failure.
Research
Recent research has used induced pluripotent stem cells to study disease mechanisms in humans, and discovered that the reprogramming of somatic cells restores telomere elongation in dyskeratosis congenita (DKC) cells despite the genetic lesions that affect telomerase. The reprogrammed DKC cells were able to overcome a critical limitation in TERC levels and restored function (telomere maintenance and self-renewal). Therapeutically, methods aimed at increasing TERC expression could prove beneficial in DKC.
See also
Cutaneous conditions
List of cutaneous conditions
References
External links
GeneReviews/NCBI/NIH/UW entry on Dyskeratosis Congenita
Dyskeratosis Congenita research study of Inherited Bone Marrow Failure Syndromes (IBMFS)
Genodermatoses
Congenital disorders
Rare diseases
DNA replication and repair-deficiency disorders
Telomeropathies
Progeroid syndromes | Dyskeratosis congenita | [
"Biology"
] | 2,647 | [
"Senescence",
"DNA replication and repair-deficiency disorders",
"Progeroid syndromes"
] |
3,077,180 | https://en.wikipedia.org/wiki/Rumen | The rumen, also known as a paunch, is the largest stomach compartment in ruminants. The rumen and the reticulum make up the reticulorumen in ruminant animals. The diverse microbial communities in the rumen allows it to serve as the primary site for microbial fermentation of ingested feed, which is often fiber-rich roughage typically indigestible by mammalian digestive systems. The rumen is known for containing unique microbial networks within its multiple sac compartments to break down nutrients into usable energy and fatty acids.
Brief anatomy
The rumen is composed of five muscular sacs: the cranial sac, ventral sac, dorsal sac, caudodorsal sac, and caudoventral blind sac. Each of these areas contains unique microbial communities, environments, and physical characteristics that influence digestion.
The outer lining of the rumen, known as the epithelium, serves as a protective layer and contributes to the metabolic processing of fermentation products.
The inner lining of the rumen wall is covered in small fingerlike projections called papillae, which aid in nutrient absorption. The reticulum is lined with ridges that form a hexagonal honeycomb pattern. These features increase the surface area of the reticulorumen wall, facilitating the absorption of volatile fatty acids and capture of smaller digesta particles.
The rumen and the reticulum differ with regard to the makeup of the lining but account for approximately 80% of total ruminant stomach volume.
Digestion
Digestion in the rumen and reticulorumen occurs through fermentation by diverse microbe communities to optimize resources from nutrient dense feed. Millions of microorganisms, including bacteria, archaea, viruses, fungi, and protozoa, are known to reside in the reticulorumen and are essential to digest structural carbohydrates, like lignocellulose (hemicellulose and cellulose), non-structural carbohydrates (starch, sugar, and pectin), lipids, and nitrogenous compounds (proteins, peptides, and amino acids).
Both non-structural and structural carbohydrates are hydrolysed to monosaccharides or disaccharides by microbial enzymes. The resulting mono- and disaccharides are transported into the microbes. Once within microbial cell walls, the mono- and disaccharides may be assimilated into microbial biomass or fermented to volatile fatty acids (VFAs), such as acetate, propionate, butyrate, lactate, valerate and other branched-chain VFAs via glycolysis and other biochemical pathways to yield energy for the microbial cell. Most VFAs are absorbed across the reticulorumen wall, directly into the bloodstream, and are used by the ruminant as substrates for energy production and biosynthesis. Some branched chain VFAs are incorporated into the lipid membrane of rumen microbes. VFAs provide large amounts of energy for ruminants and are critical to the health of the rumen and its microbiome.
Lipids, lignin, minerals, and vitamins play a less prominent role in digestion than carbohydrates and protein, but they are still critical in many ways. Lipids are partly hydrolysed and hydrogenated, and glycerol, if present in the lipid, is fermented. Lipids are otherwise inert in the rumen. Some carbon from carbohydrate or protein may be used for de novo synthesis of microbial lipid. High levels of lipid, particularly unsaturated lipid, in the rumen are thought to poison microbes and suppress fermentation activity. Lignin, a phenolic compound, is recalcitrant to digestion, though it can be solubilized by fungi. Lignin is thought to shield associated nutrients from digestion and hence limits degradation. Minerals are absorbed by microbes and are necessary to their growth. Microbes in turn synthesize many vitamins, such as cyanocobalamin, in great quantities, often great enough to sustain the ruminant even when vitamins are highly deficient in the diet.
The protein ingested is either degradable intake protein or undegradable intake protein, or rumen bypass protein. Protein is hydrolysed to peptides and amino acids by microbial enzymes, which are subsequently transported across the microbial cell wall for assimilation into cell biomass, primarily. Peptides, amino acids, ammonia, and other sources of nitrogen originally present in the feed can also be used directly by microbes with little to no hydrolysis. In situations in which nitrogen for microbial growth is in excess, protein and its derivatives can also be fermented to produce energy, yielding ammonia. Excess ammonia is absorbed by the rumen and converted into urea in the liver. Non-amino acid nitrogen is used for synthesis of microbial amino acids.
Ruminants have access to food-sourced protein and microbial proteins produced by the microbes in the rumen. This creates a symbiotic relationship between the ruminant and the microbial communities, as the microbes can be used as a protein source when washed into the abomasum section of the digestive tract.
Stratification and mixing of digesta
Digested food (digesta) in the rumen is not uniform, but rather stratified into gas, liquid, and particles of different sizes, densities, and other physical characteristics. Additionally, the digesta is subject to extensive mixing and complicated flow paths upon entry into the rumen. Though they may seem trivial at first, these complicated stratification, mixing, and flow patterns of digesta are a key aspect of digestive activity in the ruminant and thus warrant detailed discussion.
After being swallowed, food travels down the oesophagus and is deposited in the dorsal part of the reticulum. Contractions of the reticulorumen propel and mix the recently ingested feed into the ruminal mat. The mat is a thick mass of digesta, consisting of partially degraded, long, fibrous material. Most material in the mat has been recently ingested, and as such, has considerable fermentable substrate remaining. Microbial fermentation proceeds rapidly in the mat, releasing many gases. Some of these gases are trapped in the mat, causing the mat to be buoyant. As fermentation proceeds, fermentable substrate is exhausted, gas production decreases, and particles lose buoyancy due to loss of entrapped gas. Digesta in the mat hence goes through a phase of increasing buoyancy followed by decreasing buoyancy. Simultaneously, the size of digesta particles–relatively large when ingested–is reduced by microbial fermentation and, later, rumination. Incomplete digestion of plant material here will result in the formation of a type of bezoar called Phytobezoars. At a certain point, particles are dense and small enough that they may “fall” through the rumen mat into the ventral sac below, or they may be swept out of the rumen mat into the reticulum by liquid gushing through the mat during ruminal contractions. Once in the ventral sac, digesta continues to ferment at decreased rates, further losing buoyancy and decreasing in particle size. It is soon swept into the ventral reticulum by ruminal contractions.
In the ventral reticulum, less dense, larger digesta particles may be propelled up into the oesophagus and mouth during contractions of the reticulum. Digesta is chewed in the mouth in a process known as rumination, then expelled back down the oesophagus and deposited in the dorsal sac of the reticulum, to be lodged and mixed into the ruminal mat again. Denser, small particles stay in the ventral reticulum during reticular contraction, and then during the next contraction may be swept out of the reticulorumen with liquid through the reticulo-omasal orifice, which leads to the next chamber in the ruminant animal's alimentary canal, the omasum.
Water and saliva enter through the rumen to form a liquid pool. Liquid will ultimately escape from the reticulorumen from absorption through the wall, or through passing through the reticulo-omasal orifice, as digesta does. However, since liquid cannot be trapped in the mat as digesta can, liquid passes through the rumen much more quickly than digesta does. Liquid often acts as a carrier for very small digesta particles, such that the dynamics of small particles is similar to that of liquid.
The uppermost area of the rumen, the headspace, is filled with gases (such as methane, carbon dioxide, and, to a much lower degree, hydrogen) released from fermentation and anaerobic respiration of food. These gases are regularly expelled from the reticulorumen through the mouth, in a process called eructation.
Microbes in reticulorumen
The different sacs of the rumen allow for varying ecological niches for microbes in the reticulorumen, including bacteria, protozoa, fungi, archaea, and viruses. Each microbial community depends on a variety of enzymes to breakdown lignocellulose, nonstructural carbohydrates, nitrogenous compounds, and lipids.
Bacteria, along with protozoa, are the predominant microbes and by mass account for 40-60% of total microbial matter in the rumen. They are categorized into several functional groups, such as fibrolytic, amylolytic, and proteolytic types, which preferentially digest structural carbohydrates, non-structural carbohydrates, and protein, respectively. Protozoa (40-60% of microbial mass) derive their nutrients through phagocytosis of other microbes, and degrade and digest feed carbohydrates, especially starch and sugars, and protein.
Ruminal fungi make up 5-10% of microbes and are absent on diets poor in fibre. Fungi occupy an important niche in the rumen because they hydrolyse some ester linkages between lignin and hemicellulose or cellulose, and help break down digesta particles. Archaea, approximately 3% of total microbes, are mostly autotrophic methanogens and produce methane through anaerobic respiration. Most of the hydrogen produced by bacteria, protozoa and fungi is used by these methanogens to reduce carbon dioxide to methane. Viruses are present in unknown numbers and have not been well studied. However, they can lyse microbes, releasing their contents for other microbes to assimilate and ferment in a process called microbial recycling, although recycling through the predatory activities of protozoa is quantitatively more important.
Microbes in the reticulorumen eventually flow out into the omasum and the remainder of the alimentary canal. Under normal fermentation conditions the environment in the reticulorumen is weakly acidic and is populated by microbes that are adapted to a pH between roughly 5.5 and 6.5; since the abomasum is strongly acidic (pH 2 to 4), it acts as a barrier that largely kills reticulorumen flora and fauna as they flow into it. Subsequently, microbial biomass is digested in the small intestine and smaller molecules (mainly amino acids) are absorbed and transported in the portal vein to the liver. The digestion of these microbes in the small intestine is a major source of nutrition, as microbes usually supply some 60 to 90% of the total amount of amino acids absorbed. On starch-poor diets, they also provide the predominant source of glucose absorbed from the small intestinal contents.
Human uses
The feed contained within the reticulorumen, known as "paunch waste", has been studied as a fertiliser for use in sustainable agriculture.
Development
At birth, the rumen organ, rumen epithelium, and rumen microbiota are not fully developed and are metabolically nonfunctional. The developing rumen does not display the level of keratinization seen in the mature organ. Generally, the most receptive time for rumen development is between the postnatal and weaning periods. Over this period, rumen organ and epithelium growth, along with the establishment of rumen microbiota, will prove to be essential to rumen development. This process is influenced by the introduction of solid food and the establishment of fermentation in the rumen. Additionally, there must be an adequate amount of short chain fatty acids, produced during fermentation, to properly develop the papillae.
Papillae growth in rumen epithelium is essential for rumen functionality. Papillae increase the surface area inside of the rumen and allow for a considerable increase in nutrient absorption inside of the rumen. Distinguishing a developed from an undeveloped rumen is simplified by observing the carpeting of tissue surrounding the interior of the rumen, as an undeveloped rumen maintains a smooth, papillae-lacking outer surface, and a developed rumen possesses thick, papillae-full walls.
Due to ruminants being born with a sterile gastrointestinal tract, the developing rumen must be exposed to an array of microflora at an early stage. Specific diets in which microflora promote an anaerobic environment suitable for fermentation in the rumen are favored. Furthermore, feeds must be tailored to the needs of the specific ruminants, as developing ruminants who have been on a strict liquid feed diet will possess different microflora when compared to that of a developing ruminant fed with a combination of a dry and liquid feed. This is due to the nutrients ingested by the animal not entering into the rumen stomach compartment, as it is instead bypassed by the reflexive closure of the esophageal groove.
The most abundant bacteria present in the rumen microbiome include Prevotella, Butyrivibrio, and Ruminococcus. This is due to ruminant organisms ingesting high-forage, commonly grass-based diets. Their typical high-forage diets cause this significant demand for cellulose digesting bacteria to be ever-present. Other bacteria, such as Lachnospira multiparus, Prevotella ruminicola, and Butyrivibrio fibrisolvens, play essential roles in the creation of volatile fatty acids (VFAs). Specific feeds can stimulate this extensive bacterial growth in the rumen and therefore aid in the production of these volatile fatty acids, which play a major role in rumen epithelium growth, capillary development, and papillae formation. Previous research identified the significant impact of volatile fatty acids on rumen development through the effects of the inter-ruminal insertion of acetate, propionate, and butyrate. The most visually notable and impactful of these volatile fatty acids was butyrate, which is synthesized naturally in ruminants through multiple anaerobic fermentation pathways of dietary substrates. Butyrate, mainly expressed in epithelial tissue lining, is involved in regulating a plethora of ruminant epithelial cell genes. Generally, butyrate regulates gene expression by acting on cell cycle control pathways. In the epithelial wall of the rumen, butyrate regulates epithelial cell gene expression to increase blood flow and papilla proliferation.
Rumen microbiome genetics
Developing feeds to support the microbiome growth of both production and pet ruminant animals is vital; both for the overall health of the maturing animal and for reducing the costs associated with raising that animal. In the production animal realm, feeding can account for up to 75% of the overall cost associated with that animal, making it crucial to identify and satisfy the nutritional demands of the rumen. Sampling microbial DNA from rumen epithelial cells has led to the identification of microbial genes and functional pathways associated with animal growth factors. Microbial clusters in the rumen possess genes associated with many animal growth-related factors. Protein encoding genes that encode for bacterial cell functions, such as aguA, ptb, K01188, and murD, also are associated with the animal’s average daily weight gain. Furthermore, vitamin B12 related genes, including cobD, tolC, and fliN, are also related to the daily feed intake of the animal.
References
Digestive system
Ruminants
Mammal anatomy | Rumen | [
"Biology"
] | 3,529 | [
"Digestive system",
"Organ systems"
] |