Columns:
id (int64): values 580 to 79M
url (string): lengths 31 to 175
text (string): lengths 9 to 245k
source (string): lengths 1 to 109
categories (string): 160 classes
token_count (int64): values 3 to 51.8k
15,897,912
https://en.wikipedia.org/wiki/T%20Cephei
T Cephei is a Mira variable star in the constellation Cepheus. Located approximately distant, it varies between magnitudes 5.2 and 11.3 over a period of around 388 days. When it is near its maximum brightness, it is faintly visible to the naked eye under good observing conditions. Vitold Ceraski announced his discovery that the star is a variable star in 1879. It appeared with its variable star designation, T Cephei, in Annie Jump Cannon's 1907 publication Second catalogue of variable stars. T Cephei is a red giant of spectral type M6-9e with an effective temperature of 2,400 K, a radius of , a mass of , and a luminosity of . If it were in the place of the Sun, its photosphere would at least engulf the orbit of Mars. This star is believed to be in a late stage of its life, the asymptotic giant branch phase, blowing off its own atmosphere to form a white dwarf in the distant future. See also VV Cephei R Cancri Mira S Cassiopeiae References External links T Cephei in VizieR Cepheus (constellation) Mira variables Cephei, T 202012 M-type giants 8113 104451 Durchmusterung objects Emission-line stars
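The quoted magnitude range translates into a brightness ratio through the standard logarithmic (Pogson) magnitude scale, where a difference of Delta_m magnitudes corresponds to a flux ratio of 10 ** (Delta_m / 2.5). A minimal sketch using the figures from the article:

```python
# Brightness ratio implied by a variable star's magnitude range.
# The magnitude scale is logarithmic: a difference of Delta_m magnitudes
# corresponds to a flux ratio of 10 ** (Delta_m / 2.5).

def flux_ratio(m_faint, m_bright):
    """How many times brighter the star is at m_bright than at m_faint."""
    return 10 ** ((m_faint - m_bright) / 2.5)

# T Cephei's quoted range: magnitude 5.2 at maximum, 11.3 at minimum.
ratio = flux_ratio(11.3, 5.2)
print(f"T Cephei is about {ratio:.0f} times brighter at maximum than at minimum")
```

For the 6.1-magnitude range quoted, the star is roughly 275 times brighter at maximum than at minimum.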
T Cephei
Astronomy
271
52,136,480
https://en.wikipedia.org/wiki/Skeletocutis%20amorpha
Skeletocutis amorpha is a species of poroid fungus in the family Polyporaceae, and the type species of the genus Skeletocutis. Taxonomy The fungus was first described as new to science in 1815 by Elias Magnus Fries as Polyporus amorphus. It has since acquired an extensive synonymy. Czech mycologists František Kotlába and Zdeněk Pouzar transferred it to the genus Skeletocutis in 1958. Description Fruit bodies are effused-reflexed (crust-like with the edges curled out into rudimentary caps), or, more rarely, completely crust-like. Its spores are allantoid (sausage-shaped), and measure 3–4.5 by 1.3–1.8 μm. Habitat and distribution A widely distributed fungus, S. amorpha is found in Africa, Australia, China, and Europe. It causes a white rot in the dead wood of various species of the pine family, particularly pine, but also fir, larch, and spruce. Rarely, it grows on hardwoods such as alder, beech, and oak. References Fungi described in 1815 Fungi of Africa Fungi of Asia Fungi of Australia Fungi of Europe amorpha Taxa named by Elias Magnus Fries Fungus species
Skeletocutis amorpha
Biology
263
7,299,890
https://en.wikipedia.org/wiki/Structural%20semantics
Structural semantics (also structuralist semantics) is a linguistic school and paradigm that emerged in Europe from the 1930s, inspired by the structuralist linguistic movement started by Ferdinand de Saussure's 1916 work Cours de linguistique générale (Course in General Linguistics). Examples of approaches within structural semantics are lexical field theory (1931–1960s), relational semantics (from the 1960s by John Lyons) and componential analysis (from the 1960s by Eugenio Coseriu, Bernard Pottier and Algirdas Greimas). From the 1960s these approaches were incorporated into generative linguistics. Other prominent developers of structural semantics have been Louis Hjelmslev, Émile Benveniste, Klaus Heger, Kurt Baldinger and Horst Geckeler. Logical positivism asserts that structural semantics is the study of relationships between the meanings of terms within a sentence, and how meaning can be composed from smaller elements. However, some critical theorists suggest that meaning is only divided into smaller structural units via its regulation in concrete social interactions; outside of these interactions, language may become meaningless. Structural semantics is the branch that marked the modern linguistics movement started by Ferdinand de Saussure at the turn of the 20th century in his posthumously published Cours de linguistique générale. He posits that language is a system of interrelated units and structures and that every unit of language is related to the others within the same system. His position later became the foundation for other theories such as componential analysis and relational predicates. Structuralism is an influential approach within semantics, as it accounts for the concordance in the meaning of certain words and utterances. The concept of sense relations as a means of semantic interpretation is an offshoot of this theory as well.
Structuralism has shaped semantics into its present state, and it also aids the correct understanding of other aspects of linguistics. Consequential fields of structuralism in linguistics include sense relations (both lexical and sentential), among others. See also Prototype Semantics Cognitive Semantics Cognitive Linguistics Principle of compositionality Ferdinand de Saussure Algirdas Julien Greimas References Logical positivism Semantics Structuralism
Structural semantics
Mathematics
455
174,721
https://en.wikipedia.org/wiki/Village%20green
A village green is a common open area within a village or other settlement. Historically, a village green was common grassland with a pond for watering cattle and other stock, often at the edge of a rural settlement, used for gathering cattle before moving them on to common land for grazing. Later, planned greens were built into the centres of villages. The village green also provided, and may still provide, an open-air meeting place for the local people, which may be used for public celebrations such as May Day festivities. The term is used more broadly to encompass woodland, moorland, sports grounds, buildings, roads and urban parks. History Most village greens in England originated in the Middle Ages. Individual greens may have been created for various reasons, including protecting livestock from wild animals or human raiders during the night, or providing a space for market trading. In most cases where a village green is planned, it is placed in the centre of a settlement. Village greens can also be formed when a settlement expands to the edge of an existing area of common land, or when an area of waste land between two settlements becomes developed. Some historical village greens have been lost as a result of the agricultural revolution and urban development. Greens are now most likely to be found in the older villages of mainland Europe, the United Kingdom, and older areas of the United States. Some greens that used to be commons, or otherwise at the centres of villages, have been swallowed up by a city growing around them. Sometimes they become a city park or a square and manage to maintain a sense of place. London has several of these; one example is Newington Green, with Newington Green Unitarian Church anchoring the northern end.
In mid-20th-century England, town expansion led to the formation of local conservation societies, often centring on village green preservation, as celebrated and parodied in The Kinks' album The Kinks Are the Village Green Preservation Society. The Open Spaces Society is a present-day UK national campaigning body that continues this movement. Examples United States In the United States, the most famous example of a town green is probably the New Haven Green in New Haven, Connecticut. New Haven was founded by settlers from England and was the first planned city in the United States. Originally used for grazing livestock, the Green dates from the 1630s and is well preserved today despite lying at the heart of the city centre. The largest green in the U.S. is a mile in length and can be found in Lebanon, Connecticut. This is the only village green in the United States still used for agriculture. One of the most unusual examples is the Dartmouth College Green in Hanover, New Hampshire, which was owned and cleared by the college in 1770. The college, not the town, still owns it and surrounded it with buildings as a sort of collegiate quadrangle in the 1930s, although its origin as a town green remains apparent. An example of a traditional American town green exists in downtown Morristown, NJ. The Morristown Green dates from 1715 and has hosted events ranging from executions to clothing drives. There are two places in the United States called Village Green: Village Green-Green Ridge, Pennsylvania, and Village Green, New York. Some New England towns, along with some areas settled by New Englanders such as the townships in the Connecticut Western Reserve, refer to their town square as a village green. The village green of Bedford, New York, is preserved as part of Bedford Village Historic District. Europe A notable example of a village green is that in the village of Finchingfield in Essex, England, which is said to be "the most photographed village in England". 
The green dominates the village, and slopes down to a duck pond, and is occasionally flooded after heavy rain. The small village of Car Colston in Nottinghamshire, England, has two village greens, totaling 29 acres (12 ha), and the village of Burton Leonard in North Yorkshire has three. The Open Spaces Society states that in 2005 there were about 3,650 registered greens in England covering and about 220 in Wales covering about . The northern part of the province of Drenthe in the Netherlands is also known for its village greens. Zuidlaren is the village with the largest number of village greens in the Netherlands. The Błonia Park, originally established in the Middle Ages, is an example of a large village green in Kraków, Poland. Indonesia In Indonesia, especially in Java, a similar place is called Alun-Alun. It is a central part of Javanese village architecture and culture. Legal definitions England and Wales Apart from the general use of the term, village green has a specific legal meaning in England and Wales, and also includes the less common term town green. Town and village greens were defined in the Commons Registration Act 1965, as amended by the Countryside and Rights of Way Act 2000, as land: which has been allotted by or under any act for the exercise or recreation of the inhabitants of any locality or on which the inhabitants of any locality have a customary right to indulge in lawful sports and pastimes or if it is land on which for not fewer than twenty years a significant number of the inhabitants of any locality, or of any neighbourhood within a locality, have indulged in lawful sports and pastimes as of right. Registered greens in England and Wales are now governed by the Commons Act 2006, but the fundamental test of whether land is a town and village green remains the same. Thus land can become a village green if it has been used for twenty years without force, secrecy or request (nec vi, nec clam, nec precario). 
Village green legislation is often used to try to frustrate development. Recent case law (Oxfordshire County Council vs Oxford City Council and Robinson) makes it clear that registration as a green would make any development that prevented continuing use of the green a criminal offence under the Inclosure Act 1857 and the Commons Act 1876 (39 & 40 Vict. c. 56). This has led to some curious areas being claimed as village greens, sometimes with success. Recent examples include a bandstand, two lakes and a beach. On 11 December 2019, a Supreme Court decision put the future of some village greens at risk in England and Wales. The case involved five fields (13 hectares) in south Lancaster, the Moorside Fields, owned by Lancashire County Council. The lands had been available for public use for over 50 years. According to the Commons Act 2006, land used for informal recreation for at least 20 years can be registered as green and is then protected from development. (Granted, the Growth and Infrastructure Act 2013 specified that land designated for planning applications could not be registered as a village green, but that did not apply in the Moorside Fields case.) The Moorside Fields Community Group attempted to register the lands in 2016 under the Commons Act 2006. The local authority challenged the registration, wanting to retain control of the lands for future expansion of the nearby Moorside Primary School's playing fields. The council's challenge failed in the High Court and then in the Court of Appeal; the registration of the land as a village green could proceed. Lancashire County Council subsequently appealed to the UK Supreme Court. In the appeal decision, cited as R (on the application of Lancashire County Council) (Appellant) v Secretary of State for the Environment, Food and Rural Affairs (Respondent), the court overturned the previous judgments.
At the same time, the Supreme Court also ruled against the registration of lands in a separate case in Surrey involving the 2.9-hectare Leach Grove Wood at Leatherhead, owned by the National Health Service. After publication of the decision in the Moorside Fields case, Lancashire County Council told the news media that the court had "protect[ed] this land for future generations". In effect, the Supreme Court decision left lands held by public authorities under their statutory powers open to development for any purpose that they deem appropriate. This could have far-reaching ramifications in England and Wales, according to the Open Spaces Society, a national conservation group that was founded in 1865. A representative made this comment to The Guardian: "This is a deeply worrying decision as it puts at risk countless publicly owned green spaces which local people have long enjoyed, but which, unknown to them, are held for purposes which are incompatible with recreational use". Gallery See also Common land Park Town square Urban green space References External links The Open Spaces Society—gives UK information on how to claim a village green. Town Greens of Connecticut—historical information on the town greens that are found in almost every Connecticut town Green Urban studies and planning terminology Landscape history Landscape Parks Common land Grasslands
Village green
Biology
1,756
6,148,242
https://en.wikipedia.org/wiki/Magnalium
Magnalium is an aluminium alloy containing magnesium; compositions range from about 5% magnesium in structural grades to about 50% in pyrotechnic grades. Properties Alloys with small amounts of magnesium (about 5%) exhibit greater strength, greater corrosion resistance, and lower density than pure aluminium. Such alloys are also more workable and easier to weld than pure aluminium. Alloys with high amounts of magnesium (around 50%) are brittle and more susceptible to corrosion than aluminium. Uses Although they are generally more expensive than aluminium, the high strength, low density, and greater workability of alloys with low amounts of magnesium leads to their use in aircraft and automobile parts. It is also used for making balance beams and other components of light instruments. Pyrotechnics Alloys with about 50% magnesium (generally not true alloys but intermetallic compounds) are brittle and corrode easily, which makes them unsuitable for most engineering uses. These compounds are flammable when powdered, are more resistant to corrosion than pure magnesium, and are generally more reactive than pure aluminium; they are used in pyrotechnics as a metal fuel for colored flames, to produce sparks in some glitter and streamer stars, and as a more consistent replacement for separate magnesium and aluminium powders in crackling microstars (dragon eggs). Magnalium powder also burns with a crackling sound if burnt by itself. It is somewhat less reactive than magnesium in most cases, showing no reaction with sulfur in particular, but is nearly as reactive as magnesium with antimony trisulfide (producing the extremely poisonous and flammable hydrogen sulfide gas) and more dangerously reactive with nitrates, slowly reacting to produce ammonia gas where magnesium alone reacts slowly to produce inert products. In some compositions, the normally faster-reacting magnesium component of magnalium serves to slow down the overall reaction precisely because of its higher reactivity.
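The composition figures above are weight percentages; for the intermetallic behavior of high-magnesium magnalium, the atomic ratio is what matters. A minimal sketch converting weight percent to atomic percent using standard atomic masses (Mg 24.305 g/mol, Al 26.982 g/mol); the function name is illustrative:

```python
# Convert an alloy's weight-percent composition to atomic percent.
ATOMIC_MASS = {"Mg": 24.305, "Al": 26.982}  # g/mol, standard atomic masses

def atomic_percent(weight_pct):
    """Map {element: weight %} to {element: atomic %}."""
    moles = {el: w / ATOMIC_MASS[el] for el, w in weight_pct.items()}
    total = sum(moles.values())
    return {el: 100 * n / total for el, n in moles.items()}

# A 50:50 magnalium by weight is slightly magnesium-rich by atom count,
# because Mg atoms are lighter than Al atoms.
print(atomic_percent({"Mg": 50.0, "Al": 50.0}))
```

For the 50:50 weight composition this gives roughly 52.6 atomic % magnesium to 47.4 atomic % aluminium, i.e. close to, but not exactly, a 1:1 atomic ratio.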
This occurs in dragon eggs, where slower oxidation of magnesium by lead tetraoxide (Pb3O4) allows time for the formed lead monoxide (PbO) gas to build up without reacting with the aluminium portion, so when the magnesium is finally consumed the aluminium reaction occurs rapidly enough to produce an explosion. If too much magnesium is present in the alloy (or added to the mix), it will burn continuously but not produce the desired effect. Similarly, too little magnesium will prevent enough vapor from building up to react rapidly, and the aluminium will simply burn. See also Magnesium alloy Birmabright References External links Making Magnalium http://www.thegreenman.me.uk/pro/magnalium.html Aluminium–magnesium alloys Aluminium alloys Magnesium alloys Pyrotechnic fuels
Magnalium
Chemistry
527
21,890,097
https://en.wikipedia.org/wiki/Multilinear%20polynomial
In algebra, a multilinear polynomial is a multivariate polynomial that is linear (meaning affine) in each of its variables separately, but not necessarily simultaneously. It is a polynomial in which no variable occurs to a power of two or higher; that is, each monomial is a constant times a product of distinct variables. For example is a multilinear polynomial of degree (because of the monomial ) whereas is not. The degree of a multilinear polynomial is the maximum number of distinct variables occurring in any monomial. Definition Multilinear polynomials can be understood as a multilinear map (specifically, a multilinear form) applied to the vectors [1 x], [1 y], etc. The general form can be written as a tensor contraction: For example, in two variables: Properties A multilinear polynomial is linear (affine) when varying only one variable: where and do not depend on . Note that is generally not zero, so is linear in the "shaped like a line" sense, but not in the "directly proportional" sense of a multilinear map. All repeated second partial derivatives are zero: in other words, its Hessian matrix is a symmetric hollow matrix. In particular, the Laplacian , so is a harmonic function. This implies has maxima and minima only on the boundary of the domain. More generally, every restriction of to a subset of its coordinates is also multilinear, so still holds when one or more variables are fixed. In other words, is harmonic on every "slice" of the domain along coordinate axes. On a rectangular domain When the domain is rectangular in the coordinate axes (e.g. a hypercube), will have maxima and minima only on the vertices of the domain, i.e. the finite set of points with minimal and maximal coordinate values. The value of the function on these points completely determines the function, since the value on the edges of the boundary can be found by linear interpolation, and the value on the rest of the boundary and the interior is fixed by Laplace's equation, .
The value of the polynomial at an arbitrary point can be found by repeated linear interpolation along each coordinate axis. Equivalently, it is a weighted mean of the vertex values, where the weights are the Lagrange interpolation polynomials. These weights also constitute a set of generalized barycentric coordinates for the hyperrectangle. Geometrically, the point divides the domain into smaller hyperrectangles, and the weight of each vertex is the (fractional) volume of the hyperrectangle opposite it. Algebraically, the multilinear interpolant on the hyperrectangle is: where the sum is taken over the vertices . Equivalently, where V is the volume of the hyperrectangle. The value at the center is the arithmetic mean of the value at the vertices, which is also the mean over the domain boundary, and the mean over the interior. The components of the gradient at the center are proportional to the balance of the vertex values along each coordinate axis. The vertex values and the coefficients of the polynomial are related by a linear transformation (specifically, a Möbius transform if the domain is the unit hypercube , and a Walsh-Hadamard-Fourier transform if the domain is the symmetric hypercube ). Applications Multilinear polynomials are the interpolants of multilinear or n-linear interpolation on a rectangular grid, a generalization of linear interpolation, bilinear interpolation and trilinear interpolation to an arbitrary number of variables. This is a specific form of multivariate interpolation, not to be confused with piecewise linear interpolation. The resulting polynomial is not a linear function of the coordinates (its degree can be higher than 1), but it is a linear function of the fitted data values. The determinant, permanent and other immanants of a matrix are homogeneous multilinear polynomials in the elements of the matrix (and also multilinear forms in the rows or columns).
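The vertex-weighted interpolation just described can be sketched directly for the unit hypercube [0,1]^n, where each vertex's weight is the fractional volume of the sub-box opposite it (names below are illustrative):

```python
from itertools import product

def multilinear_interpolate(vertex_values, point):
    """Interpolate on the unit hypercube [0,1]^n.

    vertex_values maps each vertex (a tuple of 0s and 1s) to f(vertex).
    The weight of a vertex is the product over axes of x_i (where the
    vertex has a 1) or 1 - x_i (where it has a 0): the volume of the
    sub-box opposite the vertex.
    """
    n = len(point)
    total = 0.0
    for vertex in product((0, 1), repeat=n):
        weight = 1.0
        for xi, vi in zip(point, vertex):
            weight *= xi if vi == 1 else (1.0 - xi)
        total += weight * vertex_values[vertex]
    return total

# Bilinear example: f(x, y) = 1 + 2x + 3y + 4xy is itself multilinear,
# so interpolating its corner values reproduces it exactly.
f = lambda x, y: 1 + 2*x + 3*y + 4*x*y
corners = {(i, j): f(i, j) for i, j in product((0, 1), repeat=2)}
print(multilinear_interpolate(corners, (0.25, 0.5)))  # equals f(0.25, 0.5)
```

Because the interpolant of a multilinear polynomial's own vertex values is that polynomial, the printed value matches f(0.25, 0.5) = 3.5; the value at the center (0.5, 0.5) likewise equals the arithmetic mean of the four corner values, as stated above.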
The multilinear polynomials in variables form a -dimensional vector space, which is also the basis used in the Fourier analysis of (pseudo-)Boolean functions. Every (pseudo-)Boolean function can be uniquely expressed as a multilinear polynomial (up to a choice of domain and codomain). Multilinear polynomials are important in the study of polynomial identity testing. See also Bilinear and trilinear interpolation, using multivariate polynomials with two or three variables Zhegalkin polynomial, a multilinear polynomial over Multilinear form and multilinear map, multilinear functions that are strictly linear (not affine) in each variable Linear form, a multivariate linear function Harmonic polynomial References Polynomials
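The linear relation between vertex values on {0,1}^n and polynomial coefficients mentioned above can be made concrete: on the unit hypercube it is the Möbius transform, c_S = sum over subsets T of S of (-1)^(|S|-|T|) f(1_T). A small sketch (names illustrative; the O(4^n) loop is for clarity, not efficiency):

```python
from itertools import product

def multilinear_coefficients(vertex_values, n):
    """Recover coefficients c_S of f(x) = sum_S c_S * prod_{i in S} x_i
    from values on {0,1}^n by Mobius inversion:
    c_S = sum_{T subset of S} (-1)^(|S|-|T|) * f(indicator of T)."""
    coeffs = {}
    for s in product((0, 1), repeat=n):
        c = 0
        for t in product((0, 1), repeat=n):
            if all(ti <= si for ti, si in zip(t, s)):  # t is a subset of s
                c += (-1) ** (sum(s) - sum(t)) * vertex_values[t]
        coeffs[s] = c
    return coeffs

# f(x, y) = 1 + 2x + 3y + 4xy: recover the coefficients 1, 2, 3, 4
# from the four corner values alone.
f = lambda x, y: 1 + 2*x + 3*y + 4*x*y
values = {(i, j): f(i, j) for i, j in product((0, 1), repeat=2)}
print(multilinear_coefficients(values, 2))
# {(0, 0): 1, (0, 1): 3, (1, 0): 2, (1, 1): 4}
```

Each key marks which variables appear in the monomial, so (1, 0) is the coefficient of x and (0, 1) the coefficient of y; this inversion is one concrete form of the unique multilinear representation of a pseudo-Boolean function noted above.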
Multilinear polynomial
Mathematics
1,005
73,192,304
https://en.wikipedia.org/wiki/HD%20170873
HD 170873, also known as HR 6954 or rarely 19 G. Telescopii, is a solitary orange-hued star located in the southern constellation Telescopium. It has an apparent magnitude of 6.20, placing it near the limit for naked eye visibility. Gaia DR3 parallax measurements imply a distance of 551 light years and it is currently receding with a heliocentric radial velocity of . At its current distance, HD 170873's brightness is diminished by 0.39 magnitudes due to interstellar dust, and it has an absolute magnitude of −0.31. HD 170873 is an evolved red giant with a stellar classification of K2 III. It has 3.2 times the mass of the Sun but at the age of 318 million years, it has expanded to 22.6 times the Sun's radius. It radiates 170 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of . It has a near-solar metallicity at [Fe/H] = −0.01 and spins too slowly for its projected rotational velocity to be measured accurately. References K-type giants Telescopium Telescopii, 19 CD-52 08714 170873 091062 6954
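The apparent magnitude, distance, extinction, and absolute magnitude quoted above are tied together by the distance modulus relation M = m - 5 log10(d_pc / 10) - A. A sketch checking the article's figures (function name illustrative):

```python
import math

# Distance modulus with extinction:
#   M = m - 5 * log10(d_pc / 10) - A
# m: apparent magnitude, d_pc: distance in parsecs,
# A: interstellar extinction in magnitudes.

LY_PER_PC = 3.2616  # light-years per parsec

def absolute_magnitude(m, d_ly, extinction):
    d_pc = d_ly / LY_PER_PC
    return m - 5 * math.log10(d_pc / 10) - extinction

# Values quoted above for HD 170873: m = 6.20, d = 551 ly, A = 0.39.
M = absolute_magnitude(6.20, 551, 0.39)
print(f"M = {M:+.2f}")
```

This yields about -0.33, within a few hundredths of the article's -0.31; the small residual reflects rounding of the quoted distance.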
HD 170873
Astronomy
274
71,515,469
https://en.wikipedia.org/wiki/Leucocoprinus%20magnusianus
Leucocoprinus magnusianus is a species of mushroom-producing fungus in the family Agaricaceae. Taxonomy It was first described in 1891 by the German mycologist Paul Christoph Hennings who classified it as Lepiota magnusiana. The German mycologist Rolf Singer reclassified it as Hiatula magnusiana in 1943 and then as Leucocoprinus magnusianus in 1949. Description Lepiota magnusiana only has a brief description provided by Hennings. It is described as having red flesh and a stem which discolours reddish. This is a common feature in Lepiota species but would be unusual for a Leucocoprinus. Cap: 1–5 cm wide, starting ovoid or cylindrical before expanding to campanulate (bell shaped) with thin flesh. The surface is white and covered in powdery scales whilst the centre discolours yellow with age and the cap edges are striated. Gills: Free, crowded and white. Stem: 2–7 cm tall and 2–3 mm thick with a white surface that discolours reddish and a hollow interior. The membranous stem ring has a woolly (floccose) texture. Spores: Ovoid. Smell: Indistinct. Taste: Indistinct. Habitat and distribution L. magnusianus is scarcely recorded and little known. Hennings documented it from a hothouse in the Berlin botanical garden. References Leucocoprinus Taxa named by Rolf Singer Fungi described in 1891 Fungus species
Leucocoprinus magnusianus
Biology
316
833,572
https://en.wikipedia.org/wiki/Internal%20ballistics
Internal ballistics (also interior ballistics), a subfield of ballistics, is the study of the propulsion of a projectile. In guns, internal ballistics covers the time from the propellant's ignition until the projectile exits the gun barrel. The study of internal ballistics is important to designers and users of firearms of all types, from small-bore rifles and pistols to artillery. For rocket-propelled projectiles, internal ballistics covers the period during which a rocket motor is providing thrust. General concepts Interior ballistics can be considered in three time periods: Lock time - the time from sear release until the primer is struck Ignition time - the time from when the primer is struck until the projectile starts to move Barrel time - the time from when the projectile starts to move until it exits the barrel. The burning firearm propellant produces energy in the form of hot gases that raise the chamber pressure which applies a force on the base of the projectile, causing it to accelerate. The chamber pressure depends on the amount of propellant that has burned, the temperature of the gases, and the volume of the chamber. The burn rate of the propellant depends on the chemical makeup and shape of the propellant grains. The temperature depends on the energy released and the heat loss to the sides of the barrel and chamber. As the projectile travels down the barrel, the volume the gas occupies behind the projectile increases. Some energy is lost in deforming the projectile and causing it to spin. There are also frictional losses between the projectile and the barrel. The projectile, as it travels down the barrel, compresses the air in front of it, which adds resistance to its forward motion. The breech and the barrel must resist the high-pressure gases without damage. Although the pressure initially rises to a high value, the pressure starts dropping when the projectile has traveled some distance down the barrel.
Consequently, the muzzle end of the barrel does not need to be as strong as the chamber end. Mathematical models have been developed for these processes. The four general concepts which are calculated in interior ballistics are: Energy - released by the propellant Motion - the relation between the projectile acceleration and the pressure on its base. Burning rate - a function of the propellant surface area and an empirically derived burning rate coefficient which is unique to the propellant. Form function - a burning rate modifying coefficient that includes the shape of the propellant. History Internal ballistics was not scientifically based prior to the mid-1800s. Barrels and actions were built strong enough to survive a known overload (Proof test). Muzzle velocity was surmised from the distance the projectile traveled. In the 1800s test barrels began to be instrumented. Holes were drilled in the barrel and fitted with standardized steel pistons that compressed standardized copper cylinders when the firearm discharged. The reduction in the copper cylinder length is used as an indication of peak pressure, known as "Copper Units of Pressure", or "CUP" for high pressure firearms. Similar standards were applied to firearms with lower peak pressures, typically common handguns, with test cylinder pellets made of more easily deformed lead cylinders, hence "Lead Units of Pressure", or "LUP". The measurement only indicated the maximum pressure that was reached at that point in the barrel. Piezoelectric strain gauges were introduced in the 1960s, allowing instantaneous pressures to be measured without destructive pressure ports. The Army Research Laboratory developed instrumented projectiles that measure the pressure at the base of the projectile and its acceleration. Priming methods Methods of igniting the propellant evolved over time.
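The interplay of energy release, burning rate, and projectile motion described above can be illustrated with a deliberately crude numerical model. Every number and simplification here is an assumption for illustration only (constant burn rate, a single lumped gas-energy constant, no friction, heat loss, or start pressure); it shows only the qualitative behavior: pressure rises while propellant burns, then falls as the projectile moves down the bore.

```python
# Toy lumped-parameter interior-ballistics sketch. All values are
# illustrative assumptions, not real load data.

bore_area   = 8e-5   # m^2 (roughly a 10 mm bore)
proj_mass   = 0.008  # kg, projectile
charge      = 0.002  # kg of propellant
burn_rate   = 2.0    # kg/s while unburned propellant remains (assumed constant)
gas_energy  = 4.0e5  # J/kg, lumped constant k in P * V = burned_mass * k (assumption)
chamber_vol = 4e-6   # m^3 of initial free volume
barrel_len  = 0.5    # m of projectile travel

dt = 1e-6  # s, explicit Euler time step
t = x = v = burned = 0.0
while x < barrel_len:
    burned = min(charge, burned + burn_rate * dt)   # propellant -> gas
    volume = chamber_vol + bore_area * x            # space behind projectile grows
    pressure = burned * gas_energy / volume         # so pressure falls after burnout
    v += (pressure * bore_area / proj_mass) * dt    # a = F/m with F = P * A
    x += v * dt
    t += dt

print(f"muzzle velocity ~ {v:.0f} m/s after {t * 1e3:.2f} ms in the bore")
```

Real interior-ballistics codes replace the constant burn rate with a pressure-dependent burning-rate law and a form function for the grain geometry, exactly the four concepts listed above.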
A small hole (a touch hole) was drilled into the breech, into which priming powder was poured, and an external flame or spark was applied (see matchlock and flintlock). Percussion caps, and later the primers of self-contained cartridges, detonate after mechanical deformation, igniting the propellant. Propellants Black powder Gunpowder (Black powder) is a finely ground, pressed and granulated mechanical pyrotechnic mixture of sulfur, charcoal, and potassium nitrate or sodium nitrate. It can be produced in a range of grain sizes. The size and shape of the grains can increase or decrease the relative surface area, and change the burning rate significantly. The burning rate of black powder is relatively insensitive to pressure, meaning it will burn quickly and predictably even without confinement, making it also suitable for use as a low explosive. It has a very slow decomposition rate, and therefore a very low brisance. It is not, in the strictest sense of the term, an explosive, but a "deflagrant", as it does not detonate but decomposes by deflagration due to its subsonic mechanism of flame-front propagation. Nitrocellulose (single-base propellants) Nitrocellulose or "guncotton" is formed by the action of nitric acid on cellulose fibers. It is a highly combustible fibrous material that deflagrates rapidly when heat is applied. It also burns very cleanly, burning almost entirely to gaseous components at high temperatures with little smoke or solid residue. Gelatinised nitrocellulose is a plastic, which can be formed into cylinders, tubes, balls, or flakes known as single-base propellants. The size and shape of the propellant grains can increase or decrease the relative surface area, and change the burn rate significantly. Additives and coatings can be added to the propellant to further modify the burn rate.
Normally, very fast powders are used for light-bullet or low-velocity pistols and shotguns, medium-rate powders for magnum pistols and light rifle rounds, and slow powders for large-bore heavy rifle rounds. Double-base propellants Nitroglycerin can be added to nitrocellulose to form "double-base propellants". Nitrocellulose desensitizes nitroglycerin to prevent detonation in propellant-sized grains, (see dynamite), and the nitroglycerin gelatinises the nitrocellulose and increases the energy. Double-base powders burn faster than single-base powders of the same shape, though not as cleanly, and burn rate increases with nitroglycerin content. In artillery, Ballistite or Cordite has been used in the form of rods, tubes, slotted-tube, perforated-cylinder or multi-tubular; the geometry being chosen to provide the required burning characteristics. (Round balls or rods, for example, are "degressive-burning" because their production of gas decreases with their surface area as the balls or rods burn smaller; thin flakes are "neutral-burning," since they burn on their flat surfaces until the flake is completely consumed. The longitudinally perforated or multi-perforated cylinders used in large, long-barreled rifles or cannon are "progressive-burning;" the burning surface increases as the inside diameter of the holes enlarges, giving sustained burning and a long, continuous push on the projectile to produce higher velocity without increasing the peak pressure unduly. Progressive-burning powder compensates somewhat for the pressure drop as the projectile accelerates down the bore and increases the volume behind it.) Solid propellants (caseless ammunition) "Caseless ammunition" incorporates propellant cast as a single solid grain with the priming compound placed in a hollow at the base and the bullet attached to the front.
Since the single propellant grain is so large (most smokeless powders have grain sizes around 1 mm, but a caseless grain will be perhaps 7 mm diameter and 15 mm long), the relative burn rate must be much higher. To reach this rate of burning, caseless propellants often use moderated explosives, such as RDX. The major advantages of a successful caseless round would be elimination of the need to extract and eject the spent cartridge case, permitting higher rates of fire and a simpler mechanism, and also reduced ammunition weight by eliminating the weight (and cost) of the brass or steel case. While there is at least one experimental military rifle (the H&K G11), and one commercial rifle (the Voere VEC-91), that use caseless rounds, they have met with little success. One other commercial rifle was the Daisy VL rifle made by the Daisy Air Rifle Co. and chambered for .22 caliber caseless ammunition that was ignited by a hot blast of compressed air produced by a strong lever-compressed spring, as in an air rifle. The caseless ammunition is of course not reloadable, since there is no casing left after firing the bullet, and the exposed propellant makes the rounds less durable. Also, the case in a standard cartridge serves as a seal, keeping gas from escaping the breech. Caseless arms must use a more complex self-sealing breech, which increases the design and manufacturing complexity. Another unpleasant problem, common to all rapid-firing arms but particularly problematic for those firing caseless rounds, is the problem of rounds "cooking off". This problem is caused by residual heat from the chamber heating the round in the chamber to the point where it ignites, causing an unintentional discharge. To minimize the risk of cartridge cook-off, machineguns can be designed to fire from an open bolt, with the round not chambered until the trigger is pulled, and so there is no chance for the round to cook off before the operator is ready.
Such weapons could use caseless ammunition effectively. Open-bolt designs are generally undesirable for anything but machine guns; the mass of the bolt moving forward causes the gun to lurch in reaction, significantly reducing accuracy, though this is generally not an issue in machine-gun fire. Propellant charge Load density and consistency Load density is the percentage of the space in the cartridge case that is filled with powder. In general, loads close to 100% density (or even loads where seating the bullet in the case compresses the powder) ignite and burn more consistently than lower-density loads. In cartridges surviving from the black-powder era (examples being .45 Colt, .45-70 Government), the case is much larger than is needed to hold the maximum charge of high-density smokeless powder. This extra room allows the powder to shift in the case, piling up near the front or back of the case and potentially causing significant variations in burning rate, as powder near the rear of the case will ignite rapidly but powder near the front of the case will ignite later. This effect has less impact with fast powders. Such high-capacity, low-density cartridges generally deliver best accuracy with the fastest appropriate powder, although this keeps the total energy low due to the sharp high-pressure peak. Magnum pistol cartridges reverse this power/accuracy tradeoff by using lower-density, slower-burning powders that give high load density and a broad pressure curve. The downside is the increased recoil and muzzle blast from the high powder mass, and high muzzle pressure. Most rifle cartridges have a high load density with the appropriate powders. Rifle cartridges tend to be bottlenecked, with a wide base narrowing down to a smaller diameter, to hold a light, high-velocity bullet. These cases are designed to hold a large charge of low-density powder, for an even broader pressure curve than a magnum pistol cartridge. 
These cases require the use of a long rifle barrel to extract their full efficiency, although they are also chambered in rifle-like pistols (single-shot or bolt-action) with barrels of 10 to 15 inches (25 to 38 cm). Chamber Straight vs bottleneck Straight walled cases were the standard from the beginnings of cartridge arms. With the low burning speed of black powder, the best efficiency was achieved with large, heavy bullets, so the bullet was the largest practical diameter. The large diameter allowed a short, stable bullet with high weight, and the maximum practical bore volume to extract the most energy possible in a given length barrel. There were a few cartridges that had long, shallow tapers, but these were generally an attempt to use an existing cartridge to fire a smaller bullet with a higher velocity and lower recoil. With the advent of smokeless powders, it was possible to generate far higher velocities by using a slow smokeless powder in a large volume case, pushing a small, light bullet. The odd, highly tapered 8 mm Lebel, made by necking down an older 11 mm black-powder cartridge, was introduced in 1886, and it was soon followed by the 7.92×57mm Mauser and 7×57mm Mauser military rounds, and the commercial .30-30 Winchester, all of which were new designs built to use smokeless powder. All of these have a distinct shoulder that closely resembles modern cartridges, and with the exception of the Lebel they are still chambered in modern firearms even though the cartridges are over a century old. Aspect ratio and consistency When selecting a rifle cartridge for maximum accuracy, a short, fat cartridge with very little case taper may yield higher efficiency and more consistent velocity than a long, thin cartridge with a lot of case taper (part of the reason for a bottle-necked design). 
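Load density, as defined above, follows directly from the powder charge, the powder's bulk density, and the usable case capacity. A minimal sketch (the cartridge figures below are illustrative assumptions for demonstration, not published load data):

```python
# Estimate load density: the fraction of usable case volume occupied by powder.
# All numeric figures here are illustrative assumptions, not load data.

GRAINS_PER_GRAM = 15.4324

def load_density(charge_grains: float,
                 powder_bulk_density_g_per_cm3: float,
                 case_capacity_cm3: float) -> float:
    """Return load density as a fraction (1.0 = case completely full)."""
    charge_g = charge_grains / GRAINS_PER_GRAM
    powder_volume_cm3 = charge_g / powder_bulk_density_g_per_cm3
    return powder_volume_cm3 / case_capacity_cm3

# A roomy black-powder-era style case with a small smokeless charge leaves
# the powder free to shift (low density); a bottlenecked rifle case filled
# with slow powder sits near 100%.
low = load_density(charge_grains=8.0, powder_bulk_density_g_per_cm3=0.60,
                   case_capacity_cm3=2.8)
high = load_density(charge_grains=40.0, powder_bulk_density_g_per_cm3=0.90,
                    case_capacity_cm3=3.2)
print(f"low-density load:  {low:.0%} of case volume")
print(f"high-density load: {high:.0%} of case volume")
```

In this sketch the first load fills only about a third of the case, the situation the text describes for black-powder-era cases loaded with smokeless powder.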
Given current trends towards shorter and fatter cases, such as the new Winchester Super Short Magnum cartridges, it appears the ideal might be a case approaching spherical inside. Target and vermin hunting rounds require the greatest accuracy, so their cases tend to be short, fat, and nearly untapered with sharp shoulders on the case. Short, fat cases also allow short-action weapons to be made lighter and stronger for the same level of performance. The trade-off for this performance is fat rounds which take up more space in a magazine, sharp shoulders that do not feed as easily out of a magazine, and less reliable extraction of the spent round. For these reasons, when reliable feeding is more important than accuracy, such as with military rifles, longer cases with shallower shoulder angles are favored. There has been a long-term trend, however, even among military weapons, towards shorter, fatter cases. The current 7.62×51mm NATO case replacing the longer .30-06 Springfield is a good example, as is the new 6.5 Grendel cartridge designed to increase the performance of the AR-15 family of rifles and carbines. Nevertheless, there is significantly more to accuracy and cartridge lethality than the length and diameter of the case, and the 7.62×51mm NATO has a smaller case capacity than the .30-06 Springfield, reducing the amount of propellant that can be used, directly reducing the bullet weight and muzzle velocity combination that contributes to lethality (as a comparison of the published cartridge specifications shows). The 6.5 Grendel, on the other hand, is capable of firing a significantly heavier bullet than the 5.56 NATO out of the AR-15 family of weapons, with only a slight decrease in muzzle velocity, perhaps providing a more advantageous performance tradeoff. Friction and inertia Static friction and ignition Since the burning rate of smokeless powder varies directly with the pressure, the initial pressure buildup (i.e. 
"the shot-start pressure"), has a significant effect on the final velocity, especially in large cartridges with very fast powders and relatively light weight projectiles. In small caliber firearms, the friction holding the bullet in the case, determines how soon after ignition the bullet moves, and since the motion of the bullet increases the volume and drops the pressure, a difference in friction can change the slope of the pressure curve. In general, a tight fit is desired, to the extent of crimping the bullet into the case. In straight-walled rimless cases, such as the .45 ACP, an aggressive crimp is not possible, since the case is held in the chamber by the mouth of the case, but sizing the case to allow a tight interference fit with the bullet, can give the desired result. In larger caliber firearms, the shot start pressure is often determined by the force required to initially engrave the projectile driving band into the start of the barrel rifling; smoothbore guns, which do not have rifling, achieve shot start pressure by initially driving the projectile into a "forcing cone" that provides resistance as it compresses the projectile obturation ring. Kinetic friction The bullet must tightly fit the bore to seal the high pressure of the burning gunpowder. This tight fit results in a large frictional force. The friction of the bullet in the bore does have a slight impact on the final velocity, but that is generally not much of a concern. Of greater concern is the heat that is generated due to the friction. At velocities of about , lead begins to melt, and deposit in the bore. This lead build-up constricts the bore, increasing the pressure and decreasing the accuracy of subsequent rounds, and is difficult to scrub out without damaging the bore. Rounds, used at velocities up to , can use wax lubricants on the bullet to reduce lead build-up. 
At velocities over , nearly all bullets are jacketed in copper, or a similar alloy that is soft enough not to wear on the barrel, but melts at a high enough temperature to reduce build-up in the bore. Copper build-up does begin to occur in rounds that exceed , and a common solution is to impregnate the surface of the bullet with molybdenum disulfide lubricant. This reduces copper build-up in the bore, and results in better long-term accuracy. Large caliber projectiles also employ copper driving bands for rifled barrels for spin-stabilized projectiles; however, fin-stabilized projectiles fired from both rifle and smoothbore barrels, such as the APFSDS anti-armor projectiles, employ nylon obturation rings that are sufficient to seal high pressure propellant gasses and also minimize in-bore friction, providing a small boost to muzzle velocity. The role of inertia In the first few centimeters of travel down the bore, the bullet reaches a significant percentage of its final velocity, even for high-capacity rifles, with slow burning powder. The acceleration is on the order of tens of thousands of gravities, so even a projectile as light as can provide over of resistance due to inertia. Changes in bullet mass, therefore, have a huge impact on the pressure curves of smokeless powder cartridges, unlike black-powder cartridges. The loading or reloading of smokeless cartridges thus requires high-precision equipment, and carefully measured tables of load data for given cartridges, powders, and bullet weights. Pressure-velocity relationships Energy is imparted to the bullet in a firearm by the pressure of gases produced by burning propellant. While higher pressures produce higher velocities, pressure duration is also important. Peak pressure may represent only a small fraction of the time the bullet is accelerating. The entire duration of the bullet's travel through the barrel must be considered. Peak vs area Energy is the ability to do work on an object. 
Work is force applied over a distance. The total energy imparted to a bullet is indicated by the area under a curve with the y-axis being force (i.e., the pressure exerted on the base of the bullet multiplied by the area of the base of the bullet) and the x-axis being distance. Increasing the energy of the bullet requires increasing the area under that curve, either by raising the pressure or by increasing the distance the bullet travels under pressure. Pressure is limited by the strength of the firearm, and duration is limited by barrel length. Propellant design Propellants are matched to firearm strength, chamber volume and barrel length; and bullet material, weight and dimensions. The rate of gas generation is proportional to the surface area of burning propellant grains in accordance with Piobert's Law. Smokeless propellant reactions occur in a series of zones or phases as the reaction proceeds from the surface into the solid. The deepest portion of the solid experiencing heat transfer melts and begins phase transition from solid to gas in a foam zone. The gaseous propellant decomposes into simpler molecules in a surrounding fizz zone. Endothermic transformations in the foam zone and fizz zone require energy initially provided by the primer and subsequently released in a luminous outer flame zone where the simpler gas molecules react to form conventional combustion products like steam and carbon monoxide. The heat transfer rate of smokeless propellants increases with pressure, so the rate of gas generation from a given grain surface area increases at higher pressures. Accelerating gas generation from fast burning propellants may rapidly create a destructively high pressure spike before bullet movement increases reaction volume. Conversely, propellants designed for a minimum heat transfer pressure may cease decomposition into gaseous reactants if bullet movement decreases pressure before a slow burning propellant has been consumed. 
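The "area under the curve" view of muzzle energy described at the start of this section can be sketched numerically. The pressure profile and dimensions below are toy assumptions chosen only to show the shape of the relationship, not measured ballistic data:

```python
import math

# Muzzle energy is the area under the force-travel curve:
#   E = integral of p(x) * A dx along the bore,
# where A is the bore cross-section. Here p(x) is a toy exponential decay
# standing in for the pressure remaining behind the bullet at travel x.

def muzzle_energy_joules(bore_area_m2, barrel_len_m, p_peak_pa, decay_m,
                         n=10_000):
    """Midpoint-rule integration of pressure x area over bullet travel."""
    dx = barrel_len_m / n
    energy = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        pressure = p_peak_pa * math.exp(-x / decay_m)  # toy p(x)
        energy += pressure * bore_area_m2 * dx         # F * dx
    return energy

area = math.pi * 0.0045 ** 2   # roughly a 9 mm bore, in m^2
short = muzzle_energy_joules(area, 0.10, 200e6, 0.05)
long_ = muzzle_energy_joules(area, 0.20, 200e6, 0.05)
print(f"doubling barrel length gives {long_ / short:.2f}x the energy")
```

With this toy profile, doubling the barrel adds only about 14% more energy, because most of the pressure is spent early in the bullet's travel; the pressure still present at the muzzle is energy the bullet never receives, exactly the loss the text describes.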
Unburned propellant grains may remain in the barrel if the energy-releasing flame zone cannot be sustained in the resultant absence of gaseous reactants from the inner zones. Propellant burnout Another issue to consider when choosing a powder burn rate is the time the powder takes to completely burn vs. the time the bullet spends in the barrel. On a typical pressure-versus-time plot there is a change in the curve (at about 0.8 ms in one example); this is the point at which the powder is completely burned, and no new gas is created. With a faster powder, burnout occurs earlier, and with the slower powder, it occurs later. Propellant that is unburned when the bullet reaches the muzzle is wasted — it adds no energy to the bullet, but it does add to the recoil and muzzle blast. For maximum power, the powder should burn until the bullet is just short of the muzzle. Since smokeless powders burn rather than detonate, the reaction can take place only on the surface of the powder. Smokeless powders come in a variety of shapes, which serve to determine how fast they burn, and also how the burn rate changes as the powder burns. The simplest shape is a ball powder, which is in the form of round or slightly flattened spheres. Ball powder has a comparatively small surface-area-to-volume ratio, so it burns comparatively slowly, and as it burns, its surface area decreases. This means as the powder burns, the burn rate slows down. To some degree, this can be offset by the use of a retardant coating on the surface of the powder, which slows the initial burn rate and flattens out the rate of change. Ball powders are generally formulated as slow pistol powders or fast rifle powders. Flake powders are in the form of flat, round flakes which have a relatively high surface-area-to-volume ratio. Flake powders have a nearly constant rate of burn, and are usually formulated as fast pistol or shotgun powders. The last common shape is an extruded powder, which is in the form of a cylinder, sometimes hollow. 
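The shape-dependent behaviour described above — degressive balls, neutral flakes, progressive or neutral perforated grains — follows from how each shape's burning surface evolves as material is consumed, per Piobert's law. A rough geometric sketch (idealised grains, arbitrary units, no retardant coatings):

```python
import math

# Surface area of idealised powder grains after the burning front has
# consumed a depth w (the "web") from every exposed surface.

def sphere_area(r0: float, w: float) -> float:
    """Ball powder: the radius shrinks, so surface area (and gas output) falls."""
    r = max(r0 - w, 0.0)
    return 4 * math.pi * r * r

def flake_area(radius: float, t0: float, w: float) -> float:
    """Thin flake burning on both flat faces: area is ~constant until consumed."""
    return 2 * math.pi * radius ** 2 if 2 * w < t0 else 0.0

def perf_cylinder_area(r_out0: float, r_in0: float, length: float,
                       w: float) -> float:
    """Single-perforated cylinder: the hole enlarges as the outside shrinks,
    so the lateral surface stays roughly constant; multi-perforated grains
    gain more inner surface than they lose outside, burning progressively."""
    r_out, r_in = r_out0 - w, r_in0 + w
    if r_out <= r_in:
        return 0.0
    return 2 * math.pi * length * (r_out + r_in)  # outer + inner lateral area

for w in (0.0, 0.1, 0.2):
    print(f"web burned {w:.1f}: "
          f"ball {sphere_area(0.5, w):6.3f}  "
          f"flake {flake_area(0.5, 0.5, w):6.3f}  "
          f"perforated {perf_cylinder_area(0.5, 0.1, 2.0, w):6.3f}")
```

The ball's area falls as it burns (degressive), while the flake and the perforated cylinder hold their area (neutral), matching the behaviour attributed to each shape in the text.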
Extruded powders generally have a lower ratio of nitroglycerin to nitrocellulose, and are often progressive burning — that is, they burn at a faster rate as they burn. Extruded powders are generally medium to slow rifle powders. General concerns Bore diameter and energy transfer A firearm, in many ways, is like a piston engine on the power stroke. There is a certain amount of high-pressure gas available, and energy is extracted from it by making the gas move a piston — in this case, the projectile is the piston. The swept volume of the piston determines how much energy can be extracted from the given gas. The more volume that is swept by the piston, the lower is the exhaust pressure (in this case, the muzzle pressure). Any remaining pressure at the muzzle or at the end of the engine's power stroke represents lost energy. To extract the maximum amount of energy, then, the swept volume is maximized. This can be done in one of two ways — increasing the length of the barrel or increasing the diameter of the projectile. Increasing the barrel length will increase the swept volume linearly, while increasing the diameter will increase the swept volume as the square of the diameter. Since barrel length is limited by practical concerns to about arm's length for a rifle and much shorter for a handgun, increasing bore diameter is the normal way to increase the efficiency of a cartridge. The limit to bore diameter is generally the sectional density of the projectile (see external ballistics). Larger-diameter bullets of the same weight have much more drag, and so they lose energy more quickly after exiting the barrel. In general, most handguns use bullets between .355 (9 mm) and .45 (11.5 mm) caliber, while most rifles generally range from .223 (5.56 mm) to .32 (8 mm) caliber. There are many exceptions, of course, but bullets in the given ranges provide the best general-purpose performance. 
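The piston analogy above can be made concrete: swept volume is just the bore cross-section times barrel length, so it grows linearly with length but with the square of bore diameter. A sketch with hypothetical dimensions:

```python
import math

def swept_volume_cm3(bore_diameter_mm: float, barrel_length_cm: float) -> float:
    """Volume swept by the projectile: bore cross-section times barrel length."""
    radius_cm = bore_diameter_mm / 10 / 2
    return math.pi * radius_cm ** 2 * barrel_length_cm

base = swept_volume_cm3(bore_diameter_mm=9.0, barrel_length_cm=10.0)

# Doubling barrel length doubles swept volume...
print(swept_volume_cm3(9.0, 20.0) / base)
# ...but doubling bore diameter quadruples it.
print(swept_volume_cm3(18.0, 10.0) / base)
```

This is why, with barrel length capped by practicality, a larger bore is the usual route to extracting more of the gas's energy, as the text notes.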
Handguns use the larger-diameter bullets for greater efficiency in short barrels, and tolerate the long-range velocity loss since handguns are seldom used for long-range shooting. Handguns designed for long-range shooting are generally closer to shortened rifles than to other handguns. Ratio of propellant to projectile mass Another issue, when choosing or developing a cartridge, is the issue of recoil. The recoil is not just the reaction from the projectile being launched, but also from the powder gas, which will exit the barrel with a velocity even higher than that of the bullet. For handgun cartridges, with heavy bullets and light powder charges (a 9×19mm, for example, might use of powder, and a bullet), the powder recoil is not a significant force; for a rifle cartridge (a .22-250 Remington, using of powder and a bullet), the powder can be the majority of the recoil force. There is a solution to the recoil issue, though it is not without cost. A muzzle brake or recoil compensator is a device which redirects the powder gas at the muzzle, usually up and back. This acts like a rocket, pushing the muzzle down and forward. The forward push helps negate the feel of the projectile recoil by pulling the firearm forwards. The downward push, on the other hand, helps counteract the rotation imparted by the fact that most firearms have the barrel mounted above the center of gravity. Overt combat guns, large-bore high-powered rifles, long-range handguns chambered for rifle ammunition, and action-shooting handguns designed for accurate rapid fire, all benefit from muzzle brakes. The high-powered firearms use the muzzle brake mainly for recoil reduction, which reduces the battering of the shooter by the severe recoil. The action-shooting handguns redirect all the energy up to counteract the rotation of the recoil, and make following shots faster by leaving the gun on target. 
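The pistol-versus-varmint-rifle comparison above can be checked with conservation of momentum. A common rule of thumb (an assumption here, not stated in the text) takes the powder gas as leaving the muzzle at roughly 1.5 times the bullet's velocity; all charge and velocity figures below are illustrative:

```python
# Free-recoil momentum: bullet momentum plus powder-gas momentum.
# Gas velocity is approximated as gas_factor * muzzle velocity, a
# rule-of-thumb assumption. All figures below are illustrative only.

GRAINS_PER_KG = 15432.4

def recoil_momentum(bullet_gr, powder_gr, muzzle_vel_mps, gas_factor=1.5):
    """Return (bullet momentum, gas momentum) in kg*m/s."""
    bullet_p = (bullet_gr / GRAINS_PER_KG) * muzzle_vel_mps
    gas_p = (powder_gr / GRAINS_PER_KG) * muzzle_vel_mps * gas_factor
    return bullet_p, gas_p

# Heavy bullet, small charge: the gas contributes little.
b, g = recoil_momentum(bullet_gr=115, powder_gr=5, muzzle_vel_mps=360)
print(f"pistol-style load:        gas is {g / (b + g):.0%} of recoil momentum")

# Light bullet, large charge: the gas can dominate the recoil.
b, g = recoil_momentum(bullet_gr=50, powder_gr=38, muzzle_vel_mps=1150)
print(f"varmint-rifle-style load: gas is {g / (b + g):.0%} of recoil momentum")
```

With these illustrative numbers the gas accounts for only a few percent of the pistol load's recoil but over half of the rifle load's, matching the text's claim that powder can be the majority of the recoil force in cartridges like the .22-250.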
The disadvantage of the muzzle brake is a longer, heavier barrel, and a large increase in sound levels and flash behind the muzzle of the rifle. Shooting firearms without muzzle brakes and without hearing protection can eventually damage the operator's hearing; however, shooting rifles with muzzle brakes - with or without hearing protection - causes permanent ear damage. (See muzzle brake for more on the disadvantages of muzzle brakes.) Powder-to-projectile-weight ratio also touches on the subject of efficiency. In the case of the .22-250 Remington, more energy goes into propelling the powder gas than goes into propelling the bullet. The .22-250 pays for this by requiring a large case, with much powder, all for a fairly small gain in velocity and energy over other .22 caliber cartridges. Accuracy and bore characteristics Nearly all small bore firearms, with the exception of shotguns, have rifled barrels. The rifling imparts a spin on the bullet, which keeps it from tumbling in flight. The rifling is usually in the form of sharp edged grooves cut as helices along the axis of the bore, anywhere from 2 to 16 in number. The areas between the grooves are known as lands. Another system, polygonal rifling, gives the bore a polygonal cross section. Polygonal rifling is not very common, used by only a few European manufacturers as well as the American gun manufacturer Kahr Arms. The companies that use polygonal rifling claim greater accuracy, lower friction, and less lead and/or copper buildup in the barrel. Traditional land and groove rifling is used in most competition firearms, however, so the advantages of polygonal rifling are unproven. There are four methods of rifling a barrel: A single point cutter is drawn down the bore by a machine that controls the rotation of the cutting head relative to the barrel. This is the slowest process, but requires the simplest equipment. It is often used by custom gunsmiths as it can result in very accurate barrels. 
Button rifling uses a die with a negative image of the rifling cut on it which is drawn down the barrel while rotated, swaging the inside of the barrel. This creates all the grooves at once by deformation, and is faster than cut rifling. Hammer forging is a process in which a slightly oversized, bored barrel is placed around a mandrel that contains a negative image of the length of the rifled barrel. The barrel and mandrel are rotated and hammered by power hammers, which forms the inside of the barrel. This is the fastest and often cheapest method of making a barrel, but the equipment is expensive. Hammer-forged barrels are generally not capable of the accuracy attainable with the first two methods mentioned. Electrical discharge machining (EDM) or Electro chemical machining (ECM) processes use electricity to erode away material, a process which produces a highly consistent diameter and very smooth finish, with less stress than other rifling methods. EDM is very costly and primarily used in large bore, long barrel cannon where traditional methods are very difficult, while ECM is used by some smaller barrel makers. The purpose of the barrel is to provide a consistent seal, allowing the bullet to accelerate to a consistent velocity. It must also impart the right spin, and release the bullet consistently, perfectly concentric to the bore. The residual pressure in the bore must be released symmetrically, so that no side of the bullet receives any more or less push than the rest. To maintain a good pressure seal, the bore must be a precise constant diameter, or have a slight decrease in diameter from breech to muzzle. Any increase in bore diameter will allow the bullet to shift, allowing gas to leak past the bullet, decreasing velocity, or cause the bullet to tip so that it is no longer perfectly coaxial with the bore. High quality barrels are lapped to remove any constrictions in the bore which will cause a change in diameter. 
A lapping process known as "fire lapping" uses a lead "slug" that is slightly larger than the bore and covered in fine abrasive compound to cut out the constrictions. The slug is passed from breech to muzzle to remove obstructions. Many passes are made, and as the bore becomes more uniform, finer grades of abrasive compound are used. The final result is a barrel that is mirror-smooth, and with a consistent or slightly tapering bore. The hand-lapping technique uses a wooden or soft metal rod to pull or push the slug through the bore, while the newer fire-lapping technique uses specially loaded, low-power cartridges to push abrasive-covered soft-lead bullets down the barrel. Another issue that has an effect on the barrel's hold on the bullet is the rifling. When the bullet is fired, it is forced into the rifling, which cuts or "engraves" the surface of the bullet. If the rifling is a constant twist, then the rifling rides in the grooves engraved in the bullet, and everything is secure and sealed. If the rifling has a decreasing twist, then the changing angle of the rifling in the engraved grooves of the bullet causes the rifling to become narrower than the grooves. This allows gas to blow by, and loosens the hold of the bullet on the barrel. An increasing twist, however, will make the rifling become wider than the grooves in the bullet, maintaining the seal. When a rifled-barrel blank is selected for a gun, the higher-twist end is located at the muzzle. The muzzle of the barrel is the last thing to touch the bullet before it goes into ballistic flight, and as such has the greatest potential to disrupt the bullet's flight. The muzzle must allow the gas to escape the barrel symmetrically; any asymmetry will cause an uneven pressure on the base of the bullet, which will disrupt its flight. The muzzle end of the barrel is called the "crown", and it is usually either beveled or recessed to protect it from bumps or scratches that might affect accuracy. 
Before the barrel can release the bullet in a consistent manner, it must grip the bullet in a consistent manner. The part of the barrel between where the bullet exits the cartridge and where it engages the rifling is called the "throat", and the length of the throat is the freebore. In some firearms, the freebore is zero as the act of chambering the cartridge forces the bullet into the rifling. This is common in low-powered rimfire target rifles. The placement of the bullet in the rifling ensures that the transition between cartridge and rifling is quick and stable. The downside is that the cartridge is firmly held in place, and attempting to extract the unfired round can be difficult, to the point of even pulling the bullet from the cartridge in extreme cases. With high-powered cartridges, a significant amount of force is required to engrave the bullet, which can raise the pressure in the chamber above the maximum design pressure. Higher-powered rifles usually have a longer freebore so that the bullet is allowed to gain some momentum and the chamber pressure to drop slightly before the bullet engages the rifling. However, any slight misalignment can cause the bullet to tip as it engages the rifling, so that the bullet does not enter the barrel coaxially. Revolver-specific issues The defining characteristic of a revolver is the revolving cylinder, separate from the barrel, that contains the chambers. Revolvers typically have 5 to 10 chambers, and the first issue is ensuring consistency among the chambers, because if they aren't consistent then the point of impact will vary from chamber to chamber. The chambers must also align consistently with the barrel, so the bullet enters the barrel the same way from each chamber. The throat in a revolver is composed of two separate parts, the cylinder throat and the barrel throat. The cylinder throat is part of the cylinder and sized so that it is concentric to the chamber and very slightly over the bullet diameter. 
The cylinder gap - the space between the cylinder and barrel - must be wide enough to allow free rotation of the cylinder even when it becomes fouled with powder residue, but not so large that excessive gas is released. The forcing cone - where the bullet is guided from the cylinder into the bore of the barrel - should be deep enough to force the bullet into the bore without significant deformation. Unlike rifles, where the threaded portion of the barrel is in the chamber, a revolver barrel's threads surround the breech end of the bore. It is therefore possible for the bore to be compressed when the barrel is screwed into the frame. Cutting a longer forcing cone can relieve this "choke" point, as can lapping of the barrel after it is fitted to the frame. See also External ballistics Percussion cap, for an early history of priming powder and percussion caps Terminal ballistics Transitional ballistics Physics of firearms Table of handgun and rifle cartridges References External links A (Very) Short Course in Internal Ballistics, Fr. Frog QuickLOAD Ballistics Software Ballistics Ammunition Handloading
Internal ballistics
Physics
7,472
69,445,009
https://en.wikipedia.org/wiki/Ocean%20surface%20ecosystem
Organisms that live freely at the ocean surface, termed neuston, include keystone organisms like the golden seaweed Sargassum that makes up the Sargasso Sea, floating barnacles, marine snails, nudibranchs, and cnidarians. Many ecologically and economically important fish species live as or rely upon neuston. Species at the surface are not distributed uniformly; the ocean's surface provides habitat for unique neustonic communities and ecoregions found at only certain latitudes and only in specific ocean basins. But the surface is also on the front line of climate change and pollution. Life on the ocean's surface connects worlds. From shallow waters to the deep sea, the open ocean to rivers and lakes, numerous terrestrial and marine species depend on the surface ecosystem and the organisms found there. The ocean's surface acts like a skin between the atmosphere above and the water below, and hosts an ecosystem unique to this environment. This sun-drenched habitat can be defined as roughly one metre in depth, as nearly half of UV-B is attenuated within this first metre. Organisms here must contend with wave action and unique chemical and physical properties. The surface is utilised by a wide range of species, from various fish and cetaceans, to species that ride on ocean debris (termed rafters). Most prominently, the surface is home to a unique community of free-living organisms, termed neuston (from the Greek word νέω, which means both to swim and to float). Floating organisms are also sometimes referred to as pleuston, though neuston is more commonly used. Despite the diversity and importance of the ocean's surface in connecting disparate habitats, and the risks it faces, relatively little is known about neustonic life. Overview Neuston are key ecological links connecting ecosystems as far ranging as coral reefs, islands, the deep sea, and even freshwater habitats. 
In the North Pacific, 80% of the loggerhead turtle diet consists of neuston prey, and nearly 30% of the Laysan albatross's diet is neuston. Diverse pelagic and reef fish species live at the surface when young, including commercially important fish species like the Atlantic cod, salmon, and billfish. Neuston can be concentrated as living islands that completely obscure the sea surface, or scattered into sparse meadows over thousands of miles. Yet the role of the neuston, and in many cases their mere existence, is often overlooked. One of the most well-known surface ecoregions is the Sargasso Sea, an ecologically distinct region packed with thick, neustonic brown seaweed in the North Atlantic. Multiple ecologically and commercially important species depend on the Sargasso Sea, but neustonic life exists in every ocean basin and may serve a similar, if unrecognised, role in regions across the planet. For example, over 50 years ago, USSR scientist A. I. Savilov characterised 7 neustonic ecoregions in the Pacific Ocean. Each ecoregion possesses a unique combination of biotic and abiotic conditions and hosts a unique community of neustonic organisms. Yet these ecoregions have been largely forgotten. But there is another reason to study neuston: The ocean's surface is on the front line of human impacts, from climate change to pollution, oil spills to plastic. The ocean's surface is hit hard by anthropogenic change, and the surface ecosystem is likely already dramatically different from even a few hundred years ago. For example, prior to widespread damming, logging, and industrialisation, more wood may have entered the open ocean, while plastic had not yet been invented. And because floating life provides food and shelter for diverse species, changes in the surface habitat will cause changes in other ecosystems and have implications that cannot currently be fully understood or predicted. 
Ocean surface life (neuston) Invoking images of the open ocean's surface, the imagination can conjure up an endless empty space. A flat line parting the blue below from the blue above. But in reality a diverse array of species occupy this unique boundary layer. A tangle of terms exist for different organisms occupying different niches of the ocean's surface. The most inclusive term, neuston, is used here to refer to all of them. Neustonic animals and plants live hanging from the surface of the ocean as if suspended from the roof of a massive cave, and are incapable of controlling their direction of movement. They are considered permanent residents of the surface layer. Many genera are globally distributed. Many organisms have morphological features that enable them to remain at the ocean's surface, with the most noticeable adaptations being floats. Floaters (pleuston) Epineuston Hyponeuston Rafting organisms Surface microlayer The sea surface microlayer (SML) is the boundary interface between the atmosphere and ocean, covering about 70% of the Earth's surface. With an operationally defined thickness between 1 and 1000 μm, the SML has physicochemical and biological properties that are measurably distinct from underlying waters. Recent studies now indicate that the SML covers the ocean to a significant extent, and evidence shows that it is an aggregate-enriched biofilm environment with distinct microbial communities. Because of its unique position at the air-sea interface, the SML is central to a range of global biogeochemical and climate-related processes. 
Although known for the last six decades, the SML has often remained in a distinct research niche, primarily as it was not thought to exist under typical oceanic conditions. Recent studies now indicate that the SML covers the ocean to a significant extent, highlighting its global relevance as the boundary layer linking two major components of the Earth system – the ocean and the atmosphere. In 1983, Sieburth hypothesised that the SML was a hydrated gel-like layer formed by a complex mixture of carbohydrates, proteins, and lipids. In recent years, his hypothesis has been confirmed, and scientific evidence indicates that the SML is an aggregate-enriched biofilm environment with distinct microbial communities. In 1999 Ellison et al. estimated that 200 Tg C yr−1 accumulates in the SML, similar to sedimentation rates of carbon to the ocean's seabed, though the accumulated carbon in the SML probably has a very short residence time. Although the total volume of the microlayer is very small compared to the ocean's volume, Carlson suggested in his seminal 1993 paper that unique interfacial reactions may occur in the SML that may not occur in the underlying water, or may occur there only at a much slower rate. He therefore hypothesised that the SML plays an important role in the diagenesis of carbon in the upper ocean. Biofilm-like properties and the highest possible exposure to solar radiation lead to the intuitive assumption that the SML is a biochemical microreactor. Historically, the SML has been summarized as being a microhabitat composed of several layers distinguished by their ecological, chemical and physical properties with an operational total thickness of between 1 and 1000 μm. In 2005 Hunter defined the SML as a "microscopic portion of the surface ocean which is in contact with the atmosphere and which may have physical, chemical or biological properties that are measurably different from those of adjacent sub-surface waters". 
He avoids a definite range of thickness as it depends strongly on the feature of interest. A thickness of 60 μm has been measured based on sudden changes of the pH, and could be meaningfully used for studying the physicochemical properties of the SML. At such thickness, the SML represents a laminar layer, free of turbulence, and greatly affecting the exchange of gases between the ocean and atmosphere. As a habitat for neuston (surface-dwelling organisms ranging from bacteria to larger siphonophores), the thickness of the SML in some ways depends on the organism or ecological feature of interest. In 2005, Zaitsev described the SML and associated near-surface layer (down to 5 cm) as an incubator or nursery for eggs and larvae for a wide range of aquatic organisms. Hunter's definition includes all interlinked layers from the laminar layer to the nursery without explicit reference to defined depths. In 2017, Wurl et al. proposed that Hunter's definition be validated with a redeveloped SML paradigm that includes its global presence, biofilm-like properties and role as a nursery. The new paradigm pushes the SML into a new and wider context relevant to many ocean and climate sciences. According to Wurl et al., the SML can never be devoid of organics due to the abundance of surface-active substances (e.g., surfactants) in the upper ocean and the phenomenon of surface tension at air-liquid interfaces. The SML is analogous to the thermal boundary layer, and remote sensing of the sea surface temperature shows ubiquitous anomalies between the sea surface skin and bulk temperature. Even so, the differences in both are driven by different processes. Enrichment, defined as concentration ratios of an analyte in the SML to the underlying bulk water, has been used for decades as evidence for the existence of the SML. 
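The enrichment ratio just described can be written down directly. The following is an illustrative sketch; the function name and the example concentrations are invented for demonstration and are not from any cited study:

```python
def enrichment_factor(c_sml, c_bulk):
    """Enrichment factor (EF): concentration of an analyte in the sea
    surface microlayer (SML) divided by its concentration in the
    underlying bulk water. EF > 1 indicates enrichment in the SML,
    EF < 1 indicates depletion."""
    if c_bulk <= 0:
        raise ValueError("bulk concentration must be positive")
    return c_sml / c_bulk

# Hypothetical example: a surfactant measured at 150 ug/L in the SML
# and 100 ug/L in the bulk water is enriched by a factor of 1.5.
print(enrichment_factor(150.0, 100.0))  # 1.5
```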
Consequently, depletions of organics in the SML are debatable; however, the question of enrichment or depletion is likely to be a function of the thickness of the SML (which varies with sea state), of losses via sea spray, of the concentrations of organics in the bulk water, and of the limitations of sampling techniques in collecting thin layers. Enrichment of surfactants, and changes in the sea surface temperature and salinity, serve as universal indicators for the presence of the SML. Organisms are perhaps less suitable as indicators of the SML because they can actively avoid the SML and/or the harsh conditions in the SML may reduce their populations. However, the thickness of the SML remains "operational" in field experiments because the thickness of the collected layer is governed by the sampling method. Advances in SML sampling technology are needed to improve our understanding of how the SML influences air-sea interactions. Surface slicks Slicks are meandering lines of smooth water on the ocean surface that are ubiquitous coastal features around the world. A variety of mechanisms can cause slick formation, including tidal and headland fronts, and as a consequence of subsurface waves called internal waves. Internal wave slicks are generated when internal waves interact with steep seafloor topography and drive areas of convergence and divergence at the ocean surface. The build-up of organic material (surfactants) at the surface modifies surface tension causing a smooth, oil slick-like appearance. The convergent flow can accumulate dense aggregations of plankton including larval fish and invertebrates at or below the ocean surface. Surface slicks are the focal point for numerous trophic and larval connections that are foundational for marine ecosystem function. Life for many marine organisms begins near the ocean surface. 
Buoyant eggs hatch into planktonic larvae that develop and disperse in the ocean for weeks to months before transitioning into juveniles and eventually finding suitable adult habitat. The pelagic larval stage connects populations and serves as a source of new adults. Oceanic processes affecting the fate of larvae have profound impacts on population replenishment, connectivity, and ecosystem structure. Although it is an important life stage, there is, as of 2021, limited knowledge of the ecology and behaviour of larvae. Understanding the biophysical interactions that govern larval fish survival and transport is essential for predicting and managing marine ecosystems, as well as the fisheries they support. The diagram shows: (1) Larval and juvenile stages of fishes from many ocean habitats aggregate in slicks in order to capitalize on dense concentrations of prey (2, phytoplankton, 3, zooplankton, 4, larval invertebrates, 5, eggs, and 6, insects). The increased predator–prey overlap in slicks increases energy flow that propagates up the food-web (dotted blue lines show trophic links), enhancing energy available to higher trophic level predators (icons outlined in blue) including humans. More than 100 species of fishes develop and grow in surface slick nurseries before transitioning to adults (solid white lines radiating outward) in Coral Reefs (7–12), Epipelagic (13–15), and Deep-water (16–17) ocean habitats. As adults these taxa (icons outlined in white) play important ecological functions and provide fisheries resources to local human populations. For example, coastal schooling fishes (7, mackerel scad) are important food and bait fish for humans. Planktivorous fish (8, some damselfishes and triggerfishes) transfer energy from zooplankton up to reef predators like jacks (9), which provide top-down control of reefs and are important targets for shoreline recreational fisherfolk. Grazers (10, chubs) help keep coral reefs from being overgrown by macroalgae. 
Cryptobenthic fishes such as blennies (11) and benthic macrocrustaceans (12, shrimp, stomatopods, crabs) comprise most of the consumed biomass on reefs. In the pelagic ocean, flyingfishes (13) channel energy and nutrients from zooplankton to pelagic predators such as mahi-mahi (14) and billfish (15), both of which utilize slicks as nursery habitat. Larvae of mesopelagic fishes like lanternfish (16) and bathydemersal tripod fishes (17) utilize these surface hotspots before descending to deep-water adult habitat. The distribution of prey and predators in the ocean is patchy. Larval survival depends on prey availability, predation, and transport to suitable habitat, all of which are influenced by ocean conditions. Ocean processes that drive convergent flow such as fronts, internal waves, and eddies, can structure plankton, enhance overlap of predators and prey, and influence larval dispersal. Convergent features can also lead to a cascade of effects that ultimately drive food web structure and increase ecosystem productivity. Life history Life histories connect disparate ecosystems; species that live at the surface during one life history stage may occupy the deep sea, benthos, reefs, or freshwater ecosystems during another. A diversity of fish species utilize the ocean's surface, either as adults or as nursery habitat for eggs and young. In contrast, species floating on the ocean's surface during one life cycle stage often (though not always) have pelagic larval stages. Velella and Porpita release jellyfish (medusae), and while little is known about Porpita medusae, Velella medusae could possibly sink into deeper water, or remain near the surface, where they derive nutrients from zooxanthellae. Janthina have pelagic veliger larvae, and Physalia may release reproductive clusters that drift in the water column. Halobates lay eggs on a variety of objects, including floating objects and pelagic snail shells. 
All species with pelagic stages must eventually find their way back to the surface. For Velella and Porpita, larvae generated by sexual reproduction of medusae develop small floats, which carry them to the surface. For the larvae of Janthina, the transition to surface life includes the degradation of their eyes and vestibular system, and at the same time, the production of an external structure, which has been reported as either a small parachute made of mucus, or a cluster of bubbles, which they ride to the surface. Young Halobates may hatch either above or below the surface, and for those below, the surface tension proves a formidable barrier. It may take Halobates nymphs several hours to break through the surface film. Despite the challenges of reaching the surface, there may be benefits to a temporary pelagic life. Connectivity of ocean surface ecosystems may be facilitated by the life history of species living there. One hypothesis is that species have pelagic stages to "escape" surface sink regions and repopulate surface source regions, where one life cycle stage drifts on surface currents in one direction, and a pelagic stage either remains geographically localised or drifts in the opposite direction. However, some surface species, such as the endemic species of the Sargasso Sea, may remain geographically isolated throughout their life history. While these hypotheses are intriguing, it is not known if or how life history shapes population/species distribution for most neustonic species. Understanding how life history varies by species is a critical component of assessing both connectivity and conservation of neustonic ecosystems. Sea spray A stream of airborne microorganisms circles the planet above weather systems but below commercial air lanes. Some peripatetic microorganisms are swept up from terrestrial dust storms, but most originate from marine microorganisms in sea spray. 
In 2018, scientists reported that hundreds of millions of viruses and tens of millions of bacteria are deposited daily on every square meter around the planet. These airborne microorganisms form part of the aeroplankton. The aeroplankton are tiny lifeforms that float and drift in the air, carried by the current of the wind; they are the atmospheric analogue to oceanic plankton. Most of the living things that make up aeroplankton are very small to microscopic in size, and many can be difficult to identify because of their tiny size. Scientists collect them for study in traps and sweep nets from aircraft, kites or balloons. The environmental role of airborne cyanobacteria and microalgae is only partly understood. While present in the air, cyanobacteria and microalgae can contribute to ice nucleation and cloud droplet formation. Cyanobacteria and microalgae can also impact human health. Depending on their size, airborne cyanobacteria and microalgae can be inhaled by humans and settle in different parts of the respiratory system, leading to the formation or intensification of numerous diseases and ailments, e.g., allergies, dermatitis, and rhinitis. See also Marine larval ecology Ocean surface topography Surface layer Sea spray Sea air Surface Ocean Lower Atmosphere Study Joint Global Ocean Flux Study Regional Ocean Modeling System References Aquatic organisms
Ocean surface ecosystem
Biology
3,891
19,986,969
https://en.wikipedia.org/wiki/Rise%20and%20Resurrection%20of%20the%20American%20Programmer
Rise and Resurrection of the American Programmer is a book written by Edward Yourdon in 1996. It is the sequel to Decline and Fall of the American Programmer. Synopsis In the original, written at the beginning of the 1990s, Yourdon warned American programmers that their business was not sustainable against foreign competition. By the middle of the decade Microsoft had released Windows 95, which marked a groundbreaking new direction for the operating system, the internet was beginning to rise as a serious consumer marketplace, and the Java software platform had made its first public release. Due to such large changes in the state of the software industry, Yourdon reversed some of his original predictions. References 1996 non-fiction books Software development books Software quality Software industry Science and technology in the United States Prentice Hall books
Rise and Resurrection of the American Programmer
Technology,Engineering
154
26,412,383
https://en.wikipedia.org/wiki/GraphCrunch
GraphCrunch is a comprehensive, parallelizable, and easily extendible open source software tool for analyzing and modeling large biological networks (or graphs); it compares real-world networks against a series of random graph models with respect to a multitude of local and global network properties. Motivation Recent technological advances in experimental biology have yielded large amounts of biological network data. Many other real-world phenomena have also been described in terms of large networks (also called graphs), such as various types of social and technological networks. Thus, understanding these complex phenomena has become an important scientific problem that has led to intensive research in network modeling and analyses. An important step towards understanding biological networks is finding an adequate network model. Evaluating the fit of a model network to the data is a formidable challenge, since network comparisons are computationally infeasible and thus have to rely on heuristics, or "network properties." GraphCrunch automates the process of generating random networks drawn from a series of random graph models and evaluating the fit of the network models to a real-world network with respect to a variety of global and local network properties. Features GraphCrunch performs the following tasks: 1) computes user specified global and local properties of an input real-world network, 2) creates a user specified number of random networks belonging to user specified random graph models, 3) compares how closely each model network reproduces a range of global and local properties (specified in point 1 above) of the real-world network, and 4) produces the statistics of network property similarities between the data and the model networks. 
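The four-step workflow above can be sketched in Python with the third-party networkx library. This is only an analogy to what GraphCrunch automates, restricted to a single global property (the average clustering coefficient) and a single model (Erdős–Rényi random graphs with matching density); the function name and parameters are invented for illustration:

```python
import random
import networkx as nx

def compare_to_er_model(g, n_random=10, seed=0):
    """Mirror the GraphCrunch workflow for one global property:
    1) compute the property on the real network,
    2) generate random model networks of matching size and density,
    3) compute the property on each model network,
    4) report summary statistics of the fit."""
    rng = random.Random(seed)
    real = nx.average_clustering(g)
    n, m = g.number_of_nodes(), g.number_of_edges()
    p = 2.0 * m / (n * (n - 1))  # edge probability matching the data's density
    model = [nx.average_clustering(nx.gnp_random_graph(n, p, seed=rng))
             for _ in range(n_random)]
    mean = sum(model) / len(model)
    return {"real": real, "model_mean": mean, "difference": real - mean}

# Toy "real" network with strong local clustering (small-world graph),
# which an Erdős–Rényi model of the same density fails to reproduce.
g = nx.watts_strogatz_graph(100, 6, 0.1, seed=42)
print(compare_to_er_model(g))
```

A large "difference" value signals that the candidate model reproduces the chosen property poorly; GraphCrunch performs this kind of comparison across many properties and models at once.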
Network models supported by GraphCrunch GraphCrunch currently supports five different types of random graph models: Erdös-Rényi random graphs; random graphs with the same degree distribution as the data; Barabási-Albert preferential-attachment scale-free networks; n-dimensional geometric random graphs (for all positive integers n); and stickiness model networks. Network properties supported by GraphCrunch GraphCrunch currently supports seven global and local network properties: degree distribution; clustering coefficient; clustering spectrum; average diameter; spectrum of shortest path lengths; relative graphlet frequency distance; and graphlet degree distribution agreement. Installation and usage Instructions on how to install and run GraphCrunch are available at https://web.archive.org/web/20100717040957/http://www.ics.uci.edu/~bio-nets/graphcrunch/. Applications GraphCrunch has been used to find an optimal network model for protein-protein interaction networks, as well as for protein structure networks. References External links http://bio-nets.doc.ic.ac.uk/graphcrunch2/ Graph theory
GraphCrunch
Mathematics
580
69,627,448
https://en.wikipedia.org/wiki/Acoustic%20epidemiology
Acoustic epidemiology is the study of the determinants and distribution of disease through the analysis of sounds produced by the body (coughs, sneezes, wheezing, etc.), whether with a single tool or a combination of diagnostic tools. In many cases, epidemiologists have worked across multiple disciplines and used different technologies in order to find answers pertaining to disease distribution. For example, in the 1800s, John Snow determined that cholera was plaguing Europe through contaminated water. This led to the decision to remove a pump that was the cause of this contamination, thus effectively ending the epidemic. More broadly, Snow's epidemiological efforts led to the development of sewage drainage and water purifying systems in other areas. As COVID-19 developed, genomic epidemiologists began using whole genomes to study the disease. The CDC has posted a “COVID-19 Genomic Epidemiology Toolkit” on its website, which provides a means to expand the field of genomic epidemiology with regards to COVID-19 within state and local populations. Acoustic epidemiology is a field that studies bodily sounds, such as coughs and breath sounds, in order to better identify determinants and distribution of disease. Following in the footsteps of epidemiological tools and efforts such as those outlined above, acoustic epidemiology is concerned with using body sound data to improve disease surveillance capabilities for COVID-19 and any other applicable diseases of the future. Clinical relevance Because epidemiology is a population-based area of study, findings from acoustic disease surveillance are important on a large scale, and have far-reaching implications for society as a whole. Cough and breath sounds provide rich epidemiological data. Baseline Measurements and Deviations Studying respiratory sounds and identifying deviations from baseline is an invaluable epidemiologic tool. 
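A minimal, hypothetical sketch of what "identifying deviations from baseline" can mean in practice: flag observations whose daily cough counts lie far from a baseline period. The counts, threshold, and function are invented for illustration; real acoustic-epidemiology pipelines classify audio events rather than taking hand-entered counts:

```python
from statistics import mean, stdev

def flag_deviations(baseline_counts, new_counts, threshold=2.0):
    """Return (count, z-score) pairs for observations that deviate
    from the baseline mean by more than `threshold` standard
    deviations. Purely illustrative anomaly detection."""
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts)
    return [(c, (c - mu) / sigma) for c in new_counts
            if abs(c - mu) > threshold * sigma]

baseline = [12, 15, 11, 14, 13, 12, 16]   # hypothetical daily cough counts
print(flag_deviations(baseline, [13, 14, 30]))  # only 30 is flagged
```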
On a community and population level, this can help to determine to what extent a disease may be spreading or changing. One of the major themes of concern throughout the COVID-19 pandemic has been travel safety, hotspots, and outbreaks in certain areas. Acoustic Epidemiology Through Use of Smartphone Apps As a means to overcome some of the restrictions imposed by the COVID-19 pandemic, smartphone apps were developed to capture and analyze respiratory health data safely. In a 2020-2021 study of acoustic epidemiology, in Navarra, Spain, the Hyfe app was used to track respiratory sounds in over 800 study participants. Syndromic Surveillance Syndromic surveillance is a complementary, and potentially faster method of health data collection and analysis as compared to standard methods of public health monitoring. Examples of Syndromic Surveillance Instances of syndromic surveillance are easy to find. Examples include: Logs that record missed school or work due to illness Symptoms recorded on patients in emergency rooms How often certain lab tests are ordered and performed Bias in Syndromic Surveillance Sources for syndromic surveillance may be biased, as they vary based on healthcare access in a given area. Therefore, some have questioned whether certain common methods of syndromic surveillance are truly representative of the larger population. The future of acoustic epidemiology The value of being able to track signs of deviations from baseline with regards to respiratory sounds at a population level is becoming clear through research. Epidemiologists predict that respiratory viruses could continue to be a problem in the future. Therefore, effective monitoring of acoustic data will need to be easy, affordable, and available on a wide scale. See also Cough Tracking Respiratory Health Pulmonary System References Epidemiology Epidemiological study projects
Acoustic epidemiology
Environmental_science
756
30,871,887
https://en.wikipedia.org/wiki/Spikenard
Spikenard, also called nard, nardin, and muskroot, is a class of aromatic amber-colored essential oil derived from Nardostachys jatamansi, a flowering plant in the honeysuckle family which grows in the Himalayas of Nepal, China, and India. The oil has been used over centuries as a perfume, a traditional medicine, or in religious ceremonies across a wide territory from India to Europe. Historically, the name nard has also referred to essential oils derived from other species including the closely related valerian genus, as well as Spanish lavender; these cheaper, more common plants have been used in perfume-making, and sometimes to adulterate true spikenard. Etymology The name nard is derived from Latin , from Ancient Greek (). This word may ultimately derive either from Sanskrit ( 'Indian spikenard'), or from Naarda, an ancient Assyrian city (possibly the modern town of Dohuk, Iraq). The "spike" in the English name refers to the inflorescence or flowering stem of the plant. Description Nardostachys jatamansi is a flowering plant of the honeysuckle family that grows in the Himalayas of Nepal, China, and India. In bloom, the plant grows to about 1 meter (3 ft) in height and has small, pink, bell-shaped flowers. It is found at an altitude of about . Its rhizomes can be crushed and distilled into an intensely aromatic, amber-colored essential oil with a thick consistency. Oil constituents Nard oil is used as a perfume, an incense, and in Ayurvedic practices. Sesquiterpenes contribute to the major portion of the volatile compounds, with the eponymous jatamansone (also known as (-)-valeranone) being dominant. Many coumarins are also present in the oil. The alkaloid actinidine has been isolated from the oil, and valerenal alongside valerenic acid (formerly called nardal and nardin respectively). Among the other phytochemical products found in the rhizomes are: nardostachysin, a terpenoid ester; nardostachnol; nardostachnone; jatamansic acid and jatamansinone. 
History In ancient Rome, nardus was used to flavor wine, and occurs frequently in the recipes of Apicius. During the early Roman empire, nardus was the main ingredient of a perfume (unguentum nardinum). Pliny's Natural History lists several species of nardus used in making perfume and spiced wine: Indian nard, a stinking nard called 'ozaenitidos' which is not used, a false nard ('pseudo-nard') with which true nard is adulterated, and several herbs local to Europe and the Eastern Mediterranean which are also called nardus, namely Syrian nard, Gallic nard, Cretan nard (also called 'agrion' or 'phun'), field nard (also called 'bacchar'), wild nard (also called 'asaron'), and Celtic nard. Celtic nard is the only species Pliny mentions which he does not describe when listing the species of nard in book 12 of Natural History suggesting it is synonymous with another species, probably with the species Pliny refers to as 'hirculus', a plant Pliny attests to growing in the same region as Gallic nard and which he says is used to adulterate Gallic nard. Both are widely assumed to be cultivars or varieties of Valeriana celtica. Indian nard refers to Nardostachys jatamansi, stinking nard possibly to Allium victorialis, false nard to Lavandula stoechas, Syrian nard to Cymbopogon nardus, Gallic nard to Valeriana celtica, Cretan nard to Valeriana italica (syn. V. dioscoridis, V. tuberosa), and wild nard to Asarum europaeum. Field nard, or 'bacchar', has not been conclusively identified and must not be confused with species now called "baccharises" referring to species native to North America. Culture Spikenard is mentioned in the Bible as being used for its fragrance. In the Iberian iconographic tradition of the Catholic Church, the spikenard is used to represent Saint Joseph. The Vatican has said that the coat of arms of Pope Francis includes the spikenard in reference to Saint Joseph. 
Nard is also mentioned in the Inferno of Dante Alighieri's Divine Comedy. Spikenard is also mentioned as an herb protecting Saint Thecla from wild beasts in the apocryphal text The Acts of Paul and Thecla. References Further reading Dalby, Andrew, "Spikenard" in Alan Davidson, The Oxford Companion to Food, 2nd ed. by Tom Jaine (Oxford: Oxford University Press, 2006. ). Perfume ingredients Spices Incense material Plants in the Bible
Spikenard
Physics
1,080
18,521,939
https://en.wikipedia.org/wiki/Palaeopascichnus
Palaeopascichnus is an Ediacaran fossil comprising a series of lobes, first originating before the Gaskiers glaciation; it is plausibly a protozoan, but probably unrelated to the classical 'Ediacaran biota'. Once thought to represent a trace fossil, it is now recognized as a body fossil and corresponds to the skeleton of an agglutinating organism. See also List of Ediacaran genera Palaeopascichnid References Ediacaran life Incertae sedis Ediacaran Europe Geology of Ukraine
Palaeopascichnus
Biology
117
57,005,722
https://en.wikipedia.org/wiki/TRENDnet
TRENDnet is a global manufacturer of computer networking products headquartered in Torrance, California, in the United States. It sells networking and surveillance products especially in the small to medium business (SMB) and home user market segments. History The company was founded in 1990 by Pei Huang and Peggy Huang. Vulnerabilities In September 2013, the Federal Trade Commission (FTC) brought an enforcement action against TRENDnet alleging that the company marketed its SecurView IP cameras describing them as "secure", when in fact the software allowed online viewing by anyone with the camera's IP address. The FTC approved a final settlement with TRENDnet in February 2014. References External links Electronics companies established in 1990 Networking companies of the United States Networking hardware Routers (computing) Wireless networking 1990 establishments in California Computer companies of the United States Computer hardware companies
TRENDnet
Technology,Engineering
170
4,367,479
https://en.wikipedia.org/wiki/Atkinson%20resistance
Atkinson resistance is commonly used in mine ventilation to characterise the resistance to airflow of a duct of irregular size and shape, such as a mine roadway. It has the symbol R and is used in the square law for pressure drop, p = (ρ / ρ0) R Q², where (in English units) p is pressure drop (pounds per square foot), ρ is the air density in the duct (pounds per cubic foot), ρ0 is the standard air density (0.075 pound per cubic foot), R is the resistance (atkinsons), and Q is the rate of flow of air (thousands of cubic feet per second). One atkinson is defined as the resistance of an airway which, when air flows along it at a rate of 1,000 cubic feet per second, causes a pressure drop of one pound-force per square foot. The unit is named after J J Atkinson, who published one of the earliest comprehensive mathematical treatments of mine ventilation. Atkinson based his expressions for airflow resistance on the more general work of Chézy and Darcy, who defined frictional pressure drop as p = f (L per / A) (ρ u² / 2), where p is pressure drop, ρ is the density of the fluid in question (water, air, oil etc.), f is the Fanning friction factor, L is the length of the duct, per is the perimeter of the duct, A is the area of the duct, and u is the velocity of the fluid. The practicalities of mine ventilation led Atkinson to group some of these variables into one all-encompassing term: Area and perimeter were incorporated because mine airways are of irregular shape, and both vary along the length of an airway. Velocity was replaced by the ratio of flowrate to area (u = Q / A) because variations in area cause variations in velocity. Area was then incorporated into the denominator of the Atkinson resistance term. Length of the airway was incorporated. This may have been a step too far, as most of his successors chose to give values of Atkinson resistance in terms of atkinsons per unit length (often 100 or 1,000 yards). The density term was incorporated, which later authors definitely considered a step too far (e.g. McPherson, 1988). 
In Atkinson's time not only were all British mines shallow enough that the density of air could be considered constant, but fan design was primitive enough that variations in density would make no measurable difference to the amount of motive power required. Atkinson did not foresee that his methods would be applied several miles underground, where air is 30–50% denser than it is at the surface. Density variations of this magnitude can alter the power consumption of colliery ventilation fans by hundreds of kilowatts. The resulting term is one that can be easily calculated from the results of two simple measurements: a pressure survey by the gauge and tube method and a flowrate survey with a counting anemometer. This is a major strength and is the reason why Atkinson resistance remains in use today. A complete definition of Atkinson resistance in more common fluid flow terms is as follows: R = f L per ρ0 / (2 A³) = f L ρ0 / (2 Rh A²) = λ L ρ0 / (2 Dh A²), in which Rh is hydraulic radius, Dh is hydraulic diameter and λ is the Darcy friction factor, in addition to the terms defined above. Atkinson also defined a friction factor (Atkinson friction factor) used for airways of fixed section such as shafts. It accounts for the Fanning friction factor, density and the constant ½ (k = f ρ0 / 2), and relates to Atkinson resistance by R = k L per / A³, where k is the Atkinson friction factor and the other terms are as defined above. Despite its weakness with regards to density changes, the use of Atkinson resistance is so widespread in the mining industry that a corresponding term in metric units has also been defined. It, too, is termed the atkinson resistance but the unit was given the name gaul (for reasons unknown). The earliest known use of the name is a 1971 British Coal memorandum on metrication, VB/CIRC/71(26). One gaul is defined as the resistance of an airway which, when air (of density 1.2 kg/m3) flows along it at a rate of one cubic metre per second, causes a pressure drop of one pascal. The gaul has units of N·s2/m8, or alternatively Pa·s2/m6. 
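These relationships can be checked numerically. The sketch below uses metric-style units and invented airway dimensions; it assumes the standard forms R = k·L·per/A³ (resistance from the Atkinson friction factor k) and the square law p = (ρ/ρ0)·R·Q², with the specific values chosen purely for illustration:

```python
def atkinson_resistance(k, length, perimeter, area):
    """Resistance of an airway from the Atkinson friction factor k:
    R = k * L * per / A**3 (gauls when k is in kg/m3 and lengths in m)."""
    return k * length * perimeter / area**3

def pressure_drop(resistance, q, rho, rho_std=1.2):
    """Square law for frictional pressure drop: p = (rho / rho_std) * R * Q**2."""
    return (rho / rho_std) * resistance * q**2

# Hypothetical airway: 4 m x 3 m rectangular roadway, 500 m long,
# k = 0.012 kg/m3, carrying 50 m3/s of air at standard density.
R = atkinson_resistance(0.012, 500.0, 2 * (4.0 + 3.0), 4.0 * 3.0)
print(R)                             # resistance in gauls (~0.0486)
print(pressure_drop(R, 50.0, 1.2))   # pressure drop in pascals (~121.5)
```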
It uses the same basic equation as its Imperial counterpart, but with slightly different dimensions: p = (ρ / ρ0) R Q², where p is pressure drop (pascals), ρ is the air density in the air duct (kilograms per cubic metre), ρ0 is the standard air density (1.2 kilograms per cubic metre), R is the resistance of the air path (gauls), and Q is the rate of flow of air (cubic metres per second). The metric and Imperial resistances are related by R(gauls) ≈ 0.00609 g R(atkinsons) ≈ 0.0597 R(atkinsons), where g is the standard acceleration of gravity (metres per second squared). The metric equivalent is now more widely used than the original Imperial definition. Most suppliers quote resistances of flexible temporary ventilation ducts in gauls/100 m and in most mine ventilation software programs, branch resistances are given in gauls. References National Coal Board Information Bulletin 55/153, Planning the Ventilation of New and Reorganised Collieries, 1955 National Coal Board Mining Dept, Coal mine ventilation: a handbook for colliery ventilation engineers, NCB 1979 McPherson, M J, An analysis of the resistance and airflow characteristics of mine shafts, Proceedings of the 4th International Mine Ventilation Congress, Brisbane, 1988 National Coal Board Memorandum VB/CIRC/71(26), from DER Lloyd to All Area Ventilation Engineers: "Metrication - Airway Resistance", 5 May 1971 Further reading Atkinson, J J, Gases met with in Coal Mines, and the general principles of Ventilation Transactions of the Manchester Geological Society, Vol. III, p. 218, 1862 McPherson, M J, Subsurface Ventilation Engineering, 2nd edition (ISBN of 1st edition - Chapman & Hall 1993 - is ) Fluid dynamics Mine ventilation
Atkinson resistance
Chemistry,Engineering
1,148
38,782,187
https://en.wikipedia.org/wiki/C7H10O
The molecular formula C7H10O (molar mass: 110.15 g/mol, exact mass: 110.0732 u) may refer to: Norcamphor Tetrahydrobenzaldehyde, or 1,2,3,6-Tetrahydrobenzaldehyde Molecular formulas
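The quoted molar mass can be reproduced from standard atomic weights (the rounded IUPAC values below are an approximation; small differences in the last decimal place come from which weights are used):

```python
# Recompute the molar mass of C7H10O from rounded standard atomic weights (g/mol)
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(composition):
    """Sum atomic weights for a composition given as {element: atom count}."""
    return sum(ATOMIC_WEIGHT[element] * count for element, count in composition.items())

mass = molar_mass({"C": 7, "H": 10, "O": 1})
print(mass)  # within 0.01 of the quoted 110.15 g/mol
```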
C7H10O
Physics,Chemistry
80
37,630,116
https://en.wikipedia.org/wiki/Information-centric%20networking
Information-centric networking (ICN) is an approach to evolve the Internet infrastructure away from a host-centric paradigm, based on perpetual connectivity and the end-to-end principle, to a network architecture in which the focal point is identified information (or content or data). Application areas of ICN include web applications, multimedia streaming, the Internet of Things (IoT), wireless sensor networks and vehicular networks, as well as emerging applications such as social networks and the industrial IoT. In this paradigm, connectivity may well be intermittent, and end-host and in-network storage can be capitalized upon transparently, as bits in the network and on data storage devices have exactly the same value. Mobility and multi-access are the norm, and anycast, multicast, and broadcast are natively supported. Data becomes independent from location, application, storage, and means of transportation, enabling in-network caching and replication. The expected benefits are improved efficiency, better scalability with respect to information/bandwidth demand and better robustness in challenging communication scenarios. In information-centric networking the cache is a network-level solution, and it has rapidly changing cache states, higher request arrival rates and smaller cache sizes than traditional caches. In particular, information-centric networking caching policies should be fast and lightweight. IRTF Working Group The Internet Research Task Force (IRTF) is sponsoring a research group on Information-Centric Networking Research, which serves as a forum for the exchange and analysis of ICN research ideas and proposals. Current and future work items and outputs are managed on the ICNRG wiki. References Computer networking
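The article does not prescribe a particular caching policy; as an illustration of the "fast and lightweight" requirement, here is a minimal least-recently-used (LRU) content store of the kind an ICN router might keep, keyed by content name (the class and the content names are hypothetical, not from the source):

```python
from collections import OrderedDict

class ContentStore:
    """Minimal LRU cache for named data objects, as an ICN router might keep.
    LRU is one lightweight policy; ICN research explores many alternatives."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, name):
        if name not in self.store:
            return None               # cache miss: forward the request upstream
        self.store.move_to_end(name)  # mark as most recently used
        return self.store[name]

    def put(self, name, data):
        self.store[name] = data
        self.store.move_to_end(name)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used item

cs = ContentStore(capacity=2)
cs.put("/video/seg1", b"...")
cs.put("/video/seg2", b"...")
cs.get("/video/seg1")            # touch seg1 so it becomes most recent
cs.put("/video/seg3", b"...")    # evicts seg2, the least recently used
print(cs.get("/video/seg2"))     # None
```

Both `get` and `put` are constant-time operations, matching the requirement that in-network caching keep up with line-rate request arrivals.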
Information-centric networking
Technology,Engineering
326
5,863,735
https://en.wikipedia.org/wiki/Wonky%20hole
Wonky hole is a colloquial, Australian term for a submarine groundwater discharge, a freshwater spring flowing from the seabed. Geography Wonky holes are found in the Great Barrier Reef and the Gulf of Carpentaria, both in Queensland. Wonky holes can be found in the coral reef up to offshore. Geology Wonky holes are located along riverbeds which existed in the last glacial period, which ended about 11,000 years ago. At the last glacial maximum, much of northern Europe and North America was covered by ice sheets up to thick; the water tied up in the glacial ice lowered the sea level more than . The sediment in the submerged river beds from that period has been covered with coral in many places. Since the sediment is more permeable than the surrounding materials, it channels fresh water to thin spots in the coral, creating the fresh water springs called wonky holes. The nutrients carried by the fresh water attract fish and fishermen. Coral does not grow well in the fresh water, resulting in irregular growth around wonky holes. The rough bottoms around the outlets tend to capture fishing nets. Around 200 holes are known along the coast between Townsville and Cape York. Water flowing along the submarine riverbeds and exiting at wonky holes can be charged with nutrients carried from the mainland. These can cause eutrophication in the Great Barrier Reef lagoon. In literature A wonky hole makes a brief appearance in the science fiction novel Camouflage. References External links Article on the effects of agriculture on wonky holes Hydrology Great Barrier Reef
Wonky hole
Chemistry,Engineering,Environmental_science
314
3,680,074
https://en.wikipedia.org/wiki/Procrustes%20analysis
In statistics, Procrustes analysis is a form of statistical shape analysis used to analyse the distribution of a set of shapes. The name Procrustes refers to a bandit from Greek mythology who made his victims fit his bed either by stretching their limbs or cutting them off. In mathematics: an orthogonal Procrustes problem is a method which can be used to find the optimal rotation and/or reflection (i.e., the optimal orthogonal linear transformation) for the Procrustes Superimposition (PS) of an object with respect to another. a constrained orthogonal Procrustes problem, subject to det(R) = 1 (where R is an orthogonal matrix), is a method which can be used to determine the optimal rotation for the PS of an object with respect to another (reflection is not allowed). In some contexts, this method is called the Kabsch algorithm. When a shape is compared to another, or a set of shapes is compared to an arbitrarily selected reference shape, Procrustes analysis is sometimes further qualified as classical or ordinary, as opposed to generalized Procrustes analysis (GPA), which compares three or more shapes to an optimally determined "mean shape". Introduction To compare the shapes of two or more objects, the objects must be first optimally "superimposed". Procrustes superimposition (PS) is performed by optimally translating, rotating and uniformly scaling the objects. In other words, both the placement in space and the size of the objects are freely adjusted. The aim is to obtain a similar placement and size, by minimizing a measure of shape difference called the Procrustes distance between the objects. This is sometimes called full, as opposed to partial PS, in which scaling is not performed (i.e. the size of the objects is preserved). Notice that, after full PS, the objects will exactly coincide if their shape is identical. For instance, with full PS two spheres with different radii will always coincide, because they have exactly the same shape.
Conversely, with partial PS they will never coincide. This implies that, by the strict definition of the term shape in geometry, shape analysis should be performed using full PS. A statistical analysis based on partial PS is not a pure shape analysis as it is not only sensitive to shape differences, but also to size differences. Both full and partial PS will never manage to perfectly match two objects with different shape, such as a cube and a sphere, or a right hand and a left hand. In some cases, both full and partial PS may also include reflection. Reflection allows, for instance, a successful (possibly perfect) superimposition of a right hand to a left hand. Thus, partial PS with reflection enabled preserves size but allows translation, rotation and reflection, while full PS with reflection enabled allows translation, rotation, scaling and reflection. Optimal translation and scaling are determined with much simpler operations (see below). Ordinary Procrustes analysis Here we just consider objects made up from a finite number k of points in n dimensions. Often, these points are selected on the continuous surface of complex objects, such as a human bone, and in this case they are called landmark points. The shape of an object can be considered as a member of an equivalence class formed by removing the translational, rotational and uniform scaling components. Translation For example, translational components can be removed from an object by translating the object so that the mean of all the object's points (i.e. its centroid) lies at the origin. Mathematically: take k points in two dimensions, say (x1, y1), (x2, y2), ..., (xk, yk). The mean of these points is (x̄, ȳ), where x̄ = (x1 + x2 + ... + xk)/k and ȳ = (y1 + y2 + ... + yk)/k. Now translate these points so that their mean is translated to the origin (0, 0), giving the points (x1 − x̄, y1 − ȳ), ..., (xk − x̄, yk − ȳ).
This RMSD is a statistical measure of the object's scale or size: s = √(((x1 − x̄)² + (y1 − ȳ)² + ... + (xk − x̄)² + (yk − ȳ)²)/k). The scale becomes 1 when the point coordinates are divided by the object's initial scale: (x, y) → (x/s, y/s). Notice that other methods for defining and removing the scale are sometimes used in the literature. Rotation Removing the rotational component is more complex, as a standard reference orientation is not always available. Consider two objects composed of the same number of points with scale and translation removed. Let the points of these be (x1, y1), ..., (xk, yk) and (w1, z1), ..., (wk, zk). One of these objects can be used to provide a reference orientation. Fix the reference object and rotate the other around the origin, until you find an optimum angle of rotation θ such that the sum of the squared distances (SSD) between the corresponding points is minimised (an example of least squares technique). A rotation by angle θ gives (u, v) = (x cos θ − y sin θ, x sin θ + y cos θ), where (u, v) are the coordinates of a rotated point. Taking the derivative of the SSD Σ((ui − wi)² + (vi − zi)²) with respect to θ and solving for θ when the derivative is zero gives θ = tan⁻¹(Σ(xi zi − yi wi)/Σ(xi wi + yi zi)). When the object is three-dimensional, the optimum rotation is represented by a 3-by-3 rotation matrix R, rather than a simple angle, and in this case singular value decomposition can be used to find the optimum value for R (see the solution for the constrained orthogonal Procrustes problem, subject to det(R) = 1). Shape comparison The difference between the shape of two objects can be evaluated only after "superimposing" the two objects by translating, scaling and optimally rotating them as explained above. The square root of the above mentioned SSD between corresponding points can be used as a statistical measure of this difference in shape: d = √(Σ((ui − wi)² + (vi − zi)²)). This measure is often called Procrustes distance. Notice that other more complex definitions of Procrustes distance, and other measures of "shape difference" are sometimes used in the literature. Superimposing a set of shapes We showed how to superimpose two shapes.
The same method can be applied to superimpose a set of three or more shapes, provided that the above mentioned reference orientation is used for all of them. However, Generalized Procrustes analysis provides a better method to achieve this goal. Generalized Procrustes analysis (GPA) GPA applies the Procrustes analysis method to optimally superimpose a set of objects, instead of superimposing them to an arbitrarily selected shape. Generalized and ordinary Procrustes analysis differ only in their determination of a reference orientation for the objects, which in the former technique is optimally determined, and in the latter one is arbitrarily selected. Scaling and translation are performed the same way by both techniques. When only two shapes are compared, GPA is equivalent to ordinary Procrustes analysis. The algorithm outline is the following: (1) arbitrarily choose a reference shape (typically by selecting it among the available instances); (2) superimpose all instances to the current reference shape; (3) compute the mean shape of the current set of superimposed shapes; (4) if the Procrustes distance between the mean and the reference shape is above a threshold, set the reference to the mean shape and continue from step (2). Variations There are many ways of representing the shape of an object. The shape of an object can be considered as a member of an equivalence class formed by taking the set of all sets of k points in n dimensions, that is, R^kn, and factoring out the set of all translations, rotations and scalings. A particular representation of shape is found by choosing a particular representation of the equivalence class. This will give a manifold of dimension kn-4. Procrustes is one method of doing this with particular statistical justification. Bookstein obtains a representation of shape by fixing the position of two points called the baseline. One point will be fixed at the origin and the other at (1,0); the remaining points form the Bookstein coordinates.
It is also common to consider shape and scale, that is, with only the translational and rotational components removed. Examples Shape analysis is used in biological data to identify the variations of anatomical features characterised by landmark data, for example in considering the shape of jaw bones. One study by David George Kendall examined the triangles formed by standing stones to deduce if these were often arranged in straight lines. The shape of a triangle can be represented as a point on the sphere, and the distribution of all shapes can be thought of as a distribution over the sphere. The sample distribution from the standing stones was compared with the theoretical distribution to show that the occurrence of straight lines was no more than average. See also Active shape model Alignments of random points Biometrics Generalized Procrustes analysis Image registration Kent distribution Morphometrics Orthogonal Procrustes problem Procrustes References F.L. Bookstein, Morphometric tools for landmark data, Cambridge University Press, (1991). J.C. Gower, G.B. Dijksterhuis, Procrustes Problems, Oxford University Press (2004). I.L. Dryden, K.V. Mardia, Statistical Shape Analysis, Wiley, Chichester, (1998). External links Extensions to continuum of points and distributions Procrustes Methods, Shape Recognition, Similarity and Docking, by Michel Petitjean. Multivariate statistics Euclidean symmetries Biometrics Greek mythology studies Greek words and phrases
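The ordinary Procrustes steps described above (centre on the centroid, scale to unit RMSD, rotate by the closed-form optimal angle) can be sketched for the 2-D case as follows (a minimal illustration; the function and variable names are not from the source):

```python
import math

def procrustes_2d(ref, target):
    """Full ordinary Procrustes superimposition of two 2-D point sets.
    Centres and scales both sets, rotates `target` onto `ref`, and
    returns the rotated points together with the Procrustes distance."""
    def normalize(pts):
        k = len(pts)
        mx = sum(x for x, _ in pts) / k            # centroid
        my = sum(y for _, y in pts) / k
        centred = [(x - mx, y - my) for x, y in pts]
        s = math.sqrt(sum(x * x + y * y for x, y in centred) / k)  # RMSD scale
        return [(x / s, y / s) for x, y in centred]

    a = normalize(ref)       # reference points (w, z)
    b = normalize(target)    # points (x, y) to be rotated
    # closed-form optimal angle minimising the sum of squared distances
    num = sum(z * x - w * y for (w, z), (x, y) in zip(a, b))
    den = sum(w * x + z * y for (w, z), (x, y) in zip(a, b))
    theta = math.atan2(num, den)
    rotated = [(x * math.cos(theta) - y * math.sin(theta),
                x * math.sin(theta) + y * math.cos(theta)) for x, y in b]
    dist = math.sqrt(sum((u - w) ** 2 + (v - z) ** 2
                         for (w, z), (u, v) in zip(a, rotated)))
    return rotated, dist

# A translated, scaled, rotated copy of a triangle has Procrustes distance ~0
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
moved = [(3 * x + 5, 3 * y - 2) for x, y in tri]                    # scale, translate
moved = [(0.6 * x - 0.8 * y, 0.8 * x + 0.6 * y) for x, y in moved]  # rotate
_, dist = procrustes_2d(tri, moved)
print(dist < 1e-9)  # True
```

Because full PS removes translation, scale and rotation, two copies of the same shape come out at (numerically) zero distance, while genuinely different shapes do not.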
Procrustes analysis
Physics,Mathematics
1,888
16,312,085
https://en.wikipedia.org/wiki/Hu%E2%80%93Washizu%20principle
In continuum mechanics, and in particular in finite element analysis, the Hu–Washizu principle is a variational principle which says that a three-field action functional of the displacements, strains and stresses is stationary, where C_ijkl is the elastic stiffness tensor relating stress to strain. The Hu–Washizu principle is used to develop mixed finite element methods. The principle is named after Hu Haichang and Kyūichirō Washizu. References Further reading K. Washizu: Variational Methods in Elasticity & Plasticity, Pergamon Press, New York, 3rd edition (1982) O. C. Zienkiewicz, R. L. Taylor, J. Z. Zhu : The Finite Element Method: Its Basis and Fundamentals, Butterworth–Heinemann, (2005). Calculus of variations Finite element method Structural analysis Principles Continuum mechanics
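A common statement of the three-field Hu–Washizu functional (sign conventions and boundary terms vary between textbooks; here f_i are body forces, t̄_i prescribed tractions on Γ_t, and ū_i prescribed displacements on Γ_u) is:

```latex
\Pi(u,\varepsilon,\sigma) =
  \int_\Omega \left[ \tfrac{1}{2}\,C_{ijkl}\,\varepsilon_{ij}\,\varepsilon_{kl}
    - \sigma_{ij}\left(\varepsilon_{ij} - \tfrac{1}{2}(u_{i,j} + u_{j,i})\right)
    - f_i\,u_i \right] \mathrm{d}\Omega
  - \int_{\Gamma_t} \bar{t}_i\,u_i \,\mathrm{d}\Gamma
  - \int_{\Gamma_u} \sigma_{ij}\,n_j\,(u_i - \bar{u}_i)\,\mathrm{d}\Gamma
```

Requiring Π to be stationary with respect to independent variations of the displacements u_i, strains ε_ij and stresses σ_ij recovers, respectively, equilibrium, the constitutive law σ_ij = C_ijkl ε_kl, and the strain–displacement relations together with the displacement boundary conditions.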
Hu–Washizu principle
Physics,Mathematics,Engineering
160
23,309,403
https://en.wikipedia.org/wiki/Hanging%20garden
A hanging garden is a form of sustainable landscape architecture that can take several different forms, such as roof gardens, but is generally defined as a garden planted at a suspended or elevated position off the ground. These gardens are created on walls, fences or terraces, or grow from cliffs; they can take any form in which the garden is not touching the earth. Space optimization is the main motivation for such gardens, with aesthetics and cleaner air also commonly cited reasons. Hanging gardens are popular in urban environments with limited space, such as New York City or Los Angeles. History The first known instance of hanging gardens is the Hanging Gardens of Babylon. Considered one of the Seven Wonders of the Ancient World and the source of the term, the Hanging Gardens of Babylon are still of uncertain historicity. Another example of "reclaiming for nature the land occupied by the building" in the form of hanging gardens is the modernist Villa Savoye by the Swiss-French architect Le Corbusier, a project mentioned in his Five Points of Architecture. Modern In contemporary use, hanging gardens are a green wall on a ground-level facade, a balcony, a terrace, or part of a roof garden of a home, or skyrise greenery on a residential, commercial, or government office building. Today, many different types of hanging garden can be found. Perhaps the best-known example is the one attached to Trump Tower, where trees are occasionally planted in sections of the slanted side of the building. The Oakland Museum, located in Oakland, California, also embraces hanging gardens and roof gardens with its Great Lawn. The lawn is open to all who visit, and the museum's rooftop patio plays host to concerts and events.
Vertical farms are another version of hanging gardens that have become much more common within the last decade, usually grown indoors and stacked on top of each other to take up the least amount of horizontal space. These have some practical advantages over standard gardens as well, such as the use of soilless growing systems such as hydroponics and aquaponics. These gardens also depend less on their surroundings, as the growing environment can be produced and curated to whatever is needed. Products Prefabricated modular hanging wall garden systems have been developed and are on the market internationally. Hanging pots as well as structures like trellises can easily be bought at a local hardware store or supermarket, making setting up a personal hanging garden relatively easy and accessible. Gallery Sources “Garden.” Oakland Museum of California (OMCA), 25 May 2023, museumca.org/on-view/garden/. “Green Roofs and Rooftop Gardens - Calrecycle Home Page.” National Park Service, calrecycle.ca.gov/organics/compostmulch/toolbox/greenroofs/. Accessed 2 December 2023. “The Hanging Gardens of Babylon: History, Legends, and More.” TheCollector, 28 November 2023, www.thecollector.com/hanging-gardens-babylon/. “Hanging Gardens.” National Park Service, U.S. Department of the Interior, www.nps.gov/glca/learn/nature/hanginggardens.htm. Accessed 1 December 2023. “Roof Garden.” TCLF, www.tclf.org/category/designed-landscape-types/roof-garden. Accessed 2 December 2023. “Vertical Farming – No Longer a Futuristic Concept.” Vertical Farming – No Longer A Futuristic Concept : USDA ARS, www.ars.usda.gov/oc/utm/vertical-farming-no-longer-a-futuristic-concept/. Accessed 2 December 2023. Wright, Richardson. The Story of Gardening: From the Hanging Gardens of Babylon to the Hanging Gardens of New York. Dover, 1963. 
References See also Urban agriculture Container gardening Sustainable planting Landscape architecture Sustainable gardening Sustainable products Sustainable architecture Types of garden Urban agriculture
Hanging garden
Engineering,Environmental_science
812
16,764,192
https://en.wikipedia.org/wiki/Hill%20differential%20equation
In mathematics, the Hill equation or Hill differential equation is the second-order linear ordinary differential equation d²y/dt² + f(t)y = 0, where f(t) is a periodic function with minimal period π and average zero. By these we mean that f(t + π) = f(t) for all t, ∫₀^π f(t) dt = 0, and if T is a number with 0 < T < π, the equation f(t + T) = f(t) must fail for some t. It is named after George William Hill, who introduced it in 1886. Because f(t) has period π, the Hill equation can be rewritten using the Fourier series of f(t): d²y/dt² + (θ₀ + 2 Σ_{n=1}^∞ θₙ cos(2nt)) y = 0. Important special cases of Hill's equation include the Mathieu equation (in which only the terms corresponding to n = 0, 1 are included) and the Meissner equation. Hill's equation is an important example in the understanding of periodic differential equations. Depending on the exact shape of f(t), solutions may stay bounded for all time, or the amplitude of the oscillations in solutions may grow exponentially. The precise form of the solutions to Hill's equation is described by Floquet theory. Solutions can also be written in terms of Hill determinants. Aside from its original application to lunar stability, the Hill equation appears in many settings including in modeling of a quadrupole mass spectrometer, as the one-dimensional Schrödinger equation of an electron in a crystal, quantum optics of two-level systems, accelerator physics and electromagnetic structures that are periodic in space and/or in time. References External links Ordinary differential equations
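By Floquet theory, boundedness of solutions is governed by the monodromy matrix M obtained by integrating the two standard fundamental solutions across one period: solutions stay bounded when |tr M| < 2 and grow exponentially when |tr M| > 2. A numerical sketch (the RK4 integrator and step count are implementation choices, not part of Hill's formulation):

```python
import math

def monodromy_trace(f, period, steps=2000):
    """Trace of the monodromy matrix of y'' + f(t) y = 0 over one period,
    found by RK4-integrating the two fundamental solutions."""
    h = period / steps

    def rk4(y, v):
        t = 0.0
        for _ in range(steps):
            k1y, k1v = v, -f(t) * y
            k2y, k2v = v + h / 2 * k1v, -f(t + h / 2) * (y + h / 2 * k1y)
            k3y, k3v = v + h / 2 * k2v, -f(t + h / 2) * (y + h / 2 * k2y)
            k4y, k4v = v + h * k3v, -f(t + h) * (y + h * k3y)
            y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
            v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
            t += h
        return y, v

    y1, _ = rk4(1.0, 0.0)   # fundamental solution with y(0) = 1, y'(0) = 0
    _, v2 = rk4(0.0, 1.0)   # fundamental solution with y(0) = 0, y'(0) = 1
    return y1 + v2

# Sanity check with a constant (trivially pi-periodic) coefficient f(t) = a:
# the exact trace is 2*cos(sqrt(a)*pi), and |trace| < 2 indicates bounded solutions.
print(abs(monodromy_trace(lambda t: 2.0, math.pi) - 2 * math.cos(math.sqrt(2) * math.pi)) < 1e-6)  # True
```

Sweeping the Fourier coefficients of f(t) (e.g. θ₀ and θ₁ in the Mathieu case) and recording where |tr M| crosses 2 traces out the familiar stability tongues of the Hill equation.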
Hill differential equation
Mathematics
277
2,686,467
https://en.wikipedia.org/wiki/Tryptone
Tryptone is the assortment of peptides formed by the digestion of casein by the protease trypsin. Tryptone is commonly used in microbiology to produce lysogeny broth (LB) for the growth of E. coli and other microorganisms. It provides a source of amino acids for the growing bacteria. Tryptone is similar to casamino acids, both being digests of casein, but casamino acids can be produced by acid hydrolysis and typically only have free amino acids and few peptide chains; tryptone by contrast is the product of an incomplete enzymatic hydrolysis with some oligopeptides present. Tryptone is also a component of some germination media used in plant propagation. See also Albumose Trypticase soy agar References Peptides Microbiological media ingredients
Tryptone
Chemistry
178
5,359,115
https://en.wikipedia.org/wiki/Grand%20Egyptian%20Museum
The Grand Egyptian Museum (GEM; al-Matḥaf al-Maṣriyy al-Kabīr), also known as the Giza Museum, is an archaeological museum under construction in Giza, Egypt, about from the Giza pyramid complex. The museum will host over 100,000 artifacts from ancient Egyptian civilization, including the complete Tutankhamun collection, and many pieces will be displayed for the first time. With of floor space, it will be the world's largest archaeological museum. It is being built as part of a new master plan for the Giza Plateau, known as "Giza 2030". The GEM will also host permanent exhibition galleries, temporary exhibitions, special exhibitions, a children's museum, and virtual and large-format screens, with a total floor area of 32,000 m2. The museum was built by a joint venture of the Belgian BESIX Group and the Egyptian Orascom Construction. The original estimated completion date was 2013, and past estimates of the opening date have varied. As of 16 October 2024, the Grand Hall, Grand Staircase, commercial area, 12 public galleries and the exterior gardens are open for tours, while the Tutankhamun gallery and Solar Boat Museum are not yet open to the public. Overview The building design was decided by an architectural competition announced on 7 January 2002. The organisers received 1,557 entries from 82 countries, making it the second largest architectural competition in history. In the second stage of the competition, 20 entries submitted additional information on their designs. Judging was complete by 2 June 2003. The competition was won by architects Róisín Heneghan and Shi-Fu Peng, and their company Heneghan Peng Architects of Ireland; the awarded prize was US$250,000. The building was designed by Heneghan Peng Architects, Buro Happold, Arup and ACE Consulting Engineers (Moharram and Bakhoum). The landscape and site masterplan was designed by West 8; the exhibition masterplan, exhibition design, and museology were led by Atelier Brückner.
On 2 February 2010, Hill International announced that Egypt's Ministry of Culture had signed a contract with a joint venture of Hill and EHAF Consulting Engineers to provide project management services during the design and construction of the Grand Egyptian Museum. The building is shaped like a chamfered triangle in plan. It sits on a site northwest of the pyramids, near a motorway interchange. The building's north and south walls line up directly with the Great Pyramid of Khufu and the Pyramid of Menkaure. The front of the museum includes a large plaza filled with date palms and a façade made of translucent alabaster stone. Inside the main entrance is a large atrium, where large statues will be exhibited. The project is estimated to cost US$550 million, of which US$300 million will be financed from Japanese loans. The remaining costs are financed by the Supreme Council of Antiquities, other donations, and international funds. The new museum is designed to include newer technologies, such as virtual reality. The museum will also be an international center of communication between museums, to promote direct contact with other local and international museums. The Grand Egyptian Museum will include a children's museum, conference center, training center, and workshops designed similarly to the old Pharaonic places. History On 5 January 2002, then-Egyptian President Hosni Mubarak laid the foundation stone of the Grand Egyptian Museum. In 2006, the 3,200-year-old statue of Ramesses II was relocated from Ramses Square in Cairo to the Grand Egyptian Museum site, near the Giza Plateau. It was moved to the atrium of the museum in January 2018. In 2007, GEM secured a $300 million loan from the Japan Bank for International Cooperation. The Egyptian Government will fund $147 million while the remaining $150 million will be funded through donations and international organisations.
In late August 2008, the design team submitted over 5,000 drawings to the Egyptian Ministry of Culture. Following this, the construction tender was announced in October 2008, and earthmoving began to excavate the site for the building. Tendering was due in September 2009, with an estimated completion date of 2013. On 11 January 2012, a joint venture between Egypt's Orascom Construction (OC) and the Belgian BESIX was awarded the contract for phase three of the Grand Egyptian Museum (GEM), a deal valued at $810 million. In January 2018, BESIX and Orascom brought in and installed an 82-ton, 3,200-year-old statue of Ramses II in the Grand Egyptian Museum. It was the first artefact to be installed in the museum during construction, due to its size. On 29 April 2018, a fire broke out near the entrance of the GEM, but artifacts were not damaged and the cause of the fire was unknown. In May 2018, the last of King Tutankhamun's chariots was moved to GEM. In November 2018, the estimate for a full opening was pushed back to the last quarter of 2020, according to Tarek Tawfik, GEM's director. In April 2020, the planned opening of the museum was pushed to 2021 due to the COVID-19 pandemic. Various subsequent estimates ranged from 2020 to 2023. In August 2020, two colossal statues discovered in the sunken city of Thonis-Heracleion by the IEASM were set up in the entrance hall of the GEM. The Grand Egyptian Museum (GEM) began limited public access in February 2023, allowing visitors to explore the main entrance hall and commercial areas. In May 2024, Gihane Zaki was appointed as the head of the museum, and it was announced that the official opening was scheduled for "later this year". However, there has been no official announcement of a definitive opening date. On October 16, 2024, the museum expanded its offerings by opening twelve main galleries as part of a trial run.
These galleries showcase artifacts from various periods of ancient Egyptian history, with daily attendance capped at 4,000 visitors. Despite these significant developments, the full official opening, featuring the complete display of the Tutankhamun collection and the solar boats, remains pending. The GEM is available for private tours in advance of its official opening. Logo design On 10 June 2018, the museum's logo was revealed, to be used in the museum's promotional campaign in Egypt and around the world. The logo was designed by Tariq Atrissi. The cost of the design amounted to 800,000 Egyptian pounds, which included the costs of designing the museum exhibition implemented by the German company Atelier Brückner. Exhibits The exhibits cover about one-third of the museum's total 50-hectare grounds, displaying over 13,000 artefacts in 12 galleries arranged by time period (c. 3100 BCE – 400 CE) and theme (Society/Kingship/Beliefs) from the museum's total collection of 50,000 objects. The artefacts were relocated from storage and museums in Cairo, Luxor, Minya, Sohag, Assiut, Beni Suef, Fayoum, the Delta, and Alexandria. As of January 2025, the Grand Egyptian Museum has only partially opened, rather than fully inaugurating as originally planned. The official grand opening was expected to take place in late 2024, with the first exhibition intended to showcase 5,000 objects from King Tutankhamun's tomb, relocated from the Egyptian Museum in Cairo, and the reconstructed (August 2021) Khufu ship, a solar barque transferred from the Giza Solar boat museum beside the Great Pyramid. At present, the golden mask and primary funerary items of King Tutankhamun have yet to be moved to the new museum, and the Khufu solar ship remains inaccessible to the general public, available only through special private tours arranged in advance. Events The museum has hosted artistic and cultural events since its partial opening.
The first musical concert held in the museum, on 20 January 2023, featured Egypt's soprano Fatma Said along with the United Philharmonic Orchestra & Choir led by Nader Abbassi. The concert was attended by a number of foreign ambassadors and public figures from Egypt and abroad, and it received a great reaction from the Egyptian audience, particularly for the song "Masr Heya Ommi", which received more than 2 million views on Said's YouTube channel, with viewers commenting on the magnificence of the Ramses II statue next to the orchestra. See also Egyptian Museum List of museums of Egyptian antiquities References External links Official website Unofficial informational website Detailed building description JICA-GEM Joint Conservation project Proposed museums in Egypt Archaeological museums in Egypt Egyptological collections in Egypt Giza Plateau Buildings and structures under construction Government agencies of Egypt
Grand Egyptian Museum
Engineering
1,806
30,856,828
https://en.wikipedia.org/wiki/Volvopluteus%20gloiocephalus
Volvopluteus gloiocephalus, commonly known as the big sheath mushroom, rose-gilled grisette, or stubble rosegill, is a species of mushroom in the family Pluteaceae. For most of the 20th century it was known under the names Volvariella gloiocephala or V. speciosa, but recent molecular studies have placed it as the type species of the genus Volvopluteus, newly created in 2011. The cap of the mushroom is about in diameter, varies from white to grey or grey-brown, and is markedly sticky when fresh. The gills start out white but soon turn pink. The stipe is white and has a sack-like volva at the base. Microscopic features and DNA sequence data are of great importance for separating V. gloiocephalus from related species. V. gloiocephalus is a saprotrophic fungus that grows on grassy fields and accumulations of organic matter like compost or woodchip piles. It has been reported from all continents except Antarctica. Taxonomy This taxon has a long and convoluted nomenclatural history. It was originally described as Agaricus gloiocephalus by Swiss botanist Augustin Pyramus de Candolle in 1815 and later sanctioned under this name by Elias Magnus Fries in 1821. The French mycologist Claude Gillet transferred it in 1878 to the genus Volvaria erected by Paul Kummer just a few years earlier, in 1871. The name Volvaria was already taken, as it had been coined by De Candolle for a genus of lichens in 1805. The generic name Volvariella, proposed by the Argentinean mycologist Carlos Luis Spegazzini in 1899, would eventually be adopted for this group in 1953 after a proposal to conserve Kummer's Volvaria against De Candolle's Volvaria was rejected by the Nomenclature Committee for Fungi established under the principles of the International Code of Botanical Nomenclature.
Despite the generic name Volvariella being adopted in 1953 the name Volvariella gloiocephala did not exist until 1986, when the placement of the species in that genus was formally proposed by mycologists Teun Boekhout and Manfred Enderle. The reason for this long interval is that most 20th-century mycologists working on Volvariella (e.g. Rolf Singer, Robert L. Shaffer, Robert Kühner, Henri Romagnesi) considered the epithet "gloiocephalus" to represent a variety with dark basidiocarps of another species of Volvariella, viz. Volvariella speciosa, that has white basidiocarps, and therefore would use the name Volvariella speciosa var. gloiocephala to refer to this taxon. Boekhout & Enderle showed that white and dark basidiocarps can arise from the same mycelium, and that the epithets "gloiocephalus" proposed by De Candolle in 1815 and "speciosa" proposed by Fries in 1818 should be considered to represent the same species with the former having nomenclatural priority. In 1996 Boekhout and Enderle designated a neotype to serve as a representative example of the species. The phylogenetic study of Justo and colleagues showed that Volvariella gloiocephala and related taxa are a separate clade from the majority of the species traditionally classified in Volvariella and therefore another name change was necessary, now as the type species of the newly proposed genus Volvopluteus. The epithet gloiocephalus comes from the Greek terms gloia (γλοία = glue or glutinous substance) and kephalē (κεφαλή = head) meaning "with a sticky head" making reference to the viscid cap surface. It is commonly known as the "big sheath mushroom", "rose-gilled grisette" or the "stubble rosegill". Description The cap of V. gloiocephalus is between in diameter, more or less ovate or conical when young, then expands to convex or flat, sometimes with a slight central depression in old specimens. 
The surface is markedly viscid in fresh basidiocarps; the color ranges from pure white to grey or greyish brown. The gills are crowded, free from the stipe, ventricose (swollen in the middle), and up to broad; they are white when young but turn pink with age. The stipe is long and wide, cylindrical, broadening towards the base; the surface is white, smooth or slightly pruinose (covered with fine white powdery granules). The volva is high, sacciform (pouch-like), white and has a smooth surface. The flesh is white on stipe and cap and it does not change when bruised or exposed to air. Smell and taste vary from indistinct to raphanoid (radish-like) or similar to raw peeled potatoes. The spore print is pinkish brown. The basidiospores are ellipsoid and measure 12–16 by 8–9.5 μm. Basidia are 20–35 by 7–15 μm and usually four-spored, but sometimes two-spored basidia can occur. Pleurocystidia are 60–90 by 20–50 μm with variable morphology: club-shaped, fusiform, ovoid, and sometimes with a small apical papilla. Cheilocystidia are 55–100 by 15–40 μm with similar morphology to the pleurocystidia; they completely cover the gill edge. The cap cuticle (pileipellis) is an ixocutis (parallel hyphae wide embedded in a gelatinous matrix). Stipitipellis is a cutis (parallel hyphae not embedded in a gelatinous matrix). Caulocystidia are sometimes present, measuring 70–180 by 10–25 μm; they are mostly cylindrical. Clamp connections are absent from the hyphae. Similar species Molecular analyses of the internal transcribed spacer region clearly separate the four species currently recognized in Volvopluteus, but morphological identification can be more difficult due to the sometimes overlapping morphological variation among the species. Size of the fruit bodies, color of the cap, spore size, presence or absence of cystidia and morphology of the cystidia are the most important characters for morphological species delimitation in the genus. V. 
earlei has smaller fruit bodies (cap less than in diameter), has no pleurocystidia (usually), and the cheilocystidia usually have a very long apical excrescence (outgrowth). In V. asiaticus the majority of the pleurocystidia have an apical excrescence up to 10–15 μm long and the cheilocystidia are predominantly lageniform (flask-shaped). V. michiganensis has smaller basidiospores, on average less than 12.5 μm long. Volvariella acystidiata, known from central Africa (Zaire) and Italy, somewhat resembles Volvopluteus gloiocephalus. It can be distinguished from the latter by its smaller fruit bodies, with caps up to in diameter, and, microscopically, by the complete absence of cheilo- and pleurocystidia. Habitat and distribution Volvopluteus gloiocephalus is a saprotrophic mushroom that grows on the ground in gardens, grassy fields, both in and outside forest areas, and on accumulations of vegetable matter like compost or woodchip piles. It has also been reported fruiting in greenhouses. In China, it grows in bamboo thickets. It usually fruits in groups of several basidiocarps but it can also be found growing solitary. It is not unusual for a season of "spectacular" fruiting to be followed by several years with no appearance of the mushroom. This species has been reported from all continents except Antarctica, usually under names such as Volvariella gloiocephala or V. speciosa. Molecular data have so far corroborated its occurrence in Europe and North America but records from other continents remain unconfirmed. Uses Volvopluteus gloiocephalus is edible, although considered watery and poor in quality. It was once sold in markets in Perth, Australia. Mature fruit bodies, collected in sufficient quantity, can be used to prepare soup, or added to dishes where wild mushrooms are used, such as stews and casseroles. The mushrooms are best used fresh as they do not preserve well. Young specimens of V. 
gloiocephalus have white gills, so it is possible to mistake them for an Amanita and vice versa. In the United States, there have been several cases of Asian immigrants collecting and eating death caps (Amanita phalloides), under the mistaken assumption that they were Volvariella. A Greek study determined the nutritional composition of fruit bodies: protein 1.49 g/100 g fresh weight (fw), 18.36 g/100 g dry weight (dw); fat 0.54 g/100 g fw, 6.65 g/100 g dw; carbohydrates 5.33 g/100 g fw, 65.64 g/100 g dw. References External links Volvopluteus gloiocephalus Mushroomobserver.org name Online Atlas of Fungi in Northern Ireland Pluteaceae Edible fungi Fungi of Australia Fungi of Asia Fungi of Central America Fungi of Europe Fungi of North America Fungi of South America Fungi described in 1815 Taxa named by Augustin Pyramus de Candolle Fungus species
Volvopluteus gloiocephalus
Biology
2,068
597,844
https://en.wikipedia.org/wiki/Levitation%20%28physics%29
Levitation (from Latin , ) is the process by which an object is held aloft in a stable position, without mechanical support via any physical contact. Levitation is accomplished by providing an upward force that counteracts the pull of gravity (in relation to gravity on Earth), plus a smaller stabilizing force that pushes the object toward a home position whenever it is a small distance away from that home position. The force can be a fundamental force such as magnetic or electrostatic, or it can be a reactive force such as optical, buoyant, aerodynamic, or hydrodynamic. Levitation excludes floating at the surface of a liquid because the liquid provides direct mechanical support. Levitation excludes hovering flight by insects, hummingbirds, helicopters, rockets, and balloons because the object provides its own counter-gravity force. Physics Levitation (on Earth or any planetoid) requires an upward force that cancels out the weight of the object, so that the object does not fall (accelerate downward) or rise (accelerate upward). For positional stability, any small displacement of the levitating object must result in a small change in force in the opposite direction. The small changes in force can be accomplished by gradient field(s) or by active regulation. If the object is disturbed, it might oscillate around its final position, but its motion eventually decreases to zero due to damping effects. (In a turbulent flow, the object might oscillate indefinitely.) Levitation techniques are useful tools in physics research. For example, levitation methods are useful for high-temperature melt property studies because they eliminate the problem of reaction with containers and allow deep undercooling of melts. The containerless conditions may be obtained by opposing gravity with a levitation force instead of allowing an entire experiment to freefall. Magnetic levitation Magnetic levitation is the most commonly seen and used form of levitation. 
This form of levitation occurs when an object is suspended using magnetic fields. Diamagnetic materials are commonly used for demonstration purposes. In this case the restoring force arises from the interaction with the screening currents. For example, a superconducting sample, which can be considered either as a perfect diamagnet or an ideally hard superconductor, easily levitates in an ambient external magnetic field. When cooled with liquid nitrogen, the superconductor becomes superdiamagnetic and levitates on top of a magnet. In a powerful magnetic field utilizing diamagnetic levitation, even small live animals have been levitated. It is possible to levitate pyrolytic graphite by placing thin squares of it above four cube magnets with the north poles forming one diagonal and south poles forming the other diagonal. Researchers have even successfully levitated (non-magnetic) liquid droplets surrounded by paramagnetic fluids. The process of such inverse magnetic levitation is usually referred to as the magneto-Archimedes effect. Magnetic levitation is in development for use in transportation systems. For example, maglev trains are levitated by a large number of magnets. Due to the lack of friction on the guide rails, they are faster, quieter, and smoother than wheeled mass transit systems. Electrodynamic suspension uses AC magnetic fields. Electrostatic levitation In electrostatic levitation an electric field is used to counteract gravitational force. Some spiders shoot silk into the air to ride Earth's electric field. Aerodynamic levitation In aerodynamic levitation, the levitation is achieved by floating the object on a stream of gas, either produced by the object or acting on the object. For example, a ping pong ball can be levitated with the stream of air from a vacuum cleaner set on "blow", exploiting the Coandă effect, which keeps it stable in the airstream. With enough thrust, very large objects can be levitated using this method. 
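The stability requirement described in the Physics section (a small restoring force plus damping that returns the object to its home position) can be sketched numerically. The mass, stiffness, and damping values below are hypothetical, chosen purely for illustration:

```python
# Minimal sketch of the stability condition for levitation: a small
# displacement produces a restoring force, and damping makes the
# oscillation about the home position die out. Gravity does not appear
# explicitly because the levitating force already cancels the weight
# at x = 0. All parameter values are hypothetical.

def simulate(x0=0.01, v0=0.0, m=0.1, k=50.0, c=1.0, dt=1e-3, steps=20000):
    """Integrate m*x'' = -k*x - c*x' (x = displacement from home position)."""
    x, v = x0, v0
    for _ in range(steps):
        a = (-k * x - c * v) / m   # restoring force plus damping
        v += a * dt                # semi-implicit Euler step
        x += v * dt
    return x

# After 20 s of simulated time the oscillation has decayed to ~0.
print(abs(simulate()) < 1e-4)
```

Without the damping term (c = 0) the object would oscillate around the home position indefinitely, which is the behaviour the article notes for turbulent flows.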
Gas film levitation This technique enables the levitation of an object against gravitational force by floating it on a thin gas film formed by gas flow through a porous membrane. Using this technique, high-temperature melts can be kept clean from contamination and be supercooled. A common everyday example is air hockey, where the puck is lifted by a thin layer of air. Hovercraft also use this technique, producing a large region of high-pressure air underneath them. Acoustic levitation Acoustic levitation uses sound waves to provide a levitating force. Optical levitation Optical levitation is a technique in which a material is levitated against the downward force of gravity by an upward force stemming from photon momentum transfer (radiation pressure). Buoyant levitation Gases at high pressure can have a density exceeding that of some solids. Thus they can be used to levitate solid objects through buoyancy. Noble gases are preferred for their non-reactivity. Xenon is the densest non-radioactive noble gas, at 5.894 g/L. Xenon has been used to levitate polyethylene, at a pressure of 154 atm. Casimir force Scientists have discovered a way of levitating ultra-small objects by manipulating the Casimir force, which normally causes objects to stick together due to forces predicted by quantum field theory. This is, however, only possible for micro-objects. Uses Maglev trains Magnetic levitation is used to suspend trains without touching the track. This permits very high speeds, and greatly reduces the maintenance requirements for tracks and vehicles, as little wear occurs. This also means there is no friction, so the only force acting against it is air resistance. Animal levitation Scientists have levitated frogs, grasshoppers, and mice by means of powerful electromagnets utilizing superconductors, producing diamagnetic repulsion of body water. 
The mice acted confused at first, but adapted to the levitation after approximately four hours, suffering no immediate ill effects. Further reading . See also Levitation (illusion) Levitation based inertial sensing Anti-gravity Flight Leidenfrost effect Telekinesis Weightlessness References External links Diamagnetic Levitation (YouTube) Superconducting Levitation Demos Gravity
Levitation (physics)
Physics
1,260
7,824,361
https://en.wikipedia.org/wiki/Hardware-in-the-loop%20simulation
Hardware-in-the-loop (HIL) simulation, also known by various acronyms such as HiL, HITL, and HWIL, is a technique that is used in the development and testing of complex real-time embedded systems. HIL simulation provides an effective testing platform by adding the complexity of the process-actuator system, known as a plant, to the test platform. The complexity of the plant under control is included in testing and development by adding a mathematical representation of all related dynamic systems. These mathematical representations are referred to as the "plant simulation". The embedded system to be tested interacts with this plant simulation. How HIL works HIL simulation must include electrical emulation of sensors and actuators. These electrical emulations act as the interface between the plant simulation and the embedded system under test. The value of each electrically emulated sensor is controlled by the plant simulation and is read by the embedded system under test (feedback). Likewise, the embedded system under test implements its control algorithms by outputting actuator control signals. Changes in the control signals result in changes to variable values in the plant simulation. For example, a HIL simulation platform for the development of automotive anti-lock braking systems may have mathematical representations for each of the following subsystems in the plant simulation: Vehicle dynamics, such as suspension, wheels, tires, roll, pitch and yaw; Dynamics of the brake system's hydraulic components; Road characteristics. Uses In many cases, the most effective way to develop an embedded system is to connect the embedded system to the real plant. In other cases, HIL simulation is more efficient. The metric of development and testing efficiency is typically a formula that includes the following factors: 1. Cost 2. Duration 3. Safety 4. Feasibility The cost of the approach should be a measure of the cost of all tools and effort. 
The duration of development and testing affects the time-to-market for a planned product. Safety factor and development duration are typically equated to a cost measure. Specific conditions that warrant the use of HIL simulation include the following: Enhancing the quality of testing Tight development schedules High-burden-rate plant Early process human factor development Enhancing the quality of testing Usage of HILs enhances the quality of the testing by increasing the scope of the testing. Ideally, an embedded system would be tested against the real plant, but most of the time the real plant itself imposes limitations in terms of the scope of the testing. For example, testing an engine control unit as a real plant can create the following dangerous conditions for the test engineer: Testing at or beyond the range of the certain ECU parameters (e.g. Engine parameters etc.) Testing and verification of the system at failure conditions In the above-mentioned test scenarios, HIL provides the efficient control and safe environment where test or application engineer can focus on the functionality of the controller. Tight development schedules The tight development schedules associated with most new automotive, aerospace and defense programs do not allow embedded system testing to wait for a prototype to be available. In fact, most new development schedules assume that HIL simulation will be used in parallel with the development of the plant. For example, by the time a new automobile engine prototype is made available for control system testing, 95% of the engine controller testing will have been completed using HIL simulation. The aerospace and defense industries are even more likely to impose a tight development schedule. Aircraft and land vehicle development programs are using desktop and HIL simulation to perform design, test, and integration in parallel. 
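The feedback loop described under "How HIL works" (the plant simulation feeds emulated sensor values to the embedded system under test, which returns actuator commands that update the plant) can be sketched as a toy simulation. The first-order wheel-speed plant and the on/off controller below are invented placeholders, not a real ECU or ABS model:

```python
# Minimal sketch of a HIL loop: a plant simulation produces an emulated
# sensor value, the controller under test reads it and returns an
# actuator command, and the command changes the plant state. Both the
# plant model and the controller are hypothetical stand-ins.

def plant_step(speed, brake_cmd, dt=0.01):
    """Toy plant: drive torque accelerates the wheel, braking decelerates it."""
    accel = -8.0 if brake_cmd else 2.0   # m/s^2, illustrative values only
    return max(speed + accel * dt, 0.0)

def controller(sensor_speed, target=5.0):
    """Stand-in for the embedded system under test: brake above the target."""
    return sensor_speed > target         # actuator command (brake on/off)

speed = 30.0                             # initial wheel speed, m/s
for _ in range(1000):                    # 10 s of simulated time
    cmd = controller(speed)              # controller reads the emulated sensor
    speed = plant_step(speed, cmd)       # plant reacts to the actuator signal

print(round(speed, 1))                   # settles close to the 5 m/s threshold
```

A real HIL rig replaces `plant_step` with high-fidelity real-time models and the `controller` function with the physical ECU wired to electrically emulated sensors and actuators, but the closed-loop structure is the same.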
High-burden-rate plant In many cases, the plant is more expensive than a high fidelity, real-time simulator and therefore has a higher-burden rate. Therefore, it is more economical to develop and test while connected to a HIL simulator than the real plant. For jet engine manufacturers, HIL simulation is a fundamental part of engine development. The development of Full Authority Digital Engine Controllers (FADEC) for aircraft jet engines is an extreme example of a high-burden-rate plant. Each jet engine can cost millions of dollars. In contrast, a HIL simulator designed to test a jet engine manufacturer's complete line of engines may demand merely a tenth of the cost of a single engine. Early process human factors development HIL simulation is a key step in the process of developing human factors, a method of ensuring usability and system consistency using software ergonomics, human-factors research and design. For real-time technology, human-factors development is the task of collecting usability data from man-in-the-loop testing for components that will have a human interface. An example of usability testing is the development of fly-by-wire flight controls. Fly-by-wire flight controls eliminate the mechanical linkages between the flight controls and the aircraft control surfaces. Sensors communicate the demanded flight response and then apply realistic force feedback to the fly-by-wire controls using motors. The behavior of fly-by-wire flight controls is defined by control algorithms. Changes in algorithm parameters can translate into more or less flight response from a given flight control input. Likewise, changes in the algorithm parameters can also translate into more or less force feedback for a given flight control input. The “correct” parameter values are a subjective measure. Therefore, it is important to get input from numerous man-in-the-loop tests to obtain optimal parameter values. 
In the case of fly-by-wire flight controls development, HIL simulation is used to simulate human factors. The flight simulator includes plant simulations of aerodynamics, engine thrust, environmental conditions, flight control dynamics and more. Prototype fly-by-wire flight controls are connected to the simulator and test pilots evaluate flight performance given various algorithm parameters. The alternative to HIL simulation for human factors and usability development is to place prototype flight controls in early aircraft prototypes and test for usability during flight test. This approach fails when measuring the four conditions listed above. Cost: A flight test is extremely costly and therefore the goal is to minimize any development occurring with flight test. Duration: Developing flight controls with flight test will extend the duration of an aircraft development program. Using HIL simulation, the flight controls may be developed well before a real aircraft is available. Safety: Using flight test for the development of critical components such as flight controls has a major safety implication. Should errors be present in the design of the prototype flight controls, the result could be a crash landing. Feasibility: It may not be possible to explore certain critical timings (e.g. sequences of user actions with millisecond precision) with real users operating a plant. Likewise for problematical points in parameter space that may not be easily reachable with a real plant but must be tested against the hardware in question. Use in various disciplines Automotive systems In context of automotive applications "Hardware-in-the-loop simulation systems provide such a virtual vehicle for systems validation and verification." 
Since in-vehicle driving tests for evaluating the performance and diagnostic functionalities of Engine Management Systems are often time-consuming, expensive and not reproducible, HIL simulators allow developers to validate new hardware and software automotive solutions while respecting quality requirements and time-to-market restrictions. In a typical HIL simulator, a dedicated real-time processor executes mathematical models which emulate engine dynamics. In addition, an I/O unit allows the connection of vehicle sensors and actuators (which usually present a high degree of non-linearity). Finally, the Electronic Control Unit (ECU) under test is connected to the system and stimulated by a set of vehicle maneuvers executed by the simulator. At this point, HIL simulation also offers a high degree of repeatability during the testing phase. In the literature, several HIL-specific applications are reported, and simplified HIL simulators have been built for specific purposes. When testing a new ECU software release, for example, experiments can be performed in open loop, and therefore several engine dynamic models are no longer required. The strategy is restricted to the analysis of ECU outputs when excited by controlled inputs. In this case, a Micro HIL system (MHIL) offers a simpler and more economical solution. Since complex model processing is no longer needed, a full-size HIL system is reduced to a portable device composed of a signal generator, an I/O board, and a console containing the actuators (external loads) to be connected to the ECU. Radar HIL simulation for radar systems has evolved from radar jamming. Digital Radio Frequency Memory (DRFM) systems are typically used to create false targets to confuse the radar in the battlefield, but these same systems can simulate a target in the laboratory. 
This configuration allows for the testing and evaluation of the radar system, reducing the need for flight trials (for airborne radar systems) and field tests (for search or tracking radars), and can give an early indication to the susceptibility of the radar to electronic warfare (EW) techniques. Robotics Techniques for HIL simulation have been recently applied to the automatic generation of complex controllers for robots. A robot uses its own real hardware to extract sensation and actuation data, then uses this data to infer a physical simulation (self-model) containing aspects such as its own morphology as well as characteristics of the environment. Algorithms such as Back-to-Reality (BTR) and Estimation Exploration (EEA) have been proposed in this context. Power systems In recent years, HIL for power systems has been used for verifying the stability, operation, and fault tolerance of large-scale electrical grids. Current-generation real-time processing platforms have the capability to model large-scale power systems in real-time. This includes systems with more than 10,000 buses with associated generators, loads, power-factor correction devices, and network interconnections. These types of simulation platforms enable the evaluation and testing of large-scale power systems in a realistic emulated environment. Moreover, HIL for power systems has been used for investigating the integration of distributed resources, next-generation SCADA systems and power management units, and static synchronous compensator devices. Offshore systems In offshore and marine engineering, control systems and mechanical structures are generally designed in parallel. Testing the control systems is only possible after integration. As a result, many errors are found that have to be solved during the commissioning, with the risks of personal injuries, damaging equipment and delays. To reduce these errors, HIL simulation is gaining widespread attention. 
This is reflected by the adoption of HIL simulation in the Det Norske Veritas rules. References External links Introduction to Hardware-in-the-Loop Simulation. Embedded systems
Hardware-in-the-loop simulation
Technology,Engineering
2,174
48,048,702
https://en.wikipedia.org/wiki/Beyond%20Natural%20Selection
Beyond Natural Selection is a 1991 book by Robert G. Wesson, published by MIT Press. Wesson argues the case for pluralism in biology. He suggests alternative mechanisms of evolution rather than natural selection. Wesson argues that reductionism is inadequate and looks to chaos theory as an example of the different approach that is needed to explain evolution. The book presents unsolved problems that Wesson believed natural selection could not account for. Reception The paleontologist Joseph G. Carter, in a critical review for American Scientist, wrote that the book "includes innumerable oversimplifications and misrepresentations of both evolutionary theory and the paleontological record." Carter noted that the book was filled with errors such as Wesson's claim that there is a lack of transitional fossils. Carter wrote that the "book approaches the scientific illiteracy" of the intelligent design text Of Pandas and People, and concluded it was an "embarrassment to the editors of the MIT Press". The ecologist Arthur M. Shapiro, in a review for The Quarterly Review of Biology, negatively reviewed the book for misunderstanding evolutionary biology and poor scholarship. According to Shapiro, he "found an average of just over one error... per page by checking pages of this book at random." The historian Peter J. Bowler compared the book to anti-Darwinian and creationist works. He criticized the book for utilizing straw man arguments and presenting no valid scientific alternative to natural selection. Bowler noted that it is "easy to criticize an established theory, much more difficult to come up with a workable alternative... Hand waving about the creative power of the organism makes a nice-sounding philosophical position but a poor scientific theory". References 1991 non-fiction books Books about evolution MIT Press books Non-Darwinian evolution 1991 in biology
Beyond Natural Selection
Biology
366
58,504,805
https://en.wikipedia.org/wiki/Swarm%203D%20printing
Swarm 3D printing or cooperative 3D printing or swarm manufacturing is a digital manufacturing platform that employs a swarm of mobile robots with different functionalities to work together to print and assemble products based on digital designs. A digital design is first divided into smaller chunks and components based on its geometry and functions, which are then assigned to different specialized robots for printing and assembly in parallel and in sequence based on the dependency of the tasks. The robots typically move freely on an open factory floor, or through the air, and could carry different tool heads. Some common tool heads include material deposition tool heads (e.g., filament extruder, inkjet printhead), pick and place tool head for embedding of pre-manufactured components, laser cutter, welding tool, etc. In some cases, operations are managed by artificial intelligence algorithms, increasingly prevalent with larger swarms or more complex robots, which require elements of autonomy to work together effectively. While in its early stage of development, swarm 3D printing is currently being commercialized by startup companies. According to Additive Manufacturing Magazine, AMBOTS is credited with creating the first end-to-end solution for cooperative 3D printing. Using the Rapid Induction Printing metal additive manufacturing process, Rosotics was the first company to demonstrate swarm 3D printing using a metallic payload, and the only to achieve metallic 3D printing from an airborne platform. See also References External links Fully decentralized robotic swarm performing collective search and exploration -- Applied Complexity Group and Motion, Energy Control Lab at SUTD Swarm-bots: Swarms of self-assembling artifacts -- EU IST-FET project (2001-2005) Award-winning swarm-bot video at AAAI 2007 3D printing
Swarm 3D printing
Technology
347
20,785,437
https://en.wikipedia.org/wiki/Molecular%20Frontiers%20Foundation
The Molecular Frontiers Foundation (MFF) was founded under the auspices of the Royal Swedish Academy of Sciences in 2007 by Bengt Nordén, a professor of physical chemistry at Chalmers University of Technology in Sweden and the former chair of the Nobel Committee for Chemistry. Part of the mission of the MFF, according to Nordén, is to counter the "increasingly bad image that chemistry has in society" and the "decreasing interest in science by the young generation". Founding members of the Molecular Frontiers Foundation include Magdalena Eriksson, Lorie Karnath (founder of the Molecular Frontiers Journal), and Shuguang Zhang (MIT). The MFF counts eleven Nobel Laureates amongst its 29-member Scientific Advisory Board. It holds international symposia around the world, including: May 23–24, 2017, in Stockholm, Sweden ("Tailored Biology: Fundamental and Medicinal Insights", co-chaired by Bengt Nordén and Lorie Karnath); May 9–10, 2019, in Stockholm, Sweden ("Planet Earth: A Scientific Journey", co-chaired by Bengt Nordén and Lorie Karnath); and March 6–7, 2023, in Berkeley, CA ("On the Nature of Water", co-chaired by Omar Yaghi, Lorie Karnath, Bengt Nordén, Douglas Clark, and Peidong Yang). Through its science-discussion website "MoleClues", the foundation awards the yearly "Molecular Frontiers Inquiry Prize", also known as the "kid Nobel", to equal numbers of girls and boys from around the world for asking the most penetrating scientific question. The entries are collected online and judged by the MFF Scientific Advisory Board during the annual spring MFF Youth Forum at the Royal Swedish Academy of Sciences in Stockholm, Sweden. In April 2023, Lorie Karnath was elected president of the organization by the Molecular Frontiers Foundation board, for a three-year term. 
References External links Official website of the MFF Molecular Frontiers Journal Bengt Nordén interviewed in Nature Chemical Biology 3, 79 (2007) List of MFF Scientific Advisory Board Members Science discussion website for children Molecular Frontiers Inquiry Prize Op Ed in New Scientist on the Molecular Frontiers Inquiry Prize, a "Nobel prize for children" Chemistry organizations
Molecular Frontiers Foundation
Chemistry
440
45,214,668
https://en.wikipedia.org/wiki/WRKY%20transcription%20factor
WRKY transcription factors (pronounced ‘worky’) are proteins that bind DNA. They are transcription factors that regulate many processes in plants and algae (Viridiplantae), such as the responses to biotic and abiotic stresses, senescence, seed dormancy and seed germination and some developmental processes but also contribute to secondary metabolism. Like many transcription factors, WRKY transcription factors are defined by the presence of a DNA-binding domain; in this case, it is the WRKY domain. The WRKY domain was named in 1996 after the almost invariant WRKY amino acid sequence at the N-terminus and is about 60 residues in length. In addition to containing the ‘WRKY signature’, WRKY domains also possess an atypical zinc-finger structure at the C-terminus (either Cx4-5Cx22-23HxH or Cx7Cx23HxC). Most WRKY transcription factors bind to the W-box promoter element that has a consensus sequence of TTGACC/T. Individual WRKY proteins do appear in the human protozoan parasite Giardia lamblia and slime mold Dictyostelium discoideum. Structural diversity WRKY transcription factors are denoted by a 60-70 amino acid WRKY protein domain composed of a conserved WRKYGQK motif and a zinc-finger region. Based on the amino acid sequence WRKY transcription factors are classified into three major categories, group I, group II, and group III. Group I WRKY proteins are primarily denoted by the presence of two WRKY protein domains, whereas both groups II and III each possess only one domain. Group III WRKY proteins have a C2HC zinc finger instead of the C2H2 motif of group I and II factors. The structure of several plant WRKY domains has been elucidated using crystallography and nuclear magnetic resonance spectroscopy. As soon as the WRKY domain was characterized, it was suggested that it contained a novel zinc finger structure and the first evidence to support this came from studies with 2-phenanthroline that chelates zinc ions. 
Addition of 2-phenanthroline to gel retardation assays that contained E. coli-expressed WRKY proteins resulted in a loss of binding to the W-box target sequence. The other suggestion was that the WRKY signature amino acid sequence at the N-terminus of the WRKY domain directly binds to the W-box sequence in the DNA of target promoters. These suggestions were shown to be correct by publication of the solution structure of the C-terminal WRKY domain of the Arabidopsis WRKY4 protein. The WRKY domain was found to form a four-stranded β-sheet. Soon afterwards, a crystal structure of the C-terminal WRKY domain of the Arabidopsis WRKY1 protein was reported. This showed a similar result to the solution structure except that it may contain an additional β-strand at the N-terminus of the domain. From these two studies it appears that the conserved WRKYGQK signature amino acid sequence enters the major groove of the DNA to bind to the W-box. Recently, the first structural determination of the WRKY domain complexed with a W-box was reported. The NMR solution structure of the WRKY DNA-binding domain of Arabidopsis WRKY4 in complex with W-box DNA revealed that part of a four-stranded β-sheet enters the major groove of DNA in an atypical mode that the authors named the β-wedge, where this sheet is almost perpendicular to the DNA helical axis. As initially predicted, amino acids in the conserved WRKYGQK signature motif contact the W-box DNA bases mainly through extensive apolar contacts with thymine methyl groups. These structural data explain the conservation of both the WRKY signature sequence at the N-terminus of the WRKY domain and the conserved cysteine and histidine residues. They also provide the molecular basis for the previously noted remarkable conservation of both the WRKY amino acid signature sequence and the W-box DNA sequence. History In 1994 and 1995, the first two reports of WRKY transcription factors appeared. 
They described newly discovered but as yet ill-defined DNA binding proteins that played potential roles in the regulation of gene expression by sucrose (SPF1) or during germination (ABF1 and ABF2). A third report appeared in 1996 that identified WRKY1, WRKY2 and WRKY3 from parsley. The authors named the new transcription factor family the WRKY family (pronounced ‘worky’) after a conserved amino acid sequence at the N-terminus of the DNA-binding domain. The parsley WRKY proteins also provided the first evidence that WRKY transcription factors play roles in regulating plant responses to pathogens. Numerous papers have now shown this to be a major function of WRKY transcription factors. Since these initial publications, it has become clear that the WRKY family is among the ten largest families of transcription factors in higher plants and that these transcription factors play key roles in regulating a number of plant processes including the responses to biotic and abiotic stresses, germination, senescence, and some developmental processes. Evolution WRKY transcription factor genes are found throughout the plant lineage and also outside of the plant lineage in some diplomonads, social amoebae, fungi incertae sedis, and amoebozoa. This patchy distribution suggests that lateral gene transfer is responsible. These lateral gene transfer events appear to pre-date the formation of the WRKY groups in flowering plants, where there are seven well-defined groups, Groups I + IIc, Groups IIa + IIb, Groups IId + IIe, and Group III. Flowering plants also contain proteins with domains typical for both resistance (R) proteins and WRKY transcription factors. R protein-WRKY genes have evolved numerous times in flowering plants, each type being restricted to specific flowering plant lineages. These chimeric proteins contain not only novel combinations of protein domains but also novel combinations and numbers of WRKY domains. 
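The group I/II/III scheme described under Structural diversity (two WRKY domains → group I; one domain with a C2H2 finger → group II; one domain with a C2HC finger → group III) can be sketched as a toy classifier. This is only an illustration: the regular expressions are naive encodings of the motif spacings quoted in the text, and real classification also relies on phylogeny, not just motif counting.

```python
import re

# Naive regex encodings of the patterns quoted in the text:
# WRKYGQK signature; Cx4-5Cx22-23HxH (C2H2); Cx7Cx23HxC (C2HC).
WRKY_MOTIF = re.compile(r"WRKYG[QK]K")            # near-invariant signature
C2H2_FINGER = re.compile(r"C.{4,5}C.{22,23}H.H")  # groups I and II
C2HC_FINGER = re.compile(r"C.{7}C.{23}H.C")       # group III

def classify_wrky(seq: str) -> str:
    n_domains = len(WRKY_MOTIF.findall(seq))
    if n_domains == 0:
        return "not a WRKY protein"
    if n_domains >= 2:
        return "group I"          # two WRKY domains
    if C2HC_FINGER.search(seq):
        return "group III"        # single domain, C2HC finger
    if C2H2_FINGER.search(seq):
        return "group II"         # single domain, C2H2 finger
    return "group II/III (zinc finger not resolved)"
```

The test sequences one would feed such a classifier here would be synthetic fragments, not real WRKY proteins; a production tool would use curated domain models instead of regexes.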
Several early reports proposed that a group I WRKY transcription factor was the progenitor of the family. It was thought that a single group I WRKY domain occurred first and then duplicated to form the original ancestral WRKY transcription factor. However, more recent evidence suggests that WRKY transcription factors evolved from a single group IIc-like gene, which then diversified into group I, group IIc, and group IIa+b domains. The original WRKY protein domain has been proposed to have arisen from the GCM1 and FLYWCH zinc finger factors. GCM1 and FLYWCH are proposed ancestral proteins based on their crystal structural similarity to the WRKY domain. Both GCM1 and FLYWCH belong to families of DNA-binding factors found in metazoans. The plant-specific NAC transcription factor family also shares a common structural shape and origin with WRKY transcription factors. During plant evolution, the WRKY family has dramatically expanded, which is proposed to be a result of gene duplication. Some species, including Arabidopsis thaliana, rice (Oryza sativa), and tomato (Solanum lycopersicum), have WRKY groups which dramatically expanded and diversified in recent evolutionary history. However, differences in expression, not variation in gene sequences, have likely led to the diverse functions of WRKY genes. Such a model is plausible as WRKY family members are part of numerous phytohormone, developmental, and defense signaling transcriptional networks. Furthermore, W-box elements for WRKY binding occur in the promoters of many other WRKY transcription factors, indicating not simply a hierarchical rank in gene activation, but also which genes may have arisen later during evolution after initial WRKY regulatory networks were established. Function Over the last two decades great effort has been invested in characterizing WRKY transcription factors. The results show that WRKY transcription factors function in a diverse array of plant responses, both to internal and external cues.
Plant development Studies have demonstrated the function of WRKY transcription factors in plant development. Successful male gametogenesis and tolerance to interploidy crosses both require WRKY transcription factors. Embryo and root development also require WRKY transcription factors. WRKYs also contribute to determination of seed size and seed coat color in Arabidopsis. Furthermore, WRKY transcription factors have been shown to play key roles in regulation of developmentally programmed leaf senescence. Abiotic and biotic stresses One of the most notable roles of the WRKY transcription factor family is the regulation of plant stress tolerance. WRKYs participate in nearly every aspect of plant defense to abiotic and biotic stressors. WRKYs are known to regulate cold, drought, flooding, heat, heavy metal toxicity, low humidity, osmotic, oxidative, salt and UV stresses. Likewise, WRKY transcription factors play an essential role in plant tolerance to biotic stresses, protecting against numerous viral, bacterial and fungal pathogens, as well as insect herbivory. Plants are believed to perceive pathogens via pathogen-associated molecular pattern (PAMP)-triggered immunity and effector-triggered immunity. WRKY transcription factors participate in regulating responses to pathogens by targeting both PAMP- and effector-triggered immunity. Hormone signaling WRKY transcription factors function through a variety of plant hormone signaling cascades. Over half of Arabidopsis thaliana WRKY transcription factors respond to salicylic acid treatment. At least 25% of WRKY transcription factors from Madagascar periwinkle (Catharanthus roseus) are responsive to jasmonate. Similarly, in grape (Vitis vinifera) 63%, 73%, 76%, and 81% of WRKY transcription factors are responsive to salicylic acid, ethylene, abscisic acid, and jasmonate treatment, respectively. In Arabidopsis thaliana, two important WRKY transcription factors are WRKY57 and WRKY70.
WRKY57 mediates crosstalk between jasmonate and auxin signaling cascades, whereas WRKY70 moderates signaling between the jasmonate and salicylic acid pathways. Arabidopsis thaliana WRKY23 functions downstream of auxin signaling to positively activate expression of flavonols, which function as polar auxin transport inhibitors, providing negative feedback that suppresses further auxin responses. Several WRKY transcription factors also respond to gibberellin treatment. Primary and secondary metabolism Due to the difficulty in measuring phenotypes, less is known about the roles of WRKY transcription factors in plant metabolism. The earliest reports identified WRKYs based on their ability to regulate β-amylase, a gene involved in catabolism of starch into sugars. Since then, WRKY transcription factors have also been shown to regulate phosphate acquisition and tolerance to arsenic. Additionally, WRKYs are needed for proper expression of lignin biosynthetic pathway genes, which form products necessary for cell wall and xylem formation. Analysis of WRKY transcription factors from numerous plant species indicates the importance of the family in regulating secondary metabolism. WRKY transcription factors also play a role in regulating pathways for the biosynthesis of pharmaceutically valuable plant-specialized metabolites. Efforts to use WRKY transcription factors to improve production of the valuable anti-malarial drug artemisinin have been successful. Mode of action A long-standing question in the field of transcriptional regulation is how large families of regulators that bind a consensus DNA sequence dictate the expression of different target genes. The WRKY transcription factor family has long exemplified this problem. Plant species contain numerous WRKY transcription factors which predominantly recognize a conserved cis-element. Only recently has it begun to be revealed how different WRKY transcription factors regulate unique sets of target genes.
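Most WRKY transcription factors recognize the W-box element (consensus TTGAC(C/T), as noted earlier), so scanning a promoter for candidate binding sites is a simple pattern-matching exercise. A minimal sketch follows; the promoter sequence is invented for illustration, and a real analysis would also scan the reverse complement and weigh flanking bases.

```python
import re

# W-box consensus TTGAC(C/T) as given in the text; forward strand only.
W_BOX = re.compile(r"TTGAC[CT]")

def find_w_boxes(promoter: str):
    """Return (position, matched sequence) for each candidate W-box."""
    return [(m.start(), m.group()) for m in W_BOX.finditer(promoter.upper())]

promoter = "GGCATTGACTAAGCTTTGACCATATGACA"  # invented example sequence
hits = find_w_boxes(promoter)              # [(4, 'TTGACT'), (15, 'TTGACC')]
```

Because many WRKY gene promoters themselves carry W-boxes (as the text notes), this kind of scan is a first step toward mapping WRKY-on-WRKY regulatory networks.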
Variation in cis-element recognition Early work indicated that the WRKY family could bind the W-box, (T/A)TGAC(T/A). Later, a barley (Hordeum vulgare) WRKY transcription factor, SUSIBA2, was found to bind the Sugar Response Element (TAAAGATTACTAATAGGAA), illustrating that some diversity exists in the DNA sequences which WRKYs can recognize. Since then, WRKYs have been found to bind a more generic GAC core cis-element, with flanking sequences dictating DNA-protein interactions. On the protein side, differences in the consensus motif and downstream arginine or lysine residues dictate the exact flanking sequence recognized. Additionally, contrary to early reports, both WRKY domains of group I family members can bind DNA. The implications of these results are still being resolved. Protein-protein interactions One mechanism for determining WRKY binding activity is protein-protein interactions. WRKY transcription factors have been found to interact with a variety of proteins, some of which do so in a group-specific manner. Recent evidence suggests that the VQ protein family is an important regulator of group I and group IIc WRKY transcription factors. VQ proteins appear to bind the WRKY domain, thus inhibiting protein-DNA interactions. At least one WRKY transcription factor, Arabidopsis WRKY57, interacts with jasmonate ZIM-domain (JAZ) and auxin/indole acetic acid (AUX/IAA) repressors of the jasmonate and auxin signaling cascades, respectively, indicating a point of crosstalk between these phytohormones. Other WRKYs interact with histone deacetylases. Group IIa WRKY factors form homodimers and heterodimers within the subgroup and with other group II subgroups. Group IId WRKY transcription factors typically possess a domain allowing interaction with calcium-bound calmodulin. Phosphorylation Protein phosphorylation is a common method of regulating protein activity, and WRKY transcription factors are no exception.
WRKY genes involved in plant defense, hormone signaling, and secondary metabolism are regulated by phosphorylation via mitogen-activated protein kinase (MAPK) cascades. Additionally, a MAPK can phosphorylate a VQ protein, freeing the WRKY transcription factor for target gene activation. While kinases phosphorylating WRKY transcription factors are known, phosphatases removing the phosphate groups have yet to be identified. Proteasomal degradation Protein degradation via the proteasome is a common feature in plant regulatory networks to limit the duration of activation or repression by transcription factors. WRKY transcription factors have also been found to be regulated by proteasomal degradation mechanisms. In Chinese grapevine (Vitis pseudoreticulata) ERYSIPHE NECATOR-INDUCED RING FINGER PROTEIN1 targets WRKY11 for degradation, leading to enhanced powdery mildew resistance. In rice, WRKY45 is degraded by the proteasome, although the E3 ubiquitin ligase responsible remains unknown. References External links WRKY family at PlantTFDB: Plant Transcription Factor Database WRKY family at Plant Transcription Factor Database at University of Potsdam WRKY Wide Web WRKY family at Superfamily WRKY Transcription Factor Family at The Arabidopsis Information Resource The Somssich Lab The Shen Lab Somssich’s list of WRKY-related publications The Eulgem Lab Transcription factors
WRKY transcription factor
https://en.wikipedia.org/wiki/Reversible-deactivation%20radical%20polymerization
In polymer chemistry, reversible-deactivation radical polymerizations (RDRPs) are members of the class of reversible-deactivation polymerizations which exhibit much of the character of living polymerizations, but cannot be categorized as such because they are not free of chain transfer or chain termination reactions. Several different names have been used in the literature: living radical polymerization, living free radical polymerization, controlled/"living" radical polymerization, controlled radical polymerization, and reversible-deactivation radical polymerization. Though the term "living" radical polymerization was used in early days, it has been discouraged by IUPAC, because radical polymerization cannot be a truly living process due to unavoidable termination reactions between two radicals. The commonly-used term controlled radical polymerization is permitted, but reversible-deactivation radical polymerization or controlled reversible-deactivation radical polymerization (RDRP) is recommended. History and character Radical polymerization – sometimes misleadingly called 'free' radical polymerization – is one of the most widely used polymerization processes, since it can be applied to a great variety of monomers; it can be carried out in the presence of certain functional groups; the technique is rather simple and easy to control; the reaction conditions can vary from bulk through solution, emulsion, and miniemulsion to suspension; and it is relatively inexpensive compared with competitive techniques. The steady-state concentration of the growing polymer chains is 10−7 M by order of magnitude, and the average life time of an individual polymer radical before termination is about 5–10 s. A drawback of the conventional radical polymerization is the limited control of chain architecture, molecular weight distribution, and composition.
In the late 20th century it was observed that when certain components were added to systems polymerizing by a chain mechanism they are able to react reversibly with the (radical) chain carriers, putting them temporarily into a 'dormant' state. This had the effect of prolonging the lifetime of the growing polymer chains (see above) to values comparable with the duration of the experiment. At any instant most of the radicals are in the inactive (dormant) state, however, they are not irreversibly terminated (‘dead’). Only a small fraction of them are active (growing), yet with a fast rate of interconversion of active and dormant forms, faster than the growth rate, the same probability of growth is ensured for all chains, i.e., on average, all chains are growing at the same rate. Consequently, rather than a most probable distribution, the molecular masses (degrees of polymerization) assume a much narrower Poisson distribution, and a lower dispersity prevails. IUPAC also recognizes the alternative name, ‘controlled reversible-deactivation radical polymerization’ as acceptable, "provided the controlled context is specified, which in this instance comprises molecular mass and molecular mass distribution." These types of radical polymerizations are not necessarily ‘living’ polymerizations, since chain termination reactions are not precluded". The adjective ‘controlled’ indicates that a certain kinetic feature of a polymerization or structural aspect of the polymer molecules formed is controlled (or both). The expression ‘controlled polymerization’ is sometimes used to describe a radical or ionic polymerization in which reversible-deactivation of the chain carriers is an essential component of the mechanism and interrupts the propagation that secures control of one or more kinetic features of the polymerization or one or more structural aspects of the macromolecules formed, or both. 
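The passage above notes that fast interconversion of active and dormant chains gives every chain the same growth probability, so the chain lengths approach a Poisson distribution with low dispersity. That claim can be illustrated with a toy simulation of the fast-exchange limit. This is an idealized sketch, not a kinetic model of any specific RDRP chemistry: termination is ignored and exchange is assumed infinitely fast, so every monomer addition goes to a uniformly random chain.

```python
import random

# Fast-exchange limit: each monomer addition is assigned to a chain chosen
# uniformly at random, mimicking equal growth probability for all chains.
# Chain lengths then follow a Poisson distribution, and the dispersity
# approaches 1 + 1/DPn rather than the ~2 of conventional radical polymerization.
random.seed(0)

n_chains = 1000
additions = 50_000            # target number-average DP = 50
lengths = [0] * n_chains
for _ in range(additions):
    lengths[random.randrange(n_chains)] += 1

dp_n = sum(lengths) / n_chains                       # number-average DP
dp_w = sum(L * L for L in lengths) / sum(lengths)    # weight-average DP
dispersity = dp_w / dp_n                             # ≈ 1 + 1/50 = 1.02
```

Slowing the exchange (letting an active chain add many monomers per activation) broadens the distribution again, which is why the text stresses that interconversion must be faster than growth.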
The expression ‘controlled radical polymerization’ is sometimes used to describe a radical polymerization that is conducted in the presence of agents that lead to e.g. atom-transfer radical polymerization (ATRP), nitroxide-(aminoxyl) mediated polymerization (NMP), or reversible-addition-fragmentation chain transfer (RAFT) polymerization. All these and further controlled polymerizations are included in the class of reversible-deactivation radical polymerizations. Whenever the adjective ‘controlled’ is used in this context the particular kinetic or the structural features that are controlled have to be specified. Reversible-deactivation polymerization There is a mode of polymerization referred to as reversible-deactivation polymerization which is distinct from living polymerization, despite some common features. Living polymerization requires a complete absence of termination reactions, whereas reversible-deactivation polymerization may contain a similar fraction of termination as conventional polymerization with the same concentration of active species. Some important aspects of these are compared in the table: Common features As the name suggests, the prerequisite of a successful RDRP is fast and reversible activation/deactivation of propagating chains. There are three types of RDRP; namely deactivation by catalyzed reversible coupling, deactivation by spontaneous reversible coupling and deactivation by degenerative transfer (DT). A mixture of different mechanisms is possible; e.g. a transition metal mediated RDRP could switch among ATRP, OMRP and DT mechanisms depending on the reaction conditions and reagents used. In any RDRP processes, the radicals can propagate with the rate coefficient kp by addition of a few monomer units before the deactivation reaction occurs to regenerate the dormant species. Concurrently, two radicals may react with each other to form dead chains with the rate coefficient kt. 
The rates of propagation and termination between two radicals are not influenced by the mechanism of deactivation or the catalyst used in the system. Thus it is possible to estimate how fast an RDRP can be conducted with preserved chain end functionality. In addition, other chain breaking reactions, such as irreversible chain transfer/termination reactions of the propagating radicals with solvent, monomer, polymer, catalyst, additives, etc., would introduce additional loss of chain end functionality (CEF). The overall rate coefficient of chain breaking reactions besides the direct termination between two radicals is represented as ktx. In all RDRP methods, the theoretical number average molecular weight of the obtained polymers, Mn, can be defined by the following equation: Mn = Mm × ([M]0 − [M]t) / [R-X]0, where Mm is the molecular weight of the monomer; [M]0 and [M]t are the monomer concentrations at time 0 and time t; and [R-X]0 is the initial concentration of the initiator. Besides the designed molecular weight, a well-controlled RDRP should give polymers with narrow molecular weight distributions, which can be quantified by Mw/Mn values, and well-preserved chain end functionalities. A well-controlled RDRP process requires: 1) the reversible deactivation process should be sufficiently fast; 2) the chain breaking reactions which cause the loss of chain end functionalities should be limited; 3) a properly maintained radical concentration; 4) an initiator with proper activity. Examples Atom transfer radical polymerization (ATRP) The initiator of the polymerization is usually an organohalide, and the dormant state is achieved in a complex with a transition metal (‘radical buffer’). This method is very versatile but requires unconventional initiator systems that are sometimes poorly compatible with the polymerization media.
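The theoretical number-average molecular weight described above, Mn = Mm·([M]0 − [M]t)/[R-X]0, is easy to evaluate numerically. A short sketch with made-up concentrations (not taken from the text), using methyl methacrylate (Mm = 100.12 g/mol) at 80% monomer conversion:

```python
# Worked example of the theoretical Mn relation discussed above:
#   Mn = Mm * ([M]0 - [M]t) / [R-X]0
# All concentration values below are hypothetical, chosen for illustration.

def theoretical_mn(mm: float, m0: float, mt: float, rx0: float) -> float:
    """mm: monomer molar mass (g/mol); m0, mt: monomer conc. (M) at t=0 and t;
    rx0: initial initiator concentration (M)."""
    return mm * (m0 - mt) / rx0

mn = theoretical_mn(mm=100.12, m0=2.0, mt=0.4, rx0=0.01)
# 100.12 * 1.6 / 0.01 ≈ 16019 g/mol
```

As the equation implies, doubling the initiator concentration at fixed conversion halves the target Mn, which is the usual handle for dialing in chain length in an RDRP recipe.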
Nitroxide-mediated polymerization (NMP) Under certain conditions a homolytic splitting of the C-O bond in alkoxyamines can occur, forming a stable 2-centre 3-electron N-O radical that is able to initiate a polymerization reaction. The preconditions for an alkoxyamine suitable to initiate a polymerization are bulky, sterically obstructive substituents on the secondary amine, and the substituent on the oxygen should be able to form a stable radical, e.g. benzyl. Reversible addition-fragmentation chain transfer (RAFT) RAFT is one of the most versatile and convenient techniques in this context. The most common RAFT processes are carried out in the presence of thiocarbonylthio compounds that act as radical buffers. In ATRP and NMP, reversible deactivation of the propagating radicals takes place, and the dormant structures – a halo-compound in ATRP and an alkoxyamine in NMP – are both a sink for radicals and a source at the same time, described by the corresponding equilibria. RAFT, on the contrary, is controlled by chain-transfer reactions that are in a deactivation-activation equilibrium. Since no radicals are generated or destroyed, an external source of radicals is necessary for initiation and maintenance of the propagation reaction. Initiation step of a RAFT polymerization: I → I• →(+M)→ Pn• Reversible chain transfer Reinitiation step: R• →(+M)→ RM• →(+M)→ Pm• Chain equilibration step Termination step: Pm• + Pn• → PmPn Catalytic chain transfer and cobalt mediated radical polymerization Although not a strictly living form of polymerization, catalytic chain transfer polymerization must be mentioned as it figures significantly in the development of later forms of living free radical polymerization.
Discovered in the late 1970s in the USSR, cobalt porphyrins were found to be able to reduce the molecular weight during polymerization of methacrylates. Later investigations showed that the cobalt glyoxime complexes were as effective as the porphyrin catalysts and also less oxygen sensitive. Due to their lower oxygen sensitivity these catalysts have been investigated much more thoroughly than the porphyrin catalysts. The major products of catalytic chain transfer polymerization are vinyl-terminated polymer chains. One of the major drawbacks of the process is that catalytic chain transfer polymerization does not produce macromonomers but instead produces addition fragmentation agents. When a growing polymer chain reacts with the addition fragmentation agent the radical end-group attacks the vinyl bond and forms a bond. However, the resulting product is so hindered that the species undergoes fragmentation, leading eventually to telechelic species. These addition fragmentation chain transfer agents do form graft copolymers with styrenic and acrylate species, however they do so by first forming block copolymers and then incorporating these block copolymers into the main polymer backbone. While high yields of macromonomers are possible with methacrylate monomers, low yields are obtained when using catalytic chain transfer agents during the polymerization of acrylate and styrenic monomers. This has been seen to be due to the interaction of the radical centre with the catalyst during these polymerization reactions. The reversible reaction of the cobalt macrocycle with the growing radical is known as cobalt-carbon bonding and in some cases leads to living polymerization reactions. Iniferter polymerization An iniferter is a chemical compound that simultaneously acts as initiator, transfer agent, and terminator (hence the name ini-fer-ter) in controlled free radical iniferter polymerizations; the most common is the dithiocarbamate type.
Iodine-transfer polymerization (ITP) Iodine-transfer polymerization (ITP, also called ITRP), developed by Tatemoto and coworkers in the 1970s, gives relatively low polydispersities for fluoroolefin polymers. While it has received relatively little academic attention, this chemistry has served as the basis for several industrial patents and products and may be the most commercially successful form of living free radical polymerization. It has primarily been used to incorporate iodine cure sites into fluoroelastomers. The mechanism of ITP involves thermal decomposition of the radical initiator (typically persulfate), generating the initiating radical In•. This radical adds to the monomer M to form the species P1•, which can propagate to Pm•. By exchange of iodine from the transfer agent R-I to the propagating radical Pm•, a new radical R• is formed and Pm• becomes dormant. This species can propagate with monomer M to Pn•. During the polymerization, exchange between the different polymer chains and the transfer agent occurs, which is typical for a degenerative transfer process. Typically, iodine transfer polymerization uses a mono- or diiodo-perfluoroalkane as the initial chain transfer agent. This fluoroalkane may be partially substituted with hydrogen or chlorine. The energy of the iodine-perfluoroalkane bond is low and, in contrast to iodo-hydrocarbon bonds, its polarization is small. Therefore, the iodine is easily abstracted in the presence of free radicals. Upon encountering an iodoperfluoroalkane, a growing poly(fluoroolefin) chain will abstract the iodine and terminate, leaving the newly created perfluoroalkyl radical to add further monomer. But the iodine-terminated poly(fluoroolefin) itself acts as a chain transfer agent. As in RAFT processes, as long as the rate of initiation is kept low, the net result is the formation of a narrow molecular weight distribution.
Use of conventional hydrocarbon monomers with iodoperfluoroalkane chain transfer agents has been described. The resulting molecular weight distributions have not been narrow, since the energetics of an iodine-hydrocarbon bond are considerably different from those of an iodine-fluorocarbon bond and abstraction of the iodine from the terminated polymer is difficult. The use of hydrocarbon iodides has also been described, but again the resulting molecular weight distributions were not narrow. Preparation of block copolymers by iodine-transfer polymerization was also described by Tatemoto and coworkers in the 1970s. Although use of living free radical processes in emulsion polymerization has been characterized as difficult, all examples of iodine-transfer polymerization have involved emulsion polymerization. Extremely high molecular weights have been claimed. Listed below are some other, less described but to some extent increasingly important, living radical polymerization techniques. Selenium-centered radical-mediated polymerization Diphenyl diselenide and several benzylic selenides have been explored by Kwon et al. as photoiniferters in the polymerization of styrene and methyl methacrylate. Their mechanism of control over polymerization is proposed to be similar to that of the thiuram disulfide iniferters. However, their low transfer constants allow them to be used for block copolymer synthesis but give limited control over the molecular weight distribution. Telluride-mediated polymerization (TERP) Telluride-mediated polymerization, or TERP, first appeared to mainly operate under a reversible chain transfer mechanism by homolytic substitution under thermal initiation. However, in a kinetic study it was found that TERP predominantly proceeds by degenerative transfer rather than 'dissociation combination'.
Alkyl tellurides of the structure Z-X-R, where Z = methyl and R = a good free radical leaving group, give better control for a wide range of monomers, with phenyl tellurides (Z = phenyl) giving poor control. Polymerization of methyl methacrylate is only controlled by ditellurides. The importance of X to chain transfer increases in the series O<S<Se<Te, which makes alkyl tellurides effective in mediating control under thermally initiated conditions and the alkyl selenides and sulfides effective only under photoinitiated polymerization. Stibine-mediated polymerization More recently, Yamago et al. reported stibine-mediated polymerization, using an organostibine transfer agent with the general structure Z(Z')-Sb-R (where Z = activating group and R = free radical leaving group). A wide range of monomers (styrenics, (meth)acrylics and vinylics) can be controlled, giving narrow molecular weight distributions and predictable molecular weights under thermally initiated conditions. Yamago has also published a patent indicating that bismuth alkyls can also control radical polymerizations via a similar mechanism. Copper mediated polymerization Further reversible-deactivation radical polymerizations are known to be catalysed by copper. References Polymer chemistry Free radical reactions Polymerization reactions
Reversible-deactivation radical polymerization
https://en.wikipedia.org/wiki/Na%20Dae-yong
Na Dae-yong (; 1556 – January 29, 1612) was a Korean naval officer who fought against the Japanese navy in the Imjin War and was also known as a designer of the turtle ship. Biography Na Dae-yong was born in Naju, Jeolla-do, passed the military examination in 1583, worked as a Hullyeonwon Bongsa (), and was promoted to military officer in Jeollajwasuyeong. According to the address to the throne written by Na Dae-yong, he served in the army for 6 years defending the northern lands and 7 years defending the southern lands, and worked under Yi Sun-sin as Battleship Managing Military Officer (). During the Japanese invasion of Korea, he was assigned as Temporary Commander (gajang) of one of the five harbors in Jwasuyeong, Balpo. He saw active service as commander of the Yugun (), auxiliary soldiers. He served in the battles at Okpo, Sacheon, Hansan, Myeongnyang, and Noryang. He was shot during the Battle of Sacheon, but soon returned to active service. After the war, Na continued to invent several warship designs. He died in 1612 and was buried at Munpyeongmyeon in Naju, South Jeolla Province. Contribution to turtle ship design In 1587, Na Dae-yong gave up his government post and returned to his hometown to design the turtle ship. He studied arithmetic, read books about ships, and produced a model of the turtle ship. At that time, there were administrators who were experts in the mathematical calculations of big public works, but people who made personal inventions, like Na Dae-yong, had to study arithmetic on their own to apply mathematical calculations to a ship. Nevertheless, the design of the turtle ship was successful. Na looked up the designs of warships which people had made from the Three Kingdoms period up to his own time. At the time, the Joseon navy's major warship was the Panokseon. One of the Panokseon's chief features was a higher upper deck, which hindered enemy boarding attempts and enabled the Korean crews to fire their arrows further.
However, the front of the ship was relatively low, so it could be attacked by enemies. Also, there was no way to fend off enemy arrows or spears on the upper deck, because the soldiers there were exposed. During the Goryeo period, the naval forces used the Gwaseon () to defeat the Japanese raiders. Gwaseon could fend off enemies with knives fixed in the ship's flanks. Additionally, they featured iron rams at the front to attack enemy ships. The Byeolmaengseon () was another type of warship that Na concerned himself with. A Byeolmaengseon's top deck was completely covered, and the soldiers were posted inside of the ship. Na Dae-yong designed the turtle ship as a combination of the best features of the Gwaseon, the Byeolmaengseon, and the Panokseon. Its hull was likely based on the Panokseon, but the top deck was completely covered as with the Byeolmaengseon, and the spikes on its roof came from the Gwaseon. Related historical documents Joseon Wangjo Sillok Na Dae-yong appears a total of 27 times in the Joseon Wangjo Sillok: 3 times in the Seonjo Sillok and 24 times in the Gwanghae-gun Ilgi. In the Seonjo Sillok, Seonjo is twice asked for Na Dae-yong's removal. Only one record about Na Dae-yong in the Seonjo Sillok is related to ship architecture, and it is the only such record found in the entire Joseon Wangjo Sillok. After that, the Gwanghae-gun Ilgi shows that the Saheon-bu (사헌부; 司憲府; Office of the Inspector General) and the Sagan-won (사간원; 司諫院; Office of the Censor General) had repeatedly asked for Na Dae-yong's removal from his position as governor of Gonyang (1608) and magistrate of Namhae district (1610). These offices were organizations responsible for licensing officials, impeachment, and legal inquiries, and had a duty of criticizing and giving feedback about the king's orders. In these documents Na Dae-yong is depicted as a greedy, foolish alcoholic who exploited the local people with his power.
Ichungmugong Jeonseo Ichungmugong Jeonseo (이충무공전서; 忠武公李舜臣全書; Admiral Yi's Testament) is a complete collection of Yi Sun-sin's posthumous works. It was published in 1795, and it contains famous documents like the Nanjung ilgi. Cheam Na Dae-yong Jang-gun Silgi Cheam Na Dae-yong Jang-gun Silgi () is a book that was published by the Cheam Na Dae-yong Jang-gun Memorial Foundation () in 1976. The organization also constructed the Na Dae-yong Jang-gun Gijeogbi (). Other documents Other documents, like the Yeosu–Yeocheon local history book and the Naju Mokji (a book containing various information about the province of Naju), also write about Na Dae-yong. Centers The War Memorial of Korea The War Memorial of Korea is located in Yongsan-dong, Yongsan District, Seoul, South Korea. There are six indoor exhibition rooms and an outdoor exhibition center. The six indoor exhibition rooms are named "Room of Respect to Defense of Nation", "Room of War History", "Room of the Korean War", "Room of the Dispatch of Foreign Forces", "Room of the Development of Armed Forces", and "Room of the Fully-Equipped Warship." The outdoor exhibition displays large-sized weaponry. The model of the Turtle Ship and documents alluding to Na Dae-yong are in the Joseon Dynasty section of the Room of War History. Sochungsa The Sochungsa () memorial was erected to extol Na Dae-yong's patriotism and his endeavors to reinforce the combat capabilities of the navy. The Sochungsa consists of a model of the Turtle Ship and a cenotaph of Na Dae-yong. It was built in 1977, and Na's achievements are celebrated every April 21. Birthplace and burial site Na Dae-yong's house is located in Moonpyeong-myeon, Naju. It is a north-facing house of four-by-one bay size. His burial ground is located at the base of a mountain about 3 km away from Na Dae-yong's house. It is designated as the 26th monument of Jeollanam-do.
Depictions in modern culture Portrayed by Lee Sang-in in the 2004–2005 KBS1 TV series Immortal Admiral Yi Sun-sin. Appeared in the 2009 limited comic series Yi Soon Shin: Warrior and Defender Portrayed by Jang Jun-yeong in the 2014 film The Admiral: Roaring Currents Portrayed by Jung Jin in the 2016 KBS1 TV series Imjin War. Portrayed by Park Ji-hwan in the 2022 film Hansan: Rising Dragon. See also Naval history of Korea Imjin war ROKS Na Dae-yong (SS-069) Turtle ship Immortal Admiral Yi Sun-sin References 16th-century Korean people Korean admirals Marine engineers People from Naju 16th-century engineers 1556 births 1612 deaths
Na Dae-yong
Engineering
1,608
28,300,822
https://en.wikipedia.org/wiki/Multiplier%20ideal
In commutative algebra, the multiplier ideal associated to a sheaf of ideals over a complex variety and a real number c consists (locally) of the functions h such that |h|^2 / (∑ |f_i|^2)^c is locally integrable, where the f_i are a finite set of local generators of the ideal. Multiplier ideals were independently introduced by (who worked with sheaves over complex manifolds rather than ideals) and , who called them adjoint ideals. Multiplier ideals are discussed in the survey articles , , and . Algebraic geometry In algebraic geometry, the multiplier ideal of an effective ℚ-divisor measures singularities coming from the fractional parts of D. Multiplier ideals are often applied in tandem with vanishing theorems such as the Kodaira vanishing theorem and the Kawamata–Viehweg vanishing theorem. Let X be a smooth complex variety and D an effective ℚ-divisor on it. Let μ : X′ → X be a log resolution of D (e.g., Hironaka's resolution). The multiplier ideal of D is J(D) = μ_* O_{X′}(K_{X′/X} − ⌊μ*D⌋), where K_{X′/X} is the relative canonical divisor: K_{X′/X} = K_{X′} − μ*K_X. It is an ideal sheaf of O_X. If D is integral, then J(D) = O_X(−D). See also Canonical singularity Test ideal Nadel vanishing theorem References Commutative algebra Algebraic geometry
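The local analytic definition above can be made concrete in the simplest case. The following computation (a standard textbook example, with notation chosen here for exposition, not taken from the text above) works out the multiplier ideal of the principal ideal (x) on the complex line:

```latex
% Local analytic definition: h belongs to the multiplier ideal iff
%   |h|^2 / (|f_1|^2 + \cdots + |f_r|^2)^c
% is locally integrable.
%
% Example: \mathfrak{a} = (x) on \mathbb{C}, single generator f_1 = x.
% For h = x^k, in polar coordinates near 0,
%   \int_{|x|<\epsilon} |x|^{2k-2c}\, dA
%     = 2\pi \int_0^{\epsilon} r^{2k-2c+1}\, dr,
% which converges iff 2k - 2c + 1 > -1, i.e. k > c - 1.
% Hence
\mathcal{J}\!\left((x)^{c}\right) \;=\; \bigl(x^{\lfloor c\rfloor}\bigr),
\qquad c \ge 0,
% so in particular \mathcal{J} = \mathcal{O} for 0 \le c < 1, and the
% ideal jumps exactly at the integer values of c.
```

The jumping of the ideal as c crosses thresholds is the basic way multiplier ideals measure singularities.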
Multiplier ideal
Mathematics
259
56,576,360
https://en.wikipedia.org/wiki/Malacidin
Malacidins are a class of chemicals, made by bacteria found in soil, that can kill Gram-positive bacteria. Their activity appears to be dependent on calcium. The discovery of malacidins was published in 2018. The malacidin family was discovered using a new method of soil microbiome screening that does not require cell culturing. This allowed researchers to identify the genetic components necessary to produce the chemical. Malacidin A was shown to kill Staphylococcus aureus and other Gram-positive bacteria. At the time of publication it was not certain whether the discovery would lead to any new antibiotic drugs, because large investments of time and money are required to determine whether any drug is safe and effective. Chemical structure Malacidins are macrocyclic lipopeptides. The 2018 paper described two chemicals in the malacidin family, differing only by a methylene group in their lipid tails. Their peptide cores include four non-proteinogenic amino acids. The name "malacidin" is derived from the abbreviation of metagenomic acidic lipopeptide antibiotic and the suffix -cidin. Mechanism of action Malacidins appear to take on their active conformation after they bind to calcium; the calcium-bound molecule then appears to bind to lipid II, a bacterial cell wall precursor molecule, leading to destruction of the cell wall and death of the bacteria. They would therefore be a new member of the class of calcium-dependent antibiotics. The discovery of malacidins supported the view that the calcium-dependent antibiotics are a larger class than previously thought. History Malacidins were discovered by researchers at Rockefeller University, led by Brad Hover and Sean Brady. The group had been looking into antibiotics related to daptomycin and their calcium-dependent nature, but determined that it would be impractical to culture variations in lab conditions. Instead, the team used a genetics approach that was more scalable. 
They focused on searching for novel biosynthetic gene clusters (BGCs) – genes that are usually expressed together, which bacteria use to make secondary metabolites. To do this, they extracted DNA from around 2,000 soil samples to build metagenomic libraries that captured the genetic diversity of the environmental microbiome. They then designed degenerate primers to amplify genes likely to be similar to the BGC that makes daptomycin by using a polymerase chain reaction (PCR) procedure, sequenced the amplified genes, and then used metagenomics to confirm that these genes were indeed likely to be the kind of BGCs they sought. One of the novel BGCs they found was present in around 19% of the screened soil samples but not readily found in cultured microbial collections, so they took that BGC, put it into other host bacteria, and then isolated and analyzed the secondary metabolites. The work was published in Nature Microbiology in February 2018. Research directions The approach of screening the soil for useful compounds using genomics has been done by others, and is likely to continue to be pursued as a method to further explore primary metabolites and secondary metabolites made by microorganisms. As of publication, the malacidins had not been tested on humans. At the time of their discovery it was unknown whether the discovery would lead to any new antibiotic drugs; showing that a potential drug is safe and effective takes years of work and millions of dollars, and the scientists said at the time that they had no plans to try to develop a drug based on the work. In the 2018 paper, malacidins were shown to kill only Gram-positive bacteria and not Gram-negative bacteria. They were, however, able to kill multidrug-resistant pathogens, including bacteria resistant to vancomycin in the laboratory, and methicillin-resistant Staphylococcus aureus (MRSA) skin infections in an animal wound model. 
Brady, Hover, and two other authors disclosed in the 2018 paper that they had "competing financial interests, as they are employees or consultants of Lodo Therapeutics." Lodo was founded in 2016 out of Brady's laboratory, to discover new chemicals in nature as starting points for drug discovery. See also Teixobactin Friulimicin References Antimicrobial peptides Lipopeptides Soil biology Cyclic peptides
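The degenerate-primer step in the screening described above can be illustrated with a toy sketch. In a degenerate primer, an IUPAC ambiguity code (e.g. R = A or G) stands for a set of possible bases, so one primer can anneal to a family of related gene sequences. The primer and target below are invented for illustration and are not the study's actual sequences:

```python
# Toy illustration of degenerate-primer matching (hypothetical data).
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "GC", "W": "AT",
    "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
    "H": "ACT", "V": "ACG", "N": "ACGT",
}

def matches(primer: str, site: str) -> bool:
    """True if every base of the site is allowed by the degenerate primer."""
    return len(primer) == len(site) and all(
        s in IUPAC[p] for p, s in zip(primer, site)
    )

def find_binding_sites(primer: str, sequence: str) -> list[int]:
    """Start positions in `sequence` where the degenerate primer matches."""
    n = len(primer)
    return [i for i in range(len(sequence) - n + 1)
            if matches(primer, sequence[i:i + n])]

# One primer with degenerate positions R (=A/G) and Y (=C/T) matches
# two slightly different sites in the same target:
sites = find_binding_sites("GARTAY", "TTGAATACCCGAGTATTT")  # → [2, 10]
```

This is the essential trick that let one primer pair pull daptomycin-like cluster fragments out of thousands of uncultured genomes at once.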
Malacidin
Biology
894
34,346,779
https://en.wikipedia.org/wiki/Dynamic%20contagion%20process
In applied probability, a dynamic contagion process is a point process with stochastic intensity that generalises the Hawkes process and Cox process with exponentially decaying shot noise intensity. See also Point process Cox process Doubly stochastic model References Point processes
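As a hedged sketch of what such a process looks like, the code below simulates a point process whose intensity has a constant baseline, exponentially decaying self-excited jumps (the Hawkes part), and exponentially decaying externally-excited jumps arriving via an independent Poisson process (the Cox shot-noise part), using Ogata-style thinning. All parameter names and values are illustrative choices, not taken from any reference:

```python
import math
import random

def simulate_dynamic_contagion(mu, alpha, beta, rho, gamma, T, seed=1):
    """Simulate event times on (0, T] for a point process with intensity
    lambda(t) = mu + g(t), where g(t) decays exponentially at rate beta,
    jumps by alpha at each event of the process itself (self-excitation),
    and jumps by gamma at each arrival of an independent rate-rho Poisson
    process (external, Cox shot-noise excitation)."""
    rng = random.Random(seed)
    t, g, events = 0.0, 0.0, []
    while True:
        bound = mu + g + rho        # dominates lambda(t) + rho until the next jump
        w = rng.expovariate(bound)  # candidate waiting time at the bounding rate
        g *= math.exp(-beta * w)    # excitation decays over the waiting time
        t += w
        if t > T:
            return events
        u = rng.random() * bound
        if u < rho:                 # external (shot-noise) jump, not an event
            g += gamma
        elif u < rho + mu + g:      # accept as an event of the point process
            events.append(t)
            g += alpha              # self-exciting jump

times = simulate_dynamic_contagion(mu=0.9, alpha=0.5, beta=1.5,
                                   rho=0.4, gamma=0.6, T=100.0)
```

Setting rho = 0 recovers a plain exponential-kernel Hawkes process, and alpha = 0 recovers a Cox process with shot-noise intensity; stability requires alpha/beta < 1.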
Dynamic contagion process
Mathematics
56
4,859,028
https://en.wikipedia.org/wiki/Zero-lift%20drag%20coefficient
In aerodynamics, the zero-lift drag coefficient is a dimensionless parameter which relates an aircraft's zero-lift drag force to its size, speed, and flying altitude. Mathematically, the zero-lift drag coefficient is defined as C_D,0 = C_D − C_D,i, where C_D is the total drag coefficient for a given power, speed, and altitude, and C_D,i is the lift-induced drag coefficient at the same conditions. Thus, the zero-lift drag coefficient is reflective of parasitic drag, which makes it very useful in understanding how "clean" or streamlined an aircraft's aerodynamics are. For example, a Sopwith Camel biplane of World War I, which had many wires and bracing struts as well as fixed landing gear, had a zero-lift drag coefficient of approximately 0.0378. Compare a value of 0.0161 for the streamlined P-51 Mustang of World War II, which compares very favorably even with the best modern aircraft. The drag at zero lift can be more easily conceptualized as the drag area (f), which is simply the product of the zero-lift drag coefficient and the aircraft's wing area (f = C_D,0 × S, where S is the wing area). Parasitic drag experienced by an aircraft with a given drag area is approximately equal to the drag of a flat square disk with the same area which is held perpendicular to the direction of flight. The Sopwith Camel has a drag area of , compared to for the P-51 Mustang. Both aircraft have a similar wing area, again reflecting the Mustang's superior aerodynamics in spite of much larger size. In another comparison with the Camel, a very large but streamlined aircraft such as the Lockheed Constellation has a considerably smaller zero-lift drag coefficient (0.0211 vs. 0.0378) in spite of having a much larger drag area (34.82 ft2 vs. 8.73 ft2). Furthermore, an aircraft's maximum speed is proportional to the cube root of the ratio of power to drag area, that is: V_max ∝ (P/f)^(1/3). Estimating zero-lift drag As noted earlier, C_D,0 = C_D − C_D,i. 
The total drag coefficient can be estimated as: C_D = 550ηP / ((1/2) ρ0 σ S (1.47V)^3), where η is the propulsive efficiency, P is engine power in horsepower, ρ0 is sea-level air density in slugs/cubic foot, σ is the atmospheric density ratio for an altitude other than sea level, S is the aircraft's wing area in square feet, and V is the aircraft's speed in miles per hour (the factor 550 converts horsepower to ft·lbf/s, and 1.47 converts miles per hour to feet per second). Substituting 0.002378 for ρ0, the equation is simplified to: C_D = 1.456×10^5 × ηP / (σ S V^3). The induced drag coefficient can be estimated as: C_D,i = C_L^2 / (π AR e), where C_L is the lift coefficient, AR is the aspect ratio, and e is the aircraft's efficiency factor. Substituting for C_L gives: C_D,i = 4.822×10^4 × (W/S)^2 / (e AR σ^2 V^4), where W/S is the wing loading in lb/ft2. References Aerodynamics Aircraft manufacturing Drag (physics)
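The estimation procedure above can be sketched numerically. The aircraft numbers below are invented for illustration (they are not the P-51's figures), and the constants 1.456×10^5 and 4.822×10^4 are the ones that follow from V in mph, W/S in lb/ft², and ρ0 = 0.002378 slug/ft³:

```python
import math

def total_drag_coeff(eta, power_hp, sigma, wing_area_ft2, v_mph):
    """C_D from power required: C_D = 1.456e5 * eta * P / (sigma * S * V^3)."""
    return 1.456e5 * eta * power_hp / (sigma * wing_area_ft2 * v_mph**3)

def induced_drag_coeff(wing_loading, e, aspect_ratio, sigma, v_mph):
    """C_D,i = 4.822e4 * (W/S)^2 / (e * AR * sigma^2 * V^4)."""
    return 4.822e4 * wing_loading**2 / (e * aspect_ratio * sigma**2 * v_mph**4)

# Hypothetical fighter-like numbers (illustrative only):
eta, power, sigma = 0.85, 1200.0, 1.0            # prop efficiency, hp, sea level
S, V, WS, e, AR = 230.0, 350.0, 38.0, 0.85, 5.8  # ft^2, mph, lb/ft^2, -, -

c_d = total_drag_coeff(eta, power, sigma, S, V)
c_di = induced_drag_coeff(WS, e, AR, sigma, V)
c_d0 = c_d - c_di   # zero-lift drag coefficient
```

As a sanity check, both shortcut formulas agree with a first-principles computation using dynamic pressure q = ½ρ0σ(1.47V)², C_L = (W/S)/q, and drag power D·V = 550ηP.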
Zero-lift drag coefficient
Chemistry,Engineering
563
50,997,707
https://en.wikipedia.org/wiki/International%20Control%20Dam
The International Control Dam, also known as the International Control Structure, operated by Ontario Power Generation, is a weir that controls the water diversions from the Niagara River and dispatches the water between the New York Power Authority and Ontario Power Generation in accordance with the terms of the 1950 Niagara Treaty. It was completed in 1954. To preserve Niagara Falls' natural beauty and to ensure an "unbroken curtain of water" is flowing over the falls, the 1950 treaty was signed by the U.S. and Canada to limit water usage by power plants. The treaty allows higher summertime diversion at night when tourists are fewer and during the winter months when there are even fewer tourists. The treaty states that during daylight time during the tourist season (April 1 to October 31), there must be of water flowing over the falls, and during the night and off-tourist season there must be of water flowing over the falls. This treaty is monitored by the International Niagara Board of Control, part of the International Joint Commission, using a NOAA gauging station above the falls. This weir allows water from the upper river to be diverted into the intakes for the American and Canadian power stations. Two tunnels on the American side take water under the city of Niagara Falls, New York, and three tunnels on the Canadian side divert water under the city of Niagara Falls, Ontario. Once past these cities, the water flows into two canals and then into two large reservoirs. Behind the Canadian Sir Adam Beck Power Station is a reservoir covering and a similar reservoir on the US side behind the Robert Moses Power Plant. A trade-off exists between the two main industries of tourism and hydroelectric power. More water is diverted by the International Control Dam at night, between 10:00 pm and 7:00 am, filling the reservoirs overnight and allowing more water over Niagara Falls in the daytime hours for the tourists. 
As well, during the winter, from November 1 to March 31, when it is not the tourist season, more water is diverted for electrical power during the whole 24-hour period. The pool of water located immediately upstream of the International Control Dam is named the "Chippawa–Grass Island pool". References External links International Niagara Board of Control Dams in Canada Niagara River Dams completed in 1954 Ontario Power Generation Canada–United States relations Weirs
International Control Dam
Environmental_science
465
26,563,427
https://en.wikipedia.org/wiki/Plasma%20shaping
Magnetically confined fusion plasmas such as those generated in tokamaks and stellarators are characterized by a typical shape. Plasma shaping is the study of the plasma shape in such devices, and is particularly important for next-step fusion devices such as ITER. This shape partly conditions the performance of the plasma. Tokamaks, in particular, are axisymmetric devices, and therefore one can completely define the shape of the plasma by its cross-section. History Early fusion reactor designs tended to have circular cross-sections simply because they were easy to design and understand. Generally, fusion machines using a toroidal layout, like the tokamak and most stellarators, arrange their magnetic fields so the ions and electrons in the plasma travel around the torus at high velocities. However, as the circumference of a path on the outside of the plasma area is longer than one on the inside, this caused several effects that disrupted the stability of the plasma. During the 1960s a number of different methods were used to try to address these problems. Generally, they used a combination of several magnetic fields to cause the net magnetic field inside the device to be twisted into a helix. Ions and electrons following these lines found themselves moving to the inside and then outside of the plasma, mixing it and suppressing some of the most obvious instabilities. In the 1980s, further research along these lines demonstrated that further advances were possible by using external current-carrying coils to make the field lines not just helical, but non-symmetric as well. This led to a series of experiments using C- and D-shaped plasma volumes. By increasing the current in one (or more) shaping coils to a high enough degree, one (or more) 'X-points' can be created. An X-point is defined as a point in space at which the poloidal field has zero magnitude. 
The magnetic flux surface that intersects the X-point is called the separatrix, and, as all flux surfaces external to this surface are unconfined, the separatrix defines the last closed flux surface (LCFS). Formerly, the LCFS was established by inserting a material limiter into the plasma, which fixed the plasma temperature and potential (among other quantities) to be equal to those of the limiter. Plasma that escaped the LCFS would do so with no preferential direction, potentially damaging instruments. By establishing an X-point and separatrix, the plasma edge is uncoupled from the vessel walls, and exhausted heat and plasma particles are preferentially diverted towards a known region of the vessel near the X-point. Cross-section In the simple case of a plasma with up-down symmetry, the plasma cross-section is defined using a combination of four parameters: the plasma elongation, κ = b/a, where a is the plasma minor radius and b is the height of the plasma measured from the equatorial plane; the plasma triangularity, δ, defined as the horizontal distance between the plasma major radius and the X-point, normalized to the minor radius; the angle between the horizontal and the plasma last closed flux surface (LCFS) at the low field side; and the angle between the horizontal and the plasma last closed flux surface (LCFS) at the high field side. In general (no up-down symmetry), there can be an upper triangularity, δ_u, and a lower triangularity, δ_l. Tokamaks can have negative triangularity. External links Triangularity - with diagram and source Ellipticity - with diagram and source References Fusion power
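The elongation and triangularity above can be made concrete with the widely used Miller parameterization of a shaped plasma boundary, R(θ) = R0 + a·cos(θ + arcsin(δ)·sin θ), Z(θ) = κ·a·sin θ. This parameterization is an assumption of this sketch (the text does not fix one), and the numbers are ITER-like but illustrative. The code builds a boundary point and reads κ and δ back off the topmost point of the cross-section:

```python
import math

def miller_boundary(theta, R0, a, kappa, delta):
    """Boundary point (R, Z) of a shaped plasma, Miller parameterization."""
    R = R0 + a * math.cos(theta + math.asin(delta) * math.sin(theta))
    Z = kappa * a * math.sin(theta)
    return R, Z

R0, a, kappa, delta = 6.2, 2.0, 1.7, 0.33   # ITER-like values, illustrative

# The topmost point of the boundary is at theta = pi/2, where
# Z = kappa * a and R = R0 - a * delta:
R_top, Z_top = miller_boundary(math.pi / 2, R0, a, kappa, delta)

kappa_measured = Z_top / a          # elongation: boundary height over minor radius
delta_measured = (R0 - R_top) / a   # triangularity: inward shift of the top point
```

Negative δ simply shifts the topmost point outward of the major radius, which is the "negative triangularity" configuration mentioned above.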
Plasma shaping
Physics,Chemistry
716
2,296,161
https://en.wikipedia.org/wiki/Aquarius%20Dwarf
The Aquarius Dwarf is a dwarf irregular galaxy, first catalogued in 1959 by the DDO survey. It is located within the boundaries of the constellation of Aquarius. It is a member of the Local Group of galaxies, albeit an extremely isolated one; it is one of only a few known Local Group members for which a past close approach to the Milky Way or Andromeda Galaxy can be ruled out, based on its current location and velocity. Local Group membership was firmly established only in 1999, with the derivation of a distance based on the tip of the red-giant branch method. Its distance from the Milky Way of 3.2 ±0.2 Mly (980 ±40 kpc) means that the Aquarius Dwarf is quite isolated in space. It is one of the least luminous Local Group galaxies to contain significant amounts of neutral hydrogen and to support ongoing star formation, although it does so only at an extremely low level. Because of its large distance, the Hubble Space Telescope is required in order to study its stellar populations in detail. RR Lyrae stars have been discovered in the Aquarius Dwarf, indicating the existence of stars more than 10 billion years old, but the majority of its stars are much younger (median age 6.8 billion years). Among Local Group galaxies, only Leo A has a younger mean age, leading to the suggestion that delayed star formation could be correlated with galaxy isolation. References Dwarf irregular galaxies Irregular galaxies Aquarius (constellation) Local Group 65367
Aquarius Dwarf
Astronomy
301
6,118,308
https://en.wikipedia.org/wiki/Anatoly%20Vlasov
Anatoly Aleksandrovich Vlasov (; – 22 December 1975) was a Russian, later Soviet, theoretical physicist prominent in the fields of statistical mechanics, kinetics, and especially plasma physics. Biography Anatoly Vlasov was born in Balashov, in the family of a steamfitter. In 1927 he entered Moscow State University (MSU) and graduated in 1931. After graduation, Vlasov continued to work at MSU, where he spent all his life, collaborating with the Nobel laureates Pyotr Kapitsa and Lev Landau and other leading physicists. He became a full professor at Moscow State University in 1944 and was the head of the theoretical physics department in the Faculty of Physics at Moscow State University from 1945 to 1953. He had been a member of the Communist Party of the Soviet Union since 1944. In 1970 he received the Lenin Prize. Research His main works are in optics, plasma physics, physics of crystals, theory of gravitation, and statistical physics. Optics In optics he analyzed, partially with Vasily Fursov, spectral line broadening in gases at large densities (1936–1938). A new suggestion in these works was to use long-range collective interactions between atoms for a correct description of spectral line broadening at large densities. Plasma physics Vlasov became world-famous for his work on plasma physics (1938). He showed that the Boltzmann equation is not suitable for a description of plasma dynamics due to the existence of long-range collective forces in the plasma. Instead, an equation now known as the Vlasov equation was suggested for the correct description, taking the long-range collective forces into account through a self-consistent field. The field is determined by taking moments of the distribution function described in Vlasov's equation to compute both the charge density and current density. 
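In modern notation (the standard textbook form, not quoted from Vlasov's original papers), the equation for the one-particle distribution function f(x, v, t) of a species with charge q and mass m, with self-consistent fields E and B, reads:

```latex
% Vlasov equation with the self-consistent Lorentz force:
\frac{\partial f}{\partial t}
  + \mathbf{v}\cdot\nabla_{\mathbf{x}} f
  + \frac{q}{m}\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)
      \cdot\nabla_{\mathbf{v}} f = 0,
% closed by computing the sources of Maxwell's equations
% as moments of the distribution function:
\qquad
\rho = q\int f\,\mathrm{d}^3v,
\qquad
\mathbf{J} = q\int \mathbf{v}\,f\,\mathrm{d}^3v .
```

The absence of a collision term on the right-hand side is exactly Vlasov's point: the dynamics is driven by the mean field generated collectively by the particles themselves.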
Coupled with Maxwell's equations, the resulting system of differential equations is well-posed provided correct initial and boundary conditions are given. The Vlasov equation, which is related to Liouville's equation and the collisionless Boltzmann equation, is fundamental to plasma physics. In 1945, Vlasov showed that this equation, with the collective interaction taken into account, can explain, without any additional hypotheses or specifications, such effects as the presence and spontaneous origin of eigenfrequencies in polyatomic systems, the spontaneous origin of crystal structure from a "gas" medium, and the presence and spontaneous origin of currents in media due to the collective interaction of the particles. Physics of crystals In this subject Vlasov used the linearized Vlasov equation to study, in particular, the conditions for the spontaneous origin of crystal structure in a medium, and found criteria for the origin of the periodic structure in terms of the temperature, density, and microscopic interaction of particles of the medium. See also Vlasov equation Selected publications A. A. Vlasov (1961). Many-Particle Theory and Its Application to Plasma. New York, Gordon and Breach. ; . A. A. Vlasov (1966). Statistical Distribution Functions [in Russian]. Nauka. A. A. Vlasov (1978). Nonlocal Statistical Mechanics [in Russian]. Nauka, Moscow. References External links Anatolii Aleksandrovich Vlasov (obituary) (in English), I. P. Bazarov et al., Soviet Physics Uspekhi 19, 545–546 (1976). Anatoly Vlasov in the Great Soviet Encyclopedia Anatoly Vlasov in the All-Russia Genealogical Tree 1908 births 1975 deaths 20th-century Russian physicists People from Balashov Moscow State University alumni Academic staff of Moscow State University Recipients of the Lenin Prize Recipients of the Order of the Red Banner of Labour Plasma physicists Russian theoretical physicists Soviet physicists Burials at Donskoye Cemetery Russian scientists
Anatoly Vlasov
Physics
800
14,119,857
https://en.wikipedia.org/wiki/NFATC2
Nuclear factor of activated T-cells, cytoplasmic 2 is a protein that in humans is encoded by the NFATC2 gene. Function This gene is a member of the nuclear factor of activated T cells (NFAT) family. The product of this gene is a DNA-binding protein with a REL-homology region (RHR) and an NFAT-homology region (NHR). This protein is present in the cytosol and only translocates to the nucleus upon T cell receptor (TCR) stimulation, where it becomes a member of the nuclear factor of activated T cells transcription complex. This complex plays a central role in inducing gene transcription during the immune response. Alternate transcriptional splice variants, encoding different isoforms, have been characterized. Clinical significance A translocation forming an in-frame fusion product between the EWSR1 gene and the NFATC2 gene has been described in bone tumors with a Ewing-sarcoma-like clinical appearance. The translocation breakpoint led to the loss of the controlling elements of the NFATC2 protein, and the fusion of the N-terminal region of EWSR1 conferred constitutive activation of the protein. Interactions NFATC2 has been shown to interact with MEF2D, EP300, IRF4 and protein kinase Mζ. Prostaglandin F2alpha stimulates an NFATC2 pathway that promotes growth of skeletal muscle cells. References Further reading External links Transcription factors Human proteins
NFATC2
Chemistry,Biology
308
200,690
https://en.wikipedia.org/wiki/Web%20container
A web container (also known as a servlet container; and compare "webcontainer") is the component of a web server that interacts with Jakarta Servlets. A web container is responsible for managing the lifecycle of servlets, mapping a URL to a particular servlet and ensuring that the URL requester has the correct access rights. A web container handles requests to servlets, Jakarta Server Pages (JSP) files, and other types of files that include server-side code. The web container creates servlet instances, loads and unloads servlets, creates and manages request and response objects, and performs other servlet-management tasks. A web container implements the web component contract of the Jakarta EE architecture. This architecture specifies a runtime environment for additional web components, including security, concurrency, lifecycle management, transaction, deployment, and other services. List of Servlet containers The following is a list of notable applications which implement the Jakarta Servlet specification from the Eclipse Foundation, divided according to whether or not they are sold commercially. Open source Web containers Apache Tomcat (formerly Jakarta Tomcat) is an open source web container available under the Apache Software License. Apache Tomcat 6 and above are operable as a general application container (prior versions were web containers only). Apache Geronimo is a full Java EE 6 implementation by the Apache Software Foundation. Enhydra, from Lutris Technologies. GlassFish from the Eclipse Foundation (an application server, but includes a web container). Jetty, from the Eclipse Foundation. Also supports SPDY and WebSocket protocols. Virgo from the Eclipse Foundation provides modular, OSGi-based web containers implemented using embedded Tomcat and Jetty. Virgo is available under the Eclipse Public License. WildFly (formerly JBoss Application Server) is a full Java EE implementation by Red Hat, division JBoss. Commercial Web containers iPlanet Web Server, from Oracle. 
JBoss Enterprise Application Platform from Red Hat, division JBoss is subscription-based/open-source Jakarta EE-based application server. WebLogic Application Server, from Oracle Corporation (formerly developed by BEA Systems). Orion Application Server, from IronFlare. Resin Pro, from Caucho Technology. IBM WebSphere Application Server. SAP NetWeaver. References Computer networking Java platform Software architecture Web applications Web development
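The container responsibilities described at the top of this article (instantiating components, mapping URLs to them, and driving an initialize/service/destroy lifecycle) can be sketched in a language-neutral way. The toy Python below only illustrates that contract; it is not the Jakarta Servlet API, and every name in it is invented:

```python
class ToyServlet:
    """Minimal stand-in for a servlet: init once, service many times, destroy once."""
    def init(self):
        self.ready = True
    def service(self, request: str) -> str:
        return f"handled {request}"
    def destroy(self):
        self.ready = False

class ToyContainer:
    """Toy 'web container': owns servlet instances, maps URL patterns to
    them, and drives their lifecycle on behalf of the application."""
    def __init__(self):
        self._routes = {}
    def register(self, url_pattern: str, servlet: ToyServlet):
        servlet.init()                       # the container drives initialization
        self._routes[url_pattern] = servlet
    def handle(self, url: str) -> str:
        servlet = self._routes.get(url)
        if servlet is None:
            return "404"                     # no servlet mapped to this URL
        return servlet.service(url)
    def shutdown(self):
        for servlet in self._routes.values():
            servlet.destroy()                # the container drives teardown
        self._routes.clear()

container = ToyContainer()
container.register("/hello", ToyServlet())
response = container.handle("/hello")        # "handled /hello"
```

The point of the pattern is inversion of control: application code supplies the handlers, while the container decides when they are created, invoked, and destroyed.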
Web container
Technology,Engineering
490
5,813,192
https://en.wikipedia.org/wiki/CDK-activating%20kinase
CDK-activating kinase (CAK) activates the cyclin-CDK complex by phosphorylating threonine residue 160 in the CDK activation loop. CAK itself is a member of the Cdk family and functions as a positive regulator of Cdk1, Cdk2, Cdk4, and Cdk6. Catalytic activity Cdk activation requires two steps. First, cyclin must bind to the Cdk. In the second step, CAK must phosphorylate the cyclin-Cdk complex on the threonine residue 160, which is located in the Cdk activation segment. Since Cdks need to be free of Cdk inhibitor proteins (CKIs) and associated with cyclins in order to be activated, CAK activity is considered to be indirectly regulated by cyclins. Phosphorylation is generally considered a reversible modification used to change enzyme activity in different conditions. However, activating phosphorylation of Cdk by CAK appears to be an exception to this trend. In fact, CAK activity remains high throughout the cell cycle and is not regulated by any known cell-cycle control mechanism. However compared to normal cells, CAK activity is reduced in quiescent G0 cells and slightly elevated in tumor cells. In mammals, activating phosphorylation by CAK can only occur once cyclin is bound. In budding yeast, activating phosphorylation by CAK can take place before cyclin binding. In both humans and yeast, cyclin binding is the rate limiting step in the activation of Cdk. Therefore, phosphorylation of Cdk by CAK is considered a post-translational modification that is necessary for enzyme activity. Although activating phosphorylation by CAK is not exploited for cell-cycle regulation purposes, it is a highly conserved process because CAK also regulates transcription. Orthologs CAK varies dramatically in different species. In vertebrates and Drosophila, CAK is a trimeric protein complex consisting of Cdk7 (a Cdk-related protein kinase), cyclin H, and Mat1. The Cdk7 subunit is responsible for Cdk activation while the Mat1 subunit is responsible for transcription. 
The CAK trimer can be phosphorylated on the activation segment of the Cdk7 subunit. However, unlike for other Cdks, this phosphorylation might not be essential for CAK activity. In the presence of Mat1, activation of CAK does not require phosphorylation of the activation segment. However, in the absence of Mat1, phosphorylation of the activation segment is required for CAK activity. In vertebrates, CAK localizes to the nucleus. This suggests that CAK is not only involved in cell-cycle regulation but is also involved in transcription. In fact, the Cdk7 subunit of vertebrate CAK phosphorylates several components of the transcriptional machinery. In budding yeast, CAK is a monomeric protein kinase and is referred to as Cak1. Cak1 is distantly homologous to Cdks. Cak1 localizes to the cytoplasm and is responsible for Cdk activation. The budding yeast Cdk7 homolog, Kin28, does not have CAK activity. Fission yeasts have two CAKs with both overlapping and specialized functions. The first CAK is a complex of Msc6 and Msc2. The Msc6–Msc2 complex is related to the vertebrate Cdk7–cyclin H complex. The Msc6–Msc2 complex not only activates cell-cycle Cdks but also regulates gene expression because it is part of the transcription factor TFIIH. The second fission yeast CAK, Csk1, is an ortholog of budding yeast Cak1. Csk1 can activate Cdks but is not essential for Cdk activity. Table of Cdk-activating kinases: http://www.oup.com/uk/orc/bin/9780199206100/resources/figures/nsp-cellcycle-3-3-3_7.jpg (Credit: Oxford University Press, "Morgan: The Cell Cycle"). Cdk activation: http://www.oup.com/uk/orc/bin/9780199206100/resources/figures/nsp-cellcycle-3-3-3_8.jpg (Credit: Oxford University Press, "Morgan: The Cell Cycle"). Structure The conformation of the Cdk2 active site changes dramatically upon cyclin binding and CAK phosphorylation. The active site of Cdk2 lies in a cleft between the two lobes of the kinase. ATP binds deep within the cleft and its phosphate is oriented outwards. 
Protein substrates bind to the entrance of the active site cleft. In its inactive form, Cdk2 cannot bind substrate because the entrance of its active site is blocked by the T-loop. Inactive Cdk2 also has a misoriented ATP binding site. When Cdk2 is inactive, the small L12 helix pushes the large PSTAIRE helix outwards. The PSTAIRE helix contains a residue, glutamate 51, that is important for positioning the ATP phosphates. When cyclin A binds, several conformational changes take place. The T-loop moves out of the active site entrance and no longer blocks the substrate binding site. The PSTAIRE helix moves in. The L12 helix becomes a beta strand. This allows glutamate 51 to interact with lysine 33. Aspartate 145 also changes position. Together these structural changes allow ATP phosphates to bind correctly. When CAK phosphorylates Cdk's threonine residue 160, the T-loop flattens and interacts more closely with cyclin A. Phosphorylation also allows the Cdk to interact more effectively with substrates that contain the SPXK sequence. Phosphorylation also increases the activity of the cyclin A–Cdk2 complex. Different cyclins produce different conformational changes in Cdk. Structural basis of Cdk activation: http://www.oup.com/uk/orc/bin/9780199206100/resources/figures/nsp-cellcycle-3-4-3_12.jpg (Credit: Oxford University Press, "Morgan: The Cell Cycle"). Additional functions In addition to activating Cdks, CAK also regulates transcription. Two forms of CAK have been identified: free CAK and TFIIH-associated CAK. Free CAK is more abundant than TFIIH-associated CAK. Free CAK phosphorylates Cdks and is involved in cell cycle regulation. Associated CAK is part of the general transcription factor TFIIH. CAK associated with TFIIH phosphorylates proteins involved in transcription, including RNA polymerase II. More specifically, associated CAK is involved in promoter clearance and progression of transcription from the preinitiation to the initiation stage. 
In vertebrates, the trimeric CAK complex is responsible for transcription regulation. In budding yeast, the Cdk7 homolog, Kin28, regulates transcription. In fission yeast, the Msc6 Msc2 complex controls basal gene transcription. In addition to regulating transcription, CAK also enhances transcription by phosphorylating retinoic acid and estrogen receptors. Phosphorylation of these receptors leads to increased expression of target genes. In leukemic cells, where DNA is damaged, CAK’s ability to phosphorylate retinoic acid and estrogen receptors is decreased. Decreased CAK activity creates a feedback loop, which turns off TFIIH activity. CAK also plays a role in DNA damage response. The activity of CAK associated with TFIIH decreases when DNA is damaged by UV irradiation. Inhibition of CAK prevents cell cycle from progressing. This mechanism ensures the fidelity of chromosome transmission. References External links Cell cycle
CDK-activating kinase
Biology
1,703
23,314,803
https://en.wikipedia.org/wiki/Scout%20X-1A
Scout X-1A was an American sounding rocket which was flown in 1962. It was a five-stage derivative of the earlier Scout X-1, with an uprated first stage, and a NOTS-17 upper stage. The Scout X-1A used an Algol 1C first stage, instead of the earlier Algol 1B used on the Scout X-1. The second, third and fourth stages were the same as those used on the Scout X-1; a Castor 1A, Antares 1A and Altair 1A respectively. The fifth stage was the NOTS-17 solid rocket motor, which had been developed by the Naval Ordnance Test Station. The Scout X-1A was launched on its only flight at 05:07 GMT on 1 March 1962. It flew from Launch Area 3 of the Wallops Flight Facility. The flight carried an atmospheric re-entry experiment to an apogee of , and was successful. Following this, the Scout X-1A was replaced by the Scout X-2. References 1962 in spaceflight X-1A
Scout X-1A
Astronomy
215
2,472,625
https://en.wikipedia.org/wiki/Digital%20Accessible%20Information%20System
Digital accessible information system (DAISY) is a technical standard for digital audiobooks, periodicals, and computerized text. DAISY is designed to be a complete audio substitute for print material and is specifically designed for use by people with print disabilities, including blindness, impaired vision, and dyslexia. Based on the MP3 and XML formats, the DAISY format has advanced features in addition to those of a traditional audiobook. Users can search, place bookmarks, precisely navigate line by line, and regulate the speaking speed without distortion. DAISY also provides aurally accessible tables, references, and additional information. As a result, DAISY allows visually impaired listeners to navigate something as complex as an encyclopedia or textbook, otherwise impossible using conventional audio recordings. DAISY multimedia can be a book, magazine, newspaper, journal, computerized text, or a synchronized presentation of text and audio. It provides up to six embedded "navigation levels" for content, including embedded objects such as images, graphics, and MathML. In the DAISY standard, navigation is enabled within a sequential and hierarchical structure consisting of (marked-up) text synchronized with audio. The original DAISY 2 specification (1998) was based on HTML and SMIL. The DAISY 2.02 revision (2001) was based on XHTML and SMIL. DAISY 3 (2005) is based on XML and is standardized as ANSI/NISO Z39.86-2005. The DAISY Consortium was founded in 1996 and consists of international organizations committed to developing equitable access to information for people who have a print disability. The consortium was selected by the National Information Standards Organization (NISO) as the official maintenance agency for the DAISY/NISO Standard. 
Specification A Digital Talking Book (DTB) is a collection of electronic files arranged to present information to the target population via alternative media, namely, human or synthetic speech, refreshable Braille, or visual display, e.g., large print. The files comprising a DAISY DTB are: a package file (a set of metadata describing the DTB); a textual content file (the text of the document in XML); audio files (human or synthetic speech MP3 recordings); image files (for visual displays); synchronization files (which synchronize the different media files of the DTB during playback); a navigation control file (for viewing the document's hierarchical structure); a bookmark/highlight file (supporting user-set highlights); a resource file (for playback management); and a distribution information file (mapping each SMIL file to a specific media unit). Access to materials Since DAISY is often used by people with disabilities, many of the existing organizations which produce accessible versions of copyrighted content are moving to the DAISY standard, and slowly moving away from more traditional methods of distribution such as cassette tape. In the United States, Learning Ally, AMAC Accessibility, Bookshare, the Internet Archive and the National Library Service for the Blind and Print Disabled (NLS), among others, offer content to blind and visually impaired individuals. Learning Ally and Bookshare also allow access by those with dyslexia or other disabilities which impair the person's ability to read print. The NLS uses a library methodology, on the basis that the books are loaned (as they traditionally have been, on physical cassette), hence it is able to offer content free of charge, just as any public library can. Learning Ally and Bookshare are both subscription-based services. Bookshare membership is free to U.S. students due to funding from the U.S. Department of Education. 
Content from both the NLS and the Learning Ally organizations uses the DAISY Protected Digital Book (PDTB) encryption standard. The basic structure of the DAISY definition files remains the same; however, the audio itself, and in some cases certain information tags in the DAISY SMIL files, are encrypted and must be decrypted in order to be read or played back. The organization which offers the content provides a decryption key to the user, which can be installed into a DAISY player to allow decryption. As the encryption schemes are not part of the core DAISY standard, only players which specifically implement the necessary algorithms and key management will be able to access these titles. Bookshare utilizes its own digital rights management plan, including fingerprinting each digital book with the identity of the downloading user. These actions are done to comply with laws requiring copyrighted material distributed in a specialized format to be protected from access by unauthorized individuals, such as those who do not have a qualifying disability. Playback and production DAISY books can be heard on standalone DAISY players, computers using DAISY playback software, mobile phones, and MP3 players (with limited navigation). DAISY books can be distributed on a CD/DVD, memory card or through the Internet. A computerized text DAISY book can be read using a refreshable braille display or screen-reading software, printed as a braille book on paper, converted to a talking book using a synthesised voice or a human narration, and also printed on paper as a large print book. In addition, it can be read as large print text on a computer screen. 
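As a toy illustration of the synchronized text/audio structure described in the specification above, the sketch below reads a simplified SMIL-style synchronization fragment and extracts the text/audio pairing. The file names, ids and exact markup here are hypothetical and simplified; they do not follow the exact DAISY 3 schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified SMIL-style synchronization fragment: each <par>
# pairs one text reference with one audio clip (not the exact DAISY schema).
SMIL_SAMPLE = """\
<smil>
  <body>
    <seq>
      <par id="p1">
        <text src="book.xml#sent1"/>
        <audio src="chapter1.mp3" clipBegin="0.0s" clipEnd="2.5s"/>
      </par>
      <par id="p2">
        <text src="book.xml#sent2"/>
        <audio src="chapter1.mp3" clipBegin="2.5s" clipEnd="5.1s"/>
      </par>
    </seq>
  </body>
</smil>
"""

def sync_points(smil_text):
    """Return (text reference, audio src, clip start, clip end) tuples."""
    root = ET.fromstring(smil_text)
    points = []
    for par in root.iter("par"):
        text = par.find("text")
        audio = par.find("audio")
        points.append((text.get("src"), audio.get("src"),
                       audio.get("clipBegin"), audio.get("clipEnd")))
    return points
```

A player built on this idea would seek the audio file to each clip's start time while highlighting the referenced text fragment, which is what enables DAISY's sentence-level navigation.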
See also Accessible publishing Books for the Blind Chakshumathi Design for All (in ICT) DTBook West German Audio Book Library for the Blind References External links DAISY Consortium DaisyNow.Net - The first online DAISY delivery web application Daisy 3: A Standard for Accessible Multimedia Books (PDF) Accessible information Audiobooks Blindness equipment Markup languages XML-based standards Open formats 1996 establishments
Digital Accessible Information System
Technology
1,114
23,189,698
https://en.wikipedia.org/wiki/Geastrum%20fimbriatum
Geastrum fimbriatum, commonly known as the fringed earthstar or the sessile earthstar, is an inedible species of mushroom belonging to the genus Geastrum, or earthstar fungi. First described in 1829, it is distinguished from other earthstars by the delicate fibers that line the circular pore at the top of its spore sac. The species has a widespread distribution, being found in Eurasia and the Americas. Taxonomy Elias Magnus Fries described Geastrum fimbriatum (as Geaster fimbriatus) in his 1829 Systema mycologicum. It is commonly known as the fringed earthstar or the sessile earthstar. The specific epithet fimbriatum means "fringed", referring to the characteristic edge of the apical pore of the spore sac. Description The fruit bodies of G. fimbriatum start out roughly spherical and hypogeous. As it matures, the fruit body pushes up through the soil and the outer layer of the spore case (exoperidium) splits open to form between 5 and 8 rays that curve downward. The fully expanded fruit body has a diameter of up to . Before expansion, the outer surface is cottony, with adherent soil particles; this layer ultimately peels off to reveal a smooth, grayish-brown surface. The inner spore sac is yellowish brown and features a small conical pore with fringed edges. Unlike other similar earthstar fungi, the edges of this pore are not sharply delimited from the rest of the spore sac, and do not have grooves. The fruit bodies have no distinctive taste or odor. The spores are spherical, roughened by many small points or warts, and measure 2.4–4 μm. The capillitium is thick-walled, unbranched, and 4–7 μm thick. Similar species Similar species include G. saccatum, which is larger – up to across – and has a clearly delimited ring-like area around the pore opening. G. rufescens has reddish tones that are absent from G. fimbriatum. 
Habitat and distribution Geastrum fimbriatum is a saprobic species, and its fruit bodies grow on the ground in groups or clusters, usually near the stumps of hardwood trees. It is found in Europe, Asia (India and Mongolia), eastern North America (including Mexico), Central America (Costa Rica), and South America (Brazil). Uses Although typically listed by field guides as an inedible species, it is eaten by the tribal peoples of Madhya Pradesh. In culture The species was depicted on a Nigerian postage stamp in 1985. References fimbriatum Fungi described in 1829 Inedible fungi Fungi of Europe Fungi of North America Taxa named by Elias Magnus Fries Fungus species
Geastrum fimbriatum
Biology
585
5,518,588
https://en.wikipedia.org/wiki/1-Chloro-9%2C10-bis%28phenylethynyl%29anthracene
1-Chloro-9,10-bis(phenylethynyl)anthracene is a fluorescent dye used in lightsticks. It emits yellow-green light, used in 30-minute high-intensity Cyalume sticks. See also 9,10-Bis(phenylethynyl)anthracene 2-Chloro-9,10-bis(phenylethynyl)anthracene References Fluorescent dyes Organic semiconductors Anthracenes Alkyne derivatives Chloroarenes
1-Chloro-9,10-bis(phenylethynyl)anthracene
Chemistry
115
7,127,688
https://en.wikipedia.org/wiki/Nest%20%28magazine%29
Nest: A Quarterly of Interiors was a magazine published from 1997 to 2004, for a total run of 26 issues. The first issue was Fall 1997, and the second issue was Fall 1998. Thereafter, the issues were Winter '98-'99, Spring '99, Summer '99, Fall '99, Winter '99-'00, and so on until Fall '04. The founder was Joseph Holtzman. It was published on the Upper East Side of New York City. Marketed as an interior design magazine, and edited by Joseph Holtzman, Nest generally eschewed the conventionally beautiful luxury interiors showcased in other magazines, and instead featured photographs of nontraditional, exceptional, and unusual environments. Writing in The New York Times, Fred A. Bernstein observed that Joseph Holtzman "believed that an igloo, a prison cell or a child's attic room (adorned with Farrah Fawcett posters) could be as compelling as a room by a famous designer." During its run, Nest showed the room of a 40-year-old diaper lover, the lair of an Indonesian bird that decorates with colored stones and vomit, the final resting place of Napoleon's penis, the quarters of Navy seamen, a barbed-wire-trimmed bed that doubled as a tank, and a Gothic Christmas card from filmmaker John Waters. Noted architect Rem Koolhaas called it "an anti-materialistic, idealistic magazine about the hyperspecific in a world that is undergoing radical leveling, an 'interior design' magazine hostile to the cosmetic." Artist Richard Tuttle was quoted as saying that Mr. Holtzman "channeled the collective unconscious, to give us the pleasure of ornament before we even knew we wanted it." 
Awards 2000, General Excellence Award, The American Society of Magazine Editors 2001, Best Design, The American Society of Magazine Editors References External links The now defunct website of Nest: A Quarterly of Interiors Commentary on Nest Nest Magazine Closes I miss Nest Magazine - commentary with pictures Visual arts magazines published in the United States Quarterly magazines published in the United States Defunct magazines published in the United States Design magazines Independent magazines Interior design Magazines established in 1998 Magazines disestablished in 2004 Magazines published in New York City
Nest (magazine)
Engineering
456
457,830
https://en.wikipedia.org/wiki/Excess-3
Excess-3, 3-excess or 10-excess-3 binary code (often abbreviated as XS-3, 3XS or X3), shifted binary or Stibitz code (after George Stibitz, who built a relay-based adding machine in 1937) is a self-complementary binary-coded decimal (BCD) code and numeral system. It is a biased representation. Excess-3 code was used on some older computers as well as in cash registers and hand-held portable electronic calculators of the 1970s, among other uses. Representation Biased codes are a way to represent values with a balanced number of positive and negative numbers using a pre-specified number N as a biasing value. Biased codes (and Gray codes) are non-weighted codes. In excess-3 code, numbers are represented as decimal digits, and each digit is represented by four bits as the digit value plus 3 (the "excess" amount): The smallest binary number represents the smallest value (). The greatest binary number represents the largest value (). To encode a number such as 127, one simply encodes each of the decimal digits as above, giving (0100, 0101, 1010). Excess-3 arithmetic uses different algorithms than normal non-biased BCD or binary positional system numbers. After adding two excess-3 digits, the raw sum is excess-6. For instance, after adding 1 (0100 in excess-3) and 2 (0101 in excess-3), the sum looks like 6 (1001 in excess-3) instead of 3 (0110 in excess-3). To correct this problem, after adding two digits, it is necessary to remove the extra bias by subtracting binary 0011 (decimal 3 in unbiased binary) if the resulting digit is less than decimal 10, or subtracting binary 1101 (decimal 13 in unbiased binary) if an overflow (carry) has occurred. (In 4-bit binary, subtracting binary 1101 is equivalent to adding 0011 and vice versa.) Advantage The primary advantage of excess-3 coding over non-biased coding is that a decimal number can be nines' complemented (for subtraction) as easily as a binary number can be ones' complemented: just by inverting all bits. 
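The digit encoding, the post-addition bias correction, and the nines'-complement inversion described above can be sketched as follows. This is a minimal illustration in Python; the function names are ours, not part of any standard library.

```python
# Sketch of excess-3 (XS-3) digit handling as described above.

def xs3_encode(digit):
    """Encode a decimal digit 0-9 as its 4-bit excess-3 code."""
    assert 0 <= digit <= 9
    return digit + 3                      # e.g. 7 -> 0b1010

def xs3_nines_complement(code):
    """Nines' complement of an excess-3 digit is just bitwise inversion."""
    return ~code & 0b1111                 # xs3(2)=0101 -> 1010 = xs3(7)

def xs3_add_digit(a, b, carry_in=0):
    """Add two excess-3 digits; return (excess-3 sum digit, carry out)."""
    raw = a + b + carry_in                # raw sum is excess-6
    if raw > 0b1111:                      # 4-bit carry-out == decimal carry
        # subtract binary 1101 (decimal 13) mod 16, i.e. add binary 0011
        return (raw + 3) & 0b1111, 1
    return raw - 3, 0                     # no carry: subtract binary 0011
```

For example, adding the encoded digits 9 and 9 yields the excess-3 code for 8 with a carry out, matching 9 + 9 = 18, and encoding 127 digit by digit reproduces the (0100, 0101, 1010) sequence given above.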
Also, when the sum of two excess-3 digits is greater than 9, the carry bit of a 4-bit adder will be set high. This works because, after adding two digits, an "excess" value of 6 results in the sum. Because a 4-bit integer can only hold values 0 to 15, an excess of 6 means that any sum over 9 will overflow (produce a carry-out). Another advantage is that the codes 0000 and 1111 are not used for any digit. A fault in a memory or basic transmission line may result in these codes. It is also more difficult to write the zero pattern to magnetic media. Example A BCD 8-4-2-1 to excess-3 converter in VHDL:

library ieee;
use ieee.std_logic_1164.all;

entity bcd8421xs3 is
  port (
    a  : in  std_logic;
    b  : in  std_logic;
    c  : in  std_logic;
    d  : in  std_logic;
    an : buffer std_logic;
    bn : buffer std_logic;
    cn : buffer std_logic;
    dn : buffer std_logic;
    w  : out std_logic;
    x  : out std_logic;
    y  : out std_logic;
    z  : out std_logic
  );
end entity bcd8421xs3;

architecture dataflow of bcd8421xs3 is
begin
  -- inverted copies of the BCD inputs
  an <= not a;
  bn <= not b;
  cn <= not c;
  dn <= not d;
  -- sum-of-products equations for the excess-3 output bits
  w <= (an and b and d) or (a and bn and cn) or (an and b and c and dn);
  x <= (an and bn and d) or (an and bn and c and dn) or (an and b and cn and dn) or (a and bn and cn and d);
  y <= (an and cn and dn) or (an and c and d) or (a and bn and cn and dn);
  z <= (an and dn) or (a and bn and cn and dn);
end architecture dataflow; -- of bcd8421xs3

Extensions 3-of-6 code extension: The excess-3 code is sometimes also used for data transfer, then often expanded to a 6-bit code per CCITT GT 43 No. 1, where 3 out of 6 bits are set. 4-of-8 code extension: As an alternative to the IBM transceiver code (which is a 4-of-8 code with a Hamming distance of 2), it is also possible to define a 4-of-8 excess-3 code extension achieving a Hamming distance of 4, if only denary digits are to be transferred. See also Offset binary, excess-N, biased representation Excess-128 Excess-Gray code Shifted Gray code Gray code m-of-n code Aiken code References Binary arithmetic Numeral systems
Excess-3
Mathematics
1,101
35,051,436
https://en.wikipedia.org/wiki/Acriflavine%20resistance%20protein%20family
The Escherichia coli acriflavine resistance genes (acrA and acrB) encode a multi-drug efflux system that is believed to protect the bacterium against hydrophobic inhibitors. The E. coli AcrB protein is a transporter that is energized by the proton-motive force and that shows the widest substrate specificity among all known multidrug pumps, ranging from most of the currently used antibiotics, disinfectants, dyes, and detergents to simple solvents. The structure of ligand-free AcrB shows that it is a homotrimer of 110 kDa per subunit. Each subunit contains 12 transmembrane helices and two large periplasmic domains (each exceeding 300 residues), one between helices 1 and 2 and one between helices 7 and 8. X-ray analysis of the overexpressed AcrB protein demonstrated that the three periplasmic domains form, in the centre, a funnel-like structure and a connected narrow (or closed) pore. The pore is opened to the periplasm through three vestibules located at subunit interfaces. These vestibules were proposed to allow direct access of drugs from the periplasm as well as the outer leaflet of the cytoplasmic membrane. The three transmembrane domains of the AcrB protomers form a large, 30 Å-wide central cavity that spans the cytoplasmic membrane and extends to the cytoplasm. X-ray crystallographic structures of the trimeric AcrB pump from E. coli with four structurally diverse ligands demonstrated that three molecules of ligand bind simultaneously to the extremely large central cavity of 5000 cubic angstroms, primarily by hydrophobic, aromatic stacking and van der Waals interactions. Each ligand uses a slightly different subset of AcrB residues for binding. The bound ligand molecules often interact with each other, stabilising the binding. References External links Protein families Transmembrane transporters
Acriflavine resistance protein family
Biology
406
8,923,567
https://en.wikipedia.org/wiki/Brian%20Jones%20Presents%20the%20Pipes%20of%20Pan%20at%20Joujouka
Brian Jones Presents the Pipes of Pan at Joujouka is an album by the Moroccan group the Master Musicians of Joujouka, released on Rolling Stones Records and distributed by Atco Records in 1971. It was produced by Brian Jones of the Rolling Stones, who recorded a performance by the group on 29 July 1968 in the village of Jajouka in Morocco. Jones called the tracks "a specially chosen representation" of music played in the village during the annual week-long Rites of Pan Festival. It was significant for presenting the Moroccan group to a global audience, drawing other musicians to Jajouka, including American composer Ornette Coleman, who collaborated with the group. The album was reissued in 1995. The executive producers were Philip Glass, Kurt Munkacsi, and Rory Johnston, with notes by Bachir Attar, Paul Bowles, William S. Burroughs, Stephen Davis, Jones, Brion Gysin, and David Silver. This deluxe album included additional graphics, more extensive notes by David Silver and Burroughs, and a second CD, produced by Cliff Mark, with two "full-length remixes." Background The music of Jajouka is regarded as having become famous in the West following British writer Brion Gysin and American writer Paul Bowles' documentation of their experience hearing it at a festival in Sidi-Kacem in 1950. Entranced with the music's sound, they were led to the village to hear the music in person by Moroccan painter Mohamed Hamri. Gysin, along with Hamri, later brought Brian Jones to hear the village music in 1968. The album's music included songs meant for the village's "most important religious holiday festival, Aid el Kbir". In the festival's ritual, a young boy dressed as "Bou Jeloud, the Goat God", wearing the "skin of a freshly slaughtered goat", ran to "spread panic through the darkened village" as the musicians played with abandon. 
Gysin connected the ritual, performed to protect the village's health in the coming year, to the fertility festival of Lupercalia and the "ancient Roman rites of Pan"; he referred to the Bou Jeloud dancer as "Pan" and "the Father of Skins". This name stuck, leading to the reference to Pan in the album's title. Jones, recording engineer George Chkiantz, and Gysin travelled to the village in 1968, accompanied by Hamri and Jones' girlfriend Suki Potier to record the musicians using a portable Uher recorder. Jones worked on the two-track recordings in London, adding stereo phasing, echo, and other effects. Jones edited the full-band selection to 14 minutes by "cross-phasing fragments of a work that runs to some ninety minutes in uncut form". The album includes three types of music: repetitive vocal chants "similar to those employed throughout Islam", flute and drum music featuring "several distinct melodic motifs and improvisations over a drone" played by two flutists and several drummers, and the full village orchestra's drum and horn music played to accompany the "frenzied dance of Bou Jeloud, a Moroccan Pan". The New York Times reviewer Robert Palmer reported that the call-and-response horn motifs are "handed down from generation to generation", noting that the "drumming rhythms are definitely African", and paraphrased Gysin as connecting the musical origins to Spain, "from the Moorish courts of Cordova and Seville". The cover illustration on the 1971 album was originally a painting by Mohamed Hamri depicting the master musicians with Brian Jones in the center. Jones edited the album and prepared the artwork together with designer and illustrator Dave Field, who also designed the Joujouka logo and painted a depiction of a carpet design on the inside cover. Jones finished producing the LP several months before his death in 1969. The album's release date was initially set for September 3, 1971, but was pushed back to October 8. 
Legacy In 1995, a CD reissue of the album was issued. It was licensed from Musidor by Point Music. A new 1990s photo of Bachir Attar, by his wife and manager, an American photographer, replaced Hamri's original painting of Brian Jones and the Master Musicians of Joujouka, which Jones had chosen as his cover. It also included in a sidebar a photo of the late Jones by Michael Cooper, as well as further contemporary photos, including a "Bou Jeloud" dancer, by Nutting. The CD's album title was changed to "Brian Jones Presents The Pipes of Pan At Jajouka" to tie in with The Master Musicians of Jajouka led by Bachir Attar. The name Master Musicians of Jajouka was used on the Master Musicians of Joujouka's second album due to contract conflicts. While the original vinyl album consisted of "two untitled, unbroken LP sides", the reissue separated the songs into six tracks with titles. The reissue cut the Master Musicians of Joujouka out of their rights and resulted in international protests organized by Frank Rynne and Joe Ambrose at concerts by Bachir Attar in London, New York and San Francisco, as well as at Philip Glass concerts in London and elsewhere. Brion Gysin's original sleeve-notes were altered to remove all reference to the central role that Hamri played in introducing him to the music of the village. A Brion Gysin illustration decorated an essay by Paul Bowles in the liner notes. The CD's executive producers were Philip Glass, Kurt Munkacsi, and Rory Johnston. Brian Jones was credited as producer. The multi-page booklet also included reminiscences and edited essays about the original band written by Brion Gysin (who died in 1986 and therefore was not consulted), David Silver, Stephen Davis, William S. Burroughs, Brian Jones, and Bachir Attar. 
The Master Musicians of Joujouka, mentored by Hamri from the 1950s until his death in 2000, continued releasing records on Sub Rosa Records, with further releases including the acclaimed "Live in Paris", recorded at the Centre Pompidou, Paris, in 2016, using their original name, "Master Musicians of Joujouka", as used on the 1971 release and in Mohamed Hamri's Tales of Joujouka. The group The Master Musicians of Jajouka led by Bachir Attar continues to record music and now issues CDs on its own label, Jajouka Records, in addition to performing on regular tours and recording music for film scores. In 1995, the Master Musicians of Joujouka and Mohamed Hamri launched an international campaign demanding that their interest in their recording with Brian Jones be recognised and that the re-release be withdrawn from sale until their concerns were addressed. The group led by the second youngest son of Hadj Abdesalam Attar still performs under the name Master Musicians of Jajouka led by Bachir Attar, recording the song "" in Tangier with the Rolling Stones for their Steel Wheels album (1989). Led by Attar's son and successor as band leader, Bachir Attar, the group also released soundtrack recordings under the Jajouka name and album recordings under the name Master Musicians of Jajouka Featuring Bachir Attar in the 1990s and 2000s. According to Bachir Attar, the Master Musicians of that early group were led by tribal chief Hadj Abdesalam Attar. Rikki Stein, who was never manager of the Master Musicians of Jajouka, noted that in 1971 the leader was Hadj Abdesalam Attar. However, Berdous and Mfdal were musicians with Hadj Abdesalam Attar and Bachir Attar until their deaths in the late 1990s. This throws doubt on the claim that Hadj Abdesalam Attar was leader, tribal or otherwise, in the late 1960s or early 1970s. However, Rikki Stein has since pointed out that there were regular elections held amongst the musicians and their supporters, who were also permitted to vote. 
In the late sixties and until 1971, Hadj Abdesalam Attar was the 'Rais' (President) of the Masters, while Hamri was president of The Jahjouka Folklore Association of the Tribe Ahl Serif, created collectively by the musicians of Jajouka. El Hadj was considered a great Jajouka musician, despite his propensity for black magic. Subsequently, though, in the early seventies elections were held and Maalim Fedal was elected Rais and continued to retain that title, certainly until the European tour organised by Rikki Stein in 1980. Critical legacy The Daily Telegraph reviewer Tom Horan identified the Master Musicians as the world's first world music band and described Brian Jones Presents the Pipes of Pan at Jajouka as a "field recording that Jones subsequently retouched back in Britain using modern studio technology". He said the album "tapped perfectly into the druggy mysticism that characterised the era". Richie Unterberger of AllMusic described it as a "document of Moroccan traditional music that achieves trance-like effects through its hypnotic, insistent percussion, eerie vocal chanting, and pipes." He noted that as the record was among the first recordings of this style of music to receive relatively wide exposure in Europe and North America, it "anticipated the wider popularity of trance-like music among both electronic rock and progressive African musicians later in the 20th century." In 1998, The Wire included Presents the Pipes of Pan at Jajouka in their list of "100 Records That Set the World on Fire (While No One Was Listening)". They noted that Jones "deployed the full arsenal of psychedelic signal processing" to enhance the music and his own experience of the musicians, resulting in an LP that "documents a millennia-old music, the sound of panic itself, as well as the fragmented mind of Jones in the months before his death." 
They added of its prescient musical style: According to author Louise Grey, the album was influential enough that other figures besides Jones, such as Ornette Coleman, Bill Laswell and Richard Horowitz, were also drawn into working with the Joujouka musicians. She added: "With supporters like this, Joujouka could hardly fail to generate interest in those interested in psychotropic music – even if there was a series of acrimonious fallings-out between the musicians after the appearance of their famous friends." The Independent writer Phil Sweeney highlighted the album's "ghita flutes and assorted drums" and wrote that parts of the album resemble "nothing so much as a Scottish regimental pipe band running amok on a mixture of amphetamine sulphate, Special Brew and helium." In 1999, Rob Chapman of Mojo wrote that Jones entered the project "with all the anthropological fervour of a Samuel Charters or Alan Lomax", but in his doctoring of the tapes, the resulting album is a "proto-dub masterpiece", as belatedly recognised by the Rolling Stones when they collaborated with the Master Musicians of Joujouka for Steel Wheels. Track listing All songs written by Pipes of Pan at Joujouka "55 (Hamsa oua Hamsine)" – 0:58 "War Song/Standing" + "One Half (Kaim Oua Nos)" – 2:22 "Take Me with You Darling, Take Me with You (Dinimaak A Habibi Dinimaak)" – 8:06 "Your Eyes Are Like a Cup of Tea (Al Yunic Sharbouni Ate)" – 10:35 "I Am Calling Out (L'Afta)" – 5:55 "Your Eyes Are Like a Cup of Tea" (reprise with flute) – 18:04 Titles come from Point Music reissue track listings, as the original vinyl release package had no titles References Album designed and illustrated by Dave Field Further reading Davis, Stephen (2001). Old Gods Almost Dead. Broadway Books, , pp 135–137, 172, 195–201, 227, 248–253, 270, 354, 504–505. Jennings, Nicholas (October 12, 1995). Liveeye PREVIEW: The Master Musicians of Jajouka. Eye Weekly. (Retrieved February 6, 2007.) Palmer, Robert (October 14, 1971). 
"Jajouka: Up the Mountain". Rolling Stone, p. 43. Palmer, Robert (March 23, 1989). "Into the Mystic". Rolling Stone, p. 106. Palmer, Robert (December 19, 1971). "Music for a Moroccan Pan". The New York Times. Palmer, Robert (June 11, 1992). "Up the Mountain". Rolling Stone, p. 40. Wyman, Bill and Coleman, Ray. Stone Alone (London, 1990), p. 515. Rondeau, Daniel. "Tanger Et Autres Marocs". Ed. Nil, January 1997. External links The official site for The Master Musicians of Jajouka led by Bachir Attar The official site for The Master Musicians of Joujouka Allmusic.com listing Master Musicians of Joujouka albums Sufi music albums 1971 live albums Rolling Stones Records live albums 1971 debut albums Arabic-language albums Albums produced by Brian Jones Psychedelic music albums Field recording World music albums by Moroccan artists
Brian Jones Presents the Pipes of Pan at Joujouka
Engineering
2,754
18,711,902
https://en.wikipedia.org/wiki/National%20Standards%20of%20the%20People%27s%20Republic%20of%20China
The National Standards of the People's Republic of China (), coded as , are the standards issued by the Standardization Administration of China under the authorization of Article 10 of the Standardization Law of the People's Republic of China. According to Article 2 of the Standardization Law, national standards are divided into mandatory national standards and recommended national standards. Mandatory national standards are prefixed "GB". Recommended national standards are prefixed "". Guidance technical documents are prefixed with "GB/Z", but are not legally part of the national standard system. Mandatory national standards are the basis for the product testing which products must undergo during the China Compulsory Certificate (CCC or 3C) certification. If there is no corresponding mandatory national standard, CCC is not required. Nomenclature A Chinese standard code has three parts: the prefix, the sequential number, and the year number. For example, GB 2312-1980 refers to the national compulsory standard (GB), sequential number 2312, revision year 1980. Besides the national standard repository, China allows the registration of standards by industry/trade, by localities (DB, Dìfāng Biāozhǔn, "local standard"), by associations (T), or by an individual company (Q). The overall prefix number-year format is retained. Copyright and availability Under the first clause of Article 5 of the Copyright Law of the People's Republic of China, compulsory standards are not copyrightable as they fall under "other documents of a legislative, administrative or judicial nature". In 1999, the Supreme People's Court ruled that although compulsory standards do not enjoy copyright protections, publishing houses can be given exclusive, sui generis rights to publish a compulsory standard. The Standardization Administration operates a website for obtaining digital copies of the standards (excluding those dealing with food safety, environment protection, and civil engineering). 
The availability is broken down as follows (as of October 2023): Out of 2029 included GB standards, 1464 may be read online or downloaded as a PDF file. The remaining 565 Cǎibiāo (, adopted international standards) may only be read online. Out of 41664 included GB/T standards, 27154 may be read online. The remaining 4513 Cǎibiāo are only indexed by title. Out of 573 included GB/Z documents, 261 may be read online. The remaining 312 Cǎibiāo are only indexed by title. Copies of standards (written in simplified Chinese) may be obtained from the SPC web store. List A non-exhaustive list of National Standards of the People's Republic of China is listed as follows, accompanied with similar international standards of ISO, marked as identical (IDT), equivalent (EQV), or non-equivalent (NEQ). Changes are made frequently within the Chinese regulatory system as new standards are released and existing standards are updated. See also GOST China Compulsory Certificate (CCC or 3C) China Food and Drug Administration Chinese National Standards, used in the Republic of China Vietnam Standards Other meanings of Guóbiāo: Guójì Biāozhǔn Wǔ (, International Standard Dancesport) Guóbiāo Májiàng (, National Standard Mahjong) References External links Certification marks Product certification
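The prefix/sequence/year nomenclature described above can be sketched as a small parser. The helper name is hypothetical, and the prefix list is limited to those mentioned in the article (GB, GB/T, GB/Z, DB, T, Q); real-world codes may carry extra qualifiers this sketch ignores.

```python
import re

# Hypothetical parser for the "prefix sequence-year" code format described
# under Nomenclature, e.g. "GB 2312-1980". Note GB/[TZ] must be tried
# before the bare GB prefix so "GB/T" is not split at "GB".
CODE_RE = re.compile(r"^(GB/[TZ]|GB|DB|T|Q)\s*(\d+)-(\d{4})$")

def parse_standard_code(code):
    m = CODE_RE.match(code)
    if m is None:
        raise ValueError(f"unrecognised standard code: {code!r}")
    prefix, number, year = m.groups()
    return {
        "prefix": prefix,
        "number": int(number),
        "year": int(year),
        # Only the bare "GB" prefix marks a mandatory national standard.
        "mandatory": prefix == "GB",
    }
```

For example, "GB 2312-1980" parses as a mandatory standard numbered 2312, revision year 1980, while any "GB/T" code is classified as recommended.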
National Standards of the People's Republic of China
Mathematics
658
35,688,291
https://en.wikipedia.org/wiki/International%20Cooperative%20Biodiversity%20Groups
International Cooperative Biodiversity Groups (ICBG) is a program under the National Institutes of Health, the National Science Foundation, and USAID, established in 1993 to promote collaborative research between American universities and research institutions in countries that harbor unique genetic resources in the form of biodiversity, a practice known as bioprospecting. The basic aim of the program is to benefit both the host community and the global scientific community by discovering and researching new solutions to human health problems based on previously unexplored genetic resources. It therefore seeks to conserve biodiversity and to encourage and support sustainable practices for the use of biological resources. Each group is headed by a principal investigator who coordinates the efforts of the research consortium, which often has branches in the US and the host country as well as in the countries of other third-party institutions. There are currently International Cooperative Biodiversity Groups operating in Latin America, Africa, Asia and Papua New Guinea. The Maya ICBG, a group dedicated to collecting the ethnobiological knowledge of the Maya population of Chiapas, Mexico, led by Dr. Brent Berlin, was closed in 2001, after two years of funding, following accusations of having failed to obtain prior informed consent. References Biodiversity National Institutes of Health National Science Foundation United States Agency for International Development 1993 establishments in the United States
International Cooperative Biodiversity Groups
Biology
262
18,604,665
https://en.wikipedia.org/wiki/Pentosan
Pentosans are polymers composed of pentoses. In contrast to cellulose, which is composed of hexose (glucose) monomers, pentosans are derived from five-carbon sugars such as xylose. Pentosan-rich biomass is the precursor to furfural. The pentosan content has been determined for many natural materials: 29-25%: oat hulls, cottonseed hulls, barley, sugarcane bagasse, sunflower husks 24-20%: wheat straw, flax shives, hazelnut shells, birchwood, eucalyptus wood 8%: pinewood 3%: peanut shells Pentosans can act as heparinoids, glycosaminoglycans that are derivatives of heparin. They can also influence bread quality. See also Pentosan polysulfate, a semi-synthetic polysulfated xylan sold for the relief of various medical conditions including thrombi and interstitial cystitis in humans and osteoarthritis in dogs and horses References External links Barley Dietary fiber Polysaccharides
Pentosan
Chemistry
232
31,746,419
https://en.wikipedia.org/wiki/Boletellus%20chrysenteroides
Boletellus chrysenteroides is a species of fungus in the family Boletaceae. It was first described as Boletus chrysenteroides by mycologist Wally Snell in 1936. Snell later (1941) transferred the species to Boletellus. See also List of North American boletes References Fungi of North America chrysenteroides Fungi described in 1936 Fungus species
Boletellus chrysenteroides
Biology
81
42,553,031
https://en.wikipedia.org/wiki/Beihai%20Tunnel%20%28Beigan%29
The Beihai Tunnel () is a tunnel in Banli village, Beigan Township, Lienchiang County, Taiwan. History The tunnel was opened in 1968 for amphibious landings, 10 years after the end of the Second Taiwan Strait Crisis between the Republic of China Armed Forces and the People's Liberation Army. The construction lasted around 3 years and claimed the lives of over 100 soldiers. After the Matsu National Scenic Area Administration was established, it took over the management of the tunnel. It renovated the interior of the tunnel and the neighboring tourist spots, building an access road and protective railings. Features The tunnel is 550 meters long and 9–15 meters wide. Visitors were once able to ride canoes through the tunnel, but the site has been closed to visitors for several years because falling rocks have rendered it dangerous. See also List of tourist attractions in Taiwan Zhaishan Tunnel Beihai Tunnel (Nangan) Beihai Tunnel (Dongyin) References 1968 establishments in Taiwan Beigan Township Military history of Taiwan Tunnels completed in 1968 Tunnels in Lienchiang County Tunnel warfare
Beihai Tunnel (Beigan)
Engineering
216
8,066,741
https://en.wikipedia.org/wiki/Japanese%20Industrial%20Standards
Japanese Industrial Standards (JIS) are the standards used for industrial activities in Japan, coordinated by the Japanese Industrial Standards Committee (JISC) and published by the Japanese Standards Association (JSA). The JISC is composed of many nationwide committees and plays a vital role in standardizing activities across Japan. History In the Meiji era, private enterprises were responsible for making standards, although the Japanese government also had standards and specification documents for procurement purposes for certain articles, such as munitions. These were summarized to form an official standard, the Japanese Engineering Standard, in 1921. During World War II, simplified standards were established to increase matériel output. The present Japanese Standards Association was established in 1946, a year after Japan's defeat in World War II. The Japanese Industrial Standards Committee regulations were promulgated in 1946, and new standards were formed. The Industrial Standardization Law was enacted in 1949, which forms the legal foundation for the present Japanese Industrial Standards. New JIS mark The Industrial Standardization Law was revised in 2004 and the JIS product certification mark was changed; since October 1, 2005, the new JIS mark has been used upon re-certification. Use of the old mark was allowed during a three-year transition period ending on September 30, 2008, by which time every manufacturer was able to use the new JIS mark. Therefore, all JIS-certified Japanese products manufactured since October 1, 2008, have carried the new JIS mark. Standards classification and numbering Standards are named in the format "JIS X 0208:1997", where X denotes the area division, followed by four digits identifying the individual standard (five digits for standards corresponding to ISO standards), and four final digits designating the revision year.
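The naming scheme just described can be illustrated with a short, hypothetical parser; the function and the (partial) division table below are illustrative assumptions, not official JISC tooling, and the sketch handles only the basic "JIS <letter> <number>[:<year>]" form:

```python
import re

# Illustrative splitter for JIS designations such as "JIS X 0208:1997":
# division letter, four- or five-digit standard number, optional revision year.
JIS_RE = re.compile(r"^JIS\s+([A-Z])\s+(\d{4,5})(?::(\d{4}))?$")

DIVISIONS = {  # subset of the divisions listed below
    "A": "Civil engineering and architecture",
    "B": "Mechanical engineering",
    "C": "Electronics and electrical engineering",
    "X": "Information processing",
    "Z": "Miscellaneous",
}

def parse_jis(code):
    m = JIS_RE.match(code.strip())
    if m is None:
        raise ValueError("not a JIS designation: %r" % code)
    division, number, year = m.groups()
    return {"division": division,
            "area": DIVISIONS.get(division, "unknown"),
            "number": int(number),         # note: drops leading zeros
            "year": int(year) if year else None}

print(parse_jis("JIS X 0208:1997"))
# {'division': 'X', 'area': 'Information processing', 'number': 208, 'year': 1997}
```

A designation without a year component, such as "JIS C 7001", parses with `year` set to `None`.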
Divisions of JIS and significant standards are: A Civil engineering and architecture JIS A 0001 – Basic module to ISO 1006 JIS A 0002 – Glossary of terms used in building module to ISO 1791 JIS A 0003 – Tolerances for building to ISO 3443-5 JIS A 0004 – Principle of modular coordination in buildings to ISO 2848 B Mechanical engineering JIS B 1012 – JIS screw drive, which is not the same as Phillips JIS B 7021:2013 – Water resistant watches for general use—Classification and water resistance JIS B 7512:2016 – Steel tape measures JIS B 7516:2005 – Metal rules C Electronics and electrical engineering JIS C 0920:2003 – Degrees of protection provided by enclosures (IP Code) JIS C 3202:2014 – Enamelled winding wires JIS C 5062:2008 – Marking codes for resistors and capacitors JIS C 5063:1997 – Preferred number series for resistors and capacitors JIS C 7001 – Type designation system for electronic tubes JIS C 7012 – Type designation system for discrete semiconductor devices JIS C 8800:2008 – Glossary of terms for fuel cell power systems D Automotive engineering JIS D 0004-1 – Earth-moving machinery−Scrapers− Part 1 : Terminology and commercial specifications to ISO 7133 JIS D 0004-2 – Earth-moving machinery−Scrapers− Part 2 : Standard form of specifications and testing methods JIS D 0004-3 – Earth-moving machinery−Scrapers− Part 3 : Bowl volumetric rating to ISO 6485 E Railway engineering JIS E 1101 – Flat bottom railway rails and special rails for switches and crossings of non-treated steel to ISO 5003 JIS E 1102 – Fish plates for rails to ISO 6305-1 JIS E 1107 – Steel bolts and nuts for fish-plates and fastenings to ISO 6305-4 JIS E 2001 – Electric traction contact lines−Vocabulary to IEC 60050 (811), IEC 60913 JIS E 4001 – Railway rolling stock-Vocabulary to IEC 60050-811 JIS E 4041 – Rolling stock-Testing of rolling stock on completion of construction and before entry into service to IEC 61133 JIS E 4042 – General rules for the test methods of electric locomotives on
completion of construction to IEC 61133 JIS E 4043 – General rules for the test methods of diesel railcar on completion of construction to IEC 61133 JIS E 4044 – General rules for the test methods of diesel locomotives on completion of construction to IEC 61133 F Ship building JIS F 0013 - Ships and marine technology-Vocabulary-Deck machinery and outfittings to ISO 3828 & ISO 8147 G Ferrous materials and metallurgy JIS G 3101 – Rolled steels for general structure JIS G 3103 – Carbon steel and molybdenum alloy steel plates for boilers and pressure vessels JIS G 3106 – Rolled steels for welded structure JIS G 3108 – Rolled carbon steel for cold-finished steel bars JIS G 3114 - Hot-rolled atmospheric corrosion resisting steels for welded structure JIS G 3115 – Steel plates for pressure vessels for intermediate temperature service JIS G 3118 – Carbon steel plates for pressure vessels for intermediate and moderate temperature services JIS G 3126 – Carbon steel plates for pressure vessels for low temperature service JIS G 3141 – Commercial Cold Rolled SPCC Steels JIS G 4304 – Hot-rolled stainless steel plate, sheet and strip JIS G 4305 – Cold-rolled stainless steel plate, sheet and strip H Nonferrous materials and metallurgy JIS H 2105 – Pig lead JIS H 2107 – Zinc ingots JIS H 2113 – Cadmium metal JIS H 2116 – Tungsten powder and tungsten carbide powder JIS H 2118 – Aluminum alloy ingots for die castings JIS H 2121 – Electrolytic cathode copper JIS H 2141 – Silver bullion JIS H 2201 – Zinc alloy ingots for die casting JIS H 2202 – Copper alloy ingots for castings JIS H 2211 – Aluminium alloy ingots for castings JIS H 2501 – Phosphor copper metal JIS H 3100 – Copper and copper alloy sheets, plates and strips JIS H 3110 – Phosphor bronze and nickel silver sheets, plates and strips JIS H 3130 – Copper beryllium alloy, copper titanium alloy, phosphor bronze, copper-nickel-tin alloy and nickel silver sheets, plates and strips for springs JIS H 3140 – Copper bus bars JIS H 
3250 – Copper and copper alloy rods and bars JIS H 3260 – Copper and copper alloy wires JIS H 3270 – Copper beryllium alloy, phosphor bronze and nickel silver rods, bars and wires JIS H 3300 – Copper and copper alloy seamless pipes and tubes JIS H 3320 – Copper and copper alloy welded pipes and tubes JIS H 3330 – Plastic covered copper tubes JIS H 3401 – Pipe fittings of copper and copper alloys JIS H 4000 – Aluminum and aluminum alloy sheets and plates, strips and coiled sheets JIS H 4001 – Painted aluminum and aluminum alloy sheets and strips JIS H 4040 – Aluminum and aluminum alloy rods, bars and wires JIS H 4080 – Aluminum and aluminum alloys extruded tubes and cold-drawn tubes JIS H 4090 – Aluminum and aluminum alloy welded pipes and tubes JIS H 4100 – Aluminum and aluminum alloy extruded shape JIS H 4160 – Aluminum and aluminum alloy foils JIS H 4170 – High purity aluminum foils JIS H 4301 – Lead and lead alloy sheets and plates JIS H 4303 – DM lead sheets and plates JIS H 4311 – Lead and lead alloy tubes for common industries JIS H 4461 – Tungsten wires for lighting and electronic equipment JIS H 4463 – Thoriated tungsten wires and rods for lighting and electronic equipment JIS H 4631 – Titanium and titanium alloy tubes for heat exchangers JIS H 4635 – Titanium and titanium alloy welded pipes JIS H 5401 – White metal JIS H 8300 – Thermal spraying―zinc, aluminum and their alloys JIS H 8601 – Anodic oxide coatings on aluminum and aluminum alloys JIS H 8602 – Combined coatings of anodic oxide and organic coatings on aluminum and aluminum alloys JIS H 8615 – Electroplated coatings of chromium for engineering purposes JIS H 8641 – Zinc hot dip galvanizing JIS H 8642 – Hot dip aluminized coatings on ferrous products K Chemical engineering JIS K 0061 - Test methods for density and relative density of chemical products to ISO 758 L Textile engineering M Mining P Pulp and paper JIS P 0138 - Writing paper and certain classes of printed matter −Trimmed sizes−A and B 
series to ISO 216 JIS P 0138-61 (JIS P 0138:1998) - Process finished paper size (ISO 216 with a slightly larger B series) Q Management systems JIS Q 0073 - Risk management-Vocabulary JIS Q 9000 - Quality management systems-Fundamentals and vocabulary to ISO 9000 JIS Q 9001 - Quality management systems - requirements JIS Q 9002 - Quality management systems - Guidelines for the application of JIS Q 9001 JIS Q 9004 - Quality management - Quality of an organization - Guidance to achieve sustained success JIS Q 9005 - Quality management systems - Guidelines for sustained success JIS Q 10002 - Quality management-Customer satisfaction- Guidelines for complaints handling in organizations to ISO 10002 JIS Q 14001 - Environment management systems - requirements with guidance for use JIS Q 15001 - Personal information protection management systems - requirements JIS Q 20000-1 - IT service management - specification JIS Q 21500 - Guidance on project management to ISO 21500 JIS Q 22300 - Societal security-Terminology to ISO 22300 JIS Q 22301 - Security and resilience-Business continuity management systems-Requirements to ISO 22301 JIS Q 22313 - Societal security-Business continuity management systems-Guidance to ISO 22313 JIS Q 27001 - Information security management systems - requirements JIS Q 31000 - Risk management-Guidelines to ISO 31000 JIS Q 31010 - Risk management-Risk assessment techniques to ISO/IEC 31010 R Ceramics S Domestic wares JIS S 5037 – Sizing system for shoes to ISO 9407 T Medical equipment and safety appliances W Aircraft and aviation JIS W 0111 – Flight dynamics−Concepts, quantities and symbols−Part 1 : Aircraft motion relative to the air to ISO 1151-1 JIS W 0112 – Flight dynamics−Concepts, quantities and symbols−Part 2 : Motions of the aircraft and the atmosphere relative to the Earth to ISO 1151-2 JIS W 0113 – Flight dynamics−Concepts, quantities and symbols−Part 3 : Derivatives of forces, moments and their coefficients to ISO 1151-3 JIS W 0114 – Flight 
dynamics−Concepts, quantities and symbols−Part 4 : Concepts, quantities and symbols used in the study of aircraft stability and control to ISO 1151-4 JIS W 0115 – Flight dynamics−Concepts, quantities and symbols−Part 5 : Quantities used in measurements to ISO 1151-5 JIS W 0116 – Flight dynamics−Concepts, quantities and symbols−Part 6 : Aircraft geometry to ISO 1151-6 JIS W 0117 – Flight dynamics−Concepts, quantities and symbols−Part 7 : Flight points and flight envelopes to ISO 1151-7 JIS W 0118 – Flight dynamics−Concepts, quantities and symbols−Part 8 : Concepts and quantities used in the study of the dynamic behaviour of the aircraft to ISO 1151-8 JIS W 0119 – Flight dynamics−Concepts, quantities and symbols−Part 9 : Models of atmospheric motions along the trajectory of the aircraft to ISO 1151-9 JIS W 0125-1 – Aerospace−Fluid systems−Vocabulary−Part 1 : General terms and definitions relating to pressure to ISO 8625-1 JIS W 0125-2 – Aerospace−Fluid systems−Vocabulary−Part 2 : General terms and definitions relating to flow to ISO 8625-2 JIS W 0125-3 – Aerospace−Fluid systems−Vocabulary−Part 3 : General terms and definitions relating to temperature to ISO 8625-3 X Information processing JIS X 0201:1997 – Japanese national variant of the ISO 646 7-bit character set JIS X 0202:1998 – Japanese national standard which corresponds to the ISO 2022 character encoding JIS X 0208:1997 – 7-bit and 8-bit double byte coded kanji sets for information interchange JIS X 0212:1990 – Supplementary Japanese graphic character set for information interchange JIS X 0213:2004 – 7-bit and 8-bit double byte coded extended Kanji sets for information interchange JIS X 0221-1:2001 – Japanese national standard which corresponds to ISO 10646 JIS X 0401:1973 – Todofuken (prefecture) identification code JIS X 0402:2003 – Identification code for cities, towns and villages JIS X 0405:1994 – Commodity classification code JIS X 0408:2004 – Identification code for universities and colleges JIS X 0501:1985
– Bar code symbol for uniform commodity code JIS X 0510:2004 – QR code JIS X 3001-1:2009, JIS X 3001-2:2002, JIS X 3001-3:2000 – Fortran programming language JIS X 3002:2001 – COBOL JIS X 3005-1:2010 – SQL JIS X 3010:2003 – C programming language JIS X 3014:2003 – C++ JIS X 3017:2011, JIS X 3017:2013 – Programming languages – Ruby JIS X 3030:1994 – POSIX - repealed in 2010 JIS X 4061:1996 – Collation of Japanese character string JIS X 6002:1980 – Keyboard layout for information processing using the JIS 7 bit coded character set JIS X 6054-1:1999 – MIDI JIS X 6241:2004 – 120 mm DVD – Read-only disk JIS X 6243:1998 – 120 mm DVD Rewritable Disk (DVD-RAM) JIS X 6245:1999 – 80 mm (1.23 GB/side) and 120 mm (3.95 GB/side) DVD-Recordable-Disk (DVD-R) JIS X 6302-6:2011 - Identification cards—Recording technique—Part 6: Magnetic stripe—High coercivity JIS X 9051:1984 – 16-dots matrix character patterns for display devices JIS X 9052:1983 – 24-dots matrix character patterns for dot printers Z Miscellaneous JIS Z 0310 – Abrasive blast-cleaning methods for surface preparation to ISO 8504 JIS Z 2241 – Metallic materials — Tensile testing — Method of test at room temperature to ISO 6892 JIS Z 2305 – Non-destructive testing-Qualification and certification of NDT personnel to ISO 9712 JIS Z 2371 – Methods of salt spray testing JIS Z 3001-1 – Welding and allied processes -- Vocabulary -- Part 1: General JIS Z 3001-2 – Welding and allied processes -- Vocabulary -- Part 2: Welding processes JIS Z 3001-3 – Welding and allied processes -- Vocabulary -- Part 3: Soldering and brazing JIS Z 3001-4 – Welding and allied processes -- Vocabulary -- Part 4: Imperfections in welding JIS Z 3001-5 – Welding and allied processes -- Vocabulary -- Part 5: Laser welding JIS Z 3001-6 – Welding and allied processes -- Vocabulary -- Part 6: Resistance welding JIS Z 3001-7 – Welding and allied processes -- Vocabulary -- Part 7: Arc welding JIS Z 3011 – Welding positions defined by means of angles of slope 
and rotation JIS Z 3021 – Welding and allied processes -- Symbolic representation JIS Z 8000-1 – Quantities and units -- Part 1: General JIS Z 8000-2 – Quantities and units -- Part 2: Mathematics JIS Z 8000-3 – Quantities and units -- Part 3: Space and time JIS Z 8000-4 – Quantities and units -- Part 4: Mechanics JIS Z 8000-5 – Quantities and units -- Part 5: Thermodynamics JIS Z 8102 – Names of non-luminous object colours JIS Z 8210 – Public Information Symbols JIS Z 8301 – Rules for the layout and drafting of Japanese Industrial Standards JIS Z 9098 – Hazard specific evacuation guidance sign system JIS Z 9112 – Classification of fluorescent lamps and light emitting diodes by chromaticity and colour rendering property See also International Organization for Standardization (ISO) International Electrotechnical Commission (IEC) Japanese Agricultural Standards Korean Standards Association Japanese typographic symbols – gives the Unicode symbol for the Japanese industrial standard List of JIS categories (in Japanese) References External links Japanese Industrial Standards Committee Japanese Standards Association Korean Standards Association List of Japanese Standards JIS G – Ferrous Materials and Metallurgy Details on the history of JIS (in Japanese) JIS in decodeunicode JIS-Logo in Unicode JIS search system (in Japanese) Standards organizations in Japan Certification marks Industry in Japan Symbols introduced in 1946 Symbols introduced in 2005 1946 establishments in Japan
Japanese Industrial Standards
Mathematics
3,650
56,041,342
https://en.wikipedia.org/wiki/Lubo%20Kristek
Lubo Kristek (born 8 May 1943) is a sculptor, painter and performance artist of Czech origin who lived in West Germany from 1968 until the 1990s. He specializes in critical assemblages and happenings, in which he incorporates multiple forms of media. He has created sculptures for public spaces and is the author of a three-state sculptural pilgrims' way. During more than half a century of work in the field of performance art, he formulated his theory of "holographic perception". Life In the 1960s, Kristek lived in a former soap factory in Hustopeče, where he organised events incorporating music, visual art, poetry, theatre and improvisation. Testing boundaries, experimentation, and crossing conventional frames are typical of his work. He follows the idea of a total work of art – Gesamtkunstwerk. At that time, he also experimented with using fire as a means of expression. He deliberately suppressed or sometimes annulled his artistic handwriting. In 1968, Kristek emigrated to West Germany. He settled in Landsberg am Lech and lived there for almost three decades. That was also where he started the tradition of Kristek's Night Vernissages, from which his happenings evolved. From Landsberg, Kristek travelled to other places in Europe (Belgium, Luxembourg, Liechtenstein, the Netherlands, France, Italy, Spain, Republic of San Marino, Switzerland, Austria) to study and create. Kristek was influenced by Arno Lehmann, who lived in Salzburg, where Kristek used to visit him. In 1973, after Lehmann's death, Kristek created the sculpture Soul shaped by flame. A sphere dominates the top as a symbol of the artistic heritage that Kristek adopted from Lehmann. He was also influenced by the Austrian ethologist Eberhard Trumler (1923–1991), especially by the mechanisms of survival of the species. Kristek's existential assemblage Expecting (1969) was created under this influence.
In 1977, Kristek travelled along the west coast of the United States and Canada with his exhibition tour American Cycle 77. In 1989, after the Velvet Revolution, he returned to the Czech Republic. He settled in Podhradí nad Dyjí in a house where a gallery of his works (Lubo Chateau) is located today. At the apex of the house, he placed the sculpture Divine Ephemerality of Tone – a piano balancing on one leg. Writer Jaromír Tomeček unveiled the sculpture in 1994 and, on the basis of the artwork's title, called the entire neighbouring area of the Thaya the Kristek Valley of the Divine Ephemerality of Tone. Václav Jehlička wrote in this context in his foreword to the publication issued by the Neues Stadtmuseum, Landsberg am Lech in 2008: Sculpture Kristek has made sculptures in several techniques, such as bronze casting, repoussé and chasing, welding, and combined techniques using materials like stone, wood, metal, ceramic and found objects. His sculptures can be found as public artworks mainly in Germany and the Czech Republic. His 1978 ceramic sculpture Birth and Simultaneously Damnation of the Sphere is today located, as a public work of art, in a chapel at John's Castle near Podivin, Czech Republic. Kristek's 16-metre-high sculpture Tree of Knowledge (1981), which he made for the Ignaz-Kögler-Gymnasium (high school) in Landsberg am Lech, rises up through three floors of the building. The Munich magazine Steinmetz + Bildhauer noted: In 1988, Kristek created the bronze fountain The Drinking for the Theresianbad in Greifenberg, Germany. Kristek's metal sculpture Monument to the Five Senses (1991) is part of the collection of the Neues Stadtmuseum, Landsberg am Lech, and has stood in front of the museum since 1992. In 1992, he made a kinetic sculpture called Tree of the Wind Harp. This wind-propelled musical artwork is located at the Pohansko Chateau, Czech Republic.
In 2006, Kristek created the bronze sculpture The Seekers, which was located at the confluence of the rivers Thaya and March. The sculpture was stolen in 2009; only a fragment remained. Using the fragment, Kristek created a new metal sculpture for the site, called The Seekers – Organic Forms, which was inaugurated in 2015. The Czech art historian Barbora Putova wrote: Kristek Thaya Glyptotheque In 2005–06, he created a sculptural pilgrims' way dedicated to the river Thaya. It runs along the river through the Czech Republic, Austria and Slovakia. Kristek linked the sculptures together to inspire people to take a walk through the landscape. The route includes eleven stations, ten of which were opened by a series of ten happenings; the eleventh station remains secret, as a challenge for the pilgrim. Kristek said that the pilgrims' way "should also be a protection against the devastation of the parent riverbed. If a person experiences culture here, perhaps he will not behave so unkindly to nature." The project was under the auspices of, and supported by, the five regions of the three states it runs through. Critical assemblage Critical assemblages by Lubo Kristek address various social phenomena, such as oppression, consumerism, addiction to new technologies, and medical ethics. One of his early assemblages, Vision – Burning of Christ (1964), belongs to his artworks shaped by flame. The burned Christ symbolizes the "melting of faith" in Czechoslovakia at that time. The assemblage Metastation of Abandoned Tones was created in 1975–76. It is connected to Kristek's emigration from Czechoslovakia to Germany, for which he was sentenced, in absentia, to 1.5 years in prison and the confiscation of all his property in Czechoslovakia. In the assemblage, Kristek included the coat and hat in which he fled over the border in 1968. The piece is exhibited at the Ruegers Palace, Riegersburg, Austria.
Kristek addressed the subject of hidden traps in modern society in his assemblage Soundproof Aesthetic of Luxuriety, which he created in 1976. In the 1980s, he made assemblages out of objects he found during his wanderings, as in Barbed Wire of Christ (1983), created on the coast of Cantabria, and Sea Horse (1986), made of material cast up by the sea on the Italian coast near Rome. His assemblage On the Landfill of Ages (1994) is made of industrial waste. Kristek's artwork In the Prematurely Cloned Age of One Planet (2003) is dedicated to the ethical context of cloning. It was also the main motif of his happening Visio Sequentes or Concerning the Prematurely Cloned Age of a Planet, which took place in 2003 in Znojmo, Czech Republic. Between 2015 and 2017, the artist transformed his house in Brno into the monumental assemblage Kristek House. Painting In 1977, Kristek created a monumental altar painting for a sacral space, the cemetery chapel in Penzing, Germany. He called the 7 m high painting Transcendental Composition between Suffering and Hope. He has created a specific vocabulary in his paintings. As far back as the 1970s, one can find in his paintings a road, mostly bordered by the arches of bridges, which rises upward. He calls it "the heavenly highway". The oil painting The Heavenly Highway of Aunt Fränzi (1974–75), today part of the Neues Stadtmuseum's collection, is an early example of this symbol. The ballerina, or dancer, is a central theme of Kristek's paintings and happenings. The development of the symbol over time reflects changes in postmodern society. In the painting Billiards for Life and the Ballerina (1987), she personifies vitality in a world of constant metamorphosis. However, in the happening The Way of the Cross (2014), the ballerina consumes all that is left after the destruction. In the painting Peculiar Pole Vault (2016), the ballerina appears as Death.
Other lifelong motifs of Kristek's are the tree with two apples and the intergrowth or penetration of forms. Performance art Kristek has organised happenings in Germany, the US, Canada, Italy, Spain, the Czech Republic, Austria, Turkey, Belgium, Poland and Slovakia. His events can be described as happenings, performances or sometimes even site-specific works, but he uses the original expression happening, because the involvement of the public and the authentic experience are crucial for him. In 1971, he started Kristek's Night Vernissages in his studio with a garden in Landsberg am Lech. They served as a meeting point for sculptors, painters, musicians, poets, philosophers and visitors. The magazine Collage noted that artists from Germany, Canada, England and the USA gathered there in 1976. These experiments were at the interface between theatre, music, improvisation and ritual. In his happenings, Kristek studies crowd behavior, explores the border between performer and audience, and probes the death taboo. The motifs of death, the illness of society and doom are counterbalanced by birth or rebirth, liberation from shackles and the intergrowth of forms. The magazine Medizin + Kunst analyzed Kristek's happenings: Promenade with a Neurotic Fox In 1975, Kristek went for a walk with a fox's skeleton on a leash on the colonnade in Landsberg am Lech and observed the reactions of the people. His aim was to study crowd behavior and the death taboo. Pyramidae-Klipteon II Kristek's performances can often be interpreted as a critique of consumerism. At the climax of his happening in 2002 in Podhradí nad Dyjí, he crawled out of the bowels of a cow carcass to read his manifesto against the destructive and self-destructive tendencies of society. Visio Sequentes or Concerning the Prematurely Cloned Age of a Planet This piece took place at Znojmo Castle, Czech Republic, in 2003. The artist dissolved the boundary between the auditorium and the stage.
At the climax of the happening, he dispersed the performers, who were mentally disabled people, amongst the spectators. The spectators were quite shocked and looked around uncomfortably to find out who was who. Kristek forced them to wonder where the boundary lies, and whether it exists at all. His aim was to evoke a threshold situation, in which the shocked spectator is shifted outside his stereotypes and has the possibility of re-evaluating them. Requiem for Mobile Telephones Between 2007 and 2010, Lubo Kristek presented an interactive assemblage, Requiem for Mobile Telephones, that originated in a series of his happenings. The audience gave up their mobile phones and participated in incorporating the phones into the assemblage. Kristek travelled with this happening series to the Czech Republic (Znojmo), Austria (Vienna), Germany (Landsberg am Lech) and Poland (Sucha Beskidzka), and the assemblage kept changing. The project was aimed against addiction to modern technologies. Holographic perception Lubo Kristek formulated his theory of holographic perception. He does not organise scenes in a linear manner in his performance art pieces. On the contrary, several different actions happen at the same time during a Kristek event. According to his theory, a far more plastic and holographic picture is thereby formed in the mind of the spectator. The layering of scenes and meanings results not in a disruption of perception, but in its sharpening. It evokes activity and creativity in the spectators. Kristek's work in various media is interconnected. His artwork in one medium becomes a means of expression for an artwork in another. For example, his sculpture Pyramidae-Klipteon became a prop for his happening Gate to a New Dimension (2012). The artist then used the scene from the happening in his oil painting Landscape of Senses with Supported Clouds (2013).
The art historian Hartfrid Neunzert commented on Kristek's legacy in his foreword to the monograph published by the Neues Stadtmuseum in 2008: References Bibliography External links Official Website Czech artists 20th-century German sculptors 20th-century German painters 20th-century German male artists German contemporary artists German performance artists Modern painters Czech surrealist artists German surrealist artists 1943 births Living people Czech performance artists Multimedia artists Body art 21st-century German painters 21st-century German male artists
Lubo Kristek
Technology
2,605
8,960,415
https://en.wikipedia.org/wiki/Exonic%20splicing%20enhancer
In molecular biology, an exonic splicing enhancer (ESE) is a DNA sequence motif consisting of 6 bases within an exon that directs, or enhances, accurate splicing of heterogeneous nuclear RNA (hnRNA) or pre-mRNA into messenger RNA (mRNA). Introduction Short sequences of DNA are transcribed to RNA; then this RNA is translated to a protein. A gene located in DNA will contain introns and exons. Part of the process of preparing the RNA includes splicing out the introns, sections of RNA that do not code for the protein. The presence of exonic splicing enhancers is essential for proper identification of splice sites by the cellular machinery. Role in splicing SR proteins bind to and promote exon splicing in regions with ESEs, while heterogeneous ribonucleoprotein particles (hnRNPs) bind to and block exon splicing in regions with exonic splicing silencers. Both types of proteins are involved in the assembly and proper functioning of spliceosomes. During RNA splicing, U2 small nuclear RNA auxiliary factor 1 (U2AF35) and U2AF2 (U2AF65) interact with the branch site and the 3' splice site of the intron to form the lariat. It is thought that SR proteins that bind to ESEs promote exon splicing by increasing interactions with U2AF35 and U2AF65. Mutation of exonic splicing enhancer motifs is a significant contributor to genetic disorders and some cancers. Simple point mutations in ESEs can reduce affinity for splicing factors and alter alternative splicing, leading to altered mRNA sequence and protein translation. A field of genetic research is dedicated to determining the location and significance of ESE motifs in vivo. Research Computational methods were used to identify 238 candidate ESEs. ESEs are clinically significant because synonymous point mutations, previously thought to be silent, located in an ESE can lead to exon skipping and the production of a non-functioning protein. 
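The motif-scanning idea behind such computational searches can be sketched in a few lines. The hexamer set below is a hypothetical illustration, not the published list of candidate ESEs; a real analysis would use a curated set such as the RESCUE-ESE hexamers.

```python
# Sketch of scanning an exon sequence for candidate ESE hexamers.
# CANDIDATE_ESES is an illustrative, made-up set, NOT the published list.
CANDIDATE_ESES = {"GAAGAA", "CAGAAG", "GAAGGA"}

def find_ese_hits(exon_seq, motifs=CANDIDATE_ESES, k=6):
    """Return (position, hexamer) pairs for every length-k window
    of the exon that matches a candidate ESE motif."""
    exon_seq = exon_seq.upper()
    hits = []
    for i in range(len(exon_seq) - k + 1):
        window = exon_seq[i:i + k]
        if window in motifs:
            hits.append((i, window))
    return hits

print(find_ese_hits("ttgGAAGAAcagCAGAAGtt"))
# -> [(3, 'GAAGAA'), (12, 'CAGAAG')]
```

A sliding-window scan like this is the simplest form of the analysis; published methods additionally score motifs by their enrichment in exons relative to introns.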
Disruption of an exon splicing enhancer in exon 3 of the MLH1 gene is the cause of HNPCC (hereditary nonpolyposis colorectal cancer) in a Quebec family. There is evidence that these 238 hexamers that signal splicing are evolutionarily conserved. See also Exonic splicing silencer (ESS) References External links ESE finder Genetics
Exonic splicing enhancer
Biology
509
31,058,693
https://en.wikipedia.org/wiki/Intronerator
The Intronerator is a database of alternatively spliced genes and a database of introns for Caenorhabditis elegans. See also Alternative splicing AspicDB EDAS Hollywood (database) List of biological databases References External links A working copy of the Intronerator no longer exists as of 2003. Equivalent functions can be performed with the U.C. Santa Cruz genome browser: http://genome.ucsc.edu/cgi-bin/hgTracks?clade=worm&organism=C._elegans Biological databases Gene expression Spliceosome RNA splicing
Intronerator
Chemistry,Biology
131
65,396,160
https://en.wikipedia.org/wiki/Backusella%20psychrophila
Backusella psychrophila is a species of zygote fungus in the order Mucorales. It was described by Andrew S. Urquhart and James K. Douch in 2020. The specific epithet refers to the inability of this species to grow above 30 °C. The type locality is Jack Cann Reserve, Australia. See also Fungi of Australia References External links Zygomycota Fungi described in 2020 Fungus species
Backusella psychrophila
Biology
91
76,592,604
https://en.wikipedia.org/wiki/Remix%20%28web%20framework%29
Remix is an open source full stack web framework. The software is designed for web applications built with front-end JavaScript frameworks like React and Vue.js. Remix supports server-side rendering and client-side routing. Remix has been presented as an alternative to the popular React framework Next.js. Initially available through a paid subscription, the software was made open source in October 2021. The team developing Remix (which also developed React Router) was acquired by Shopify in 2022, but has promised that development will stay open-source and "independent". The Remix team announced at React Conf 2024 that the next major version of Remix will be merged into and released as React Router v7. References External links JavaScript web frameworks Software using the MIT license 2021 software
Remix (web framework)
Technology
164
23,295,305
https://en.wikipedia.org/wiki/Soft%20landing
A soft landing is any type of aircraft, rocket or spacecraft landing that does not result in significant damage to or destruction of the vehicle or its payload, as opposed to a hard landing. The average vertical speed in a soft landing should be about per second or less. A soft landing can be achieved by: parachute, often into water; vertical rocket power using retrorockets, often referred to as VTVL (vertical takeoff, vertical landing; the similar term VTOL usually refers to aircraft that land in a level attitude, not to rockets), first achieved on a suborbital trajectory by the Bell Rocket Belt and on an orbital trajectory by Surveyor 1; horizontal landing, the method used by most aircraft and some spacecraft such as the Space Shuttle, sometimes accompanied by a parachute; being caught in midair, as done with Corona spy satellites, followed by some other form of landing; or reducing landing speed by impact with the body's surface, known as lithobraking. See also List of landings on extraterrestrial bodies References Rocketry
Soft landing
Astronomy,Engineering
209
219,009
https://en.wikipedia.org/wiki/BeppoSAX
BeppoSAX was an Italian–Dutch satellite for X-ray astronomy which played a crucial role in resolving the origin of gamma-ray bursts (GRBs), the most energetic events known in the universe. It was the first X-ray mission capable of simultaneously observing targets over more than 3 orders-of-magnitude of energy, from 0.1 to 300 kiloelectronvolts (keV) with relatively large area, good (for the time) energy resolution and imaging capabilities (with a spatial resolution of 1 arc minute between 0.1 and 10 keV). BeppoSAX was a major programme of the Italian Space Agency (ASI) with the participation of the Netherlands Agency for Aerospace Programmes (NIVR). The prime contractor for the space segment was Alenia while Nuova Telespazio led the development of the ground segment. Most of the scientific instruments were developed by the Italian National Research Council (CNR) while the Wide Field Cameras were developed by the Netherlands Institute for Space Research (SRON) and the LECS was developed by the astrophysics division of the European Space Agency's ESTEC facility. BeppoSAX was named in honour of the Italian physicist Giuseppe "Beppo" Occhialini. SAX stands for "Satellite per Astronomia a raggi X" or "Satellite for X-ray Astronomy". X-ray observations cannot be performed from ground-based telescopes, since Earth's atmosphere blocks most of the incoming radiation. One of BeppoSAX's main achievements was the identification of numerous gamma-ray bursts with extra-galactic objects. Launched by an Atlas-Centaur on 30 April 1996 into a low inclination (<4 degree) low-Earth orbit, the expected operating life of two years was extended to April 30, 2002, due to high scientific interest in the mission and the continued good technical status. After this date, the orbit started to decay rapidly and various subsystems were starting to fail making it no longer worthwhile to conduct scientific observations. 
On April 29, 2003, the satellite ended its life, falling into the Pacific Ocean. Spacecraft characteristics BeppoSAX was a three-axis stabilized satellite, with a pointing accuracy of 1'. The main attitude constraint derived from the need to maintain the normal to the solar arrays within 30° from the Sun, with occasional excursions to 45° for some WFC observations. Due to the low orbit the satellite was in view of the ground station of Malindi for only a limited fraction of the time. Data was stored on-board on a tape unit with a capacity of 450 Mbits and transmitted to ground every orbit during station passage. The average data rate available to instruments was about 60 kbit/s, but peak rates of up to 100 kbit/s could be retained for part of each orbit. With the solar panels closed, the spacecraft was 3.6 m in height and 2.7 m in diameter. The total mass amounted to 1400 kg, with a payload of 480 kg. The structure of the satellite consisted of three basic functional subassemblies: the Service Module, in the lower part of the spacecraft, which housed all the subsystems and the electronic boxes of the scientific instruments. the Payload Module, which housed the scientific instruments and the star trackers. the Thermal Shade Structure, which enclosed the Payload Module. The primary sub-systems of the satellite were: The Attitude Orbital Control System (AOCS), which performed attitude determination and manoeuvres and operated the Reaction Control Subsystem in charge of orbit recovery. It included redundant magnetometers, Sun acquisition sensors, three star trackers, six gyroscopes (three of which were for redundancy), three magnetic torquers and four reaction wheels, all controlled by a dedicated computer. The AOCS ensured a pointing accuracy of 1' during source observations and manoeuvres with a slew rate of 10° per min. 
The On Board Data Handler (OBDH) was the core for data management and system control on the satellite and it also managed the communication interfaces between the satellite and the ground station. Its computer supervised all subsystem processor activities, such as those of each instrument, and the communication busses. Instrumentation BeppoSAX contained five science instruments: Low Energy Concentrator Spectrometer (LECS) Medium Energy Concentrator Spectrometer (MECS) High Pressure Gas Scintillation Proportional Counter (HPGSPC) Phoswich Detector System (PDS) Wide Field Camera (WFC) The first four instruments (often called Narrow Field Instruments or NFI) point to the same direction, and allow observations of an object in a broad energy band of 0.1 to 300 keV (16 to 48,000 attojoules (aJ)). The WFC contained two coded aperture cameras operating in the 2 to 30 keV (320 to 4,800 aJ) range and each covering a region of 40 x 40 degrees (20 by 20 degrees full width at half maximum) on the sky. The WFC was complemented by the shielding of PDS which had a (nearly) all-sky view in the 100 to 600 keV (16,000 to 96,000 aJ) band, ideal for detecting gamma-ray bursts (GRB). The PDS shielding has poor angular resolution. In theory, after a GRB was seen in the PDS, the position was refined first with the WFC. However, due to the many spikes in the PDS, in practice a GRB was found using the WFC, often corroborated by a BATSE-signal. The position up to arcminute precision - depending on the signal to noise ratio of the burst - was found using the deconvoluted WFC-image. The coordinates were speedily sent out as an International Astronomical Union (IAU) and Gamma-ray burst Coordinate Network Circular. After this, immediate follow-up observations with the NFI and optical observatories around the world allowed accurate positioning of the GRB and detailed observations of the X-ray, optical and radio afterglow. 
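The instrument bands above are quoted both in keV and in attojoules. The conversion (1 keV ≈ 160.22 aJ, following from the SI value of the elementary charge) can be checked with a short sketch; the figures in the comments match the ranges quoted in the text after rounding.

```python
# 1 keV = 1.602176634e-16 J = 160.218 aJ (from the elementary charge).
KEV_IN_AJ = 1.602176634e-19 * 1e3 / 1e-18  # attojoules per keV

def kev_to_aj(kev):
    """Convert an energy in kiloelectronvolts to attojoules."""
    return kev * KEV_IN_AJ

print(round(kev_to_aj(0.1)))  # 16 aJ  (lower bound of the 0.1-300 keV band)
print(round(kev_to_aj(2)))    # 320 aJ (lower bound of the WFC 2-30 keV band)
print(round(kev_to_aj(300)))  # 48065 aJ, quoted in the text as "48,000 aJ"
```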
The MECS contained three identical gas scintillation proportional counters operating in the 1.3 to 10 keV (208 to 1602 aJ) range. On 6 May 1997 one of the three identical MECS units was lost when a fault developed in the High Voltage power supply. The LECS was similar to the MECS units, except that it had a thinner window that allowed photons with lower energies down to 0.1 keV (16 aJ) to pass through and operated in a "driftless" mode, which was necessary to detect the lowest energy X-rays as these would be lost in the low field regime near the entrance window of a conventional GSPC. The LECS data above 4 keV (641 aJ) is not usable due to calibration issues probably caused by the driftless design. The LECS and MECS had imaging capability, whereas the high-energy narrow field instruments were non-imaging. The HPGSPC was also a gas scintillation proportional counter, operating at a high (5 atmospheres) pressure. High pressure equals high density, and dense photon-stopping material allowed detection of photons up to 120 keV (19,000 aJ). The PDS was a crystal (sodium iodide / caesium iodide) scintillator detector capable of absorbing photons up to 300 keV (48,000 aJ). The spectral resolution of the PDS was rather modest when compared to the gas detectors, but the low background counting rate resulting from the low inclination BeppoSAX orbit and good background rejection capabilities meant that the PDS remains one of the most sensitive high-energy instruments flown. Gallery References Other General References BeppoSAX Mission Overview, Astronomy & Astrophysics Supplement Series, Vol. 122, April II 1997, 299-307 De Kort, N., Ruimteonderzoek, de horizon voorbij, Veen/SRON, 2003 Low Energy Concentrator Spectrometer (LECS) 0.1-10 keV, A&A Supplement series, Vol. 122, April II 1997, 309-326 Medium Energy Concentrator Spectrometer (MECS) 0.1-10 keV, A&A Supplement series, Vol. 
122, April II 1997, 327-340 High pressure Gas Scintillator Proportional Counter (HPGSPC), A&A Supplement series, Vol. 122, April II 1997, 341-356 Phoswich Detection System (PDS) 15-300 keV, A&A Supplement series, Vol. 122, April II 1997, 357-369 Wide Field Camera 2-28 keV, A&A Supplement series, Vol. 125, November 1997, 557-572 Piro, L. e.a., SAX Observer's Handbook, 1995 External links BeppoSAX Science Data Center HEASARC BeppoSAX Guest Observer Facility Satellites formerly orbiting Earth Italian Space Agency Satellites of Italy Satellites of the Netherlands Space telescopes Spacecraft launched in 1996 Spacecraft which reentered in 2003 X-ray telescopes
BeppoSAX
Astronomy
1,890
534,237
https://en.wikipedia.org/wiki/List%20of%20hybrid%20vehicles
This is a list of hybrid vehicles. A hybrid could theoretically have any two power sources, but hybrid vehicles have typically combined an internal combustion engine with a battery and electric motor(s). This list includes both regular hybrid electric vehicles and plug-in hybrids, in chronological order of first production. Since Porsche made the first hybrid car in 1899 there have been a number of hybrid vehicles; but there was a marked increase in interest in, and development of, hybrid vehicles for personal transport in the late 1990s. Automobiles Overview by decade Early designs: 1899–1917 1899: Carmaker Pieper of Belgium introduced a vehicle with an under-seat electric motor and a gasoline engine. It used the internal combustion engine to charge its batteries at cruise speed and used both motors to accelerate or climb a hill. Auto-Mixte, also of Belgium, built vehicles from 1906 to 1912 under the Pieper patents. 1900: Ferdinand Porsche, then a young engineer at Jacob Lohner & Co. creates the first gasoline–electric hybrid vehicles. 1901: Jacob Lohner & Co. produces the first Lohner–Porsche, a series of gasoline–electric hybrid vehicles based on employee Ferdinand Porsche's novel drivetrain. These vehicles had a driveline that was either gas or electric, but not both at the same time. 1905 or sooner: Fischer Motor Vehicle Co., Hoboken, NJ produces and sells a petrol–electric omnibus in the United States and in London, including battery storage. 1907: AL (French car) 1917: Woods Dual Power Car had a driveline similar to the current GMC/Chevrolet Silverado hybrid pickup truck. Buses Date unknown Castrosua Tempus (in use in Barcelona, Granada, Lugo, Madrid, Santiago de Compostela, Sevilla) Irisbus Hynobis (Castellón) MAN Lion's City Hybrid: Italy: Trento Portugal: Lisboa and Oporto. 
Spain: Barcelona, Cádiz, Madrid, Málaga, Murcia, San Sebastián, Sevilla and Valladolid Mercedes Benz/Orion VII Hybrid North American Bus Industries 60-BRT Hybrid Scania OmniLink, ethanol–electric hybrid buses, (Stockholm) Solaris: Germany: Dresden, Glonn, Hannover, Leipzig and Munich Poland: Sosnowiec and Poznań Spain: Madrid Switzerland: Lenzburg Tecnobus Gulliver (hybrid electric and all-electric versions), sold by Hispano Carrocera Volvo 7700 Chery A3 Chang’an (Chana) Zhi-xiang Belkommunmash AKSM-4202K See also List of battery electric vehicles List of modern production plug-in electric vehicles SmILE Comparison of Toyota hybrids References External links Best Hybrid Cars (USA) Fuel Economy.gov 2007 Hybrid Cars 2021 Hybrid Auto Service Hybrid vehicles Hybrid
List of hybrid vehicles
Engineering
574
61,094,758
https://en.wikipedia.org/wiki/Natural%20element%20method
The natural element method (NEM) is a meshless method to solve partial differential equations, in which the elements do not have a predefined shape as in the finite element method but depend on the geometry. A Voronoi diagram partitioning the space is used to create each of these elements. Natural neighbor interpolation functions are then used to model the unknown function within each element. Applications When the simulation is dynamic, this method prevents the elements from becoming ill-formed, since they can easily be redefined at each time step depending on the geometry. References Numerical differential equations Numerical analysis Computational fluid dynamics Computational mathematics Simulation
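The natural-neighbor (Sibson) coordinates that NEM builds its interpolation on have a simple geometric definition: the weight of node i at a query point is the fraction of the query's would-be Voronoi cell that is "stolen" from node i's cell when the query is inserted into the diagram. The sketch below estimates these weights by Monte Carlo sampling; the node positions, nodal values, and sample count are arbitrary illustrations, not taken from the article.

```python
import random

def sibson_weights(nodes, query, n_samples=200_000, box=1.0, seed=0):
    """Monte Carlo estimate of Sibson (natural-neighbor) coordinates of
    `query` with respect to `nodes`, sampling uniformly in [0, box]^2.
    A sample point counts toward node i if it lies in the query's new
    Voronoi cell AND node i was its nearest node before insertion."""
    rng = random.Random(seed)
    d2 = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    stolen = [0] * len(nodes)
    total = 0
    for _ in range(n_samples):
        s = (rng.uniform(0, box), rng.uniform(0, box))
        i = min(range(len(nodes)), key=lambda k: d2(s, nodes[k]))
        if d2(s, query) < d2(s, nodes[i]):  # cell area stolen by query
            stolen[i] += 1
            total += 1
    return [c / total for c in stolen]

# Four illustrative nodes at the corners of the unit square.
nodes = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
values = [0.0, 1.0, 2.0, 3.0]
w = sibson_weights(nodes, (0.4, 0.5))
f = sum(wi * vi for wi, vi in zip(w, values))
print(w, f)  # weights sum to 1; f lies between the nodal extremes
```

The weights are nonnegative and sum to one by construction, which is what makes the resulting interpolant a convex combination of the nodal values; production NEM codes compute the same quantities exactly from the Voronoi geometry rather than by sampling.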
Natural element method
Physics,Chemistry,Mathematics
130
8,412,741
https://en.wikipedia.org/wiki/Floating%20airport
A floating airport is an airport built and situated on a very large floating structure (VLFS) located many miles out at sea utilizing a flotation type of device or devices such as pneumatic stabilized platform (PSP) technology. As the population increases and land becomes more expensive and scarce, very large floating structures (VLFS) such as floating airports could help solve land use, pollution and aircraft noise issues. Early history The first discussion of a floating airport was for trans-Atlantic flights. At that time a passenger aircraft capable of making the trip could be built, but because of the massive need for fuel for the flight, it had a limited payload. An article appeared in the January 1930 issue of Popular Mechanics in which a model of a floating airport located in the Atlantic was proposed. To make safe flight possible with the aviation technology of that time, it called for eight such airports in the Atlantic. But unlike future floating airport ideas which were free floating, this 1930 concept had a floating airport platform, but with stabilizer legs which prevent the flight deck from pitching and rolling, similar in concept to some of today's off shore oil rigs. The cost of establishing eight such floating airports in 1930 was estimated at approximately US$12,000,000. The idea of floating airports received fresh attention in 1935 when the French aviation pioneer Louis Blériot gave one of his last interviews, in which he made the case for installing floating airports, which he called "seadromes", in the mid-Atlantic as a solution for economical trans-Atlantic passenger flights. In 1943, a Floating Pontoon Flight Deck, 272 feet wide and 1,810 feet long was constructed by the Seabees at Allen Harbor using a total of 10,291 pontoons. It was towed to a cove in Narragansett Bay, where 140 takeoffs, landings and refuellings were successful in both smooth and rough waters. 
The pontoon airfield was noted to have advantages over aircraft carriers in lack of requirement for arresting gear for landings, and these could be executed at shorter time intervals. Tests showed that damage from four 100-pound bombs exploding on the floating deck did not interfere with flight operations and could be easily repaired. Description In theory, issues and problems of land-based airports could be minimized by locating airports several miles off the coast. Takeoffs and landings would be over water, not over populated areas, thereby eliminating noise pollution and reducing risks of aircraft crashes to the land-locked population. Since little of the ocean's surface is currently being used for human activity, growth and alterations in configuration would be relatively easy to achieve with minimal impact to the environment or to local residents who would utilize the airport. Water taxis or other high speed surface vessels would be a part of an offshore mass transit system that could connect the floating airport to coastal communities and minimize traffic issues. A floating structure, such as a floating airport, is theorized to have less impact on the environment than the land-based alternative. It would not require much, if any, dredging or moving of mountains or clearing of green space and the floating structure provides a reef-like environment conducive to marine life. In theory, wave energy could be harnessed, using the structure to convert waves into energy to help sustain the energy needs of the airport. Modern Floating airport projects In 2000, the Japanese Ministry of Land, Infrastructure, and Transport sponsored the construction of Mega-Float, a 1000-metre floating runway in Tokyo Bay. After conducting several real aircraft landings, the Ministry concluded that floating runways' hydro-elastic response would not affect aircraft operations, including precision instrument approaches in a protected waterway such as a large bay. 
The structure has been dismantled and is no longer in use. The pneumatic stabilized platform (PSP) was proposed as a means for constructing a new floating airport for San Diego in the Pacific Ocean, at least three miles off the tip of Point Loma. However, this proposed design was rejected in October 2003 due to very high cost, the difficulty in accessing such an airport, the difficulty in transporting jet fuel, electricity, water, and gas to the structure, failure to address security concerns such as a bomb blast, inadequate room for high-speed exits and taxiways, and environmental concerns. Achmad Yani International Airport, the first floating airport in the world started construction on 17 June 2014, and was completed in 2018. However, only the passenger terminal and apron are floating. See also Lily and Clover Aerospace architecture Mobile offshore base References External links Center for Contemporary Conflict - "The Atlantis Garrison: A Comprehensive, Cost Effective Cargo and Port Security Strategy" by Dr. Michael J. Hillyard(PSP / Floating Airport technology could be used for Cost Effective Cargo & Port Security) September 1996 -"Floating Airports: Wave of the Future" November 18, 1999 by Michael McCabe, SF Chronicle Staff Writer "Planes would land on floating runways built on S.F. Bay" Centre National De La Recherche Scientifique "Floatport: a floating solution to the San Diego airport's environmental problems" Auteur(s)/Author(s)BLOOD H.; INNIS D., Float Inc., San Diego CA, ETATS-UNIS San Diego Union Tribune - SignOnSanDiego.com "Floating airport proposal resurfaces" Jim Bell - Ecological Designer "Airport: Thinking Outside the Box" 9/26/2005 USAToday.com "Today in the Sky: Will San Diego build an airport in the ocean?" by Ben Mutzabaugh San Diego CityBEAT, "The sinking of the San Diego floating airport proposal" by D.A. Kolodenko Airports by type Ship types Structural engineering Building engineering Naval architecture
Floating airport
Engineering
1,167
734,854
https://en.wikipedia.org/wiki/Subitizing
Subitizing is the rapid, accurate, and effortless ability to perceive small quantities of items in a set, typically when there are four or fewer items, without relying on linguistic or arithmetic processes. The term refers to the sensation of instantly knowing how many objects are in the visual scene when their number falls within the subitizing range. Sets larger than about four to five items cannot be subitized unless the items appear in a pattern with which the person is familiar (such as the six dots on one face of a die). Large, familiar sets might be counted one-by-one (or the person might calculate the number through a rapid calculation if they can mentally group the elements into a few small sets). A person could also estimate the number of a large set—a skill similar to, but different from, subitizing. The term subitizing was coined in 1949 by E. L. Kaufman et al., and is derived from the Latin adjective subitus (meaning "sudden"). The accuracy, speed, and confidence with which observers make judgments of the number of items are critically dependent on the number of elements to be enumerated. Judgments made for displays composed of around one to four items are rapid, accurate, and confident. However, once there are more than four items to count, judgments are made with decreasing accuracy and confidence. In addition, response times rise in a dramatic fashion, with an extra 250–350ms added for each additional item within the display beyond about four. While the increase in response time for each additional element within a display is 250–350ms per item outside the subitizing range, there is still a significant, albeit smaller, increase of 40–100ms per item within the subitizing range. A similar pattern of reaction times is found in young children, although with steeper slopes for both the subitizing range and the enumeration range. 
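The response-time pattern described above, a shallow slope within the subitizing range that jumps sharply beyond about four items, can be summarised as a piecewise-linear model. The base time and the slopes below are illustrative assumptions chosen from within the reported 40–100 ms and 250–350 ms ranges, not fitted experimental values.

```python
def enumeration_rt(n, base=500.0, subitize_slope=60.0,
                   count_slope=300.0, span=4):
    """Illustrative reaction time (ms) for naming the number of items
    in a display of n items: a shallow per-item cost up to `span`
    (subitizing), then a much steeper per-item cost (counting).
    All parameter values are assumed, not fitted."""
    if n <= span:
        return base + subitize_slope * n
    return base + subitize_slope * span + count_slope * (n - span)

for n in range(1, 9):
    print(n, enumeration_rt(n))
```

The discontinuity in slope at n = span is the signature discussed in the text: a small but nonzero cost per item inside the subitizing range, and a dramatically larger one outside it.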
This suggests there is no span of apprehension as such, if this is defined as the number of items which can be immediately apprehended by cognitive processes, since there is an extra cost associated with each additional item enumerated. However, the relative differences in costs associated with enumerating items within the subitizing range are small, whether measured in terms of accuracy, confidence, or speed of response. Furthermore, the values of all measures appear to differ markedly inside and outside the subitizing range. So, while there may be no span of apprehension, there appear to be real differences in the ways in which a small number of elements is processed by the visual system (i.e. approximately four or fewer items), compared with larger numbers of elements (i.e. approximately more than four items). A 2006 study demonstrated that subitizing and counting are not restricted to visual perception, but also extend to tactile perception, when observers had to name the number of stimulated fingertips. A 2008 study also demonstrated subitizing and counting in auditory perception. Even though the existence of subitizing in tactile perception has been questioned, this effect has been replicated many times and can therefore be considered robust. The subitizing effect has also been obtained in tactile perception with congenitally blind adults. Together, these findings support the idea that subitizing is a general perceptual mechanism extending to auditory and tactile processing. Enumerating afterimages As the derivation of the term "subitizing" suggests, the feeling associated with making a number judgment within the subitizing range is one of immediately being aware of the displayed elements. When the number of objects presented exceeds the subitizing range, this feeling is lost, and observers commonly report an impression of shifting their viewpoint around the display, until all the elements presented have been counted. 
The ability of observers to count the number of items within a display can be limited, either by the rapid presentation and subsequent masking of items, or by requiring observers to respond quickly. Both procedures have little, if any, effect on enumeration within the subitizing range. These techniques may restrict the ability of observers to count items by limiting the degree to which observers can shift their "zone of attention" successively to different elements within the display. Atkinson, Campbell, and Francis demonstrated that visual afterimages could be employed in order to achieve similar results. Using a flashgun to illuminate a line of white disks, they were able to generate intense afterimages in dark-adapted observers. Observers were required to verbally report how many disks had been presented, both at 10s and at 60s after the flashgun exposure. Observers reported being able to see all the disks presented for at least 10s, and being able to perceive at least some of the disks after 60s. Unlike simply displaying the images for 10 and 60 second intervals, when presented in the form of afterimages, eye movement cannot be employed for the purpose of counting: when the subjects move their eyes, the images also move. Despite a long period of time to enumerate the number of disks presented when the number of disks presented fell outside the subitizing range (i.e., 5–12 disks), observers made consistent enumeration errors in both the 10s and 60s conditions. In contrast, no errors occurred within the subitizing range (i.e., 1–4 disks), in either the 10s or 60s conditions. Brain structures involved in subitizing and counting The work on the enumeration of afterimages supports the view that different cognitive processes operate for the enumeration of elements inside and outside the subitizing range, and as such raises the possibility that subitizing and counting involve different brain circuits. 
However, functional imaging research has been interpreted both to support different and shared processes. Bálint's syndrome Evidence supporting the view that subitizing and counting may involve functionally and anatomically distinct brain areas comes from patients with simultanagnosia, one of the key components of Bálint's syndrome. Patients with this disorder suffer from an inability to perceive visual scenes properly, being unable to localize objects in space, either by looking at the objects, pointing to them, or by verbally reporting their position. Despite these dramatic symptoms, such patients are able to correctly recognize individual objects. Crucially, people with simultanagnosia are unable to enumerate objects outside the subitizing range, either failing to count certain objects, or alternatively counting the same object several times. However, people with simultanagnosia have no difficulty enumerating objects within the subitizing range. The disorder is associated with bilateral damage to the parietal lobe, an area of the brain linked with spatial shifts of attention. These neuropsychological results are consistent with the view that the process of counting, but not that of subitizing, requires active shifts of attention. However, recent research has questioned this conclusion by finding that attention also affects subitizing. Imaging enumeration A further source of research upon the neural processes of subitizing compared to counting comes from positron emission tomography (PET) research upon normal observers. Such research compares the brain activity associated with enumeration processes inside (i.e., 1–4 items) for subitizing, and outside (i.e., 5–8 items) for counting. Such research finds that within the subitizing and counting range activation occurs bilaterally in the occipital extrastriate cortex and superior parietal lobe/intraparietal sulcus. This has been interpreted as evidence that shared processes are involved. 
However, the existence of further activations during counting in the right inferior frontal regions, and the anterior cingulate have been interpreted as suggesting the existence of distinct processes during counting related to the activation of regions involved in the shifting of attention. Educational applications Historically, many systems have attempted to use subitizing to identify full or partial quantities. In the twentieth century, mathematics educators started to adopt some of these systems, as reviewed in examples below, but often switched to more abstract color-coding to represent quantities up to ten. In the 1990s, babies three weeks old were shown to differentiate between 1–3 objects, that is, to subitize. A more recent meta-study summarizing five different studies concluded that infants are born with an innate ability to differentiate quantities within a small range, which increases over time. By the age of seven that ability increases to 4–7 objects. Some practitioners claim that with training, children are capable of subitizing 15+ objects correctly. Abacus The hypothesized use of yupana, an Inca counting system, placed up to five counters in connected trays for calculations. In each place value, the Chinese abacus uses four or five beads to represent units, which are subitized, and one or two separate beads, which symbolize fives. This allows multi-digit operations such as carrying and borrowing to occur without subitizing beyond five. European abacuses use ten beads in each register, but usually separate them into fives by color. Twentieth century teaching tools The idea of instant recognition of quantities has been adopted by several pedagogical systems, such as Montessori, Cuisenaire and Dienes. However, these systems only partially use subitizing, attempting to make all quantities from 1 to 10 instantly recognizable. To achieve it, they code quantities by color and length of rods or bead strings representing them. 
Recognizing such visual or tactile representations and associating quantities with them involves different mental operations from subitizing. Other applications One of the most basic applications is in digit grouping in large numbers, which allows one to tell the size at a glance, rather than having to count. For example, writing one million (1000000) as 1,000,000 (or 1.000.000 or ) or one (short) billion (1000000000) as 1,000,000,000 (or other forms, such as 1,00,00,00,000 in the Indian numbering system) makes it much easier to read. This is particularly important in accounting and finance, as an error of a single decimal digit changes the amount by a factor of ten. This is also found in computer programming languages for literal values, some of which use digit separators. Dice, playing cards and other gaming devices traditionally split quantities into subitizable groups with recognizable patterns. The behavioural advantage of this grouping method has been scientifically investigated by Ciccione and Dehaene, who showed that counting performances are improved if the groups share the same amount of items and the same repeated pattern. A comparable application is to split up binary and hexadecimal number representations, telephone numbers, bank account numbers (e.g., IBAN, social security numbers, number plates, etc.) into groups ranging from 2 to 5 digits separated by spaces, dots, dashes, or other separators. This is done to support overseeing completeness of a number when comparing or retyping. This practice of grouping characters also supports easier memorization of large numbers and character structures. Self-assessment There is at least one game that can be played online to self-assess one's ability to subitize. See also Approximate number system Numerical cognition Visual indexing theory#Subitizing studies References Mathematical logic Cognitive psychology 1940s neologisms Perception
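The digit-separator point can be seen directly in a language that supports them; a minimal sketch in Python, which (per PEP 515) accepts underscores in numeric literals:

```python
# Underscores group the digits of a literal into subitizable chunks
# without changing the value (Python 3.6+, PEP 515).
one_million = 1_000_000
assert one_million == 1000000

# The "," format specifier writes the grouped form back out for display.
assert f"{one_million:,}" == "1,000,000"
```

The parser discards the underscores entirely; they exist purely so that a human reader can take in the magnitude at a glance instead of counting zeros.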
Subitizing
Mathematics,Biology
2,342
41,575,363
https://en.wikipedia.org/wiki/Spherical%20contact%20distribution%20function
In probability and statistics, a spherical contact distribution function, first contact distribution function, or empty space function is a mathematical function that is defined in relation to mathematical objects known as point processes, which are types of stochastic processes often used as mathematical models of physical phenomena representable as randomly positioned points in time, space or both. More specifically, a spherical contact distribution function is defined as the probability distribution of the radius of a sphere when it first encounters or makes contact with a point in a point process. This function can be contrasted with the nearest neighbour function, which is defined in relation to some point in the point process as being the probability distribution of the distance from that point to its nearest neighbouring point in the same point process. The spherical contact function is also referred to as the contact distribution function, but some authors define the contact distribution function in relation to a more general set, and not simply a sphere as in the case of the spherical contact distribution function. Spherical contact distribution functions are used in the study of point processes as well as the related fields of stochastic geometry and spatial statistics, which are applied in various scientific and engineering disciplines such as biology, geology, physics, and telecommunications. Point process notation Point processes are mathematical objects that are defined on some underlying mathematical space. Since these processes are often used to represent collections of points randomly scattered in space, time or both, the underlying space is usually d-dimensional Euclidean space denoted here by R^d, but they can be defined on more abstract mathematical spaces. Point processes have a number of interpretations, which is reflected by the various types of point process notation. 
For example, if a point x belongs to or is a member of a point process, denoted by N, then this can be written as x ∈ N, which represents the point process being interpreted as a random set. Alternatively, the number of points of N located in some Borel set B is often written as N(B), which reflects a random measure interpretation for point processes. These two notations are often used in parallel or interchangeably. Definitions Spherical contact distribution function The spherical contact distribution function is defined as: H_s(r) = 1 − P(N(b(o, r)) = 0), where b(o, r) is a ball with radius r centered at the origin o. In other words, the spherical contact distribution function is the probability that at least one point of the point process is located in a hyper-sphere of radius r; equivalently, 1 − H_s(r) is the probability that this hyper-sphere is empty. Contact distribution function The spherical contact distribution function can be generalized for sets other than the (hyper-)sphere in R^d. For some Borel set B with positive volume (or more specifically, Lebesgue measure), the contact distribution function (with respect to B) for N is defined by the equation: H_B(r) = 1 − P(N(rB) = 0). Examples Poisson point process For a Poisson point process N on R^d with intensity measure Λ this becomes H_s(r) = 1 − exp(−Λ(b(o, r))), which for the homogeneous case of intensity λ becomes H_s(r) = 1 − exp(−λ |b(o, r)|), where |b(o, r)| denotes the volume (or more specifically, the Lebesgue measure) of the ball of radius r. In the plane R^2, this expression simplifies to H_s(r) = 1 − exp(−λπr^2). Relationship to other functions Nearest neighbour function In general, the spherical contact distribution function and the corresponding nearest neighbour function are not equal. However, these two functions are identical for Poisson point processes. In fact, this characteristic is due to a unique property of Poisson processes and their Palm distributions, which forms part of the result known as the Slivnyak-Mecke or Slivnyak's theorem. J-function The fact that the spherical contact distribution function and nearest neighbour function are identical for the Poisson point process can be used to statistically test if point process data appears to be that of a Poisson point process. 
For example, in spatial statistics the J-function is defined for all r ≥ 0 as: J(r) = (1 − D(r)) / (1 − H_s(r)), where D(r) is the nearest neighbour function and H_s(r) is the spherical contact distribution function. For a Poisson point process, the J-function is simply J(r) = 1, hence why it is used as a non-parametric test for whether data behaves as though it were from a Poisson process. It is, however, thought possible to construct non-Poisson point processes for which J(r) = 1, but such counterexamples are viewed as somewhat 'artificial' by some and exist for other statistical tests. More generally, the J-function serves as one way (others include using factorial moment measures) to measure the interaction between points in a point process. See also Nearest neighbour function Factorial moment measure Moment measure References Stochastic processes Spatial analysis
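As a concrete check of the Poisson case: for a homogeneous Poisson process of intensity λ in the plane, the spherical contact distribution function is H_s(r) = 1 − exp(−λπr²). A small Monte Carlo sketch (standard library only; the trial count and seed are arbitrary choices) reproduces this value empirically:

```python
import math
import random

def sample_poisson(mean, rng):
    # Knuth's multiplicative method; adequate for the small means used here.
    threshold = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def empty_space_estimate(lam, r, n_trials=4000, seed=0):
    """Estimate H_s(r) for a homogeneous Poisson process of intensity lam
    in the plane: the probability that the ball b(o, r) around the origin
    contains at least one point of the process."""
    rng = random.Random(seed)
    mean = lam * (2.0 * r) ** 2  # expected point count in the window [-r, r]^2
    hits = 0
    for _ in range(n_trials):
        n = sample_poisson(mean, rng)
        pts = [(rng.uniform(-r, r), rng.uniform(-r, r)) for _ in range(n)]
        if any(x * x + y * y <= r * r for x, y in pts):
            hits += 1
    return hits / n_trials

estimate = empty_space_estimate(lam=1.0, r=0.5)
theory = 1.0 - math.exp(-math.pi * 0.5 ** 2)  # ≈ 0.544
```

Only points inside a window containing b(o, r) need to be simulated, since points outside it cannot affect whether the ball is empty; with a few thousand trials the estimate agrees with the closed form to within a couple of percentage points.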
Spherical contact distribution function
Physics
849
9,468,040
https://en.wikipedia.org/wiki/Scotobiology
Scotobiology is the study of biology as directly and specifically affected by darkness, as opposed to photobiology, which describes the biological effects of light. Overview The science of scotobiology gathers together under a single descriptive heading a wide range of approaches to the study of the biology of darkness. This includes work on the effects of darkness on the behavior and metabolism of animals, plants, and microbes. Some of this work has been going on for over a century, and lays the foundation for understanding the importance of dark night skies, not only for humans but for all biological species. The great majority of biological systems have evolved in a world of alternating day and night and have become irrevocably adapted to and dependent on the daily and seasonally changing patterns of light and darkness. Light is essential for many biological activities such as sight and photosynthesis. These are the focus of the science of photobiology. But the presence of uninterrupted periods of darkness, as well as their alternation with light, is just as important to biological behaviour. Scotobiology studies the positive responses of biological systems to the presence of darkness, and not merely the negative effects caused by the absence of light. Effects of darkness Many of the biological and behavioural activities of plants, animals (including birds and amphibians), insects, and microorganisms are either adversely affected by light pollution at night or can only function effectively either during or as the consequence of nightly darkness. Such activities include foraging, breeding and social behavior in higher animals, amphibians, and insects, which are all affected in various ways if light pollution occurs in their environment. These are not merely photobiological phenomena; light pollution acts by interrupting critical dark-requiring processes. 
But perhaps the most important scotobiological phenomena relate to the regular periodic alternation of light and darkness. These include breeding behavior in a range of animals, the control of flowering and the induction of winter dormancy in many plants, and the operational control of the human immune system. In many of these biological processes the critical point is the length of the dark period rather than that of the light. For example, "short-day" and "long-day" plants are, in fact, "long-night" and "short-night" respectively. That is to say, plants do not measure the length of the light period, but of the dark period. One consequence of artificial light pollution is that even brief periods of relatively bright light during the night may prevent plants or animals (including humans) from measuring the length of the dark period, and therefore from behaving in a normal or required manner. This is a critical aspect of scotobiology, and one of the major areas in the study of the responses of biological systems to darkness. In discussing scotobiology, it is important to remember that darkness (the absence of light) is seldom absolute. An important aspect of any scotobiological phenomenon is the level and quality (wavelength) of light that is below the threshold of detection for that phenomenon and in any specific organism. This important variable in scotobiological studies is not always properly noted or examined. There are substantial levels of natural light pollution at night, of which moonlight is usually the strongest. For example, plants that rely on night length to program their behaviour have the capacity to ignore full moonlight during an otherwise dark night. If this ability had not evolved, plants would not be able to respond to changing night-length for such behavioural programs as the initiation of flowering and the onset of dormancy. On the other hand, some animal behavioural patterns are strongly responsive to moonlight. 
It is thus most important in any scotobiological study to determine the threshold level of light that may be required to interfere with or negate the normal pattern of dark-night activity. Etymology In 2003, at a symposium on the Ecology of the Night held in Muskoka, Canada, discussion centered around the many effects of night-time light pollution on the biology of a wide range of organisms, but it went far beyond this in describing darkness as a biological imperative for the functioning of biological systems. Presentations focused on the absolute requirement of darkness for many aspects of normal behaviour and metabolism of many organisms and for the normal progression of their life cycles. Because there was no suitable term to describe the Symposium's main focus, the term scotobiology was introduced. The word is derived from the Greek scotos, σκότος, "dark," and relates to photobiology, which describes the biological effects of light (φῶς, phos; root: φωτ-, phot-). The term scotobiology appears not to have been used previously, although related terms such as skototropism and scotophyle have appeared in the literature. See also Dark-sky movement Dark-sky preserve Ecological light pollution Light effects on circadian rhythm Photoperiodism Sky brightness References Branches of biology
Scotobiology
Biology
1,011
11,865,154
https://en.wikipedia.org/wiki/DBpedia
DBpedia (from "DB" for "database") is a project aiming to extract structured content from the information created in the Wikipedia project. This structured information is made available on the World Wide Web using OpenLink Virtuoso. DBpedia allows users to semantically query relationships and properties of Wikipedia resources, including links to other related datasets. The project was heralded as "one of the more famous pieces" of the decentralized Linked Data effort by Tim Berners-Lee, one of the Internet's pioneers. As of June 2021, DBPedia contained over 850 million triples. Background The project was started by people at the Free University of Berlin and Leipzig University in collaboration with OpenLink Software, and is now maintained by people at the University of Mannheim and Leipzig University. The first publicly available dataset was published in 2007. The data is made available under free licenses (CC BY-SA), allowing others to reuse the dataset; it does not use an open data license to waive the sui generis database rights. Wikipedia articles consist mostly of free text, but also include structured information embedded in the articles, such as "infobox" tables (the pull-out panels that appear in the top right of the default view of many Wikipedia articles, or at the start of the mobile versions), categorization information, images, geo-coordinates and links to external Web pages. This structured information is extracted and put in a uniform dataset which can be queried. Dataset The 2016-04 release of the DBpedia data set describes 6.0 million entities, out of which 5.2 million are classified in a consistent ontology, including 1.5 million persons, 810,000 places, 135,000 music albums, 106,000 films, 20,000 video games, 275,000 organizations, 301,000 species and 5,000 diseases. 
DBpedia uses the Resource Description Framework (RDF) to represent extracted information and consists of 9.5 billion RDF triples, of which 1.3 billion were extracted from the English edition of Wikipedia and 5.0 billion from other language editions. From this data set, information spread across multiple pages can be extracted. For example, book authorship can be put together from pages about the work, or the author. One of the challenges in extracting information from Wikipedia is that the same concept can be expressed using different parameters in infobox and other templates (for instance, different infobox templates use different parameter names for a person's place of birth). Because of this, queries about where people were born would have to search for all such synonymous properties in order to get more complete results. As a result, the DBpedia Mapping Language has been developed to help in mapping these properties to an ontology while reducing the number of synonyms. Due to the large diversity of infoboxes and properties in use on Wikipedia, the process of developing and improving these mappings has been opened to public contributions. Version 2014 was released in September 2014. A main change since previous versions was the way abstract texts were extracted. Specifically, running a local mirror of Wikipedia and retrieving rendered abstracts from it made extracted texts considerably cleaner. Also, a new data set extracted from Wikimedia Commons was introduced. Examples DBpedia extracts factual information from Wikipedia pages, allowing users to find answers to questions where the information is spread across multiple Wikipedia articles. Data is accessed using an SQL-like query language for RDF called SPARQL. For example, one might be interested in the Japanese shōjo manga series Tokyo Mew Mew, and want to find the genres of other works written by its illustrator Mia Ikumi. 
DBpedia combines information from Wikipedia's entries on Tokyo Mew Mew, Mia Ikumi and on this author's works such as Super Doll Licca-chan and Koi Cupid. Since DBpedia normalises information into a single database, the following query can be asked without needing to know exactly which entry carries each fragment of information, and will list related genres:

PREFIX dbprop: <http://dbpedia.org/ontology/>
PREFIX db: <http://dbpedia.org/resource/>
SELECT ?who ?WORK ?genre
WHERE {
  db:Tokyo_Mew_Mew dbprop:author ?who .
  ?WORK dbprop:author ?who .
  OPTIONAL { ?WORK dbprop:genre ?genre } .
}

Use cases DBpedia has a broad scope of entities covering different areas of human knowledge. This makes it a natural hub for connecting datasets, where external datasets could link to its concepts. The DBpedia dataset is interlinked on the RDF level with various other Open Data datasets on the Web. This enables applications to enrich DBpedia data with data from these datasets. There are more than 45 million interlinks between DBpedia and external datasets including: Freebase, OpenCyc, UMBEL, GeoNames, MusicBrainz, CIA World Fact Book, DBLP, Project Gutenberg, DBtune Jamendo, Eurostat, UniProt, Bio2RDF, and US Census data. The Thomson Reuters initiative OpenCalais, the Linked Open Data project of The New York Times, the Zemanta API and DBpedia Spotlight also include links to DBpedia. The BBC uses DBpedia to help organize its content. Faviki uses DBpedia for semantic tagging. Samsung also includes DBpedia in its "Knowledge Sharing Platform". Such a rich source of structured cross-domain knowledge is fertile ground for artificial intelligence systems. DBpedia was used as one of the knowledge sources in IBM Watson's Jeopardy! winning system. Amazon provides a DBpedia Public Data Set that can be integrated into Amazon Web Services applications. Data about creators from DBpedia can be used for enriching artworks' sales observations. 
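Results for a query like the one above come back from a SPARQL endpoint in the standard SPARQL 1.1 JSON results format. A sketch of decoding that format in Python follows; the sample response is hand-made for illustration (its values are assumptions, not live DBpedia data):

```python
import json

def parse_bindings(results_json):
    """Flatten the SPARQL 1.1 JSON results format into a list of plain
    dicts mapping variable name -> value string."""
    return [
        {var: cell["value"] for var, cell in binding.items()}
        for binding in results_json["results"]["bindings"]
    ]

# A hand-made sample in the shape an endpoint would return for the
# Tokyo Mew Mew query; the URIs here are illustrative placeholders.
sample = json.loads("""
{
  "head": {"vars": ["who", "WORK", "genre"]},
  "results": {"bindings": [
    {"who":   {"type": "uri", "value": "http://dbpedia.org/resource/Mia_Ikumi"},
     "WORK":  {"type": "uri", "value": "http://dbpedia.org/resource/Super_Doll_Licca-chan"},
     "genre": {"type": "uri", "value": "http://dbpedia.org/resource/Comedy"}}
  ]}
}
""")

rows = parse_bindings(sample)  # e.g. rows[0]["who"] is the author's URI
```

Because `OPTIONAL` variables such as `?genre` may be absent from a binding, a robust client would use `binding.get(...)` rather than assume every variable is present; the sketch above keeps only what each binding actually contains.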
The crowdsourcing software company, Ushahidi, built a prototype of its software that leveraged DBpedia to perform semantic annotations on citizen-generated reports. The prototype incorporated the "YODIE" (Yet another Open Data Information Extraction system) service developed by the University of Sheffield, which uses DBpedia to perform the annotations. The goal for Ushahidi was to improve the speed and facility with which incoming reports could be validated and managed. DBpedia Spotlight DBpedia Spotlight is a tool for annotating mentions of DBpedia resources in text. This allows linking unstructured information sources to the Linked Open Data cloud through DBpedia. DBpedia Spotlight performs named entity extraction, including entity detection and name resolution (in other words, disambiguation). It can also be used for named entity recognition, and other information extraction tasks. DBpedia Spotlight aims to be customizable for many use cases. Instead of focusing on a few entity types, the project strives to support the annotation of all 3.5 million entities and concepts from more than 320 classes in DBpedia. The project started in June 2010 at the Web Based Systems Group at the Free University of Berlin. DBpedia Spotlight is publicly available as a web service for testing and a Java/Scala API licensed via the Apache License. The DBpedia Spotlight distribution includes a jQuery plugin that allows developers to annotate pages anywhere on the Web by adding one line to their page. Clients are also available in Java or PHP. The tool handles various languages through its demo page and web services. Internationalization is supported for any language that has a Wikipedia edition. Archivo ontology database From 2020, the DBpedia project provides a regularly updated database of web‑accessible ontologies written in the OWL ontology language. 
Archivo also provides a four star rating scheme for the ontologies it scrapes, based on accessibility, quality, and related fitness‑for‑use criteria. For instance, SHACL compliance for graph‑based data is evaluated when appropriate. Ontologies should also contain metadata about their characteristics and specify a public license describing their terms‑of‑use. the Archivo database contains 1368 entries. History DBpedia was initiated in 2007 by Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak and Zachary Ives. See also BabelNet Semantic MediaWiki Wikidata YAGO (database) References External links Free software culture and documents Open data Semantic Web Knowledge bases History of Wikipedia Java platform Free software programmed in Scala
DBpedia
Technology
1,780
77,371,428
https://en.wikipedia.org/wiki/Valeri%20Frolov%20%28physicist%29
Valeri Frolov (born October 7, 1946) is a Russian-born Canadian theoretical physicist at the University of Alberta, Canada. Education Valeri Frolov is a theoretical physicist specializing in the study of black holes. He was born and grew up in Moscow. He graduated from Moscow State University and obtained his Master's degree in 1970. He received his first PhD degree ("Candidate of Sciences") in 1973 and his second doctorate ("Doctor of Sciences") in 1980, both from the P.N. Lebedev Physical Institute, Moscow, in theoretical physics. Work His professional scientific career started in 1970, when he joined the P.N. Lebedev Physical Institute. He worked there as an assistant professor, an associate professor, and (after 1980) a full professor until 1992. During the period 1985–1992 he was also a professor at the Moscow Institute of Physics and Technology (MIPT). During 1992–1993 he spent one year as a visiting professor at the University of Copenhagen. In 1993 he moved to Edmonton, Canada, where he became a full professor at the University of Alberta and received a Killam Memorial Chair, a position he has held ever since. Research Early in his career, Frolov studied white holes and semi-closed worlds. In 1970 he and his supervisor M.A. Markov published a paper discussing quantum particle creation by charged black holes. In 1980 he (with Gregory Vilkovisky) proposed a model of a regular evaporating black hole and presented a conformal diagram of its spacetime. Later his main interest focused on quantum effects in black holes. In 1987 he and Vitaly Ginzburg published a paper on the equivalence principle in the quantum domain. In 1989 he and Kip Thorne published a paper discussing quantum effects near the horizon of a rotating black hole and proposed a state of the vacuum which is sometimes referred to as the Frolov-Thorne vacuum. In 1994 he (with A. Barvinsky and A. 
Zelnikov) introduced a no-boundary wave function of a black hole, and in 1996 he (with D. Fursaev and A. Zelnikov) proposed an explanation of black hole entropy based on Sakharov's ideas of induced gravity. During the same period of time, he also studied cosmic strings (their interaction with black holes and quantum effects in the string background), wormholes and "time machines", and regular black hole models. During the period 2006–2018 the main focus of his research was on hidden symmetries of four- and higher-dimensional black holes. In collaboration with D. Kubiznak and P. Krtous he demonstrated that all these solutions of the Einstein equations possess a special geometrical object, called a Killing–Yano tensor, which is responsible for the complete integrability of the equations of motion of particles and the separability of the most interesting physical field equations in these spacetimes. More recently, he proposed an effective action for electromagnetic and gravitational spin-optics, a generalization of standard geometric optics that takes into account the interaction of the spin of these fields with the spacetime curvature. Awards and honors Killam Memorial Chair (since 1993) In 2016, the Markov Prize of the INR of the Russian Academy of Sciences, for outstanding contributions to black hole theory. Books Igor D. Novikov and Valeri P. Frolov, "Black Hole Physics" (Fundamental Theories of Physics, Volume 27), 341 pages, 1989. This is an English translation of the Russian original, published in 1986 by Nauka. Valeri P. Frolov and Igor D. Novikov, "Black Hole Physics: Basic Concepts and New Developments", Fundamental Theories of Physics, Volume 96, 770 pages, 1998 Valeri P. Frolov and Andrei Zelnikov, "Introduction to Black Hole Physics", Oxford University Press, 488 pages, 2011 Book chapter Frolov, V. P. The Newman-Penrose Method in the Theory of General Relativity. A chapter in: Basov, N.G. 
(eds) Problems in the General Theory of Relativity and Theory of Group Representations. The Lebedev Physics Institute Series. Springer, Boston, MA., pages 73–185, 1979 References External links Valeri Frolov at Google Scholar 1946 births Living people Theoretical physicists Moscow State University alumni University of Alberta People from Moscow Moscow Institute of Physics and Technology
Valeri Frolov (physicist)
Physics
913
78,536,056
https://en.wikipedia.org/wiki/Lie%20n-algebra
In mathematics, a Lie n-algebra is a generalization of a Lie algebra, a vector space with a bracket, to higher-order operations. For example, in the case of a Lie 2-algebra, the Jacobi identity is replaced by an isomorphism called a Jacobiator. See also 2-ring homotopy Lie algebra References Jim Stasheff and Urs Schreiber, Zoo of Lie n-Algebras. A post about the paper at the n-category café. John Baez, Alissa Crans, Higher-Dimensional Algebra VI: Lie 2-Algebras, Theory and Applications of Categories, Vol. 12 (2004), No. 15, pp. 492–528. Further reading https://ncatlab.org/nlab/show/Lie+2-algebra https://golem.ph.utexas.edu/category/2007/08/string_and_chernsimons_lie_3al.html Lie algebras
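For concreteness, in the Baez–Crans formulation (Higher-Dimensional Algebra VI, cited above) the Jacobiator of a Lie 2-algebra is a natural isomorphism; schematically:

```latex
% Jacobiator: the Jacobi identity holds only up to this isomorphism
J_{x,y,z} \colon [[x,y],z] \xrightarrow{\;\sim\;} [x,[y,z]] + [[x,z],y]
```

which is in turn required to satisfy a coherence condition of its own (the "Jacobiator identity"), much as the associator of a monoidal category must satisfy the pentagon identity.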
Lie n-algebra
Mathematics
210
30,214,168
https://en.wikipedia.org/wiki/Bendix%20drive
A Bendix drive is a type of engagement mechanism used in starter motors of internal combustion engines. The device allows the pinion gear of the starter motor to engage or disengage the ring gear (which is attached to the flywheel or flexplate of the engine) automatically when the starter is powered or when the engine fires, respectively. It is named after its inventor, Vincent Hugo Bendix. Operation The Bendix system places the starter drive pinion on a helically-splined drive shaft of the starter motor. When the starter motor first begins turning, the inertia of the drive pinion assembly momentarily resists rotation even though the shaft through its center is turning. Since the pinion has internal splines matching those on the drive shaft, this causes the pinion gear to slide axially to make initial side contact with the gear teeth on the ring gear of the engine. The pinion then rotates enough to allow the gears to mesh, after which the pinion continues along the shaft to reach a stop at the end of its allowed travel, at which point the gears are fully meshed. Since the pinion gear can no longer travel axially, it must then turn with the drive shaft and begins to drive the ring gear. When the engine starts, backdrive from the ring gear causes the drive pinion to exceed the rotational speed of the starter, at which point the drive pinion is forced back along the helical spline and out of mesh with the ring gear. The torque of the starter motor is transferred to the starter motor drive shaft through a heavy-duty coiled spring. When the starter motor is powered and drives the pinion into engagement with the flywheel, this spring cushions the rotational impact as the gears mesh and begin turning together. 
The main drawback to the Bendix drive is that it relies on a certain amount of "clash" between the teeth of the pinion and the ring gears before they slip into place and mate completely; the teeth of the pinion are already spinning when they come into contact with the static ring gear, and unless they happen to align perfectly at the moment they engage, the pinion teeth will strike the teeth of the ring gear side-to-side rather than face-to-face, and continue to rotate until both align. This increases wear on both sets of teeth; the pinion gear typically wears more, as it is deliberately made of a softer material than the ring gear because it is the more easily replaced of the two. For this reason the Bendix drive has been largely superseded in starter motor design by the pre-engagement system using a starter solenoid. References "Bendix Drive in Automobile Engineering", Dr. Jennifer Russel. Posted January 2, 2022 A REVIEW ON BENDIX DRIVE IN AUTOMOBILES, International Research Journal of Modernization in Engineering Technology and Science Internal combustion engine Engines Bendix Corporation
Bendix drive
Physics,Technology,Engineering
585
43,156,657
https://en.wikipedia.org/wiki/Interstitial%20site
In crystallography, interstitial sites, holes or voids are the empty space that exists between the packing of atoms (spheres) in the crystal structure. The holes are easy to see if you try to pack circles together; no matter how close you get them or how you arrange them, you will have empty space in between. The same is true in a unit cell; no matter how the atoms are arranged, there will be interstitial sites present between the atoms. These sites or holes can be filled with other atoms (interstitial defect). The picture with packed circles is only a 2D representation. In a crystal lattice, the atoms (spheres) would be packed in a 3D arrangement. This results in different shaped interstitial sites depending on the arrangement of the atoms in the lattice. Close packed A close packed unit cell, both face-centered cubic and hexagonal close packed, can form two different shaped holes.  Looking at the three green spheres in the hexagonal packing illustration at the top of the page, they form a triangle-shaped hole.  If an atom is arranged on top of this triangular hole it forms a tetrahedral interstitial hole. If the three atoms in the layer above are rotated and their triangular hole sits on top of this one, it forms an octahedral interstitial hole. In a close-packed structure there are 4 atoms per unit cell and it will have 4 octahedral voids (1:1 ratio) and 8 tetrahedral voids (1:2 ratio) per unit cell. The tetrahedral void is smaller in size and could fit an atom with a radius 0.225 times the size of the atoms making up the lattice.  An octahedral void could fit an atom with a radius 0.414 times the size of the atoms making up the lattice. An atom that fills this empty space could be larger than this ideal radius ratio, which would lead to a distorted lattice due to pushing out the surrounding atoms, but it cannot be smaller than this ratio. 
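The two ideal radius ratios quoted above follow from elementary geometry; a short sketch of the calculation in Python:

```python
import math

# Octahedral hole in a close-packed (FCC) cell of edge a: host atoms touch
# along the face diagonal (4R = a*sqrt(2)), while the interstitial atom
# touches hosts along the cube edge (2(R + r) = a), giving r/R = sqrt(2) - 1.
octahedral_ratio = math.sqrt(2) - 1       # ≈ 0.414

# Tetrahedral hole: four hosts at alternate corners of a cube of side a
# touch across its face diagonal (2R = a*sqrt(2)); the hole centre is the
# body centre, so R + r = a*sqrt(3)/2, giving r/R = sqrt(3/2) - 1.
tetrahedral_ratio = math.sqrt(1.5) - 1    # ≈ 0.225
```

Here R is the radius of the lattice atoms and r the radius of the largest sphere that fits in the hole without distorting the packing; these are the 0.414 and 0.225 limits cited in the text.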
Face-centered cubic (FCC) If half of the tetrahedral sites of the parent FCC lattice are filled by ions of opposite charge, the structure formed is the zincblende crystal structure. If all the tetrahedral sites of the parent FCC lattice are filled by ions of opposite charge, the structure formed is the fluorite structure or antifluorite structure. If all the octahedral sites of the parent FCC lattice are filled by ions of opposite charge, the structure formed is the rock-salt structure. Hexagonal close packed (HCP) If half of the tetrahedral sites of the parent HCP lattice are filled by ions of opposite charge, the structure formed is the wurtzite crystal structure. If all the octahedral sites of the anion HCP lattice are filled by cations, the structure formed is the nickel arsenide structure. Simple cubic A simple cubic unit cell, with stacks of atoms arranged as if at the eight corners of a cube would form a single cubic hole or void in the center. If these voids are occupied by ions of opposite charge from the parent lattice, the cesium chloride structure is formed. Body-centered cubic (BCC) A body-centered cubic unit cell has six octahedral voids located at the center of each face of the unit cell, and twelve further ones located at the midpoint of each edge of the same cell, for a total of six net octahedral voids. Additionally, there are 24 tetrahedral voids located in a square spacing around each octahedral void, for a total of twelve net tetrahedral voids. These tetrahedral voids are not local maxima and are not technically voids, but they do occasionally appear in multi-atom unit cells. Interstitial defect An interstitial defect refers to additional atoms occupying some interstitial sites at random as crystallographic defects in a crystal which normally has empty interstitial sites by default. References Crystallography Crystals Materials science
Interstitial site
Physics,Chemistry,Materials_science,Engineering
839
18,071,097
https://en.wikipedia.org/wiki/Scoop%20wheel
A scoop wheel or scoopwheel is a pump, usually used for land drainage. A scoop wheel pump is similar in construction to a water wheel, but works in the opposite manner: a waterwheel is water-powered and used to drive machinery, whereas a scoop wheel is engine-driven and is used to lift water from one level to another. Principally used for land drainage, early scoop wheels were wind-driven, but later steam-powered beam engines were used. A scoop wheel produces a lot of spray, so they were frequently encased in a brick building. To maintain efficiency when the river into which the water was discharged was of variable level, or tidal, a 'rising breast' was used, a sort of inclined sluice. The basic construction is, of necessity, similar to an undershot water wheel. The individual blades were frequently called ladles. Scoop wheels have been used in land drainage in Northern Germany, in the Netherlands, and in the UK, and occasionally elsewhere in the world. They began to be replaced in the mid 19th century by centrifugal pumps. The East and West Fens to the north of Boston, Lincolnshire were drained by such pumps in 1867, but although centrifugal pumps were smaller and more economical to install, a Mr. Lunn still argued that scoop wheels were a better solution where the initial cost did not rule them out. They were employed in situations where the water did not need to be raised by more than , and where the water levels of the input and output did not vary much. An interesting comparison between the two types of pump is available, because a vertical spindle centrifugal pump was installed at Prickwillow on the River Lark in Cambridgeshire, alongside an existing scoop wheel. A series of tests was carried out in 1880 to check their efficiency. The scoop wheel lifted 71.45 tons per minute through , with the engine indicating that it was developing , while the newer installation was developing , and raised 75.93 tons per minute through . 
Efficiency was calculated as 46 per cent for the scoop wheel and 52.79 per cent for the centrifugal pump. The most significant difference was the coal consumption, which was reduced from per hour to per hour for the newer system. See also Noria Sakia Dredger Pumping stations employing a scoop wheel Dogdyke Engine, Lincolnshire Pinchbeck Engine, Lincolnshire Pode Hole, Lincolnshire (scoop wheel no longer present) Stretham Old Engine, Cambridgeshire Westonzoyland Pumping Station Museum, Somerset (scoop wheel no longer present) References Bibliography External links Berney Arms windmill, preserved by English Heritage Summary of scoopwheel history An American example, sadly without pictures of the wheel Drainage Industrial archaeology Pumps
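The efficiencies reported for the 1880 Prickwillow trials are hydraulic power delivered divided by engine power. The lift heights and indicated horsepower did not survive the formatting of this text, so the numbers below are illustrative assumptions, not the article's figures:

```python
G = 9.81  # gravitational acceleration, m/s^2

def pump_efficiency(tons_per_minute, lift_m, shaft_power_w, kg_per_ton=1016):
    """Water power (mass flow x g x lift) divided by engine shaft power.
    Tons are taken as long tons (1016 kg), usual in 19th-century Britain."""
    mass_flow = tons_per_minute * kg_per_ton / 60.0   # kg/s
    water_power = mass_flow * G * lift_m              # watts
    return water_power / shaft_power_w

# Illustrative only: a 2 m lift and 51 kW shaft power are assumed values.
eff = pump_efficiency(71.45, 2.0, 51_000)
print(f"{eff:.1%}")
```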
Scoop wheel
Physics,Chemistry
553
69,652,211
https://en.wikipedia.org/wiki/Filippo%20Frontera
Filippo Frontera (born 16 November 1941 in Savelli) is an Italian astrophysicist and professor whose research concerns astronomical observations of celestial gamma-rays. Biography Education and career Frontera obtained his "Laurea" degree in physics cum laude in 1966 at the Università di Bologna. He was full professor of Experimental Physics at the University of Ferrara, Engineering Faculty, Ferrara, Italy, until his retirement in 2012, and for eight years was coordinator of the university's PhD course in Physics. Previously, from 1969 to 1985, he was a scientist at the IASF-CNR institute (now OAS-INAF) in Bologna. As "Distinguished Scientist" of the University of Ferrara, he continues his research activity at the Physics and Earth Sciences Department and lectures on "Measurements and Observations of celestial X-rays and gamma-rays" for the Master course in Physics. He is also an associated scientist of the National Institute of Astrophysics (INAF) in Bologna, an adjunct professor of the International Center of Relativistic Astrophysics Network (ICRANET), and a faculty member of the international IRAP-PhD doctorate. Scientific contributions Since his "Laurea", he has carried out his scientific activity in the field of hard X-ray astronomy. He was Principal Investigator (PI) of many successful balloon experiments, launched from Italy, France, the US and Australia. Among the most relevant results obtained was the first evidence of quasi-periodic oscillations from a black hole candidate, a discovery confirmed about 20 years later by NASA's CGRO satellite mission. He was also PI of two experiments on board the BeppoSAX satellite, launched on 30 April 1996 from Cape Canaveral, Florida (USA): the high-energy telescope (15–300 keV) PDS (Phoswich Detection System) and the "Gamma-Ray Burst Monitor" (GRBM), both of which produced many significant results. 
The GRBM, together with the Wide Field Cameras also on board BeppoSAX, played a fundamental role in the 1997 discovery of the extragalactic origin of Gamma Ray Bursts (GRBs), a mystery then about 30 years old. The American journal Science ranked this discovery among the top ten most important discoveries of 1997 across all fields of science. During the lifetime of BeppoSAX (30 April 1996 – 30 April 2002), the GRBM detected more than a thousand GRBs and enabled many other discoveries, among them the so-called "Amati relation", named after its first author and of great importance for GRB physics, and the first association of a GRB (980425) with a supernova explosion (SN1998bw). Thanks to the BeppoSAX discovery of their extragalactic origin, GRBs are now recognized as an ideal laboratory for settling several still-open issues in cosmology and fundamental physics. Frontera was also co-investigator of the JEM-X experiment on board the INTEGRAL satellite, contributing the field collimator and the ground calibration of the experiment at the X-ray facility LARIX, designed and developed under Frontera's responsibility. LARIX, since extended and upgraded, is a trans-national facility of the European program AHEAD. Frontera collaborated on the design and, at the LARIX facility, on the calibration of the high-energy X-ray experiment (HE) aboard the Chinese satellite Insight-HXMT, developed at the Institute of High Energy Physics (IHEP) (PI Shuang-Nan Zhang) and successfully launched on 15 June 2017 from the Chinese launch base of Jiuquan in the Gobi Desert. The collaboration with IHEP continues with the scientific exploitation of this satellite's observations. Technological innovations In collaboration with the INFN section of Ferrara and IASF-INAF (now OAS-INAF) of Bologna, Frontera led the development of the first Laue lens prototype for focusing high-energy X-rays. 
The mounting technique of this lens is the subject of a joint Italian Space Agency (ASI)–University of Ferrara patent. The development of Laue lenses for astrophysical applications is continuing, and a new Laue lens concept is the key instrument of a mission concept, ASTENA, proposed for the ESA long-term program "Voyage 2050". Publications He is the author of more than 360 papers published in international refereed journals, including Nature and Science, and about 200 publications in proceedings of international symposia, with more than 850 titles in the NASA ADS (Astrophysics Data System) archive, inclusive of circulars and telegrams. For his high citation rate (currently more than 22,000 citations), in 2007 he was included among the "Highly cited researchers" by ISI-Thomson of Philadelphia (USA). Memberships Frontera is an emeritus member of the American Astronomical Society and a member of the Italian Physical Society (SIF), of the Italian Society of General Relativity and Gravitation (SIGRAV), and of the Ferrara Academy of Sciences. He is also a member of the "Gruppo 2003 for the Scientific Research", an association composed solely of highly cited Italian scientists. Awards In 1998, along with the research team of the Beppo-SAX satellite, Frontera was awarded the Bruno Rossi Prize of the American Astronomical Society "for the prompt discovery and accurate localization of the X-ray counterpart of Gamma Ray Bursts, thus making possible their extragalactic origin". For the same discovery, in 2002, along with the research group represented by the astrophysicist Ed van den Heuvel, he was awarded the Descartes Prize for "solving the gamma ray burst riddle". In 2010, Frontera, along with Enrico Costa, one of his two deputy PIs for the BeppoSAX/PDS and GRBM experiments, was awarded the Enrico Fermi Prize of the Italian Physical Society (SIF) for "the discovery of the afterglow, i.e., of the X-ray luminescence, from Gamma Ray Bursts". 
In 2012, Frontera received the Marcel Grossmann Award "for guiding the Gamma-ray Burst Monitor project on board the Beppo-SAX satellite which led to the discovery of the GRB X-ray afterglows, and to their optical identification". In 2024, Frontera received the Insight-HXMT International Collaboration Award, "in recognition of Outstanding Contributions to the Insight-HXMT Mission". Also in 2024, in honor of Frontera, the main-belt asteroid 2002 AP12 was named 126177 Filippofrontera. Honours Commander of the Order of Merit of the Italian Republic "Initiative of President of Italy" — Rome, 13 February 2014 See also University of Ferrara Beppo-SAX Bruno Rossi Prize Enrico Fermi Prize (SIF) Marcel Grossmann Award ICRANet INAF References External links (IT) Curriculum vitae of Filippo Frontera, at fe.infn.it Publications by Filippo Frontera, from The SAO/NASA Astrophysics Data System X-ray astronomy Commanders of the Order of Merit of the Italian Republic Academic staff of the University of Ferrara University of Bologna alumni Living people 1941 births Italian astrophysicists
Filippo Frontera
Astronomy
1,495
31,903,706
https://en.wikipedia.org/wiki/European%20Solar%20Telescope
The European Solar Telescope (EST) is a pan-European project to build a next-generation 4-metre class solar telescope, to be located at the Roque de los Muchachos Observatory in the Canary Islands, Spain. It will use state-of-the-art instruments with high spatial and temporal resolution that can efficiently produce two-dimensional spectral information in order to study the Sun's magnetic coupling between its deep photosphere and upper chromosphere. This will require diagnostics of the thermal, dynamic and magnetic properties of the plasma over many scale heights, by using multiple wavelength imaging, spectroscopy and spectropolarimetry. The EST design will strongly emphasise the use of a large number of visible and near-infrared instruments simultaneously, thereby improving photon efficiency and diagnostic capabilities relative to other existing or proposed ground-based or space-borne solar telescopes. In May 2011 EST was at the end of its conceptual design study. The EST is being developed by the European Association for Solar Telescopes (EAST), which was set up to ensure the continuation of solar physics within the European community. Its main goal is to develop, construct and operate the EST. The European Solar Telescope is often regarded as the counterpart of the American Daniel K. Inouye Solar Telescope, which finished construction in November 2021. Conceptual design study The conceptual design study conducted by research institutions and industrial companies was finalized in May 2011. The study took 3 years, cost €7 million and was co-financed by the European Commission under the EU's Seventh Framework Programme for Research (FP7). The study estimates a cost of €150 million to design and construct the EST and projects about €6.5 million annually for its operation. 
Partners The European Association for Solar Telescopes (EAST) is a consortium of 7 research institutions and 29 industrial partners from 15 European countries. Its aims include undertaking the development of EST and keeping Europe at the frontier of solar physics worldwide. To that end, EAST intends to develop, construct and operate a next-generation large-aperture European Solar Telescope (EST) at the Roque de los Muchachos Observatory, Canaries, Spain. See also Daniel K. Inouye Solar Telescope List of solar telescopes References Notes External links Official EST homepage European Association for Solar Telescopes website European Solar Telescope in the Canary Island Solar telescopes Proposed telescopes European Union Astrophysics Plasma physics facilities Astronomical observatories in the Canary Islands
European Solar Telescope
Physics,Astronomy
494
48,847,382
https://en.wikipedia.org/wiki/Gunslinger%27s%20gait
The gunslinger's gait or KGB walk is a walking pattern observed in individuals associated with the KGB or the Red Army. It is a standard walk except that the non-dominant hand swings freely while the other is kept in place, near a pocket or a holster, so that the individual is ready to draw a gun at a moment's notice in case of an unexpected threat. This type of walk is taught in training manuals for KGB officers, which is where it is believed to have originated, but it is a recurring behavior in the Red Army and other military, security, and espionage organizations. The term "gunslinger's gait" was coined by a group of British researchers in 2015, who published a study analysing this unusual walking pattern in Vladimir Putin and several other high-ranking Russian government officials: Dmitry Medvedev, Anatoly Serdyukov, Sergei Ivanov, and Anatoly Sidorov. Serdyukov, Ivanov, and Sidorov all had prior KGB or Red Army training, but Medvedev is an exception. Vladimir Putin The gunslinger's gait is notable in Russian president Vladimir Putin, a former KGB officer. He has been seen walking in this style as early as April 2000, while he was Acting President of Russia and prior to becoming President of Russia. There has been speculation that it may be a sign of Parkinson's disease; however, a more plausible explanation is his KGB training, given that he has exhibited the gait for decades and that other Russian officials with similar backgrounds show the same style of walk. See also Arm swing in human locomotion References Human behavior KGB Military sociology Gait abnormalities
Gunslinger's gait
Biology
351
10,368,228
https://en.wikipedia.org/wiki/Resource
Resource refers to all the materials available in our environment which are technologically accessible, economically feasible and culturally sustainable and help us to satisfy our needs and wants. Resources can broadly be classified according to their availability as renewable or non-renewable. An item may become a resource with technology. The benefits of resource utilization may include increased wealth, proper functioning of a system, or enhanced well-being. From a human perspective, a resource is anything needed to satisfy human needs and wants. The concept of resources has been developed across many established areas of work, in economics, biology and ecology, computer science, management, and human resources for example, linked to the concepts of competition, sustainability, conservation, and stewardship. In application within human society, commercial or non-commercial factors require resource allocation through resource management. The concept of resources can also be tied to the direction of leadership over resources; this may include human resources issues, for which leaders are responsible, in managing, supporting, or directing those matters and the resulting necessary actions. Examples include professional groups, innovative leaders, and technical experts in areas such as archiving, academic management, association management, business management, healthcare management, military management, public administration, spiritual leadership and social networking administration. Definition of size asymmetry Resource competition can vary from completely symmetric (all individuals receive the same amount of resources, irrespective of their size, known also as scramble competition) to perfectly size symmetric (all individuals exploit the same amount of resource per unit biomass) to absolutely size asymmetric (the largest individuals exploit all the available resource). 
Economic versus biological There are three fundamental differences between the economic and the ecological view: 1) the economic resource definition is human-centered (anthropocentric) and the biological or ecological resource definition is nature-centered (biocentric or ecocentric); 2) the economic view includes desire along with necessity, whereas the biological view is about basic biological needs; and 3) economic systems are based on markets of currency exchanged for goods and services, whereas biological systems are based on natural processes of growth, maintenance, and reproduction. Computer resources A computer resource is any physical or virtual component of limited availability within a computer or information management system. Computer resources include means for input, processing, output, communication, and storage. Natural Natural resources are derived from the environment. Many natural resources are essential for human survival, while others are used to satisfy human desire. Conservation is the management of natural resources with the goal of sustainability. Natural resources may be further classified in different ways. Resources can be categorized based on origin: Abiotic resources comprise non-living things (e.g., land, water, air, and minerals such as gold, iron, copper, silver). Biotic resources are obtained from the biosphere. Forests and their products, animals, birds and their products, fish and other marine organisms are important examples. Fossil fuels such as coal and petroleum are sometimes included in this category because they were formed from fossilized organic matter over long periods. Natural resources are also categorized based on the stage of development: Potential resources are known to exist and may be used in the future. For example, petroleum may exist in many parts of India and Kuwait that have sedimentary rocks, but until the time it is actually drilled out and put into use, it remains a potential resource. 
Actual resources are those that have been surveyed, their quantity and quality determined, and that are being used at present. For example, petroleum and natural gas are actively being obtained from the Mumbai High Fields. The development of an actual resource, such as wood processing, depends on the technology available and the cost involved. That part of the actual resource that can be developed profitably with the available technology is known as a reserve resource, while that part that cannot be developed profitably due to a lack of technology is known as a stock resource. Natural resources can be categorized based on renewability: Non-renewable resources are formed over very long geological periods. Minerals and fossil fuels are included in this category. Since their formation rate is extremely slow, they cannot be replenished once they are depleted. Metals can be recycled and reused, whereas petroleum and gas cannot, yet all are considered non-renewable resources. Renewable resources, such as forests and fisheries, can be replenished or reproduced relatively quickly. The highest rate at which a resource can be used sustainably is the sustainable yield. Some resources, such as sunlight, air, and wind, are called perpetual resources because they are available continuously, though at a limited rate. Human consumption does not affect their quantity. Many renewable resources can be depleted by human use, but may also be replenished, thus maintaining a flow. Some of these, such as crops, take a short time for renewal; others, such as water, take a comparatively longer time, while others, such as forests, need even longer periods. Depending upon the speed and quantity of consumption, overconsumption can lead to depletion or the total and everlasting destruction of a resource. Important examples are agricultural areas, fish and other animals, forests, healthy water and soil, and cultivated and natural landscapes. 
Such conditionally renewable resources are sometimes classified as a third kind of resource or as a subtype of renewable resources. Conditionally renewable resources are presently subject to excess human consumption, and the only sustainable long-term use of such resources is within the so-called zero ecological footprint, where humans use less than the Earth's ecological capacity to regenerate. Natural resources are also categorized based on distribution: Ubiquitous resources are found everywhere (for example, air, light, and water). Localized resources are found only in certain parts of the world (for example, metal ores and geothermal power). Actual vs. potential natural resources are distinguished as follows: Actual resources are those resources whose location and quantity are known and we have the technology to exploit and use them. Potential resources are those of which we have insufficient knowledge or do not have the technology to exploit at present. Based on ownership, resources can be classified as individual, community, national, and international. Labour or human resources In economics, labor or human resources refers to the human work in the production of goods and rendering of services. Human resources can be defined in terms of skills, energy, talent, abilities, or knowledge. In a project management context, human resources are those employees responsible for undertaking the activities defined in the project plan. Capital or infrastructure In economics, capital goods or capital are "those durable produced goods that are in turn used as productive inputs for further production" of goods and services. A typical example is the machinery used in a factory. At the macroeconomic level, "the nation's capital stock includes buildings, equipment, software, and inventories during a given year." Capital is one of the most important economic resources. 
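The sustainable yield described above can be illustrated with a toy logistic-growth model. All parameter values here are invented for illustration: harvesting below the maximum sustainable yield leaves a stable stock, while harvesting above it depletes the resource entirely.

```python
def simulate_stock(r, K, harvest, s0, steps):
    """Logistic regrowth of a renewable stock minus a constant harvest
    per time step, clamped at zero once the stock is exhausted."""
    s = s0
    for _ in range(steps):
        s = max(0.0, s + r * s * (1 - s / K) - harvest)
    return s

# For logistic growth the maximum sustainable yield is r*K/4 (here 125).
persists = simulate_stock(r=0.5, K=1000, harvest=100, s0=1000, steps=300)
collapses = simulate_stock(r=0.5, K=1000, harvest=150, s0=1000, steps=300)
print(persists > 0, collapses == 0)  # True True
```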
Tangible versus intangible Whereas tangible resources such as equipment have an actual physical existence, intangible resources such as corporate images, brands, patents, and other intellectual property exist in abstraction. Use and sustainable development Typically, resources cannot be consumed in their original form; through resource development they must be processed into more usable commodities. The demand for resources is increasing as economies develop. There are marked differences in resource distribution and associated economic inequality between regions or countries, with developed countries using more natural resources than developing countries. Sustainable development is a pattern of resource use that aims to meet human needs while preserving the environment. Sustainable development means that we should exploit our resources carefully to meet our present requirements without compromising the ability of future generations to meet their own needs. The practice of the three R's – reduce, reuse, and recycle – must be followed to save and extend the availability of resources. Various problems are related to the usage of resources: Environmental degradation Over-consumption Resource curse Resource depletion Tragedy of the commons Various benefits can result from the wise usage of resources: Economic growth Ethical consumerism Prosperity Quality of life Sustainability Wealth See also Natural resource management Resource-based view Waste management References Further reading Elizabeth Kolbert, "Needful Things: The raw materials for the world we've built come at a cost" (largely based on Ed Conway, Material World: The Six Raw Materials That Shape Modern Civilization, Knopf, 2023; Vince Beiser, The World in a Grain; and Chip Colwell, So Much Stuff: How Humans Discovered Tools, Invented Meaning, and Made More of Everything, Chicago), The New Yorker, 30 October 2023, pp. 20–23. 
Kolbert mainly discusses the importance to modern civilization, and the finite sources of, six raw materials: high-purity quartz (needed to produce silicon chips), sand, iron, copper, petroleum (which Conway lumps together with another fossil fuel, natural gas), and lithium. Kolbert summarizes archeologist Colwell's review of the evolution of technology, which has ended up giving the Global North a superabundance of "stuff", at an unsustainable cost to the world's environment and reserves of raw materials. External links Resource economics Ecology
Resource
Biology
1,856
38,854,961
https://en.wikipedia.org/wiki/Scaldicetus
Scaldicetus is an extinct genus of highly predatory macroraptorial sperm whale. Although widely used for a number of extinct physeterids with primitive dental morphology consisting of enameled teeth, Scaldicetus as generally recognized appears to be a wastebasket taxon filled with more-or-less unrelated primitive sperm whales. Taxonomy Scaldicetus is known from Miocene to Pleistocene deposits of Western Europe, the U.S. (California, Florida, Maryland, Virginia), the Baja California Peninsula, Peru, New South Wales, and Japan. However, Scaldicetus is probably a grade taxon, and fossil teeth assigned to it (largely due to the lack of distinguishing characteristics in fossil teeth alone) probably represent more-or-less unrelated sperm whales united by their primitive characteristics rather than actual ancestry. Consequently, this would inflate the genus's distribution. The name Scaldicetus caretti was coined in 1867 from numerous sperm whale teeth collected in Neogene deposits near Antwerp, Belgium, probably from the early-to-middle Miocene Berchem Formation. However, some of these remains may have been reworked and redeposited into younger rocks. More remains, also from near Antwerp, from the Diest Formation date to the Tortonian (late Miocene). Synonyms of Scaldicetus include Palaeodelphis, Homocetus, and Eucetus. The genus Physodon described by French paleontologist Paul Gervais in 1872 was previously considered a synonym, but it was declared a nomen dubium in 2006. Scaldicetus is sometimes classified into the dubious subfamily Hoplocetinae along with Diaphorocetus, Idiorophus, and Hoplocetus based on the presence of large, robust, enamel-coated teeth. The macroraptorial sperm whales Livyatan, Zygophyseter, Brygmophyseter, and Acrophyseter potentially also belong to this subfamily. "Ontocetus" oxymycterus, described from the middle Miocene (Langhian) of Santa Barbara, California, was assigned to Scaldicetus in 2008, but was subsequently made the type of a new genus, Albicetus. 
Tooth anatomy Unlike the modern sperm whale which only has teeth on the bottom jaw, Scaldicetus had teeth in both jaws. The lectotype for S. caretti had at least 45 teeth in total in its mouth in life. Like other macroraptorial sperm whales but unlike the modern sperm whale, the teeth were covered in a thick enamel coating, about thick. The teeth were moderately curved and were deeply rooted into the skull, implying a strong bite. Like in other sperm whales, tooth dimensions vary widely; for the lectotype: the total length of the tooth root (the part of the tooth beneath the gum line) is between and the maximum total length of the entire tooth is . Like in other macroraptorial sperm whales, tooth size increased from the back of the jaw to the front. The maximum diameter of the crown (the part of the tooth that is visible and erupts from the gum line) ranges from , and diameter was greatest midway up the tooth. Paleobiology The teeth of the lectotype of S. caretti exhibit vertical root fractures which were probably brought on by chewing hard food or repetitive application of excessive force while chewing or biting. It is likely these injuries were sustained while biting a fairly large vertebrate, such as various marine mammals as other macroraptorial sperm whales are suspected of hunting. However, the killer whale–which preys on large marine mammals–is not known to exhibit these fractures, though this may be because killer whale teeth are more resistant to shock, having a smaller pulp cavity and, thus, a thicker tooth. Further, terrestrial carnivores that chew through bone display these fractures, and those that prey on larger prey have larger tooth roots. Like in the killer whale, Scaldicetus may have mashed its food in smaller pieces to ease swallowing, which would have increased the risk of hitting bone which would cause such fractures. Like other macroraptorial sperm whales, Scaldicetus probably occupied the same niche as the killer whale. 
Paleoecology The Diest Formation, judging from the mollusk assemblage, probably represented a shallow sea with volatile ocean currents, moving sand bars, and megaripples. Whale remains include a cetotheriid baleen whale, the baleen whale Plesiocetus, a kentriodontid dolphin, and the beaked whale Ziphirostrum. Shark remains were not very common; those found belong to the extinct broad-toothed mako (the ancestor of the great white shark), the extinct mako shark Isurus desori, a Squalus dogfish, the angelshark, a sand tiger shark, and a Pristiophorus sawshark. References Fossils of Belgium Prehistoric toothed whales Prehistoric cetacean genera Fossil taxa described in 1867 Sperm whales Nomina dubia Miocene mammals of Europe Neogene
Scaldicetus
Biology
1,048
66,008,000
https://en.wikipedia.org/wiki/Jennie%20Lasby%20Tessmann
Jennie Belle Lasby Tessmann (August 23, 1882 – December 9, 1959) was an American spectroscopist and college educator. She was a "human computer" at Mount Wilson Observatory from 1906 to 1913, the first woman research assistant at the observatory. She taught astronomy and history at Santa Ana College from 1919 to 1946. Early life Jennie Belle Lasby was born in Castle Rock, Minnesota, the daughter of Walter Lasby and Lavinia C. Freeman Lasby. Her father was born in Ontario, Canada, and her mother was from Wisconsin. She attended Carleton College, completing a bachelor's degree in 1904. She earned a master's degree in astronomy at Mount Holyoke College in 1906. Career Lasby taught astronomy and mathematics at Mount Holyoke College during her graduate studies there. She was hired as a computer at Mount Wilson Observatory in 1906. She was the first woman research assistant at Mount Wilson, starting a few months before Cora G. Burwell joined the same department. In 1910, she attended the fourth conference of the International Union for Cooperation in Solar Research, when it was held at Mount Wilson. She left Mount Wilson in 1913, after co-authoring several scientific publications, including a monograph with Walter Sydney Adams. She became a member of the National Association for the Advancement of Science in 1921. In 1914, Lasby went to work on a spectroscopy project in Germany, but she returned the following year with the start of World War I. She worked briefly at Goodsell Observatory in Minnesota, and was a librarian at Northfield, Minnesota. From 1919 to 1946, Lasby Tessmann taught history and astronomy at Santa Ana Junior College. She helped develop the Bishop Observatory in Orange County as a teaching facility. She spoke to community groups often, and was president of the City Teachers' League and the Business and Professional Women's Club, both in Santa Ana. 
Personal life Jennie Lasby married German scientist Heinrich Arnold Johannes (John) Tessmann in 1927, in Travemünde, Germany. She died in 1959, in Santa Ana, aged 77 years. In 1967, Tessmann Planetarium at Santa Ana College was named in her memory, and the Jennie Lasby Tessmann House is on the Santa Ana Register of Historic Properties. References External links Dylan M. Almendral, "Jennie Lasby-Tessmann: A Woman of the Stars" (January 19, 2020), a blog post about Lasby-Tessmann. 1882 births 1959 deaths Carleton College alumni Human computers Spectroscopists Scientists from Santa Ana, California 20th-century American women scientists Mount Holyoke College alumni American people of Canadian descent 20th-century American educators 20th-century American women educators People from Dakota County, Minnesota Scientists from Minnesota Educators from Minnesota 20th-century American scientists
Jennie Lasby Tessmann
Technology
565
15,852,294
https://en.wikipedia.org/wiki/Bamboo%20forest
The term bamboo forest is commonly used for bamboo plant communities even though bamboo is a grass, not a tree. Overview Unlike other forests, bamboo communities are often composed almost entirely of a single species. Bamboos also differ from ordinary trees in both appearance and structure: their stems are sturdy but do not thicken as they grow. Bamboos grow quickly and abundantly, often preventing sunlight from reaching the ground; this makes it difficult for other plants to establish themselves, so bamboo forests contain little vegetation other than bamboo and form a distinctive landscape of dense culms. Human uses Bamboo is used in various ways as a valuable natural material in many Asian countries, and bamboo forests are therefore seen as a vital source of tools and resources important to the livelihood of local communities. When bamboo forests are managed with moderate extraction, they can help deter landslides and erosion in the event of an earthquake or other natural disaster. Preventing deforestation Bamboos have a strong reproductive capacity, evident in how quickly they regrow after being cut down. Within two to three months of being cut, a bamboo shoot can grow into a full-sized culm and quickly cover the land again. This is why some say that cutting a bamboo is, in effect, planting one in its place. The underground stem of bamboo is shallow and spreads near the surface of the ground, and it is covered with "whisker roots" that hold the surrounding soil firmly in place. The spread of alternative materials for making tools and other objects has led some regional bamboo forests to become mismanaged and fall into poor condition.
The expansion of these neglected bamboo forests puts pressure on other plants, reducing biodiversity in many regions, and can also leave communities more exposed to natural disasters. For this reason, some places are taking steps to cut down bamboo forests altogether and replace them with more permanent forests that need less maintenance and protection than bamboo. References Sources Bamboo Biodiversity Forest ecology
Bamboo forest
Biology
437
60,167,773
https://en.wikipedia.org/wiki/Nuclear%20fallout%20effects%20on%20an%20ecosystem
This article uses Chernobyl as a case study of nuclear fallout effects on an ecosystem. Chernobyl Officials used hydrometeorological data to create an image of what the potential nuclear fallout looked like after the Chernobyl disaster in 1986. Using this method, they were able to determine the distribution of radionuclides in the surrounding area and characterize the emissions from the nuclear reactor itself. These emissions included: fuel particles, radioactive gases, and aerosol particles. The fuel particles were due to the violent interaction between hot fuel and the cooling water in the reactor, and attached to these particles were cerium, zirconium, lanthanum, and strontium. All of these elements have low volatility, meaning they tend to remain in a liquid or solid state rather than vaporizing into the atmosphere. Cerium and lanthanum can cause irreversible damage to marine life by deteriorating cell membranes, impairing reproductive capability, and crippling the nervous system. Strontium in its stable isotopes is harmless; however, when the radioactive isotope Sr-90 is released into the atmosphere it can lead to anemia, cancers, and oxygen deficiency. The aerosol particles carried traces of tellurium, a toxic element that can harm developing fetuses, along with caesium, an unstable, highly reactive, and toxic element. Also found in the aerosol particles was enriched uranium-235. The most prevalent radioactive gas detected was radon, a noble gas with no odor, color, or taste, which can travel into the atmosphere or into bodies of water. Radon is directly linked to lung cancer and is its second leading cause in the general population. All of these elements diminish only through radioactive decay, the rate of which is characterized by each isotope's half-life. Half-lives of the nuclides previously discussed range from mere hours to thousands of years.
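The half-life behaviour described above follows the standard exponential decay law N(t) = N0 · (1/2)^(t/T). A minimal sketch (the Sr-90 half-life of roughly 28.8 years is a textbook value, not taken from this article):

```python
# Fraction of a radionuclide remaining after time t, from the
# exponential decay law N(t) = N0 * (1/2)**(t / half_life).
def fraction_remaining(t, half_life):
    return 0.5 ** (t / half_life)

# Sr-90 (half-life ~28.8 years, a textbook value): after two
# half-lives, exactly one quarter of the original amount remains.
sr90_after_two_halflives = fraction_remaining(57.6, 28.8)
```

The same function applies to any of the nuclides mentioned, from short-lived zirconium isotopes to long-lived plutonium, by swapping in the appropriate half-life.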
The shortest half-life among the previously mentioned nuclides is that of Zr-95, an isotope of zirconium, at 1.4 hours. The longest is that of Pu-239, at approximately 24,000 years. While the initial release of these particles and elements was rather large, there were multiple low-level releases for at least a month after the initial incident at Chernobyl. Local effects Surrounding flora and fauna were drastically affected by Chernobyl's explosions. Coniferous trees, which are plentiful in the surrounding landscape, were heavily affected due to their biological sensitivity to radiation exposure. Within days of the initial explosion many pine trees in a 4 km radius died, with lesser yet still harmful effects observed up to 120 km away. Many trees experienced interruptions in their growth, reproduction was crippled, and multiple morphological changes were observed. Hot particles also landed on these forests, burning holes and hollows into the trees. The surrounding soil was covered in radionuclides, which prevented substantial new growth. Deciduous trees such as aspen, birch, alder, and oak are more resistant to radiation exposure than conifers, but they are not immune. The damage seen on these trees was less severe than that observed on the pines. Much new deciduous growth suffered from necrosis, the death of living tissue, and foliage on existing trees turned yellow and fell off. The resilience of deciduous trees has allowed them to bounce back, and they have repopulated areas where many coniferous trees, mostly pine, once stood. Herbaceous vegetation was also affected by radiation fallout, with many observations of color changes in the cells, chlorophyll mutation, lack of flowering, growth depression, and vegetation death. Mammals are a highly radio-sensitive class, and observations of mice in the area surrounding Chernobyl showed a population decrease.
Embryonic mortality increased as well; however, the migration patterns of the rodents allowed the damaged population numbers to recover. Among the small rodents affected, increasing problems in the blood and liver were observed, directly correlated with radiation exposure. Problems such as liver cirrhosis, enlarged spleens, increased peroxide oxidation of tissue lipids, and decreased enzyme levels were all present in the rodents exposed to the radioactive blasts. Larger wildlife did not fare much better. Although most livestock were relocated a safe distance away, horses and cattle on an isolated island 6 km from the Chernobyl radioactivity were not spared. Hyperthyroidism, stunted growth, and death plagued the animals left on the island. The loss of human population around Chernobyl, in what is referred to as the "exclusion zone," has allowed the ecosystems to recover. The use of herbicides, pesticides, and fertilizers has decreased because there is less agricultural activity. The biodiversity of plants and wildlife has increased, and animal populations have grown. However, radiation continues to affect the local wildlife. Global effects Factors such as rainfall, wind currents, and the initial explosions at Chernobyl themselves caused the nuclear fallout to spread throughout Europe and Asia, as well as parts of North America. Not only did the various radioactive elements previously mentioned spread, but there were also problems with what are known as hot particles. The Chernobyl reactor expelled not only aerosol particles, fuel particles, and radioactive gases, but also uranium fuel fused together with radionuclides. These hot particles could spread for thousands of kilometers and could produce concentrated substances in the form of raindrops known as liquid hot particles. These particles were potentially hazardous even in low-level radiation areas.
The activity of each individual hot particle could be as high as 10 kBq, a fairly high level of radioactivity. These liquid hot particle droplets could be absorbed in two main ways: ingestion through food or water, and inhalation. Evolutionary effects Mutated organisms themselves also have effects beyond the immediate area. Møller & Mousseau (2011) find that individuals carrying deleterious mutations are not selected out immediately but instead survive for many generations. As such, they are expected to have descendants far from the contamination sites that created them, spreading the mutations into those populations and causing fitness decline. References Aftermath of war Aftermath of the Chernobyl disaster Environmental impact of nuclear power Nuclear chemistry Nuclear weapons Radiation health effects Radioactive contamination Radiobiology Radiological weapons Nuclear fallout
Nuclear fallout effects on an ecosystem
Physics,Chemistry,Materials_science,Technology,Biology
1,327
44,309,868
https://en.wikipedia.org/wiki/Digital%20Access%20to%20a%20Sky%20Century%20%40%20Harvard
The Digital Access to a Sky Century @ Harvard (DASCH) is a project to preserve and digitize images recorded on astronomical photographic plates created before astronomy became dominated by digital imaging. It is a major project of the Center for Astrophysics Harvard & Smithsonian. Over 500,000 glass plates held by the Harvard College Observatory are to be digitized. The digital images will contribute to time domain astronomy, providing over a hundred years of data that may be compared to current observations. From 1885 until 1992, the Harvard College Observatory repeatedly photographed the night sky using observatories in both the northern and southern hemispheres. Over half a million glass photographic plates are stored in the observatory archives providing a unique resource to astronomers. The Harvard collection is over three times the size of the next largest collection of astronomical photographic plates and is almost a quarter of all known photographic images of the sky on glass plates. Those plates were seldom used after digital imaging became the standard near the end of the twentieth century. The scope of the Harvard plate collection is unique in that it covers the entire sky for a very long period of time. 
Goals The project web site states that the goals of DASCH are to enable new Time Domain Astronomy (TDA) science, including: Conduct the first long-term temporal variability survey on days to decades time scales Novae and dwarf novae distributions and populations in the Galaxy Black hole and neutron star X-ray binaries in outburst: constraining the BH, NS binary populations in the Galaxy Black hole masses of bright quasars from long-term variability measures to constrain their characteristic shortest timescales and thus size Quiescent black holes in galactic nuclei revealed by tidal disruption of a passing field star and resultant optical flare Unexpected classes of variables or temporal behavior of known objects: preview of what Pan-STARRS and LSST may see in much more detail but on shorter timescales. History Digitizing the Harvard College Observatory's astronomical plates archive was first considered in the 1980s by Jonathan E. Grindlay, a professor of astronomy at Harvard. Grindlay encouraged Alison Doane, then curator of the archive, to explore digitizing the collection with a commercial image scanner. Working with Jessica Mink, an archivist of the Center for Astrophysics Harvard & Smithsonian, Grindlay and Doane determined that a commercial scanner could produce suitable digital images but also found that such machines were too slow. Working full-time, it would have taken over 50 years to digitize the plates in the Harvard archive with commercial scanners. Doane presented a talk about the problem at a meeting of the Amateur Telescope Makers of Boston, whose clubhouse is located on the grounds of MIT's Haystack Observatory. Bob Simcoe, a club member and retired engineer, volunteered to help design a machine suitable for the task. The machine needed to position and record the stellar images on the plates to within half a micron and account for different emulsions, plate thicknesses and densities, exposure times, processing methods and so on.
Software was developed by Mink, Edward Los, another volunteer from the club, and Silas Laycock, a researcher. Thanks to a grant from the National Science Foundation and donations of time and material, creation of the scanner began in 2004. The scanner was completed and tested in 2006. Over 500 plates were imaged before the project ran out of money in July 2007. For the digital images to be useful for research, the associated metadata also needs to be digitized. That data describes what part of the sky and what objects were recorded on each plate along with date, time, telescope, and other pertinent information. The metadata is recorded in about 1,200 logbooks and on the card catalog of the collection. In addition, each plate is stored in a paper jacket that includes related information and often scientifically and historically important notes left by previous researchers, including notable astronomers such as Henrietta Swan Leavitt and Annie Jump Cannon. George Champine, another volunteer from the Amateur Telescope Makers of Boston, photographed the logbooks. The paper jacket for each plate is photographed as each plate is cleaned and imaged. Progress Plate imaging The first plate images were created by Harvard Observatory staff members in the winter of 2001–2002 using commercial scanners. A larger test that included imaging 100 plates was conducted in the summer of 2002. Those tests indicated that commercially available scanners were too slow for digitizing the Harvard plate collection and motivated the development of a custom-built scanner. The test images are available on-line. The custom-built high-speed scanner was completed and tested in 2006. Improvement of the scanner and associated software continues. A failure of a single part in the plate loader led to a breakdown of the scanner in August 2014. A new plate loader control system was designed and built by Bob Simcoe, allowing scanning to resume in November 2014.
By November 2014, over 80,000 plates had been scanned and the data released on the DASCH web site, approximately 6.5 percent of the plate collection. The 80,000th plate was scanned on November 13, 2014. Metadata transcription Most of the metadata for the plate collection is contained in 663 bound volumes and about 500 looseleaf logbooks. Photographs of all of the logbook pages are available on the DASCH website. The effort to digitize this information began at Harvard. Some was done in India. The effort later moved to the American Museum of Natural History, where volunteers worked under the supervision of Dr Michael Shara, Curator of the Department of Astrophysics, and Holly Klug, Department of Volunteer Services. In August 2014, the transcription of the Harvard plate logbooks was taken over by the Smithsonian Transcription Center, a new program to recruit volunteers to transcribe historical documents. This citizen science project is ongoing with a goal of completing all of the transcription before 2017. DASCH Data Release 7 (DR7) The DASCH project completed its two-decade effort in 2024 when the final plates were scanned. DASCH Data Release 7 (DR7) was released on 2024 December 29. The primary DASCH DR7 data product is a catalog of astrophysical lightcurves referenced to the AAVSO Photometric All-Sky Survey Data (APASS) Release 8, covering the entire sky across the years ~1880–1990. The DASCH data can be accessed through several different methods: daschlab, the DASCH web APIs, or Starglass. Daschlab is a Python toolkit that allows users to perform basic data retrievals and analysis using cloud-based Jupyter notebooks. The DASCH web APIs can be used to retrieve DR7 data from any number of programming languages through standard RESTful JSON-oriented interfaces. The Starglass website provides a user interface and a programmatic API allowing access to DASCH plate-level data products and queries.
Other activities Special projects So that the digitization progresses efficiently, DASCH does not generally accept special requests to scan a particular part of the sky from the collection. The DASCH team did, however, accommodate two special requests to image plates that were not part of the Harvard collection "for scientifically compelling reasons". The New Horizons team requested images of Pluto in order to improve the dwarf planet's ephemeris, which was needed to plan precise adjustments to the spacecraft's trajectory. DASCH scanned 843 plates showing Pluto that were taken by the 40-inch telescope at Lowell Observatory from 1930 to 1951. Forty-two plates of the Cassiopeia A supernova remnant taken by the Hale Telescope at the Palomar Observatory from 1951 to 1989 were imaged to support a study comparing X-ray and visual emissions. The study has not yet been published in a peer-reviewed journal. See also References External links DASCH Home Page DASCH Logbook Transcription website 2001 establishments in Massachusetts Astronomy projects Harvard University National Science Foundation Organizations established in 2001 Smithsonian Institution research programs
Digital Access to a Sky Century @ Harvard
Astronomy
1,593
46,854,300
https://en.wikipedia.org/wiki/Beam%20park
Beam park is a radar mode used for space surveillance, particularly tracking space debris. In beam-park mode, a radar beam is kept in a fixed direction with respect to the Earth, while objects passing through the beam are tracked. In 24 hours, as a result of the Earth's rotation, the radar effectively scans a narrow strip through 360° of the celestial sphere. The scattered waves are detected by a receiver, and the measurements obtained during the observations can be used to determine object radar cross-section, time of peak occurrence, polarization ratio, Doppler shift and object rotation. The obtained information for each object is then processed and matched against data from previously catalogued objects. The beam-park mode can be used to detect both previously known and uncatalogued objects at any altitude, provided that the reflected power captured by the receiver can be distinguished from the noise. This limits the use of radar-based beam park observations to objects in Low-Earth Orbit (LEO). Optical instruments, in turn, have very good performance for objects in Geostationary Earth Orbit (GEO) and in Geostationary Transfer Orbit (GTO). The radar technique typically outperforms optical facilities in LEO and can conduct observations for longer periods, both during day and night, independently of the weather and object illumination by sunlight. The tracking sensitivity of radar-based space debris tracking systems can be further improved if they work in conjunction with another transmitter or receiver, forming a bistatic radar system. History Several US radars were the first to start carrying out beam-park tracking. These were the Haystack Radar in Massachusetts, Goldstone Radar in California, and multiple radars at the Kwajalein Atoll. Most notably, the Haystack radar is capable of tracking objects as small as 2 mm. The first known European beam-park experiments were carried out by the German TIRA system in 1993.
Performing observations in the L-band during a 10-hour experiment, the system was able to detect small-sized objects in LEO. The first 24-hour operational measurement campaign was performed on 13–14 December 1994, with TIRA and the Fylingdales Phased-Array Radar. TIRA can also be used as the transmitter in a bistatic system, in conjunction with the Effelsberg Radio Telescope functioning as the receiver, where the combined system has a detection size threshold of 1 cm. Dedicated space surveillance sensors, often operating in beam-park mode, include the French GRAVES bistatic radar and the defunct US Air Force Space Surveillance System (the original "Space Fence"), along with its replacement, the new Space Fence. Other systems currently used for space surveillance that operate in beam-park mode include the multistatic EISCAT incoherent scatter radar system and the Medicina Radio Observatory's Northern Cross Radio Telescope and 32 m parabolic dish antenna. See also ESA Space Situational Awareness Programme US Space Surveillance Network Kessler syndrome References Space debris Space radars
Beam park
Technology
604
13,141,596
https://en.wikipedia.org/wiki/JPTS
Jet Propellant Thermally Stable (JPTS) is a jet fuel originally developed in 1956 for the Lockheed U-2 reconnaissance aircraft. History Because of tight security restrictions enforced during the U-2's development, batches of fuel that were delivered to the aircraft's engine manufacturer were initially labeled LF-1. In 1956, a year after the U-2's first flight, a USAF Captain assigned to the Fuels Branch was instructed to buy a tank car load of LF-1 and have it shipped to an engine manufacturer. Not knowing what LF-1 was, he obtained a sample, had it analyzed, and determined that it was paraffinic kerosene, a fluid commonly known as charcoal lighter fluid (hence LF-1). Specification MIL-T-25524 was later written to include an additive for improving JPTS's thermal oxidative stability. Properties JPTS has a flash point of 43 °C (110 °F), a freezing point of -53 °C (-64 °F) and flammability limits of 1% and 6%. It appears as a clear, water-white liquid with a specific gravity of 0.816. It is insoluble in water. It is composed of a complex mixture of petroleum hydrocarbons. JPTS has a lower freeze point, lower viscosity, and higher thermal stability than standard aviation fuels. The fuel's low viscosity is needed to overcome the risk of it freezing at the low temperatures encountered during flight at high altitudes. JPTS also serves as a coolant for engines and aerodynamically heated surfaces. As the fuel flow to the U-2's engines at cruise altitudes is about sixteen times lower than at sea level, the dwell time over hot surfaces is longer and increases the chances of thermal breakdown; JPTS's high thermal stability is therefore desired to avoid coking and the deposition of varnishes in the piping. JPTS is a specialty fuel and is produced by only two oil refineries in the United States. As such, it has limited worldwide availability and costs over three times the per-gallon price of the Air Force's primary jet fuel, JP-8.
Research is under way to find a cheaper and easier alternative based on additives to commonly used jet fuels. A JP-8 based alternative, JP-8+100LT, is being considered. JP-8+100 has thermal stability increased by 100 °F over stock JP-8 and is only 0.5 cents per gallon more expensive; low-temperature additives can be blended into this stock to provide the desired cold performance. References See also JP-4 JP-6 JP-7 JP-8 Aviation fuel Aviation fuels
JPTS
Engineering
554
286,681
https://en.wikipedia.org/wiki/Flash%20point
The flash point of a material is the "lowest liquid temperature at which, under certain standardized conditions, a liquid gives off vapours in a quantity such as to be capable of forming an ignitable vapour/air mixture". The flash point is sometimes confused with the autoignition temperature, the temperature that causes spontaneous ignition. The fire point is the lowest temperature at which the vapors keep burning after the ignition source is removed. It is higher than the flash point, because at the flash point vapor may not be produced fast enough to sustain combustion. Neither flash point nor fire point depends directly on the ignition source temperature, but ignition source temperature is far higher than either the flash or fire point, and can increase the temperature of fuel above the usual ambient temperature to facilitate ignition. Fuels The flash point is a descriptive characteristic that is used to distinguish between flammable fuels, such as petrol (also known as gasoline), and combustible fuels, such as diesel. It is also used to characterize the fire hazards of fuels. Fuels which have a flash point less than are called flammable, whereas fuels having a flash point above that temperature are called combustible. Mechanism All liquids have a specific vapor pressure, which is a function of that liquid's temperature; its temperature dependence is described by the Clausius–Clapeyron relation. As temperature increases, vapor pressure increases. As vapor pressure increases, the concentration of vapor of a flammable or combustible liquid in the air increases. Hence, temperature determines the concentration of vapor of the flammable liquid in the air. A certain concentration of a flammable or combustible vapor is necessary to sustain combustion in air, the lower flammable limit, and that concentration is specific to each flammable or combustible liquid. The flash point is the lowest temperature at which there will be enough flammable vapor to support combustion when an ignition source is applied.
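The mechanism described above — vapour pressure rising with temperature until the vapour concentration reaches the lower flammable limit — can be sketched numerically. The Antoine coefficients and the LFL value below are hypothetical illustration values, not data for any real liquid:

```python
import math

# Rough flash-point estimate: the temperature at which a liquid's
# vapour pressure reaches the partial pressure corresponding to its
# lower flammable limit (LFL) at 1 atm. Vapour pressure is modelled
# with the Antoine equation: log10(P_mmHg) = A - B / (C + T_celsius).
# The coefficients and LFL below are hypothetical, for illustration.
A, B, C = 7.0, 1500.0, 230.0    # hypothetical Antoine coefficients
lfl_vol_percent = 1.0           # hypothetical lower flammable limit

# Vapour partial pressure needed to reach the LFL at 760 mmHg total.
p_target_mmhg = (lfl_vol_percent / 100.0) * 760.0

# Invert the Antoine equation to solve for temperature.
flash_point_estimate_c = B / (A - math.log10(p_target_mmhg)) - C
```

This is only the equilibrium picture behind closed-cup measurements; as the article notes, measured flash points are empirical and vary with apparatus and protocol.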
Measurement There are two basic types of flash point measurement: open cup and closed cup. In open cup devices, the sample is contained in an open cup which is heated and, at intervals, a flame brought over the surface. The measured flash point will actually vary with the height of the flame above the liquid surface and, at sufficient height, the measured flash point temperature will coincide with the fire point. The best-known example is the Cleveland open cup (COC). There are two types of closed cup testers: non-equilibrial, such as Pensky-Martens, where the vapours above the liquid are not in temperature equilibrium with the liquid, and equilibrial, such as Small Scale (commonly known as Setaflash), where the vapours are deemed to be in temperature equilibrium with the liquid. In both these types, the cups are sealed with a lid through which the ignition source can be introduced. Closed cup testers normally give lower values for the flash point than open cup (typically lower) and are a better approximation to the temperature at which the vapour pressure reaches the lower flammable limit. In addition to the Pensky-Martens flash point testers, other non-equilibrial testers include TAG and Abel, both of which are capable of cooling the sample below ambient for low flash point materials. The TAG flash point tester adheres to ASTM D56 and has no stirrer, while the Abel flash point tester adheres to IP 170 and ISO 13736 and has a stirring motor so the sample is stirred during testing. The flash point is an empirical measurement rather than a fundamental physical parameter. The measured value will vary with equipment and test protocol variations, including temperature ramp rate (in automated testers), time allowed for the sample to equilibrate, sample volume and whether the sample is stirred. Methods for determining the flash point of a liquid are specified in many standards.
For example, testing by the Pensky-Martens closed cup method is detailed in ASTM D93, IP34, ISO 2719, DIN 51758, JIS K2265 and AFNOR M07-019. Determination of flash point by the Small Scale closed cup method is detailed in ASTM D3828 and D3278, EN ISO 3679 and 3680, and IP 523 and 524. CEN/TR 15138 Guide to Flash Point Testing and ISO TR 29662 Guidance for Flash Point Testing cover the key aspects of flash point testing. Examples Gasoline (petrol) is a fuel used in a spark-ignition engine. The fuel is mixed with air within its flammable limits, heated by compression above its flash point, then ignited by the spark plug. To ignite, the fuel must have a low flash point, but in order to avoid preignition caused by residual heat in a hot combustion chamber, the fuel must have a high autoignition temperature. Diesel fuel flash points vary between . Diesel is suitable for use in a compression-ignition engine. Air is compressed until it heats above the autoignition temperature of the fuel, which is then injected as a high-pressure spray, keeping the fuel-air mix within flammable limits. A diesel-fueled engine has no ignition source (such as the spark plugs in a gasoline engine), so diesel fuel can have a high flash point, but must have a low autoignition temperature. Jet fuel flash points also vary with the composition of the fuel. Both Jet A and Jet A-1 have flash points between , close to that of off-the-shelf kerosene. Yet both Jet B and JP-4 have flash points between . Standardization Flash points of substances are measured according to standard test methods described and defined in a 1938 publication by T.L. Ainsley of South Shields entitled "Sea Transport of Petroleum" (Capt. P. Jansen). The test methodology defines the apparatus required to carry out the measurement, key test parameters, the procedure for the operator or automated apparatus to follow, and the precision of the test method.
Standard test methods are written and controlled by a number of national and international committees and organizations. The three main bodies are the CEN / ISO Joint Working Group on Flash Point (JWG-FP), ASTM D02.8B Flammability Section and the Energy Institute's TMS SC-B-4 Flammability Panel. See also Autoignition temperature Fire point Safety data sheet (SDS) References Combustion Threshold temperatures
Flash point
Physics,Chemistry
1,332
9,321,146
https://en.wikipedia.org/wiki/CRAL-TRIO%20domain
CRAL-TRIO domain is a protein structural domain that binds small lipophilic molecules. This domain is named after cellular retinaldehyde-binding protein (CRALBP) and the TRIO guanine exchange factor. The CRALBP protein carries 11-cis-retinol or 11-cis-retinaldehyde. It modulates the interaction of retinoids with visual cycle enzymes. TRIO is involved in coordinating actin remodeling, which is necessary for cell migration and growth. Other members of the family are alpha-tocopherol transfer protein and phosphatidylinositol-transfer protein (Sec14). They transport their substrates (alpha-tocopherol and phosphatidylinositol or phosphatidylcholine, respectively) between different intracellular membranes. The family also includes a guanine nucleotide exchange factor that may function as an effector of the RAC1 small G-protein. The N-terminal domain of the yeast ECM25 protein has been identified as containing a lipid-binding CRAL-TRIO domain. Structure The Sec14 protein was the first CRAL-TRIO domain protein for which the structure was determined. The structure contains several alpha helices as well as a beta sheet composed of 6 strands. Strands 2, 3, 4 and 5 form a parallel beta sheet, with strands 1 and 6 being anti-parallel. The structure also revealed a hydrophobic binding pocket for lipid binding. Human proteins containing this domain C20orf121; MOSPD2; PTPN9; RLBP1; RLBP1L1; RLBP1L2; SEC14L1; SEC14L2; SEC14L3; SEC14L4; TTPA; NF1; References External links - Calculated spatial positions of CRAL-TRIO domains in membrane Peripheral membrane proteins Protein domains Water-soluble transporters
CRAL-TRIO domain
Biology
396
32,463,401
https://en.wikipedia.org/wiki/Synthetic%20ecosystems
Synthetic ecosystems are on-chip integrated devices where cellular cultures (individuals) and ecosystem services - such as the renewal of growth, delivery of regulatory signals as well as removal of waste - are patterned into an integrated fluidic device using principles of landscape ecology, physiology and cell signaling. References Klitgord N, Segrè D (2010) Environments that Induce Synthetic Microbial Ecosystems. PLoS Comput Biol 6(11): e1001002. doi:10.1371/journal.pcbi.1001002 Systems ecology Artificial ecosystems
Synthetic ecosystems
Biology,Environmental_science
113
13,678,647
https://en.wikipedia.org/wiki/Node%20%28autonomous%20system%29
The behaviour of a linear autonomous system around a critical point is a node if the following conditions are satisfied: Each path converges to the critical point or diverges away from it (depending on the underlying equation) as t → ∞ (or as t → −∞). Furthermore, each path approaches the critical point asymptotically along a line. References Ordinary differential equations
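The definition can be illustrated with a minimal numerical sketch (the example system and tolerances below are chosen for illustration, not taken from the article): the system x' = −x, y' = −3y has real, negative, distinct eigenvalues, so the origin is a stable node.

```python
import math

# Illustrative stable node: x' = -x, y' = -3y has eigenvalues -1 and -3
# (real, negative, distinct), so every path converges to the critical
# point (0, 0) as t -> infinity.

def path(x0, y0, t):
    # Closed-form solution of the decoupled linear system.
    return x0 * math.exp(-t), y0 * math.exp(-3.0 * t)

x, y = path(2.0, 5.0, 10.0)

# Each path converges to the critical point...
print(abs(x) < 1e-3 and abs(y) < 1e-3)  # True
# ...and approaches it asymptotically along a line (here the x-axis),
# since y/x = (y0/x0) * exp(-2t) -> 0 as t grows.
print(abs(y / x) < 1e-6)  # True
```

Reversing the sign of both equations gives an unstable node: the same paths, traversed away from the critical point as t → ∞.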
Node (autonomous system)
Mathematics
68
77,562,364
https://en.wikipedia.org/wiki/NGC%204734
NGC 4734 is a spiral galaxy in the constellation of Virgo. Its velocity with respect to the cosmic microwave background is 7835 ± 23 km/s, which corresponds to a Hubble distance of 115.56 ± 8.10 Mpc (∼377 million light-years). It was discovered by British astronomer John Herschel on 7 April 1828. The SIMBAD database lists NGC 4734 as a LINER-type active galaxy nucleus, i.e. a galaxy whose nucleus has an emission spectrum characterized by broad lines of weakly ionized atoms. One supernova has been observed in NGC 4734: SN 2024gvc (type Ic, mag 19.7178) was discovered by the Zwicky Transient Facility on 17 April 2024. See also List of NGC objects (4001–5000) References External links 4734 043525 +01-33-019 07998 12486+0507 Virgo (constellation) 18280407 Discoveries by John Herschel Spiral galaxies
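The quoted Hubble distance follows from d = v / H0. A back-of-the-envelope check (a sketch; the Hubble constant value of ~67.8 km/s/Mpc is an assumption, since the article does not state which value was used):

```python
# Rough check of the quoted Hubble distance for NGC 4734: d = v / H0.
v_kms = 7835.0               # velocity relative to the CMB, km/s (from the article)
H0 = 67.8                    # assumed Hubble constant, km/s per Mpc
MLY_PER_MPC = 3.2616         # million light-years per megaparsec

d_mpc = v_kms / H0           # distance in megaparsecs
d_mly = d_mpc * MLY_PER_MPC  # distance in millions of light-years

print(round(d_mpc, 1))  # 115.6, matching the quoted 115.56 Mpc
print(round(d_mly))     # 377, matching the quoted ~377 million light-years
```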
NGC 4734
Astronomy
207
2,467,302
https://en.wikipedia.org/wiki/Sodium%20dithionite
Sodium dithionite (also known as sodium hydrosulfite) is a white crystalline powder with a sulfurous odor. Although it is stable in dry air, it decomposes in hot water and in acid solutions. Structure The structure has been examined by Raman spectroscopy and X-ray crystallography. The dithionite dianion has C2 symmetry, with the two SO2 groups almost eclipsed and a 16° O-S-S-O torsional angle. In the dihydrated form, the dithionite anion adopts a gauche conformation with a 56° O-S-S-O torsional angle. A weak S-S bond is indicated by the S-S distance of 239 pm, which is elongated by ca. 30 pm relative to a typical S-S bond. Because this bond is fragile, the dithionite anion dissociates in solution into [SO2]− radicals, as has been confirmed by EPR spectroscopy. It is also observed that 35S undergoes rapid exchange between S2O42− and SO2 in neutral or acidic solution, consistent with the weak S-S bond in the anion. Preparation Sodium dithionite is produced industrially by reduction of sulfur dioxide. Approximately 300,000 tons were produced in 1990. The route using zinc powder is a two-step process: 2SO2 + Zn → ZnS2O4 ZnS2O4 + 2NaOH → Na2S2O4 + Zn(OH)2 The sodium borohydride method obeys the following stoichiometry: NaBH4 + 8NaOH + 8SO2 → 4Na2S2O4 + NaBO2 + 6H2O Each equivalent of H− reduces two equivalents of sulfur dioxide. Formate has also been used as the reductant. Properties and reactions Hydrolysis Sodium dithionite is stable when dry, but aqueous solutions deteriorate due to the following reaction: 2 S2O42− + H2O → S2O32− + 2 HSO3− This behavior is consistent with the instability of dithionous acid. Thus, solutions of sodium dithionite cannot be stored for a long period of time. Anhydrous sodium dithionite decomposes to sodium sulfate and sulfur dioxide above 90 °C in the air. In the absence of air, it decomposes quickly above 150 °C to sodium sulfite, sodium thiosulfate, sulfur dioxide and a trace amount of sulfur. Redox reactions Sodium dithionite is a reducing agent.
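The borohydride stoichiometry above can be checked by simple electron bookkeeping (an illustrative sketch, not part of the article): each hydride supplies two electrons, and each sulfur atom is reduced from +4 in SO2 to +3 in dithionite.

```python
# Electron balance for: NaBH4 + 8 NaOH + 8 SO2 -> 4 Na2S2O4 + NaBO2 + 6 H2O
hydrides_per_bh4 = 4            # BH4- carries four hydride (H-) equivalents
electrons_per_hydride = 2       # each H- gives up two electrons
electrons_supplied = hydrides_per_bh4 * electrons_per_hydride

so2_consumed = 8                # SO2 molecules per NaBH4 in the equation
electrons_per_so2 = 1           # S(+4) in SO2 -> S(+3) in S2O4^2-
electrons_consumed = so2_consumed * electrons_per_so2

# 8 electrons supplied and 8 consumed: the equation balances.
print(electrons_supplied == electrons_consumed)  # True
# Equivalently, each hydride equivalent reduces two equivalents of SO2.
print(so2_consumed / hydrides_per_bh4)           # 2.0
```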
At pH 7, the potential is −0.66 V versus the normal hydrogen electrode. Redox occurs with formation of bisulfite: S2O42− + 2 H2O → 2 HSO3− + 2 e− + 2 H+ Sodium dithionite reacts with oxygen: Na2S2O4 + O2 + H2O → NaHSO4 + NaHSO3 These reactions exhibit complex pH-dependent equilibria involving bisulfite, thiosulfate, and sulfur dioxide. With organic carbonyls In the presence of aldehydes, sodium dithionite reacts either to form α-hydroxy-sulfinates at room temperature or to reduce the aldehyde to the corresponding alcohol above a temperature of 85 °C. Some ketones are also reduced under similar conditions. Uses Industry Sodium dithionite is used as a water-soluble reducing agent in some industrial dyeing processes. In the case of sulfur dyes and vat dyes, an otherwise water-insoluble dye can be reduced into its water-soluble alkali metal leuco salt. Indigo dye is sometimes processed in this way. Domestic and hobby uses Sodium dithionite can also be used for water treatment, aquarium water conditioners, gas purification, cleaning, and stripping. In addition to the textile industry, this compound is used in industries concerned with leather, foods, polymers, photography, and many others, often as a decolourising agent. It is even used domestically as a decolouring agent for white laundry when it has been accidentally stained by a dyed item slipping into the high-temperature washing cycle. It is usually available in 5-gram sachets termed hydrosulfite, after the antiquated name of the salt. It is the active ingredient in "Iron Out Rust Stain Remover", a commercial rust stain remover. Laboratory Sodium dithionite is often used in physiology experiments as a means of lowering the redox potential of solutions (Eo' = −0.66 V vs SHE at pH 7). Potassium ferricyanide is usually used as an oxidizing chemical in such experiments (Eo' ≈ 0.436 V at pH 7).
In addition, sodium dithionite is often used in soil chemistry experiments to determine the amount of iron that is not incorporated in primary silicate minerals. Hence, iron extracted by sodium dithionite is also referred to as "free iron." Aqueous solutions of sodium dithionite were once used to produce 'Fieser's solution' for the removal of oxygen from a gas stream. Pyrithione can be prepared in a two-step synthesis from 2-bromopyridine by oxidation to the N-oxide with a suitable peracid followed by substitution using sodium dithionite to introduce the thiol functional group. Photography It is used in Kodak fogging developer, FD-70. This is used in the second step in processing black and white positive images, for making slides. It is part of the Kodak Direct Positive Film Developing Outfit. Safety The wide use of sodium dithionite is attributable in part to its low toxicity at 2.5 g/kg (rats, oral). See also Dithionite References External links Sodium dithionite - ipcs inchem Bleaches Dithionites Sodium compounds Reducing agents
Sodium dithionite
Chemistry
1,247
4,769,975
https://en.wikipedia.org/wiki/Koeberg%20Alert
The Koeberg Alert alliance is an anti-nuclear activist organisation which emerged in 1983 from an earlier pressure group in Cape Town called "Stop Koeberg". Both were intended to halt construction of the first nuclear power station in South Africa at Duynefontein, 28 km NNW of Cape Town: the Koeberg Nuclear Power Station. After failing to influence the then ruling National Party, it turned to the broader democratic and anti-apartheid movement, hoping to influence future policy. Koeberg Alert formed an alliance with Earthlife Africa and the emerging Environmental Justice National Forum in the 1990s, and was revitalised in 2009 in opposition to President Thabo Mbeki's pebble bed modular reactor programme and the emergence of "Nuclear-1" (a project to build additional nuclear reactors in South Africa) under President Jacob Zuma. It currently organises various anti-nuclear campaigns, participates in the wider anti-nuclear and peace movements, and makes submissions and presentations to formal government processes relating to nuclear power. Representatives have attended international nuclear power related conferences and events, including in Yokohama, Fukushima and Sweden. In June 2021, Koeberg Alert's Peter Becker was appointed to the board of the National Nuclear Regulator. He was fired in February 2022 by the Minister of Minerals and Energy, Gwede Mantashe, who cited Becker's opposition to nuclear power. Notable people Some notable people active in the organisation: Mike Kantey – former secretary Keith Gottschalk – long-standing member Peter Becker – revived organisation in 2009 David Fig See also Campaign Against Nuclear Energy List of anti-war organizations References External links Koeberg Alert website Koeberg Nuclear Power Station CANE Anti-nuclear organizations Nuclear energy in South Africa Environmental organisations based in South Africa Civic and political organisations of South Africa
Koeberg Alert
Engineering
366
14,757,553
https://en.wikipedia.org/wiki/GATA6
Transcription factor GATA-6, also known as GATA-binding factor 6 (GATA6), is a protein that in humans is encoded by the GATA6 gene. The gene product preferentially binds the consensus sequence (A/T/C)GAT(A/T)(A). Clinical significance Mutations in the gene have been linked with pancreatic agenesis and congenital heart defects. Lung Endodermal Epithelial Development GATA-6, a zinc finger transcription factor, is important in the endodermal differentiation of organ tissues. It is also implicated in proper lung development, controlling the late differentiation stages of alveolar epithelium and aquaporin-5 promoter activation. Furthermore, GATA-6 has been linked to the production of LIF, a cytokine that encourages proliferation of endodermal embryonic stem cells and blocks early epiblast differentiation. If left unregulated in the developing embryo, this cytokine production and chemical signal contributes to the phenotypes discussed further below. Upon the disruption of GATA-6 in an embryo, distal lung epithelial development is stunted in transgenic mouse models. The progenitor cells, or stem cells, for alveolar epithelial tissues develop and are specified appropriately; however, further differentiation does not occur. Distal-proximal bronchiole development is also affected, resulting in a reduced quantity of airway exchange sites. This branching deficit, which will cause bilateral pulmonary hypoplasia after birth, has been locally associated with areas lacking differentiated alveolar epithelium, implicating this phenotype as inherent to endodermal function, and thus may be indirectly linked to improper GATA-6 expression. That is, a deficit of bronchiole branching may not be a result of direct transcriptional error in GATA-6, but rather a side effect of such an error. See also GATA transcription factor References Further reading External links Transcription factors
GATA6
Chemistry,Biology
417
56,367,760
https://en.wikipedia.org/wiki/Kepler-277c
Kepler-277c (also known by its Kepler Objects of Interest designation KOI-1215.02) is the third most massive and second-largest rocky planet ever discovered, with a mass about 64 times that of Earth. Discovered in 2014 by the Kepler Space Telescope, Kepler-277c is a Neptune-sized exoplanet with a very high mass and density for an object of its radius, suggesting a composition made mainly of rock with some amount of water. Along with its sister planet, Kepler-277b, the planet's mass was determined using transit-timing variations (TTVs). Characteristics Size and temperature Kepler-277c was detected using the transit method and TTVs, allowing both its mass and radius to be determined to some level. Its radius is approximately 3.36 times that of Earth, close to the size of Neptune. At that radius, most planets should be gaseous mini-Neptunes with no solid surface. However, the mass of Kepler-277c is extremely high for its size. Transit-timing variations indicate a planetary mass of about 64.2 Earth masses, close to Saturn's mass of 95.16 Earth masses. The planet has a density of approximately 9.33 g/cm3 and about 5.7 times the surface gravity of Earth. Such a high density for an object of this size implies that, like its sister planet, Kepler-277c is an enormous rock-based planet with a small portion of its mass as water. It is currently the third most massive and second-largest terrestrial planet ever discovered, behind Kepler-277b in mass and PSR J1719-1438 b in both radius and mass. Due to its proximity to its host star, Kepler-277c is quite hot, with an equilibrium temperature high enough to melt certain metals. Orbit Kepler-277c orbits close to its host star, with one orbit lasting 33.006 days. Its semi-major axis, or average distance from the parent object, is about 0.209 AU. For comparison, the planet Mercury takes 88 days to orbit the Sun at a distance of 0.38 AU. At this distance, Kepler-277c is very hot and most likely tidally locked to its host star.
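The quoted density and surface gravity follow directly from the mass and radius (a sketch; the physical constants used, and the reading of the quoted figures as 64.2 Earth masses and 3.36 Earth radii, are assumptions):

```python
import math

M_EARTH = 5.972e24   # Earth mass, kg (assumed constant)
R_EARTH = 6.371e6    # Earth radius, m (assumed constant)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2

m = 64.2 * M_EARTH   # planet mass, taking the quoted 64.2 as Earth masses
r = 3.36 * R_EARTH   # planet radius, taking the quoted 3.36 as Earth radii

density = m / ((4.0 / 3.0) * math.pi * r ** 3)  # bulk density, kg/m^3
gravity = G * m / r ** 2                        # surface gravity, m/s^2

print(round(density / 1000.0, 2))  # 9.33 g/cm^3, matching the article
print(round(gravity / 9.81, 1))    # 5.7 times Earth's surface gravity
```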
It is close to a 1:2 resonance with Kepler-277b, which orbits at an average distance of about 0.136 AU. Host star The parent star Kepler-277 is a large yellow star. It has about 1.69 times the radius and 1.12 times the mass of the Sun, with a temperature of 5946 K, a metallicity of -0.315 [Fe/H], and an unknown age. For comparison, the Sun has a temperature of 5778 K, a metallicity of 0.00 [Fe/H], and an age of about 4.5 billion years. The large radius in comparison to its mass and temperature suggests that Kepler-277 could be a subgiant star. See also Mega-Earth Kepler-277b References Exoplanets discovered in 2014 Transiting exoplanets Exoplanets discovered by the Kepler space telescope Lyra Mega-Earths
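The orbital figures above are mutually consistent, as a quick Kepler's-third-law check shows (a sketch; it assumes the quoted stellar mass of 1.12 is in solar masses): with P in years, a in AU, and M in solar masses, P² = a³ / M.

```python
import math

a_au = 0.209   # semi-major axis of Kepler-277c, AU (from the article)
m_star = 1.12  # stellar mass, taken to be in solar masses (assumption)

# Kepler's third law in solar units: P [yr] = sqrt(a^3 / M).
P_years = math.sqrt(a_au ** 3 / m_star)
P_days = P_years * 365.25

print(round(P_days, 1))  # 33.0, matching the quoted 33.006-day period
```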
Kepler-277c
Astronomy
630
10,925,256
https://en.wikipedia.org/wiki/G.Skill
G.SKILL International Enterprise is a Taiwanese computer hardware manufacturing company. The company's target customers are overclocking computer users. It produces a variety of high-end PC products and is best known for its DRAM products. History Based in Taiwan, the G.SKILL corporation was established in 1989. In 2003, the company debuted as a maker of computer memory. The company currently operates through several distributors and resellers in North America, Europe, Asia, and the Middle East. Products Memory G.SKILL is known for its range of DDR, DDR2, DDR3, DDR4 and DDR5 computer memory. RAM is available in single-channel, dual-channel, triple-channel and quad-channel packs for desktops, workstations and HTPCs, as well as netbooks and laptops. G.SKILL was reported to be the only manufacturer whose DDR4 memory was not vulnerable to the rowhammer security exploit. The company does not manufacture the memory dies; it purchases the dies and assembles them into DIMM memory modules ready for sale to customers. In February 2020, G.SKILL announced a DDR4 256 GB memory kit that, unusually for kits of that size at the time, operated at speeds above JEDEC specifications. Solid-state drive On 12 May 2008 G.SKILL announced its first SATA II 2.5" solid-state drives (SSDs) with 32 GB or 64 GB of capacity. On 22 October 2014 G.SKILL released its first Extreme Performance Phoenix Blade Series 480 GB PCIe 2.0 x8 SSD using MLC NAND, capable of maximum read and write speeds of up to 2,000 MB per second and 245K IOPS. The company has also produced flash cards in several formats including Secure Digital (SD) and MultiMediaCard (MMC), in addition to high-capacity USB 2.0 and 3.0 flash drives. Peripherals Mechanical gaming keyboard On 14 September 2015 G.SKILL announced the availability of the new RIPJAWS series' KM780 RGB and KM780 MX mechanical gaming keyboards with genuine Cherry MX key switches.
On 21 August 2019, G.SKILL announced the KM360 mechanical keyboard with a $49.99 price tag and Cherry MX Red switches (the linear variant). Laser gaming mouse On 24 September 2015 G.SKILL released the new RIPJAWS series' MX780 customizable RGB laser gaming mouse. See also List of companies of Taiwan References External links G.SKILL Official Website 1989 establishments in Taiwan Computer companies of Taiwan Computer hardware companies Electronics companies established in 1989 Computer memory companies Electronics companies of Taiwan Manufacturing companies based in Taipei Taiwanese brands
G.Skill
Technology
542