Columns: id (int64, 580 to 79M) · url (string, 31 to 175 chars) · text (string, 9 to 245k chars) · source (string, 1 to 109 chars) · categories (string, 160 classes) · token_count (int64, 3 to 51.8k)
73,119,653
https://en.wikipedia.org/wiki/Aluminium%E2%80%93zinc%20alloys
Aluminium brass is a technically rather uncommon term for high-strength and partly seawater-resistant copper-zinc cast and wrought alloys with 55–66% copper, up to 7% aluminium, up to 4.5% iron and up to 5% manganese. Aluminium bronze, in contrast, is technically correct as a bronze: a zinc-free copper casting alloy with aluminium content. The term "special brass" is much more common for these alloys; it also covers alloys that add further characteristic elements to the copper-zinc base. In addition to the already mentioned iron and manganese, lead, nickel and silicon are also found as alloying components. Because the aluminium content is susceptible to oxidation at the usual melting temperatures of around 900 °C, the alloys require careful melting and melt treatment. Even when pouring, attention must be paid to any oxides that form. 7000 series 7000 series alloys are alloyed with zinc, and can be precipitation hardened to the highest strengths of any aluminium alloy. Most 7000 series alloys include magnesium and copper as well. References Further reading Publication series of the DKI, Berlin, number L5, "Copper-Zinc Alloys". Foundry Lexicon, 17th edition, Schiele and Schön, Berlin. Aluminium–zinc alloys
Aluminium–zinc alloys
Chemistry
259
4,243,413
https://en.wikipedia.org/wiki/NGC%203766
NGC 3766 (also known as Caldwell 97) is an open star cluster in the southern constellation Centaurus. It is located in the vast star-forming region known as the Carina molecular cloud, and was discovered by Nicolas Louis de Lacaille during his astrometric survey in 1751–1752. At a distance of about 1745 pc, the cluster subtends a diameter of about 12 minutes of arc. There are 137 listed stars, but many are likely non-members, with only 36 having accurate photometric data. It has a total apparent magnitude of 5.3 and an integrated spectral type of B1.7. NGC 3766 is relatively young, with an estimated age of 14.4 million years (a log age of 7.160), and is approaching us at 14.8 km/s. This cluster contains eleven Be stars, two red supergiants and four Ap stars. 36 examples of an unusual type of variable star were discovered in the cluster. These fast-rotating pulsating B-type stars vary by only a few hundredths of a magnitude with periods of less than half a day. They are main sequence stars, hotter than δ Scuti variables and cooler than slowly pulsating B stars. See also New General Catalogue References External links NGC 3766 at SEDS WEBDA Data on NGC 3766 by Lynga Open clusters 3766 Centaurus 097b
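As a quick check of the age conversion (the quoted log age is the base-10 logarithm of the age in years):

age = 10^7.160 yr ≈ 1.45 × 10^7 yr ≈ 14.4 million years.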
NGC 3766
Astronomy
280
30,610,952
https://en.wikipedia.org/wiki/Wynnella%20auricula
Wynnella auricula is a species of fungus in the family Helvellaceae, and the type species of genus Wynnella. It was first described in 1763 by German mycologist Jacob Christian Schäffer as Peziza auricula. Jean Louis Émile Boudier transferred it to Wynnella in 1885. References External links Pezizales Fungi of Europe Fungi described in 1763 Taxa named by Jacob Christian Schäffer Fungus species
Wynnella auricula
Biology
92
2,769,817
https://en.wikipedia.org/wiki/Motor%20drive
A motor drive is a physical system that includes a motor. An adjustable speed motor drive is a system that includes a motor that has multiple operating speeds. A variable speed motor drive is a system that includes a motor that is continuously variable in speed. If the motor is generating electrical energy rather than using it, the drive could be called a generator drive, but it is often still referred to as a motor drive. A variable frequency drive (VFD) or variable speed drive (VSD) describes the electronic portion of the system that controls the speed of the motor. More generally, the term drive describes equipment used to control the speed of machinery. Many industrial processes such as assembly lines must operate at different speeds for different products. Where process conditions demand adjustment of flow from a pump or fan, varying the speed of the drive may save energy compared with other techniques for flow control. Where speeds may be selected from several different pre-set ranges, the drive is usually said to be adjustable speed. If the output speed can be changed without steps over a range, the drive is usually referred to as variable speed. Adjustable and variable speed drives may be purely mechanical (termed variators), electromechanical, hydraulic, or electronic. Sometimes motor drive refers to a drive used to control a motor, and the term is then used interchangeably with VFD or VSD. Electric motors AC electric motors can be run in fixed-speed operation determined by the number of stator pole pairs in the motor and the frequency of the alternating current supply. AC motors can be made for "pole changing" operation, reconnecting the stator winding to vary the number of poles so that two, sometimes three, speeds are obtained. For example, a machine with four physical pole pairs (eight poles) could be connected to run with either two or four pole pairs, giving two speeds: at 60 Hz, these would be 1800 RPM and 900 RPM. If speed changes are rare, the motor may initially be connected for one speed and then re-wired for the other as process conditions change; alternatively, magnetic contactors can be used to switch between the two speeds as process needs fluctuate. Connections for more than three speeds are uneconomic, as cost rises with the number of pole-pair configurations. If many different speeds or continuously variable speeds are required, other methods must be used. Direct-current motors allow speed changes by adjusting the shunt field current. Another way of changing the speed of a direct current motor is to change the voltage applied to the armature. An adjustable-speed motor drive might consist of an electric motor and a controller used to adjust the motor's operating speed. The combination of a constant-speed motor and a continuously adjustable mechanical speed-changing device might also be called an "adjustable speed motor drive". Power electronics-based variable frequency drives are rapidly making older technologies redundant. Reasons for using adjustable speed drives Process control and energy conservation are the two primary reasons for using an adjustable-speed drive. Historically, adjustable-speed drives were developed for process control, but energy conservation has emerged as an equally important objective. Acceleration control An adjustable-speed drive can often provide smoother operation compared to an alternative fixed-speed mode of operation. 
For example, in a sewage lift station, sewage usually flows through sewer pipes under the force of gravity to a wet well location. From there it is pumped up to a treatment process. When fixed-speed pumps are used, the pumps are set to start when the level of the liquid in the wet well reaches some high point and to stop when the level has been reduced to a low point. Cycling the pumps on and off results in frequent high surges of starting current that impose electromagnetic and thermal stresses on the motors and power control equipment; the pumps and pipes are subjected to mechanical and hydraulic stresses; and the sewage treatment process is forced to accommodate surges in the flow of sewage through the process. When adjustable speed drives are used, the pumps operate continuously at a speed that increases as the wet well level increases. This matches the outflow to the average inflow and provides a much smoother operation of the process. Saving energy by using efficient adjustable-speed drives Fans and pumps consume a large part of the energy used by industrial electrical motors. Where fans and pumps serve a varying process load, a simple way to vary the delivered quantity of fluid is with a damper or valve in the outlet of the fan or pump, which, by its increased pressure drop, reduces the flow in the process. However, this additional pressure drop represents energy loss. Sometimes it is economically practical to install a device that recovers this otherwise lost energy. With a variable-speed drive on the pump or fan, the supply can be adjusted to match demand and no extra loss is introduced. For example, when a fan is driven directly by a fixed-speed motor, the airflow is designed for the maximum demand of the system, and so will usually be higher than it needs to be. Airflow can be regulated using a damper, but it is more efficient to regulate the fan motor speed directly. Following the affinity laws, at 50% of the airflow the variable-speed motor consumes only about 20% of the rated input power, whereas the fixed-speed motor still consumes about 85% of the rated input power at half the flow (a rough calculation is sketched after the overview of mechanical drives below). Types of drives Some prime movers (internal combustion engines, reciprocating or turbine steam engines, water wheels, and others) have a range of operating speeds which can be varied continuously (by adjusting fuel rate or similar means). However, efficiency may be low at extremes of the speed range, and there may be system reasons why the prime mover speed cannot be maintained at very low or very high speeds. Before electric motors were invented, mechanical speed changers were used to control the mechanical power provided by water wheels and steam engines. When electric motors came into use, means of controlling their speed were developed almost immediately. Today, various types of mechanical drives, hydraulic drives and electric drives compete with one another in the industrial drives market. Mechanical drives There are two types of mechanical drives: variable-pitch drives and traction drives. Variable-pitch drives are pulley and belt drives in which the pitch diameter of one or both pulleys can be adjusted. Traction drives transmit power through metal rollers running against mating metal rollers. The input-output speed ratio is adjusted by moving the rollers to change the diameters of the contact path. Many different roller shapes and mechanical designs have been used. 
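The affinity-law comparison above can be checked in a few lines of Python. Fan power scales roughly with the cube of speed (and hence of flow); the small constant added for drive and motor losses is an assumed illustrative value, as is quoting the 85% damper figure from the text:

def vfd_power_fraction(flow_fraction, drive_losses=0.07):
    # Affinity laws: flow ~ speed, power ~ speed^3, so ideal power is the
    # cube of the flow fraction; a small assumed adder covers drive losses.
    return flow_fraction ** 3 + drive_losses

print(f"VFD at 50% flow: {vfd_power_fraction(0.5):.0%} of rated power")   # ~20%
print("Damper at 50% flow: ~85% of rated power (figure from the text)")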
Hydraulic adjustable speed drives There are three types of hydraulic drives: hydrostatic drives, hydrodynamic drives and hydroviscous drives. A hydrostatic drive consists of a hydraulic pump and a hydraulic motor. Since positive displacement pumps and motors are used, one revolution of the pump or motor corresponds to a set volume of fluid flow that is determined by the displacement, regardless of speed or torque. Speed is regulated by regulating the fluid flow with a valve or by changing the displacement of the pump or motor. Many different design variations have been used. A swash plate drive employs an axial piston pump or motor in which the swash plate angle can be changed to adjust the displacement and thus the speed. Hydrodynamic drives or fluid couplings use oil to transmit torque between an impeller on the constant-speed input shaft and a rotor on the adjustable-speed output shaft. The torque converter in the automatic transmission of a car is a hydrodynamic drive. A hydroviscous drive consists of one or more discs connected to an input shaft pressed against a similar disc or discs connected to an output shaft. Torque is transmitted from the input shaft to the output shaft through an oil film between the discs. The transmitted torque is proportional to the pressure exerted by a hydraulic cylinder that presses the discs together. This effect may be used as a clutch, such as the Hele-Shaw clutch, or as a variable-speed drive, such as the Beier variable-ratio gear. Continuously variable transmission (CVT) Mechanical and hydraulic adjustable speed drives are usually called "transmissions" or "continuously variable transmissions" when they are used in vehicles, farm equipment and some other types of equipment. Electric adjustable speed drives Types of control Control can be manual, by means of a potentiometer or a linear Hall-effect device (which is more resistant to dust and grease), or automatic, for example using a rotational detector such as a Gray-code optical encoder. Types of drives There are three general categories of electric drives: DC motor drives, eddy current drives and AC motor drives. Each of these general types can be further divided into numerous variations. Electric drives generally include both an electric motor and a speed control unit or system. The term drive is often applied to the controller without the motor. In the early days of electric drive technology, electromechanical control systems were used. Later, electronic controllers were designed using various types of vacuum tubes. As suitable solid state electronic components became available, new controller designs incorporated the latest electronic technology. DC drives DC drives are DC motor speed control systems. Since the speed of a DC motor is directly proportional to armature voltage and inversely proportional to motor flux (which is a function of field current), either armature voltage or field current can be used to control speed. Eddy current drives An eddy current drive (sometimes called a "Dynamatic drive", after one of the most common brand names) consists of a fixed-speed motor (generally an induction motor) and an eddy current clutch. The clutch contains a fixed-speed rotor and an adjustable-speed rotor separated by a small air gap. A direct current in a field coil produces a magnetic field that determines the torque transmitted from the input rotor to the output rotor. 
The controller provides closed loop speed regulation by varying the clutch current, allowing the clutch to transmit only enough torque to operate at the desired speed. Speed feedback is typically provided via an integral AC tachometer. Eddy current drives are slip-controlled systems whose slip energy is necessarily all dissipated as heat; they are therefore generally less efficient than drives based on AC/DC-AC conversion. The motor develops the torque required by the load and operates at full speed. The output shaft transmits the same torque to the load, but turns at a slower speed. Since power is proportional to torque multiplied by speed, the input power is proportional to motor speed times operating torque, while the output power is output speed times operating torque. The difference between the motor speed and the output speed is called the slip speed. Power proportional to the slip speed times operating torque is dissipated as heat in the clutch. While it has been surpassed by the variable-frequency drive in most variable-speed applications, the eddy current clutch is still often used to couple motors to high-inertia loads that are frequently stopped and started, such as stamping presses, conveyors, hoisting machinery, and some larger machine tools, allowing gradual starting, with less maintenance than a mechanical clutch or hydraulic transmission. AC drives AC drives are AC motor speed control systems. A slip-controlled wound-rotor induction motor (WRIM) drive controls speed by varying motor slip via rotor slip rings, either by electronically recovering slip power fed back to the stator bus or by varying the resistance of external resistors in the rotor circuit. Along with eddy current drives, resistance-based WRIM drives have lost popularity because they are less efficient than AC/DC-AC-based WRIM drives, and they are used only in special situations. Slip energy recovery systems return energy to the WRIM's stator bus, converting slip energy and feeding it back to the stator supply; such recovered energy would otherwise be wasted as heat in resistance-based WRIM drives. Slip energy recovery variable-speed drives are used in such applications as large pumps and fans, wind turbines, shipboard propulsion systems, large hydro pumps and generators, and utility energy storage flywheels. Early slip energy recovery systems using electromechanical components for AC/DC-AC conversion (i.e., consisting of rectifier, DC motor and AC generator) are termed Kramer drives, with more recent systems using variable-frequency drives (VFDs) being referred to as static Kramer drives. In general, a VFD in its most basic configuration controls the speed of an induction or synchronous motor by adjusting the frequency of the power supplied to the motor. When changing VFD frequency in standard low-performance variable-torque applications using Volt-per-Hertz (V/Hz) control, the AC motor's voltage-to-frequency ratio is maintained constant between the minimum and maximum operating frequencies up to a base frequency, while its power varies (a minimal sketch of this rule follows this section). Constant voltage operation above base frequency, and therefore a reduced V/Hz ratio, provides reduced torque and constant power capability. Regenerative AC drives are a type of AC drive with the capacity to recover the braking energy of a load moving faster than the motor speed (an overhauling load) and return it to the power system. 
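A minimal sketch of the V/Hz rule described above, with illustrative parameter values (a 460 V, 60 Hz machine is assumed): below base frequency the voltage command tracks frequency so the ratio, and roughly the motor flux, stays constant; above base frequency the voltage is clamped at rated, giving the constant-power, reduced-torque region.

def volts_per_hertz(f_cmd, v_rated=460.0, f_base=60.0):
    # Below f_base: hold V/f constant (constant-torque region).
    # Above f_base: clamp V at rated (field weakening, constant power).
    return min(v_rated * f_cmd / f_base, v_rated)

for f in (15, 30, 60, 90):
    print(f"{f:>3} Hz -> {volts_per_hertz(f):6.1f} V")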
See also DC injection braking Doubly fed electric machine Regenerative variable-frequency drives Scherbius Drive References Robotics hardware Electric motors Electric power systems components Mechanical power transmission Mechanical power control Variators Electric motor control
Motor drive
Physics,Technology,Engineering
2,727
34,546,086
https://en.wikipedia.org/wiki/Polymer%20architecture
Polymer architecture in polymer science relates to the way branching leads to a deviation from a strictly linear polymer chain. Branching may occur randomly, or reactions may be designed so that specific architectures are targeted. It is an important microstructural feature. A polymer's architecture affects many of its physical properties, including solution viscosity, melt viscosity, solubility in various solvents, glass transition temperature and the size of individual polymer coils in solution. Different polymer architectures Random branching Branches can form when the growing end of a polymer molecule reaches either (a) back around onto itself or (b) onto another polymer chain, both of which, via abstraction of a hydrogen, can create a mid-chain growth site. Branching can be quantified by the branching index. Cross linked polymer An effect related to branching is chemical crosslinking - the formation of covalent bonds between chains. Crosslinking tends to increase Tg and to increase strength and toughness. Among other applications, this process is used to strengthen rubbers in a process known as vulcanization, which is based on crosslinking by sulfur. Car tires, for example, are highly crosslinked in order to reduce the leaking of air out of the tire and to improve their durability. Eraser rubber, on the other hand, is not crosslinked, to allow flaking of the rubber and prevent damage to the paper. Polymerization of pure sulfur also explains why molten sulfur becomes more viscous at elevated temperatures. A polymer molecule with a high degree of crosslinking is referred to as a polymer network. A sufficiently high crosslink-to-chain ratio may lead to the formation of a so-called infinite network or gel, in which each chain is connected to at least one other. Complex architectures With the continual development of living polymerization, the synthesis of polymers with specific architectures has become more and more facile. Architectures such as star polymers, comb polymers, brush polymers, dendronized polymers, dendrimers and ring polymers are possible. Complex architecture polymers can be synthesized either with the use of specially tailored starting compounds or by first synthesising linear chains which undergo further reactions to become connected together. Knotted polymers consist of multiple intramolecular cyclization units within a single polymer chain. Linear polymers may also fold into topological circuits, formally classified by their contact topology. Effect of architecture on physical properties In general, the higher the degree of branching, the more compact the polymer chain. Branching also affects chain entanglement, the ability of chains to slide past one another, in turn affecting the bulk physical properties. Long chain branches may increase polymer strength, toughness, and the glass transition temperature (Tg) due to an increase in the number of entanglements per chain. A random and short chain length between branches, on the other hand, may reduce polymer strength due to disruption of the chains' ability to interact with each other or crystallize. An example of the effect of branching on physical properties can be found in polyethylene. High-density polyethylene (HDPE) has a very low degree of branching, is relatively stiff, and is used in applications such as bullet-proof vests. Low-density polyethylene (LDPE), on the other hand, has significant numbers of both long and short branches, is relatively flexible, and is used in applications such as plastic films. Dendrimers are a special case of branched polymer in which every monomer unit is also a branch point; this tends to reduce intermolecular chain entanglement and crystallization. A related architecture, the dendritic polymer, is not perfectly branched but shares similar properties with dendrimers due to its high degree of branching. The degree of branching that occurs during polymerisation can be influenced by the functionality of the monomers that are used. For example, in a free radical polymerisation of styrene, addition of divinylbenzene, which has a functionality of 2, will result in the formation of a branched polymer. See also Circuit topology (fold architecture of linear polymers) References Polymers
Polymer architecture
Chemistry,Materials_science
847
23,434,867
https://en.wikipedia.org/wiki/C7H5NS
The molecular formula C7H5NS (molar mass: 135.19 g/mol, exact mass: 135.0143 u) may refer to: Benzothiazole Phenyl isothiocyanate (PITC)
C7H5NS
Chemistry
66
477,578
https://en.wikipedia.org/wiki/Whitney%20embedding%20theorem
In mathematics, particularly in differential topology, there are two Whitney embedding theorems, named after Hassler Whitney: The strong Whitney embedding theorem states that any smooth real m-dimensional manifold (required also to be Hausdorff and second-countable) can be smoothly embedded in the real 2m-space R^2m, if m > 0. This is the best linear bound on the smallest-dimensional Euclidean space that all m-dimensional manifolds embed in, as the real projective spaces of dimension m cannot be embedded into real (2m − 1)-space if m is a power of two (as can be seen from a characteristic class argument, also due to Whitney). The weak Whitney embedding theorem states that any continuous function from an n-dimensional manifold to an m-dimensional manifold may be approximated by a smooth embedding provided m > 2n. Whitney similarly proved that such a map could be approximated by an immersion provided m > 2n − 1. This last result is sometimes called the Whitney immersion theorem. About the proof Weak embedding theorem The weak Whitney embedding is proved through a projection argument. When the manifold is compact, one can first use a covering by finitely many local charts and then reduce the dimension with suitable projections. Strong embedding theorem The general outline of the proof is to start with an immersion f : M → R^2m with transverse self-intersections. These are known to exist from Whitney's earlier work on the weak immersion theorem. Transversality of the double points follows from a general-position argument. The idea is to then somehow remove all the self-intersections. If M has boundary, one can remove the self-intersections simply by isotoping M into itself (the isotopy being in the domain of f), to a submanifold of M that does not contain the double-points. Thus, we are quickly led to the case where M has no boundary. Sometimes it is impossible to remove the double-points via an isotopy—consider for example the figure-8 immersion of the circle in the plane. In this case, one needs to introduce a local double point. Once one has two opposite double points, one constructs a closed loop connecting the two, giving a closed path in R^2m. Since R^2m is simply connected, one can assume this path bounds a disc, and provided 2m > 4 one can further assume (by the weak Whitney embedding theorem) that the disc is embedded in R^2m such that it intersects the image of M only in its boundary. Whitney then uses the disc to create a 1-parameter family of immersions, in effect pushing M across the disc, removing the two double points in the process. In the case of the figure-8 immersion with its introduced double-point, the push across move is quite simple. This process of eliminating opposite sign double-points by pushing the manifold along a disc is called the Whitney Trick. To introduce a local double point, Whitney created a family of immersions α_m : R^m → R^2m which are approximately linear outside of the unit ball, but contain a single double point. For m = 1 such an immersion is given by α_1(t) = (1/(1 + t^2), t − 2t/(1 + t^2)). Notice that if α_1 is considered as a map to R^3 with zero third coordinate, then the double point can be resolved to an embedding β(t, a) = (1/((1 + t^2)(1 + a^2)), t − 2t/((1 + t^2)(1 + a^2)), ta/((1 + t^2)(1 + a^2))). Notice β(t, 0) = α_1(t), and for a ≠ 0, β(t, a) is an embedding as a function of t. For higher dimensions m, there are α_m that can be similarly resolved in R^(2m+1). For an embedding into R^5, for example, define α_2(t1, t2) = (β(t1, t2), t2). This process ultimately leads one to the definition α_m(t1, …, tm) = (1/u, t1 − 2t1/u, t1·t2/u, t2, t1·t3/u, t3, …, t1·tm/u, tm), where u = (1 + t1^2)(1 + t2^2)⋯(1 + tm^2). The key property of α_m is that it is an embedding except for the double point α_m(1, 0, …, 0) = α_m(−1, 0, …, 0). Moreover, for |(t1, …, tm)| large, it is approximately the linear embedding (0, t1, 0, t2, …, 0, tm). 
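A quick numeric check of the formulas reconstructed above (a sketch; function names are chosen for illustration): α has a single double point at t = ±1, and the resolved map β separates the two preimages once a ≠ 0.

def alpha(t):
    # Whitney's immersion of R^1 in R^2 with exactly one double point.
    return (1 / (1 + t*t), t - 2*t / (1 + t*t))

def beta(t, a):
    # Resolution of the double point in R^3; beta(t, 0) is alpha in the plane z = 0.
    u = (1 + t*t) * (1 + a*a)
    return (1 / u, t - 2*t / u, t*a / u)

print(alpha(1.0), alpha(-1.0))          # both (0.5, 0.0): the double point
print(beta(1.0, 0.5), beta(-1.0, 0.5))  # distinct points once a != 0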
Eventual consequences of the Whitney trick The Whitney trick was used by Stephen Smale to prove the h-cobordism theorem, from which follow the Poincaré conjecture in dimensions m ≥ 5 and the classification of smooth structures on discs (also in dimensions 5 and up). This provides the foundation for surgery theory, which classifies manifolds in dimension 5 and above. Given two oriented submanifolds of complementary dimensions in a simply connected manifold of dimension ≥ 5, one can apply an isotopy to one of the submanifolds so that all the points of intersection have the same sign. History The occasion of the proof by Hassler Whitney of the embedding theorem for smooth manifolds is said (rather surprisingly) to have been the first complete exposition of the manifold concept, precisely because it brought together and unified the differing concepts of manifolds at the time: no longer was there any confusion as to whether abstract manifolds, intrinsically defined via charts, were any more or less general than manifolds extrinsically defined as submanifolds of Euclidean space. See also the history of manifolds and varieties for context. Sharper results Although every n-manifold embeds in R^2n, one can frequently do better. Let e(n) denote the smallest integer so that all compact connected n-manifolds embed in R^e(n). Whitney's strong embedding theorem states that e(n) ≤ 2n. For n = 1, 2 we have e(n) = 2n, as the circle and the Klein bottle show. More generally, for n = 2^k we have e(n) = 2n, as the 2^k-dimensional real projective space shows. Whitney's result can be improved to e(n) ≤ 2n − 1 unless n is a power of 2. This is a result of André Haefliger and Morris Hirsch (for n > 4) and C. T. C. Wall (for n = 3); these authors used important preliminary results and particular cases proved by Hirsch, William S. Massey, Sergey Novikov and Vladimir Rokhlin. At present the function e is not known in closed form for all integers (compare to the Whitney immersion theorem, where the analogous number is known). Restrictions on manifolds One can strengthen the results by putting additional restrictions on the manifold. For example, the n-sphere always embeds in R^(n+1) – which is the best possible (closed n-manifolds cannot embed in R^n). Any compact orientable surface and any compact surface with non-empty boundary embeds in R^3, though any closed non-orientable surface needs R^4. If N is a compact orientable n-dimensional manifold, then N embeds in R^(2n−1) (for n not a power of 2 the orientability condition is superfluous). For n a power of 2 this is a result of André Haefliger and Morris Hirsch (for n > 4) and Fuquan Fang (for n = 4); these authors used important preliminary results proved by Jacques Boéchat and Haefliger, Simon Donaldson, Hirsch and William S. Massey. Haefliger proved that if N is a compact n-dimensional k-connected manifold, then N embeds in R^(2n−k) provided 2k + 3 ≤ n. Isotopy versions A relatively 'easy' result is to prove that any two embeddings of a 1-manifold into R^4 are isotopic (see Knot theory#Higher dimensions). This is proved using general position, which also allows one to show that any two embeddings of an n-manifold into R^(2n+2) are isotopic. This result is an isotopy version of the weak Whitney embedding theorem. Wu proved that for n ≥ 2, any two embeddings of an n-manifold into R^(2n+1) are isotopic. This result is an isotopy version of the strong Whitney embedding theorem. As an isotopy version of his embedding result, Haefliger proved that if N is a compact n-dimensional k-connected manifold, then any two embeddings of N into R^(2n−k+1) are isotopic provided 2k + 2 ≤ n. 
The dimension restriction 2k + 2 ≤ n is sharp: Haefliger went on to give examples of non-trivially embedded 3-spheres in R^6 (and, more generally, (4k − 1)-spheres in R^6k). See further generalizations. See also Representation theorem Whitney immersion theorem Nash embedding theorem Takens's theorem Nonlinear dimensionality reduction Universal space Notes References External links Classification of embeddings Theorems in differential topology
Whitney embedding theorem
Mathematics
1,532
35,829,746
https://en.wikipedia.org/wiki/Jadav%20Payeng
Jadav "Molai" Payeng (born 31 October 1959) is an environmental activist and forestry worker from Majuli, popularly known as the Forest Man of India. Over the course of several decades, he has planted and tended trees on a sandbar of the river Brahmaputra turning it into a forest reserve. The forest, called Molai forest after him, is located near Kokilamukh of Jorhat, Assam, India and encompasses an area of about 1,360 acres / 550 hectares. In 2015, he was honoured with Padma Shri, the fourth highest civilian award in India. He was born in the indigenous Mising tribe of Assam. Career In 1979, Payeng, then 16, encountered a large number of snakes that had died due to excessive heat after floods washed them onto the tree-less sandbar. That is when he planted around 20 bamboo seedlings on the sandbar. He not only looked after the plants, but continued to plant more trees on his own, in an effort to transform the area into a forest. The forest, which came to be known as Molai forest, now houses Bengal tigers, Indian rhinoceros, and over 100 deer and rabbits. Molai forest is also home to monkeys and several varieties of birds, including a large number of vultures. There are several thousand trees, including valcol, arjun (Terminalia arjuna), ejar (Lagerstroemia speciosa), goldmohur (Delonix regia), koroi (Albizia procera), moj (Archidendron bigeminum) and himolu (Bombax ceiba). Bamboo covers an area of over 300 hectares. A herd of around 100 elephants regularly visits the forest every year and generally stays for around six months. They have given birth to 10 calves in the forest in recent years. His efforts became known to the authorities in 2008, when forest department officials went to the area in search of 115 elephants that had retreated into the forest after damaging property in the village of Aruna Chapori, which is about 1.5 km from the forest. The officials were surprised to see such a large and dense forest and since then the department has regularly visited the site. In 2013, poachers tried to kill the rhinos staying in the forest but failed in their attempt due to Molai who alerted department officials. Officials promptly seized various articles used by the poachers to trap the animals. Molai is ready to manage the forest in a better way and to go to other places of the state to start a similar venture. Now his aim is to spread his forest to another sand bar inside of Brahmaputra. Personal life He belongs to the indigenous Mising Tribe which located in Assam India. He, along with wife and 3 children (1 daughter and 2 sons), used to live at the house which he had built inside his Forest. In 2012, Jadav built a house at No. 1 Mishing Gaon near Kokilmukh Ghat and shifted to this house with his family. Since then, they have been living in this house. Jadav, however, travels everyday to his Forest to tend and look after the plants and trees. He has cattle and buffalo on his farm and sells the milk for his livelihood, which is his only source of income. In an interview from 2012, he revealed that he has lost around 100 of his cows and buffaloes to the tigers in the forest, but blames the people who carry out large scale encroachment and destruction of forests as the root cause of the plight of wild animals. Honours Jadav Payeng was honoured at a public function arranged by the School of Environmental Sciences, Jawaharlal Nehru University on 22 April 2012 for his achievement. 
He shared his experience of creating a forest in an interactive session, where Magsaysay Award winner Rajendra Singh and JNU vice-chancellor Sudhir Kumar Sopory were present. Sopory named Jadav Payeng the "Forest Man of India". In October 2013, he was honoured at the Indian Institute of Forest Management during its annual event Coalescence. In 2015, he was honoured with the Padma Shri, the fourth highest civilian award in India. He received honorary doctorate degrees from Assam Agricultural University and Kaziranga University for his contributions. In popular culture Payeng has been the subject of a number of documentaries in recent years. His life was the basis for a fictional film by the Tamil director Prabhu Solomon, starring Rana Daggubati, released in Tamil, Telugu and Hindi as Kaadan, Aranya and Haathi Mere Saathi respectively. A locally made documentary film, produced by Jitu Kalita in 2012, The Molai Forest, was screened at Jawaharlal Nehru University. Jitu Kalita, who lives near Payeng's house, has also been recognised for good reporting for portraying Payeng's life in his documentary. The 2013 documentary Foresting Life, directed by the Indian documentary filmmaker Aarti Shrivastava, celebrates the life and work of Jadav Payeng in the Molai forest. These are also the focus of William Douglas McMaster's 2013 documentary Forest Man. With US$8,327 pledged in its Kickstarter campaign, the film was brought to completion and taken to a number of film festivals. It was awarded the Best Documentary prize at the Emerging Filmmaker Showcase in the American Pavilion at the 2014 Cannes Film Festival. Payeng is the subject of the 2016 children's book Jadav and the Tree-Place, written and illustrated by Vinayak Varma. The book was published by the open-source children's publishing platform StoryWeaver, and its production was funded by a grant from the Oracle Giving Initiative. Jadav and the Tree-Place has been translated into 39 languages, published in print by Pratham Books in 8 Indian languages, and has won the Digital Book of the Year prize at the Publishing Next Industry Awards, 2016, and the Best of Indian Children's Writing: Contemporary Awards, 2019. Payeng is also the subject of the 2019 children's book The Boy Who Grew A Forest: The True Story of Jadav Payeng, written by Sophia Gholz and illustrated by Kayla Harren. Published by Sleeping Bear Press, the book won the Crystal Kite Award, the Sigurd F. Olson Nature Writing Award (SONWA) from Northland College, and the Florida State Book Award. It has been translated into German and French, and adapted for the stage. See also Reforestation Tahir Qureshi Njattyela Sreedharan References External links Man creates forest on Brahmaputra sand bar 1963 births Living people Indian environmentalists Indian foresters People from Jorhat district Tribal people from Assam Sustainability advocates Indian climate activists Environmental ethics Recipients of the Padma Shri in other fields Scientists from Assam 20th-century Indian educational theorists
Jadav Payeng
Environmental_science
1,424
24,993,596
https://en.wikipedia.org/wiki/Negative%20multinomial%20distribution
In probability theory and statistics, the negative multinomial distribution is a generalization of the negative binomial distribution NB(x0, p) to more than two outcomes. As with the univariate negative binomial distribution, if the parameter x0 is a positive integer, the negative multinomial distribution has an urn model interpretation. Suppose we have an experiment that generates m+1 ≥ 2 possible outcomes, {X0, ..., Xm}, each occurring with non-negative probabilities {p0, ..., pm} respectively. If sampling proceeded until n observations were made, then {X0, ..., Xm} would have been multinomially distributed. However, if the experiment is stopped once X0 reaches the predetermined value x0 (assuming x0 is a positive integer), then the distribution of the m-tuple {X1, ..., Xm} is negative multinomial. These variables are not multinomially distributed because their sum X1 + ... + Xm is not fixed, being a draw from a negative binomial distribution NB(x0, p0). Properties Marginal distributions If the vector {X1, ..., Xm} is partitioned into sub-vectors, the marginal distribution of each sub-vector is also negative multinomial, with the removed components dropped and the remaining probabilities properly rescaled so as to add to one. The univariate marginals have negative binomial distributions. Conditional distributions The conditional distribution of one sub-vector given the values of the complementary sub-vector is also negative multinomial. Independent sums If X ~ NM(r1, p) and Y ~ NM(r2, p) are independent, then X + Y ~ NM(r1 + r2, p). Similarly and conversely, it is easy to see from the characteristic function that the negative multinomial is infinitely divisible. Aggregation If the random variables with subscripts i and j are dropped from the vector and replaced by their sum, the result is again negative multinomial, with pi and pj replaced by pi + pj. This aggregation property may be used to derive the marginal distributions mentioned above. Correlation matrix The entries of the correlation matrix are ρ(Xi, Xi) = 1 and, for i ≠ j, ρ(Xi, Xj) = sqrt(pi pj / ((p0 + pi)(p0 + pj))). Parameter estimation Method of Moments If we let the mean vector of the negative multinomial be μ and its covariance matrix be Σ, the parameters x0 and {p0, ..., pm} can be expressed in terms of μ and Σ; substituting the sample mean vector and sample covariance matrix then yields the method-of-moments estimates. Related distributions Negative binomial distribution Multinomial distribution Inverted Dirichlet distribution, a conjugate prior for the negative multinomial Dirichlet negative multinomial distribution References Waller LA and Zelterman D. (1997). Log-linear modeling with the negative multinomial distribution. Biometrics 53: 971–82. Further reading Factorial and binomial topics Multivariate discrete distributions
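The stopping-rule description above gives a direct way to sample the distribution: the component sum X1 + ... + Xm is negative binomial NB(x0, p0), and given that sum the split over outcomes 1..m is multinomial with rescaled probabilities. A minimal sketch (the function name is illustrative):

import numpy as np

def sample_negative_multinomial(x0, p, size, seed=None):
    # p = [p0, p1, ..., pm] must sum to one; outcome 0 is the stopping outcome.
    rng = np.random.default_rng(seed)
    p = np.asarray(p, dtype=float)
    p0, rest = p[0], p[1:]
    # Total count of non-zero outcomes: failures before the x0-th
    # occurrence of outcome 0, i.e. NB(x0, p0).
    totals = rng.negative_binomial(x0, p0, size=size)
    # Given the total, the split over outcomes 1..m is multinomial.
    return np.array([rng.multinomial(t, rest / rest.sum()) for t in totals])

draws = sample_negative_multinomial(5, [0.4, 0.3, 0.3], size=100_000, seed=0)
print(draws.mean(axis=0))  # near the theoretical mean x0*pi/p0 = 3.75 each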
Negative multinomial distribution
Mathematics
565
250,074
https://en.wikipedia.org/wiki/Binomial%20options%20pricing%20model
In finance, the binomial options pricing model (BOPM) provides a generalizable numerical method for the valuation of options. Essentially, the model uses a "discrete-time" (lattice based) model of the varying price over time of the underlying financial instrument, addressing cases where the closed-form Black–Scholes formula is wanting. The binomial model was first proposed by William Sharpe in the 1978 edition of Investments, and formalized by Cox, Ross and Rubinstein in 1979 and by Rendleman and Bartter in that same year. For binomial trees as applied to fixed income and interest rate derivatives, see Lattice model (finance). Use of the model The binomial options pricing model approach has been widely used since it is able to handle a variety of conditions for which other models cannot easily be applied. This is largely because the BOPM is based on the description of an underlying instrument over a period of time rather than at a single point. As a consequence, it is used to value American options that are exercisable at any time in a given interval as well as Bermudan options that are exercisable at specific instances of time. Being relatively simple, the model is readily implementable in computer software (including a spreadsheet). Although computationally slower than the Black–Scholes formula, it is more accurate, particularly for longer-dated options on securities with dividend payments. For these reasons, various versions of the binomial model are widely used by practitioners in the options markets. For options with several sources of uncertainty (e.g., real options) and for options with complicated features (e.g., Asian options), binomial methods are less practical due to several difficulties, and Monte Carlo option models are commonly used instead. When simulating a small number of time steps, Monte Carlo simulation will be more computationally time-consuming than BOPM (cf. Monte Carlo methods in finance). However, the worst-case runtime of BOPM will be O(2^n), where n is the number of time steps in the simulation. Monte Carlo simulations will generally have a polynomial time complexity, and will be faster for large numbers of simulation steps. Monte Carlo simulations are also less susceptible to sampling errors, since binomial techniques use discrete time units. This becomes more true the smaller the discrete units become. Method The binomial pricing model traces the evolution of the option's key underlying variables in discrete time. This is done by means of a binomial lattice (tree), for a number of time steps between the valuation and expiration dates. Each node in the lattice represents a possible price of the underlying at a given point in time. Valuation is performed iteratively, starting at each of the final nodes (those that may be reached at the time of expiration), and then working backwards through the tree towards the first node (valuation date). The value computed at each stage is the value of the option at that point in time. Option valuation using this method is, as described, a three-step process: Price tree generation, Calculation of option value at each final node, Sequential calculation of the option value at each preceding node. Step 1: Create the binomial price tree The tree of prices is produced by working forward from valuation date to expiration. At each step, it is assumed that the underlying instrument will move up or down by a specific factor (u or d) per step of the tree (where, by definition, u ≥ 1 and 0 < d ≤ 1). 
So, if S is the current price, then in the next period the price will either be S·u or S·d. The up and down factors are calculated using the underlying volatility, σ, and the time duration of a step, t, measured in years (using the day count convention of the underlying instrument). From the condition that the variance of the log of the price is σ²t, we have u = exp(σ√t) and d = exp(−σ√t) = 1/u. Above is the original Cox, Ross, & Rubinstein (CRR) method; there are various other techniques for generating the lattice, such as "the equal probabilities" tree. The CRR method ensures that the tree is recombinant, i.e. if the underlying asset moves up and then down (u,d), the price will be the same as if it had moved down and then up (d,u)—here the two paths merge or recombine. This property reduces the number of tree nodes, and thus accelerates the computation of the option price. This property also allows the value of the underlying asset at each node to be calculated directly via formula, and does not require that the tree be built first. The node-value will be S_n = S_0 · u^(Nu − Nd), where Nu is the number of up ticks and Nd is the number of down ticks. Step 2: Find option value at each final node At each final node of the tree—i.e. at expiration of the option—the option value is simply its intrinsic, or exercise, value: max(S_n − K, 0) for a call option, and max(K − S_n, 0) for a put option, where K is the strike price and S_n is the spot price of the underlying asset at the nth period. Step 3: Find option value at earlier nodes Once the above step is complete, the option value is then found for each node, starting at the penultimate time step, and working back to the first node of the tree (the valuation date), where the calculated result is the value of the option. In overview: the "binomial value" is found at each node, using the risk neutrality assumption; see Risk neutral valuation. If exercise is permitted at the node, then the model takes the greater of binomial and exercise value at the node. Under risk neutrality, the binomial value is the discounted expectation over the two successor nodes: Binomial value = exp(−r·Δt) · [p · Option up + (1 − p) · Option down], with risk-neutral up-move probability p = (exp((r − q)·Δt) − d) / (u − d), where r is the risk-free rate, q the dividend yield and Δt the step length. In calculating the value at the next time step calculated—i.e. one step closer to valuation—the model must use the value selected here, for "Option up"/"Option down" as appropriate, in the formula at the node. An algorithm along these lines computes the price of an American put option (a minimal sketch is given after the references below), and it is easily generalized for calls and for European and Bermudan options. Relationship with Black–Scholes Similar assumptions underpin both the binomial model and the Black–Scholes model, and the binomial model thus provides a discrete time approximation to the continuous process underlying the Black–Scholes model. The binomial model assumes that movements in the price follow a binomial distribution; for many trials, this binomial distribution approaches the log-normal distribution assumed by Black–Scholes. In this case then, for European options without dividends, the binomial model value converges on the Black–Scholes formula value as the number of time steps increases. In addition, when analyzed as a numerical procedure, the CRR binomial method can be viewed as a special case of the explicit finite difference method for the Black–Scholes PDE; see finite difference methods for option pricing. See also Trinomial tree, a similar model with three possible paths per node. Tree (data structure) Lattice model (finance), for more general discussion and application to other underlyings Black–Scholes: binomial lattices are able to handle a variety of conditions for which Black–Scholes cannot be applied. 
Monte Carlo option model, used in the valuation of options with complicated features that make them difficult to value through other methods. Real options analysis, where the BOPM is widely used. Quantum finance, quantum binomial pricing model. Mathematical finance, which has a list of related articles. Implied binomial tree Edgeworth binomial tree References External links The Binomial Model for Pricing Options, Prof. Thayer Watkins Binomial Option Pricing (PDF), Prof. Robert M. Conroy Binomial Option Pricing Model by Fiona Maclachlan, The Wolfram Demonstrations Project On the Irrelevance of Expected Stock Returns in the Pricing of Options in the Binomial Model: A Pedagogical Note by Valeri Zakamouline A Simple Derivation of Risk-Neutral Probability in the Binomial Option Pricing Model by Greg Orosi Financial models Options (finance) Mathematical finance Models of computation Trees (data structures) Articles with example code
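The backward-induction procedure referenced above, as a minimal Python sketch under the CRR parameterisation (function and parameter names are illustrative, not from the article's sources):

import math

def crr_american_put(s0, k, r, sigma, t_years, n, q=0.0):
    # Step 1: CRR lattice parameters.
    dt = t_years / n
    u = math.exp(sigma * math.sqrt(dt))          # up factor
    d = 1.0 / u                                   # down factor: recombining tree
    p = (math.exp((r - q) * dt) - d) / (u - d)    # risk-neutral up probability
    disc = math.exp(-r * dt)
    # Step 2: intrinsic value max(K - S, 0) at each of the n+1 final nodes.
    values = [max(k - s0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    # Step 3: walk backwards, taking the greater of binomial and exercise value.
    for i in range(n - 1, -1, -1):
        for j in range(i + 1):
            binomial = disc * (p * values[j + 1] + (1 - p) * values[j])
            exercise = k - s0 * u**j * d**(i - j)
            values[j] = max(binomial, exercise)
    return values[0]

# Example: 1-year at-the-money put, 20% vol, 5% rate, 500 steps.
print(round(crr_american_put(100, 100, 0.05, 0.2, 1.0, 500), 4))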
Binomial options pricing model
Mathematics
1,680
11,233,334
https://en.wikipedia.org/wiki/Ultratrace%20element
In biochemistry, an ultratrace element is a chemical element that normally comprises less than one microgram per gram of a given organism (i.e. less than 0.0001% by weight), but which plays a significant role in its metabolism. Possible ultratrace elements in humans include boron, silicon, nickel, vanadium and cobalt. Other possible ultratrace elements in other organisms include bromine, cadmium, fluorine, lead, lithium, and tin. See also Trace element References Dietary minerals
Ultratrace element
Chemistry,Biology
109
830,206
https://en.wikipedia.org/wiki/Magnetic%20amplifier
The magnetic amplifier (colloquially known as a "mag amp") is an electromagnetic device for amplifying electrical signals. The magnetic amplifier was invented early in the 20th century, and was used as an alternative to vacuum tube amplifiers where robustness and high current capacity were required. World War II Germany perfected this type of amplifier, and it was used in the V-2 rocket. The magnetic amplifier was most prominent in power control and low-frequency signal applications from 1947 to about 1957, when the transistor began to supplant it. The magnetic amplifier has now been largely superseded by the transistor-based amplifier, except in a few safety critical, high-reliability or extremely demanding applications. Combinations of transistor and mag-amp techniques are still used. Principle of operation Visually a mag amp device may resemble a transformer, but the operating principle is quite different from a transformer – essentially the mag amp is a saturable reactor. It makes use of magnetic saturation of the core, a non-linear property of a certain class of transformer cores. For controlled saturation characteristics, the magnetic amplifier employs core materials that have been designed to have a specific B-H curve shape that is highly rectangular, in contrast to the slowly tapering B-H curve of softly saturating core materials that are often used in normal transformers. The typical magnetic amplifier consists of two physically separate but similar transformer magnetic cores, each of which has two windings: a control winding and an AC winding. Another common design uses a single core shaped like the number "8", with one control winding and two AC windings. A small DC current from a low-impedance source is fed into the control winding. The AC windings may be connected either in series or in parallel, the configurations resulting in different types of mag amps. The amount of control current fed into the control winding sets the point in the AC winding waveform at which either core will saturate. In saturation, the AC winding on the saturated core will go from a high-impedance state ("off") into a very low-impedance state ("on") – that is, the control current sets the point in the AC voltage waveform at which the mag amp switches "on". A relatively small DC current on the control winding is thus able to control or switch large AC currents on the AC windings, resulting in current amplification. Two magnetic cores are used because the AC current would otherwise induce a high voltage in the control winding; by connecting the cores in opposite phase, the two induced voltages cancel each other, so that no current is induced in the control circuit. The alternative design with the "8"-shaped core accomplishes the same objective magnetically. Strengths The magnetic amplifier is a static device with no moving parts. It has no wear-out mechanism and has a good tolerance to mechanical shock and vibration. It requires no warm-up time. Multiple isolated signals may be summed by additional control windings on the magnetic cores. The windings of a magnetic amplifier have a higher tolerance to momentary overloads than comparable solid-state devices. The magnetic amplifier is also used as a transducer in applications such as current measurement and the flux gate compass. The reactor cores of magnetic amplifiers withstand neutron radiation extremely well. For this special reason magnetic amplifiers have been used in nuclear power applications. 
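The switching behaviour described above can be sketched with an idealised square-loop core: the control current presets the core flux at the start of each half-cycle, and the AC winding blocks until the remaining flux swing is used up. A minimal sketch under that idealisation (all names and the parameterisation are illustrative):

import math

def firing_angle_deg(k):
    # k = fraction of the half-cycle volt-seconds the core can still absorb
    # before saturating (set by how far the control current resets the flux):
    # k = 0 -> conducts immediately, k = 1 -> blocks the whole half-cycle.
    # Solves (V_pk/w)*(1 - cos a) = k * (2*V_pk/w) for the firing angle a.
    return math.degrees(math.acos(1 - 2 * k))

def mean_load_voltage(k):
    # Average load voltage over the half-cycle, as a fraction of the
    # uncontrolled (always-saturated) value; works out to exactly 1 - k.
    a = math.radians(firing_angle_deg(k))
    return (1 + math.cos(a)) / 2

for k in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"k={k:.2f}: fires at {firing_angle_deg(k):5.1f} deg, "
          f"mean load voltage {mean_load_voltage(k):4.0%}")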
Limitations The gain available from a single stage is limited and low compared to electronic amplifiers. The frequency response of a high-gain amplifier is limited to about one-tenth of the excitation frequency, although this is often mitigated by exciting magnetic amplifiers with currents at higher than utility frequency. Solid-state electronic amplifiers can be more compact and efficient than magnetic amplifiers. The bias and feedback windings are not unilateral and may couple energy back from the controlled circuit into the control circuit; this complicates the design of multistage amplifiers when compared with electronic devices. Magnetic amplifiers introduce substantial harmonic distortion to the output waveform, consisting entirely of odd harmonics. Unlike with the silicon controlled rectifiers or TRIACs which replaced them, however, the magnitude of these harmonics decreases rapidly with frequency, so interference with nearby electronic devices such as radio receivers is uncommon. Applications Magnetic amplifiers were important as modulation and control amplifiers in the early development of voice transmission by radio. A magnetic amplifier was used as the voice modulator for a 2 kilowatt Alexanderson alternator, and magnetic amplifiers were used in the keying circuits of large high-frequency alternators used for radio communications. Magnetic amplifiers were also used to regulate the speed of Alexanderson alternators to maintain the accuracy of the transmitted radio frequency. Magnetic amplifiers were used to control large high-power alternators by turning them on and off for telegraphy or by varying the signal for voice modulation. The alternators' frequency limits were low enough that a frequency multiplier had to be used to generate radio frequencies higher than the alternator could produce. Even so, early magnetic amplifiers incorporating powdered-iron cores were incapable of producing radio frequencies above approximately 200 kHz; other core materials, such as ferrites, and constructions such as oil-filled transformers had to be developed to allow the amplifier to produce higher frequencies. The ability to control large currents with small control power made magnetic amplifiers useful for control of lighting circuits, for stage lighting and for advertising signs. Saturable reactor amplifiers were used for control of power to industrial furnaces. Magnetic amplifiers as variable AC voltage controllers have been mostly replaced by silicon controlled rectifiers or TRIACs, though magnetic amplifiers are still used in some arc welders. Small magnetic amplifiers were used for radio tuning indicators, control of small motor and cooling fan speeds, and control of battery chargers. Magnetic amplifiers were used extensively as the switching element in early switched-mode power supplies (SMPS), as well as in lighting control. Semiconductor-based solid-state switches have largely superseded them, though recently there has been some regained interest in using mag amps in compact and reliable switching power supplies. PC ATX power supplies often use mag amps for secondary-side voltage regulation. Cores designed specifically for switch mode power supplies are currently manufactured by several large electromagnetics companies, including Metglas and Mag-Inc. Magnetic amplifiers were used by locomotives to detect wheel slip, until replaced by Hall effect current transducers. The cables from two traction motors passed through the core of the device. 
During normal operation the resultant flux was zero, as the two currents were equal and in opposite directions. During wheel slip the currents would differ, producing a resultant flux that acted as the control winding and developing a voltage across a resistor in series with the AC winding, which was sent to the wheel slip correction circuits. Magnetic amplifiers can be used for measuring high DC voltages without direct connection to the high voltage, and are therefore still used in HVDC technology. The current to be measured is passed through the two cores, possibly by a solid bus bar; there is almost no voltage drop in this bus bar. The output signal, proportional to the ampere-turns in the control current bus bar, is derived from the alternating excitation voltage of the magnetic amplifier; no voltage is created or induced on the bus bar. The output signal has only a magnetic connection with the bus bar, so the bus may be, quite safely, at any (EHT) voltage with respect to the instrumentation. Instrumentation magnetic amplifiers are commonly found on spacecraft, where a clean electromagnetic environment is highly desirable. The German Kriegsmarine made extensive use of magnetic amplifiers, employing them in master stable element systems, in slow-moving transmissions for controlling guns, directors and rangefinders, and in train and elevation controls. Magnetic amplifiers were used in aircraft systems (avionics) before the advent of high-reliability semiconductors. They were important in implementing early autoland systems, and Concorde made use of the technology for the control of its engine air intakes before the development of a system using digital electronics. Magnetic amplifiers were also used in the stabilizer controls of V-2 rockets. Usage in computing Magnetic amplifiers were widely studied during the 1950s as a potential switching element for mainframe computers. Like transistors, mag amps were somewhat smaller than the typical vacuum tube, and had the significant advantage that they were not subject to "burning out" and thus had dramatically lower maintenance requirements. Another advantage was that a single mag amp could sum several inputs in a single core, which was useful in the arithmetic logic unit (ALU) as it could greatly reduce the component count. Custom tubes could do the same, but transistors could not, so the mag amp was able to combine the advantages of tubes and transistors in an era when the latter were expensive and unreliable. The principles of magnetic amplifiers were applied non-linearly to create magnetic digital logic gates. That era was short, lasting from the mid-1950s to about 1960, when new fabrication techniques produced great improvements in transistors and dramatically lowered their cost. Only one large-scale mag amp machine was put into production, the UNIVAC Solid State, but a number of contemporary late-1950s/early-1960s computers used the technology, like the Ferranti Sirius, Ferranti Orion and the English Electric KDF9, or the one-off MAGSTEC. History Early development A voltage source and a series-connected variable resistor may be regarded as a direct current signal source for a low-resistance load, such as the control coil of a saturable reactor, which amplifies the signal. Thus, in principle, a saturable reactor is already an amplifier, although before the 20th century such devices were used only for simple tasks, such as controlling lighting and electrical machinery, as early as 1885. 
In 1904 radio pioneer Reginald Fessenden placed an order with the General Electric Company for a high-frequency rotary mechanical alternator capable of generating AC at a frequency of 100 kHz, to be used for continuous wave radio transmission over great distances. The design job was given to General Electric engineer Ernst F. Alexanderson, who developed the 2 kW Alexanderson alternator. By 1916 Alexanderson had added a magnetic amplifier to control the transmission of these rotary alternators for transoceanic radio communication. The experimental telegraphy and telephony demonstrations made during 1917 attracted the attention of the US Government, especially in light of partial failures in the transoceanic cable across the Atlantic Ocean. The 50 kW alternator was commandeered by the US Navy and put into service in January 1918 and was used until 1920, when a 200 kW generator-alternator set was built and installed. Usage in electric power generation Magnetic amplifiers were extensively used in electric power generation from the early 1960s onwards. They provided the small-signal amplification for generator automatic voltage regulation (AVR), from a small error signal at the milliwatt (mW) level to the 100 kilowatt (kW) level. This was in turn converted by a rotating machine (exciter) to the 5 megawatt (MW) level, the excitation power required by a typical 500 MW power plant turbine generator unit. They proved durable and reliable. Many are recorded in service through the mid-1990s and some are still in use at older generating stations, notably in hydroelectric plants operating in northern California. Misnomer uses In the 1970s, Robert Carver designed and produced several high-quality, high-powered audio amplifiers, calling them magnetic amplifiers. In fact, they were in most respects conventional audio amplifier designs with unusual power supply circuits. They were not magnetic amplifiers as defined in this article, and should not be confused with real magnetic audio amplifiers. See also Parametron Magnetic logic Transductor References External links Electromagnetic components Electronic amplifiers Power electronics
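The input-summing behavior that made mag amps attractive for computing (several control windings driving one core) can be illustrated with a small software model. This is only a sketch under stated assumptions: the function names, winding counts, and threshold value below are hypothetical, and a real core sums signed ampere-turns with hysteresis rather than switching at a clean threshold.

# Toy model of a saturable core summing several control inputs,
# as in mag-amp logic; names and numbers are illustrative only.

def core_gate(inputs, turns_per_winding, bias_ampere_turns):
    # The core "fires" (saturates, gating AC power through) when the
    # net magnetomotive force from all windings exceeds the bias.
    mmf = sum(i * n for i, n in zip(inputs, turns_per_winding))
    return 1 if mmf > bias_ampere_turns else 0

# A three-input majority gate: equal windings, with the bias set so
# that at least two inputs must be energized to switch the output on.
def majority(a, b, c):
    return core_gate([a, b, c], [1, 1, 1], bias_ampere_turns=1.5)

assert majority(1, 1, 0) == 1
assert majority(1, 0, 0) == 0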
Magnetic amplifier
Technology,Engineering
2,380
20,406,792
https://en.wikipedia.org/wiki/Nimonic
Nimonic is now a registered trademark of Special Metals Corporation that refers to a family of nickel-based high-temperature, low-creep superalloys. Nimonic alloys typically consist of more than 50% nickel and 20% chromium, with additives such as titanium and aluminium. The main uses are in gas turbine components and extremely high-performance reciprocating internal combustion engines. The Nimonic family of alloys was first developed in the 1940s by research teams at the Wiggin Works in Hereford, England, in support of the development of the Whittle jet engine. Development Working at Inco-Mond's Wiggin facility at Birmingham in the United Kingdom, Leonard Bessemer Pfeil is credited with the development of Nimonic alloy 80 in 1941, which was used in the Power Jets W.2B. Four years later, Nimonic alloy 80A followed, an alloy widely used in engine valves today. Progressively stronger alloys were subsequently developed: Nimonic alloy 90 (1945), Nimonic alloy 100 (1955), and Nimonic alloys 105 (1960) and 115 (1964). Properties and uses Due to its ability to withstand very high temperatures, Nimonic is ideal for use in aircraft parts and gas turbine components such as turbine blades and exhaust nozzles on jet engines, for instance, where the pressure and heat are extreme. It is available in different grades, including Nimonic 75, Nimonic 80A, and Nimonic 90. Nimonic 80A was used for the turbine blades on the Rolls-Royce Nene and de Havilland Ghost, Nimonic 90 on the Bristol Proteus, and Nimonic 105 on the Rolls-Royce Spey aviation gas turbines. Nimonic 263 was used in the combustion chambers of the Rolls-Royce/Snecma Olympus 593 engines used on the Concorde supersonic airliner. In the Corvair Spyder turbo engine, the heads of the exhaust valves as well as the turbine wheel of its Rajay turbocharger were made of Nimonic 80A. Most Saab cars with high-output turbos use exhaust valves made of Nimonic 80A as well. Nimonic 75 has been certified by the European Union as a standard creep reference material. See also Brightray References External links ... and now Nimonic 90 - a 1951 Flight advertisement for Nimonic 90 Nimonic Alloys - a 1952 Flight advertisement for Nimonic Alloys by Henry Wiggin and Company Nimonic Creep-Resisting Alloys - a 1960 Flight advertisement Nickel alloys Superalloys Chromium alloys Aerospace materials British inventions Nickel–chromium alloys
Nimonic
Chemistry,Engineering
560
47,851,919
https://en.wikipedia.org/wiki/Penicillium%20tropicoides
Penicillium tropicoides is a species of fungus in the genus Penicillium. References Further reading tropicoides Fungi described in 2010 Fungus species
Penicillium tropicoides
Biology
36
643,147
https://en.wikipedia.org/wiki/Association%20of%20Universities%20for%20Research%20in%20Astronomy
The Association of Universities for Research in Astronomy (AURA) is a consortium of universities and other institutions that operates astronomical observatories and telescopes. Founded October 10, 1957, with the encouragement of the National Science Foundation (NSF), AURA was incorporated by a group of seven U.S. universities: California, Chicago, Harvard, Indiana, Michigan, Ohio State, and Wisconsin. The first meeting of the board of directors took place in Ann Arbor, Michigan. Today, AURA has 47 member institutions in the United States and 3 international affiliate members. AURA began as a small organization dedicated to ground-based optical astronomy, managing a range of 1- to 4-meter telescopes and providing community advocacy for optical/infrared astronomy. Over the years, AURA expanded its focus to include solar astronomy and the Gemini 8-meter telescopes, going on to partner with other consortia such as WIYN (Wisconsin Indiana Yale & NOAO) and SOAR (Southern Astrophysical Research). In the 1980s, AURA took on the management of the Space Telescope Science Institute, opening up the ultraviolet, optical, and infrared wavelength bands in space with the Hubble Space Telescope, and later infrared space astronomy with the James Webb Space Telescope (JWST). AURA is responsible for the management and operation of its three centers: NSF’s National Optical-Infrared Astronomy Research Laboratory (NOIRLab); the NSF's National Solar Observatory (NSO); and the Space Telescope Science Institute (STScI). Centers NSF’s NOIRLab is the US national center for ground-based, nighttime optical astronomy. The mission of NOIRLab is to enable breakthrough discoveries in astrophysics by developing and operating state-of-the-art ground-based observatories and providing data products and services for a diverse and inclusive community. Through its five programs — Cerro Tololo Inter-American Observatory (CTIO), the Community Science and Data Center (CSDC), International Gemini Observatory, Kitt Peak National Observatory (KPNO) and Vera C. Rubin Observatory — NSF’s NOIRLab serves as a focal point for community development of innovative scientific programs, the exchange of ideas, and creative development. NSF's National Solar Observatory (NSO) - AURA operates NSO, which is located in Boulder, Colorado and at the Daniel K. Inouye Solar Telescope (DKIST) in Maui, Hawaii. Space Telescope Science Institute (STScI) - AURA manages STScI for NASA to carry out the science mission of the Hubble Space Telescope and the operations and science missions of the James Webb Space Telescope. Construction project: the Vera C. Rubin Observatory - a public-private partnership to operate an 8.4-meter telescope on Chile’s Cerro Pachón. President Dr. Matt Mountain was appointed president of the Association of Universities for Research in Astronomy (AURA) on 1 March 2015. The president, as the chief executive officer, serves as the primary representative and spokesperson for AURA. The president is a member of the board of directors and implements policy decisions of the board. The president serves the board of directors as its principal executive officer, providing leadership and guidance on policy matters and coordinating the activities of the board and its various committees. The president is also responsible for maintaining effective working relationships with AURA member universities.
AURA Board of Directors The board, which meets quarterly, establishes the policies of AURA, approves its budget, elects members of the Management Councils, and appoints the president, the center directors, and other principal officers. The board of directors is responsible to the member representatives for the effective management of AURA and the achievement of its purposes. Members Today, there are 47 U.S. member institutions and 3 international affiliate members which comprise the member institutions of AURA. The president of each member institution designates a member representative who has a voice in AURA matters. Together, the member representatives act upon membership applications. List of members as of 2022:
Boston University
California Institute of Technology
Carnegie Institution for Science
Carnegie Mellon University
Cornell University
Fisk University
Georgia State University
Harvard University
Indiana University Bloomington
Iowa State University
Johns Hopkins University
Keck Northeast Astronomy Consortium - a consortium of liberal arts colleges, including Colgate University, Haverford College (partnership with Bryn Mawr College), Middlebury College, Swarthmore College, Vassar College, Wellesley College, Wesleyan University, and Williams College
Leibniz-Institut für Sonnenphysik
Massachusetts Institute of Technology
Michigan State University
Montana State University
New Jersey Institute of Technology
New Mexico Institute of Mining and Technology
New Mexico State University
Ohio State University
Pennsylvania State University
Pontificia Universidad Catolica de Chile
Princeton University
Rutgers University
Smithsonian Astrophysical Observatory
Stanford University
Stony Brook University
Texas A&M University
Universidad de Chile
University of Arizona
University of California, Berkeley
University of California, Santa Cruz
University of Chicago
University of Colorado Boulder
University of Florida
University of Hawaiʻi - system administers the Institute for Astronomy, Manoa administers educational programs
University of Illinois Urbana-Champaign
University of Maryland, College Park
University of Michigan
University of Minnesota, Twin Cities
University of North Carolina at Chapel Hill
University of Pittsburgh
University of Texas at Austin
University of Texas at San Antonio
University of Toledo
University of Virginia
University of Washington
University of Wisconsin-Madison
Vanderbilt University
Yale University
Honours The asteroid 19912 Aurapenenta was named in honour of the association's fiftieth anniversary, on 1 June 2007. See also List of astronomical societies References Literature Frank K. Edmondson. AURA and Its US National Observatories. Cambridge University Press, 1997. 367 p. External links 1957 establishments in the United States Astronomy organizations Astronomy institutes and departments College and university associations and consortia in the United States International college and university associations and consortia Scientific organizations established in 1957
Association of Universities for Research in Astronomy
Astronomy
1,173
48,085
https://en.wikipedia.org/wiki/Virtual%20management
Virtual management is the supervision, leadership, and maintenance of virtual teams—dispersed work groups that rarely meet face to face. As the number of virtual teams has grown, facilitated by the Internet, globalization, outsourcing, and remote work, the need to manage them has also grown. The challenging task of managing these teams has been made much easier by the availability of online collaboration tools, adaptive project management software, efficient time-tracking programs, and other related systems and tools. This article provides information concerning some of the important management factors involved with virtual teams, and the life cycle of managing a virtual team. Due to developments in information technology within the workplace, along with a need to compete globally and address competitive demands, organizations have embraced virtual management structures. As in face-to-face teams, management of virtual teams is a crucial component in the effectiveness of the team. However, when compared to leaders of face-to-face teams, virtual team leaders face the following difficulties: (a) logistical problems, including coordination of work across different time zones and physical distances; (b) interpersonal issues, including an ability to establish effective working relationships in the absence of frequent face-to-face communication; and (c) technological difficulties, including finding and learning to use appropriate technology. In global virtual teams, there is the additional dimension of cultural differences, which impact a virtual team's functioning. Management factors For the team to reap the benefits of virtual work, the manager considers the following factors. Trust and Leader Effectiveness A virtual team leader must ensure a feeling of trust among all team members—something all team members have an influence on and must be aware of. However, the team leader is responsible for this in the first place. Team leaders must ensure a sense of psychological safety within a team by allowing all members to speak honestly and directly, but respectfully, to each other. For a team to succeed, the manager must schedule meetings to ensure participation. This carries over to the realm of virtual teams, but in this case these meetings are also virtual. Due to the difficulties of communicating in a virtual team, it is imperative that team members attend meetings. The first team meeting is crucial and establishes lasting precedents for the team. Furthermore, there are numerous features of a virtual team environment that may impact the development of follower trust. The team members have to trust that the leader is allocating work fairly and evaluating team members equally. An extensive study conducted over 8 years examined what factors increase leader effectiveness in virtual teams. One such factor is that virtual team leaders need to spend more time than their conventional team counterparts being explicit about expectations, because the patterns of behavior and dynamics of interaction are unfamiliar. Moreover, even in information-rich virtual teams using video conferencing, it is hard to replicate the rapid exchange of information and cues available in face-to-face discussions. To develop role clarity within virtual teams, leaders should focus on developing: (a) clear objectives and goals for tasks; (b) comprehensive milestones for deliverables; and (c) communication channels for seeking feedback on unclear role guidance.
When determining an effective way to lead a culturally diverse team, various styles are available: directive leadership (ranging from directive to participatory), transactional (reward-based) influence, or transformational influence. Leadership must ensure effective communication and understanding, clear and shared plans and task assignments, and a collective sense of belonging in the team. Further, the role of a team leader is to coordinate tasks and activities, motivate team members, facilitate collaboration and solve conflicts when needed. This proves that a team leader's role in effective virtual team management, and in creating a knowledge-sharing environment, is crucial. Presence and Instruction Virtual team leaders must become virtually present so they can closely monitor team members and note changes that might affect their ability to undertake their tasks. Due to the distributed nature of virtual teams, team members have less awareness of the wider situation of the team or the dynamics of the overall team environment. Consequently, as situations change in a virtual team environment, such as adjustments to task requirements, modification of milestones, or changes to the goals of the team, it is important that leaders monitor followers to ensure they are aware of these changes and make amendments as required. The leaders of virtual teams do not possess the same powers of physical observation, and have to be creative in setting up structures and processes so that variations from expectations can be observed virtually (for instance, virtual team leaders have to sense when "electronic" silence means acquiescence rather than inattention). At the same time, leaders of virtual teams cannot assume that members are prepared for virtual meetings, and also have to ensure that the unique knowledge of each distributed person on the virtual team is fully utilized. Virtual team leaders should be aware that information overload may result in situations where a leader has provided too much information to a team member. Virtuality Finally, when examining virtual teams, it is crucial to consider that they differ in terms of their virtuality. Virtuality refers to a continuum of how "virtual" a team is. There are three predominant factors that contribute to virtuality, namely: (a) the richness of communication media; (b) distance between team members, both in time zones and geographical dispersion; and (c) organizational and cultural diversity. Detriments In the field of managing virtual research and development (R&D) teams, certain detriments have arisen from the management decisions made when leading a team. The first of these detriments is the lack of potential for radical innovation, brought about by the lack of affinity with certain technologies or processes. This causes a decrease in certainty about the feasibility of the execution. As a result, virtual R&D teams focus on incremental innovations. The second detriment is that the nature of the project may need to change. Depending on how interdependent each step is, the ability of a virtual team to successfully complete the project varies at each step. Thirdly, the sharing of knowledge, which was identified above as an important ingredient in managing a virtual team, becomes even more important, albeit difficult. There is some knowledge and information that is simple and easy to explain and share, but there is other knowledge that may be more content- or domain-specific and not so easy to explain.
In a face-to-face group this can be done by walking a team member through the topic slowly during a lunch break, but in a virtual team this is no longer possible, and the information is at risk of being misunderstood, leading to setbacks in the project. Finally, the distribution and bundling of resources is also very much altered by the move from collocation to virtual space. Where once the team was all in one place and the resources could be split there as needed, now the team can be anywhere, and the same resources still need to get to the correct people. This takes time, effort, and coordination to avoid potential setbacks or conflicts. Life Cycle To effectively use the management factors described above, it is important to know when in the life cycle of a virtual team they would be most useful. The life cycle of virtual team management includes five stages:
Preparations
Launch
Performance management
Team development
Disbanding
Preparations The initial task during the implementation of a team is the definition of the general purpose of the team, together with the determination of the level of virtuality that might be appropriate to achieve these goals. These decisions are usually determined by strategic factors such as mergers, increase of the market span, cost reductions, flexibility and reactivity to the market, etc. Management-related activities that should take place during the preparation phase include the mission statement, personnel selection, task design, rewards system design, choice of appropriate technology, and organizational integration. With regard to personnel selection, virtual teams have an advantage. To maximize outcomes, management wants the best team it can have. Before virtual teams, they did this by gathering the "best available" workers and forming a team. These teams did not contain the best workers of the field, because those workers were busy with their own projects, or were too far away to meet the group. With virtual teams, managers can select personnel from anywhere in the world, and so from a wider pool. Launch It is highly recommended that, at the beginning of virtual teamwork, all members meet each other face to face. Crucial elements of such a “kick-off” workshop are getting acquainted with the other team members, clarifying the team goals, clarifying the roles and functions of the team members, information and training on how communication technologies can be used efficiently, and developing general rules for the teamwork. As a consequence, “kick-off” workshops are expected to promote clarification of team processes, trust building, the building of a shared interpretive context, and high identification with the team. Getting acquainted, goal clarification and development of intra-team rules should also be accomplished during this phase. Initial field data that compare virtual teams with and without such “kick-off” meetings confirm general positive effects on team effectiveness, although more differentiated research is necessary. Experimental studies demonstrate that getting acquainted before the start of computer-mediated work facilitates cooperation and trust. One of the manager's roles during launch is to create activities or events that allow for team building. These kickoff events should serve three major goals: ensuring everyone on the team is well versed in the technology involved, ensuring everyone knows what is expected of them and when it is expected, and having everyone get to know one another.
By meeting all three goals the virtual team may be far more successful, and it lightens everyone's load. Performance management After the launch of a virtual team, work effectiveness and a constructive team climate have to be maintained using performance management strategies. These comprehensive management strategies arise from the agreed-upon difficulty of working in virtual teams. Research shows that constructs and expectations of team membership, leadership, goal setting, social loafing and conflict differ across cultural groups and therefore substantially affect team performance. In the early team-formation process, one thing to agree on within a team is the meaning of leadership and role differentiation for the team leader and other team members. To apply this, the leader must show active leadership to create a shared conceptualization of the team's meaning, focus and function. The following discussion is again restricted to issues on which empirical results are already available. These issues are leadership, communication within virtual teams, team members' motivation, and knowledge management. Leadership is a central challenge in virtual teams. In particular, all kinds of direct control are difficult when team managers are not at the same location as the team members. As a consequence, delegative management principles are considered that shift parts of classic managerial functions to the team members. However, team members only accept and fulfill such managerial functions when they are motivated and identify with the team and its goals, which is again more difficult to achieve in virtual teams. Next, empirical results on three leadership approaches are summarized that differ in the degree of autonomy of the team members: electronic monitoring as an attempt to realize directive leadership over distance, management by objectives (MBO) as an example of delegative leadership principles, and self-managing teams as an example of rather autonomous teamwork. One way to maintain control over a virtual team is through motivators and incentives. Both are common techniques implemented by managers for collocated teams, but with slight adjustments they can be used effectively for virtual teams as well. A commonly held belief is that working online is not particularly important or impactful. This belief can be changed by notifying employees that their work is being sent to the managers. This attaches the importance of career prospects to the work, and makes it more meaningful for the workers. Communication processes are perhaps the most frequently investigated variables relevant for the regulation of virtual teamwork. By definition, communication in virtual teams is predominantly based on electronic media such as e-mail, telephone, video-conference, etc. The main concern here is that electronic media reduce the richness of information exchange compared to face-to-face communication. This difference in richness of information is an idea shared by multiple researchers, and there are some methods to mitigate the drop created by working in a virtual environment. One such method is to use the anonymity provided by working digitally. It lets people share concerns without worrying about being identified. This serves to overcome the lack of richness by providing a safe method to honestly provide feedback and information.
Predominant research issues have been conflict escalation and disinhibited communication (“flaming”), the fit between communication media and communication contents, and the role of non-job-related communication. These research issues revolve around the idea that people become more hostile over a virtual medium, making the working environment unhealthy. These findings were quickly dismissed in the context of virtual teams, because members of virtual teams expect to work together for longer, and the level of anonymity differs from that of a one-off online interaction. One of the important needs for successful communication is the ability to have every member of the group together repeatedly over time. Effective dispersed groups show spikes in presence during communication over time, while ineffective groups do not have such dramatic spikes. For the management of motivational and emotional processes, three groups of such processes have been addressed in empirical investigations so far: motivation and trust, team identification and cohesion, and satisfaction of the team members. Since most of these variables originate within the person, they can vary considerably among the members of a team, requiring appropriate aggregation procedures for multilevel analyses (e.g. motivation may be mediated by interpersonal trust). Systematic research is needed on the management of knowledge and the development of shared understanding within the teams, particularly since theoretical analyses sometimes lead to conflicting expectations. The development of such “common ground” might be particularly difficult in virtual teams because sharing of information and the development of a “transactive memory” (i.e., who knows what in the team) is harder due to the reduced amount of face-to-face communication and the reduced information about individual work contexts. Team development Virtual teams can be supported by personnel and team development interventions. The development of such training concepts should be based on an empirical assessment of the needs and/or deficits of the team and its members, and the effectiveness of the training should be evaluated empirically. The steps of team development include assessment of needs/deficits, individual and team training, and evaluation of training effects. One such development intervention is to have the virtual team self-facilitate. Normally, a team brings in an outside facilitator to ensure that the team is correctly using the technology. This is a costly method of developing the team, but virtual teams can self-facilitate. This lessens the need for an outside facilitator, and saves the team time, effort, and resources. Disbanding and reintegration Finally, the disbanding of virtual teams and the re-integration of the team members is an important issue that has been neglected not only in empirical but also in most of the conceptual work on virtual teams. However, particularly when virtual project teams have only a short lifetime and reform again quickly, careful and constructive disbanding is mandatory to maintain high motivation and satisfaction among the employees. Members of transient project teams anticipate the end of the teamwork in the foreseeable future, which in turn overshadows the interaction and shared outcomes. The final stage of group development should be a gradual emotional disengagement that includes both sadness about separation and (at least in successful groups) joy and pride in the achievements of the team.
Pandemic factor The COVID-19 pandemic further popularized the virtual team concept, although even before the pandemic many organizations were actively shifting toward remote work. As per market sources, around 80% of global corporate remote-work policies had shifted to virtual and mixed forms of team collaboration during the pandemic. With the onset of worldwide lockdowns and the challenges of time management, remote work became a necessity for the majority, and virtual management has become a way of life for business owners and leaders. See also Distributed development Fractional executive Gig economy Interim Management Outline of management Virtual business Virtual community of practice Virtual team Virtual volunteering References External links Managing the virtual realm, by Denise Dubie, Network World Dr Alister Jury's research into Leadership Effectiveness within Virtual Teams (University of Queensland) Information technology management Human resource management Management by type
Virtual management
Technology
3,347
20,829,248
https://en.wikipedia.org/wiki/C5H4N4S
{{DISPLAYTITLE:C5H4N4S}} The molecular formula C5H4N4S (molar mass: 152.18 g/mol) may refer to: Mercaptopurine, an immunosuppressive drug Tisopurine, a treatment for gout References Molecular formulas
C5H4N4S
Physics,Chemistry
70
1,759,247
https://en.wikipedia.org/wiki/Highly%20abundant%20number
In number theory, a highly abundant number is a natural number with the property that the sum of its divisors (including itself) is greater than the sum of the divisors of any smaller natural number. Highly abundant numbers and several similar classes of numbers were first introduced by Pillai (1943), and early work on the subject was done by Alaoglu and Erdős (1944). Alaoglu and Erdős tabulated all highly abundant numbers up to 10⁴, and showed that the number of highly abundant numbers less than any N is at least proportional to log² N. Formal definition and examples Formally, a natural number n is called highly abundant if and only if σ(n) > σ(m) for all natural numbers m < n, where σ denotes the sum-of-divisors function. The first few highly abundant numbers are 1, 2, 3, 4, 6, 8, 10, 12, 16, 18, 20, 24, 30, 36, 42, 48, 60, ... . For instance, 5 is not highly abundant because σ(5) = 5 + 1 = 6 is smaller than σ(4) = 4 + 2 + 1 = 7, while 8 is highly abundant because σ(8) = 8 + 4 + 2 + 1 = 15 is larger than all previous values of σ. The only odd highly abundant numbers are 1 and 3. Relations with other sets of numbers Although the first eight factorials are highly abundant, not all factorials are highly abundant. For example, σ(9!) = σ(362880) = 1481040, but there is a smaller number with a larger sum of divisors, σ(360360) = 1572480, so 9! is not highly abundant. Alaoglu and Erdős noted that all superabundant numbers are highly abundant, and asked whether there are infinitely many highly abundant numbers that are not superabundant. This question was later answered affirmatively. Despite the terminology, not all highly abundant numbers are abundant numbers. In particular, none of the first seven highly abundant numbers (1, 2, 3, 4, 6, 8, and 10) is abundant. Along with 16, the ninth highly abundant number, these are the only highly abundant numbers that are not abundant. 7200 is the largest powerful number that is also highly abundant: all larger highly abundant numbers have a prime factor that divides them only once. Therefore, 7200 is also the largest highly abundant number with an odd sum of divisors. Notes References Divisor function Integer sequences
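The definition translates directly into a short program. The sketch below (the helper names are mine, and the trial-division sigma is deliberately naive) reproduces the opening terms of the sequence by keeping a running record of σ:

# Compute highly abundant numbers from the definition:
# n is highly abundant iff sigma(n) exceeds sigma(m) for every m < n.

def sigma(n):
    # Sum of all divisors of n, including n itself (naive trial division).
    return sum(d for d in range(1, n + 1) if n % d == 0)

def highly_abundant_up_to(limit):
    record = 0
    result = []
    for n in range(1, limit + 1):
        s = sigma(n)
        if s > record:          # beats sigma of every smaller number
            record = s
            result.append(n)
    return result

print(highly_abundant_up_to(60))
# [1, 2, 3, 4, 6, 8, 10, 12, 16, 18, 20, 24, 30, 36, 42, 48, 60]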
Highly abundant number
Mathematics
504
7,922,315
https://en.wikipedia.org/wiki/Cray%20XMT
Cray XMT (Cray eXtreme MultiThreading, codenamed Eldorado) is a scalable multithreaded shared memory supercomputer architecture by Cray, based on the third generation of the Tera MTA architecture, targeted at large graph problems (e.g. semantic databases, big data, pattern matching). Presented in 2005, it supersedes the earlier unsuccessful Cray MTA-2. It uses Threadstorm3 CPUs inside Cray XT3 blades. Designed to make use of commodity parts and of existing subsystems from other commercial systems, it alleviated the shortcomings of Cray MTA-2's high cost of fully custom manufacture and support. It brought various substantial improvements over Cray MTA-2, most notably nearly tripling the peak performance, and vastly increased the maximum CPU count to 8,192 and maximum memory to 128 TB, with a data TLB covering a maximum of 512 TB. Cray XMT uses a scrambled content-addressable memory model on DDR1 ECC modules to implicitly load-balance memory access across the whole shared global address space of the system. The use of 4 additional Extended Memory Semantics bits (full/empty, forwarding and 2 trap bits) per 64-bit memory word enables lightweight, fine-grained synchronization on all memory. There are no hardware interrupts, and hardware threads are allocated by an instruction, not by the OS. The front-end (login, I/O, and other service nodes, utilizing AMD Opteron processors and running SLES Linux) and the back-end (compute nodes, utilizing Threadstorm3 processors and running MTK, a simple BSD Unix-based microkernel) communicate through the LUC (Lightweight User Communication) interface, an RPC-style bidirectional client/server interface. Threadstorm3 Threadstorm3 (referred to as "MT processor" and Threadstorm before XMT2) is a 64-bit single-core VLIW barrel processor (compatible with the 940-pin Socket 940 used by AMD Opteron processors) with 128 hardware streams, onto each of which a software thread can be mapped (effectively creating 128 hardware threads per CPU), running at 500 MHz and using the MTA instruction set or a superset of it. It has a 128 KB, 4-way associative data buffer. Each Threadstorm3 has 128 separate register sets and program counters (one per stream), which are context-switched at each cycle. Its estimated peak performance is 1.5 GFLOPS. It has 3 functional units (memory, fused multiply-add and control), which receive operations from the same MTA instruction and operate within the same cycle. Each stream has 32 general-purpose registers, 8 target registers and a status word containing the program counter. High-level control of job allocation across threads is not possible. Due to the MTA's pipeline length of 21, each stream is selected to execute instructions again no sooner than 21 cycles later. The TDP of the processor package is 30 W. Due to their thread-level context switch at each cycle, the performance of Threadstorm CPUs is not constrained by memory access time. In a simplified model, at each clock cycle an instruction from one of the threads is executed and another memory request is queued, with the understanding that by the time the next round of execution is ready the requested data has arrived. This is contrary to many conventional architectures, which stall on memory access. The architecture excels in data-walking schemes where subsequent memory access cannot be easily predicted and thus would not be well served by a conventional cache model. Threadstorm's principal architect was Burton J. Smith.
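The per-cycle stream interleaving described above can be sketched in software. The model below is a simplification under stated assumptions: it treats every instruction as taking exactly the 21-cycle pipeline latency and picks streams round-robin, whereas the real hardware selects among ready streams with its own policy and also waits on memory.

# Simplified model of Threadstorm-style barrel scheduling: one
# instruction issues per cycle from some ready stream, and a stream
# that has issued may not issue again until its instruction clears
# the 21-stage pipeline. Not Cray's implementation; illustrative only.

PIPELINE_DEPTH = 21
NUM_STREAMS = 128

ready_at = [0] * NUM_STREAMS  # earliest cycle each stream may issue
next_candidate = 0            # rotating round-robin pointer

def pick_stream(cycle):
    global next_candidate
    for k in range(NUM_STREAMS):
        s = (next_candidate + k) % NUM_STREAMS
        if ready_at[s] <= cycle:
            ready_at[s] = cycle + PIPELINE_DEPTH
            next_candidate = (s + 1) % NUM_STREAMS
            return s
    return None  # a wasted cycle; cannot happen with 128 streams >= 21

issue_trace = [pick_stream(c) for c in range(64)]
print(issue_trace)
# Streams issue in turn, and because 128 streams exceed the 21-cycle
# reissue delay, some stream is always ready: no cycle is left empty.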
Cray XMT2 Cray XMT2 (also "next-generation XMT" or simply XMT) is a scalable multithreaded shared memory supercomputer by Cray, based on the fourth generation of the Tera MTA architecture. Presented in 2011, it supersedes Cray XMT, which had issues with memory hotspots. It uses Threadstorm4 CPUs inside Cray XT5 blades and increases memory capacity eightfold to 512 TB and memory bandwidth trifold (300 MHz instead 200 MHz) compared to XMT by using twice the memory modules per node and DDR2. It introduces the Node Pair Link inter-Threadstorm connect, as well as memory-only nodes, with Threadstorm4 packages having their CPU and HyperTransport 1.x components disabled. The underlying scrambled content-addressable memory model has been inherited from XMT. XMT2 uses 2 additional EMS bits (full/empty and extended) instead of 4 as in XMT. Threadstorm4 Threadstorm4 (also "Threadstorm IV" and "Threadstorm 4.0") is a 64-bit single-core VLIW barrel processor (compatible with 1207-pin Socket F used by AMD Opteron processors) with 128 hardware streams, very similar to its predecessor, Threadstorm3. It features an improved, DDR2-capable memory controller and additional 8 trap registers per stream. Cray intentionally decided against a DDR3 controller, citing the reusing of existing Cray XT5 infrastructure and a shorter burst length than DDR3. Though the longer burst length could be compensated by higher speeds of DDR3, it would also require more power, which Cray engineers wanted to avoid. Scorpio After launching XMT, Cray researched a possible multi-core variant of the Threadstorm3, dubbed Scorpio. Most of Threadstorm3's features would be retained, including the multiplexing of many hardware streams onto an execution pipeline and the implementation of additional state bits for every 64-bit memory word. Cray later abandoned Scorpio, and the project yielded no manufactured chip. Future Development on Threadstorm4, as well as the whole MTA architecture, ended silently after XMT2, probably due to competition from commodity processors such as Intel's Xeon and possibly Xeon Phi, even though Cray never officially discontinued neither XMT nor XMT2. As of 2020, Cray has removed all customer documentation on both XMT and XMT2 from its online catalogue. Users Cray XMT2 was bought by several federal laboratories and academic facilities, as well as some commercial HPC clients: e.g. CSCS (2 TB global memory with 64 Threadstorm4 CPUs), Noblis CAHPC. Most of XMT and XMT2-based systems have been decommissioned by 2020. Notes References Xmt Supercomputers Computer-related introductions in 2005
Cray XMT
Technology
1,371
76,410,556
https://en.wikipedia.org/wiki/G299.2-2.9
G299.2-2.9 is a supernova remnant in the Milky Way, 16,000 light years from Earth. It is the remains of a Type Ia supernova. The observed radius of the remnant shell translates to approximately 4,500 years of expansion, making it one of the oldest observed Type Ia supernova remnants. Description G299.2-2.9 gives astronomers an opportunity to study how supernova remnants evolve and warp over time. G299.2-2.9 also provides a glimpse of the explosion that produced it. G299.2-2.9 is split into several distinct and different regions: an almost complete bubble interrupted only by a blow-out, a bright center, a complex "knot" region on the northeastern edge of the bubble structure, and a diffuse emission extending beyond the main structure. It has been heavily documented by multiple satellites and in-orbit telescopes, including the Hubble Space Telescope, the Spitzer Space Telescope, and Chandra. The faint X-ray emission from the deep portions of G299.2-2.9 shows large quantities of iron and silicon, which indicates that it is a remnant of a Type Ia supernova. The outer "shell" is large and complex, with a multi-shell structure. Outer shells similar to that of G299.2-2.9 are not usually associated with exploded stars. Since theories about Type Ia supernovae assume they go off in a specified environment, detailed studies of the outer "shell" of G299.2-2.9 have helped astronomers improve their understanding of the areas and situations where these thermonuclear explosions occur. Gallery References Supernovae Supernova remnants Musca
G299.2-2.9
Chemistry,Astronomy
348
1,791,712
https://en.wikipedia.org/wiki/Normality%20%28behavior%29
Normality is a behavior that can be normal for an individual (intrapersonal normality) when it is consistent with the most common behavior for that person. Normal is also used to describe individual behavior that conforms to the most common behavior in society (known as conformity). However, normal behavior is often only recognized in contrast to abnormality. In many cases normality is used to make moral judgements, such that normality is seen as good while abnormality is seen as bad, or conversely normality can be seen as boring and uninteresting. Someone being seen as normal or not normal can have social ramifications, such as being included, excluded or stigmatized by wider society. Measuring Many difficulties arise in measuring normal behaviors—biologists come across parallel issues when defining normality. One complication that arises regards whether 'normality' is used correctly in everyday language. People say "this heart is abnormal" if only a portion of it is not working correctly, yet it may be inaccurate to include the entirety of the heart under the description of 'abnormal'. There can be a difference between the normality of a body part's structure and its function. Similarly, a behavioral pattern may not conform to social norms, but still be effective and non-problematic for that individual. Where there is a dichotomy between the appearance and the function of a behavior, it may be difficult to measure its normality. This is applicable when trying to diagnose a pathology and is addressed in the Diagnostic and Statistical Manual of Mental Disorders. Statistical normality In general, 'normal' refers to a lack of significant deviation from the average. The word normal is used in a narrower sense in mathematics, where a normal distribution describes a population whose characteristics center around the average or the norm. When looking at a specific behavior, such as the frequency of lying, a researcher may use a Gaussian bell curve to plot all reactions, and a normal reaction would be within one standard deviation, or the most average 68.3%. However, this mathematical model only holds for one particular trait at a time, since, for example, the probability of a single individual being within one standard deviation of the mean for 36 independent variables would be one in a million. In statistics, normal is often arbitrarily considered anything that falls within about 1.96 standard deviations of the mean, i.e. the most average 95%. The probability of an individual being within 1.96 standard deviations for 269 independent variables is approximately one in a million. For only 59 independent variables, the probability is just under 5%. Under this definition of normal, it is thus statistically abnormal for an individual to fall within the normal range on all 59 independent variables at once (a short numerical check of these figures appears below). Sociology Durkheim In his Rules of the Sociological Method, French sociologist Émile Durkheim indicates that it is necessary for the sociological method to offer parameters in order to distinguish normality from pathology or abnormality. He suggests that behaviors, or social facts, which are present in the majority of cases are normal, and exceptions to that behavior indicate pathology. Durkheim's model of normality further explains that the most frequent or general behaviors, and thus the most normal behaviors, will persist through transition periods in society. Crime, for instance, should be considered normal because it exists in every society through every time period.
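As a numerical check of the figures in the statistical-normality passage above (a sketch only; the independence of the variables is the passage's own assumption):

# Checking the multivariate probabilities quoted above, assuming
# independent variables as the text stipulates.
p_1sd = 0.683    # chance of falling within 1 standard deviation
p_95 = 0.95      # chance of falling within 1.96 standard deviations

print(p_1sd ** 36)   # ~1.1e-06: "normal" on 36 traits is about one in a million
print(p_95 ** 269)   # ~1.0e-06: the same order with the looser 95% criterion
print(p_95 ** 59)    # ~0.048: just under 5% for 59 independent variables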
There is a two-fold version of normality; behaviors considered normal on a societal level may still be considered pathological on an individual level. On the individual level, people who violate social norms, such as criminals, will invite a punishment from others in the society. Social norms An individual's behaviors are guided by what they perceive to be society's expectations and their peers' norms. People measure the appropriateness of their actions by how far away they are from those social norms. However, what is perceived as the norm may or may not actually be the most common behavior. In some cases of pluralistic ignorance, most people wrongly believe the social norm is one thing, but in fact very few people hold that view. When people are made more aware of a social norm, particularly a descriptive norm (i.e., a norm describing what is done), their behavior changes to become closer to that norm. The power of these norms can be harnessed by social norms marketing, where the social norm is advertised to people in an attempt to stop extreme behavior, such as binge drinking. However, people at the other extreme (very little alcohol consumption) are equally likely to change their behavior to become closer to the norm, in this case by increasing alcohol consumption. Instead of using descriptive norms, more effective social norms marketing may use injunctive norms which, instead of describing the most common behavior, outline what is approved or disapproved of by society. When individuals become aware of the injunctive norm, only the extremes will change their behavior (by decreasing alcohol consumption) without the boomerang effect of under-indulgers increasing their drinking. The social norms that guide people are not always normal for everyone. Behaviors that are abnormal for most people may be considered normal for a subgroup or subculture. For example, normal college student behavior may be to party and drink alcohol, but for a subculture of religious students, normal behavior may be to go to church and pursue religious activities. Subcultures may actively reject "normal" behavior, instead replacing society norms with their own. What is viewed as normal can change dependent on both timeframe and environment. Normality can be viewed as "an endless process of man's self-creation and his reshaping of the world." Within this idea, it is possible to surmise that normality is not an all-encompassing term, but simply a relative term based around a current trend in time. With statistics, this is likened to the thought that if the data gathered provides a mean and standard deviation, over time these data that predict "normalness" start to predict or dictate it less and less since the social idea of normality is dynamic. This is shown in studies done on behavior in both psychology and sociology where behavior in mating rituals or religious rituals can change within a century in humans, showing that the "normal" way that these rituals are performed shifts and a new procedure becomes the normal one. Since normality shifts in time and environment, the mean and standard deviation are only useful for describing normality from the environment from which they are collected. Sexual behavior As another example, understandings of what is normal sexual behavior varies greatly across time and place. In many countries, perceptions on sexuality are largely becoming more liberal, especially views on the normality of masturbation and homosexuality. 
Social understanding of normal sexual behavior also varies greatly country by country; countries can be divided into categories by how they approach sexual normality: as conservative, homosexual-permissive, or liberal. The United States, Ireland, and Poland have a more conservative social understanding of sexuality among university students, while Scandinavian students consider a wider variety of sexual acts as normal. Although some attempts have been made to define sexual acts as normal, abnormal, or indeterminate, these definitions are time-sensitive. Gayle Rubin's 1980s model of sexual 'normality' was comprehensive at the time but has since become outdated as society has liberalized. Regulation A disharmony exists between a virtual identity of the self and a real social identity, whether it be in the form of a trait or attribute. If a person does not have this disharmony, then he or she is described as normal. A virtual identity can take many definitions, but in this case a virtual identity is the identity that persons mentally create to conform to societal standards and norms; it may not represent how they actually are, but it represents what they believe the typical "normal" person is. A real social identity is the identity that persons actually have in their society or are perceived, by themselves or others, to have. If these two identities have differences between each other, there is said to be disharmony. Individuals may monitor and adapt their behavior in terms of others' expected perceptions of the individual, which is described by the social psychology theory of self-presentation. In this sense, normality exists based on societal norms, and whether someone is normal is entirely up to how he or she views him- or herself in contrast to how society views him or her. While trying to define and quantify normality is a good start, all definitions confront the problem of whether we are describing an idea that even exists, since there are so many different ways of viewing the concept. Effects of labeling When people do not conform to the normal standard, they are often labelled as sick, disabled, abnormal, or unusual, which can lead to marginalization or stigmatization. Most people want to be normal and strive to be perceived as such, so that they can relate to society at large. Without having things in common with the general population, people may feel isolated among society. The abnormal person feels like they have less in common with the normal population, and others have difficulty relating to things that they have not experienced themselves. Additionally, abnormality may make others uncomfortable, further separating the abnormally labelled individual. Since being normal is generally considered an ideal, there is often pressure from external sources to conform to normality, as well as pressure from people's intrinsic desire to feel included. For example, families and the medical community will try to help disabled people live a normal life. However, the pressure to appear normal, while actually having some deviation, creates a conflict—sometimes someone will appear normal while actually experiencing the world differently or struggling. When abnormality makes society feel uncomfortable, it is the exceptional person themselves who will laugh it off to relieve social tension. A disabled person is given normal freedoms, but may not be able to show negative emotions. Lastly, society's rejection of deviance and the pressure to normalize may cause shame in some individuals.
Abnormalities may not be included in an individual's sense of identity, especially if they are unwelcome abnormalities. When an individual's abnormality is labelled as a pathology, it is possible for that person to take on both elements of the sick role and the stigmatization that follows some illnesses. Mental illness, in particular, is largely misunderstood by the population and often overwhelms others' impression of the patient. Intrapersonal normality Most definitions of normality consider interpersonal normality, the comparison of many different individuals' behaviors to distinguish normality from abnormality. Intrapersonal normality looks at what is normal behavior for one particular person (consistency within a person) and would be expected to vary from person to person. A mathematical model of normality could still be used for intrapersonal normality, by taking a sample of many different occurrences of behavior from one person over time. Also like interpersonal normality, intrapersonal normality may change over time, due to changes in the individual as they age and due to changes in society (since society's view of normality influences individual people's behavior). It is most comfortable for people to engage in behavior which conforms to their own personal habitual norms. When things go wrong, people are more likely to attribute the negative outcome to any abnormal behavior leading up to the mishap. After a car crash, people may say "if only I hadn't left work early," blaming the crash on their actions which were not normal. This counterfactual thinking particularly associates abnormal behavior with negative outcomes. Behavioral normality In medicine, behavioral normality pertains to a patient's mental condition aligning with that of a model, healthy patient. A person without any mental illness is considered a normal patient, whereas a person with a mental disability or illness is viewed as abnormal. These normals and abnormals in the context of mental health subsequently create negative stigmatic perceptions towards individuals with mental illness. According to the Brain & Behavior Research Foundation, "an estimated 26.2 percent of Americans ages 18 and older—about 1 in 4 adults—suffer from one or more of (several) disorders in a given year." Though the population of American individuals living with mental illness is not as small a minority as commonly perceived, it is considered abnormal nonetheless, and is therefore the subject of discrimination and abuse, such as violent therapies, punishments, or lifelong labeling, by the normal, healthy majority. The CDC reported that "cluster[s] of negative attitudes and beliefs motivate the general public to fear, reject, avoid, and discriminate against people with mental illnesses." Furthermore, the resources available to those who suffer from such illness are limited, and government support is constantly being cut from programs that help individuals living with mental illness live more comfortable, accommodated, happier lives. Neuronal and synaptic normality Hebbian associative learning and memory maintenance depend on synaptic normalization mechanisms to prevent synaptic runaway. Synaptic runaway describes an overcrowding of dendritic associations, which reduces sensory or behavioral acuteness in proportion to the level of synaptic runaway. Synaptic/neuronal normalization refers to synaptic competition, where the prospering of one synapse may weaken the efficacy of other nearby synapses with redundant neurotransmission.
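A minimal numerical sketch of runaway versus normalized Hebbian growth follows. The normalization used here is an Oja-style decay term, which is one classic proposal; the text above does not commit to a specific mechanism, so treat this purely as an illustration.

# Pure Hebbian updates let a synaptic weight grow without bound
# (runaway), while an Oja-style normalization term keeps it bounded.
# Illustrative only; parameter values are arbitrary.
import random

random.seed(0)
eta = 0.05                 # learning rate
w_hebb = w_oja = 0.5       # initial synaptic weights

for _ in range(2000):
    x = random.gauss(0.0, 1.0)                 # presynaptic activity
    w_hebb += eta * x * (w_hebb * x)           # pure Hebb: dw = eta*x*y
    y = w_oja * x
    w_oja += eta * (x * y - y * y * w_oja)     # Oja: decay term ~ y^2 * w

print(w_hebb)  # astronomically large: runaway potentiation
print(w_oja)   # close to 1: the decay term bounds the weight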
Animal dendritic density greatly increases throughout waking hours despite the intrinsic normalization mechanisms described above. The growth rate of synaptic density is not sustained in a cumulative fashion. Without a pruning state, the signal-to-noise ratio of CNS mechanisms would not be able to operate with maximum effectiveness, and learning would be detrimental to animal survival. Neuronal and synaptic normalization mechanisms must operate so that positive association feedback loops do not become rampant while new environmental information is constantly being processed. Some researchers speculate that the slow oscillation (nREM) cycles of animal sleep constitute an essential 're-normalization' phase. The re-normalization occurs through large-amplitude cortical brain rhythms in the low delta range (0.5–2 Hz), which synaptically downscale the associations from the wakeful learning state. Only the strongest associations survive the pruning of this phase. This allows retention of salient information coding from the previous day, but also frees cortical space and energy distribution so that effective learning can continue after a slow-wave oscillation episode of sleep. Also, organisms tend to have a normal biological developmental pathway as a central nervous system ages and/or learns. Deviations from a species' normal development will frequently result in behavioral dysfunction, or death, of that organism. Clinical normality Applying normality clinically depends on the field and situation a practitioner is in. In the broadest sense, clinical normality is the idea of uniformity of physical and psychological functioning across individuals. Psychiatric normality, in a broad sense, holds that psychopathologies are disorders that are deviations from normality. Normality, and abnormality, can be characterized statistically. Related to the previous definition, statistical normality is usually defined in terms of a normal distribution curve, with the so-called 'normal zone' commonly accounting for 95.45% of all the data. The remaining 4.55% will lie split outside of two standard deviations from the mean. Thus any variable case that lies outside of two standard deviations from the mean would be considered abnormal. However, the critical value of such statistical judgments may be subjectively altered to a less conservative estimate. It is in fact normal for a population to have a proportion of abnormals. The presence of abnormals is important because it is necessary to define what 'normal' is, as normality is a relative concept. So at a group, or macro, level of analysis, abnormalities are normal given a demographic survey, while at an individual level abnormal individuals are seen as being deviant in some way that needs to be corrected. Statistical normality is important in determining demographic pathologies. When a variable rate, such as virus spread within a human population, exceeds its normal infection rate, then preventative or emergency measures can be introduced. However, it is often impractical to apply statistical normality to diagnose individuals. Symptom normality is the current, and assumed most effective, way to assess patient pathology. DSM Normality, as a relative concept, is intrinsically involved with contextual elements. As a result, clinical disorder classification faces particular challenges in discretely distinguishing 'normal' constitutions from true disorders.
The Diagnostic and Statistical Manual of Mental Disorders (DSM) has been the psychiatric profession's official classification manual of mental disorders since its first version (DSM-I) was published by the American Psychiatric Association in 1952. As the DSM evolved into its current version (DSM-5) in late 2013, there were numerous conflicts over the proposed boundary between mental illness and normal mentality. In his book Saving Normal, Allen Frances, who chaired the task force for content in the DSM-IV and DSM-IV-TR, wrote a scathing indictment of the pressures bearing on the definition of "normal" relative to psychological constructs and mental illness. Most of this difficulty stems from the DSM's ambiguity between natural reactions to contextual stressors and individual dysfunction. There have been some key developments in the DSM's history that attempted to integrate aspects of normality into proper diagnostic classification. As a diagnostic manual for the classification of abnormalities, every edition of the DSM has been biased towards classifying symptoms as disorders by emphasizing symptoms in isolation. The result is a pervasive misdiagnosis of symptoms that may be normal and appropriate when judged in context. DSM-II The second edition of the DSM could not be effectively applied because of its vague descriptive nature. Psychodynamic etiology was a strong theme in its classification of mental illnesses. The applied definitions became idiosyncratic, stressing individual unconscious roots. This made application of the DSM unreliable across psychiatrists. No distinction between abnormal and normal was established. Evidence of this classification ambiguity was punctuated by the Rosenhan experiment of 1972, which demonstrated that the methodology of psychiatric diagnosis could not effectively distinguish normal from disordered mentalities. DSM-II labelled 'excessive' behavioral and emotional responses as an index of abnormal mental wellness in diagnosing some particular disorders. 'Excessiveness' of a reaction implied an alternative normal response, which required including situational factors in the evaluation. As an example, a year of intense grief after the death of a spouse may be a normal, appropriate response, whereas intense grief lasting twenty years would be indicative of a mental disorder. Likewise, grieving intensely over the loss of a sock would not be considered normal responsiveness and would indicate a mental disorder. The consideration of proportionality to stimuli was a perceived strength of psychiatric diagnosis in the DSM-II. Another characteristic of the DSM-II systemization was that it classified homosexuality as a mental disorder. Thus, homosexuality was psychiatrically defined as a pathological deviation from "normal" sexual development. In the 7th printing of the DSM-II, "homosexuality" was replaced with "sexual orientation disturbance." The intent was to have a label that applied only to homosexual individuals who were bothered by their sexual orientation. In this manner homosexuality would not in itself be viewed as an atypical mental disorder; only if it was distressing would it be classified as a mental illness. However, the DSM-II did not state that homosexuality was normal, either, and a diagnosis of distress related to one's sexual orientation was retained, under different names, in all editions of the DSM until the DSM-5 in 2013. DSM-III DSM-III was an attempt to re-establish the credibility of psychiatry as a scientific discipline after the opprobrium resulting from DSM-II.
The reduction in the psychodynamic etiologies of DSM-II spilled over into a reduction in symptom etiology altogether. DSM-III thus offered a specific set of definitions of mental illnesses, and entities more suited to diagnostic psychiatry, but it discarded response proportionality as a classification factor. The result was that all symptoms, whether proportionate normal responses or inappropriate pathological tendencies, could be treated alike as potential signs of mental illness. DSM-IV DSM-IV explicitly distinguishes mental disorders from non-disordered conditions. A non-disordered condition results from, and is perpetuated by, social stressors. Included in DSM-IV's classification is that a mental disorder "must not be merely an expectable and culturally sanctioned response to a particular event, for example, the death of a loved one. Whatever its original cause, it must currently be considered a manifestation of a behavioral, psychological, or biological dysfunction in the individual" (American Psychiatric Association 2000:xxxi). This supposedly injected consideration of normality back into the DSM after its removal in DSM-III. However, it has been argued that DSM-IV still does not escape the problems DSM-III faced: psychiatric diagnoses still count symptoms of expectable responses to stressful circumstances as signs of disorder, alongside symptoms of genuinely individual dysfunction. The example set by DSM-III of principally symptom-based disorder classification has become the norm of mental diagnostic practice. DSM-5 The DSM-5 was released in the second half of 2013. It has significant differences from DSM-IV-TR, including the removal of the multi-axial classification system and the reconfiguration of the Asperger's/autistic spectrum classifications. Criticisms of diagnostics Since the advent of DSM-III, the subsequent editions of the DSM have all included a heavily symptom-based system of pathology diagnosis. Although there have been some attempts to incorporate environmental factors into mental and behavioral diagnostics, many practitioners and scientists believe that the most recent DSMs are misused. The symptom bias makes diagnosis quicker and easier, allowing practitioners to increase their clientele: symptoms are easier to classify and deal with than the life or event histories that may have evoked a temporary, normal mental state in reaction to a patient's environmental circumstances. The easy-to-use manual has not only increased the perceived need for mental health care, stimulating funding for mental health care facilities, but has also had a global impact on marketing strategies. Many pharmaceutical commercials list symptoms such as fatigue, depression, or anxiety. However, such symptoms are not necessarily abnormal, and can be appropriate responses to occurrences such as the loss of a loved one. The targets of such ads in these cases do not need medication and can naturally overcome their grief, but with such an advertising strategy pharmaceutical companies can greatly expand their market. See also Abnormality (behavior) Attitude change Deviance (sociology) Eccentricity (behavior) Social norm Masking (behavior) References External links Lochrie, Karma, "Desiring Foucault", Journal of Medieval and Early Modern Studies, Volume 27, Number 1, Winter 1997, pp. 3–16 Is It Normal?
A community question and answer forum based specifically around surveys to determine the normality of various behaviors or thoughts Human behavior Social constructionism Behavioural sciences Stereotypes
Normality (behavior)
Biology
4,750
1,944,758
https://en.wikipedia.org/wiki/Interstellar%20probe
An interstellar probe is a space probe that has left—or is expected to leave—the Solar System and enter interstellar space, which is typically defined as the region beyond the heliopause. The term also refers to probes capable of reaching other star systems. As of 2024, there are five interstellar probes, all launched by the American space agency NASA: Voyager 1, Voyager 2, Pioneer 10, Pioneer 11 and New Horizons. Also as of 2024, Voyager 1 and Voyager 2 are the only probes to have actually reached interstellar space; the other three are on interstellar trajectories. Contact with Pioneer 10 and 11 was lost long before they reached interstellar space. The termination shock is the point in the heliosphere where the solar wind slows to subsonic speed. Even though the termination shock occurs as close as 80–100 AU (astronomical units), the maximum extent of the region in which the Sun's gravitational field is dominant (the Hill sphere) is thought to lie much farther out; this point is close to the nearest known star system, Alpha Centauri, located 4.36 light years away. Although the probes will be under the influence of the Sun for a long time, their velocities far exceed the Sun's escape velocity, so they are leaving the Solar System forever. Interstellar space is defined as the space beyond a magnetic region that extends about 122 AU from the Sun, as detected by Voyager 1, and the equivalent region of influence surrounding other stars. Voyager 1 entered interstellar space in 2012. Currently, three projects are under consideration: CNSA's Shensuo, NASA's Interstellar Probe, and StarChip from the Breakthrough Initiatives. Overview Planetary scientist G. Laughlin noted that, with current technology, a probe sent to Alpha Centauri would take 40,000 years to arrive, but expressed hope for new technology to be developed to make the trip within a human lifetime. On that timescale, the stars move notably; in 40,000 years, for example, Ross 248 will be closer to Earth than Alpha Centauri. One technology that has been proposed to achieve higher speeds is an E-sail. By harnessing the solar wind, it might be possible to achieve 20–30 AU per year without even using propellant. List of interstellar probes Functional spacecraft Voyager 1 (1977–) Voyager 1 is a space probe launched by NASA on September 5, 1977. It is the farthest human-made object from Earth. It was later estimated that Voyager 1 crossed the termination shock on December 16, 2004 at a distance of 94 AU from the Sun. At the end of 2011, Voyager 1 entered and discovered a stagnation region where charged particles streaming from the Sun slow and turn inward, and the Solar System's magnetic field doubles in strength as interstellar space appears to apply pressure. Energetic particles originating in the Solar System declined by nearly half, while the detection of high-energy electrons from outside increased 100-fold. The inner edge of the stagnation region is located approximately 113 astronomical units (AU) from the Sun. In 2013 it was thought that Voyager 1 had crossed the heliopause and entered interstellar space on August 25, 2012 at a distance of 121 AU from the Sun, making it the first known human-manufactured object to do so. The probe was then moving with a velocity relative to the Sun of about 16.95 km/s (3.58 AU/year). If it does not hit anything, Voyager 1 could reach the Oort cloud in about 300 years. Voyager 2 (1977–) Voyager 2 crossed the heliopause and entered interstellar space on November 5, 2018.
It had previously passed the termination shock into the heliosheath on August 30, 2007. Voyager 2 continues to recede from Earth. The probe was moving at a velocity of 3.25 AU/year (15.428 km/s) relative to the Sun on its way to interstellar space in 2013. Voyager 2 is expected to provide the first direct measurements of the density and temperature of the interstellar plasma. New Horizons (2006–) New Horizons was launched directly into a hyperbolic escape trajectory, getting a gravitational assist from Jupiter en route. By March 7, 2008, New Horizons was 9.37 AU from the Sun and traveling outward at 3.9 AU per year. It will, however, slow to an escape velocity of only 2.5 AU per year as it moves away from the Sun, so it will never catch up to either Voyager. As of early 2011, it was traveling at 3.356 AU/year (15.91 km/s) relative to the Sun. On July 14, 2015, it completed a flyby of Pluto at a distance of about 33 AU from the Sun. New Horizons next encountered 486958 Arrokoth on January 1, 2019, at about 43.4 AU from the Sun. The heliosphere's termination shock was crossed by Voyager 1 at 94 astronomical units (AU) and by Voyager 2 at 84 AU, according to the IBEX mission. When New Horizons reaches comparable distances, it will be traveling more slowly than Voyager 1 was at the same distance. Inactive missions Pioneer 10 (1972–2003) The last successful reception of telemetry from Pioneer 10 was on April 27, 2002, when it was at a distance of 80.22 AU, and the last signal from the spacecraft was received on January 23, 2003, at a distance of 82 AU from the Sun, traveling at about 2.54 AU/year (12 km/s). Pioneer 11 (1973–1995) Routine mission operations for Pioneer 11 were stopped on September 30, 1995, when it was 6.5 billion km (approximately 43.4 AU) from Earth, traveling at about 2.4 AU/year (11.4 km/s). Probe debris New Horizons' third stage, a STAR-48 booster, is on a similar escape trajectory out of the Solar System as New Horizons, but will pass millions of kilometers from Pluto. It crossed Pluto's orbit in October 2015. The third-stage rocket boosters for Pioneer 10, Voyager 1, and Voyager 2 are also on escape trajectories out of the Solar System. Proposed missions StarChip In April 2016, Breakthrough Initiatives announced Breakthrough Starshot, a program to develop a proof-of-concept fleet of small, centimeter-sized light sail spacecraft, named StarChip, capable of making the journey to Alpha Centauri, the nearest star system, at speeds of 20% or 15% of the speed of light, taking between 20 and 30 years, respectively, to reach the star system, and about 4 years to notify Earth of a successful arrival. Shensuo (2019–) A CNSA space mission, first proposed in 2019, would be launched in 2024 with the intention of researching the heliosphere. Both probes would use gravity assists at Jupiter and fly by Kuiper belt objects, and the second is also planned to fly by Neptune and Triton. Another goal is to reach 100 AU from the Sun by 2049, the centennial of the founding of the People's Republic of China. Interstellar Probe (ISP) (2018–) A NASA-funded study, led by the Applied Physics Laboratory, on possible options for an interstellar probe. The nominal concept would launch on an SLS in the 2030s. It would perform either a fast Jupiter flyby, a powered Jupiter flyby, or a very close perihelion pass with a propulsive maneuver, and reach a distance of 1000–2000 AU (93–186 billion miles; about 1.5–3% of one light-year) within 50 years.
Possibilities for planetary, astrophysical and exoplanet science along the way are also being investigated. Interstellar Heliopause Probe (IHP) (2006) A technology reference study published in 2006 by the ESA proposed an interstellar probe focused on leaving the heliosphere. The goal would be to reach 200 AU in 25 years, using a conventional launch but acceleration by a solar sail. The roughly 200–300 kg probe would carry a suite of several instruments, including a plasma analyzer, a plasma radio wave experiment, a magnetometer, a neutral and charged atom detector, a dust analyzer, and a UV photometer. Electrical power would come from an RTG. Innovative Interstellar Explorer (2003) A NASA proposal to send a 35 kg science payload out to at least 200 AU. It would achieve a top speed of 7.8 AU per year using a combination of a heavy-lift rocket, a Jupiter gravity assist, and an ion engine powered by standard radioisotope thermoelectric generators. The proposal suggested a launch in 2014 (to take advantage of the Jupiter gravity assist), reaching 200 AU around 2044. Realistic Interstellar Explorer and Interstellar Explorer (2000–2002) These studies suggested various technologies, including an americium-241-based RTG, optical communication (as opposed to radio), and low-power semi-autonomous electronics. The trajectory uses a Jupiter gravity assist and a solar Oberth maneuver to achieve 20 AU/year, allowing 1000 AU to be reached within 50 years, with a mission extension up to 20,000 AU and 1000 years. Needed technology included advanced propulsion and a solar shield for the perihelion burn around the Sun. Solar thermal propulsion (STP), nuclear fission thermal propulsion (NTP), and nuclear fission pulse propulsion, as well as various RTG isotopes, were examined. The studies also included recommendations for a solar probe (see also Parker Solar Probe), nuclear thermal technology, a solar sail probe, a 20 AU/year probe, and a long-term vision of a 200 AU/year probe to the star Epsilon Eridani. The "next step" interstellar probe in this study suggested a 5 megawatt fission reactor utilizing 16 metric tonnes of H2 propellant. Targeting a launch in the mid-21st century, it would accelerate to 200 AU/year over 4200 AU and reach the star Epsilon Eridani after 3400 years of travel, in the year 5500 AD. However, this was a second-generation vision for a probe, and the study acknowledged that even 20 AU/year might not be possible with then-current (2002) technology. For comparison, the fastest probe at the time of the study was Voyager 1 at about 3.6 AU/year (17 km/s) relative to the Sun. Interstellar Probe (1999) Interstellar Probe was a solar sail propulsion spacecraft proposed by NASA's Jet Propulsion Laboratory. It was planned to reach as far as 200 AU within 15 years at a speed of 14 AU/year (about 70 km/s) and to function out to 400+ AU. A critical technology for the mission was a large solar sail with an areal density of 1 g/m2. TAU mission (1987) TAU (Thousand Astronomical Units) was a proposed nuclear electric rocket craft that would have used a 1 MW fission reactor and an ion drive with a burn time of about 10 years to reach a speed of 106 km/s (about 20 AU/year) and so achieve a distance of 1000 AU in 50 years. The primary goal of the mission was to improve parallax measurements of the distances to stars inside and outside our galaxy, with secondary goals being the study of the heliopause, measurements of conditions in the interstellar medium, and (via communications with Earth) tests of general relativity.
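The speeds above are quoted interchangeably in km/s and AU/year, and mission concepts are compared by travel time. A brief sketch in Python shows the arithmetic; the conversion constants and the ~4.37 light-year distance to Alpha Centauri are standard reference values assumed here, not taken from this article.

AU_KM = 149_597_870.7        # kilometres per astronomical unit
YEAR_S = 365.25 * 86_400     # seconds per Julian year

def kms_to_au_per_year(v_kms: float) -> float:
    """Convert a speed in km/s to AU/year."""
    return v_kms * YEAR_S / AU_KM

# Voyager 1: the quoted ~16.95 km/s should match the quoted ~3.58 AU/year.
print(f"Voyager 1: {kms_to_au_per_year(16.95):.2f} AU/year")

# Breakthrough Starshot: Alpha Centauri lies ~4.37 light-years away.
for frac in (0.20, 0.15):    # assumed cruise speeds as fractions of c
    print(f"{4.37 / frac:.0f} years at {frac:.0%} of light speed")

The first line reproduces the 3.58 AU/year quoted for Voyager 1, and the loop recovers the 20-to-30-year range given for Starshot at 20% and 15% of light speed.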
Mission concepts Project Orion (1958–1965) Project Orion was a proposed nuclear pulse propulsion craft that would have used fission or fusion bombs to apply motive force. The design was studied during the 1950s and 1960s in the United States of America, with one variant of the craft capable of interstellar travel. Bracewell probe (1960) A proposal for interstellar communication via a probe, as opposed to sending an electromagnetic signal. Sänger Photon Rocket (1950s–1964) Eugen Sänger proposed a spacecraft powered by antimatter in the 1950s. Thrust was intended to come from reflected gamma rays produced by electron–positron annihilation. Enzmann starship (1964/1973) Proposed by 1964 and examined in an October 1973 issue of Analog, the Enzmann starship would have used a 12,000-ton ball of frozen deuterium to power thermonuclear pulse propulsion. About twice as long as the Empire State Building and assembled in orbit, the spacecraft was part of a larger project preceded by large interstellar probes and telescopic observation of target star systems. Project Daedalus (1973–1978) Project Daedalus was a proposed nuclear pulse propulsion craft that used inertial confinement fusion of small pellets within a magnetic field nozzle to provide motive force. The design was studied during the 1970s by the British Interplanetary Society, and was meant to fly by Barnard's Star in under a century from launch. Plans included mining helium-3 from Jupiter, with a pre-launch mass of over 50 thousand metric tonnes assembled in orbit. Project Longshot (1987–1988) Project Longshot was a proposed nuclear pulse propulsion craft that used inertial confinement fusion of small pellets within a magnetic field nozzle to provide motive force, in a manner similar to that of Project Daedalus. The design was studied in the late 1980s by NASA and the US Naval Academy. The craft was designed to reach and study Alpha Centauri. Starwisp (1985) Starwisp is a hypothetical unmanned interstellar probe design proposed by Robert L. Forward. It is propelled by a microwave sail, similar to a solar sail in concept, but powered by microwaves from an artificial source. Medusa (1990s) Medusa was a novel spacecraft design, proposed by Johndale C. Solem, using a large lightweight sail (spinnaker) driven by pressure pulses from a series of nuclear explosions. The design, published by the British Interplanetary Society, was studied during the 1990s as a means of interplanetary travel. Starseed launcher (1996) The Starseed launcher was a concept for launching microgram interstellar probes at up to 1/3 of light speed. AIMStar (1990s–2000s) AIMStar was a proposed antimatter-catalyzed nuclear pulse propulsion craft that would use clouds of antiprotons to initiate fission and fusion within fuel pellets. A magnetic nozzle would derive motive force from the resulting explosions. The design was studied during the 1990s by Penn State University. The craft was designed to reach a distance of 10,000 AU from the Sun in 50 years. Project Icarus (2009+) Project Icarus is a theoretical study for an interstellar probe run under the guidance of the Tau Zero Foundation (TZF) and the British Interplanetary Society (BIS), and was motivated by Project Daedalus, a similar study conducted between 1973 and 1978 by the BIS. The project was planned to take five years and began on September 30, 2009.
Project Dragonfly (2014+) In 2014, the Initiative for Interstellar Studies (i4is) initiated Project Dragonfly, a project working on small interstellar spacecraft propelled by a laser sail. Four student teams worked on concepts for such a mission in 2014 and 2015 in the context of a design competition. Breakthrough Starshot (2016+) In 2016, the Breakthrough Initiatives announced a program to develop a fleet of lightweight light-sail probes for interstellar travel, aiming to make the journey to Alpha Centauri. This research program, with an initial funding of US$100 million, envisions accelerating the probes to about 15% or 20% of the speed of light, resulting in a travel time of between 20 and 30 years. Geoffrey A. Landis has proposed an interstellar probe concept for future technology in which energy would be supplied from an external source (a laser at a base station) to power an ion thruster. Trans-Neptunian probes at precursor distances In the early 2000s many new, relatively large planetary bodies were found beyond Pluto, with orbits extending hundreds of AU out past the heliosheath (90–1000 AU). The NASA probe New Horizons may explore this area now that it has performed its Pluto flyby in 2015 (Pluto's orbit ranges from about 29–49 AU). Some of these large objects past Pluto include 136199 Eris, 136108 Haumea, 136472 Makemake, and 90377 Sedna. Sedna comes as close as 76 AU, but travels out as far as 961 AU at aphelion, and at least one other minor planet goes out past 1060 AU at aphelion. Bodies like these affect how the Solar System is understood, and traverse an area previously only in the domain of interstellar missions or precursor probes. Since these discoveries, the area is also in the domain of interplanetary probes; some of the discovered bodies may become targets for exploration missions, an example of which is preliminary work on a probe to Haumea and its moons (at 35–51 AU). Probe mass, power source, and propulsion systems are key technology areas for this type of mission. In addition, a probe beyond 550 AU could use the Sun itself as a gravitational lens to observe targets outside the Solar System, such as planetary systems around other nearby stars, although many challenges to this mission have been noted (a short numeric check of the 550 AU figure follows below). Interstellar messages See also Interstellar Boundary Explorer (IBEX), a space observatory that measured energetic neutral atoms from the interstellar boundary. List of artificial objects leaving the Solar System List of nearest stars and brown dwarfs Local Interstellar Cloud and Local Bubble Interplanetary spaceflight Interstellar travel Intergalactic travel References Further reading NASA's Interstellar Probe Mission (1999) (.pdf) An Interstellar Probe Mission to the Boundaries of the Heliosphere and Nearby Interstellar Space (.pdf) Leonard David – Reaching for interstellar flight (2003) – MSNBC Ralph L. McNutt, et al. – A Realistic Interstellar Explorer (2000) – Johns Hopkins University (.pdf) Ralph L. McNutt, et al. – Interstellar Explorer (2002) – Johns Hopkins University (.pdf) McNutt, et al. – Radioisotope Electric Propulsion (2006) – NASA Glenn Research Center (includes Centaur orbiter mission) Scott W. Benson – Solar Power for Outer Planets Study (2007) – NASA Glenn Research Center (with SEP booster) External links Spacecraft escaping the Solar System List of interstellar spaceships and probes NASA – Interstellar Probe (2002-era study) Voyager mission website (NASA) Probe Space probes
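As referenced in the trans-Neptunian section above, the minimum focal distance of the solar gravitational lens follows from the thin-lens estimate d = R²c²/(4GM). A minimal sketch in Python, assuming standard solar constants (none of which appear in this article):

# Focal-distance estimate for the solar gravitational lens:
# d = R**2 * c**2 / (4 * G * M), with standard solar constants.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # mass of the Sun, kg
R_SUN = 6.957e8      # radius of the Sun, m
AU = 1.496e11        # astronomical unit, m

d = R_SUN**2 * C**2 / (4 * G * M_SUN)
print(f"{d / AU:.0f} AU")    # ~548 AU, matching the ~550 AU quoted above

The result, about 548 AU, is the closest point at which light grazing the solar limb is focused, which is why proposals place a lens-observing probe beyond roughly 550 AU.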
Interstellar probe
Astronomy
3,816
53,028,511
https://en.wikipedia.org/wiki/HD%2036780
HD 36780 is a star located in the region of Orion's Belt, in the equatorial constellation of Orion. It has an orange hue and is dimly visible to the naked eye with an apparent visual magnitude of +5.92. The distance to this object is approximately 534 light years based on parallax. It is drifting away from the Sun with a radial velocity of 84 km/s, having made its closest approach some 2.1 million years ago. This is an aging giant star with a stellar classification of K4 III. After exhausting the supply of hydrogen at its core, the star cooled and expanded off the main sequence. At present it has around 31 times the girth of the Sun. It is radiating 243 times the luminosity of the Sun from its swollen photosphere at an effective temperature of 4,108 K (the quoted radius, temperature, and luminosity are mutually consistent; see the check below). References K-type giants Orion (constellation) Durchmusterung objects 036780 026108 1874
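As a quick consistency check of the values quoted above for HD 36780, the Stefan–Boltzmann relation L/Lsun = (R/Rsun)² × (T/Tsun)⁴ can be evaluated. A minimal sketch in Python, assuming the standard solar effective temperature of 5,772 K (an assumed reference value, not taken from the article):

# Stefan-Boltzmann consistency check: L/Lsun = (R/Rsun)**2 * (T/Tsun)**4
R_RATIO = 31.0      # radius in solar radii (from the article)
T_EFF = 4108.0      # effective temperature in kelvins (from the article)
T_SUN = 5772.0      # solar effective temperature, standard assumed value

luminosity = R_RATIO**2 * (T_EFF / T_SUN)**4
print(f"{luminosity:.0f} times solar")   # ~247, close to the quoted 243

The computed value of roughly 247 solar luminosities agrees with the quoted 243 to within the rounding of the input figures.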
HD 36780
Astronomy
195
21,830
https://en.wikipedia.org/wiki/Nature
Nature is an inherent character or constitution, particularly of the ecosphere or the universe as a whole. In this general sense nature refers to the laws, elements and phenomena of the physical world, including life. Although humans are part of nature, human activity and humans as a whole are often described as at times at odds with nature, or as outright separate from and even superior to it. During the advent of the modern scientific method in the last several centuries, nature became the passive reality, organized and moved by divine laws. With the Industrial Revolution, nature increasingly came to be seen as the part of reality free from intentional human intervention: it was hence considered sacred by some traditions (Rousseau, American transcendentalism) or a mere backdrop for divine providence or human history (Hegel, Marx). However, a vitalist vision of nature, closer to the pre-Socratic one, was reborn at the same time, especially after Charles Darwin. Within the various uses of the word today, "nature" often refers to geology and wildlife. Nature can refer to the general realm of living beings, and in some cases to the processes associated with inanimate objects—the way that particular types of things exist and change of their own accord, such as the weather and geology of the Earth. It is often taken to mean the "natural environment" or wilderness—wild animals, rocks, forest, and in general those things that have not been substantially altered by human intervention, or which persist despite human intervention. For example, manufactured objects and human interaction generally are not considered part of nature, unless qualified as, for example, "human nature" or "the whole of nature". This more traditional concept of natural things, which can still be found today, implies a distinction between the natural and the artificial, with the artificial being understood as that which has been brought into being by a human consciousness or a human mind. Depending on the particular context, the term "natural" might also be distinguished from the unnatural or the supernatural. Etymology The word nature is borrowed from the Old French nature and is derived from the Latin word natura, or "essential qualities, innate disposition", which in ancient times literally meant "birth". In ancient philosophy, natura is mostly used as the Latin translation of the Greek word physis (φύσις), which originally related to the intrinsic characteristics that plants, animals, and other features of the world develop of their own accord. The concept of nature as a whole, the physical universe, is one of several expansions of the original notion; it began with certain core applications of the word φύσις by pre-Socratic philosophers (though this word had a dynamic dimension then, especially for Heraclitus), and has steadily gained currency ever since. Earth Earth is the only planet known to support life, and its natural features are the subject of many fields of scientific research. Within the Solar System, it is third closest to the Sun; it is the largest terrestrial planet and the fifth largest overall. Its most prominent climatic features are its two large polar regions, two relatively narrow temperate zones, and a wide equatorial tropical to subtropical region. Precipitation varies widely with location, from several metres of water per year to less than a millimetre. 71 percent of the Earth's surface is covered by salt-water oceans.
The remainder consists of continents and islands, with most of the inhabited land in the Northern Hemisphere. Earth has evolved through geological and biological processes that have left traces of the original conditions. The outer surface is divided into several gradually migrating tectonic plates. The interior remains active, with a thick layer of plastic mantle and an iron-filled core that generates a magnetic field. This iron core is composed of a solid inner phase and a fluid outer phase. Convective motion in the core generates electric currents through dynamo action, and these, in turn, generate the geomagnetic field. The atmospheric conditions have been significantly altered from the original conditions by the presence of life-forms, which create an ecological balance that stabilizes the surface conditions. Despite the wide regional variations in climate by latitude and other geographic factors, the long-term average global climate is quite stable during interglacial periods, and variations of a degree or two of average global temperature have historically had major effects on the ecological balance, and on the actual geography of the Earth. Geology Geology is the science and study of the solid and liquid matter that constitutes the Earth. The field of geology encompasses the study of the composition, structure, physical properties, dynamics, and history of Earth materials, and the processes by which they are formed, moved, and changed. The field is a major academic discipline, and is also important for mineral and hydrocarbon extraction, knowledge about and mitigation of natural hazards, some geotechnical engineering fields, and understanding past climates and environments. Geological evolution The geology of an area evolves through time as rock units are deposited and inserted, and as deformational processes change their shapes and locations. Rock units are first emplaced either by deposition onto the surface or by intrusion into the overlying rock. Deposition can occur when sediments settle onto the surface of the Earth and later lithify into sedimentary rock, or when volcanic material such as volcanic ash or lava flows blankets the surface. Igneous intrusions such as batholiths, laccoliths, dikes, and sills push upwards into the overlying rock and crystallize as they intrude. After the initial sequence of rocks has been deposited, the rock units can be deformed and/or metamorphosed. Deformation typically occurs as a result of horizontal shortening, horizontal extension, or side-to-side (strike-slip) motion. These structural regimes broadly relate to convergent boundaries, divergent boundaries, and transform boundaries, respectively, between tectonic plates. Historical perspective Earth is estimated to have formed 4.54 billion years ago from the solar nebula, along with the Sun and other planets. The Moon formed roughly 20 million years later. Initially molten, the outer layer of the Earth cooled, resulting in the solid crust. Outgassing and volcanic activity produced the primordial atmosphere. Condensing water vapor, most or all of which came from ice delivered by comets, produced the oceans and other water sources. The highly energetic chemistry is believed to have produced a self-replicating molecule around 4 billion years ago. Continents formed, then broke up and re-formed as the surface of Earth reshaped itself over hundreds of millions of years, occasionally combining to make a supercontinent.
Roughly 750 million years ago, the earliest known supercontinent, Rodinia, began to break apart. The continents later recombined to form Pannotia, which broke apart about 540 million years ago, and then finally Pangaea, which broke apart about 180 million years ago. During the Neoproterozoic era, freezing temperatures covered much of the Earth in glaciers and ice sheets. This hypothesis has been termed the "Snowball Earth", and it is of particular interest as it precedes the Cambrian explosion, in which multicellular life forms began to proliferate about 530–540 million years ago. Since the Cambrian explosion there have been five distinctly identifiable mass extinctions. The last mass extinction occurred some 66 million years ago, when a meteorite collision probably triggered the extinction of the non-avian dinosaurs and other large reptiles, but spared small animals such as mammals. Over the past 66 million years, mammalian life diversified. Several million years ago, a species of small African ape gained the ability to stand upright. The subsequent advent of human life, and the development of agriculture and further civilization, allowed humans to affect the Earth more rapidly than any previous life form, affecting both the nature and quantity of other organisms as well as global climate. By comparison, the Great Oxygenation Event, produced by the proliferation of algae during the Siderian period, required about 300 million years to culminate. The present era is classified as part of a mass extinction event, the Holocene extinction event, the fastest ever to have occurred. Some, such as E. O. Wilson of Harvard University, predict that human destruction of the biosphere could cause the extinction of one-half of all species in the next 100 years. The extent of the current extinction event is still being researched, debated and calculated by biologists. Atmosphere, climate, and weather The Earth's atmosphere is a key factor in sustaining the ecosystem. The thin layer of gases that envelops the Earth is held in place by gravity. Air is mostly nitrogen and oxygen, with water vapor and much smaller amounts of carbon dioxide, argon, and other gases. The atmospheric pressure declines steadily with altitude. The ozone layer plays an important role in depleting the amount of ultraviolet (UV) radiation that reaches the surface. As DNA is readily damaged by UV light, this serves to protect life at the surface. The atmosphere also retains heat during the night, thereby reducing the daily temperature extremes. Terrestrial weather occurs almost exclusively in the lower part of the atmosphere, and serves as a convective system for redistributing heat. Ocean currents are another important factor in determining climate, particularly the major underwater thermohaline circulation, which distributes heat energy from the equatorial oceans to the polar regions. These currents help to moderate the differences in temperature between winter and summer in the temperate zones. Also, without the redistribution of heat energy by the ocean currents and atmosphere, the tropics would be much hotter, and the polar regions much colder. Weather can have both beneficial and harmful effects. Extremes in weather, such as tornadoes or hurricanes and cyclones, can expend large amounts of energy along their paths, and produce devastation.
Surface vegetation has evolved a dependence on the seasonal variation of the weather, and sudden changes lasting only a few years can have a dramatic effect, both on the vegetation and on the animals which depend on its growth for their food. Climate is a measure of the long-term trends in the weather. Various factors are known to influence the climate, including ocean currents, surface albedo, greenhouse gases, variations in the solar luminosity, and changes to the Earth's orbit. Based on historical and geological records, the Earth is known to have undergone drastic climate changes in the past, including ice ages. The climate of a region depends on a number of factors, especially latitude. A latitudinal band of the surface with similar climatic attributes forms a climate region. There are a number of such regions, ranging from the tropical climate at the equator to the polar climate in the northern and southern extremes. Weather is also influenced by the seasons, which result from the Earth's axis being tilted relative to its orbital plane. Thus, at any given time during the summer or winter, one part of the Earth is more directly exposed to the rays of the sun. This exposure alternates as the Earth revolves in its orbit. At any given time, regardless of season, the Northern and Southern Hemispheres experience opposite seasons. Weather is a chaotic system that is readily modified by small changes to the environment, so accurate weather forecasting is limited to only a few days. Overall, two things are happening worldwide: (1) temperature is increasing on the average; and (2) regional climates have been undergoing noticeable changes. Water on Earth Water is a chemical substance that is composed of hydrogen and oxygen (H2O) and is vital for all known forms of life. In typical usage, "water" refers only to its liquid form, but it also has a solid state, ice, and a gaseous state, water vapor or steam. Water covers 71% of the Earth's surface. On Earth, it is found mostly in oceans and other large bodies of water, with 1.6% of water below ground in aquifers and 0.001% in the air as vapor, clouds, and precipitation. Oceans hold 97% of surface water, glaciers and polar ice caps 2.4%, and other land surface water such as rivers, lakes, and ponds 0.6%. Additionally, a minute amount of the Earth's water is contained within biological bodies and manufactured products. Oceans An ocean is a major body of saline water, and a principal component of the hydrosphere. Approximately 71% of the Earth's surface (an area of some 361 million square kilometers) is covered by ocean, a continuous body of water that is customarily divided into several principal oceans and smaller seas. More than half of this area is over 3,000 metres deep. Average oceanic salinity is around 35 parts per thousand (ppt) (3.5%), and nearly all seawater has a salinity in the range of 30 to 38 ppt. Though generally recognized as several 'separate' oceans, these waters comprise one global, interconnected body of salt water often referred to as the World Ocean or global ocean. This concept of a global ocean as a continuous body of water with relatively free interchange among its parts is of fundamental importance to oceanography. The major oceanic divisions are defined in part by the continents, various archipelagos, and other criteria: these divisions are (in descending order of size) the Pacific Ocean, the Atlantic Ocean, the Indian Ocean, the Southern Ocean, and the Arctic Ocean.
Smaller regions of the oceans are called seas, gulfs, bays and other names. There are also salt lakes, which are smaller bodies of landlocked saltwater that are not interconnected with the World Ocean. Two notable examples of salt lakes are the Aral Sea and the Great Salt Lake. Lakes A lake (from the Latin word lacus) is a terrain feature (or physical feature), a body of liquid on the surface of a world that is localized to the bottom of a basin (another type of landform or terrain feature; that is, it is not global) and moves slowly if it moves at all. On Earth, a body of water is considered a lake when it is inland, not part of the ocean, is larger and deeper than a pond, and is fed by a river. The only world other than Earth known to harbor lakes is Titan, Saturn's largest moon, which has lakes of ethane, most likely mixed with methane. It is not known if Titan's lakes are fed by rivers, though Titan's surface is carved by numerous river beds. Natural lakes on Earth are generally found in mountainous areas, rift zones, and areas with ongoing or recent glaciation. Other lakes are found in endorheic basins or along the courses of mature rivers. In some parts of the world, there are many lakes because of chaotic drainage patterns left over from the last ice age. All lakes are temporary over geologic time scales, as they will slowly fill in with sediments or spill out of the basin containing them. Ponds A pond is a body of standing water, either natural or human-made, that is usually smaller than a lake. A wide variety of human-made bodies of water are classified as ponds, including water gardens designed for aesthetic ornamentation, fish ponds designed for commercial fish breeding, and solar ponds designed to store thermal energy. Ponds and lakes are distinguished from streams by their current speed. While currents in streams are easily observed, ponds and lakes possess only thermally driven micro-currents and moderate wind-driven currents. These features distinguish a pond from many other aquatic terrain features, such as stream pools and tide pools. Rivers A river is a natural watercourse, usually freshwater, flowing towards an ocean, a lake, a sea or another river. In a few cases, a river simply flows into the ground or dries up completely before reaching another body of water. Small rivers may also be called by several other names, including stream, creek, brook, rivulet, and rill; there is no general rule that defines what can be called a river. Many names for small rivers are specific to geographic location; one example is "burn" in Scotland and north-east England. Sometimes a river is said to be larger than a creek, but this is not always the case, owing to vagueness in the language. A river is part of the hydrological cycle. Water within a river is generally collected from precipitation through surface runoff, groundwater recharge, springs, and the release of stored water in natural ice and snowpacks (i.e., from glaciers). Streams A stream is a flowing body of water with a current, confined within a bed and stream banks. In the United States, a stream is classified as a watercourse below a certain width. Streams are important as conduits in the water cycle and as instruments in groundwater recharge, and they serve as corridors for fish and wildlife migration. The biological habitat in the immediate vicinity of a stream is called a riparian zone. Given the status of the ongoing Holocene extinction, streams play an important corridor role in connecting fragmented habitats and thus in conserving biodiversity.
The study of streams and waterways in general involves many branches of interdisciplinary natural science and engineering, including hydrology, fluvial geomorphology, aquatic ecology, fish biology, riparian ecology, and others. Ecosystems Ecosystems are composed of a variety of biotic and abiotic components that function in an interrelated way. The structure and composition are determined by various environmental factors that are themselves interrelated. Variations of these factors will initiate dynamic modifications to the ecosystem. Some of the more important components are soil, atmosphere, radiation from the sun, water, and living organisms. Central to the ecosystem concept is the idea that living organisms interact with every other element in their local environment. Eugene Odum, a founder of ecology, stated: "Any unit that includes all of the organisms (ie: the "community") in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e.: exchange of materials between living and nonliving parts) within the system is an ecosystem." Within the ecosystem, species are connected and dependent upon one another in the food chain, and exchange energy and matter between themselves as well as with their environment. The human ecosystem concept is based on the human/nature dichotomy and the idea that all species are ecologically dependent on each other, as well as on the abiotic constituents of their biotope. A smaller unit of size is called a microecosystem. For example, a microecosystem can be a stone and all the life under it. A macroecosystem might involve a whole ecoregion, with its drainage basin. Wilderness Wilderness is generally defined as areas that have not been significantly modified by human activity. Wilderness areas can be found in preserves, estates, farms, conservation preserves, ranches, national forests, national parks, and even in urban areas along rivers, gulches, or otherwise undeveloped areas. Wilderness areas and protected parks are considered important for the survival of certain species, ecological studies, conservation, and solitude. Some nature writers believe wilderness areas are vital for the human spirit and creativity, and some ecologists consider wilderness areas to be an integral part of the Earth's self-sustaining natural ecosystem (the biosphere). They may also preserve historic genetic traits and provide habitat for wild flora and fauna that may be difficult or impossible to recreate in zoos, arboretums, or laboratories. Life Although there is no universal agreement on the definition of life, scientists generally accept that the biological manifestation of life is characterized by organization, metabolism, growth, adaptation, response to stimuli, and reproduction. Life may also be said to be simply the characteristic state of organisms. Present-day organisms, from viruses to humans, possess a self-replicating informational molecule (genome), either DNA or RNA (as in some viruses), and such an informational molecule is probably intrinsic to life. It is likely that the earliest forms of life were based on a self-replicating informational molecule (genome), perhaps RNA or a molecule more primitive than RNA or DNA.
The specific deoxyribonucleotide/ribonucleotide sequence in each currently extant individual organism contains sequence information that functions to promote survival, reproduction, and the capacity to acquire resources necessary for reproduction, and such sequences probably emerged early in the evolution of life. Survival functions present early in the evolution of life likely also included genomic sequences that promote the avoidance of damage to the self-replicating molecule, as well as the capability to repair such damage as does occur. Repair of some genome damage may have involved using information from another similar molecule by a process of recombination (a primitive form of sexual interaction). Properties common to terrestrial organisms (plants, animals, fungi, protists, archaea, and bacteria) are that they are cellular, carbon-and-water-based with complex organization, having a metabolism, a capacity to grow, respond to stimuli, and reproduce. An entity with these properties is generally considered life. However, not every definition of life considers all of these properties to be essential. Human-made analogs of life may also be considered to be life. The biosphere is the part of Earth's outer shell—including land, surface rocks, water, air and the atmosphere—within which life occurs, and which biotic processes in turn alter or transform. From the broadest geophysiological point of view, the biosphere is the global ecological system integrating all living beings and their relationships, including their interaction with the elements of the lithosphere (rocks), hydrosphere (water), and atmosphere (air). The entire Earth contains over 75 billion tons (150 trillion pounds, or about 6.8×10¹³ kilograms) of biomass (life), which lives within various environments within the biosphere. Over nine-tenths of the total biomass on Earth is plant life, on which animal life depends very heavily for its existence. More than 2 million species of plant and animal life have been identified to date, and estimates of the actual number of existing species range from several million to well over 50 million. The number of individual species of life is constantly in some degree of flux, with new species appearing and others ceasing to exist on a continual basis. The total number of species is in rapid decline. Evolution The origin of life on Earth is not well understood, but it is known to have occurred at least 3.5 billion years ago, during the Hadean or Archean eons, on a primordial Earth that had a substantially different environment than is found at present. These life forms possessed the basic traits of self-replication and heritability. Once life had appeared, the process of evolution by natural selection resulted in the development of ever-more diverse life forms. Species that were unable to adapt to the changing environment and competition from other life forms became extinct. However, the fossil record retains evidence of many of these older species. Current fossil and DNA evidence shows that all existing species can trace a continual ancestry back to the first primitive life forms. When basic forms of plant life developed the process of photosynthesis, the sun's energy could be harvested to create conditions which allowed for more complex life forms. The resultant oxygen accumulated in the atmosphere and gave rise to the ozone layer. The incorporation of smaller cells within larger ones resulted in the development of yet more complex cells called eukaryotes.
Cells within colonies became increasingly specialized, resulting in true multicellular organisms. With the ozone layer absorbing harmful ultraviolet radiation, life colonized the surface of Earth. Microbes The first forms of life to develop on the Earth were unicellular, and they remained the only form of life until about a billion years ago, when multi-cellular organisms began to appear. Microorganisms, or microbes, are organisms too small for the human eye to see. Microorganisms can be single-celled, such as Bacteria, Archaea, many Protista, and a minority of Fungi. These life forms are found in almost every location on the Earth where there is liquid water, including in the Earth's interior. Their reproduction is both rapid and profuse. The combination of a high mutation rate and a horizontal gene transfer ability makes them highly adaptable, and able to survive in new and sometimes very harsh environments, including outer space. They form an essential part of the planetary ecosystem. However, some microorganisms are pathogenic and can pose a health risk to other organisms. Viruses are infectious agents, but they are not autonomous life forms, as is the case for viroids, satellites, DPIs and prions. Plants and animals Originally Aristotle divided all living things between plants, which generally do not move fast enough for humans to notice, and animals. In Linnaeus' system, these became the kingdoms Vegetabilia (later Plantae) and Animalia. Since then, it has become clear that the Plantae as originally defined included several unrelated groups, and the fungi and several groups of algae were removed to new kingdoms. However, these are still often considered plants in many contexts. Bacterial life is sometimes included in flora, and some classifications use the term bacterial flora separately from plant flora. Among the many ways of classifying plants are by regional floras, which, depending on the purpose of study, can also include fossil flora, remnants of plant life from a previous era. People in many regions and countries take great pride in their individual arrays of characteristic flora, which can vary widely across the globe due to differences in climate and terrain. Regional floras commonly are divided into categories such as native flora or agricultural and garden flora. Some types of "native flora" were actually introduced centuries ago by people migrating from one region or continent to another, and became an integral part of the native, or natural, flora of the place to which they were introduced. This is an example of how human interaction with nature can blur the boundary of what is considered nature. Another category of plant has historically been carved out for weeds. Though the term has fallen into disfavor among botanists as a formal way to categorize "useless" plants, the informal use of the word "weeds" to describe those plants that are deemed worthy of elimination is illustrative of the general tendency of people and societies to seek to alter or shape the course of nature. Similarly, animals are often categorized as domestic animals, farm animals, wild animals, pests, and so on, according to their relationship to human life. Animals as a category have several characteristics that generally set them apart from other living things. Animals are eukaryotic and usually multicellular, which separates them from bacteria, archaea, and most protists. They are heterotrophic, generally digesting food in an internal chamber, which separates them from plants and algae.
They are also distinguished from plants, algae, and fungi by lacking cell walls. With a few exceptions—most notably the two phyla consisting of sponges and placozoans—animals have bodies that are differentiated into tissues. These include muscles, which are able to contract and control locomotion, and a nervous system, which sends and processes signals. There is also typically an internal digestive chamber. The eukaryotic cells possessed by all animals are surrounded by a characteristic extracellular matrix composed of collagen and elastic glycoproteins. This may be calcified to form structures like shells, bones, and spicules: a framework upon which cells can move about and be reorganized during development and maturation, and which supports the complex anatomy required for mobility. Human interrelationship Human impact Although humans comprise a minuscule proportion of the total living biomass on Earth, the human effect on nature is disproportionately large. Because of the extent of human influence, the boundaries between what humans regard as nature and "made environments" are not clear-cut except at the extremes. Even at the extremes, the amount of natural environment that is free of discernible human influence is diminishing at an increasingly rapid pace. A 2020 study published in Nature found that anthropogenic mass (human-made materials) outweighs all living biomass on Earth, with plastic alone exceeding the mass of all land and marine animals combined. And according to a 2021 study published in Frontiers in Forests and Global Change, only about 3% of the planet's terrestrial surface is ecologically and faunally intact, with a low human footprint and healthy populations of native animal species. Philip Cafaro, professor of philosophy at the School of Global Environmental Sustainability at Colorado State University, wrote in 2022 that "the cause of global biodiversity loss is clear: other species are being displaced by a rapidly growing human economy." The development of technology by the human race has allowed the greater exploitation of natural resources and has helped to alleviate some of the risk from natural hazards. In spite of this progress, however, the fate of human civilization remains closely linked to changes in the environment. There exists a highly complex feedback loop between the use of advanced technology and changes to the environment, one that is only slowly becoming understood. Human-made threats to the Earth's natural environment include pollution, deforestation, and disasters such as oil spills. Humans have contributed to the extinction of many plants and animals, with roughly 1 million species threatened with extinction within decades. The loss of biodiversity and ecosystem functions over the last half century has reduced the extent to which nature can contribute to human quality of life, and continued declines could pose a major threat to the existence of human civilization, unless a rapid course correction is made. The value of natural resources to human society is poorly reflected in market prices because, except for labour costs, natural resources are available free of charge. This distorts the market pricing of natural resources and at the same time leads to underinvestment in natural assets. The annual global cost of public subsidies that damage nature is conservatively estimated at $4–6 trillion (million million). Institutional protections of these natural goods, such as the oceans and rainforests, are lacking.
Governments have not prevented these economic externalities. Humans employ nature for both leisure and economic activities. The acquisition of natural resources for industrial use remains a sizable component of the world's economic system. Some activities, such as hunting and fishing, are used for both sustenance and leisure, often by different people. Agriculture was first adopted around the 9th millennium BCE. Ranging from food production to energy, nature influences economic wealth. Although early humans gathered uncultivated plant materials for food and employed the medicinal properties of vegetation for healing, most modern human use of plants is through agriculture. The clearance of large tracts of land for crop growth has led to a significant reduction in the available amount of forest and wetland, resulting in the loss of habitat for many plant and animal species as well as increased erosion. Aesthetics and beauty Beauty in nature has historically been a prevalent theme in art and books, filling large sections of libraries and bookstores. That nature has been depicted and celebrated by so much art, photography, poetry, and other literature shows the strength with which many people associate nature and beauty. Reasons why this association exists, and what the association consists of, are studied by the branch of philosophy called aesthetics. Beyond certain basic characteristics that many philosophers agree on as explaining what is seen as beautiful, the opinions are virtually endless. Nature and wildness have been important subjects in various eras of world history. An early tradition of landscape art began in China during the Tang Dynasty (618–907). The tradition of representing nature as it is became one of the aims of Chinese painting and was a significant influence on Asian art. Although natural wonders are celebrated in the Psalms and the Book of Job, wilderness portrayals in art became more prevalent in the 1800s, especially in the works of the Romantic movement. British artists John Constable and J. M. W. Turner turned their attention to capturing the beauty of the natural world in their paintings. Before that, paintings had been primarily of religious scenes or of human beings. William Wordsworth's poetry described the wonder of the natural world, which had formerly been viewed as a threatening place. Increasingly, the valuing of nature became an aspect of Western culture. This artistic movement also coincided with the Transcendentalist movement in the Western world. A common classical idea of beautiful art involves the word mimesis, the imitation of nature. Also in the realm of ideas about beauty in nature is the notion that the perfect is implied through perfect mathematical forms and, more generally, by patterns in nature. As David Rothenberg writes, "The beautiful is the root of science and the goal of art, the highest possibility that humanity can ever hope to see". Matter and energy The natural sciences view matter as obeying certain laws of nature which scientists seek to understand. Matter is commonly defined as the substance of which physical objects are composed. It constitutes the observable universe. The visible components of the universe are now believed to compose only 4.9 percent of the total mass. The remainder is believed to consist of 26.8 percent cold dark matter and 68.3 percent dark energy. The exact arrangement of these components is still unknown and is under intensive investigation by physicists.
The behaviour of matter and energy throughout the observable universe appears to follow well-defined physical laws. These laws have been employed to produce cosmological models that successfully explain the structure and the evolution of the universe we can observe. The mathematical expressions of the laws of physics employ a set of twenty physical constants that appear to be static across the observable universe. The values of these constants have been carefully measured, but the reason for their specific values remains a mystery. Beyond Earth Outer space, also simply called space, refers to the relatively empty regions of the Universe outside the atmospheres of celestial bodies. The term is used to distinguish outer space from airspace (and terrestrial locations). There is no discrete boundary between Earth's atmosphere and space, as the atmosphere gradually attenuates with increasing altitude. Outer space within the Solar System is called interplanetary space, which passes over into interstellar space at what is known as the heliopause. Outer space is sparsely filled with several dozen types of organic molecules discovered to date by microwave spectroscopy, blackbody radiation left over from the Big Bang and the origin of the universe, and cosmic rays, which include ionized atomic nuclei and various subatomic particles. There is also some gas, plasma and dust, and small meteors. Additionally, there are signs of human life in outer space today, such as material left over from previous crewed and uncrewed launches, which are a potential hazard to spacecraft. Some of this debris re-enters the atmosphere periodically. Although Earth is the only body within the Solar System known to support life, evidence suggests that in the distant past the planet Mars possessed bodies of liquid water on the surface. For a brief period in Mars' history, it may have also been capable of forming life. At present though, most of the water remaining on Mars is frozen. If life exists at all on Mars, it is most likely to be located underground where liquid water can still exist. Conditions on the other terrestrial planets, Mercury and Venus, appear to be too harsh to support life as we know it. But it has been conjectured that Europa, the fourth-largest moon of Jupiter, may possess a sub-surface ocean of liquid water and could potentially host life. Astronomers have started to discover extrasolar Earth analogs – planets that lie in the habitable zone of space surrounding a star, and therefore could possibly host life as we know it. 
See also Biophilic design Force of nature Human nature Natural building Natural history Natural landscape Natural law Natural resource Natural science Natural theology Naturalism Nature reserve Nature versus nurture Nature worship Nature-based solutions Naturism Rewilding Media: National Wildlife, a publication of the National Wildlife Federation Natural History, by Pliny the Elder Natural World (TV series) Nature, by Ralph Waldo Emerson Nature, a prominent scientific journal Nature (TV series) The World We Live In (Life magazine) Organizations: Nature Detectives The Nature Conservancy Philosophy: Balance of nature (biological fallacy), a discredited concept of natural equilibrium in predator–prey dynamics Mother Nature Naturalism, any of several philosophical stances, typically those descended from materialism and pragmatism that do not distinguish the supernatural from nature; this includes the methodological naturalism of natural science, which makes the methodological assumption that observable events in nature are explained only by natural causes, without assuming either the existence or non-existence of the supernatural Nature (philosophy) Notes and references Further reading Farber, Paul Lawrence (2000), Finding Order in Nature: The Naturalist Tradition from Linnaeus to E. O. Wilson. Johns Hopkins University Press: Baltimore. External links The IUCN Red List of Threatened Species (iucnredlist.org) Environmental science Environmental social science concepts Main topic articles
Nature
Environmental_science
7,540
49,643,679
https://en.wikipedia.org/wiki/Zinc%20finger%20protein%20620
Zinc finger protein 620 is a protein that in humans is encoded by the ZNF620 gene. References Further reading Human proteins
Zinc finger protein 620
Chemistry
28
30,304,860
https://en.wikipedia.org/wiki/Rabi%20resonance%20method
Rabi resonance method is a technique developed by Isidor Isaac Rabi for measuring the nuclear spin. The atom is placed in a static magnetic field and a perpendicular rotating magnetic field. A classical treatment is given here. Theory When only the static magnetic field (B0) is turned on, the spin precesses around it with the Larmor frequency ν0; the corresponding angular frequency is ω0. According to classical mechanics, the equation of motion of the spin J is dJ/dt = μ × B, where μ = γJ is the magnetic moment and γ = gq/2m is the gyromagnetic ratio; g is the g-factor, which is dimensionless and reflects the environmental effect on the spin. Solving gives the angular frequency (Larmor frequency) with the magnetic field pointing along the z-axis: ω0 = −γB0. The minus sign is necessary: it reflects that J rotates left-handedly when the thumb points along the B field. Now turn on the rotating magnetic field (BR), with angular frequency ω. In the frame rotating with this field, the equation of motion keeps the same form, but the static field along z is replaced by the effective field B0 + ω/γ. If ω = ω0 = −γB0, the static field is cancelled, and the spin now precesses around BR with angular frequency ωR = −γBR, the Rabi frequency. Since the rotating field is perpendicular to the static field, the spin in the rotating frame is now able to flip between up and down. By sweeping ω, one can obtain a maximum flipping and determine the magnetic moment. Experiment The experiment setup contains 3 parts: an inhomogeneous magnetic field in front, the rotating field in the middle, and another inhomogeneous magnetic field at the end. Atoms passing through the first inhomogeneous field split into 2 beams corresponding to the spin-up and spin-down states. Select one beam (the spin-up state, for example) and let it pass through the rotating field. If the rotating field has frequency ω equal to the Larmor frequency, it will flip spins and produce a high intensity in the other beam (the spin-down state). By sweeping the frequency to obtain a maximum intensity, one can find the Larmor frequency and the magnetic moment of the atom. References and notes https://web.archive.org/web/20160325004825/https://www.colorado.edu/physics/phys7550/phys7550_sp07/extras/Ramsey90_RMP.pdf See also Rabi frequency Rabi cycle Rabi problem Quantum optics Atomic physics Atomic, molecular, and optical physics
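As a hedged illustration of the resonance condition described above (not part of the original article), the standard Rabi flopping probability for a driven spin can be evaluated numerically; the field strengths, pulse time, and function name below are arbitrary assumptions chosen for the demonstration.

```python
import numpy as np

# Standard Rabi formula for the spin-flip probability after drive time t,
# with resonance frequency w0 and on-resonance Rabi frequency w1.
def flip_probability(w, w0, w1, t):
    detuning = w - w0
    omega_eff = np.sqrt(w1**2 + detuning**2)     # generalized Rabi frequency
    return (w1**2 / omega_eff**2) * np.sin(omega_eff * t / 2)**2

w0, w1 = 2 * np.pi * 1e6, 2 * np.pi * 1e4        # assumed values, arbitrary units
sweep = np.linspace(0.99 * w0, 1.01 * w0, 2001)  # sweep the drive frequency
t = np.pi / w1                                   # a "pi pulse": full flip on resonance
probs = flip_probability(sweep, w0, w1, t)
print(sweep[np.argmax(probs)] / w0)              # ~1.0: maximum flipping at w = w0
```

The sweep peaks exactly at ω = ω0, which is how the Larmor frequency, and hence the magnetic moment, is read off in the experiment described above.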
Rabi resonance method
Physics,Chemistry
492
38,934,394
https://en.wikipedia.org/wiki/Kiev%20Fortified%20Region
The Kiev Fortified Region (Russian abbreviation КиУР, УР-1, 1-й укреплённый район, 1-й укрепрайон) is a fortified district in the Kyiv area, a complex of defensive structures consisting of permanent and field fortifications and engineering obstacles. It was built in the period from 1929 to 1941 for the protection of the old border of the USSR. The total length of the fortified region is about 85 km between the flanks, which are anchored on the river Dnieper, and the depth of the defensive zone ranges from 1 to 6 km. The fortifications had a significant impact in the fighting for the defence of the city in 1941. Initial construction According to Order No 90 of the Revolutionary Military Council of the USSR, dated 19 March 1928, a program of fortifications on the country's borders was to be carried out, and in 1928, construction began on the first thirteen fortified regions, including Kiev. Building started in 1929. The defence zone was identified, and in it were built 120 permanent machine-gun emplacements and 45 artillery and observation points in a construction program lasting 4 years, from 1929 to 1933. But in 1932, further construction of the fortified area was discontinued. Second World War After the annexation of Poland's eastern territories in 1939, Stalin wanted to push the Soviet defences up to the new borders, creating a series of new fortified regions. The old defences of the Stalin Line were neglected. Only in June 1940 did Stalin finally agree with Zhukov that the old Stalin Line should be partially manned, but the troops found the fortifications 'overgrown with grass' and completely lacking in fixed defences. The poor state of the Kiev defences was not unknown to the Soviet leadership: an NKVD report from 1939 stated that 'Only 5 of the 257 structures in the area were prepared for combat' and went on to list a host of deficiencies, ranging from uncleared forests limiting fields of fire, to missing communication and support equipment, to old seals which had decayed. The report went on to state that although the deficiencies had been reported, nothing had been done about them. Restoration of the Kiev fortified area began on June 24, 1941, when the commander of the Southwestern Front, Mikhail Kirponos, gave an order for the rehabilitation of the fortified area, including equipping and arming the pillboxes and the construction of field fortifications. For these tasks, the population of Kiev was mobilized. By 30 June 50,000 people were involved in the construction, by 2 July 160,000; in the last days of construction, 200,000 workers were involved. Since the original construction had not anticipated tank attack, no specific anti-tank measures had been incorporated into the defence system. To eliminate this drawback, anti-tank ditches 30 km and 15 km long were dug. Also installed were metal hedgehogs and anti-personnel obstacles, among them 16 km of electrified barbed wire and a large number of minefields. By early July, the German advance had punched a hole in the centre of the Southern Front's defences, and on 9 July, thinking that Kiev was there for the taking, General von Kleist issued orders for III Panzer Corps to capture the city and establish a deep bridgehead east of the Dnieper. There are reports that the 37th Army drew some of its initial staff from the Fortified Region staff. References Fortifications in Ukraine Military history of Ukraine World War II defensive lines Fortified regions of the Soviet Union
Kiev Fortified Region
Engineering
722
44,912,985
https://en.wikipedia.org/wiki/C23H32O4
The molecular formula C23H32O4 may refer to: Prorenoic acid Desoxycorticosterone acetate Hydroxyprogesterone acetate, an orally active progestin related to hydroxyprogesterone caproate Norgestomet, a progestin medication which is used in veterinary medicine
C23H32O4
Chemistry
86
11,908,069
https://en.wikipedia.org/wiki/Polyconvex%20function
In the calculus of variations, the notion of polyconvexity is a generalization of the notion of convexity for functions defined on spaces of matrices. The notion of polyconvexity was introduced by John M. Ball as a sufficient condition for proving the existence of energy minimizers in nonlinear elasticity theory. It is satisfied by a large class of hyperelastic stored energy densities, such as Mooney–Rivlin and Ogden materials. The notion of polyconvexity is related to the notions of convexity, quasiconvexity and rank-one convexity through the following chain of implications: convexity ⟹ polyconvexity ⟹ quasiconvexity ⟹ rank-one convexity. Motivation Let Ω ⊂ Rn be an open bounded domain, and let W1,p(Ω; Rm) denote the Sobolev space of mappings from Ω to Rm. A typical problem in the calculus of variations is to minimize a functional of the form I[u] = ∫Ω f(x, ∇u(x)) dx, where the energy density function f satisfies p-growth, i.e., |f(x, A)| ≤ C(1 + |A|^p) for some C > 0 and p ∈ (1, ∞). It is well-known from a theorem of Morrey and Acerbi-Fusco that a necessary and sufficient condition for I to be weakly lower semicontinuous on W1,p(Ω; Rm) is that f(x, ·) is quasiconvex for almost every x. With coercivity assumptions on f and boundary conditions on u, this leads to the existence of minimizers for I on W1,p(Ω; Rm). However, in many applications, the assumption of p-growth on the energy density is often too restrictive. In the context of elasticity, this is because the energy is required to grow unboundedly to +∞ as local measures of volume approach zero. This led Ball to define the more restrictive notion of polyconvexity to prove the existence of energy minimizers in nonlinear elasticity. Definition A function f : Rm×n → R is said to be polyconvex if there exists a convex function g such that f(F) = g(T(F)), where T is the map sending a matrix to the list of all its minors: T(F) = (F, adj2 F, ..., adjmin(m,n) F). Here, adjs F stands for the matrix of all s × s minors of the matrix F ∈ Rm×n, 2 ≤ s ≤ min(m, n). When m = n = 2, T(F) = (F, det F), and when m = n = 3, T(F) = (F, cof F, det F), where cof F denotes the cofactor matrix of F. In the above definitions, the range of f can also be extended to R ∪ {+∞}. Properties If f takes only finite values, then polyconvexity implies quasiconvexity and thus leads to the weak lower semicontinuity of the corresponding integral functional on a Sobolev space. If m = 1 or n = 1, then polyconvexity reduces to convexity. If f is polyconvex, then it is locally Lipschitz. Polyconvex functions with subquadratic growth must be convex, i.e., if there exist α > 0 and 1 ≤ p < 2 such that f(F) ≤ α(1 + |F|^p) for every F, then f is convex. Examples Every convex function is polyconvex. For the case m = n ≥ 2, the determinant function is polyconvex, but not convex. In particular, stored-energy functions of the kind that commonly appears in nonlinear elasticity, built from |F| together with a convex function of det F, are polyconvex but not convex. References Convex analysis Calculus of variations Matrices Types of functions
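A short worked example may help; the following sketch (consistent with the definition above, using the m = n = 2 identification T(F) = (F, det F)) verifies the claim that the determinant is polyconvex but not convex.

```latex
% Polyconvexity of det on 2x2 matrices: choose g linear (hence convex).
\[
f(F) = \det F = g\bigl(T(F)\bigr), \qquad T(F) = (F, \det F), \qquad g(F, d) = d .
\]
% Failure of convexity: with A = diag(1,0) and B = diag(0,1),
\[
\det\!\Bigl(\tfrac{A+B}{2}\Bigr) = \tfrac14 \;>\; \tfrac12\det A + \tfrac12\det B = 0 ,
\]
% so det lies strictly above the chord, violating the convexity inequality.
```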
Polyconvex function
Mathematics
566
10,225
https://en.wikipedia.org/wiki/Elliptic%20curve
In mathematics, an elliptic curve is a smooth, projective, algebraic curve of genus one, on which there is a specified point O. An elliptic curve is defined over a field K and describes points in K2, the Cartesian product of K with itself. If the field's characteristic is different from 2 and 3, then the curve can be described as a plane algebraic curve which consists of solutions (x, y) for: y2 = x3 + ax + b for some coefficients a and b in K. The curve is required to be non-singular, which means that the curve has no cusps or self-intersections. (This is equivalent to the condition 4a3 + 27b2 ≠ 0, that is, x3 + ax + b being square-free in x.) It is always understood that the curve is really sitting in the projective plane, with the point O being the unique point at infinity. Many sources define an elliptic curve to be simply a curve given by an equation of this form. (When the coefficient field has characteristic 2 or 3, the above equation is not quite general enough to include all non-singular cubic curves; see below.) An elliptic curve is an abelian variety – that is, it has a group law defined algebraically, with respect to which it is an abelian group – and O serves as the identity element. If y2 = P(x), where P is any polynomial of degree three in x with no repeated roots, the solution set is a nonsingular plane curve of genus one, an elliptic curve. If P has degree four and is square-free this equation again describes a plane curve of genus one; however, it has no natural choice of identity element. More generally, any algebraic curve of genus one, for example the intersection of two quadric surfaces embedded in three-dimensional projective space, is called an elliptic curve, provided that it is equipped with a marked point to act as the identity. Using the theory of elliptic functions, it can be shown that elliptic curves defined over the complex numbers correspond to embeddings of the torus into the complex projective plane. The torus is also an abelian group, and this correspondence is also a group isomorphism. Elliptic curves are especially important in number theory, and constitute a major area of current research; for example, they were used in Andrew Wiles's proof of Fermat's Last Theorem. They also find applications in elliptic curve cryptography (ECC) and integer factorization. An elliptic curve is not an ellipse in the sense of a projective conic, which has genus zero: see elliptic integral for the origin of the term. However, there is a natural representation of real elliptic curves with shape invariant j ≥ 1 as ellipses in the hyperbolic plane H2. Specifically, the intersections of the Minkowski hyperboloid with quadric surfaces characterized by a certain constant-angle property produce the Steiner ellipses in H2 (generated by orientation-preserving collineations). Further, the orthogonal trajectories of these ellipses comprise the elliptic curves with j ≤ 1, and any ellipse in H2 described as a locus relative to two foci is uniquely the elliptic curve sum of two Steiner ellipses, obtained by adding the pairs of intersections on each orthogonal trajectory. Here, the vertex of the hyperboloid serves as the identity on each trajectory curve. Topologically, a complex elliptic curve is a torus, while a complex ellipse is a sphere. Elliptic curves over the real numbers Although the formal definition of an elliptic curve requires some background in algebraic geometry, it is possible to describe some features of elliptic curves over the real numbers using only introductory algebra and geometry. 
In this context, an elliptic curve is a plane curve defined by an equation of the form y2 = x3 + ax + b after a linear change of variables (a and b are real numbers). This type of equation is called a Weierstrass equation, and said to be in Weierstrass form, or Weierstrass normal form. The definition of elliptic curve also requires that the curve be non-singular. Geometrically, this means that the graph has no cusps, self-intersections, or isolated points. Algebraically, this holds if and only if the discriminant, Δ = −16(4a3 + 27b2), is not equal to zero. The discriminant is zero when 4a3 + 27b2 = 0. (Although the factor −16 is irrelevant to whether or not the curve is non-singular, this definition of the discriminant is useful in a more advanced study of elliptic curves.) The real graph of a non-singular curve has two components if its discriminant is positive, and one component if it is negative. For example, for the curves y2 = x3 − x and y2 = x3 − x + 1, the discriminant in the first case is 64, and in the second case is −368. The group law When working in the projective plane with homogeneous coordinates [X : Y : Z] and x = X/Z, y = Y/Z, the equation becomes: (Y/Z)2 = (X/Z)3 + a(X/Z) + b. This equation is not defined on the line at infinity, but we can multiply by Z3 to get one that is: ZY2 = X3 + aXZ2 + bZ3. This resulting equation is defined on the whole projective plane, and the curve it defines projects onto the elliptic curve of interest. To find its intersection with the line at infinity, we can just posit Z = 0. This implies X3 = 0, which in a field means X = 0. Y on the other hand can take any value, and thus all triplets (0, t, 0) with t ≠ 0 satisfy the equation. In projective geometry this set is simply the point [0 : 1 : 0], which is thus the unique intersection of the curve with the line at infinity. Since the curve is smooth, hence continuous, it can be shown that this point at infinity is the identity element of a group structure whose operation is geometrically described as follows: Since the curve is symmetric about the x-axis, given any point P, we can take −P to be the point opposite it. We then have −O = O: reflecting O = [0 : 1 : 0] gives [0 : −1 : 0], which represents the same projective point. If P and Q are two points on the curve, then we can uniquely describe a third point P + Q in the following way. First, draw the line that intersects P and Q. This will generally intersect the cubic at a third point, R. We then take P + Q to be −R, the point opposite R. This definition for addition works except in a few special cases related to the point at infinity and intersection multiplicity. The first is when one of the points is O. Here, we define P + O = P = O + P, making O the identity of the group. If P = Q we only have one point, thus we cannot define the line between them. In this case, we use the tangent line to the curve at this point as our line. In most cases, the tangent will intersect a second point R and we can take its opposite. If P and Q are opposites of each other, we define P + Q = O. Lastly, if P is an inflection point (a point where the concavity of the curve changes), we take R to be P itself, and P + P is simply the point opposite P, i.e. −P. Let K be a field over which the curve is defined (that is, the coefficients of the defining equation or equations of the curve are in K) and denote the curve by E. Then the K-rational points of E are the points on E whose coordinates all lie in K, including the point at infinity. The set of K-rational points is denoted by E(K). 
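Before moving on to the algebraic formulas, a quick numeric check of the discriminant values quoted earlier in this section (a sketch, not from the article; the three curves are the illustrative ones used above):

```python
from fractions import Fraction

# Non-singularity test for y^2 = x^3 + ax + b via Delta = -16(4a^3 + 27b^2),
# following the normalization stated above.
def discriminant(a, b):
    return -16 * (4 * a**3 + 27 * b**2)

for a, b in [(-1, 0), (-1, 1), (0, 0)]:
    d = discriminant(Fraction(a), Fraction(b))
    print(f"y^2 = x^3 + ({a})x + ({b}): Delta = {d}, "
          f"{'elliptic' if d != 0 else 'singular'}")
# y^2 = x^3 - x     -> Delta = 64   (positive: two real components)
# y^2 = x^3 - x + 1 -> Delta = -368 (negative: one real component)
# y^2 = x^3         -> Delta = 0    (a cusp; not an elliptic curve)
```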
E(K) is a group, because properties of polynomial equations show that if P is in E(K), then −P is also in E(K), and if two of P, Q, and R are in E(K), then so is the third. Additionally, if K is a subfield of L, then E(K) is a subgroup of E(L). Algebraic interpretation The above groups can be described algebraically as well as geometrically. Given the curve y2 = x3 + ax + b over the field K (whose characteristic we assume to be neither 2 nor 3), and points P = (xP, yP) and Q = (xQ, yQ) on the curve, assume first that xP ≠ xQ (case 1). Let y = sx + d be the equation of the line that intersects P and Q, which has the following slope: s = (yP − yQ)/(xP − xQ). The line equation and the curve equation intersect at the points P, Q, and R, so the equations have identical values at these x-coordinates: (sx + d)2 = x3 + ax + b, which is equivalent to x3 − s2x2 + (a − 2sd)x + (b − d2) = 0. Since xP, xQ, and xR are solutions, this equation has its roots at exactly the same x values as (x − xP)(x − xQ)(x − xR) = 0, and because both equations are cubics they must be the same polynomial up to a scalar. Then equating the coefficients of x2 in both equations and solving for the unknown xR gives xR = s2 − xP − xQ. yR = yP + s(xR − xP) follows from the line equation, and this is an element of K, because s is. The sum is then the point opposite R, namely P + Q = (xR, −yR). If xP = xQ, then there are two options: if yP = −yQ (case 3), including the case where yP = yQ = 0 (case 4), then the sum is defined as 0; thus, the inverse of each point on the curve is found by reflecting it across the x-axis. If yP = yQ ≠ 0, then Q = P and R = (xR, yR) is determined as above (case 2 using P as Q). The slope is given by the tangent to the curve at (xP, yP): s = (3xP2 + a)/(2yP). A more general expression for s that works in both case 1 and case 2 is s = (xP2 + xPxQ + xQ2 + a)/(yP + yQ), where equality to (yP − yQ)/(xP − xQ) relies on P and Q obeying y2 = x3 + ax + b. Non-Weierstrass curves For the curve y2 = x3 + b2x2 + b4x + b6 (the general form of an elliptic curve with characteristic 3), the formulas are similar, with s = (xP2 + xPxQ + xQ2 + b2(xP + xQ) + b4)/(yP + yQ) and xR = s2 − b2 − xP − xQ. For a general cubic curve not in Weierstrass normal form, we can still define a group structure by designating one of its nine inflection points as the identity O. In the projective plane, each line will intersect a cubic at three points when accounting for multiplicity. For a point P, −P is defined as the unique third point on the line passing through O and P. Then, for any P and Q, P + Q is defined as −R, where R is the unique third point on the line containing P and Q. For an example of the group law over a non-Weierstrass curve, see Hessian curves. Elliptic curves over the rational numbers A curve E defined over the field of rational numbers is also defined over the field of real numbers. Therefore, the law of addition (of points with real coordinates) by the tangent and secant method can be applied to E. The explicit formulae show that the sum of two points P and Q with rational coordinates has again rational coordinates, since the line joining P and Q has rational coefficients. This way, one shows that the set of rational points of E forms a subgroup of the group of real points of E. Integral points This section is concerned with points P = (x, y) of E such that x is an integer. For example, the equation y2 = x3 + 17 has eight integral solutions with y > 0: (x, y) = (−2, 3), (−1, 4), (2, 5), (4, 9), (8, 23), (43, 282), (52, 375), (5234, 378661). As another example, Ljunggren's equation, a curve whose Weierstrass form is y2 = x3 − 2x, has only four solutions with y ≥ 0: (x, y) = (0, 0), (−1, 1), (2, 2), (338, 6214). The structure of rational points Rational points can be constructed by the method of tangents and secants detailed above, starting with a finite number of rational points. More precisely the Mordell–Weil theorem states that the group E(Q) is a finitely generated (abelian) group. By the fundamental theorem of finitely generated abelian groups it is therefore a finite direct sum of copies of Z and finite cyclic groups. The proof of the theorem involves two parts. 
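The case analysis above translates directly into code. The following sketch (an illustration, with curve and points chosen arbitrarily) implements the chord-and-tangent formulas over Q using exact rational arithmetic; the point at infinity O is represented as None.

```python
from fractions import Fraction

# Addition on y^2 = x^3 + ax + b over Q, following the cases above.
def ec_add(P, Q, a):
    if P is None: return Q                    # O is the identity
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and y1 == -y2:
        return None                           # opposite points: P + Q = O
    if P == Q:
        s = (3 * x1 * x1 + a) / (2 * y1)      # tangent slope (case 2)
    else:
        s = (y2 - y1) / (x2 - x1)             # chord slope (case 1)
    x3 = s * s - x1 - x2
    y3 = s * (x1 - x3) - y1                   # reflect the third intersection
    return (x3, y3)

a = Fraction(-1)                              # y^2 = x^3 - x
P = (Fraction(0), Fraction(0))
print(ec_add(P, P, a))                        # None: (0, 0) is 2-torsion

a2 = Fraction(-1)                             # y^2 = x^3 - x + 1
P2 = (Fraction(1), Fraction(1))
print(ec_add(P2, P2, a2))                     # (-1, 1), again on the curve
```

Checking the opposite-points case before the doubling case ensures that doubling a 2-torsion point (where yP = 0) correctly returns O instead of dividing by zero.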
The first part shows that for any integer m > 1, the quotient group E(Q)/mE(Q) is finite (this is the weak Mordell–Weil theorem). Second, introducing a height function h on the rational points E(Q) defined by h(P0) = 0 and h(P) = log max(|p|, |q|) if P (unequal to the point at infinity P0) has as abscissa the rational number x = p/q (with coprime p and q). This height function h has the property that h(mP) grows roughly like the square of m. Moreover, only finitely many rational points with height smaller than any constant exist on E. The proof of the theorem is thus a variant of the method of infinite descent and relies on the repeated application of Euclidean divisions on E: let P ∈ E(Q) be a rational point on the curve, writing P as the sum 2P1 + Q1 where Q1 is a fixed representant of P in E(Q)/2E(Q), the height of P1 is about 1/4 of the one of P (more generally, replacing 2 by any m > 1, and 1/4 by 1/m2). Redoing the same with P1, that is to say P1 = 2P2 + Q2, then P2 = 2P3 + Q3, etc. finally expresses P as an integral linear combination of points Qi and of points whose height is bounded by a fixed constant chosen in advance: by the weak Mordell–Weil theorem and the second property of the height function P is thus expressed as an integral linear combination of a finite number of fixed points. The theorem however doesn't provide a method to determine any representatives of E(Q)/mE(Q). The rank of E(Q), that is the number of copies of Z in E(Q) or, equivalently, the number of independent points of infinite order, is called the rank of E. The Birch and Swinnerton-Dyer conjecture is concerned with determining the rank. One conjectures that it can be arbitrarily large, even if only examples with relatively small rank are known. The elliptic curve with the currently largest exactly-known rank is of the form y2 + xy + y = x3 − x2 − Ax + B, where A and B are integers far too large to reproduce here. It has rank 20, found by Noam Elkies and Zev Klagsbrun in 2020. Curves of rank higher than 20 have been known since 1994, with lower bounds on their ranks ranging from 21 to 29, but their exact ranks are not known and in particular it is not proven which of them have higher rank than the others or which is the true "current champion". As for the groups constituting the torsion subgroup of E(Q), the following is known: the torsion subgroup of E(Q) is one of the 15 following groups (a theorem due to Barry Mazur): Z/NZ for N = 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, or 12, or Z/2Z × Z/2NZ with N = 1, 2, 3, 4. Examples for every case are known. Moreover, elliptic curves whose Mordell–Weil groups over Q have the same torsion groups belong to a parametrized family. The Birch and Swinnerton-Dyer conjecture The Birch and Swinnerton-Dyer conjecture (BSD) is one of the Millennium problems of the Clay Mathematics Institute. The conjecture relies on analytic and arithmetic objects defined by the elliptic curve in question. At the analytic side, an important ingredient is a function of a complex variable, L, the Hasse–Weil zeta function of E over Q. This function is a variant of the Riemann zeta function and Dirichlet L-functions. It is defined as an Euler product, with one factor for every prime number p. For a curve E over Q given by a minimal equation with integral coefficients ai, reducing the coefficients modulo p defines an elliptic curve over the finite field Fp (except for a finite number of primes p, where the reduced curve has a singularity and thus fails to be elliptic, in which case E is said to be of bad reduction at p). 
The zeta function of an elliptic curve over a finite field Fp is, in some sense, a generating function assembling the information of the number of points of E with values in the finite field extensions Fpn of Fp. It is given by Z(E(Fp), T) = exp(Σn≥1 #E(Fpn) T^n/n). The interior sum of the exponential resembles the development of the logarithm and, in fact, the so-defined zeta function is a rational function in T: Z(E(Fp), T) = (1 − apT + pT2)/((1 − T)(1 − pT)), where the 'trace of Frobenius' term ap is defined to be the difference between the 'expected' number p + 1 and the number of points on the elliptic curve over Fp, viz. ap = p + 1 − #E(Fp), or equivalently, #E(Fp) = p + 1 − ap. We may define the same quantities and functions over an arbitrary finite field of characteristic p, with q = p^n replacing p everywhere. The L-function of E over Q is then defined by collecting this information together, for all primes p. It is defined by L(E, s) = Πp∤N (1 − ap p^(−s) + p^(1−2s))^(−1) · Πp|N (1 − ap p^(−s))^(−1), where N is the conductor of E, i.e. the product of primes with bad reduction, in which case ap is defined differently from the method above: see Silverman (1986) below. For example, y2 = x3 + 17 has bad reduction at 17, because its discriminant is divisible by 17. This product converges for Re(s) > 3/2 only. Hasse's conjecture affirms that the L-function admits an analytic continuation to the whole complex plane and satisfies a functional equation relating, for any s, L(E, s) to L(E, 2 − s). In 1999 this was shown to be a consequence of the proof of the Shimura–Taniyama–Weil conjecture, which asserts that every elliptic curve over Q is a modular curve, which implies that its L-function is the L-function of a modular form whose analytic continuation is known. One can therefore speak about the values of L(E, s) at any complex number s. At s = 1 (the conductor product can be discarded as it is finite), the L-function becomes L(E, 1) = Πp∤N (1 − ap/p + 1/p)^(−1). The Birch and Swinnerton-Dyer conjecture relates the arithmetic of the curve to the behaviour of this L-function at s = 1. It affirms that the vanishing order of the L-function at s = 1 equals the rank of E and predicts the leading term of the Laurent series of L(E, s) at that point in terms of several quantities attached to the elliptic curve. Much like the Riemann hypothesis, the truth of the BSD conjecture would have multiple consequences, including the following two: A congruent number is defined as an odd square-free integer n which is the area of a right triangle with rational side lengths. It is known that n is a congruent number if and only if the elliptic curve y2 = x3 − n2x has a rational point of infinite order; assuming BSD, this is equivalent to its L-function having a zero at s = 1. Tunnell has shown a related result: assuming BSD, n is a congruent number if and only if the number of triplets of integers (x, y, z) satisfying 2x2 + y2 + 8z2 = n is twice the number of triples satisfying 2x2 + y2 + 32z2 = n. The interest in this statement is that the condition is easy to check. In a different direction, certain analytic methods allow for an estimation of the order of zero in the center of the critical strip for certain L-functions. Admitting BSD, these estimations correspond to information about the rank of families of the corresponding elliptic curves. For example: assuming the generalized Riemann hypothesis and BSD, the average rank of curves given by y2 = x3 + ax + b is smaller than 2. Elliptic curves over finite fields Let K = Fq be the finite field with q elements and E an elliptic curve defined over K. 
While the precise number of rational points of an elliptic curve E over K is in general difficult to compute, Hasse's theorem on elliptic curves gives the following inequality: |#E(Fq) − (q + 1)| ≤ 2√q. In other words, the number of points on the curve grows proportionally to the number of elements in the field. This fact can be understood and proven with the help of some general theory; see local zeta function and étale cohomology for example. The set of points E(Fq) is a finite abelian group. It is always cyclic or the product of two cyclic groups. For example, the curve defined by y2 = x3 − x over F71 has 72 points (71 affine points including (0,0) and one point at infinity) over this field, whose group structure is given by Z/2Z × Z/36Z. The number of points on a specific curve can be computed with Schoof's algorithm. Studying the curve over the field extensions of Fq is facilitated by the introduction of the local zeta function of E over Fq, defined by a generating series (also see above) Z(E(Fq), T) = exp(Σn≥1 #E(Kn) T^n/n), where the field Kn is the (unique up to isomorphism) extension of K = Fq of degree n (that is, Kn = Fqn). The zeta function is a rational function in T. To see this, consider the integer a such that #E(Fq) = q + 1 − a. There is a complex number α such that a = α + ᾱ and αᾱ = q, where ᾱ is the complex conjugate. We choose α so that its absolute value is √q, that is α = q^(1/2)e^(iθ), ᾱ = q^(1/2)e^(−iθ). Note that |a| ≤ 2√q. α can then be used in the local zeta function, as its values when raised to the various powers of n describe the behaviour of the point counts, in that #E(Kn) = q^n + 1 − α^n − ᾱ^n. Using the Taylor series for the natural logarithm, Σn≥1 #E(Kn) T^n/n = −log(1 − T) − log(1 − qT) + log(1 − αT) + log(1 − ᾱT). Then Z(E(Fq), T) is the exponential of this expression, so finally Z(E(Fq), T) = (1 − αT)(1 − ᾱT)/((1 − T)(1 − qT)) = (1 − aT + qT2)/((1 − T)(1 − qT)). For example, the zeta function of E : y2 + y = x3 over the field F2 is given by Z = (1 + 2T2)/((1 − T)(1 − 2T)), which follows from: #E(F2) = 3, so a = 2 + 1 − 3 = 0 and the numerator is 1 + 2T2. The functional equation is Z(E(Fq), 1/(qT)) = Z(E(Fq), T). As we are only interested in the behaviour of the point counts, we can use a reduced zeta function, discarding the denominator common to all curves, which leads directly to the local L-functions L(E(Fq), T) = 1 − aT + qT2. The Sato–Tate conjecture is a statement about how the error term in Hasse's theorem varies with the different primes q, if an elliptic curve E over Q is reduced modulo q. It was proven (for almost all such curves) in 2006 due to the results of Taylor, Harris and Shepherd-Barron, and says that the error terms are equidistributed. Elliptic curves over finite fields are notably applied in cryptography and for the factorization of large integers. These algorithms often make use of the group structure on the points of E. Algorithms that are applicable to general groups, for example the group of invertible elements in finite fields, F*q, can thus be applied to the group of points on an elliptic curve. For example, the discrete logarithm is such an algorithm. The interest in this is that choosing an elliptic curve allows for more flexibility than choosing q (and thus the group of units in Fq). Also, the group structure of elliptic curves is generally more complicated. Elliptic curves over a general field Elliptic curves can be defined over any field K; the formal definition of an elliptic curve is a non-singular projective algebraic curve over K with genus 1 and endowed with a distinguished point defined over K. If the characteristic of K is neither 2 nor 3, then every elliptic curve over K can be written in the form y2 = x3 − px − q after a linear change of variables. Here p and q are elements of K such that the right hand side polynomial x3 − px − q does not have any double roots. 
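For small fields the quantities above can be checked by brute force. The sketch below counts points and verifies Hasse's inequality; the curve y2 = x3 − x is an illustrative choice consistent with the 72-point example quoted above, not a canonical one.

```python
# Brute-force point counting over F_p and a check of Hasse's bound
# |#E(F_p) - (p + 1)| <= 2*sqrt(p), for y^2 = x^3 + ax + b.
def count_points(a, b, p):
    # map each square value to its square roots in F_p
    roots = {}
    for y in range(p):
        roots.setdefault(y * y % p, []).append(y)
    n = 1                                        # the point at infinity
    for x in range(p):
        n += len(roots.get((x**3 + a * x + b) % p, []))
    return n

p, a, b = 71, -1, 0                              # y^2 = x^3 - x over F_71
n = count_points(a, b, p)
trace = p + 1 - n                                # a_p, the trace of Frobenius
print(n, trace, abs(trace) <= 2 * p**0.5)        # 72 0 True
```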
If the characteristic is 2 or 3, then more terms need to be kept: in characteristic 3, the most general equation is of the form y2 = 4x3 + b2x2 + 2b4x + b6 for arbitrary constants b2, b4, b6 such that the polynomial on the right-hand side has distinct roots (the notation is chosen for historical reasons). In characteristic 2, even this much is not possible, and the most general equation is y2 + a1xy + a3y = x3 + a2x2 + a4x + a6, provided that the variety it defines is non-singular. If characteristic were not an obstruction, each equation would reduce to the previous ones by a suitable linear change of variables. One typically takes the curve to be the set of all points (x,y) which satisfy the above equation and such that both x and y are elements of the algebraic closure of K. Points of the curve whose coordinates both belong to K are called K-rational points. Many of the preceding results remain valid when the field of definition of E is a number field K, that is to say, a finite field extension of Q. In particular, the group E(K) of K-rational points of an elliptic curve E defined over K is finitely generated, which generalizes the Mordell–Weil theorem above. A theorem due to Loïc Merel shows that for a given integer d, there are (up to isomorphism) only finitely many groups that can occur as the torsion groups of E(K) for an elliptic curve defined over a number field K of degree d. More precisely, there is a number B(d) such that for any elliptic curve E defined over a number field K of degree d, any torsion point of E(K) is of order less than B(d). The theorem is effective: for d > 1, if a torsion point is of order p, with p prime, then p < d^(3d2). As for the integral points, Siegel's theorem generalizes to the following: Let E be an elliptic curve defined over a number field K, x and y the Weierstrass coordinates. Then there are only finitely many points of E(K) whose x-coordinate is in the ring of integers OK. The properties of the Hasse–Weil zeta function and the Birch and Swinnerton-Dyer conjecture can also be extended to this more general situation. Elliptic curves over the complex numbers The formulation of elliptic curves as the embedding of a torus in the complex projective plane follows naturally from a curious property of Weierstrass's elliptic functions. These functions and their first derivative are related by the formula ℘′(z)2 = 4℘(z)3 − g2℘(z) − g3. Here, g2 and g3 are constants; ℘(z) is the Weierstrass elliptic function and ℘′(z) its derivative. It should be clear that this relation is in the form of an elliptic curve (over the complex numbers). The Weierstrass functions are doubly periodic; that is, they are periodic with respect to a lattice Λ; in essence, the Weierstrass functions are naturally defined on a torus T = C/Λ. This torus may be embedded in the complex projective plane by means of the map z ↦ [℘(z) : ℘′(z) : 1]. This map is a group isomorphism of the torus (considered with its natural group structure) with the chord-and-tangent group law on the cubic curve which is the image of this map. It is also an isomorphism of Riemann surfaces from the torus to the cubic curve, so topologically, a complex elliptic curve is a torus. If the lattice Λ is related by multiplication by a non-zero complex number c to a lattice cΛ, then the corresponding curves are isomorphic. Isomorphism classes of elliptic curves are specified by the j-invariant. The isomorphism classes can be understood in a simpler way as well. The constants g2 and g3, called the modular invariants, are uniquely determined by the lattice, that is, by the structure of the torus. 
However, all real polynomials factorize completely into linear factors over the complex numbers, since the field of complex numbers is the algebraic closure of the reals. So, the elliptic curve may be written as y2 = x(x − 1)(x − λ). One finds that the modular invariants g2 and g3, and hence the j-invariant, are expressible in terms of λ; λ is sometimes called the modular lambda function. The quantities in the formula above are all algebraic numbers if the period ratio τ involves an imaginary quadratic field; for the example τ = 2i, it in fact yields the integer j(2i) = 663 = 287496. In contrast, the modular discriminant is generally a transcendental number. In particular, the value of the Dedekind eta function is η(2i) = Γ(1/4)/(2^(11/8) π^(3/4)). Note that the uniformization theorem implies that every compact Riemann surface of genus one can be represented as a torus. This also allows an easy understanding of the torsion points on an elliptic curve: if the lattice Λ is spanned by the fundamental periods ω1 and ω2, then the n-torsion points are the (equivalence classes of) points of the form (a/n)ω1 + (b/n)ω2 for integers a and b in the range 0 ≤ a, b < n. If E is an elliptic curve over the complex numbers, written as y2 = 4(x − e1)(x − e2)(x − e3), then a pair of fundamental periods of E can be calculated very rapidly by means of the arithmetic–geometric mean: for instance, ω1 = π/M(√(e1 − e3), √(e1 − e2)), where M(x, y) is the arithmetic–geometric mean of x and y. At each step of the arithmetic–geometric mean iteration, the signs of the square roots arising from the ambiguity of geometric mean iterations are chosen such that |an − gn| ≤ |an + gn|, where an and gn denote the individual arithmetic mean and geometric mean iterations of x and y, respectively. When |an − gn| = |an + gn|, there is an additional condition that Im(gn/an) > 0. Over the complex numbers, every elliptic curve has nine inflection points. Every line through two of these points also passes through a third inflection point; the nine points and 12 lines formed in this way form a realization of the Hesse configuration. The Dual Isogeny Given an isogeny f : E → E′ of elliptic curves of degree n, the dual isogeny is an isogeny f̂ : E′ → E of the same degree such that f̂ ∘ f = [n]. Here [n] denotes the multiplication-by-n isogeny P ↦ nP, which has degree n2. Construction of the Dual Isogeny Often only the existence of a dual isogeny is needed, but it can be explicitly given as the composition E′ → Div0(E′) → Div0(E) → E, where Div0 is the group of divisors of degree 0. To do this, we need maps E → Div0(E) given by P ↦ P − O, where O is the neutral point of E, and Div0(E) → E given by Σ nP(P) ↦ Σ nP P (the sum taken with the group law). To see that f̂ ∘ f = [n], note that the original isogeny f can be written as a composite E → Div0(E) → Div0(E′) → E′ and that, since f is finite of degree n, pushforward followed by pullback of divisors is multiplication by n on Div0(E′). Alternatively, we can use the smaller Picard group Pic0, a quotient of Div0. The map E → Div0(E) descends to an isomorphism, E → Pic0(E). The dual isogeny is E′ → Pic0(E′) → Pic0(E) → E. Note that the relation f̂ ∘ f = [n] also implies the conjugate relation f ∘ f̂ = [n]. Indeed, let φ = f ∘ f̂. Then φ ∘ f = f ∘ (f̂ ∘ f) = f ∘ [n] = [n] ∘ f. But f is surjective, so we must have φ = [n]. Algorithms that use elliptic curves Elliptic curves over finite fields are used in some cryptographic applications as well as for integer factorization. Typically, the general idea in these applications is that a known algorithm which makes use of certain finite groups is rewritten to use the groups of rational points of elliptic curves. 
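To make the last remark concrete, here is a hedged sketch of the same chord-and-tangent arithmetic over a prime field together with double-and-add scalar multiplication — the primitive that generic-group algorithms reuse once they are rewritten for elliptic curves. The curve and base point are small illustrative choices, not a standardized cryptographic group.

```python
# Group law on y^2 = x^3 + ax + b over F_p; the identity O is None.
def ec_add_mod(P, Q, a, p):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                             # opposite points sum to O
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):                         # double-and-add: O(log k) steps
    R = None
    while k:
        if k & 1:
            R = ec_add_mod(R, P, a, p)
        P = ec_add_mod(P, P, a, p)
        k >>= 1
    return R

p, a = 71, -1                                   # y^2 = x^3 - x over F_71 (see above)
G = (5, 7)                                      # 7^2 = 49 = (125 - 5) mod 71
print(ec_mul(36, G, a, p))                      # 36*G, computed in ~6 doublings
```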
For more see also: Elliptic curve cryptography Elliptic-curve Diffie–Hellman key exchange (ECDH) Supersingular isogeny key exchange Elliptic curve digital signature algorithm (ECDSA) EdDSA digital signature algorithm Dual EC DRBG random number generator Lenstra elliptic-curve factorization Elliptic curve primality proving Alternative representations of elliptic curves Hessian curve Edwards curve Twisted curve Twisted Hessian curve Twisted Edwards curve Doubling-oriented Doche–Icart–Kohel curve Tripling-oriented Doche–Icart–Kohel curve Jacobian curve Montgomery curve See also Arithmetic dynamics Elliptic algebra Elliptic surface Comparison of computer algebra systems Isogeny j-line Level structure (algebraic geometry) Modularity theorem Moduli stack of elliptic curves Nagell–Lutz theorem Riemann–Hurwitz formula Wiles's proof of Fermat's Last Theorem Notes References Serge Lang, in the introduction to the book cited below, stated that "It is possible to write endlessly on elliptic curves. (This is not a threat.)" The following short list is thus at best a guide to the vast expository literature available on the theoretical, algorithmic, and cryptographic aspects of elliptic curves. External links LMFDB: Database of Elliptic Curves over Q The Arithmetic of elliptic curves from PlanetMath Interactive elliptic curve over R and over Zp – web application that requires HTML5 capable browser. Analytic number theory Group theory
Elliptic curve
Mathematics
6,278
22,143,100
https://en.wikipedia.org/wiki/Shearography
Shearography or speckle pattern shearing interferometry is a measuring and testing method similar to holographic interferometry. It uses coherent light or coherent soundwaves to provide information about the quality of different materials in nondestructive testing, strain measurement, and vibration analysis. Shearography is extensively used in production and development in aerospace, wind rotor blade, automotive, and materials research areas. Advantages of shearography are its large-area testing capability (up to 1 m2 per minute), its non-contact operation, its relative insensitivity to environmental disturbances, and its good performance on honeycomb materials, which are a big challenge for traditional nondestructive testing methods. Shearing function When a surface area is illuminated with highly coherent laser light, a stochastic interference pattern is created. This interference pattern is called a speckle pattern, and it is projected onto the CCD chip of a rigidly mounted camera. Analogous to electronic speckle pattern interferometry (ESPI), results can only be obtained from the speckle pattern by comparing it with a known reference light. Shearography uses the test object itself as the known reference; it shears the image so that a double image is created. The superposition of the two images, a shear image, represents the surface of the test object in this unloaded state. This makes the method much less sensitive to external vibrations and noise. By applying a small load, the material deforms. A nonuniform material quality generates a nonuniform movement of the surface of the test object. A new shear image is recorded in the loaded state and is compared with the shear image taken before loading. If a flaw is present, it will be seen. Phase-shift technology To increase the sensitivity of the measurement method, a real-time phase-shift process is used in the sensor. This contains a stepping mirror that shifts the reference beam; the recorded frames are then processed with a best-fit algorithm, and the information is presented in real time. Applications The main applications are in composite nondestructive testing, where typical flaws are: Disbonds, Delaminations, Wrinkles, Porosity, Foreign objects, and Impact damages. Industries where shearography is used are: Aerospace, Space, Boats, Wind power, Automotive, Tires, and Art conservation. Inspection standards The methodology of shearography is standardized by ASTM International: ASTM E2581-07, "Standard Practice for Shearography on Polymer Matrix Composites, Sandwich Core Materials and Filament Wound Pressure Vessels in Aerospace Applications" The following NDT personnel certification documents contain references to shearography: BS EN 4179:2009 NAS 410, 2008 Rev 3 ASNT SNT-TC-1A, 2006 edition ASNT CP-105, 2006 edition References Nondestructive testing
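The article does not spell out the best-fit algorithm; as one common textbook possibility (an assumption here, not the documented behaviour of any commercial sensor), a four-step phase-shifting calculation with π/2 reference steps introduced by the stepping mirror recovers the phase of each pixel as follows.

```python
import numpy as np

# Four-step phase shifting: frames taken at reference shifts 0, pi/2, pi, 3pi/2.
def phase_map(I1, I2, I3, I4):
    return np.arctan2(I4 - I2, I1 - I3)

# Synthetic check: build frames from a known phase and recover it.
phi = np.linspace(-3, 3, 5)[None, :]           # assumed "true" phase, radians
frames = [100 + 50 * np.cos(phi + k * np.pi / 2) for k in range(4)]
print(np.allclose(phase_map(*frames), phi))    # True (phase within (-pi, pi])
```

Subtracting the phase maps of the unloaded and loaded shear images then yields the fringe pattern in which flaws show up.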
Shearography
Materials_science
573
1,401,941
https://en.wikipedia.org/wiki/Morley%20rank
In mathematical logic, Morley rank, introduced by Michael D. Morley, is a means of measuring the size of a subset of a model of a theory, generalizing the notion of dimension in algebraic geometry. Definition Fix a theory T with a model M. The Morley rank of a formula φ defining a definable (with parameters) subset S of M is an ordinal or −1 or ∞, defined by first recursively defining what it means for a formula to have Morley rank at least α for some ordinal α. The Morley rank is at least 0 if S is non-empty. For α a successor ordinal, the Morley rank is at least α if in some elementary extension N of M, the set S has countably infinitely many disjoint definable subsets Si, each of rank at least α − 1. For α a non-zero limit ordinal, the Morley rank is at least α if it is at least β for all β less than α. The Morley rank is then defined to be α if it is at least α but not at least α + 1, and is defined to be ∞ if it is at least α for all ordinals α, and is defined to be −1 if S is empty. For a definable subset of a model M (defined by a formula φ) the Morley rank is defined to be the Morley rank of φ in any ℵ0-saturated elementary extension of M. In particular for ℵ0-saturated models the Morley rank of a subset is the Morley rank of any formula defining the subset. If φ defining S has rank α, and S breaks up into no more than n < ω subsets of rank α, then φ is said to have Morley degree n. A formula defining a finite set has Morley rank 0. A formula with Morley rank 1 and Morley degree 1 is called strongly minimal. A strongly minimal structure is one where the trivial formula x = x is strongly minimal. Morley rank and strongly minimal structures are key tools in the proof of Morley's categoricity theorem and in the larger area of model theoretic stability theory. Examples The empty set has Morley rank −1, and conversely anything of Morley rank −1 is empty. A subset has Morley rank 0 if and only if it is finite and non-empty. If V is an algebraic set in Kn, for an algebraically closed field K, then the Morley rank of V is the same as its usual Krull dimension. The Morley degree of V is the number of irreducible components of maximal dimension; this is not the same as its degree in algebraic geometry, except when its components of maximal dimension are linear spaces. The rational numbers, considered as an ordered set, have Morley rank ∞, as the set contains a countable disjoint union of definable subsets isomorphic to itself. See also Cherlin–Zilber conjecture Group of finite Morley rank U-rank References Alexandre Borovik, Ali Nesin, "Groups of finite Morley rank", Oxford Univ. Press (1994) B. Hart Stability theory and its variants (2000) pp. 131–148 in Model theory, algebra and geometry, edited by D. Haskell et al., Math. Sci. Res. Inst. Publ. 39, Cambridge Univ. Press, New York, 2000. Contains a formal definition of Morley rank. David Marker Model Theory of Differential Fields (2000) pp. 53–63 in Model theory, algebra and geometry, edited by D. Haskell et al., Math. Sci. Res. Inst. Publ. 39, Cambridge Univ. Press, New York, 2000. Model theory
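A hedged worked example (standard for algebraically closed fields, though not spelled out in the article) shows the recursion in action and its agreement with Krull dimension.

```latex
% In ACF (the theory of an algebraically closed field K), the affine plane
% has Morley rank at least 2:
\[
K^2 \;=\; \bigsqcup_{a \in K} \{a\} \times K,
\qquad
\operatorname{RM}\bigl(\{a\} \times K\bigr) = \operatorname{RM}(K) = 1,
\]
% since each line is in definable bijection with K, which is strongly
% minimal (rank 1, degree 1).  Infinitely many pairwise disjoint definable
% subsets of rank 1 — countably many already suffice in the definition —
% witness RM(K^2) >= 2.  This is consistent with the Examples section above:
% K^2 is an algebraic set of Krull dimension 2, so RM(K^2) = 2 exactly.
```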
Morley rank
Mathematics
753
9,361,398
https://en.wikipedia.org/wiki/Lipofectamine
Lipofectamine or Lipofectamine 2000 is a common transfection reagent, produced and sold by Invitrogen, used in molecular and cellular biology. It is used to increase the transfection efficiency of RNA (including mRNA and siRNA) or plasmid DNA into in vitro cell cultures by lipofection. Lipofectamine contains lipid subunits that can form liposomes in an aqueous environment, which entrap the transfection payload, e.g. DNA plasmids. Lipofectamine consists of a 3:1 mixture of DOSPA (2,3‐dioleoyloxy‐N‐ [2(sperminecarboxamido)ethyl]‐N,N‐dimethyl‐1‐propaniminium trifluoroacetate) and DOPE, which complexes with negatively charged nucleic acid molecules to allow them to overcome the electrostatic repulsion of the cell membrane. Lipofectamine's cationic lipid molecules are formulated with a neutral co-lipid (helper lipid). The DNA-containing liposomes (positively charged on their surface) can fuse with the negatively charged plasma membrane of living cells, due to the neutral co-lipid mediating fusion of the liposome with the cell membrane, allowing nucleic acid cargo molecules to cross into the cytoplasm for replication or expression. In order for a cell to express a transgene, the nucleic acid must reach the nucleus of the cell to begin transcription. However, the transfected genetic material may never reach the nucleus in the first place, instead being disrupted somewhere along the delivery process. In dividing cells, the material may reach the nucleus by being trapped in the reassembling nuclear envelope following mitosis. But also in non-dividing cells, research has shown that Lipofectamine improves the efficiency of transfection, which suggests that it additionally helps the transfected genetic material penetrate the intact nuclear envelope. This method of transfection was invented by Dr. Yongliang Chu. See also Lipofection Transfection Vectors in gene therapy Cationic liposome References US Active US7479573B2, Yongliang Chu; Malek Masoud & Gulliat Gebeyehu, "Transfection reagents", assigned to Life Technologies Corp and Invitrogen Group Molecular biology Gene delivery
Lipofectamine
Chemistry,Biology
507
44,173,789
https://en.wikipedia.org/wiki/Mustafa%20Prize
The Mustafa Prize is a science and technology award granted to top researchers and scientists from the Organisation of Islamic Cooperation (OIC) member states. The prize is granted to scholars of the Islamic world as one of the symbols of scientific excellence, in recognition of outstanding scientists and pioneers of scientific and technological cooperation and development in the world. The $500,000 prize, together with a medal and diploma, is awarded to Muslim researchers and scientists, regardless of whether they live in Muslim-majority nations or elsewhere, as well as to non-Muslim scientists in Muslim countries. In 2016, a science journal called the prize "the Muslim Nobel". The Mustafa Prize is held biennially during Islamic Unity week in Iran. The prize is awarded in the four categories of "Information and Communication Science and Technology," "Life and Medical Science and Technology," "Nanoscience and Nanotechnology," and "All Areas of Science and Technology". These areas include the following UNESCO fields of education: natural sciences, mathematics, and statistics; information and communication technologies; engineering, manufacturing, and construction; agriculture, forestry, fisheries and veterinary; health and welfare; as well as cognitive science and Islamic economics and banking. History, governance, and nominations The Mustafa Prize is given biennially by Iran's government to leading researchers and scientists from countries of the Organization of Islamic Cooperation. The inaugural prize was given in 2016. The Mustafa Science and Technology Foundation has formed several committees to organize the Mustafa Prize. The Mustafa Prize Policy-Making Council was established in 2013. Its secretary said in 2017 that the prize and its governing bodies had no formal political relations with any country. The MSTF Advisory Board is composed of volunteer high-ranking academics, public-sector officials, technologists, and business leaders from the Islamic community who advise the MSTF at a strategic level and help it achieve its objectives through promoting public awareness, fundraising, and networking. Other bodies created to achieve the goals of the Mustafa Foundation are the Safir Al-Mustafa Club, the Mustafa Prize Volunteers Community, The Mustafa Art Museum, the MSTF Laboratory Network, and the MSTF innovation labs. Nature interpreted the establishment of the prize as a sign of the growing importance of domestic science in Iran and of the nurturing of scientific cooperation and exchange with other nations. For the first three categories (Information and Communication Science and Technology, Life and Medical Science and Technology, and Nanoscience and Nanotechnology), the nominees should be citizens of one of the 57 Islamic countries, with no restrictions on religion, gender and/or age. However, for the category "All Areas of Science and Technology," only Muslims may be nominated, with no restrictions on citizenship, gender and/or age. Nominations may be made by scientific centers and universities, science and technology associations and centers of excellence, academies of science of Islamic countries, and science and technology parks. Laureates See also List of general science and technology awards References External links Academic awards Awards established in 2013 International awards Science and technology in Iran Organisation of Islamic Cooperation Islamic awards
Mustafa Prize
Technology
602
27,004,332
https://en.wikipedia.org/wiki/3-Deoxyglucosone
3-Deoxyglucosone (3DG) is a sugar that is notable because it is a marker for diabetes. 3DG reacts with protein to form advanced glycation end-products (AGEs), which contribute to diseases such as the vascular complications of diabetes, atherosclerosis, hypertension, Alzheimer's disease, inflammation, and aging. Biosynthesis 3DG is made naturally via the Maillard reaction. It forms after glucose reacts with primary amino groups of lysine or arginine found in proteins. Because of the increased concentration of the reactant glucose, more 3DG forms with excessive blood sugar levels, as in uncontrolled diabetes. Glucose reacts non-enzymatically with protein amino groups to initiate glycation. The formation of 3DG may account for the numerous complications of diabetes as well as aging. 3DG arises also via the degradation of fructose 3-phosphate (F3P). 3DG plays a central role in the development of diabetic complications via the action of fructosamine-3-kinase. Biochemistry As a dicarbonyl sugar, i.e. one with the grouping R-C(O)-C(O)-R, 3DG is highly reactive toward amine groups. Amines are common in amino acids as well as some nucleic acids. The products from the reaction of 3DG with protein amino groups are called advanced glycation end-products (AGEs). AGEs include imidazolones, pyrraline, N6-(carboxymethyl)lysine, and pentosidine. 3DG as well as AGEs play a role in the modification and cross-linking of long-lived proteins such as crystallin and collagen, contributing to diseases such as the vascular complications of diabetes, atherosclerosis, hypertension, Alzheimer's disease, inflammation, and aging. 3DG has a variety of potential biological effects, particularly when it is present at elevated concentrations in diabetic states: Diabetics with nephropathy were found to have elevated plasma levels of 3DG compared with other diabetics. Glycated diet, which elevates systemic 3DG levels, leads to diabetes-like tubular and glomerular kidney pathology. Increased 3DG is correlated to increased glomerular basement membrane width. 3DG inactivates aldehyde reductase. Aldehyde reductase is the cellular enzyme that protects the body from 3DG. Detoxification of 3DG to 3-deoxyfructose (3DF) is impaired in diabetic humans since their ratio of 3DG to 3DF in urine and plasma differs significantly from non-diabetic individuals. 3DG is a teratogenic factor in diabetic embryopathy, leading to embryo malformation. This appears to arise from 3DG accumulation, which leads to superoxide-mediated embryopathy. Women with pre-existing diabetes or severe diabetes that develops during pregnancy are between 3 and 4 times more likely than other women to give birth to infants with birth defects. 3DG induces apoptosis in macrophage-derived cell lines and is toxic to cultured cortical neurons and PC12 cells. 3DG and ROS 3DG induces reactive oxygen species (ROS) that contribute to the development of diabetic complications. Specifically, 3DG induces heparin-binding epidermal growth factor, a smooth muscle mitogen that is abundant in atherosclerotic plaques. This observation suggests that an increase in 3DG may trigger atherogenesis in diabetes. 3DG also inactivates some enzymes that protect cells from ROS. For example, glutathione peroxidase, a central antioxidant enzyme that uses glutathione to remove ROS, and glutathione reductase, which regenerates glutathione, are both inactivated by 3DG. Diabetic humans show increased oxidative stress. 3DG-induced ROS result in oxidative DNA damage. 
3DG can be internalized by cells, and internalized 3DG is responsible for the production of intracellular oxidative stress. Detoxification Although of uncertain medical significance, a variety of compounds react with 3DG, possibly deactivating it. One such agent is aminoguanidine (AG). AG reduces AGE-associated retinal, neural, arterial, and renal pathologies in animal models. The problem with AG is that it is toxic in the quantities needed for efficacy. Additional reading References Deoxy sugars Advanced glycation end-products
3-Deoxyglucosone
Chemistry,Biology
968
11,174,336
https://en.wikipedia.org/wiki/In-place%20matrix%20transposition
In-place matrix transposition, also called in-situ matrix transposition, is the problem of transposing an N×M matrix in-place in computer memory, ideally with O(1) (bounded) additional storage, or at most with additional storage much less than NM. Typically, the matrix is assumed to be stored in row-major or column-major order (i.e., contiguous rows or columns, respectively, arranged consecutively). Performing an in-place transpose (in-situ transpose) is most difficult when N ≠ M, i.e. for a non-square (rectangular) matrix, where it involves a complex permutation of the data elements, with many cycles of length greater than 2. In contrast, for a square matrix (N = M), all of the cycles are of length 1 or 2, and the transpose can be achieved by a simple loop to swap the upper triangle of the matrix with the lower triangle. Further complications arise if one wishes to maximize memory locality in order to improve cache line utilization or to operate out-of-core (where the matrix does not fit into main memory), since transposes inherently involve non-consecutive memory accesses. The problem of non-square in-place transposition has been studied since at least the late 1950s, and several algorithms are known, including several which attempt to optimize locality for cache, out-of-core, or similar memory-related contexts. Background On a computer, one can often avoid explicitly transposing a matrix in memory by simply accessing the same data in a different order. For example, software libraries for linear algebra, such as BLAS, typically provide options to specify that certain matrices are to be interpreted in transposed order to avoid data movement. However, there remain a number of circumstances in which it is necessary or desirable to physically reorder a matrix in memory to its transposed ordering. For example, with a matrix stored in row-major order, the rows of the matrix are contiguous in memory and the columns are discontiguous. If repeated operations need to be performed on the columns, for example in a fast Fourier transform algorithm (e.g. Frigo & Johnson, 2005), transposing the matrix in memory (to make the columns contiguous) may improve performance by increasing memory locality. Since these situations normally coincide with the case of very large matrices (which exceed the cache size), performing the transposition in-place with minimal additional storage becomes desirable. Also, as a purely mathematical problem, in-place transposition involves a number of interesting number theory puzzles that have been worked out over the course of several decades. Example For example, consider the 2×4 matrix whose rows are (11, 12, 13, 14) and (21, 22, 23, 24). In row-major format, this would be stored in computer memory as the sequence (11, 12, 13, 14, 21, 22, 23, 24), i.e. the two rows stored consecutively. If we transpose this, we obtain the 4×2 matrix whose rows are (11, 21), (12, 22), (13, 23), and (14, 24), which is stored in computer memory as the sequence (11, 21, 12, 22, 13, 23, 14, 24). If we number the storage locations 0 to 7, from left to right, then this permutation consists of four cycles: (0), (1 2 4), (3 6 5), (7) That is, the value in position 0 goes to position 0 (a cycle of length 1, no data motion). Next, the value in position 1 of the original storage (the element 12) goes to position 2 of the transposed storage, while the value in position 2 (the element 13) goes to position 4, and the value in position 4 (the element 21) goes back to position 1.
Similarly for the values in position 7 and positions (3 6 5). Properties of the permutation In the following, we assume that the N×M matrix is stored in row-major order with zero-based indices. This means that the (n,m) element, for n = 0,...,N−1 and m = 0,...,M−1, is stored at an address a = Mn + m (plus some offset in memory, which we ignore). In the transposed M×N matrix, the corresponding (m,n) element is stored at the address a' = Nm + n, again in row-major order. We define the transposition permutation to be the function a' = P(a) such that: Nm + n = P(Mn + m) for all 0 ≤ n < N and 0 ≤ m < M. This defines a permutation on the numbers 0, 1, ..., MN−1. It turns out that one can define simple formulas for P and its inverse (Cate & Twigg, 1977). First: P(a) = Na mod (MN−1) for 0 ≤ a < MN−1, together with the fixed point P(MN−1) = MN−1, where "mod" is the modulo operation. If 0 ≤ a = Mn + m < MN − 1, then Na mod (MN−1) = (MNn + Nm) mod (MN − 1) = Nm + n, since MN ≡ 1 (mod MN − 1). Second, the inverse permutation is given by: P−1(a') = Ma' mod (MN−1) for 0 ≤ a' < MN−1, together with P−1(MN−1) = MN−1. (This is just a consequence of the fact that the inverse of an N×M transpose is an M×N transpose, although it is also easy to show explicitly that P−1 composed with P gives the identity.) As proved by Cate & Twigg (1977), the number of fixed points (cycles of length 1) of the permutation is precisely 1 + gcd(N−1, M−1), where gcd is the greatest common divisor. For example, with N = M the number of fixed points is simply N (the diagonal of the matrix). If N−1 and M−1 are coprime, on the other hand, the only two fixed points are the upper-left and lower-right corners of the matrix. The number of cycles of any length k>1 is given by (Cate & Twigg, 1977): (1/k) Σd|k μ(d) gcd(N^(k/d) − 1, MN − 1), where μ is the Möbius function and the sum is over the divisors d of k. Furthermore, the cycle containing a=1 (i.e. the second element of the first row of the matrix) is always a cycle of maximum length L, and the lengths k of all other cycles must be divisors of L (Cate & Twigg, 1977). For a given cycle C, every element x has the same greatest common divisor d = gcd(x, MN−1). Let s be the smallest element of the cycle, and d = gcd(s, MN−1). From the definition of the permutation P above, every other element x of the cycle is obtained by repeatedly multiplying s by N modulo MN−1, and therefore every other element is divisible by d. But, since N and MN−1 are coprime, x cannot be divisible by any factor of MN−1 larger than d, and hence d = gcd(x, MN−1). This theorem is useful in searching for cycles of the permutation, since an efficient search can look only at multiples of divisors of MN−1 (Brenner, 1973). Laflin & Brebner (1970) pointed out that the cycles often come in pairs, which is exploited by several algorithms that permute pairs of cycles at a time. In particular, let s be the smallest element of some cycle C of length k. It follows that MN−1−s is also an element of a cycle of length k (possibly the same cycle). The length k of the cycle containing s is the smallest k > 0 such that N^k s ≡ s (mod MN−1). Clearly, this is the same as the smallest k > 0 such that N^k (MN−1−s) ≡ MN−1−s (mod MN−1), since we are just multiplying both sides by −1, and MN−1−s ≡ −s (mod MN−1).
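These formulas are easy to check numerically. The short Python sketch below is an illustration written for the 2×4 example above, not code from the cited papers, and the function names are ad hoc; it builds P for given N and M, decomposes it into cycles, and confirms both the cycle structure of the example and the Cate & Twigg fixed-point count 1 + gcd(N−1, M−1).

from math import gcd

def transpose_permutation(N, M):
    """P[a] = destination address of a under row-major N x M -> M x N
    transposition: P(a) = N*a mod (MN - 1), with address MN-1 fixed."""
    L = M * N - 1
    return [a if a == L else (N * a) % L for a in range(M * N)]

def cycles(perm):
    """Decompose a permutation (given as a list) into its cycles."""
    seen, result = [False] * len(perm), []
    for start in range(len(perm)):
        if not seen[start]:
            cycle, a = [], start
            while not seen[a]:
                seen[a] = True
                cycle.append(a)
                a = perm[a]
            result.append(tuple(cycle))
    return result

N, M = 2, 4
cs = cycles(transpose_permutation(N, M))
print(cs)  # [(0,), (1, 2, 4), (3, 6, 5), (7,)], matching the example

# Fixed-point count agrees with Cate & Twigg: 1 + gcd(N-1, M-1)
assert sum(len(c) == 1 for c in cs) == 1 + gcd(N - 1, M - 1)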
Square matrices For a square N×N matrix An,m = A(n,m), in-place transposition is easy because all of the cycles have length 1 (the diagonals An,n) or length 2 (the upper triangle is swapped with the lower triangle). Pseudocode to accomplish this (assuming zero-based array indices) is:

for n = 0 to N - 2
  for m = n + 1 to N - 1
    swap A(n,m) with A(m,n)

This type of implementation, while simple, can exhibit poor performance due to poor cache-line utilization, especially when N is a power of two (due to cache-line conflicts in a CPU cache with limited associativity). The reason for this is that, as m is incremented in the inner loop, the memory address corresponding to A(n,m) or A(m,n) jumps discontiguously by N in memory (depending on whether the array is in column-major or row-major format, respectively). That is, the algorithm does not exploit locality of reference. One solution to improve the cache utilization is to "block" the algorithm to operate on several numbers at once, in blocks given by the cache-line size; unfortunately, this means that the algorithm depends on the size of the cache line (it is "cache-aware"), and on a modern computer with multiple levels of cache it requires multiple levels of machine-dependent blocking. Instead, it has been suggested (Frigo et al., 1999) that better performance can be obtained by a recursive algorithm: divide the matrix into four submatrices of roughly equal size, transposing the two submatrices along the diagonal recursively and transposing and swapping the two submatrices above and below the diagonal. (When N is sufficiently small, the simple algorithm above is used as a base case, as naively recursing all the way down to N=1 would have excessive function-call overhead.) This is a cache-oblivious algorithm, in the sense that it can exploit the cache line without the cache-line size being an explicit parameter. Non-square matrices: Following the cycles For non-square matrices, the algorithms are more complex. Many of the algorithms prior to 1980 could be described as "follow-the-cycles" algorithms. That is, they loop over the cycles, moving the data from one location to the next in the cycle. In pseudocode form:

for each length>1 cycle C of the permutation
  pick a starting address s in C
  let D = data at s
  let x = predecessor of s in the cycle
  while x ≠ s
    move data from x to successor of x
    let x = predecessor of x
  move data from D to successor of s

The differences between the algorithms lie mainly in how they locate the cycles, how they find the starting addresses in each cycle, and how they ensure that each cycle is moved exactly once. Typically, as discussed above, the cycles are moved in pairs, since s and MN−1−s are in cycles of the same length (possibly the same cycle). Sometimes, a small scratch array, typically of length M+N (e.g. Brenner, 1973; Cate & Twigg, 1977) is used to keep track of a subset of locations in the array that have been visited, to accelerate the algorithm. In order to determine whether a given cycle has been moved already, the simplest scheme would be to use O(MN) auxiliary storage, one bit per element, to indicate whether a given element has been moved. To use only O(M+N) or even O(log MN) auxiliary storage, more-complex algorithms are required, and the known algorithms have a worst-case linearithmic computational cost of O(MN log MN) at best, as first proved by Knuth (Fich et al., 1995; Gustavson & Swirszcz, 2007). Such algorithms are designed to move each data element exactly once.
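As a concrete rendering of the follow-the-cycles pseudocode, the Python sketch below transposes a flat, row-major N×M array in place. It uses the simplest bookkeeping scheme mentioned above (one visited bit per element, i.e. O(MN) auxiliary bits) rather than the O(M+N) scratch-array techniques of Brenner (1973) or Cate & Twigg (1977); the function name and structure are illustrative assumptions, not code from those papers. It also walks each cycle forward, carrying the displaced value along, which is equivalent to the predecessor-based walk in the pseudocode.

def transpose_in_place(A, N, M):
    """Transpose an N x M row-major matrix, stored in the flat list A,
    into M x N row-major order by following the cycles of
    P(a) = N*a mod (MN - 1).  Bookkeeping is the simple one-bit-per-
    element scheme described above (O(MN) auxiliary bits)."""
    L = M * N - 1
    visited = [False] * len(A)
    for start in range(1, L):          # addresses 0 and MN-1 never move
        if visited[start]:
            continue                   # this cycle was already permuted
        a, value = start, A[start]
        while True:
            visited[a] = True
            dest = (N * a) % L         # where the carried value belongs
            A[dest], value = value, A[dest]
            a = dest
            if a == start:
                break

A = [11, 12, 13, 14, 21, 22, 23, 24]   # the 2x4 example above
transpose_in_place(A, 2, 4)
print(A)                               # [11, 21, 12, 22, 13, 23, 14, 24]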
However, they also involve a considerable amount of arithmetic to compute the cycles, and require heavily non-consecutive memory accesses since the adjacent elements of the cycles differ by multiplicative factors of N, as discussed above. Improving memory locality at the cost of greater total data movement Several algorithms have been designed to achieve greater memory locality at the cost of greater data movement, as well as slightly greater storage requirements. That is, they may move each data element more than once, but they involve more consecutive memory access (greater spatial locality), which can improve performance on modern CPUs that rely on caches, as well as on SIMD architectures optimized for processing consecutive data blocks. The oldest context in which the spatial locality of transposition seems to have been studied is for out-of-core operation (by Alltop, 1975), where the matrix is too large to fit into main memory ("core"). For example, if d = gcd(N,M) is not small, one can perform the transposition using a small amount (NM/d) of additional storage, with at most three passes over the array (Alltop, 1975; Dow, 1995). Two of the passes involve a sequence of separate, small transpositions (which can be performed efficiently out of place using a small buffer) and one involves an in-place d×d square transposition of blocks (which is efficient since the blocks being moved are large and consecutive, and the cycles are of length at most 2). This is further simplified if N is a multiple of M (or vice versa), since only one of the two out-of-place passes is required. Another algorithm for non-coprime dimensions, involving multiple subsidiary transpositions, was described by Catanzaro et al. (2014). For the case where |N − M| is small, Dow (1995) describes another algorithm requiring |N − M|·min(N,M) additional storage, involving a min(N,M)×min(N,M) square transpose preceded or followed by a small out-of-place transpose. Frigo & Johnson (2005) describe the adaptation of these algorithms to use cache-oblivious techniques for general-purpose CPUs relying on cache lines to exploit spatial locality. Work on out-of-core matrix transposition, where the matrix does not fit in main memory and must be stored largely on a hard disk, has focused largely on the N = M square-matrix case, with some exceptions (e.g. Alltop, 1975). Reviews of out-of-core algorithms, especially as applied to parallel computing, can be found in e.g. Suh & Prasanna (2002) and Krishnamoorthy et al. (2004). References P. F. Windley, "Transposing matrices in a digital computer," Computer Journal 2, p. 47-48 (1959). G. Pall and E. Seiden, "A problem in Abelian Groups, with application to the transposition of a matrix on an electronic computer," Math. Comp. 14, p. 189-192 (1960). J. Boothroyd, "Algorithm 302: Transpose vector stored array," Communications of the ACM 10 (5), p. 292-293 (1967). Susan Laflin and M. A. Brebner, "Algorithm 380: in-situ transposition of a rectangular matrix," Communications of the ACM 13 (5), p. 324-326 (1970). Source code. Norman Brenner, "Algorithm 467: matrix transposition in place," Communications of the ACM 16 (11), p. 692-694 (1973). Source code. W. O. Alltop, "A computer algorithm for transposing nonsquare matrices," IEEE Trans. Comput. 24 (10), p. 1038-1040 (1975). Esko G. Cate and David W. Twigg, "Algorithm 513: Analysis of In-Situ Transposition," ACM Transactions on Mathematical Software 3 (1), p. 104-110 (1977). Source code.
Bryan Catanzaro, Alexander Keller, and Michael Garland, "A decomposition for in-place matrix transposition," Proceedings of the 19th ACM SIGPLAN symposium on Principles and practice of parallel programming (PPoPP '14), pp. 193–206 (2014). Murray Dow, "Transposing a matrix on a vector computer," Parallel Computing 21 (12), p. 1997-2005 (1995). Donald E. Knuth, The Art of Computer Programming Volume 1: Fundamental Algorithms, third edition, section 1.3.3 exercise 12 (Addison-Wesley: New York, 1997). M. Frigo, C. E. Leiserson, H. Prokop, and S. Ramachandran, "Cache-oblivious algorithms," in Proceedings of the 40th IEEE Symposium on Foundations of Computer Science (FOCS 99), p. 285-297 (1999). J. Suh and V. K. Prasanna, "An efficient algorithm for out-of-core matrix transposition," IEEE Trans. Computers 51 (4), p. 420-438 (2002). S. Krishnamoorthy, G. Baumgartner, D. Cociorva, C.-C. Lam, and P. Sadayappan, "Efficient parallel out-of-core matrix transposition," International Journal of High Performance Computing and Networking 2 (2-4), p. 110-119 (2004). M. Frigo and S. G. Johnson, "The Design and Implementation of FFTW3," Proceedings of the IEEE 93 (2), 216–231 (2005). Source code of the FFTW library, which includes optimized serial and parallel square and non-square transposes, in addition to FFTs. Faith E. Fich, J. Ian Munro, and Patricio V. Poblete, "Permuting in place," SIAM Journal on Computing 24 (2), p. 266-278 (1995). Fred G. Gustavson and Tadeusz Swirszcz, "In-place transposition of rectangular matrices," Lecture Notes in Computer Science 4699, p. 560-569 (2007), from the Proceedings of the 2006 Workshop on State-of-the-Art [sic] in Scientific and Parallel Computing (PARA 2006) (Umeå, Sweden, June 2006). External links Source code OFFT - recursive block in-place transpose of square matrices, in Fortran Jason Stratos Papadopoulos, blocked in-place transpose of square matrices, in C, sci.math.num-analysis newsgroup (April 7, 1998). See "Source code" links in the references section above, for additional code to perform in-place transposes of both square and non-square matrices. libmarshal Blocked in-place transpose of rectangular matrices for the GPUs. Numerical linear algebra Permutations Articles with example pseudocode
In-place matrix transposition
Mathematics
4,028
18,194,922
https://en.wikipedia.org/wiki/HD%20161988
HD 161988, also known as HR 6635, is a solitary, orange hued star located in the southern circumpolar constellation Apus. It has an apparent magnitude of 6.07, allowing it to be faintly visible to the naked eye. Parallax measurements place it at a distance of 621 light years, and it is currently receding with a heliocentric radial velocity of . The object has a stellar classification of K2 III, indicating that it is a red giant. Gaia Data Release 3 models place it on the red giant branch. At present it has 3.05 times the mass of the Sun and an enlarged radius of . It shines at 185 times the luminosity of the Sun from its photosphere at an effective temperature of . HD 161988 has an iron abundance 74% that of the Sun, making it slightly metal deficient. Like most giants, it spins modestly with a projected rotational velocity of . HD 161988 has a 14th magnitude optical companion located away along a position angle of 122°. References External links Image HD 161988 Apus 161988 K-type giants 6635 087926 CD-76 00919 Double stars Apodis, 63
HD 161988
Astronomy
259
45,711,405
https://en.wikipedia.org/wiki/Symbolism%20of%20domes
The symbolic meaning of the dome has developed over millennia. Although the precise origins are unknown, a mortuary tradition of domes existed across the ancient world, as well as a symbolic association with the sky. Both of these traditions may have a common root in the use of the domed hut, a shape which was associated with the heavens and translated into tombs. The mortuary tradition has been expressed in domed mausolea, martyria, and baptisteries. The celestial symbolism was adopted by rulers in the Middle East to emphasize their divine legitimacy and was inherited by later civilizations down to the present day as a general symbol of governmental authority. Origins The meaning of the dome has been extensively analyzed by architectural historians. According to Nicola Camerlenghi, it may not be possible to arrive at a single "fixed meaning and universal significance" for domes across all building types and locations throughout history, since the shape, function, and context for individual buildings were determined locally, even if inspired by distant predecessors, and meaning could change over time. Mortuary tradition According to E. Baldwin Smith, from the late Stone Age the dome-shaped tomb was used as a reproduction of the ancestral, god-given shelter made permanent as a venerated home of the dead. The instinctive desire to do this resulted in widespread domical mortuary traditions across the ancient world, from the stupas of India to the tholos tombs of Iberia. Michele Melaragno notes that the Scythians built such domed tombs, as did some Germanic tribes in a paraboloid shape. Per Smith, by Hellenistic and Roman times, the domical tholos had become the customary cemetery symbol. Lukas Nickel writes that the conception of a round heaven over a square earth may have contributed to the Han Chinese' rapid adoption in the first century AD of square base cloister vault chambers in their tomb architecture. Celestial tradition Smith writes that in the process of transforming the hut shape from its original pliable materials into more difficult stone construction, the dome had also become associated with celestial and cosmic significance, as evident from decoration such as stars and celestial chariots on the ceilings of domed tombs. This cosmological thinking was not limited to domed ceilings, being part of a symbolic association between any house, tomb, or sanctuary and the universe as a whole, but it popularized the use of the domical shape. Michele Melaragno writes that the nomadic tribes of central Asia are the origin of a symbolic tradition of round domed-tents being associated with the sky and heavens that eventually spread to the Middle East and the Mediterranean. Rudolf Wittkower writes that a "cosmic interpretation of the dome remained common well into the eighteenth century." Divine ruler Herbert Howe writes that throughout the Middle East domes were symbolic of "the tent of the ruler, and especially of the god who dwells in the tent of the heavens." Passages in the Old Testament and intertestamental literature document this, such as Psalms 123:1, Isaiah 40:22, I Kings 8:30, Isaiah 66:1, Psalms 19:4, and Job 22:14. Theresa Grupico states that domes and tent-canopies were also associated with the heavens in Ancient Persia and the Hellenistic-Roman world. A dome over a square base reflected the geometric symbolism of those shapes. The circle represented perfection, eternity, and the heavens. The square represented the earth. An octagon was intermediate between the two. 
According to Michael Walter, a tradition of the "golden dome" identifying the ruler with the cosmos, sun, and astrological values originated in Persia and spread to later Roman and Turkic courts. Michele Melaragno writes that Persian kings used domed tents in their official audiences to symbolize their divinity, and this practice was adopted by Alexander the Great. According to Smith, the distinct symbolism of the heavenly or cosmic tent stemming from the royal audience tents of Achaemenid and Indian rulers was adopted by Roman rulers in imitation of Alexander, becoming the imperial baldachin. This probably began with Nero, whose Domus Aurea, meaning "Golden House", also made the dome a feature of Roman palace architecture. Smith notes that one way the Romans depicted the celestial tent in architecture was as a corrugated or gored dome. Melaragno writes that the allegory of Alexander the Great's domical tent in Roman imperial architecture coincided with the "divinification" of Roman emperors and served as a symbol of this. According to Nicholas Temple, Nero's octagonal domed room in his Domus Aurea was an early example of an imperial reception hall, the symbolism of which "signaled an elevation of the status of the emperor as living deity, which in the case of Nero related specifically to his incarnation as Helios and the Persian Mithra." The domical octagonal hall in the Flavian Domus Augustana is another example of the octagon's imperial symbolism in antiquity, which may have been related to hero ruler-ship, mediation between the terrestrial and celestial, or the divine harmony of the octave. Colum Hourihane writes that the semi-domed apse became a symbol of Roman imperial authority under Domitian and depictions into the Byzantine period used overhead domes or semidomes to identify emperors. Karl Swoboda writes that even by the time of Diocletian, the dome probably symbolized sovereignty over the whole world. Nicholas Temple states that Roman imperial reception halls or throne rooms were often domed with circular or octagonal plans and "functioned as a ceremonial space between the emperor, his court and the gods", becoming a common feature of imperial palaces from the time of Constantine onwards. Christianity E. Baldwin Smith writes that, by the Christian era, "cosmic imagery had come to transcend the mortuary, divine and royal symbolism already associated with the dome" but the Christian use of domes acknowledged earlier symbolic associations. Thomas Mathews writes that Christianity's rejection of astrology was reflected in the omission of signs of the zodiac imagery from their dome decoration. According to Gillian MacKie, early Christian domes were often decorated at the base with imagery of the Four Evangelists, symbolizing "the idea that the microcosmic vision of heaven was supported by the word of God as revealed in the Gospels." According to Susan Balderstone, domed centralized plans, whether octagonal, circular, or tetraconch, were "associated with the influence of Arianism in the fourth century and with the Monophysites in the fifth century." Rudolf Wittkower writes that many centralized domed churches and sanctuaries dedicated to the Virgin Mary were related to the martyrium over her tomb, her assumption into heaven, and her status as Queen of Heaven. 
Robert Stalley writes that mausolea, martyria, and baptisteries shared similar forms in the Roman architectural tradition as domed centralized plans due to representing the linked ideas of "death, burial, resurrection, and salvation". Martyria Smith writes that the dual sepulchral and heavenly symbolism was adopted by early Christians in both the use of domes in architecture and in the ciborium, a domical canopy like the baldachin used as a ritual covering for relics or the church altar. The traditional mortuary symbolism led the dome to be used in Christian central-type martyria in the Syrian area, the growing popularity of which spread the form. The spread and popularity of the cult of relics also transformed the domed central-type martyria into the domed churches of mainstream Christianity. According to Nicholas Temple, the use of centralized buildings for the burials of heroes was common by the time the Anastasis Rotunda was built in Jerusalem, but the use of centralized domed buildings to symbolize resurrection was a Christian innovation. Richard Krautheimer notes that the octagonal pattern of Roman mausolea corresponded to the Christian idea of the number eight symbolizing spiritual regeneration. Baptisteries In Italy in the 4th century, baptisteries began to be built like domed mausolea and martyria, which spread in the 5th century. Smith writes that this reinforced the theological emphasis on baptism as a re-experience of the death and resurrection of Jesus Christ. Krautheimer writes that "baptism is the death of the old Adam and the resurrection of the new man; eight is the symbolic number of regeneration, salvation, and resurrection, as the world started the eighth day after creation began, and Christ rose from the dead on the eighth day of the Passion." According to , octagonal baptisteries originated in Milan, Rome, and Ravenna and were typical for the western empire, but rare in the eastern empire. Theresa Grupico states that the octagon, which is transitional between the circle and the square, came to represent Jesus' resurrection in early Christianity and was used in the ground plans of martyria and baptisteries for that reason. The domes themselves were sometimes octagonal, rather than circular. Nicholas Temple proposes the imperial reception hall as an additional source of influence on baptisteries, conveying the idea of reception or redemptive passage to salvation. Iconography of assembled figures and the throne of Christ would also relate to this. Throne halls Michele Melaragno writes that the concept of "Christ the King" was the Christian counterpoint to the Roman tradition of emperor deification and so absorbed the dome symbolism associated with it. E. Baldwin Smith writes that "[i]n the West during the Carolingian period the churchmen and rulers revived, or took over from the Byzantine East, the use of cupolas as a mark of royal and divine presence." Like the throne room of the Eastern Roman emperor, or Chrysotriklinos, Charlemagne's throne in the Palatine Chapel at Aachen was located in a domed octagonal space. In the words of Allan George Doig, the throne at Aachen was located in "an intermediary place between earth and heaven" on the gallery level, directly across from an image of Christ's throne on the dome. According to Jodi Magness, the dome's image of Christ on a throne referred to a passage about the apocalypse from the Book of Revelation. 
According to Herbert Schutz, the symbolism of the octagon at Aachen related to the emperor's role as God's representative on Earth in achieving a universal "Imperium Christianum" and the geometry of objects and architecture acted as a "wordless text" to suggest ideas, such as the "renovatio imperii". Winand Klassen writes that the domed space symbolized the dual secular and divine nature of the restored empire. Churches Middle Ages Literary evidence exists that the idea of the cosmic temple had been applied to the Christian basilica by the end of the 4th century, in the form of a speech by Eusebius on a church in Tyre. However, it is only in the mid 6th century that the earliest literary evidence of a cosmological interpretation of a domed church building exists, in a hymn composed for the cathedral church of Edessa. Kathleen E. McVey traces this to a blending by Jacob of Serugh of the two major but contradictory schools of biblical exegesis at the time: the building-as-microcosm tradition of the Antioch school combined with the Alexandrian view of the cosmos and firmament as composed of spheres and hemispheres, which was rejected by the Antioch school. Gold was used as the color of Heaven, and Charles Stewart notes that the emphasis on light from windows beneath the domes of Justinian's imperial commissions corresponds to the Neo-Platonist idea of light as a symbol of wisdom. Andrzej Piotrowski writes that Byzantine churches after Justinian's Hagia Sophia often had gold-covered domes with a ring of windows and that gold, as "the most precious metal and the paradigm of purity, was a sign of light and divinity in the writings of St. Basil and Pseudo-Dionysius. It 'does not rust, decompose, or wear and can be beaten to the fineness of air. Gold was used to invoke the transcendental nature of the Incarnate Christ.'" Beginning in the late eighth century, portraits of Christ began to replace gold crosses at the centers of church domes, which Charles Stewart suggests may have been an over-correction in favor of images after the periods of Iconoclasm in the eighth and ninth centuries. One of the first was on the nave dome of Hagia Sophia in Thessaloniki, and this eventually developed into the bust image known as the Pantokrator. Otto Demus writes that Middle Byzantine churches were decorated in a systematic manner and can be seen as having three zones of decoration, with the holiest at the top. This uppermost zone contained the dome, drum and apse. The dome was reserved for the Pantokrator (meaning "ruler of all"), the drum usually contained images of angels or prophets, and the apse semi-dome usually depicted the Virgin Mary, typically holding the Christ Child and flanked by angels. Maria Evangelatou writes that Mary became the most commonly depicted figure in the apse semi-dome during the growth of her cult after the end of Iconoclasm in the ninth century for a number of reasons, including that her power as intercessor for the faithful lent itself to depictions on such a focal point for the congregation, and that her role in the Incarnation and role as a bridge between heaven and earth were reinforced by the location of the apse just below the dome. Anna Freze writes that the octagonal churches and octagon domed churches of the Byzantine Empire during the 9th to 11th centuries were closely linked to imperial commissions. 
The octagonal patterns were meant to convey "the idea of basilea as the sacral power and status of a Byzantine emperor" and the octagon, also being a symbol of regeneration, suggests an origin for this in the architectural restorations of Basil I following the iconoclast periods. Barry Bridgwood and Lindsay Lennie write that the domes of Eastern Orthodox church architecture followed the influence of Eastern Orthodoxy into central and eastern Europe and that the architectural differences with Western church architecture are analogous to the theological schism between the Western and Eastern Churches. Nicola Camerlenghi writes that domes were status symbols among the competing cities and communes of medieval Italy and this contributed to the boom in dome construction there beginning in the 11th century. Renaissance According to James Mitchell, in the Renaissance the dome began to be a symbol throughout Europe of the unity of religion. The astrological depiction of star constellations in the small dome above the altar of the Old Sacristy of the Basilica of San Lorenzo in Florence has been calculated by Patricia Fortini Brown to represent July 6, 1439 at about noon, the date of the closing session of the Council of Florence, in which the Articles of Union between Eastern and Western Christendom were signed by Latin and Greek delegates. Janna Israel writes that the adoption of Byzantine architectural forms in Venice at the end of the fifteenth century, such as low domes on pendentives, helped "in the construction of a more harmonious and seamless history between Venice and Byzantium, glossing over the divisions that had actually defined the relationship between the two powers for almost a thousand years." Nathaniel Curtis writes that the large domes of the Renaissance implied "ideas of power, dominance or centralization – as the capitol of a nation or of a state." He notes that Guadet said of St. Peter's, "it is less the roof of the greatest of all churches than the covering and sign of this centre to which converges the entire unity of Catholicism." According to Linda Koch, it has been recognized that Renaissance humanism inspired a general revival of antiquity and attempted to reconcile Christianity with Roman paganism. Irene Giustina writes that, in Renaissance Italy, the pointed dome was considered structurally safer but was also "against the rules of antique architecture". The pointed profile was considered barbarian and timburios were used as much to conceal the dome's shape externally as for structural reasons. Semicircular dome profiles were preferred. Sylvie Duvernoy writes that the 1450 architectural treatise written by Leon Battista Alberti was inspired by Vitruvius' ancient book but written from a humanist perspective and, unlike Vitruvius, advocated for central plans because the circle was "the favourite shape of nature". Of the nine church designs provided in the book, six were circular or polygonal centrally planned designs, with the polygonal shapes recommended to be drawn with equal angles so that they can be inscribed in a circle. Centrally planned churches in Europe spread from the middle of the fifteenth century onward as part of the Renaissance. Counter-Reformation The appearance of the oval in architecture has been extensively discussed by architectural historians. Although not an idea originating in the Renaissance, by the beginning of the 1500s the idea of the oval was "in the air", according to Santiago Huerta.
During the discussions of the Council of Trent (1545–1563), which began the Counter-Reformation of the Catholic Church in response to the Protestant Reformation, the circle and square were declared too pagan for Christian churches. Although the council did not make any direct pronouncements regarding architecture and, according to Hanno-Walter Kruft, the effects of those reforms actually adopted by the Council were varied, the one known written example of the Council's resolutions being applied to architecture, Cardinal Charles Borromeo's Instructiones fabricae et supellectilis ecclesiasticae of 1577, "condemns the circular form as heathenish." The publication was addressed only to Borromeo's own diocese of Milan, but gained currency throughout Europe. According to Michael Earls, the oval dome reconciled the "long axis, favored by the liturgy of the counter-reformation and the central plan so beloved by the spatial idealists." Victoria Hammond writes that, in addition to the oval form's inherent appeal, its use in domes may have been influenced by the European Age of Exploration, as well as by the theory of the elliptical orbits of planets. Sylvie Duvernoy notes that, while Johannes Kepler was too young to have influenced the initial popularity of oval churches, the 1609 publication of his discovery of the elliptical motion of planets could have contributed to their persistence. Sylvie Duvernoy writes that the use of a circular plan dome and an oval plan dome in the twin domed churches built between 1662 and 1679 at the northern entrance to the city of Rome, Santa Maria dei Miracoli and Santa Maria in Montesanto, indicates that the two forms were then considered symbolically equivalent. Michał Kurzej argues that the domed transept likely "became a distinguishing feature of Roman Catholic Church buildings" in the 16th century and that imitation of Italian architecture outside of Italy at this time indicated partiality towards Roman Catholicism over Protestantism. Wolfgang Born writes that, according to Walter Tunk, "The Bavarian type of the bulbous dome is said to have originated from a fusion between a pointed spire and a dome." Hans Schindler states that "the onion spire carried the prestige of well-known pilgrimage churches and allowed a new church to indicate its kinship with them". Eastern Orthodoxy Piotr Krasny writes that the "five domes crowning traditional Ruthenian Orthodox churches were believed to symbolise the Five Patriarchs, who according to Orthodox ecclesiology wielded equal power in the Church. In the 17th century the five domes were replaced by one, symbolizing the pope's primacy which was acknowledged by the Uniat Church." Mieczyslaw Wallis writes: "In old Russian temples, one dome would symbolize Christ, three, the Trinity, five, Christ and the four evangelists, thirteen domes, Christ and the twelve apostles." Islam According to Oleg Grabar, the domes of the Islamic world, which rejected Christian-style iconography, continued the other traditions. Muslim royalty built palatial pleasure domes in continuation of the Roman and Persian imperial models, although many have not survived, and domed mausolea from Merv to India developed the form. Robert Hillenbrand notes that dome ceilings with solar or stellar decoration continued the symbolism of the dome of heaven, and the domed audience hall of the palace of Abu Muslim in Merv presented the ruler as cosmocrator.
According to Hillenbrand, understanding symbolic intent in Muslim architecture is made difficult by the lack of explicit evidence from literary sources, seemingly inconsistent associations between plan or elevation and symbolic meaning, and Islam's rejection of sculptural and figurative decoration. The lack of documentation and even tradition for such symbolism may indicate that the meaning was only ever intended for a small learned elite. Hillenbrand writes that the understanding of color symbolism in Islam has suffered from a lack of literary evidence, but green floral patterns have been proposed as representing fertility and blue tilework proposed as representing good luck or heaven. Anwaar Mohyuddin and Nasra Khan note that the color green, used on the dome of the Prophet's tomb, was associated with Muhammad's Quraysh tribe, was purportedly his favorite color, and has become a symbol of Islam itself. Royalty Jonathan M. Bloom states that the term qubbat al-khaḍrā' was used by medieval sources to describe features found in several early Muslim palaces and is conventionally translated to mean "green dome". The conventional explanation for the term is that early domes were made of timber covered with copper sheeting that gained a green patina over time. However, grammatical analysis of the Arabic term suggests that it had other possible meanings and he writes that it may have been originally intended to convey something like "dome of heaven". Bloom writes that mosques did not normally have domes until the 11th century, perhaps because of the existing association of domes with palaces and tombs. Grabar emphasizes that, in the early centuries of Islam, domes were closely associated with royalty. A dome built in front of the mihrab of a mosque, for example, was at least initially meant to emphasize the place of a prince during royal ceremonies. Over time such domes became primarily focal points for decoration or the direction of prayer. The use of domes in mausolea can likewise reflect royal patronage or be seen as representing the honor and prestige that domes symbolized, rather than having any specific funerary meaning. According to Andrew Peterson, the wide variety of dome forms in medieval Islam reflected dynastic, religious, and social differences as much as practical building considerations. E. Baldwin Smith states that the form of brick melon domes in the Near East with corrugations on the exterior may have been an extension of an earlier tradition of such domes in wood from the palace architecture of Alexandria. Smith also suggests that the "sculptured baldachins and cupolas" of Mughal architecture were an adoption from Ottoman architecture via traveling artists, craftsmen, and architects from the Ottoman court. Theology Doğan Kuban writes that even seemingly minor variations in shape, structure, and functional use had theoretical implications, and were the "result of complex and culturally significant developments in the Islamic world, where the dome and minaret became symbols of Islam." Camilla Edwards writes that "the dome, and its decorative elements are fundamental to Islamic belief" and are often found in three structures that can serve as places of worship: mosques, madrasas, and mausolea. Oleg Grabar characterizes forms in Islamic architecture as having relatively low levels of symbolism. While conceding this in a general sense, Yasser Tabbaa maintains that certain forms were initially very highly symbolic and only lost such associations over time.
The phenomenon of muqarnas domes, in particular, is an example. Tabbaa explains the development and spread of muqarnas domes throughout the Islamic world beginning in the early 11th century as expressing a theological idea of the universe propounded by the Ash'arites (a modification of the Atomism of Aristotle with Occasionalism), which rose to prominence in Baghdad at this time. Only later was the style used in a purely decorative manner. Theresa Grupico writes that the use of the octagon in the Dome of the Rock, imperial funerary architecture, or mosque architecture may be a borrowing from earlier Byzantine or Persian use or reflect the idea of Paradise having "eight gardens with eight doors". Rina Avner writes that the Dome of the Rock was designed to express the Muslim rejection of the Christian tenets of the divinity of Christ and the role of Mary as "God-bearer". Its octagonal shape likely references the Church of the Kathisma, an octagonal Christian shrine three miles away that was built around a rock said to have served as a seat for the Virgin Mary. The inner span of the Dome of the Rock is slightly wider than that of the Church of the Kathisma. The dome of the Dome of the Rock has been compared to that of the nearby domed Church of the Anastasis. A 10th century source writes that the Dome of the Rock "was meant to compete with and surpass the churches of Jerusalem in beauty, especially with respect to the overwhelming size of the dome of the Anastasis." Grupico writes that Ottoman mosques, such as the Mosque of Suleyman the Great in Istanbul, have been interpreted as "challenging" the Hagia Sophia or "inviting similarities" of message beyond the merely visual. The use of Koranic text to decorate the pendentives of domes in the Islamic world replaced the human depictions of Christian iconography, such as the Four Evangelists, but similarly represented the way to the Word of God. Government Early modern legislatures Thomas Markus writes that the dome became a symbol of democratic political power over the course of the eighteenth, nineteenth, and twentieth centuries. The Irish Parliament House in Dublin included an octagonal dome over a central chamber for the House of Commons. Edward McParland writes that the location of the space, especially relative to the barrel-vaulted House of Lords, which was off axis on the east side of the building, may have symbolized a political dominance by the House of Commons. Kendall Wallis writes that the decision to build the national capitol building of the United States with a large dome "took a form laden with symbolic sacred meaning and ascribed a radically secular meaning to it." The decorative use of coffers was meant to evoke a connection with the classical origins of democracy and republicanism. "It represented the legislative power of the republic", sanctified. The ideas of religious association and sky symbolism also persisted in their resonance with the providential overtones of America's sense of its vocation in the world and, more pronounced in the state capitols, in the stars and sky scenes depicted on the domes. Those state capitol domes built after the American Civil War that resembled the second national capitol dome referred symbolically to the Federal government and so to the idea of "the Union". Charles Goodsell suggests a link between the function of a capitol as the "headquarters" of government, the root word of "capitol" being caput or "head", and the physical resemblance of a capitol dome to a great head.
Dictatorship Richard Overy writes that both Hitler and Stalin planned, but never completed, enormous domed assembly halls as part of their efforts to establish global capital cities. Hitler's Volkshalle, or "People's Hall", was meant to have a dome 250 meters wide and hold 200,000 people. The Palace of the Soviets in Moscow was meant to be the tallest building in the world, rising above a domed congress hall 100 meters wide for 21,000 world socialist delegates. The foundations were begun for the Palace of the Soviets on the site of the demolished Cathedral of Christ the Saviour, but technical problems postponed the project and it was abandoned after Stalin's death in the 1950s. Overy states that these were meant to be monuments to dictatorship and utopian civilization that would last for ages. Modern legislatures According to Giovanni Rizzoni, although the dome traditionally represented absolute power, the modern glass dome of the German Reichstag building expresses both the sovereignty of the people, who as tourists are literally above the legislature while touring the dome, and the accessibility of parliamentary democracy, due to the transparency of the glass dome and the window it provides into the legislative chamber below. William Seale writes that the dome is an accepted architectural symbol across the world for democratic legislatures. Notes References Bibliography Dome Sacral architecture Dome
Symbolism of domes
Engineering
5,790
7,352,871
https://en.wikipedia.org/wiki/Cantlop%20Bridge
Cantlop Bridge is a single span cast-iron road bridge over the Cound Brook, located to the north of Cantlop in the parish of Berrington, Shropshire. It was constructed in 1818 to a design possibly by Thomas Telford, and was at least approved by him; it replaced an unsuccessful cast iron coach bridge constructed in 1812. The design of the bridge was innovative for the period, using a light-weight design of cast-iron lattice ribs to support the road deck in a single span, and appears to be a scaled-down version of a Thomas Telford bridge at Meole Brace, Shropshire. The bridge is the only surviving Telford-approved cast-iron bridge in Shropshire, and is a Grade II* listed building and scheduled monument. It originally carried the turnpike road from Shrewsbury to Acton Burnell. History and description Thomas Telford worked as the county surveyor of Shropshire between 1787 and 1834, and the bridge is reported to have once held a cast iron plate above the centre of the arch inscribed with "Thomas Telford Esqr - Engineer - 1818", which is apparently visible in historic photographs, but has not been in place since at least 1985. The bridge design incorporates dressed red and grey sandstone abutments with ashlar dressings; these are slightly curved and ramped, with chamfered ashlar quoins, string courses, and moulded cornices. The structural cast-iron consists of a single segmental span with four arched lattice ribs, braced by five transverse cast-iron members. The road deck is formed from cast-iron deck plates, tarmacked over, and now finished with gravel. The original parapets have at some point been replaced with painted cast-iron railings with dograils, dogbars and shaped end balusters. Present-day The bridge today remains as a monument only, being closed to vehicular traffic. It was bypassed by a more modern adjacent concrete bridge built in the 1970s. It is in the care of English Heritage and is freely accessible to pedestrians. A layby exists for visitors to park and there is an information board. See also Grade II* listed buildings in Shropshire Council (A–G) Listed buildings in Berrington, Shropshire Notes References Blackwall, A 1985. 'Historic Bridges of Shropshire', Shrewsbury: Shropshire Libraries Burton, A 1999. 'Thomas Telford', London: Aurum Press Sutherland, R J M 1997. 'Structural Iron, 1750–1850', Aldershot: Ashgate Bridges by Thomas Telford Bridges in Shropshire Structural engineering English Heritage sites in Shropshire Grade II* listed buildings in Shropshire
Cantlop Bridge
Engineering
538
14,747,754
https://en.wikipedia.org/wiki/List%20of%20states%20and%20union%20territories%20of%20India%20by%20vaccination%20coverage
This is a list of the states of India ranked in order of the percentage of children aged 12–23 months who received all recommended vaccines, including all required doses of the BCG vaccine, Hepatitis B vaccine, polio vaccine, DPT vaccine, and the MMR vaccine. This information was compiled from the National Family Health Survey rounds 4 and 5, published by the International Institute for Population Sciences. Overall vaccination coverage in the country increased from 62.0% in 2015-16 to 76.6% in 2019-21 (Urban: 63.9% to 75.5%; Rural: 61.3% to 77.0%). List Union Territory by vaccination coverage Notes References Vaccination coverage Vaccination
List of states and union territories of India by vaccination coverage
Biology
150
16,396,377
https://en.wikipedia.org/wiki/EXPEC%20Advanced%20Research%20Center
The Exploration and Petroleum Engineering Center - Advanced Research Center (EXPEC ARC) is located in Dhahran, Saudi Arabia. It is a research center that belongs to Saudi Aramco and is responsible for upstream oil and gas technology development. The center has over 250 scientists from various disciplines, spread across six technology teams and one laboratory division which tackle various aspects of oil and gas exploration, development, and production. These teams are: Geophysics Technology, Geology Technology, Reservoir Engineering Technology, Computational Modeling Technology, Production Technology, and Drilling Technology. Saudi Aramco
EXPEC Advanced Research Center
Chemistry
115
2,092,576
https://en.wikipedia.org/wiki/Dym%20equation
In mathematics, and in particular in the theory of solitons, the Dym equation (HD) is the third-order partial differential equation u_t = u^3 u_xxx. It is often written in the equivalent form v_t = (v^(−1/2))_xxx for some function v of one space variable and time. The Dym equation first appeared in Kruskal and is attributed to an unpublished paper by Harry Dym. The Dym equation represents a system in which dispersion and nonlinearity are coupled together. HD is a completely integrable nonlinear evolution equation that may be solved by means of the inverse scattering transform. It obeys an infinite number of conservation laws; it does not possess the Painlevé property. The Dym equation has strong links to the Korteweg–de Vries equation. C.S. Gardner, J.M. Greene, Kruskal and R.M. Miura applied the Dym equation to the solution of the corresponding problem for the Korteweg–de Vries equation. The Lax pair of the Harry Dym equation is associated with the Sturm–Liouville operator. The Liouville transformation transforms this operator isospectrally into the Schrödinger operator. Thus by the inverse Liouville transformation solutions of the Korteweg–de Vries equation are transformed into solutions of the Dym equation. An explicit solution of the Dym equation, valid in a finite interval, is found by an auto-Bäcklund transform. Notes References Solitons Exactly solvable models Integrable systems
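As a worked illustration, one family of explicit solutions can be checked directly: u(x, t) = (3√a (x + 4at + b))^(2/3) satisfies u_t = u^3 u_xxx for constants a > 0 and b, as direct differentiation shows. This particular parametrization is an assumption chosen here to make the check self-contained, not necessarily the exact formula produced by the auto-Bäcklund transform. A minimal Python/sympy sketch confirming it:

import sympy as sp

# Positive symbols keep the fractional power single-valued and real.
x, t, a, b = sp.symbols("x t a b", positive=True)

# Candidate travelling-wave solution (illustrative parametrization):
u = (3 * sp.sqrt(a) * (x + 4 * a * t + b)) ** sp.Rational(2, 3)

# Residual of the Dym equation u_t = u^3 * u_xxx; it should vanish.
residual = sp.diff(u, t) - u**3 * sp.diff(u, x, 3)
print(sp.simplify(residual))  # -> 0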
Dym equation
Physics
305
37,983,880
https://en.wikipedia.org/wiki/European%20Polymer%20Journal
European Polymer Journal is a monthly peer-reviewed scientific journal, established in 1965 and published by Elsevier. The journal publishes both original research and review papers on the physics and chemistry of polymers. In 2006, it launched its polymer nanotechnology section. Richard Hoogenboom (Ghent University) is the editor-in-chief. References Chemistry journals Materials science journals Elsevier academic journals
European Polymer Journal
Materials_science,Engineering
83
37,723,436
https://en.wikipedia.org/wiki/University%20of%20Brighton%20Design%20Archives
The University of Brighton Design Archives centres on British and global design organisations of the twentieth century. It is located within the University of Brighton Grand Parade campus in the heart of Brighton and is an international research resource. It holds many archival collections generated by design institutions and individual designers. History The University of Brighton Design Archives has its origins in the deposit of the archive of the Design Council (formerly the Council of Industrial Design) in 1994. The organisation was restructured on the recommendation of the 1993–94 "Future Design Council" report (also known as the Sorrell Report) and consequently its records needed to be relocated. Various repositories were considered and the University of Brighton was selected since it offered the newly established Design History Research Centre (DHRC) led by Professor Jonathan Woodham and Dr Patrick Maguire, who provided research expertise in the area of design and the state. In 1996 an award from the Getty Foundation Archive Program supported not only the acquisition of the Design Council Archive but also the appointment of a curator and a research officer. Collections development The Design Archives has developed its collections since the 1990s and each archive has been acquired according to a specific collecting policy: to document aspects of twentieth-century design history with a strategic focus on the connections between them. Acquisitions have included the archives of James Gardner and FHK Henrion, who both worked with the Council of Industrial Design (later the Design Council) in the early parts of their careers. Subsequent additions of individual designers' archives include those of Alison Settle, a journalist, editor of British Vogue, and Council member in the 1950s, whose archive had been deposited in the university's library; Bernard Schottlander, Paul Clark, and Barbara Jones, designers from different periods each having connections with the Design Council's work; the communication designers HA Rothholz, Edwin Embleton and Anthony Froshaug; the architects Joseph Emberton and Theo Crosby; and the display and set designer Natasha Kroll. The archive also holds a collection of papers reflecting all aspects of the work of engineer, designer and former senior project officer at the Design Council, WH Mayall. The acquisition of the archive of the International Council of Graphic Design Associations (ICOGRADA) in 2002-3 marked the development of an international perspective for the collection. ICOGRADA is the professional world body for graphic design and visual communication, founded in London in 1963. The ICOGRADA archive comprises a significant body of documentation relating to governance, administration and educational activities, an important collection of 1500 posters from around the world, and library holdings. In 2007 the International Council of Societies of Industrial Design (ICSID) archive came to the University of Brighton and further extended the international reach of the Design Archives. Online access Since 2005 the Design Archives has contributed catalogue data to the Archives Hub, a gateway to thousands of archives across more than 200 UK institutions. Records are added regularly as part of the Design Archives' ongoing cataloguing programme. Increasingly, digital objects are being added to these records.
Online access to the Design Archives' visual resources has been available in digital form since 1997–1998, beginning with the JISC Image Digitisation Initiative (JIDI), which funded the digitisation of parts of the Design Council Photographic Library, including the 1951 Festival of Britain material. In 1999, the Archives participated in Scran (Scottish Cultural Resources Access Network), contributing images of exhibits at the 1947 Enterprise Scotland Exhibition. A further 3,000 images were added to the Visual Arts Data Service for free public access in 2001. In 2000, the Design Archives developed a more structured e-learning resource, 'Designing Britain 1945–1975: The Visual Experience of Post-War Society'. A £132,000 grant from Jisc supported the creation of seven modules, each containing around 100 visual records and contextual texts by subject specialists. The Design Archives was one of eleven image collections to take part in the JISC-funded 'Digital Images for Education' project, receiving £43,000 in funding, and delivering over 2,300 images from across the wealth of its holdings to this subscription-based service, launched in 2011. The key emphasis of this resource is on film and digital images that capture local history, UK history, and world history during the preceding 25 years. The Archives was also among nine Higher Education partners contributing data and expertise to the "Look Here!" project, funded and led by the Visual Arts Data Service. In 2015 the Design Archives won funding from the Arts and Humanities Research Council for the project Exploring British Design, a prototype web portal to connect information about British design held in different museums, archives and libraries. Recent recognition In 2009 the Design Archives team was expanded as a result of further investment by the University of Brighton. In recognition of its national and international role in higher education, the Design Archives received a 3-year Higher Education Funding Council for England (HEFCE) grant of £180,000 in 2010. The award followed a review of university museums and galleries, led by Sir Muir Russell, which resulted in HEFCE widening its definition of university collections eligible for support. In 2017, the Design Archives successfully reapplied for funding from this competitive source for the next four years. The Design Archives now forms part of a group of 33 university museums, galleries and collections to receive this direct support. The industry publication Design Week named the Design Archives as one of the five key design research collections in the UK. In October 2018, it was announced that the University of Brighton Design Archives had been awarded the prestigious Sir Misha Black Award for Innovation in Design Education. In his oration, Professor Sir Christopher Frayling said the award was "primarily, for their pioneering work since the early 2000s in the areas of access and digitisation — engaging their various publics, specialist and non-specialist — in both processes and content, and putting the Brighton Design Archives at the forefront of debate about the very nature and significance of archival work today." Exhibitions The Design Archives initiates exhibitions and contributes to exhibitions at other institutions. Some examples include: 1999 – Ministry of Taste exhibition at Cornerhouse, Manchester, then firstsite, Colchester and Virgin Atlantic, Heathrow. The exhibition showed colour product photography from the Design Council Archive.
2000 – Artist in Residence Project funded by the Arts Council, South East Arts, Brighton & Hove Council, University of Brighton. Nine-month residency by artists Marysia Lewandowska and Neil Cummings, who produced research and student projects leading to the exhibition 'Documents: Adrift in Taste', University of Brighton Gallery, 2–22 December 2000. 2004 – The Design Archives co-ordinated research in Britain for the exhibition and publication The Ecstasy of Things, for the Fotomuseum Winterthur, Switzerland. 2004 – Airworld – Design and Architecture for Air Travel, 7 May – 14 November, Vitra Design Museum, Weil am Rhein, then on tour to Vitra Museum, Berlin and other venues. The Design Archives loaned material from the FHK Henrion Archive. 2006 – Brighton Photo Biennial: collaboration between the Design Archives and Gabriel Kuri, as selected for the Design Centre, London. 2007 – Indoors and Out: The Sculpture and Design of Bernard Schottlander exhibition at the University of Brighton, and at the Henry Moore Institute, Leeds. 2008 – Designs for Solidarity: Photography and the Cuban Political Poster 1965–1975, an exhibition for the Brighton Photo Biennial. 2010 – The House of Vernacular, Brighton Photo Biennial, 2 October – 28 November. Images from the Design Archives were selected by Martin Parr for inclusion in this exhibition. 2011 – Festival of Britain 50th Anniversary Exhibition, Royal Festival Hall, London. 2013 – Black Eyes & Lemonade: Curating Popular Art, 9 March – 1 September 2013, Whitechapel Gallery. A collaboration with the National Museum of Folklore and the Whitechapel Gallery, the exhibition included items from the Barbara Jones Archive, the F H K Henrion Archive, the James Gardner Archive, and the Design Council Archive. 2015 – History Is Now: 7 Artists Take On Britain, 10 February – 26 April 2015, Hayward Gallery. 2015 – Joseph Emberton: the architecture of display, 18 February – 17 May 2015, Pallant House. Publications Woodham, Jonathan M (1995). 'Redesigning a Chapter in the History of British Design: The Design Council Archive at the University of Brighton.' Journal of Design History 8 (3): pp. 225–229. Maguire, P. J. and J. M. Woodham (1997). Design and Cultural Politics in post-war Britain: The "Britain can make it" exhibition of 1946. London; Washington: Leicester University Press. Moriarty, Catherine and Paul Bayley (2000). Ministry of Taste: Images from the Design Council Archive. Manchester: Cornerhouse Publications, 12 p. Essay published to accompany the exhibition of the same name at Cornerhouse Manchester, 1999. Moriarty, Catherine (2000). "A Back Room Service? The Council of Industrial Design Photographic Library 1945–1965." Journal of Design History 13 (1): pp. 39–57. Woodham, Jonathan M (2004). 'The Design Archive at Brighton: serendipity and strategy.' Art libraries journal 29 (3): pp. 15–21. Moriarty, Catherine (2005). "Design and Photography" in R. Lenman, ed., The Oxford Companion to the Photograph. Oxford University Press, p. 306. Whitworth, Lesley (2005). "Inscribing design on the nation: the creators of the British Council of Industrial Design." Business and Economic History Online 3: pp. 1–14. Moriarty, Catherine (2007). "Bernard Schottlander's industrial design as a system of appearances" in Indoors and Out: the sculpture and design of Bernard Schottlander. Leeds: Henry Moore Institute. Whitworth, Lesley (2007).
'The Housewives' Committee of the Council of Industrial Design: a brief episode of domestic reconnoitring', in Elizabeth Darling and Lesley Whitworth (eds), Women and the Making of Built Space in England, 1870–1950. Aldershot and Burlington: Ashgate, pp. 180–196. Whitworth, Lesley (2008). "The Design Archives at the University of Brighton: A resource for business historians." Business Archives: Sources and History 96 (November): pp. 69–82. Whitworth, Lesley (2009). Promoting product quality: the Co-op and the Council of Industrial Design. In: Black, Lawrence and Robertson, Nicole, eds. Consumerism and the Co-operative movement in modern British history: Taking stock. Manchester: Manchester University Press, pp. 174–196. Woodham, Jonathan; Lyon, Philippa, eds. (2009). Art and Design at Brighton 1859–2009: from Arts and Manufactures to the Creative and Cultural Industries. Brighton: University of Brighton. Breakell, Sue (2010). "Evolving archival interfaces and the University of Brighton Design Archives." Art Libraries Journal 35 (4): pp. 12–17. Breakell, Sue and Whitworth, Lesley (2013). "Émigré Designers in the University of Brighton Design Archives", Journal of Design History, first published online 4 March 2013, doi:10.1093/jdh/ept006. Moriarty, Catherine (2011). "From Archive to Retroscope: pushing forward resource integration." ISEA 17th International Symposium on Electronic Art, Istanbul, September 2011. Whitworth, Lesley (2012). Collective responsibility: the public and the (UK) Council of Industrial Design in the 1940s. In: Edquist, Harriet and Vaughan, Laurene, eds. The design collective: an approach to practice. Cambridge Scholars, pp. 164–181. References External links The Design Archives collections are represented on these Web resources: Brighton School of Art Archives at Archives Hub Design Archives, University of Brighton at Flickr Design Archives, University of Brighton at Visual Arts Data Service (VADS) These online resources were created by Design Archives staff to increase understanding of the Archives, its activities and collections: Designing Britain 1945–1975 Conserving the Archive Blog Finnish Design Project 2012 MacDonald 'Max' Gill – A Digital Resource 2011 Design history British design Archives in East Sussex 1994 establishments in England Design Council University of Brighton
University of Brighton Design Archives
Engineering
2,525
8,145,633
https://en.wikipedia.org/wiki/Motor%20soft%20starter
A motor soft starter is a device used with AC electrical motors to temporarily reduce the load and torque in the powertrain and the electric current surge of the motor during start-up. This reduces the mechanical stress on the motor and shaft, as well as the electrodynamic stresses on the attached power cables and electrical distribution network, extending the lifespan of the system. It can consist of mechanical or electrical devices, or a combination of both. Mechanical soft starters include clutches and several types of couplings using a fluid, magnetic forces, or steel shot to transmit torque, similar to other forms of torque limiter. Electrical soft starters can be any control system that reduces the torque by temporarily reducing the voltage or current input, or a device that temporarily alters how the motor is connected in the electric circuit. Operating principles Whenever the armature of an electric motor is moving, both motor action and generator action occur simultaneously; the electromagnetic force produced by generator action opposes the desired motor action and effectively creates a variable motor resistance which increases with motor speed. When a voltage is applied to the motor, this resistance dictates the current drawn by the motor. At rest, the resistance is relatively low, so the starting or inrush current can be high if the full line voltage is applied to the motor. Compared to DC motors, AC motors tend to have significantly higher stator resistance and correspondingly lower inrush current. Nevertheless, across-the-line starting of induction motors is accompanied by inrush currents up to 7–10 times higher than running current, and higher-efficiency motors can experience inrush currents 10–15 times running current. In addition, starting torque can be up to 3 times higher than running torque. The starting torque transient can create a sudden mechanical stress on the machine, which leads to a reduced service life. Moreover, the high inrush current stresses the power supply, which may lead to voltage dips. As a result, the lifespan of sensitive equipment may be reduced. Another common side effect, especially in residential installations, is voltage sag in the site's power supply created by the high inrush current, visible as flickering lights. A soft starter continuously controls the motor's voltage supply during the start-up phase. This way, the motor is adjusted to the machine's load behavior. Mechanical operating equipment is accelerated smoothly. This lengthens service life, improves operating behavior, and smooths work flows. Electrical soft starters can use solid-state devices to control the current flow and therefore the voltage applied to the motor. They can be connected in series with the line voltage applied to the motor, or can be connected inside the delta (Δ) loop of a delta-connected motor, controlling the voltage applied to each winding. Solid-state soft starters can control one or more phases of the voltage applied to the induction motor, with the best results achieved by three-phase control. Soft starters controlled via two phases have the disadvantage that the uncontrolled phase will always show some current unbalance with respect to the controlled phases. Typically, the voltage is controlled by reverse-parallel-connected silicon-controlled rectifiers (thyristors), but in some circumstances with three-phase control, the control elements can be a reverse-parallel-connected SCR and diode. Another way to limit motor starting current is a series reactor.
If an air core is used for the series reactor, a very efficient and reliable soft starter can be designed that is suitable for all types of three-phase motor (synchronous or asynchronous) ranging from 25 kW at 415 V to 30 MW at 11 kV. Using an air-core series-reactor soft starter is very common practice for applications such as pumps, compressors and fans. High-starting-torque applications do not usually use this method. Applications Soft starters can be set up to the requirements of the individual application. Compared to variable-frequency drives, soft starters require very few user adjustments. Some soft starters also include a "learning" process to automatically adapt the drive settings to the characteristics of a motor load, to reduce the power inrush requirement at the start. In pump applications, a soft starter can avoid pressure surges that could lead to water hammer. Conveyor belt systems can be smoothly started, avoiding jerk and stress on drive components. Fans or other systems with belt drives can be started slowly to avoid belt slipping as well as air pressure surges. Soft starters are used in electric R/C helicopters, and allow the rotor blades to spool up in a smooth, controlled manner rather than with a sudden surge. In all systems, a soft start limits the inrush current and so improves stability of the power supply and reduces transient voltage drops that may affect other loads. See also Adjustable-speed drive Braking chopper DC motor starter section of Electric motor Electronic speed control Korndorfer starter Motor controller Space Vector Modulation Thyristor drive Variable-frequency drive Variable-speed air compressor Vector control (motor) References Control engineering Electric motors Electromagnetic components Electric power systems components Power electronics
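The ramped-voltage behaviour described in the operating-principles section can be sketched numerically. The following Python snippet is an illustration only, not from the article: it assumes a purely linear voltage ramp and takes the near-standstill current as roughly proportional to applied voltage, using the 7x direct-on-line inrush figure quoted above; the ramp time and initial voltage are invented example values.

```python
# Minimal sketch of a linear-ramp soft start. Assumes locked-rotor
# current scales linearly with applied voltage (a first-order
# approximation). All numeric values are illustrative assumptions.

DOL_INRUSH_MULTIPLE = 7.0   # direct-on-line inrush, x rated current (from text)
RAMP_TIME_S = 10.0          # assumed ramp duration
INITIAL_VOLTAGE_PU = 0.3    # assumed starting voltage, per unit

def applied_voltage_pu(t):
    """Per-unit voltage during a linear ramp from the initial value to 1.0."""
    if t >= RAMP_TIME_S:
        return 1.0
    return INITIAL_VOLTAGE_PU + (1.0 - INITIAL_VOLTAGE_PU) * (t / RAMP_TIME_S)

def starting_current_multiple(t):
    """Approximate starting current (x rated) at time t, near standstill."""
    return DOL_INRUSH_MULTIPLE * applied_voltage_pu(t)

for t in (0.0, 5.0, 10.0):
    print(f"t={t:4.1f} s  V={applied_voltage_pu(t):.2f} pu  "
          f"I~{starting_current_multiple(t):.1f} x rated")
# At t=0 the inrush is limited to about 2.1x instead of 7x rated current.
```

Under these assumed values, the starter trades a brief reduction in starting torque for a roughly threefold reduction in initial inrush, which is the basic design compromise the article describes.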
Motor soft starter
Technology,Engineering
1,019
18,006,752
https://en.wikipedia.org/wiki/Comparison%20of%20HP%20graphing%20calculators
A graphing calculator is a class of hand-held calculator that is capable of plotting graphs and solving complex equations. While several companies manufacture graphing calculators, Hewlett-Packard is a major manufacturer. The following table compares general and technical information for Hewlett-Packard graphing calculators: See also Comparison of Texas Instruments graphing calculators Casio graphic calculators HP calculators List of Hewlett-Packard pocket calculators References HP calculators HP calculators Graphing calculators
Comparison of HP graphing calculators
Technology
120
28,568,276
https://en.wikipedia.org/wiki/Sipuleucel-T
Sipuleucel-T, sold under the brand name Provenge, developed by Dendreon Pharmaceuticals, LLC, is a cell-based cancer immunotherapy for prostate cancer (CaP). It is an autologous cellular immunotherapy. Medical uses Sipuleucel-T is indicated for the treatment of asymptomatic or minimally symptomatic metastatic castrate-resistant, hormone-refractory prostate cancer (HRPC). Other names for this stage are metastatic castrate-resistant (mCRPC) and androgen-independent (AI or AIPC). This stage leads to mCRPC with lymph node involvement and distal (distant) tumors; this is the lethal stage of CaP. The prostate cancer staging designation is T4,N1,M1c. Treatment method A course of treatment consists of three basic steps: The patient's white blood cells, primarily dendritic cells, a type of antigen-presenting cell (APC), are extracted in a leukapheresis procedure. The blood product is sent to a production facility and incubated with a fusion protein (PA2024) consisting of two parts: the antigen prostatic acid phosphatase (PAP), which is present in 95% of prostate cancer cells, and an immune signaling factor, granulocyte-macrophage colony-stimulating factor (GM-CSF), which helps the APCs to mature. The activated blood product (APC8015) is returned from the production facility to the infusion center and reinfused into the patient. Premedication with acetaminophen and an antihistamine is recommended to minimize side effects. Side effects Common side effects include: bladder pain; bloating or swelling of the face, arms, hands, lower legs, or feet; bloody or cloudy urine; body aches or pain; chest pain; chills; confusion; cough; diarrhea; difficult, burning, or painful urination; difficulty with breathing; difficulty with speaking up to inability to speak; double vision; sleeplessness; and inability to move the arms, legs, or facial muscles. Society and culture Legal status Sipuleucel-T was approved by the U.S. Food and Drug Administration (FDA) on April 29, 2010, to treat asymptomatic or minimally symptomatic metastatic HRPC. Shortly afterward, sipuleucel-T was added to the compendium of cancer treatments published by the National Comprehensive Cancer Network (NCCN) as a "category 1" (highest recommendation) treatment for HRPC. The NCCN Compendium is used by Medicare and major health care insurance providers to decide whether a treatment should be reimbursed. Research Clinical trials Completed Sipuleucel-T showed an overall survival (OS) benefit to patients in three double-blind randomized phase III clinical trials, D9901, D9902a, and IMPACT. The IMPACT trial served as the basis for FDA licensing. This trial enrolled 512 patients with asymptomatic or minimally symptomatic metastatic HRPC, randomized in a 2:1 ratio. The median survival time for sipuleucel-T patients was 25.8 months compared with 21.7 months for placebo-treated patients, an increase of 4.1 months. 31.7% of treated patients survived for 36 months vs. 23.0% in the control arm. The overall survival benefit was statistically significant (P=0.032). The longer survival without tumor shrinkage or change in progression is surprising. This may suggest the effect of an unmeasured variable. The trial was conducted pursuant to an FDA Special Protocol Assessment (SPA), a set of guidelines binding trial investigators to specific agreed-upon parameters with respect to trial design, procedures and endpoints; compliance ensured overall scientific integrity and accelerated FDA approval.
The D9901 trial enrolled 127 patients with asymptomatic metastatic HRPC, randomized in a 2:1 ratio. The median survival time for patients treated with sipuleucel-T was 25.9 months compared with 21.4 months for placebo-treated patients. The overall survival benefit was statistically significant (P=0.01). The D9902a trial was designed like the D9901 trial but enrolled 98 patients. The median survival time for patients treated with sipuleucel-T was 19.0 months compared with 15.3 months for placebo-treated patients, but the difference did not reach statistical significance. Ongoing As of August 2014, the PRO Treatment and Early Cancer Treatment (PROTECT) trial, a phase IIIB clinical trial started in 2001, was tracking subjects but no longer enrolling new subjects. Its purpose is to test efficacy for patients whose CaP is still controlled by suppression of testosterone, either by hormone treatment or by surgical castration. Such patients have usually failed primary treatment with curative intent, whether surgical removal of the prostate, external beam radiation therapy (EBRT), internal radiation, boron neutron capture therapy (BNCT) or high-intensity focused ultrasound (HIFU). Such failure is called biochemical failure and is defined as a PSA reading of 2.0 ng/mL above nadir (the lowest reading taken after primary treatment). As of August 2014, a clinical trial administering sipuleucel-T in conjunction with ipilimumab (Yervoy) was tracking subjects but no longer enrolling new subjects; the trial evaluates the clinical safety and anti-cancer effects (quantified in PSA, radiographic and T cell response) of the combination therapy in patients with advanced prostate cancer. References Further reading External links Cancer treatments Prostate cancer
Sipuleucel-T
Biology
1,172
28,160,580
https://en.wikipedia.org/wiki/Mechanical%20Galleon
The Mechanical Galleon is an elaborate nef or table ornament in the form of a ship, which is also an automaton and clock. It was constructed in about 1585 by Hans Schlottheim in southern Germany. It was in the possession of Augustus, Elector of Saxony (who would have been one of the model courtiers shown on the ship). The model is now in the British Museum in London. Two other similar models are located in museums in France and Austria: the Château d'Écouen north of Paris, and the Kunsthistorisches Museum in Vienna. Construction Nefs were extravagant ship-shaped table ornaments in precious metal that had been popular for some centuries among the very wealthy. Earlier types, such as the Burghley Nef now in the Victoria and Albert Museum in London, usually functioned as containers for salt, spices or other things, but the figures on deck in this example leave no room for a function of this sort. It is also mostly made of gilded brass, where earlier royal examples were usually in gold or at least silver-gilt. In the sixteenth century there was an enthusiasm for clockwork automata, the production of which was funded by potentates including Rudolf II, Holy Roman Emperor, and Suleyman the Magnificent. One of the craftsmen who made these automata was Hans Schlottheim. This particular piece was believed to have been owned by Rudolf II in Prague, but recent evidence points to it having been on the inventory of the Kunstkammer of Augustus I, Elector of Saxony, in Dresden in 1585. Hans Schlottheim was a goldsmith and clockmaker who lived from 1547 to 1625. The important development that made these automata possible was the discovery of coiled tempered steel, which made it possible to store potential energy in a coiled spring and so create a portable energy supply. Clockwork was new and would have been regarded as "magic" in the sixteenth century. The nef could be moved on wheels, which was usual; the wheels have now been removed. The hours and the quarter hours of the clock were struck on upside-down bells in the crow's nests, rung by hammers held by model seamen. There is a clock on the ship but it is small and almost lost in the detail at the base of the tallest mast. Mechanical music played, accompanied by a drum whose skin was hidden within the hull. Seven electors, including Augustus, Elector of Saxony, walk before the seated figure of Rudolf II, the Holy Roman Emperor. The highly prestigious prince electors of the Holy Roman Empire decided who would be enthroned as the Holy Roman Emperor. Finally, the ship would make noises and smoke as the cannons fired and trumpets blared. It was fancied that the Mechanical Galleon "might have enlivened the dullest imperial banquets by racing along the table, guns blazing and trumpets blowing". The complexity of this nef meant that Hans Schlottheim had to include three separate clockwork mechanisms. One conventionally powered the chiming clock, but also provided the power for the seven revolving electors. The music, including the drum, was powered by another motor, and the third gave the ship movement. The mechanism was said to require rewinding every 24 hours. One of Schlottheim's other masterpieces was a clock which at the twelfth hour presented a mechanical nativity scene. Joseph rocked Jesus's cradle, followed by the approach of the Three Kings and the shepherds; the Madonna then bowed to welcome them. At this point angels moved up and down whilst God was revealed giving a benediction. This clock is thought to have been lost during World War II.
Condition As of 2010, the Mechanical Galleon no longer functions. The drumskin that was used to drum as it rolled is no longer present, and the original wheels have been replaced with ball-shaped feet. The British Museum notes that the eight figures on deck are not the originals but casts taken from one original figure. However, the museum also notes that it may hold one of the original figures, but is unsure whether it is from this nef. It is known that the figures on deck would have held drums and trumpets. Provenance Octavius Morgan made a number of generous donations to the British Museum, including this automaton in 1866. Historically, it is believed to be an artefact mentioned in an inventory of the Green Vault (Grünes Gewölbe) treasury of Augustus, Elector of Saxony, in Dresden in 1585. It had been thought to have been owned by Rudolf II. The inventory records "A gilded ship, skilfully made, with a quarter and full hour striking clock, which is to be wound every 24 hours. Above with three masts, in the crows' nests of which the sailors revolve and strike the quarters and hours with hammers on the bells. Inside, the Holy Roman Emperor sits on the Imperial throne, and in front of him pass the seven electors with heralds, paying homage as they receive their fiefs. Furthermore ten trumpeters and a kettle-drummer alternately announce the banquet. Also a drummer and three guardsmen, and sixteen small cannons, eleven of which may be loaded and fired automatically." There are known to be two similar nefs by the same craftsman. The most similar is in the Musée de la Renaissance in Écouen, France. The nef that did belong to Rudolf II is silver and is in the Kunsthistorisches Museum in Vienna. History of the World This automaton was featured in A History of the World in 100 Objects, a series of radio programmes that started in 2010, created in a partnership between the BBC and the British Museum. The leading figures in this partnership were Neil MacGregor and Mark Damazer. Replica The Museum Speelklok in Utrecht has a working replica of a shooting ship, including a real miniature cannon. Gallery References Bibliography J. J. Haspels, Automatic musical instruments (Nirota, Muziekdruk C.V., Koedijk, 1987) J. Fritsch (ed.), Ships of curiosity: three Renaissance automata (Paris, Réunion des Musées Nationaux, 2001) D. Roberts, Mystery, novelty and fantasy clocks (Atglen Pa., Schiffer Publishing, 1999) H. Tait, Clocks and watches (London, The British Museum Press, 1983) Automata (mechanical) Individual clocks in England Model boats Prehistory and Europe objects in the British Museum
Mechanical Galleon
Engineering
1,337
1,639,011
https://en.wikipedia.org/wiki/Gippsland%20Lakes
The Gippsland Lakes are a network of coastal lakes, marshes and lagoons in East Gippsland, Victoria, Australia covering an overall area of about between the rural towns of Lakes Entrance, Bairnsdale and Sale. The largest of the lakes are Lake Wellington (Gunai language: Murla), Lake King (Gunai: Ngarrang) and Lake Victoria (Gunai: Toonallook). The lakes are collectively fed by the Avon, Thomson, Latrobe, Mitchell, Nicholson and Tambo Rivers, and drain into Bass Strait through a short canal about southwest of Lakes Entrance town centre. History The Gippsland Lakes were formed by two principal processes. The first is river delta alluvial deposition of sediment brought in by the rivers which flow into the lakes. Silt deposited by this process forms into long jetties which can run many kilometres into a lake, as exemplified by the Mitchell River silt jetties that run into Lake King. The second process is the action of sea current in Bass Strait which created the Ninety Mile Beach and cut off the river deltas from the sea. Once the lakes were closed off a new cycle started, whereby the water level of the lakes would gradually rise until the waters broke through the barrier beach and the level would drop down until it equalised with sea-level. Eventually the beach would close-off the lakes and the cycle would begin anew. Sometimes it would take many years before a new channel to the sea was formed and not necessarily in the same place as the last one. In 1889, a wall was built to fix the position of a naturally occurring channel between the lakes and the ocean at Lakes Entrance, to stabilise the water level, create a harbour for fishing boats and open up the lakes to shipping. This entrance needs to be dredged regularly, or the same process that created the Gippsland Lakes would render the entrance too shallow for seagoing vessels to pass through. Due to flooding in 2011, Gippsland Lakes experienced blooms of bioluminescent Noctiluca scintillans. Overview Tourism The Gippsland Lakes provide a major hub for tourism, particularly for recreational boating and fishing enthusiasts. The lakes network can be explored by cruise, water taxi, or boat and kayak hire. On the fringes of the lakes are several tourist towns that swell to support the tourist trade, particularly in the summer months. Lakes Entrance is the largest of the towns on the lakes with a population of 4,500. The town is well serviced with resorts, hotels and facilities. It is located with easy access to both the lakes network and the surf beach on Ninety Mile Beach, which is patrolled each summer. Metung is a small village located on the tip of a peninsula sitting in the Gippsland Lakes, surrounded almost completely by water. It is an upmarket tourist destination with many dining options and artisan galleries. Much of Paynesville’s accommodation and infrastructure are located on the network of canals. One of the key attractions is Raymond Island, known for its koala population. The diversity of the brine waters of the lakes, surf beaches along Ninety Mile Beach and fresh water streams that feed the lakes, make the Gippsland Lakes a popular fishing destination. Local fish varieties include bream, mullet, flathead, luderick and trevally. Paynesville, Lakes Entrance and Metung all offer a number of jetties, boat ramps and berthing facilities. Environment The lakes support numerous species of wildlife and there exist two protected areas within: The Lakes National Park and Gippsland Lakes Coastal Park. 
The Gippsland Lakes wetlands are protected by the international Ramsar Convention on wetlands. There are also approximately 400 indigenous flora species and 300 native fauna species. Three plants, two of them orchid species, are listed as endangered. The numbers of southern right whales and humpback whales using the Lakes Entrance area have increased in recent years, as the populations have started to recover from illegal hunts by the Soviet Union, with help from Japan, in the 1960s–1970s. Gippsland Lakes' marine water quality is monitored by the Environment Protection Authority of Victoria, along with that of Port Phillip and Western Port. The water quality in the East Gippsland Lakes declined in the 2021–2022 period compared to the previous one, mostly because heavy rains in 2021 brought nutrients into the water, which led to blue-green algae blooms until May 2022. Burrunan dolphins The lakes are home to about 50 of the recently described species of bottlenose dolphin, the Burrunan dolphin (Tursiops australis). The other 150 or so of this rare species are to be found in Port Phillip. Birds The wetlands provide habitat for about 20,000 waterbirds – including birds from as far afield as Siberia and Alaska. The lakes have been identified by BirdLife International as an Important Bird Area (IBA) because they regularly support over 1% of the global populations of black swans, chestnut teals and musk ducks, as well as many fairy terns. Photo See also Banksia Swamp Gippsland Lakes Coastal Park References External links Official East Gippsland tourism website Gippsland Lakes Coastal Park webpage at Parks Victoria Gippsland Coastal Board The Lakes National Park & Gippsland Lakes Coastal Park Plan Gippsland Ports Authority website Gippsland Lakes Ministerial Advisory Committee website Lakes of Victoria (state) East Gippsland catchment West Gippsland catchment Rivers of Gippsland (region) Ramsar sites in Australia Regions of Victoria (state) Important Bird Areas of Victoria (state) Places with bioluminescence
Gippsland Lakes
Chemistry,Biology
1,141
4,321,511
https://en.wikipedia.org/wiki/Frost%20diagram
A Frost diagram or Frost–Ebsworth diagram is a type of graph used by inorganic chemists in electrochemistry to illustrate the relative stability of a number of different oxidation states of a particular substance. The graph illustrates the free energy versus the oxidation state of a chemical species. This behaviour is dependent on pH, so this parameter also must be included. The free energy is determined by the oxidation–reduction half-reactions. The Frost diagram allows easier comprehension of these reduction potentials than the earlier-designed Latimer diagram, because the "lack of additivity of potentials" was confusing. The free energy ΔG° is related to the standard electrode potential E° shown in the graph by the formula ΔG° = −nFE°, or equivalently nE° = −ΔG°/F, where n is the number of transferred electrons and F is the Faraday constant (F ≈ 96,485 C/mol). The Frost diagram is named after Arthur A. Frost, who originally invented it as a way to "show both free energy and oxidation potential data conveniently" in a 1951 paper. X,Y axes of the Frost diagram The Frost diagram shows on its x axis the oxidation state of the species in question, and on its y axis the Gibbs free energy change, ΔG°, of the half-reduction reaction of the species, with the sign reversed and divided by the Faraday constant, F. The quantity −ΔG°/F = nE°, i.e. the number n of electrons exchanged in the reduction reaction multiplied by the standard potential E°, is expressed in volts. Unit and scale The standard free-energy scale is measured in electron-volts, and the nE° = 0 value is usually the neutral species of the pure element. The Frost diagram normally shows free-energy values above and below nE° = 0 and is scaled in integers. The y axis of the graph displays the free energy. Increasing stability (lower free energy) is lower on the graph, so the higher the free energy of a species of an element, and the higher it sits on the graph, the more unstable and reactive it is. The oxidation state (sometimes also called the oxidation number, as on the x axis of the two illustrating figures on this page) of the species is shown on the x axis of the Frost diagram. Oxidation states are unitless and are also scaled in positive and negative integers. Most often, the Frost diagram displays oxidation states in increasing order, but in some cases they are displayed in decreasing order. The neutral species of the pure element with a free energy of zero (nE° = 0) also has an oxidation state equal to zero. However, the energy of some allotropes may not be zero. The slope of the line between two oxidation states therefore represents the standard potential between them. In other words, the steepness of the line shows the tendency for those two reactants to react and to form the lowest-energy product. There is a possibility of having either a positive or a negative slope. A positive slope between two species indicates a tendency for an oxidation reaction, while a negative slope between two species indicates a tendency for reduction. For example, if the manganese in [HMnO4]− has an oxidation state of +6 and nE° = 4, and in MnO2 the oxidation state is +4 and nE° = 0, then the slope Δy/Δx is 4/2 = 2, yielding a standard potential of +2 V. The stability of other species can be found from the graph in the same way. Species thermodynamic stability indicated by peaks and dips The slope of the line between any two points on a Frost diagram gives the standard reduction potential, E°, for the corresponding half-reaction.
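To make the slope rule concrete, here is a minimal Python sketch (illustrative, not part of the source article) that recovers the standard potential of a couple from the (oxidation state, nE°) coordinates of two species on a Frost diagram, reproducing the manganese example above; the function and variable names are invented for illustration.

```python
def standard_potential(point_a, point_b):
    """Slope between two Frost-diagram points.

    Each point is (oxidation_state, nE0), with nE0 in volt-equivalents
    (i.e. -dG0/F). The slope of the connecting line is the standard
    reduction potential E0, in volts, for the couple.
    """
    (ox_a, ne_a), (ox_b, ne_b) = point_a, point_b
    if ox_a == ox_b:
        raise ValueError("points must differ in oxidation state")
    return (ne_a - ne_b) / (ox_a - ox_b)

# Example from the text: [HMnO4]- at (+6, 4) and MnO2 at (+4, 0)
hmno4 = (6, 4.0)   # oxidation state +6, nE0 = 4
mno2 = (4, 0.0)    # oxidation state +4, nE0 = 0
print(standard_potential(hmno4, mno2))  # -> 2.0, i.e. +2 V
```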
On the Frost diagram for nitrogen here below, the slope of the straight line between N2 (at the origin of the plot) and nitrite (NO2−) being slightly more pronounced than for nitrate indicates that nitrite is a stronger oxidant than nitrate (NO3−). This is confirmed by the values of E° determined for their respective half-reactions of reduction towards gaseous N2: 2 HNO2 + 6 H+ + 6 e− → N2 + 4 H2O (E° = 1.455 V, ΔG° = −842 kJ/mol); 2 NO3− + 12 H+ + 10 e− → N2 + 6 H2O (E° = 1.250 V, ΔG° = −1,206 kJ/mol). Although nitrous acid is located above nitrate in the redox scale and so is a stronger oxidant than nitrate, the Gibbs free energy of the half-reaction for nitrate reduction is greater in magnitude (ΔG° < 0 indicates an exothermic reaction releasing energy) because of the larger number (n) of electrons transferred in the half-reaction (10 versus 6). A species located above the line between two surrounding species (thus shown at the top of a peak) is unstable and prone to disproportionation (↙↘), while a species located below the line joining two surrounding species (thus shown in a dip) lies in a thermodynamic sink and is intrinsically stable, giving rise to comproportionation (↘↙). On the Frost diagram for nitrogen, hydrazoic acid (HN3) and hydroxylamine (NH2OH) are both located at the top of a peak and so can easily disproportionate towards the two more stable surrounding species: ammonium (NH4+) and molecular nitrogen (N2). So, in aqueous solution: – under acidic conditions, hydrazoic acid disproportionates as: 3 HN3 + H+ → 4 N2 + NH4+; – under neutral, or basic, conditions, the azide anion disproportionates as: 3 N3− + 3 H2O → 4 N2 + NH3 + 3 OH−. Disproportionation and comproportionation In regards to electrochemical reactions, two main types of reactions can be visualized using the Frost diagram. Comproportionation is when two equivalents of an element, differing in oxidation state, combine to form a product with an intermediate oxidation state. Disproportionation is the opposite reaction, in which two equivalents of an element, identical in oxidation state, react to form two products with differing oxidation states. Disproportionation: 2 Mn+ → Mm+ + Mp+. Comproportionation: Mm+ + Mp+ → 2 Mn+, with 2n = m + p in both examples. Using a Frost diagram, one can predict whether one oxidation state would undergo disproportionation, or two oxidation states would undergo comproportionation. Looking at two slopes among a set of three oxidation states on the diagram, assuming the two standard potentials (slopes) are not equal, the middle oxidation state will be in either a "hill" or a "valley" shape. A hill is formed when the left slope is steeper than the right, and a valley is formed when the right slope is steeper than the left. An oxidation state that is on "top of the hill" tends to favor disproportionation into the adjacent oxidation states. The adjacent oxidation states, however, will favor comproportionation if the middle oxidation state is at the "bottom of a valley". By Jensen's inequality, drawing the line between the oxidation state to the left and the one to the right and seeing whether the species lies above or below this line is a quick way to determine concavity/convexity (concavity would indicate comproportionation, for example). pH dependence The pH dependence is given by the factor −0.059m/n volts per pH unit, where m is the number of protons in the equation and n the number of electrons exchanged. Electrons are always exchanged in electrochemistry, but not necessarily protons. If there is no proton exchange in the reaction equilibrium, the reaction is said to be pH-independent.
This means that the values of the electrochemical potential for a redox half-reaction, whereby the elements in question change oxidation states, are the same whatever the pH conditions under which the procedure is carried out. The Frost diagram is also a useful tool for comparing the trends of standard potentials (slopes) in acidic and basic solutions. The pure, neutral element transitions to different compounds depending on whether the species is at acidic or basic pH. Though the value and number of oxidation states remain unchanged, the free energies can vary greatly. The Frost diagram allows the superimposition of acidic and basic graphs for easy and convenient comparison. Possible confusion related to non-standard conventions / pH used in textbooks Arthur Frost stated in his own original publication that there might be potential criticism of his diagram. He predicted that "the slopes may not be as easily or accurately recognized as they are the direct numerical values of the oxidation potentials [of the Latimer diagram]". Many inorganic chemists use both the Latimer and Frost diagrams in tandem, using the Latimer diagram for quantitative data and then converting those data into a Frost diagram for visualization. Frost suggested that the numerical values of standard potentials could be added next to the slopes to provide supplemental information. In a paper published in the Journal of Chemical Education, Martinez de Ilarduya and Villafañe (1994) warn users of Frost diagrams to be aware of the pH conditions (acidic or basic) considered to construct the diagrams. Frost diagrams of nE° = −ΔG°/F, classically constructed with the standard potential E°, implicitly refer to acid conditions ([H+] = 1 M, pH = 0). However, in some textbooks the Frost diagram of an element may be confusing for the reader, because the redox potential depends on pH and some notations, or conventions, may differ from the standard conditions and be unclear. Because H+ ions participate in redox half-reactions, and in the acid–base equilibria of the anions released in solution during reduction or consumed by oxidation, according to Le Chatelier's principle the oxidizing power of oxidizing agents is exacerbated under acidic conditions (high [H+]) while the reducing power of reducing agents is exacerbated under basic conditions (high [OH−]). Some textbooks present the reduction potentials calculated under standard conditions, so with [H+] = 1 M (pH = 0, acid solution), E°(pH 0), while also discussing redox processes occurring in a basic solution. To attempt to overcome the problem, in the Phillips and Williams Inorganic Chemistry textbook, the reduction potentials for basic solutions are calculated with non-standard conditions and unusual conventions ([OH−] = 1 M, pH = 14) according to the following formula: E°(OH−) = E°(pH 14) = E°basic − E°(H2O/H2) = E°basic + 0.828 V, since E°(H2O/H2) = −0.828 V at pH 14. So, to avoid confusion for the reader, it is important to use clear conventions and notations, and to also systematically indicate the pH value (0 or 14) for which the Frost diagrams have been constructed, or even better, to present both curves (for pH 0 and 14) on the same diagram to put in evidence the effect of pH on the redox equilibrium. See also Pourbaix diagram Ellingham diagram References External links Diagrams providing useful oxidation-reduction information Electrochemistry
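As an illustration of the −0.059m/n V per pH unit factor described in the pH dependence section, the following Python sketch (illustrative only, not from the article) shifts a standard potential from pH 0 to an arbitrary pH; applied to the reduction of protons to hydrogen (m = n = 2, E° = 0 V at pH 0), it reproduces the 0.828 V offset used in the basic-solution convention discussed above.

```python
def potential_at_ph(e0_ph0, m_protons, n_electrons, ph, slope=0.05916):
    """Shift a standard reduction potential from pH 0 to a given pH.

    Uses the Nernst-derived factor -slope * (m/n) volts per pH unit,
    valid at 25 degrees C, where m is the number of protons and n the
    number of electrons in the half-reaction.
    """
    return e0_ph0 - slope * (m_protons / n_electrons) * ph

# Reduction of protons: 2 H+ + 2 e- -> H2 (E0 = 0 V at pH 0)
print(round(potential_at_ph(0.0, m_protons=2, n_electrons=2, ph=14), 3))
# -> -0.828 V, matching the 0.828 V offset of the pH 14 convention
```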
Frost diagram
Chemistry
2,264
10,386,571
https://en.wikipedia.org/wiki/Rebound%20effect%20%28conservation%29
In energy conservation and energy economics, the rebound effect (or take-back effect) is the reduction in expected gains from new technologies that increase the efficiency of resource use, because of behavioral or other systemic responses. These responses diminish the beneficial effects of the new technology or other measures taken. A definition of the rebound effect is provided by Thiesen et al. (2008): "the rebound effect deals with the fact that improvements in efficiency often lead to cost reductions that provide the possibility to buy more of the improved product or other products or services." A classic example from this perspective is a driver who replaces a vehicle with a more fuel-efficient version, only to reap the benefits of its lower operating expenses by commuting longer and more frequently. A body of scientific literature argues that improvements in technological efficiency, and efficiency improvements in general, have induced increases in consumption. Generally, economists and researchers agree that the rebound effect exists, but they disagree about its volume and importance. While the literature on the rebound effect generally focuses on the effect of technological improvements on energy consumption, the theory can also be applied to the use of any natural resource or other input, such as labor. The rebound effect is generally expressed as a ratio of the lost benefit compared to the expected environmental benefit when holding consumption constant. For instance, if a 5% improvement in vehicle fuel efficiency results in only a 2% drop in fuel use, there is a 60% rebound effect (since (5 − 2)/5 = 60%). The 'missing' 3% might have been consumed by driving faster or further than before. The existence of the rebound effect is uncontroversial. However, debate continues as to the magnitude and impact of the effect in real world situations. Depending on the magnitude of the rebound effect, there are five different rebound effect (RE) types: Super conservation (RE < 0): the actual resource savings are higher than expected savings – the rebound effect is negative. Zero rebound (RE = 0): the actual resource savings are equal to expected savings – the rebound effect is zero. Partial rebound (0 < RE < 1): the actual resource savings are less than expected savings – the rebound effect is between 0% and 100%. This is sometimes known as 'take-back', and is the most common result of empirical studies on individual markets. Full rebound (RE = 1): the actual resource savings are entirely offset by the increase in usage – the rebound effect is 100%. Backfire (RE > 1): the actual resource savings are negative because usage increased beyond the potential savings – the rebound effect is higher than 100%. This situation is commonly known as the Jevons paradox. In order to avoid the rebound effect, environmental economists have suggested that any cost savings from efficiency gains be taxed in order to keep the cost of use the same. Furthermore, strategies for increasing efficiency may lead to unsustainable development patterns if they are not complemented by sufficiency-oriented strategies (demand-reduction measures). History The rebound effect was first described by William Stanley Jevons in his 1865 book The Coal Question, where he observed that the invention in Britain of a more efficient steam engine meant that the use of coal became economically viable for many new uses.
This ultimately led to increased coal demand and much increased coal consumption, even as the amount of coal required for any particular use fell. According to Jevons, "It is a confusion of ideas to suppose that the economical use of fuel is equivalent to diminished consumption. The very contrary is the truth." Studying the increase in energy consumption due to coal burning, Jevons first presented the idea of the rebound effect in the academic literature in 1865, and the notion consequently became known as the 'Jevons paradox'. Subsequent scientific study of the effect did not become mainstream until the 1980s, when economists revisited Jevons' theories in the wake of the global oil crises and growing fears of global warming. Although the concept of the rebound effect was developed from Jevons' original paradox, contemporary economics has expanded the scope of what is meant by rebound effects and given the Jevons paradox a more concise definition. The concept of rebound effects has taken various forms in different disciplines and has come to encompass several spheres of challenges and negative externalities. Walnum et al. (2014) carried out a systematic study of rebound effect research and observed seven viewpoints, each providing unique interpretations and suppositions on the phenomenon: psychological study, ecological economics, energy economics, ecological economics, socio-technological discipline, evolutionary economics and urban planning. An eighth important position, that of industrial ecology, was also identified in further studies. However, most contemporary authors credit Daniel Khazzoom for the re-emergence of the rebound effect in the research literature. Although Khazzoom did not use the term, he raised the idea that there is a less than one-to-one correlation between gains in energy efficiency and reductions in energy use, because of a change in the 'price content' of energy in the provision of the final consumer product. His study was based on energy efficiency gains in home appliances, but the principle applies throughout the economy. A commonly studied example is that of a more fuel-efficient car. As each kilometre of travel becomes cheaper, there will be an increase in driving speed and/or kilometres driven, as long as the price elasticity of demand for car travel is not zero. Other examples might include the growth in garden lighting after the introduction of energy-saving light-emitting diodes, or the increasing size of houses driven partly by higher fuel efficiency in home heating technologies. If the rebound effect is larger than 100%, all gains from the increased fuel efficiency are wiped out by increases in demand (the Jevons paradox). Khazzoom's thesis was criticized heavily by Michael Grubb and Amory Lovins, who stated that there was a connection between energy efficiency improvements in an individual market and an economy-wide reduction in energy consumption. Developing Khazzoom's idea further, and prompting heated debate in the Energy Policy journal at that time, Len Brookes wrote of the fallacies in the energy-efficiency solution to greenhouse gas emissions. His analysis showed that any economically justified improvements in energy efficiency would in fact stimulate economic growth and increase total energy use. For improvements in energy efficiency to contribute to a reduction in economy-wide energy consumption, the improvement must come at a greater economic cost.
Commenting on energy efficiency advocates, he concluded that "the present high profile of the topic seems to owe more to the current tide of green fervor than to sober consideration of the facts, and the validity and cost of solutions." Khazzoom-Brookes postulate In 1992, the economist Harry Saunders coined the term "Khazzoom-Brookes postulate" to describe the idea that energy efficiency gains paradoxically result in increases in energy use (the modern-day equivalent of the Jevons paradox). He modeled energy efficiency gains using a variety of neoclassical growth models, showed that the postulate holds over a wide range of assumptions, and restated this result in the conclusion of his paper. This work provided a theoretical grounding for empirical studies and played an important role in defining the problem of the rebound effect. It also reinforced an emerging ideological divide between energy economists on the extent of the yet-to-be-named effect. The two tightly held positions are: Technological improvements in energy efficiency enable economic growth that was otherwise impossible without the improvement; as such, energy efficiency improvements will usually backfire in the long term. Technological improvements in energy efficiency may result in a small take-back. However, even in the long term, energy efficiency improvements usually result in large overall energy savings. Even though many studies have been undertaken in this area, neither position has yet claimed a consensus view in the academic literature. Recent studies have demonstrated that direct rebound effects are significant (about 30% for energy), but that there is not enough information about indirect effects to know whether or how often backfire occurs. Economists tend toward the first position, but most governments, businesses, and environmental groups adhere to the second. Governments and environmental groups often advocate further research into fuel efficiency and radical increases in the efficient use of energy as the primary means of reducing energy use and greenhouse gas emissions (to alleviate the impacts of climate change). However, if the first position more accurately reflects economic reality, current efforts to invent fuel-efficient technologies may not much reduce energy use, and may in fact paradoxically increase oil and coal consumption, and greenhouse gas emissions, over the long run. Types of effects The full rebound effect can be distinguished into three different economic reactions to technological changes: Direct rebound effect: an increase in consumption of a good caused by its lower cost of use. This is caused by the substitution effect. Indirect rebound effect: the lower cost of a service enables increased household consumption of other goods and services. For example, the savings from a more efficient cooling system may be put towards another luxury good. This is caused by the income effect. Economy-wide effect: the fall in service cost reduces the price of other goods, creates new production possibilities and increases economic growth. In the example of improved vehicle fuel efficiency, the direct effect would be the increased fuel use from more driving as driving becomes cheaper. The indirect effect would incorporate the increased consumption of other goods enabled by household cost savings from increased fuel efficiency. Since consumption of other goods increases, the embodied fuel used in the production of those goods would increase as well.
Finally, the economy-wide effect would include the long-term effect of the increase in vehicle fuel efficiency on production and consumption possibilities throughout the economy, including any effects on economic growth rates. Direct and indirect effects For cost-reducing resource efficiency, the distinction between direct and indirect effects is shown in Figure 1 below. The horizontal axis shows units of consumption of the target good (which could be, for example, clothes washing, measured in kilograms of clean clothes), with consumption of all other goods and services on the vertical axis. A technological change that enables each unit of washing to be produced with less electricity results in a reduction of the price per unit of washing. This shifts the household budget line rightwards. The result is a substitution effect because of the decreased relative price, but also an income effect due to the increased real income. The substitution effect increases consumption of washing from Q1 to QS, and the income effect from QS to Q2. The total increase in consumption of washing from Q1 to Q2, and the resulting increase in electricity consumption, is the direct effect. The indirect effect comprises the increase in other consumption, from O1 to O2. The scale of each of these effects depends on the elasticity of demand for each of the goods, and the embodied resource or externality associated with each good. Indirect effects are difficult to measure empirically. In the manufacturing sector, it has been estimated that there is about a 24% rebound effect due to increases in fuel efficiency. A parallel effect occurs for cost-saving efficient technologies for producers, where output and substitution effects arise. The rebound effect can increase the difficulty of projecting the reduction in greenhouse emissions from an improvement in energy efficiency. Estimating the scale of direct effects on residential electricity, heating and motor fuel consumption has been a common motivation for research on rebound effects. Evaluation and econometric methods are the two approaches generally employed in estimating the size of this effect. Evaluation methods rely on quasi-experimental studies and measure the before-and-after changes in energy consumption from the implementation of energy-efficient technology, while econometric methods utilize elasticity estimates to forecast the likely effects of changes in the effective price of energy services. Research has found that in developed countries, the direct rebound effect is usually small to moderate, ranging from roughly 5% to 40% in residential space heating and cooling. Some of the direct rebound effect can be attributed to consumers who were previously unable to use a service. However, the rebound effect may be more significant in the context of the undeveloped markets of developing economies. Indirect effects from conservation For conservation measures, indirect effects closely approximate the total economy-wide effect. Conservation measures constitute a change in consumption patterns away from particular targeted goods towards other goods. Figure 2 shows that a change in a household's preferences results in a new consumption pattern with less of the target good (QT to QT') and more of all other goods (QO to QO'). The resource consumption or externalities embodied in this other consumption is the indirect effect.
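To make the ratio definition and the five-category classification concrete, here is a small Python sketch (illustrative only, not from the source literature) that computes the rebound effect from expected and actual savings; the 5%/2% fuel-efficiency example given earlier falls out as a 60% partial rebound.

```python
def rebound_effect(expected_savings, actual_savings):
    """Rebound effect as the fraction of expected savings 'taken back'.

    Both arguments are in the same units (e.g. percent reduction in
    fuel use). RE = 1 - actual/expected.
    """
    return 1.0 - actual_savings / expected_savings

def classify(re):
    """Map a rebound-effect value to the five categories in the text."""
    if re < 0:
        return "super conservation"
    if re == 0:
        return "zero rebound"
    if re < 1:
        return "partial rebound"
    if re == 1:
        return "full rebound"
    return "backfire (Jevons paradox)"

# 5% expected fuel saving, only 2% realised -> 60% partial rebound
re = rebound_effect(5.0, 2.0)
print(re, classify(re))  # -> 0.6 partial rebound
```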
Although a persuasive view has prevailed that indirect effects with respect to energy and greenhouse emissions should be very small, because energy directly comprises only a small component of household expenditure, this view is gradually being eroded. Many recent studies based on life-cycle analysis show that the energy consumed indirectly by households is often higher than that consumed directly through electricity, gas, and motor fuel, and is a growing proportion. This is evident in the results of recent studies that indicate indirect effects from household conservation can range from 10% to 200% depending on the scenario, with higher indirect rebounds from diet changes aiming to reduce food miles. Economy-wide effects Even if the direct and indirect rebound effects add up to less than 100%, technological improvements that increase efficiency may still result in economy-wide effects that result in increased resource use for the economy as a whole. In particular, this would happen if increased resource efficiency enables an expansion of production in the economy, and an increase in the rate of economic growth. For example, for the case of energy use, more efficient technology is equivalent to a lower price for energy resources. It is well known that changes in energy costs have a large impact on economic growth rates. In the 1970s, sharp increases in petroleum prices led to stagflation (recession and inflation) in the developed countries, whereas in the 1990s lower petroleum prices contributed to higher economic growth. An improvement in energy efficiency has the same effect as lower fuel prices, and leads to faster economic growth. Economists generally believe that, especially for the case of energy use, more efficient technologies will lead to increased use because of this growth effect. To model the scale of this effect, economists use computational general equilibrium (CGE) models. While CGE methodology is by no means perfect, results indicate that economy-wide rebound effects are likely to be very high, with estimates above 100% being rather common. One simple CGE model has been made available online for use by economists. Income level variation Research has shown that the direct rebound effect for energy services is lower at high income levels, due to less price sensitivity. Studies have found that the own-price elasticity of gas consumption by UK households was two times greater for households in the lowest income decile when compared to the highest decile. Studies have also observed higher rebounds in low-income households for improvements in heating technology. Evaluation methods have also been used to assess the scale of rebound effects from efficient heating installations in lower-income homes in the United Kingdom. This research found that direct effects are close to 100% in many cases. High-income households in developed countries are likely to set the temperature at the optimum comfort level regardless of the cost, so any cost reduction does not result in increased heating, because the temperature was already optimal. But low-income households are more price sensitive, and have made thermal sacrifices due to the cost of heating. In this case, a high direct rebound is likely. This analogy can be extended to most household energy consumption. The size of the rebound effect is likely to be higher in developing countries according to macro-level assessments and case studies. One case study was undertaken in rural India to evaluate the impact of an alternative energy scheme.
Households were given solar powered lighting in an attempt to reduce the use of kerosene for lighting to zero except for seasons with insufficient sunshine. The scheme was also designed to encourage a future willingness to pay for efficient lighting. The results were surprising, with high direct rebounds between 50 and 80%, and total direct and indirect rebound above 100%. Because the new lighting source was essentially zero cost, operating hours for lighting went up from an average of 2 to 6 per day, with new lighting consisting of a combination of both the no-cost solar lamps and also kerosene lamps. Also, more cooking was undertaken, which enabled an increased trade of food with neighboring villages. Rebounds with respect to time Individual opportunity cost is an often-overlooked cause of the rebound effect. Just as improved workplace tools result in an increased expectation of productivity, so does the increased availability of time result in an increase in demand for a service. Research articles often examine increasingly convenient and more rapid modes of transportation to determine the rebound effect in energy demand. Because time cost forms a major part of the total cost of commuter transport, rapid modes will reduce real costs, but will also encourage longer commuting distances, which will in turn increase energy consumption. While important, it is almost impossible to estimate empirically the scale of such effects due to the subjective nature of the value of time. Time saved can be put towards either additional work or leisure, which may have differing degrees of rebound effect. Labor time saved due to increased labor productivity is likely to be spent on further labor at higher rates of productivity. Time saved in leisure may simply encourage people to diversify their leisure interests to fill their generally fixed amount of leisure time. Suggested solutions In order to ensure that efficiency-enhancing technological improvements actually reduce fuel use, the ecological economists Mathis Wackernagel and William Rees have suggested that any cost savings from efficiency gains be "taxed away or otherwise removed from further economic circulation. Preferably they should be captured for reinvestment in natural capital rehabilitation." This can be achieved through, for example, the imposition of a green tax, a cap and trade program, higher fuel taxes or the proposed "restore" approach where part of the savings is directed back to the resource. Policies can also directly address projected yearly consumption of energy rather than device efficiency, especially for systems where the use can be accurately projected, such as street lighting. Perhaps because of the ongoing debate and lack of mutual understanding about the importance and influence of rebound effects, policy responses to mitigate the associated risks and challenges remain scarce and insufficiently ambitious. Vivanco, Kemp, and van der Voet suggest several strategies: Economy-wide increases in environmental efficiency. The strategy of "consuming more efficiently" aims at reducing the overall magnitude of positive rebound effects by improving the environmental intensity of consumption as a whole (for example, with the help of technology). Shifts to greener consumption patterns. With the support of the "consuming differently" strategy, changes in consumption habits towards products with less environmental burden (e.g.
electricity produced only from renewable energy) are encouraged. Downsizing consumption. The strategy of "consuming less" is aimed at changing people's individual consumption habits so that environmentally damaging products are not consumed at all (i.e. to avoid purchasing new or temporarily improved products). See also Efficient (disambiguation) Jevons paradox Le Chatelier's Principle Moral credential - the psychological phenomenon whereby a person who takes one ethical action may feel less inclined to take a different one in addition Risk compensation Snackwell effect Outline of organizational theory Notes and references External links Rebound effect at the Encyclopedia of Earth Energy conservation Paradoxes in economics Industrial ecology Alternative energy economics
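The toy sketch referenced in the Direct and indirect effects section above: a minimal Python illustration of the back-of-the-envelope arithmetic used to express a direct rebound as the share of expected energy savings "taken back" by increased use. The numbers are made-up assumptions, not figures from the studies cited.

```python
def direct_rebound(expected_savings, actual_savings):
    """Direct rebound: the fraction of the engineering-predicted energy
    savings taken back by increased use of the now-cheaper energy service."""
    return 1 - actual_savings / expected_savings

# Illustrative example: an efficiency upgrade expected to save 400 kWh/year
# ends up saving only 300 kWh/year once usage increases.
print(direct_rebound(400, 300))  # 0.25, i.e. a 25% direct rebound
```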
Rebound effect (conservation)
Chemistry,Engineering
3,955
58,247,646
https://en.wikipedia.org/wiki/C19H22FN3O3
The molecular formula C19H22FN3O3 (molar mass: 359.39 g/mol, exact mass: 359.1645 u) may refer to: Enrofloxacin (ENR) Grepafloxacin
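The quoted molar mass can be reproduced by summing standard atomic weights; the short Python check below is only an illustration, using older IUPAC atomic-weight values.

```python
# Recompute the molar mass of C19H22FN3O3 from standard atomic weights (g/mol).
weights = {"C": 12.0107, "H": 1.00794, "F": 18.9984, "N": 14.0067, "O": 15.9994}
formula = {"C": 19, "H": 22, "F": 1, "N": 3, "O": 3}

molar_mass = sum(weights[el] * count for el, count in formula.items())
print(f"{molar_mass:.2f} g/mol")  # 359.39 g/mol, matching the value above
```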
C19H22FN3O3
Chemistry
74
8,788,180
https://en.wikipedia.org/wiki/List%20of%20professional%20architecture%20organizations
This is a list of professional architecture organizations listed by country. Many of them are members of the International Union of Architects. Africa Ghana Ghana Institute of Architects Kenya Architectural Association of Kenya Nigeria Nigerian Institute of Architects South Africa South Africa Institute of Architects Asia Bangladesh Institute of Architects Bangladesh Hong Kong Hong Kong Institute of Architects (HKIA) India Indian Institute of Architects (IIA) Japan Architectural Institute of Japan (AIJ) Japan Institute of Architects (JIA) Pakistan Institute of Architects Pakistan Philippines United Architects of the Philippines Europe Armenia Armenian Union of Architects Latvia The Latvian Association of Architects (LAS) Denmark Danish Association of Architects (Akademisk Arkitektforening) (AA) Finland Suomen Arkkitehtiliitto SAFA Germany Bund Deutscher Architekten Greece Technical Chamber of Greece Ireland The Royal Institute of the Architects of Ireland Netherlands Netherlands Architecture Institute Poland Association of Polish Architects Spain Consejo Superior de los Colegios de Arquitectos de España United Kingdom Royal Institute of British Architects (RIBA) Chartered Institute of Architectural Technologists (CIAT) Royal Incorporation of Architects in Scotland (RIAS) Royal Society of Architects in Wales (RSAW) Royal Society of Ulster Architects (RSUA) North Wales Society of Architects (NWSA) North America Canada Royal Architectural Institute of Canada Architectural Institute of British Columbia Ontario Association of Architects Ordre des architectes du Québec United States The American Institute of Architects The Society of American Registered Architects Oceania Australia Australian Institute of Architects New Zealand New Zealand Institute of Architects References Architecture Professional organizations
List of professional architecture organizations
Engineering
312
17,132,682
https://en.wikipedia.org/wiki/John%20Zaborszky
John Zaborszky (May 13, 1914 – February 11, 2008) was a noted Hungarian-born applied mathematician and a professor in the Department of Systems Science and Mathematics, Washington University in St. Louis. He received the Richard E. Bellman Control Heritage Award in 1986. He was elected to the National Academy of Engineering in 1984. Biography Zaborszky earned a master's degree and PhD in 1937 and 1943, respectively, "under auspices of the Regent of Hungary" from the Technical University of Budapest. He continued as a docent at that institution and was chief engineer of the city's municipal power system before emigrating to the United States in 1947. He was an assistant professor at the University of Missouri–Rolla (UMR) and in 1954 moved to St. Louis to join Washington University. In 1974, he founded the Systems Science Department and was its first chairman. He was the 1970 President of the IEEE Control Systems Society and he received its Distinguished Member Award in 1983. He was an IEEE Fellow and was elected to Eta Kappa Nu (ΗΚΝ). References External links Washington University Obituary Zaborszky Distinguished Lecture Series National Academy of Engineering: John Zaborszky 1914 births 2008 deaths Control theorists Richard E. Bellman Control Heritage Award recipients Members of the United States National Academy of Engineering Washington University in St. Louis mathematicians Hungarian emigrants to the United States Missouri University of Science and Technology faculty
John Zaborszky
Engineering
281
74,289,074
https://en.wikipedia.org/wiki/Dieter%20Haidt
Dieter Haidt (born 1940) is a German physicist, known for his contribution to the 1973 discovery of weak neutral currents. The discovery was made in the Gargamelle experiment, which used a heavy liquid bubble chamber detector in operation at CERN from 1970 to 1979. Education and career In 1958 Haidt graduated from the Kepler-Gymnasium in Tübingen. He then studied physics at the University of Tübingen, where he graduated with a Diplom in experimental physics in 1965. He then moved to RWTH Aachen University, where he was a member of the X2 collaboration. He was also a visiting scholar at University College London in 1966. In 1969 he received his doctorate at RWTH Aachen University summa cum laude and in 1970 he received the Borchers Medal. From 1970 he was a member of the Gargamelle collaboration at CERN (from RWTH Aachen University) and from 1971 to 1978 he was employed at CERN. In 1973, the Gargamelle collaboration discovered weak neutral currents. The collaboration searched for weak neutral currents in neutrino reactions without muon generation. The discovery's rapid recognition depended, to a considerable extent, on calculations by Haidt, who showed that the observed events constituted a genuinely new effect (and not, e.g., background from neutron interactions). Other prominent physicists involved in the Gargamelle experiment include Antonino Pullia (1935–2020), Helmut Faissner (1928–2007), and André Lagarrigue (1924–1975). Haidt was a spokesperson for the neutrino-propane experiment at the Gargamelle bubble chamber. He was involved in neutrino experiments at the BEBC detector. From 1979 to 2004 he was a senior scientist at DESY. From 1979 to 1986 he was a member of the JADE collaboration at DESY and from 1994 of the H1 collaboration. He was a member of the Physics Research Committee (PRC) at DESY and organized the DESY seminars. In 2007 he received emeritus status. For the academic year 1987–1988 he was a visiting scientist at the Japanese particle physics laboratory KEK. In 2011 he shared the Enrico Fermi Prize with Antonino Pullia. In 2009, the Gargamelle collaboration received the European Physical Society's High-Energy and Particle Physics Prize. From 1986 to 1997 he was an editor for the Zeitschrift für Physik C and from 1997 to 2006 he was the editor-in-chief of its successor, the European Physical Journal C. References 1940 births Living people People associated with CERN German experimental physicists Particle physicists University of Tübingen alumni RWTH Aachen University alumni
Dieter Haidt
Physics
553
1,590,804
https://en.wikipedia.org/wiki/Method%20of%20distinguished%20element
In the mathematical field of enumerative combinatorics, identities are sometimes established by arguments that rely on singling out one "distinguished element" of a set. Definition Let $\mathcal{F}$ be a family of subsets of the set $X$ and let $x$ be a distinguished element of set $X$. Then suppose there is a predicate $P(A, x)$ that relates a subset $A \in \mathcal{F}$ to $x$. Denote $\mathcal{F}_x$ to be the set of subsets $A$ from $\mathcal{F}$ for which $P(A, x)$ is true and $\mathcal{F}_{\neg x}$ to be the set of subsets $A$ from $\mathcal{F}$ for which $P(A, x)$ is false. Then $\mathcal{F}_x$ and $\mathcal{F}_{\neg x}$ are disjoint sets, so by the method of summation, the cardinalities are additive: $|\mathcal{F}| = |\mathcal{F}_x| + |\mathcal{F}_{\neg x}|$. Thus the distinguished element allows for a decomposition according to a predicate; this is a simple form of a divide and conquer algorithm. In combinatorics, this allows for the construction of recurrence relations. Examples are in the next section. Examples The binomial coefficient $\binom{n}{k}$ is the number of size-$k$ subsets of a size-$n$ set. A basic identity—one of whose consequences is that the binomial coefficients are precisely the numbers appearing in Pascal's triangle—states that: $\binom{n+1}{k} = \binom{n}{k-1} + \binom{n}{k}.$ Proof: In a size-(n + 1) set, choose one distinguished element. The set of all size-k subsets contains: (1) all size-k subsets that do contain the distinguished element, and (2) all size-k subsets that do not contain the distinguished element. If a size-k subset of a size-(n + 1) set does contain the distinguished element, then its other k − 1 elements are chosen from among the other n elements of our size-(n + 1) set. The number of ways to choose those is therefore $\binom{n}{k-1}$. If a size-k subset does not contain the distinguished element, then all of its k members are chosen from among the other n "non-distinguished" elements. The number of ways to choose those is therefore $\binom{n}{k}$. The number of subsets of any size-n set is $2^n$. Proof: We use mathematical induction. The basis for induction is the truth of this proposition in case n = 0. The empty set has 0 members and 1 subset, and $2^0 = 1$. The induction hypothesis is the proposition in case n; we use it to prove case n + 1. In a size-(n + 1) set, choose a distinguished element. Each subset either contains the distinguished element or does not. If a subset contains the distinguished element, then its remaining elements are chosen from among the other n elements. By the induction hypothesis, the number of ways to do that is $2^n$. If a subset does not contain the distinguished element, then it is a subset of the set of all non-distinguished elements. By the induction hypothesis, the number of such subsets is $2^n$. Finally, the whole list of subsets of our size-(n + 1) set contains $2^n + 2^n = 2^{n+1}$ elements. Let $B_n$ be the nth Bell number, i.e., the number of partitions of a set of n members. Let $C_n$ be the total number of "parts" (or "blocks", as combinatorialists often call them) among all partitions of that set. For example, the partitions of the size-3 set {a, b, c} may be written thus: {a}{b}{c}, {a, b}{c}, {a, c}{b}, {a}{b, c}, and {a, b, c}. We see 5 partitions, containing 10 blocks, so $B_3 = 5$ and $C_3 = 10$. An identity states: $B_{n+1} = B_n + C_n.$ Proof: In a size-(n + 1) set, choose a distinguished element. In each partition of our size-(n + 1) set, either the distinguished element is a "singleton", i.e., the set containing only the distinguished element is one of the blocks, or the distinguished element belongs to a larger block. If the distinguished element is a singleton, then deletion of the distinguished element leaves a partition of the set containing the n non-distinguished elements. There are $B_n$ ways to do that.
If the distinguished element belongs to a larger block, then its deletion leaves a block in a partition of the set containing the n non-distinguished elements. There are $C_n$ such blocks. See also Combinatorial principles Combinatorial proof References Combinatorics Mathematical principles
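The two recurrences established above translate directly into code. Below is a minimal Python sketch (the function names are my own, purely illustrative): binomial coefficients computed via the distinguished-element recurrence, and a brute-force enumeration of set partitions that verifies $B_{n+1} = B_n + C_n$ on a small case.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def binom(n, k):
    """Number of size-k subsets of a size-n set, via the distinguished-element
    recurrence: subsets containing the distinguished element plus those that don't."""
    if k < 0 or k > n:
        return 0
    if k == 0 or k == n:
        return 1
    return binom(n - 1, k - 1) + binom(n - 1, k)

def partitions(elements):
    """Yield all set partitions, treating elements[0] as the distinguished one:
    it is either a singleton block or joins a block of the remaining partition."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for p in partitions(rest):
        yield [[first]] + p                              # distinguished singleton
        for i in range(len(p)):                          # or join existing block i
            yield p[:i] + [[first] + p[i]] + p[i + 1:]

B3 = len(list(partitions(['a', 'b', 'c'])))              # 5 partitions
C3 = sum(len(p) for p in partitions(['a', 'b', 'c']))    # 10 blocks in total
B4 = len(list(partitions(['a', 'b', 'c', 'd'])))         # 15 partitions
assert binom(4, 2) == binom(3, 1) + binom(3, 2) == 6     # Pascal's rule
assert B4 == B3 + C3                                     # B_{n+1} = B_n + C_n
```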
Method of distinguished element
Mathematics
858
65,297,106
https://en.wikipedia.org/wiki/Adaptive%20design%20%28medicine%29
In an adaptive design of a clinical trial, the parameters and conduct of the trial for a candidate drug or vaccine may be changed based on an interim analysis. Adaptive design typically involves advanced statistics to interpret a clinical trial endpoint. This is in contrast to traditional single-arm (i.e. non-randomized) clinical trials or randomized clinical trials (RCTs) that are static in their protocol and do not modify any parameters until the trial is completed. The adaptation process takes place at certain points in the trial, prescribed in the trial protocol. Importantly, this trial protocol is set before the trial begins, with the adaptation schedule and processes specified. Adaptations may include modifications to dosage, sample size, the drug undergoing trial, patient selection criteria, and/or the "cocktail" mix. The PANDA (A Practical Adaptive & Novel Designs and Analysis toolkit) provides not only a summary of different adaptive designs, but also comprehensive information on adaptive design planning, conduct, analysis and reporting. Purpose The aim of an adaptive trial is to more quickly identify drugs or devices that have a therapeutic effect, and to zero in on patient populations for whom the drug is appropriate. When conducted efficiently, adaptive trials have the potential to find new treatments while minimizing the number of patients exposed to the risks of clinical trials. Specifically, adaptive trials can efficiently discover new treatments by reducing the number of patients enrolled in treatment groups that show minimal efficacy or higher adverse-event rates. An adaptive trial can adjust almost any part of its design based on pre-set rules and statistical design: sample size, adding new groups, dropping less effective groups, and changing the probability of being randomized to a particular group, for example. History In 2004, a Strategic Path Initiative was introduced by the United States Food and Drug Administration (FDA) to modify the way drugs travel from lab to market. This initiative aimed at dealing with the high attrition levels observed in the clinical phase. It also attempted to offer flexibility to investigators to find the optimal clinical benefit without affecting the study's validity. Adaptive clinical trials initially came under this regime. The FDA issued draft guidance on adaptive trial design in 2010. In 2012, the President's Council of Advisors on Science and Technology (PCAST) recommended that the FDA "run pilot projects to explore adaptive approval mechanisms to generate evidence across the lifecycle of a drug from the pre-market through the post-market phase." While not specifically related to clinical trials, the council also recommended that they "make full use of accelerated approval for all drugs meeting the statutory standard of addressing an unmet need for a serious or life-threatening disease, and demonstrating an impact on a clinical endpoint other than survival or irreversible morbidity, or on a surrogate endpoint, likely to predict clinical benefit." By 2019, the FDA updated their 2010 recommendations and issued "Adaptive Design Clinical Trials for Drugs and Biologics Guidance". In October 2021, the FDA Center for Veterinary Medicine issued the Guidance Document "Adaptive and Other Innovative Designs for Effectiveness Studies of New Animal Drugs". Characteristics Traditionally, clinical trials are conducted in three steps: The trial is designed. The trial is conducted as prescribed by the design.
Once the data are ready, they are analysed according to a pre-specified analysis plan. Types Overview Any trial design that can change its design, during active enrollment, could be considered an adaptive clinical trial. There are a number of different types, and real life trials may combine elements from these different trial types: In some cases, trials have become an ongoing process that regularly adds and drops therapies and patient groups as more information is gained. Dose finding design Phase I of clinical research focuses on selecting a particular dose of a drug to carry forward into future trials. Historically, such trials have had a "rules-based" (or "algorithm-based") design, such as the 3+3 design. However, these "A+B" rules-based designs are not appropriate for phase I studies and are inferior to adaptive, model-based designs. An example of a superior design is the continual reassessment method (CRM). Group sequential design Group sequential design is the application of sequential analysis to clinical trials. At each interim analysis, investigators will use the current data to decide whether the trial should either stop or should continue to recruit more participants. The trial might stop either because the evidence that the treatment is working is strong ("stopping for benefit") or weak ("stopping for futility"). Whether a trial may stop for futility only, benefit only, or either, is stated in advance. A design has "binding stopping rules" when the trial must stop when a particular threshold of (either strong or weak) evidence is crossed at a particular interim analysis. Otherwise it has "non-binding stopping rules", in which case other information can be taken into account, for example safety data. The number of interim analyses is specified in advance, and can be anything from a single interim analysis (a "two-stage" design) to an interim analysis after every participant ("continuous monitoring"). For trials with a binary (response/no response) outcome and a single treatment arm, a popular and simple group sequential design with two stages is the Simon design. In this design, there is a single interim analysis partway through the trial, at which point the trial either stops for futility or continues to the second stage. Mander and Thomson also proposed a design with a single interim analysis, at which point the trial could stop for either futility or benefit. For single-arm, single-stage binary outcome trials, a trial's success or failure is determined by the number of responses observed by the end of the trial. This means that it may be possible to know the conclusion of the trial (success or failure) with certainty before all the data are available. Planning to stop a trial once the conclusion is known with certainty is called non-stochastic curtailment. This reduces the sample size on average. Planning to stop a trial when the probability of success, based on the results so far, is either above or below a certain threshold is called stochastic curtailment. This reduces the average sample size even more than non-stochastic curtailment. Stochastic and non-stochastic curtailment can also be used in two-arm binary outcome trials, where a trial's success or failure is determined by the number of responses observed on each arm by the end of the trial (a short code sketch of the non-stochastic rule appears at the end of this section). Usage Adaptive design methods developed mainly in the early 21st century. In November 2019, the US Food and Drug Administration provided guidelines for using adaptive designs in clinical trials.
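The sketch referenced under Group sequential design above: a minimal Python illustration of non-stochastic curtailment for a single-arm binary-outcome trial. The trial size and response threshold are made-up numbers, not taken from any design cited in this article.

```python
def early_outcome(responses, failures, n_total, r_needed):
    """Non-stochastic curtailment: the trial succeeds if at least r_needed
    responses occur among n_total participants. Return the outcome as soon
    as it is known with certainty, or None while it is still undetermined."""
    remaining = n_total - responses - failures
    if responses >= r_needed:
        return "success"                      # threshold already reached
    if responses + remaining < r_needed:
        return "failure"                      # success is now impossible
    return None                               # keep enrolling

# Illustrative 30-patient trial that needs 10 responses:
print(early_outcome(10, 5, 30, 10))   # 'success' after only 15 observations
print(early_outcome(1, 21, 30, 10))   # 'failure': at most 9 responses possible
```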
COVID-19 related trials In April 2020, the World Health Organization published an "R&D Blueprint (for the) novel Coronavirus" (Blueprint). The Blueprint documented a "large, international, multi-site, individually randomized controlled clinical trial" to allow "the concurrent evaluation of the benefits and risks of each promising candidate vaccine within 3–6 months of it being made available for the trial." The Blueprint listed a Global Target Product Profile (TPP) for COVID-19, identifying favorable attributes of safe and effective vaccines under two broad categories: "vaccines for the long-term protection of people at higher risk of COVID-19, such as healthcare workers", and other vaccines to provide rapid-response immunity for new outbreaks. The international TPP team was formed to 1) assess the development of the most promising candidate vaccines; 2) map candidate vaccines and their clinical trials worldwide, publishing a frequently-updated "landscape" of vaccines in development; 3) rapidly evaluate and screen for the most promising candidate vaccines simultaneously before they are tested in humans; and 4) design and coordinate a multiple-site, international randomized controlled trial, the "Solidarity trial" for vaccines, to enable simultaneous evaluation of the benefits and risks of different vaccine candidates under clinical trials in countries where there are high rates of COVID-19 disease, ensuring fast interpretation and sharing of results around the world. The WHO vaccine coalition prioritized which vaccines would go into Phase II and III clinical trials, and determined harmonized Phase III protocols for all vaccines achieving the pivotal trial stage. The global "Solidarity" and European "Discovery" trials of hospitalized people with severe COVID-19 infection applied adaptive design to rapidly alter trial parameters as results from the four experimental therapeutic strategies emerged. The US National Institute of Allergy and Infectious Diseases (NIAID) initiated an adaptive design, international Phase III trial (called "ACTT") to involve up to 800 hospitalized COVID-19 patients at 100 sites in multiple countries. Breast cancer An adaptive trial design enabled two experimental breast cancer drugs to deliver promising results after just six months of testing, far shorter than usual. Researchers assessed the results while the trial was in process and found that cancer had been eradicated in more than half of one group of patients. The trial, known as I-SPY 2, tested 12 experimental drugs. I-SPY 1 For its predecessor I-SPY 1, 10 cancer centers and the National Cancer Institute (NCI SPORE program and the NCI Cooperative groups) collaborated to identify response indicators that would best predict survival for women with high-risk breast cancer. During 2002–2006, the study monitored 237 patients undergoing neoadjuvant therapy before surgery. Iterative MRI and tissue sampling monitored the response of the patients' tumors to chemotherapy given in a neoadjuvant, or presurgical, setting. Evaluating chemotherapy's direct impact on tumor tissue took much less time than monitoring outcomes in thousands of patients over long time periods. The approach helped to standardize the imaging and tumor sampling processes, and led to miniaturized assays. Key findings included that tumor response was a good predictor of patient survival, and that tumor shrinkage during treatment was a good predictor of long-term outcome. Importantly, the vast majority of tumors were identified as high risk by molecular signature.
However, the group of women was heterogeneous, and measuring response within tumor subtypes was more informative than viewing the group as a whole. Within genetic signatures, level of response to treatment appears to be a reasonable predictor of outcome. Additionally, its shared database has furthered the understanding of drug response and generated new targets and agents for subsequent testing. I-SPY 2 I-SPY 2 is an adaptive clinical trial of multiple Phase 2 treatment regimens combined with standard chemotherapy. I-SPY 2 linked 19 academic cancer centers, two community centers, the FDA, the NCI, pharmaceutical and biotech companies, patient advocates and philanthropic partners. The trial is sponsored by the Biomarker Consortium of the Foundation for the NIH (FNIH), and is co-managed by the FNIH and QuantumLeap Healthcare Collaborative. I-SPY 2 was designed to explore the hypothesis that different combinations of cancer therapies have varying degrees of success for different patients. Conventional clinical trials that evaluate post-surgical tumor response require a separate trial with long intervals and large populations to test each combination. Instead, I-SPY 2 is organized as a continuous process. It efficiently evaluates multiple therapy regimes by relying on the predictors developed in I-SPY 1 that help quickly determine whether patients with a particular genetic signature will respond to a given treatment regime. The trial is adaptive in that the investigators learn as they go, and do not continue treatments that appear to be ineffective. All patients are categorized based on tissue and imaging markers collected early and iteratively (a patient's markers may change over time) throughout the trial, so that early insights can guide treatments for later patients. Treatments that show positive effects for a patient group can be ushered to confirmatory clinical trials, while those that do not can be rapidly sidelined. Importantly, confirmatory trials can serve as a pathway for FDA Accelerated Approval. I-SPY 2 can simultaneously evaluate candidates developed by multiple companies, escalating or eliminating drugs based on immediate results. Using a single standard arm for comparison for all candidates in the trial saves significant costs over individual Phase 3 trials. All data are shared across the industry. I-SPY 2 is comparing 11 new treatments against 'standard therapy', and is estimated to complete in September 2017. By mid-2016 several treatments had been selected for later stage trials. Alzheimer's Researchers in the EPAD project of the Innovative Medicines Initiative are using an adaptive trial design to help speed development of Alzheimer's disease treatments, with a budget of 53 million euros. The first trial under the initiative was expected to begin in 2015 and to involve about a dozen companies. As of 2020, 2,000 people over the age of 50 have been recruited across Europe for a long term study on the earliest stages of Alzheimer's. The EPAD project plans to use the results from this study and other data to inform adaptive clinical trials, with 1,500 selected participants, of drugs to prevent Alzheimer's. Bayesian designs The adjustable nature of adaptive trials naturally suggests the use of Bayesian statistical analysis, since Bayesian statistics inherently address the updating of information, such as the interim-analysis results on which adaptive trials change course.
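Before the decision rules are described, here is a purely illustrative Beta-Binomial example of the posterior probability approach: the probability, at an interim look, that the response rate exceeds a target. The Beta(1, 1) prior, the 30% target, and the interim counts are assumptions of mine, not values from any cited trial.

```python
from scipy.stats import beta

def posterior_prob_superiority(successes, failures, p0, a=1.0, b=1.0):
    """P(response rate > p0 | data) under a Beta(a, b) prior,
    updated with the successes and failures observed so far."""
    return beta(a + successes, b + failures).sf(p0)

# Interim look: 14 responses among 30 patients, target rate p0 = 0.30.
print(posterior_prob_superiority(14, 16, 0.30))  # ~0.97 -> strong evidence
```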
The problem of adaptive clinical trial design is more or less exactly the bandit problem as studied in the field of reinforcement learning. According to FDA guidelines, an adaptive Bayesian clinical trial can involve: Interim looks to stop or to adjust patient accrual Interim looks to assess stopping the trial early either for success, futility or harm Reversing the hypothesis of non-inferiority to superiority or vice versa Dropping arms or doses or adjusting doses Modification of the randomization rate to increase the probability that a patient is allocated to the most appropriate treatment (or arm in the multi-armed bandit model) The Bayesian framework Continuous Individualized Risk Index, which is based on dynamic measurements from cancer patients, can be effectively used for adaptive trial designs. Platform trials rely heavily on Bayesian designs. For regulatory submission of Bayesian clinical trial design, there exist two Bayesian decision rules that are frequently used by trial sponsors. First, the posterior probability approach is mainly used in decision-making to quantify the evidence addressing the question, "Does the current data provide convincing evidence in favor of the alternative hypothesis?" The key quantity of the posterior probability approach is the posterior probability of the alternative hypothesis being true based on the data observed up to the point of analysis. Second, the predictive probability approach is mainly used in decision-making to answer the question at an interim analysis: "Is the trial likely to present compelling evidence in favor of the alternative hypothesis if we gather additional data, potentially up to the maximum sample size (or current sample size)?" The key quantity of the predictive probability approach is the posterior predictive probability of the trial success given the interim data. In most regulatory submissions, Bayesian trial designs are calibrated to possess good frequentist properties. In this spirit, and in adherence to regulatory practice, regulatory agencies typically recommend that sponsors provide the frequentist type I and II error rates for the sponsor's proposed Bayesian analysis plan. In other words, the Bayesian designs for the regulatory submission need to satisfy the type I and II error requirement in most cases in the frequentist sense. Some exceptions may occur in the context of external data borrowing, where the type I error rate requirement can be relaxed to some degree depending on the confidence in the historical information. Added complexity The logistics of managing traditional, non-adaptive design clinical trials may be complex. In adaptive design clinical trials, adapting the design as results arrive adds to the complexity of design, monitoring, drug supply, data capture and randomization. Furthermore, it should be stated in the trial's protocol exactly what kind of adaptation will be permitted. Publishing the trial protocol in advance increases the validity of the final results, as it makes clear that any adaptation that took place during the trial was planned, rather than ad hoc. According to PCAST "One approach is to focus studies on specific subsets of patients most likely to benefit, identified based on validated biomarkers.
In some cases, using appropriate biomarkers can make it possible to dramatically decrease the sample size required to achieve statistical significance—for example, from 1500 to 50 patients." Adaptive designs have added statistical complexity compared to traditional clinical trial designs. For example, any multiple testing, either from looking at multiple treatment arms or from looking at a single treatment arm multiple times, must be accounted for. Another example is statistical bias, which can be more likely when using adaptive designs, and again must be accounted for. While an adaptive design may be an improvement over a non-adaptive design in some respects (for example, expected sample size), it is not always the case that an adaptive design is a better choice overall: in some cases, the added complexity of the adaptive design may not justify its benefits. An example of this is when the trial is based on a measurement that takes a long time to observe, as this would mean having an interim analysis when many participants have started treatment but cannot yet contribute to the interim results. Risks Shorter trials may not reveal longer-term risks, such as a cancer's return. See also References Sources External links Gottlieb K. (2016) The FDA adaptive trial design guidance in a nutshell - A review in Q&A format for decision makers. PeerJ Preprints 4:e1825v1 Clinical trials Drugs Design of experiments
Adaptive design (medicine)
Chemistry
3,581
1,428,576
https://en.wikipedia.org/wiki/Religious%20vows
Religious vows are the public vows made by the members of religious communities pertaining to their conduct, practices, and views. In the Buddhist tradition, in particular within the Mahayana and Vajrayana traditions, many different kinds of religious vows are taken by the lay community as well as by the monastic community, as they progress along the path of their practice. In the monastic tradition of all schools of Buddhism, the Vinaya expounds the vows of the fully ordained nuns and monks. In the Christian tradition, such public vows are made by the cenobitic and eremitic religious of the Catholic Church, Lutheran Churches, Anglican Communion, and Eastern Orthodox Churches, whereby they confirm their public profession of the evangelical counsels of poverty, chastity, and obedience, or their Benedictine equivalent. The vows are regarded as the individual's free response to a call by God to follow Jesus Christ more closely under the action of the Holy Spirit in a particular form of religious living. A person who lives a religious life according to vows they have made is called a votary or a votarist. The religious vow, being a public vow, is binding in Church law. One of its effects is that the person making it ceases to be free to marry. In the Catholic Church, by joining the consecrated life, one does not become a member of the hierarchy but becomes a member of a state of life which is neither clerical nor lay, the consecrated state. Nevertheless, the members of the religious orders and those hermits who are in Holy Orders are members of the hierarchy. Christianity In the Western Churches Since the 6th century, monks and nuns following the Rule of Saint Benedict have been making the Benedictine vow at their public profession of obedience (placing oneself under the direction of the abbot/abbess or prior/prioress), stability (committing oneself to a particular monastery), and "conversion of manners" (which includes celibate chastity and forgoing private ownership). During the 12th and 13th centuries mendicant orders emerged, such as the Franciscans and Dominicans, whose vocation emphasizing mobility and flexibility required them to drop the concept of "stability". They therefore profess chastity, poverty and obedience, like the members of many other orders and religious congregations founded subsequently. The public profession of the evangelical counsels (or counsels of perfection), confirmed by vow or other sacred bond, is a requirement according to Church Law. The "clerks regular" of the 16th century and after, such as the Jesuits and Redemptorists, followed this same general format, though some added a "fourth vow", indicating some special apostolate or attitude within the order. Fully professed Jesuits (known as "the professed of the fourth vow" within the order) take a vow of particular obedience to the Pope to undertake any mission laid out in their Formula of the Institute. Poor Clares additionally profess a vow of enclosure. The Missionaries of Charity, founded by St. Teresa of Calcutta centuries later (in the 1940s), take a fourth vow of special service to "the poorest of the poor". In the Catholic Church In the Catholic Church, the vows of members of religious orders and congregations are regulated by canons 654-658 of the Code of Canon Law. These are public vows, meaning vows accepted by a superior in the name of the Church, and they are usually of two durations: temporary, and, after a few years, final vows (permanent or "perpetual").
Depending on the order, temporary vows may be renewed a number of times before permission to take final vows is given. There are exceptions: the Jesuits' first vows are perpetual, for instance, and the Sisters of Charity take only temporary but renewable vows. Religious vows are of two varieties: simple vows and solemn vows. The highest level of commitment is exemplified by those who have taken their solemn, perpetual vows. There once were significant technical differences between them in canon law, but these differences were suppressed by the current Code of Canon Law in 1983, although the nominal distinction is maintained. Only a limited number of religious congregations may invite their members to solemn vows; most religious congregations are only authorized to take simple vows. Even in congregations with solemn vows, some members with perpetual vows may have taken them simply rather than solemnly. A perpetual vow can be superseded by the pope when he decides that a man under perpetual vows should become a bishop of the Church. In these cases, the ties to the order the new bishop had are dissolved as if the bishop had never been a member; hence, such a person as Pope Francis, for example, has had no formal ties to his old order for years. However, if the bishop was a member in good standing, he will be regarded, informally, as "one of us", and he will always be welcome in any of the order's houses. There are other forms of consecrated life in the Catholic Church for both men and women. They make a public profession of the evangelical counsels of chastity, poverty, and obedience, confirmed by a vow or other sacred bond, regulated by canon law, but live consecrated lives in the world (i.e. not as members of a religious institute). Such are the secular institutes, the diocesan hermits (canon 603) and the consecrated virgins (canon 604). These make a public profession of the evangelical counsels by a vow or other sacred bond. Also similar are the societies of apostolic life. Diocesan hermits individually profess the three evangelical counsels in the hands of their local ordinary. Consecrated virgins living in the world do not make religious vows, but express, by a public so-called sanctum propositum ("holy purpose"), their intention to follow Christ more closely. The prayer of consecration that constitutes such virgins "sacred persons" inserts them into the Ordo Virginum and likewise places them in the consecrated life in the Catholic Church. In the Lutheran Church In the Anglican Communion In the Eastern Orthodox Church Although the taking of vows was not a part of the earliest monastic foundations (the wearing of a particular monastic habit is the earliest recorded manifestation of those who had left the world), vows did come to be accepted as a normal part of the tonsure service in the Christian East. Previously, one would simply find a spiritual father and live under his direction. Once one put on the monastic habit, it was understood that one had made a lifetime commitment to God and would remain steadfast in it to the end. Over time, however, the formal tonsure and taking of vows were adopted to impress upon the monastic the seriousness of the commitment to the ascetic life he or she was adopting. The vows taken by Orthodox monks are: chastity, poverty, obedience, and stability. The vows are administered by the abbot or hieromonk who performs the service. Following a period of instruction and testing as a novice, a monk or nun may be tonsured with the permission of the candidate's spiritual father.
There are three degrees of monasticism in the Orthodox Church: the ryassaphore (one who wears the ryassa; however, there are no vows at this level), the Stavrophore (one who wears the cross), and the Schema-monk (one who wears the Great Schema; i.e., the full monastic habit). The one administering the tonsure must be an ordained priest, and must be a monk of at least the rank he is tonsuring the candidate into. However, a Bishop (who, in the Orthodox Church, must always be a monk) may tonsure a monk or nun into any degree regardless of his own monastic rank. Jain ethics and five vows Jainism teaches five ethical duties, which it calls five vows. These are called anuvratas (small vows) for Jain laypersons, and mahavratas (great vows) for Jain mendicants. For both, its moral precepts preface that the Jain has access to a guru (teacher, counsellor), deva (Jina, god), doctrine, and that the individual is free from five offences: doubts about the faith, indecisiveness about the truths of Jainism, sincere desire for Jain teachings, recognition of fellow Jains, and admiration for their spiritual pursuits. Such a person undertakes the following Five vows of Jainism: Ahiṃsā, "intentional non-violence" or "noninjury": The first major vow taken by Jains is to cause no harm to other human beings, as well as all living beings (particularly animals). This is the highest ethical duty in Jainism, and it applies not only to one's actions, but demands that one be non-violent in one's speech and thoughts. Satya, "truth": This vow is to always speak the truth. Neither lie, nor speak what is not true, and do not encourage others or approve anyone who speaks an untruth. Asteya, "not stealing": A Jain layperson should not take anything that is not willingly given. Additionally, a Jain mendicant should ask for permission to take it if something is being given. Brahmacharya, "celibacy": Abstinence from sex and sensual pleasures is prescribed for Jain monks and nuns. For laypersons, the vow means chastity and faithfulness to one's partner. Aparigraha, "non-possessiveness": This includes non-attachment to material and psychological possessions, avoiding craving and greed. Jain monks and nuns completely renounce property and social relations, own nothing and are attached to no one. Jainism also prescribes seven supplementary vows, including three guņa vratas (merit vows) and four śikşā vratas. The Sallekhana (or Santhara) vow is a "religious death" ritual vow observed at the end of life, historically by Jain monks and nuns, but rare in the modern age. In this vow, there is voluntary and gradual reduction of food and liquid intake to end one's life by choice and with dispassion. In Jainism this is believed to reduce negative karma that affects a soul's future rebirths. References Citations Sources External links Taking Monastic Vows Orthodox monks at Valaam Monastery Religious (Catholicism) Asceticism Christian worship and liturgy Religious practices Christian monasticism Eastern Christian monasticism Sacramentals
Religious vows
Biology
2,128
1,259,483
https://en.wikipedia.org/wiki/Powertec%20RPA
RPE RP-V8 is the name of a naturally aspirated V8 engine series developed by Radical Sportscars in Peterborough, England for use in the SR8 sportscar. The design is loosely based on the inline-four engine produced by Suzuki for their Hayabusa motorcycle. The company designed its own cylinder block and uses existing Suzuki cylinder heads. The two cylinder banks are inclined at a 72-degree angle. Lubrication is provided by a dry sump system. The engine is mated to a purpose-built transaxle designed by Quaife. There are currently two versions of the engine available, which have been updated for 2011. The first is the base model, which retains the original bore and stroke of the K8 Hayabusa design and produces . The second is the bored and stroked model, which produces up to . Specifications (from Radical Sportscars) Engine maximum motive power: maximum revs: 10,500 revolutions per minute steel flat-plane crankshaft Twin pump lubrication system Four scavenge pump dry sump system Rotary vane coolant pump Pre-engage starter motor Belt-driven 45 amp alternator 45 mm eight-throttle body induction system Bellhousing and clutch Twin-plate dry clutch Integral oil tank capacity: Integral rear engine mount Engine - gearbox spacing: Dry weight: Transaxle Six-speed constant mesh manual transmission transaxle Sequential shift - 6 forward, 1 back Torque-biased limited-slip differential Integral oil cooling pump Pressure fed lubrication system Dimensions Length: Width: Height: Dry weight: References Automobile engines Automotive technology tradenames Internal combustion piston engines V8 engines Gasoline engines by model
Powertec RPA
Technology
331
36,394,411
https://en.wikipedia.org/wiki/List%20of%20IOMMU-supporting%20hardware
This article contains a list of virtualization-capable IOMMU-supporting hardware. Intel based List of Intel and Intel-based hardware that supports VT-d (Intel Virtualization Technology for Directed I/O). CPUs Server The vast majority of Intel server chips of the Xeon E3, Xeon E5, and Xeon E7 product lines support VT-d. The first—and least powerful—Xeon to support VT-d was the E5502 launched Q1'09 with two cores at 1.86 GHz on a 45 nm process. Many or most Xeons subsequent to this support VT-d. See Advanced Search: feature=VT-d and segment=server for the full list. Desktop VT-d on i7 3930K and i7 3960X only works on C2 stepping. Motherboards Intel Gigabyte ASRock Asus (1) 48 GB with a Xeon X5680 CPU and 8 GB DIMMs MSI Chipset Intel Z490 Intel Z370 Intel Z170 Intel X99 Intel X79 Intel Q170 Intel Q150 Intel Q87 Intel Q77 Intel Q67 Intel Q45 Intel P55 Intel Q35, X38, X48, Q45 Intel HM87, QM87, HM86, C222, X99, C612, C226 AMD based List of AMD and AMD-based hardware that supports IOMMU. AMD's implementation of IOMMU is also known as AMD-Vi. Note that a motherboard using a chipset that supports IOMMU is not guaranteed to support it: the BIOS must also provide an ACPI IVRS table to enable its use. At least one Asus board is known to have faulty BIOSes with corrupt ACPI IVRS tables; for such cases, under Linux, it is possible to specify custom mappings to override the faulty and/or missing BIOS-provided ones through the use of the ivrs_ioapic and ivrs_hpet kernel parameters. CPUs List of AMD-Vi and AMD-RVI capable AMD CPUs. All Ryzen processors so far (1xxx-7xxx) support it. Desktop Server AMD Opteron (3000, 4000 and 6000 series at least) AMD EPYC Series of Products Dell PowerEdge 710 (4 × PCIe 8-lane slots; needs open-ended slots for 16-lane cards). One reported setup successfully ran libvirt/QEMU with an Nvidia 1650 for gaming and an Nvidia 720 for Kodi in two VMs simultaneously, with 7.1 HDMI passthrough and 2160p output. Motherboards Chipset AMD X570 AMD X470 AMD X370 AMD X300 AMD B350 AMD 890FX AMD 9-series AMD A55, A75, A85, A88X SR5650/SR5670/SR5690 Tested graphics card List of GPUs tested in virtual machines with IOMMU. qemu-kvm cannot assign a VGA device and other PCI devices at the same time, due to SeaBIOS limitations (fixed in git). AMD Note: As of 2021, newer AMD cards no longer have the FLR bug, which required a host reboot when the GPU was in an undefined state. https://github.com/gnif/vendor-reset Nvidia References Hardware virtualization Computer peripherals Lists of computer hardware
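One practical way to confirm that the firmware and kernel have actually enabled the IOMMU on a Linux host is to inspect the kernel's sysfs interface. A minimal Python sketch follows; the /sys/kernel/iommu_groups layout is the stock kernel interface, though output varies by machine.

```python
#!/usr/bin/env python3
# List IOMMU groups and the PCI devices in each group. An absent directory
# usually means the IOMMU is disabled in firmware or in the kernel.
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
if not groups.is_dir():
    print("No IOMMU groups found: IOMMU disabled or unsupported")
else:
    for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
        devices = sorted(d.name for d in (group / "devices").iterdir())
        print(f"group {group.name}: {', '.join(devices)}")
```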
List of IOMMU-supporting hardware
Technology
752
78,960,807
https://en.wikipedia.org/wiki/Littermate%20syndrome
Littermate syndrome (sometimes referred to as littermate dependency) is a blanket term for a variety of behavioral problems in dogs, which are attributed to their being raised alongside other dogs of the same age (regardless of whether they are actually from the same litter). The existence of littermate syndrome is disputed. Behaviors which have been connected to littermate syndrome include leash reactivity, fear aggression, neophobia, and separation anxiety relative to the other dog, as well as aggression towards each other and towards their owner. The American Kennel Club posits that littermate syndrome is the result of puppies "bond[ing] more closely with each other than with [their owner]", arguing that they will distract each other during training and thereby mutually impede their socialization. A 2019 article in the Journal of the International Association of Animal Behavior Consultants argues that there is no scientific evidence of littermate syndrome existing, only anecdotal evidence, and that the syndrome's various aspects all have different causes, including poor management of the dogs' environment and insufficient opportunities for behavioral enrichment; as well, the article emphasizes that many dogs are raised alongside their siblings without the occurrence of littermate syndrome, and further suggests that the label "syndrome" may wrongly give the impression that the behavioral problems are irremediable. Biologist and ethologist Marc Bekoff has declared it to be a "myth", specifying that while the relevant behaviors may be real, the overall phenomenon is "rare enough not to warrant being called a syndrome". References Dog training and behavior Ethology
Littermate syndrome
Biology
320
159,594
https://en.wikipedia.org/wiki/Methaqualone
Methaqualone is a hypnotic sedative. It was sold under the brand names Quaalude and Sopor, among others, which contained 300 mg of methaqualone, and as a combination drug under the brand name Mandrax, mostly in Europe, which contained 250 mg of methaqualone and 25 mg of diphenhydramine within the same tablet. Commercial production of methaqualone was halted in the mid-1980s due to widespread abuse and addictiveness. It is a member of the quinazolinone class. Medical use The sedative–hypnotic activity of methaqualone was recognized in 1955. Its use peaked in the early 1970s for the treatment of insomnia, and as a sedative and muscle relaxant. Methaqualone was not recommended for use during pregnancy and is in pregnancy category D. Similar to other GABAergic agents, methaqualone will produce tolerance and physical dependence with extended periods of use. Overdose An overdose of methaqualone can lead to coma and death. Additional effects are delirium, convulsions, hypertonia, hyperreflexia, vomiting, kidney failure, and death through cardiac or respiratory arrest. Methaqualone overdose resembles barbiturate poisoning, but with increased motor difficulties and a lower incidence of cardiac or respiratory depression. The standard single-tablet adult dose of the Quaalude brand of methaqualone was 300 mg when made by Lemmon. A dose of 8000 mg is lethal, and a dose as low as 2000 mg could induce a coma if taken with an alcoholic beverage. Pharmacology Pharmacodynamics Methaqualone primarily acts as a sedative, relieving anxiety and promoting sleep. Methaqualone binds to GABA-A receptors, and it shows negligible affinity for a wide array of other potential targets, including other receptors and neurotransmitter transporters. Methaqualone is a positive allosteric modulator at many subtypes of GABA-A receptor, similar to classical benzodiazepines such as diazepam. GABA-A receptors are inhibitory, so methaqualone tends to inhibit action potentials, similar to GABA itself or other GABA-A agonists. Unlike most benzodiazepines, methaqualone acts as a negative allosteric modulator at a few GABA-A receptor subtypes, which tends to cause an excitatory response in neurons expressing those receptors. Because methaqualone can be either excitatory or inhibitory depending on the subunit composition of the GABA-A receptor, it can be characterized as a mixed GABA-A receptor modulator. The methaqualone binding site is distinct from the benzodiazepine, barbiturate, and neurosteroid binding sites on the GABA-A receptor complex, and it may partially overlap with the etomidate binding site. Pharmacokinetics Methaqualone peaks in the bloodstream within several hours, with a half-life of 20–60 hours. History Methaqualone was first synthesized in India in 1951 by Indra Kishore Kacker and Syed Husain Zaheer, who were conducting research to find new antimalarial medications. In 1962, methaqualone was patented in the United States by Wallace and Tiernan. By 1965, it was the most commonly prescribed sedative in Britain, where it was sold legally under the names Malsed, Malsedin, and Renoval. In 1965, a methaqualone/antihistamine combination was sold as the sedative drug Mandrax in Europe, by Roussel Laboratories (now part of Sanofi S.A.). In 1972, it was the sixth-bestselling sedative in the US, where it was legal under the brand name Quaalude. Quaalude in the United States was originally manufactured in 1965 by the pharmaceutical firm William H. Rorer, Inc., based in Fort Washington, Pennsylvania.
The drug name "Quaalude" is a portmanteau of the words "quiet interlude", and shared a stylistic reference to another drug marketed by the firm, Maalox. In 1978, Rorer sold the rights to manufacture Quaalude to the Lemmon Company of Sellersville, Pennsylvania. At that time, Rorer chairman John Eckman commented on Quaalude's bad reputation stemming from illegal manufacture and use of methaqualone, and illegal sale and use of legally prescribed Quaalude: "Quaalude accounted for less than 2% of our sales, but created 98% of our headaches." Both companies still regarded Quaalude as an excellent sleeping drug. Lemmon, well aware of Quaalude's public image problems, used advertisements in medical journals to urge physicians "not to permit the abuses of illegal users to deprive a legitimate patient of the drug". Lemmon also marketed a small quantity under another name, Mequin, so doctors could prescribe the drug without the negative connotations. The rights to Quaalude were held by the JB Roerig & Company division of Pfizer, before the drug was discontinued in the United States in 1985, mainly due to its psychological addictiveness, widespread abuse, and illegal recreational use. A 2024 Hungarian investigative documentary reported on large-scale production and sales of the drug by the Hungarian People's Republic to the United States in the 1970s and 1980s. It asserts that a Hungarian state-owned company utilized connections to Colombian drug cartels to facilitate the sale of extraordinary amounts to the United States. Society and culture Methaqualone became increasingly popular as a recreational drug and club drug in the late 1960s and 1970s, known variously as "ludes" or "disco biscuits" due to its widespread use during the popularity of disco in the 1970s, or "sopers" (also "soaps") in the United States and Canada, and "mandrakes" and "mandies" in the United Kingdom, Australia and New Zealand. The substance was sold both as a free base and as a salt (hydrochloride). Brand names It was sold under the brand name Quaalude (sometimes stylized "Quāālude") in the United States and Canada, and Mandrax in the UK, South Africa, and Australia. Regulation Methaqualone was initially placed in Schedule I as defined by the UN Convention on Psychotropic Substances, but was moved to Schedule II in 1979. In Canada, methaqualone is listed in Schedule III of the Controlled Drugs and Substances Act and requires a prescription, but it is no longer manufactured. Methaqualone is banned in India. In the United States it was withdrawn from the market in 1983 and made a Schedule I drug in 1984. Recreational Methaqualone became increasingly popular as a recreational drug in the late 1960s and 1970s, known variously as "ludes" or "sopers" and "soaps" (sopor is a Latin word for sleep) in the United States and "mandrakes" and "mandies" in the UK, Australia and New Zealand. The drug was more tightly regulated in Britain under the Misuse of Drugs Act 1971 and in the U.S. from 1973. It was withdrawn from many developed markets in the early 1980s. In the United States it was withdrawn in 1983 and made a Schedule I drug in 1984. It has a DEA ACSCN of 2565, and in 2022 the aggregate annual manufacturing quota for the United States was 60 grams. Mention of its possible use in some types of cancer and AIDS treatments has periodically appeared in the literature since the late 1980s. Research does not appear to have reached an advanced stage. 
The DEA has also added the methaqualone analogue mecloqualone (also a result of some incomplete clandestine syntheses) to Schedule I as ACSCN 2572, with a manufacturing quota of 30 g. Gene Haislip, the former head of the Chemical Control Division of the Drug Enforcement Administration (DEA), told the PBS documentary program Frontline, "We beat 'em." By working with governments and manufacturers around the world, the DEA was able to halt production and, Haislip said, "eliminated the problem". Methaqualone was manufactured in the United States under the name Quaalude by the pharmaceutical firms Rorer and Lemmon with the numbers 714 stamped on the tablet, so people often referred to Quaalude as 714's, "Lemmons", or "Lemmon 7's". Methaqualone was also manufactured in the US under the trade names Sopor and Parest. After the legal manufacture of the drug ended in the United States in 1982, underground laboratories in Mexico continued the illegal manufacture of methaqualone throughout the 1980s, continuing the use of the "714" stamp, until their popularity waned in the early 1990s. Drugs purported to be methaqualone are in a significant majority of cases found to be inert, or contain diphenhydramine or benzodiazepines. Illicit methaqualone is one of the most commonly used recreational drugs in South Africa. Manufactured clandestinely, often in India, it comes in tablet form, but is smoked with marijuana. This method of ingestion is known as "white pipe". It is popular elsewhere in Africa and in India. Chemical weapon – Project Coast Illegal efforts to weaponize methaqualone have occurred. During the 1980s, the apartheid regime in South Africa ordered the covert manufacture of a large amount of methaqualone at the front company Delta G Scientific Company, as part of a secret chemical weapons program known as Project Coast. Methaqualone was given the codename MosRefCat (Mossgas Refinery Catalyst). Details of this activity came to light during the 1998 hearings of the post-apartheid Truth and Reconciliation Commission. Sexual assault Actor Bill Cosby admitted in a 2015 civil deposition to giving methaqualone to women before allegedly sexually assaulting them. Film director Roman Polanski was convicted in 1977 of sexually assaulting a 13-year-old girl after giving her alcohol and methaqualone. Popular culture Quaaludes are mentioned in the 1983 film Scarface, when Al Pacino's character Tony Montana says, "Another quaalude... she'll love me again." Quaaludes are also referenced extensively in the 2013 film The Wolf of Wall Street. Parody glam rocker "Quay Lewd", one of the costumed performance personae used by Tubes singer Fee Waybill, was named after the drug. Many songs also refer to quaaludes, including the following: David Bowie's "Time" ("Time, in quaaludes and red wine") and "Rebel Rebel" ("You got your cue line/And a handful of 'ludes"); "Cosmic Doo Doo" by the American country music singer-songwriter Blaze Foley ("Got some quaaludes in their purse"); "That Smell" by Lynyrd Skynyrd ("Can't speak a word when you're full of 'ludes"); "Flakes" by Frank Zappa ("(Wanna buy some mandies, Bob?)"); "Straight Edge" by Minor Threat ("Laugh at the thought of eating ludes"); and "Kind of Girl" by French Montana ("That high got me feelin' like the Quaaludes from Wolf of Wall Street"). Season 18 of Law & Order: Special Victims Unit addresses Quaalude administration as a date rape drug in episode 9, "Decline and Fall", which aired January 18, 2017. 
In True Detective season 1, Rust Cohle's use of Quaaludes is briefly mentioned in several episodes. It is also used by Patrick Melrose in Edward St Aubyn's 1992 novel Bad News. Further reading References External links Erowid Vault – Methaqualone (Quaaludes) GABAA receptor positive allosteric modulators Quinazolinones Sedatives Hypnotics Amidines Withdrawn drugs South Africa and weapons of mass destruction 2-Tolyl compounds
Methaqualone
Chemistry,Biology
2,570
5,082,160
https://en.wikipedia.org/wiki/Autapomorphy
In phylogenetics, an autapomorphy is a distinctive feature, known as a derived trait, that is unique to a given taxon. That is, it is found in only one taxon and in no others, not even the outgroup taxa most closely related to the focal taxon (which may be a species, a family or, in general, any clade). It can therefore be considered as an apomorphy in relation to a single taxon. The word autapomorphy, introduced in 1950 by German entomologist Willi Hennig, is derived from the Greek words αὐτός, autos "self"; ἀπό, apo "away from"; and μορφή, morphḗ "shape". Discussion Because autapomorphies are only present in a single taxon, they do not convey information about relationships. Therefore, autapomorphies are not useful for inferring phylogenetic relationships. However, autapomorphy, like synapomorphy and plesiomorphy, is a relative concept depending on the taxon in question. An autapomorphy at a given level may well be a synapomorphy at a less-inclusive level. An example of an autapomorphy can be described in modern snakes. Snakes have lost the two pairs of legs that characterize all of Tetrapoda, and the closest taxa to Ophidia – as well as their common ancestors – all have two pairs of legs. Therefore, the Ophidia taxon presents an autapomorphy with respect to its absence of legs. The autapomorphic species concept is one of many methods that scientists might use to define and distinguish species from one another. This definition assigns species on the basis of the amount of divergence associated with reproductive incompatibility, which is measured essentially by the number of autapomorphies. This grouping method is often referred to as the "monophyletic species concept" or the "phylospecies" concept and was popularized by D.E. Rosen in 1979. Within this definition, a species is seen as "the least inclusive monophyletic group definable by at least one autapomorphy". While this model of speciation is useful in that it avoids non-monophyletic groupings, it has its criticisms as well. N.I. Platnick, for example, believes the autapomorphic species concept to be inadequate because it allows for the possibility of reproductive isolation and speciation while revoking the "species" status of the mother population. In other words, if a peripheral population breaks away and becomes reproductively isolated, it would conceivably need to develop at least one autapomorphy to be recognized as a different species. If this can happen without the larger mother population also developing a new autapomorphy, then the mother population cannot remain a species under the autapomorphic species concept: it would no longer have any apomorphies not also shared by the daughter species. Phylogenetic similarities: The following terms are used to describe the different patterns of ancestral and derived character or trait states. Homoplasy in biological systematics is when a trait has been gained or lost independently in separate lineages during evolution. This convergent evolution leads to species independently sharing a trait that is different from the trait inferred to have been present in their common ancestor. Parallel homoplasy – a derived trait present in two groups or species whose common ancestor lacked it, arising through convergent evolution. Reverse homoplasy – a trait present in an ancestor but not in its direct descendants, which reappears in later descendants. Apomorphy – a derived trait. 
An apomorphy shared by two or more taxa and inherited from a common ancestor is a synapomorphy; an apomorphy unique to a given taxon is an autapomorphy. Synapomorphy/Homology – a derived trait that is found in some or all terminal groups of a clade, and inherited from a common ancestor, for which it was an autapomorphy (i.e., not present in its immediate ancestor). Underlying synapomorphy – a synapomorphy that has been lost again in many members of the clade. If lost in all but one, it can be hard to distinguish from an autapomorphy. Autapomorphy – a distinctive derived trait that is unique to a given taxon or group. Symplesiomorphy – an ancestral trait shared by two or more taxa. Plesiomorphy – a symplesiomorphy discussed in reference to a more derived state. Pseudoplesiomorphy – a trait that cannot be identified as either a plesiomorphy or an apomorphy that is a reversal. Reversal – the loss of a derived trait present in an ancestor and the re-establishment of a plesiomorphic trait. Convergence – independent evolution of a similar trait in two or more taxa. Hemiplasy References Phylogenetics Evolutionary biology terminology
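The point that autapomorphies carry no grouping information is easiest to see with a concrete case. Below is a minimal Python sketch; the character matrix and taxon names are hypothetical illustrations (loosely echoing the snake example above), not data from any published study. A character whose derived state (coded 1) occurs in exactly one taxon is flagged as an autapomorphy and therefore cannot unite any two taxa into a clade.

```python
# Flag characters whose derived state (1) is unique to a single taxon.
# Matrix and taxon names are hypothetical illustrations.
matrix = {
    "snake":  {"legs_lost": 1, "amniote_egg": 1},
    "lizard": {"legs_lost": 0, "amniote_egg": 1},
    "turtle": {"legs_lost": 0, "amniote_egg": 1},
}

def autapomorphies(matrix):
    taxa = list(matrix)
    characters = matrix[taxa[0]]
    result = {}
    for ch in characters:
        bearers = [t for t in taxa if matrix[t][ch] == 1]
        if len(bearers) == 1:        # derived state in exactly one taxon
            result[ch] = bearers[0]  # -> autapomorphy of that taxon
    return result

print(autapomorphies(matrix))        # {'legs_lost': 'snake'}
```

Here "amniote_egg" is shared by all three sampled taxa, so within this sample it conveys no grouping information either (it is a symplesiomorphy at this level), while "legs_lost" is an autapomorphy of the snake.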
Autapomorphy
Biology
1,084
20,271,419
https://en.wikipedia.org/wiki/Divergence%20%28computer%20science%29
In computer science, a computation is said to diverge if it does not terminate or terminates in an exceptional state. Otherwise it is said to converge. In domains where computations are expected to be infinite, such as process calculi, a computation is said to diverge if it fails to be productive (i.e. to continue producing an action within a finite amount of time). Definitions Various subfields of computer science use varying, but mathematically precise, definitions of what it means for a computation to converge or diverge. Rewriting In abstract rewriting, an abstract rewriting system is called convergent if it is both confluent and terminating. The notation t ↓ n means that t reduces to normal form n in zero or more reductions, t↓ means t reduces to some normal form in zero or more reductions, and t↑ means t does not reduce to a normal form; the latter is impossible in a terminating rewriting system. In the lambda calculus an expression is divergent if it has no normal form. Denotational semantics In denotational semantics an object function f : A → B can be modelled as a mathematical function f_⊥ : A ∪ {⊥} → B ∪ {⊥}, where ⊥ (bottom) indicates that the object function or its argument diverges. Concurrency theory In the calculus of communicating sequential processes (CSP), divergence occurs when a process performs an endless series of hidden actions. For example, consider the following process, defined by CSP notation: Clock = tick → Clock. The traces of this process are defined as: traces(Clock) = {⟨⟩, ⟨tick⟩, ⟨tick, tick⟩, …} = {tick}*. Now, consider the following process, which hides the tick event of the Clock process: P = Clock \ tick. As P cannot do anything other than perform hidden actions forever, it is equivalent to the process that does nothing but diverge, denoted div. One semantic model of CSP is the failures–divergences model, which refines the stable failures model by distinguishing processes based on the sets of traces after which they can diverge. See also Infinite loop Termination analysis Notes References J. M. R. Martin and S. A. Jassim (1997). "How to Design Deadlock-Free Networks Using CSP and Verification Tools: A Tutorial Introduction" in Proceedings of the WoTUG-20. Programming language theory Process (computing) Rewriting systems Lambda calculus Denotational semantics
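To make the distinction concrete, here is a minimal Python sketch (an illustration of the definitions above, not drawn from the cited literature) contrasting a converging computation, a diverging one, and a productive infinite computation of the kind process calculi treat as non-divergent:

```python
import itertools

# Converges: terminates with a value.
def converging(n):
    return sum(range(n))

# Diverges: loops forever without ever producing an observable action.
# (Do not actually call this.)
def diverging():
    while True:
        pass

# Productive: never terminates, but keeps producing actions, so in
# process-calculus terms it is *not* divergent (cf. Clock = tick -> Clock).
def clock():
    while True:
        yield "tick"

print(converging(10))                       # 45
print(list(itertools.islice(clock(), 3)))   # ['tick', 'tick', 'tick']

# Under Python's strict evaluation, the lambda-calculus Omega combinator
# (lambda x: x(x))(lambda x: x(x)) also diverges: it exhausts the
# recursion limit and terminates in an exceptional state.
```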
Divergence (computer science)
Technology
460
40,464,294
https://en.wikipedia.org/wiki/Hydrologic%20Research%20Center%20%28US%29
Hydrologic Research Center (HRC), founded in 1993, is a public-benefit non-profit research, technology transfer, and science cooperation and education organization dedicated to the development of effective and sustainable solutions to global water issues. HRC's purpose is to provide a conduit for academic and other up-to-date research to be made suitable for effective application to field operational problems that involve water management and flood disaster mitigation. The vision of HRC is to assist in limiting societal vulnerability and preserving resiliency in basic human needs, livelihoods, agriculture, water resources, healthy ecosystems, and natural resources. Flash floods and floods are the most common natural disasters, accounting for 40% of all natural disasters, and are the leading cause of natural-disaster fatalities worldwide. HRC partners with local governments in over 70 countries and other trusted nongovernmental organizations to promote sustainable programs that include education in flash floods, management of water resources, and the development of Flash Flood Guidance Systems to provide vital early warning of flash floods. Research Journal Journal of Hydrology, Elsevier Hydrology Research (Print ISSN 0029-1277), IWA Publishing Journal of the American Water Resources Association Online, John Wiley & Sons, Inc. See also Flash flood Hydrology Flash Flood Guidance Systems Meteorology Flash flood watch Flash flood warning References External links Hydrologic Research Center Flood control in the United States Water organizations Hydrology organizations Environmental organizations based in California Environmental research institutes International environmental organizations Scientific organizations based in the United States Non-profit organizations based in San Diego International water associations
Hydrologic Research Center (US)
Environmental_science
316
15,323
https://en.wikipedia.org/wiki/Internet%20Protocol
The Internet Protocol (IP) is the network layer communications protocol in the Internet protocol suite for relaying datagrams across network boundaries. Its routing function enables internetworking, and essentially establishes the Internet. IP has the task of delivering packets from the source host to the destination host solely based on the IP addresses in the packet headers. For this purpose, IP defines packet structures that encapsulate the data to be delivered. It also defines addressing methods that are used to label the datagram with source and destination information. IP was the connectionless datagram service in the original Transmission Control Program introduced by Vint Cerf and Bob Kahn in 1974, which was complemented by a connection-oriented service that became the basis for the Transmission Control Protocol (TCP). The Internet protocol suite is therefore often referred to as TCP/IP. The first major version of IP, Internet Protocol version 4 (IPv4), is the dominant protocol of the Internet. Its successor is Internet Protocol version 6 (IPv6), which has been in increasing deployment on the public Internet since around 2006. Function The Internet Protocol is responsible for addressing host interfaces, encapsulating data into datagrams (including fragmentation and reassembly) and routing datagrams from a source host interface to a destination host interface across one or more IP networks. For these purposes, the Internet Protocol defines the format of packets and provides an addressing system. Each datagram has two components: a header and a payload. The IP header includes a source IP address, a destination IP address, and other metadata needed to route and deliver the datagram. The payload is the data that is transported. This method of nesting the data payload in a packet with a header is called encapsulation. IP addressing entails the assignment of IP addresses and associated parameters to host interfaces. The address space is divided into subnets, involving the designation of network prefixes. IP routing is performed by all hosts, as well as routers, whose main function is to transport packets across network boundaries. Routers communicate with one another via specially designed routing protocols, either interior gateway protocols or exterior gateway protocols, as needed for the topology of the network. Addressing methods There are four principal addressing methods in the Internet Protocol: unicast, broadcast, multicast, and anycast. Version history In May 1974, the Institute of Electrical and Electronics Engineers (IEEE) published a paper entitled "A Protocol for Packet Network Intercommunication". The paper's authors, Vint Cerf and Bob Kahn, described an internetworking protocol for sharing resources using packet switching among network nodes. A central control component of this model was the Transmission Control Program that incorporated both connection-oriented links and datagram services between hosts. The monolithic Transmission Control Program was later divided into a modular architecture consisting of the Transmission Control Protocol and User Datagram Protocol at the transport layer and the Internet Protocol at the internet layer. The model became known as the Department of Defense (DoD) Internet Model and Internet protocol suite, and informally as TCP/IP. 
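As a concrete illustration of the encapsulation described above, the following Python sketch packs a minimal 20-byte IPv4 header (no options) in front of a payload. All field values, addresses, and the payload are arbitrary examples; the checksum field is left at zero here (its computation is sketched further below):

```python
import socket
import struct

payload = b"hello"
version_ihl = (4 << 4) | 5           # version 4, header length = 5 x 32-bit words
total_length = 20 + len(payload)

header = struct.pack(
    "!BBHHHBBH4s4s",
    version_ihl,
    0,                               # DSCP / ECN
    total_length,
    0x1234,                          # identification (arbitrary)
    0,                               # flags + fragment offset
    64,                              # TTL
    17,                              # protocol (17 = UDP)
    0,                               # header checksum (left zero here)
    socket.inet_aton("192.0.2.1"),   # source (documentation address)
    socket.inet_aton("192.0.2.2"),   # destination (documentation address)
)

datagram = header + payload          # encapsulation: header + payload
print(len(datagram))                 # 25 bytes
```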
The following Internet Experiment Note (IEN) documents describe the evolution of the Internet Protocol into the modern version of IPv4: IEN 2 Comments on Internet Protocol and TCP (August 1977) describes the need to separate the TCP and Internet Protocol functionalities (which were previously combined). It proposes the first version of the IP header, using 0 for the version field. IEN 26 A Proposed New Internet Header Format (February 1978) describes a version of the IP header that uses a 1-bit version field. IEN 28 Draft Internetwork Protocol Description Version 2 (February 1978) describes IPv2. IEN 41 Internetwork Protocol Specification Version 4 (June 1978) describes the first protocol to be called IPv4. The IP header is different from the modern IPv4 header. IEN 44 Latest Header Formats (June 1978) describes another version of IPv4, also with a header different from the modern IPv4 header. IEN 54 Internetwork Protocol Specification Version 4 (September 1978) is the first description of IPv4 using the header that would become standardized in 1980 as RFC 760. IEN 80 IEN 111 IEN 123 IEN 128/RFC 760 (1980) IP versions 1 to 3 were experimental versions, designed between 1973 and 1978. Versions 2 and 3 supported variable-length addresses ranging between 1 and 16 octets (between 8 and 128 bits). An early draft of version 4 supported variable-length addresses of up to 256 octets (up to 2048 bits) but this was later abandoned in favor of a fixed-size 32-bit address in the final version of IPv4. This remains the dominant internetworking protocol in use in the Internet Layer; the number 4 identifies the protocol version, carried in every IP datagram. IPv4 is defined in RFC 791 (1981). Version number 5 was used by the Internet Stream Protocol, an experimental streaming protocol that was not adopted. The successor to IPv4 is IPv6. IPv6 was a result of several years of experimentation and dialog during which various protocol models were proposed, such as TP/IX, PIP and TUBA (TCP and UDP with Bigger Addresses). Its most prominent difference from version 4 is the size of the addresses. While IPv4 uses 32 bits for addressing, yielding c. 4.3 billion (2^32) addresses, IPv6 uses 128-bit addresses providing c. 3.4×10^38 (2^128) addresses. Although adoption of IPv6 has been slow, most countries in the world show significant adoption of IPv6, with over 41% of Google's traffic being carried over IPv6 connections. The assignment of the new protocol as IPv6 was uncertain until due diligence assured that IPv6 had not been used previously. Other Internet Layer protocols have been assigned version numbers, such as 7 (IP/TX), 8 and 9 (historic). Notably, on April 1, 1994, the IETF published an April Fools' Day RFC about IPv9. IPv9 was also used in an alternate proposed address space expansion called TUBA. A 2004 Chinese proposal for an IPv9 protocol appears to be unrelated to all of these, and is not endorsed by the IETF. Reliability The design of the Internet protocol suite adheres to the end-to-end principle, a concept adapted from the CYCLADES project. Under the end-to-end principle, the network infrastructure is considered inherently unreliable at any single network element or transmission medium and is dynamic in terms of the availability of links and nodes. No central monitoring or performance measurement facility exists that tracks or maintains the state of the network. For the benefit of reducing network complexity, the intelligence in the network is located in the end nodes. 
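A quick back-of-the-envelope check of the address-space figures quoted above:

```python
# IPv4 and IPv6 address-space sizes follow directly from the address widths.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"{ipv4_addresses:,}")     # 4,294,967,296  (~4.3 billion)
print(f"{ipv6_addresses:.3e}")   # 3.403e+38
```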
As a consequence of this design, the Internet Protocol only provides best-effort delivery and its service is characterized as unreliable. In network architectural parlance, it is a connectionless protocol, in contrast to connection-oriented communication. Various fault conditions may occur, such as data corruption, packet loss and duplication. Because routing is dynamic, meaning every packet is treated independently, and because the network maintains no state based on the path of prior packets, different packets may be routed to the same destination via different paths, resulting in out-of-order delivery to the receiver. All fault conditions in the network must be detected and compensated by the participating end nodes. The upper layer protocols of the Internet protocol suite are responsible for resolving reliability issues. For example, a host may buffer network data to ensure correct ordering before the data is delivered to an application. IPv4 provides safeguards to ensure that the header of an IP packet is error-free. A routing node discards packets that fail a header checksum test. Although the Internet Control Message Protocol (ICMP) provides notification of errors, a routing node is not required to notify either end node of errors. IPv6, by contrast, operates without header checksums, since current link layer technology is assumed to provide sufficient error detection. Link capacity and capability The dynamic nature of the Internet and the diversity of its components provide no guarantee that any particular path is actually capable of, or suitable for, performing the data transmission requested. One of the technical constraints is the size of data packets possible on a given link. Facilities exist to examine the maximum transmission unit (MTU) size of the local link and Path MTU Discovery can be used for the entire intended path to the destination. The IPv4 internetworking layer automatically fragments a datagram into smaller units for transmission when the link MTU is exceeded. IP provides re-ordering of fragments received out of order. An IPv6 network does not perform fragmentation in network elements, but requires end hosts and higher-layer protocols to avoid exceeding the path MTU. The Transmission Control Protocol (TCP) is an example of a protocol that adjusts its segment size to be smaller than the MTU. The User Datagram Protocol (UDP) and ICMP disregard MTU size, thereby forcing IP to fragment oversized datagrams. Security During the design phase of the ARPANET and the early Internet, the security aspects and needs of a public, international network could not be adequately anticipated. Consequently, many Internet protocols exhibited vulnerabilities highlighted by network attacks and later security assessments. In 2008, a thorough security assessment and proposed mitigation of problems was published. The IETF has been pursuing further studies. See also ICANN IP routing List of IP protocol numbers List of IP version numbers Next-generation network New IP (proposal) References External links Internet layer protocols Internet
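The header checksum test mentioned above uses the Internet checksum: the 16-bit ones' complement of the ones' complement sum of the header's 16-bit words (per RFC 791, with the general algorithm described in RFC 1071). A minimal sketch:

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    # Ones' complement of the ones' complement sum of all 16-bit words.
    # The checksum field itself must be zeroed before computing.
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack(f"!{len(header) // 2}H", header))
    while total >> 16:                       # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# A sender patches the result into bytes 10-11 of the header, e.g. for the
# header built in the earlier sketch:
#   header = header[:10] + struct.pack("!H", ipv4_checksum(header)) + header[12:]
# A receiver verifies by checksumming the header *including* the stored
# value: a valid header yields 0.
```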
Internet Protocol
Technology
1,955
20,329,980
https://en.wikipedia.org/wiki/Agaricus%20abruptibulbus
Agaricus abruptibulbus is a species of mushroom in the genus Agaricus. It is commonly known as the abruptly-bulbous agaricus or the flat-bulb mushroom. First described by the mycologist Charles Horton Peck, this bulbous-stemmed edible species smells slightly of anise or bitter almond, and turns yellow when bruised or cut. The mushroom is medium-sized, with a white, yellow-staining cap on a slender stipe that has a wide, flat bulb on the base. Taxonomy and classification The species was originally named Agaricus abruptus by American mycologist Charles Horton Peck in 1900. In his 1904 publication Report of the State Botanist, he changed the name to Agaricus abruptibulbus. He explained that Elias Magnus Fries had earlier named a species in the subgenus Flammula, which he called Agaricus abruptus; the subgenus was later raised to the rank of genus, and the species was given the name Flammula abruptus. Under the transitioning nomenclatural conventions of the time, it was unclear if Agaricus abruptus would remain available for use, so Peck changed the name. Agaricus abruptibulbus belongs to Agaricus section Arvenses, a clade within the genus Agaricus. Along with A. abruptibulbus, section Arvenses contains the species A. silvicola, A. arvensis, and formerly also A. semotus. Some American authors consider this species to be synonymous with A. silvicola, while some in Europe have synonymized it with the similar species A. essettei. American mycologists Steve Trudell and Joseph Ammirati noted in a 2009 field guide: "The name A. abruptibulbus has been applied to forms with bulbous stipe bases, but variation in stipe shape is so great that the use of this name has been largely abandoned." Description The cap is whitish in color and convex in shape, reaching up to in diameter, sometimes with an umbo. After being scratched, cut, or bruised, the cap turns yellow. The stipe is long by thick and bulbous at the base. A large, white annular ring is present on the stipe. The gill attachment is free, and the color is initially grayish but turns brownish after the spores have developed. Specimens smell slightly of anise. The spore print is brown to purple-brown. Spores are elliptical in shape, and are 6–8 by 4–5 μm. The surface of the cap will stain yellow if a drop of dilute potassium hydroxide is applied. This species has a positive Schäffer's reaction, resulting in an orange color. Similar species Agaricus silvicola is very similar in appearance and also grows in woodlands, but it may be distinguished by the lack of an abruptly bulbous base. Agaricus arvensis has a more robust stature, lacks the bulbous base, and grows in grassy open areas like meadows and fields. It has larger spores than A. abruptibulbus, typically 7.0–9.2 by 4.4–5.5 μm. Cadmium bioaccumulation Agaricus abruptibulbus is known to bioaccumulate the toxic element cadmium—in other words, it absorbs cadmium faster than it loses it—so specimens collected in the wild often have higher concentrations of this element than the soil in which they are found. Furthermore, when cultivated in the laboratory, the presence of cadmium in the culture medium stimulates growth up to 100% in the presence of 0.5 mg cadmium per liter of nutrient medium. It is believed that the cadmium-binding ability comes from a low molecular weight metal-binding protein named cadmium-mycophosphatin. Distribution The fungus has been reported in New York, Mississippi, Quebec, Canada, Germany, China, and India. 
See also List of Agaricus species References External links Roger's Mushrooms Mushroom Hobby Mushroom of Quebec (French language, with several photographs) abruptibulbus Edible fungi Fungi described in 1900 Fungi of Europe Fungi of North America Taxa named by Charles Horton Peck Fungus species
Agaricus abruptibulbus
Biology
870
14,003,306
https://en.wikipedia.org/wiki/Second%20battle%20of%20Tembien
The second battle of Tembien was fought on the northern front of the Second Italo-Ethiopian War. This battle consisted of attacks by Italian forces under Marshal Pietro Badoglio on Ethiopian forces under Ras Kassa Haile Darge and Ras Seyoum Mangasha. This battle, which resulted in a decisive defeat of Ethiopian forces, was primarily fought in the area around the Tembien Province. The battle is notable for the large-scale use of mustard gas by the Italians. Background On 3 October 1935, General Emilio De Bono advanced into Ethiopia from Eritrea without a declaration of war. De Bono advanced towards Addis Ababa with a force of approximately 100,000 Italian soldiers and 25,000 Eritreans. In December, after a brief period of inactivity and minor setbacks for the Italians, De Bono was replaced by Badoglio. Ethiopian Emperor Haile Selassie launched the Christmas Offensive late in the year to test Badoglio. Though initially successful, the offensive had overly ambitious goals. As the progress of the Christmas Offensive slowed, Italian plans to renew the advance on the northern front got under way. In addition to being granted permission to use poison gas, Badoglio received additional ground forces. The elements of the Italian III Corps and the Italian IV Corps arrived in Eritrea during early 1936. By mid-January 1936, Badoglio was ready to renew the Italian advance. In response to Mussolini's frequent exhortations, Badoglio cabled him: "It has always been my rule to be meticulous in preparation so that I may be swift in action." Preparation In early January 1936 Ethiopian forces were in the hills overlooking the Italian positions and launching attacks against them on a regular basis. The Ethiopians facing the Italians were in three groups. In the center, near Abbi Addi and along the Beles River in the Tembien, were Ras Kassa with approximately 40,000 men and Ras Seyoum with about 30,000 men. On the Ethiopian right was Ras Mulugeta Yeggazu and his army of approximately 80,000 men in positions atop Amba Aradam. Ras Imru Haile Selassie with approximately 40,000 men was on the Ethiopian left in the area around Seleclaca in Shire Province. Only a minority of the Ethiopian soldiers had received military training; there were few modern weapons, and less than one rifle per man. Badoglio had five army corps at his disposal. On his right, he had the Italian IV Corps and the Italian II Corps facing Ras Imru in the Shire. In the Italian center was the Eritrean Corps facing Ras Kassa and Ras Seyoum in the Tembien. Facing Ras Mulugeta, dug into Amba Aradam, were the Italian I Corps and III Corps. Italian dictator Benito Mussolini was impatient for an Italian offensive to get under way. Initially, Badoglio saw the destruction of Ras Mulugeta's army as his first priority. This force would have to be dislodged from its strong positions on Amba Aradam in order for the Italians to continue the advance towards Addis Ababa. But Ras Kassa and Ras Seyoum were exerting such pressure from the Tembien that Badoglio decided that he would have to deal with them first. If the Ethiopian center were to advance successfully, the I Corps and III Corps facing Ras Mulugeta would be cut off from reinforcement and resupply. From 20 January to 24 January, the first battle of Tembien was fought. It was a fierce engagement, with the Ethiopians cutting off the Italian 1st CC.NN. Division "23 Marzo" for several days and Badoglio drawing up contingency plans for withdrawing the entire army. 
Eventually, Italian pressure and the large-scale use of mustard gas told, and the threat Ras Kassa posed to the I Corps and III Corps was neutralized. From 10 to 19 February, Badoglio attacked the army of Ras Mulugeta, dug in on Amba Aradam, during the Battle of Enderta. The Italians made good use of their artillery and aerial superiority, and again made heavy use of mustard gas. Ras Mulugeta was killed, and his army collapsed and was destroyed as a fighting force in the ensuing rout. With this completed, Badoglio turned back to the center to complete what he had started with the first battle of Tembien. He would leave the army of Ras Imru Haile Selassie for another day. Badoglio now had three times the men fielded by the three remaining Ethiopian armies; extra divisions had arrived in Eritrea and the network of roads he needed to guarantee resupply had been all but completed. Even so, Badoglio stockpiled 48,000 shells and 7 million rounds of ammunition in forward areas before he started the attack. Badoglio planned to send the III Corps towards Gaela to cut off the main line of withdrawal for Ras Kassa. After establishing itself across the roads running south from the Abbi Addi region, the Eritrean Corps would advance south from the Worsege (Italian: Uarieu) and Ab'aro passes. These moves by the III Corps and the Eritrean Corps would place the armies of Ras Kassa and Ras Seyoum in a trap. It is possible that Ras Kassa anticipated Badoglio's plan. He sent a wireless message to Emperor Haile Selassie requesting permission to withdraw from the Tembien. The request was superfluous; Selassie had already indicated that Ras Kassa should fall back towards Amba Aradam and link up with the remnants of Ras Mulugeta's army. Battle In accordance with Badoglio's plan, the Eritrean Corps advanced from the mountain passes and the III Corps moved up from the Geba Valley. The second battle of the Tembien was fought on terrain which favoured the defence. It was a region of forests, ravines, and torrents where the Italians were unable to deploy artillery properly or use armoured vehicles. However, the Ethiopian soldiers of Ras Seyoum failed to take full advantage of the terrain. The right wing of the Ethiopian armies rested on Uork Amba (the "mountain of gold"). The Ethiopians established a strong point there. Uork Amba blocked the road to Abbi Addi on which the Eritrean Corps and the III Corps planned to converge. One hundred and fifty Alpini and Blackshirt commandos were ordered to capture it under cover of darkness. Armed with grenades and knives, the commandos found the Ethiopians on the summit unprepared when they scaled the peak. Early on the morning of 27 February, the army of Ras Seyoum was drawn up in battle array in front of Abbi Addi. Heralded by the wail of battle horns and the roll of the war drums (negarait), a large force of Ethiopians left the shelter of the woods covering Debra Ansa to attack the Italians in the open. From 8:00 am to 4:00 pm, wave after wave of Ethiopians attempted to break through or get around the positions established by the Alpini and the Blackshirts of the Eritrean columns. Armed for the most part with swords and clubs, the attackers were mowed down and turned back by concentrated machine-gun fire. As the attacks wavered, the Italian commander counterattacked. Ras Seyoum decided that his men could take no more. His army left more than one thousand dead on the battlefield as it fled. 
With his right flank in the air, Ras Seyoum ordered his army to pull back to the Tekezé fords. But, as his men straggled back along the one road open to them, they were bombed repeatedly. The rocky ravine where they were to cross the river turned out to be a bottleneck. The Italian bombers focused on the concentrated solid mass of defeated Ethiopians and soon the area was turned into a charnel house. Meanwhile, Ras Kassa and his army on Debra Amba had not yet seen action. Ras Kassa now decided to do what the Emperor had indicated and started to withdraw his army towards Amba Aradam. His army in turn was heavily bombed. On 29 February, the III Corps and the Eritrean Corps linked up about three miles west of Abbi Addi and the trap was complete. Even so, a large portion of both Ethiopian armies managed to escape Badoglio's dragnet. However, the men who escaped were demoralized and had little or no equipment. By the time Ras Kassa and Ras Seyoum reached Haile Selassie's headquarters at Quorom two weeks later, they were accompanied by little more than the men of their personal bodyguards. Aftermath Writing as a correspondent at Italian Military Headquarters, Herbert L. Matthews of the New York Times cabled a dispatch on the battle to his paper, as did a United Press correspondent. Ras Mulugeta was dead. Ras Kassa and Ras Seyoum were beaten. All three armies commanded by these men had been effectively destroyed. Only one of the four main northern armies remained intact. Badoglio now turned his attention towards Ras Imru and his forces in the Shire. Both Ras Kassa and Ras Seyoum were present at Maychew, the final battle of the war. See also List of Second Italo-Ethiopian War weapons of Ethiopia List of Italian military equipment in the Second Italo-Ethiopian War Ethiopian Order of Battle Second Italo-Abyssinian War Army of the Ethiopian Empire Italian Order of Battle Second Italo-Abyssinian War Royal Italian Army References Footnotes Citations Bibliography 1936 in Ethiopia Conflicts in 1936 Battles of the Second Italo-Ethiopian War History of the Tigray Region Military operations involving chemical weapons February 1936 Dogu'a Tembien Italian war crimes in Ethiopia
Second battle of Tembien
Chemistry
2,034
7,355,118
https://en.wikipedia.org/wiki/Brontok
Brontok is a computer worm running on Microsoft Windows. It is able to spread by e-mail. Variants include: Brontok.A Brontok.D Brontok.F Brontok.G Brontok.H Brontok.I Brontok.K Brontok.Q Brontok.U Brontok.BH The most affected countries were Russia, Vietnam and Brazil, followed by Spain, Mexico, Iran, Azerbaijan, India and the Philippines. Other names Other names for this worm include: W32/Rontokbro.gen@MM, W32.Rontokbro@mm, BackDoor.Generic.1138, W32/Korbo-B, Worm/Brontok.a, Win32.Brontok.A@mm, Worm.Mytob.GH, W32/Brontok.C.worm, Win32/Brontok.E, Win32/Brontok.X@mm, and W32.Rontokbro.D@mm. Origin Brontok originated in Indonesia. It was first discovered in 2005. The name refers to elang brontok, a bird species native to South and Southeast Asia. It arrives as an e-mail attachment named kangen.exe (kangen itself means "to miss someone/thing"). The virus/email itself contains a message in Indonesian (and some English). When translated, this reads: [By: HVM31 JowoBot #VM Community] -- stop the collapse in this country—1. Try the Hoodlums, the Smugglers, the Bribers, the gamblers, & drugs Port (Send to "Nusakambangan") -- 2.Stop Free Sex, Abortion, & Prostitution (Go To HELL) 3.Stop (sea and river pollution), forest burning, & wild hunting. 4.SAY NO TO DRUGS!!! - THE END IS NEAR - 5. Do you think you're smart? Inspired by: (Spizaetus Cirrhatus) that is almost extinct [By: HVM31 JowoBot #VM Communityunity -- It also contains a JavaScript pop-up. The worm also carried out a ping flood attack on two websites: Israel.gov.il and playboy.com, possibly in an act of hacktivism. A number of other websites with the .com TLD were also attacked, prompting the popular Indonesian forum Kaskus to switch to the .us TLD until May 2012. Brontok inspired the creation of more persistent trojans and worms, such as the Daprosy Worm, which attacked internet cafes in July 2009. Symptoms When Brontok is first run, it copies itself to the user's application data directory. It then sets itself to start up with Windows by creating a registry entry in the HKLM\Software\Microsoft\Windows\CurrentVersion\Run registry key. It disables the Windows Registry Editor (regedit.exe) and modifies Windows Explorer settings. It removes the "Folder Options" entry in the Tools menu so that the hidden files, where it is concealed, are not easily accessible to the user. It also turns off the Windows firewall. In some variants, when a window is found containing certain strings (such as "application data") in the window title, the computer reboots. User frustration also occurs when an address typed into Windows Explorer is blanked out before completion. Using its own mailing engine, it sends itself to email addresses it finds on the computer, even faking the user's own email address as the sender. The computer also restarts when the user tries to open the Windows Command Prompt, and the worm prevents the user from downloading files. It also pops up the default Web browser and loads an HTML page located in the "My Pictures" (or on Windows Vista, "Pictures") folder. It creates .exe files in folders, usually named after the folder itself (e.g. ..\documents\documents.exe); this also includes all mapped network drives. Removal Brontok can be removed by most antivirus software, and various standalone removal tools are available from antivirus providers. 
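For readers who want to inspect a machine for the persistence mechanism described above, the following defensive Python sketch (Windows-only; it assumes nothing beyond the standard-library winreg module) enumerates the HKLM Run key that Brontok-style worms use for autorun entries. It only lists entries; judging which are legitimate is left to the reader or to antivirus tooling:

```python
# Defensive sketch: list autorun entries under the HKLM Run key that
# Brontok-style worms use for persistence (Windows only).
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, RUN_KEY) as key:
    index = 0
    while True:
        try:
            name, value, _value_type = winreg.EnumValue(key, index)
        except OSError:           # raised when there are no more values
            break
        print(f"{name!r} -> {value!r}")
        index += 1
```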
References Email worms Hacking in the 2000s Cybercrime in India Windows malware Denial-of-service attacks Internet in Russia Internet in Brazil Internet in Vietnam Internet in Spain Internet in Azerbaijan Internet in Mexico Internet in Iran Cybercrime in the Philippines Attacks in Azerbaijan Attacks in Brazil Attacks in India Attacks in Iran Attacks in Mexico Attacks in the Philippines Attacks in Russia Attacks in Vietnam Internet in Israel Attacks in Israel Playboy
Brontok
Technology
957
54,095,989
https://en.wikipedia.org/wiki/Polygenic%20adaptation
Polygenic adaptation describes a process in which a population adapts through small changes in allele frequencies at hundreds or thousands of loci. Many traits in humans and other species are highly polygenic, i.e., affected by standing genetic variation at hundreds or thousands of loci. Under normal conditions, the genetic variation underlying such traits is governed by stabilizing selection, in which natural selection acts to hold the population close to an optimal phenotype. However, if the phenotypic optimum changes, then the population can adapt by small directional shifts in allele frequencies spread across all the variants that affect the trait. Polygenic adaptation can occur relatively quickly (as described in the breeder's equation); however, it is difficult to detect from genomic data because the changes in allele frequencies at individual loci are very small. Polygenic adaptation represents an alternative to adaptation by selective sweeps. In classic selective sweep models, a single new mutation sweeps through a population to fixation, purging variation from a region of linkage around the selected site. More recent models have focused on partial sweeps, and on soft sweeps – i.e., sweeps that start from standing variation or comprise multiple sweeping variants at the same locus. All of these models focus on adaptation through genetic changes at a single locus, and they generally assume large changes in allele frequencies. The concept of polygenic adaptation is related to classical models from quantitative genetics. However, traditional models in quantitative genetics usually abstract away the contributions of individual loci by focusing instead on means and variances of genetic scores. In contrast, population genetics models and data analysis have generally emphasized models of adaptation through sweeps at individual loci. The modern formulation of polygenic adaptation in population genetics was developed in a pair of 2010 review articles. Examples of polygenic adaptation Polygenic adaptation is presumed to be the dominant mode of adaptation in artificial selection, when plants or animals undergo rapid responses to selective pressures. However, in most cases the actual genetic loci involved are not yet known, though there are exceptions. At present the best-understood examples of polygenic adaptation are in humans, and particularly for height, a trait that can be interpreted using data from genome-wide association studies. In a 2012 paper, Joel Hirschhorn and colleagues showed that there was a consistent tendency for the "tall" alleles at genome-wide significant loci to be at higher frequencies in northern Europeans than in southern Europeans. They interpreted this observation to indicate that the difference in average height between northern and southern Europeans is at least partly genetic (as opposed to environmental) and that it was driven by selection. This result has been replicated by subsequent studies; however, the environmental factor driving the selection remains unclear. A study of recent polygenic adaptation in the English has shown that selection on height has had small effects on allele frequencies (<1%) across most of the genome, and found evidence for polygenic adaptation in a wide variety of other traits as well, including selection for increased infant birth size and increased female hip and waist size. References Population statistics Genetics
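The following toy simulation illustrates the signature described above: a clear shift in the population's mean trait value produced by per-locus allele-frequency changes that each stay on the order of a fraction of a percent. It is a sketch, not a fitted model; the parameter values are arbitrary, and it assumes the standard weak-selection, additive-trait approximation in which each locus changes per generation by dp ≈ s·β·p(1−p), where s is the selection gradient and β the locus effect size:

```python
import random

random.seed(1)
n_loci = 1000
s = 0.02                                     # selection gradient (illustrative)
beta = [random.gauss(0, 0.01) for _ in range(n_loci)]     # small effect sizes
p0 = [random.uniform(0.05, 0.95) for _ in range(n_loci)]  # starting frequencies

def mean_trait(freqs):
    # Additive trait: each locus contributes 2 * frequency * effect size.
    return sum(2 * pi * bi for pi, bi in zip(freqs, beta))

p = list(p0)
for _ in range(50):                          # 50 generations of selection
    p = [min(max(pi + s * bi * pi * (1 - pi), 0.0), 1.0)
         for pi, bi in zip(p, beta)]

print(f"shift in mean trait value: {mean_trait(p) - mean_trait(p0):+.4f}")
print(f"largest single-locus frequency change: "
      f"{max(abs(a - b) for a, b in zip(p, p0)):.4f}")
```

Because alleles that increase the trait (β > 0) drift up in frequency while those that decrease it drift down, every locus contributes a small push in the same phenotypic direction, which is exactly why the aggregate trait response is detectable while no individual locus shows a sweep-like frequency change.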
Polygenic adaptation
Biology
624
3,500,440
https://en.wikipedia.org/wiki/Japan%20wax
Japan wax (木蝋 Mokurō), also known as sumac wax, sumach wax, vegetable wax, China green tallow, and Japan tallow, is a pale-yellow, waxy, water-insoluble solid with a gummy feel, obtained from the berries of certain sumacs native to Japan and China, such as Toxicodendron vernicifluum (lacquer tree) and Toxicodendron succedaneum (Japanese wax tree). Japan wax is a byproduct of lacquer manufacture. The fruits of the Toxicodendron trees are harvested, steamed, and pressed for the waxy substance, which hardens when cool. It is not a true wax but a fat that contains 95% palmitin. Japan wax is sold in flat squares or disks and has a rancid odor. It is extracted by expression and heat, or by the action of solvents. Uses Japan wax is used in candles, furniture polishes, floor waxes, wax matches, soaps, food packaging, pharmaceuticals, cosmetics, pastels, crayons, buffing compounds, metal lubricants, adhesives, thermoplastic resins, and as a substitute for beeswax. Because it undergoes rancidification, it is seldom used in foods. Properties Melting point ≈ 45–53 °C (sources vary). Specific gravity ≈ 0.975. Soluble in benzene, ether, naphtha and alkalis; insoluble in water and cold ethanol. Iodine value = 4.5–12.6. Acid value = 6–209. Saponification value = 220. References Waxes Non-timber forest products
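The saponification value quoted above can be turned into a rough estimate of the fat's mean triglyceride molecular weight. This is a back-of-the-envelope sketch using the standard relation MW ≈ 3 × 56,100 / SV, which holds because saponifying one triglyceride consumes three molecules of KOH (molar mass 56.1 g/mol, i.e., 56,100 mg/mol):

```python
# Estimate mean triglyceride molecular weight from the saponification value.
sv = 220                        # mg KOH per g of fat, as quoted above
mean_mw = 3 * 56_100 / sv       # three KOH consumed per triglyceride
print(f"mean triglyceride MW ~ {mean_mw:.0f} g/mol")   # ~765 g/mol
```

A result of roughly 765 g/mol is of the right order for a palmitin-rich fat: tripalmitin itself has a molecular weight of about 807 g/mol.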
Japan wax
Physics
348
40,351,929
https://en.wikipedia.org/wiki/Nareline
Nareline is a bioactive alkaloid isolated from Alstonia boonei, a medicinal tree of West Africa. Notes A Review of the Ethnobotany and Pharmacological Importance of Alstonia boonei De Wild (Apocynaceae) Tryptamine alkaloids Heterocyclic compounds with 6 rings Methoxy compounds
Nareline
Chemistry
77
35,158,738
https://en.wikipedia.org/wiki/Right%20to%20sexuality
The right to sexuality incorporates the right to express one's sexuality and to be free from discrimination on the grounds of sexual orientation. Although it is equally applicable to heterosexuality, it also encompasses the human rights of people of diverse sexual orientations, including lesbian, gay, asexual and bisexual people, and the protection of those rights. These rights are inalienable in nature, belonging to every person by virtue of being human. No right to sexuality exists explicitly in international human rights law; rather, it is found in a number of international human rights instruments including the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights. Definition The concept of the right to sexuality is difficult to define, as it comprises various rights from within the framework of international human rights law. The Cambridge Dictionary defines sexual orientation as "the fact of someone being sexually or romantically attracted to people of a particular gender, or more than one gender". Pronouns are used to indicate how an individual identifies: a woman typically uses she/her, and a man typically uses he/him. Freedom from discrimination on the grounds of sexual orientation is found in the Universal Declaration of Human Rights (UDHR) and the International Covenant on Civil and Political Rights (ICCPR). The UDHR provides for non-discrimination in Article 2, which states that: "Everyone is entitled to all the rights and freedoms set forth in this Declaration, without distinction of any kind, such as race, color, sex, language, religion, political or other opinion, national or social origin, property, birth or other status. Furthermore, no distinction shall be made on the basis of the political, jurisdictional or international status of the country or territory to which a person belongs, whether it be independent, trust, non-self-governing or under any other limitation of sovereignty." Sexual orientation can be read into Article 2 as "other status" or alternatively as falling under "sex". In the ICCPR, Article 2 sets out a similar provision for non-discrimination: "Each State Party to the present Covenant undertakes to respect and to ensure to all individuals within its territory and subject to its jurisdiction the rights recognized in the present Covenant, without distinction of any kind, such as race, color, sex, language, religion, political or other opinion, national or social origin, property, birth or other status." In Toonen v Australia, the United Nations Human Rights Committee (UNHRC) found that the reference to "sex" in Article 2 of the ICCPR included sexual orientation, thereby making sexual orientation prohibited grounds of distinction in respect of the enjoyment of rights under the ICCPR. The right to be free from discrimination is the basis of the right to sexuality, but it is closely related to the exercise and protection of other fundamental human rights. Background Individuals of diverse sexual orientations have been discriminated against historically and continue to be a "vulnerable" group in society today. 
Forms of discrimination experienced by people of diverse sexual orientations include the denial of the right to life, the right to work and the right to privacy, non-recognition of personal and family relationships, interference with human dignity, interference with security of the person, violations of the right to be free from torture, discrimination in access to economic, social and cultural rights, including housing, health and education, and pressure to remain silent and invisible. 67 countries maintain laws that make same-sex consensual sex between adults a criminal offence, and six countries (or parts thereof) impose the death penalty for same-sex consensual sex: Iran, Saudi Arabia, Yemen, Mauritania, the twelve northern states of Nigeria, and the southern parts of Somalia. The right to sexuality has only relatively recently become the subject of international concern, with the regulation of sexuality traditionally falling within the jurisdiction of the nation state. Today numerous international non-governmental organizations and intergovernmental organizations are engaged in the protection of the rights of people of diverse sexual orientation, as it is increasingly recognized that discrimination on grounds of sexual orientation is widespread and an unacceptable violation of human rights. Acts of violence Acts of violence against LGBT people are often especially vicious compared to other bias-motivated crimes and include killings, kidnappings, beatings, rape, and psychological violence, including threats, coercion and arbitrary deprivations of liberty. Examples of violent acts against people of diverse sexual orientation are too numerous to account for here, and they occur in all parts of the world. A particularly distressing example is the sexual assault and murder of fifteen lesbians in Thailand in March 2012. In one instance, two lesbian couples were killed by men who objected to their relationships and who were embarrassed when they were unable to persuade the women into heterosexual relationships with them. In another disturbing case, which took place in 2017 in a church in Brazil, a 13-year-old lesbian girl fell victim to sexual abuse after confessing her sexual orientation to her bishop. The bishop then anointed the girl with an oil under the pretext of "gay healing", leaving the young girl traumatized and in need of psychological care. In another case, Nex Benedict, a 16-year-old who used they/them pronouns and was coming to terms with their identity, was bullied and picked on at school because of their pronouns. The bullying escalated until Benedict was beaten; they died the following day. Often acts of violence against people of diverse sexual orientations are perpetrated by the victim's own family. In Zimbabwe, 18-year-old Tina Machida was raped multiple times in an assault organized by her own family in an attempt to "normalize" her out of homosexuality. In those cases, as in many other cases of violence against people of diverse sexual orientation, State law enforcement authorities are complicit in human rights abuses for failing to prosecute violators of rights. 
Breach of the right to privacy The right to privacy is a protected freedom under the UDHR and the ICCPR, which reflects the "widespread, if not universal, human need to pursue certain activities within an intimate sphere, free of outside interference. The possibility to do so is fundamental to personhood." Intimate relationships, whether between two people of the same sex or of different sexes, are among those activities that are subject to a right of privacy. Privacy rights extend well beyond what a person does at home, to matters such as medical records, the identity of one's sexual partners, and one's sexual status. It has been successfully argued in several cases that criminalization of homosexual relationships is an interference with the right to privacy, including decisions in the European Court of Human Rights and the UNHRC. The freedom to decide on one's consensual adult relationships, including the gender of that person, without the interference of the State is a fundamental human right. To prohibit the relationships of people of diverse sexual orientations is a breach of the right to sexuality and the right to privacy. Freedom of expression, assembly and association Every person, by their individual autonomy, is free to express themselves, assemble and join in association with others. Freedom of expression is a protected human right under Article 19 of the UDHR and Article 19 of the ICCPR, as is the right to freedom of assembly under Article 20 of the UDHR and Article 21 of the ICCPR. LGBT people are discriminated against in respect of their ability to defend and promote their rights. Gay pride marches, peaceful demonstrations and other events promoting LGBT rights are often banned by State governments. In 2011 gay pride marches were banned in Serbia and another march in Moscow was broken up by police, who arrested thirty leading gay rights activists. Many individuals use pronouns that reflect their sense of identity: typically she/her for women and he/him for men, while others use they/them or other pronouns of their own choosing as an expression of who they are. Yogyakarta principles In 2005, twenty-nine experts undertook the drafting of the Yogyakarta Principles on the Application of International Human Rights Law in relation to Sexual Orientation and Gender Identity. The document was intended to set out experiences of human rights violations against people of diverse sexual orientation and transgender people, the application of international human rights law to those experiences and the nature of obligations on States in respect of those experiences. The Principles can be broadly categorized into the following: Principles 1 to 3 set out the universality of human rights and their application to all persons. Principles 4 to 11 address fundamental rights to life, freedom from violence and torture, privacy, access to justice and freedom from arbitrary detention. Principles 12 to 18 set out non-discrimination in relation to economic, social and cultural rights, including employment, accommodation, social security, education and health. Principles 19 to 21 emphasized the importance of freedom of expression, identity and sexuality, without State interference, including peaceful assembly. Principles 22 and 23 set out the right to seek asylum from persecution based on sexual orientation. 
Principles 24 to 26 set out the right to participate in family and cultural life and public affairs. Principle 27 sets out the right to promote and defend human rights without discrimination based on sexual orientation. Principles 28 and 29 emphasize the importance of holding those who violate human rights accountable, and of ensuring redress for those who face rights violations. The Yogyakarta Principles are an instrument of soft law and are therefore not binding, but they provide an important standard for States in their obligation to protect the rights of individuals of diverse sexual orientations. The United Nations On June 17, 2011, the United Nations Human Rights Council, in a Resolution on Human Rights, Sexual Orientation and Gender Identity adopted by a vote of 23 in favor, 19 against, and 3 abstentions, requested the commissioning of a study to document discriminatory laws and acts of violence against people based on their sexual orientation and gender identity. The 2011 Resolution was intended to shed light on how international human rights could be used to prevent acts of violence and discrimination against people of diverse sexual orientation. On 15 December 2011 the first Report on human rights of LGBT people was released by the Office of the United Nations High Commissioner for Human Rights. The Report made the following recommendations. To prevent such acts of violence from occurring, United Nations Member States are recommended to: Promptly investigate all reported killings and serious incidents of violence against LGBT people, regardless of whether carried out privately or publicly, by State or non-State actors, ensuring accountability for such violations and the establishment of reporting mechanisms for such incidents. Take measures to prevent torture and other forms of cruel, inhuman or degrading treatment, ensure accountability for such violations and establish reporting mechanisms. Repeal laws that criminalize homosexuality or same-sex sexual conduct and other criminal laws used to detain people based on their sexuality, and abolish the death penalty for offenses involving consensual same-sex relations. Enact comprehensive anti-discrimination legislation, ensuring that combating discrimination based on sexual orientation is in the mandates of national human rights bodies. Ensure that freedom of expression, association and peaceful assembly can be exercised safely without discrimination on sexual orientation or gender identity. Implement appropriate training programs for law enforcement personnel and support public information campaigns to counter homophobia and transphobia amongst the general public and in schools. Facilitate legal recognition of the preferred gender of transgender persons. Further action is yet to be taken by the United Nations, although a proposed declaration on sexual orientation and gender identity was brought before the United Nations General Assembly in 2008. However, that declaration has not been officially adopted by the General Assembly and remains open for signatories. 
See also Antisexualism Bodily integrity Criminalization of homosexuality Decriminalization of sex work Disability and sexuality Homophobia LGBT rights by country or territory LGBT rights opposition LGBT social movements Religion and homosexuality Reproductive rights Sex-positive movement Sex workers' rights Sexual and reproductive health and rights Sexual Freedom Awards References Sources External links Amnesty International USA: LGBT legal status around the world Office of the High Commissioner for Human Rights Sexual Orientation and Gender Identity in International Human Rights Law International Commission of Jurists. Discrimination Freedom of expression Freedom of speech Human rights Human sexuality LGBTQ
Right to sexuality
Biology
2,542
47,959,180
https://en.wikipedia.org/wiki/Huawei%20Watch
The Huawei Watch and latest Huawei Watch 4 series are HarmonyOS-based (formerly Android Wear and LiteOS-based) smartwatches developed by Huawei. The Huawei Watch is the first smartwatch produced by Huawei. It was announced at the 2015 Mobile World Congress and released at IFA Berlin on September 2. The Huawei Watch 3 was introduced in June 2021 after the United States Department of Commerce added Huawei to its Entity List in May 2019. Hardware First generation Huawei Watch's form factor is based on the circular design of traditional watches, supporting a 42 mm (1.4 inch) AMOLED screen. The screen's resolution is 400 x 400 pixels at 285.7 ppi. The case is 316L stainless steel, covered with sapphire crystal glass in front and available in six finishes: Black Leather, Steel Link Bracelet, Stainless Steel Mesh, Black-plated Link Bracelet, Alligator-pressed Brown Leather, and Rose Gold-plated Link Bracelet. The watch uses a 1.2 GHz Qualcomm Snapdragon 400 APQ8026 processor. All versions of the Huawei Watch have 512MB of RAM and 4GB of internal storage, along with a gyroscope, accelerometer, vibration motor, and heart rate sensor. It supports Wi-Fi and Bluetooth 4.1 LE, with support for GPS locating. The watch uses a magnetic charging cradle, with a day and a half of battery life. Software The first generation Huawei Watch runs on the Android Wear operating system. It works with iOS (8.2 and later) and Android (4.3 and later) devices. It supports Google Now voice commands and is compatible with Wear OS. The watch can process calls and receive messages and emails. Reception In Tech Advisor's review, Chris Martin wrote, "This is a great looking smartwatch, although it is quite large. Specs match other Android Wear smartwatches but we're worried about the small battery." The President of Huawei U.S., Xu Zhejiang, said, "It embodies Huawei's technology innovation heritage, pursuit of premium design, and integration of useful functionality that we strive to develop in each product." Phandroid said, "it is the classiest Android Wear smartwatch available right now". Versions In March 2020, Huawei announced the Huawei Watch GT 2e powered by Huawei's proprietary OS. It launched in India in May 2020. The smartwatch features the same 1.39-inch AMOLED touch display with a 454 x 454 pixel resolution. It is powered by Huawei's in-house Kirin A1 chipset, with GPS, Bluetooth audio, and 4GB of onboard storage. The onboard software can track over 100 different sports and exercises. The watch also features oxygen saturation monitoring with an SpO2 sensor that can calculate the wearer's maximum rate of oxygen consumption. The Huawei Watch GT 2 Pro, released in September 2020, is powered by LiteOS. In June 2021, Huawei announced the Huawei Watch 3 running on HarmonyOS 2.0. Huawei launched the Huawei Watch GT 3 SE on October 29, 2022, intended as a more cost-effective iteration of the Watch GT 3 Pro, targeting customers who seek a more budget-friendly option. Initially, the Huawei Watch GT 3 SE was available only in Vietnam and Poland, with plans for a later release in other regions. The model is available in two colors, black and green, and is priced at around €170 in Poland. As successors to the Huawei Watch GT 3, Huawei launched the Watch GT 4 and Watch Ultimate Design on 25 September 2023, following the Watch 4 released earlier that year in June. These models run on HarmonyOS 4 and work with HarmonyOS, Android and iOS smartphones. 
The Huawei Watch Fit 3, successor to the Huawei Watch Fit (2020) and Huawei Watch Fit 2 (2022) series, was announced and released on May 7, 2024, with a smaller screen and HarmonyOS 4.2 preinstalled. See also Wearable computer Microsoft Band Apple Watch Pebble References External links Huawei Watch 3 Official website Android (operating system) devices Products introduced in 2015 Wear OS devices Smartwatches Huawei products
Huawei Watch
Technology
888
36,380,999
https://en.wikipedia.org/wiki/Glossary%20of%20string%20theory
This page is a glossary of terms in string theory, including related areas such as supergravity, supersymmetry, and high energy physics. Conventions How are $\alpha'$ (the Regge slope), the string length $\ell_s$, and the string tension $T$ related? There is only one dimensional constant in string theory, and that is the inverse string tension $\alpha'$, with units of area. Sometimes $\alpha'$ is therefore replaced by a length $\ell_s = \sqrt{\alpha'}$. The string tension is mostly defined as the fraction $T = \frac{1}{2\pi\alpha'}$. Tension is energy or work per unit length. In natural units $\hbar = 1$ and $c = 1$, and hence $1/T$ has dimension of length/energy or length/mass. Since $\hbar$ has the dimension of action, i.e. momentum times length, it follows that in natural units mass = 1/length, and so $\alpha' \sim 1/T$ has the unit of area. The slope of a Regge trajectory in Regge theory is the derivative of spin or angular momentum with respect to mass-squared, i.e. $\alpha' = \frac{dJ}{d(M^2)}$. Since angular momentum is moment of momentum $J = r \times p$, i.e. length times mass with $c = 1$, $J/\hbar$ is dimensionless in natural units, and $dJ/d(M^2)$ has units of $1/\text{mass}^2$, or area, like the inverse string tension. See also List of string theory topics References Becker, Katrin, Becker, Melanie, and John H. Schwarz (2007) String Theory and M-Theory: A Modern Introduction. Cambridge University Press. Binétruy, Pierre (2007) Supersymmetry: Theory, Experiment, and Cosmology. Oxford University Press. Dine, Michael (2007) Supersymmetry and String Theory: Beyond the Standard Model. Cambridge University Press. Michael Green, John H. Schwarz and Edward Witten (1987) Superstring theory. Cambridge University Press. The original textbook. Vol. 1: Introduction. Vol. 2: Loop amplitudes, anomalies and phenomenology. Kiritsis, Elias (2007) String Theory in a Nutshell. Princeton University Press. Polchinski, Joseph (1998) String Theory. Cambridge University Press. Vol. 1: An introduction to the bosonic string. Vol. 2: Superstring theory and beyond. Szabo, Richard J. (Reprinted 2007) An Introduction to String Theory and D-brane Dynamics. Imperial College Press. Zwiebach, Barton (2004) A First Course in String Theory. Cambridge University Press. Contact author for errata. External links Particle physics glossary at interactions.org String theory String theory Wikipedia glossaries using description lists
Glossary of string theory
Astronomy
518
2,332,073
https://en.wikipedia.org/wiki/Lid
A lid or cover is part of a container, and serves as the closure or seal, usually one that completely closes the object. Lids can be placed on small containers such as tubs, as well as on larger open-head pails and drums. Some lids have a security strip or a tamper-evident band to hold the lid on securely until opening is desired or authorized. These are usually irreversible to indicate that the container has been opened. They can be made of varying materials ranging from plastic to metal. History Lids have been found on pottery dating back as far as 3100 BC. Ancient Egyptian canopic jars with lids held the organs of mummified bodies as early as 2686 BC. The coffee lid market is valued at roughly $180 million. An estimated 14 billion lids were sold in 2009 in the United States. Some containers such as tubs or jars now have a plastic film heat sealed onto the container: this is often called a lidding film. Cultural references The word is used metaphorically, as in "keeping the lid on the secret" and "flipped his lid". Other meanings or usages include: A well-known myth concerns Pandora opening the lid of Pandora's box and unleashing terrible evils into the world. An old saying that you never have to put a lid on a bucket of crabs (because when one gets near the top, another will inevitably pull it down) is often used as a metaphor for group situations where an individual feels held back by others. An old Yiddish saying, that "every pot will find its lid", refers to people finding an appropriate match in marriage. The term "lid" is commonly used slang as a synonym for an ounce of herbal cannabis. Lids are referred to in the Bible, in the Book of Numbers. See also Closure (container) References Sources Soroka, W, "Fundamentals of Packaging Technology", IoPP, 2002, Yam, K. L., "Encyclopedia of Packaging Technology", John Wiley & Sons, 2009, Seals (mechanical) Packaging
Lid
Physics
425
2,251,939
https://en.wikipedia.org/wiki/Sex%20hormone%20receptor
The sex hormone receptors, or sex steroid receptors, are a group of steroid hormone receptors that interact with the sex hormones, the androgens, estrogens, and progestogens, as well as with sex-hormonal agents such as anabolic steroids, progestins, and antiestrogens. They include the: Androgen receptor (AR) (A, B) - binds and is activated by androgens such as testosterone and dihydrotestosterone (DHT) Estrogen receptor (ER) (α, β) - binds and is activated by estrogens such as estradiol, estrone, and estriol Progesterone receptor (PR) (A, B) - binds and is activated by progestogens such as progesterone In addition, sex steroids such as estradiol have been found to bind and activate membrane steroid receptors, such as GPER. See also Gonadotropin-releasing hormone receptor Gonadotropin receptor Steroid hormone receptor References Intracellular receptors G protein-coupled receptors Transcription factors
Sex hormone receptor
Chemistry,Biology
231
13,046
https://en.wikipedia.org/wiki/Geometric%20mean
In mathematics, the geometric mean is a mean or average which indicates a central tendency of a finite collection of positive real numbers by using the product of their values (as opposed to the arithmetic mean which uses their sum). The geometric mean of $n$ numbers is the $n$th root of their product, i.e., for a collection of numbers $a_1, a_2, \ldots, a_n$, the geometric mean is defined as $\sqrt[n]{a_1 a_2 \cdots a_n}$. When the collection of numbers and their geometric mean are plotted in logarithmic scale, the geometric mean is transformed into an arithmetic mean, so the geometric mean can equivalently be calculated by taking the natural logarithm of each number, finding the arithmetic mean of the logarithms, and then returning the result to linear scale using the exponential function, giving $\exp\left(\frac{1}{n}\sum_{i=1}^{n}\ln a_i\right)$. The geometric mean of two numbers is the square root of their product; for example, with numbers $2$ and $8$ the geometric mean is $\sqrt{2 \cdot 8} = 4$. The geometric mean of three numbers is the cube root of their product; for example, with numbers $4$, $1$, and $1/32$, the geometric mean is $\sqrt[3]{4 \cdot 1 \cdot \tfrac{1}{32}} = \tfrac{1}{2}$. The geometric mean is useful whenever the quantities to be averaged combine multiplicatively, such as population growth rates or interest rates of a financial investment. Suppose for example a person invests $1000 and achieves annual returns of +10%, −12%, +90%, −30% and +25%, giving a final value of $1609. The average percentage growth is the geometric mean of the annual growth ratios (1.10, 0.88, 1.90, 0.70, 1.25), namely 1.0998, an annual average growth of 9.98%. The arithmetic mean of these annual returns – 16.6% per annum – is not a meaningful average because growth rates do not combine additively. The geometric mean can be understood in terms of geometry. The geometric mean of two numbers, $a$ and $b$, is the length of one side of a square whose area is equal to the area of a rectangle with sides of lengths $a$ and $b$. Similarly, the geometric mean of three numbers, $a$, $b$, and $c$, is the length of one edge of a cube whose volume is the same as that of a cuboid with sides whose lengths are equal to the three given numbers. The geometric mean is one of the three classical Pythagorean means, together with the arithmetic mean and the harmonic mean. For all positive data sets containing at least one pair of unequal values, the harmonic mean is always the least of the three means, while the arithmetic mean is always the greatest of the three and the geometric mean is always in between (see Inequality of arithmetic and geometric means). Formulation The geometric mean of a data set $\{a_1, a_2, \ldots, a_n\}$ is given by: $\left(\prod_{i=1}^{n} a_i\right)^{1/n} = \sqrt[n]{a_1 a_2 \cdots a_n}.$ That is, the nth root of the product of the elements. For example, for the data set $\{1, 2, 3, 4\}$, the product is $1 \cdot 2 \cdot 3 \cdot 4 = 24$, and the geometric mean is the fourth root of 24, approximately 2.213. Formulation using logarithms The geometric mean can also be expressed as the exponential of the arithmetic mean of logarithms. By using logarithmic identities to transform the formula, the multiplications can be expressed as a sum and the power as a multiplication: $\sqrt[n]{a_1 a_2 \cdots a_n} = \exp\left(\frac{1}{n}\sum_{i=1}^{n}\ln a_i\right),$ valid when $a_1, a_2, \ldots, a_n > 0$, since the logarithm is only defined for positive numbers. This is sometimes called the log-average (not to be confused with the logarithmic average). It is simply the arithmetic mean of the logarithm-transformed values of $a_1, \ldots, a_n$ (i.e., the arithmetic mean on the log scale), using the exponentiation to return to the original scale, i.e., it is the generalised f-mean with $f(x) = \log x$. A logarithm of any base can be used in place of the natural logarithm. 
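The log-space formulation above maps directly onto code. The following Python sketch is illustrative only (it is not part of the original article, and the function name is our own); it averages logarithms to sidestep the overflow and underflow that naive multiplication of long inputs can cause:

```python
import math

def geometric_mean(values):
    """Geometric mean via exp(mean(ln a_i)).

    Working in log space avoids overflow/underflow when multiplying
    many numbers; all inputs must be strictly positive.
    """
    if not values or any(v <= 0 for v in values):
        raise ValueError("requires a non-empty collection of positive numbers")
    return math.exp(math.fsum(math.log(v) for v in values) / len(values))

print(geometric_mean([1, 2, 3, 4]))  # ~2.2134, the fourth root of 24
print(geometric_mean([2, 8]))        # 4.0
```

For everyday use, Python's standard library has offered an equivalent statistics.geometric_mean since Python 3.8.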
For example, the geometric mean of $1$, $2$, $8$, and $16$ can be calculated using logarithms base 2: the base-2 logarithms are $0$, $1$, $3$, and $4$; their arithmetic mean is $2$; and $2^2 = 4$ is the geometric mean. Related to the above, it can be seen that for a given sample of points $a_1, \ldots, a_n$, the geometric mean is the minimizer of $\sum_{i=1}^{n}\left(\log a_i - \log b\right)^2$, whereas the arithmetic mean is the minimizer of $\sum_{i=1}^{n}\left(a_i - b\right)^2$. Thus, the geometric mean provides a summary of the samples whose exponent best matches the exponents of the samples (in the least squares sense). In computer implementations, naïvely multiplying many numbers together can cause arithmetic overflow or underflow. Calculating the geometric mean using logarithms, as in the sketch above, is one way to avoid this problem. Related concepts Iterative means The geometric mean of a data set is less than the data set's arithmetic mean unless all members of the data set are equal, in which case the geometric and arithmetic means are equal. This allows the definition of the arithmetic-geometric mean, an intersection of the two which always lies in between. The geometric mean is also the arithmetic-harmonic mean in the sense that if two sequences $(a_n)$ and $(h_n)$ are defined by $a_{n+1} = \frac{a_n + h_n}{2}$, $a_0 = x$, and $h_{n+1} = \frac{2}{\frac{1}{a_n} + \frac{1}{h_n}}$, $h_0 = y$, where $h_{n+1}$ is the harmonic mean of the previous values of the two sequences, then $a_n$ and $h_n$ will converge to the geometric mean of $x$ and $y$ (a short code sketch of this iteration appears near the end of the article). The sequences converge to a common limit, and the geometric mean is preserved at every step: $\sqrt{a_i h_i} = \sqrt{a_{i+1} h_{i+1}}$. Replacing the arithmetic and harmonic mean by a pair of generalized means of opposite, finite exponents yields the same result. Comparison to arithmetic mean The geometric mean of a non-empty data set of positive numbers is always at most their arithmetic mean. Equality is only obtained when all numbers in the data set are equal; otherwise, the geometric mean is smaller. For example, the geometric mean of 2 and 3 is 2.45, while their arithmetic mean is 2.5. In particular, this means that when a set of non-identical numbers is subjected to a mean-preserving spread — that is, the elements of the set are "spread apart" more from each other while leaving the arithmetic mean unchanged — their geometric mean decreases. Geometric mean of a continuous function If $f : [a,b] \to (0, \infty)$ is a positive continuous real-valued function, its geometric mean over this interval is $\mathrm{GM}[f] = \exp\left(\frac{1}{b-a}\int_{a}^{b}\ln f(x)\,dx\right).$ For instance, taking the identity function $f(x) = x$ over the unit interval shows that the geometric mean of the positive numbers between 0 and 1 is equal to $1/e$. Applications Average proportional growth rate The geometric mean is more appropriate than the arithmetic mean for describing proportional growth, both exponential growth (constant proportional growth) and varying growth; in business the geometric mean of growth rates is known as the compound annual growth rate (CAGR). The geometric mean of growth over periods yields the equivalent constant growth rate that would yield the same final amount. As an example, suppose an orange tree yields 100 oranges one year and then 180, 210 and 300 the following years, for growth rates of 80%, 16.7% and 42.9% respectively. Using the arithmetic mean calculates a (linear) average growth of 46.5% (calculated by $\tfrac{80\% + 16.7\% + 42.9\%}{3}$). However, when applied to the 100 orange starting yield, 46.5% annual growth results in 314 oranges after three years of growth, rather than the observed 300. The linear average overstates the rate of growth. Instead, using the geometric mean, the average yearly growth is approximately 44.2% (calculated by $\sqrt[3]{1.80 \times 1.167 \times 1.429} \approx 1.442$). Starting from a 100 orange yield with annual growth of 44.2% gives the expected 300 orange yield after three years. In order to determine the average growth rate, it is not necessary to take the product of the measured growth rates at every step. 
Let the quantity be given as the sequence $a_0, a_1, \ldots, a_n$, where $n$ is the number of steps from the initial to final state. The growth rate between successive measurements $a_k$ and $a_{k+1}$ is $a_{k+1}/a_k$. The geometric mean of these growth rates is then just: $\left(\frac{a_1}{a_0}\cdot\frac{a_2}{a_1}\cdots\frac{a_n}{a_{n-1}}\right)^{1/n} = \left(\frac{a_n}{a_0}\right)^{1/n},$ since the product telescopes, so only the first and last measurements matter (a short code sketch below illustrates this). Normalized values The fundamental property of the geometric mean, which does not hold for any other mean, is that for two sequences $X_1, \ldots, X_n$ and $Y_1, \ldots, Y_n$ of equal length, $\mathrm{GM}\left(\frac{X_i}{Y_i}\right) = \frac{\mathrm{GM}(X_i)}{\mathrm{GM}(Y_i)}.$ This makes the geometric mean the only correct mean when averaging normalized results; that is, results that are presented as ratios to reference values. This is the case when presenting computer performance with respect to a reference computer, or when computing a single average index from several heterogeneous sources (for example, life expectancy, education years, and infant mortality). In this scenario, using the arithmetic or harmonic mean would change the ranking of the results depending on what is used as a reference. For example, take the following comparison of execution time of computer programs: Table 1 The arithmetic and geometric means "agree" that computer C is the fastest. However, by presenting appropriately normalized values and using the arithmetic mean, we can show either of the other two computers to be the fastest. Normalizing by A's result gives A as the fastest computer according to the arithmetic mean: Table 2 while normalizing by B's result gives B as the fastest computer according to the arithmetic mean but A as the fastest according to the harmonic mean: Table 3 and normalizing by C's result gives C as the fastest computer according to the arithmetic mean but A as the fastest according to the harmonic mean: Table 4 In all cases, the ranking given by the geometric mean stays the same as the one obtained with unnormalized values. However, this reasoning has been questioned. Giving consistent results is not always equal to giving the correct results. In general, it is more rigorous to assign weights to each of the programs, calculate the average weighted execution time (using the arithmetic mean), and then normalize that result to one of the computers. The three tables above just give a different weight to each of the programs, explaining the inconsistent results of the arithmetic and harmonic means (Table 4 gives equal weight to both programs, Table 2 gives a weight of 1/1000 to the second program, and Table 3 gives a weight of 1/100 to the second program and 1/10 to the first one). The use of the geometric mean for aggregating performance numbers should be avoided if possible, because multiplying execution times has no physical meaning, in contrast to adding times as in the arithmetic mean. Metrics that are inversely proportional to time (speedup, IPC) should be averaged using the harmonic mean. The geometric mean can be derived from the generalized mean as its limit as $p$ goes to zero. Similarly, this is possible for the weighted geometric mean. Financial The geometric mean has from time to time been used to calculate financial indices (the averaging is over the components of the index). For example, in the past the FT 30 index used a geometric mean. It is also used in the CPI calculation and recently introduced "RPIJ" measure of inflation in the United Kingdom and in the European Union. This has the effect of understating movements in the index compared to using the arithmetic mean. 
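Tying back to the average-growth-rate derivation above, the following Python sketch (illustrative only; the data are the article's orange-tree figures, and the function name is our own) checks that the geometric mean of the period growth factors depends only on the endpoints:

```python
def average_growth_factor(series):
    """Geometric mean of the successive growth factors a[k+1] / a[k].

    The telescoping product collapses to (last / first) ** (1 / n),
    where n is the number of steps between measurements.
    """
    n = len(series) - 1
    return (series[-1] / series[0]) ** (1.0 / n)

yields = [100, 180, 210, 300]  # orange-tree example from the text
g = average_growth_factor(yields)
print(f"average annual growth: {(g - 1) * 100:.1f}%")  # ~44.2%
print(f"100 * g**3 = {100 * g ** 3:.0f}")              # recovers 300
```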
Applications in the social sciences Although the geometric mean has been relatively rare in computing social statistics, starting from 2010 the United Nations Human Development Index did switch to this mode of calculation, on the grounds that it better reflected the non-substitutable nature of the statistics being compiled and compared: The geometric mean decreases the level of substitutability between dimensions [being compared] and at the same time ensures that a 1 percent decline in say life expectancy at birth has the same impact on the HDI as a 1 percent decline in education or income. Thus, as a basis for comparisons of achievements, this method is also more respectful of the intrinsic differences across the dimensions than a simple average. Not all values used to compute the HDI (Human Development Index) are normalized; some of them instead have the form $(X - X_{\min})/(X_{\max} - X_{\min})$. This makes the choice of the geometric mean less obvious than one would expect from the "Properties" section above. The equally distributed welfare equivalent income associated with an Atkinson Index with an inequality aversion parameter of 1.0 is simply the geometric mean of incomes. For values other than one, the equivalent value is an Lp norm divided by the number of elements, with p equal to one minus the inequality aversion parameter. Geometry In the case of a right triangle, its altitude is the length of a line extending perpendicularly from the hypotenuse to its 90° vertex. Imagining that this line splits the hypotenuse into two segments, the geometric mean of these segment lengths is the length of the altitude. This property is known as the geometric mean theorem. In an ellipse, the semi-minor axis is the geometric mean of the maximum and minimum distances of the ellipse from a focus; it is also the geometric mean of the semi-major axis and the semi-latus rectum. The semi-major axis of an ellipse is the geometric mean of the distance from the center to either focus and the distance from the center to either directrix. Another way to think about it is as follows: Consider a circle with radius $r$. Now take two diametrically opposite points on the circle and apply pressure from both ends to deform it into an ellipse with semi-major and semi-minor axes of lengths $a$ and $b$. Since the area of the circle and the ellipse stays the same, we have: $\pi r^2 = \pi a b$, so $r = \sqrt{ab}$. The radius of the circle is the geometric mean of the semi-major and the semi-minor axes of the ellipse formed by deforming the circle. Distance to the horizon of a sphere (ignoring the effect of atmospheric refraction when atmosphere is present) is equal to the geometric mean of the distance to the closest point of the sphere and the distance to the farthest point of the sphere. The geometric mean is used both in the approximation of squaring the circle by S.A. Ramanujan and in the construction of the heptadecagon with "mean proportionals". Aspect ratios The geometric mean has been used in choosing a compromise aspect ratio in film and video: given two aspect ratios, the geometric mean of them provides a compromise between them, distorting or cropping both in some sense equally. Concretely, two equal area rectangles (with the same center and parallel sides) of different aspect ratios intersect in a rectangle whose aspect ratio is the geometric mean, and their hull (smallest rectangle which contains both of them) likewise has the aspect ratio of their geometric mean. In the choice of 16:9 aspect ratio by the SMPTE, balancing 2.35 and 4:3, the geometric mean is $\sqrt{2.35 \times \tfrac{4}{3}} \approx 1.770$, and thus $16:9$ ($1.77\overline{7}$) was chosen. 
This was discovered empirically by Kerns Powers, who cut out rectangles with equal areas and shaped them to match each of the popular aspect ratios. When overlapped with their center points aligned, he found that all of those aspect ratio rectangles fit within an outer rectangle with an aspect ratio of 1.77:1 and all of them also covered a smaller common inner rectangle with the same aspect ratio 1.77:1. The value found by Powers is exactly the geometric mean of the extreme aspect ratios, 4:3 (1.33:1) and CinemaScope (2.35:1), which is coincidentally close to the chosen $16:9$ ratio ($1.77\overline{7}$). The intermediate ratios have no effect on the result, only the two extreme ratios. Applying the same geometric mean technique to 16:9 and 4:3 approximately yields the 14:9 ($1.5\overline{5}$) aspect ratio, which is likewise used as a compromise between these ratios. In this case 14:9 is exactly the arithmetic mean of $\tfrac{16}{9}$ and $\tfrac{4}{3} = \tfrac{12}{9}$, since 14 is the average of 16 and 12, while the precise geometric mean is $\sqrt{\tfrac{16}{9} \times \tfrac{4}{3}} \approx 1.5396$, but the two different means, arithmetic and geometric, are approximately equal because both numbers are sufficiently close to each other (a difference of less than 2%). Paper formats The geometric mean is also used to calculate B and C series paper formats. The $\mathrm{B}_n$ format has an area which is the geometric mean of the areas of $\mathrm{A}_n$ and $\mathrm{A}_{n-1}$. For example, the area of a B1 paper is $\sqrt{\tfrac{1}{2}}\ \mathrm{m}^2 \approx 0.707\ \mathrm{m}^2$, because it is the geometric mean of the areas of an A0 ($1\ \mathrm{m}^2$) and an A1 ($\tfrac{1}{2}\ \mathrm{m}^2$) paper. The same principle applies with the C series, whose area is the geometric mean of the A and B series. For example, the C4 format has an area which is the geometric mean of the areas of A4 and B4. An advantage that comes from this relationship is that an A4 paper fits inside a C4 envelope, and both fit inside a B4 envelope. Other applications Spectral flatness: in signal processing, spectral flatness, a measure of how flat or spiky a spectrum is, is defined as the ratio of the geometric mean of the power spectrum to its arithmetic mean. Anti-reflective coatings: In optical coatings, where reflection needs to be minimised between two media of refractive indices n0 and n2, the optimum refractive index n1 of the anti-reflective coating is given by the geometric mean: $n_1 = \sqrt{n_0 n_2}$. Subtractive color mixing: The spectral reflectance curve for paint mixtures (of equal tinting strength, opacity and dilution) is approximately the geometric mean of the paints' individual reflectance curves computed at each wavelength of their spectra. Image processing: The geometric mean filter is used as a noise filter in image processing. Labor compensation: The geometric mean of a subsistence wage and the market value of the labor using the employer's capital was suggested as the natural wage by Johann von Thünen in 1875. 
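Returning to the "Iterative means" subsection earlier in the article: the arithmetic–harmonic iteration converges to the geometric mean because each step leaves the product of the two running values unchanged. A minimal Python sketch follows (the naming and tolerance are our own choices, and positive inputs are assumed):

```python
import math

def arithmetic_harmonic_mean(x, y, tol=1e-12):
    """Iterate a <- (a + h) / 2 and h <- harmonic mean of (a, h).

    The product a * h is invariant under each step, so both sequences
    converge to sqrt(x * y), the geometric mean of the starting values.
    Assumes x and y are positive.
    """
    a, h = x, y
    while abs(a - h) > tol:
        a, h = (a + h) / 2.0, 2.0 / (1.0 / a + 1.0 / h)
    return a

print(arithmetic_harmonic_mean(2.0, 8.0))  # -> 4.0
print(math.sqrt(2.0 * 8.0))                # 4.0, for comparison
```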
See also Arithmetic-geometric mean Generalized mean Geometric mean theorem Geometric standard deviation Harmonic mean Heronian mean Heteroscedasticity Log-normal distribution Muirhead's inequality Product Pythagorean means Quadratic mean Quadrature (mathematics) Quasi-arithmetic mean (generalized f-mean) Rate of return Weighted geometric mean Notes References External links Calculation of the geometric mean of two numbers in comparison to the arithmetic solution Arithmetic and geometric means When to use the geometric mean Practical solutions for calculating geometric mean with different kinds of data Geometric Mean on MathWorld Geometric Meaning of the Geometric Mean Geometric Mean Calculator for larger data sets Computing Congressional apportionment using Geometric Mean Non-Newtonian calculus website Geometric Mean Definition and Formula The Distribution of the Geometric Mean The geometric mean? Means Non-Newtonian calculus
Geometric mean
Physics,Mathematics
3,608
24,367,789
https://en.wikipedia.org/wiki/Nokia%205530%20XpressMusic
The Nokia 5530 XpressMusic is a smartphone by Nokia announced on 15 June 2009. Part of the XpressMusic series of phones, it emphasizes music and multimedia playback. It is Nokia's third touchscreen phone (after the 5800 and N97) based on the Symbian OS (S60) 5th edition platform. In terms of specifications, it sits between the lower-end Nokia 5230 and the higher-end 5800. Bearing a much lower price tag, it lacks the 5800's 3G capability and GPS receiver, but has a more compact and sleek design than both models, as well as stereo speakers. See also Nokia X6 Nokia 5530 series References External links Stress testing a Nokia 5530 Nokia smartphones Portable media players S60 (software platform) Devices capable of speech recognition Mobile phones introduced in 2009
Nokia 5530 XpressMusic
Technology
169
54,697,349
https://en.wikipedia.org/wiki/Metacresol%20purple
Metacresol purple or m-cresol purple, also called m-cresolsulfonphthalein, is a triarylmethane dye and a pH indicator. It is used as a capnographic indicator for detecting end-tidal carbon dioxide to ensure successful tracheal intubation in an emergency. It can be used to measure the pH in subzero temperatures of saline or hypersaline media. In colorimetric capnography, the indicator is incorporated in an aqueous matrix that provides a pH just above the indicator's colour-change range. When exposed to carbon dioxide (CO2), it undergoes a colour change from purple to yellow, because when CO2 dissolves in the matrix, it forms carbonic acid. In chemistry, it has two useful indicator ranges: pH 1.2–2.8: red to yellow pH 7.4–9.0: yellow to purple See also Bromocresol purple References PH indicators Chemicals in medicine Triarylmethane dyes Phenol dyes Purple
Metacresol purple
Chemistry,Materials_science
222
7,997,903
https://en.wikipedia.org/wiki/Polyester%20resin
Polyester resins are synthetic resins formed by the reaction of dibasic organic acids and polyhydric alcohols. Maleic anhydride is a commonly used raw material with diacid functionality in unsaturated polyester resins. Unsaturated polyester resins are used in sheet moulding compound, bulk moulding compound and the toner of laser printers. Wall panels fabricated from polyester resins reinforced with fiberglass, so-called fiberglass reinforced plastic (FRP), are typically used in restaurants, kitchens, restrooms and other areas that require washable low-maintenance walls. They are also used extensively in cured-in-place pipe applications. Departments of Transportation in the USA also specify them for use as overlays on roads and bridges. In this application they are known as Polyester Concrete Overlays (PCO). These are usually based on isophthalic acid and cut with styrene at high levels, usually up to 50%. Polyesters are also used in anchor bolt adhesives, though epoxy-based materials are also used. Many companies have introduced, and continue to introduce, styrene-free systems, mainly due to odor issues but also over concerns that styrene is a potential carcinogen. Drinking water applications also favor styrene-free systems. Most polyester resins are viscous, pale coloured liquids consisting of a solution of a polyester in a reactive diluent which is usually styrene, but can also include vinyl toluene and various acrylates. Unsaturated polyester Unsaturated polyesters are condensation polymers formed by the reaction of polyols (also known as polyhydric alcohols), organic compounds with multiple alcohol or hydroxy functional groups, with unsaturated and in some cases saturated dibasic acids. Typical polyols used are glycols including ethylene glycol, propylene glycol, and diethylene glycol; typical acids used are phthalic acid, isophthalic acid, terephthalic acid, and maleic anhydride. Water, a condensation by-product of esterification reactions, is continuously removed by distillation, driving the reaction to completion via Le Chatelier's principle. Unsaturated polyesters are generally sold to parts manufacturers as a solution of resin in reactive diluent; styrene is the most common diluent and the industry standard. The diluent allows control over the viscosity of the resin, and is also a participant in the curing reaction. The initially liquid resin is converted to a solid by cross-linking chains. This is done by creating free radicals at unsaturated bonds, which propagate in a chain reaction to other unsaturated bonds in adjacent molecules, linking them in the process. Unsaturation is generally in the form of maleate and fumarate species along the polymer chain. Maleate/fumarate generally does not self-polymerize via radical reactions, but readily reacts with styrene. Maleic anhydride and styrene are known to form alternating copolymers, and are in fact the textbook case of this phenomenon. This is one reason that styrene has been so hard to displace in the market as the industry standard reactive diluent for unsaturated polyester resins, despite increasing efforts to displace the material such as California's Proposition 65. The initial free radicals are induced by adding a compound that easily decomposes into free radicals. This compound is known as the catalyst within the industry, but initiator is a more appropriate term. 
Transition metal salts are usually added as a catalyst for the chain-growth crosslinking reaction, and in the industry this type of additive is known as a promoter; the promoter is generally understood to lower the bond dissociation energy of the radical initiator. Cobalt salts are the most common type of promoter used. Common radical initiators used are organic peroxides such as benzoyl peroxide or methyl ethyl ketone peroxide. Polyester resins are thermosetting and, as with other resins, cure exothermically. The use of excessive initiator, especially with a catalyst present, can therefore cause charring or even ignition during the curing process. Excessive catalyst may also cause the product to fracture or form a rubbery material. Unsaturated polyesters (UPR) are utilized in many different industrially relevant markets, but in general are used as the matrix material for various types of composites. Glass fiber-reinforced composites comprise the largest segment in which UPRs are used, and can be processed via SMC, BMC, pultrusion, cured-in-place pipe (known as relining in Europe), filament winding, vacuum molding, spray-up molding, resin transfer molding (RTM), and many more processes; wind turbine blades are another major application. UPRs are also used in non-reinforced applications, with common examples being gel coats, shirt buttons, mine-bolts, bowling ball cores, polymer concrete, and engineered stone/cultured marble. Chemistry In organic chemistry, an ester is formed as the condensation product of a carboxylic acid and an alcohol, with water formed as the condensate by-product. An ester can also be produced with an acyl halide and an alcohol, in which case the condensate by-product is a hydrogen halide. Polyesters are a category of polymers in which ester functionality repeats within the main chain. Polyesters are a classic example of step-growth polymer, in which a difunctional (or higher order) acid or acyl halide is reacted with a difunctional (or higher order) alcohol. Polyesters are produced commercially both as saturated and unsaturated resins. The most common and highest volume produced polyester is Polyethylene terephthalate (PET), which is an example of a saturated polyester and finds utilization in such applications as fibers for clothing and carpet, food and liquid containers (such as water/soda bottles), as well as films. In unsaturated polyester (UPR) chemistry, unsaturation sites are present along the chain, usually by incorporation of maleic anhydride, but maleic acid and fumaric acid are also used. Maleic acid and fumaric acid are isomers where maleic is the cis-isomer and fumaric is the trans-isomer. The ester forms of these two molecules are maleate and fumarate, respectively. When curing a UPR, the fumarate form is known to react more rapidly with the styrene radical, so isomerization catalysts, such as N,N-dimethylacetoacetamide (DMAA), are often employed in the synthesis process, which converts the maleates into fumarates; the isomerization can also be encouraged with increased reaction time and temperature. Within the UPR industry, the classification of the resins is generally based on the primary saturated acid. For example, a resin containing primarily terephthalic acid is known as a Tere resin, a resin containing primarily phthalic anhydride is known as an Ortho resin, and a resin containing primarily isophthalic acid is known as an Iso resin. Dicyclopentadiene (DCPD) is also a common UPR raw material, and can be incorporated in two different ways. 
In one process, the DCPD is cracked in situ to form cyclopentadiene, which can then be reacted with maleate/fumarate groups along the polymer chain via a Diels-Alder reaction. This type of resin is known as a Nadic resin and is referred to as a poor man's Ortho, due to sharing many similar properties of an Ortho resin along with the extremely low cost of DCPD raw material. In another process, maleic anhydride is first opened with water or another alcohol to form maleic acid and is then reacted with DCPD, where an alcohol from the maleic acid reacts across one of the double bonds of the DCPD. This product is then used to end-cap the UPR resin, which yields a product with unsaturation on the end-groups. This type of resin is referred to as a DCPD resin. Ortho resins comprise the most common type of UPR, and many are known as general purpose resins. FRP composites utilizing ortho resins are found in such applications as boat hulls, bath ware, and bowling ball cores. Iso resins are generally on the higher end of UPR products, both because of the relatively higher cost of isophthalic acid and because of the superior properties they possess. Iso resins are the primary type of resin used in gel coat applications; a gel coat is similar to a paint, but is sprayed into a mold before the FRP is molded, leaving a coating on the part. Gel coat resins must have lower color (almost clear) so as to not impart additional color to the part or so that they can be dyed properly. Gel coats must also have strong resistance to UV-weathering and water blistering. Tere resins are often used when high modulus and strength are desired, but the low color properties of an Iso resin are not necessary. Terephthalic acid is generally lower cost than isophthalic acid, but both give similar strength characteristics to a UPR product. There exists a special sub-set of Tere resins, known as PET UPR resins, which are produced by catalytically cracking PET resin in the reactor to yield a mixture of terephthalic acid and ethylene glycol. Additional acids and glycols are then added along with maleic anhydride and a new polymer is produced. The end product is functionally the same as a Tere resin, but can often be lower cost to manufacture as scrap PET can be sourced cheaply. If a glycol-modified PET (PET-G) is used, exceptional properties can be imparted to the resin due to some of the exotic materials used in PET-G production. Tere and PET-UPR resins are used in many applications including cured-in-place pipe. Biodegradation Lichens have been shown to deteriorate polyester resins, as can be seen in archaeological sites in the Roman city of Baelo Claudia, Spain. Advantages Polyester resin offers the following advantages: Adequate resistance to water and a variety of chemicals. Adequate resistance to weathering and ageing. Low cost. Polyesters can withstand temperatures up to 80 °C. Polyesters have good wetting of glass fibres. Relatively low shrinkage of 4–8% during curing. Linear thermal expansion ranges from 100–200 × 10⁻⁶ K⁻¹. 
Disadvantages Polyester resin has the following disadvantages: Strong styrene odour More difficult to mix than other resins, such as a two-part epoxy The toxic nature of its fumes, and especially of its catalyst, MEKP, poses a safety risk if proper protection isn't used Not appropriate for bonding many substrates The finished cure is typically weaker than an equal amount of an epoxy resin See also Polyester Styrene Thermoset polymer matrix Thermosetting polymer Vinyl ester References Polyesters Synthetic resins Thermosetting plastics Polymer chemistry
Polyester resin
Chemistry,Materials_science,Engineering
2,429
4,031,031
https://en.wikipedia.org/wiki/XVRML
xVRML (eXtensible Virtual Reality Modeling Language, usually pronounced ex-vermal) is a standard file format for representing 3-dimensional (3D) interactive computer graphics, designed particularly with the World Wide Web in mind. Format xVRML is a text-file format from the xVRML Project at RIT. xVRML evolved from VRML; it now has an easy-to-learn, XML-based syntax, for which it utilizes an XML Schema to ensure both a clear structure and understandable constraints. Downloads The specifications, documentation, and example files, as well as information about a viewer application (Carina), may all be found at the xVRML Project website. All but the examples may be downloaded from the Project SourceForge site. An extensive and growing object library is available for public use through the xVRML Project site. References External links Graphics file formats XML-based standards Virtual reality Vector graphics markup languages
XVRML
Technology
199
2,900,961
https://en.wikipedia.org/wiki/Total%20body%20irradiation
Total body irradiation (TBI) is a form of radiotherapy used primarily as part of the preparative regimen for haematopoietic stem cell (or bone marrow) transplantation. As the name implies, TBI involves irradiation of the entire body, though in modern practice the lungs are often partially shielded to lower the risk of radiation-induced lung injury. Total body irradiation in the setting of bone marrow transplantation serves to destroy or suppress the recipient's immune system, preventing immunologic rejection of transplanted donor bone marrow or blood stem cells. Additionally, high doses of total body irradiation can eradicate residual cancer cells in the transplant recipient, increasing the likelihood that the transplant will be successful. Dosage Doses of total body irradiation used in bone marrow transplantation typically range from 10 to >12 Gy. For reference, an unfractionated (i.e. single exposure) dose of 4.5 Gy is fatal in 50% of exposed individuals without aggressive medical care. This 10–12 Gy total is typically delivered across multiple fractions to minimise toxicity to the patient. Early research in bone marrow transplantation by E. Donnall Thomas and colleagues demonstrated that this process of splitting TBI into multiple smaller doses resulted in lower toxicity and better outcomes than delivering a single, large dose. The time interval between fractions allows other normal tissues time to repair some of the damage caused. However, the dosing is still high enough that the ultimate result is the destruction of both the patient's bone marrow (allowing donor marrow to engraft) and any residual cancer cells. Non-myeloablative bone marrow transplantation uses lower doses of total body irradiation, typically about 2 Gy, which do not destroy the host bone marrow but do suppress the host immune system sufficiently to promote donor engraftment. Usage in other cancers In addition to its use in bone marrow transplantation, total body irradiation has been explored as a treatment modality for high-risk Ewing sarcoma. However, subsequent findings suggest that TBI in this setting causes toxicity without improving disease control, and TBI is not currently used in the treatment of Ewing sarcoma outside of clinical trials. Fertility Total body irradiation results in infertility in most cases, with recovery of gonadal function occurring in 10–14% of females. The number of pregnancies observed after hematopoietic stem cell transplantation involving such a procedure is lower than 2%. Fertility preservation measures mainly include cryopreservation of ovarian tissue, embryos or oocytes. Gonadal function has been reported to recover in less than 20% of males after TBI. References Transplantation medicine Radiobiology
Total body irradiation
Chemistry,Biology
568
4,807,473
https://en.wikipedia.org/wiki/Homestake%20experiment
The Homestake experiment (sometimes referred to as the Davis experiment or Solar Neutrino Experiment and in original literature called Brookhaven Solar Neutrino Experiment or Brookhaven 37Cl (Chlorine) Experiment) was an experiment headed by astrophysicists Raymond Davis, Jr. and John N. Bahcall in the late 1960s. Its purpose was to collect and count neutrinos emitted by nuclear fusion taking place in the Sun. Bahcall performed the theoretical calculations and Davis designed the experiment. After Bahcall calculated the rate at which the detector should capture neutrinos, Davis's experiment turned up only one third of this figure. The experiment was the first to successfully detect and count solar neutrinos, and the discrepancy in results created the solar neutrino problem. The experiment operated continuously from 1970 until 1994. The University of Pennsylvania took it over in 1984. The discrepancy between the predicted and measured rates of neutrino detection was later found to be due to neutrino "flavour" oscillations. Methodology The experiment took place in the Homestake Gold Mine in Lead, South Dakota. Davis placed a 380 cubic meter (100,000 gallon) tank of perchloroethylene, a common dry-cleaning fluid, 1,478 meters (4,850 feet) underground. A big target deep underground was needed to prevent interference from cosmic rays and to compensate for the very small probability of a successful neutrino capture, and therefore the very low event rate, even with the huge mass of the target. Perchloroethylene was chosen because it is rich in chlorine. Upon interaction with an electron neutrino, a 37Cl atom transforms into the radioactive isotope 37Ar, which can then be extracted and counted. The reaction of the neutrino capture is $\nu_e + {}^{37}\mathrm{Cl} \rightarrow {}^{37}\mathrm{Ar} + e^{-}$. The reaction threshold is 0.814 MeV, i.e. the neutrino should have at least this energy to be captured by the 37Cl nucleus. Because 37Ar has a half-life of 35 days, every few weeks Davis bubbled helium through the tank to collect the argon that had formed (a rough numerical sketch of this production–decay balance appears at the end of the article). A small (few cubic cm) gas counter was filled with the collected few tens of atoms of 37Ar (together with the stable argon) to detect its decays. In such a way, Davis was able to determine how many neutrinos had been captured. Conclusions Davis' figures were consistently very close to one-third of Bahcall's calculations. The first response from the scientific community was that either Bahcall or Davis had made a mistake. Bahcall's calculations were checked repeatedly, with no errors found. Davis scrutinized his own experiment and insisted there was nothing wrong with it. The Homestake experiment was followed by other experiments with the same purpose, such as Kamiokande in Japan, SAGE in the former Soviet Union, GALLEX in Italy, Super Kamiokande, also in Japan, and SNO (Sudbury Neutrino Observatory) in Ontario, Canada. SNO was the first detector able to detect neutrino oscillation, solving the solar neutrino problem. The results of the experiment, published in 2001, revealed that of the three "flavours" between which neutrinos are able to oscillate, Davis's detector was sensitive to only one. After it had been proven that his experiment was sound, Davis shared the 2002 Nobel Prize in Physics for contributions to neutrino physics with Masatoshi Koshiba of Japan, who worked on the Kamiokande and the Super Kamiokande (the prize was also shared with Riccardo Giacconi for his contributions to x-ray astronomy). 
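A side note on the extraction schedule described above: with 37Ar decaying on a ~35-day half-life, the atoms in the tank approach a production–decay equilibrium, so waiting much longer than a few half-lives between extractions gains little signal. The following Python sketch of that saturation curve is purely illustrative; the production rate below is a made-up placeholder, not a measured value:

```python
import math

HALF_LIFE_DAYS = 35.0
DECAY_CONST = math.log(2) / HALF_LIFE_DAYS  # decay constant, per day

def ar37_atoms(production_rate, t_days):
    """Atoms present after t days of constant production R with decay:
    N(t) = (R / lambda) * (1 - exp(-lambda * t))."""
    return (production_rate / DECAY_CONST) * (1.0 - math.exp(-DECAY_CONST * t_days))

R = 0.5  # assumed 37Ar production rate in atoms/day (placeholder value)
for t in (14, 35, 70, 140):
    print(f"day {t}: {ar37_atoms(R, t):.1f} atoms")
# N(t) flattens toward R / lambda, so extracting every few weeks
# already captures most of the attainable signal.
```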
See also Cowan–Reines neutrino experiment (a previous experiment by Reines and Cowan which discovered the antineutrino) Sanford Underground Research Facility References Physics experiments Neutrino observatories
Homestake experiment
Physics
813
238,725
https://en.wikipedia.org/wiki/Loebner%20Prize
The Loebner Prize was an annual competition in artificial intelligence that awarded prizes to the computer programs considered by the judges to be the most human-like. The format of the competition was that of a standard Turing test. In each round, a human judge simultaneously held textual conversations with a computer program and a human being via computer. Based upon the responses, the judge would attempt to determine which was which. The contest was launched in 1990 by Hugh Loebner in conjunction with the Cambridge Center for Behavioral Studies, Massachusetts, United States. In 2004 and 2005, it was held in Loebner's apartment in New York City. Within the field of artificial intelligence, the Loebner Prize is somewhat controversial; the most prominent critic, Marvin Minsky, called it a publicity stunt that does not help the field along. Beginning in 2014, it was organised by the AISB at Bletchley Park. It has also been associated with Flinders University, Dartmouth College, the Science Museum in London, University of Reading and Ulster University, Magee Campus, Derry, UK City of Culture. For the final 2019 competition, the format changed. There was no panel of judges. Instead, the chatbots were judged by the public and there were to be no human competitors. The prize has been reported as defunct as of 2020. Prizes Originally, $2,000 was awarded for the most human-seeming program in the competition. The prize was $3,000 in 2005 and $2,250 in 2006. In 2008, $3,000 was awarded. In addition, there were two one-time-only prizes that have never been awarded. $25,000 is offered for the first program that judges cannot distinguish from a real human and which can convince judges that the human is the computer program. $100,000 is the reward for the first program that judges cannot distinguish from a real human in a Turing test that includes deciphering and understanding text, visual, and auditory input. The competition was planned to end after the achievement of this prize. Competition rules and restrictions The rules varied over the years, and early competitions featured restricted-conversation Turing tests, but since 1995 the discussion has been unrestricted. For the three entries in 2007, Robert Medeksza, Noah Duncan and Rollo Carpenter, some basic "screening questions" were used by the sponsor to evaluate the state of the technology. These included simple questions about the time, what round of the contest it is, etc.; general knowledge ("What is a hammer for?"); comparisons ("Which is faster, a train or a plane?"); and questions demonstrating memory for preceding parts of the same conversation. "All nouns, adjectives and verbs will come from a dictionary suitable for children or adolescents under the age of 12." Entries did not need to respond "intelligently" to the questions to be accepted. For the first time, in 2008, the sponsor introduced a preliminary phase, opening up the competition to previously disallowed web-based entries judged by a variety of invited interrogators. The available rules do not state how interrogators are selected or instructed. Interrogators (who judge the systems) have limited time: 5 minutes per entity in the 2003 competition, 20+ minutes per pair in the 2004–2007 competitions, 5 minutes to conduct simultaneous conversations with a human and the program in 2008–2009, increased to 25 minutes of simultaneous conversation since 2010. Criticisms The prize has long been scorned by experts in the field, for a variety of reasons. 
It is regarded by many as a publicity stunt. Marvin Minsky scathingly offered a "prize" to anyone who could stop the competition. Loebner responded by jokingly observing that Minsky's offering a prize to stop the competition effectively made him a co-sponsor. The rules of the competition have encouraged poorly qualified judges to make rapid judgements. Interactions between judges and competitors were originally very brief, for example effectively 2.5 minutes of questioning, which permitted only a few questions. Questioning was initially restricted to a single topic of the contestant's choice, such as "whimsical conversation", a domain suiting standard chatbot tricks. Competition entrants do not aim at understanding or intelligence but resort to basic ELIZA-style tricks, and successful entrants find that deception and pretense are rewarded. Contests 2003 In 2003, the contest was organised by Professor Richard H. R. Harper and Dr. Lynne Hamill from the Digital World Research Centre at the University of Surrey. Although no bot passed the Turing test, the winner was Jabberwock, created by Juergen Pirner. Second was Elbot (Fred Roberts, Artificial Solutions). Third was Jabberwacky (Rollo Carpenter). 2006 In 2006, the contest was organised by Tim Child (CEO of Televirtual) and Huma Shah. On August 30, the four finalists were announced: Rollo Carpenter Richard Churchill and Marie-Claire Jenkins Noah Duncan Robert Medeksza The contest was held on 17 September in the VR theatre, Torrington Place campus of University College London. The judges included Kevin Warwick, the University of Reading's cybernetics professor; John Barnden, a professor of artificial intelligence specialising in metaphor research at the University of Birmingham; Victoria Butler-Cole, a barrister; and Graham Duncan-Rowe, a journalist. The latter's experience of the event can be found in an article in Technology Review. The winner was 'Joan', based on Jabberwacky, both created by Rollo Carpenter. 2007 The 2007 competition was held on October 21 in New York City. The judges were: computer science professor Russ Abbott, philosophy professor Hartry Field, psychology assistant professor Clayton Curtis and English lecturer Scott Hutchins. No bot passed the Turing test, but the judges ranked the three contestants as follows: 1st: Robert Medeksza, creator of Ultra Hal 2nd: Noah Duncan, a private entry, creator of Cletus 3rd: Rollo Carpenter from Icogno, creator of Jabberwacky The winner received $2,250 and the annual medal. The runners-up received $250 each. 2008 The 2008 competition was organised by professor Kevin Warwick, coordinated by Huma Shah and held on October 12 at the University of Reading, UK. After testing by over one hundred judges during the preliminary phase, in June and July 2008, six finalists were selected from thirteen original entrant artificial conversational entities (ACEs). Five of those invited competed in the finals: Brother Jerome, Peter Cole and Benji Adams Elbot, Fred Roberts / Artificial Solutions Eugene Goostman, Vladimir Veselov, Eugene Demchenko and Sergey Ulasen Jabberwacky, Rollo Carpenter Ultra Hal, Robert Medeksza In the finals, each of the judges was given five minutes to conduct simultaneous, split-screen conversations with two hidden entities. Elbot of Artificial Solutions won the 2008 Loebner Prize bronze award, for most human-like artificial conversational entity, through fooling three of the twelve judges who interrogated it (in the human-parallel comparisons) into believing it was human. 
This came very close to the 30% threshold traditionally required to consider that a program has actually passed the Turing test. Eugene Goostman and Ultra Hal each deceived one judge into believing it was the human. Will Pavia, a journalist for The Times, wrote about his experience; as a judge in the Loebner finals, he was deceived by Elbot and Eugene. Kevin Warwick and Huma Shah reported on the parallel-paired Turing tests. 2009 The 2009 Loebner Prize Competition was held September 6, 2009, at the Brighton Centre, Brighton, UK, in conjunction with the Interspeech 2009 conference. The prize amount for 2009 was $3,000. Entrants were David Levy, Rollo Carpenter, and Mohan Embar, who finished in that order. The writer Brian Christian participated in the 2009 Loebner Prize Competition as a human confederate, and described his experiences at the competition in his book The Most Human Human. 2010 The 2010 Loebner Prize Competition was held on October 23 at California State University, Los Angeles. The 2010 competition was the 20th running of the contest. The winner was Bruce Wilcox with Suzette. 2011 The 2011 Loebner Prize Competition was held on October 19 at the University of Exeter, Devon, United Kingdom. The prize amount for 2011 was $4,000. The four finalists and their chatterbots were Bruce Wilcox (Rosette), Adeena Mignogna (Zoe), Mohan Embar (Chip Vivant) and Ron Lee (Tutor), who finished in that order. That year, a panel of junior judges was added, namely Georgia-Mae Lindfield, William Dunne, Sam Keat and Kirill Jerdev. The results of the junior contest were markedly different from the main contest, with the chatterbots Tutor and Zoe tying for first place and Chip Vivant and Rosette coming in third and fourth place, respectively. 2012 The 2012 Loebner Prize Competition was held on 15 May at Bletchley Park in Bletchley, Buckinghamshire, England, in honor of the Alan Turing centenary celebrations. The prize amount for 2012 was $5,000. The local arrangements organizer was David Levy, who won the Loebner Prize in 1997 and 2009. The four finalists and their chatterbots were Mohan Embar (Chip Vivant), Bruce Wilcox (Angela), Daniel Burke (Adam) and M. Allan (Linguo), who finished in that order. That year, a team from the University of Exeter's computer science department (Ed Keedwell, Max Dupenois and Kent McClymont) conducted the first-ever live webcast of the conversations. 2013 The 2013 Loebner Prize Competition was held, for the only time on the island of Ireland, on September 14 at Ulster University, Magee College, Derry, Northern Ireland, UK. The four finalists and their chatbots were Steve Worswick (Mitsuku), Dr. Ron C. Lee (Tutor), Bruce Wilcox (Rose) and Brian Rigsby (Izar), who finished in that order. The judges were Professor Roger Schank (Socratic Arts), Professor Noel Sharkey (Sheffield University), Professor Minhua (Eunice) Ma (Huddersfield University, then University of Glasgow) and Professor Mike McTear (Ulster University). For the 2013 Junior Loebner Prize Competition, the chatbots Mitsuku and Tutor tied for first place, with Rose and Izar in 3rd and 4th place respectively. 2014 The 2014 Loebner Prize Competition was held at Bletchley Park, England, on Saturday 15 November 2014. The event was filmed live by Sky News. The guest judge was television presenter and broadcaster James May. After 2 hours of judging, 'Rose' by Bruce Wilcox was declared the winner; Wilcox received a cheque for $4,000 and a bronze medal.
The ranks were as follows: Rose – Rank 1 ($4000 & Bronze Medal); Izar – Rank 2.25 ($1500); Uberbot – Rank 3.25 ($1000); and Mitsuku – Rank 3.5 ($500). The judges were Dr Ian Hocking, Writer & Senior Lecturer in Psychology, Christ Church College, Canterbury; Dr Ghita Kouadri-Mostefaoui, Lecturer in Computer Science and Technology, University of Bedfordshire; Mr James May, Television Presenter and Broadcaster; and Dr Paul Sant, Dean of UCMK, University of Bedfordshire. 2015 The 2015 Loebner Prize Competition was again won by 'Rose' by Bruce Wilcox. The judges were Jacob Aaron, physical sciences reporter for New Scientist; Rory Cellan-Jones, technology correspondent for the BBC; Brett Marty, film director and photographer; and Ariadne Tampion, writer. 2016 The 2016 Loebner Prize was held at Bletchley Park on 17 September 2016. After 2 hours of judging the final results were announced. The ranks were as follows: 1st place: Mitsuku 2nd place: Tutor 3rd place: Rose 2017 The 2017 Loebner Prize was held at Bletchley Park on 16 September 2017. This was the first contest where a new message-by-message protocol was used, rather than the traditional one character at a time. The ranks were as follows, and were announced by a Nao robot: 1st place: Mitsuku 2nd place: Midge 3rd place: Uberbot 4th place: Rose 2018 The 2018 Loebner Prize was held at Bletchley Park on 8 September 2018. This was the final time it would be held in its traditional Turing test format and its final time at Bletchley Park. The ranks were as follows: 1st place: Mitsuku 2nd place: Tutor 3rd place: Colombina 4th place: Uberbot 2019 The 2019 Loebner Prize was held at the University of Swansea from 12 to 15 September, as part of a larger exhibition which looked at creativity in computers. The format of the contest changed from being a traditional Turing test, with selected judges and humans, into a 4-day testing session where members of the general public, including schoolchildren, could interact with the bots, knowing in advance that the bots were not humans. Seventeen bots took part instead of the usual 4 finalists. Steve Worswick won for a record 5th time with Mitsuku, which earned him an entry in the Guinness Book of Records. A selected jury of judges also examined the bots and voted for the ones they liked best. The ranks were as follows: Most humanlike chatbot: 1st place: Mitsuku – 24 points 2nd place: Uberbot – 6 points 3rd place: Anna – 5 points Best overall chatbot: 1st place: Mitsuku – 19 points 2nd place: Uberbot – 5 points 3rd place: Arckon – 4 points Winners Official list of winners. See also List of computer science awards Artificial intelligence Glossary of artificial intelligence Robot Artificial general intelligence Confederate effect Computer game bot Turing Test References External links Artificial intelligence competitions Computer science competitions Computer science awards Chatbots
Loebner Prize
Technology
2,906
7,443,433
https://en.wikipedia.org/wiki/Calvert%20L.%20Willey%20Award
The Calvert L. Willey Award has been awarded every year since 1989. It is awarded to a member of the Institute of Food Technologists (IFT) who has displayed meritorious and imaginative service to IFT. The award is named for Calvert L. Willey (1920–1994), who served as Executive Secretary and later Executive Director from 1961 until his retirement in 1987. Willey was given a distinguished service award by IFT at the 1987 Annual Meeting in Las Vegas, Nevada. This distinguished service award was named in his honor and presented for the first time as an annual award at the 1989 Annual Meeting in Chicago, Illinois. It was the first IFT award to be named for a living person. Award winners receive a US$3,000 honorarium and a plaque from IFT. Winners References List of past winners - Official site Food technology awards
Calvert L. Willey Award
Technology
170
52,170,563
https://en.wikipedia.org/wiki/CD4%2B/CD8%2B%20ratio
The CD4+/CD8+ ratio is the ratio of T helper cells (with the surface marker CD4) to cytotoxic T cells (with the surface marker CD8). Both CD4+ and CD8+ T cells contain several subsets. The CD4+/CD8+ ratio in the peripheral blood of healthy adults and mice is about 2:1, and an altered ratio can indicate diseases relating to immunodeficiency or autoimmunity. An inverted CD4+/CD8+ ratio (namely, less than 1/1) indicates an impaired immune system. Conversely, an increased CD4+/CD8+ ratio corresponds to increased immune function. Obesity and dysregulated lipid metabolism in the liver lead to loss of CD4+, but not CD8+, cells, contributing to the induction of liver cancer. Regulatory CD4+ cells decline with expanding visceral fat, whereas CD8+ T-cells increase. Decreased ratio with infection A reduced CD4+/CD8+ ratio is associated with reduced resistance to infection. Patients with tuberculosis show a reduced CD4+/CD8+ ratio. HIV infection leads to low levels of CD4+ T cells (lowering the CD4+/CD8+ ratio) through a number of mechanisms, including killing of infected CD4+ T cells. Acquired immunodeficiency syndrome (AIDS) is (by one definition) a CD4+ T cell count below 200 cells per μL. HIV progresses with declining numbers of CD4+ cells and expanding numbers of CD8+ cells (especially CD8+ memory cells), resulting in high morbidity and mortality. When CD4+ T cell numbers decline below a critical level, cell-mediated immunity is lost, and the body becomes progressively more susceptible to opportunistic infections. A declining CD4+/CD8+ ratio has been found to be a prognostic marker of HIV disease progression. COVID-19 In COVID-19, B cell, natural killer cell, and total lymphocyte counts decline, but both CD4+ and CD8+ cells decline to a far greater extent. Low CD4+ predicted greater likelihood of intensive care unit admission, and CD4+ cell count was the only parameter that predicted length of time for viral RNA clearance. Decreased ratio with aging A declining CD4+/CD8+ ratio is associated with ageing, and is an indicator of immunosenescence. Compared to CD4+ T-cells, CD8+ T-cells show a greater increase in adipose tissue in obesity and aging, thereby reducing the CD4+/CD8+ ratio. Amplification of numbers of CD8+ cells is required for adipose tissue inflammation and macrophage infiltration, whereas numbers of CD4+ cells are reduced under those conditions. Antibodies against CD8+ T-cells reduce inflammation associated with diet-induced obesity, indicating that CD8+ T-cells are an important cause of the inflammation. CD8+ cell recruitment of macrophages into adipose tissue can initiate a vicious cycle of further recruitment of both cell types. Elderly persons commonly have a CD4+/CD8+ ratio less than one. A study of Swedish elderly found that a CD4+/CD8+ ratio less than one was associated with short-term likelihood of death. Immunological aging is characterized by low proportions of naive CD8+ cells and high numbers of memory CD8+ cells, particularly when cytomegalovirus is present. Exercise can reduce or reverse this effect, when not done at extreme intensity and duration. Both effector helper T cells (Th1 and Th2) and regulatory T cells (Treg) carry the CD4 surface marker, such that although total CD4+ T cell numbers decrease with age, the relative proportion of Treg cells among CD4+ T cells increases. The increase in Treg with age results in suppressed immune response to infection, vaccination, and cancer, without suppressing the chronic inflammation associated with aging.
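Computing the ratio from a blood panel is straightforward. The following is a minimal illustrative sketch, not taken from any of the sources above; the function name, the example cell counts, and the use of 1.0 as the inversion threshold (from the definition of an inverted ratio given earlier) are the only assumptions made.

```python
def cd4_cd8_ratio(cd4_count: float, cd8_count: float) -> float:
    """CD4+/CD8+ ratio from absolute T cell counts (cells per uL of blood)."""
    if cd8_count <= 0:
        raise ValueError("CD8+ count must be positive")
    return cd4_count / cd8_count

# A healthy adult profile is roughly 2:1, as described above.
print(cd4_cd8_ratio(cd4_count=900, cd8_count=450))        # 2.0

# An inverted ratio (< 1.0) indicates an impaired immune system,
# e.g. in HIV infection or immunosenescence.
print(cd4_cd8_ratio(cd4_count=300, cd8_count=600) < 1.0)  # True
```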
See also Helper/suppressor ratio List of distinct cell types in the adult human body References Clusters of differentiation Immunology T cells
CD4+/CD8+ ratio
Biology
856
877,912
https://en.wikipedia.org/wiki/Rangeland
Rangelands are grasslands, shrublands, woodlands, wetlands, and deserts that are grazed by domestic livestock or wild animals. Types of rangelands include tallgrass and shortgrass prairies, desert grasslands and shrublands, woodlands, savannas, chaparrals, steppes, and tundras. Rangelands do not include forests lacking grazable understory vegetation, barren desert, farmland, or land covered by solid rock, concrete, or glaciers. Rangelands are distinguished from pasture lands because they grow primarily native vegetation rather than plants established by humans. Rangelands are also managed principally with practices such as managed livestock grazing and prescribed fire rather than more intensive agricultural practices of seeding, irrigation, and the use of fertilizers. Grazing is an important use of rangelands but the term rangeland is not synonymous with grazingland. Livestock grazing can be used to manage rangelands by harvesting forage to produce livestock, changing plant composition, or reducing fuel loads. Fire is also an essential regulator of range vegetation, whether set by humans or resulting from lightning. Fires tend to reduce the abundance of woody plants and promote herbaceous plants, including grasses, forbs, and grass-like plants. The suppression or reduction of periodic wildfires from desert shrublands, savannas, or woodlands frequently invites the dominance of trees and shrubs to the near exclusion of grasses and forbs. Rangelands cover approximately 80 million square kilometers globally, with 9.5 million square kilometers protected and 67 million square kilometers used for livestock production. These areas sustain about 1 billion animals, managed by pastoralists across over 100 countries, illustrating their crucial role in both ecological conservation and agricultural productivity. The United Nations (UN) has declared 2026 the International Year of Rangelands and Pastoralists, with the Food and Agriculture Organization leading the initiative. Etymology and definition The United States Environmental Protection Agency defines rangeland as "lands on which the native vegetation (climax or natural potential plant community) is predominantly grasses, grass-like plants, forbs, or shrubs suitable for grazing or browsing use." The EPA classifies natural grassland and savannas as rangeland, and in some cases includes wetlands, deserts, tundra, and "certain forb and shrub communities." The primary difference between rangeland and pasture is management; rangelands tend to have natural vegetation along with a few introduced plant species, all managed by grazing, while pastures have forage that is adapted for livestock and managed by seeding, mowing, fertilization and irrigation. Types of rangeland According to the UNCCD, 35% of rangelands are deserts and xeric shrublands, 26% tropical and subtropical grasslands, savannas and shrublands, 15% tundra, 13% temperate grasslands, savannas and shrublands, 6% montane grasslands and shrublands, 4% Mediterranean forests, woodlands and scrub, as well as 1% flooded grasslands and savannas. Prairie Prairies are considered part of the temperate grasslands, savannas and shrublands biome by ecologists, based on similar temperate climates, moderate rainfall, and grasses, herbs, and shrubs, rather than trees, as the dominant vegetation type. Temperate grassland regions include the Pampas of Argentina and the steppes of Eurasia.
Grasslands Grasslands are areas where the vegetation is dominated by grasses (Poaceae) and other herbaceous (non-woody) plants, though members of the sedge (Cyperaceae) and rush (Juncaceae) families can also be found. Grasslands occur naturally on all continents except Antarctica. In temperate latitudes, such as northwest Europe and the Great Plains and California in North America, native grasslands are dominated by perennial bunch grass species, whereas in warmer climates annual species form a greater component of the vegetation. Steppe Steppe, in physical geography, refers to a biome region characterized by grassland plain without trees apart from those near rivers and lakes. The prairie (especially the shortgrass and mixed prairie) is an example of a steppe, though it is not usually called such. It may be semi-desert, or covered with grass or shrubs or both, depending on the season and latitude. The term is also used to denote the climate encountered in regions too dry to support a forest, but not dry enough to be a desert. Pampas Pampas are the fertile South American lowlands that include the Argentine provinces of Buenos Aires, La Pampa, Santa Fe, Entre Ríos and Córdoba, most of Uruguay, and the State of Rio Grande do Sul in the southernmost end of Brazil. These vast plains are interrupted only by the low Ventana and Tandil hills near Bahía Blanca and Tandil (Argentina). The climate is mild, with precipitation more or less evenly distributed through the year, making the soils appropriate for agriculture. This area is also one of the distinct physiography provinces of the larger Paraná-Paraguay Plain division. These plains contain unique wildlife because of the different terrains around them. Some of this wildlife includes the rhea, the badger, and the prairie chicken. Shrubland Shrubland is a plant community characterized by vegetation dominated by shrubs, often also including grasses, herbs, and geophytes. Shrubland may either occur naturally or be the result of human activity. It may be the mature vegetation type in a particular region and remain stable over time, or a transitional community that occurs temporarily as the result of a disturbance, such as fire. A stable state may be maintained by regular natural disturbance such as fire or browsing. Shrubland may be unsuitable for human habitation because of the danger of fire. The term "shrubland" was first coined in 1903. Woodland Woodland is a low-density forest forming open habitats with plenty of sunlight and limited shade. Woodlands may support an understory of shrubs and herbaceous plants including grasses. Woodland may form a transition to shrubland under drier conditions or during early stages of primary or secondary succession. Higher densities and areas of trees, whose largely closed canopy provides extensive and nearly continuous shade, are referred to as forest. Savanna Savanna is a grassland ecosystem characterized by the trees being sufficiently small or widely spaced so that the canopy does not close. The open canopy allows sufficient light to reach the ground to support an unbroken herbaceous layer consisting primarily of C4 grasses. Desert Desert is a landscape or region that receives an extremely low amount of precipitation, defined as areas with an average annual precipitation of less than 250 mm (10 in) per year, or as areas where more water is lost by evapotranspiration than falls as precipitation.
In the Köppen climate classification system, deserts are classed as BWh (hot desert) or BWk (temperate desert). In the Thornthwaite climate classification system, deserts would be classified as arid megathermal climates. Tundra Tundra is a biome where the tree growth is hindered by low temperatures and short growing seasons. The term tundra comes through Russian тундра from the Kildin Sami word tūndâr, "uplands" or "treeless mountain tract." There are three types of tundra: Arctic tundra, alpine tundra, and Antarctic tundra. In tundra, the vegetation is composed of dwarf shrubs, sedges and grasses, mosses, and lichens. Scattered trees grow in some tundra. The ecotone (or ecological boundary region) between the tundra and the forest is known as the tree line or timberline. Uses of rangeland Rangelands produce a wide variety of goods and services desired by society, including livestock forage (grazing), wildlife habitat, water, mineral resources, wood products, wildland recreation, open space and natural beauty. The geographic extent and many important resources of rangelands make their proper use and management vitally important to people everywhere. Economic benefits Rangelands are vital economic assets, contributing substantially to national economies, particularly through livestock production. For instance, in Ethiopia, rangelands account for 19% of the national GDP, while in Brazil, they contribute one-third of the agribusiness GDP through cattle farming. These vast areas not only support direct agricultural outputs but also bolster related industries, enhancing employment and promoting economic growth. Their management and sustainability are crucial for continuing these economic contributions and supporting the livelihoods dependent on them. Rangeland degradation challenges The degradation of Earth's extensive rangelands due to overuse, inappropriate cultivation, misuse, climate change, and biodiversity loss represents a significant threat to humanity's food supply and to the well-being or survival of billions of people. In 2024, the United Nations Convention to Combat Desertification (UNCCD) reported that up to 50% of rangelands are degraded. These areas suffer from reduced soil fertility, woody encroachment, erosion, salinization, alkalinization, and soil compaction, which all inhibit plant growth and contribute to drought and fluctuations in precipitation. This degradation is primarily driven by the conversion of pastures to cropland, urban expansion, increasing demands for food, fiber, and fuel, excessive grazing, abandonment by pastoralists, and policies that incentivize overexploitation. The UNCCD observes that the loss of rangeland attracts little public attention and rarely features in international policy discussions. Global extent Rangelands cover up to 8 billion hectares of land globally, some 54% of the terrestrial surface. 78% of rangelands occur in drylands. Canada Rangeland is a prominent feature of rural Canada. Range management is a provincial jurisdiction, and administration and policy regarding range use vary across the country. As in many other Commonwealth countries, public tenures on crown land for the purpose of range activities are common in geographically compatible areas. Reconciling the economic needs of ranchers and the need for environmental conservation is one of the primary themes in modern range discourse. In western Canada, both grassland and forested range are significant.
In British Columbia, 70 percent of grassland range is privately owned, and 60 percent of the total annual livestock forage requirement is provided by grazing on Crown rangeland (34 million hectares), 80 percent of which is forested range. Grassland range predominates in much of the prairie provinces' ranching area; however, forested range is particularly important in the boreal region. Certain rangelands are preserved as provincially protected areas similar to parks; others are managed as community resources. For example, in Alberta since 2003 there has been legislation allowing the creation of "Heritage Rangelands" within the parks system. As of 2012 there were 2 heritage rangelands and 6 proposed future heritage rangelands run by Alberta Parks. There are also 32 provincial grazing reserves located throughout Alberta, administered as public lands by Alberta Sustainable Resource Development. The federal government has administered several "Community Pastures" in Western Canada, on lands reclaimed after suffering erosion during the 1930s. In 2012, it was announced that this federal involvement would be phased out over a six-year period. United States Of the land within the United States borders, 36% is considered rangeland. The western side of the United States is 53% rangeland. Around 399 million acres (1,610,000 km2) of rangeland are privately owned. The Bureau of Land Management manages about 167 million acres (676,000 km2) of publicly owned rangeland, with the United States Forest Service managing approximately 95 million acres (380,000 km2) more. Ranchers may lease portions of this public rangeland and pay a fee based on the number and type of livestock and the period for which they are on the land. Historically, much of the land in the western United States was used for grazing, and much of some states still is. In many of those states, such as Arizona, an open-range law applies which requires a landowner to fence cattle out rather than in; thus cattle are theoretically allowed to roam free. In modern times open-range laws can conflict with urban development, as occasional stray cows, bulls, or even herds wander into subdivisions or onto highways. North American rangelands - grasslands Tall Grass Prairie Mixed Grass Prairie Short Grass Prairie Pacific Bunchgrass Annual Grasslands North American rangelands - shrublands Sagebrush Steppe Salt Desert Shrublands Desert Shrublands Australia Australia's rangelands extend from tropical savannas in the north dominated by summer rainfall, through large areas of desert in central Australia, to the southern rangelands dominated by winter rainfall. They cover approximately 80 per cent of the Australian continent and equate broadly with the 'Outback'. However, rangelands also occur in higher rainfall areas where limitations other than rainfall restrict use to management of the natural landscape. The rangelands are where values and societal benefits are based primarily on natural resources. They are areas which have not been intensively developed for agriculture, but extensive livestock production is a major land use, accounting for 55 per cent of the rangelands. Conservation reserves cover around 11 per cent of the rangelands, which include areas of significant biodiversity and natural attractions on a world scale.
Although mining and petroleum extraction uses a very small percentage of the rangelands, it contributes more to Australia's Gross Domestic Product than any other rangeland industry (cattle, sheep and goat production, tourism, harvesting of native products). Indigenous land tenures of various types cover around 59 per cent of the rangelands and overlap with grazing and conservation uses. Although rangelands cover 80 per cent of Australia's land mass, at the 2016 Census they were home to just over two per cent of the population (394,000 people), with 28 per cent of rangeland residents identifying as Indigenous. South America Rangelands in South America are located in regions with climate ranging from arid to sub-humid. Annual precipitation in these areas ranges from approximately 150 to 1500 mm (6–60 inches). Within South America, rangelands cover about 33% of the total land area. South American rangelands include grasslands, shrublands, savannas, and hot and cold deserts, but exclude hyperarid deserts. Examples of the South American rangelands include the Patagonian Steppe, the Monte, the Pampas, the "Llanos" or "Cerrado," the "Chaco" and the "Caatinga." The change in the intensity and location of tropical thunderstorms and other weather patterns is the driving force in the climates of southern South America. Africa In Kenya, rangelands make up 85% of the land surface area and are largely inhabited by nomadic pastoralists who depend heavily on livestock. This nomadic movement often brings an incursion of diseases, a common one being the rinderpest virus entering the Kenyan wildlife population from the Somali ecosystem. Asia In the past, rangelands in western China supported a pastoral economy and large wildlife populations. The rangelands have since shrunk due to population growth and economic, governmental, and social factors. Rangeland types in China include semi-desert, dry alpine grassland, alpine dwarf shrub, and wetland types. Gallery See also Applied ecology Coastal plain Coastal prairie Experimental range Field Forage Grassland Grass valley Holistic management Meadow Pasture Potrero Plain Prairie Range condition scoring Savanna Steppe Veld References External links Rangelands 1979-2003 archive - freely available volumes published by The Society For Range Management Society for Range Management Bureau of Land Management USDA Forest Service University of Idaho - Rangeland Ecology and Management Information about Australian Rangelands Grasslands Temperate grasslands, savannas, and shrublands
Rangeland
Biology
3,125
3,471,089
https://en.wikipedia.org/wiki/Recrystallization%20%28metallurgy%29
In materials science, recrystallization is a process by which deformed grains are replaced by a new set of defect-free grains that nucleate and grow until the original grains have been entirely consumed. Recrystallization is usually accompanied by a reduction in the strength and hardness of a material and a simultaneous increase in the ductility. Thus, the process may be introduced as a deliberate step in metals processing or may be an undesirable byproduct of another processing step. The most important industrial uses are softening of metals previously hardened or rendered brittle by cold work, and control of the grain structure in the final product. Recrystallization temperature is typically 0.3–0.4 times the melting point for pure metals and 0.5 times for alloys. Definition Recrystallization is defined as the process in which the grains of a crystal structure are replaced by grains with a new structure or a new crystal shape. A precise definition of recrystallization is difficult to state as the process is strongly related to several other processes, most notably recovery and grain growth. In some cases it is difficult to precisely define the point at which one process begins and another ends. Doherty et al. defined recrystallization as: "... the formation of a new grain structure in a deformed material by the formation and migration of high angle grain boundaries driven by the stored energy of deformation. High angle boundaries are those with greater than a 10-15° misorientation" Thus the process can be differentiated from recovery (where high angle grain boundaries do not migrate) and grain growth (where the driving force is only due to the reduction in boundary area). Recrystallization may occur during or after deformation (during cooling or subsequent heat treatment, for example). The former is termed dynamic while the latter is termed static. In addition, recrystallization may occur in a discontinuous manner, where distinct new grains form and grow, or a continuous manner, where the microstructure gradually evolves into a recrystallized microstructure. The different mechanisms by which recrystallization and recovery occur are complex and in many cases remain controversial. The following description is primarily applicable to static discontinuous recrystallization, which is the most classical variety and probably the most understood. Additional mechanisms include (geometric) dynamic recrystallization and strain-induced boundary migration. Secondary recrystallization occurs when a certain very small number of {110}<001> (Goss) grains grow selectively, about one in 10⁶ primary grains, at the expense of many other primary recrystallized grains. This results in abnormal grain growth, which may be beneficial or detrimental for product material properties. The mechanism of secondary recrystallization requires a small and uniform primary grain size, achieved through the inhibition of normal grain growth by fine precipitates called inhibitors. Goss grains are named in honor of Norman P. Goss, the inventor of grain-oriented electrical steel circa 1934. Laws of recrystallization There are several, largely empirical laws of recrystallization: Thermally activated. The rate of the microscopic mechanisms controlling the nucleation and growth of recrystallized grains depends on the annealing temperature; Arrhenius-type equations indicate an exponential relationship. Critical temperature. Following from the previous rule, it is found that recrystallization requires a minimum temperature for the necessary atomic mechanisms to occur.
This recrystallization temperature decreases with annealing time. Critical deformation. The prior deformation applied to the material must be adequate to provide nuclei and sufficient stored energy to drive their growth. Deformation affects the critical temperature. Increasing the magnitude of prior deformation, or reducing the deformation temperature, will increase the stored energy and the number of potential nuclei. As a result, the recrystallization temperature will decrease with increasing deformation. Initial grain size affects the critical temperature. Grain boundaries are good sites for nuclei to form. Since an increase in grain size results in fewer boundaries, this results in a decrease in the nucleation rate and hence an increase in the recrystallization temperature. Deformation affects the final grain size. Increasing the deformation, or reducing the deformation temperature, increases the rate of nucleation faster than it increases the rate of growth. As a result, the final grain size is reduced by increased deformation. Driving force During plastic deformation the work performed is the integral of the stress and strain in the plastic deformation regime. Although the majority of this work is converted to heat, some fraction (~1–5%) is retained in the material as defects—particularly dislocations. The rearrangement or elimination of these dislocations will reduce the internal energy of the system and so there is a thermodynamic driving force for such processes. At moderate to high temperatures, particularly in materials with a high stacking fault energy such as aluminium and nickel, recovery occurs readily and free dislocations will readily rearrange themselves into subgrains surrounded by low-angle grain boundaries. The driving force is the difference in energy between the deformed and recrystallized state, ΔE, which can be estimated from the dislocation density or from the subgrain size and boundary energy (Doherty, 2005): ΔE ≈ ρGb²/2 or ΔE ≈ 3γs/ds, where ρ is the dislocation density, G is the shear modulus, b is the Burgers vector of the dislocations, γs is the subgrain boundary energy and ds is the subgrain size (a rough numerical evaluation of the first expression is sketched below). Nucleation Historically it was assumed that the nucleation rate of new recrystallized grains would be determined by the thermal fluctuation model successfully used for solidification and precipitation phenomena. In this theory it is assumed that as a result of the natural movement of atoms (which increases with temperature) small nuclei would spontaneously arise in the matrix. The formation of these nuclei would be associated with an energy requirement due to the formation of a new interface and an energy liberation due to the formation of a new volume of lower energy material. If the nuclei were larger than some critical radius then they would be thermodynamically stable and could start to grow. The main problem with this theory is that the stored energy due to dislocations is very low (on the order of 0.1–1 MJ m⁻³) while the energy of a grain boundary is quite high (~0.5 J m⁻²). Calculations based on these values found that the observed nucleation rate was greater than the calculated one by some impossibly large factor (~10⁵⁰). As a result, the alternate theory proposed by Cahn in 1949 is now universally accepted. The recrystallized grains do not nucleate in the classical fashion but rather grow from pre-existing sub-grains and cells.
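As a rough check on the orders of magnitude quoted above, the following sketch (illustrative only; the material constants are typical textbook values for cold-worked copper, not figures taken from this article or its references) evaluates the dislocation-based driving force ΔE ≈ ρGb²/2:

```python
# Estimate the stored-energy driving force for recrystallization,
# Delta_E ~ rho * G * b**2 / 2, for a heavily cold-worked metal.
rho = 1e15      # dislocation density, m^-2 (heavily deformed; assumed)
G = 48e9        # shear modulus of copper, Pa (typical textbook value)
b = 2.56e-10    # Burgers vector magnitude for copper, m

delta_E = rho * G * b**2 / 2  # J/m^3
print(f"Stored energy ~ {delta_E / 1e6:.2f} MJ/m^3")  # ~1.57 MJ/m^3
```

With a grain boundary energy of ~0.5 J m⁻², the classical critical nucleus radius 2γ/ΔE at this driving force would be on the order of a micrometre, far too large to arise from thermal fluctuations, which is exactly the objection to the classical nucleation model described above.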
The 'incubation time' is then a period of recovery where sub-grains with low-angle boundaries (<1–2°) begin to accumulate dislocations and become increasingly misoriented with respect to their neighbors. The increase in misorientation increases the mobility of the boundary and so the rate of growth of the sub-grain increases. If one sub-grain in a local area happens to have an advantage over its neighbors (such as locally high dislocation densities, a greater size or favorable orientation) then this sub-grain will be able to grow more rapidly than its competitors. As it grows its boundary becomes increasingly misoriented with respect to the surrounding material until it can be recognized as an entirely new strain-free grain. Kinetics Recrystallization kinetics are commonly observed to follow a sigmoidal profile. There is an initial 'nucleation period' t0 where the nuclei form, and then begin to grow at a constant rate consuming the deformed matrix. Although the process does not strictly follow classical nucleation theory it is often found that such mathematical descriptions provide at least a close approximation. For an array of spherical grains the mean radius R at a time t is (Humphreys and Hatherly 2004): R = G(t − t0), where t0 is the nucleation time and G is the growth rate dR/dt. If nuclei form at a constant rate Ṅ per unit volume and the grains are assumed to be spherical, then the volume fraction will be: f = (π/3)ṄG³t⁴. This equation is valid in the early stages of recrystallization, when f << 1 and the growing grains are not impinging on each other. Once the grains come into contact the rate of growth slows and is related to the fraction of untransformed material (1 − f) by the Johnson-Mehl equation: f = 1 − exp(−(π/3)ṄG³t⁴). While this equation provides a better description of the process it still assumes that the grains are spherical, the nucleation and growth rates are constant, the nuclei are randomly distributed and the nucleation time t0 is small; a short numerical sketch of these expressions is given below. In practice few of these are actually valid and alternate models need to be used. It is generally acknowledged that any useful model must not only account for the initial condition of the material but also the constantly changing relationship between the growing grains, the deformed matrix and any second phases or other microstructural factors. The situation is further complicated in dynamic systems where deformation and recrystallization occur simultaneously. As a result, it has generally proven impossible to produce an accurate predictive model for industrial processes without resorting to extensive empirical testing. Since this may require the use of industrial equipment that has not actually been built, there are clear difficulties with this approach. Factors influencing the rate The annealing temperature has a dramatic influence on the rate of recrystallization, which is reflected in the above equations. However, for a given temperature there are several additional factors that will influence the rate. The rate of recrystallization is heavily influenced by the amount of deformation and, to a lesser extent, the manner in which it is applied. Heavily deformed materials will recrystallize more rapidly than those deformed to a lesser extent. Indeed, below a certain deformation recrystallization may never occur. Deformation at higher temperatures will allow concurrent recovery, and so such materials will recrystallize more slowly than those deformed at room temperature, e.g. contrast hot and cold rolling.
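The following is a minimal numerical sketch of the Johnson-Mehl (JMAK-type) expressions reconstructed above. The nucleation and growth rates used are arbitrary illustrative assumptions, not data from Humphreys and Hatherly or any other source cited here:

```python
import math

def jmak_fraction(t, n_dot, growth_rate, t0=0.0):
    """Recrystallized volume fraction f(t) from the Johnson-Mehl (JMAK)
    equation, assuming spherical grains, a constant nucleation rate n_dot
    (nuclei m^-3 s^-1) and a constant radial growth rate (m s^-1)."""
    if t <= t0:
        return 0.0
    # Extended (unimpinged) volume fraction; valid on its own only for f << 1.
    x = (math.pi / 3.0) * n_dot * growth_rate**3 * (t - t0)**4
    # Impingement correction: f = 1 - exp(-x).
    return 1.0 - math.exp(-x)

# Arbitrary rates for illustration; real values depend strongly on the
# annealing temperature, stored energy and material, as discussed below.
n_dot, growth_rate, t0 = 1e10, 1e-6, 5.0
for t in (10, 50, 100, 200):
    print(f"t = {t:3d} s  f = {jmak_fraction(t, n_dot, growth_rate, t0):.3f}")
# The printed fractions rise sigmoidally from ~0.000 at 10 s to ~1.000 at 200 s.
```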
In certain cases deformation may be unusually homogeneous or occur only on specific crystallographic planes. The absence of orientation gradients and other heterogeneities may prevent the formation of viable nuclei. Experiments in the 1970s found that molybdenum deformed to a true strain of 0.3 recrystallized most rapidly when deformed in tension, and at decreasing rates for wire drawing, rolling and compression (Barto & Ebert 1971). The orientation of a grain and how the orientation changes during deformation influence the accumulation of stored energy and hence the rate of recrystallization. The mobility of the grain boundaries is influenced by their orientation, and so some crystallographic textures will result in faster growth than others. Solute atoms, both deliberate additions and impurities, have a profound influence on the recrystallization kinetics. Even minor concentrations may have a substantial influence, e.g. 0.004% Fe increases the recrystallization temperature by around 100 °C (Humphreys and Hatherly 2004). It is currently unknown whether this effect is primarily due to the retardation of nucleation or the reduction in the mobility of grain boundaries, i.e. growth. Influence of second phases Many alloys of industrial significance have some volume fraction of second phase particles, either as a result of impurities or from deliberate alloying additions. Depending on their size and distribution, such particles may act to either encourage or retard recrystallization. Small particles Recrystallization is prevented or significantly slowed by a dispersion of small, closely spaced particles due to Zener pinning on both low- and high-angle grain boundaries. The resulting pinning pressure directly opposes the driving force arising from the dislocation density and will influence both the nucleation and growth kinetics. The effect can be rationalized with respect to the particle dispersion level Fv/r, where Fv is the volume fraction of the second phase and r is the particle radius. At low Fv/r the grain size is determined by the number of nuclei, and so initially may be very small. However, the grains are unstable with respect to grain growth and so will grow during annealing until the particles exert sufficient pinning pressure to halt them. At moderate Fv/r the grain size is still determined by the number of nuclei, but now the grains are stable with respect to normal growth (while abnormal growth is still possible). At high Fv/r the unrecrystallized deformed structure is stable and recrystallization is suppressed. Large particles The deformation fields around large (over 1 μm) non-deformable particles are characterised by high dislocation densities and large orientation gradients and so are ideal sites for the development of recrystallization nuclei. This phenomenon, called particle stimulated nucleation (PSN), is notable as it provides one of the few ways to control recrystallization by controlling the particle distribution. The size and misorientation of the deformed zone are related to the particle size, and so there is a minimum particle size required to initiate nucleation. Increasing the extent of deformation will reduce the minimum particle size, leading to a PSN regime in size-deformation space. If the efficiency of PSN is one (i.e. each particle stimulates one nucleus), then the final grain size will be simply determined by the number of particles. Occasionally the efficiency can be greater than one if multiple nuclei form at each particle, but this is uncommon.
The efficiency will be less than one if the particles are close to the critical size, and large fractions of small particles will actually prevent recrystallization rather than initiating it (see above). Bimodal particle distributions The recrystallization behavior of materials containing a wide distribution of particle sizes can be difficult to predict. This is compounded in alloys where the particles are thermally unstable and may grow or dissolve with time. In various systems, abnormal grain growth may occur, giving rise to unusually large crystallites growing at the expense of smaller ones. The situation is simpler in bimodal alloys, which have two distinct particle populations. An example is Al-Si alloys, where it has been shown that even in the presence of very large (>5 μm) particles the recrystallization behavior is dominated by the small particles (Chan & Humphreys 1984). In such cases the resulting microstructure tends to resemble one from an alloy with only small particles. Recrystallization temperature The recrystallization temperature is the temperature at which recrystallization can occur for a given material and processing conditions. This is not a set temperature and is dependent upon factors including the following: Increasing annealing time decreases the recrystallization temperature Alloys have higher recrystallization temperatures than pure metals Increasing the amount of cold work decreases the recrystallization temperature Smaller cold-worked grain sizes decrease the recrystallization temperature See also Phase diagram References Metallurgy Phase transitions
Recrystallization (metallurgy)
Physics,Chemistry,Materials_science,Engineering
3,066
53,021,322
https://en.wikipedia.org/wiki/Fusome
The fusome is a membranous structure found in the developing germ cell cysts of many insect orders. Initial description of the fusome occurred in the 19th century, and since then the fusome has been extensively studied in Drosophila melanogaster male and female germline development. This structure has roles in maintaining germline cysts, coordinating the number of mitotic divisions prior to meiosis, and oocyte determination, by serving as a structure for intercellular communication. Structure In D. melanogaster, germline cysts originate from a single germline stem cell and form through four mitotic divisions with incomplete cytokinesis. Incomplete cytokinesis results in intercellular bridges connecting every cell in the cyst, called ring canals. The four mitotic divisions result in cysts of 16 cells connected by 15 ring canals. The fusome is composed of membrane vesicles and originates from endoplasmic reticulum. Fusome material is inside ring canals and can range in size from 1 to 10 μm depending on the stage of development. Fusome development The spectrosome is a round structure in germline stem cells that develops into the fusome in cyst cells. In females, the fusome partitions asymmetrically into daughter cells by attaching to one spindle pole during mitosis, resulting in one cell receiving all of the existing fusome material. New fusome material is then generated de novo in the ring canal connecting the two cells, and the two fusome parts fuse together to connect the cells. Asymmetric fusome partitioning and new formation followed by fusion occur at each mitotic division. In spermatogenesis, the fusome partitioning is symmetric and the fusome is still present during the meiotic divisions. Fusome components Many proteins and organelles associate with the fusome throughout germ cell development. Cytoskeleton components such as alpha and beta spectrins, hu-li tai shao (hts), and ankyrin were the first proteins identified in the fusome. Centrosomes travel along the fusome, and the fusome is involved in microtubule organization. The interactions between the fusome and microtubules result in cyst polarity in oogenesis. Associations between the fusome and microtubules change throughout the cell cycle. Mitochondria associate with the fusome and travel through ring canals to the oocyte. Microtubules travel through ring canals and form the tracks for transport of materials between cells. Function The fusome has numerous functions as a structure necessary for cell-cell communication in developing germ cell cysts. The fusome connects cells, allowing for transport of proteins and RNAs between cells and for synchronous activities. Mutations in essential fusome components can result in infertility. Role in cell cycle synchrony Developing cells in germline cysts undergo mitotic divisions synchronously, and in males all cells in a cyst also undergo meiosis synchronously. The fusome is a track where an event can happen and then feedback mechanisms quickly communicate to each cell to ensure a specific outcome occurs simultaneously in every cell. Cells in a cyst fail to divide synchronously if the fusome is disrupted. The rosette formation of germline cyst cells allows cells to be in the closest configuration for communication. Throughout the cell cycle, different cyclins associate with the fusome to induce synchronous cell divisions. Cyclin A and Cyclin E localize to the fusome in female germline cysts and are required for the correct number of mitotic divisions to occur.
Abnormal cyclin levels result in too few or too many divisions. Cyclin E at the fusome is phosphorylated for degradation by the SCF complex, and if it is not degraded, an extra division occurs. The fusome may be the degradation site for other cell cycle proteins. Myt1 kinase inhibits CycA/Cdk1 in males during G2. Without Myt1 regulation, fusome and centrosome behavior is abnormal, resulting in cells with irregular spindles. Differences in male vs female fusomes In females, the fusome plays a role in cell fate and differentiation. Asymmetric fusome distribution and centriole orientation determine which cell in the developing female germline cyst becomes the oocyte. One of the two cells from the first division within the cyst becomes the oocyte and contains the most fusome material. The fusome degrades after the 16-cell cyst forms. In females, the connections are the channels through which nurse cells send proteins and RNAs to the oocyte along polarized microtubules. In males, the fusome is necessary for ensuring quality control in individual cysts. DNA damage in one cell leads to all cells in a cyst dying by communication through the fusome, either by disseminating a death signal or through additive DNA damage inducing apoptosis. This ensures mature sperm cells have intact genomes before fertilizing an egg. In addition, the fusome connections ensure that haploid spermatids have proteins and RNA made by the other chromosomes, providing "gamete equivalency". Similar structures in other animals Fusomes were previously thought to be specific to insect gametogenesis. Fusome-like structures have been identified in Xenopus laevis oogenesis by electron microscopy and immunostaining for fusome components such as spectrin and hts. Intercellular bridges also connect developing germ cells in mammals, contributing to cell cycle synchrony and gamete quality control by sharing substances between cells. Future studies are required to elucidate all of the functions that arise from cell-cell communication through intercellular bridges. A further open question is why some organisms lack fusomes: whether such organisms have another structure that carries out the role of the fusome, or whether these roles are not necessary in their germline cyst development. See also Intercellular junctions Gametogenesis Spectrin Cyclin References PG Wilson (2005). Centrosome inheritance in the male germ line of Drosophila requires hu-li tai-shao function. Cell Biol Int 29(5):360-9. External links Huynh JR. (2006) Fusome as a Cell-Cell Communication Channel of Drosophila Ovarian Cyst. In: Cell-Cell Channels. Springer, New York, NY. https://www.ncbi.nlm.nih.gov/books/NBK6300/ http://www.oxfordreference.com/view/10.1093/acref/9780195307610.001.0001/acref-9780195307610-e-2383?rskey=LqAWUj&result=2381 Lighthouse, D. V., M. Buszczak, and A. C. Spradling. (2008). New components of the Drosophila fusome suggest it plays novel roles in signaling and transport. Dev Biol 317: 59–71. doi:10.1016/j.ydbio.2008.02.009 de Cuevas, M., M. A. Lilly, and A. C. Spradling. (1997). Germline cyst formation in Drosophila. Annu. Rev. Genet. 31: 405–428. DOI: 10.1146/annurev.genet.31.1.405 Yamashita, Y. M., H. Yuan, J. Cheng, and A. J. Hunt. (2010). Polarity in stem cell division: asymmetric stem cell division in tissue homeostasis. Cold Spring Harb Perspect Biol 2:a001313 doi: 10.1101/cshperspect.a001313 Rieger R., Michaelis A., Green M. M. (1976). Glossary of genetics and cytogenetics: Classical and molecular.
Heidelberg – New York: Springer-Verlag. King R. C., Stansfield W. D. (1998): Dictionary of genetics. Oxford University Press, New York, Oxford. Cell biology
Fusome
Biology
1,743
28,654,390
https://en.wikipedia.org/wiki/Multi-function%20structure
A multi-function material is a composite material. The traditional approach to the development of structures is to address the load-carrying function and other functional requirements separately. Recently, however, there has been increased interest in the development of load-bearing materials and structures which have integral non-load-bearing functions, guided by recent discoveries about how multifunctional biological systems work. Introduction With conventional structural materials it has been difficult to achieve simultaneous improvement in multiple structural functions, but the increasing use of composite materials has been driven in part by the potential for such improvements. The integrated functions can range from mechanical to electrical and thermal. The most widely used composites have polymer matrix materials, which are typically poor conductors; enhanced conductivity can be achieved, for instance, by reinforcing the composite with carbon nanotubes. Functions Among the many functions that can be attained are power transmission, electrical/thermal conductivity, sensing and actuation, energy harvesting/storage, self-healing capability, electromagnetic interference (EMI) shielding, recyclability and biodegradability. Compare functionally graded materials, which are composite materials in which the composition or the microstructure is locally varied so that a desired variation of the local material properties is achieved; functionally graded materials can likewise be designed for specific functions and applications. Applications include re-configurable aircraft wings, shape-changing aerodynamic panels for flow control, variable-geometry engine exhausts, turbine blades, wind turbine reconfiguration at different wind speeds, microelectromechanical systems (micro-switches), mechanical memory cells, valves, micropumps, adjustable panel positioning in solar cells, innovative architecture (adaptive shape panels for roofs and windows), flexible and foldable electronic devices, and optics (shape-changing mirrors for active focusing in adaptive optical systems). References Composite materials Structural engineering Mechanics Mechanical engineering
Multi-function structure
Physics,Engineering
379
60,252,975
https://en.wikipedia.org/wiki/Modular%20switch
A modular switch or chassis switch is a type of network switch which can be configured using field-replaceable units. These units, often referred to as blades, can add more ports, bandwidth, and capabilities to a switch. The blades can be heterogeneous, which allows for a network based on multiple different protocols and cable types. Blades can typically be configured in a parallel or failover configuration, allowing for higher bandwidth or for redundancy in the event of failure. Modular switches also typically support hot-swapping of switch modules, which can be very important in managing downtime. Modular switches also support additional line cards that can provide the switch with functions that would previously have been unavailable, such as a firewall. An example of a modular computer network switch is the Cisco Catalyst 6500, which can be configured with up to 13 slots and supports connections from RJ45 to QSFP+. See also Stackable switch References Further reading Examples of Network Equipment, University of Aberdeen Internet Communications Engineering Course, 2019. Introducing Backpack: Our second-generation modular open switch, Facebook Backpack introduction Networking hardware
Modular switch
Technology,Engineering
228
26,652,374
https://en.wikipedia.org/wiki/Cyclic%20polytope
In mathematics, a cyclic polytope, denoted C(n, d), is a convex polytope formed as a convex hull of n distinct points on a rational normal curve in Rd, where n is greater than d. These polytopes were studied by Constantin Carathéodory, David Gale, Theodore Motzkin, Victor Klee, and others. They play an important role in polyhedral combinatorics: according to the upper bound theorem, proved by Peter McMullen and Richard Stanley, the boundary Δ(n,d) of the cyclic polytope C(n,d) maximizes the number fi of i-dimensional faces among all simplicial spheres of dimension d − 1 with n vertices. Definition The moment curve in Rd is defined by x(t) = (t, t^2, ..., t^d). The d-dimensional cyclic polytope with n vertices is the convex hull of n distinct points x(t1), x(t2), ..., x(tn), with t1 < t2 < ... < tn, on the moment curve. The combinatorial structure of this polytope is independent of the points chosen, and the resulting polytope has dimension d and n vertices. Its boundary is a (d − 1)-dimensional simplicial polytope denoted Δ(n,d). Gale evenness condition The Gale evenness condition provides a necessary and sufficient condition to determine a facet on a cyclic polytope. Let T = {t1, t2, ..., tn}. Then a d-subset S of T forms a facet of C(n,d) if and only if any two elements of T outside S are separated by an even number of elements from S in the sequence (t1, t2, ..., tn); a small computational sketch of this condition is given at the end of this article. Neighborliness Cyclic polytopes are examples of neighborly polytopes, in that every set of at most d/2 vertices forms a face. They were the first neighborly polytopes known, and Theodore Motzkin conjectured that all neighborly polytopes are combinatorially equivalent to cyclic polytopes, but this is now known to be false. Number of faces The number of i-dimensional faces of the cyclic polytope Δ(n,d) is given by the formula fi = (n choose i+1) for 0 ≤ i < d/2, and the remaining face numbers are completely determined via the Dehn–Sommerville equations. Upper bound theorem The upper bound theorem states that cyclic polytopes have the maximum possible number of faces for a given dimension and number of vertices: if Δ is a simplicial sphere of dimension d − 1 with n vertices, then fi(Δ) ≤ fi(Δ(n,d)) for i = 0, 1, ..., d − 1. The upper bound conjecture for simplicial polytopes was proposed by Theodore Motzkin in 1957 and proved by Peter McMullen in 1970. Victor Klee suggested that the same statement should hold for all simplicial spheres, and this was indeed established in 1975 by Richard P. Stanley using the notion of a Stanley–Reisner ring and homological methods. See also Combinatorial commutative algebra References Polyhedral combinatorics
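The Gale evenness condition lends itself to direct computation. The following is a small illustrative sketch (the function name and the 0-indexed vertex labels are assumptions of this sketch, not taken from the literature cited above) that enumerates the facets of C(n, d):

```python
from itertools import combinations

def gale_facets(n, d):
    """Enumerate facets of the cyclic polytope C(n, d) via the Gale
    evenness condition. Vertices are labeled 0..n-1 in increasing order
    of their parameters t_1 < ... < t_n on the moment curve."""
    facets = []
    for S in combinations(range(n), d):
        members = set(S)
        outside = [v for v in range(n) if v not in members]
        # Facet test: every pair of vertices outside S must be separated
        # by an even number of elements of S in the vertex sequence.
        if all(sum(1 for s in S if a < s < b) % 2 == 0
               for a, b in combinations(outside, 2)):
            facets.append(S)
    return facets

# C(6, 3) is a simplicial 3-polytope with 6 vertices, so by Euler's
# formula it has 2*6 - 4 = 8 triangular facets.
print(len(gale_facets(6, 3)))  # -> 8
```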
Cyclic polytope
Mathematics
553
3,563,269
https://en.wikipedia.org/wiki/Purchase%20line
The Purchase Line is the name commonly given to the line dividing Indian from British Colonial lands established in the Treaty of Fort Stanwix of 1768 in western Pennsylvania. In New York State documents, it is referred to as the Line of Property. That article contains the treaty text and other sections. History The relevant section of the treaty reads: "from thence" (Kittanning) "a direct Line to the nearest Fork of the west branch of Susquehanna". This line was not clearly defined until a meeting between Indian and Pennsylvania representatives in 1773 at the well-known "Canoe Place", the upper limit of canoe navigation on the Susquehanna, at its confluence with Cush Cushion Creek at present-day Cherry Tree, Pennsylvania. This was agreed to as the "nearest point" of the treaty. This became the tri-point between present-day Clearfield, Cambria, and Indiana counties, although the borough of Cherry Tree, Pennsylvania was later included entirely in Indiana County for convenience. The line is still a boundary through most of its length: in Armstrong County, running ESE from Kittanning, it separates the townships of Rayburn and Valley (north) from Manor and Kittanning (south). There is a gap through the town of Cowanshannock, but at the knick point of the boundary between Armstrong County and Indiana County the line begins again, separating the Indiana County townships of South Mahoning, East Mahoning, Grant and Montgomery (north) from Washington, Rayne and Green (south). The hamlet of Purchase Line is on PA 286 in Green township just south of the actual line. Maps References "The Documentary History of the State of New York", by E.B. O'Callaghan, M.D.; Albany: Weed, Parsons & Co., 1850 (Vol. 1 pp. 379–381, text of treaty) External links Purchase Line, PA Cherry Tree, PA Kittanning, PA Pre-statehood history of Pennsylvania Aboriginal title in the United States Borders
Purchase line
Physics
413
5,666,922
https://en.wikipedia.org/wiki/List%20of%20plants%20of%20Atlantic%20Forest%20vegetation%20of%20Brazil
A list of native plants found in the Atlantic Forest Biome of southeastern and southern Brazil. Additions occur as botanical discoveries and reclassifications are presented. They are grouped under their botanical Families. Acanthaceae Mendoncia velloziana Mart. Mendoncia puberula Mart. Aphelandra squarrosa Nees Aphelandra stephanophysa Nees Aphelandra rigida Glaz. et Mildbr. Justicia polita (Nees) Profice Justicia clausseniana (Nees) Profice Justicia nervata (Lindau) Profice Amaranthaceae Pfaffia pulverulenta (Mart.) Kuntze Amaryllidaceae Hippeastrum calyptratum Herb. Anacardiaceae Astronium fraxinifolium Schott Astronium graveolens Jacq. Tapirira guianensis Aubl. Annonaceae Annona cacans Warm. Duguetia salicifolia R.E.Fr. Guatteria australis A.St.-Hil. Guatteria dusenii R.E.Fr. Guetteria nigrescens Mart. Rollinia laurifolia Schltdl. Rollinia sylvatica (A.St.-Hil.) Mart. Rollinia xylopiifolia (A.St.-Hil.) R.E.Fr. Xylopia brasiliensis Spreng. Apocynaceae Aspidosperma cylindrocarpon Müll.Arg. Aspidosperma melanocalyx Müll.Arg. Aspidosperma parvifolium A.DC. Forsteronia refracta Müll.Arg. Mandevilla funiformis (Vell.) K.Schum. Mandevilla pendula (Ule) Woodson Odontadenia lutea (Vell.) Markgr. Peschiera australis (Müll.Arg.) Miers Aquifoliaceae Ilex breviscupis Reissek Ilex integerrima Reissek Ilex microdonta Reissek Ilex paraguariensis A.St.-Hil. Ilex taubertiana Loes. Ilex theezans Mart. Ilex pubiflora Reissek Araceae Anthurium galeottii K.Koch. Anthurium harrisii G.Don Anthurium longifolium G.Don Anthurium lhotzkyanum Schott Anthurium scandens (Aubl.) Engl. subsp. scandens Anthurium solitarium Schott Anthurium theresiopolitanum Engl. Asterostigma luschnatianum Schott Philodendron appendiculatum Nadruz et Mayo Philodendron altomacaense Nadruz et Mayo Philodendron edmundoi G.M.Barroso Philodendron eximium Schott Philodendron fragile Nadruz et Mayo Philodendron hatschbachii Nadruz et Mayo Philodendron roseopetiolatum Nadruz et Mayo Philodendron ochrostemon Schott Philodendron ornatum Schott Philodendron propinquum Schott Xanthosoma sagittifolium (L.) Schott Araliaceae Didymopanax acuminatus Marchal Didymopanax angustissimus Marchal Oreopanax capitatus (Jacq.) Decne. et Planch. Araucariaceae Araucaria angustifolia (Bertol.) Kuntze Arecaceae Astrocaryum aculeatissimum (Schott) Burret Attalea dubia (Mart.) Burret Euterpe edulis Mart. Geonoma pohliana Mart. Geonoma wittigiana Glaz. ex Drude Lytocaryum hoehnei (Burret) Toledo Lytocaryum insigne (Drude) Toledo Asclepiadaceae Ditassa mucronata Mart. Gonioanthela hilariana (E.Fourn.) Malme Jobinia lindbergii E.Fourn. Jobinia hatschbachii Fontella et E.A.Schwarz Jobinia paranaensis Fontella et C.Valente Oxypetalum insigne var. glaziovii (E.Fourn.) Fontella et E. A.Schwarz Oxypetalum lutescens E.Fourn. Oxypetalum pachuglossum Decne. Macroditassa lagoensis (E.Fourn.) Malme Macroditassa laxa (Malme) Fontella et de Lamare Matelea glaziovii (E.Fourn.) Morillo Asteraceae Baccharis brachylaenoides DC. var. brachylaenoides Baccharis intermixta Gardner Baccharis microdonta DC. Baccharis semiserrata DC. var. semiserrata Baccharis trimera (Less.) DC. Dasyphyllum brasiliense (Spreng.) Cabrera Dasyphyllum spinescens (Less.) Cabrera Dasyphyllum tomentosum var. multiflorum (Baker) Cabrera Eupatorium adamantium Gardner Eupatorium pyrifolium DC. Eupatorium rufescens P.W.Lund. ex DC. Eupatorium vauthierianum DC. Gochnatia rotundifolia Less. Hatschbachiella polyclada (Dusén ex Malme) R.M.King & H.Rob. Mikania acuminata DC. Mikania aff. myriantha DC. Mikania argyreiae DC. 
Mikania buddleiaefolia DC. Mikania cabrerae G.M.Barroso Mikania chlorolepis Baker Mikania conferta Gardner Mikania glomerata Spreng. Mikania hirsutissima DC. Mikania lanuginosa DC. Mikania lindbergii Baker var. lindbergii Mikania lindbergii var. collina Baker Mikania microdonta DC. Mikania rufescens Sch. Bip. ex Baker Mikania trinervis Hook. et Arn. Mikania vitifolia DC. Mutisia speciosa Aiton ex. Hook. Piptocarpha macropoda (DC.) Baker Piptocarpha oblonga (Gardner) Baker Piptocarpha quadrangularis (Vell.) Baker Piptocarpha reitziana Cabrera Senecio brasiliensis (Spreng.) Less. Senecio desiderabilis Vell. Senecio glaziovii Baker Senecio organensis Casar. Symphyopappus itatiayensis R.M.King et H.Rob. Vanillosmopsis erythropappa (DC.) Sch.Bip. Vernonia aff. puberula Less. Vernonia diffusa Less. Vernonia discolor (Spreng.) Less. Vernonia macahensis Glaz. ex G.M.Barroso Vernonia macrophylla Less. Vernonia petiolaris DC. Vernonia puberula Less. Vernonia stellata (Spreng.) S.F.Blake Wunderlichia insignis Baill. Balanophoraceae Langsdorffia hipogaea Mart. Scybalium glaziovii Eichler Basellaceae Boussingaultia tucumanensis var. brasiliensis Hauman Begoniaceae Begonia angularis Raddi var. angularis Begonia arborescens Raddi Begonia coccinea Ruiz ex Klotzsch Begonia collaris Brade Begonia cucullata Willd. var. cucullata Begonia dentatiloba A.DC. Begonia digitata Raddi Begonia fischeri Schrank Begonia fruticosa A.DC. Begonia isoptera Dryand. Begonia herbacea Vell. Begonia hispida Schott ex A.DC. var. hispida Begonia hugelii Hort.Berol. ex A.DC. Begonia integerrima Spreng. var. integerrima Begonia lobata Schott Begonia semidigitata Brade Begonia paleata A.DC. Begonia pulchella Raddi Begonia solananthera A.DC. Begonia valdensium A.DC. var. valdensium Bignoniaceae Anemopaegma chamberlaynii (Sims) Bureau & K.Schum. Callichlamys latifolia (Rich.) K. Schum. Fridericia speciosa Mart. Haplolophium bracteatum Cham. Lundia corymbifera (Vahl) Sandwith Schlegelia parviflora (Oerst.) Monach. Stizophyllum perforatum (Cham.) Miers Tabebuia chrysotricha (Mart. ex A.DC.) Standl. Tabebuia heptaphylla (Vell.) Toledo Urbanolophium glaziovii (Bureau & K.Schum.) Melch. Bombacaceae Bombacopsis glabra (Pasq.) A.Robyns Chorisia speciosa A.St.-Hil. – Floss silk tree Eriotheca candolleana (K.Schum.) A.Robyns Spirotheca rivieri (Decne.) Ulbrich Boraginaceae Cordia ecalyculata Vell. Cordia ochnacea DC. Cordia sellowiana Cham. Cordia trichoclada DC. Tournefortia breviflora DC. Bromeliaceae Aechmea blanchetiana (Baker) L.B.Sm. Aechmea bromeliifolia (Rudge) Baker Aechmea caesia E.Morren ex Baker Aechmea pineliana (Brongn.ex Planch.) Baker var. pineliana Ananas ananassoides (Baker) L.B.Sm. Billbergia amoena var. rubra M.B.Foster Billbergia pyramidalis var. concolor L.B.Sm. Billbergia pyramidalis (Sims) var. pyramidalis Lindl. Billbergia sanderiana E.Morren Canistrum lindenii (Regel) Mez Neoregelia carolinae (Beer) L.B.Sm. Neoregelia bragarum (E.Pereira & L.B.Sm.) Leme Neoregelia farinosa (Ule) L.B.Sm. Neoregelia lymaniana R.Braga & Sucre Nidularium innocentii Lem. var. innocentii Nidularium microps E.Morren ex Mez var. microps Nidularium procerum Lindm. Nidularium scheremetiewii Regel Pitcairnia carinata Mez Pitcairnia flammea Lindl. var. flammea Quesnelia lateralis Wawra Quesnelia liboniana (De Jonghe) Mez Tillandsia aeris-incola (Mez) Mez Tillandsia geminiflora Brongn. var. geminiflora Tillandsia spiculosa Griseb. var. spiculiosa Tillandsia stricta Sol. ex Sims. var. stricta Tillandsia tenuifolia L. var. 
tenuifolia Vriesea bituminosa Wawra var. bituminosa Vriesea carinata Wawra Vriesea haematina L.B.Sm. Vriesea heterostachys (Baker) L.B.Sm. Vriesea hieroglyphica (Carrière) E.Morren var. hieroglyphica Vriesea hydrophora Ule Vriesea inflata (Wawra) Wawra Vriesea longicaulis (Baker) Mez Vriesea longiscapa Ule Vriesea paraibica Wawra Vriesea sparsiflora L.B.Sm. Vriesea vagans (L.B.Sm.) L.B.Sm. Wittrockia cyathiformis (Vell.) Leme Wittrockia flavipetala (Wand.) Leme & H.Luther Wittrockia gigantea (Baker) Leme Wittrockia superba Lindm. Wittrockia tenuisepala (Leme) Leme Cactaceae Hatiora salicornioides (Haw.) Britton & Rose Lepismium houlletianum (Lem.) Barthlott Rhipsalis capilliformes F.A.C.Weber Rhipsalis clavata F.A.C.Weber Rhipsalis elliptica G.Lindb. ex K.Schum. Rhipsalis floccosa Salm-Dyck ex Pfeiff. Rhipsalis houlletiana Lem. Rhipsalis trigona Pfeiff. Schlumbergera truncata (Haw.) Moran Campanulaceae Centropogon tortilis E.Wimm. Siphocampylus longepedunculatus Pohl Cannaceae Canna coccinea Mill. Canna paniculata Ruiz & Pav. Caprifoliaceae Lonicera japonica Thunb. ex Murray – Japanese Honeysuckle Celastraceae Celastrus racemosus Turcz. Maytenus brasiliensis Mart. Maytenus communis Reiss. Chloranthaceae Hedyosmum brasiliense Miq. Chrysobalanaceae Couepia venosa Prance Licania kunthiana Hook.f. Clethraceae Clethra scabra var. laevigata (Meisn.) Sleumer Cletha scabra Pers. var. scabra Clusiaceae Clusia criuva Cambess. Clusia fragrans Gardner Clusia lanceolata Cambess. Clusia marizii Gomes da Silva & Weinberg Clusia organensis Planch. & Triana Clusia studartiana C.M.Vieira & Gomes da Silva Kielmeyera insignis N.Saddi Rheedia gardneriana Planch. & Triana Tovomita glazioviana Engl. Tovomitopsis saldanhae Engl. Combretaceae Terminalia januarensis DC. Commelinaceae Dichorisandra thyrsiflora J.C.Mikan Tradescantia sp. Convolvulaceae Ipomoea demerariana Choisy (=Ipomoea phyllomega (Vell.) House) Cornaceae Griselina ruscifolia (Clos) Taub. Cucurbitaceae Anisosperma passiflora (Vell.) Silva Manso Apodanthera argentea Cogn. Cayaponia cf. tayuya (Vell.) Cogn. Melothria cucumis Vell. var. cucumis Melothrianthus smilacifolius (Cogn.) Mart. Crov. Cunoniaceae Lamanonia ternata Vell. Weinmannia paullinifolia Pohl ex Ser. Cyperaceae Pleurostachys densefoliata H.Pfeiff. Pleurostachys millegrana (Nees) Steud. Rhynchospora exaltata Kunth Scleria panicoides Kunth Dichapetalaceae Stephanopodium organense (Rizzini) Prance Dioscoreaceae Dioscorea subhastata Vell. Hyperocarpa filiformes (Griseb.) G.M.Barroso, E.F.Guim. & Sucre Elaeocarpaceae Sloanea monosperma Vell. Ericaceae Gaultheria eriophylla (Pers.) Sleumer ex B.L.Burtt Gaylussacia aff. fasciculata Gardner Gaylussacia brasiliensis (Spreng.) Meisn. Erythroxylaceae Erythroxylum citrifolium A.St.-Hil. Erythroxylum cuspidifolium Mart. Euphorbiaceae Alchornea triplinervia (Spreng.) Müll.Arg. Croton floribundus Spreng. Croton organensis Baill. Croton salutaris Casar. Fragariopsis scandens A.St.-Hil. Hieronyma alchorneoides Allemão Pera obovata (Klotzsch) Baill. Phyllanthus glaziovii Müll.Arg. Sapium glandulatum Pax Tetrorchidium parvulum Müll.Arg. Fabaceae: Caesalpinioideae Bauhinia microstachya (Raddi) J.F.Macbr. Copaifera trapezifolia Hayne Sclerolobium beaurepairei Harms, synonym of Tachigali beaurepairei Sclerolobium friburgense Harms Sclerolobium rugosum Mart. ex Benth. Senna macranthera (DC. ex Collad.) H.S.Irwin & Barneby var. macranthera Senna multijuga var. lindleyana (Gardner) H.S.Irwin & Barneby Tachigali paratyensis (Vell.) H.C. 
Lima (= Tachigali multijuga Benth.). Fabaceae: Faboideae Andira fraxinifolia Benth. Camptosema spectabile (Tul.) Burkart Crotalaria vitellina var. laeta (Mart. ex Benth.) Windler & S. Skinner Dalbergia foliolosa Benth. Dalbergia frutescens (Vell.) Britton Dalbergia glaziovii Harms Dalbergia lateriflora Benth. Dioclea schottii Benth. Erythrina falcata Benth. Lonchocarpus glaziovii Taub. Machaerium cantarellianum Hoehne Machaerium gracile Benth. Machaerium nyctitans (Vell.) Benth. Machaerium oblongifolium Vogel Machaerium reticulatum (Poir.) Pers. Machaerium triste Vogel Myrocarpus frondosus Allemão Ormosia fastigiata Tul. Ormosia friburgensis Glaz. Pterocarpus rohrii Vahl Swartzia myrtifolia var. elegans (Schott) R. S. Cowan Zollernia glaziovii Yakovlev Zollernia ilicifolia (Brongn.) Vogel Fabaceae: Mimosoideae Abarema langsdorfii (Benth.) Barneby & Grimes Acacia lacerans Benth. Acacia martiusiana (Steud.) Burkart Calliandra tweediei Benth. Inga barbata Benth. Inga cylindrica (Vell.) Mart. Inga dulcis (Vell.) Mart. Inga lancifolia Benth. Inga lenticellata Benth. Inga lentiscifolia Benth. Inga leptantha Benth. Inga marginata Willd. = Inga semialata (Vell.) Mart. Inga mendoncaei Harms = Inga organensis Pittier Inga platyptera Benth. Inga sessilis (Vell.) Mart. Mimosa extensa Benth. Piptadenia gonoacantha (Mart.) J. F. Macbr. Piptadenia micracantha Benth. Gentianaceae Macrocarpaea glaziovii Gilg Gesneriaceae Besleria fasciculata Wawra Besleria macahensis Brade Besleria melancholica (Vell.) C. V. Morton Codonanthe cordifolia Chautems Codonanthe gracilis (Mart.) Hanst. Nematanthus crassifolius subsp. chloronema (Mart.) Chautems Nematanthus hirtellus (Schott) Wiehler Nematanthus lanceolatus (Poir.) Chautems Nematanthus serpens (Vell.) Chautems Sinningia cooperi (Paxt.) Wiehler Sinningia incarnata (Aubl.) D. L. Denham Vanhouttea fruticulosa (Hoehne) Chautems Hippocrateaceae Cheiloclinium neglectum A.C.Sm. Hippocratea volubilis L. Salacia amygdalina Peyr. Tontelea leptophylla A.C.Sm. Humiriaceae Humiriastrum glaziovii (Urb.) Cuatrec. var. glaziovii Humiriastrum glaziovii var. angustifolium Cuatrec. Vantanea compacta (Schnizl.) Cuatrec. subsp. compacta var. compacta Vantanea compactasubsp. compacta var. grandiflora (Urb.) Cuatrec. Icacinaceae Citronella paniculata (Mart.) R.A.Howard Labiatae Salvia rivularis Gardner Scutellaria uliginosa A.St.-Hil. ex Benth. Lacistemataceae Lacistema pubescens Mart. Lauraceae Aniba firmula (Nees et Mart.) Mez Beilschmiedia fluminensis Kosterm. Beilschmiedia rigida (Mez) Kosterm. Cinnamomum glaziovii (Mez) Kosterm. Cinnamomum riedelianum Kosterm. Cryptocarya micrantha Meisn. Cryptocarya moschata Nees et Mart. ex Nees Endlicheria paniculata (Spreng.) J.F.Macbr. Nectandra leucantha Nees Nectandra oppositifolia Nees Nectandra puberula (Schott) Nees Ocotea acypahilla (Nees) Mez Ocotea catharinensis Mez Ocotea diospyrifolia (Meisn.) Mez Ocotea dispersa (Nees) Mez Ocotea divaricata (Nees) Mez Ocotea domatiata Mez Ocotea glaziovii Mez Ocotea indecora (Schott) Mez Ocotea teleiandra (Meisn.) Mez Ocotea notata (Nees) Mez Ocotea odorifera (Vell.) Rohwer Ocotea porosa (Nees) Barroso Ocotea puberula (Rich.) Nees Ocotea pulchra Vattimo-Gil Ocotea silvestris Vattimo-Gil Ocotea spixiana (Nees) Mez Ocotea tabacifolia Meisn.) Rohwer Ocotea urbaniana Mez Ocotea vaccinioides Meisn. Persea fulva Koop var. fulva Persea pyrifolia Nees & Mart. ex Nees Rhodostemonodaphne macrocalyx (Meisn.) 
Rohwer ex Madriñán Lecythidaceae Cariniana estrellensis (Raddi) Kuntze Lentibulariaceae Utricularia geminiloba Benj. Lobeliaceae Lobelia thapsoidea Schott Loganiaceae Spigelia macrophylla (Pohl) DC. Loranthaceae Phoradendron crassifolium (Pohl & DC.) Eichler Phoradendron warmingii var. rugulosum (Urb.) Rizzini Psittacanthus flavo-viridis Eichler Psittacanthus pluricotyledonarius Rizzini Psittacanthus robustus (Mart.) Mart. Struthanthus concinnus Mart. Struthanthus marginatus (Desr.) Blume Struthanthus salicifolius (Mart.) Mart. Struthanthus syringaefolius (Mart.) Mart. Magnoliaceae Magnolia Magnolia ovata (A.St.-Hil.) Spreng. Malpighiaceae Banisteriopsis membranifolia (A. Juss.) B. Gates Byrsonima laevigata (Poir.) DC. Byrsonima laxiflora Griseb. Byrsonima myricifolia Griseb. Heteropteris anomala A. Juss. var. anomala Heteropteris leschenaultiana A. Juss. Heteropteris nitida (Lam.) DC. Heteropteris sericea (Cav.) A. Juss. var. sericea Hiraea gaudichaudiana (A. Juss.) A. Jsss. Stigmaphyllon gayanum A. Juss Tetrapterys crebiflora A. Juss. Tetrapterys lalandiana A. Juss. Tetrapterys lucida A. Juss. Malvaceae Abutilon rufirnerve A.St.-Hil. var. rufirnerve Marantaceae Stromanthe sanguinea Sond. Marcgraviaceae Marcgravia polyantha Delpino Norantea cuneifolia (Gardner) Delpino Melastomataceae Behuria glazioviana Cogn. Behuria mouraei Cogn. Bertolonia grazielae Baumgratz Bertolonia sanguinea var. santos-limae (Brade) Baumgratz Bisglaziovia behurioides Cogn. Clidemia octona (Bonpl.) L. Wms. Henriettella glabra (Vell.) Cogn. Huberia glazioviana Cogn. Huberia minor Cogn. Huberia parvifolia Cogn. Huberia triplinervis Cogn. Leandra acutiflora (Naudin) Cogn. Leandra amplexicaulis DC. Leandra aspera Cogn. Leandra atroviridis Cogn. Leandra aurea (Cham.) Cogn. Leandra breviflora Cogn. Leandra carassanae (DC.) Cogn. Leandra confusa Cogn. Leandra dasytricha (A.Gray) Cogn. Leandra eriocalyx Cogn. Leandra fallax (Cham.) Cogn. Leandra foveolata (DC.) Cogn. Leandra fragilis Cogn. Leandra gracilis var. glazioviana Cogn. Leandra hirta Raddi Leandra hirtella Cogn. Leandra laevigata (Triana) Cogn. Leandra laxa Cogn. Leandra magdalenensis Brade Leandra melastomoides Raddi Leandra mollis Cogn. Leandra multiplinervis (Naudin) Cogn. Leandra multisetosa Cogn. Leandra neurotricha Cogn. Leandra nianga Cogn. Leandra nutans Cogn. Leandra purpurascens Cogn. Leandra quinquedentata (DC.) Cogn. Leandra schwackei Cogn. Leandra sphaerocarpa Cogn. Leandra tetragona Cogn. Leandra trauninensis Cogn. Leandra xanthocoma (Naudin.) Cogn. Leandra xanthostachya Cogn. Marcetia taxifolia (A.St.-Hil.) DC. Meriania claussenii Triana Meriania robusta Cogn. Miconia altissima Cogn. Miconia argyrea Cogn. Miconia augustii Cogn. Miconia brasiliensis (Spreng.) Triana Miconia brunnea DC. Miconia budlejoides Triana Miconia chartacea Triana Miconia cinnamomifolia (DC.) Naudin Miconia depauperata Gardner Miconia dichroa Cogn. Miconia divaricata Gardner Miconia doriana Cogn. Miconia fasciculata Gardner Miconia formosa Cogn. Miconia gilva Cogn. Miconia glazioviana Cogn. Miconia jucunda (DC.) Triana Miconia latecrenata (DC.) Naudin Miconia longicuspis Cogn. Miconia octopetala Cogn. Miconia organensis Gardner Miconia ovalifolia Cogn. Miconia molesta Cogn. Miconia paniculata (DC.) Naudin Miconia paulensis Naudin Miconia penduliflora Cogn. Miconia prasina (Sw.) DC. Miconia pseudo-eichlerii Cogn. Miconia pusilliflora (DC.) Naudin Miconia rabenii Cogn. Miconia saldanhaei var. grandiflora Cogn. Miconia sellowiana Naudin Miconia staminea (Desr.) DC. 
Miconia subvernicosa Cogn. Miconia theaezans (Bonpl.) Cogn. Miconia tristis Spring Miconia urophylla DC. Miconia willdenowii Klotzsch ex Naudin Mouriri arborea Gardner Mouriri chamissoana Cogn. Mouriri doriana Cogn. Ossaea angustifolia (DC.) Triana var. brevifolia Cogn. Ossaea brachystachya (DC.) Triana Ossaea confertiflora (DC.) Triana Pleiochiton micranthum Cogn. Pleiochiton parvifolium Cogn. Pleiochiton roseum Cogn. Pleiochiton setulosum Cogn. Pleroma semidecandrum (Schrank & Mart. ex DC.) Triana (syn. Tibouchina semidecandra) Tibouchina alba Cogn. Tibouchina arborea (Gardner) Cogn. Tibouchina benthamiana var. punicea Cogn. Tibouchina canescens (D.Don) Cogn. Tibouchina estrellensis (Raddi) Cogn. Tibouchina fissinervia (DC.) Cogn. Tibouchina imperatoris Cogn. Tibouchina moricandiana (DC.) Baill. Tibouchina nervulosa Cogn. Tibouchina ovata Cogn. Tibouchina petroniana Cogn. Tibouchina saldanhaei Cogn. Tibouchina schwackei Cogn. Trembleya parviflora (D.Don.) Cogn. Meliaceae Cabralea canjerana (Vell.) Mart. subsp. canjerana Cedrela odorata L. Guarea macrophylla subsp. tuberculata (Vell.) T.D.Penn. Trichilia casaretti C.DC. Trichilia emarginata (Turcz.) C.DC. Menispermaceae Abuta selloana Eichler Chondodendron platyphyllum (A.St.-Hil.) Miers Monimiaceae Macropeplus ligustrinus var. friburgensis Perkins Mollinedia acutissima Perkins Mollinedia argyrogyna Perkins Mollinedia engleriana Perkins Mollinedia fasciculata Perkins Mollinedia gilgiana Perkins Mollinedia glaziovii Perkins Mollinedia longicuspidata Perkins Mollinedia lowtheriana Perkins Mollinedia marliae Peixoto & V.Pereira Mollinedia myriantha Perkins Mollinedia oligantha Perkins Mollinedia pachysandra Perkins Mollinedia salicifolia Perkins Mollinedia schottiana (Spreng.) Perkins Mollinedia stenoplylla Perkins Siparuna chlorantha Perkins Moraceae Cecropia cf.lyratiloba Miq. Cecropia glaziovii Snethl. Cecropia hololeuca Miq. Coussapoa microcarpa (Schott) Rizzini Ficus luschnathiana (Miq.) Miq. Ficus organensis (Miq.) Miq. Ficus trigona L.f. Sorocea bonplandii (Baill.) W.C.Burger & Alii Myristicaceae Virola gardneri (A.DC.) Warb. Myrsinaceae Cybianthus brasiliensis (Mez) G.Agostini Cybianthus glaber A.DC. Rapanea acuminata Mez Rapanea ferruginea (Ruiz & Pav.) Mez Rapanea guianensis Aubl. Rapanea lancifolia Mez Rapanea schwackeana Mez Rapanea umbellata (Mart.) Mez Myrtaceae Calycorectes schottianus O.Berg Calyptranthes concinna DC. Calyptranthes glazioviana Kiaersk. Calyptranthes lucida Mart. ex DC. Calyptranthes obovata Kiaersk. Campomanesia guaviroba (DC.) Kiaersk. Campomanesia laurifolia Gardner Eugenia cambucarana Kiaersk. Eugenia cuprea (O.Berg) Nied. Eugenia curvato-petiolata Kiaersk. Eugenia ellipsoidea Kiaersk. Eugenia gracillima Kiaersk. Eugenia stictosepala Kiaersk. Eugenia subavenia O.Berg Marlierea Marlierea aff.teuscheriana (O. Berg.) D. Legrand Marlierea mar 'tinelii G. M. Barroso & Peixoto Marlierea silvatica (Gardner) Kiaersk. Marlierea suaveolens Cambess. Myrceugenia kleinii D.Legrand & Kausel Myrceugenia pilotantha (Kiaersk.) Landrum Myrceugenia scutellata D. Legrand Myrcia anacardiifolia Gardner Myrcia coelosepala Kiaersk. Myrcia fallax (Rich.) DC. Myrcia fenzliana O.Berg Myrcia glabra (O.Berg) D.Legrand Myrcia glazioviana Kiaersk. Myrcia guajavifolia O.Berg Myrcia laruotteana Cambess. Myrcia lineata (O. Berg) G. M. Barroso & Peixoto Myrcia longipes (O. Berg) Kiaersk. Myrcia multiflora (Lam.) DC. Myrcia pubipetala Miq. Myrcia rhabdoides Kiaersk. Myrcia rufula Miq. Myrcia spectabilis DC. Myrcia tomentosa (Aubl.) DC. 
Myrcia warmingiana Kiaersk. Myrciaria floribunda (H. West. ex Willd.) O. Berg –Guavaberry Myrciaria tenella (DC.) O. Berg Pimenta pseudocaryophyllus var. fulvescens (DC.) Landrum Plinia martinellii G. M. Barroso & M. Peron Psidium guineense Sw. Psidium Psidium robustum O. Berg Psidium spathulatum Mattos Siphoneugena densiflora O. Berg Siphoneugena kiaerskoviana (Burret) Kausel Nyctaginaceae Guapira opposita (Vell.) Reitz Ochnaceae Luxemburgia glazioviana Beauverd Ouratea parviflora (DC.) Baill. Ouratea vaccinioides (A.St.-Hil.) Engl. Olacaceae Heisteria silvianii Schwacke Oleaceae Linociera micrantha Mart. Onagraceae Fuchsia glazioviana Taub. Fuchsia regia subsp. serrae P.E.Berry Orchidaceae Barbosella porschii (Kraenzl.) Schltr. Beadlea warmingii (Rchb.f.) Garay Chytroglossa marileoniae Rchb.f. Dichaea pendula (Aubl.) Cogn. Epidendrum addae Pabst Epidendrum paranaense Barb.Rodr. Epidendrum saxatile Lindl. Epidendrum xanthinum Lindl. Gomesa recurva Lodd. Maxillaria cerifera Barb.Rodr. Maxillaria ubatubana var. mantiqueirana Hoehne Miltonia cuneata Lindl. Oncidium cf.hookeri Rolfe Oncidium uniflorum Booth ex Lindl. Pabstia jugosa (Lindl.) Garay Pabstia triptera (Rolfe) Garay Phymatidium aquinoi Schltr. Phymatidium delicatulum Lindl. Phymatidium falcifolium Lindl. Phymatidium tillandsoides Barb.Rodr. Pleurothallis aff.hamosa Barb.Rodr. Pleurothallis trifida Lindl. Prescottia epiphyta Barb.Rodr. Rodrigueziopsis microphyta (Barb.Rodr.) Schltr. Scaphyglottis modesta (Rchb.f.) Schltr. Sophronitis aff.grandiflora Lindl. Sophronitis aff.mantiqueirae (Fowlie) Fowlie Zygopetalum crinitum Lodd. Zygopetalum triste Barb.Rodr. Passifloraceae Passiflora actinia Hook. Passiflora alata Dryand. Passiflora amethystina J.C.Mikan Passiflora deidamioides Harms Passiflora odontophylla Harms ex Glaz. Passiflora organensis Gardner Passiflora rhamnifolia Mast. Passiflora speciosa Gardner Passiflora vellozii Gardner Phytolaccaceae Phytolacca thyrsiflora Fenzl ex J.A.Schmidt. Seguieria langsdorffii Moq. Piperaceae Ottonia diversifolia Kunth Peperomia alata Ruiz & Pav. Peperomia corcovadensis Gardner Peperomia glabella (Sw.) A. Dietr. Peperomia lyman-smithii Yunck. Peperomia rhombea Ruiz & Pav. Peperomia rotundifolia (L.) H. B. & K. Peperomia tetraphylla (G. Forst.) Hook. & Arn. Piper aequilaterum C. DC. Piper caldense C. DC. Piper chimonanthifolium Kunth Piper gaudichaudianum Kunth Piper glabratum Kunth Piper hillianum C. DC. Piper lhotzkyanum Kunth Piper malacophyllum (C. Presl) C. DC. Piper permucronatum Yunck. Piper pseudopothifolium C. DC. Piper richardiifolium Kunth Piper tectonifolium Kunth Piper translucens Yunck. Piper truncatum Vell. Poaceae Chusquea aff. oxylepis (Hack.) Ekman Chusquea aff. tenella Nees Chusquea anelytroides Rupr. ex Döll Chusquea capitata Nees Chusquea capituliflora Trin. Guadua tagoara (Nees) Kunth Merostachys aff. ternata Nees Merostachys fischeriana Rupr. ex Döll Podocarpaceae Podocarpus lambertii Klotzsch ex Endl. Podocarpus sellowii Klotzsch ex Endl. Polygalaceae Polygala laureola A.St.-Hil. & Moq. Polygala oxyphylla DC. Securidaca macrocarpa A.W.Benn. Polygonaceae Ruprechtia laxiflora Meisn. Proteaceae Roupala consimilis Mez Roupala longepetiolata Pohl Roupala rhombifolia Mart. ex Meisn. Roupala warmingii Meisn. Quiinaceae Quiina glaziovii Engl. Ranunculaceae Clematis dioica var. australis Eichler Clematis dioica var. brasiliana (DC.) Eichler Rosaceae Prunus brasiliensis Schott ex Spreng. Rubus urticaefolius Poir. Rubiaceae Alibertia longiflora K.Schum. Amaioua intermedia Mart. 
Bathysa australis (A.St.-Hil.) Benth. & Hook.f. Bathysa cuspidata (A.St.-Hil.) Hook.f. Bathysa mendocaei K.Schum. Chomelia brasiliana Chomelia estrellana Müll.Arg. Coccocypselum lanceolatum (Ruiz & Pav.) Pers. Coccocypselum sessiliflorum Standl. Coussarea congestiflora Müll.Arg. Coussarea friburgensis M. Gomes Coussarea speciosa K.Schum. ex. Glaz. Coutarea hexandra (Jacq.) K.Schum. Diodia alataNees & Mart. Emmeorrhiza umbellata (Spreng.) K.Schum. Faramea dichotoma K.Schum. ex M.Gomes Faramea multiflora var. salicifolia (C. Presl.) Steyerm. Faramea urophylla Müll.Arg. Galium hypocarpium subsp. indecorum (Cham. & Schltdl.) Dempster Hillia parasitica Jacq. Hindsia longiflora (Cham.) Benth. Hoffmannia duseniiStandl. Ixora brevifolia Benth. Manettia beyrichiana K.Schum. Manettia congesta (Vell.) K.Schum. Manettia fimbriata Cham. & Schltdl. Manettia mitis (Vell.) K. Schum. Posoqueria acutifolia Mart. Posoqueria latifolia (Rudge) Roem. & Schult. Psychotria alto-macahensis M.Gomes Psychotria appendiculata Müll.Arg. Psychotria brachyanthema Standl. Psychotria caudata M.Gomes Psychotria constricta Müll.Arg. Psychotria leiocarpa Cham. & Schltdl. Psychotria nemerosa Gardner Psychotria nitidula Cham. & Schltdl. Psychotria pallens Gardner Psychotria pubigera Schltdl. Psychotria ruelliifolia (Cham. & Schltdl.) Müll.Arg. Psychotria stachyoides Benth. Psychotria suterella Müll.Arg. Psychotria ulei Standl. Psychotria vellosiana Benth. Randia armata (Sw.) DC. Rudgea corniculata Benth. Rudgea gardenoides (Cham.) Müll.Arg. Rudgea eugenioides Standl. Rudgea insignis Müll.Arg. Rudgea jasminoides (Cham.) Müll.Arg. Rudgea leiocarpoides Müll.Arg. Rudgea nobilis Müll.Arg. Rudgea recurva Müll.Arg. Rustia gracilis K.Schum. Tocoyena sellowiana (Cham. & Schltdl.) K.Schum. Rutaceae Dictyoloma incanescens DC. Zanthoxylum rhoifolium Lam. (= Fagara rhoifolia (Lam.) Engl.) Sabiaceae Meliosma brasiliensis Urb. Meliosma sellowii Urb. Salicaceae Casearia arborea (Rich.) Urb. Casearia decandra Jacq. Casearia obliqua Spreng. Casearia pauciflora Cambess. Casearia sylvestris Sw. Xylosma ciliatifolia (Clos) Eichler Xylosma prockia (Turcz.) Turcz. Sapindaceae Allophylus edulis (A.St.-Hil.) Radlk. Cupania emarginata Cambess. Cupania oblongifolia Mart. Cupania racemosa (Vell.) Radlk. Cupania zanthoxyloides Cambess. Matayba guianensis Aubl. Paullinia carpopoda Cambess. Paullinia meliaefolia Juss. Paullinia trigonia Vell. Serjania communis Cambess. var. communis Serjania elegans Cambess. Serjania gracilis Radlk. Serjania laruotteana Cambess. Serjania lethalis A.St.-Hil. Serjania noxia Cambess. Serjania reticulata Cambess. Thinouia scandens (Cambess.) Triana & Planch. Sapotaceae Chrysophyllum imperiale Chrysophyllum viride Martius & Eichler Micropholis compta Pierre Micropholis crassipedicellata (Mart. & Eichl.) Pierre Pouteria guianensis Aubl. Pouteria microstrigosa T.D.Penn. Pouteria macahensis T.D.Penn. Scrophulariaceae Velloziella dracocephaloides Baill. Simaroubaceae Picramnia glazioviana Engl. subsp. glazioviana Simarouba amara Aubl. Smilacaceae Smilax japicanga Griseb. Smilax quinquenervia Vell. Smilax spicata Vell. Smilax staminea Griseb. Solanaceae Acnistus arborescens (L.) Schltdl. Athenaea anonacea Sendtn. Athenaea picta (Mart.) Sendtn. Aureliana brasiliana (Hunz.) Barboza & Hunz. Aureliana fasciculata (Vell.) Sendtn. var. fasciculata Brunfelsia brasiliensis (Spreng.) L.B.Sm. & Downs var. brasiliensis Brunfelsia hydrangaeformis (Pohl) Benth. subsp. hydrangaefomis Capsicum campylopodium Sendtn. Cestrum lanceolatum Miers var. 
lanceolatum Cestrum aff.sessiliflorum Schott ex Sendtn. Cestrum stipulatum Vell. Cyphomandra calycina Sendtn. Dyssochroma viridiflora (Sims) Miers Sessea regnellii Taub. Solanum aff.schizandrum Sendtn. Solanum argenteum Dunal Solanum caeruleum Vell. Solanum cinnamomeum Sendtn. Solanum decorum Sendtn. var. decorum Solanum granuloso-leprosum Dunal Solanum inaequale Vell. Solanum inodorum Vell. Solanum leucodendron Sendtn. Solanum megalochiton var. villoso-tomentosum Dunal Solanum odoriferum Vell. Solanum stipulatum Vell. Solanum swartzianum Roem. & Schult. var. swartzianum Solanum undulatum Dunal Symplocaceae Symplocos celastrinea Mart. ex Miq. Symplocos crenata (Vell.) Mattos Symplocos glandulosomarginata Hoehne Symplocos nitidiflora Brand. Symplocos tertandra Mart. ex Miq. Symplocos uniflora (Pohl) ex Benth. Theaceae Laplacea fruticosa (Schrad.) Kobuski Thymelaeaceae Daphnopsis martii Meisn. Daphnopsis utilis Warm. Tiliaceae Luehea divaricata Mart. Umbelliferae Hydrocotyle leucocephala Cham. & Schltdl. Valerianaceae Valeriana scandens L. Verbenaceae Aegiphila fluminensis Vell. Aegiphila obducta Vell. Aegiphila sellowiana Cham. Violaceae Anchietea pyrifolia (Mart.) G.Don var. pyrifolia Vitaceae Cissus pulcherrima Vell. Cissus sulcicaulis (Baker) Planch. Vochysiaceae Vochysia dasyantha Warm. Vochysia glazioviana Warm. Vochysia magnifica Warm. Vochysia oppugnata (Vell.) Warm. Vochysia rectiflora var. glabrescens Warm. Vochysia saldanhana Warm. Vochysia schwackeana Warm. Vochysia spathulata Warm. Vochysia tucanorum Mart. Winteraceae Drimys brasiliensis Miers Zingiberaceae Hedychium coronarium J.König See also Ecoregions of the Atlantic Forest biome Official list of endangered flora of Brazil List of plants of Amazon Rainforest vegetation of Brazil List of plants of Caatinga vegetation of Brazil List of plants of Cerrado vegetation of Brazil List of plants of Pantanal vegetation of Brazil References LIMA, H. C.; MORIM, M. P.; GUEDES-BRUNI, R. R.; SYLVESTRE, L. S.; PESSOA, S. V. A.; SILVA NETO, S.; QUINET, A. (2001) Reserva Ecológica de Macaé de Cima, Nova Friburgo, Rio de Janeiro: Lista de espécies vasculares (List of vascular plants) — Rio de Janeiro Botanical Garden. Restinga.net — Atlantic Coast restingas. External links Atlantic Forest Atlantic Forest Atlantic Forest Atlantic Forest Brazil
List of plants of Atlantic Forest vegetation of Brazil
Biology
11,202
30,527,926
https://en.wikipedia.org/wiki/Elie%20Bursztein
Elie Bursztein (born 1980) is a French computer scientist and software engineer. He is Google and DeepMind AI cybersecurity technical and research lead. Education and early career Bursztein obtained a computer engineering degree from EPITA in 2004, a master's degree in computer science from Paris Diderot University/ENS in 2005, and a PhD in computer science from École normale supérieure Paris-Saclay in 2008 with a dissertation titled Anticipation games: Game theory applied to network security. Before joining Google, Bursztein was a post-doctoral fellow at Stanford University's Security Laboratory, where he collaborated with Dan Boneh and John Mitchell on web security, game security, and applied cryptographic research. His work at Stanford University included the first cryptanalysis of the inner workings of Microsoft's DPAPI (Data Protection Application Programming Interface), the first evaluation of the effectiveness of private browsing, and many advances to CAPTCHA security and usability. Bursztein has discovered, reported, and helped fix hundreds of vulnerabilities, including securing Twitter's frame-busting code, exploiting Microsoft's location service to track the position of mobile devices, and exploiting the lack of proper encryption in the Apple App Store to steal user passwords and install unwanted applications. Career at Google Bursztein joined Google in 2012 as a research scientist. He founded the Anti-Abuse Research Team in 2014 and became the lead of the Security and Anti-Abuse Research teams in 2017. In 2023, he became Google and DeepMind AI cybersecurity technical and research lead. Bursztein's contributions at Google include: 2022 Creating the first post-quantum resilient security keys. 2020 Developing a deep-learning engine that helps to block malicious documents targeting Gmail users. 2019 Inventing Google's password-checking service Password Checkup that allows billions of users to check whether their credentials have been compromised due to data breaches while preserving their privacy. 2019 Developing Keras Tuner, which became the default hyperparameter tuner for TensorFlow and TFX. 2018 Conducting the first large-scale study on the illegal online distribution of child sexual abuse material in partnership with NCMEC. 2017 Finding the first full SHA-1 collision. 2015 Deprecating security questions at Google after completing the first large in-the-wild study on the effectiveness of security questions, which showed that they were both insecure and had a very low recall rate. 2014 Redesigning Google CAPTCHA to make it easier for humans, resulting in a 6.7% improvement in the pass rate. 2013 Strengthening Google account protections against hijackers and fake accounts. Awards and honors Best academic papers awards 2023 ACNS best workshop paper award for Hybrid Post-Quantum Signatures in Hardware Security Keys 2021 USENIX Security distinguished paper award for "Why wouldn't someone think of democracy as a target?": Security practices & challenges of people involved with U.S.
political campaigns 2019 USENIX Security distinguished paper award for Protecting accounts from credential stuffing with password breach alerting 2019 CHI best paper award for “They don’t leave us alone anywhere we go”: Gender and digital abuse in South Asia 2017 Crypto best paper award for The first collision for full SHA-1 2015 WWW best student paper award for Secrets, lies, and account recovery: Lessons from the use of personal knowledge questions at Google 2015 S&P Distinguished Practical Paper award for Ad Injection at Scale: Assessing Deceptive Advertisement Modifications 2011 S&P best student paper award for OpenConflict: Preventing real time map hacks in online games 2008 WISPT best paper award for Probabilistic protocol identification for hard to classify protocol Industry awards 2019 Recognized as one of the 100 most influential French people in cybersecurity 2017 BlackHat Pwnie award for the first practical SHA-1 collision 2015 IRTF Applied Networking Research Prize for Neither snow nor rain nor MITM … An empirical analysis of email delivery security 2010 Top 10 Web Hacking Techniques for Attacking HTTPS with cache injection Philanthropy In 2023, Bursztein founded the Etteilla Foundation, dedicated to preserving and promoting the rich heritage of playing cards, and donated his extensive collection of historical playing card decks and tarots to it. Trivia Bursztein is an accomplished magician, and he posted magic tricks weekly on Instagram during the COVID-19 pandemic. In 2014, following his talk on hacking Hearthstone using machine learning, he decided not to make his prediction tool open source at Blizzard Entertainment’s request. Selected publications References External links Elie Bursztein's personal site Elie Bursztein on Google Scholar Living people 1980 births Hackers Modern cryptographers Computer security academics French computer scientists French cryptographers Google employees DeepMind people
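A hedged sketch of the breach-checking idea mentioned above: the code below implements the generic hash-prefix k-anonymity lookup pattern (the approach popularized by Have I Been Pwned), not Google's actual Password Checkup protocol, which additionally uses hash blinding and private set intersection. The helper fetch_suffixes is a hypothetical stand-in for the query to a breach database.

```python
import hashlib

def is_breached(password: str, fetch_suffixes) -> bool:
    """Hash-prefix k-anonymity lookup: only a short prefix of the credential's
    hash leaves the client, so the server cannot recover the password."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # fetch_suffixes(prefix) would return the set of breached-hash suffixes
    # sharing this 5-character prefix (hypothetical network call).
    return suffix in fetch_suffixes(prefix)
```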
Elie Bursztein
Technology
978
12,169,836
https://en.wikipedia.org/wiki/Singapore%20whiskered%20bat
The Singapore whiskered bat (Vespertilio oreias) is or was a possible species of vesper bat endemic to Singapore. No specimens have been found since its original scientific description in 1840 by Dutch zoologist Coenraad Temminck. Taxonomy There is some uncertainty regarding its genus classification as either Vespertilio (Temminck 1840), Myotis (Tate 1941), or Kerivoula (Csorba 2016). All contending genera share Vespertilionidae as the family. Modern analysis of the type specimen found it to have skull fragments from another species and the skin to be in too poor a condition to confirm it as a distinct species. Additionally, it is zoogeographically hard to believe that a bat species could be limited to the island of Singapore. The holotype is in Naturalis Biodiversity Center in Leiden, Netherlands. References Mouse-eared bats Mammals of Singapore Mammals described in 1840 Nomina dubia Taxonomy articles created by Polbot Taxa named by Coenraad Jacob Temminck Bats of Southeast Asia Taxobox binomials not recognized by IUCN Species known from a single specimen
Singapore whiskered bat
Biology
240
75,564,364
https://en.wikipedia.org/wiki/Gliese%20414%20Ab
Gliese 414 Ab, also known as GJ 414 Ab, is a sub-Neptune exoplanet orbiting Gliese 414 A, an orange dwarf located 39 light-years from Earth, in the constellation Ursa Major. It is at least 7.6 times more massive than the Earth and about 3 times larger in diameter. It orbits its host star once every 51 days. The distance of Gliese 414 Ab from its star places it in the inner part of the optimistic habitable zone, and the planet has an equilibrium temperature of 35.5 °C. Characteristics Gliese 414 Ab is classified as a sub-Neptune planet. It is about 3 times larger than Earth, but 24% smaller than Neptune. Having a minimum mass of 7.6 Earth masses, it is likely that it is not a rocky planet, but instead has a volatile-rich envelope. NASA Eyes on Exoplanets cites it as a Neptune-like planet. Gliese 414 Ab completes one orbit around its star every 51 days. Its orbit is highly eccentric (e = 0.45), which means that the distance from its star varies from 0.13 to 0.34 astronomical units. The orbital variation of Gliese 414 Ab causes it to occasionally be located within its star's habitable zone, which has an inner limit of 0.21 AU according to a more optimistic model. According to another model, the star's habitable zone is located from 0.37 to 0.7 AU. As it orbits close to the habitable zone, it is a warm planet, having a surface temperature estimated at around 36 °C. The margin of error of 33.5 °C implies that the temperature can be as high as 69.1 °C, and as low as 2 °C. Discovery Gliese 414 Ab was discovered in 2020 by analyzing radial velocity data from Keck's HIRES instrument and the Automated Planet Finder at Lick Observatory, as well as photometric data from KELT. Host star Its parent star, known as Gliese 414 A, is an orange dwarf about 70% the size of the Sun. In addition to Gliese 414 Ab, the star also hosts Gliese 414 Ac, a super-Neptune orbiting at a distance about 6 times greater, of 1.4 AU. It also has a red dwarf companion, located at a distance of 408 AU from the main star. Notes and references 414ab Exoplanets discovered in 2020 Exoplanets detected by radial velocity Exoplanets discovered by KELT Ursa Major
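The quoted 0.13–0.34 AU range follows from the two-body periapsis/apoapsis relations $r_{\min} = a(1 - e)$ and $r_{\max} = a(1 + e)$. The article's own semi-major-axis figure was lost in extraction, so the sketch below assumes a ≈ 0.234 AU, the value implied by the quoted range; it is a worked check, not a figure from the source.

```python
# Closest and farthest orbital distances for an eccentric orbit.
a = 0.234  # semi-major axis in AU (assumed here, inferred from the quoted range)
e = 0.45   # orbital eccentricity (from the article)

r_min = a * (1 - e)  # periapsis: closest approach to the star
r_max = a * (1 + e)  # apoapsis: farthest distance from the star

print(f"{r_min:.2f} AU to {r_max:.2f} AU")  # 0.13 AU to 0.34 AU
```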
Gliese 414 Ab
Astronomy
559
11,127,857
https://en.wikipedia.org/wiki/Sydowiella%20depressula
Sydowiella depressula is a fungal plant pathogen infecting caneberries. References Fungal plant pathogens and diseases Small fruit diseases Fungi described in 1873 Fungus species
Sydowiella depressula
Biology
36
32,173
https://en.wikipedia.org/wiki/United%20States%20Military%20Academy
The United States Military Academy (USMA, West Point, or Army) is a United States service academy in West Point, New York. It was originally established as a fort during the American Revolutionary War, as it sits on strategic high ground overlooking the Hudson River north of New York City. The academy was founded in 1802, and it is the oldest of the five American service academies and educates cadets for commissioning into the United States Army. The academic program grants the Bachelor of Science degree with a curriculum that grades cadets' performance upon a broad academic program, military leadership performance, and mandatory participation in competitive athletics. The U.S. Military Academy at West Point is classified as a liberal arts college by U.S. News and is ranked #8 in the 2025 edition of the Best Colleges in the National Liberal Arts Colleges category. Candidates for admission must apply directly to the academy and receive a nomination, usually from a member of Congress. Other nomination sources include the president and vice president. Students are officers-in-training and are referred to as "cadets" or collectively as the "United States Corps of Cadets" (USCC). The Army fully funds tuition for cadets in exchange for an active duty service obligation upon graduation. About 1,300 cadets enter the academy each July, with about 1,000 cadets graduating. The academy's traditions have influenced other institutions because of its age and unique mission. It was the first American college to have an accredited civil engineering program and the first to have class rings, and its technical curriculum became a model for engineering schools. West Point's student body has a unique rank structure and lexicon. The academy fields 15 men's and nine women's National Collegiate Athletic Association (NCAA) sports teams. Cadets compete in one sport every fall, winter, and spring season at the intramural, club, or intercollegiate level. Its football team was a national power in the early and mid-20th century, winning three national championships. Among the country's public institutions, the academy is the top producer of Marshall and Rhodes scholars. Its alumni are collectively referred to as "The Long Gray Line," which include U.S. Presidents Dwight D. Eisenhower and Ulysses S. Grant; Confederate President Jefferson Davis; Confederate general Robert E. Lee; American poet Edgar Allan Poe; U.S. generals Douglas MacArthur and George Patton; presidents of Costa Rica, Nicaragua, and the Philippines; and 76 Medal of Honor recipients. History Colonial period, founding, and early years The Continental Army first occupied West Point, New York, on 27 January 1778, and it is the oldest continuously operating Army post in the United States. Between 1778 and 1780, the Polish engineer and military hero Tadeusz Kościuszko oversaw the construction of the garrison defenses. However, Kościuszko's plan of a system of small forts did not meet with the approval of New York Governor (and General) George Clinton or the other general officers. It was determined that a battery along the river to "annoy the shipping" was more appropriate, and Washington's chief engineer, Rufus Putnam, directed the construction of a major fortification on a hill above sea level that commanded the West Point plain. General Alexander McDougall named it Fort Putnam. 
The Great Hudson River Chain and high ground above the narrow "S" curve in the river enabled the Continental Army to prevent the Royal Navy from sailing upriver and dividing Patriot forces in the Northern colonies from the south. The fortifications at West Point were known as Fort Arnold during the war; as its commander, Benedict Arnold committed his act of treason, attempting to turn the fort over to the British. After Arnold betrayed the patriot cause, the Army changed the name of the fortifications at West Point, New York, to Fort Clinton, named after General James Clinton. With the peace after the American Revolutionary War, various ordnance and military stores were left deposited at West Point. "Cadets" had undergone training in artillery and engineering studies at the garrison since 1794. During the Quasi-War, Alexander Hamilton laid out plans for the establishment of a military academy at West Point and introduced "A Bill for Establishing a Military Academy" in the House of Representatives. In 1801, shortly after his inauguration as president, Thomas Jefferson directed that plans be set in motion to establish at West Point the United States Military Academy. He selected Jonathan Williams to serve as its first superintendent. Congress formally authorized the establishment and funding of the school with the Military Peace Establishment Act of 1802, which Jefferson signed on 16 March. The academy officially commenced operations on 4 July 1802. The academy graduated Joseph Gardner Swift, its first official graduate, in October 1802. He later returned as Superintendent from 1812 to 1814. In its tumultuous early years, the academy featured few standards for admission or length of study. Cadets ranged in age from 10 to 37 years and attended for anywhere between 6 months and 6 years. The impending War of 1812 caused the United States Congress to authorize a more formal system of education at the academy and increased the size of the Corps of Cadets to 250. In 1817, Colonel Sylvanus Thayer became the Superintendent and established the curriculum, elements of which are still in use. Thayer instilled strict disciplinary standards, set a standard course of academic study, and emphasized honorable conduct. He was very much inspired by the French curriculum, having been sent to France at his own request for two years in order to study the scientific and technological achievements developed by the French Republican faction and bring them back to the United States. Known as the "Father of the Military Academy," he is honored with a monument on campus for the profound impact he had upon the academy. Founded as a school of engineering, for the first half of the 19th century, USMA produced graduates who gained recognition for engineering the bulk of the nation's initial railway lines, bridges, harbors and roads. The academy was the only engineering school in the country until the founding of Rensselaer Polytechnic Institute in 1824. It was so successful in its engineering curriculum that it significantly influenced every American engineering school founded prior to the Civil War. In 1835, during the Army's first year of the Second Seminole War, the Army had only three generals: Winfield Scott, Edmund P. Gaines, and Thomas S. Jesup. The Army's remaining fourteen generals "held their rank by brevet only," and none of them were West Point graduates.
Up to 1835, nearly the only way to obtain a commission was through the academy, "which created loud complaint", and added to the "Jacksonian Democracy...a deep desire to get rid of the Academy, where, Jacksonians were sure, an aristocratic tradition was being bred." The Mexican–American War brought the academy to prominence as graduates proved themselves in battle for the first time. Future Civil War commanders Ulysses S. Grant and Robert E. Lee, who also later became the superintendent of the academy, first distinguished themselves in battle in Mexico. In all, 452 of 523 graduates who served in the war received battlefield promotions or awards for bravery. The school experienced a rapid modernization during the 1850s, often romanticized by the graduates who led both sides of the Civil War as the "end of the Old West Point era." New barracks brought better heat and gas lighting, while new ordnance and tactics training incorporated new rifle and musket technology and accommodated transportation advances created by the steam engine. With the outbreak of the Civil War, West Point graduates filled the general officer ranks of the rapidly expanding Union and Confederate armies. 294 graduates served as general officers for the Union, and 151 served as general officers for the Confederacy. Of all living graduates at the time of the war, 105 (10%) were killed, and another 151 (15%) were wounded. Nearly every general officer of note from either army during the Civil War was a graduate of West Point, and a West Point graduate commanded the forces of one or both sides in every one of the 60 major battles of the war. After the Civil War Immediately following the Civil War, the academy enjoyed unprecedented fame as a result of the role its graduates had played. However, the post-war years were a difficult time for the academy as it struggled to admit and reintegrate cadets from former confederate states. The first cadets from Southern states were re-admitted in 1868, and 1870 saw the admission of the first black cadet, James Webster Smith of South Carolina. Smith endured harsh treatment and was eventually dismissed for academic deficiency under controversial circumstances in 1874. As a result, Henry O. Flipper of Georgia became the first black graduate in 1877, graduating 50th in a class of 76. Two of the most notable graduates during this period were George Washington Goethals from the class of 1880, and John J. Pershing from the class of 1886. Goethals gained prominence as the chief engineer of the Panama Canal, and Pershing would become famous for his exploits against the famed Pancho Villa in Mexico and later for leading American Forces during World War I. Besides the integration of southern-state and black cadets, the post-war academy also struggled with the issue of hazing. In its first 65 years, hazing was uncommon or non-existent beyond small pranks played upon the incoming freshmen, but took a harsher tone as Civil War veterans began to fill the incoming freshman classes. The upper class cadets saw it as their duty to "teach the plebes their manners." Hazing at the academy entered the national spotlight with the death of former cadet Oscar L. Booz on 3 December 1900. Congressional hearings, which included testimony by cadet Douglas MacArthur, investigated his death and the pattern of freshmen's systemic hazing.
When MacArthur returned as superintendent, he made an effort to end the practice of hazing the incoming freshmen by placing Army sergeants in charge of training new cadets during freshman summer. The practice of hazing continued on some levels well into the late 20th century, but is no longer allowed in the present day. The demand for junior officers during the Spanish–American War caused the class of 1899 to graduate early, and the Philippine–American War did the same for the class of 1901. This increased demand for officers led Congress to increase the Corps of Cadets' size to 481 cadets in 1900. The period between 1900 and 1915 saw a construction boom as much of West Point's old infrastructure was rebuilt. Many of the academy's most famous graduates graduated during the 15-year period between 1900 and 1915: Douglas MacArthur (1903), Joseph Stilwell (1904), Henry "Hap" Arnold (1907), George S. Patton (1909), Dwight D. Eisenhower, and Omar Bradley (both 1915). The class of 1915 is known as "the class the stars fell on" for the exceptionally high percentage of general officers that rose from that class (59 of 164). The outbreak of America's involvement in World War I caused a sharp increase in the demand for army officers, and the academy accelerated graduation of all four classes then in attendance to meet this requirement, beginning with the early graduation of the First Class on 20 April 1917, the Second Class in August 1917, and both the Third and Fourth Classes just before the Armistice of 11 November 1918, when only freshman cadets remained (those who had entered in the summer of 1918). In all, wartime contingencies and post-war adjustments resulted in ten classes, varying in length of study from two to four years, within a seven-year period before the regular course of study was fully resumed. Douglas MacArthur became superintendent in 1919, instituting sweeping reforms to the academic process, including introducing a greater emphasis on history and humanities. He made major changes to the field training regimen and the Cadet Honor Committee was formed under his watch in 1922. MacArthur was a firm supporter of athletics at the academy, as he famously said "Upon the fields of friendly strife are sown the seeds that, upon other fields, on other days, will bear the fruits of victory." West Point was first officially accredited in 1925, and in 1933 began granting Bachelor of Science degrees to all graduates. In 1935, the academy's authorized strength increased to 1,960 cadets. World War II and Cold War As World War II engulfed Europe, Congress authorized an increase to 2,496 cadets in 1942 and began graduating classes early. The class of 1943 graduated six months early in January 1943, and the next four classes graduated after only three years. To accommodate this accelerated schedule, summer training was formally moved to a recently acquired piece of land southwest of main post. The site would later become Camp Buckner. The academy had its last serious brush with abolition or major reform during the war, when some members of Congress charged that even the accelerated curriculum allowed young men to "hide out" at West Point and avoid combat duty. A proposal was put forth to convert the academy to an officer's training school with a six-month schedule, but this was not adopted. West Point played a prominent role in WWII; four of the five five-star generals were alumni and nearly 500 graduates died.
Immediately following the war in 1945, Maxwell Taylor (class of 1922) became superintendent. He expanded and modernized the academic program and abolished antiquated courses in fencing and horsemanship. Unlike previous conflicts, the Korean War did not disrupt class graduation schedules. More than half of the Army leadership during the war was composed of West Point graduates. The Class of 1950, which graduated only two weeks prior to the war's outbreak, suffered some of the heaviest casualties of any 20th century class and became known sourly as "the class the crosses fell on." A total of 157 alumni perished in the conflict. Garrison H. Davidson became superintendent in 1956 and instituted several reforms that included refining the admissions process, changing the core curriculum to include electives, and increasing the academic degree standards for academy instructors. The 1960s saw the size of the Corps expand to 4,400 cadets while the barracks and academic support structure grew proportionally. West Point was not immune to the social upheaval of American society during the Vietnam War. The first woman joined the faculty of the all-male institution amidst controversy in 1968. West Point granted its first honorable discharge in 1971 to an African-American West Point cadet, Cornelius M. Cooper, of California, who applied for conscientious objector status in 1969. The academy struggled to fill its incoming classes as its graduates led troops in Southeast Asia, where 333 graduates died. Modern era Following the 1973 end of American involvement in Vietnam, the strain and stigma of earlier social unrest dissolved, and West Point enjoyed surging enrollments. On 20 May 1975, an amendment to the Defense Authorization Bill of 1976 opening the service academies to women was approved by the House of Representatives, 303–96. The Senate followed suit on 6 June. President Ford signed the bill on 7 October 1975. West Point admitted its first 119 female cadets in 1976. Also in 1976, physics professor James H. Stith became the first tenured African American Professor. In 1979, Cadet, later General, Vincent K. Brooks became the first African American to lead the Corps of Cadets. Ten years later, in 1989, Kristin Baker became the first female First Captain, the highest-ranking senior cadet at the academy (a depiction of her is now on display in the Museum). Six other women have been appointed as First Captain: Grace H. Chung in 2003, Stephanie Hightower in 2005, Lindsey Danilack in 2013, Simone Askew in 2017, Reilly McGinnis in 2020, and Lauren Drysdale in 2022. Simone Askew was the first African American woman to lead the Corps. In the 21st century, women comprise approximately 20% of entering new cadets. In 1985, cadets were formally authorized to declare an academic major; all previous graduates had been awarded a general bachelor of science degree. Five years later there was a major revision of the Fourth Class System, as the Cadet Leader Development System (CLDS) became the guidance for the development of all four classes. The class of 1990 was the first in which every member was issued a standard and mandatory computer, the Zenith 248 SX, at the beginning of Plebe year. The academy was also an early adopter of the Internet in the mid-1990s, and was recognized in 2006 as one of the nation's "most wired" campuses.
During the Gulf War, alumnus General Schwarzkopf was the commander of Allied Forces, and the American senior generals in Iraq (Generals Petraeus, Odierno, and Austin) and in Afghanistan (retired General Stanley McChrystal and General David Rodriguez) are also alumni. Following the September 11 attacks, applications for admission to the academy increased dramatically, security on campus was increased, and the curriculum was revamped to include coursework on terrorism and military drills in civilian environments. One graduate was killed during the 9/11 terrorist attacks and ninety graduates have died during operations in Afghanistan, Iraq, and the ongoing Global War on Terror. The Class of 2005 has been referred to as The Class of 9/11 as the attacks occurred during their first year at the academy, and they graduated 911 students. In 2008 gender-neutral lyrics were incorporated into West Point's "Alma Mater" and "The Corps" – replacing lines like "The men of the Corps" with "The ranks of the Corps." In December 2009, President Barack Obama delivered a major speech in Eisenhower Hall Theater outlining his policy for deploying 30,000 additional troops to Afghanistan as well as setting a timetable for withdrawal. President Obama also provided the commencement address in 2014. After the Don't Ask, Don't Tell policy was lifted on 20 September 2011, the academy began admitting and retaining openly gay cadets. By March 2012, cadets were forming a gay-straight alliance group called Spectrum. By March 2015, Spectrum had two faculty and 40 cadet members, a mixture of gay, straight, bi, and undecided. According to a Vanity Fair essay, the LGBT cadets were well accepted. After the ban on transgender service members was lifted in 2016, the Class of 2017 saw the first openly transgender graduate. However, she was denied a commission and was honorably discharged. Brig. Gen. Diana Holland became West Point's first woman Commandant of Cadets in January 2016. In 2020, the campus confronted its first major pandemic in a century, with the COVID-19 pandemic causing limitations on classes, and the relocation of the traditional Army-Navy football game to ensure social distancing. For the first time in many years, the 121st iteration of the game was held at West Point rather than the traditional Lincoln Financial Field in Philadelphia. Ultimately, West Point beat Navy 15–0. Campus The academy is located approximately north of New York City on the western bank of the Hudson River. West Point, New York, is incorporated as a federal military reservation in Orange County and is adjacent to Highland Falls. Based on the significance both of the Revolutionary War fort ruins and of the military academy itself, the majority of the academy area was declared a National Historic Landmark in 1960. In 1841, Charles Dickens visited the academy and said "It could not stand on more appropriate ground, and any ground more beautiful can hardly be." One of the most visited and scenic sites on post, Trophy Point, overlooks the Hudson River to the north, and is home to many captured cannon from past wars as well as the Stanford White-designed Battle Monument. Though the entire military reservation encompasses , the academic area of the campus, known as "central area" or "the cadet area", is entirely accessible to cadets or visitors by foot. In 1902, the Boston architectural firm Cram, Goodhue, and Ferguson was awarded a major construction contract that set the predominantly neogothic architectural style still seen today.
Most of the buildings of the central cadet area are in this style, as typified by the Cadet Chapel, completed in 1910. These buildings are nearly all constructed from granite that has a predominantly gray and black hue. The barracks that were built in the 1960s were designed to mimic this style. Other buildings on post, notably the oldest private residences for the faculty, are built in the Federal, Georgian, or English Tudor styles. A few buildings, such as Cullum Hall and the Old Cadet Chapel, are built in the Neoclassical style.

The academy grounds are home to numerous monuments and statues. The central cadet parade ground, the Plain, hosts the largest number, including the Washington Monument, Thayer Monument, Eisenhower Monument, MacArthur Monument, Kosciuszko Monument, and Sedgwick Monument. Patton Monument was first dedicated in front of the cadet library in 1950, but in 2004 it was placed in storage to make room for the construction of Jefferson Hall. With the completion of Jefferson Hall, Patton's statue was relocated and unveiled at a temporary location on 15 May 2009, where it will remain until the completion of the renovation of the old cadet library and Bartlett Hall. There is also a statue commemorating brotherhood and friendship, a gift from the École Polytechnique, in the cadet central area just outside Nininger Hall. The remaining campus area is home to 27 other monuments and memorials.

The West Point Cemetery is the final resting place of many notable graduates and faculty, including George Armstrong Custer, Winfield Scott, William Westmoreland, Earl Blaik, Margaret Corbin, and eighteen Medal of Honor recipients. The cemetery is also the burial place of several recent graduates who have died during the ongoing conflicts in Iraq and Afghanistan. Many of the older grave sites have large and ornate grave markers, the largest belonging to Egbert Viele (class of 1847), chief engineer of Brooklyn's Prospect Park. The cemetery is also home to a monument to Revolutionary War heroine Margaret Corbin.

Athletic facilities

West Point is home to historic athletic facilities like Michie Stadium and Gillis Field House as well as modern facilities such as the Lichtenberg Tennis Center, Anderson Rugby Complex, and the Lou Gross Gymnastics Facility. Michie Stadium recently underwent a significant upgrade in facilities for the football team, and the academy installed a new artificial turf field in the summer of 2008.

West Point Museum

The visitor center is just outside the Thayer Gate in the village of Highland Falls and offers the opportunity to arrange for a guided tour. These tours, which are the only way the general public can access the academy grounds, leave the visitor center several times a day. The old West Point Visitor Center was housed in the now-demolished Ladycliff College library building. On 9 September 2016, West Point broke ground to begin construction of the new 31,000-square-foot Malek West Point Visitors Center on the location of the former visitor center. The Malek West Point Visitors Center is named after Frederic Malek, USMA Class of 1959 and a 2014 Distinguished Graduate.

The West Point Museum is directly adjacent to the visitor center, in the renovated Olmsted Hall on the grounds of the former Ladycliff College. Originally opened to the public in 1854, the West Point Museum is the oldest military museum in the country. During the summer months, the museum operates access to the Fort Putnam historic site on main post and to the 282-acre Constitution Island.
Some of the most notable items on display at the museum are George Washington's pistols, Napoleon's sword, a dagger carried by Hermann Göring when he was captured, a revolver that belonged to Göring, and a silver-plated party book signed by Charles Lindbergh, Herbert Hoover, and Mussolini, among others. In addition, a gold-plated Liliput pistol that belonged to Adolf Hitler is on display.

Administration

Academy leadership

The commanding officer at the USMA is the Superintendent, equivalent to the president or chancellor of a civilian university. In recent years, the position of superintendent has been held by a lieutenant general (three-star general). The 61st Superintendent, Lieutenant General Steven W. Gilland, took command on 27 June 2022, replacing Darryl A. Williams. The academy is a direct reporting unit, and as such, the Superintendent reports directly to the Army Chief of Staff (CSA).

There are two other general officer positions at the academy. Brigadier General R.J. Garcia is the Commandant of Cadets, equivalent to a dean of students at a civilian institution. Brigadier General Shane Reeves is the Dean of the Academic Board, equivalent to a provost at a civilian institution. Brigadier General Diana Holland was the first female Commandant, and Brigadier General Cindy Jebb was the first female Dean.

There are 13 academic departments at USMA, each headed by a colonel. These 13 tenured colonels comprise the core of the Academic Board and are titled "Professors USMA," or PUSMA.

The academy is also overseen by the Board of Visitors (BOV). The BOV is a panel of Senators, Congressional Representatives, and presidential appointees who "shall inquire into the morale and discipline, curriculum, instruction, physical equipment, fiscal affairs, academic methods, and other matters relating to the academy that the board decides to consider." Currently the BOV is chaired by Representative John Shimkus and is composed of three Senators, five Representatives, and six presidential appointees.

Admission requirements

Candidates must be between 17 and 23 years old (waivers have been accepted for 24-year-olds in rare cases where the candidate is in the military and deployed and therefore unable to attend before their 24th birthday), unmarried, and with no legal obligation to support a child. Above-average high school or previous college grades and strong performance on standardized testing are expected. The interquartile range on the old SAT was 1100–1360, and 68% of admitted cadets ranked in the top fifth of their high school class. To be eligible for appointment, candidates must also undergo a Candidate Fitness Assessment and a complete physical exam. Up to 60 students from foreign countries are present at USMA, educated at the expense of the sponsoring nation, with tuition assistance based on the GNP of their country. Of these foreign cadets, the Code of Federal Regulations specifically permits one Filipino cadet designated by the President of the Philippines.

The actual application process consists of two main requirements: candidates apply to USMA for admission and separately obtain a nomination. The majority of candidates receive a nomination from their United States Representative or Senator; some receive a nomination from the Vice President or even the President of the United States. The nomination process is not political. The academy applicant typically provides written essays and letters of recommendation, and then submits to a formal interview.
Admission to West Point is selective: 7.74% of applicants (1,232 in total) were admitted to the Class of 2024. Candidates may have previous college experience, but they may not transfer; regardless of previous college credit, they enter the academy as fourth class cadets and undergo the entire four-year program. Candidates considered academically disqualified and not selected may receive an offer to attend the United States Military Academy Preparatory School. Upon graduation from USMAPS, these candidates are appointed to the academy if they receive the recommendation of the USMAPS Commandant and meet medical admission requirements.

The West Point Association of Graduates (WPAOG) also offers scholarship support to people who are qualified but not selected. The scholarships usually provide around $7,000 toward attendance at civilian universities; students who receive them do so under the stipulation that they will be admitted to and attend West Point a year later. Those who do not must repay the AOG. Marion Military Institute, New Mexico Military Institute, Georgia Military College, Hargrave Military Academy, Greystone Preparatory School at Schreiner University, and Northwestern Preparatory School are approved programs that students attend on the AOG scholarship prior to admission to West Point.

Research centers and institutes

The Academy's research centers and institutes include:

Army Cyber Institute
Combating Terrorism Center
Center for Data Analysis and Statistics
Center for Environmental and Geographical Science
Center for Holocaust and Genocide Studies
Center for Innovation and Engineering
Center for Languages, Cultures, and Regional Studies
Center for Leadership and Diversity in STEM
Center for Molecular Science
Center for Oral History
Center for the Study of Civil-Military Operations
Cyber Research Center
Digital History Center
Lieber Institute for Law & Warfare
Mathematical Sciences Center
Modern War Institute
Operations Research Center
Photonics Research Center
Robotics Research Center
Systems Design and Analysis Center
West Point Center for the Rule of Law
West Point Insider Threat Program
West Point Leadership Center
West Point Music Research Center
West Point Simulation Center

Curriculum

West Point is a highly residential baccalaureate college with a full-time, four-year undergraduate program that emphasizes instruction in the arts, sciences, and professions; there is no graduate program. There are forty-five academic majors, the most popular of which are foreign languages, management information systems, history, economics, and mechanical engineering. West Point is accredited by the Middle States Commission on Higher Education. Military officers compose 75% of the faculty, while civilian professors make up the remaining 25%. A cadet's class rank, which determines their Army branch and assignment upon graduation, is calculated as a weighted combination of academic performance (55%), military leadership performance (30%), and physical fitness and athletic performance (15%).

Academics

The academy's teaching style forms part of the Thayer method, implemented by Sylvanus Thayer during his tour as Superintendent. This form of instruction emphasizes small classes with daily homework, and strives to make students actively responsible for their own learning by completing homework assignments prior to class and bringing the work to class to discuss collaboratively. The academic program consists of a structured core of thirty-one courses balanced between the arts and sciences.
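The class-rank weighting described in the Curriculum section above amounts to a simple weighted average. As a minimal worked sketch (the component scores below are hypothetical, and the academy's actual scoring scales are not specified in this article), a composite score $R$ for a cadet with academic score $A$, military leadership score $M$, and physical score $P$ would be

$$R = 0.55\,A + 0.30\,M + 0.15\,P.$$

For instance, with hypothetical scores $A = 3.6$, $M = 3.2$, and $P = 3.0$ on a common scale, $R = 0.55(3.6) + 0.30(3.2) + 0.15(3.0) = 1.98 + 0.96 + 0.45 = 3.39$; cadets would then be ordered by $R$ to produce the class ranking.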
The academy operates on the semester system, which it labels as "terms" (Term 1 is the fall semester; Term 2 is the spring semester). Although cadets choose their majors in the spring of their freshman year, all cadets take the same course of instruction until the beginning of their second year. This core course of instruction consists of mathematics, information technology, chemistry, physics, engineering, history, physical geography, philosophy, leadership and general psychology, English composition and literature, foreign language, political science, international relations, economics, and constitutional law. Some advanced cadets may "validate" out of the base-level classes and take advanced or accelerated courses earlier, as freshmen or sophomores. Regardless of major, all cadets graduate with a Bachelor of Science degree.

Military

As all cadets are commissioned as second lieutenants upon graduation, military and leadership education is nested with academic instruction. Military training and discipline fall under the purview of the Office of the Commandant of Cadets. Entering freshmen, or fourth class cadets, are referred to as New Cadets, and enter the academy on Reception Day, or R-Day, which marks the start of cadet basic training (CBT), known colloquially as Beast Barracks, or simply Beast. Most cadets consider Beast to be their most difficult time at the academy because of the transition from civilian to military life. In their second summer, cadets undergo cadet field training (CFT) at nearby Camp Buckner, where they train in more advanced fieldcraft and military skills. During a cadet's third summer, they may serve as instructors for CBT or CFT. Rising Firstie (senior) cadets also spend one month training at Camp Buckner, where they train for the modern tactical situations that they will soon face as new platoon leaders. Cadets also have the opportunity during their second, third, and fourth summers to serve in active Army units and military schools around the world, including the Airborne, Air Assault, Sapper, and Pathfinder schools, among others.

Active duty officers in the rank of captain or major serve as Company Tactical Officers (TAC officers). The role of the TAC is to mentor, train, and teach the cadets proper standards of good order and discipline, and to be a good role model. There is one TAC for every cadet company, and one senior non-commissioned officer, known as a TAC-NCO, assists each TAC.

The Department of Military Instruction (DMI) is responsible for all military arts and sciences education as well as planning and executing cadet summer training. Within DMI there is a representative from each of the Army's branches. These "branch reps" serve as proponents for their respective branches and liaise with cadets as they prepare for branch selection and graduation. Within DMI sits the Modern War Institute, a research center devoted to the study of contemporary conflict and the evolving character of war.

Physical

The Department of Physical Education (DPE) administers the physical program, which includes physical education classes, physical fitness testing, and competitive athletics. The head of DPE holds the title of Master of the Sword, dating back to the 19th century when DPE taught swordsmanship as part of the curriculum. All cadets take a prescribed series of physical fitness courses such as military movement (applied gymnastics), boxing, survival swimming, and, beginning in 2009, advanced combatives.
Cadets can also take elective physical activity classes such as scuba, rock climbing, and aerobic fitness. As with all soldiers in the Army, cadets must also pass the Army Physical Fitness Test twice per year. Additionally, every year cadets must pass the Indoor Obstacle Course Test (IOCT), which DPE has administered in Hayes Gymnasium since 1944. Since Douglas MacArthur's tenure as superintendent, every cadet has been required to participate in either an intercollegiate sport, a club sport, or an intramural (referred to as "company athletics") sport each semester.

Moral and ethical training

Moral and ethical development occurs throughout the entirety of the cadet experience by living under the honor code and through formal leadership programs available at the academy. These include instruction in the values of the military profession through Professional Military Ethics Education (PME2), voluntary religious programs, interaction with staff and faculty role models, and an extensive guest-speaker program. The foundation of the ethical code at West Point is found in the academy's motto, "Duty, Honor, Country."

West Point's Cadet Honor Code reads simply: "A cadet will not lie, cheat, steal, or tolerate those who do." Cadets accused of violating the Honor Code face an investigative and hearing process, and if found guilty by a jury of their peers, they face severe consequences ranging from being "turned back" (repeating an academic year) to separation from the academy. Cadets previously enforced collective censure through an unofficial sanction known as "silencing," by not speaking to cadets accused of violating the honor code, but the practice ended in 1973 after national scrutiny. Although the academy's honor code is well known and has been influential for many other colleges and universities, the academy has experienced several significant violations: 151 junior cadets were found guilty of violating the honor code on their exams in 1976, and in 2020 more than 70 cadets were accused of cheating on exams.

Cadet life

Rank and organization

Cadets are not referred to as freshmen, sophomores, juniors, or seniors. Instead they are officially called fourth class, third class, second class, and first class cadets. Colloquially, freshmen are plebes, sophomores are yearlings or yuks, juniors are cows, and seniors are firsties. Only some of the origins of the class names are known: plebeians were the lower class of ancient Roman society, while yearling is a euphemism for a year-old animal. There are a number of theories for the origin of the term cow; however, the most prevalent (and probably accurate) one is that cadets had no leave until the end of their yearling year, when they were granted a summer-long furlough, and their return as second classmen was heralded as "the cows coming home."

The Corps of Cadets is officially organized into a brigade. The senior ranking cadet, the Brigade Commander, is known traditionally as the First Captain. The brigade is organized into four regiments. Within each regiment there are three battalions, each consisting of three companies. Companies are lettered A through I, with a number signifying the regiment to which each belongs. For example, there are four "H" companies: H1, H2, H3, and H4. First class cadets hold the leadership positions within the brigade, from the First Captain down to platoon leaders within the companies.
Leadership responsibility decreases with the lower classes: second class cadets hold the rank of cadet sergeant, third class cadets the rank of cadet corporal, and fourth class cadets serve as cadet privates.

Life in the corps

Because of the academy's congressional nomination process, students come from all 50 states, Puerto Rico, the District of Columbia, the Mariana Islands, Guam, American Samoa, and the U.S. Virgin Islands. The academy is also authorized up to 60 international exchange cadets, who undergo the same four-year curriculum as fully integrated members of the Corps of Cadets. Cadets attend the United States Military Academy free of charge, with all tuition and board paid for by the Army in return for a service commitment of five years of active duty and three years of reserve status upon graduation.

Most graduates are commissioned as second lieutenants in the Army; foreign cadets are commissioned into the armies of their home countries. Since 1959, cadets have also had the option of "cross-commissioning," or requesting a commission in one of the other armed services, provided they meet that service's eligibility standards. Every year, a small number of graduates do this, usually in a one-for-one "trade" with a similarly inclined cadet or midshipman at one of the other service academies. Starting on the first day of a cadet's second class year, non-graduates are expected to fulfill their obligations in enlisted service.

Cadets receive a monthly stipend of $1,017.00 for books, uniforms, and other necessities, as of 2015. From this amount, pay is automatically deducted for the cost of uniforms, books, supplies, services, meals, and other miscellaneous expenses; all remaining money is used at the individual cadet's discretion. All cadets receive meals in the dining halls and have access to the internet on approved, issued devices. The student population was 4,389 cadets for the 2016–2017 academic year, and the student body has recently been around 20% female.

All cadets reside on campus for their entire four years in one of the nine barracks buildings. Most cadets are housed with one roommate, but some rooms are designed for three cadets. Cadets are grouped into companies identified by alpha-numeric codes, and all members of a company live together in the same barracks area. The commandant may decide to have cadets change companies at some point in their cadet career, a process known as scrambling; the method of scrambling has changed several times in recent years. All 4,000 cadets dine together at breakfast and lunch in Washington Hall during the weekdays. The cadet fitness center, the Arvin Cadet Physical Development Center (usually just called "Arvin" by cadets and faculty), which was rebuilt in 2004, houses extensive physical fitness facilities and equipment for student use.

Each class of cadets elects representatives to serve as class president and to fill several administrative positions. They also elect a ring and crest committee, which designs the class's crest, the emblem that signifies their class and is embossed upon their class rings. Each class crest is required to contain the initials USMA and the class motto. The class motto is proposed by the class during cadet basic training and voted on by the class prior to the beginning of their freshman academic year; class mottos typically rhyme or are phonetically similar to the class year.
Cadets today live and work within the framework of the West Point Leader Development System (WPLDS), which specifies the roles that a cadet plays throughout their four years at the academy. Cadets begin their USMA careers as trainees (new cadets), then advance in rank, starting as CDT Privates (freshmen) and culminating as CDT Officers (seniors). Freshmen have no leadership responsibilities but have a host of duties to perform as they learn how to follow orders and operate in an environment of rigid rank structure, while seniors have significant leadership responsibilities and significantly more privileges that correspond to their rank.

Activities

Cadets have a host of extracurricular activities available, most run by the office of the Directorate of Cadet Activities (DCA). DCA sponsors or operates 113 athletic and non-sport clubs. Many cadets join several clubs during their time at the academy and find their time spent with their clubs a welcome respite from the rigors of cadet life. DCA is responsible for a wide range of activities that improve quality of life for cadets, including three cadet-oriented restaurants, the Cadet Store, and the Howitzer and Bugle Notes. The Howitzer is the annual yearbook, while Bugle Notes, also known as the "plebe bible," is the manual of plebe knowledge. Plebe knowledge is a lengthy collection of traditions, songs, poems, anecdotes, and facts about the academy, the Army, the Old Corps, and the rivalry with Navy that all plebes must memorize during cadet basic training. During plebe year, plebes may be asked, and are expected to answer, any inquiry about plebe knowledge asked by upper class cadets. Some of this knowledge is historical in nature, including the information found in Bugle Notes; other knowledge changes daily, such as "the days" (a running list of the number of days until important academy events), the menu in the mess hall for the day, or the lead stories in The New York Times.

Each cadet class celebrates at least one special "class weekend" per academic year. Fourth class cadets participate in Plebe Parent Weekend during the first weekend of spring break. In February, third class cadets celebrate the winter season with Yearling Winter Weekend. In late January, second class cadets celebrate 500th Night, marking the remaining 500 days before graduation. First class cadets celebrate three different formal occasions: in late August they celebrate Ring Weekend, in February they mark their last 100 days with 100th Night, and in May they have a full week of events culminating in their graduation. All of the "class weekends" involve a formal dinner and social dance, known in old cadet slang as a "hop," held at Eisenhower Hall. Grant Hall, formerly the cadet mess hall at West Point, is now a social center.

Athletics

Since 1899, Army's mascot has officially been a mule because the animal symbolizes strength and perseverance. The academy's football team was nicknamed "The Black Knights of the Hudson" due to the black color of its uniforms; this nickname has since been officially shortened to the "Black Knights." U.S. sports media use "Army" as a synonym for the academy. "On, Brave Old Army Team" is the school's fight song. Army's chief sports rival is the Naval Academy, due to the long-standing football rivalry and the interservice rivalry with the Navy in general.
Fourth class cadets verbally greet upper-class cadets and faculty with "Beat Navy," and the tunnel that runs under Washington Road is named the "Beat Navy" tunnel. Army also plays the U.S. Air Force Academy for the Commander-in-Chief's Trophy. In the first half of the 20th century, Army and Notre Dame were football rivals, but that rivalry has since died out; Notre Dame beat Army 44–6 in 2016.

Football

Army football began in 1890, when Navy challenged the cadets to a game of the relatively new sport. Navy defeated Army at West Point that year, but Army avenged the loss in Annapolis the following year. The rival academies still clash every December in what is traditionally the last regular-season Division I college football game. The 2015 football season marked Navy's fourteenth consecutive victory over Army, the longest streak in the series since its inception; the following year, Army won 21–17.

Army's football team reached its pinnacle of success under coach Earl Blaik, when Army won consecutive national championships in 1944, 1945, and 1946, and produced three Heisman Trophy winners: Doc Blanchard (1945), Glenn Davis (1946), and Pete Dawkins (1958). Past NFL coaches Vince Lombardi and Bill Parcells were Army assistant coaches early in their careers.

The football team plays its home games at Michie Stadium, where the playing field is named after Earl Blaik. Cadets' attendance is mandatory at football games, and the Corps stands for the duration of the game. At all home games, one of the four regiments marches onto the field in formation before the team takes the field and leads the crowd in traditional Army cheers. From 1992 through 1996, Army won all of its games against Navy for the first time since the legendary days of Blanchard and Davis, a stretch that also saw the offensive linemen of the Class of 1996 found a fraternal group of players calling themselves the Fat Man Club. Between the 1998 and 2004 seasons, Army's football program was a member of Conference USA, but it has since reverted to its former independent status.

Other sports

Though football may receive a lot of media attention due to its annual rivalry game, West Point has a long history of athletics in other NCAA sports. Army is a member of the Division I Patriot League in most sports, while its men's ice hockey program competes in Atlantic Hockey. John P. Riley Jr. was the hockey coach at West Point for more than 35 years. Every year, Army faces the Royal Military College of Canada (RMC) Paladins in the annual West Point Weekend hockey game, a series first conceived in 1923. The men's lacrosse team has won eight national championships and appeared in the NCAA tournament sixteen times. In its early years, lacrosse was used by football players, like the "Lonesome End" Bill Carpenter, to stay in shape during the off-season.

The 2005–06 women's basketball team went 20–11 and won the Patriot League tournament. They went to the 2006 NCAA Women's Division I Basketball Tournament as a 15th-ranked seed, where they lost to Tennessee, 102–54. It was the first March Madness tournament appearance for any Army basketball team. The head coach of that team, Maggie Dixon, died soon after the season at only 28 years of age. Bob Knight, formerly the winningest men's basketball coach in NCAA history, began his head coaching career at Army in the late 1960s before moving on to Indiana and Texas Tech.
One of Knight's players at Army was Mike Krzyzewski, who was later head coach at Army before moving on to Duke, where he won five national championships. Approximately 15% of cadets are members of a club sport team. West Point fields a total of 24 club sports teams that have been very successful in recent years, winning national championships in judo, boxing, orienteering, pistol, triathlon, crew, cycling, and team handball. The majority of the student body, about 65%, competes in intramural sports, known at the academy as "company athletics." DPE's Competitive Sports committee runs the club and company athletics programs and was recently named one of the "15 Most Influential Sports Education Teams in America" by the Institute for International Sport. The fall season sees competition in basketball, flag football, team handball, soccer, ultimate disc, and wrestling, while the spring season sees competition in combative grappling, floor hockey, orienteering, flicker ball, and swimming. In the spring, each company also fields a team in the annual Sandhurst Competition, a military skills event conducted by the Department of Military Instruction.

Traditions

Due to West Point's age and its unique mission of producing Army officers, it has many time-honored traditions. The list below includes some of the traditions unique to or started by the academy.

Cullum number

The Cullum number is a reference and identification number assigned to each graduate. It was created by brevet Major General George W. Cullum (USMA Class of 1833) who, in 1850, began the monumental work of chronicling the biographies of every graduate. He assigned number one to the first West Point graduate, Joseph Gardner Swift, and then numbered all successive graduates in sequence. Before his death in 1892, General Cullum completed the first three volumes of a work that eventually comprised 10 volumes, titled General Cullum's Biographical Register of the Officers and Graduates of the United States Military Academy, covering USMA classes from 1802 through 1850. From 1802 through the Class of 1977, graduates were listed by General Order of Merit; beginning with the Class of 1978, graduates were listed alphabetically within each class. Ten graduates have an "A" suffix after their Cullum number: for various reasons these graduates were omitted from the original class rosters, and a suffix letter was added to avoid renumbering the entire class and subsequent classes. Former cadets, those who attended but did not graduate and separated under honorable conditions, are also given Cullum numbers in a different configuration.

Class ring

West Point began the collegiate tradition of the class ring, beginning with the class of 1835. The class of 1836 chose no rings, and the class of 1879 had cuff links in lieu of a class ring. Before 1917, cadets could design much of the ring individually, but now only the center stone can be individualized. One side of the ring bears the academy crest, while the other side bears the class crest, and the center stone ring bears the words West Point and the class year. The academy library has a large collection of cadet rings on display. Senior cadets receive their rings during Ring Weekend in the early fall of their senior year. Immediately after senior cadets return to the barracks after receiving their rings, fourth class cadets take the opportunity to surround senior cadets from their company and ask to touch their rings.
After the cadets recite a poem known as the Ring Poop, the senior usually grants the freshmen permission to touch the ring. In 2002, the Memorial Class Ring donor program began: donations of class rings are melted and merged, and a portion of the original gold is infused with gold from preceding melts to become part of the rings for each "Firstie" class.

Thayer Award

West Point is home to the Sylvanus Thayer Award. Given annually by the academy since 1958, the award honors an outstanding citizen whose service and accomplishments in the national interest exemplify the academy's motto, "Duty, Honor, Country." Current award guidelines state that the recipient may not be a graduate of the academy. The award has been presented to many notable American citizens, including George H. W. Bush, Colin Powell, Tom Brokaw, Sandra Day O'Connor, Henry Kissinger, Ronald Reagan, Barry Goldwater, Carl Vinson, Barbara Jordan, William J. Perry, Bob Hope, Condoleezza Rice, and Leon E. Panetta.

Sedgwick's spurs

A monument to Civil War Union General John Sedgwick stands on the outskirts of the Plain. Sedgwick's bronze statue has spurs with rowels that freely rotate. Legend states that if a cadet is in danger of failing a class, they are to don their full-dress parade uniform the night before the final exam, visit the statue, spin the rowels at the stroke of midnight, and then run back to the barracks as fast as possible. If Sedgwick's ghost catches them, they will fail the exam; otherwise the cadet will pass the exam and the course. Although being out of their rooms after midnight is officially against regulations, violations have been known to be overlooked for the sake of tradition.

Goat-Engineer game

As part of the run-up to the Navy football game, the Corps of Cadets plays the Goat-Engineer game. First played in 1907, it is a game between the "Goats" (the bottom half of the senior, or Firstie, class academically) and the "Engineers" (the top half). The game is played with full pads and helmets using eight-man football rules. The location has changed over the years, with recent venues including Richard Shea Stadium/Doubleday Field, Michie Stadium, and Daly Field. Legend states that Army will beat Navy if the Goats win, and the opposite if the Engineers win. In recent years, female cadets have begun playing a flag football contest, so there are now two Goat-Engineer games, played consecutively on the same night.

Walking the area

From the earliest days of the academy, one form of punishment for cadets who commit regulatory infractions has been a process officially known as punishment tours, better known to cadets as "hours," because cadets must walk a specified number of hours in penalty. Cadets are "awarded" punishment tours based upon the severity of the infraction: being late to class or having an unkempt room may result in as few as 5 hours, while more severe misconduct infractions may result in upwards of 60 to 80 hours. In its most traditional form, punishment tours are "walked off" by wearing the dress gray uniform under arms and walking back and forth in a designated area of the cadet barracks courtyard, known as "Central Area." Cadets who get into trouble frequently and spend many weekends "walking off their hours" are known as "area birds," and cadets who walk more than 100 total hours in their career are affectionately known as "Century Men."
An alternate form of punishment to walking hours is known as "fatigue tours," in which assigned hours may be "worked off" through manual labor, such as cleaning the barracks. Certain cadets whose academics are deficient may instead conduct "sitting tours," in which they "sit hours" in a designated academic room in a controlled study environment, receiving half credit toward their reduction of tours. Cadets' uniforms are inspected before their tours begin each day; the inspection process is arduous and considered part of the punishment, but the time spent does not count against the awarded number of tours. A small number of cadets may be relieved of their tours that day if their uniforms are exceptionally presentable. Another tradition associated with punishment tours is that any visiting head of state has the authority to grant "amnesty," releasing all cadets with outstanding hours from the remainder of their assigned tours.

Sandhurst Military Skills Competition

In 1967 the Royal Military Academy Sandhurst presented West Point with a British Army officer's sword for use as a trophy in a military skills competition at West Point. In 2019 the Sandhurst competition spanned two days, 12 and 13 April, with teams from USMA, ROTC programs, and the Naval, Coast Guard, and Air Force academies. International academies, including those of the UK, Canada, Australia, and Ireland, have won the Sandhurst Military Skills Competition.

Education of dependents

The Department of Defense Education Activity (DoDEA) maintains West Point Elementary School and West Point Middle School on the academy grounds for children of military personnel. The academy is physically within the Highland Falls Central School District, and USMA sends high-school-aged dependents of on-base military personnel to James I. O'Neill High School in Highland Falls under contract. In 2021, 190 children living on West Point attended O'Neill. That year, the education agency at West Point announced that the contract to educate West Point's high school students would be opened to competitive bidding; in March 2022 the O'Neill contract was renewed.

Ranking

The U.S. Military Academy at West Point is classified as a liberal arts college by U.S. News and is ranked #8 among National Liberal Arts Colleges in the 2025 edition of Best Colleges.

Preparatory school

The U.S. Military Academy Preparatory School, known as USMAPS, the Prep School, or West Point Prep, was formally established in 1946, but soldiers have been "prepped" for West Point since Congress enacted legislation in 1916 authorizing soldier appointments to the academy. The school exists today to prepare soldiers and civilians with the academic, leadership, and physical skills to become successful cadets at the United States Military Academy. It is primarily an academic institution that accepts both civilian students and soldiers.

Notable alumni

An unofficial motto of the academy's history department is "Much of the history we teach was made by people we taught." Graduates of the academy refer to themselves as "The Long Gray Line," a phrase taken from the academy's traditional hymn "The Corps." The academy has produced just under 65,000 alumni, including two Presidents of the United States, Ulysses S. Grant and Dwight D. Eisenhower; the president of the Confederate States of America, Jefferson Davis; and four foreign heads of state or government: former Nicaraguan President Anastasio Somoza Debayle, former Philippine President Fidel V.
Ramos, former Costa Rican President José María Figueres, and current Cambodian Prime Minister Hun Manet. Alumni currently serving in public office include Senator Jack Reed, Governor of Nebraska David Heineman, Governor of Louisiana John Bel Edwards, and Congressmen Warren Davidson, Mark Green, Brett Guthrie, John Shimkus, and Steve Watkins.

Military leaders

The academy has graduated many notable leaders since its inception in 1802. During the Civil War, a West Point graduate commanded one or both armies in every one of the 60 major battles of the war. Graduates included Ulysses S. Grant, George McClellan, George G. Meade, Philip Sheridan, William Tecumseh Sherman, John Bell Hood, Stonewall Jackson, Robert E. Lee, Simon Bolivar Buckner, James Longstreet, J.E.B. Stuart, and Oliver O. Howard. George Armstrong Custer graduated last in his class of 1861.

The Spanish–American War saw the first combat service of Lt. (later Brigadier General) John "Gatling Gun" Parker, the first Army officer to employ machine guns in offensive fire support of infantry, and of Brig. General Irving Hale, who earned the highest grade point average in the academy's history and helped found the VFW. During World War I, academy alumni included General of the Armies John J. Pershing and Major Generals Charles T. Menoher and Mason Patrick.

West Point was the alma mater of many notable World War II generals, including Henry H. Arnold, Omar Bradley, Mark Wayne Clark, Robert L. Eichelberger, James M. Gavin, Leslie Groves, Douglas MacArthur, George S. Patton, Joseph Stilwell, Maxwell D. Taylor, James Van Fleet, Jonathan Mayhew Wainwright IV, and Simon Bolivar Buckner Jr., the highest-ranking general killed in combat during World War II; many of these graduates also served in commanding roles in the Korean War. During the Vietnam War, notable graduate general officers included Creighton Abrams, Hal Moore, and William Westmoreland. West Point also produced famous generals and statesmen of recent note, including John Abizaid, Stanley A. McChrystal, Wesley Clark, Alexander Haig, Barry McCaffrey, Norman Schwarzkopf Jr., Brent Scowcroft, Lloyd Austin, and former Director of the Central Intelligence Agency, retired General David Petraeus. A total of 76 graduates have been awarded the Medal of Honor. West Point has also graduated 18 NASA astronauts, including five who went to the moon.

Business leaders

West Point has been emerging as a "CEO Factory" due to the number of alumni who go on to success in the business world. Noted alumni include Jim Kimsey, founder of AOL; Bob McDonald, CEO of Procter & Gamble who was later nominated to be Secretary of Veterans Affairs; Alex Gorsky, CEO of Johnson & Johnson; Keith McLoughlin, President and CEO of Electrolux; Jeffrey W. Martin, CEO of Sempra Energy; Alden Partridge, founder of Norwich University; and Bill Roedy, former chief executive and managing director of MTV Europe.

Sports

Contributions to sport include three Heisman Trophy winners: Glenn Davis, Doc Blanchard, and Pete Dawkins. Abner Doubleday, once thought by some to have created baseball, graduated from West Point in 1842.

Government

West Point alumni include many prominent government officials, including Brent Scowcroft, the National Security Advisor under presidents Gerald Ford and George H. W. Bush, and Eric Shinseki, former Secretary of Veterans Affairs under President Barack Obama.
West Point graduate Frank Medina organized and led the nationwide campaign that brought the Congressional Gold Medal to the 65th Infantry Regiment, also known as the Borinqueneers. Among American universities, the academy is fifth on the list of total winners of Rhodes Scholarships, seventh for Marshall Scholarships, and fourth on the list of Hertz Fellowships. The official alumni association of West Point is the West Point Association of Graduates (WPAOG or AOG), headquartered at Herbert Hall.

West Point Garrison and Stewart Army Subpost

As an active-duty U.S. Army installation, West Point hosts several regular Army units that provide support for the USMA and the West Point installation. The U.S. Army Garrison includes a Headquarters and Headquarters Company, Provost Marshal and Military Police, Religious Program Support, Keller Army Community Hospital, the West Point Dental Activity, the USMA Band (a regular Army band; USMA cadets are not members of the USMA Band), and the Directorate of Human Resources (DHR). The DHR is the higher headquarters for the Military Personnel Division (MPD), Army Continuing Education System (ACES), Administrative Services Division (ASD), and the Army Substance Abuse Program (ASAP). The 1st Battalion, 1st Infantry Regiment (1–1 INF) and the 2d Army Aviation Detachment, both stationed on nearby Stewart Army Subpost, provide military training and aviation support to the USMA and the West Point Garrison. Additionally, active duty Army support is sometimes provided by the 10th Mountain Division, based at Fort Drum, NY.

See also

United States Military Academy grounds and facilities
List of monuments at the United States Military Academy
Kosciuszko's Garden
Army University
U.S. Naval Academy
U.S. Air Force Academy
U.S. Coast Guard Academy
U.S. Merchant Marine Academy
Redoubt Four (West Point)
West Point Cadets' Sword
West Point Band
The Jazz Knights

Notes

References

Citations

Sources

On integrating women: Betros, Lance. Carved from Granite: West Point since 1902 (Texas A&M University Press, 2012).

External links

Army Athletics website

1802 establishments in New York (state)
Educational institutions established in 1802
Forts in New York (state)
Forts on the National Register of Historic Places in New York (state)
Historic Civil Engineering Landmarks
Military academies
Military academies of the United States
Military and war museums in New York (state)
National Historic Landmarks in New York (state)
New York (state) in the American Revolution
Patriot League
Tourist attractions in Orange County, New York
U.S. Route 9W
United States Army Direct Reporting Units
United States Army museums
United States Army posts
United States Army schools
Universities and colleges in New York (state)
National Register of Historic Places in Orange County, New York
American Revolution on the National Register of Historic Places
United States military service academies
United States senior military colleges
New York State Register of Historic Places in Orange County
United States Military Academy
Engineering
12,859